ICLR
Title Extreme Q-Learning: MaxEnt RL without Entropy Abstract Modern Deep Reinforcement Learning (RL) algorithms require estimates of the maximal Q-value, which are difficult to compute in continuous domains with an infinite number of possible actions. In this work, we introduce a new update rule for online and offline RL which directly models the maximal value using Extreme Value Theory (EVT), drawing inspiration from economics. By doing so, we avoid computing Q-values using out-of-distribution actions which is often a substantial source of error. Our key insight is to introduce an objective that directly estimates the optimal soft-value functions (LogSumExp) in the maximum entropy RL setting without needing to sample from a policy. Using EVT, we derive our Extreme Q-Learning framework and consequently online and, for the first time, offline MaxEnt Q-learning algorithms, that do not explicitly require access to a policy or its entropy. Our method obtains consistently strong performance in the D4RL benchmark, outperforming prior works by 10+ points on the challenging Franka Kitchen tasks while offering moderate improvements over SAC and TD3 on online DM Control tasks. Visualizations and code can be found on our website 1. 1 INTRODUCTION Modern Deep Reinforcement Learning (RL) algorithms have shown broad success in challenging control (Haarnoja et al., 2018; Schulman et al., 2015) and game-playing domains (Mnih et al., 2013). While tabular Q-iteration or value-iteration methods are well understood, state of the art RL algorithms often make theoretical compromises in order to deal with deep networks, high dimensional state spaces, and continuous action spaces. In particular, standard Q-learning algorithms require computing the max or soft-max over the Q-function in order to fit the Bellman equations. Yet, almost all current off-policy RL algorithms for continuous control only indirectly estimate the Q-value of the next state with separate policy networks. Consequently, these methods only estimate the Q-function of the current policy, instead of the optimal Q∗, and rely on policy improvement via an actor. Moreover, actor-critic approaches on their own have shown to be catastrophic in the offline settings where actions sampled from a policy are consistently out-of-distribution (Kumar et al., 2020; Fujimoto et al., 2018). As such, computing maxQ for Bellman targets remains a core issue in deep RL. One popular approach is to train Maximum Entropy (MaxEnt) policies, in hopes that they are more robust to modeling and estimation errors (Ziebart, 2010). However, the Bellman backup B∗ used in MaxEnt RL algorithms still requires computing the log-partition function over Q-values, which is usually intractable in high-dimensional action spaces. Instead, current methods like SAC (Haarnoja et al., 2018) rely on auxiliary policy networks, and as a result do not estimate B∗, the optimal Bellman backup. Our key insight is to apply extreme value analysis used in branches of Finance and Economics to Reinforcement Learning. Ultimately, this will allow us to directly model the LogSumExp over Q-functions in the MaxEnt Framework. ∗Equal Contribution 1https://div99.github.io/XQL/ Intuitively, reward or utility-seeking agents will consider the maximum of the set of possible future returns. The Extreme Value Theorem (EVT) tells us that maximal values drawn from any exponential tailed distribution follows the Generalized Extreme Value (GEV) Type-1 distribution, also referred to as the Gumbel Distribution G. 
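As a concrete illustration of this convergence, the following minimal sketch (our own toy example, separate from any released code) draws block maxima from a Gaussian, whose tails are exponential, and fits a Gumbel to them via MLE, mirroring the fits to Bellman errors shown in Figure 1:

import numpy as np
from scipy import stats

# Toy illustration of the Fisher-Tippett theorem: maxima over blocks of
# exponential-tailed (here Gaussian) samples are well fit by a Gumbel.
rng = np.random.default_rng(0)
n_blocks, block_size = 5000, 1000
maxima = rng.normal(size=(n_blocks, block_size)).max(axis=1)

loc, scale = stats.gumbel_r.fit(maxima)  # MLE fit of a Gumbel
statistic, p_value = stats.kstest(maxima, "gumbel_r", args=(loc, scale))
print(f"loc={loc:.3f}, scale={scale:.3f}, KS p-value={p_value:.3f}")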
The Gumbel distribution is thus a prime candidate for modeling errors in Q-functions. In fact, McFadden’s 2000 Nobel-prize winning work in Economics on discrete choice models (McFadden, 1972) showed that soft-optimal utility functions with logit (or softmax) choice probabilities naturally arise when utilities are assumed to have Gumbel-distributed errors. This was subsequently generalized to stochastic MDPs by Rust (1986). Nevertheless, these results have remained largely unknown in the RL community. By introducing a novel loss optimization framework, we bring them into the world of modern deep RL. Empirically, we find that even modern deep RL approaches, for which errors are typically assumed to be Gaussian, exhibit errors that better approximate the Gumbel Distribution, see Figure 1. By assuming errors to be Gumbel distributed, we obtain Gumbel Regression, a consistent estimator over log-partition functions even in continuous spaces. Furthermore, making this assumption about Qvalues lets us derive a new Bellman loss objective that directly solves for the optimal MaxEnt Bellman operator B∗, instead of the operator under the current policy Bπ . As soft optimality emerges from our framework, we can run MaxEnt RL independently of the policy. In the online setting, we avoid using a policy network to explicitly compute entropies. In the offline setting, we completely avoid sampling from learned policy networks, minimizing the aforementioned extrapolation error. Our resulting algorithms surpass or consistently match state-of-the-art (SOTA) methods while being practically simpler. In this paper we outline the theoretical motivation for using Gumbel distributions in reinforcement learning, and show how it can be used to derive practical online and offline MaxEnt RL algorithms. Concretely, our contributions are as follows: • We motivate Gumbel Regression and show it allows calculation of the log-partition function (LogSumExp) in continuous spaces. We apply it to MDPs to present a novel loss objective for RL using maximum-likelihood estimation. • Our formulation extends soft-Q learning to offline RL as well as continuous action spaces without the need of policy entropies. It allows us to compute optimal soft-values V ∗ and soft-Bellman updates B∗ using SGD, which are usually intractable in continuous settings. • We provide the missing theoretical link between soft and conservative Q-learning, showing how these formulations can be made equivalent. We also show how Max-Ent RL emerges naturally from vanilla RL as a conservatism in our framework. • Finally, we empirically demonstrate strong results in Offline RL, improving over prior methods by a large margin on the D4RL Franka Kitchen tasks, and performing moderately better than SAC and TD3 in Online RL, while theoretically avoiding actor-critic formulations. 2 PRELIMINARIES In this section we introduce Maximium Entropy (MaxEnt) RL and Extreme Value Theory (EVT), which we use to motivate our framework to estimate extremal values in RL. We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S,A,P, r, γ), where S,A represent state and action spaces, P(s′|s,a) represents the environment dynamics, r(s,a) represents the reward function, and γ ∈ (0, 1) represents the discount factor. In the offline RL setting, we are given a dataset D = (s,a, r, s′) of tuples sampled from trajectories under a behavior policy πD without any additional environment interactions. 
We use ρπ(s) to denote the distribution of states that a policy π(a|s) generates. In the MaxEnt framework, an MDP with entropy-regularization is referred to as a soft-MDP (Bloem & Bambos, 2014) and we often use this notation. 2.1 MAXIMUM ENTROPY RL Standard RL seeks to learn a policy that maximizes the expected sum of (discounted) rewards Eπ [ ∑∞ t=0 γ tr(st,at)], for (st,at) drawn at timestep t from the trajectory distribution that π generates. We consider a generalized version of Maximum Entropy RL that augments the standard reward objective with the KL-divergence between the policy and a reference distribution µ: Eπ[ ∑∞ t=0 γ t(r(st,at)− β log π(at|st)µ(at|st) )], where β is the regularization strength. When µ is uniform U , this becomes the standard MaxEnt objective used in online RL up to a constant. In the offline RL setting, we choose µ to be the behavior policy πD that generated the fixed dataset D. Consequently, this objective enforces a conservative KL-constraint on the learned policy, keeping it close to the behavior policy (Neu et al., 2017; Haarnoja et al., 2018). In MaxEnt RL, the soft-Bellman operator B∗ : RS×A → RS×A is defined as (B∗Q)(s,a) = r(s,a)+ γEs′∼P(·|s,a)V ∗(s′) where Q is the soft-Q function and V ∗ is the optimal soft-value satisfying: V ∗(s) = β log ∑ a µ(a|s) exp (Q(s,a)/β) := Lβa∼µ(·|s) [Q(s,a)] , (1) where we denote the log-sum-exp (LSE) using an operator Lβ for succinctness2. The soft-Bellman operator has a unique contraction Q∗ (Haarnoja et al., 2018) given by the soft-Bellman equation: Q∗ = B∗Q∗ and the optimal policy satisfies (Haarnoja et al., 2017): π∗(a|s) = µ(a|s) exp ((Q∗(s,a)− V ∗(s))/β). (2) Instead of estimating soft-values for a policy V π(s) = Ea∼π(·|s) [ Q(s,a)− β log π(a|s)µ(a|s) ] , our approach will seek to directly fit the optimal soft-values V ∗, i.e. the log-sum-exp (LSE) of Q values. 2.2 EXTREME VALUE THEOREM The Fisher-Tippett or Extreme Value Theorem tells us that the maximum of i.i.d. samples from exponentially tailed distributions will asymptotically converge to the Gumbel distribution G(µ, β), which has PDF p(x) = exp(−(z + e−z)) where z = (x− µ)/β with location parameter µ and scale parameter β. Theorem 1 (Extreme Value Theorem (EVT) (Mood, 1950; Fisher & Tippett, 1928)). For i.i.d. random variables X1, ..., Xn ∼ fX , with exponential tails, limn→∞ maxi(Xi) follows the Gumbel (GEV-1) distribution. Furthermore, G is max-stable, i.e. if Xi ∼ G, then maxi(Xi) ∼ G holds. This result is similar to the Central Limit Theorem (CLT), which states that means of i.i.d. errors approach the normal distribution. Thus, under a chain of max operations, any i.i.d. exponential tailed errors3 will tend to become Gumbel distributed and stay as such. EVT will ultimately suggest us to characterize nested errors in Q-learning as following a Gumbel distribution. In particular, the Gumbel distribution G exhibits unique properties we will exploit. One intriguing consequence of the Gumbel’s max-stability is its ability to convert the maximum over a discrete set into a softmax. This is known as the Gumbel-Max Trick (Papandreou & Yuille, 2010; Hazan & Jaakkola, 2012). Concretely for i.i.d. ϵi ∼ G(0, β) added to a set {x1, ..., xn} ∈ R, maxi(xi+ ϵi) ∼ G(β log ∑ i exp (xi/β), β), and argmax(xi+ ϵi) ∼ softmax(xi/β). Furthermore, the Max-trick is unique to the Gumbel (Luce, 1977). These properties lead into the McFadden-Rust model (McFadden, 1972; Rust, 1986) of MDPs as we state below. 
McFadden-Rust model: An MDP following the standard Bellman equations with stochasticity in the rewards due to unobserved state variables will satisfy the soft-Bellman equations over the observed state with actual rewards r̄(s,a), given two conditions: 1. Additive separability (AS): observed rewards have additive i.i.d. Gumbel noise, i.e. r(s,a) = r̄(s,a) + ϵ(s,a), with actual rewards r̄(s,a) and i.i.d. noise ϵ(s,a) ∼ G(0, β). 2. Conditional Independence (CI): the noise ϵ(s,a) in a given state-action pair is conditionally independent of that in any other state-action pair. Moreover, the converse also holds: Any MDP satisfying the Bellman equations and following a softmax policy, necessarily has any i.i.d. noise in the rewards with AS + CI conditions be Gumbel distributed. These results were first shown to hold in discrete choice theory by McFadden (1972), with the AS + CI conditions derived by Rust (1986) for discrete MDPs. We formalize these results in Appendix A and give succinct proofs using the developed properties of the Gumbel distribution. These results enable the view of a soft-MDP as an MDP with hidden i.i.d. Gumbel noise in the rewards. Notably, this result gives a different interpretation of a soft-MDP than entropy regularization to allow us to recover the soft-Bellman equations. 2In continuous action spaces, the sum over actions is replaced with an integral over the distribution µ. 3Bounded random variables are sub-Gaussian (Young, 2020) which have exponential tails. 3 EXTREME Q-LEARNING In this section, we motivate our Extreme Q-learning framework, which directly models the softoptimal values V ∗, and show it naturally extends soft-Q learning. Notably, we use the Gumbel distribution to derive a new optimization framework for RL via maximum-likelihood estimation and apply it to both online and offline settings. 3.1 GUMBEL ERROR MODEL Although assuming Gumbel errors in MDPs leads to intriguing properties, it is not obvious why the errors might be distributed as such. First, we empirically investigate the distribution of Bellman errors by computing them over the course of training. Specifically, we compute r(s,a) − γQ(s′, π(s′)) − Q(s,a) for samples (s,a, s′) from the replay-buffer using a single Q-function from SAC (Haarnoja et al., 2018) (See Appendix D for more details). In Figure 1, we find the errors to be skewed and better fit by a Gumbel distribution. We explain this using EVT. Consider fitting Q-functions by learning an unbiased function approximator Q̂ to solve the Bellman equation. We will assume access to M such function approximators, each of which are assumed to be independent e.g. parallel runs of a model over an experiment. We can see approximate Q-iteration as performing: Q̂t(s,a) = Q̄t(s,a) + ϵt(s,a), (3) where E[Q̂] = Q̄t is the expected value of our prediction Q̂t for an intended target Q̄t over our estimators, and ϵt is the (zero-centered) error in our estimate. Here, we assume the error ϵt comes from the same underlying distribution for each of our estimators, and thus are i.i.d. random variables with a zero-mean. Now, consider the bootstrapped estimate using one of our M estimators chosen randomly: B̂∗Q̂t(s,a) = r(s,a) + γmax a′ Q̂t(s ′,a′) = r(s,a) + γmax a′ (Q̄t(s ′,a′) + ϵt(s ′,a′)). (4) We now examine what happens after a subsequent update. At time t + 1, suppose that we fit a fresh set of M independent functional approximators Q̂t+1 with the target B̂∗Q̂t, introducing a new unbiased error ϵt+1. 
Then, for Q̄t+1 = E[Q̂t+1] it holds that Q̄t+1(s,a) = r(s,a) + γEs′|s,a[Eϵt [max a′ (Q̄t(s ′,a′) + ϵt(s ′,a′))]]. (5) As Q̄t+1 is an expectation over both the dynamics and the functional errors, it accounts for all uncertainty (here E[ϵt+1] = 0). But, the i.i.d. error ϵt remains and will be propagated through the Bellman equations and its chain of max operations. Due to Theorem 1, ϵt will become Gumbel distributed in the limit of t, and remain so due to the Gumbel distribution’s max-stability.4 This highlights a fundamental issue with approximation-based RL algorithms that minimize the MeanSquared Error (MSE) in the Bellman Equation: they implicitly assume, via maximum likelihood estimation, that errors are Gaussian. In Appendix A, we further study the propagation of errors using the McFadden-Rust MDP model, and use it to develop a simplified Gumbel Error Model (GEM) for errors under functional approximation. In practice, the Gumbel nature of the errors may be weakened as estimators between timesteps share parameters and errors will be correlated across states and actions. 3.2 GUMBEL REGRESSION The goal of our work is to directly model the log-partition function (LogSumExp) over Q(s, a) to avoid all of the aforementioned issues with taking a max in the function approximation domain. 4The same holds for soft-MDPs as log-sum-exp can be expanded as a max over i.i.d. Gumbel random vars. In this section we derive an objective function that models the LogSumExp by simply assuming errors follow a Gumbel distribution. Consider estimating a parameter h for a random variable X using samples xi from a dataset D, which have Gumbel distributed noise, i.e. xi = h + ϵi where ϵi ∼ −G(0, β). Then, the average log-likelihood of the dataset D as a function of h is given as: Exi∼D [log p(xi)] = Exi∼D [ −e((xi−h)/β) + (xi − h)/β ] (6) Maximizing the log-likelihood yields the following convex minimization objective in h, L(h) = Exi∼D [ e(xi−h)/β − (xi − h)/β − 1 ] (7) which forms our objective function L(·), which resembles the Linex loss from econometrics (Parsian & Kirmani, 2002) 5. β is fixed as a hyper-parameter, and we show its affect on the loss in Figure 2. Critically, the minima of this objective under a fixed β is given by h = β logExi∼D[exi/β ], which resembles the LogSumExp with the summation replaced with an (empirical) expectation. In fact, this solution is the the same as the operator Lβµ(X) defined for MaxEnt in Section 2.1 with xi sampled from µ. In Figure 2, we show plots of Gumbel Regression on a simple dataset with different values of β. As this objective recovers Lβ(X), we next use it to model soft-values in Max-Ent RL. 3.2.1 THEORY Here we show that Gumbel regression is well behaved, considering the previously defined operator Lβ for random variables Lβ(X) := β logE [ eX/β ] . First, we show it models the extremum. Lemma 3.1. For any β1 > β2, we have Lβ1(X) < Lβ2(X). And L∞(X) = E [X], L0(X) = sup(X). Thus, for any β ∈ (0,∞), the operator Lβ(X) is a measure that interpolates between the expectation and the max of X . The operator Lβ(X) is known as the cumulant-generating function or the log-Laplace transform, and is a measure of the tail-risk closely linked to the entropic value at risk (EVaR) (Ahmadi-Javid, 2012) . Lemma 3.2. The risk measure L has a unique minima at β logE [ eX/β ] . And an empirical risk L̂ is an unbiased estimate of the true risk. Furthermore, for β ≫ 1, L(θ) ≈ 12β2Exi∼D[(xi − θ) 2], thus behaving as the MSE loss with errors ∼ N (0, β). 
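As a numerical sanity check of Gumbel regression before applying it to Q-learning, the following minimal sketch (with arbitrary data and optimizer settings of our own choosing, not taken from the paper) minimizes the loss of Equation 7 over a scalar h and compares against the closed-form solution β log E[e^(x/β)]:

import math
import torch

# Sketch: SGD on the Gumbel (Linex) loss of Eq. 7 recovers the
# LogSumExp beta * log E[exp(x/beta)]. Data and hyperparameters
# here are illustrative assumptions.
beta = 2.0
x = 3.0 * torch.randn(10_000)              # stand-in "Q-value" samples
h = torch.zeros(1, requires_grad=True)     # scalar estimate of L^beta(X)
opt = torch.optim.Adam([h], lr=5e-2)

for _ in range(2_000):
    opt.zero_grad()
    z = (x - h) / beta
    loss = (torch.exp(z) - z - 1).mean()   # Eq. 7
    loss.backward()
    opt.step()

closed_form = beta * (torch.logsumexp(x / beta, dim=0) - math.log(len(x)))
print(h.item(), closed_form.item())        # the two agree closely

Consistent with Lemma 3.2, taking β large flattens this loss toward the MSE, and h approaches the sample mean instead of the LogSumExp.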
In particular, the empirical loss L̂ over a dataset of N samples can be minimized using stochastic gradient-descent (SGD) methods to give an unbiased estimate of the LogSumExp over the N samples. Lemma 3.3. L̂β(X) over a finite N samples is a consistent estimator of the log-partition function Lβ(X). Similarly, exp(L̂β(X)/β) is an unbiased estimator for the partition function Z = E [ eX/β ] We provide PAC learning bounds for Lemma 3.3, and further theoretical discussion on Gumbel Regression in Appendix B. 3.3 MAXENT RL WITHOUT ENTROPY Given Gumbel Regression can be used to directly model the LogSumExp , we apply it to Q-learning. First, we connect our framework to conservative Q-learning (Kumar et al., 2020). 5We add −1 to make the loss 0 for a perfect fit, as ex − x− 1 ≥ 0 with equality at x = 0. Lemma 3.4. Consider the loss objective over Q-functions: L(Q) = Es∼ρµ,a∼µ(·|s) [ e(T πQ̂k(s,a)−Q(s,a))/β ] − Es∼ρµ,a∼µ(·|s)[(T πQ̂k(s,a)−Q(s,a))/β]− 1 (8) where T π := r(s,a) + γEs′|s,aEa′∼π[Q(s′,a′)] is the vanilla Bellman operator under the policy π(a|s). Then minimizing L gives the update rule: ∀s,a, k Q̂k+1(s,a) = T πQ̂k(s,a)− β log π(a | s) µ(a | s) = BπQ̂k(s,a). The above lemma transforms the regular Bellman backup into the soft-Bellman backup without the need for entropies, letting us convert standard RL into MaxEnt RL. Here, L(·) does a conservative Q-update similar to CQL (Kumar et al., 2020) with the nice property that the implied conservative term is just the KL-constraint between π and µ.6 This enforces a entropy-regularization on our policy with respect to the behavior policy without the need of entropy. Thus, soft-Q learning naturally emerges as a conservative update on regular Q-learning under our objective. Here, Equation 8 is the dual of the KL-divergence between µ and π (Garg et al., 2021), and we motivate this objective for RL and establish formal equivalence with conservative Q-learning in Appendix C. In our framework, we use the MaxEnt Bellman operator B∗ which gives our ExtremeQ loss, which is the same as our Gumbel loss from the previous section: L(Q) = Es,a∼µ [ e(B̂ ∗Q̂k(s,a)−Q(s,a))/β ] − Es,a∼µ[(B̂∗Q̂k(s,a)−Q(s,a))/β]− 1 (9) This gives an update rule: Q̂k+1(s,a) = B∗Q̂k(s,a). L(·) here requires estimation of B∗ which is very hard in continuous action spaces. Under deterministic dynamics, L can be obtained without B∗ as shown in Appendix C. However, in general we still need to estimate B∗. Next, we motivate how we can solve this issue. Consider the soft-Bellman equation from Section 2.1 (Equation 1), B∗Q = r(s,a) + γEs′∼P (·|s,a)[V ∗(s′)], (10) where V ∗(s) = Lβa∼µ(·|s′)[Q(s,a)]. Then V ∗ can be directly estimated using Gumbel regression by setting the temperature β to the regularization strength in the MaxEnt framework. This gives us the following ExtremeV loss objective: J (V ) = Es,a∼µ [ e(Q̂ k(s,a)−V (s))/β ] − Es,a∼µ[(Q̂k(s,a)− V (s))/β]− 1. (11) Lemma 3.5. Minimizing J over values gives the update rule: V̂ k(s) = Lβa∼µ(·|s)[Q̂ k(s,a)]. Then we can obtain V ∗ from Q(s, a) using Gumbel regression and substitute in Equation 10 to estimate the optimal bellman backup B∗Q. Thus, Lemma 3.4 and 3.5 give us a scheme to solve the Max-Ent RL problem without the need of entropy. 3.4 LEARNING POLICIES In the above section we derived a Q-learning strategy that does not require explicit use of a policy π. However, in continuous settings we still often want to recover a policy that can be run in the environment. Per Eq. 
2 (Section 2.2), the optimal MaxEnt policy π∗(a|s) = µ(a|s)e(Q(s,a)−V (s))/β . By minimizing the forward KL-divergence between π and the optimal π∗ induced by Q and V we obtain the following training objective: π∗ = argmax π Eρµ(s,a)[e (Q(s,a)−V (s))/β log π]. (12) If we take ρµ to be a dataset D generated from a behavior policy πD, we exactly recover the AWR objective used by prior works in Offline RL (Peng et al., 2019; Nair et al., 2020), which can easily be computed using the offline dataset. This objective does not require sampling actions, which may 6In fact, theorems of CQL (Kumar et al., 2020) hold for our objective by replacing DCQL with DKL. potentially take Q(s, a) out of distribution. Alternatively, if we want to sample from the policy instead of the reference distribution µ, we can minimize the Reverse-KL divergence which gives us the SAC-like actor update: π∗ = argmax π Eρπ(s)π(a|s)[Q(s,a)− β log(π(a|s)/µ(a|s))]. (13) Interestingly, we note this doesn’t depend on V (s). If µ is chosen to be the last policy πk, the second term becomes the KL-divergence between the current policy and πk, performing a trust region update on π (Schulman et al., 2015; Vieillard et al., 2020).7 While estimating the log ratio log(π(a|s)/µ(a|s)) can be difficult depending on choice of µ, our Gumbel Loss J removes the need for µ during Q learning by estimating soft-Q values of the form Q(s,a)− β log(π(a|s)/µ(a|s)). 3.5 PRACTICAL ALGORITHMS Algorithm 1 Extreme Q-learning (X -QL) (Under Stochastic Dynamics) 1: Init Qϕ, Vθ , and πψ 2: Let D = {(s,a, r, s′)} be data from πD (of- fline) or replay buffer (online) 3: for step t in {1...N} do 4: Train Qϕ using L(ϕ) from Eq. 14 5: Train Vθ using J (θ) from Eq. 11 (with a ∼ D (offline) or a ∼ πψ (online)) 6: Update πψ via Eq. 12 (offline) or Eq. 13 (online) 7: end for In this section we develop a practical approach to Extreme Q-learning (X -QL) for both online and offline RL. We consider parameterized functions Vθ(s), Qϕ(s,a), and πψ(a|s) and let D be the training data distribution. A core issue with directly optimizing Eq. 10 is over-optimism about dynamics (Levine, 2018) when using simple-sample estimates for the Bellman backup. To overcome this issue in stochastic settings, we separate out the optimization of Vθ from that of Qϕ following Section 3.3. We learn Vθ using Eq. 11 to directly fit the optimal soft-values V ∗(s) based on Gumbel regression. Using Vθ(s′) we can get single-sample estimates of B∗ as r(s,a) + γVθ(s′). Now we can learn an unbiased expectation over the dynamics, Qϕ ≈ Es′|s,a[r(s,a) + γVθ(s′)] by minimizing the Mean-squared-error (MSE) loss between the single-sample targets and Qϕ: L(ϕ) = E(s,a,s′)∼D [ (Qϕ(s,a)− r(s,a)− γVθ(s′))2 ] . (14) In deterministic dynamics, our approach is largely simplified and we directly learn a single Qϕ using Eq. 9 without needing to learn B∗ or V ∗. Similarly, we learn soft-optimal policies using Eq. 12 (offline) or Eq. 13 (online) settings. Offline RL. In the offline setting, D is specified as an offline dataset assumed to be collected with the behavior policy πD. Here, learning values with Eq. 11 has a number of practical benefits. First, we are able to fit the optimal soft-values V ∗ without sampling from a policy network, which has been shown to cause large out-of-distribution errors in the offline setting where mistakes cannot be corrected by collecting additional data. Second, we inherently enforce a KL-constraint on the optimal policy π∗ and the behavior policy πD. 
This provides tunable conservatism via the temperature β. After offline training of Qϕ and Vθ, we can recover the policy post-training using the AWR objective (Eq. 12). Our practical implementation follows the training style of Kostrikov et al. (2021), but we train value network using using our ExtremeQ loss. Online RL. In the online setting, D is usually given as a replay buffer of previously sampled states and actions. In practice, however, obtaining a good estimate of V ∗(s′) requires that we sample actions with high Q-values instead of uniform sampling from D. As online learning allows agents to correct over-optimism by collecting additional data, we use a previous version of the policy network πψ to sample actions for the Bellman backup, amounting to the trust-region policy updates detailed at the end of Section 3.4. In practice, we modify SAC and TD3 with our formulation. To embue SAC (Haarnoja et al., 2018) with the benefits of Extreme Q-learning, we simply train Vθ using Eq. 11 with s ∼ D,a ∼ πψk(a|s). This means that we do not use action probabilities when updating the value networks, unlike other MaxEnt RL approaches. The policy is learned via the objective maxψ E[Qϕ(s, πψ(s))] with added entropy regularization, as SAC does not use a fixed noise schedule. TD3 by default does not use a value network, and thus we use our algorithm for deterministic dynamics by changing the loss to train Q in TD3 to directly follow Eq. 9. The policy is learned as in SAC, except without entropy regularization as TD3 uses a fixed noise schedule. 7Choosing µ to be uniform U gives the regular SAC update. 4 EXPERIMENTS We compare our Extreme Q-Learning (X -QL) approach to state-of-the-art algorithms across a wide set of continuous control tasks in both online and offline settings. In practice, the exponential nature of the Gumbel regression poses difficult optimization challenges. We provide Offline results on Androit, details of loss implementation, ablations, and hyperparameters in Appendix D. 4.1 OFFLINE RL Our offline results with fixed hyperparameters for each domain outperform prior methods (Chen et al., 2021; Kumar et al., 2019; 2020; Kostrikov et al., 2021; Fujimoto & Gu, 2021) in several environments, reaching state-of-the-art on the Franka Kitchen tasks, as shown in Table 1. We find performance on the Gym locomotion tasks to be already largely saturated without introducing ensembles An et al. (2021), but our method achieves consistently high performance across environments. While we attain good performance using fixed hyper-parameters per domain, X -QL achieves even higher absolute performance and faster convergence than IQL’s reported results when hyper-parameters are turned per environment. With additional tuning, we also see particularly large improvements on the AntMaze tasks, which require a significant amount of “stitching” between trajectories (Kostrikov et al., 2021). Full learning curves are in the Appendix. Like IQL, X -QL can be easily fine-tuned using online data to attain even higher performance as shown in Table 2. 
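To make the practical procedure concrete, we sketch one offline X-QL training step below; the network interfaces (in particular pi.log_prob), batch layout, and hyperparameters are illustrative assumptions rather than our released implementation:

import torch
import torch.nn.functional as F

def xql_offline_step(batch, Q, V, pi, opt_q, opt_v, opt_pi,
                     beta=2.0, gamma=0.99, clip=7.0):
    # One offline update (cf. Algorithm 1); all tensors come from D.
    s, a, r, s2 = batch

    # 1) ExtremeV loss (Eq. 11): Gumbel regression fits V to the
    #    soft-optimum of Q over dataset actions.
    z = torch.clamp((Q(s, a).detach() - V(s)) / beta, -clip, clip)
    v_loss = (torch.exp(z) - z - 1).mean()
    opt_v.zero_grad(); v_loss.backward(); opt_v.step()

    # 2) Q loss (Eq. 14): MSE to the single-sample target r + gamma * V(s').
    q_loss = F.mse_loss(Q(s, a), r + gamma * V(s2).detach())
    opt_q.zero_grad(); q_loss.backward(); opt_q.step()

    # 3) AWR policy extraction (Eq. 12): advantage-weighted log-likelihood
    #    of dataset actions; no actions are sampled from the policy.
    with torch.no_grad():
        w = torch.exp((Q(s, a) - V(s)) / beta).clamp(max=100.0)
    pi_loss = -(w * pi.log_prob(s, a)).mean()
    opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()

Note that every expectation above is taken over dataset actions, so the learned policy is never evaluated under Q during training.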
4.2 ONLINE RL

Table 2: Fine-tuning results on the AntMaze environments (offline → online).

Dataset               CQL            IQL            X-QL T
umaze-v0              70.1 → 99.4    86.7 → 96.0    93.8 → 99.6
umaze-diverse-v0      31.1 → 99.4    75.0 → 84.0    82.0 → 99.0
medium-play-v0        23.0 → 0.0     72.0 → 95.0    76.0 → 97.0
medium-diverse-v0     23.0 → 32.3    68.3 → 92.0    73.6 → 97.1
large-play-v0         1.0 → 0.0      25.5 → 46.0    45.1 → 59.3
large-diverse-v0      1.0 → 0.0      42.6 → 60.7    49.0 → 82.1

We compare ExtremeQ variants of SAC (Haarnoja et al., 2018) and TD3 (Fujimoto et al., 2018), denoted X-SAC and X-TD3, to their vanilla versions on tasks from the DM Control suite, shown in Figure 3. Across all tasks, an ExtremeQ variant matches or surpasses the performance of the baselines. We see particularly large gains in the Hopper environment, and more significant gains in comparison to TD3 overall. Consistent with SAC (Haarnoja et al., 2018), we find the temperature β needs to be tuned for different environments with different reward scales and sparsity. A core component of TD3 introduced by Fujimoto et al. (2018) is Double Q-Learning, which takes the minimum of two Q-functions to remove overestimation bias in the Q-target. As we assume errors to be Gumbel distributed, we expect our X-variants to be more robust to such errors. In all environments except Cheetah Run, our X-TD3 without the Double-Q trick, denoted X-QL - DQ, performs better than standard TD3. While the gains from Extreme Q-learning are modest in online settings, none of our methods require access to the policy distribution to learn the Q-values.

5 RELATED WORK

Our approach builds on work in both online and offline RL; here we review the most salient prior methods. Inspiration for our framework comes from econometrics (Rust, 1986; McFadden, 1972), and our Gumbel loss is motivated by IQ-Learn (Garg et al., 2021). Online RL. Our work bridges the theoretical gap between RL and MaxEnt RL by introducing our Gumbel loss function. Unlike past work in MaxEnt RL (Haarnoja et al., 2018; Eysenbach & Levine, 2020), our method does not require explicit entropy estimation and instead addresses the problem of obtaining soft-value estimates (LogSumExp) in high-dimensional or continuous spaces (Vieillard et al., 2021) by directly modeling them via our proposed Gumbel loss, which to our knowledge has not previously been used in RL. Our loss objective is intrinsically linked to the KL divergence, and similar objectives have been used for mutual information estimation (Poole et al., 2019) and statistical learning (Parsian & Kirmani, 2002; Atiyah et al., 2020). IQ-Learn (Garg et al., 2021), which proposes learning Q-functions to solve imitation learning, introduced the same loss in IL to obtain an unbiased dual form for the reverse KL-divergence between an expert and a policy distribution. Other works have used the forward KL-divergence to derive policy objectives (Peng et al., 2019) or for regularization (Schulman et al., 2015; Abdolmaleki et al., 2018). Prior work in RL has also examined using other types of loss functions (Bas-Serrano et al., 2021) or other formulations of the argmax in order to ease optimization (Asadi & Littman, 2017). Distinct from most off-policy RL methods (Lillicrap et al., 2015; Fujimoto et al., 2018; Haarnoja et al., 2018), we directly model B∗ like Haarnoja et al. (2017); Heess et al. (2015), but attain significantly more stable results. Offline RL. 
Prior works in offline RL can largely be categorized as relying on constrained or regularized Q-learning (Wu et al., 2019; Fujimoto & Gu, 2021; Fujimoto et al., 2019; Kumar et al., 2019; 2020; Nair et al., 2020), or extracting a greedy policy from the known behavior policy (Peng et al., 2019; Brandfonbrener et al., 2021; Chen et al., 2021). Most similar to our work, IQL (Kostrikov et al., 2021) fits expectiles of the Q-function of the behavior policy, but is not motivated to solve a particular problem or remain conservative. On the other hand, conservatism in CQL (Kumar et al., 2020) is motivated by lower-bounding the Q-function. Our method shares the best of both worlds – like IQL we do not evaluate the Q-function on out of distribution actions and like CQL we enjoy the benefits of conservatism. Compared to CQL, our approach uses a KL constraint with the behavior policy, and for the first time extends soft-Q learning to offline RL without needing a policy or explicit entropy values. Our choice of using the reverse KL divergence for offline RL follows closely with BRAC (Wu et al., 2019) but avoids learning a policy during training. 6 CONCLUSION We propose Extreme Q-Learning, a new framework for MaxEnt RL that directly estimates the optimal Bellman backup B∗ without relying on explicit access to a policy. Theoretically, we bridge the gap between the regular, soft, and conservative Q-learning formulations. Empirically, we show that our framework can be used to develop simple, performant RL algorithms. A number of future directions remain such as improving stability with training with the exponential Gumbel Loss function and integrating automatic tuning methods for temperature β like SAC (Haarnoja et al., 2018). Finally, we hope that our framework can find general use in Machine Learning for estimating log-partition functions. Acknowledgements Div derived the theory for Extreme Q-learning and Gumbel regression framework and ran the tuned offline RL experiments. Joey ran the consistent offline experiments and online experiments. Both authors contributed equally to paper writing. We thank John Schulman and Bo Dai for helpful discussions. Our research was supported by NSF(1651565), AFOSR (FA95501910024), ARO (W911NF-21-1-0125), ONR, CZ Biohub, and a Sloan Fellowship. Joey was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program. A THE GUMBEL ERROR MODEL FOR MDPS In this section, we functionally analyze Q-learning using our framework and further develop the Gumbel Error Model (GEM) for MDPs. A.1 RUST-MCFADDEN MODEL OF MDPS For an MDP following the Bellman equations, we assume the observed rewards to be stochastic due to an unobserved component of the state. Let s be the observed state, and (s, z) be the actual state with hidden component z. Then, Q(s, z,a) = R(s, z,a) + γEs′∼P (·|s,a)[Ez′|s′ [V (s′, z′)], (15) V (s, z) = max a Q(s, z,a). (16) Lemma A.1. Given, 1) conditional independence (CI) assumption that z′ depends only on s′, i.e. p(s′, z′|s, z,a) = p(z′|s′)p(s′|s,a) and 2) additive separablity (AS) assumption on the hidden noise: R(s,a, z) = r(s,a) + ϵ(z,a). Then for i.i.d. ϵ(z,a) ∼ G(0, β), we recover the soft-Bellman equations for Q(s, z,a) = q(s,a) + ϵ(z,a) and v(s) = Ez[V (s, z)], with rewards r(s,a) and entropy regularization β. Hence, a soft-MDP in MaxEntRL is equivalent to an MDP with an extra hidden variable in the state that introduces i.i.d. Gumbel noise in the rewards and follows the AS+CI conditions. 
Proof. We have, q(s,a) = r(s,a) + γEs′∼P (·|s,a)[Ez′|s′ [V (s′, z′)] (17) v(s) = Ez[V (s, z)] = Ez[max a (q(s,a) + ϵ(z))]. (18) From this, we can get fixed-point equations for q and π, q(s,a) = r(s,a) + γEs′∼P (·|s,a)[Ez′|s′ [max a′ (q(s′,a′) + ϵ(z′,a′))]], (19) π(·|s) = Ez[argmax a (q(s,a) + ϵ(z,a))] ∈ ∆A, (20) where ∆A is the set of all policies. Now, let ϵ(z,a) ∼ G(0, β) and assumed independent for each (z,a) (or equivalently (s,a) due to the CI condition). Then we can use the Gumbel-Max trick to recover the soft-Bellman equations for q(s,a) and v(s) with rewards r(s,a): q(s,a) = r(s,a) + γEs′∼P (·|s,a)[Lβa′ [q(s ′,a′)]], (21) π(·|s) = softmax a (q(s,a)). (22) Thus, we have that the soft-Bellman optimality equation and related optimal policy can arise either from the entropic regularization viewpoint or from the Gumbel error viewpoint for an MDP. Corollary A.1.1. Converse: An MDP following the Bellman optimality equation and having a policy that is softmax distributed, necessarily has any i.i.d. noise in the rewards due to hidden state variables be Gumbel distributed, given the AS+CI conditions hold. Proof. McFadden (McFadden, 1972) proved this converse in his seminal work on discrete choice theory, that for i.i.d. ϵ satisfiying Equation 19 with a choice policy π ∼ softmax has ϵ be Gumbel distributed. And we show a proof here similar to the original for MDPs. Considering Equation 20, we want π(a|s) to be softmax distributed. Let ϵ have an unknown CDF F and we consider there to be N possible actions. Then, P (argmax a (q(s,a) + ϵ(z,a)) = ai|s, z) = P (q(s,ai) + ϵ(z,ai) ≥ q(s,aj) + ϵ(z,aj) ∀i ̸= j |s, z) = P (ϵ(z,aj)− ϵ(z,ai) ≤ q(s,ai)− q(s,aj) ∀i ̸= j |s, z) Simplifying the notation, we write ϵ(z,ai) = ϵi and q(s,ai) = qi. Then ϵ1, ..., ϵN has a joint CDF G: G(ϵ1, ..., ϵN ) = N∏ j=1 P (ϵj ≤ ϵi + qi − qj) = N∏ j=1 F (ϵi + qi − qj) and we can get the required probability π(i) as: π(i) = ∫ +∞ ε=−∞ N∏ j=1,j ̸=i F (ε+ qi − qj)dF (ε) (23) For π = softmax(q), McFadden (McFadden, 1972) proved the uniqueness of F to be the Gumbel CDF, assuming translation completeness property to hold for F . Later this uniqueness was shown to hold in general for any N ≥ 3 (Luce, 1977). A.2 GUMBEL ERROR MODEL (GEM) FOR MDPS To develop our Gumbel Error Model (GEM) for MDPs under functional approximation as in Section 3.1, we follow our simplified scheme of M independent estimators Q̂, which results in the following equation over Q̄ = E[Q̂]: Q̄t+1(s,a) = r(s,a) + γEs′|s,a[Eϵt [max a′ (Q̄t(s ′,a′) + ϵt(s ′,a′))]]. (24) Here, the maximum of random variables will generally be greater than the true max, i.e. Eϵ[maxa′(Q̄(s′,a′) + ϵ(s′,a′))] ≥ maxa′ Q̄(s′,a′) (Thrun & Schwartz, 1999). As a result, even initially zero-mean error can cause Q updates to propagate consistent overestimation bias through the Bellman equation. This is a known issue with function approximation in RL (Fujimoto et al., 2018). Now, we can use the Rust-McFadden model from before. To account for the stochasticity, we consider extra unobserved state variables z in the MDP to be the model parameters θ used in the functional approximation. The errors from functional approximation ϵt can thus be considered as noise added in the reward. Here, CI condition holds as ϵ is separate from the dynamics and becomes conditionally independent for each state-action pair and AS condition is implied. Then for Q̄ satisfying Equation 24, we can apply the McFadden-Rust model, which implies that for the policy to be soft-optimal i.e. 
a softmax over Q̄, ϵ will be Gumbel distributed. Conversely, for the i.i.d. ϵ ∼ G, Q̄(s,a) follows the soft-Bellman equations and π(a|s) = softmax(Q(s,a)). This indicates an optimality condition on the MDP – for us to eventually attain the optimal softmax policy in the presence of functional boostrapping (Equation 24), the errors should follow the Gumbel distribution. A.2.1 TIME EVOLUTION OF ERRORS IN MDPS UNDER DETERMINISTIC DYNAMICS In this section, we characterize the time evolution of errors in an MDP using GEM. We assume deterministic dynamics to simplify our analysis. We suppose that we know the distribution of Q-values at time t and model the evolution of this distribution through the Bellman equations. Let Zt(s,a) be a random variable sampled from the distribution of Q-values at time t, then the following Bellman equation holds: Zt+1(s,a) = r(s,a) + γmax a′ Zt(s ′,a′). (25) Here, Zt+1(s,a) = maxa′ [r(s,a) + γZt(s′,a′)] is a maximal distribution and based on EVT should eventually converge to an extreme value distribution, which we can model as a Gumbel. Concretely, let’s assume that we fix Zt(s,a) ∼ G(Qt(s,a), β) for some Qt(s,a) ∈ R and β > 0. Furthermore, we assume that the Q-value distribution is jointly independent over different stateactions i.e. Z(s,a) is independent from Z(s′,a′) for ∀ (s,a) ̸= (s′,a′). Then maxa′ Zt(s′,a′) ∼ G(V (s′), β) with V (s) = Lβa [Q(s,a)] using the Gumbel-max trick. Then substituting in Equation 25 and rescaling Zt with γ, we get: Zt+1(s,a) ∼ G ( r(s,a) + γLβa′ [Q(s ′,a′)], γβ ) . (26) So very interestingly the Q-distribution becomes a Gumbel process, where the location parameter Q(s,a) follows the optimal soft-Bellman equation. Similarly, the temperature scales as γβ and the distribution becomes sharper after every timestep. After a number of timesteps, we see that Z(s,a) eventually collapses to the Delta distibution over the unique contraction Q∗(s,a). Here, γ controls the rate of decay of the Gumbel distribution into the collapsed Delta distribution. Thus we get the expected result in deterministic dynamics that the optimal Q-function will be deterministic and its distribution will be peaked. So if a Gumbel error enters into the MDP through a functional error or some other source at a timestep t in some state s, it will trigger off an wave that propagates the Gumbel error into its child states following Equation 26. Thus, this Gumbel error process will decay at a γ rate every timestep and eventually settle down with Q-values reaching the the steady solution Q∗. The variance of this Gumbel process given as π 2 6 β 2 will decay as γ2, similarly the bias will decay as γ-contraction in the L∞ norm. Hence, GEM gives us an analytic characterization of error propogation in MDPs under deterministic dynamics. Nevertheless under stochastic dynamics, characterization of errors using GEM becomes non-trivial as Gumbel is not mean-stable unlike the Gaussian distribution. We hypothesise that the errors will follow some mix of Gumbel-Gaussian distributions, and leave this characterization as a future open direction. B GUMBEL REGRESSION We characterize the concentration bounds for Gumbel Regression in this section. First, we bound the bias on applying Lβ to inputs containing errors. Second, we bound the PAC learning error due to an empirical L̂β over finite N samples. B.1 OVERESTIMATION BIAS Let Q̂(s,a) be a random variable representing a Q-value estimate for a state and action pair (s,a). 
We assume that it is an unbiased estimate of the true Q-value Q(s,a) with E[Q̂(s,a)] = Q(s,a). Let Q(s,a) ∈ [−Qmax, Qmax] Then, V (s) = Lβa∼µQ(s,a) is the true value function, and V̂ (s) = Lβa∼µQ̂(s,a) is its estimate. Lemma B.1. We have V (s) ≤ E[V̂ (s)] ≤ Ea∼µ[Q(s,a)] + β log cosh(Qmax/β). Proof. The lower bound V (s) ≤ E[V̂ (s)] is easy to show using Jensen’s Inequality as log_sum_exp is a convex function. For the upper bound, we can use a reverse Jensen’s inequality (Simić, 2009) that for any convex mapping f on the interval [a, b] it holds that:∑ i pif (xi) ≤ f (∑ i pixi ) + f(a) + f(b)− f ( a+ b 2 ) Setting f = − log(·) and xi = eQ̂(s,a)/β , we get: Ea∼µ[− log(eQ̂(s,a)/β)] ≤ − log(Ea∼µ[eQ̂(s,a)/β ])−log(eQmax/β)−log(e−Qmax/β)+log ( eQmax/β + e−Qmax/β 2 ) On simplifying, V̂ (s) = β log(Ea∼µeQ̂(s,a)/β) ≤ Ea∼µ[Q̂(s,a)] + β log cosh(Qmax/β) Taking expectations on both sides, E[V̂ (s)] ≤ Ea∼µ[Q(s,a)] + β log cosh(Qmax/β). This gives an estimate of how much the LogSumExp overestimates compared to taking the expectation over actions for random variables Q̂. This bias monotonically decreases with β, with β = 0 having a max bias of Qmax and for large β decaying as 12βQ 2 max. B.2 PAC LEARNING BOUNDS FOR GUMBEL REGRESSION Lemma B.2. exp(L̂β(X)/β) over a finite N samples is an unbiased estimator for the partition function Zβ = E [ eX/β ] and with a probability at least 1− δ it holds that: exp(L̂β(X)/β) ≤ Zβ + sinh(Xmax/β) √ 2 log (1/δ) N . Similarly, L̂β(X) over a finite N samples is a consistent estimator of Lβ(X) and with a probability at least 1− δ it holds that: L̂β(X) ≤ Lβ(X) + β sinh(Xmax/β) Zβ √ 2 log (1/δ) N . Proof. To prove these concentration bounds, we consider random variables eX1/β , ..., eXn/β with β > 0, such that ai ≤ Xi ≤ bi almost surely, i.e. eai/β ≤ eXi/β ≤ ebi/β . We consider the sum Sn = ∑N i=1 e Xi/β and use Hoeffding’s inequality, so that for all t > 0: P (Sn − ESn ≥ t) ≤ exp ( −2t2∑n i=1 ( ebi/β − eai/β )2 ) (27) To simplify, we let ai = −Xmax and bi = Xmax for all i. We also rescale t as t = Ns, for s > 0. Then P (Sn − ESn ≥ Ns) ≤ exp ( −Ns2 2 sinh2(Xmax/β) ) (28) We can notice that L.H.S. is same as P (exp(L̂β(X)/β)−exp(Lβ(X)/β) ≥ s), which is the required probability we want. Letting the R.H.S. have a value δ, we get s = sinh(Xmax/β) √ 2 log (1/δ) N Thus, with a probability 1− δ, it holds that: exp(L̂β(X)/β) ≤ exp(Lβ(X)/β) + sinh(Xmax/β) √ 2 log (1/δ) N (29) Thus, we get a concentration bound on exp(L̂β(X)/β) which is an unbiased estimator of the partition function Zβ = exp(Lβ(X)/β). This bound becomes tighter with increasing β, and asymptotically behaves as Xmaxβ √ 2 log(1/δ) N . Similarly, to prove the bound on the log-partition function L̂β(X), we can further take log(·) on both sides and use the inequality log(1 + x) ≤ x, to get a direct concentration bound on L̂β(X), L̂β(X) ≤ Lβ(X) + β log ( 1 + sinh(Xmax/β)e −Lβ(X)/β √ 2 log (1/δ) N ) (30) = Lβ(X) + β sinh(Xmax/β)e−L β(X)/β √ 2 log (1/δ) N (31) = Lβ(X) + β sinh(Xmax/β) Zβ √ 2 log (1/δ) N (32) This bound also becomes tighter with increasing β, and asymptotically behaves as Xmax Zβ √ 2 log(1/δ) N . C EXTREME Q-LEARNING In this section we provide additional theoretical details of our algorithm, X -QL, and its connection to conservatism in CQL (Kumar et al., 2020). 
C.1 X -QL For the soft-Bellman equation given as: Q(s,a) = r(s,a) + γEs′∼P (·|s,a)V (s), (33) V (s) = Lβµ(·|s)(Q(s,a)), (34) we have the fixed-point characterization, that can be found with a recurrence: V (s) = Lβµ(·|s) ( r(s,a) + γEs′∼P (·|s,a)V (s) ) . (35) In the main paper we discuss the case of X -QL under stochastic dynamics which requires the estimation of B∗. Under deterministic dynamic, however, this can be avoided as we do not need to account for an expectation over the next states. This simplifies the bellman equations. We develop two simple algorithms for this case without needing B∗. Value Iteration. We can write the value-iteration objective as: Q(s,a)← r(s,a) + γVθ(s′), (36) J (θ) = Es∼ρµ,a∼µ(·|s) [ e(Q(s,a)−Vθ(s))/β − (Q(s,a)− Vθ(s))/β − 1 ] . (37) Here, we learn a single model of the values Vθ(s) to directly solve Equation 35. For the current value estimate Vθ(s), we calculate targets r(s,a) + γVθ(s) and find a new estimate V ′θ (s) by fitting Lβµ with our objective J . Using our Gumbel Regression framework, we can guarantee that as J finds a consistent estimate of the Lβµ, and Vθ(s) will converge to the optimal V (s) upto some sampling error. Q-Iteration. Alternatively, we can develop a Q-iteration objective solving the recurrence: Qt+1(s,a) = r(s,a) + γLβa′∼µ [Qt(s ′,a′)] (38) = r(s,a) + Lγβa′∼µ [γQt(s ′,a′)] (39) = Lγβa′∼µ [r(s,a) + γQt(s ′,a′)] . (40) where we can rescale β to γβ to move L out. This gives the objective: Qt(s,a)← r(s,a) + γQθ(s′,a′), (41) J (Qθ) = Eµ(s,a,s′) [ e(Q t(s,a)−Qθ(s,a))/γβ − (Qt(s,a)−Qθ(s,a))/γβ − 1 ] . (42) Thus, this gives a method to directly estimate Qθ without learning values, and forms our X -TD3 method in the main paper. Note, that β is a hyperparameter, so we can use an alternative hyperparameter β′ = γβ to simplify the above. We can formalize this as a Lemma in the deterministic case: Lemma C.1. Let J (TµQ−Q′) = Es,a,s′,a′∼µ [ e(TµQ(s,a)−Q ′(s,a)/γβ − (TµQ(s,a)−Q′(s,a))/γβ − 1 ] . where Tµ is a linear operator that maps Q from current (s,a) to the next (s′,a′): TµQ(s,a) := r(s,a) + γQ(s′,a′) Then we have B∗Qt = argmin Q′∈Ω J (TµQt −Q′), where Ω is the space of Q-functions. Proof. We use that in deterministic dynamics, Lγβa′∼µ[TµQ(s,a)] = r(s,a) + γL β a′∼µ[Q(s ′,a′)] = B∗Q(s,a) Then solving for the unique minima for J establishes the above results. Thus, optimizing J with a fixed-point is equivalent to Q-iteration with the Bellman operator. C.2 BRIDGING SOFT AND CONSERVATIVE Q-LEARNING Inherent Convervatism in X -QL Our method is inherently conservative similar to CQL (Kumar et al., 2020) in that it underestimates the value function (in vanilla Q-learning) V π(s) by −β Ea∼π(a|s) [ log π(a|s)πD(a|s) ] , whereas CQL understimates values by a factor −β Ea∼π(a|s) [ π(a|s) πD(a|s) − 1 ] , where πD is the behavior policy. Notice that the underestimation factor transforms V π in vanilla Q-learning into V π used in the soft-Q learning formulation. Thus, we observe that KL-regularized Q-learning is inherently conservative, and this conservatism is built into our method. Furthermore, it can be noted that CQL conservatism can be derived as adding a χ2 regularization to an MDP and although not shown by the original work (Kumar et al., 2020) or any follow-ups to our awareness, the last term of Eq. 14 in CQL’s Appendix B (Kumar et al., 2020), is simply χ2(π||πD) and what the original work refers to as DCQL is actually the χ2 divergence. 
Thus, it is possible to show that all the results for CQL hold for our method by simply replacing DCQL with DKL, i.e. the χ² divergence with the KL divergence everywhere. We show a simple proof below that DCQL is the χ² divergence:

DCQL(π, πD)(s) := ∑_a π(a|s) [π(a|s)/πD(a|s) − 1]
= ∑_a (π(a|s) − πD(a|s) + πD(a|s)) [π(a|s)/πD(a|s) − 1]
= ∑_a (π(a|s) − πD(a|s)) [(π(a|s) − πD(a|s))/πD(a|s)] + ∑_a πD(a|s) [π(a|s)/πD(a|s) − 1]
= ∑_a πD(a|s) [π(a|s)/πD(a|s) − 1]² + 0, since ∑_a π(a|s) = ∑_a πD(a|s) = 1
= χ²(π(·|s) || πD(·|s)), using the definition of the chi-square divergence.

Why X-QL is better than CQL for offline RL. In light of the above results, we know that CQL adds a χ² regularization to the policy π with respect to the behavior policy πD, whereas our method does the same using the reverse KL divergence. Now, the reverse KL divergence has mode-seeking behavior, and thus our method will find a policy that better fits the mode of the behavior policy and is more robust to random actions in the offline dataset. CQL does not have such a property and can easily be affected by noisy actions in the dataset.

Connection to the dual KL representation. For given distributions µ and π, we can write their KL divergence using the dual representation proposed by IQ-Learn (Garg et al., 2021): DKL(π || µ) = max_{x∈R} Eµ[−e^(−x)] − Eπ[x] − 1, which is maximized for x = −log(π/µ). We can make a clever substitution to exploit this relationship. Let x = (Q − T^π Q̂k)/β for a variable Q ∈ R and a fixed constant T^π Q̂k; then on variable substitution we get:

Es∼ρµ[DKL(π(·|s) || µ(·|s))] = min_Q L(Q), with
L(Q) = Es∼ρµ,a∼µ(·|s)[e^((T^π Q̂k(s,a)−Q(s,a))/β)] − Es∼ρµ,a∼π(·|s)[(T^π Q̂k(s,a)−Q(s,a))/β] − 1.

This gives us Equation 8 in Section 3.3 of the main paper, and is minimized for Q = T^π Q̂k − β log(π/µ) as desired. Thus, this lets us transform the regular Bellman update into the soft-Bellman update.

D EXPERIMENTS

In this section we provide additional results and more details on all experimental procedures.

D.1 A TOY EXAMPLE

D.2 BELLMAN ERROR PLOTS

Additional plots of the error distributions for SAC and TD3 can be found in Figure 5 and Figure 6, respectively. Figure 1 and the aforementioned plots were generated by running RL algorithms for 100,000 timesteps and logging the Bellman errors every 5,000 steps. In particular, the Bellman errors were computed as r(s,a) + γQθ1(s′, πψ(s′)) − Qθ1(s,a). In the above equation, Qθ1 represents the first of the two Q-networks used in the Double-Q trick. We do not use target networks to compute the Bellman error, and instead compute the fully online quantity. πψ(s′) represents the mean or deterministic output of the current policy distribution. We used an implementation of SAC based on Yarats & Kostrikov (2020) and an implementation of TD3 based on Fujimoto et al. (2018). For SAC, the entropy term was not added when computing the error, as we seek to characterize the standard Bellman error and not the soft-Bellman error. Before generating plots, the errors were clipped to the ranges shown; this prevented over-fitting to large outliers. The Gumbel and Gaussian curves were fit using MLE via SciPy.

D.3 NUMERIC STABILITY

In practice, a naive implementation of the Gumbel loss function J from Equation 11 suffers from stability issues due to the exponential term. We found that stabilizing the loss objective was essential for training. Practically, we follow the common max-normalization trick used in softmax computation.
This amounts to factoring out e^max(z) from the loss and consequently scaling the gradients, which adds a per-batch adaptive normalization to the learning rate. We additionally clip loss inputs that are too large to prevent outliers. An example code snippet in PyTorch is included below:

import torch

def gumbel_loss(pred, label, beta, clip):
    # Gumbel regression loss of Eq. 11, stabilized by factoring out
    # exp(max_z) so the exponential never overflows.
    z = (label - pred) / beta
    z = torch.clamp(z, -clip, clip)
    max_z = torch.max(z)
    max_z = torch.where(max_z < -1.0, torch.tensor(-1.0, device=z.device), max_z)
    max_z = max_z.detach()  # Detach the gradients
    # Equivalent to exp(-max_z) * (exp(z) - z - 1), i.e. the loss of
    # Eq. 11 rescaled by a per-batch constant.
    loss = torch.exp(z - max_z) - z * torch.exp(-max_z) - torch.exp(-max_z)
    return loss.mean()

In some experiments we additionally clip the value of the gradients for stability.

D.4 OFFLINE EXPERIMENTS

In this subsection, we provide additional results in the offline setting along with hyper-parameter and implementation details. Table 3 shows results for the Adroit benchmark in D4RL. Again, we see strong results for X-QL, where X-QL-C with the same hyperparameters as used in the Franka Kitchen environments surpasses prior works on five of the eight tasks. Figure 7 shows learning curves which include baseline methods. We see that X-QL exhibits extremely fast convergence, particularly when tuned. One issue, however, is numerical stability: the untuned version of X-QL exhibits divergence on the AntMaze environment. We base our implementation of X-QL off the official implementation of IQL from Kostrikov et al. (2021). We use the same network architecture and also apply the Double-Q trick. We also apply the same data preprocessing, which is described in their appendix. We additionally take their baseline results and use them in Table 1, Table 2, and Table 3 for accurate comparison. We keep our general algorithm hyper-parameters and evaluation procedure the same, but tune β and the gradient clipping value for each environment. Tuning of β was done via hyper-parameter sweeps over a fixed set of values [0.6, 0.8, 1, 2, 5] for offline tasks, save for a few environments where larger values were clearly better. Increasing the batch size also tended to help with stability, since our rescaled loss does a per-batch normalization. AWAC parameters were left identical to those in IQL. For MuJoCo locomotion tasks we average mean returns over 10 evaluation trajectories and 6 random seeds. For the AntMaze tasks, we average over 1000 evaluation trajectories. We do not see stability issues in the MuJoCo locomotion environments, but found that offline runs for the AntMaze environments could occasionally exhibit divergence in training for small β < 1. To help mitigate this, we found adding Layer Normalization (Ba et al., 2016) to the value networks to work well. The full hyper-parameters used for the experiments are given in Table 4.

D.5 OFFLINE ABLATIONS

In this section we show hyper-parameter ablations for the offline experiments. In particular, we ablate the temperature parameter β and the batch size. The temperature β controls the strength of the KL penalization between the learned policy and the dataset behavior policy: a small β is beneficial for datasets with many random, noisy actions, whereas a high β favors more expert-like datasets. Because our implementation of the Gumbel regression loss normalizes gradients at the batch level, larger batches tended to be more stable and in some environments led to higher final performance. To show that our tuned X-QL method is not simply better than IQL due to bigger batch sizes, we show a comparison with a fixed batch size of 1024 in Fig. 7. 
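For completeness, a hypothetical usage of the gumbel_loss snippet from Section D.3 for the value update of Equation 11 is sketched below; v_net, q_net, and loader are assumed placeholder objects, not part of our released code:

import torch

# Hypothetical wiring of gumbel_loss into the value update; the
# networks, data loader, and settings here are placeholders.
v_opt = torch.optim.Adam(v_net.parameters(), lr=3e-4)
for s, a, r, s2 in loader:                 # offline batches
    label = q_net(s, a).detach()           # regression targets Q(s, a)
    pred = v_net(s)
    loss = gumbel_loss(pred, label, beta=2.0, clip=7.0)
    v_opt.zero_grad()
    loss.backward()
    # optional gradient clipping, as noted in Section D.3
    torch.nn.utils.clip_grad_norm_(v_net.parameters(), max_norm=1.0)
    v_opt.step()

Because the max-normalization inside gumbel_loss acts per batch, larger batches smooth this adaptive rescaling, consistent with the batch-size ablations above.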
D.6 ONLINE EXPERIMENTS We base our implementation of SAC off pytorch_sac (Yarats & Kostrikov, 2020) but modify it to use a Value function as described in Haarnoja et al. (2017). Empirically we see similar performance with and without using the value function, but leave it in for fair comparison against our X -SAC variant. We base our implementation of TD3 on the original author’s code from Fujimoto et al. (2018). Like in offline experiments, hyper-parameters were left as default except for β, which we tuned for each environment. For online experiments we swept over [1, 2, 5] for X–SAC and TD3. We found that these values did not work as well for TD3 - DQ, and swept over values [3, 4, 10, 20]. In online experiments we used an exponential clip value of 8. For SAC we ran three seeds in each environment as it tended to be more stable. For TD3 we ran four. Occasionally, our X - variants would experience instability due to outliers in collected online policy rollouts causing exploding loss terms. We see this primarily in the Hopper and Quadruped environments, and rarely for Cheetah or Walker. For Hopper and Quadruped, we found that approximately one in six runs became unstable after about 100k gradient steps. This sort of instability is also common in other online RL algorithms like PPO due to noisy online policy collection. We restarted runs that become unstable during training. We verified our SAC results by comparing to Yarats & Kostrikov (2020) and our TD3 results by comparing to Li (2021) . We found that our TD3 implementation performed marginally better overall.
1. What is the focus of the paper regarding Q-learning frameworks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its formulation and performance?
3. Do you have any concerns or questions about the experiments and comparisons presented in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What is the significance of modeling the TD-error as a Gumbel distribution in the proposed method?
6. Can you explain why the proposed method requires the double-Q learning technique despite its robustness to Q-value overestimation?
7. What is the issue with the runtime comparison between the proposed method and IQL, and how might it affect the conclusion drawn from the result?
8. Could you provide more information or context regarding Lemma 3.4 and Figure 3 mentioned in the review?
Summary Of The Paper

This paper proposes a new Q-learning framework by formulating the TD-error as a Gumbel distribution rather than a Gaussian. This new formulation leads to MaxEnt-RL-style algorithms, but without the need to sample out-of-distribution actions. The proposed method performs fairly well on standard D4RL benchmarks.

Strengths And Weaknesses

Strengths
- Incorporates the MaxEnt RL framework while avoiding the major problem of offline RL (extrapolation error from referring to OOD examples).
- Modeling the TD-error as a Gumbel distribution seems more appropriate than modeling it as a Gaussian.

Weaknesses
- In offline RL, the proposed method still requires the double-Q learning technique, even though the method is expected to be more robust to Q-value overestimation.
- The experiments table for offline RL (Table 1) is misleading in several respects:
  - First, the hyperparameters for the proposed method are tuned per dataset (refer to Table 4). However, previous works such as CQL or IQL keep their hyperparameters fixed at least within each environment; for example, IQL uses the same hyperparameters for all MuJoCo locomotion tasks. Thus, the table results are not a fair comparison. Since the paper does not provide any hyperparameter-sensitivity results, it is hard to conclude that the proposed method is actually the new "state-of-the-art" as the authors argue. Also, even if dataset-wise hyperparameter tuning is freely allowed, recent works appear to show higher performance on several datasets [1, 2].
  - Second, the runtime comparison also seems misleading. The authors note that the proposed method converges faster than IQL (Figure 6) and runs for half as many epochs as IQL. However, based on the source code in the supplementary material, the proposed method uses a much larger batch size (1024) than IQL (256) [3]. Are the authors increasing the batch size and then claiming the proposed method converges in far fewer iterations?

Questions
- In Lemma 3.4, why is the first expectation over µ while the second expectation is over π?
- In Figure 3, what does -DQ on TD3 mean?

[1] An et al., Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble, NeurIPS 2021.
[2] Cheng et al., Adversarially Trained Actor Critic for Offline Reinforcement Learning, ICML 2022.
[3] https://github.com/ikostrikov/implicit_q_learning

Clarity, Quality, Novelty And Reproducibility

Mentioned above.
This amounts to factoring out emaxz z from the loss and consequently scaling the gradients. This adds a per-batch adaptive normalization to the learning rate. We additionally clip loss inputs that are too large to prevent outliers. An example code snippet in Pytorch is included below: def gumbel_loss(pred, label, beta, clip): z = (label - pred)/beta z = torch.clamp(z, -clip, clip) max_z = torch.max(z) max_z = torch.where(max_z < -1.0, torch.tensor(-1.0), max_z) max_z = max_z.detach() # Detach the gradients loss = torch.exp(z - max_z) - z*torch.exp(-max_z) - torch.exp(-max_z) return loss.mean() In some experiments we additionally clip the value of the gradients for stability. D.4 OFFLINE EXPERIMENTS In this subsection, we provide additional results in the offline setting and hyper-parameter and implementation details. Table 3 shows results for the Androit benchmark in D4RL. Again, we see strong results for X -QL, where X -QL-C with the same hyperparameters as used in the Franka Kitchen environments surpasses prior works on five of the eight tasks. Figure 7 shows learning curves which include baseline methods. We see that X -QL exhibits extremely fast convergence, particularly when tuned. One issue however, is numerical stability. The untuned version of X -QL exhibits divergence on the Antmaze environment. We base our implementation of X -QL off the official implementation of IQL from Kostrikov et al. (2021). We use the same network architecture and also apply the Double-Q trick. We also apply the same data preprocessing which is described in their appendix. We additionally take their baseline results and use them in Table 1, Table 2, and Table 3 for accurate comparison. We keep our general algorithm hyper-parameters and evaluation procedure the same but tune β and the gradient clipping value for each environment. Tuning values of β was done via hyper-parameter sweeps over a fixed set of values [0.6, 0.8, 1, 2, 5] for offline save for a few environments where larger values were clearly better. Increasing the batch size tended to also help with stability, since our rescaled loss does a per-batch normalization. AWAC parameters were left identical to those in IQL. For MuJoCo locomotion tasks we average mean returns over 10 evaluation trajectories and 6 random seeds. For the AntMaze tasks, we average over 1000 evaluation trajectories. We don’t see stability issues in the mujoco locomotion environments, but found that offline runs for the AntMaze environments could occasionally exhibit divergence in training for a small β < 1. In order to help mitigate this, we found adding Layer Normalization (Ba et al., 2016) to the Value networks to work well. Full hyper-parameters we used for experiments are given in Table 4. D.5 OFFLINE ABLATIONS In this section we show hyper-parameter ablations for the offline experiments. In particular, we ablate the temperature parameter, β, and the batch size. The temperature β controls the strength of KL penalization between the learned policy and the dataset behavior policy, and a small β is beneficial for datasets with lots of random noisy actions, whereas a high β favors more expert-like datasets. Because our implementation of the Gumbel regression loss normalizes gradients at the batch level, larger batches tended to be more stable and in some environments lead to higher final performance. To show that our tuned X -QL method is not simply better than IQL due to bigger batch sizes, we show a comparison with a fixed batch size of 1024 in Fig. 7. 
D.6 ONLINE EXPERIMENTS

We base our implementation of SAC on pytorch_sac (Yarats & Kostrikov, 2020) but modify it to use a Value function as described in Haarnoja et al. (2017). Empirically, we see similar performance with and without the value function, but leave it in for fair comparison against our X-SAC variant. We base our implementation of TD3 on the original authors' code from Fujimoto et al. (2018). As in the offline experiments, hyper-parameters were left at their defaults except for β, which we tuned for each environment. For online experiments we swept over [1, 2, 5] for X-SAC and TD3. We found that these values did not work as well for TD3-DQ, and swept over the values [3, 4, 10, 20] instead. In online experiments we used an exponential clip value of 8. For SAC we ran three seeds in each environment, as it tended to be more stable; for TD3 we ran four.

Occasionally, our X- variants would experience instability due to outliers in collected online policy rollouts causing exploding loss terms. We see this primarily in the Hopper and Quadruped environments, and rarely for Cheetah or Walker. For Hopper and Quadruped, we found that approximately one in six runs became unstable after about 100k gradient steps. This sort of instability is also common in other online RL algorithms like PPO due to noisy online policy collection. We restarted runs that became unstable during training. We verified our SAC results by comparing to Yarats & Kostrikov (2020) and our TD3 results by comparing to Li (2021). We found that our TD3 implementation performed marginally better overall.
1. What is the focus and contribution of the paper regarding Q-learning?
2. What are the strengths of the proposed approach, particularly in its connection to CQL and policy evaluation?
3. What are the weaknesses of the paper, especially regarding its motivation and presentation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper makes an interesting observation that the error after the Bellman optimality operation, instead of following Gaussian noise, should in theory follow a Gumbel distribution. This motivates the usage of Gumbel regression instead of least squares to learn the Q functions, and the paper shows its connection to CQL in the policy evaluation regime. The paper also shows that in practice one can introduce another V function and policy to deal with continuous action spaces, and the experiments show that the practical algorithm achieves promising results on both online and offline benchmarks.

Strengths And Weaknesses
Strengths
The paper is very well organized, with sufficient background introduction so that readers without much previous knowledge can also understand the context. The usage of Gumbel regression is well motivated, with good theoretical support. The paper also makes a good connection and shows how the minimizer of the Gumbel regression objective recovers previous update rules that require explicit knowledge of the policy action distribution, such as SAC or CQL, while XQL does not require access to the policy during the value function update. The practical algorithm shows very promising improvement over previous methods, especially on the offline benchmarks. The practical algorithm also seems easy to build on top of previous online methods.

Weaknesses
The motivation for using Gumbel regression seems not very obvious in the continuous action regime, because it is not obvious how to take the max operator. I am also confused about the presentation of the practical motivation (such as Fig. 1) for two reasons: (1) the Bellman error is recorded during training, but some value functions learned during training may not even represent a valid value function for any policy, so what does this Bellman error represent? (2) The Bellman error is actually not calculated with the max, but with the action that the corresponding policy takes. Again, I understand this is due to the difficulty of continuous action spaces, but this seems to deviate from the theoretical motivation. Although the paper claims that not requiring access to the policy during the value function update is a merit of the proposed algorithm, I could not see why this is significant in practice: we still need to train a policy anyway, so we could assume we always have access to the policy information.

Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is well-organized and easy to read.
Quality: The technical part seems correct.
Novelty: The paper makes an interesting observation and makes a good practical contribution.
Reproducibility: The submission includes a code base, and the paper includes many implementation details.
ICLR
Title
MILE: A Multi-Level Framework for Scalable Graph Embedding

Abstract
Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a graph convolution neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while generating embeddings of better quality, for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation.

1 Introduction
In recent years, graph embedding has attracted much interest due to its broad applicability for various tasks (Perozzi et al., 2014; Wang et al., 2016; Henderson et al., 2012). However, such methods rarely scale to large datasets (e.g., graphs with over 1 million nodes) since they are computationally expensive and often memory intensive. For example, random-walk-based embedding techniques require a large amount of CPU time to generate a sufficient number of walks and train the embedding model. As another example, embedding methods based on matrix factorization, including GraRep (Cao et al., 2015) and NetMF (Qiu et al., 2018), require constructing an enormous objective matrix (usually much denser than the adjacency matrix), on which matrix factorization is performed. Even a medium-size graph with 100K nodes can easily require hundreds of GB of memory using those methods. On the other hand, many graph datasets in the real world tend to be large-scale with millions or even billions of nodes. To the best of our knowledge, none of the existing efforts examines how to scale up graph embedding in a generic way. We make the first attempt to close this gap. We are also interested in the related question of whether the quality of such embeddings can be improved along the way. Specifically, we ask: 1) Can we scale up the existing embedding techniques in an agnostic manner so that they can be directly applied to larger datasets? 2) Can the quality of such embedding methods be strengthened by incorporating the holistic view of the graph? To tackle these problems, we propose a MultI-Level Embedding (MILE) framework for graph embedding.
Our approach relies on a three-step process: first, we repeatedly coarsen the original graph into smaller ones by employing a hybrid matching strategy; second, we compute the embeddings on the coarsest graph using an existing embedding technique; and third, we propose a novel refinement model based on learning a graph convolution network to refine the embeddings from the coarsest graph to the original graph – learning a graph convolution network allows us to compute a refinement procedure that levers the dependencies inherent to the graph structure and the embedding method of choice. To summarize, we find that:
• MILE is generalizable: Our MILE framework is agnostic to the underlying graph embedding techniques and treats them as black boxes.
• MILE is scalable: MILE can significantly improve the scalability of the embedding methods (up to 30-fold) by reducing the running time and memory consumption.
• MILE generates high-quality embeddings: In many cases, we find that the quality of embeddings improves by levering MILE (in some cases in excess of 10%).

2 Related Work
Many techniques for graph or network embedding have been proposed in recent years. DeepWalk and Node2Vec generate truncated random walks on graphs and apply the Skip-Gram model by treating the walks as sentences (Perozzi et al., 2014; Grover & Leskovec, 2016). LINE learns the node embeddings by preserving the first-order and second-order proximities (Tang et al., 2015). Following LINE, SDNE leverages deep neural networks to capture the highly non-linear structure (Wang et al., 2016). Other methods construct a particular objective matrix and use matrix factorization techniques to generate embeddings, e.g., GraRep (Cao et al., 2015) and NetMF (Qiu et al., 2018). This has also led to the proliferation of network embedding methods for information-rich graphs, including heterogeneous information networks (Chang et al., 2015; Dong et al., 2017) and attributed graphs (Pan et al., 2016; Liang et al., 2018; Yang et al., 2015; Kipf & Welling, 2017). On the other hand, there are very few efforts focusing on the scalability of network embedding (Yang et al., 2017; Huang et al., 2017). First, such efforts are specific to a particular embedding strategy and do not generalize. Second, the scalability of such efforts is limited to moderately sized datasets. Finally, and notably, these efforts at scalability are actually orthogonal to our strategy and can potentially be employed along with our efforts to afford even greater speedup. The closest work to this paper is the very recently proposed HARP (Chen et al., 2018), which proposes a hierarchical paradigm for graph embedding based on iterative learning methods (e.g., DeepWalk and Node2Vec). However, HARP focuses on improving the quality of embeddings by using the learned embeddings from the previous level as the initialized embeddings for the next level, which introduces a huge computational overhead. Moreover, it is not immediately obvious how a HARP-like methodology would be extended to other graph embedding techniques (e.g., GraRep and NetMF) in an agnostic manner, since such an approach would necessarily require one to modify the embedding methods to preset their initialized embeddings. In this paper, we focus on designing a general-purpose framework to scale up embedding methods treating them as black boxes.

3 Problem Formulation
Let G = (V, E) be the input graph, where V and E are respectively the node set and edge set.
Let A be the adjacency matrix of the graph, and we assume G is undirected, though our problem can be easily extended (Chung, 2005; Gleich, 2006; Satuluri & Parthasarathy, 2011) to directed graphs. We first define graph embedding:

Definition 3.1 (Graph Embedding) Given a graph G = (V, E) and a dimensionality d (d ≪ |V|), the problem of graph embedding is to learn a d-dimensional vector representation for each node in G so that graph properties are best preserved.

Following this, a graph embedding method is essentially a mapping function $f: \mathbb{R}^{|V| \times |V|} \mapsto \mathbb{R}^{|V| \times d}$, whose input is the adjacency matrix A (or G) and whose output is a lower-dimensional matrix. Motivated by the fact that the majority of graph embedding methods cannot scale to large datasets, we seek to speed up existing graph embedding methods without sacrificing quality. We formulate the problem as: Given a graph G = (V, E) and a graph embedding method f(·), we aim to realize a strengthened graph embedding method f̂(·) so that it is more scalable than f(·) while generating embeddings of comparable or even better quality.

4 Methodology
The MILE framework consists of three key phases: graph coarsening, base embedding, and embeddings refinement. Figure 1a shows the overview.

4.1 Graph Coarsening
In this phase, the input graph G (or G0) is repeatedly coarsened into a series of smaller graphs G1, G2, ..., Gm such that |V0| > |V1| > ... > |Vm|. In order to coarsen a graph from Gi to Gi+1, multiple nodes in Gi are collapsed to form super-nodes in Gi+1, and the edges incident on a super-node are the union of the edges on the original nodes in Gi. Here the set of nodes forming a super-node is called a matching. We propose a hybrid matching technique containing two matching strategies that can efficiently coarsen the graph while retaining the global structure. An example is shown in Figure 2.

Structural Equivalence Matching (SEM): Given two vertices u and v in an unweighted graph G, we call them structurally equivalent if they are incident on the same set of neighbors. In Figure 2a, nodes D and E are structurally equivalent. The intuition behind matching structurally equivalent nodes is that if two vertices are structurally equivalent, then their node embeddings will be similar.

Normalized Heavy Edge Matching (NHEM): Heavy edge matching is a popular matching method for graph coarsening (Karypis & Kumar, 1998). For an unmatched node u in Gi, its heavy edge matching is a pair of vertices (u, v) such that the weight of the edge between u and v is the largest. In this paper, we propose to normalize the edge weights when applying heavy edge matching using the following formula:
\[
W_i(u, v) = \frac{A_i(u, v)}{\sqrt{D_i(u, u) \cdot D_i(v, v)}}. \quad (1)
\]
Here, the weight of an edge is normalized by the degrees of the two vertices on which the edge is incident. Intuitively, it penalizes the weights of edges connected to high-degree nodes. As we will show in Sec. 4.3, this normalization is tightly connected with the graph convolution kernel.

Hybrid Matching Method: We use a hybrid of the two matching methods above for graph coarsening. To construct Gi+1 from Gi, we first find all the structural equivalence matchings (SEM) M1, where Gi is treated as an unweighted graph. This is followed by a search for the normalized heavy edge matchings (NHEM) M2 on Gi. Nodes in each matching are then collapsed into a super-node in Gi+1. Note that some nodes might not be matched at all; they are directly copied to Gi+1. Formally, we build the adjacency matrix Ai+1 of Gi+1 through matrix operations.
To this end, we define the matching matrix storing the matching information from graph Gi to Gi+1 as a binary matrix $M_{i,i+1} \in \{0, 1\}^{|V_i| \times |V_{i+1}|}$. The r-th row and c-th column of $M_{i,i+1}$ is set to 1 if node r in Gi will be collapsed to super-node c in Gi+1, and is set to 0 otherwise. Each column of $M_{i,i+1}$ represents a matching, with the 1s marking the nodes in it. Each unmatched vertex appears as an individual column in $M_{i,i+1}$ with merely one entry set to 1. Following this formulation, we construct the adjacency matrix of Gi+1 using
\[
A_{i+1} = M_{i,i+1}^{T} A_i M_{i,i+1}. \quad (2)
\]

4.2 Base Embedding on Coarsened Graph
The size of the graph reduces drastically after each iteration of coarsening, halving the size of the graph in the best case. We coarsen the graph for m iterations and apply the graph embedding method f(·) on the coarsest graph Gm. Denoting the embeddings on Gm as Em, we have $E_m = f(G_m)$. Since our framework is agnostic to the adopted graph embedding method, we can use any graph embedding algorithm for base embedding.

4.3 Refinement of Embeddings
The final phase of MILE is the embeddings refinement phase. Given a series of coarsened graphs G0, G1, G2, ..., Gm, their corresponding matching matrices M0,1, M1,2, ..., Mm−1,m, and the node embeddings Em on Gm, we seek to develop an approach to derive the node embeddings of G0 from Gm. To this end, we first study an easier subtask: given a graph Gi, its coarsened graph Gi+1, the matching matrix Mi,i+1, and the node embeddings Ei+1 on Gi+1, how do we infer the embeddings Ei on graph Gi? Once we solve this subtask, we can iteratively apply the technique on each pair of consecutive graphs from Gm to G0 and eventually derive the node embeddings on G0. In this work, we propose to use a graph-based neural network model to perform embeddings refinement.

Graph Convolution Network for Refinement Learning: Since we know the matching information between the two consecutive graphs Gi and Gi+1, we can easily project the node embeddings from the coarse-grained graph Gi+1 to the fine-grained graph Gi using
\[
E_i^p = M_{i,i+1} E_{i+1}. \quad (3)
\]
In this case, the embedding of a super-node is directly copied to its original node(s). We call $E_i^p$ the projected embeddings from Gi+1 to Gi, or simply the projected embeddings without ambiguity. While this simple projection maintains some information of the node embeddings, it has the obvious limitation that nodes will share the same embeddings if they are matched and collapsed into a super-node during the coarsening phase. This problem becomes more serious when the embedding refinement is performed iteratively from Gm, ..., G0. To address this issue, we propose to use a graph convolution network for embedding refinement. Specifically, we design a graph-based neural network model $E_i = R(E_i^p, A_i)$, which derives the embeddings Ei on graph Gi based on the projected embeddings $E_i^p$ and the graph adjacency matrix Ai. Given a graph G with adjacency matrix A, we consider the fast approximation of graph convolution from (Kipf & Welling, 2017). The k-th layer of this neural network model is
\[
H^{(k)}(X, A) = \sigma\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(k-1)}(X, A)\, \Theta^{(k)} \right), \quad (4)
\]
where σ(·) is an activation function, $\Theta^{(k)}$ is a layer-specific trainable weight matrix, and $H^{(0)}(X, A) = X$. In this paper, we define our embedding refinement model as an l-layer graph convolution model:
\[
E_i = R(E_i^p, A_i) \equiv H^{(l)}(E_i^p, A_i). \quad (5)
\]
The architecture of the refinement model is shown in Figure 1b.
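To make Eqs. 2–5 concrete, here is a minimal sketch of one coarsening step and one refinement step using SciPy sparse matrices. The function and variable names are illustrative rather than taken from the released MILE code, and the self-loop weight lam corresponds to the λ setting reported in Appendix A.1.3.

import numpy as np
import scipy.sparse as sp

def coarsen_adjacency(A, M):
    # Eq. 2: A_{i+1} = M^T A_i M, where M is the |V_i| x |V_{i+1}| matching matrix.
    return (M.T @ A @ M).tocsr()

def refine_embeddings(E_next, M, A, thetas, lam=0.05, act=np.tanh):
    # Eq. 3: project embeddings from G_{i+1} back to G_i.
    H = M @ E_next
    # Build the re-normalized adjacency D^{-1/2} (A + lam*I) D^{-1/2} of G_i.
    A_tilde = A + lam * sp.eye(A.shape[0])
    deg = np.asarray(A_tilde.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    # Eqs. 4-5: l layers of graph convolution with learned weights Theta^(k).
    for theta in thetas:
        H = act(A_hat @ (H @ theta))
    return H

In MILE, the weights Θ(k) are learned once on the coarsest graph and then reused at every level, as described next.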
The intuition behind this refinement model is to integrate the structural information of the current graph Gi into the projected embeddings $E_i^p$ by repeatedly performing the spectral graph convolution. Each layer of the graph convolution network in Eq. 4 can be regarded as one iteration of embedding propagation in the graph following the re-normalized adjacency matrix $\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$. Note that this re-normalized matrix is well aligned with the way we conduct normalized heavy edge matching in Eq. 1. We next discuss how the weight matrix $\Theta^{(k)}$ is learned.

Intricacies of Refinement Learning: The learning of the refinement model is essentially learning $\Theta^{(k)}$ for each k ∈ [1, l] according to Eq. 4. Here we study how to design the learning task and construct the loss function. Since the graph convolution model $H^{(l)}(\cdot)$ aims to predict the embeddings Ei on graph Gi, we could directly run a base embedding on Gi to generate the "ground-truth" embeddings and use the difference between these embeddings and the predicted ones as the loss function for training. We propose to learn $\Theta^{(k)}$ on the coarsest graph and reuse them across all the levels for refinement. Specifically, we can define the loss function as the mean square error:
\[
L = \frac{1}{|V_m|} \left\| E_m - H^{(l)}\left(M_{m,m+1} E_{m+1}, A_m\right) \right\|^2. \quad (6)
\]
We refer to the learning task associated with the above loss function as double-base embedding learning. We point out, however, that there are two key drawbacks to this method. First of all, the above loss function requires one more level of coarsening to construct Gm+1 and an extra base embedding on Gm+1. These two steps, especially the latter, introduce non-negligible overheads to the MILE framework, which contradicts our motivation of scaling up graph embedding. More importantly, Em might not be a desirable "ground truth" for the refined embeddings. This is because most embedding methods are invariant to an orthogonal transformation of the embeddings, i.e., the embeddings can be rotated by an arbitrary orthogonal matrix (Hamilton et al., 2017). In other words, the embedding spaces of graphs Gm and Gm+1 can be totally different since the two base embeddings are learned independently. Even if we follow the paradigm in (Chen et al., 2018) and conduct base embedding on Gm using the simple projected embeddings from Gm+1 ($E_m^p$) as initialization, the embedding space does not naturally generalize and can drift during re-training. One possible solution is to use an alignment procedure to force the embeddings to be aligned between the two graphs (Hamilton et al., 2016), but it could be very expensive. In this paper, we propose a very simple method to address the above issues. Instead of conducting an additional level of coarsening, we construct a dummy coarsened graph by simply copying Gm, i.e., Mm,m+1 = I and Gm+1 = Gm. By doing this, we not only save one iteration of graph coarsening, but also avoid performing base embedding on Gm+1, simply because Em+1 = Em. Moreover, the embeddings of Gm and Gm+1 are guaranteed to be in the same space in this case, without any drift. With this strategy, we change the loss function for model learning to
\[
L = \frac{1}{|V_m|} \left\| E_m - H^{(l)}\left(E_m, A_m\right) \right\|^2. \quad (7)
\]
With the above loss function, we adopt gradient descent with back-propagation to learn the parameters $\Theta^{(k)}$, k ∈ [1, l]. In the subsequent refinement steps, we apply the same set of parameters $\Theta^{(k)}$ to infer the refined embeddings. We point out that the training of the refinement model is rather efficient as it is done on the coarsest graph.
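A minimal sketch of this training procedure in PyTorch (E_m and the re-normalized adjacency A_hat_m are assumed to be given as dense tensors; the function name is illustrative and the hyper-parameter defaults follow Appendix A.1.3):

import torch

def train_refinement(E_m, A_hat_m, n_layers=2, epochs=200, lr=1e-3):
    d = E_m.shape[1]
    thetas = [torch.nn.Parameter(0.01 * torch.randn(d, d)) for _ in range(n_layers)]
    opt = torch.optim.Adam(thetas, lr=lr)
    for _ in range(epochs):
        H = E_m
        for theta in thetas:
            # One layer of Eq. 4 on the dummy coarsened graph, where M_{m,m+1} = I.
            H = torch.tanh(A_hat_m @ (H @ theta))
        # Eq. 7: mean squared error against the base embeddings E_m.
        loss = ((E_m - H) ** 2).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return [theta.detach() for theta in thetas]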
The embeddings refinement process involves merely sparse matrix multiplications using Eq. 5 and is relatively affordable compared to conducting embedding on the original graph. With these different components, we summarize the whole algorithm of our MILE framework in Algorithm 1. The time complexity of the algorithm is discussed in Section A.2 of the appendix.

Algorithm 1 Multi-Level Algorithm for Graph Embedding
Input: An input graph G0 = (V0, E0), the number of coarsening levels m, and a base embedding method f(·).
Output: Graph embeddings E0 on G0.
1: Coarsen G0 into G1, G2, ..., Gm using the proposed hybrid matching method.
2: Perform base embedding on the coarsest graph Gm (see Section 4.2).
3: Learn the weights Θ(k) using the loss function in Eq. 7.
4: for i = (m − 1) ... 0 do
5:     Compute the projected embeddings E^p_i on Gi.
6:     Use Eq. 4 and Eq. 5 to compute refined embeddings Ei.
7: Return graph embeddings E0 on G0.

5 Experiments and Analysis

5.1 Experimental Configuration
The datasets used in our experiments are shown in Table 1. The Yelp dataset was preprocessed by us following procedures similar to those in (Huang et al., 2017) (raw data: https://www.yelp.com/dataset_challenge/dataset). To demonstrate that MILE can work with different graph embedding methods, we explore several popular methods for graph embedding, namely DeepWalk (Perozzi et al., 2014), Node2Vec (Grover & Leskovec, 2016), Line (Tang et al., 2015), GraRep (Cao et al., 2015) and NetMF (Qiu et al., 2018). To evaluate the quality of the embeddings, we follow the typical method in existing work and perform multi-label node classification (Perozzi et al., 2014; Grover & Leskovec, 2016).

Table 1: Dataset statistics (# nodes, # edges, # classes).

5.2 MILE Framework Performance
We first evaluate the performance of our MILE framework when applied to different graph embedding methods. Figure 3 summarizes the performance of MILE on different datasets with various base embedding methods at various coarsening levels (exact numbers can be seen in Table 3 of the Appendix; we discuss the results on Yelp later). Note that m = 0 corresponds to the original embedding method. We make the following observations:
• MILE is scalable. MILE greatly boosts the speed of the explored embedding methods. With a single level of coarsening (m = 1), we are able to achieve speedups ranging from 1.5× to 3.4× (on PPI, Blog, and Flickr) while improving qualitative performance. Larger speedups are typically observed on GraRep and NetMF. Increasing the coarsening level m to 2, the speedup increases further (up to 14.4×), while the quality of the embeddings remains comparable with the original methods as reflected by Micro-F1. On YouTube, for coarsening levels 6 and 8, we observe more than 10× speedup for DeepWalk, Node2Vec and LINE. For NetMF on YouTube, the speedup is even larger – the original NetMF runs out of memory within 9.5 hours, while MILE (NetMF) only takes around 20 minutes (m = 8).
• MILE improves quality. For the smaller coarsening levels across all the datasets and methods, MILE-enhanced embeddings almost always offer a qualitative improvement over
the original embedding method as evaluated by the Micro-F1 score (as high as 24.2%, while many others also show a 10%+ increase). Examples include MILE (DeepWalk, m = 1) on Blog/PPI, MILE (Line, m = 1) on PPI, and MILE (NetMF, m = 1) on PPI/Blog/Flickr. Even with a higher number of coarsening levels (m = 2 for PPI/Blog/Flickr; m = 6, 8 for YouTube), MILE, in addition to being much faster, can still improve qualitatively over the original methods on most of the datasets, e.g., MILE (NetMF, m = 2) outperforms NetMF on PPI, Blog, and Flickr. We conjecture the observed improvement in quality is because the embeddings begin to rely on a more holistic view of the graph.

Figure 3: Performance of MILE with different base embedding methods (DeepWalk, Node2Vec, Line, GraRep, NetMF) as the number of coarsening levels varies; panels report Micro-F1 on (a) PPI, (b) Blog, (c) Flickr, and (d) YouTube.

• MILE supports multiple embedding strategies. We make some embedding-specific observations here. We observe that MILE consistently improves both the quality and the efficiency of NetMF on all four datasets (for YouTube, NetMF runs out of memory). For the largest dataset, the speedups afforded exceed 30-fold. We observe that for GraRep, while speedups with MILE are consistently observed, the qualitative improvements, if any, are smaller (for both YouTube and Flickr, the base method runs out of memory). For Line, even though its time complexity is linear in the number of edges (Tang et al., 2015), applying the MILE framework on top of it still generates a significant speed-up (likely due to the fact that the complexity of Line contains a larger constant factor k than MILE). On the other hand, MILE on top of Line generates better quality embeddings on PPI and YouTube while falling a bit short on Blog and Flickr. For DeepWalk and Node2Vec, we again observe consistent improvements in scalability (up to 11-fold on the larger datasets) as well as quality using MILE with a few levels of coarsening. However, when the coarsening level is increased, the additional speedup afforded (up to 17-fold) comes at a mixed cost to quality (Micro-F1 drops slightly).
• Impact of varying coarsening levels on MILE. When the coarsening level m is small, MILE tends to significantly improve the quality of embeddings while taking much less time. From m = 0 to m = 1, we see a clear jump in the Micro-F1 score on all the datasets across the base embedding methods. This observation is more evident on larger datasets (Flickr and YouTube). On YouTube, MILE (DeepWalk) with m = 1 increases the Micro-F1 score by 5.3% while only consuming half of the time of the original DeepWalk. MILE (DeepWalk) continues to generate embeddings of better quality than DeepWalk until m = 7, where the speedup is 13×. As the coarsening level m in MILE increases, the running time drops dramatically while the quality of embeddings only decreases slightly.
The running time decreases at an almost exponential rate (logarithmic scale on the y-axis in the second row of Figure 3). On the other hand, the Micro-F1 score descends much more slowly (first row of Figure 3), and most of the resulting scores are still better than those of the original methods. This shows that MILE not only consolidates the existing embedding methods, but also provides a nice trade-off between effectiveness and efficiency.

5.3 Comparing MILE with HARP
HARP is a multi-level method primarily for improving the quality of graph embeddings. We compare HARP with our MILE framework using DeepWalk and Node2Vec as the base embedding methods (HARP code: https://github.com/GTmac/HARP). Table 2 shows the performance of these two methods on the four datasets (the coarsening level is 1 on PPI/Blog/Flickr and 6 on YouTube).

Table 2: MILE vs. HARP (Micro-F1 and running time).

                   PPI               Blog
                   Mi-F1   Time      Mi-F1   Time
DeepWalk (DW)      23.0    2.4       37.0    8.0
MILE (DW)          25.6    1.2       42.9    4.6
HARP (DW)          24.1    3.0       41.3    9.8
Node2Vec (NV)      24.3    4.0       39.1    13.0
MILE (NV)          25.9    1.7       42.8    6.9
HARP (NV)          22.3    3.9       36.2    13.16

                   Flickr            YouTube
                   Mi-F1   Time      Mi-F1   Time
DeepWalk           40.0    50.0      45.2    604.8
MILE (DW)          40.4    34.4      46.1    55.2
HARP (DW)          40.6    78.2      46.6    1727.7
Node2Vec           40.5    78.2      45.5    951.2
MILE (NV)          40.7    50.5      46.3    83.5
HARP (NV)          40.5    101.1     47.2    1981.3

From the table we can observe that MILE generates embeddings of comparable quality with HARP. MILE performs much better than HARP on PPI and Blog, marginally better on Flickr, and marginally worse on YouTube. However, MILE is significantly faster than HARP on all four datasets (e.g., on YouTube, MILE affords a 31× speedup). This is because HARP requires running the whole embedding algorithm on each coarsened graph, which introduces a huge computational overhead. Note that for PPI and Blog, MILE with NetMF (not shown) as its base embedding produces the best Micro-F1 of 26.9 and 43.8, respectively. This shows another advantage of MILE: it is agnostic to the base embedding method, unlike HARP.

5.4 MILE: Large Graph Embedding
We now explore the scalability of MILE on the large Yelp dataset. None of the five graph embedding methods studied in this paper can successfully conduct graph embedding on Yelp within 60 hours on a modern machine with 28 cores and 128 GB RAM. Even extending the run-time deadline to 100 hours, we see DeepWalk and Line barely finish. Leveraging the proposed MILE framework now makes it much easier to perform graph embedding on datasets of this scale (see Figure 4 for the results).

Figure 4: Running MILE on the Yelp dataset (Micro-F1 and running time in minutes vs. number of coarsening levels, for all five base embedding methods).

We observe that MILE significantly reduces the running time and improves the Micro-F1 score. For example, the Micro-F1 scores of the original DeepWalk and Line are 0.640 and 0.625 respectively, and both take more than 80 hours. But using MILE with m = 4, the Micro-F1 score improves to 0.643 (DeepWalk) and 0.642 (Line) while achieving speedups of around 1.6×. Moreover, MILE reduces the running time of DeepWalk from 53 hours (coarsening level 4) to 2 hours (coarsening level 22) while reducing the Micro-F1 score by just 1% (from 0.643 to 0.634). Meanwhile, there is no change in the Micro-F1 score from coarsening level 4 to 10, where the running time is improved by a factor of two. These results affirm the power of the proposed MILE framework in scaling up graph embedding algorithms while generating quality embeddings.
6 Conclusion
In this work, we propose a novel multi-level embedding (MILE) framework to scale up graph embedding techniques without modifying them. Our framework incorporates existing embedding techniques as black boxes, and significantly improves the scalability of extant methods by reducing both the running time and memory consumption. Additionally, MILE also provides a lift in the quality of node embeddings in most cases. A fundamental contribution of MILE is its ability to learn a refinement strategy that depends on both the underlying graph properties and the embedding method in use. In the future, we plan to generalize MILE to information-rich graphs and to employ MILE for more applications.

A Appendix

A.1 Experimental Configuration Details

A.1.1 Datasets
The details about the datasets used in our experiments are:
• PPI is a Protein-Protein Interaction graph constructed based on the interplay activity between proteins of Homo Sapiens, where the labels represent biological states.
• Blog is a network of social relationships of bloggers on BlogCatalog, where the labels indicate the interests of the bloggers.
• Flickr is a social network of the contacts between users on flickr.com, with labels denoting interest groups.
• YouTube is a social network between users on YouTube, where labels represent genres of groups subscribed to by users.
• Yelp is a social network of friends on Yelp, where labels indicate the business categories on which the users review.

A.1.2 Baseline Methods
To demonstrate that MILE can work with different graph embedding methods, we explore several popular methods for graph embedding.
• DeepWalk (DW) (Perozzi et al., 2014): Following the original work (Perozzi et al., 2014), we set the length of random walks to 80, the number of walks per node to 10, and the context window size to 10.
• Node2Vec (NV) (Grover & Leskovec, 2016): We use the same settings as DeepWalk for the common hyper-parameters, while setting p = 4.0 and q = 1.0, which we found empirically to generate better results across all the datasets.
• Line (LN) (Tang et al., 2015): This method aims at preserving first-order and second-order proximities and has been applied to large-scale graphs. We learn the first-order and second-order embeddings respectively and concatenate them into a unified embedding.
• GraRep (GR) (Cao et al., 2015): This method considers different powers (up to k) of the adjacency matrix to preserve higher-order graph proximity for graph embedding. It uses SVD decomposition to generate the low-dimensional representation of nodes. We set k = 4 as suggested in the original work.
• NetMF (NM) (Qiu et al., 2018): It is a recent effort that supports graph embedding via matrix factorization. We set the window size to 10 and the rank h to 1024, and lever the approximate version, as suggested and reported by the authors.

A.1.3 MILE-specific Settings
For all the above base embedding methods, we set the embedding dimensionality d to 128. When applying our MILE framework, we vary the coarsening level m from 1 to 10 whenever possible. For the graph convolution network model, the self-loop weight λ is set to 0.05, the number of hidden layers l is 2, tanh(·) is used as the activation function, the learning rate is set to 0.001, and the number of training epochs is 200. The Adam optimizer is used for model training.
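For reference, these settings can be collected into a single illustrative configuration (the key names are hypothetical and do not correspond to the released code):

MILE_CONFIG = {
    "embed_dim": 128,          # d, shared across all base embedding methods
    "coarsen_level": 1,        # m, varied from 1 to 10 in our experiments
    "self_loop_weight": 0.05,  # lambda in the refinement model
    "gcn_layers": 2,           # l, number of hidden layers
    "activation": "tanh",
    "learning_rate": 0.001,
    "epochs": 200,
    "optimizer": "adam",
}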
A.1.4 System Specification
The experiments were conducted on a machine running Linux with an Intel Xeon E5-2680 CPU (28 cores, 2.40 GHz) and 128 GB of RAM. We implement our MILE framework in Python; our code and data will be made available for replicability. For all five base embedding methods, we adapt the original code from the authors (DeepWalk: https://github.com/phanein/deepwalk; Node2Vec: http://snap.stanford.edu/node2vec/; Line: https://github.com/tangjianpku/LINE; GraRep: https://github.com/thunlp/OpenNE; NetMF: https://github.com/xptree/NetMF). We additionally use the TensorFlow package for the embeddings refinement learning component. We lever the available parallelism (on 28 cores) for each method (e.g., the generation of random walks in DeepWalk and Node2Vec, the training of the refinement model in MILE, etc.).

A.1.5 Evaluation Metrics
To evaluate the quality of the embeddings, we follow the typical method in existing work and perform multi-label node classification (Perozzi et al., 2014; Grover & Leskovec, 2016). Specifically, after the graph embeddings are learned for nodes (labels are not used for this part), we run a 10-fold cross validation using the embeddings as features and report the average Micro-F1 and average Macro-F1. We also record the end-to-end wall-clock time consumed by each method for scalability comparisons.

A.2 Time Complexity
It is non-trivial to derive the exact time complexity of MILE, as it depends on the graph structure, the chosen base embedding method, and the convergence rate of the GCN model training. Here, we provide a rough estimate. For simplicity, we assume the number of vertices and the number of edges are reduced by factors α and β respectively at each step of coarsening (α > 1.0 and β > 1.0), i.e., $V_i = \frac{1}{\alpha} V_{i-1}$ and $E_i = \frac{1}{\beta} E_{i-1}$ (empirically, we found α and β to be in the range [1.5, 2.0]). With m levels of coarsening, the coarsening complexity is approximately $O\!\left(\frac{1 - 1/\beta^m}{1 - 1/\beta} \times E\right)$, and since $1/\beta^m$ is small, this reduces to $O\!\left(\frac{\beta}{\beta - 1} \times E\right)$. For the base embedding phase, if the embedding algorithm has time complexity T(V, E), the complexity of the base embedding phase is $T\!\left(\frac{V}{\alpha^m}, \frac{E}{\beta^m}\right)$. For the refinement phase, the time complexity can be divided into two parts: the GCN model training and the embedding inference applying the GCN model. The former has similar complexity to the original GCN and can be denoted as $O\!\left(k_1 \cdot \frac{E}{\beta^m}\right)$ (Kipf & Welling, 2017), where $k_1$ is a small constant related to the embedding dimensionality and the number of training epochs. The embedding inference part is simply sparse matrix multiplication using Eq. 4, with time complexity $O(k_2 \cdot E_i)$ when refining the embeddings on graph $G_i$, where $k_2$ is an even smaller constant ($k_2 < k_1$). As a result, the time complexity of the whole refinement phase is $O\!\left(k_1 \cdot \frac{E}{\beta^m} + k_2 \cdot \left(E + \frac{E}{\beta} + \cdots + \frac{E}{\beta^{m-1}}\right)\right) \approx O(k_3 \cdot E)$, where $k_3$ is a small constant. Overall, for an embedding algorithm of time complexity T(V, E), the MILE framework reduces it to $T\!\left(\frac{V}{\alpha^m}, \frac{E}{\beta^m}\right) + O(k \cdot E)$. This is a significant improvement considering T(V, E) is usually very large. The reduction in time complexity is attributed to the fact that we run the embedding learning and refinement model training on the coarsest graph. In addition, the overhead introduced by the coarsening phase and recursive embedding refinement is relatively small (linear in the number of edges E). Note that the constant factor k in the complexity term is usually small; we empirically found it to be on the scale of tens.
Because of this, even when the complexity of the original embedding algorithm is linear in E, our MILE framework can still potentially speed up the embedding process, because the complexity of MILE contains a smaller constant factor k (see Sec. 5.2 for the experiment applying MILE to Line). Furthermore, it is worth noting that many existing embedding strategies involve hyper-parameter tuning for best performance, especially those based on neural networks (e.g., DeepWalk, Node2Vec, etc.). This in turn requires the algorithm to be run repeatedly – hence any savings in runtime from applying MILE are magnified across multiple runs of the algorithm with different hyper-parameter settings.

A.3 MILE Performance
The detailed information about the performance evaluation is available in Table 3, which also reports the speedup compared to the original method. "N/A" indicates that the method runs out of memory, and we show the amount of running time spent when that happens.

A.4 MILE Drilldown: Design Choices
We now study the role of the design choices we make within the MILE framework related to the coarsening and refinement procedures described. To this end, we examine alternative design choices and systematically examine their performance. The alternatives we consider are:
• Random Matching (MILE-rm): For each iteration of coarsening, we repeatedly pick a random pair of connected nodes as a match and merge them into a super-node until no more matchings can be found. The rest of the algorithm is the same as our MILE.
• Simple Projection (MILE-proj): We replace our embedding refinement model with a simple projection method. In other words, we directly copy the embedding of a super-node to its original node(s) without any refinement (see Eq. 3).
• Averaging Neighborhoods (MILE-avg): For this baseline method, the refined embedding of each node is a weighted average of the node embeddings of its neighborhood (weighted by the edge weights). This can be regarded as an embeddings propagation method. We add a self-loop to each node (self-loop weights are tuned to the best performance) and conduct the embeddings propagation for two rounds.
• Untrained Refinement Model (MILE-untr): Instead of training the refinement model to minimize the loss defined in Eq. 7, this baseline merely uses a fixed set of values for the parameters Θ(k) without training (values are randomly generated; other parts of the model in Eq. 4 are the same, including Ã and D̃).
• Double-base Embedding for Refinement Training (MILE-2base): This method replaces the loss function in Eq. 7 with the alternative one in Eq. 6 for model training. It conducts one more level of coarsening and base embedding (level m + 1), from which the embeddings are projected to level m and used as the input for model training.
• GraphSAGE as Refinement Model (MILE-gs): It replaces the graph convolution network in our refinement method with GraphSAGE (Hamilton et al., 2017) (code adapted from https://github.com/williamleif/GraphSAGE). We choose max-pooling for aggregation and set the number of sampled neighbors to 100, as suggested by the authors. Also, concatenation is conducted instead of replacement during the process of propagation.

Table 4 shows the comparison of performance of these methods across the four datasets.
Here, we focus on using DeepWalk and NetMF for base embedding with a smaller coarsening level (m = 1 for PPI, Blog, and Flickr; m = 6 for YouTube). Results are similar for the other embedding options we consider. We hereby summarize the key information derived from Table 4 as follows:
• The matching methods used within MILE offer a qualitative benefit at a minimal cost to execution time. Comparing MILE with MILE-rm on all the datasets, we can see that MILE generates better embeddings than MILE-rm using either DeepWalk or NetMF as the base embedding method. Though MILE-rm is slightly faster than MILE due to its random matching, its Micro-F1 score and Macro-F1 score are consistently lower than those of MILE.
• The graph convolution based refinement learning methodology in MILE is particularly effective. The simple projection-based MILE-proj performs significantly worse than MILE. The other two variants (MILE-avg and MILE-untr), which do not train the refinement model at all, also perform much worse than the proposed method. Note that MILE-untr is the same as MILE except that it uses a default set of parameters instead of learning those parameters. Clearly, the model learning part of our refinement method is a fundamental contributing factor to the effectiveness of MILE. Through training, the refinement model is tailored to the specific graph under the base embedding method in use. The overhead cost of this learning (comparing MILE with MILE-untr) can vary depending on the base embedding employed (for instance, on the YouTube dataset it is an insignificant 1.2% on DeepWalk, while being up to 20% on NetMF), but is still worth it due to the qualitative benefits (Micro-F1 up from 30.2 to 40.9 with NetMF on YouTube).
• Graph convolution refinement learning outperforms GraphSAGE. Replacing the graph convolution network with GraphSAGE for embeddings refinement, MILE-gs does not perform as well as MILE. It is also computationally more expensive, partially due to its reliance on embeddings concatenation, instead of replacement, during the process of embeddings propagation (higher model complexity).
• Double-base embedding learning is not effective. In Sec. 4.3, we discuss the issues with unaligned embeddings in the double-base embedding method for refinement model learning. The performance gap between MILE and MILE-2base in Table 4 provides empirical evidence supporting our argument. This gap is likely caused by the fact that the base embeddings of level m and level m + 1 might not lie in the same embedding space (rotated by some orthogonal matrix) (Hamilton et al., 2017). As a result, using the projected embeddings E^p_m as input for model training (MILE-2base) is not as good as directly using E_m (MILE). Moreover, Table 4 shows that the additional round of base embedding in MILE-2base introduces a non-trivial overhead. On YouTube, the running time of MILE-2base is 1.6 times that of MILE.

A.5 MILE Drilldown: Memory Consumption
We also study the impact of MILE on reducing memory consumption. For this purpose, we focus on MILE (GraRep) and MILE (NetMF), with GraRep and NetMF as base embedding methods respectively. Both of these are embedding methods based on matrix factorization, which possibly involves a dense objective matrix and can be rather memory expensive.
We do not explore DeepWalk and Node2Vec here, since their embedding learning methods generate truncated random walks (training data) on the fly with almost negligible memory consumption (compared to the space for storing the graph and the embeddings). Figure 5 shows the memory consumption of MILE (GraRep) and MILE (NetMF) as the coarsening level increases on Blog (results on other datasets are similar). We observe that MILE significantly reduces the memory consumption as the coarsening level increases. Even with one level of coarsening, the memory consumption of GraRep and NetMF reduces by 64% and 42% respectively. The dramatic reduction continues as the coarsening level increases until it reaches 4, at which point the memory consumption is mainly contributed by the storage of the graph and the embeddings. This memory reduction is consistent with our intuition, since both the number of rows and the number of columns in the objective matrix are almost halved with one level of coarsening.

A.6 MILE Drilldown: Discussion on Reusing Θ(k) Across All Levels
Similar to GCN, Θ(k) is a matrix of filter parameters and is of size d × d (where d is the embedding dimensionality). Eq. 4 in this paper defines how the embeddings are propagated during embedding refinement, parameterized by Θ(k). Intuitively, Θ(k) defines how different embedding dimensions interact with each other during the embedding propagation. This interaction depends on the graph structure and the base embedding method, and can be learned on the coarsest level. Ideally, we would like to learn the parameters Θ(k) on every two consecutive levels. But this is not practical, since it would be expensive as the graph gets more fine-grained (and would defeat our purpose of scaling up graph embedding). This trick of "sharing" parameters across different levels is a trade-off between efficiency and effectiveness. To some extent, it is similar to the original GCN (Kipf & Welling, 2017), where the authors share the same filter parameters Θ(k) over the whole graph (as opposed to using different Θ(k) for different nodes; see Eq. (6) and (7) in (Kipf & Welling, 2017)). Moreover, we empirically found this works well enough and is much more efficient. Table 4 shows that if we do not learn shared Θ(k) values and instead use random values for Θ(k) during refinement, the quality of the embeddings is much worse (see baseline MILE-untr).

A.7 MILE Drilldown: Discussion on Choice of Embedding Methods
We wish to point out that we chose the base embedding methods because they are either recently proposed (NetMF, introduced in 2018) or widely used (DeepWalk, Node2Vec, LINE). By showing the performance gain of using MILE on top of these methods, we want to ensure the contribution of this work is of broad interest to the community. We also want to reiterate that these methods are quite different in nature:
• DeepWalk (DW) and Node2Vec (N2V) rely on the use of random walks for latent representations of features.
• LINE learns an embedding that directly optimizes a carefully constructed objective function that preserves both first- and second-order proximity among nodes in the embedding space.
• GraRep constructs multiple objective matrices based on high orders of the random walk Laplacian, factorizes each objective matrix to generate embeddings, and then concatenates the generated embeddings to form the final embedding.
• NetMF constructs an objective matrix based on the random walk Laplacian and factorizes the objective matrix in order to generate the embeddings.
Indeed, NetMF (Qiu et al., 2018; Levy & Goldberg, 2014) with an appropriately constructed objective matrix has been shown to approximate DW, N2V and LINE, allowing such methods to be viewed as conducting implicit matrix factorization of approximated matrices. There are limitations to such approximations (shown in a related context by (Arora et al., 2016)) – the most important one is the requirement of a sufficiently large embedding dimensionality. Additionally, we note that while unification is possible under such a scenario, the methods based on matrix factorization are quite different from the original methods and do place a much larger premium on space (memory consumption) – in fact, this is evidenced by our inability to run NetMF and GraRep in many cases without incorporating them within MILE.

A.8 MILE Drilldown: Discussion on Extending MILE to Directed Graphs
Note that, as pointed out by (Chung, 2005), one can construct random-walk Laplacians for a directed graph, thus allowing approaches like NetMF to accommodate such graphs. Another simple solution is to symmetrize the graph while accounting for directionality. Once the graph is symmetrized, any of the embedding strategies we discuss can be employed within the MILE framework (including the coarsening technique). There are many ideas for the symmetrization of directed graphs (see, for example, the work described by (Gleich, 2006) or (Satuluri & Parthasarathy, 2011)).

A.9 MILE Drilldown: Discussion on Effectiveness of SEM
The effectiveness of structurally equivalent matching (SEM) is highly dependent on the graph structure, but in general 5% – 20% of nodes are structurally equivalent (most of which are low-degree nodes). For example, during the first level of coarsening, YouTube has 172,906 nodes (or 86,453 pairs) out of 1,134,890 nodes that are found to be SEM (around 15%); Yelp has 875,236 nodes (or 437,618 pairs) out of 8,938,630 nodes that are SEM (around 10%). In fact, more nodes become involved in SEM as SEM is run iteratively at each coarsening level.
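As a minimal sketch, structurally equivalent nodes can be found in one pass by hashing neighbor sets (illustrative only; the actual MILE implementation may differ):

from collections import defaultdict

def structural_equivalence_groups(adj):
    # adj: dict mapping each node to the set of its neighbors in an unweighted graph.
    groups = defaultdict(list)
    for node, neighbors in adj.items():
        # Nodes incident on exactly the same set of neighbors collide on this key.
        groups[frozenset(neighbors)].append(node)
    # Keep only groups with at least two structurally equivalent nodes.
    return [nodes for nodes in groups.values() if len(nodes) > 1]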
1. What is the main contribution of the paper in the field of network embedding?
2. How does the proposed method, MILE, improve the scalability of existing network embedding methods?
3. What are some strengths of the paper regarding its clarity, idea presentation, and ability to reduce computational cost?
4. Are there any limitations to the applicability of MILE across different types of network embedding methods?
5. How does the reviewer assess the overall impact of the paper in the rapidly growing field of network embedding?
Review
Review
This paper proposes a multi-level embedding (MILE) framework, which can be applied on top of existing network embedding methods and helps them scale to large networks with faster speed. To get the backbone structure of the graph, MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique, and a GCN is used for the refinement of embeddings.
[+] The paper is well-written and the idea is clearly presented.
[+] MILE is able to reduce computational cost while achieving comparable, or sometimes even better, embedding quality.
[+] MILE is general enough to apply to different underlying embedding strategies.
[-] Most of the baseline methods are of a similar type, since LINE, DeepWalk, node2vec and NetMF can all be unified under a matrix factorization framework. There have been many new network embedding methods proposed in the past two years. It would be interesting to see how much MILE can help scale these methods.
Overall, though there have already been hundreds of papers on network embedding in the past 2~3 years, I think this paper can be an interesting addition to this fast-growing area. Therefore, I would recommend to accept it.
ICLR
Title MILE: A Multi-Level Framework for Scalable Graph Embedding Abstract Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a graph convolution neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while generating embeddings of better quality, for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation. 1 Introduction In recent years, graph embedding has attracted much interest due to its broad applicability for various tasks (Perozzi et al., 2014; Wang et al., 2016; Henderson et al., 2012). However, such methods rarely scale to large datasets (e.g., graphs with over 1 million nodes) since they are computationally expensive and often memory intensive. For example, random-walkbased embedding techniques require a large amount of CPU time to generate a sufficient number of walks and train the embedding model. As another example, embedding methods based on matrix factorization, including GraRep (Cao et al., 2015) and NetMF (Qiu et al., 2018), requires constructing an enormous objective matrix (usually much denser than adjacency matrix), on which matrix factorization is performed. Even a medium-size graph with 100K nodes can easily require hundreds of GB of memory using those methods. On the other hand, many graph datasets in the real world tend to be large-scale with millions or even billions of nodes. To the best of our knowledge, none of the existing efforts examines how to scale up graph embedding in a generic way. We make the first attempt to close this gap. We are also interested in the related question of whether the quality of such embeddings can be improved along the way. Specifically, we ask: 1) Can we scale up the existing embedding techniques in an agnostic manner so that they can be directly applied to larger datasets? 2) Can the quality of such embedding methods be strengthened by incorporating the holistic view of the graph? To tackle these problems, we propose a MultI-Level Embedding (MILE) framework for graph embedding. 
Our approach relies on a three-step process: first, we repeatedly coarsen the original graph into smaller ones by employing a hybrid matching strategy; second, we compute the embeddings on the coarsest graph using an existing embedding techniques - and third, we propose a novel refinement model based on learning a graph convolution network to refine the embeddings from the coarsest graph to the original graph – learning a graph convolution network allows us to compute a refinement procedure that levers the dependencies inherent to the graph structure and the embedding method of choice. To summarize, we find that: • MILE is generalizable : Our MILE framework is agnostic to the underlying graph embedding techniques and treats them as black boxes. • MILE is scalable : MILE can significantly improve the scalability of the embedding methods (up to 30-fold), by reducing the running time and memory consumption. • MILE generates high-quality embeddings : In many cases, we find that the quality of embeddings improves by levering MILE (in some cases is in excess of 10%). 2 Related Work Many techniques for graph or network embedding have been proposed in recent years. DeepWalk and Node2Vec generate truncated random walks on graphs and apply the Skip Gram by treating the walks as sentences (Perozzi et al., 2014; Grover & Leskovec, 2016). LINE learns the node embeddings by preserving the first-order and second-order proximities (Tang et al., 2015). Following LINE, SDNE leverages deep neural networks to capture the highly non-linear structure (Wang et al., 2016). Other methods construct a particular objective matrix and use matrix factorization techniques to generate embeddings, e.g., GraRep (Cao et al., 2015) and NetMF (Qiu et al., 2018). This also led to the proliferation of network embedding methods for information-rich graphs, including heterogeneous information networks (Chang et al., 2015; Dong et al., 2017) and attributed graphs (Pan et al., 2016; Liang et al., 2018; Yang et al., 2015; Kipf & Welling, 2017). On the other hand, there are very few efforts, focusing on the scalability of network embedding (Yang et al., 2017; Huang et al., 2017). First, such efforts are specific to a particular embedding strategy and do not generalize. Second, the scalability of such efforts is limited to moderately sized datasets. Finally, and notably, these efforts at scalability are actually orthogonal to our strategy and can potentially be employed along with our efforts to afford even greater speedup. The closest work to this paper is the very recently proposed HARP (Chen et al., 2018), which proposes a hierarchical paradigm for graph embedding based on iterative learning methods (e.g., DeepWalk and Node2Vec). However, HARP focuses on improving the quality of embeddings by using the learned embeddings from the previous level as the initialized embeddings for the next level, which introduces a huge computational overhead. Moreover, it is not immediately obvious how a HARP like methodology would be extended to other graph embedding techniques (e.g., GraRep and NetMF) in an agnostic manner since such an approach would necessarily require one to modify the embedding methods to preset their initialized embeddings. In this paper, we focus on designing a general-purpose framework to scale up embedding methods treating them as black boxes. 3 Problem Formulation Let G = (V,E) be the input graph where V and E are respectively the node set and edge set. 
Let A be the adjacency matrix of the graph, and we assume G is undirected, though our problem can be easily extended (Chung, 2005; Gleich, 2006; Satuluri & Parthasarathy, 2011) to directed graphs. We first define graph embedding:

Definition 3.1 (Graph Embedding) Given a graph G = (V, E) and a dimensionality d ($d \ll |V|$), the problem of graph embedding is to learn a d-dimensional vector representation for each node in G so that graph properties are best preserved.

Following this, a graph embedding method is essentially a mapping function $f: \mathbb{R}^{|V| \times |V|} \mapsto \mathbb{R}^{|V| \times d}$, whose input is the adjacency matrix A (or G) and whose output is a lower-dimensional matrix. Motivated by the fact that the majority of graph embedding methods cannot scale to large datasets, we seek to speed up existing graph embedding methods without sacrificing quality. We formulate the problem as: Given a graph G = (V, E) and a graph embedding method f(·), we aim to realize a strengthened graph embedding method f̂(·) so that it is more scalable than f(·) while generating embeddings of comparable or even better quality.

4 Methodology

The MILE framework consists of three key phases: graph coarsening, base embedding, and embedding refinement. Figure 1a shows the overview.

4.1 Graph Coarsening

In this phase, the input graph G (or G0) is repeatedly coarsened into a series of smaller graphs G1, G2, ..., Gm such that |V0| > |V1| > ... > |Vm|. In order to coarsen a graph from Gi to Gi+1, multiple nodes in Gi are collapsed to form super-nodes in Gi+1, and the edges incident on a super-node are the union of the edges on the original nodes in Gi. Here the set of nodes forming a super-node is called a matching. We propose a hybrid matching technique containing two matching strategies that can efficiently coarsen the graph while retaining the global structure. An example is shown in Figure 2.

Structural Equivalence Matching (SEM): Given two vertices u and v in an unweighted graph G, we say they are structurally equivalent if they are incident on the same set of neighbors. In Figure 2a, nodes D and E are structurally equivalent. The intuition behind matching structurally equivalent nodes is that if two vertices are structurally equivalent, then their node embeddings will be similar.

Normalized Heavy Edge Matching (NHEM): Heavy edge matching is a popular matching method for graph coarsening (Karypis & Kumar, 1998). For an unmatched node u in Gi, its heavy edge matching is a pair of vertices (u, v) such that the weight of the edge between u and v is the largest. In this paper, we propose to normalize the edge weights when applying heavy edge matching using the following formula:

$$W_i(u, v) = \frac{A_i(u, v)}{\sqrt{D_i(u, u) \cdot D_i(v, v)}}. \quad (1)$$

Here, the weight of an edge is normalized by the degrees of the two vertices on which the edge is incident. Intuitively, it penalizes the weights of edges connected to high-degree nodes. As we will show in Sec. 4.3, this normalization is tightly connected with the graph convolution kernel.

Hybrid Matching Method: We use a hybrid of the two matching methods above for graph coarsening. To construct Gi+1 from Gi, we first find all the structural equivalence matchings (SEM) M1, where Gi is treated as an unweighted graph. This is followed by a search for the normalized heavy edge matchings (NHEM) M2 on Gi. Nodes in each matching are then collapsed into a super-node in Gi+1. Note that some nodes might not be matched at all; they will be directly copied to Gi+1. Formally, we build the adjacency matrix Ai+1 of Gi+1 through matrix operations.
To this end, we define the matching matrix storing the matching information from graph Gi to Gi+1 as a binary matrix $M_{i,i+1} \in \{0, 1\}^{|V_i| \times |V_{i+1}|}$. The entry in the r-th row and c-th column of $M_{i,i+1}$ is set to 1 if node r in Gi will be collapsed into super-node c in Gi+1, and is set to 0 otherwise. Each column of $M_{i,i+1}$ represents a matching, with the 1s marking the nodes in it. Each unmatched vertex appears as an individual column in $M_{i,i+1}$ with merely one entry set to 1. Following this formulation, we construct the adjacency matrix of Gi+1 as

$$A_{i+1} = M_{i,i+1}^{T} A_i M_{i,i+1}. \quad (2)$$

4.2 Base Embedding on Coarsened Graph

The size of the graph reduces drastically after each iteration of coarsening, halving the size of the graph in the best case. We coarsen the graph for m iterations and apply the graph embedding method f(·) on the coarsest graph Gm. Denoting the embeddings on Gm as Em, we have $E_m = f(G_m)$. Since our framework is agnostic to the adopted graph embedding method, we can use any graph embedding algorithm for base embedding.

4.3 Refinement of Embeddings

The final phase of MILE is the embedding refinement phase. Given a series of coarsened graphs G0, G1, G2, ..., Gm, their corresponding matching matrices M0,1, M1,2, ..., Mm−1,m, and the node embeddings Em on Gm, we seek to develop an approach to derive the node embeddings of G0 from Gm. To this end, we first study an easier subtask: given a graph Gi, its coarsened graph Gi+1, the matching matrix Mi,i+1, and the node embeddings Ei+1 on Gi+1, how do we infer the embeddings Ei on graph Gi? Once we solve this subtask, we can iteratively apply the technique on each pair of consecutive graphs from Gm to G0 and eventually derive the node embeddings on G0. In this work, we propose to use a graph-based neural network model to perform embedding refinement.

Graph Convolution Network for Refinement Learning: Since we know the matching information between the two consecutive graphs Gi and Gi+1, we can easily project the node embeddings from the coarse-grained graph Gi+1 to the fine-grained graph Gi using

$$E_i^p = M_{i,i+1} E_{i+1}. \quad (3)$$

In this case, the embedding of a super-node is directly copied to its original node(s). We call $E_i^p$ the projected embeddings from Gi+1 to Gi, or simply the projected embeddings without ambiguity. While this simple projection maintains some information from the node embeddings, it has an obvious limitation: nodes that are matched and collapsed into a super-node during the coarsening phase will share the same embedding. This problem becomes more serious when the embedding refinement is performed iteratively from Gm, ..., G0. To address this issue, we propose to use a graph convolution network for embedding refinement. Specifically, we design a graph-based neural network model $E_i = R(E_i^p, A_i)$, which derives the embeddings Ei on graph Gi based on the projected embeddings $E_i^p$ and the graph adjacency matrix Ai. Given a graph G with adjacency matrix A, we consider the fast approximation of graph convolution from (Kipf & Welling, 2017). The k-th layer of this neural network model is

$$H^{(k)}(X, A) = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(k-1)}(X, A)\, \Theta^{(k)}\right), \quad (4)$$

where σ(·) is an activation function, $\Theta^{(k)}$ is a layer-specific trainable weight matrix, and $H^{(0)}(X, A) = X$. In this paper, we define our embedding refinement model as an l-layer graph convolution model

$$E_i = R(E_i^p, A_i) \equiv H^{(l)}(E_i^p, A_i). \quad (5)$$

The architecture of the refinement model is shown in Figure 1b.
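To make the coarsening and projection steps concrete, the following is a minimal sketch of Eqs. 1–3 using SciPy sparse matrices. The function names and the `groups` encoding of a matching are our own illustration and are not the interface of the actual MILE implementation; how `groups` is populated (SEM followed by NHEM) is omitted.

```python
import numpy as np
import scipy.sparse as sp

def normalized_edge_weights(A):
    # Eq. 1: W_i(u, v) = A_i(u, v) / sqrt(D_i(u, u) * D_i(v, v)),
    # computed as D^{-1/2} A D^{-1/2} over the whole matrix at once.
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return (d_inv_sqrt @ A @ d_inv_sqrt).tocsr()

def coarsen(A, groups):
    # `groups[u]` is the super-node id of node u in G_{i+1}; an unmatched
    # node maps to its own singleton super-node. Builds the binary matching
    # matrix M_{i,i+1} and applies Eq. 2: A_{i+1} = M^T A_i M.
    n, n_coarse = len(groups), int(max(groups)) + 1
    M = sp.csr_matrix((np.ones(n), (np.arange(n), groups)),
                      shape=(n, n_coarse))
    return M, (M.T @ A @ M).tocsr()

def project(M, E_coarse):
    # Eq. 3: copy each super-node embedding back to its member nodes.
    return M @ E_coarse
```

Applied iteratively, `coarsen` yields the sequence G1, ..., Gm, and `project` is the first step of refinement at each level.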
The intuition behind this refinement model is to integrate the structural information of the current graph Gi into the projected embeddings $E_i^p$ by repeatedly performing the spectral graph convolution. Each layer of the graph convolution network in Eq. 4 can be regarded as one iteration of embedding propagation in the graph following the re-normalized adjacency matrix $\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$. Note that this re-normalized matrix is well aligned with the way we conduct normalized heavy edge matching in Eq. 1. We next discuss how the weight matrices $\Theta^{(k)}$ are learned.

Intricacies of Refinement Learning: Learning the refinement model is essentially learning $\Theta^{(k)}$ for each k ∈ [1, l] according to Eq. 4. Here we study how to design the learning task and construct the loss function. Since the graph convolution model $H^{(l)}(\cdot)$ aims to predict the embeddings Ei on graph Gi, we can directly run a base embedding on Gi to generate the “ground-truth” embeddings and use the difference between these embeddings and the predicted ones as the loss function for training. We propose to learn $\Theta^{(k)}$ on the coarsest graph and reuse them across all the levels for refinement. Specifically, we can define the loss function as the mean squared error

$$L = \frac{1}{|V_m|} \left\| E_m - H^{(l)}(M_{m,m+1} E_{m+1}, A_m) \right\|^2. \quad (6)$$

We refer to the learning task associated with the above loss function as double-base embedding learning. We point out, however, that there are two key drawbacks to this method. First of all, the above loss function requires one more level of coarsening to construct Gm+1 and an extra base embedding on Gm+1. These two steps, especially the latter, introduce non-negligible overheads to the MILE framework, which contradicts our motivation of scaling up graph embedding. More importantly, Em might not be a desirable “ground truth” for the refined embeddings. This is because most embedding methods are invariant to an orthogonal transformation of the embeddings, i.e., the embeddings can be rotated by an arbitrary orthogonal matrix (Hamilton et al., 2017). In other words, the embedding spaces of graphs Gm and Gm+1 can be totally different, since the two base embeddings are learned independently. Even if we follow the paradigm in (Chen et al., 2018) and conduct the base embedding on Gm using the simple projected embeddings from Gm+1 ($E_m^p$) as initialization, the embedding space does not naturally generalize and can drift during re-training. One possible solution is to use an alignment procedure to force the embeddings to be aligned between the two graphs (Hamilton et al., 2016), but it could be very expensive. In this paper, we propose a very simple method to address the above issues. Instead of conducting an additional level of coarsening, we construct a dummy coarsened graph by simply copying Gm, i.e., Mm,m+1 = I and Gm+1 = Gm. By doing this, we not only save one iteration of graph coarsening, but also avoid performing a base embedding on Gm+1, simply because Em+1 = Em. Moreover, the embeddings of Gm and Gm+1 are guaranteed to be in the same space in this case, without any drift. With this strategy, we change the loss function for model learning to

$$L = \frac{1}{|V_m|} \left\| E_m - H^{(l)}(E_m, A_m) \right\|^2. \quad (7)$$

With the above loss function, we adopt gradient descent with back-propagation to learn the parameters $\Theta^{(k)}$, k ∈ [1, l]. In the subsequent refinement steps, we apply the same set of parameters $\Theta^{(k)}$ to infer the refined embeddings. We point out that the training of the refinement model is rather efficient as it is done on the coarsest graph.
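A minimal sketch of this training procedure follows. The paper's implementation uses TensorFlow; we use PyTorch and dense tensors here purely for compactness, and we assume $\tilde{A}$ adds a weighted self-loop with the λ = 0.05 reported in Section A.1.3. All names are ours.

```python
import torch

def renormalized_adj(A, self_loop=0.05):
    # D~^{-1/2} A~ D~^{-1/2} from Eq. 4, assuming A~ = A + lambda * I.
    A_tilde = A + self_loop * torch.eye(A.shape[0])
    d_inv_sqrt = A_tilde.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

class Refiner(torch.nn.Module):
    # l-layer graph convolution model H^{(l)} of Eqs. 4-5 (l = 2, tanh).
    def __init__(self, d, layers=2):
        super().__init__()
        self.thetas = torch.nn.ParameterList(
            [torch.nn.Parameter(0.01 * torch.randn(d, d)) for _ in range(layers)])

    def forward(self, E, A_hat):
        H = E
        for theta in self.thetas:
            H = torch.tanh(A_hat @ H @ theta)
        return H

def train_refiner(E_m, A_m, epochs=200, lr=1e-3):
    # Minimize Eq. 7 on the coarsest graph: ||E_m - H^{(l)}(E_m, A_m)||^2 / |V_m|.
    model, A_hat = Refiner(E_m.shape[1]), renormalized_adj(A_m)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = ((E_m - model(E_m, A_hat)) ** 2).sum(dim=1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model  # the learned thetas are reused at every refinement level
```

Note how the dummy-coarsening trick shows up in the loss: the model's input and its regression target are the same $E_m$, so no extra base embedding is ever computed.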
The embedding refinement process involves merely sparse matrix multiplications using Eq. 5 and is relatively affordable compared to conducting embedding on the original graph. With these different components, we summarize the whole algorithm of our MILE framework in Algorithm 1. The time complexity of the algorithm is given in Appendix Section A.2.

Algorithm 1: Multi-Level Algorithm for Graph Embedding
Input: An input graph G0 = (V0, E0), # coarsening levels m, and a base embedding method f(·).
Output: Graph embeddings E0 on G0.
1: Coarsen G0 into G1, G2, ..., Gm using the proposed hybrid matching method.
2: Perform base embedding on the coarsest graph Gm (see Section 4.2).
3: Learn the weights Θ(k) using the loss function in Eq. 7.
4: for i = (m − 1) ... 0 do
5:   Compute the projected embeddings E_i^p on Gi.
6:   Use Eq. 4 and Eq. 5 to compute the refined embeddings Ei.
7: Return graph embeddings E0 on G0.

5 Experiments and Analysis

5.1 Experimental Configuration

The datasets used in our experiments are shown in Table 1. [Table 1: Dataset, # Nodes, # Edges, # Classes.] The Yelp dataset is preprocessed by us following procedures similar to those in (Huang et al., 2017) (raw data: https://www.yelp.com/dataset_challenge/dataset). To demonstrate that MILE can work with different graph embedding methods, we explore several popular methods for graph embedding, mainly DeepWalk (Perozzi et al., 2014), Node2Vec (Grover & Leskovec, 2016), LINE (Tang et al., 2015), GraRep (Cao et al., 2015), and NetMF (Qiu et al., 2018). To evaluate the quality of the embeddings, we follow the typical method in existing work and perform multi-label node classification (Perozzi et al., 2014; Grover & Leskovec, 2016).

5.2 MILE Framework Performance

We first evaluate the performance of our MILE framework when applied to different graph embedding methods. Figure 3 summarizes the performance of MILE on different datasets with various base embedding methods at various coarsening levels (exact numbers can be seen in Table 3 of the Appendix; we discuss the results of Yelp later). Note that m = 0 corresponds to the original embedding method. We make the following observations:

• MILE is scalable. MILE greatly boosts the speed of the explored embedding methods. With a single level of coarsening (m = 1), we are able to achieve speedups ranging from 1.5× to 3.4× (on PPI, Blog, and Flickr) while improving qualitative performance. Larger speedups are typically observed on GraRep and NetMF. Increasing the coarsening level m to 2, the speedup increases further (up to 14.4×), while the quality of the embeddings is comparable with the original methods as reflected by Micro-F1. On YouTube, for coarsening levels 6 and 8, we observe more than 10× speedup for DeepWalk, Node2Vec, and LINE. For NetMF on YouTube, the speedup is even larger – the original NetMF runs out of memory within 9.5 hours, while MILE (NetMF) only takes around 20 minutes (m = 8).

• MILE improves quality. For the smaller coarsening levels across all the datasets and methods, MILE-enhanced embeddings almost always offer a qualitative improvement over
the original embedding method, as evaluated by the Micro-F1 score (as high as 24.2%, while many others also show a 10%+ increase). Examples include MILE (DeepWalk, m = 1) on Blog/PPI, MILE (LINE, m = 1) on PPI, and MILE (NetMF, m = 1) on PPI/Blog/Flickr. Even with a higher number of coarsening levels (m = 2 for PPI/Blog/Flickr; m = 6, 8 for YouTube), MILE, in addition to being much faster, can still improve qualitatively over the original methods on most of the datasets, e.g., MILE (NetMF, m = 2) outperforms the original NetMF on PPI, Blog, and Flickr. We conjecture that the observed improvement in quality arises because the embeddings begin to rely on a more holistic view of the graph.

[Figure 3: Micro-F1 as a function of the number of coarsening levels for MILE with each base embedding method (DeepWalk, Node2Vec, LINE, GraRep, NetMF) on (a) PPI, (b) Blog, (c) Flickr, and (d) YouTube.]

• MILE supports multiple embedding strategies. We make some embedding-specific observations here. We observe that MILE consistently improves both the quality and the efficiency of NetMF on all four datasets (for YouTube, the base NetMF runs out of memory). For the largest dataset, the speedups afforded exceed 30-fold. We observe that for GraRep, while speedups with MILE are consistently observed, the qualitative improvements, if any, are smaller (for both YouTube and Flickr, the base method runs out of memory). For LINE, even though its time complexity is linear in the number of edges (Tang et al., 2015), applying the MILE framework on top of it still generates a significant speedup (likely because the complexity of LINE contains a larger constant factor than that of MILE). On the other hand, MILE on top of LINE generates embeddings of better quality on PPI and YouTube while falling a bit short on Blog and Flickr. For DeepWalk and Node2Vec, we again observe consistent improvements in scalability (up to 11-fold on the larger datasets) as well as quality using MILE with a few levels of coarsening. However, when the coarsening level is increased, the additional speedup afforded (up to 17-fold) comes at a mixed cost to quality (Micro-F1 drops slightly).

• Impact of varying coarsening levels on MILE. When the coarsening level m is small, MILE tends to significantly improve the quality of embeddings while taking much less time. From m = 0 to m = 1, we see a clear jump in the Micro-F1 score on all the datasets across the base embedding methods. This observation is more evident on the larger datasets (Flickr and YouTube). On YouTube, MILE (DeepWalk) with m = 1 increases the Micro-F1 score by 5.3% while only consuming half of the time compared to the original DeepWalk. MILE (DeepWalk) continues to generate embeddings of better quality than DeepWalk until m = 7, where the speedup is 13×. As the coarsening level m in MILE increases, the running time drops dramatically while the quality of embeddings only decreases slightly.
The running time decreases at an almost exponential rate (logarithmic scale on the y-axis in the second row of Figure 3). On the other hand, the Micro-F1 score declines much more slowly (first row of Figure 3), and most of the scores are still better than those of the original methods. This shows that MILE not only consolidates the existing embedding methods, but also provides a nice trade-off between effectiveness and efficiency.

Table 2: MILE vs. HARP (Mi-F1 in %, Time in minutes).

Method           PPI             Blog            Flickr          YouTube
                 Mi-F1   Time    Mi-F1   Time    Mi-F1   Time    Mi-F1   Time
DeepWalk (DW)    23.0    2.4     37.0    8.0     40.0    50.0    45.2    604.8
MILE (DW)        25.6    1.2     42.9    4.6     40.4    34.4    46.1    55.2
HARP (DW)        24.1    3.0     41.3    9.8     40.6    78.2    46.6    1727.7
Node2Vec (NV)    24.3    4.0     39.1    13.0    40.5    78.2    45.5    951.2
MILE (NV)        25.9    1.7     42.8    6.9     40.7    50.5    46.3    83.5
HARP (NV)        22.3    3.9     36.2    13.16   40.5    101.1   47.2    1981.3

5.3 Comparing MILE with HARP

HARP is a multi-level method primarily for improving the quality of graph embeddings. We compare HARP with our MILE framework using DeepWalk and Node2Vec as the base embedding methods (HARP code: https://github.com/GTmac/HARP). Table 2 shows the performance of these two methods on the four datasets (the coarsening level is 1 on PPI/Blog/Flickr and 6 on YouTube). From the table we can observe that MILE generates embeddings of comparable quality with HARP. MILE performs much better than HARP on PPI and Blog, marginally better on Flickr, and marginally worse on YouTube. However, MILE is significantly faster than HARP on all four datasets (e.g., on YouTube, MILE affords a 31× speedup). This is because HARP requires running the whole embedding algorithm on each coarsened graph, which introduces a huge computational overhead. Note that for PPI and Blog, MILE with NetMF (not shown) as its base embedding produces the best Micro-F1 of 26.9 and 43.8, respectively. This shows another advantage of MILE – being agnostic to the base embedding – when compared with HARP.

5.4 MILE: Large Graph Embedding

We now explore the scalability of MILE on the large Yelp dataset. None of the five graph embedding methods studied in this paper can successfully conduct graph embedding on Yelp within 60 hours on a modern machine with 28 cores and 128 GB RAM. Even extending the run-time deadline to 100 hours, we see DeepWalk and LINE barely finish. Leveraging the proposed MILE framework now makes it much easier to perform graph embedding on this scale of dataset (see Figure 4 for the results).

[Figure 4: Running MILE on the Yelp dataset with each base embedding method (DeepWalk, Node2Vec, LINE, GraRep, NetMF): Micro-F1 (left) and running time in minutes on a log scale (right) as the number of coarsening levels varies from 0 to 22.]

We observe that MILE significantly reduces the running time and improves the Micro-F1 score. For example, the Micro-F1 scores of the original DeepWalk and LINE are 0.640 and 0.625 respectively, and both take more than 80 hours. But using MILE with m = 4, the Micro-F1 score improves to 0.643 (DeepWalk) and 0.642 (LINE) while achieving speedups of around 1.6×. Moreover, MILE reduces the running time of DeepWalk from 53 hours (coarsening level 4) to 2 hours (coarsening level 22) while reducing the Micro-F1 score by just 1% (from 0.643 to 0.634). Meanwhile, there is no change in the Micro-F1 score from coarsening level 4 to 10, where the running time is improved by a factor of two. These results affirm the power of the proposed MILE framework in scaling up graph embedding algorithms while generating quality embeddings.
6 Conclusion

In this work, we propose a novel multi-level embedding (MILE) framework to scale up graph embedding techniques without modifying them. Our framework incorporates existing embedding techniques as black boxes and significantly improves the scalability of extant methods by reducing both the running time and memory consumption. Additionally, MILE also provides a lift in the quality of node embeddings in most cases. A fundamental contribution of MILE is its ability to learn a refinement strategy that depends on both the underlying graph properties and the embedding method in use. In the future, we plan to generalize MILE to information-rich graphs and to employ MILE in more applications.

A Appendix

A.1 Experimental Configuration Details

A.1.1 Datasets

The details of the datasets used in our experiments are:
• PPI is a Protein-Protein Interaction graph constructed based on the interplay activity between proteins of Homo Sapiens, where the labels represent biological states.
• Blog is a network of social relationships among bloggers on BlogCatalog, where the labels indicate the interests of the bloggers.
• Flickr is a social network of the contacts between users on flickr.com, with labels denoting interest groups.
• YouTube is a social network between users on YouTube, where labels represent genres of groups subscribed to by users.
• Yelp is a social network of friends on Yelp, where labels indicate the business categories on which the users review.

A.1.2 Baseline Methods

To demonstrate that MILE can work with different graph embedding methods, we explore several popular methods for graph embedding.
• DeepWalk (DW) (Perozzi et al., 2014): Following the original work (Perozzi et al., 2014), we set the length of random walks to 80, the number of walks per node to 10, and the context window size to 10.
• Node2Vec (NV) (Grover & Leskovec, 2016): We use the same settings as DeepWalk for the common hyper-parameters while setting p = 4.0 and q = 1.0, which we found empirically to generate better results across all the datasets.
• LINE (LN) (Tang et al., 2015): This method aims at preserving first-order and second-order proximities and has been applied to large-scale graphs. We learn the first-order and second-order embeddings respectively and concatenate them into a unified embedding.
• GraRep (GR) (Cao et al., 2015): This method considers different powers (up to k) of the adjacency matrix to preserve higher-order graph proximity for graph embedding. It uses SVD to generate the low-dimensional representations of nodes. We set k = 4 as suggested in the original work.
• NetMF (NM) (Qiu et al., 2018): This is a recent effort that supports graph embedding via matrix factorization. We set the window size to 10 and the rank h to 1024, and leverage the approximate version, as suggested and reported by the authors.

A.1.3 MILE-specific Settings

For all the above base embedding methods, we set the embedding dimensionality d to 128. When applying our MILE framework, we vary the coarsening level m from 1 to 10 whenever possible. For the graph convolution network model, the self-loop weight λ is set to 0.05, the number of hidden layers l is 2, tanh(·) is used as the activation function, the learning rate is set to 0.001, and the number of training epochs is 200. The Adam optimizer is used for model training.
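For reference, the settings above can be collected into a single configuration object. This is a hypothetical sketch; the field names are ours and do not come from the MILE codebase.

```python
# Hypothetical configuration mirroring the settings reported in A.1.3.
MILE_CONFIG = {
    "embed_dim": 128,          # d, shared by all base embedding methods
    "coarsen_levels": 8,       # m, varied from 1 to 10 in the experiments
    "self_loop_weight": 0.05,  # lambda in the re-normalized adjacency
    "gcn_layers": 2,           # l hidden layers in the refinement model
    "activation": "tanh",
    "learning_rate": 0.001,
    "epochs": 200,
    "optimizer": "adam",
}
```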
A.1.4 System Specification

The experiments were conducted on a machine running Linux with an Intel Xeon E5-2680 CPU (28 cores, 2.40 GHz) and 128 GB of RAM. We implement our MILE framework in Python. Our code and data will be made available for replicability purposes. For all five base embedding methods, we adapt the original code from the authors (DeepWalk: https://github.com/phanein/deepwalk; Node2Vec: http://snap.stanford.edu/node2vec/; LINE: https://github.com/tangjianpku/LINE; GraRep: https://github.com/thunlp/OpenNE; NetMF: https://github.com/xptree/NetMF). We additionally use the TensorFlow package for the embedding refinement learning component. We leverage the available parallelism (on 28 cores) for each method (e.g., the generation of random walks in DeepWalk and Node2Vec, the training of the refinement model in MILE, etc.).

A.1.5 Evaluation Metrics

To evaluate the quality of the embeddings, we follow the typical method in existing work and perform multi-label node classification (Perozzi et al., 2014; Grover & Leskovec, 2016). Specifically, after the graph embeddings are learned for the nodes (labels are not used for this part), we run a 10-fold cross-validation using the embeddings as features and report the average Micro-F1 and average Macro-F1. We also record the end-to-end wall-clock time consumed by each method for scalability comparisons.

A.2 Time Complexity

It is non-trivial to derive the exact time complexity of MILE, as it depends on the graph structure, the chosen base embedding method, and the convergence rate of the GCN model training. Here, we provide a rough estimate of the time complexity. For simplicity, we assume the number of vertices and the number of edges are reduced by factors α and β respectively at each step of coarsening (α > 1.0 and β > 1.0), i.e., $V_i = \frac{1}{\alpha} V_{i-1}$ and $E_i = \frac{1}{\beta} E_{i-1}$ (we found α and β to be in the range [1.5, 2.0], empirically). With m levels of coarsening, the coarsening complexity is approximately $O\left(\frac{1 - 1/\beta^m}{1 - 1/\beta} \times E\right)$, and since $1/\beta^m$ is small, the complexity reduces to $O\left(\frac{\beta}{\beta - 1} \times E\right)$. For the base embedding phase, if the embedding algorithm has time complexity T(V, E), the complexity of the base embedding phase is $T\left(\frac{V}{\alpha^m}, \frac{E}{\beta^m}\right)$. For the refinement phase, the time complexity can be divided into two parts, i.e., the GCN model training and the embedding inference applying the GCN model. The former has similar complexity to the original GCN and can be denoted as $O\left(k_1 \cdot \frac{E}{\beta^m}\right)$ (Kipf & Welling, 2017), where $k_1$ is a small constant related to the embedding dimensionality and the number of training epochs. The embedding inference part is simply sparse matrix multiplication using Eq. 4, with time complexity $O(k_2 \cdot E_i)$ when refining the embeddings on graph Gi, where $k_2$ is an even smaller constant ($k_2 < k_1$). As a result, the time complexity of the whole refinement phase is $O\left(k_1 \cdot \frac{E}{\beta^m} + k_2 \cdot \left(E + \frac{E}{\beta} + \cdots + \frac{E}{\beta^{m-1}}\right)\right) \approx O(k_3 \cdot E)$, where $k_3$ is a small constant. Overall, for an embedding algorithm of time complexity T(V, E), the MILE framework reduces it to $T\left(\frac{V}{\alpha^m}, \frac{E}{\beta^m}\right) + O(k \cdot E)$. This is a significant improvement considering T(V, E) is usually very large. The reduction in time complexity is attributed to the fact that we run the embedding learning and refinement model training on the coarsest graph. In addition, the overhead introduced by the coarsening phase and recursive embedding refinement is relatively small (linear in the number of edges E). Note that the constant factor k in the complexity term is usually small, and we empirically found it to be on the scale of tens.
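To make the estimate concrete, a small back-of-the-envelope computation, assuming α = β = 2 (both were observed in [1.5, 2.0]) and a graph of roughly the Yelp scale; the choice m = 8 is ours, for illustration only:

```python
# Back-of-the-envelope illustration of the complexity estimate above.
alpha = beta = 2.0
V, E, m = 9_000_000, 40_000_000, 8   # roughly the Yelp scale

V_m = V / alpha**m                   # nodes seen by the base embedding
E_m = E / beta**m                    # edges seen by the base embedding
coarsen_cost = E * (1 - 1 / beta**m) / (1 - 1 / beta)  # ~ (beta/(beta-1)) * E

print(f"coarsest graph: {V_m:,.0f} nodes, {E_m:,.0f} edges")
print(f"coarsening work ~ {coarsen_cost:,.0f} edge operations (O(E))")
# => the base method runs on a graph roughly 256x smaller than the original,
#    while the coarsening overhead stays within a small constant times E.
```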
Because of this, even when the complexity of the original embedding algorithm is linear in E, our MILE framework can still potentially speed up the embedding process, because the complexity of MILE contains a smaller constant factor k (see Sec. 5.2 for the experiment applying MILE to LINE). Furthermore, it is worth noting that many of the existing embedding strategies involve hyper-parameter tuning for the best performance, especially those methods based on neural networks (e.g., DeepWalk, Node2Vec, etc.). This in turn requires the algorithm to be run repeatedly – hence any savings in runtime from applying MILE are magnified across multiple runs of the algorithm with different hyper-parameter settings.

A.3 MILE Performance

Detailed performance information is available in Table 3, which also reports the speedup compared to the original method. “N/A” indicates that the method runs out of memory, in which case we show the amount of running time spent when it happens.

A.4 MILE Drilldown: Design Choices

We now study the role of the design choices we make within the MILE framework related to the coarsening and refinement procedures described above. To this end, we examine alternative design choices and systematically examine their performance. The alternatives we consider are:
• Random Matching (MILE-rm): For each iteration of coarsening, we repeatedly pick a random pair of connected nodes as a match and merge them into a super-node until no more matchings can be found. The rest of the algorithm is the same as in MILE.
• Simple Projection (MILE-proj): We replace our embedding refinement model with a simple projection method. In other words, we directly copy the embedding of a super-node to its original node(s) without any refinement (see Eq. 3).
• Averaging Neighborhoods (MILE-avg): For this baseline method, the refined embedding of each node is a weighted average of the node embeddings of its neighborhood (weighted by the edge weights). This can be regarded as an embedding propagation method. We add a self-loop to each node (self-loop weights are tuned for the best performance) and conduct the embedding propagation for two rounds.
• Untrained Refinement Model (MILE-untr): Instead of training the refinement model to minimize the loss defined in Eq. 7, this baseline merely uses a fixed set of values for the parameters Θ(k) without training (the values are randomly generated; other parts of the model in Eq. 4 are the same, including $\tilde{A}$ and $\tilde{D}$).
• Double-base Embedding for Refinement Training (MILE-2base): This method replaces the loss function in Eq. 7 with the alternative one in Eq. 6 for model training. It conducts one more level of coarsening and base embedding (level m + 1), from which the embeddings are projected to level m and used as the input for model training.
• GraphSAGE as Refinement Model (MILE-gs): This replaces the graph convolution network in our refinement method with GraphSAGE (Hamilton et al., 2017) (we adapt the code from https://github.com/williamleif/GraphSAGE). We choose max-pooling for aggregation and set the number of sampled neighbors to 100, as suggested by the authors. Also, concatenation is conducted instead of replacement during the process of propagation.

Table 4 shows the comparison of the performance of these methods across the four datasets.
Here, we focus on using DeepWalk and NetMF for base embedding with a smaller coarsening level (m = 1 for PPI, Blog, and Flickr; m = 6 for YouTube). Results are similar for the other embedding options we consider. We hereby summarize the key findings derived from Table 4 as follows:
• The matching methods used within MILE offer a qualitative benefit at a minimal cost to execution time. Comparing MILE with MILE-rm on all the datasets, we can see that MILE generates better embeddings than MILE-rm using either DeepWalk or NetMF as the base embedding method. Though MILE-rm is slightly faster than MILE due to its random matching, its Micro-F1 and Macro-F1 scores are consistently lower than those of MILE.
• The graph convolution based refinement learning methodology in MILE is particularly effective. The simple projection-based MILE-proj performs significantly worse than MILE. The other two variants (MILE-avg and MILE-untr), which do not train the refinement model at all, also perform much worse than the proposed method. Note that MILE-untr is the same as MILE except that it uses a default set of parameters instead of learning those parameters. Clearly, the model learning part of our refinement method is a fundamental contributing factor to the effectiveness of MILE. Through training, the refinement model is tailored to the specific graph under the base embedding method in use. The overhead cost of this learning (comparing MILE with MILE-untr) can vary depending on the base embedding employed (for instance, on the YouTube dataset, it is an insignificant 1.2% on DeepWalk while being up to 20% on NetMF), but it is still worth it due to the qualitative benefits (Micro-F1 up from 30.2 to 40.9 with NetMF on YouTube).
• Graph convolution refinement learning outperforms GraphSAGE. Replacing the graph convolution network with GraphSAGE for embedding refinement, MILE-gs does not perform as well as MILE. It is also computationally more expensive, partially due to its reliance on embedding concatenation, instead of replacement, during the process of embedding propagation (higher model complexity).
• Double-base embedding learning is not effective. In Sec. 4.3, we discuss the issues with unaligned embeddings in the double-base embedding method for refinement model learning. The performance gap between MILE and MILE-2base in Table 4 provides empirical evidence supporting our argument. This gap is likely caused by the fact that the base embeddings of level m and level m + 1 might not lie in the same embedding space (rotated by some orthogonal matrix) (Hamilton et al., 2017). As a result, using the projected embeddings $E_m^p$ as input for model training (MILE-2base) is not as good as directly using Em (MILE). Moreover, Table 4 shows that the additional round of base embedding in MILE-2base introduces a non-trivial overhead. On YouTube, the running time of MILE-2base is 1.6 times that of MILE.

A.5 MILE Drilldown: Memory Consumption

We also study the impact of MILE on reducing memory consumption. For this purpose, we focus on MILE (GraRep) and MILE (NetMF), with GraRep and NetMF as base embedding methods respectively. Both of these are embedding methods based on matrix factorization, which can involve a dense objective matrix and could therefore be rather memory expensive.
We do not explore DeepWalk and Node2Vec here since their embedding learning methods generate truncated random walks (training data) on the fly with almost negligible memory consumption (compared to the space for storing the graph and the embeddings). Figure 5 shows the memory consumption of MILE (GraRep) and MILE (NetMF) as the coarsening level increases on Blog (results on the other datasets are similar). We observe that MILE significantly reduces memory consumption as the coarsening level increases. Even with one level of coarsening, the memory consumption of GraRep and NetMF is reduced by 64% and 42% respectively. The dramatic reduction continues as the coarsening level increases until it reaches 4, at which point the memory consumption is mainly contributed by the storage of the graph and the embeddings. This memory reduction is consistent with our intuition, since both the number of rows and the number of columns in the objective matrix shrink almost by half with one level of coarsening.

A.6 MILE Drilldown: Discussion on Reusing Θ(k) Across All Levels

Similar to GCN, Θ(k) is a matrix of filter parameters of size $d \times d$ (where d is the embedding dimensionality). Eq. 4 in this paper defines how the embeddings are propagated during embedding refinement, parameterized by Θ(k). Intuitively, Θ(k) defines how different embedding dimensions interact with each other during embedding propagation. This interaction depends on the graph structure and the base embedding method, and it can be learned from the coarsest level. Ideally, we would like to learn this parameter Θ(k) on every pair of consecutive levels. But this is not practical, since it could be expensive as the graph becomes more fine-grained (and would defeat our purpose of scaling up graph embedding). This trick of “sharing” parameters across different levels is a trade-off between efficiency and effectiveness. To some extent, it is similar to the original GCN (Kipf & Welling, 2017), where the authors share the same filter parameters Θ(k) over the whole graph (as opposed to using a different Θ(k) for different nodes; see Eqs. (6) and (7) in (Kipf & Welling, 2017)). Moreover, we empirically found this to work well enough while being much more efficient. Table 4 shows that if we do not share the Θ(k) values and instead use random values for Θ(k) during refinement, the quality of the embeddings is much worse (see the baseline MILE-untr).

A.7 MILE Drilldown: Discussion on the Choice of Embedding Methods

We wish to point out that we chose the base embedding methods because they are either recently proposed (NetMF, introduced in 2018) or widely used (DeepWalk, Node2Vec, LINE). By showing the performance gain of using MILE on top of these methods, we want to ensure that the contribution of this work is of broad interest to the community. We also want to reiterate that these methods are quite different in nature:
• DeepWalk (DW) and Node2Vec (N2V) rely on the use of random walks for latent representations of features.
• LINE learns an embedding that directly optimizes a carefully constructed objective function that preserves both first-order and second-order proximity among nodes in the embedding space.
• GraRep constructs multiple objective matrices based on high orders of the random walk Laplacian, factorizes each objective matrix to generate embeddings, and then concatenates the generated embeddings to form the final embedding.
• NetMF constructs an objective matrix based on the random walk Laplacian and factorizes the objective matrix in order to generate the embeddings.
Indeed, NetMF (Qiu et al., 2018; Levy & Goldberg, 2014), with an appropriately constructed objective matrix, has been shown to approximate DW, N2V, and LINE, allowing such methods to be viewed as conducting implicit matrix factorization of approximated matrices. There are limitations to such approximations (shown in a related context by (Arora et al., 2016)) – the most important one is the requirement of a sufficiently large embedding dimensionality. Additionally, we note that while unification is possible under such a scenario, the methods based on matrix factorization are quite different from the original methods and place a much larger premium on space (memory consumption) – in fact, this is evidenced by the fact that we are unable to run NetMF and GraRep in many cases without incorporating them within MILE.

A.8 MILE Drilldown: Discussion on Extending MILE to Directed Graphs

Note that, as pointed out by (Chung, 2005), one can construct random-walk Laplacians for a directed graph, thus allowing approaches like NetMF to accommodate such graphs. Another simple solution is to symmetrize the graph while accounting for directionality. Once the graph is symmetrized, any of the embedding strategies we discuss can be employed within the MILE framework (including the coarsening technique). There are many ideas for the symmetrization of directed graphs (see, for example, the work described by (Gleich, 2006) or (Satuluri & Parthasarathy, 2011)).

A.9 MILE Drilldown: Discussion on the Effectiveness of SEM

The effectiveness of structural equivalence matching (SEM) is highly dependent on the graph structure, but in general 5%–20% of nodes are structurally equivalent (most of which are low-degree nodes). For example, during the first level of coarsening, YouTube has 172,906 nodes (or 86,453 pairs) out of 1,134,890 nodes that are found to be structurally equivalent (around 15%); Yelp has 875,236 nodes (or 437,618 pairs) out of 8,938,630 nodes that are structurally equivalent (around 10%). In fact, more nodes become involved in SEM as it is run iteratively at each coarsening level.
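For illustration, structurally equivalent nodes can be found in a single pass by bucketing nodes on their neighbor sets. This is a minimal sketch of the idea, assuming an unweighted CSR adjacency matrix; the function name and the bucketing strategy are our own, not the MILE implementation.

```python
import collections
import scipy.sparse as sp

def structurally_equivalent_groups(A):
    # Group nodes by their (sorted) neighbor sets; nodes that share exactly
    # the same neighborhood are structurally equivalent and can be matched.
    A = sp.csr_matrix(A)
    buckets = collections.defaultdict(list)
    for u in range(A.shape[0]):
        key = tuple(sorted(A.indices[A.indptr[u]:A.indptr[u + 1]]))
        buckets[key].append(u)
    return [nodes for nodes in buckets.values() if len(nodes) > 1]
```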
1. How does the proposed method improve the embedding quality compared to existing methods?
2. Why do the experimental results fail to support the authors' claims about the proposed method's improved embedding quality?
3. How does the comparison with existing methods fall short?
4. What experiment details are missing, and how might they impact the results?
5. Can the proposed method be easily extended to directed graphs, and what experiments might demonstrate its effectiveness?
6. Does the proposed graph coarsening work well for real-world graphs, and what percentage of nodes might have the property of being "structurally equivalent"?
7. Is there an efficiency-quality trade-off in the proposed method, and how might it be studied?
Review
Review

In this submission, the authors propose a three-stage framework for large-scale graph embedding. The proposed method first constructs a small graph by graph coarsening, then applies any existing graph embedding method, and finally refines the learned embeddings. The method is useful; however, the experimental results are not convincing and cannot support the authors' claims about the proposed method.

First, in many places the authors claim that the embedding quality of the proposed method is improved, for example, in the last sentence of Section 1 and in the "MILE improves quality" paragraph on page 7. However, the experimental results fail to support this. As the proposed method is for large-scale graphs, let's focus on the results of the YouTube dataset and the Yelp dataset first. For the YouTube dataset ((d) of Table 2), when m is set to 8, the performance drops in all cases. For the Yelp dataset (Figure 3), the authors do not provide Micro-F1 for the original graph (m = 0) or for m = 1, 2, so it is hard or impossible to demonstrate that the quality of the proposed method is still good.

Second, the comparison with existing methods is not sufficient. For the most important Yelp dataset (as this dataset fits the motivating scenario (large-scale graphs) of this submission), the authors fail to report any comparison. Thus the demonstration of the benefit of the proposed method might be weak.

Third, some experiment details are missing. For example, how do the authors compute the running time of the proposed method? Are all three stages included? How do the authors implement the existing methods? Are these implementations good enough to ensure a fair comparison?

*******

Some other questions:

a) On page 2, the authors mention that the proposed method "can be easily extended to directed graph". However, based on my understanding, a directed graph will affect both the graph coarsening and embedding refinement steps, and it seems not so easy to extend. Do the authors have a solution and experiments for directed graphs? It would be interesting to see such results, which would enlarge the application scope of the proposed method.

b) The toy example on page 3 is very clear. However, for real-world graphs, does the proposed graph coarsening work well? For example, one property the proposed method utilizes is "structural equivalence". What is the percentage of nodes that have such a property in real-world graphs?

********

Some other comments:

Generally speaking, this submission studies a very practical task. Although the authors claim that the proposed method has great efficiency while the embedding quality is comparably good or even better than that of existing methods, I think that there is an efficiency-quality trade-off based on the experimental results in this submission. When m increases, the graph coarsening step causes more information loss, and thus the quality may decrease. The embedding refinement step can be regarded as a procedure to reduce such information loss, but it may not improve the embedding quality beyond that of the original graph. So to me, it would be more meaningful to study such an efficiency-quality trade-off for large-scale graph embedding.
ICLR
Title MILE: A Multi-Level Framework for Scalable Graph Embedding Abstract Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a graph convolution neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while generating embeddings of better quality, for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation. 1 Introduction In recent years, graph embedding has attracted much interest due to its broad applicability for various tasks (Perozzi et al., 2014; Wang et al., 2016; Henderson et al., 2012). However, such methods rarely scale to large datasets (e.g., graphs with over 1 million nodes) since they are computationally expensive and often memory intensive. For example, random-walkbased embedding techniques require a large amount of CPU time to generate a sufficient number of walks and train the embedding model. As another example, embedding methods based on matrix factorization, including GraRep (Cao et al., 2015) and NetMF (Qiu et al., 2018), requires constructing an enormous objective matrix (usually much denser than adjacency matrix), on which matrix factorization is performed. Even a medium-size graph with 100K nodes can easily require hundreds of GB of memory using those methods. On the other hand, many graph datasets in the real world tend to be large-scale with millions or even billions of nodes. To the best of our knowledge, none of the existing efforts examines how to scale up graph embedding in a generic way. We make the first attempt to close this gap. We are also interested in the related question of whether the quality of such embeddings can be improved along the way. Specifically, we ask: 1) Can we scale up the existing embedding techniques in an agnostic manner so that they can be directly applied to larger datasets? 2) Can the quality of such embedding methods be strengthened by incorporating the holistic view of the graph? To tackle these problems, we propose a MultI-Level Embedding (MILE) framework for graph embedding. 
Our approach relies on a three-step process: first, we repeatedly coarsen the original graph into smaller ones by employing a hybrid matching strategy; second, we compute the embeddings on the coarsest graph using an existing embedding techniques - and third, we propose a novel refinement model based on learning a graph convolution network to refine the embeddings from the coarsest graph to the original graph – learning a graph convolution network allows us to compute a refinement procedure that levers the dependencies inherent to the graph structure and the embedding method of choice. To summarize, we find that: • MILE is generalizable : Our MILE framework is agnostic to the underlying graph embedding techniques and treats them as black boxes. • MILE is scalable : MILE can significantly improve the scalability of the embedding methods (up to 30-fold), by reducing the running time and memory consumption. • MILE generates high-quality embeddings : In many cases, we find that the quality of embeddings improves by levering MILE (in some cases is in excess of 10%). 2 Related Work Many techniques for graph or network embedding have been proposed in recent years. DeepWalk and Node2Vec generate truncated random walks on graphs and apply the Skip Gram by treating the walks as sentences (Perozzi et al., 2014; Grover & Leskovec, 2016). LINE learns the node embeddings by preserving the first-order and second-order proximities (Tang et al., 2015). Following LINE, SDNE leverages deep neural networks to capture the highly non-linear structure (Wang et al., 2016). Other methods construct a particular objective matrix and use matrix factorization techniques to generate embeddings, e.g., GraRep (Cao et al., 2015) and NetMF (Qiu et al., 2018). This also led to the proliferation of network embedding methods for information-rich graphs, including heterogeneous information networks (Chang et al., 2015; Dong et al., 2017) and attributed graphs (Pan et al., 2016; Liang et al., 2018; Yang et al., 2015; Kipf & Welling, 2017). On the other hand, there are very few efforts, focusing on the scalability of network embedding (Yang et al., 2017; Huang et al., 2017). First, such efforts are specific to a particular embedding strategy and do not generalize. Second, the scalability of such efforts is limited to moderately sized datasets. Finally, and notably, these efforts at scalability are actually orthogonal to our strategy and can potentially be employed along with our efforts to afford even greater speedup. The closest work to this paper is the very recently proposed HARP (Chen et al., 2018), which proposes a hierarchical paradigm for graph embedding based on iterative learning methods (e.g., DeepWalk and Node2Vec). However, HARP focuses on improving the quality of embeddings by using the learned embeddings from the previous level as the initialized embeddings for the next level, which introduces a huge computational overhead. Moreover, it is not immediately obvious how a HARP like methodology would be extended to other graph embedding techniques (e.g., GraRep and NetMF) in an agnostic manner since such an approach would necessarily require one to modify the embedding methods to preset their initialized embeddings. In this paper, we focus on designing a general-purpose framework to scale up embedding methods treating them as black boxes. 3 Problem Formulation Let G = (V,E) be the input graph where V and E are respectively the node set and edge set. 
Let A be the adjacency matrix of the graph and we assume G is undirected, though our problem can be easily extended (Chung, 2005; Gleich, 2006; Satuluri & Parthasarathy, 2011) to directed graph. We first define graph embedding: Definition 3.1 Graph Embedding Given a graph G = (V,E) and a dimensionality d (d |V |), the problem of graph embedding is to learn a d-dimension vector representation for each node in G so that graph properties are best preserved. Following this, a graph embedding method is essentially a mapping function f : R|V |×|V | 7→ R|V |×d, whose input is the adjacency matrix A (or G) and output is a lower dimension matrix. Motivated by the fact that the majority of graph embedding methods cannot scale to large datasets, we seek to speed up existing graph embedding methods without sacrificing quality. We formulate the problem as: Given a graph G = (V,E) and a graph embedding method f(·), we aim to realize a strengthened graph embedding method f̂(·) so that it is more scalable than f(·) while generating embeddings of comparable or even better quality. 4 Methodology MILE framework consists of three key phases: graph coarsening, base embedding, and embeddings refining. Figure 1a shows the overview. 4.1 Graph Coarsening In this phase, the input graph G (or G0) is repeatedly coarsened into a series of smaller graphs G1, G2, ..., Gm such that |V0| > |V1| > ... > |Vm|. In order to coarsen a graph from Gi to Gi+1, multiple nodes in Gi are collapsed to form super-nodes in Gi+1, and the edges incident on a super-node are the union of the edges on the original nodes in Gi. Here the set of nodes forming a super-node is called a matching. We propose a hybrid matching technique containing two matching strategies that can efficiently coarsen the graph while retaining the global structure. An example is shared in Figure 2. Structural Equivalence Matching (SEM) : Given two vertices u and v in an unweighted graph G, we call they are structurally equivalent if they are incident on the same set of neighborhoods. In figure 2a, node D and E are structurally equivalent. The intuition of matching structually equivalent nodes is that if two vertices are structurally equivalent, then their node embeddings will be similar. Normalized Heavy Edge Matching (NHEM) : Heavy edge matching is a popular matching method for graph coarsening (Karypis & Kumar, 1998). For an unmatched node u in Gi, its heavy edge matching is a pair of vertices (u, v) such that the weight of the edge between u and v is the largest. In this paper, we propose to normalize the edge weights when applying heavy edge matching using the formula as follows Wi(u, v) = Ai(u, v)√ Di(u, u) ·Di(v, v) . (1) Here, the weight of an edge is normalized by the degree of the two vertices on which the edge is incident. Intuitively, it penalizes the weights of edges connected with high-degree nodes. As we will show in Sec. 4.3, this normalization is tightly connected with the graph convolution kernel. Hybrid Matching Method : We use a hybrid of two matching methods above for graph coarsening. To construct Gi+1 from Gi, we first find out all the structural equivalence matching (SEM) M1, where Gi is treated as an unweighted graph. This is followed by the searching of the normalized heavy edge matching (NHEM) M2 on Gi. Nodes in each matching are then collapsed into a super-node in Gi+1. Note that some nodes might not be matched at all and they will be directly copied to Gi+1. Formally, we build the adjacency matrix Ai+1 of Gi+1 through matrix operations. 
To this end, we define the matching matrix storing the matching information from graph Gi to Gi+1 as a binary matrix Mi,i+1 ∈ {0, 1}|Vi|×|Vi+1|. The r-th row and c-th column of Mi,i+1 is set to 1 if node r in Gi will be collapsed to super-node c in Gi+1, and is set to 0 if otherwise. Each column of Mi,i+1 represents a matching with the 1s representing the nodes in it. Each unmatched vertex appears as an individual column in Mi,i+1 with merely one entry set to 1. Following this formulation, we construct the adjacency matrix of Gi+1 by using Ai+1 = MTi,i+1AiMi,i+1. (2) 4.2 Base Embedding on Coarsened Graph The size of the graph reduces drastically after each iteration of coarsening, halving the size of the graph in the best case. We coarsen the graph for m iterations and apply the graph embedding method f(·) on the coarsest graph Gm. Denoting the embeddings on Gm as Em, we have Em = f(Gm ). Since our framework is agnostic to the adopted graph embedding method, we can use any graph embedding algorithm for base embedding. 4.3 Refinement of Embeddings The final phase of MILE is the embeddings refinement phase. Given a series of coarsened graph G0,G1,G2, ...,Gm, their corresponding matching matrix M0,1,M1,2, ...,Mm−1,m, and the node embeddings Em on Gm, we seek to develop an approach to derive the node embeddings of G0 from Gm. To this end, we first study an easier subtask: given a graph Gi, its coarsened graph Gi+1, the matching matrix Mi,i+1 and the node embeddings Ei+1 on Gi+1, how to infer the embeddings Ei on graph Gi. Once we solved this subtask, we can then iteratively apply the technique on each pair of consecutive graphs from Gm to G0 and eventually derive the node embeddings on G0. In this work, we propose to use a graph-based neural network model to perform embeddings refinement. Graph Convolution Network for Refinement Learning : Since we know the matching information between the two consecutive graphs Gi and Gi+1, we can easily project the node embeddings from the coarse-grained graph Gi+1 to the fine-grained graph Gi using Epi = Mi,i+1Ei+1 (3) In this case, embedding of a super-node is directly copied to its original node(s). We call Epi the projected embeddings from Gi+1 to Gi, or simply projected embeddings without ambiguity. While this way of simple projection maintains some information of node embeddings, it has obvious limitations that nodes will share the same embeddings if they are matched and collapsed into a super-node during the coarsening phase. This problem will be more serious when the embedding refinement is performed iteratively from Gm, ..., G0. To address this issue, we propose to use a graph convolution network for embedding refinement. Specifically, we design a graph-based neural network model Ei = R(Epi , Ai), which derives the embeddings Ei on graph Gi based on the projected embeddings Epi and the graph adjacency matrix Ai. Given graph G with adjacency matrix A, we consider the fast approximation of graph convolution from (Kipf & Welling, 2017). The k-th layer of this neural network model is H(k)(X,A) = σ ( D̃− 1 2 ÃD̃− 1 2H(k−1)(X,A)Θ(k) ) (4) where σ(·) is an activation function, Θ(k) is a layer-specific trainable weight matrix, and H(0)(X,A) = X. In this paper, we define our embedding refinement model as a l-layer graph convolution model Ei = R (Epi , Ai) ≡ H (l) (Epi , Ai) . (5) The architecture of the refinement model is shown in Figure 1b. 
The intuition behind this refinement model is to integrate the structural information of the current graph Gi into the projected embedding Epi by repeatedly performing the spectral graph convolution. Each layer of graph convolution network in Eq. 4 can be regarded as one iteration of embedding propagation in the graph following the re-normalized adjacency matrix D̃− 12 ÃD̃− 12 . Note that this re-normalized matrix is well aligned with the way we conduct normalized heavy edge matching in Eq. 1. We next discuss how the weight matrix Θ(k) is learned. Intricacies of Refinement Learning : The learning of the refinement model is essentially learning Θ(k) for each k ∈ [1, l] according to Eq. 4. Here we study how to design the learning task and construct the loss function. Since the graph convolution model H(l)(·) aims to predict the embeddings Ei on graph Gi, we can directly run a base embedding on Gi to generate the “ground-truth” embeddings and use the difference between these embeddings and the predicted ones as the loss function for training. We propose to learn Θ(k) on the coarsest graph and reuse them across all the levels for refinement. Specifically, we can define the loss function as the mean square error as follows L = 1 |Vm| ∥∥∥Em −H(l)(Mm,m+1Em+1, Am)∥∥∥2 . (6) We refer to the learning task associated with the above loss function as double-base embedding learning. We point out, however, there are two key drawbacks to this method. First of all, the above loss function requires one more level of coarsening to construct Gm+1 and an extra base embedding on Gm+1. These two steps, especially the latter, introduce nonnegligible overheads to the MILE framework, which contradicts our motivation of scaling up graph embedding. More importantly, Em might not be a desirable “ground truth” for the refined embeddings. This is because most of the embedding methods are invariant to an orthogonal transformation of the embeddings, i.e., the embeddings can be rotated by an arbitrary orthogonal matrix (Hamilton et al., 2017). In other words, the embedding spaces of graph Gm and Gm+1 can be totally different since the two base embeddings are learned independently. Even if we follow the paradigm in (Chen et al., 2018) and conduct base embedding on Gm using the simple projected embeddings from Gm+1 (Epm) as initialization, the embedding space does not naturally generalize and can drift during re-training. One possible solution is to use an alignment procedure to force the embeddings to be aligned between the two graphs (Hamilton et al., 2016). But it could be very expensive. In this paper, we propose a very simple method to address the above issues. Instead of conducting an additional level of coarsening, we construct a dummy coarsened graph by simply copying Gm, i.e., Mm,m+1 = I and Gm+1 = Gm. By doing this, we not only reduce one iteration of graph coarsening, but also avoid performing base embedding on Gm+1 simply because Em+1 = Em. Moreover, the embeddings of Gm and Gm+1 are guaranteed to be in the same space in this case without any drift. With this strategy, we change the loss function for model learning as follows L = 1 |Vm| ∥∥∥Em −H(l)(Em, Am)∥∥∥2 . (7) With the above loss function, we adopt gradient descent with back-propagation to learn the parameters Θ(k), k ∈ [1, l]. In the subsequent refinement steps, we apply the same set of parameters Θ(k) to infer the refined embeddings. We point out that the training of the refinement model is rather efficient as it is done on the coarsest graph. 
The embeddings refinement process involves merely sparse matrix multiplications (Eq. 5) and is relatively affordable compared to conducting embedding on the original graph. With these components, we summarize the full MILE framework in Algorithm 1; its time complexity is analyzed in Appendix A.2.

Algorithm 1 Multi-Level Algorithm for Graph Embedding
Input: an input graph G0 = (V0, E0), the number of coarsening levels m, and a base embedding method f(·).
Output: graph embeddings E0 on G0.
1: Coarsen G0 into G1, G2, ..., Gm using the proposed hybrid matching method.
2: Perform base embedding on the coarsest graph Gm (see Section 4.2).
3: Learn the weights Θ^{(k)} using the loss function in Eq. 7.
4: for i = (m−1), ..., 0 do
5:   Compute the projected embeddings E_i^p on Gi.
6:   Use Eq. 4 and Eq. 5 to compute the refined embeddings Ei.
7: Return graph embeddings E0 on G0.
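The sketch below restates Algorithm 1 as a schematic Python driver. `coarsen`, `base_embed`, `learn_refiner`, and `refine` are placeholders for the components of Sections 4.1-4.3 (they are not functions defined in the paper), and the matching matrices are assumed to support matrix multiplication.

```python
def mile(G0, m, base_embed, coarsen, learn_refiner, refine):
    """Algorithm 1 sketch: coarsen, base-embed, then iteratively refine."""
    graphs, matchings = [G0], []
    for _ in range(m):                       # step 1: hybrid-matching coarsening
        G_next, M = coarsen(graphs[-1])
        graphs.append(G_next)
        matchings.append(M)
    E = base_embed(graphs[m])                # step 2: embed the coarsest graph
    thetas = learn_refiner(E, graphs[m])     # step 3: fit Theta via Eq. (7)
    for i in range(m - 1, -1, -1):           # steps 4-6: project, then GCN refine
        E = refine(matchings[i] @ E, graphs[i], thetas)
    return E                                 # embeddings on G0
```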
5 Experiments and Analysis

5.1 Experimental Configuration

The datasets used in our experiments are shown in Table 1. [Table 1: dataset statistics (# nodes, # edges, # classes).] The Yelp dataset was preprocessed by us following procedures similar to (Huang et al., 2017) (raw data: https://www.yelp.com/dataset_challenge/dataset). To demonstrate that MILE can work with different graph embedding methods, we explore several popular methods, namely DeepWalk (Perozzi et al., 2014), Node2vec (Grover & Leskovec, 2016), Line (Tang et al., 2015), GraRep (Cao et al., 2015) and NetMF (Qiu et al., 2018). To evaluate the quality of the embeddings, we follow the typical approach in existing work and perform multi-label node classification (Perozzi et al., 2014; Grover & Leskovec, 2016).

5.2 MILE Framework Performance

We first evaluate the performance of our MILE framework when applied to different graph embedding methods. Figure 3 summarizes the performance of MILE on the different datasets with the various base embedding methods across coarsening levels (exact numbers are in Table 3 of the Appendix; we discuss the results on Yelp later). Note that m = 0 corresponds to the original embedding method.

[Figure 3: Micro-F1 (first row) and running time (second row, log scale) versus the number of coarsening levels m, for MILE with each base embedding method on (a) PPI, (b) Blog, (c) Flickr, and (d) YouTube.]

We make the following observations:
• MILE is scalable. MILE greatly boosts the speed of the explored embedding methods. With a single level of coarsening (m = 1), we achieve speedups ranging from 1.5× to 3.4× (on PPI, Blog, and Flickr) while improving qualitative performance. Larger speedups are typically observed on GraRep and NetMF. Increasing the coarsening level m to 2, the speedup increases further (up to 14.4×), while the quality of the embeddings remains comparable to the original methods as reflected by Micro-F1. On YouTube, for coarsening levels 6 and 8, we observe more than 10× speedup for DeepWalk, Node2Vec and LINE. For NetMF on YouTube, the speedup is even larger: original NetMF runs out of memory within 9.5 hours, while MILE (NetMF) takes only around 20 minutes (m = 8).
• MILE improves quality. For the smaller coarsening levels across all the datasets and methods, MILE-enhanced embeddings almost always offer a qualitative improvement over the original embedding method as evaluated by the Micro-F1 score (by as much as 24.2%, with many other configurations showing a 10%+ increase). Examples include MILE (DeepWalk, m = 1) on Blog/PPI, MILE (Line, m = 1) on PPI, and MILE (NetMF, m = 1) on PPI/Blog/Flickr. Even with a higher number of coarsening levels (m = 2 for PPI/Blog/Flickr; m = 6, 8 for YouTube), MILE, in addition to being much faster, can still improve qualitatively over the original methods on most of the datasets, e.g., MILE (NetMF, m = 2) over NetMF on PPI, Blog, and Flickr. We conjecture the observed quality improvement arises because the embeddings come to rely on a more holistic view of the graph.
• MILE supports multiple embedding strategies. We make some embedding-specific observations here. We observe that MILE consistently improves both the quality and the efficiency of NetMF on all four datasets (on YouTube, NetMF alone runs out of memory). For the largest dataset, the speedups afforded exceed 30-fold. For GraRep, while speedups with MILE are consistently observed, the qualitative improvements, if any, are smaller (for both YouTube and Flickr, the base method runs out of memory). For Line, even though its time complexity is linear in the number of edges (Tang et al., 2015), applying the MILE framework on top of it still generates significant speedups (likely because the complexity of Line contains a larger constant factor k than MILE). On the other hand, MILE on top of Line generates better-quality embeddings on PPI and YouTube while falling a bit short on Blog and Flickr. For DeepWalk and Node2Vec, we again observe consistent improvements in scalability (up to 11-fold on the larger datasets) as well as quality using MILE with a few levels of coarsening. However, when the coarsening level is increased, the additional speedup afforded (up to 17-fold) comes at a mixed cost to quality (Micro-F1 drops slightly).
• Impact of varying coarsening levels on MILE. When the coarsening level m is small, MILE tends to significantly improve the quality of embeddings while taking much less time. From m = 0 to m = 1, we see a clear jump in the Micro-F1 score on all the datasets across the base embedding methods. This observation is more evident on larger datasets (Flickr and YouTube). On YouTube, MILE (DeepWalk) with m = 1 increases the Micro-F1 score by 5.3% while consuming only half the time of the original DeepWalk. MILE (DeepWalk) continues to generate embeddings of better quality than DeepWalk until m = 7, where the speedup is 13×. As the coarsening level m in MILE increases, the running time drops dramatically while the quality of the embeddings decreases only slightly.
The running time decreases at an almost exponential rate (note the logarithmic scale on the y-axis in the second row of Figure 3), while the Micro-F1 score descends much more slowly (first row of Figure 3), and most of the resulting embeddings are still better than the original methods. This shows that MILE not only consolidates the existing embedding methods, but also provides a nice trade-off between effectiveness and efficiency.

5.3 Comparing MILE with HARP

HARP is a multi-level method primarily aimed at improving the quality of graph embeddings. We compare HARP with our MILE framework using DeepWalk and Node2vec as the base embedding methods (HARP code: https://github.com/GTmac/HARP). Table 2 shows the performance of these two methods on the four datasets (coarsening level is 1 on PPI/Blog/Flickr and 6 on YouTube).

Table 2: MILE vs. HARP (Micro-F1 and running time).

                 PPI              Blog             Flickr           YouTube
                 Mi-F1   Time     Mi-F1   Time     Mi-F1   Time     Mi-F1   Time
DeepWalk (DW)    23.0    2.4      37.0    8.0      40.0    50.0     45.2    604.8
MILE (DW)        25.6    1.2      42.9    4.6      40.4    34.4     46.1    55.2
HARP (DW)        24.1    3.0      41.3    9.8      40.6    78.2     46.6    1727.7
Node2Vec (NV)    24.3    4.0      39.1    13.0     40.5    78.2     45.5    951.2
MILE (NV)        25.9    1.7      42.8    6.9      40.7    50.5     46.3    83.5
HARP (NV)        22.3    3.9      36.2    13.16    40.5    101.1    47.2    1981.3

From the table we observe that MILE generates embeddings of quality comparable to HARP: MILE performs much better than HARP on PPI and Blog, marginally better on Flickr, and marginally worse on YouTube. However, MILE is significantly faster than HARP on all four datasets (e.g., on YouTube, MILE affords a 31× speedup), because HARP requires running the whole embedding algorithm on each coarsened graph, which introduces a huge computational overhead. Note that for PPI and Blog, MILE with NetMF (not shown) as its base embedding produces the best Micro-F1 of 26.9 and 43.8, respectively. This highlights another advantage of MILE over HARP: it is agnostic to the base embedding method.

5.4 MILE: Large Graph Embedding

We now explore the scalability of MILE on the large Yelp dataset. None of the five graph embedding methods studied in this paper can successfully conduct graph embedding on Yelp within 60 hours on a modern machine with 28 cores and 128 GB RAM. Even extending the run-time deadline to 100 hours, only DeepWalk and Line barely finish. Leveraging the proposed MILE framework makes it much easier to perform graph embedding at this scale (see Figure 4 for the results).

[Figure 4: Micro-F1 and running time (minutes, log scale) of MILE on the Yelp dataset as the number of coarsening levels varies from 0 to 22, for each base embedding method.]

We observe that MILE significantly reduces the running time and improves the Micro-F1 score. For example, the Micro-F1 scores of the original DeepWalk and Line are 0.640 and 0.625 respectively, each taking more than 80 hours. Using MILE with m = 4, the Micro-F1 score improves to 0.643 (DeepWalk) and 0.642 (Line) while achieving speedups of around 1.6×. Moreover, MILE reduces the running time of DeepWalk from 53 hours (coarsening level 4) to 2 hours (coarsening level 22) while reducing the Micro-F1 score by just 1% (from 0.643 to 0.634). Meanwhile, there is no change in the Micro-F1 score from coarsening level 4 to 10, where the running time improves by a factor of two. These results affirm the power of the proposed MILE framework in scaling up graph embedding algorithms while generating quality embeddings.
6 Conclusion

In this work, we propose a novel multi-level embedding (MILE) framework to scale up graph embedding techniques without modifying them. Our framework incorporates existing embedding techniques as black boxes and significantly improves the scalability of extant methods by reducing both running time and memory consumption. Additionally, MILE provides a lift in the quality of node embeddings in most cases. A fundamental contribution of MILE is its ability to learn a refinement strategy that depends on both the underlying graph properties and the embedding method in use. In the future, we plan to generalize MILE to information-rich graphs and to employ MILE in more applications.

A Appendix

A.1 Experimental Configuration Details

A.1.1 Datasets
The datasets used in our experiments are:
• PPI is a Protein-Protein Interaction graph constructed from the interplay activity between proteins of Homo sapiens, where the labels represent biological states.
• Blog is a network of social relationships of bloggers on BlogCatalog, where the labels indicate the interests of the bloggers.
• Flickr is a social network of the contacts between users on flickr.com, with labels denoting interest groups.
• YouTube is a social network between users on YouTube, where labels represent the genres of groups subscribed to by users.
• Yelp is a social network of friends on Yelp, where labels indicate the business categories on which the users review.

A.1.2 Baseline Methods
To demonstrate that MILE can work with different graph embedding methods, we explore several popular methods:
• DeepWalk (DW) (Perozzi et al., 2014): Following the original work, we set the length of random walks to 80, the number of walks per node to 10, and the context window size to 10.
• Node2Vec (NV) (Grover & Leskovec, 2016): We use the same settings as DeepWalk for the common hyper-parameters, while setting p = 4.0 and q = 1.0, which we found empirically to generate better results across all the datasets.
• Line (LN) (Tang et al., 2015): This method aims at preserving first-order and second-order proximities and has been applied to large-scale graphs. We learn the first-order and second-order embeddings separately and concatenate them into a unified embedding.
• GraRep (GR) (Cao et al., 2015): This method considers different powers (up to k) of the adjacency matrix to preserve higher-order graph proximity. It uses SVD decomposition to generate the low-dimensional representations of nodes. We set k = 4 as suggested in the original work.
• NetMF (NM) (Qiu et al., 2018): A recent effort that performs graph embedding via matrix factorization. We set the window size to 10 and the rank h to 1024, and leverage the approximate version, as suggested and reported by the authors.

A.1.3 MILE-specific Settings
For all the above base embedding methods, we set the embedding dimensionality d to 128. When applying our MILE framework, we vary the coarsening level m from 1 to 10 whenever possible. For the graph convolution network model, the self-loop weight λ is set to 0.05, the number of hidden layers l is 2, tanh(·) is used as the activation function, the learning rate is set to 0.001, and the number of training epochs is 200. The Adam optimizer is used for model training.
A.1.4 System Specification
The experiments were conducted on a machine running Linux with an Intel Xeon E5-2680 CPU (28 cores, 2.40GHz) and 128 GB of RAM. We implement our MILE framework in Python. Our code and data will be made available for replicability. For all five base embedding methods, we adapt the original code from the authors (DeepWalk: https://github.com/phanein/deepwalk; Node2Vec: http://snap.stanford.edu/node2vec/; Line: https://github.com/tangjianpku/LINE; GraRep: https://github.com/thunlp/OpenNE; NetMF: https://github.com/xptree/NetMF). We additionally use the TensorFlow package for the embeddings refinement learning component. We leverage the available parallelism (on 28 cores) for each method (e.g., the generation of random walks in DeepWalk and Node2Vec, the training of the refinement model in MILE, etc.).

A.1.5 Evaluation Metrics
To evaluate the quality of the embeddings, we follow the typical method in existing work and perform multi-label node classification (Perozzi et al., 2014; Grover & Leskovec, 2016). Specifically, after the graph embeddings are learned (labels are not used for this part), we run a 10-fold cross validation using the embeddings as features and report the average Micro-F1 and average Macro-F1. We also record the end-to-end wall-clock time consumed by each method for scalability comparisons.

A.2 Time Complexity
It is non-trivial to derive the exact time complexity of MILE, as it depends on the graph structure, the chosen base embedding method, and the convergence rate of the GCN model training. Here, we provide a rough estimate. For simplicity, assume the number of vertices and the number of edges are reduced by factors α and β respectively at each coarsening step (α > 1.0 and β > 1.0), i.e., V_i = (1/α) V_{i-1} and E_i = (1/β) E_{i-1}; empirically, we found α and β to lie in the range [1.5, 2.0]. With m levels of coarsening, the coarsening complexity is approximately O( (1 − 1/β^m) / (1 − 1/β) · E ), and since 1/β^m is small, this reduces to O( (β / (β − 1)) · E ). For the base embedding phase, if the embedding algorithm has time complexity T(V, E), the complexity of this phase is T(V/α^m, E/β^m). The refinement phase splits into two parts: training the GCN model and inferring embeddings with it. The former has a complexity similar to the original GCN and can be denoted O(k_1 · E/β^m) (Kipf & Welling, 2017), where k_1 is a small constant related to the embedding dimensionality and the number of training epochs. The inference part is simply sparse matrix multiplication using Eq. 4, with time complexity O(k_2 · E_i) when refining the embeddings on graph G_i, where k_2 is an even smaller constant (k_2 < k_1). As a result, the time complexity of the whole refinement phase is O( k_1 · E/β^m + k_2 · (E + E/β + ... + E/β^{m-1}) ) ≈ O(k_3 · E), where k_3 is a small constant. Overall, for an embedding algorithm of time complexity T(V, E), the MILE framework reduces the cost to T(V/α^m, E/β^m) + O(k · E). This is a significant improvement considering T(V, E) is usually very large. The reduction in time complexity is attributed to running the embedding learning and refinement model training on the coarsest graph. In addition, the overhead introduced by the coarsening phase and recursive embedding refinement is relatively small (linear in the number of edges E).
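As a rough illustration of this cost model, the sketch below plugs in hypothetical values (α = β = 2, a base method with quadratic cost in V); all numbers are assumptions for illustration, not measurements from the paper.

```python
def mile_cost_estimate(V, E, m, base_cost, alpha=2.0, beta=2.0, k=20):
    """Rough per-phase cost model from Appendix A.2 (arbitrary units)."""
    coarsening = (beta / (beta - 1.0)) * E            # O(beta/(beta-1) * E)
    base = base_cost(V / alpha**m, E / beta**m)       # T(V/alpha^m, E/beta^m)
    refinement = k * E                                # O(k * E)
    return coarsening + base + refinement

quadratic = lambda V, E: V**2 * 1e-3                  # hypothetical base method
V, E = 1_000_000, 5_000_000
full = quadratic(V, E)
for m in (1, 2, 4, 8):
    print(m, full / mile_cost_estimate(V, E, m, quadratic))  # estimated speedup
```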
Note that the constant factor k in the complexity term is usually small; we empirically found it to be on the scale of tens. Because of this, even when the complexity of the original embedding algorithm is linear in E, our MILE framework can still potentially speed up the embedding process, because the complexity of MILE contains a smaller constant factor k (see Sec. 5.2 for the experiment applying MILE to LINE). Furthermore, it is worth noting that many existing embedding strategies involve hyper-parameter tuning for best performance, especially methods based on neural networks (e.g., DeepWalk, Node2Vec, etc.). This in turn requires the algorithm to be run repeatedly, so any runtime savings from applying MILE are magnified across multiple runs with different hyper-parameter settings.

A.3 MILE Performance
Detailed performance numbers are available in Table 3, which also reports the speedup over the original method. “N/A” indicates that the method ran out of memory; in that case we show the running time spent before failure.

A.4 MILE Drilldown: Design Choices
We now study the role of the design choices we make within the MILE framework related to the coarsening and refinement procedures. To this end, we examine alternative design choices and systematically compare their performance. The alternatives we consider are:
• Random Matching (MILE-rm): In each coarsening iteration, we repeatedly pick a random pair of connected nodes as a match and merge them into a super-node until no more matchings can be found. The rest of the algorithm is the same as MILE.
• Simple Projection (MILE-proj): We replace our embedding refinement model with a simple projection, i.e., we directly copy the embedding of a super-node to its original node(s) without any refinement (see Eq. 3).
• Averaging Neighborhoods (MILE-avg): The refined embedding of each node is a weighted average of the node embeddings of its neighborhood (weighted by the edge weights). This can be regarded as an embeddings-propagation method. We add a self-loop to each node (with self-loop weights tuned to the best performance) and conduct the embeddings propagation for two rounds.
• Untrained Refinement Model (MILE-untr): Instead of training the refinement model to minimize the loss in Eq. 7, this baseline uses a fixed set of randomly generated values for the parameters Θ^{(k)} without training (the other parts of the model in Eq. 4 are the same, including Ã and D̃).
• Double-base Embedding for Refinement Training (MILE-2base): This method replaces the loss function in Eq. 7 with the alternative in Eq. 6 for model training. It conducts one more level of coarsening and base embedding (level m + 1), from which the embeddings are projected to level m and used as input for model training.
• GraphSAGE as Refinement Model (MILE-gs): This replaces the graph convolution network in our refinement method with GraphSAGE (Hamilton et al., 2017) (code adapted from https://github.com/williamleif/GraphSAGE). We choose max-pooling for aggregation and set the number of sampled neighbors to 100, as suggested by the authors. Also, concatenation is conducted instead of replacement during propagation.
Table 4 shows the performance comparison of these methods across the four datasets.
Here, we focus on using DeepWalk and NetMF for base embedding with a smaller coarsening level (m = 1 for PPI, Blog, and Flickr; m = 6 for YouTube). Results are similar for the other embedding options we consider. We summarize the key findings from Table 4 as follows:
• The matching methods used within MILE offer a qualitative benefit at minimal cost to execution time. Comparing MILE with MILE-rm on all the datasets, we see that MILE generates better embeddings than MILE-rm with either DeepWalk or NetMF as the base embedding method. Though MILE-rm is slightly faster than MILE due to its random matching, its Micro-F1 and Macro-F1 scores are consistently lower than those of MILE.
• The graph convolution based refinement learning methodology in MILE is particularly effective. The simple projection-based MILE-proj performs significantly worse than MILE. The other two variants (MILE-avg and MILE-untr), which do not train the refinement model at all, also perform much worse than the proposed method. Note MILE-untr is the same as MILE except that it uses a default set of parameters instead of learning them. Clearly, the model learning part of our refinement method is a fundamental contributor to the effectiveness of MILE. Through training, the refinement model is tailored to the specific graph under the base embedding method in use. The overhead of this learning (comparing MILE with MILE-untr) varies with the base embedding employed (on the YouTube dataset, it is an insignificant 1.2% for DeepWalk, while being up to 20% for NetMF), but is worth it for the qualitative benefits (Micro-F1 up from 30.2 to 40.9 with NetMF on YouTube).
• Graph convolution refinement learning outperforms GraphSAGE. Replacing the graph convolution network with GraphSAGE for embeddings refinement, MILE-gs does not perform as well as MILE. It is also computationally more expensive, partially due to its reliance on embedding concatenation, instead of replacement, during embedding propagation (higher model complexity).
• Double-base embedding learning is not effective. In Sec. 4.3, we discussed the issues with unaligned embeddings in the double-base embedding method for refinement model learning. The performance gap between MILE and MILE-2base in Table 4 provides empirical evidence supporting our argument. This gap is likely caused by the fact that the base embeddings of level m and level m + 1 might not lie in the same embedding space (they may differ by an orthogonal transformation) (Hamilton et al., 2017). As a result, using the projected embeddings E_m^p as input for model training (MILE-2base) is not as good as directly using E_m (MILE). Moreover, Table 4 shows that the additional round of base embedding in MILE-2base introduces a non-trivial overhead: on YouTube, the running time of MILE-2base is 1.6 times that of MILE.

A.5 MILE Drilldown: Memory Consumption
We also study the impact of MILE on reducing memory consumption. For this purpose, we focus on MILE (GraRep) and MILE (NetMF), with GraRep and NetMF as base embedding methods respectively. Both are embedding methods based on matrix factorization, which can involve a dense objective matrix and thus be rather memory-intensive.
We do not explore DeepWalk and Node2Vec here since their embedding learning methods generate truncated random walks (training data) on the fly, with almost negligible memory consumption (compared to the space for storing the graph and the embeddings). Figure 5 shows the memory consumption of MILE (GraRep) and MILE (NetMF) as the coarsening level increases on Blog (results on other datasets are similar). We observe that MILE significantly reduces memory consumption as the coarsening level increases. Even with one level of coarsening, the memory consumption of GraRep and NetMF reduces by 64% and 42% respectively. The dramatic reduction continues as the coarsening level increases until it reaches 4, at which point the memory consumption is dominated by the storage of the graph and the embeddings. This memory reduction is consistent with our intuition, since both the number of rows and the number of columns in the objective matrix are roughly halved by one level of coarsening.

A.6 MILE Drilldown: Discussion on reusing Θ^{(k)} across all levels
Similar to GCN, Θ^{(k)} is a matrix of filter parameters of size d × d (where d is the embedding dimensionality). Eq. 4 defines how the embeddings are propagated during refinement, parameterized by Θ^{(k)}. Intuitively, Θ^{(k)} defines how different embedding dimensions interact with each other during embedding propagation. This interaction depends on the graph structure and the base embedding method, and can be learned from the coarsest level. Ideally, we would learn a separate Θ^{(k)} for every pair of consecutive levels, but this is not practical: it would be expensive as the graphs become more fine-grained (and would defeat our purpose of scaling up graph embedding). Sharing parameters across levels is thus a trade-off between efficiency and effectiveness. To some extent, it is similar to the original GCN (Kipf & Welling, 2017), where the authors share the same filter parameters Θ^{(k)} over the whole graph (as opposed to using different Θ^{(k)} for different nodes; see Eqs. (6) and (7) in (Kipf & Welling, 2017)). Moreover, we empirically found that this works well enough and is much more efficient. Table 4 shows that if we do not share the learned Θ^{(k)} values and instead use random values for Θ^{(k)} during refinement, the quality of the embeddings is much worse (see baseline MILE-untr).

A.7 MILE Drilldown: Discussion on choice of embedding methods
We chose the base embedding methods because they are either recently proposed (NetMF, introduced in 2018) or widely used (DeepWalk, Node2Vec, LINE). By showing the performance gain of using MILE on top of these methods, we aim to ensure the contribution of this work is of broad interest to the community. We also reiterate that these methods are quite different in nature:
• DeepWalk (DW) and Node2vec (N2V) rely on random walks for latent feature representations.
• LINE learns an embedding that directly optimizes a carefully constructed objective function preserving both first- and second-order proximity among nodes in the embedding space.
• GraRep constructs multiple objective matrices based on high orders of the random walk Laplacian, factorizes each objective matrix to generate embeddings, and then concatenates the generated embeddings to form the final embedding.
• NetMF constructs an objective matrix based on the random walk Laplacian and factorizes it to generate the embeddings.
Indeed, NetMF (Qiu et al., 2018; Levy & Goldberg, 2014) with an appropriately constructed objective matrix has been shown to approximate DW, N2V and LINE, allowing them to be cast as implicit matrix factorization of approximated matrices. There are limitations to such approximations (shown in a related context by (Arora et al., 2016)), the most important one being the requirement of a sufficiently large embedding dimensionality. Additionally, we note that while unification is possible under such a scenario, the methods based on matrix factorization are quite different from the original methods and place a much larger premium on space (memory consumption); in fact, this is reflected in our inability to run NetMF and GraRep in many cases without incorporating them within MILE.

A.8 MILE Drilldown: Discussion on extending MILE to directed graphs
As pointed out by (Chung, 2005), one can construct random-walk Laplacians for a directed graph, thus allowing approaches like NetMF to accommodate such settings. Another simple solution is to symmetrize the graph while accounting for directionality (see the sketch at the end of this appendix). Once the graph is symmetrized, any of the embedding strategies we discuss can be employed within the MILE framework (including the coarsening technique). There are many ideas for symmetrization of directed graphs (see, for example, the work described by (Gleich, 2006) or (Satuluri & Parthasarathy, 2011)).

A.9 MILE Drilldown: Discussion on effectiveness of SEM
The effectiveness of structurally equivalent matching (SEM) is highly dependent on graph structure, but in general 5%-20% of nodes are structurally equivalent (most of which are low-degree nodes). For example, during the first level of coarsening, YouTube has 172,906 nodes (or 86,453 pairs) out of 1,134,890 that are found to be structurally equivalent (around 15%); Yelp has 875,236 nodes (or 437,618 pairs) out of 8,938,630 (around 10%). In fact, more nodes become involved in SEM as it is run iteratively at each coarsening level.
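To make the symmetrization options of Appendix A.8 concrete, here is a minimal NumPy sketch of two common directed-graph symmetrizations. These particular variants (union and bibliometric) are assumptions drawn from the general literature the appendix cites (e.g., Satuluri & Parthasarathy, 2011), not a prescription from this paper.

```python
import numpy as np

def symmetrize(A, mode="union"):
    """Symmetrize a (possibly weighted) directed adjacency matrix A."""
    if mode == "union":          # keep an edge if it exists in either direction
        return np.maximum(A, A.T)
    if mode == "bibliometric":   # co-citation plus co-reference strengths
        return A @ A.T + A.T @ A
    raise ValueError(f"unknown mode: {mode}")
```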
1. What is the focus of the paper regarding node embeddings for large-scale graphs?
2. What are the strengths of the proposed multi-level framework, particularly in terms of effectiveness and efficiency?
3. What are the weaknesses of the paper, especially regarding its heuristic nature and some claims that are wrong according to existing literature?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any questions or concerns regarding specific aspects of the proposed method, such as the use of Equation (7) for learning parameters or the approach to graph coarsening?
Review
This paper proposes a multi-level framework for learning node embeddings for large-scale graphs. The authors first coarsen the graph into several levels of subgraphs. The node embeddings of the lower-level (finer) graphs are obtained from the node embeddings of the higher-level (coarser) graphs with a graph convolutional neural network. By iteratively applying this procedure, the node embeddings of the original graph can be obtained. Experimental results on several networks (including one network with ~10M nodes) demonstrate the effectiveness and efficiency of the proposed method over existing state-of-the-art approaches.

Strengths:
- Scaling up node embedding methods is a very important and practical problem.
- Experiments show that the proposed method seems to be very effective.

Weaknesses:
- The proposed method seems to be very heuristic.
- Some claims in the paper are wrong according to the existing literature.

Overall, the paper is well written and easy to follow. The proposed method is simple but heuristic. However, its performance seems to be quite effective according to the experiments. The reasons why the method works need to be better explained, which could significantly improve the quality of the paper and its impact in the future.

Details:
-- In the introduction, the claim "However, such methods rarely scale to large datasets (e.g., graphs with over 1 million nodes) since they are computationally expensive and often memory intensive" is not TRUE! The LINE paper (Tang et al., 2015) shows that the LINE model can easily scale to networks with one million nodes within a few hours.
-- The authors use Equation (7) to learn the parameters of the graph convolutional neural network. I am really surprised that this method works, especially since the learned parameters are shared across different layers.
-- Have you tried and compared different approaches to graph coarsening?
-- In Figure 2(a), according to Equation (1), in the second step, shouldn't the weight of the edge between A and DE be 2/(sqrt(3)*sqrt(4))?
ICLR
Title
Two-Timescale Networks for Nonlinear Value Function Approximation

Abstract
A key component for many reinforcement learning agents is to learn a value function, either for policy evaluation or control. Many of the algorithms for learning values, however, are designed for linear function approximation—with a fixed basis or fixed representation. Though there have been a few sound extensions to nonlinear function approximation, such as nonlinear gradient temporal difference learning, these methods have largely not been adopted, eschewed in favour of simpler but not sound methods like temporal difference learning and Q-learning. In this work, we provide a two-timescale network (TTN) architecture that enables linear methods to be used to learn values, with a nonlinear representation learned at a slower timescale. The approach facilitates the use of algorithms developed for the linear setting, such as data-efficient least-squares methods, eligibility traces and the myriad of recently developed linear policy evaluation algorithms, to provide nonlinear value estimates. We prove convergence for TTNs, with particular care given to ensure convergence of the fast linear component under potentially dependent features provided by the learned representation. We empirically demonstrate the benefits of TTNs, compared to other nonlinear value function approximation algorithms, both for policy evaluation and control.

1 INTRODUCTION

Value function approximation—estimating the expected returns from states for a policy—is heavily reliant on the quality of the representation of state. One strategy has been to design a basis—such as radial basis functions (Sutton and Barto, 1998) or a Fourier basis (Konidaris et al., 2011)—for use with a linear function approximator and temporal difference (TD) learning (Sutton, 1988). For low-dimensional observation vectors, this approach has been effective, but it can be onerous to extend to high-dimensional observations, potentially requiring significant domain expertise. Another strategy has been to learn the representation, such as with basis adaptation or neural networks. Though there is still the need to specify the parametric form, learning these representations alleviates the burden of expert specification. Further, it is more feasible to scale to high-dimensional observations, such as images, with neural networks (Mnih et al., 2015; Silver et al., 2016). Learning representations necessitates algorithms for nonlinear function approximation.

Despite the deficiencies in specification for fixed bases, linear function approximation for estimating value functions has several benefits over nonlinear estimators. Linear methods enable least-squares approaches, which can be much more data-efficient for policy evaluation (Bradtke and Barto, 1996; Szepesvari, 2010; van Seijen and Sutton, 2015), as well as robust to meta-parameters (Pan et al., 2017). Linear algorithms can also make use of eligibility traces, which can significantly speed learning (Sutton, 1988; Dann et al., 2014; White and White, 2016), but which have not been extended to nonlinear value function approximation. Additionally, a variety of algorithms have been derived for the linear setting, both for on-policy and off-policy learning (Sutton et al., 2009; Maei, 2011; van Seijen and Sutton, 2014; van Hasselt et al., 2014; Mahadevan et al., 2014; Sutton et al., 2016; Mahmood et al., 2017).
These linear methods have also been well explored theoretically (Tsitsiklis and Van Roy, 1997; Maei, 2011; Mahmood and Sutton, 2015; Yu, 2015) and empirically (Dann et al., 2014; White and White, 2016), with some insights into improvements from gradient methods (Sutton et al., 2009), true-online traces (van Seijen and Sutton, 2014) and emphatic weightings (Sutton et al., 2016). These algorithms are easy to implement, with relatively simple objectives. Objectives for nonlinear value function approximation, on the other hand, can be quite complex (Maei et al., 2009), resulting in more complex algorithms (Menache et al., 2005; Di Castro and Mannor, 2010; Bhatnagar et al., 2013) or requiring a primal-dual formulation, as has been done for control (Dai et al., 2017).

In this work, we pursue a simple strategy to take advantage of the benefits of linear methods while still learning the representation. The main idea is to run two learning processes in parallel: the first learns nonlinear features using a surrogate loss, and the second estimates the value function as a linear function of those features. We show that these Two-timescale Networks (TTNs) converge, because the features change on a sufficiently slow scale that they are effectively fixed for the fast linear value function estimator. Similar ideas have previously been explored for basis adaptation, but without this key aspect of TTNs—namely, the separation of the loss for the representation and the value function. This separation is critical because it enables simpler objectives—for which the gradient can be easily sampled—to drive the representation, while still enabling use of the mean squared projected Bellman error (MSPBE)—on which all the above linear algorithms are based. This separation avoids the complexity of the nonlinear MSPBE, but maintains the useful properties of the (linear) MSPBE. A variety of basis adaptation approaches have used a two-timescale approach, but with the same objective for the representation and the values (Menache et al., 2005; Di Castro and Mannor, 2010; Bhatnagar et al., 2013; J et al., 2016). Yu and Bertsekas (2009) provided algorithms for basis adaptation using other losses, such as the Bellman error with Monte Carlo samples, taking derivatives through fixed-point solutions for the value function. Levine et al. (2017) periodically compute a closed-form least-squares solution for the last layer of a neural network, with a Bayesian update to prevent too much change. Because these methods did not separate value learning and basis adaptation, the resulting algorithms are more complex. The strategy of using two different heads—one to drive the representation and one to learn the values—has yet to be systematically explored.

We show that TTNs are a promising direction for nonlinear function approximation, allowing us to leverage linear algorithms while retaining the flexibility of nonlinear function approximators. We first discuss a variety of possible surrogate losses and their potential for learning a useful representation. We then show that TTNs converge, despite the fact that a linear algorithm is used with a changing representation. This proof is similar to previous convergence proofs for policy evaluation, but with a relaxation of the requirement that features be independent, which is unlikely to hold for learned features. We then show empirically that TTNs are effective compared to other nonlinear value function approximations and that they can exploit several benefits of linear value approximation algorithms.
In particular, for both low-dimensional and high-dimensional (image-based) observations, we show (a) the utility of least-squares (or batch) methods, (b) advantages from eligibility traces and (c) gains from being able to select amongst different linear policy evaluation algorithms. We demonstrate that TTNs can be effective for control with neural networks, enabling the use of fitted Q-iteration within TTNs as an alternative to target networks.

2 BACKGROUND

We assume the agent acts in a finite Markov Decision Process (MDP), with notation from (White, 2017). The dynamics of the MDP are defined by the 3-tuple (S, A, P), where S is the set of states, A the set of actions and P : S × A × S → [0, 1] the transition probability function. The task in this environment is defined by a reward function R : S × A × S → R and a discount function γ : S × A × S → [0, 1]. At each time step, the agent takes an action A_t according to a policy π : S × A → [0, 1] and the environment returns reward R_{t+1}, next state S_{t+1} and discount γ_{t+1}.

The goal in policy evaluation is to compute the value function: the expected sum of discounted rewards from every state under a fixed policy π. The value function V_π : S → R is defined recursively from each state s ∈ S as

V_π(s) := E[R_{t+1} + γ_{t+1} V_π(S_{t+1}) | S_t = s] = Σ_{a∈A} π(s, a) Σ_{s'∈S} P(s, a, s') (r + γ V_π(s')).    (1)

When using linear function approximation, this goal translates into finding parameters w ∈ R^d to approximate the value function

V̂(s) := x(s)^T w ≈ V_π(s),  where x : S → R^d is a feature function.    (2)

More generally, a nonlinear function V̂(s) could be learned to estimate V_π. To formulate this learning problem, we need to consider the objective for learning the function V̂. Let V_π, V̂ ∈ R^{|S|} be the vectors of values for V_π, V̂. The recursive formula (1) defines a Bellman operator B_π whose fixed point satisfies B_π V_π = V_π. Consider a restricted value function class, such as the set of linear value functions V̂ ∈ F = {Xw | w ∈ R^d}, where X ∈ R^{|S|×d} is a matrix with the i-th row set to x(s) for the i-th state s ∈ S. Then it may no longer be possible to satisfy the recursion. Instead, an alternative is to find a projected fixed point Π_F B_π V̂ = V̂, where the projection operator Π_F projects B_π V̂ onto the space spanned by this linear basis:

Π_F V := argmin_{V̄∈F} ‖V̄ − V‖²_d,    (3)

where d ∈ R^{|S|} is a vector which weights each state in the weighted norm ‖V‖²_d = Σ_{s∈S} d(s) V(s)². Many linear policy evaluation algorithms estimate this projected fixed point, including TD (Sutton, 1988), least-squares TD (Bradtke and Barto, 1996) and gradient TD (Sutton et al., 2009).

The objective formulated for this projected fixed point, however, is more complex for nonlinear function approximation. For linear function approximation, the projection operator simplifies into a closed-form solution involving only the features X. Letting δ_t = R_{t+1} + γ V̂(S_{t+1}) − V̂(S_t), the resulting mean-squared projected Bellman error (MSPBE) can be written as

MSPBE(w) := ‖Π_F B_π V̂ − V̂‖²_d = E[δ_t x_t]^T E[x_t x_t^T]^{-1} E[δ_t x_t],    (4)

where E[δ_t x_t] = Σ_{s∈S} d(s) E[δ_t | S_t = s] x(s). For nonlinear function classes, the projection does not have a closed-form solution and may be expensive to compute. Further, the projection involves the value function parameters, so the projection changes as the parameters change. The nonlinear MSPBE and the resulting algorithm are more complex (Maei et al., 2009), and have not seen widespread use.
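For concreteness, the following is a minimal NumPy sketch of a sample-based estimate of the MSPBE in Eq. (4) from a batch of transitions; the arrays and the small ridge term added before inverting the feature covariance are assumptions for illustration.

```python
import numpy as np

def mspbe(X, X_next, R, gamma, w, ridge=1e-6):
    """Sample estimate of Eq. (4): E[dx]^T E[xx^T]^{-1} E[dx].

    X, X_next: (n, d) features of states and next states; R: (n,) rewards.
    """
    delta = R + gamma * (X_next @ w) - (X @ w)       # TD errors, shape (n,)
    b = (X * delta[:, None]).mean(axis=0)            # estimate of E[delta * x]
    C = (X.T @ X) / X.shape[0]                       # estimate of E[x x^T]
    C += ridge * np.eye(X.shape[1])                  # keep the inverse stable
    return b @ np.linalg.solve(C, b)
```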
Another option is simply to consider different objectives. However, as we discuss below, other objectives for learning the value function are either similarly difficult to optimize or provide poor value estimates. In the next section, we discuss some of these alternatives and introduce Two-timescale Networks as a different strategy to enable nonlinear value function approximation.

3 TWO-TIMESCALE NETWORKS AND SURROGATE OBJECTIVES

We first introduce Two-timescale Networks (TTNs), and then describe different surrogate objectives that can be used in TTNs. We discuss why these surrogate objectives within TTNs are useful to drive the representation, but are not good replacements for the MSPBE for learning the value function.

TTNs use two concurrent optimization processes: one for the parameters of the network θ and one for the parameters of the value function w. The value function is approximated as V̂(s) := x_θ(s)^T w, where the features x_θ : S → R^d are a parametrized function and θ ∈ R^m is adjusted to provide better features. For a neural network, θ consists of all the parameters in the hidden layers, which produce the final hidden layer x_θ(s). The two optimization processes maintain different timescales: the parameters θ for the representation change as a slow process, and the parameters w for the value estimate change as a fast process relative to θ.

The separation between these two processes could seem problematic, since the target problem—estimating the value function—is not influencing the representation. The slow process is driven by a completely separate objective than the fast process. However, the key is to select the surrogate loss for the slow process so that it is related to the value estimation process, yet straightforward to compute the gradient of. We use V̂(s) as the output of the fast part, which corresponds to the value estimate used by the agent. To distinguish, Ŷ(s) denotes the output of the slow part (depicted in Figure 1), which may or may not be an estimate of the value, as we discuss below.

Consider first the mean-squared TD error (MSTDE), which corresponds to Σ_{s∈S} d(s) E[δ_t² | S_t = s]. Notice that this does not correspond to the mean-squared Bellman error (MSBE), ‖B_π V̂ − V̂‖²_d = Σ_{s∈S} d(s) (E[δ_t | S_t = s])², for which it is more difficult to compute gradients. Using the MSTDE as a surrogate loss, with Ŷ(s) = x_θ(s)^T w̄, the slow part of the network minimizes

L_slow(θ) = min_{w̄∈R^d} Σ_{s∈S} d(s) E[δ_t(θ, w̄)² | S_t = s],  where  δ_t(θ, w̄) := R_{t+1} + γ_{t+1} x_θ(S_{t+1})^T w̄ − x_θ(S_t)^T w̄.

This slow part has its own weights w̄ associated with estimating the value function, learned instead according to the MSTDE. The advantage here is that stochastic gradient descent on the MSTDE is straightforward, with gradient δ_t ∇_{θ,w̄}[γ_{t+1} Ŷ(S_{t+1}) − Ŷ(S_t)], where ∇_{θ,w̄} Ŷ(S_t) is the gradient of the neural network, including the head of the slow part which uses the weights w̄. The MSTDE has been found to provide worse value estimates than the MSPBE—which we re-affirm in our experiments. It can, nonetheless, play a useful role as a surrogate loss, informing the representation towards estimating values.
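The sketch below illustrates one TTN update in PyTorch, with the MSTDE surrogate driving the slow part and linear TD(0) as the fast learner. Network sizes, the optimizer, and step sizes are illustrative assumptions; `opt_slow` is expected to cover both `net.body` and `net.w_bar` parameters.

```python
import torch

class TTN(torch.nn.Module):
    def __init__(self, n_obs, d=128):
        super().__init__()
        self.body = torch.nn.Sequential(torch.nn.Linear(n_obs, d), torch.nn.ReLU())
        self.w_bar = torch.nn.Linear(d, 1, bias=False)   # slow head, trained on MSTDE
        self.w = torch.zeros(d)                          # fast linear value weights

    def features(self, s):
        return self.body(s)

def ttn_step(net, opt_slow, s, r, s2, gamma, alpha=0.1):
    # Slow part: gradient descent on the sampled MSTDE, delta * grad[gamma*Y' - Y].
    y, y2 = net.w_bar(net.features(s)), net.w_bar(net.features(s2))
    loss = (r + gamma * y2 - y).pow(2).mean()
    opt_slow.zero_grad()
    loss.backward()
    opt_slow.step()
    # Fast part: linear TD(0) on the (now effectively fixed) features.
    with torch.no_grad():
        x, x2 = net.features(s).squeeze(0), net.features(s2).squeeze(0)
        delta = r + gamma * (x2 @ net.w) - (x @ net.w)
        net.w += alpha * delta * x
```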
There are a variety of other surrogate losses that could be considered, related to the value function. However, many of these losses are problematic to sample incrementally without storing large amounts of data. For example, the mean-squared return error (MSRE) could be used, which takes samples of the return and minimizes the mean-squared error to those sampled returns. Obtaining such returns requires waiting many steps, which delays updating the representation for the current state. Another alternative is the MSBE. The gradient of the nonlinear MSBE is not as complex as the gradient of the nonlinear MSPBE, because it does not involve the gradient of a projection; however, it suffers from the double sampling problem: sampling the gradient requires two independent samples. For these reasons, we explore the MSTDE as the simplest surrogate loss involving the value function.

Finally, surrogate losses could also be defined that are not directly related to the value function. Two natural choices are losses based on predicting the next state and reward. The output of the slow part could correspond to a vector of values, such as Y_t = S_{t+1} ∈ R^n or Y_t = (S_{t+1}, R_{t+1}). The ability to predict the next state and reward is intuitively useful for enabling prediction of value, and it has some theoretical grounding: Szepesvari (2010, Section 3.2.1) shows that the Bellman error is small if the features can capture a horizon of immediate rewards and expected next states. For linear encoders, Song et al. (2016) show that an optimal set of features enables prediction of the next state and reward. More generally, learning representations using auxiliary or self-supervised tasks has had some success in RL, such as using pixel control (Jaderberg et al., 2016) or classifying the temporal distance between frames (Aytar et al., 2018). In computer vision, Gidaris et al. (2018) showed that using rotated images as self-supervised tasks produced a useful representation for the main loss, without training the representation with the main loss. Any of these self-supervised tasks could also be used for the surrogate objective, and they suggest that separating out representation learning need not degrade performance. For now, we restrict our focus to simpler surrogate objectives, as the main purpose of this work is to demonstrate that the separation in TTNs is a sound approach for learning values.

4 CONVERGENCE OF THE TWO-TIMESCALE NETWORK ALGORITHM

Training TTNs is fully online, using a single transition from the environment at a time. Projected stochastic gradient descent is used to reduce the surrogate loss L_slow(θ), and a linear policy evaluation algorithm, such as GTD2 or TD(λ), is coupled to the network, where the prediction vector w is calibrated proportionally to −∇_w MSPBE_θ(w). The full procedure is summarized in Algorithm 1 in Appendix A. Regarding the convergence of TTNs, a few remarks are in order (a step-size illustration for the first remark follows this list):
1. The network needs to evolve sufficiently slowly relative to the linear prediction weights. In our theoretical analysis, this is achieved by ensuring that the step sizes ξ_t and α_t of the network and the linear policy evaluation algorithm respectively decay to zero at different rates; in particular, ξ_t/α_t → 0 as t → ∞. With this relative disparity in magnitudes, one can treat the network as essentially quasi-static, while the faster linear component equilibrates relative to the static features.
2. The linear prediction algorithms need to converge for any set of features provided by the neural network, in particular linearly dependent features. This induces a technical bottleneck, since linear independence of the features is a necessary condition for the convergence of the prediction methods GTD and GTD2. We overcome this by following a differential-inclusion-based analysis for GTD2.
3. Finally, we need to guarantee the stability of the iterates (both the feature vector θ_t and the prediction vector w_t); this is ensured by projecting the iterates onto respective compact, convex sets.
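The following is a small illustration of a two-timescale step-size schedule satisfying ξ_t/α_t → 0; the specific exponents are hypothetical choices for illustration, not the paper's.

```python
# Slow (network) and fast (linear) step sizes with xi_t / alpha_t -> 0.
def xi(t):
    return 1.0 / (1 + t) ** 1.0      # slow timescale decays faster

def alpha(t):
    return 1.0 / (1 + t) ** 0.6      # fast timescale decays more slowly

for t in (10, 1_000, 100_000):
    print(t, xi(t) / alpha(t))       # ratio shrinks toward zero
```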
The analysis of the convergence of the neural network is general, permitting any network architecture that is twice continuously differentiable. We prove that TTNs converge asymptotically to the stable equilibria of a projected ODE which completely captures the mean dynamics of the algorithm. We now state our main result (for notation and technical details, please refer to Appendix B). The results are given for the cases where TD(λ) or GTD2 is used as the linear prediction method; similar results can be obtained for other linear prediction methods.

Theorem 1. Let θ̄ = (θ, w̄)^T and let Θ ⊂ R^{m+d} be a compact, convex subset with smooth boundary. Let the projection operator Γ^Θ be Frechet differentiable and Γ̂^Θ_θ̄(−(1/2)∇L_slow)(θ̄) be Lipschitz continuous. Also, let Assumptions 1-3 hold. Let K be the set of asymptotically stable equilibria of the following ODE contained inside Θ:

(d/dt) θ̄(t) = Γ̂^Θ_{θ̄(t)}(−(1/2)∇_θ̄ L_slow)(θ̄(t)),  θ̄(0) ∈ Θ̊, t ∈ R_+.

Then the stochastic sequence {θ̄_t}_{t∈N} generated by the TTN converges almost surely to K (sample-path dependent). Further,

TD(λ) Convergence: Under the additional Assumption 4-TD(λ), we obtain the following result. For any λ ∈ [0, 1], the stochastic sequence {w_t}_{t∈N} generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit w*, where w* satisfies

Π_{θ̄*} T^{(λ)}(Φ_{θ̄*} w*) = Φ_{θ̄*} w*,    (5)

with θ̄* ∈ K (sample-path dependent).

5 EXPERIMENTS

We investigate the performance of TTNs versus a variety of other nonlinear policy evaluation algorithms, as well as the impact of choices within TTNs. We particularly aim to answer: (a) is it beneficial to optimize the MSPBE to obtain value estimates, rather than using value estimates from surrogate losses like the MSTDE; (b) do TTNs provide gains over other nonlinear policy evaluation algorithms; and (c) can TTNs benefit from the variety of options in linear algorithms, including least-squares approaches, eligibility traces and different policy evaluation algorithms? More speculatively, we also investigate whether TTNs can provide a competitive alternative to deep Q-learning in control.

Experiments were performed on-policy in five environments. We use three classic continuous-state domains: Puddle World, a continuous-state grid world with high-magnitude negative rewards for walking through a puddle; Acrobot, where a robot has to swing itself up; and Cartpole, which involves balancing a pole. We also use two game domains: Catcher, which involves catching falling apples; and Puck World, in which the agent has to chase a puck (Tasfi, 2016). Catcher includes both a variant with 4-dimensional observations—position and velocity of the paddle, and (x, y) of the apple—and one with image-based observations—two consecutive 64-by-64 grayscale images as input. This domain enables us to analyze the benefit of the algorithms, on the same domain, with both low-dimensional and high-dimensional observations. We describe the policies evaluated for these domains in Appendix D. We include a subset of results in the main body, with additional results in the appendix; results in Cartpole are similar to Acrobot, so Cartpole results appear only in the appendix. The value estimates are evaluated using the root-mean-squared value error (RMSVE), where the value error is (V_π(s) − V̂(s))².
The optimal values for a set of 500 states are obtained using extensive rollouts from each state, and the RMSVE is computed across these 500 states. For the algorithms, we use the following settings unless specified otherwise. For the slow part (features), we minimize the mean-squared TD error (MSTDE) using the AMSGrad optimizer (Reddi et al., 2018) with β1 = 0 and β2 = 0.99. The network weights use Xavier initialization (Glorot and Bengio, 2010); the weights for the fast part are initialized to 0. In Puddle World, the neural network consists of a single hidden layer of 128 units with ReLU activations; in the other environments, we use 256 units instead. To choose hyperparameters, we first did a preliminary sweep over a broad range and then chose a smaller range in which the algorithms usually made progress, summarized in Appendix D. Results are reported for hyperparameters in the refined range, chosen based on the RMSVE over the latter half of a run, with shaded regions corresponding to one standard error.

TTN vs. competitors. We compare to the following algorithms: nonlinear TD, nonlinear GTD (Maei et al., 2009), Adaptive Bases (ABBE and ABTD) (Di Castro and Mannor, 2010), and nonlinear TD + LSTD regularization (inspired by Levine et al. (2017)). We describe these algorithms in more depth in Appendix D. All of the algorithms involve more complex updates than TTNs, except for nonlinear TD, which corresponds to a semi-gradient TD update with nonlinear function approximation. For TTNs, we use LSTD for the linear, fast part. In Figure 2, TTN performs as well as or better than the competitor algorithms. Especially in Puddle World, its error is significantly lower than that of the second-best algorithm. Interestingly, nonlinear GTD also performs well across domains, suggesting an advantage for theoretically sound algorithms.

The utility of optimizing the MSPBE. First, we show that the TTN benefits from having a second head learning at a faster timescale. To do so, we compare the prediction errors of the TTN (with the fast process optimizing the MSPBE using LSTD and the slow one optimizing the MSTDE) against a network trained end-to-end using the MSTDE with AMSGrad. As a baseline, we include a TTN with a fixed representation (a randomly initialized neural network) to highlight that the slow process is indeed improving the representation. We also include results for optimizing the MSTDE with the fixed representation. In Figure 3, we see that optimizing the MSPBE indeed gives better results than optimizing the MSTDE. Additionally, we can conclude that the MSTDE, despite being a poor objective for learning the value function, can still be effective for driving feature-learning, since it outperforms the fixed representation.

Linear algorithms and eligibility traces. TTNs give us the flexibility to choose any linear policy evaluation algorithm for the fast part. We compare several choices: TD, least-squares TD (LSTD) (Bradtke and Barto, 1996), forgetful LSTD (FLSTD) (van Seijen and Sutton, 2015), emphatic TD (Sutton et al., 2016), gradient TD (the TDC variant) (Sutton et al., 2009) and their true-online versions (van Seijen and Sutton, 2014; van Hasselt et al., 2014) to learn the value function. GTD and ETD are newer temporal difference methods which have better convergence properties and can offer increased stability.
The true-online variants modify the update rules to improve the behavior of the algorithms when learning online, and they seem to outperform their counterparts empirically (van Seijen and Sutton, 2014). Least-squares methods summarize past interaction, but are often avoided due to computation quadratic in the number of features. For TTNs, however, there is no computational disadvantage to using LSTD methods, for two reasons. It is common to choose deep but skinny architectures (Mnih et al., 2015; Hessel et al., 2017). Furthermore, if the last layer is fully connected, then we already need to store O(d²) weights and use O(d²) time to compute a forward pass—the same as LSTD. We include FLSTD, which progressively forgets older interaction, as this could be advantageous when the feature representation changes over time. For TTN, incremental versions of the least-squares algorithms are used to maintain estimates of the required quantities online (see Appendix D). All of these linear algorithms can use eligibility traces to increase their sample efficiency by propagating TD errors back in time. The trace parameter λ can also provide a bias-variance trade-off for the value estimates (Sutton, 1988; Dann et al., 2014). For nonlinear function approximation, eligibility traces can no longer be derived for TD. Though invalid, we can naively extend them to this case by keeping one trace per weight, giving us nonlinear TD(λ).

The results overall indicate that TTNs can benefit from the ability to use different linear policy evaluation algorithms and traces, in particular from the use of least-squares methods, as shown in Figure 4 for Puddle World and Catcher. The dominance of LSTD over the other linear algorithms, including in terms of parameter sensitivity, persists in the other three domains. We additionally investigated sensitivity to λ, and found that most of the TTN variants benefit from a nonzero λ value and, in many cases, the best setting is high, near 1. One exception is the least-squares methods, where LSTD performs similarly for most values of λ. Nonlinear TD(λ), on the other hand, performs markedly worse as λ increases. This is unsurprising, considering the naive addition of eligibility traces is unsound. We include these sensitivity plots in the appendix.
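A minimal NumPy sketch of the incremental LSTD(λ) used for the fast part follows; the ridge regularizer that keeps the A matrix invertible is a common implementation choice assumed here, not a detail taken from the paper.

```python
import numpy as np

class IncrementalLSTD:
    """Online LSTD(lambda): solve A w = b with running estimates of A and b."""
    def __init__(self, d, lam=0.9, reg=1e-3):
        self.A = reg * np.eye(d)     # regularized so A stays invertible
        self.b = np.zeros(d)
        self.z = np.zeros(d)         # eligibility trace
        self.lam = lam

    def update(self, x, r, x_next, gamma):
        self.z = gamma * self.lam * self.z + x
        self.A += np.outer(self.z, x - gamma * x_next)
        self.b += self.z * r

    def weights(self):
        return np.linalg.solve(self.A, self.b)
```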
Surrogate loss functions. For all the previous experiments, we optimized the MSTDE for the slow part of the network, but as discussed in Section 3, other objectives can be used. We compare a variety of objectives by choosing different Yt, including Yt = Rt+1 (reward), Yt = St+1 (next state), and Yt = Rt+1 + Ŷ(St+1) (semi-gradient MSTDE). In Puck World (Figure 5a), we can see that every auxiliary loss performed well. This does not appear to be universally true: in Acrobot, we found that the MSTDE was a less effective surrogate loss, leading to slower learning (see Figure 5b). Alternate losses such as the semi-gradient MSTDE and next-state prediction were more successful in that domain. These results suggest that there is no universally superior surrogate loss and that choosing the appropriate one can yield benefits in certain domains.

Control. Although the focus of this work is policy evaluation, we also provide some preliminary results for the control setting. For control, we include some standard additions to competitor learning algorithms to enable learning with neural networks. The DQN algorithm (Mnih et al., 2015) utilizes two main tricks to stabilize training: experience replay (storing past transitions and replaying them multiple times) and a target network (which keeps the value function in the Q-learning targets fixed, updating the target network infrequently, e.g., every k = 10,000 steps). We use an alternative strategy to target networks for TTN. The use of a target network is motivated by fitted Q-iteration (FQI) (Ernst et al., 2005), which updates towards fixed Q-values with one sweep through a batch of data. TTNs provide a straightforward mechanism to instead use FQI directly, where we can solve for the weights on the entire replay buffer, taking advantage of the closed-form solution for linear regression towards the Q-values from the last update. Batch FQI requires storing all data, whereas we instead keep a sliding window of experience. We therefore additionally incorporate a regularization term, which prevents the weights from changing too significantly between updates, similarly to Levine et al. (2017). Each FQI iteration requires solving a least-squares problem on the entire buffer, an operation costing O(nd²) computation, where d is the number of features in the last layer of the network and n is the size of the buffer; we update the network every k steps, which reduces the per-step computation to O(nd²/k). The slow part drives feature-learning by minimizing the semi-gradient MSTDE for state-action values. As another competitor, we include LS-DQN (Levine et al., 2017), a DQN variant which also adjusts the final layer's weights towards the FQI solution, similar to TTN-FQI. The experimental details differ for control. On non-image Catcher, we sweep over α_slow and λ_reg, the regularization parameter, for TTN, and sweep over the learning rate and the number of steps over which ε is annealed for DQN. On image Catcher, runs require significantly more computation, so we only tune hyperparameters by hand. The FQI update in TTNs was done every 1,000 (10,000) steps for non-image (image) Catcher. We run each algorithm 10 times (5 times) for 200 thousand steps (10 million steps) on the non-image (image) Catcher. We see that TTN performs well on both versions of Catcher in Figure 6, particularly learning more quickly than the DQN variants. This difference is especially pronounced in the image version of Catcher, where TTN also achieves much higher average returns than DQN. Both algorithms seem to suffer from catastrophic forgetting later during training, as performance dips after an initial rise, although TTN still stabilizes on a better policy. Overall, these results suggest that TTNs are a promising direction for improving sample efficiency in control, whilst still maintaining stability when training neural networks.
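The last-layer FQI solve described above can be sketched as follows. The shapes, the greedy-next-action feature matrix, and the exact form of the regularizer (a ridge penalty pulling toward the previous weights) are our assumptions about one plausible implementation.

```python
import numpy as np

# Regularized last-layer FQI: every k steps, solve a least-squares problem
# over the replay buffer toward Q-targets computed with the previous
# weights, penalizing deviation from those weights.

def fqi_last_layer(Phi, Phi_next_max, R, gamma, w_prev, lam_reg):
    """Phi: (n, d) last-layer features of (s, a); Phi_next_max: (n, d)
    features of (s', argmax_a' Q(s', a')) under w_prev; R: (n,) rewards."""
    targets = R + gamma * Phi_next_max @ w_prev           # fixed Q-targets
    A = Phi.T @ Phi + lam_reg * np.eye(Phi.shape[1])      # O(n d^2)
    b = Phi.T @ targets + lam_reg * w_prev                # pull toward w_prev
    return np.linalg.solve(A, b)
```

The ridge term lam_reg * ||w - w_prev||² yields exactly these normal equations, giving the "don't change too much between updates" behaviour in closed form.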
6 DISCUSSION AND CONCLUSION

In this work, we proposed Two-timescale Networks as a new strategy for policy evaluation with nonlinear function approximation. As opposed to many other algorithms derived for nonlinear value function approximation, TTNs are intentionally designed to be simple, to promote ease of use. The algorithm combines a slow learning process for adapting features and a fast process for learning a linear value function, both of which are straightforward to train. By leveraging these two timescales, we are able to prove convergence guarantees for a broad class of choices for both the fast and slow learning components. We highlighted several cases where the decoupled architecture in TTNs can improve learning, particularly by enabling the use of linear methods, which facilitates least-squares methods and eligibility traces. This work has only begun the investigation into which combinations of surrogate losses and linear value function approximation algorithms are most effective. We provided some evidence that, when using stochastic approximation algorithms rather than least-squares algorithms, the addition of traces can have a significant effect within TTNs. This contrasts with nonlinear TD, where traces were not effective. The ability to use traces is potentially one of the most exciting outcomes for TTNs, since traces have been so effective for linear methods. More generally, TTNs provide the opportunity to investigate the utility of the many linear value function algorithms in more complex domains with learned representations. For example, emphatic algorithms have improved asymptotic properties (Sutton et al., 2016) but, to the best of our knowledge, have not been used with neural networks. Another promising direction for TTNs is off-policy learning, where many value functions are learned in parallel. Off-policy learning can suffer from variance due to large-magnitude corrections (importance sampling ratios). With a large collection of value functions, it is more likely that some of them will cause large updates, potentially destabilizing learning in the network if trained in an end-to-end fashion. TTNs would not suffer from this problem, because a different objective can be used to drive learning in the network. We provide some preliminary experiments supporting this hypothesis in the appendix (Appendix C.7).

A TTN ALGORITHM

Algorithm 1 Training of TTNs
1: procedure TRAIN(w, θ, w̄, π) ▷ π is a fixed policy
2:   Initialize θ, w̄ with Xavier initialization, w to 0, and the starting state s according to the environment
3:   while training do
4:     a ← action chosen by π(s)
5:     r, s′ ← Environment(s, a) ▷ Get reward and next state
6:     θ, w̄ ← GradientDescent on L_slow using sample (s, r, s′)
7:     w ← Update on L_value using sample (s, r, s′)
8:     s ← s′
9:   end while
10:  return learned parameters w, θ, w̄
11: end procedure

B CONVERGENCE PROOF OF TWO-TIMESCALE NETWORKS

B.1 DEFINITIONS & NOTATIONS

- Let R+ denote the set of non-negative real numbers, N = {0, 1, 2, . . .}, and let ‖·‖ denote the Euclidean norm or any equivalent norm.
- A map f : R^d → R^d is Lipschitz continuous if ‖f(x) − f(y)‖ ≤ L‖x − y‖ for some L ∈ (0, ∞), for all x, y ∈ R^d.
- A set-valued map h : R^d → {subsets of R^d} is called a Marchaud map if it satisfies the following conditions:
  1. For each x ∈ R^d, h(x) is convex and compact.
  2. For each x ∈ R^d, there exists K ∈ (0, ∞) such that sup_{y∈h(x)} ‖y‖ ≤ K(1 + ‖x‖).
  3. h is upper-semicontinuous, i.e., if {x_n}_{n∈N} → x and {y_n}_{n∈N} → y, where x_n ∈ R^d and y_n ∈ h(x_n) for all n ∈ N, then y ∈ h(x).
- For x_1, x_2 ∈ R^k and D ∈ R^{k×k} a diagonal matrix, we define the inner product ⟨x_1, x_2⟩_D := x_1^T D x_2. We also define the semi-norm ‖x‖_D := ⟨x, x⟩_D^{1/2}. If all the diagonal elements of D are strictly positive, then ‖·‖_D is a norm.
- For any set X, let X̊ denote the interior of X and ∂X denote the boundary of X.
- For brevity, let θ̄ = (θ, w̄)^T and let Φ_θ̄ be the feature matrix corresponding to the feature parameter θ̄, i.e.,

  Φ_θ̄ := [ x_θ(s_1)^T ; x_θ(s_2)^T ; . . . ; x_θ(s_|S|)^T ] ∈ R^{|S|×d},   (6)

  where x_θ(s)^T is the row vector corresponding to state s. Further, define the |S| × |S| matrix P^π as follows:

  P^π_{s,s′} := Σ_{a∈A} π(s, a) P(s, a, s′),  s, s′ ∈ S.   (7)
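Eq. (7) is a single tensor contraction; a one-line NumPy sketch (array names are ours) follows.

```python
import numpy as np

# P^pi from Eq. (7): given the transition tensor P with entries P[s, a, s']
# and a policy table pi with entries pi[s, a],
# P^pi[s, s'] = sum_a pi[s, a] * P[s, a, s'].

def policy_transition_matrix(P, pi):
    return np.einsum('sa,sap->sp', pi, P)
```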
- Also, recall that L_slow(θ) = MSTDE(θ) := E[E[δ_t² | S_t]].
- A function Γ : U ⊆ R^{d_1} → X ⊆ R^{d_2} is Frechet differentiable at x ∈ U if there exists a bounded linear operator Γ̂_x : R^{d_1} → R^{d_2} such that the limit

  lim_{ε↓0} (Γ(x + εy) − x)/ε   (8)

  exists and is equal to Γ̂_x(y). We say Γ is Frechet differentiable if the Frechet derivative of Γ exists at every point in its domain.

B.2 ASSUMPTIONS

Assumption 1: The pre-determined, deterministic step-size sequence {ξ_t}_{t∈N} satisfies ξ_t > 0 for all t ∈ N, Σ_{t∈N} ξ_t = ∞, and Σ_{t∈N} ξ_t² < ∞.

Assumption 2: The Markov chain induced by the given policy π is ergodic, i.e., aperiodic and irreducible.

Assumption 2 implies that the underlying Markov chain is asymptotically stationary, and hence it guarantees the existence of a unique steady-state distribution d^π over the state space S (Levin and Peres, 2017), i.e., lim_{t→∞} P(S_t = s) = d^π(s) for all s ∈ S.

Assumption 3: A realization of the transition dynamics of the MDP is given in the form of a sample trajectory O_π = {S_0, A_0, R_1, S_1, A_1, R_2, S_2, . . .}, where the initial state S_0 ∈ S is chosen arbitrarily, while the action A_t ∼ π(S_t, ·), the transitioned state S_{t+1} ∼ P(S_t, A_t, ·), and the reward R_{t+1} = R(S_t, A_t, S_{t+1}).

To analyze the long-run behaviour of our algorithm, we employ the ODE-based analysis (Borkar, 2008; Kushner and Yin, 2003; Ljung, 1977) of stochastic recursive algorithms. Here, we consider a deterministic ordinary differential equation (ODE) whose asymptotic flow is equivalent to the long-run behaviour of the stochastic recursion. We then analyze the qualitative behaviour of the solutions of the ODE to determine the asymptotically stable sets. The ODE-based analysis is elegant and conclusive, and it further guarantees that the limit points of the stochastic recursion will almost surely belong to the compact connected internally chain transitive invariant set of the equivalent ODE. Since the algorithm follows a multi-timescale stochastic approximation framework, we also resort to the more general multi-timescale differential inclusion based analysis proposed in (Borkar, 1997; Ramaswamy and Bhatnagar, 2016).

Note that there exists only a unilateral coupling between the neural network (where the feature vectors θ̄_t are calibrated by following stochastic gradient descent w.r.t. L_slow) and the various policy evaluation algorithms (see Figure 7): the policy evaluation algorithms depend on the feature vectors θ̄_t, but not vice versa. Therefore, one can independently analyze the asymptotic behaviour of the feature vectors {θ̄_t}_{t∈N}. Also, as a technical requirement, since one cannot guarantee the stability (almost sure boundedness) of the iterates {θ̄_t}_{t∈N}, which is a necessary condition for the ODE-based analysis (see Chapter 2 of Borkar (2008)), we consider the following projected stochastic recursion:

  θ̄_{t+1} = Γ^Θ( θ̄_t + ξ_t δ_t ( ∇_{θ̄_t} Ŷ_θ̄(S_t) − γ_{t+1} ∇_{θ̄_t} Ŷ_θ̄(S_{t+1}) ) ),   (9)

where Γ^Θ(·) is the projection onto a pre-determined compact and convex subset Θ ⊂ R^{m+d}, i.e., Γ^Θ(x) = x for x ∈ Θ̊, while for x ∉ Θ̊ it is the nearest point in Θ w.r.t. the Euclidean distance (or equivalent metric). Define the filtration {F_t}_{t∈N}, a family of increasing natural σ-fields, where F_t := σ({θ̄_i, S_i, R_i; 0 ≤ i ≤ t}).
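For concreteness, here is the projected recursion (9) for one particular choice of Θ, a Euclidean ball, which is compact and convex with a smooth boundary; the ball and its radius are illustrative choices of ours, not the paper's.

```python
import numpy as np

# Gamma^Theta for Theta = {v : ||v|| <= c}: identity inside the ball,
# radial projection onto the boundary outside it.

def gamma_theta(v, c=100.0):
    norm = np.linalg.norm(v)
    return v if norm <= c else (c / norm) * v

def projected_step(theta_bar, update, xi_t, c=100.0):
    # One step of Eq. (9), where `update` is the sampled increment
    # delta_t * (grad Y(S_t) - gamma_{t+1} grad Y(S_{t+1})).
    return gamma_theta(theta_bar + xi_t * update, c)
```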
The following lemma characterizes the limiting behaviour of the iterates {θ̄_t}_{t∈N}:

Lemma 1. Let Assumptions 1-3 hold. Let Θ ⊂ R^{m+d} be a compact, convex subset with smooth boundary. Let Γ^Θ be Frechet differentiable. Further, let Γ̂^Θ_θ̄(−½∇L_slow)(θ̄) be Lipschitz continuous. Let K be the set of asymptotically stable equilibria of the following ODE contained inside Θ:

  d/dt θ̄(t) = Γ̂^Θ_{θ̄(t)}(−½∇_θ̄ L_slow)(θ̄(t)),  θ̄(0) ∈ Θ̊ and t ∈ R+.

Then the stochastic sequence {θ̄_t}_{t∈N} generated by the TTN converges almost surely to K.

Proof. We employ here the ODE-based analysis as proposed in (Borkar, 2008; Kushner and Clark, 2012). Firstly, we recall the stochastic recursion which updates θ̄_t:

  θ̄_{t+1} = Γ^Θ( θ̄_t + ξ_t δ_t ( ∇_{θ̄_t} Ŷ_θ̄(S_t) − γ_{t+1} ∇_{θ̄_t} Ŷ_θ̄(S_{t+1}) ) ),   (10)

where Γ^Θ is the projection onto a pre-determined compact and convex subset Θ ⊂ R^{m+d}. Here, δ_t := R_{t+1} + γ_{t+1} Ŷ_{θ̄_t}(S_{t+1}) − Ŷ_{θ̄_t}(S_t) is the temporal difference. Also, ∇_{θ̄_t} Ŷ_θ̄ ∈ R^{(m+d)×|S|} is the Jacobian of Ŷ_θ̄ at θ̄ = θ̄_t, and ∇_{θ̄_t} Ŷ_θ̄(s) is the column corresponding to state s. Now the above equation can be rewritten as

  θ̄_{t+1} = Γ^Θ( θ̄_t + ξ_t ( h¹(θ̄_t) + M¹_{t+1} + ℓ¹_t ) ),   (11)

where h¹(θ̄) := E[ δ_t ( ∇_θ̄ Ŷ_θ̄(S_t) − γ_{t+1} ∇_θ̄ Ŷ_θ̄(S_{t+1}) ) ], the noise term M¹_{t+1} := δ_t ( ∇_{θ̄_t} Ŷ_θ̄(S_t) − γ_{t+1} ∇_{θ̄_t} Ŷ_θ̄(S_{t+1}) ) − E[ δ_t ( ∇_{θ̄_t} Ŷ_θ̄(S_t) − γ_{t+1} ∇_{θ̄_t} Ŷ_θ̄(S_{t+1}) ) | F_t ], and the bias ℓ¹_t := E[ δ_t ( ∇_{θ̄_t} Ŷ_θ̄(S_t) − γ_{t+1} ∇_{θ̄_t} Ŷ_θ̄(S_{t+1}) ) | F_t ] − h¹(θ̄_t). Further,

  θ̄_{t+1} = θ̄_t + ξ_t ( Γ^Θ( θ̄_t + ξ_t ( h¹(θ̄_t) + M¹_{t+1} + ℓ¹_t ) ) − θ̄_t ) / ξ_t
           = θ̄_t + ξ_t ( Γ̂^Θ_{θ̄_t}(h¹(θ̄_t)) + Γ̂^Θ_{θ̄_t}(M¹_{t+1}) + Γ̂^Θ_{θ̄_t}(ℓ¹_t) + o(ξ_t) ),   (12)

where Γ̂^Θ is the Frechet derivative (defined in Eq. (8)). Note that Γ^Θ is single-valued since Θ is convex, and the above limit exists since the boundary ∂Θ is assumed smooth. Further, for x ∈ Θ̊, we have

  Γ̂^Θ_x(y) = lim_{ε→0} ( Γ^Θ(x + εy) − x ) / ε = lim_{ε→0} ( x + εy − x ) / ε = y (for sufficiently small ε),   (13)

i.e., Γ̂^Θ_x(·) is an identity map for x ∈ Θ̊. A few observations are in order:

C1: Γ̂^Θ_θ̄(h¹(θ̄)) is a Lipschitz continuous function in θ̄. This follows from the hypothesis of the lemma.

C2: Γ̂^Θ_{θ̄_t}(M¹_{t+1}) is a truncated martingale-difference noise. Indeed, it is easy to verify that the noise sequence {M¹_{t+1}}_{t∈N} is a martingale-difference noise sequence w.r.t. the filtration {F_{t+1}}_{t∈N}, i.e., M¹_{t+1} is F_{t+1}-measurable and integrable for all t ∈ N, and E[M¹_{t+1} | F_t] = 0 a.s. for all t ∈ N. Also, since Γ̂^Θ_{θ̄_t}(·) is a continuous linear operator, Γ̂^Θ_{θ̄_t}(M¹_{t+1}) is likewise F_{t+1}-measurable and integrable for all t ∈ N.

C3: Γ̂^Θ_{θ̄_t}(ℓ¹_t) → 0 as t → ∞ a.s. Indeed,

  ‖Γ̂^Θ_{θ̄_t}(ℓ¹_t)‖ = ‖ lim_{ε→0} ( Γ^Θ(θ̄_t + εℓ¹_t) − θ̄_t ) / ε ‖ ≤ lim_{ε→0} ‖ Γ^Θ(θ̄_t + εℓ¹_t) − Γ^Θ(θ̄_t) ‖ / ε ≤ lim_{ε→0} ‖ θ̄_t + εℓ¹_t − θ̄_t ‖ / ε = ‖ℓ¹_t‖.

By taking t → ∞, C3 follows directly from the ergodicity (Levin and Peres, 2017) (Assumption 2) and the finiteness of the underlying Markov chain.

C4: o(ξ_t) → 0 as t → ∞ (follows from Assumption 1).

C5: The iterates {θ̄_t}_{t∈N} are stable (forcefully), i.e., bounded almost surely, since θ̄_t ∈ Θ for all t ∈ N (ensured by the projection operator Γ^Θ) and Θ is compact (i.e., closed and bounded).

C6: There exists K_0 ∈ (0, ∞) such that

  E[ ‖Γ̂^Θ_{θ̄_t}(M¹_{t+1})‖² | F_t ] ≤ K_0 ( 1 + ‖θ̄_t‖² ) a.s.   (14)

This follows directly from the finiteness of the Markov chain and from the assumption that the boundary ∂Θ is smooth.

Now, by appealing to Theorem 2, Chapter 2 of (Borkar, 2008), we conclude that the stochastic recursion (10) asymptotically tracks the following ODE:

  d/dt θ̄(t) = Γ̂^Θ_{θ̄(t)}(h¹(θ̄(t))) = Γ̂^Θ_{θ̄(t)}(−½∇_θ̄ L_slow)(θ̄(t)),  θ̄(0) ∈ Θ̊ and t ∈ R+.   (15)

In other words, the stochastic recursion (10) converges to the asymptotically stable equilibria of the ODE (15) contained inside Θ.
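For completeness, the identity h¹(θ̄) = −½∇_θ̄ L_slow(θ̄) used in (15) is just the gradient of the MSTDE, assuming expectation and differentiation can be interchanged; since the reward does not depend on θ̄, we have ∇_θ̄ δ_t = γ_{t+1}∇_θ̄ Ŷ_θ̄(S_{t+1}) − ∇_θ̄ Ŷ_θ̄(S_t), and hence:

```latex
-\tfrac{1}{2}\nabla_{\bar\theta} L_{\mathrm{slow}}(\bar\theta)
  = -\mathbb{E}\!\left[\delta_t \,\nabla_{\bar\theta}\delta_t\right]
  = \mathbb{E}\!\left[\delta_t\left(\nabla_{\bar\theta}\hat{Y}_{\bar\theta}(S_t)
      - \gamma_{t+1}\nabla_{\bar\theta}\hat{Y}_{\bar\theta}(S_{t+1})\right)\right]
  = h^1(\bar\theta).
```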
Remark 1. It is indeed non-trivial to determine the constraint set Θ without prior adequate knowledge about the limit set of the ODE (15). A pragmatic approach to overcome this concern is to initiate the stochastic recursion with an arbitrary convex, compact set Θ with a smooth boundary and gradually spread to the whole of R^{m+d} (Chen, 2006).

Remark 2. It is also important to characterize the hypothesis of the above lemma (i.e., that Γ̂^Θ_θ̄(−½∇L_slow)(θ̄) is Lipschitz continuous) with respect to the features Ŷ_θ̄. To achieve that, one has to consider the non-projected form of the ODE (15). When one considers the spreading approach proposed in the above remark, the non-projected form is the natural one to consider, since the limiting flow of the ODE arising from the projected stochastic recursion is more likely to lie inside the compact, convex set as Θ becomes larger. Thereupon, it is easy to observe that the condition that Ŷ_θ̄ is twice continuously differentiable is sufficient to ensure the Lipschitz continuity of Γ̂^Θ_θ̄(−½∇L_slow)(θ̄). Additionally, in that case K = {θ̄ | ∇_θ̄ L_slow(θ̄) = 0}, which is the set of local extrema of L_slow.

B.4 TD(λ) ALGORITHM

One can directly apply the TD(λ) with linear function approximation algorithm to estimate the value function with respect to the features provided by the neural network. The TD(λ) algorithm is provided in Algorithm 2. Here e_t, w_t ∈ R^d. Further, δ_t := R_{t+1} + γ_{t+1} w_t^T x_{θ_t}(S_{t+1}) − w_t^T x_{θ_t}(S_t) is the temporal difference.

Algorithm 2 TD(λ) algorithm
Parameters: α_t > 0, λ ∈ [0, 1];
Initialization: w_0 = 0, e_0 = 0;
For each transition (S_t, R_{t+1}, S_{t+1}) in O_π, do:
  e_{t+1} = x_{θ_t}(S_t) + γ_{t+1} λ e_t;   (16)
  w_{t+1} = w_t + α_t ( R_{t+1} + γ_{t+1} w_t^T x_{θ_t}(S_{t+1}) − w_t^T x_{θ_t}(S_t) ) e_{t+1};   (17)

Assumption 4-TD(λ): The pre-determined, deterministic step-size sequence {α_t}_{t∈N} satisfies α_t > 0 for all t ∈ N, Σ_{t∈N} α_t = ∞, Σ_{t∈N} α_t² < ∞, and lim_{t→∞} ξ_t/α_t = 0.

Note that the step-size schedules {α_t}_{t∈N} and {ξ_t}_{t∈N} satisfy ξ_t/α_t → 0, which implies that {ξ_t} converges to 0 relatively faster than {α_t}. This disparity in the learning rates induces an asynchronous convergence behaviour asymptotically (Borkar, 1997), with the feature parameter sequence {θ̄_t} converging slower relative to the TD(λ) sequence {w_t}. The rationale is that the increment of the underlying stochastic gradient descent of the neural network is smaller compared to that of the TD(λ) recursion (17), since the neural network SGD is weighted by the step-size schedule {ξ_t}_{t∈N}, which is smaller than {α_t}_{t∈N} for all but finitely many t. This pseudo heterogeneity induces multiple perspectives: when viewed from the faster timescale recursion (the recursion controlled by α_t), the slower timescale recursion (the recursion controlled by ξ_t) appears quasi-static ('almost a constant'), while viewed from the slower timescale, the faster timescale recursion appears equilibrated. Further, it is analytically admissible (Borkar, 1997) to consider the slow timescale stochastic recursion (i.e., the neural network SGD) to be quasi-stationary (i.e., θ̄_t ≡ θ̄ for all t ∈ N) while analyzing the asymptotic behaviour of the relatively faster timescale stochastic recursion (17). Thereupon, we obtain the following directly from Theorem 1 of (Tsitsiklis and Van Roy, 1997).

Lemma 2. Assume θ̄_t ≡ θ̄ for all t ∈ N. Let Assumptions 1-3 and 4-TD(λ) hold. Then, for any λ ∈ [0, 1], the stochastic sequence {w_t}_{t∈N} generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit w*, where w* satisfies

  Π_θ̄ T^(λ)(Φ_θ̄ w*) = Φ_θ̄ w*,   (18)

with T^(λ)J(s) := (1 − λ) Σ_{i=0}^∞ λ^i E[ Σ_{j=0}^i γ^[j] R_{j+1} + γ^[i+1] J(S_{i+1}) | S_0 = s ] and γ^[j] := Π_{i=0}^j γ_i (with γ_0 = 1). Also, Π_θ̄ is defined according to Eq. (3) with F = {Φ_θ̄ w | w ∈ R^d}.
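Algorithm 2 translates directly into code. The sketch below assumes a `features` function standing in for x_θ(s) at the current (quasi-static) network parameters; the function name and signature are ours.

```python
import numpy as np

# One step of linear TD(lambda) on the network's features (Algorithm 2).

def td_lambda_step(w, e, features, s, r, s_next, gamma, lam, alpha):
    x, x_next = features(s), features(s_next)
    e = x + gamma * lam * e                       # trace update, Eq. (16)
    delta = r + gamma * (w @ x_next) - (w @ x)    # temporal difference
    w = w + alpha * delta * e                     # weight update, Eq. (17)
    return w, e
```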
For other single-timescale prediction methods like ETD and LSPE, similar results follow. The least-squares method LSTD, which offers the significant advantage of not depending on step sizes (albeit being computationally expensive), couples smoothly with the TTN setting without any additional consideration.

B.5 GTD2 ALGORITHM

One cannot, however, directly apply the original GTD2 and TDC algorithms to the TTN setting, since a necessary condition for the convergence of these algorithms is the non-singularity of the feature-specific matrices E[x_{θ_t}(S_t) x_{θ_t}(S_t)^T] and E[(x_{θ_t}(S_t) − γ_{t+1} x_{θ_t}(S_{t+1})) x_{θ_t}(S_t)^T]; see Theorems 1 and 2 of (Sutton et al., 2009). Without the non-singularity assumption, it is indeed hard to guarantee the almost sure boundedness of the GTD2/TDC iterates. In the TTN setting that we consider in this paper, one cannot explicitly ensure this condition, since the features are provided by a neural network and it is not obvious how to control the neural network so that it generates a collection of features with the desired non-singularity characteristic. Hence, one has to consider the projected versions of these algorithms. We consider here the projected GTD2 algorithm provided in Algorithm 3.

Algorithm 3 GTD2 algorithm
Parameters: α_t, β_t;
Initialization: u_0 ∈ U, w_0 ∈ W;
For each transition (S_t, R_{t+1}, S_{t+1}) in O_π, do:
  w_{t+1} = Γ^W( w_t + α_t ( δ^{u_t}_{t+1} x_{θ_t}(S_t) − ( w_t^T x_{θ_t}(S_t) ) x_{θ_t}(S_t) ) );   (19)
  u_{t+1} = Γ^U( u_t + β_t ( x_{θ_t}(S_t) − γ_{t+1} x_{θ_t}(S_{t+1}) ) ( w_t^T x_{θ_t}(S_t) ) );   (20)

Here u_t, w_t ∈ R^d. Further, δ^u_{t+1} := R_{t+1} + γ_{t+1} u^T x_{θ_t}(S_{t+1}) − u^T x_{θ_t}(S_t) is the temporal difference. Here, Γ^W(·) is the projection operator onto a pre-determined convex, compact subset W ⊂ R^d with a smooth boundary ∂W. Therefore, Γ^W maps vectors in R^d to the nearest vectors in W w.r.t. the Euclidean distance (or equivalent metric). Convexity and compactness ensure that the projection is unique and belongs to W. Similarly, U is a pre-determined convex, compact subset of R^d with a smooth boundary ∂U. Projection is required since the stability of the iterates {w_t}_{t∈N} and {u_t}_{t∈N} is hard to guarantee otherwise.

Assumption 4-GTD2: The pre-determined, deterministic step-size sequences {α_t}_{t∈N} and {β_t}_{t∈N} satisfy α_t, β_t > 0 for all t ∈ N, Σ_{t∈N} α_t = Σ_{t∈N} β_t = ∞, Σ_{t∈N} (α_t² + β_t²) < ∞, lim_{t→∞} β_t/α_t = 0, and lim_{t→∞} ξ_t/β_t = 0.

Define the filtration {F_t}_{t∈N}, a family of increasing natural σ-fields, where F_t := σ({w_i, u_i, θ̄_i, S_i, R_i; 0 ≤ i ≤ t}). Similar to the TD(λ) case, we follow the quasi-stationary argument here as well: we analyze the asymptotic behaviour of the GTD2 algorithm under the assumption that the feature vector θ̄_t is quasi-static, i.e., θ̄_t ≡ θ̄ = (θ, w̄)^T.

Lemma 3. Assume θ̄_t ≡ θ̄ = (θ, w̄)^T for all t ∈ N. Let Assumptions 1-3 and 4-GTD2 hold. Then

  { (u, w)^T | lim inf_{t→∞} ‖(u, w)^T − (u_t, w_t)^T‖ = 0 } ⊆ ∪_{u∈A*} { (u, w)^T | w ∈ A_u },   (21)

where A* is the set of asymptotically stable equilibria of the following ODE:

  d/dt u(t) = Γ̂^U_{u(t)}( Φ_θ̄^T D_{d^π} (I − γ_{t+1} P^π) Φ_θ̄ u(t) ),  u(0) ∈ Ů, t ∈ R+   (22)

and A_u is the set of asymptotically stable equilibria of the following ODE:

  d/dt w(t) = Γ̂^W_{w(t)}( Φ_θ̄^T D_{d^π} δ^u − Φ_θ̄^T D_{d^π} Φ_θ̄ w(t) ),  w(0) ∈ W̊ and t ∈ R+,

with δ^u defined in Eq. (29).
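A sketch of one step of the projected GTD2 updates (19)-(20) follows; the Euclidean balls standing in for W and U, and the `features` function, are our illustrative choices.

```python
import numpy as np

def project_ball(v, c):
    # Euclidean projection onto the ball of radius c, a concrete instance
    # of the convex, compact sets W and U with smooth boundary.
    n = np.linalg.norm(v)
    return v if n <= c else (c / n) * v

def gtd2_step(u, w, features, s, r, s_next, gamma, alpha, beta, c=100.0):
    x, x_next = features(s), features(s_next)
    delta_u = r + gamma * (u @ x_next) - (u @ x)
    # Both updates use the pre-update iterates, as in Algorithm 3.
    w_new = project_ball(w + alpha * (delta_u - w @ x) * x, c)          # Eq. (19)
    u_new = project_ball(u + beta * (x - gamma * x_next) * (w @ x), c)  # Eq. (20)
    return u_new, w_new
```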
Proof. The two equations in the modified GTD2 algorithm constitute a multi-timescale stochastic approximation recursion, where there exists a bilateral coupling between the stochastic recursions (19) and (20). Since the step-size sequences {α_t}_{t∈N} and {β_t}_{t∈N} satisfy β_t/α_t → 0, we have β_t → 0 faster than α_t → 0. This disparity in the learning rates induces a pseudo-heterogeneous rate of convergence (or timescales) between the individual stochastic recursions, which results in a pseudo-asynchronous convergence behaviour when considered over a finite time window. Note also that the coherent long-run behaviour of the multi-timescale stochastic recursion will asymptotically follow this short-term behaviour with the window size extending to infinity (Borkar, 1997; 2008). This pseudo behaviour induces multiple viewpoints: when observed from the faster timescale recursion (controlled by α_t), the slower timescale recursion (controlled by β_t) appears quasi-static ('almost a constant'), while observed from the slower timescale, the faster timescale recursion seems equilibrated. Further, it is analytically admissible (Borkar, 1997) to consider the slow timescale stochastic recursion (20) to be quasi-stationary (i.e., u_t ≡ u for all t ∈ N) while analyzing the limiting behaviour of the relatively faster timescale stochastic recursion (19).

Analysis of the faster timescale recursion: The faster timescale stochastic recursion of the GTD2 algorithm is the following:

  w_{t+1} = Γ^W( w_t + α_t ( δ^{u_t}_{t+1} x_{θ_t}(S_t) − ( w_t^T x_{θ_t}(S_t) ) x_{θ_t}(S_t) ) ).   (23)

Under the quasi-stationary premise that u_t ≡ u and θ̄_t ≡ θ̄ = (θ, w̄)^T for all t ∈ N, we analyze the long-term behaviour of the following recursion:

  w_{t+1} = Γ^W( w_t + α_t ( δ^u_{t+1} x_t − ( w_t^T x_t ) x_t ) ),   (24)

where x_t = x_θ(S_t) and δ^u_{t+1} := R_{t+1} + γ_{t+1} u^T x_{t+1} − u^T x_t. The above equation can be rearranged as

  w_{t+1} = Γ^W( w_t + α_t ( h²(w_t) + M²_{t+1} + ℓ²_t ) ),

where h²(w) := E[ δ^u_{t+1} x_t − ( w^T x_t ) x_t ], the noise M²_{t+1} := δ^u_{t+1} x_t − ( w_t^T x_t ) x_t − E[ δ^u_{t+1} x_t − ( w_t^T x_t ) x_t | F_t ], and the bias ℓ²_t := E[ δ^u_{t+1} x_t − ( w_t^T x_t ) x_t | F_t ] − E[ δ^u_{t+1} x_t − ( w_t^T x_t ) x_t ]. Similar to Equation (12), we can rewrite the above recursion as

  w_{t+1} = w_t + α_t ( Γ̂^W_{w_t}(h²(w_t)) + Γ̂^W_{w_t}(M²_{t+1}) + Γ̂^W_{w_t}(ℓ²_t) + o(α_t) ),   (25)

where Γ̂^W_{w_t}(·) is the Frechet derivative (defined in Equation (8)) of the projection operator Γ^W. A few observations are in order:

D1: The iterates {w_t}_{t∈N} are stable, i.e., sup_{t∈N} ‖w_t‖ < ∞ a.s. This follows immediately since W is bounded.

D2: {Γ̂^W_{w_t}(M²_{t+1})}_{t∈N} is a martingale-difference noise sequence with respect to the filtration {F_{t+1}}_{t∈N}. This follows directly since {M²_{t+1}}_{t∈N} is a martingale-difference noise sequence with respect to the same filtration.

D3: {Γ̂^W_{w_t}(M²_{t+1})}_{t∈N} are square-integrable, and there exists K_2 ∈ (0, ∞) such that

  E[ ‖Γ̂^W_{w_t}(M²_{t+1})‖² | F_t ] ≤ K_2 ( 1 + ‖w_t‖² ) a.s., t ∈ N.   (26)

This follows directly from the finiteness of the underlying Markov chain and from the assumption that the boundary ∂W is smooth.

D4: Γ̂^W_w(h²(w)) is Lipschitz continuous with respect to w. The proof is similar to C1.

D5: Γ̂^W_{w_t}(ℓ²_t) → 0 as t → ∞ a.s. The proof is similar to C3.

Now, by appealing to Theorem 2, Chapter 2 of (Borkar, 2008) along with the above observations, we conclude that the stochastic recursion (23) asymptotically tracks the following ODE almost surely:

  d/dt w(t) = Γ̂^W_{w(t)}(h²(w(t))) = Γ̂^W_{w(t)}( E[δ^u_{t+1} x_t] − E[x_t x_t^T] w(t) ),  w(0) ∈ W̊ and t ∈ R+.   (27)

Therefore, w_t converges asymptotically to the stable equilibria of the above ODE contained inside W almost surely.
Qualitative analysis of the solutions of ODE (27): A qualitative analysis of the long-run behaviour of the flow induced by the above ODE attests that the stable limit set is indeed the set of solutions inside W of the following linear system (this follows since Γ̂^W_w(y) = y for w ∈ W̊, and also because Γ̂^W_w(·) does not contribute any additional limit points on the boundary other than the roots of h², since ∂W is smooth):

  E[δ^u_{t+1} x_t] − E[x_t x_t^T] w = 0, i.e., E[x_t x_t^T] w = E[δ^u_{t+1} x_t].   (28)

Note that E[x_t x_t^T] = Φ_θ̄^T D_{d^π} Φ_θ̄.

Claim 1: The above linear system of equations is consistent, i.e., E[δ^u_{t+1} x_t] ∈ R(Φ_θ̄^T D_{d^π} Φ_θ̄), the range space of Φ_θ̄^T D_{d^π} Φ_θ̄. To see this, note that the above system can indeed be viewed as the least-squares solution to Φ_θ̄ w = δ^u with respect to the weighted norm ‖·‖_{D_{d^π}}, where

  δ^u(s) = R̄^π(s) + γ_{t+1} Σ_{s′∈S} P^π_{s,s′} u^T x_θ(s′) − u^T x_θ(s),   (29)

where R̄^π is the expected reward. (Note that E[δ^u_{t+1} x_t] = Φ_θ̄^T D_{d^π} δ^u.) The least-squares solution w_0 ∈ R^d (which certainly exists but may not be unique) satisfies

  ⟨Φ_θ̄ w, δ^u − Φ_θ̄ w_0⟩_{D_{d^π}} = 0, for all w ∈ R^d,

i.e., w^T Φ_θ̄^T D_{d^π} (δ^u − Φ_θ̄ w_0) = 0 for all w ∈ R^d. Now choose w = Φ_θ̄^T D_{d^π} (δ^u − Φ_θ̄ w_0). Then

  Φ_θ̄^T D_{d^π} (δ^u − Φ_θ̄ w_0) = 0 ⇒ Φ_θ̄^T D_{d^π} Φ_θ̄ w_0 = Φ_θ̄^T D_{d^π} δ^u. [End of proof of Claim 1]

Since Φ_θ̄^T D_{d^π} Φ_θ̄ may be singular (i.e., not invertible), the above least-squares solution may not be unique, and hence the collection of asymptotically stable equilibria of the flow induced by the ODE (27) may not be a singleton for every u. Let us denote the set of asymptotically stable equilibria of the flow induced by the said ODE by A_u, where ∅ ≠ A_u ⊆ W.

Analysis of the slower timescale recursion: The slower timescale stochastic recursion of the GTD2 algorithm is the following:

  u_{t+1} = Γ^U( u_t + β_t ( x_t − γ_{t+1} x_{t+1} ) ( w_t^T x_t ) ),  u_t ∈ R^d, u_0 ∈ U.   (30)

Note that, since ξ_t/β_t → 0, the stochastic recursion (20) is managed on a faster timescale relative to the neural network stochastic recursion (10), and hence we continue to maintain the quasi-stationary condition θ̄_t ≡ θ̄ = (θ, w̄)^T here. Now the above equation can be rearranged as

  u_{t+1} = Γ^U( u_t + β_t ( E[Δ^{w_t}_{t+1}] + M³_{t+1} + ℓ³_t ) ),   (31)

where Δ^{w_t}_{t+1} := ( x_t − γ_{t+1} x_{t+1} ) ( w_t^T x_t ) = ( ( x_t − γ_{t+1} x_{t+1} ) x_t^T ) w_t, the noise term M³_{t+1} := Δ^{w_t}_{t+1} − E[Δ^{w_t}_{t+1} | F_t], and the bias ℓ³_t := E[Δ^{w_t}_{t+1} | F_t] − E[Δ^{w_t}_{t+1}]. Similar to Equation (12), we can rewrite the above recursion as

  u_{t+1} = u_t + β_t ( Γ̂^U_{u_t}( E[Δ^{w_t}_{t+1}] ) + Γ̂^U_{u_t}( M³_{t+1} ) + Γ̂^U_{u_t}( ℓ³_t ) + o(β_t) ),   (32)

where Γ̂^U_{u_t}(·) is the Frechet derivative (defined in Equation (8)) of the projection operator Γ^U. The above equation can now be interpreted as a stochastic recursive inclusion:

  u_{t+1} = u_t + β_t ( y_t + Γ̂^U_{u_t}( M³_{t+1} ) + Γ̂^U_{u_t}( ℓ³_t ) + o(β_t) ),  with y_t = Γ̂^U_{u_t}( E[Δ^{w_t}_{t+1}] ) ∈ h³(u_t),   (33)

where the set-valued map h³ : R^d → {subsets of R^d} is defined as

  h³(u) := { Γ̂^U_u( E[Δ^w_{t+1}] ) : w ∈ A_u }.   (34)

Indeed, h³(u) = { Γ̂^U_u(Bw) : w ∈ A_u }, where B = E[ ( x_t − γ_{t+1} x_{t+1} ) x_t^T ]. It is easy to verify that B = Φ_θ̄^T D_{d^π} (I − γ_{t+1} P^π) Φ_θ̄.
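As a quick sanity check on Claim 1 above, the consistency of the normal equations holds even when Φ_θ̄^T D_{d^π} Φ_θ̄ is singular; the sketch below verifies this numerically with random stand-ins (all shapes and values are illustrative).

```python
import numpy as np

# Claim 1 numerically: with a rank-deficient Phi, Phi^T D delta_u still
# lies in the range of Phi^T D Phi, so the system has a solution.

rng = np.random.default_rng(0)
S, d = 50, 10
Phi = rng.normal(size=(S, d))
Phi[:, -1] = Phi[:, 0]                         # force rank deficiency
D = np.diag(rng.uniform(0.01, 1.0, size=S))    # positive stationary weights
delta_u = rng.normal(size=S)                   # arbitrary residual vector

M, b = Phi.T @ D @ Phi, Phi.T @ D @ delta_u
w0, *_ = np.linalg.lstsq(M, b, rcond=None)     # min-norm least squares
print(np.allclose(M @ w0, b))                  # True: the system is consistent
```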
Here, one cannot directly apply the multi-timescale stochastic approximation results from (Borkar, 1997), since that paper assumes that the limit point of the faster timescale recursion is unique (see Chapter 6 of (Borkar, 2008)). But in our setting, the faster timescale recursion (23) has several limit points (note that the stable limit set A_u is not a singleton). This is where our analysis differs from that of the seminal paper on the GTD2 algorithm, where it is assumed that both the matrices E[x_t x_t^T] and E[(x_t − γ_{t+1} x_{t+1}) x_t^T] are non-singular. In our TTN setting, however, one cannot guarantee this condition, since the features are provided by a neural network, and it is hard to fabricate the neural network to generate a collection of features with the desired non-singularity properties. In order to analyze the limiting behaviour of the GTD2 algorithm under this relaxed singularity setting, one has to view the stochastic recursion (30) as a stochastic recursive inclusion (Benaïm et al., 2005) and apply the recent results from (Ramaswamy and Bhatnagar, 2016), which analyze the asymptotic behaviour of general multi-timescale stochastic recursive inclusions. A few observations are in order:

E1: For each u ∈ U, h³(u) is a singleton. This follows from the definition of h³ and Claim 1 above, where we established that each w ∈ A_u is a least-squares solution to the linear system Φ_θ̄ w = δ^u. It further implies that h³ is a Marchaud map as well.

E2: sup_{t∈N} (‖w_t‖ + ‖u_t‖) < ∞ a.s. This follows since W and U are bounded sets.

E3: {Γ̂^U_{u_t}(M³_{t+1})}_{t∈N} is a martingale-difference noise sequence with respect to the filtration {F_{t+1}}_{t∈N}. This follows directly since {M³_{t+1}}_{t∈N} is a martingale-difference noise sequence with respect to the same filtration.

E4: {Γ̂^U_{u_t}(M³_{t+1})}_{t∈N} are square-integrable, and there exists K_3 ∈ (0, ∞) such that

  E[ ‖Γ̂^U_{u_t}(M³_{t+1})‖² | F_t ] ≤ K_3 ( 1 + ‖u_t‖² + ‖w_t‖² ) a.s., t ∈ N.   (35)

This follows directly from the finiteness of the underlying Markov chain and from the assumption that the boundary ∂U is smooth.

E5: Γ̂^U_{u_t}(ℓ³_t) → 0 as t → ∞ a.s. The proof is similar to C3. This implies that the bias is asymptotically irrelevant.

E6: For each u ∈ U, the set A_u is a globally attracting set of the ODE (27) and is also Lyapunov stable. Further, there exists K_4 ∈ (0, ∞) such that sup_{w∈A_u} ‖w‖ ≤ K_4 (1 + ‖u‖). This follows since A_u ⊆ W and W is bounded.

E7: The set-valued map q : U → {subsets of R^d} given by q(u) := A_u is upper-semicontinuous. Consider convergent sequences {u_n}_{n∈N} → u and {w_n}_{n∈N} → w with u_n ∈ U and w_n ∈ q(u_n) = A_{u_n}. Note that w ∈ W and u ∈ U, since W and U are compact. Also, Φ_θ̄^T D_{d^π} Φ_θ̄ w_n = Φ_θ̄^T D_{d^π} δ^{u_n} (from Claim 1). Taking limits on both sides, we get

  lim_{n→∞} Φ_θ̄^T D_{d^π} Φ_θ̄ w_n = lim_{n→∞} Φ_θ̄^T D_{d^π} δ^{u_n} ⇒ Φ_θ̄^T D_{d^π} Φ_θ̄ w = Φ_θ̄^T D_{d^π} δ^u.

This implies that w ∈ A_u = q(u). The claim thus follows.

Thus we have established all the necessary conditions demanded by Theorem 3.10 of (Ramaswamy and Bhatnagar, 2016) to characterize the limiting behaviour of the stochastic recursive inclusion (33). Now, by appealing to the said theorem, we obtain the following result on the asymptotic behaviour of the GTD2 algorithm:

  { (u, w)^T | lim inf_{t→∞} ‖(u, w)^T − (u_t, w_t)^T‖ = 0 } ⊆ ∪_{u∈A*} { (u, w)^T | w ∈ A_u },   (36)

where A* is the set of asymptotically stable equilibria of the following ODE:

  d/dt u(t) = h³(u(t)),  u(0) ∈ Ů, t ∈ R+.   (37)

One can obtain similar results for projected TDC. We now state our main result:

Theorem 2. Let Θ ⊂ R^{m+d} be a compact, convex subset with smooth boundary. Let Γ^Θ be Frechet differentiable. Further, let Γ̂^Θ_θ̄(−½∇L_slow)(θ̄) be Lipschitz continuous. Also, let Assumptions 1-3 hold.
Let K be the set of asymptotically stable equilibria of the following ODE contained inside Θ:

  d/dt θ̄(t) = Γ̂^Θ_{θ̄(t)}(−½∇_θ̄ L_slow)(θ̄(t)),  θ̄(0) ∈ Θ̊ and t ∈ R+.

Then the stochastic sequence {θ̄_t}_{t∈N} generated by the TTN converges almost surely to K (sample path dependent).

Further, TD(λ) Convergence: Under the additional Assumption 4-TD(λ), we obtain the following result: for any λ ∈ [0, 1], the stochastic sequence {w_t}_{t∈N} generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit w*, where w* satisfies

  Π_{θ̄*} T^(λ)(Φ_{θ̄*} w*) = Φ_{θ̄*} w*,   (38)

with T^(λ) defined in Lemma 2 and θ̄* ∈ K (sample path dependent).

GTD2 Convergence: Let W, U ⊂ R^d be compact, convex subsets with smooth boundaries. Let Assumption 4-GTD2 hold. Let Γ^W and Γ^U be Frechet differentiable. Then the stochastic sequences {w_t}_{t∈N} and {u_t}_{t∈N} generated by the GTD2 algorithm (Algorithm 3) within the TTN setting satisfy

  { (u, w)^T | lim inf_{t→∞} ‖(u, w)^T − (u_t, w_t)^T‖ = 0 } ⊆ ∪_{u∈A*} { (u, w)^T | w ∈ A_u },

where A* is the set of asymptotically stable equilibria of the following ODE:

  d/dt u(t) = Γ̂^U_{u(t)}( Φ_{θ̄*}^T D_{d^π} (I − γ_{t+1} P^π) Φ_{θ̄*} u(t) ),  u(0) ∈ Ů, t ∈ R+

and A_u is the set of asymptotically stable equilibria of the following ODE:

  d/dt w(t) = Γ̂^W_{w(t)}( Φ_{θ̄*}^T D_{d^π} δ^u − Φ_{θ̄*}^T D_{d^π} Φ_{θ̄*} w(t) ),  w(0) ∈ W̊ and t ∈ R+,

with θ̄* ∈ K (sample path dependent) and δ^u defined in Eq. (29).

C ADDITIONAL EXPERIMENTS

C.1 NON-IMAGE CATCHER

C.2 PUDDLE WORLD

C.3 IMAGE CATCHER

We also ran policy evaluation experiments on image-based Catcher, with 2 stacked 64x64 frames as input. The policy evaluated was the same as the one used in the non-image setting. As in the non-image-based Catcher experiments, we present analogous plots.

C.4 CARTPOLE

In the classic Cartpole environment, the agent has to balance a pole on a cart. The state is given by a vector of 4 numbers (cart position, cart velocity, pole angle, pole velocity). The two available actions are applying a force towards the left or the right. Rewards are +1 at every timestep, and an episode terminates once the pole dips below a certain angle or the cart moves too far from the center. We use the OpenAI gym implementation (Brockman et al., 2016). The policy to be evaluated consists of applying force in the direction the pole is moving with probability 0.9 (stabilizing the pole) or applying force in the direction of the cart's velocity with probability 0.1. We inject some stochasticity so that the resulting policy does not perform overly well, which would lead to an uninteresting value function.

C.5 ACROBOT

In the classic Acrobot domain, the agent, consisting of two links, has to swing up past a certain height. The agent observes a 4-dimensional state consisting of the angles and the angular velocities of each link. The available actions are three possible levels of torque to be applied to the joint. The evaluated policy is obtained by training an agent with true-online Sarsa on a tile coding representation and then fixing its learned epsilon-greedy policy.

C.6 PUCK WORLD

In Puck World (Tasfi, 2016), the agent has to move in a two-dimensional box towards a good puck while staying away from a bad puck. The 8-dimensional state consists of (player x location, player y location, player x velocity, player y velocity, good puck x location, good puck y location, bad puck x location, bad puck y location). Each action increases the agent's velocity in one of the four cardinal directions, apart from a "None" action which does nothing.
The reward is the negative distance to the good puck, plus a penalty of −10 + x if the agent is within a certain radius of the bad puck, where x ∈ [−2, 0] depends on the distance to the bad puck (the reward is slightly modified from the original game to make the value function more interesting). The policy moves the agent towards the good puck, while placing a soft cap on the agent's velocity. In more detail, an action is chosen by the following procedure. First, we determine the eligible actions. The None action is always eligible. The actions which move the agent towards the good puck are also eligible; for example, if the good puck is northeast of the agent, the North and East actions are eligible. If the agent's velocity in a certain direction is above 30, then the action for that direction is no longer eligible. Finally, the agent picks uniformly at random from all eligible actions.

C.7 OFF-POLICY CATCHER

We run a preliminary experiment to check whether TTN can have an advantage in the off-policy setting. The target policy is the same as the one used for the other Catcher experiments (described in Appendix D). The behaviour policy is slightly different: if the apple is within 20 units (the target policy uses 25 units), then the agent takes the action in the direction of the apple with probability 0.7 and one of the other two actions with probability 0.15 each. If the apple is not within range, then the agent takes the None action 10% of the time and one of the other two actions with equal probability. This combination of behaviour and target policies results in importance sampling ratios in the range of 0 to 8.7, moderately large values. We try TTN with three off-policy algorithms (TD, TDC and LSTD) and compare to off-policy Nonlinear TD. For TTN, the features are learned by optimizing the MSTDE on the behaviour policy, while the values are learned off-policy. The main difference between TTN and Nonlinear TD is that Nonlinear TD performs off-policy updates to the entire network, while TTN only changes the linear part.
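The behaviour policy just described can be sketched as follows; the three-action encoding (left, right, None) and the state fields are our assumptions about the environment's interface.

```python
import numpy as np

# Off-policy Catcher behaviour policy from C.7: move toward the apple with
# prob. 0.7 when it is within 20 units; otherwise take None 10% of the
# time and one of the two movement actions with equal probability.

def behaviour_policy(paddle_x, apple_x, rng):
    toward = 'right' if apple_x > paddle_x else 'left'
    away = 'left' if toward == 'right' else 'right'
    if abs(apple_x - paddle_x) <= 20:            # apple in range
        return rng.choice([toward, away, 'None'], p=[0.7, 0.15, 0.15])
    return rng.choice(['left', 'right', 'None'], p=[0.45, 0.45, 0.1])
```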
1. What is the main contribution of the paper regarding two-timescale frameworks for learning value functions and state representations? 2. What are the strengths and weaknesses of the proposed approach compared to prior works in reinforcement learning? 3. Do you have any concerns or suggestions regarding the organization and writing style of the paper? 4. How does the reviewer assess the significance and relevance of the paper's topic and content? 5. Are there any questions or issues regarding the proof of convergence, empirical evaluation, and experimental setup that the reviewer would like to know more about?
Review
The paper proposes a two-timescale framework for learning the value function and a state representation altogether with nonlinear approximators. The authors provide a proof of convergence and a good empirical evaluation. The topic is very interesting and relevant to ICLR. However, I think that the paper is not ready for publication.

First, although the paper is well written, the writing can be improved. For instance, I found the abstract already a bit confusing. There, the authors state that they "provide a two-timescale network (TTN) architecture that enables LINEAR methods to be used to learn values [...] The approach facilitates use of algorithms developed for the LINEAR setting [...] We prove convergence for TTNs, with particular care given to ensure convergence of the fast LINEAR component." Yet, the title says NONLINEAR, and in the remainder of the paper they use neural networks.

The major problem of the paper is, however, its organization. The novelty of the paper (the proof of convergence) is relegated to the appendix, and too much space is spent on the introduction, when actually the idea of having the V-function depend on a slowly changing network is also not novel in RL. For instance, the authors say that V depends on \theta and w, and that \theta changes at a slower pace compared to w. This recalls the use of target networks in the TD error for many actor-critic algorithms. (It is not the same thing, but there is a strong connection.) Furthermore, in the introduction, the authors say that eligibility traces have been used only with linear function approximators, but GAE by Schulman et al. uses the same principle (their advantage is actually the TD(\lambda) error) to learn an advantage function estimator, and it became SOTA for learning the value function.

I am also a bit skeptical about the use of the MSBE in the experiments. First, in Eq 4 and 5 the authors state that using the MSTDE is easier than the MSBE; then, in the experiments, they evaluate both. However, the MSBE involves the square of an expectation, whose sample estimate should be biased. How do you compute it? (Furthermore, you should spend a couple of sentences explaining the problem of this square and the double-sampling problem of Bellman residual algorithms. For someone unfamiliar with the problem, this issue could be unclear.)

I appreciate the extensive evaluation, but its organization can also be improved, considering that some important information is, again, in the appendix. Furthermore, the results on the control experiment are not significant and should be removed (at the current stage, at least). In the non-image version there is a lot of variance in your runs (one blue curve is really bad), while for the image version all runs are very unstable, always going up and down.

In conclusion, there is a lot of interesting material in this paper. Even though the novelty is not great, the proofs, analysis and evaluation make it a solid paper. However, because there is so much to discuss, I would suggest reorganizing the paper and submitting it directly to a journal track (the paper is already 29 pages including the appendix).
ICLR
Title Two-Timescale Networks for Nonlinear Value Function Approximation Abstract A key component for many reinforcement learning agents is to learn a value function, either for policy evaluation or control. Many of the algorithms for learning values, however, are designed for linear function approximation—with a fixed basis or fixed representation. Though there have been a few sound extensions to nonlinear function approximation, such as nonlinear gradient temporal difference learning, these methods have largely not been adopted, eschewed in favour of simpler but not sound methods like temporal difference learning and Q-learning. In this work, we provide a two-timescale network (TTN) architecture that enables linear methods to be used to learn values, with a nonlinear representation learned at a slower timescale. The approach facilitates the use of algorithms developed for the linear setting, such as data-efficient least-squares methods, eligibility traces and the myriad of recently developed linear policy evaluation algorithms, to provide nonlinear value estimates. We prove convergence for TTNs, with particular care given to ensure convergence of the fast linear component under potentially dependent features provided by the learned representation. We empirically demonstrate the benefits of TTNs, compared to other nonlinear value function approximation algorithms, both for policy evaluation and control. 1 INTRODUCTION Value function approximation—estimating the expected returns from states for a policy—is heavily reliant on the quality of the representation of state. One strategy has been to design a basis—such as radial basis functions (Sutton and Barto, 1998) or a Fourier basis (Konidaris et al., 2011)—for use with a linear function approximator and temporal difference (TD) learning (Sutton, 1988). For low-dimensional observation vectors, this approach has been effective, but can be onerous to extend to high-dimensional observations, potentially requiring significant domain expertise. Another strategy has been to learn the representation, such as with basis adaptation or neural networks. Though there is still the need to specify the parametric form, learning these representations alleviates the burden of expert specification. Further, it is more feasible to scale to high-dimensional observations, such as images, with neural networks (Mnih et al., 2015; Silver et al., 2016). Learning representations necessitates algorithms for nonlinear function approximation. Despite the deficiencies in specification for fixed bases, linear function approximation for estimating value functions has several benefits over nonlinear estimators. They enable least-squares methods, which can be much more data-efficient for policy evaluation (Bradtke and Barto, 1996; Szepesvari, 2010; van Seijen and Sutton, 2015), as well as robust to meta-parameters (Pan et al., 2017). Linear algorithms can also make use of eligibility traces, which can significantly speed learning (Sutton, 1988; Dann et al., 2014; White and White, 2016), but have not been able to be extended to nonlinear value function approximation. Additionally, there have been a variety of algorithms derived for the linear setting, both for on-policy and off-policy learning (Sutton et al., 2009; Maei, 2011; van Seijen and Sutton, 2014; van Hasselt et al., 2014; Mahadevan et al., 2014; Sutton et al., 2016; Mahmood et al., 2017). 
These linear methods have also been well-explored theoretically (Tsitsiklis and Van Roy, 1997; Maei, 2011; Mahmood and Sutton, 2015; Yu, 2015) and empirically (Dann et al., 2014; White and White, 2016), with some insights into improvements from gradient methods (Sutton et al., 2009), true-online traces (van Seijen and Sutton, 2014) and emphatic weightings (Sutton et al., 2016). These algorithms are easy to implement, with relatively simple objectives. Objectives for nonlinear value function approximation, on the other hand, can be quite complex (Maei et al., 2009), resulting in more complex algorithms (Menache et al., 2005; Di Castro and Mannor, 2010; Bhatnagar et al., 2013) or requiring a primal-dual formulation as has been done for control (Dai et al., 2017). In this work, we pursue a simple strategy to take advantage of the benefits of linear methods, while still learning the representation. The main idea is to run two learning processes in parallel: the first learns nonlinear features using a surrogate loss and the second estimates the value function as a linear function of those features. We show that these Two-timescale Networks (TTNs) converge, because the features change on a sufficiently slow scale, so that they are effectively fixed for the fast linear value function estimator. Similar ideas have previously been explored for basis adaptation, but without this key aspect of TTNs—namely the separation of the loss for the representation and value function. This separation is critical because it enables simpler objectives—for which the gradient can be easily sampled—to drive the representation, but still enables use of the mean squared projected Bellman error (MSPBE)—on which all the above linear algorithms are based. This separation avoids the complexity of the nonlinear MSPBE, but maintains the useful properties of the (linear) MSPBE. A variety of basis adaptation approaches have used a two-timescale approach, but with the same objective for the representation and the values (Menache et al., 2005; Di Castro and Mannor, 2010; Bhatnagar et al., 2013; J et al., 2016). Yu and Bertsekas (2009) provided algorithms for basis adaptation using other losses, such as Bellman error using Monte carlo samples, taking derivatives through fixed point solutions for the value function. Levine et al. (2017) periodically compute a closed form least-squares solution for the last layer of neural network, with a Bayesian update to prevent too much change. Because these methods did not separate the value learn and basis adaptation, the resulting algorithms are more complex. The strategy of using two different heads—one to drive the representation and one to learn the values—has yet to be systematically explored. We show that TTNs are a promising direction for nonlinear function approximation, allowing us to leverage linear algorithms while retaining the flexibility of nonlinear function approximators. We first discuss a variety of possible surrogate losses, and their potential for learning a useful representation. We then show that TTNs converge, despite the fact that a linear algorithm is used with a changing representation. This proof is similar to previous convergence proofs for policy evaluation, but with a relaxation on the requirement that features be independent, which is unlikely for learned features. We then show empirically that TTNs are effective compared to other nonlinear value function approximations and that they can exploit several benefits of linear value approximations algorithms. 
In particular, for both low-dimensional and high-dimensional (image-based) observations, we show (a) the utility of least-squares (or batch) methods, (b) advantages from eligibility traces and (c) gains from being able to select amongst different linear policy evaluation algorithms. We demonstrate that TTNs can be effective for control with neural networks, enabling use of fitted Q-iteration within TTNs as an alternative to target networks. 2 BACKGROUND We assume the agents act in a finite Markov Decision Process (MDP), with notation from (White, 2017). The dynamics of the MDP are defined by the 3-tuple (S,A, P ), where S is the set of states, A the set of actions and P : S × A × S 7→ [0, 1] the transition probability function. The task in this environment is defined by a reward function R : S × A × S 7→ R and a discount function γ : S × A × S 7→ [0, 1]. At each time step, the agent takes an action At according to a policy π : S ×A 7→ [0, 1] and the environment returns reward Rt+1, next state St+1 and discount γt+1. The goal in policy evaluation is to compute the value function: the expected sum of discounted rewards from every state under a fixed policy π. The value function Vπ : S → R is defined recursively from each state s ∈ S as Vπ(s) def = E[Rt+1 + γt+1Vπ(St+1)|St = s] = ∑ a∈A π(s, a) ∑ s′∈S P (s, a, s′)(r + γVπ(s ′)). (1) When using linear function approximation, this goal translates into finding parameters w ∈ Rd to approximate the value function V̂ (s) def = x(s)>w ≈ Vπ(s) where x : S → Rd is a feature function. (2) More generally, a nonlinear function V̂ (s) could be learned to estimate Vπ . To formulate this learning problem, we need to consider the objective for learning the function V̂ . Let Vπ, V̂ ∈ R|S| be the vectors of values for Vπ, V̂ . The recursive formula (1) defines a Bellman operator Bπ where the fixed point satisfies BπVπ = Vπ . Consider a restricted value function class, such as the set of linear value functions V̂ ∈ F = {Xw | w ∈ Rd} where X ∈ R|S|×d is a matrix with the i-th row set to x(s) for ith state s ∈ S. Then, it may no longer be possible to satisfy the recursion. Instead, an alternative is to find a projected fixed point ΠFBπV̂ = V̂ where the projection operator ΠF projects BπV̂ to the space spanned by this linear basis: ΠFV def = arg min V̄ ∈F ‖V̄ − V ‖2d (3) where d ∈ R|S| is a vector which weights each state in the weighted norm ‖V ‖2d = ∑ s∈S d(s)V (s) 2. Many linear policy evaluation algorithms estimate this projected fixed point, including TD (Sutton, 1988), least-squares TD (Bradtke and Barto, 1996) and gradient TD (Sutton et al., 2009). The objective formulated for this projected fixed-point, however, is more complex for nonlinear function approximation. For linear function approximation, the projection operator simplifies into a closed form solution involving only the features X. Letting δt = Rt+1 + γV̂ (St+1)− V̂ (St), the resulting mean-squared projected Bellman error (MSPBE) can be written as MSPBE(w) def= ‖ΠFBπV̂ − V̂ ‖2d = E[δtxt]E[xtx>t ] −1 E[δtxt] (4) where E[δtxt] = ∑ s∈S d(s)E[δt|St = s]x(s). For nonlinear function classes, the projection does not have a closed form solution and may be expensive to compute. Further, the projection involves the value function parameters, so the projection changes as parameters change. The nonlinear MSPBE and resulting algorithm are more complex (Maei et al., 2009), and have not seen widespread use. Another option is simply to consider different objectives. 
However, as we discuss below, other objectives for learning the value function either are similarly difficult to optimize or provide poor value estimates. In the next section, we discuss some of these alternatives and introduce Two-timescale Networks as a different strategy to enable nonlinear value function approximation. 3 TWO-TIMESCALE NETWORKS AND SURROGATE OBJECTIVES We first introduce Two-timescale Networks (TTNs), and then describe different surrogate objectives that can be used in TTNs. We discuss why these surrogate objectives within TTNs are useful to drive the representation, but are not good replacements for the MSPBE for learning the value function. TTNs use two concurrent optimization processes: one for the parameters of the network θ and one for the parameters of the value function w. The value function is approximated as V̂ (s) def= xθ(s)>w where the features xθ : S → Rd are a parametrized function and θ ∈ Rm is adjusted to provide better features. For a neural network, θ consists of all the parameters in the hidden layers, to produce the final hidden layer xθ(s). The two optimization processes maintain different time scales, with the parameters θ for the representation changed as a slow process, and the parameters w for the value estimate changed as a fast process relative to θ. The separation between these two processes could be problematic, since the target problem— estimating the value function—is not influencing the representation! The slow process is driven by a completely separate objective than the fast process. However, the key is to select this surrogate loss for the slow process so that it is related to the value estimation process, but still straightforward to compute the gradient of the loss. We use V̂ (s) as the output of the fast part, which corresponds to the value estimate used by the agent. To distinguish, Ŷ (s) denotes the output for the slow-part (depicted in Figure 1), which may or may not be an estimate of the value, as we discuss below. Consider first the mean-squared TD error (MSTDE), which corresponds to ∑ s∈S d(s)E[δ2t |St = s]. Notice that this does not correspond to the mean-squared Bellman error (MSBE), for which it is more difficult to compute gradients ‖BπV̂ − V̂ ‖2d = ∑ s∈S d(s) (E[δt|St = s]) 2. Using the MSTDE as a surrogate loss, with Ŷ (s) = xθ(s)>w̄, the slow part of the network minimizes Lslow(θ)= min w̄∈Rd ∑ s∈S d(s)E[δt(θ, w̄)2|St = s] . δt(θ, w̄) def =Rt+1+γt+1xθ(St+1) >w̄−xθ(St)>w̄. This slow part has its own weights w̄ associated with estimating the value function, but learned instead according to the MSTDE. The advantage here is that stochastic gradient descent on the MSTDE is straightforward, with gradient δt∇{θ,w̄}[γt+1Ŷ (St+1)− Ŷ (St)] where ∇{θ,w̄}Ŷ (St) is the gradient of the neural network, including the head of the slow part which uses weights w̄. Using the MSTDE has been found to provide worse value estimates than the MSPBE—which we re-affirm in our experiments. It could, nonetheless, play a useful role as a surrogate loss, where it can inform the representation towards estimating values. w There are a variety of other surrogate losses that could be considered, related to the value function. However, many of these losses are problematic to sample incrementally, without storing large amounts of data. For example, the mean-squared return error (MSRE) could be used, which takes samples of return and minimizes mean-squared error to those sampled returns. 
Obtaining such returns requires waiting many steps, and so delays updating the representation for the current state. Another alternative is the MSBE. The gradient of the nonlinear MSBE is not as complex as the gradient of the nonlinear MSPBE, because it does not involve the gradient of a projection. However, it suffers from the double sampling problem: sampling the gradient requires two independent samples. For these reasons, we explore the MSTDE as the simplest surrogate loss involving the value function. Finally, surrogate losses could also be defined that are not directly related to the value function. Two natural choices are losses based on predicting the next state and reward. The output of the slow part could correspond to a vector of values, such as Yt = St+1 ∈ Rn or Yt = [ St+1 Rt+1 ] . The ability to predict the next state and reward is intuitively useful for enabling prediction of value, that also has some theoretical grounding. Szepesvari (2010, Section 3.2.1) shows that the Bellman error is small, if the features can capture a horizon of immediate rewards and expected next states. For linear encoders, Song et al. (2016) show that an optimal set of features enables predictions of next state and reward. More generally, learning representations using auxiliary tasks or self-supervised tasks have had some successes in RL, such as using pixel control (Jaderberg et al., 2016) or classifying the temporal distance between frames (Aytar et al., 2018). In computer vision, Gidaris et al. (2018) showed that using rotated images as self-supervised tasks produced a useful representation for the main loss, without training the representation with the main loss. Any of these self-supervised tasks could also be used for the surrogate objective, and motivate that separating out representation learning does not degrade performance. For now, we restrict focus on simpler surrogate objectives, as the main purpose of this work is to demonstrate that the separation in TTNs is a sound approach for learning values. 4 CONVERGENCE OF TWO-TIMESCALE NETWORK ALGORITHM Training TTNs is fully online, using a single transition from the environment at a time. Projected stochastic gradient descent is used to reduce the surrogate loss, Lslow(θ) and a linear policy evaluation algorithm, such as GTD2 or TD(λ), is coupled to the network where the prediction vector w is callibrated proportional to −∇wMSPBEθ(w). The full procedure is summarized in Algorithm 1, in Appendix A. Regarding the convergence of TTNs, a few remarks are in order: 1. The network needs to evolve sufficiently slowly relative to the linear prediction weights. In our theoretical analysis, this is achieved by ensuring that the step sizes ξt and αt of the network and the linear policy evaluation algorithm respectively decay to zero at different rates. In particular, ξt/αt → 0 as t→∞. With this relative disparity in magnitudes, one can assume that the network is essentially quasi-static, while the faster linear component is equilibrated relative to the static features. 2. The linear prediction algorithms need to converge for any set of features provided by the neural network, particularly linearly dependent features. This induces a technical bottleneck since linear independence of the features are a necessary condition for the convergence of the prediction methods GTD and GTD2. We overcome this by following a differential inclusion based analysis for GTD2. 3. 
4 CONVERGENCE OF TWO-TIMESCALE NETWORK ALGORITHM

Training TTNs is fully online, using a single transition from the environment at a time. Projected stochastic gradient descent is used to reduce the surrogate loss $L_{\text{slow}}(\theta)$, and a linear policy evaluation algorithm, such as GTD2 or TD(λ), is coupled to the network, where the prediction vector $\mathbf{w}$ is updated proportionally to $-\nabla_{\mathbf{w}}\text{MSPBE}_\theta(\mathbf{w})$. The full procedure is summarized in Algorithm 1 in Appendix A. Regarding the convergence of TTNs, a few remarks are in order:

1. The network needs to evolve sufficiently slowly relative to the linear prediction weights. In our theoretical analysis, this is achieved by ensuring that the step sizes $\xi_t$ and $\alpha_t$ of the network and the linear policy evaluation algorithm, respectively, decay to zero at different rates; in particular, $\xi_t/\alpha_t \to 0$ as $t \to \infty$. With this relative disparity in magnitudes, one can treat the network as essentially quasi-static, while the faster linear component equilibrates relative to the static features. (A concrete example of such schedules is sketched after this list.)

2. The linear prediction algorithms need to converge for any set of features provided by the neural network, including linearly dependent features. This induces a technical bottleneck, since linear independence of the features is a necessary condition for the convergence of the prediction methods GTD and GTD2. We overcome this by following a differential-inclusion-based analysis for GTD2.

3. Finally, we need to guarantee the stability of the iterates (both the feature vector $\theta_t$ and the prediction vector $\mathbf{w}_t$); this is ensured by projecting the iterates onto respective compact, convex sets.

The analysis of the convergence of the neural network is general, allowing any network architecture that is twice continuously differentiable. We prove that TTNs converge asymptotically to the stable equilibria of a projected ODE which completely captures the mean dynamics of the algorithm. We now state our main result (for notation and technical details, refer to Appendix B). The results are given for the cases when TD(λ) or GTD2 is used as the linear prediction method; similar results can be obtained for other linear prediction methods.

Theorem 1. Let $\bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$ and let $\Theta \subset \mathbb{R}^{m+d}$ be a compact, convex subset with smooth boundary. Let the projection operator $\Gamma^\Theta$ be Frechet differentiable and $\hat{\Gamma}^\Theta_{\bar{\theta}}\big(-\tfrac{1}{2}\nabla L_{\text{slow}}\big)(\bar{\theta})$ be Lipschitz continuous. Also, let Assumptions 1-3 hold. Let $K$ be the set of asymptotically stable equilibria of the following ODE contained inside $\Theta$:

$$\frac{d}{dt}\bar{\theta}(t) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}\big(-\tfrac{1}{2}\nabla_{\bar{\theta}} L_{\text{slow}}\big)(\bar{\theta}(t)), \qquad \bar{\theta}(0) \in \mathring{\Theta},\ t \in \mathbb{R}_+.$$

Then the stochastic sequence $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ generated by the TTN converges almost surely to $K$ (sample-path dependent). Further,

TD(λ) convergence: Under the additional Assumption 4-TD(λ), we obtain the following result. For any $\lambda \in [0, 1]$, the stochastic sequence $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit $\mathbf{w}^*$, where $\mathbf{w}^*$ satisfies

$$\Pi_{\bar{\theta}^*} T^{(\lambda)}(\Phi_{\bar{\theta}^*}\mathbf{w}^*) = \Phi_{\bar{\theta}^*}\mathbf{w}^*, \qquad (5)$$

with $\bar{\theta}^* \in K$ (sample-path dependent).
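As a concrete instance of the timescale separation in remark 1, the polynomial schedules below satisfy the required step-size conditions ($\sum_t \xi_t = \infty$, $\sum_t \xi_t^2 < \infty$, likewise for $\alpha_t$, and $\xi_t/\alpha_t \to 0$). The exponents 0.9 and 0.6 are illustrative choices of ours, not values prescribed by the paper.

```python
def stepsizes(t):
    """Example schedules: xi_t (slow, network) and alpha_t (fast, linear head).

    Both are square-summable but not summable, and xi_t / alpha_t = (t + 1)**-0.3 -> 0,
    so the network evolves on the slower timescale.
    """
    xi = (t + 1) ** -0.9     # slow: sum xi = inf, sum xi^2 < inf
    alpha = (t + 1) ** -0.6  # fast: sum alpha = inf, sum alpha^2 < inf
    return xi, alpha
```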
5 EXPERIMENTS

We investigate the performance of TTNs versus a variety of other nonlinear policy evaluation algorithms, as well as the impact of choices within TTNs. We particularly aim to answer: (a) is it beneficial to optimize the MSPBE to obtain value estimates, rather than using value estimates from surrogate losses like the MSTDE; (b) do TTNs provide gains over other nonlinear policy evaluation algorithms; and (c) can TTNs benefit from the variety of options in linear algorithms, including least-squares approaches, eligibility traces and different policy evaluation algorithms. More speculatively, we also investigate whether TTNs can provide a competitive alternative to deep Q-learning in control.

Experiments were performed on-policy in five environments. We use three classic continuous-state domains: Puddle World, a continuous-state grid world with high-magnitude negative rewards for walking through a puddle; Acrobot, where a robot has to swing itself up; and Cartpole, which involves balancing a pole. We also use two game domains: Catcher, which involves catching falling apples; and Puck World, in which the agent has to chase a puck (Tasfi, 2016). Catcher includes both a variant with 4-dimensional observations (the position and velocity of the paddle, and the (x, y) position of the apple) and one with image-based observations (two consecutive 64-by-64 grayscale images as input). This domain enables us to analyze the benefit of the algorithms on the same domain with both low-dimensional and high-dimensional observations. We describe the policies evaluated for these domains in Appendix D. We include a subset of results in the main body, with additional results in the appendix. Results in Cartpole are similar to those in Acrobot, so Cartpole results appear only in the appendix.

The value estimates are evaluated using the root-mean-squared value error (RMSVE), where the value error for a state is $(V_\pi(s) - \hat{V}(s))^2$. The optimal values for a set of 500 states are obtained using extensive rollouts from each state, and the RMSVE is computed across these 500 states. For the algorithms, we use the following settings unless specified otherwise. For the slow part (features), we minimize the mean-squared TD error (MSTDE) using the AMSGrad optimizer (Reddi et al., 2018) with $\beta_1 = 0$ and $\beta_2 = 0.99$. The network weights use Xavier initialization (Glorot and Bengio, 2010); the weights for the fast part are initialized to 0. In Puddle World, the neural network consists of a single hidden layer of 128 units with ReLU activations; in the other environments, we use 256 units. To choose hyperparameters, we first did a preliminary sweep over a broad range and then chose a smaller range in which the algorithms usually made progress, summarized in Appendix D. Results are reported for hyperparameters in the refined range, chosen based on the RMSVE over the latter half of a run, with shaded regions corresponding to one standard error.

TTN vs. competitors. We compare to the following algorithms: nonlinear TD, nonlinear GTD (Maei et al., 2009), Adaptive Bases (ABBE and ABTD) (Di Castro and Mannor, 2010), and nonlinear TD + LSTD regularization (inspired by Levine et al. (2017)). We describe these algorithms in more depth in Appendix D. All of these algorithms involve more complex updates than TTNs, except for nonlinear TD, which corresponds to a semi-gradient TD update with nonlinear function approximation. For TTNs, we use LSTD for the linear, fast part. In Figure 2, TTN performs as well as or better than the competitor algorithms; in Puddle World especially, its error is significantly lower than that of the second-best algorithm. Interestingly, Nonlinear GTD also performs well across domains, suggesting an advantage for theoretically sound algorithms.

The utility of optimizing the MSPBE. First, we show that the TTN benefits from having a second head learning at a faster timescale. To do so, we compare the prediction errors of TTN, with the fast process optimizing the MSPBE (using LSTD) and the slow one optimizing the MSTDE, to a network trained end-to-end on the MSTDE with AMSGrad. As a baseline, we include TTN with a fixed representation (a randomly initialized neural network) to highlight that the slow process is indeed improving the representation; we also include results for optimizing the MSTDE with the fixed representation. In Figure 3 (Cartpole), we see that optimizing the MSPBE indeed gives better results than optimizing the MSTDE. Additionally, we can conclude that the MSTDE, despite being a poor objective for learning the value function, can still be effective for driving feature learning, since it outperforms the fixed representation.

Linear algorithms and eligibility traces. TTNs give us the flexibility to choose any linear policy evaluation algorithm for the fast part. We compare several choices for learning the value function: TD, least-squares TD (LSTD) (Bradtke and Barto, 1996), forgetful LSTD (FLSTD) (van Seijen and Sutton, 2015), emphatic TD (Sutton et al., 2016), gradient TD (the TDC variant) (Sutton et al., 2009) and their true-online versions (van Seijen and Sutton, 2014; van Hasselt et al., 2014). GTD and ETD are newer temporal difference methods which have better convergence properties and can offer increased stability.
The true-online variants modify the update rules to improve the behavior of the algorithms when learning online, and they appear to outperform their counterparts empirically (van Seijen and Sutton, 2014). Least-squares methods summarize past interaction, but are often avoided due to computation that is quadratic in the number of features. For TTNs, however, there is no computational disadvantage to using LSTD methods, for two reasons. First, it is common to choose deep but skinny architectures (Mnih et al., 2015; Hessel et al., 2017). Second, if the last layer is fully connected, then we already need to store $O(d^2)$ weights and use $O(d^2)$ time to compute a forward pass—the same as LSTD. We include FLSTD, which progressively forgets older interaction, as this could be advantageous when the feature representation changes over time. For TTN, incremental versions of the least-squares algorithms are used to maintain estimates of the required quantities online (see Appendix D).

All of these linear algorithms can use eligibility traces to increase their sample efficiency by propagating TD errors back in time. The trace parameter λ can also provide a bias-variance tradeoff for the value estimates (Sutton, 1988; Dann et al., 2014). For nonlinear function approximation, eligibility traces can no longer be derived for TD. Though invalid, we can naively extend them to this case by keeping one trace per weight, giving us nonlinear TD(λ).

The results overall indicate that TTNs benefit from the ability to use different linear policy evaluation algorithms and traces, in particular from the use of least-squares methods, as shown in Figure 4 for Puddle World and Catcher. The dominance of LSTD over the other linear algorithms is consistent, including in terms of parameter sensitivity, and persists for the other three domains. We additionally investigated sensitivity to λ, and found that most of the TTN variants benefit from a nonzero λ value and, in many cases, the best setting is high, near 1. One exception is the least-squares methods, where LSTD performs similarly for most values of λ. Nonlinear TD(λ), on the other hand, performs markedly worse as λ increases, which is unsurprising considering that the naive addition of eligibility traces is unsound. We include these sensitivity plots in the appendix, in Figure ??.
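As a sketch of how the least-squares fast part can be maintained online at $O(d^2)$ per step, the following keeps LSTD's $A^{-1}$ up to date via the Sherman-Morrison identity, with eligibility traces. This is a standard construction rather than the paper's exact implementation, and the initialization constant eps is an assumption of ours.

```python
import numpy as np

class IncrementalLSTD:
    """Online LSTD(lambda) on the current features x_theta(s).

    Maintains A^{-1} with Sherman-Morrison, so each update costs O(d^2),
    matching the cost of a forward pass through a fully connected last layer.
    """
    def __init__(self, d, lam=0.0, eps=1e-3):
        self.A_inv = np.eye(d) / eps  # inverse of A ~ E[z (x - gamma x')^T]
        self.b = np.zeros(d)          # estimate of E[z R]
        self.z = np.zeros(d)          # eligibility trace
        self.lam = lam

    def update(self, x, r, x_next, gamma):
        self.z = gamma * self.lam * self.z + x
        d_vec = x - gamma * x_next
        # Sherman-Morrison: (A + z d^T)^{-1} computed from the current A^{-1}
        Az = self.A_inv @ self.z
        dA = d_vec @ self.A_inv
        self.A_inv -= np.outer(Az, dA) / (1.0 + dA @ self.z)
        self.b += r * self.z

    def weights(self):
        # current fast weights w = A^{-1} b
        return self.A_inv @ self.b
```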
Surrogate loss functions. For all the previous experiments, we optimized the MSTDE for the slow part of the network but, as discussed in Section 3, other objectives can be used. We compare a variety of objectives by choosing different $Y_t$, including $Y_t = R_{t+1}$ (Reward); $Y_t = S_{t+1}$ (Next State); and $Y_t = R_{t+1} + \gamma_{t+1}\hat{Y}(S_{t+1})$ (Semi-gradient MSTDE). In Puck World (Figure 5a), every auxiliary loss performed well. This does not appear to be universally true: in Acrobot we found that the MSTDE was a less effective surrogate loss, leading to slower learning (see Figure 5b). Alternative losses, such as the semi-gradient MSTDE and next-state prediction, were more successful in that domain. These results suggest that there is no universally superior surrogate loss, and that choosing an appropriate one can yield benefits in certain domains.

Control. Although the focus of this work is policy evaluation, we also provide some preliminary results for the control setting. For control, we include some standard additions to competitor learning algorithms to enable learning with neural networks. The DQN algorithm (Mnih et al., 2015) uses two main tricks to stabilize training: experience replay (storing past transitions and replaying them multiple times) and a target network, which keeps the value function in the Q-learning targets fixed, updating the target network infrequently (e.g., every k = 10,000 steps). We use an alternative strategy to target networks for TTN. The use of a target network is motivated by fitted Q-iteration (FQI) (Ernst et al., 2005), which updates towards fixed Q-values with one sweep through a batch of data. TTNs provide a straightforward mechanism to instead use FQI directly: we can solve for the weights on the entire replay buffer, taking advantage of the closed-form solution for linear regression towards the Q-values from the last update. Batch FQI requires storing all data, whereas we instead keep a sliding window of experience; we therefore additionally incorporate a regularization term which prevents the weights from changing too significantly between updates, similarly to Levine et al. (2017). Each FQI iteration requires solving a least-squares problem on the entire buffer, an operation costing $O(nd^2)$ computation, where $d$ is the number of features in the last layer of the network and $n$ is the size of the buffer; we update the weights every $k$ steps, which reduces the per-step computation to $O(nd^2/k)$. The slow part drives feature learning by minimizing the semi-gradient MSTDE for state-action values. As another competitor, we include LS-DQN (Levine et al., 2017), a DQN variant which also adjusts the final layer's weights towards the FQI solution, similar to TTN-FQI.
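A minimal sketch of this regularized FQI update for the fast, linear part is below. The ridge-style penalty toward the previous weights and the variable names are our illustrative reading of the described procedure, not the paper's exact code.

```python
import numpy as np

def fqi_update(w_prev, X, A, R, X_next, gamma, lam_reg):
    """One FQI iteration for linear Q-values on a sliding replay window.

    X, X_next: (n, d) feature matrices for s and s'; A: (n,) action indices;
    R: (n,) rewards; w_prev: (num_actions, d) weights from the last update.
    The penalty lam_reg * ||w - w_prev||^2 keeps weights from moving too far.
    """
    num_actions, d = w_prev.shape
    q_next = X_next @ w_prev.T                    # (n, num_actions): fixed targets
    targets = R + gamma * q_next.max(axis=1)      # Bellman targets from last weights
    w_new = np.empty_like(w_prev)
    for a in range(num_actions):                  # per-action ridge regression
        mask = (A == a)
        Xa, ya = X[mask], targets[mask]
        lhs = Xa.T @ Xa + lam_reg * np.eye(d)
        rhs = Xa.T @ ya + lam_reg * w_prev[a]
        w_new[a] = np.linalg.solve(lhs, rhs)
    return w_new
```

Here each action has its own weight vector over the shared features $x_\theta$, one simple way to realize state-action values on top of the learned representation.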
The experimental details differ for control. On non-image Catcher, we sweep over $\alpha_{\text{slow}}$ and the regularization parameter $\lambda_{\text{reg}}$ for TTN, and over the learning rate and the number of steps over which the exploration parameter $\epsilon$ is annealed for DQN. On image Catcher, runs require significantly more computation, so we tuned hyperparameters by hand. The FQI update in TTNs was done every 1000 (10,000) steps for non-image (image) Catcher. We run each algorithm 10 times (5 times) for 200 thousand steps (10 million steps) on non-image (image) Catcher. As shown in Figure 6, TTN performs well on both versions of Catcher, learning more quickly than the DQN variants. This difference is especially pronounced on the image version, where TTN also achieves much higher average returns than DQN. Both algorithms seem to suffer from catastrophic forgetting later during training, as performance dips after an initial rise, although TTN still stabilizes on a better policy. Overall, these results suggest that TTNs are a promising direction for improving sample efficiency in control, while maintaining stability when training neural networks.

6 DISCUSSION AND CONCLUSION

In this work, we proposed Two-timescale Networks as a new strategy for policy evaluation with nonlinear function approximation. As opposed to many other algorithms derived for nonlinear value function approximation, TTNs are intentionally designed to be simple, to promote ease of use. The algorithm combines a slow learning process for adapting the features and a fast process for learning a linear value function, both of which are straightforward to train. By leveraging these two timescales, we are able to prove convergence guarantees for a broad class of choices for both the fast and slow learning components.

We highlighted several cases where the decoupled architecture in TTNs can improve learning, particularly by enabling the use of linear methods, which facilitates least-squares algorithms and eligibility traces. This work has only begun the investigation into which combinations of surrogate losses and linear value function approximation algorithms are most effective. We provided some evidence that, when using stochastic approximation algorithms rather than least-squares algorithms, the addition of traces can have a significant effect within TTNs. This contrasts with nonlinear TD, where traces were not effective. The ability to use traces is potentially one of the most exciting outcomes of TTNs, since traces have been so effective for linear methods. More generally, TTNs provide the opportunity to investigate the utility of the many linear value function algorithms in more complex domains with learned representations. For example, emphatic algorithms have improved asymptotic properties (Sutton et al., 2016) but, to the best of our knowledge, have not been used with neural networks.

Another promising direction for TTNs is off-policy learning, where many value functions are learned in parallel. Off-policy learning can suffer from variance due to large-magnitude corrections (importance sampling ratios). With a large collection of value functions, it is more likely that some of them will cause large updates, potentially destabilizing learning in the network if trained in an end-to-end fashion. TTNs would not suffer from this problem, because a different objective can be used to drive learning in the network. We provide some preliminary experiments supporting this hypothesis in Appendix C.7.

A TTN ALGORITHM

Algorithm 1 Training of TTNs
1: procedure TRAIN(w, θ, w̄, π)    ▷ π is a fixed policy
2:   Initialize θ, w̄ with Xavier initialization, w to 0, and the starting state s according to the environment
3:   while training do
4:     a ← action chosen by π(s)
5:     r, s′ ← Environment(s, a)    ▷ Get reward and next state
6:     θ, w̄ ← gradient-descent step on L_slow using sample (s, r, s′)
7:     w ← update on L_value using sample (s, r, s′)
8:     s ← s′
9:   end while
10:  return learned parameters w, θ, w̄
11: end procedure

B CONVERGENCE PROOF OF TWO-TIMESCALE NETWORKS

B.1 DEFINITIONS & NOTATIONS

- Let $\mathbb{R}_+$ denote the set of non-negative real numbers, $\mathbb{N} = \{0, 1, 2, \dots\}$, and let $\|\cdot\|$ denote the Euclidean norm or any equivalent norm.
- A map $f : \mathbb{R}^d \to \mathbb{R}^d$ is Lipschitz continuous if $\|f(\mathbf{x}) - f(\mathbf{y})\| \le L\|\mathbf{x} - \mathbf{y}\|$ for some $L \in (0, \infty)$ and all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$.
- A set-valued map $h : \mathbb{R}^d \to \{\text{subsets of } \mathbb{R}^d\}$ is called a Marchaud map if it satisfies the following conditions:
  1. For each $\mathbf{x} \in \mathbb{R}^d$, $h(\mathbf{x})$ is convex and compact.
  2. For each $\mathbf{x} \in \mathbb{R}^d$, there exists $K \in (0, \infty)$ such that $\sup_{\mathbf{y} \in h(\mathbf{x})} \|\mathbf{y}\| \le K(1 + \|\mathbf{x}\|)$.
  3. $h$ is upper-semicontinuous, i.e., if $\{\mathbf{x}_n\}_{n \in \mathbb{N}} \to \mathbf{x}$ and $\{\mathbf{y}_n\}_{n \in \mathbb{N}} \to \mathbf{y}$, where $\mathbf{x}_n \in \mathbb{R}^d$ and $\mathbf{y}_n \in h(\mathbf{x}_n)$ for all $n \in \mathbb{N}$, then $\mathbf{y} \in h(\mathbf{x})$.
- For $\mathbf{x}_1, \mathbf{x}_2 \in \mathbb{R}^d$ and a diagonal matrix $D \in \mathbb{R}^{k \times k}$, we define the inner product $\langle \mathbf{x}_1, \mathbf{x}_2 \rangle_D \triangleq \mathbf{x}_1^\top D \mathbf{x}_2$ and the semi-norm $\|\mathbf{x}\|_D \triangleq \langle \mathbf{x}, \mathbf{x} \rangle_D^{1/2}$. If all the diagonal elements of $D$ are strictly positive, then $\|\cdot\|_D$ is a norm.
- For any set $X$, let $\mathring{X}$ denote the interior of $X$ and $\partial X$ denote the boundary of $X$.
- For brevity, let $\bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$ and let $\Phi_{\bar{\theta}}$ be the feature matrix corresponding to the feature parameter $\bar{\theta}$, i.e.,

$$\Phi_{\bar{\theta}} \triangleq \begin{bmatrix} x_\theta(s_1)^\top \\ x_\theta(s_2)^\top \\ \vdots \\ x_\theta(s_{|\mathcal{S}|})^\top \end{bmatrix} \in \mathbb{R}^{|\mathcal{S}| \times d}, \qquad (6)$$

where $x_\theta(s)^\top$ is the row vector corresponding to state $s$. Further, define the $|\mathcal{S}| \times |\mathcal{S}|$ matrix $P^\pi$ as

$$P^\pi_{s,s'} \triangleq \sum_{a \in \mathcal{A}} \pi(s, a) P(s, a, s'), \qquad s, s' \in \mathcal{S}. \qquad (7)$$
- Also, recall that $L_{\text{slow}}(\theta) = \text{MSTDE}(\theta) \triangleq \mathbb{E}\big[\mathbb{E}[\delta_t^2 \mid S_t]\big]$.
- A function $\Gamma : U \subseteq \mathbb{R}^{d_1} \to X \subseteq \mathbb{R}^{d_2}$ is Frechet differentiable at $\mathbf{x} \in U$ if there exists a bounded linear operator $\hat{\Gamma}_{\mathbf{x}} : \mathbb{R}^{d_1} \to \mathbb{R}^{d_2}$ such that the limit

$$\lim_{\epsilon \downarrow 0} \frac{\Gamma(\mathbf{x} + \epsilon\mathbf{y}) - \mathbf{x}}{\epsilon} \qquad (8)$$

exists and is equal to $\hat{\Gamma}_{\mathbf{x}}(\mathbf{y})$. We say $\Gamma$ is Frechet differentiable if its Frechet derivative exists at every point of its domain.

B.2 ASSUMPTIONS

Assumption 1: The pre-determined, deterministic step-size sequence $\{\xi_t\}_{t \in \mathbb{N}}$ satisfies $\xi_t > 0$ for all $t \in \mathbb{N}$, $\sum_{t \in \mathbb{N}} \xi_t = \infty$, and $\sum_{t \in \mathbb{N}} \xi_t^2 < \infty$.

Assumption 2: The Markov chain induced by the given policy $\pi$ is ergodic, i.e., aperiodic and irreducible.

Assumption 2 implies that the underlying Markov chain is asymptotically stationary, and hence guarantees the existence of a unique steady-state distribution $d^\pi$ over the state space $\mathcal{S}$ (Levin and Peres, 2017), i.e., $\lim_{t \to \infty} \mathbb{P}(S_t = s) = d^\pi(s)$ for all $s \in \mathcal{S}$.

Assumption 3: We are given a realization of the transition dynamics of the MDP in the form of a sample trajectory $\mathcal{O}^\pi = \{S_0, A_0, R_1, S_1, A_1, R_2, S_2, \dots\}$, where the initial state $S_0 \in \mathcal{S}$ is chosen arbitrarily, while the action $A_t \sim \pi(S_t, \cdot)$, the transitioned state $S_{t+1} \sim P(S_t, A_t, \cdot)$, and the reward $R_{t+1} = R(S_t, A_t, S_{t+1})$.

To analyze the long-run behaviour of our algorithm, we employ the ODE-based analysis (Borkar, 2008; Kushner and Yin, 2003; Ljung, 1977) of stochastic recursive algorithms. Here, we consider a deterministic ordinary differential equation (ODE) whose asymptotic flow is equivalent to the long-run behaviour of the stochastic recursion, and we analyze the qualitative behaviour of the solutions of the ODE to determine the asymptotically stable sets. The ODE-based analysis is elegant and conclusive, and it further guarantees that the limit points of the stochastic recursion will almost surely belong to the compact, connected, internally chain transitive invariant set of the equivalent ODE. Since the algorithm follows a multi-timescale stochastic approximation framework, we also resort to the more general multi-timescale differential-inclusion-based analysis proposed in (Borkar, 1997; Ramaswamy and Bhatnagar, 2016).

Note that there exists only a unilateral coupling between the neural network (where the feature vectors $\bar{\theta}_t$ are calibrated by stochastic gradient descent on $L_{\text{slow}}$) and the various policy evaluation algorithms (see Figure 7): the policy evaluation algorithms depend on the feature vectors $\bar{\theta}_t$, but not vice versa. Therefore, one can independently analyze the asymptotic behaviour of the feature vectors $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$. Also, as a technical requirement, since one cannot guarantee the stability (almost sure boundedness) of the iterates $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ (a necessary condition for the ODE-based analysis; see Chapter 2 of Borkar (2008)), we consider the following projected stochastic recursion:

$$\bar{\theta}_{t+1} = \Gamma^\Theta\Big(\bar{\theta}_t + \xi_t \delta_t \big(\nabla_{\bar{\theta}_t} \hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1} \nabla_{\bar{\theta}_t} \hat{Y}_{\bar{\theta}}(S_{t+1})\big)\Big), \qquad (9)$$

where $\Gamma^\Theta(\cdot)$ is the projection onto a pre-determined compact and convex subset $\Theta \subset \mathbb{R}^{m+d}$, i.e., $\Gamma^\Theta(\mathbf{x}) = \mathbf{x}$ for $\mathbf{x} \in \mathring{\Theta}$, while for $\mathbf{x} \notin \mathring{\Theta}$ it is the nearest point in $\Theta$ w.r.t. the Euclidean distance (or an equivalent metric). Define the filtration $\{\mathcal{F}_t\}_{t \in \mathbb{N}}$, a family of increasing natural $\sigma$-fields, where $\mathcal{F}_t \triangleq \sigma(\{\bar{\theta}_i, S_i, R_i;\ 0 \le i \le t\})$.

The following lemma characterizes the limiting behaviour of the iterates $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$:

Lemma 1. Let Assumptions 1-3 hold. Let $\Theta \subset \mathbb{R}^{m+d}$ be a compact, convex subset with smooth boundary, and let $\Gamma^\Theta$ be Frechet differentiable.
Further, let $\hat{\Gamma}^\Theta_{\bar{\theta}}\big(-\tfrac{1}{2}\nabla L_{\text{slow}}\big)(\bar{\theta})$ be Lipschitz continuous. Let $K$ be the set of asymptotically stable equilibria of the following ODE contained inside $\Theta$:

$$\frac{d}{dt}\bar{\theta}(t) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}\big(-\tfrac{1}{2}\nabla_{\bar{\theta}} L_{\text{slow}}\big)(\bar{\theta}(t)), \qquad \bar{\theta}(0) \in \mathring{\Theta},\ t \in \mathbb{R}_+.$$

Then the stochastic sequence $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ generated by the TTN converges almost surely to $K$.

Proof. We employ the ODE-based analysis as proposed in (Borkar, 2008; Kushner and Clark, 2012). First, recall the stochastic recursion which updates $\bar{\theta}_t$:

$$\bar{\theta}_{t+1} = \Gamma^\Theta\Big(\bar{\theta}_t + \xi_t \delta_t \big(\nabla_{\bar{\theta}_t} \hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1} \nabla_{\bar{\theta}_t} \hat{Y}_{\bar{\theta}}(S_{t+1})\big)\Big), \qquad (10)$$

where $\Gamma^\Theta$ is the projection onto a pre-determined compact and convex subset $\Theta \subset \mathbb{R}^{m+d}$. Here, $\delta_t \triangleq R_{t+1} + \gamma_{t+1}\hat{Y}_{\bar{\theta}_t}(S_{t+1}) - \hat{Y}_{\bar{\theta}_t}(S_t)$ is the temporal difference. Also, $\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}} \in \mathbb{R}^{(m+d) \times |\mathcal{S}|}$ is the Jacobian of $\hat{Y}_{\bar{\theta}}$ at $\bar{\theta} = \bar{\theta}_t$, and $\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(s)$ is the column corresponding to state $s$. The above equation can be rewritten as

$$\bar{\theta}_{t+1} = \Gamma^\Theta\big(\bar{\theta}_t + \xi_t(h^1(\bar{\theta}_t) + \mathbb{M}^1_{t+1} + \ell^1_t)\big), \qquad (11)$$

where $h^1(\bar{\theta}) \triangleq \mathbb{E}\big[\delta_t\big(\nabla_{\bar{\theta}}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}}\hat{Y}_{\bar{\theta}}(S_{t+1})\big)\big]$, the noise term is $\mathbb{M}^1_{t+1} \triangleq \delta_t\big(\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_{t+1})\big) - \mathbb{E}\big[\delta_t\big(\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_{t+1})\big) \mid \mathcal{F}_t\big]$, and the bias is $\ell^1_t \triangleq \mathbb{E}\big[\delta_t\big(\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_{t+1})\big) \mid \mathcal{F}_t\big] - h^1(\bar{\theta}_t)$. Further,

$$\bar{\theta}_{t+1} = \bar{\theta}_t + \xi_t \frac{\Gamma^\Theta\big(\bar{\theta}_t + \xi_t(h^1(\bar{\theta}_t) + \mathbb{M}^1_{t+1} + \ell^1_t)\big) - \bar{\theta}_t}{\xi_t} = \bar{\theta}_t + \xi_t\Big(\hat{\Gamma}^\Theta_{\bar{\theta}_t}(h^1(\bar{\theta}_t)) + \hat{\Gamma}^\Theta_{\bar{\theta}_t}(\mathbb{M}^1_{t+1}) + \hat{\Gamma}^\Theta_{\bar{\theta}_t}(\ell^1_t) + o(\xi_t)\Big), \qquad (12)$$

where $\hat{\Gamma}^\Theta$ is the Frechet derivative (defined in Eq. (8)). Note that $\Gamma^\Theta$ is single-valued since $\Theta$ is convex, and the above limit exists since the boundary $\partial\Theta$ is assumed smooth. Further, for $\mathbf{x} \in \mathring{\Theta}$, we have

$$\hat{\Gamma}^\Theta_{\mathbf{x}}(\mathbf{y}) = \lim_{\epsilon \to 0} \frac{\Gamma^\Theta(\mathbf{x} + \epsilon\mathbf{y}) - \mathbf{x}}{\epsilon} = \lim_{\epsilon \to 0} \frac{\mathbf{x} + \epsilon\mathbf{y} - \mathbf{x}}{\epsilon} = \mathbf{y} \quad (\text{for sufficiently small } \epsilon), \qquad (13)$$

i.e., $\hat{\Gamma}^\Theta_{\mathbf{x}}(\cdot)$ is an identity map for $\mathbf{x} \in \mathring{\Theta}$. A few observations are in order:

C1: $\hat{\Gamma}^\Theta_{\bar{\theta}}(h^1(\bar{\theta}))$ is a Lipschitz continuous function of $\bar{\theta}$. This follows from the hypothesis of the lemma.

C2: $\hat{\Gamma}^\Theta_{\bar{\theta}_t}(\mathbb{M}^1_{t+1})$ is a truncated martingale-difference noise. Indeed, it is easy to verify that the noise sequence $\{\mathbb{M}^1_{t+1}\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the filtration $\{\mathcal{F}_{t+1}\}_{t \in \mathbb{N}}$, i.e., $\mathbb{M}^1_{t+1}$ is $\mathcal{F}_{t+1}$-measurable and integrable for all $t \in \mathbb{N}$, and $\mathbb{E}[\mathbb{M}^1_{t+1} \mid \mathcal{F}_t] = 0$ a.s. for all $t \in \mathbb{N}$. Also, since $\Gamma^\Theta(\cdot)$ is a continuous linear operator, $\hat{\Gamma}^\Theta(\mathbb{M}^1_{t+1})$ is likewise $\mathcal{F}_{t+1}$-measurable and integrable for all $t \in \mathbb{N}$.

C3: $\hat{\Gamma}^\Theta_{\bar{\theta}_t}(\ell^1_t) \to 0$ as $t \to \infty$ a.s. Indeed,

$$\big\|\hat{\Gamma}^\Theta_{\bar{\theta}_t}(\ell^1_t)\big\| = \Big\|\lim_{\epsilon \to 0} \frac{\Gamma^\Theta(\bar{\theta}_t + \epsilon\ell^1_t) - \bar{\theta}_t}{\epsilon}\Big\| \le \lim_{\epsilon \to 0} \frac{\big\|\Gamma^\Theta(\bar{\theta}_t + \epsilon\ell^1_t) - \Gamma^\Theta(\bar{\theta}_t)\big\|}{\epsilon} \le \lim_{\epsilon \to 0} \frac{\|\bar{\theta}_t + \epsilon\ell^1_t - \bar{\theta}_t\|}{\epsilon} = \|\ell^1_t\|.$$

Taking $t \to \infty$, C3 then follows directly from the ergodicity (Levin and Peres, 2017) (Assumption 2) and the finiteness of the underlying Markov chain.

C4: $o(\xi_t) \to 0$ as $t \to \infty$ (follows from Assumption 1).

C5: The iterates $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ are stable (forcefully), i.e., bounded almost surely, since $\bar{\theta}_t \in \Theta$ for all $t \in \mathbb{N}$ (ensured by the projection operator $\Gamma^\Theta$) and $\Theta$ is compact (i.e., closed and bounded).

C6: There exists $K_0 \in (0, \infty)$ such that

$$\mathbb{E}\big[\|\hat{\Gamma}^\Theta_{\bar{\theta}_t}(\mathbb{M}^1_{t+1})\|^2 \mid \mathcal{F}_t\big] \le K_0(1 + \|\bar{\theta}_t\|^2) \quad \text{a.s.} \qquad (14)$$

This follows directly from the finiteness of the Markov chain and from the assumption that the boundary $\partial\Theta$ is smooth.

Now, by appealing to Theorem 2, Chapter 2 of (Borkar, 2008), we conclude that the stochastic recursion (10) asymptotically tracks the following ODE:

$$\frac{d}{dt}\bar{\theta}(t) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}(h^1(\bar{\theta}(t))) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}\big(-\tfrac{1}{2}\nabla_{\bar{\theta}} L_{\text{slow}}\big)(\bar{\theta}(t)), \qquad \bar{\theta}(0) \in \mathring{\Theta},\ t \in \mathbb{R}_+. \qquad (15)$$

In other words, the stochastic recursion (10) converges to the asymptotically stable equilibria of the ODE (15) contained inside $\Theta$.

Remark 1. It is indeed non-trivial to determine the constraint set $\Theta$ without adequate prior knowledge about the limit set of the ODE (15).
A pragmatic approach to overcome this concern is to initiate the stochastic recursion with an arbitrary convex, compact set $\Theta$ with a smooth boundary and gradually spread it to the whole of $\mathbb{R}^{m+d}$ (Chen, 2006).

Remark 2. It is also important to characterize the hypothesis of the above lemma (i.e., that $\hat{\Gamma}^\Theta_{\bar{\theta}}(-\tfrac{1}{2}\nabla L_{\text{slow}})(\bar{\theta})$ is Lipschitz continuous) in terms of the features $\hat{Y}_{\bar{\theta}}$. To achieve this, one has to consider the non-projected form of the ODE (15). When one adopts the spreading approach proposed in the above remark, one is essentially encouraged to consider the non-projected form, since the limiting flow of the ODE arising from the projected stochastic recursion is more likely to lie inside the compact, convex set as $\Theta$ becomes larger. Thereupon, it is easy to observe that $\hat{Y}_{\bar{\theta}}$ being twice continuously differentiable is sufficient to ensure the Lipschitz continuity of $\hat{\Gamma}^\Theta_{\bar{\theta}}(-\tfrac{1}{2}\nabla L_{\text{slow}})(\bar{\theta})$. Additionally, in that case $K = \{\bar{\theta} \mid \nabla_{\bar{\theta}} L_{\text{slow}}(\bar{\theta}) = 0\}$, which is the set of local extrema of $L_{\text{slow}}$.

B.4 TD(λ) ALGORITHM

One can directly apply the TD(λ) algorithm with linear function approximation to estimate the value function with respect to the features provided by the neural network. The TD(λ) algorithm is given in Algorithm 2. Here $\mathbf{e}_t, \mathbf{w}_t \in \mathbb{R}^d$, and $\delta_t \triangleq R_{t+1} + \gamma_{t+1}\mathbf{w}_t^\top x_{\theta_t}(S_{t+1}) - \mathbf{w}_t^\top x_{\theta_t}(S_t)$ is the temporal difference.

Algorithm 2 TD(λ) algorithm
Parameters: $\alpha_t > 0$, $\lambda \in [0, 1]$;
Initialization: $\mathbf{w}_0 = 0$, $\mathbf{e}_0 = 0$;
For each transition $(S_t, R_{t+1}, S_{t+1})$ in $\mathcal{O}^\pi$, do:

$$\mathbf{e}_{t+1} = x_{\theta_t}(S_t) + \gamma_{t+1}\lambda\,\mathbf{e}_t; \qquad (16)$$
$$\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha_t\big(R_{t+1} + \gamma_{t+1}\mathbf{w}_t^\top x_{\theta_t}(S_{t+1}) - \mathbf{w}_t^\top x_{\theta_t}(S_t)\big)\,\mathbf{e}_t; \qquad (17)$$

Assumption 4-TD(λ): The pre-determined, deterministic step-size sequence $\{\alpha_t\}_{t \in \mathbb{N}}$ satisfies $\alpha_t > 0$ for all $t \in \mathbb{N}$, $\sum_{t \in \mathbb{N}} \alpha_t = \infty$, $\sum_{t \in \mathbb{N}} \alpha_t^2 < \infty$, and $\lim_{t \to \infty} \xi_t/\alpha_t = 0$.

Note that the step-size schedules $\{\alpha_t\}_{t \in \mathbb{N}}$ and $\{\xi_t\}_{t \in \mathbb{N}}$ satisfy $\xi_t/\alpha_t \to 0$, which implies that $\{\xi_t\}$ converges to 0 relatively faster than $\{\alpha_t\}$. This disparity in learning rates induces asynchronous convergence behaviour asymptotically (Borkar, 1997), with the feature parameter sequence $\{\bar{\theta}_t\}$ converging more slowly than the TD(λ) sequence $\{\mathbf{w}_t\}$. The rationale is that the increment of the underlying stochastic gradient descent of the neural network is smaller than that of the TD(λ) recursion (17), since the neural network SGD is weighted by the step-size schedule $\{\xi_t\}_{t \in \mathbb{N}}$, which is smaller than $\{\alpha_t\}_{t \in \mathbb{N}}$ for all but finitely many $t$. This pseudo-heterogeneity induces multiple perspectives: viewed from the faster-timescale recursion (controlled by $\alpha_t$), the slower-timescale recursion (controlled by $\xi_t$) appears quasi-static ("almost a constant"), while viewed from the slower timescale, the faster-timescale recursion appears equilibrated. Further, it is analytically admissible (Borkar, 1997) to consider the slow-timescale stochastic recursion (i.e., the neural network SGD) to be quasi-stationary (i.e., $\bar{\theta}_t \equiv \bar{\theta}$ for all $t \in \mathbb{N}$) while analyzing the asymptotic behaviour of the relatively faster-timescale stochastic recursion (17). Thereupon, we obtain the following directly from Theorem 1 of (Tsitsiklis and Van Roy, 1997).

Lemma 2. Assume $\bar{\theta}_t \equiv \bar{\theta}$ for all $t \in \mathbb{N}$. Let Assumptions 1-3 and 4-TD(λ) hold. Then for any $\lambda \in [0, 1]$, the stochastic sequence $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit $\mathbf{w}^*$, where $\mathbf{w}^*$ satisfies

$$\Pi_{\bar{\theta}} T^{(\lambda)}(\Phi_{\bar{\theta}}\mathbf{w}^*) = \Phi_{\bar{\theta}}\mathbf{w}^*, \qquad (18)$$

with $T^{(\lambda)}J(s) \triangleq (1 - \lambda)\sum_{i=0}^{\infty}\lambda^i\,\mathbb{E}\Big[\sum_{j=0}^{i}\gamma^{[j]}R_{j+1} + \gamma^{[i+1]}J(S_{i+1}) \,\Big|\, S_0 = s\Big]$ and $\gamma^{[j]} = \prod_{i=0}^{j}\gamma_i$ (with $\gamma_0 = 1$). Here, $\Pi_{\bar{\theta}}$ is defined according to Eq. (3) with $\mathcal{F} = \{\Phi_{\bar{\theta}}\mathbf{w} \mid \mathbf{w} \in \mathbb{R}^d\}$.
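For concreteness, a Python transcription of Algorithm 2 follows. The function framing is ours; following the standard linear TD(λ) form, the freshly updated trace is applied in the weight update.

```python
import numpy as np

def td_lambda_step(w, e, x, r, x_next, gamma, lam, alpha):
    """One TD(lambda) step on fixed features, after Eqs. (16)-(17).

    w: weights, e: eligibility trace, x = x_theta(S_t), x_next = x_theta(S_{t+1}).
    Returns the updated (w, e).
    """
    delta = r + gamma * (w @ x_next) - (w @ x)  # temporal-difference error
    e = x + gamma * lam * e                      # accumulate trace, Eq. (16)
    w = w + alpha * delta * e                    # TD(lambda) update, Eq. (17)
    return w, e
```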
For other single-timescale prediction methods, like ETD and LSPE, similar results follow. The least-squares method LSTD, which offers the significant advantage of not depending on step sizes (albeit at greater computational expense), couples smoothly into the TTN setting without any additional considerations.

B.5 GTD2 ALGORITHM

One cannot, however, directly apply the original GTD2 and TDC algorithms to the TTN setting, since a necessary condition for the convergence of these algorithms is the non-singularity of the feature-specific matrices $\mathbb{E}\big[x_{\theta_t}(S_t)x_{\theta_t}(S_t)^\top\big]$ and $\mathbb{E}\big[(x_{\theta_t}(S_t) - \gamma_{t+1}x_{\theta_t}(S_{t+1}))\,x_{\theta_t}(S_t)^\top\big]$ (see Theorems 1 and 2 of Sutton et al. (2009)). Without the non-singularity assumption, it is indeed hard to guarantee the almost sure boundedness of the GTD2/TDC iterates. In the TTN setting considered in this paper, one cannot explicitly ensure this condition, since the features are produced by a neural network and it is not clear how to control the network to generate a collection of features with the desired non-singularity characteristic. Hence, one has to consider the projected versions of these algorithms. We consider the projected GTD2 algorithm given in Algorithm 3.

Algorithm 3 GTD2 algorithm
Parameters: $\alpha_t, \beta_t$;
Initialization: $\mathbf{u}_0 \in U$, $\mathbf{w}_0 \in W$;
For each transition $(S_t, R_{t+1}, S_{t+1})$ in $\mathcal{O}^\pi$, do:

$$\mathbf{w}_{t+1} = \Gamma^W\Big(\mathbf{w}_t + \alpha_t\big(\delta^{\mathbf{u}_t}_{t+1}\,x_{\theta_t}(S_t) - (\mathbf{w}_t^\top x_{\theta_t}(S_t))\,x_{\theta_t}(S_t)\big)\Big); \qquad (19)$$
$$\mathbf{u}_{t+1} = \Gamma^U\Big(\mathbf{u}_t + \beta_t\,(x_{\theta_t}(S_t) - \gamma_{t+1}x_{\theta_t}(S_{t+1}))\,(\mathbf{w}_t^\top x_{\theta_t}(S_t))\Big); \qquad (20)$$

Here $\mathbf{u}_t, \mathbf{w}_t \in \mathbb{R}^d$, and $\delta^{\mathbf{u}}_{t+1} \triangleq R_{t+1} + \gamma_{t+1}\mathbf{u}^\top x_{\theta_t}(S_{t+1}) - \mathbf{u}^\top x_{\theta_t}(S_t)$ is the temporal difference. $\Gamma^W(\cdot)$ is the projection operator onto a pre-determined convex, compact subset $W \subset \mathbb{R}^d$ with a smooth boundary $\partial W$: it maps vectors in $\mathbb{R}^d$ to the nearest vectors in $W$ w.r.t. the Euclidean distance (or an equivalent metric). Convexity and compactness ensure that the projection is unique and belongs to $W$. Similarly, $U$ is a pre-determined convex, compact subset of $\mathbb{R}^d$ with a smooth boundary $\partial U$. Projection is required since the stability of the iterates $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ and $\{\mathbf{u}_t\}_{t \in \mathbb{N}}$ is hard to guarantee otherwise.

Assumption 4-GTD2: The pre-determined, deterministic step-size sequences $\{\alpha_t\}_{t \in \mathbb{N}}$ and $\{\beta_t\}_{t \in \mathbb{N}}$ satisfy $\alpha_t, \beta_t > 0$ for all $t \in \mathbb{N}$, $\sum_{t \in \mathbb{N}} \alpha_t = \sum_{t \in \mathbb{N}} \beta_t = \infty$, $\sum_{t \in \mathbb{N}} (\alpha_t^2 + \beta_t^2) < \infty$, $\lim_{t \to \infty} \beta_t/\alpha_t = 0$, and $\lim_{t \to \infty} \xi_t/\beta_t = 0$.

Define the filtration $\{\mathcal{F}_t\}_{t \in \mathbb{N}}$, a family of increasing natural $\sigma$-fields, where $\mathcal{F}_t \triangleq \sigma(\{\mathbf{w}_i, \mathbf{u}_i, \bar{\theta}_i, S_i, R_i;\ 0 \le i \le t\})$. As in the TD(λ) case, we follow the quasi-stationary argument, and analyze the asymptotic behaviour of the GTD2 algorithm under the assumption that the feature vector $\bar{\theta}_t$ is quasi-static, i.e., $\bar{\theta}_t \equiv \bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$.
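Before stating the convergence result, a minimal sketch of the projected updates (19)-(20) follows. We realize the projections $\Gamma^W$ and $\Gamma^U$ as clipping onto Euclidean balls of an assumed radius, which is one simple choice of compact, convex set, not one prescribed by the paper.

```python
import numpy as np

def project_ball(v, radius):
    """Euclidean projection onto a ball of the given radius (a compact, convex set)."""
    norm = np.linalg.norm(v)
    return v if norm <= radius else v * (radius / norm)

def gtd2_step(u, w, x, r, x_next, gamma, alpha, beta, radius=100.0):
    """One projected GTD2 step, Eqs. (19)-(20), on fixed features."""
    delta_u = r + gamma * (u @ x_next) - (u @ x)  # TD error under slow weights u
    w_next = project_ball(w + alpha * (delta_u * x - (w @ x) * x), radius)
    u_next = project_ball(u + beta * (x - gamma * x_next) * (w @ x), radius)
    return u_next, w_next
```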
Lemma 3. Assume $\bar{\theta}_t \equiv \bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$ for all $t \in \mathbb{N}$. Let Assumptions 1-3 and 4-GTD2 hold. Then

$$\Big\{(\mathbf{u}, \mathbf{w})^\top \,\Big|\, \liminf_{t \to \infty}\big\|(\mathbf{u}, \mathbf{w})^\top - (\mathbf{u}_t, \mathbf{w}_t)^\top\big\| = 0\Big\} \subseteq \bigcup_{\mathbf{u} \in \mathcal{A}^*}\big\{(\mathbf{u}, \mathbf{w})^\top \,\big|\, \mathbf{w} \in \mathcal{A}_{\mathbf{u}}\big\}, \qquad (21)$$

where $\mathcal{A}^*$ is the set of asymptotically stable equilibria of the ODE

$$\frac{d}{dt}\mathbf{u}(t) = \hat{\Gamma}^U_{\mathbf{u}(t)}\big(\Phi_{\bar{\theta}}^\top D_{d^\pi}(\mathbb{I} - \gamma_{t+1}P^\pi)\Phi_{\bar{\theta}}\,\mathbf{u}(t)\big), \qquad \mathbf{u}(0) \in \mathring{U},\ t \in \mathbb{R}_+, \qquad (22)$$

and $\mathcal{A}_{\mathbf{u}}$ is the set of asymptotically stable equilibria of the ODE

$$\frac{d}{dt}\mathbf{w}(t) = \hat{\Gamma}^W_{\mathbf{w}(t)}\big(\Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}} - \Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\,\mathbf{w}(t)\big), \qquad \mathbf{w}(0) \in \mathring{W},\ t \in \mathbb{R}_+,$$

with $\delta^{\mathbf{u}}$ defined in Eq. (29).

Proof. The two equations of the modified GTD2 algorithm constitute a multi-timescale stochastic approximation recursion, with a bilateral coupling between the stochastic recursions (19) and (20). Since the step-size sequences $\{\alpha_t\}_{t \in \mathbb{N}}$ and $\{\beta_t\}_{t \in \mathbb{N}}$ satisfy $\beta_t/\alpha_t \to 0$, we have $\beta_t \to 0$ faster than $\alpha_t \to 0$. This disparity in learning rates induces a pseudo-heterogeneous rate of convergence (or timescales) between the individual stochastic recursions, which results in pseudo-asynchronous convergence behaviour over a finite time window. Note that the coherent long-run behaviour of the multi-timescale stochastic recursion asymptotically follows this short-term behaviour as the window size extends to infinity (Borkar, 1997; 2008). This pseudo-behaviour induces multiple viewpoints: observed from the faster-timescale recursion (controlled by $\alpha_t$), the slower-timescale recursion (controlled by $\beta_t$) appears quasi-static ("almost a constant"), while observed from the slower timescale, the faster-timescale recursion appears equilibrated. Further, it is analytically admissible (Borkar, 1997) to consider the slow-timescale stochastic recursion (20) to be quasi-stationary (i.e., $\mathbf{u}_t \equiv \mathbf{u}$ for all $t \in \mathbb{N}$) while analyzing the limiting behaviour of the relatively faster-timescale stochastic recursion (19).

Analysis of the faster-timescale recursion: The faster-timescale stochastic recursion of the GTD2 algorithm is

$$\mathbf{w}_{t+1} = \Gamma^W\Big(\mathbf{w}_t + \alpha_t\big(\delta^{\mathbf{u}_t}_{t+1}x_{\theta_t}(S_t) - (\mathbf{w}_t^\top x_{\theta_t}(S_t))\,x_{\theta_t}(S_t)\big)\Big). \qquad (23)$$

Under the aforementioned quasi-stationary premise that $\mathbf{u}_t \equiv \mathbf{u}$ and $\bar{\theta}_t \equiv \bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$ for all $t \in \mathbb{N}$, we analyze the long-term behaviour of the recursion

$$\mathbf{w}_{t+1} = \Gamma^W\big(\mathbf{w}_t + \alpha_t(\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top\mathbf{x}_t)\mathbf{x}_t)\big), \qquad (24)$$

where $\mathbf{x}_t = x_\theta(S_t)$ and $\delta^{\mathbf{u}}_{t+1} \triangleq R_{t+1} + \gamma_{t+1}\mathbf{u}^\top\mathbf{x}_{t+1} - \mathbf{u}^\top\mathbf{x}_t$. The above equation can be rearranged as

$$\mathbf{w}_{t+1} = \Gamma^W\big(\mathbf{w}_t + \alpha_t(h^2(\mathbf{w}_t) + \mathbb{M}^2_{t+1} + \ell^2_t)\big),$$

where the noise is $\mathbb{M}^2_{t+1} \triangleq \delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top\mathbf{x}_t)\mathbf{x}_t - \mathbb{E}\big[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top\mathbf{x}_t)\mathbf{x}_t \mid \mathcal{F}_t\big]$, $h^2(\mathbf{w}) \triangleq \mathbb{E}\big[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}^\top\mathbf{x}_t)\mathbf{x}_t\big]$, and the bias is $\ell^2_t \triangleq \mathbb{E}\big[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top\mathbf{x}_t)\mathbf{x}_t \mid \mathcal{F}_t\big] - \mathbb{E}\big[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top\mathbf{x}_t)\mathbf{x}_t\big]$. Similarly to Eq. (12), we can rewrite the recursion as

$$\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha_t\Big(\hat{\Gamma}^W_{\mathbf{w}_t}(h^2(\mathbf{w}_t)) + \hat{\Gamma}^W_{\mathbf{w}_t}(\mathbb{M}^2_{t+1}) + \hat{\Gamma}^W_{\mathbf{w}_t}(\ell^2_t) + o(\alpha_t)\Big), \qquad (25)$$

where $\hat{\Gamma}^W_{\mathbf{w}_t}(\cdot)$ is the Frechet derivative (defined in Eq. (8)) of the projection operator $\Gamma^W$. A few observations are in order:

D1: The iterates $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ are stable, i.e., $\sup_{t \in \mathbb{N}}\|\mathbf{w}_t\| < \infty$ a.s. This follows immediately since $W$ is bounded.

D2: $\{\hat{\Gamma}^W_{\mathbf{w}_t}(\mathbb{M}^2_{t+1})\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the filtration $\{\mathcal{F}_{t+1}\}_{t \in \mathbb{N}}$. This follows directly since $\{\mathbb{M}^2_{t+1}\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the same filtration.

D3: $\{\hat{\Gamma}^W_{\mathbf{w}_t}(\mathbb{M}^2_{t+1})\}_{t \in \mathbb{N}}$ are square-integrable, and there exists $K_2 \in (0, \infty)$ such that

$$\mathbb{E}\big[\|\hat{\Gamma}^W_{\mathbf{w}_t}(\mathbb{M}^2_{t+1})\|^2 \mid \mathcal{F}_t\big] \le K_2(1 + \|\mathbf{w}_t\|^2) \quad \text{a.s.,}\ t \in \mathbb{N}. \qquad (26)$$

This follows directly from the finiteness of the underlying Markov chain and from the assumption that the boundary $\partial W$ is smooth.

D4: $\hat{\Gamma}^W_{\mathbf{w}}(h^2(\mathbf{w}))$ is Lipschitz continuous with respect to $\mathbf{w}$ (proof similar to C1).

D5: $\hat{\Gamma}^W_{\mathbf{w}_t}(\ell^2_t) \to 0$ as $t \to \infty$ a.s. (proof similar to C3).

Now, by appealing to Theorem 2, Chapter 2 of (Borkar, 2008) along with the above observations, we conclude that the stochastic recursion (23) asymptotically tracks the following ODE almost surely:

$$\frac{d}{dt}\mathbf{w}(t) = \hat{\Gamma}^W_{\mathbf{w}(t)}(h^2(\mathbf{w}(t))) = \hat{\Gamma}^W_{\mathbf{w}(t)}\big(\mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t] - \mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]\,\mathbf{w}(t)\big), \qquad \mathbf{w}(0) \in \mathring{W},\ t \in \mathbb{R}_+. \qquad (27)$$

Therefore, $\mathbf{w}_t$ converges asymptotically, almost surely, to the stable equilibria of the above ODE contained inside $W$.
Qualitative analysis of the solutions of ODE (27): A qualitative analysis of the long-run behaviour of the flow induced by the above ODE shows that the stable limit set is the set of solutions of the following linear system inside $W$ (this follows since $\hat{\Gamma}^W_{\mathbf{w}}(\mathbf{y}) = \mathbf{y}$ for $\mathbf{w} \in \mathring{W}$, and because $\hat{\Gamma}^W_{\mathbf{w}}(\cdot)$ does not contribute any additional limit points on the boundary other than the roots of $h^2$, since $\partial W$ is smooth):

$$\mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]\,\mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t] - \mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]\,\mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]\,\mathbf{w} = 0 \;\Rightarrow\; \mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]\,\mathbf{w} = \mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t]. \qquad (28)$$

Note that $\mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top] = \Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}$.

Claim 1: The above linear system of equations is consistent, i.e., $\mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t] \in \mathcal{R}(\Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}})$, the range space of $\Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}$. To see this, note that the above system can be viewed as the least-squares solution to $\Phi_{\bar{\theta}}\mathbf{w} = \delta^{\mathbf{u}}$ with respect to the weighted norm $\|\cdot\|_{D_{d^\pi}}$, where

$$\delta^{\mathbf{u}}(s) = \bar{R}^\pi(s) + \gamma_{t+1}\sum_{s' \in \mathcal{S}} P^\pi_{s,s'}\,\mathbf{u}^\top x_\theta(s') - \mathbf{u}^\top x_\theta(s), \qquad (29)$$

and $\bar{R}^\pi$ is the expected reward. (Note that $\mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t] = \Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}}$.) The least-squares solution $\mathbf{w}_0 \in \mathbb{R}^d$ (which certainly exists, but may not be unique) satisfies

$$\langle \Phi_{\bar{\theta}}\mathbf{w},\ \delta^{\mathbf{u}} - \Phi_{\bar{\theta}}\mathbf{w}_0 \rangle_{D_{d^\pi}} = 0 \ \ \forall \mathbf{w} \in \mathbb{R}^d \;\Rightarrow\; \big\langle \mathbf{w},\ D^{-1}_{d^\pi}\Phi_{\bar{\theta}}^\top D_{d^\pi}(\delta^{\mathbf{u}} - \Phi_{\bar{\theta}}\mathbf{w}_0) \big\rangle_{D_{d^\pi}} = 0 \ \ \forall \mathbf{w} \in \mathbb{R}^d.$$

Now choose $\mathbf{w} = D^{-1}_{d^\pi}\Phi_{\bar{\theta}}^\top D_{d^\pi}(\delta^{\mathbf{u}} - \Phi_{\bar{\theta}}\mathbf{w}_0)$. Then

$$\Phi_{\bar{\theta}}^\top D_{d^\pi}(\delta^{\mathbf{u}} - \Phi_{\bar{\theta}}\mathbf{w}_0) = 0 \;\Rightarrow\; \Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\mathbf{w}_0 = \Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}}.$$

[End of proof of Claim 1]

Since $\Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}$ may be singular (i.e., not invertible), the above least-squares solution may not be unique, and hence the collection of asymptotically stable equilibria of the flow induced by the ODE (27) may not be a singleton for every $\mathbf{u}$. Denote the set of asymptotically stable equilibria of the flow induced by the said ODE by $\mathcal{A}_{\mathbf{u}}$, where $\emptyset \ne \mathcal{A}_{\mathbf{u}} \subseteq W$.

Analysis of the slower-timescale recursion: The slower-timescale stochastic recursion of the GTD2 algorithm is

$$\mathbf{u}_{t+1} = \Gamma^U\big(\mathbf{u}_t + \beta_t(\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})(\mathbf{w}_t^\top\mathbf{x}_t)\big), \qquad \mathbf{u}_t \in \mathbb{R}^d,\ \mathbf{u}_0 \in U. \qquad (30)$$

Note that since $\xi_t/\beta_t \to 0$, the stochastic recursion (20) runs on a faster timescale relative to the neural network stochastic recursion (10), and hence we continue to maintain the quasi-stationary condition $\bar{\theta}_t \equiv \bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$. The above equation can be rearranged as

$$\mathbf{u}_{t+1} = \Gamma^U\big(\mathbf{u}_t + \beta_t(\mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}] + \mathbb{M}^3_{t+1} + \ell^3_t)\big), \qquad (31)$$

where $\Delta^{\mathbf{w}_t}_{t+1} \triangleq (\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})(\mathbf{w}_t^\top\mathbf{x}_t) = \big((\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})\mathbf{x}_t^\top\big)\mathbf{w}_t$, the noise term is $\mathbb{M}^3_{t+1} \triangleq \Delta^{\mathbf{w}_t}_{t+1} - \mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1} \mid \mathcal{F}_t]$, and the bias is $\ell^3_t \triangleq \mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1} \mid \mathcal{F}_t] - \mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}]$. Similarly to Eq. (12), we can rewrite the recursion as

$$\mathbf{u}_{t+1} = \mathbf{u}_t + \beta_t\Big(\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}]) + \hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1}) + \hat{\Gamma}^U_{\mathbf{u}_t}(\ell^3_t) + o(\beta_t)\Big), \qquad (32)$$

where $\hat{\Gamma}^U_{\mathbf{u}_t}(\cdot)$ is the Frechet derivative (defined in Eq. (8)) of the projection operator $\Gamma^U$. The above equation can then be interpreted as a stochastic recursive inclusion:

$$\mathbf{u}_{t+1} = \mathbf{u}_t + \beta_t\Big(\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}]) + \hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1}) + \hat{\Gamma}^U_{\mathbf{u}_t}(\ell^3_t) + o(\beta_t)\Big), \quad \text{with } \hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}]) \in h^3(\mathbf{u}_t), \qquad (33)$$

where the set-valued map $h^3 : \mathbb{R}^d \to \{\text{subsets of } \mathbb{R}^d\}$ is defined as

$$h^3(\mathbf{u}) \triangleq \big\{\hat{\Gamma}^U_{\mathbf{u}}(\mathbb{E}[\Delta^{\mathbf{w}}_{t+1}]),\ \text{where } \mathbf{w} \in \mathcal{A}_{\mathbf{u}}\big\}. \qquad (34)$$

Indeed, $h^3(\mathbf{u}) = \{\hat{\Gamma}^U_{\mathbf{u}}(B\mathbf{w}),\ \text{where } B = \mathbb{E}[(\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})\mathbf{x}_t^\top]$ and $\mathbf{w} \in \mathcal{A}_{\mathbf{u}}\}$. It is easy to verify that $B = \Phi_{\bar{\theta}}^\top D_{d^\pi}(\mathbb{I} - \gamma_{t+1}P^\pi)\Phi_{\bar{\theta}}$. One cannot directly apply the multi-timescale stochastic approximation results from (Borkar, 1997) here, since that paper assumes that the limit point of the faster-timescale recursion is unique (see Chapter 6 of (Borkar, 2008)).
But in our setting, the faster-timescale recursion (23) has several limit points (note that the stable limit set $\mathcal{A}_{\mathbf{u}}$ is not a singleton). This is where our analysis differs from that of the seminal paper on the GTD2 algorithm, where it is assumed that both the matrices $\mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]$ and $\mathbb{E}[(\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})\mathbf{x}_t^\top]$ are non-singular. In our TTN setting, one cannot guarantee this condition, since the features are provided by a neural network and it is hard to fabricate the network so as to generate a collection of features with the desired non-singularity properties. To analyze the limiting behaviour of the GTD2 algorithm under this relaxed singularity setting, one has to view the stochastic recursion (30) as a stochastic recursive inclusion (Benaïm et al., 2005) and apply the recent results of (Ramaswamy and Bhatnagar, 2016), which analyze the asymptotic behaviour of general multi-timescale stochastic recursive inclusions. A few observations are in order:

E1: For each $\mathbf{u} \in U$, $h^3(\mathbf{u})$ is a singleton: this follows from the definition of $h^3$ and Claim 1 above, where we established that each $\mathbf{w} \in \mathcal{A}_{\mathbf{u}}$ is a least-squares solution to the linear system $\Phi_{\bar{\theta}}\mathbf{w} = \delta^{\mathbf{u}}$. It further implies that $h^3$ is a Marchaud map as well.

E2: $\sup_{t \in \mathbb{N}}(\|\mathbf{w}_t\| + \|\mathbf{u}_t\|) < \infty$ a.s. This follows since $W$ and $U$ are bounded sets.

E3: $\{\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1})\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the filtration $\{\mathcal{F}_{t+1}\}_{t \in \mathbb{N}}$. This follows directly since $\{\mathbb{M}^3_{t+1}\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the same filtration.

E4: $\{\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1})\}_{t \in \mathbb{N}}$ are square-integrable, and there exists $K_3 \in (0, \infty)$ such that

$$\mathbb{E}\big[\|\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1})\|^2 \mid \mathcal{F}_t\big] \le K_3(1 + \|\mathbf{u}_t\|^2 + \|\mathbf{w}_t\|^2) \quad \text{a.s.,}\ t \in \mathbb{N}. \qquad (35)$$

This follows directly from the finiteness of the underlying Markov chain and from the assumption that the boundary $\partial U$ is smooth.

E5: $\hat{\Gamma}^U_{\mathbf{u}_t}(\ell^3_t) \to 0$ as $t \to \infty$ a.s. (proof similar to C3). This implies that the bias is asymptotically irrelevant.

E6: For each $\mathbf{u} \in U$, the set $\mathcal{A}_{\mathbf{u}}$ is a globally attracting set of the ODE (27) and is also Lyapunov stable. Further, there exists $K_4 \in (0, \infty)$ such that $\sup_{\mathbf{w} \in \mathcal{A}_{\mathbf{u}}}\|\mathbf{w}\| \le K_4(1 + \|\mathbf{u}\|)$. This follows since $\mathcal{A}_{\mathbf{u}} \subseteq W$ and $W$ is bounded.

E7: The set-valued map $q : U \to \{\text{subsets of } \mathbb{R}^d\}$ given by $q(\mathbf{u}) = \mathcal{A}_{\mathbf{u}}$ is upper-semicontinuous: consider convergent sequences $\{\mathbf{u}_n\}_{n \in \mathbb{N}} \to \mathbf{u}$ and $\{\mathbf{w}_n\}_{n \in \mathbb{N}} \to \mathbf{w}$ with $\mathbf{u}_n \in U$ and $\mathbf{w}_n \in q(\mathbf{u}_n) = \mathcal{A}_{\mathbf{u}_n}$. Note that $\mathbf{w} \in W$ and $\mathbf{u} \in U$, since $W$ and $U$ are compact. Also, $\Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\mathbf{w}_n = \Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}_n}$ (from Claim 1). Taking limits on both sides, we get

$$\lim_{n \to \infty}\Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\mathbf{w}_n = \lim_{n \to \infty}\Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}_n} \;\Rightarrow\; \Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\mathbf{w} = \Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}}.$$

This implies that $\mathbf{w} \in \mathcal{A}_{\mathbf{u}} = q(\mathbf{u})$, and the claim follows.

We have thus established all the conditions required by Theorem 3.10 of (Ramaswamy and Bhatnagar, 2016) to characterize the limiting behaviour of the stochastic recursive inclusion (33). By appealing to that theorem, we obtain the following result on the asymptotic behaviour of the GTD2 algorithm:

$$\Big\{(\mathbf{u}, \mathbf{w})^\top \,\Big|\, \liminf_{t \to \infty}\big\|(\mathbf{u}, \mathbf{w})^\top - (\mathbf{u}_t, \mathbf{w}_t)^\top\big\| = 0\Big\} \subseteq \bigcup_{\mathbf{u} \in \mathcal{A}^*}\big\{(\mathbf{u}, \mathbf{w})^\top \,\big|\, \mathbf{w} \in \mathcal{A}_{\mathbf{u}}\big\}, \qquad (36)$$

where $\mathcal{A}^*$ is the set of asymptotically stable equilibria of the ODE

$$\frac{d}{dt}\mathbf{u}(t) = h^3(\mathbf{u}(t)), \qquad \mathbf{u}(0) \in \mathring{U},\ t \in \mathbb{R}_+. \qquad (37)$$

One can obtain similar results for projected TDC. We now state our main result:

Theorem 2. Let $\Theta \subset \mathbb{R}^{m+d}$ be a compact, convex subset with smooth boundary, and let $\Gamma^\Theta$ be Frechet differentiable. Further, let $\hat{\Gamma}^\Theta_{\bar{\theta}}\big(-\tfrac{1}{2}\nabla L_{\text{slow}}\big)(\bar{\theta})$ be Lipschitz continuous. Also, let Assumptions 1-3 hold.
Let $K$ be the set of asymptotically stable equilibria of the following ODE contained inside $\Theta$:

$$\frac{d}{dt}\bar{\theta}(t) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}\big(-\tfrac{1}{2}\nabla_{\bar{\theta}} L_{\text{slow}}\big)(\bar{\theta}(t)), \qquad \bar{\theta}(0) \in \mathring{\Theta},\ t \in \mathbb{R}_+.$$

Then the stochastic sequence $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ generated by the TTN converges almost surely to $K$ (sample-path dependent). Further,

TD(λ) convergence: Under the additional Assumption 4-TD(λ), we obtain the following result. For any $\lambda \in [0, 1]$, the stochastic sequence $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit $\mathbf{w}^*$, where $\mathbf{w}^*$ satisfies

$$\Pi_{\bar{\theta}^*} T^{(\lambda)}(\Phi_{\bar{\theta}^*}\mathbf{w}^*) = \Phi_{\bar{\theta}^*}\mathbf{w}^*, \qquad (38)$$

with $T^{(\lambda)}$ defined in Lemma 2 and $\bar{\theta}^* \in K$ (sample-path dependent).

GTD2 convergence: Let $W, U \subset \mathbb{R}^d$ be compact, convex subsets with smooth boundaries. Let Assumption 4-GTD2 hold, and let $\Gamma^W$ and $\Gamma^U$ be Frechet differentiable. Then the stochastic sequences $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ and $\{\mathbf{u}_t\}_{t \in \mathbb{N}}$ generated by the GTD2 algorithm (Algorithm 3) within the TTN setting satisfy

$$\Big\{(\mathbf{u}, \mathbf{w})^\top \,\Big|\, \liminf_{t \to \infty}\big\|(\mathbf{u}, \mathbf{w})^\top - (\mathbf{u}_t, \mathbf{w}_t)^\top\big\| = 0\Big\} \subseteq \bigcup_{\mathbf{u} \in \mathcal{A}^*}\big\{(\mathbf{u}, \mathbf{w})^\top \,\big|\, \mathbf{w} \in \mathcal{A}_{\mathbf{u}}\big\},$$

where $\mathcal{A}^*$ is the set of asymptotically stable equilibria of the ODE

$$\frac{d}{dt}\mathbf{u}(t) = \hat{\Gamma}^U_{\mathbf{u}(t)}\big(\Phi_{\bar{\theta}^*}^\top D_{d^\pi}(\mathbb{I} - \gamma_{t+1}P^\pi)\Phi_{\bar{\theta}^*}\,\mathbf{u}(t)\big), \qquad \mathbf{u}(0) \in \mathring{U},\ t \in \mathbb{R}_+,$$

and $\mathcal{A}_{\mathbf{u}}$ is the set of asymptotically stable equilibria of the ODE

$$\frac{d}{dt}\mathbf{w}(t) = \hat{\Gamma}^W_{\mathbf{w}(t)}\big(\Phi_{\bar{\theta}^*}^\top D_{d^\pi}\delta^{\mathbf{u}} - \Phi_{\bar{\theta}^*}^\top D_{d^\pi}\Phi_{\bar{\theta}^*}\,\mathbf{w}(t)\big), \qquad \mathbf{w}(0) \in \mathring{W},\ t \in \mathbb{R}_+,$$

with $\bar{\theta}^* \in K$ (sample-path dependent) and $\delta^{\mathbf{u}}$ defined in Eq. (29).

C ADDITIONAL EXPERIMENTS

C.1 NON-IMAGE CATCHER

C.2 PUDDLE WORLD

C.3 IMAGE CATCHER

We also ran policy evaluation experiments on image-based Catcher, with 2 stacked 64x64 frames as input. The policy evaluated was the same as in the non-image setting, and we report plots analogous to the non-image-based Catcher experiments.

C.4 CARTPOLE

In the classic Cartpole environment, the agent has to balance a pole on a cart. The state is given by a vector of 4 numbers (cart position, cart velocity, pole angle, pole angular velocity). The two available actions apply a force towards the left or the right. The reward is +1 at every timestep, and an episode terminates once the pole dips below a certain angle or the cart moves too far from the center. We use the OpenAI Gym implementation (Brockman et al., 2016). The policy to be evaluated applies force in the direction the pole is moving with probability 0.9 (stabilizing the pole) or in the direction of the cart's velocity with probability 0.1. We inject some stochasticity so that the resulting policy does not perform overly well, which would lead to an uninteresting value function.

C.5 ACROBOT

In the classic Acrobot domain, an agent consisting of two links has to swing its tip up past a certain height. The agent observes a 4-dimensional state consisting of the angles and the angular velocities of the two links. The available actions are three levels of torque applied to the joint. The evaluated policy is obtained by training an agent with true-online Sarsa on a tile-coding representation and then fixing its learned epsilon-greedy policy.

C.6 PUCK WORLD

In Puck World (Tasfi, 2016), the agent has to move in a two-dimensional box towards a good puck while staying away from a bad puck. The 8-dimensional state consists of (player x position, player y position, player x velocity, player y velocity, good puck x position, good puck y position, bad puck x position, bad puck y position). Each action increases the agent's velocity in one of the four cardinal directions, apart from a "None" action, which does nothing. The reward is the negative distance to the good puck, plus a penalty of $-10 + x$ if the agent is within a certain radius of the bad puck, where $x \in [-2, 0]$ depends on the distance to the bad puck (the reward is slightly modified from the original game to make the value function more interesting). The policy moves the agent towards the good puck while placing a soft cap on the agent's velocity. In more detail, an action is chosen by the following procedure. First, we determine the eligible actions: the None action is always eligible, and the actions which move the agent towards the good puck are also eligible (for example, if the good puck is northeast of the agent, the North and East actions are eligible). If the agent's velocity in a certain direction is above 30, then the action for that direction is no longer eligible. Finally, the agent picks uniformly at random from all eligible actions.
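This procedure translates directly into code. The sketch below is our illustrative rendering; the action names, state indexing, and coordinate convention (north as increasing y) are assumptions rather than details taken from the released experiment code.

```python
import random

def puck_world_policy(state, vel_cap=30.0):
    """Eligible-action behaviour policy for Puck World as described above.

    state: (px, py, vx, vy, gx, gy, bx, by); returns one action name.
    """
    px, py, vx, vy, gx, gy = state[:6]
    eligible = ["None"]                     # the None action is always eligible
    if gy > py and vy < vel_cap:            # good puck is to the north
        eligible.append("North")
    if gy < py and -vy < vel_cap:           # good puck is to the south
        eligible.append("South")
    if gx > px and vx < vel_cap:            # good puck is to the east
        eligible.append("East")
    if gx < px and -vx < vel_cap:           # good puck is to the west
        eligible.append("West")
    return random.choice(eligible)          # uniform over eligible actions
```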
C.7 OFF-POLICY CATCHER

We ran a preliminary experiment to check whether TTN has an advantage in the off-policy setting. The target policy is the same as the one used for the other Catcher experiments (described in Appendix D). The behaviour policy is slightly different: if the apple is within 20 units (the target policy uses 25 units), then the agent takes the action in the direction of the apple with probability 0.7 and each of the other two actions with probability 0.15. If the apple is not within range, then the agent takes the None action 10% of the time and one of the other two actions with equal probability. This combination of behaviour and target policies results in importance sampling ratios in the range of 0 to 8.7, which are moderately large values. We try TTN with three off-policy algorithms (TD, TDC and LSTD) and compare to off-policy Nonlinear TD. For TTN, the features are learned by optimizing the MSTDE on the behaviour policy, while the values are learned off-policy. The main difference between TTN and Nonlinear TD is that Nonlinear TD makes off-policy updates to the entire network, while TTN only changes the linear part.
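To make the contrast concrete, a minimal sketch of the importance-sampled linear TD(0) update used for the fast part follows; the function framing is ours. Restricting the correction ratio to the fast weights is the point of difference from Nonlinear TD, which applies it to the whole network.

```python
import numpy as np

def off_policy_td_step(w, x, r, x_next, gamma, alpha, rho):
    """Importance-sampled linear TD(0) on fixed features x = x_theta(s).

    rho = pi(a|s) / mu(a|s): ratio of target to behaviour action probability.
    Only the linear weights w are corrected; the network sees no ratios.
    """
    delta = r + gamma * (w @ x_next) - (w @ x)  # TD error
    return w + alpha * rho * delta * x
```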
1. What is the focus of the paper regarding reinforcement learning? 2. What is the novel approach introduced by the paper in addressing convergent and stable nonlinear algorithms? 3. How does the reviewer assess the clarity and efficiency of the paper's content? 4. What are the strengths and weaknesses of the experimental evaluation? 5. Do you have any suggestions for improving the comparison with other works in the field?
Review

This paper proposes Two-Timescale Networks (TTNs), a reinforcement learning method in which feature representations are learned by a neural network trained on a surrogate loss function (i.e., a surrogate for the value), and a value function is learned on top of the feature representation using a "fast" least-squares algorithm. The authors prove the convergence of this method using techniques from two-timescale stochastic approximation. Convergent and stable nonlinear algorithms are an important problem in reinforcement learning, and this paper offers an interesting approach for addressing the issue. The idea of using a "fast" linear learner on top of a slowly changing representation is not new in RL (Levine et al., 2017), but the authors motivate this approach by showing that it results in a stable and convergent algorithm. I thus view the convergence proof as the main contribution of the paper. The paper is written clearly, but could make more efficient use of space in the main text. For example, I feel that the introduction and the discussion of surrogate objectives in Section 3 could be considerably shortened, and a formal proof statement from the appendix could be included in Section 4, with the full proof left in the appendix. The experimental evaluation is detailed, and ablation tests show the value of different choices of surrogate loss for value function training and of linear value function learning methods, along with comparisons against other nonlinear algorithms such as DQN and Nonlinear GTD/TD variants. A minor criticism is that it is difficult to position this work against the "simpler but not sound" deep RL methods, as the authors only compare to DQN on a non-standard benchmark task. As additional related work, SBEED (Dai et al., ICML 2018) also shows convergence for a nonlinear reinforcement learning algorithm (in the control setting) and quantifies the convergence rate while accounting for finite-sample error. It would be good to include a discussion of this work, although the proposed method and proofs are derived very differently.
However, as we discuss below, other objectives for learning the value function are either similarly difficult to optimize or provide poor value estimates. In the next section, we discuss some of these alternatives and introduce Two-timescale Networks as a different strategy to enable nonlinear value function approximation.

3 TWO-TIMESCALE NETWORKS AND SURROGATE OBJECTIVES

We first introduce Two-timescale Networks (TTNs), and then describe different surrogate objectives that can be used in TTNs. We discuss why these surrogate objectives within TTNs are useful to drive the representation, but are not good replacements for the MSPBE for learning the value function.

TTNs use two concurrent optimization processes: one for the parameters of the network $\theta$ and one for the parameters of the value function $\mathbf{w}$. The value function is approximated as $\hat{V}(s) \stackrel{\text{def}}{=} \mathbf{x}_\theta(s)^\top \mathbf{w}$, where the features $\mathbf{x}_\theta : \mathcal{S} \to \mathbb{R}^d$ are a parametrized function and $\theta \in \mathbb{R}^m$ is adjusted to provide better features. For a neural network, $\theta$ consists of all the parameters in the hidden layers, which produce the final hidden layer $\mathbf{x}_\theta(s)$. The two optimization processes operate at different time scales, with the parameters $\theta$ for the representation changed as a slow process, and the parameters $\mathbf{w}$ for the value estimate changed as a fast process relative to $\theta$.

The separation between these two processes could be problematic, since the target problem—estimating the value function—is not influencing the representation! The slow process is driven by a completely separate objective than the fast process. However, the key is to select this surrogate loss for the slow process so that it is related to the value estimation process, but still straightforward to compute the gradient of the loss. We use $\hat{V}(s)$ as the output of the fast part, which corresponds to the value estimate used by the agent. To distinguish, $\hat{Y}(s)$ denotes the output of the slow part (depicted in Figure 1), which may or may not be an estimate of the value, as we discuss below.

Consider first the mean-squared TD error (MSTDE), which corresponds to $\sum_{s \in \mathcal{S}} d(s)\, \mathbb{E}[\delta_t^2 \mid S_t = s]$. Notice that this does not correspond to the mean-squared Bellman error (MSBE), for which it is more difficult to compute gradients: $\|B_\pi \hat{V} - \hat{V}\|_d^2 = \sum_{s \in \mathcal{S}} d(s) \left(\mathbb{E}[\delta_t \mid S_t = s]\right)^2$. Using the MSTDE as a surrogate loss, with $\hat{Y}(s) = \mathbf{x}_\theta(s)^\top \bar{\mathbf{w}}$, the slow part of the network minimizes
$$L_{\text{slow}}(\theta) = \min_{\bar{\mathbf{w}} \in \mathbb{R}^d} \sum_{s \in \mathcal{S}} d(s)\, \mathbb{E}[\delta_t(\theta, \bar{\mathbf{w}})^2 \mid S_t = s], \qquad \delta_t(\theta, \bar{\mathbf{w}}) \stackrel{\text{def}}{=} R_{t+1} + \gamma_{t+1}\mathbf{x}_\theta(S_{t+1})^\top \bar{\mathbf{w}} - \mathbf{x}_\theta(S_t)^\top \bar{\mathbf{w}}.$$
This slow part has its own weights $\bar{\mathbf{w}}$ associated with estimating the value function, but learned instead according to the MSTDE. The advantage here is that stochastic gradient descent on the MSTDE is straightforward, with gradient $\delta_t \nabla_{\{\theta, \bar{\mathbf{w}}\}}\big[\gamma_{t+1}\hat{Y}(S_{t+1}) - \hat{Y}(S_t)\big]$, where $\nabla_{\{\theta, \bar{\mathbf{w}}\}}\hat{Y}(S_t)$ is the gradient of the neural network, including the head of the slow part which uses weights $\bar{\mathbf{w}}$. The MSTDE has been found to provide worse value estimates than the MSPBE—which we re-affirm in our experiments. It could, nonetheless, play a useful role as a surrogate loss, where it can inform the representation towards estimating values.

There are a variety of other surrogate losses that could be considered, related to the value function. However, many of these losses are problematic to sample incrementally, without storing large amounts of data. For example, the mean-squared return error (MSRE) could be used, which takes samples of the return and minimizes the mean-squared error to those sampled returns.
Obtaining such returns requires waiting many steps, and so delays updating the representation for the current state. Another alternative is the MSBE. The gradient of the nonlinear MSBE is not as complex as the gradient of the nonlinear MSPBE, because it does not involve the gradient of a projection. However, it suffers from the double sampling problem: sampling the gradient requires two independent samples. For these reasons, we explore the MSTDE as the simplest surrogate loss involving the value function.

Finally, surrogate losses could also be defined that are not directly related to the value function. Two natural choices are losses based on predicting the next state and reward. The output of the slow part could correspond to a vector of values, such as $Y_t = S_{t+1} \in \mathbb{R}^n$ or $Y_t = [S_{t+1}; R_{t+1}]$. The ability to predict the next state and reward is intuitively useful for enabling prediction of the value, and also has some theoretical grounding. Szepesvari (2010, Section 3.2.1) shows that the Bellman error is small if the features can capture a horizon of immediate rewards and expected next states. For linear encoders, Song et al. (2016) show that an optimal set of features enables predictions of the next state and reward. More generally, learning representations using auxiliary tasks or self-supervised tasks has had some successes in RL, such as using pixel control (Jaderberg et al., 2016) or classifying the temporal distance between frames (Aytar et al., 2018). In computer vision, Gidaris et al. (2018) showed that using rotated images as self-supervised tasks produced a useful representation for the main loss, without training the representation with the main loss. Any of these self-supervised tasks could also be used for the surrogate objective, and they motivate that separating out representation learning does not degrade performance. For now, we restrict our focus to simpler surrogate objectives, as the main purpose of this work is to demonstrate that the separation in TTNs is a sound approach for learning values.

4 CONVERGENCE OF TWO-TIMESCALE NETWORK ALGORITHM

Training TTNs is fully online, using a single transition from the environment at a time. Projected stochastic gradient descent is used to reduce the surrogate loss $L_{\text{slow}}(\theta)$, and a linear policy evaluation algorithm, such as GTD2 or TD(λ), is coupled to the network, where the prediction vector $\mathbf{w}$ is updated proportionally to $-\nabla_\mathbf{w} \text{MSPBE}_\theta(\mathbf{w})$. The full procedure is summarized in Algorithm 1, in Appendix A. Regarding the convergence of TTNs, a few remarks are in order (a small sketch of the projection used in remark 3 follows the list):

1. The network needs to evolve sufficiently slowly relative to the linear prediction weights. In our theoretical analysis, this is achieved by ensuring that the step sizes $\xi_t$ and $\alpha_t$ of the network and the linear policy evaluation algorithm, respectively, decay to zero at different rates. In particular, $\xi_t / \alpha_t \to 0$ as $t \to \infty$. With this relative disparity in magnitudes, one can assume that the network is essentially quasi-static, while the faster linear component equilibrates relative to the static features.

2. The linear prediction algorithms need to converge for any set of features provided by the neural network, in particular linearly dependent features. This induces a technical bottleneck, since linear independence of the features is a necessary condition for the convergence of the prediction methods GTD and GTD2. We overcome this by following a differential-inclusion-based analysis for GTD2.

3. Finally, we need to guarantee the stability of the iterates (both the feature vector $\theta_t$ and the prediction vector $\mathbf{w}_t$), and this is ensured by projecting the iterates to respective compact, convex sets.
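As a minimal illustration of the projection in remark 3 (ours, not the paper's choice of set), here is the common special case of projecting parameters onto a Euclidean ball of radius R after each update.

```python
import numpy as np

def project_ball(theta, radius):
    """Project theta onto the compact, convex set {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

# Usage inside a projected SGD step (step size xi and gradient g assumed given):
# theta = project_ball(theta - xi * g, radius=100.0)
```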
The analysis of the convergence of the neural network is general, allowing any network architecture that is twice continuously differentiable. We prove that TTNs converge asymptotically to the stable equilibria of a projected ODE which completely captures the mean dynamics of the algorithm. We now state our main result (for notation and technical details, please refer to Appendix B). The results are provided for the cases when TD(λ) or GTD2 is used as the linear prediction method. However, note that similar results can be obtained for other linear prediction methods.

Theorem 1. Let $\bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$ and let $\Theta \subset \mathbb{R}^{m+d}$ be a compact, convex subset with smooth boundary. Let the projection operator $\Gamma^\Theta$ be Frechet differentiable and let $\hat{\Gamma}^\Theta_{\bar{\theta}}(-\frac{1}{2}\nabla L_{\text{slow}})(\bar{\theta})$ be Lipschitz continuous. Also, let Assumptions 1-3 hold. Let $K$ be the set of asymptotically stable equilibria of the following ODE contained inside $\Theta$:
$$\frac{d}{dt}\bar{\theta}(t) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}\big(-\tfrac{1}{2}\nabla_{\bar{\theta}} L_{\text{slow}}\big)(\bar{\theta}(t)), \qquad \bar{\theta}(0) \in \mathring{\Theta},\ t \in \mathbb{R}_+.$$
Then the stochastic sequence $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ generated by the TTN converges almost surely to $K$ (sample path dependent). Further,

TD(λ) Convergence: Under the additional Assumption 4-TD(λ), we obtain the following result: For any $\lambda \in [0, 1]$, the stochastic sequence $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit $\mathbf{w}^*$, where $\mathbf{w}^*$ satisfies
$$\Pi_{\bar{\theta}^*} T^{(\lambda)}(\Phi_{\bar{\theta}^*}\mathbf{w}^*) = \Phi_{\bar{\theta}^*}\mathbf{w}^*, \tag{5}$$
with $\bar{\theta}^* \in K$ (sample path dependent).

5 EXPERIMENTS

We investigate the performance of TTNs versus a variety of other nonlinear policy evaluation algorithms, as well as the impact of choices within TTNs. We particularly aim to answer (a) is it beneficial to optimize the MSPBE to obtain value estimates, rather than using value estimates from surrogate losses like the MSTDE; (b) do TTNs provide gains over other nonlinear policy evaluation algorithms; and (c) can TTNs benefit from the variety of options in linear algorithms, including least-squares approaches, eligibility traces and different policy evaluation algorithms. More speculatively, we also investigate if TTNs can provide a competitive alternative to deep Q-learning in control.

Experiments were performed on-policy in five environments. We use three classic continuous-state domains: Puddle World, a continuous-state grid world with high-magnitude negative rewards for walking through a puddle; Acrobot, where a robot has to swing itself up; and Cartpole, which involves balancing a pole. We also use two game domains: Catcher, which involves catching falling apples; and Puck World, in which the agent has to chase a puck (Tasfi, 2016). Catcher includes both a variant with 4-dimensional observations—position and velocity of the paddle, and (x, y) of the apple—and one with image-based observations—with two consecutive 64-by-64 grayscale images as input. This domain enables us to analyze the benefit of the algorithms, on the same domain, both with low-dimensional and high-dimensional observations. We describe the policies evaluated for these domains in Appendix D. We include a subset of results in the main body, with additional results in the appendix. Results in Cartpole are similar to Acrobot; Cartpole results are only in the appendix.

The value estimates are evaluated using the root-mean-squared value error (RMSVE), where the squared value error for a state is $(V_\pi(s) - \hat{V}(s))^2$. The true values for a set of 500 states are obtained using extensive rollouts from each state, and the RMSVE is computed across these 500 states.
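A small sketch of this evaluation protocol (ours, not from the paper), assuming rollout-based estimates of the true values are already available:

```python
import numpy as np

def rmsve(v_true, v_hat):
    """Root-mean-squared value error across a fixed set of evaluation states.

    v_true: (n,) rollout-based estimates of V_pi; v_hat: (n,) learned values.
    """
    return np.sqrt(np.mean((v_true - v_hat) ** 2))
```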
For the algorithms, we use the following settings, unless specified otherwise. For the slow part (features), we minimize the mean-squared TD error (MSTDE) using the AMSGrad optimizer (Reddi et al., 2018) with $\beta_1 = 0$ and $\beta_2 = 0.99$. The network weights use Xavier initialization (Glorot and Bengio, 2010); the weights for the fast part were initialized to 0. In Puddle World, the neural network consists of a single hidden layer of 128 units with ReLU activations. In the other environments, we use 256 units instead. To choose hyperparameters, we first did a preliminary sweep on a broad range and then chose a smaller range where the algorithms usually made progress, summarized in Appendix D. Results are reported for hyperparameters in the refined range, chosen based on the RMSVE over the latter half of a run, with shaded regions corresponding to one standard error.

TTN vs. competitors. We compare to the following algorithms: nonlinear TD, nonlinear GTD (Maei et al., 2009), Adaptive Bases (ABBE and ABTD) (Di Castro and Mannor, 2010), and nonlinear TD + LSTD regularization (inspired by Levine et al. (2017)). We describe these algorithms in more depth in Appendix D. All of the algorithms involve more complex updates compared to TTNs, except for nonlinear TD, which corresponds to a semi-gradient TD update with nonlinear function approximation. For TTNs, we use LSTD for the linear, fast part. In Figure 2, TTN performs as well as or better than the competitor algorithms. Especially in Puddle World, its error is significantly lower than that of the second-best algorithm. Interestingly, Nonlinear GTD also performs well across domains, suggesting an advantage for theoretically sound algorithms.

The utility of optimizing the MSPBE. First, we show that the TTN benefits from having a second head learning at a faster timescale. To do so, we compare the prediction errors of TTN—with the fast process optimizing the MSPBE (using LSTD) and the slow one optimizing the MSTDE—against a network trained end-to-end using the MSTDE with AMSGrad. As a baseline, we include TTN with a fixed representation (a randomly initialized neural network) to highlight that the slow process is indeed improving the representation. We also include results for optimizing the MSTDE with the fixed representation. In Figure 3, we see that optimizing the MSPBE indeed gives better results than optimizing the MSTDE. Additionally, we can conclude that the MSTDE, despite being a poor objective for learning the value function, can still be effective for driving feature-learning, since it outperforms the fixed representation.

Linear algorithms and eligibility traces. TTNs give us the flexibility to choose any linear policy evaluation algorithm for the fast part. We compare several choices: TD, least-squares TD (LSTD) (Bradtke and Barto, 1996), forgetful LSTD (FLSTD) (van Seijen and Sutton, 2015), emphatic TD (Sutton et al., 2016), gradient TD (the TDC variant) (Sutton et al., 2009) and their true-online versions (van Seijen and Sutton, 2014; van Hasselt et al., 2014) to learn the value function. GTD and ETD are newer temporal difference methods which have better convergence properties and can offer increased stability. (A minimal sketch of an incremental LSTD update is given below.)
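Ours, not from the paper: a minimal incremental LSTD(0) learner that maintains the inverse of the LSTD matrix via the Sherman-Morrison identity, so each step costs O(d^2); the initialization scale `eta` is a hypothetical regularization choice.

```python
import numpy as np

class IncrementalLSTD:
    """O(d^2) LSTD(0): tracks A_inv for A = sum_t x_t (x_t - gamma x_{t+1})^T and b = sum_t r_t x_t."""
    def __init__(self, d, eta=1.0):
        self.A_inv = np.eye(d) * eta   # initial A^{-1}; acts as regularization (assumption)
        self.b = np.zeros(d)

    def update(self, x, r, x_next, gamma):
        u = x - gamma * x_next
        Ax = self.A_inv @ x
        # Sherman-Morrison update of A_inv for the rank-one addition x u^T.
        self.A_inv -= np.outer(Ax, u @ self.A_inv) / (1.0 + u @ Ax)
        self.b += r * x

    @property
    def w(self):
        return self.A_inv @ self.b     # LSTD solution A^{-1} b
```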
The true-online variants modify the update rules to improve the behavior of the algorithms when learning online, and they appear to outperform their counterparts empirically (van Seijen and Sutton, 2014). Least-squares methods summarize past interaction, but are often avoided due to computation that is quadratic in the number of features. For TTNs, however, there is no computational disadvantage to using LSTD methods, for two reasons. First, it is common to choose deep but skinny architectures (Mnih et al., 2015; Hessel et al., 2017). Second, if the last layer is fully connected, then we already need to store $O(d^2)$ weights and use $O(d^2)$ time to compute a forward pass—the same as LSTD. We include FLSTD, which progressively forgets older interaction, as this could be advantageous when the feature representation changes over time. For TTN, incremental versions of the least-squares algorithms are used to maintain estimates of the required quantities online (see Appendix D).

All of these linear algorithms can use eligibility traces to increase their sample efficiency by propagating TD errors back in time. The trace parameter $\lambda$ can also provide a bias-variance tradeoff for the value estimates (Sutton, 1988; Dann et al., 2014). For nonlinear function approximation, eligibility traces can no longer be derived for TD. Though invalid, we can naively extend them to this case by keeping one trace per weight, giving us nonlinear TD($\lambda$).

The results overall indicate that TTNs can benefit from the ability to use different linear policy evaluation algorithms and traces, in particular from the use of least-squares methods, as shown in Figure 4 for Puddle World and Catcher. The dominance of LSTD over the other linear algorithms, including in terms of parameter sensitivity, persists for the other three domains. We additionally investigated sensitivity to $\lambda$, and found that most of the TTN variants benefit from a nonzero $\lambda$ value and, in many cases, the best setting is high, near 1. One exception is the least-squares methods, where LSTD performs similarly for most values of $\lambda$. Nonlinear TD($\lambda$), on the other hand, performs markedly worse as $\lambda$ increases. This is unsurprising, considering that the naive addition of eligibility traces is unsound. We include these sensitivity plots in the appendix.

Surrogate loss functions. For all the previous experiments, we optimized the MSTDE for the slow part of the network, but as discussed in Section 3, other objectives can be used. We compare a variety of objectives, by choosing different $Y_t$: $Y_t = R_{t+1}$ (Reward); $Y_t = S_{t+1}$ (Next State); and $Y_t = R_{t+1} + \hat{Y}(S_{t+1})$ (Semi-gradient MSTDE). In Puck World, in Figure 5 a), we can see that every auxiliary loss performed well. This does not appear to be universally true, as in Acrobot we found that the MSTDE was a less effective surrogate loss, leading to slower learning (see Figure 5 b). Alternate losses such as the semi-gradient MSTDE and next-state predictions were more successful in that domain. These results suggest that there is no universally superior surrogate loss and that choosing the appropriate one can yield benefits in certain domains.
Control. Although the focus of this work is policy evaluation, we also provide some preliminary results for the control setting. For control, we include some standard additions to the competitor learning algorithms to enable learning with neural networks. The DQN algorithm (Mnih et al., 2015) utilizes two main tricks to stabilize training: experience replay—storing past transitions and replaying them multiple times—and a target network—which keeps the value function in the Q-learning targets fixed, updating the target network infrequently (e.g., every k = 10,000 steps).

We use an alternative strategy to target networks for TTN. The use of a target network is motivated by fitted Q-iteration (FQI) (Ernst et al., 2005), which updates towards fixed Q-values with one sweep through a batch of data. TTNs provide a straightforward mechanism to instead directly use FQI, where we can solve for the weights on the entire replay buffer, taking advantage of the closed-form solution for linear regression towards the Q-values from the last update. Batch FQI requires storing all data, whereas we instead keep a sliding window of experience. We therefore additionally incorporate a regularization term, which prevents the weights from changing too significantly between updates, similarly to Levine et al. (2017). Each FQI iteration requires solving a least-squares problem on the entire buffer, an operation costing $O(nd^2)$ computation, where $d$ is the number of features in the last layer of the network and $n$ is the size of the buffer; we update the network every $k$ steps, which reduces the per-step computation to $O(nd^2/k)$. The slow part drives feature-learning by minimizing the semi-gradient MSTDE for state-action values. (A sketch of this regularized FQI update is given at the end of this section.) As another competitor, we include LS-DQN (Levine et al., 2017), a DQN variant which also adjusts the final layer's weights towards the FQI solution, similarly to TTN-FQI.

The experimental details differ for control. On non-image Catcher, we sweep over $\alpha_{\text{slow}}$ and $\lambda_{\text{reg}}$, the regularization parameter, for TTN, and sweep over the learning rate and the number of steps over which the exploration rate is annealed for DQN. On image Catcher, runs require significantly more computation, so we only tune hyperparameters by hand. The FQI update in TTNs was done every 1000 (10000) steps for non-image (image) Catcher. We run each algorithm 10 times (5 times) for 200 thousand steps (10 million steps) on the non-image (image) Catcher.

We see that TTN performs well on both versions of Catcher in Figure 6, particularly learning more quickly than the DQN variants. This difference is especially pronounced in the image version of Catcher, where TTN also achieves much higher average returns than DQN. Both algorithms seem to suffer from catastrophic forgetting later during training, as the performance dips after an initial rise, although TTN still stabilizes on a better policy. Overall, these results suggest that TTNs are a promising direction for improving sample efficiency in control, whilst still maintaining stability when training neural networks.
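For concreteness, a minimal sketch of one regularized FQI solve over a buffer (ours, not the paper's exact procedure), assuming fixed features `X` for the buffer's state-action pairs and bootstrap values `q_next` computed with the previous weights; `lam_reg` plays the role of the regularization parameter above.

```python
import numpy as np

def fqi_solve(X, r, gamma, q_next, w_prev, lam_reg):
    """One regularized FQI iteration over a buffer of n transitions.

    X: (n, d) features of (s, a); r: (n,) rewards;
    q_next: (n,) max-action values at s' under the previous weights;
    the regularizer keeps the new weights close to w_prev.
    """
    y = r + gamma * q_next                 # fixed regression targets
    d = X.shape[1]
    A = X.T @ X + lam_reg * np.eye(d)
    b = X.T @ y + lam_reg * w_prev
    # Solves argmin_w ||X w - y||^2 + lam_reg * ||w - w_prev||^2
    return np.linalg.solve(A, b)
```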
6 DISCUSSION AND CONCLUSION

In this work, we proposed Two-timescale Networks as a new strategy for policy evaluation with nonlinear function approximation. As opposed to many other algorithms derived for nonlinear value function approximation, TTNs are intentionally designed to be simple, to promote ease of use. The algorithm combines a slow learning process for adapting features and a fast process for learning a linear value function, both of which are straightforward to train. By leveraging these two timescales, we are able to prove convergence guarantees for a broad class of choices for both the fast and slow learning components.

We highlighted several cases where the decoupled architecture in TTNs can improve learning, particularly enabling the use of linear methods—which facilitates the use of least-squares methods and eligibility traces. This work has only begun the investigation into which combinations of surrogate losses and linear value function approximation algorithms are most effective. We provided some evidence that, when using stochastic approximation algorithms rather than least-squares algorithms, the addition of traces can have a significant effect within TTNs. This contrasts with nonlinear TD, where traces were not effective. The ability to use traces is potentially one of the most exciting outcomes for TTNs, since traces have been so effective for linear methods. More generally, TTNs provide the opportunity to investigate the utility of the many linear value function algorithms in more complex domains with learned representations. For example, emphatic algorithms have improved asymptotic properties (Sutton et al., 2016), but to the best of our knowledge, have not been used with neural networks.

Another promising direction for TTNs is off-policy learning, where many value functions are learned in parallel. Off-policy learning can suffer from variance due to large-magnitude corrections (importance sampling ratios). With a large collection of value functions, it is more likely that some of them will cause large updates, potentially destabilizing learning in the network if trained in an end-to-end fashion. TTNs would not suffer from this problem, because a different objective can be used to drive learning in the network. We provide some preliminary experiments in the appendix supporting this hypothesis (Appendix C.7).

A TTN ALGORITHM

Algorithm 1 Training of TTNs
1: procedure TRAIN(w, θ, w̄, π)    ▷ π is a fixed policy
2:   Initialize θ, w̄ with Xavier initialization, w to 0 and the starting state s according to the environment
3:   while training do
4:     a ← action chosen by π(s)
5:     r, s′ ← Environment(s, a)    ▷ Get reward and next state
6:     θ, w̄ ← gradient-descent step on L_slow using sample (s, r, s′)
7:     w ← update on L_value using sample (s, r, s′)
8:     s ← s′
9:   end while
10:  return learned parameters w, θ, w̄
11: end procedure

B CONVERGENCE PROOF OF TWO-TIMESCALE NETWORKS

B.1 DEFINITIONS & NOTATIONS

- Let $\mathbb{R}_+$ denote the set of non-negative real numbers, $\mathbb{N} = \{0, 1, 2, \dots\}$ and $\|\cdot\|$ denote the Euclidean norm or any equivalent norm.
- A map $f : \mathbb{R}^d \to \mathbb{R}^d$ is Lipschitz continuous if $\|f(x) - f(y)\| \le L\|x - y\|$ for some $L \in (0, \infty)$, $\forall x, y \in \mathbb{R}^d$.
- A set-valued map $h : \mathbb{R}^d \to \{\text{subsets of } \mathbb{R}^d\}$ is called a Marchaud map if it satisfies the following conditions:
  1. For each $x \in \mathbb{R}^d$, $h(x)$ is convex and compact.
  2. For each $x \in \mathbb{R}^d$, $\exists K \in (0, \infty)$ such that $\sup_{y \in h(x)} \|y\| \le K(1 + \|x\|)$.
  3. $h$ is upper-semicontinuous, i.e., if $\{x_n\}_{n \in \mathbb{N}} \to x$ and $\{y_n\}_{n \in \mathbb{N}} \to y$, where $x_n \in \mathbb{R}^d$, $y_n \in h(x_n)$, $\forall n \in \mathbb{N}$, then $y \in h(x)$.
- For $x_1, x_2 \in \mathbb{R}^d$ and a diagonal matrix $D \in \mathbb{R}^{d \times d}$, we define the inner product $\langle x_1, x_2 \rangle_D \triangleq x_1^\top D x_2$. We also define the semi-norm $\|x\|_D \triangleq \langle x, x \rangle_D^{1/2}$. If all the diagonal elements of $D$ are strictly positive, then $\|\cdot\|_D$ is a norm.
- For any set $X$, let $\mathring{X}$ denote the interior of $X$ and $\partial X$ denote the boundary of $X$.
- For brevity, let $\bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$ and let $\Phi_{\bar{\theta}}$ be the feature matrix corresponding to the feature parameter $\bar{\theta}$, i.e.,
$$\Phi_{\bar{\theta}} \triangleq \begin{bmatrix} \mathbf{x}_\theta(s_1)^\top \\ \mathbf{x}_\theta(s_2)^\top \\ \vdots \\ \mathbf{x}_\theta(s_{|\mathcal{S}|})^\top \end{bmatrix} \in \mathbb{R}^{|\mathcal{S}| \times d}, \tag{6}$$
where $\mathbf{x}_\theta(s)^\top$ is the row vector corresponding to state $s$. Further, define the $|\mathcal{S}| \times |\mathcal{S}|$ matrix $P^\pi$ as follows:
$$P^\pi_{s,s'} \triangleq \sum_{a \in \mathcal{A}} \pi(s, a) P(s, a, s'), \qquad s, s' \in \mathcal{S}. \tag{7}$$
- Also, recall that $L_{\text{slow}}(\theta) = \text{MSTDE}(\theta) \triangleq \mathbb{E}\big[\mathbb{E}[\delta_t^2 \mid S_t]\big]$.
- A function $\Gamma : U \subseteq \mathbb{R}^{d_1} \to X \subseteq \mathbb{R}^{d_2}$ is Frechet differentiable at $x \in U$ if there exists a bounded linear operator $\hat{\Gamma}_x : \mathbb{R}^{d_1} \to \mathbb{R}^{d_2}$ such that the limit
$$\lim_{\epsilon \downarrow 0} \frac{\Gamma(x + \epsilon y) - x}{\epsilon} \tag{8}$$
exists and is equal to $\hat{\Gamma}_x(y)$. We say $\Gamma$ is Frechet differentiable if the Frechet derivative of $\Gamma$ exists at every point in its domain.

B.2 ASSUMPTIONS

Assumption 1: The pre-determined, deterministic step-size sequence $\{\xi_t\}_{t \in \mathbb{N}}$ satisfies
$$\xi_t > 0\ \forall t \in \mathbb{N}, \qquad \sum_{t \in \mathbb{N}} \xi_t = \infty, \qquad \sum_{t \in \mathbb{N}} \xi_t^2 < \infty.$$

Assumption 2: The Markov chain induced by the given policy $\pi$ is ergodic, i.e., aperiodic and irreducible. Assumption 2 implies that the underlying Markov chain is asymptotically stationary, and hence it guarantees the existence of a unique steady-state distribution $d^\pi$ over the state space $\mathcal{S}$ (Levin and Peres, 2017), i.e., $\lim_{t \to \infty} \mathbb{P}(S_t = s) = d^\pi(s)$, $\forall s \in \mathcal{S}$.

Assumption 3: We are given a realization of the transition dynamics of the MDP in the form of a sample trajectory $\mathcal{O}^\pi = \{S_0, A_0, R_1, S_1, A_1, R_2, S_2, \dots\}$, where the initial state $S_0 \in \mathcal{S}$ is chosen arbitrarily, while the action $\mathcal{A} \ni A_t \sim \pi(S_t, \cdot)$, the transitioned state $\mathcal{S} \ni S_{t+1} \sim P(S_t, A_t, \cdot)$ and the reward $\mathbb{R} \ni R_{t+1} = R(S_t, A_t, S_{t+1})$.
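For concreteness, one admissible choice (ours, purely for illustration) satisfying Assumption 1 together with the two-timescale conditions in Assumption 4 below is
$$\xi_t = \frac{1}{t+1}, \qquad \alpha_t = \frac{1}{(t+1)^{0.6}}: \quad \sum_{t} \xi_t = \sum_{t} \alpha_t = \infty, \quad \sum_{t} (\xi_t^2 + \alpha_t^2) < \infty, \quad \frac{\xi_t}{\alpha_t} = (t+1)^{-0.4} \to 0.$$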
To analyze the long-run behaviour of our algorithm, we employ the ODE-based analysis (Borkar, 2008; Kushner and Yin, 2003; Ljung, 1977) of stochastic recursive algorithms. Here, we consider a deterministic ordinary differential equation (ODE) whose asymptotic flow is equivalent to the long-run behaviour of the stochastic recursion. We then analyze the qualitative behaviour of the solutions of the ODE to determine the asymptotically stable sets. The ODE-based analysis is elegant and conclusive, and it further guarantees that the limit points of the stochastic recursion will almost surely belong to the compact connected internally chain transitive invariant set of the equivalent ODE. Since the algorithm follows a multi-timescale stochastic approximation framework, we also resort to the more general multi-timescale differential inclusion based analysis proposed in (Borkar, 1997; Ramaswamy and Bhatnagar, 2016).

Note that there exists only a unilateral coupling between the neural network (where the feature vectors $\bar{\theta}_t$ are calibrated by following stochastic gradient descent on $L_{\text{slow}}$) and the various policy evaluation algorithms (see Figure 7). This implies that the policy evaluation algorithms depend on the feature vectors $\bar{\theta}_t$, but not vice versa. Therefore, one can independently analyze the asymptotic behaviour of the feature vectors $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$. Also, as a technical requirement, since one cannot guarantee the stability (almost sure boundedness) of the iterates $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ (a necessary condition for the ODE-based analysis; please refer to Chapter 2 of Borkar (2008)), we consider the following projected stochastic recursion:
$$\bar{\theta}_{t+1} = \Gamma^\Theta\Big(\bar{\theta}_t + \xi_t \delta_t\big(\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_{t+1})\big)\Big), \tag{9}$$
where $\Gamma^\Theta(\cdot)$ is the projection onto a pre-determined compact and convex subset $\Theta \subset \mathbb{R}^{m+d}$, i.e., $\Gamma^\Theta(x) = x$ for $x \in \mathring{\Theta}$, while for $x \notin \mathring{\Theta}$, it is the nearest point in $\Theta$ w.r.t. the Euclidean distance (or an equivalent metric). Define the filtration $\{\mathcal{F}_t\}_{t \in \mathbb{N}}$, a family of increasing natural $\sigma$-fields, where $\mathcal{F}_t \triangleq \sigma(\{\bar{\theta}_i, S_i, R_i;\ 0 \le i \le t\})$. The following lemma characterizes the limiting behaviour of the iterates $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$:

Lemma 1. Let Assumptions 1-3 hold. Let $\Theta \subset \mathbb{R}^{m+d}$ be a compact, convex subset with smooth boundary. Let $\Gamma^\Theta$ be Frechet differentiable and let $\hat{\Gamma}^\Theta_{\bar{\theta}}(-\frac{1}{2}\nabla L_{\text{slow}})(\bar{\theta})$ be Lipschitz continuous. Let $K$ be the set of asymptotically stable equilibria of the following ODE contained inside $\Theta$:
$$\frac{d}{dt}\bar{\theta}(t) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}\big(-\tfrac{1}{2}\nabla_{\bar{\theta}} L_{\text{slow}}\big)(\bar{\theta}(t)), \qquad \bar{\theta}(0) \in \mathring{\Theta},\ t \in \mathbb{R}_+.$$
Then the stochastic sequence $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ generated by the TTN converges almost surely to $K$.

Proof. We employ the ODE-based analysis as proposed in (Borkar, 2008; Kushner and Clark, 2012). First, we recall the stochastic recursion which updates $\bar{\theta}_t$:
$$\bar{\theta}_{t+1} = \Gamma^\Theta\Big(\bar{\theta}_t + \xi_t \delta_t\big(\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_{t+1})\big)\Big), \tag{10}$$
where $\Gamma^\Theta$ is the projection onto a pre-determined compact and convex subset $\Theta \subset \mathbb{R}^{m+d}$. Here, $\delta_t \triangleq R_{t+1} + \gamma_{t+1}\hat{Y}_{\bar{\theta}_t}(S_{t+1}) - \hat{Y}_{\bar{\theta}_t}(S_t)$ is the temporal difference. Also, $\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}} \in \mathbb{R}^{(m+d) \times |\mathcal{S}|}$ is the Jacobian of $\hat{Y}_{\bar{\theta}}$ at $\bar{\theta} = \bar{\theta}_t$, and $\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(s)$ is the column corresponding to state $s$. The above equation can be rewritten as
$$\bar{\theta}_{t+1} = \Gamma^\Theta\big(\bar{\theta}_t + \xi_t(h^1(\bar{\theta}_t) + \mathbb{M}^1_{t+1} + \ell^1_t)\big), \tag{11}$$
where $h^1(\bar{\theta}) \triangleq \mathbb{E}\big[\delta_t\big(\nabla_{\bar{\theta}}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}}\hat{Y}_{\bar{\theta}}(S_{t+1})\big)\big]$, the noise term $\mathbb{M}^1_{t+1} \triangleq \delta_t\big(\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_{t+1})\big) - \mathbb{E}\big[\delta_t\big(\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_{t+1})\big) \,\big|\, \mathcal{F}_t\big]$ and the bias $\ell^1_t \triangleq \mathbb{E}\big[\delta_t\big(\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_t) - \gamma_{t+1}\nabla_{\bar{\theta}_t}\hat{Y}_{\bar{\theta}}(S_{t+1})\big) \,\big|\, \mathcal{F}_t\big] - h^1(\bar{\theta}_t)$. Further,
$$\bar{\theta}_{t+1} = \bar{\theta}_t + \xi_t\,\frac{\Gamma^\Theta\big(\bar{\theta}_t + \xi_t(h^1(\bar{\theta}_t) + \mathbb{M}^1_{t+1} + \ell^1_t)\big) - \bar{\theta}_t}{\xi_t} = \bar{\theta}_t + \xi_t\Big(\hat{\Gamma}^\Theta_{\bar{\theta}_t}(h^1(\bar{\theta}_t)) + \hat{\Gamma}^\Theta_{\bar{\theta}_t}(\mathbb{M}^1_{t+1}) + \hat{\Gamma}^\Theta_{\bar{\theta}_t}(\ell^1_t) + o(\xi_t)\Big), \tag{12}$$
where $\hat{\Gamma}^\Theta$ is the Frechet derivative (defined in Eq. (8)). Note that $\Gamma^\Theta$ is single-valued since $\Theta$ is convex, and the above limit exists since the boundary $\partial\Theta$ is assumed smooth. Further, for $x \in \mathring{\Theta}$, we have
$$\hat{\Gamma}^\Theta_x(y) = \lim_{\epsilon \downarrow 0} \frac{\Gamma^\Theta(x + \epsilon y) - x}{\epsilon} = \lim_{\epsilon \downarrow 0} \frac{x + \epsilon y - x}{\epsilon} = y \quad \text{(for sufficiently small } \epsilon\text{)}, \tag{13}$$
i.e., $\hat{\Gamma}^\Theta_x(\cdot)$ is an identity map for $x \in \mathring{\Theta}$. A few observations are in order:

C1: $\hat{\Gamma}^\Theta_{\bar{\theta}}(h^1(\bar{\theta}))$ is a Lipschitz continuous function of $\bar{\theta}$. This follows from the hypothesis of the lemma.

C2: $\hat{\Gamma}^\Theta_{\bar{\theta}_t}(\mathbb{M}^1_{t+1})$ is a truncated martingale-difference noise. Indeed, it is easy to verify that the noise sequence $\{\mathbb{M}^1_{t+1}\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence w.r.t. the filtration $\{\mathcal{F}_{t+1}\}_{t \in \mathbb{N}}$, i.e., $\mathbb{M}^1_{t+1}$ is $\mathcal{F}_{t+1}$-measurable and integrable, $\forall t \in \mathbb{N}$, and $\mathbb{E}[\mathbb{M}^1_{t+1} \mid \mathcal{F}_t] = 0$ a.s., $\forall t \in \mathbb{N}$. Also, since $\Gamma^\Theta(\cdot)$ is a continuous linear operator, $\hat{\Gamma}^\Theta(\mathbb{M}^1_{t+1})$ is likewise $\mathcal{F}_{t+1}$-measurable and integrable, $\forall t \in \mathbb{N}$.

C3: $\hat{\Gamma}^\Theta_{\bar{\theta}_t}(\ell^1_t) \to 0$ as $t \to \infty$ a.s. Indeed,
$$\big\|\hat{\Gamma}^\Theta_{\bar{\theta}_t}(\ell^1_t)\big\| = \Big\|\lim_{\epsilon \downarrow 0} \frac{\Gamma^\Theta(\bar{\theta}_t + \epsilon\ell^1_t) - \bar{\theta}_t}{\epsilon}\Big\| \le \lim_{\epsilon \downarrow 0} \frac{\big\|\Gamma^\Theta(\bar{\theta}_t + \epsilon\ell^1_t) - \Gamma^\Theta(\bar{\theta}_t)\big\|}{\epsilon} \le \lim_{\epsilon \downarrow 0} \frac{\big\|\bar{\theta}_t + \epsilon\ell^1_t - \bar{\theta}_t\big\|}{\epsilon} = \|\ell^1_t\|.$$
Taking $t \to \infty$, C3 follows directly from the ergodicity (Levin and Peres, 2017) (Assumption 2) and finiteness of the underlying Markov chain.

C4: $o(\xi_t) \to 0$ as $t \to \infty$ (follows from Assumption 1).

C5: The iterates $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ are stable (forcefully), i.e., bounded almost surely, since $\bar{\theta}_t \in \Theta$, $\forall t \in \mathbb{N}$ (ensured by the projection operator $\Gamma^\Theta$) and $\Theta$ is compact (i.e., closed and bounded).

C6: $\exists K_0 \in (0, \infty)$ such that
$$\mathbb{E}\big[\|\hat{\Gamma}^\Theta_{\bar{\theta}_t}(\mathbb{M}^1_{t+1})\|^2 \,\big|\, \mathcal{F}_t\big] \le K_0(1 + \|\bar{\theta}_t\|^2) \quad \text{a.s.} \tag{14}$$
This follows directly from the finiteness of the Markov chain and from the assumption that the boundary $\partial\Theta$ is smooth.

Now, by appealing to Theorem 2, Chapter 2 of (Borkar, 2008), we conclude that the stochastic recursion (10) asymptotically tracks the following ODE:
$$\frac{d}{dt}\bar{\theta}(t) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}(h^1(\bar{\theta}(t))) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}\big(-\tfrac{1}{2}\nabla_{\bar{\theta}} L_{\text{slow}}\big)(\bar{\theta}(t)), \qquad \bar{\theta}(0) \in \mathring{\Theta},\ t \in \mathbb{R}_+. \tag{15}$$
In other words, the stochastic recursion (10) converges to the asymptotically stable equilibria of the ODE (15) contained inside $\Theta$.

Remark 1. It is indeed non-trivial to determine the constraint set $\Theta$ without prior knowledge of the limit set of the ODE (15). A pragmatic approach to overcome this concern is to initiate the stochastic recursion with an arbitrary convex, compact set $\Theta$ with a smooth boundary and gradually spread it to the whole of $\mathbb{R}^{m+d}$ (Chen, 2006).
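As a concrete illustration of the projected dynamics (ours, not part of the original analysis), take $\Theta$ to be the Euclidean ball $\{x \in \mathbb{R}^{m+d} : \|x\| \le R\}$, so that $\Gamma^\Theta(z) = z$ for $\|z\| \le R$ and $\Gamma^\Theta(z) = Rz/\|z\|$ otherwise. Expanding $\|x + \epsilon y\|$ to first order in $\epsilon$ at a boundary point $\|x\| = R$ gives
$$\hat{\Gamma}^\Theta_x(y) = \lim_{\epsilon \downarrow 0} \frac{\Gamma^\Theta(x + \epsilon y) - x}{\epsilon} = \begin{cases} y, & \langle x, y \rangle \le 0,\\[4pt] y - \dfrac{\langle x, y \rangle}{R^2}\, x, & \langle x, y \rangle > 0, \end{cases}$$
i.e., the derivative removes the outward-pointing component of $y$ at the boundary, matching the interior case (13), where $\hat{\Gamma}^\Theta_x(y) = y$.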
Remark 2. It is also important to characterize the hypothesis of the above lemma (i.e., that $\hat{\Gamma}^\Theta_{\bar{\theta}}(-\frac{1}{2}\nabla L_{\text{slow}})(\bar{\theta})$ is Lipschitz continuous) with respect to the features $\hat{Y}_{\bar{\theta}}$. To achieve that, one has to consider the non-projected form of the ODE (15). When one follows the spreading approach proposed in the above remark, it is natural to consider the non-projected form, since the limiting flow of the ODE arising from the projected stochastic recursion is more likely to lie inside the compact, convex set as $\Theta$ becomes larger. Thereupon, it is easy to observe that $\hat{Y}_{\bar{\theta}}$ being twice continuously differentiable is sufficient to ensure the Lipschitz continuity of $\hat{\Gamma}^\Theta_{\bar{\theta}}(-\frac{1}{2}\nabla L_{\text{slow}})(\bar{\theta})$. Additionally, in that case $K = \{\bar{\theta} \mid \nabla_{\bar{\theta}} L_{\text{slow}}(\bar{\theta}) = 0\}$, which is the set of local extrema of $L_{\text{slow}}$.

B.4 TD(λ) ALGORITHM

One can directly apply the TD(λ) with linear function approximation algorithm to estimate the value function with respect to the features provided by the neural network. The TD(λ) algorithm is provided in Algorithm 2. Here $\mathbf{e}_t, \mathbf{w}_t \in \mathbb{R}^d$. Further, $\delta_t \triangleq R_{t+1} + \gamma_{t+1}\mathbf{w}_t^\top \mathbf{x}_{\theta_t}(S_{t+1}) - \mathbf{w}_t^\top \mathbf{x}_{\theta_t}(S_t)$ is the temporal difference.

Algorithm 2 TD(λ) algorithm
Parameters: $\alpha_t > 0$, $\lambda \in [0, 1]$; Initialization: $\mathbf{w}_0 = 0$, $\mathbf{e}_0 = 0$;
For each transition $(S_t, R_{t+1}, S_{t+1})$ in $\mathcal{O}^\pi$, do:
$$\mathbf{e}_{t+1} = \mathbf{x}_{\theta_t}(S_t) + \gamma_{t+1}\lambda\,\mathbf{e}_t; \tag{16}$$
$$\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha_t\big(R_{t+1} + \gamma_{t+1}\mathbf{w}_t^\top \mathbf{x}_{\theta_t}(S_{t+1}) - \mathbf{w}_t^\top \mathbf{x}_{\theta_t}(S_t)\big)\mathbf{e}_{t+1}; \tag{17}$$

Assumption 4-TD(λ): The pre-determined, deterministic step-size sequence $\{\alpha_t\}_{t \in \mathbb{N}}$ satisfies
$$\alpha_t > 0\ \forall t \in \mathbb{N}, \qquad \sum_{t \in \mathbb{N}} \alpha_t = \infty, \qquad \sum_{t \in \mathbb{N}} \alpha_t^2 < \infty, \qquad \lim_{t \to \infty} \frac{\xi_t}{\alpha_t} = 0.$$
Note that the step-size schedules $\{\alpha_t\}_{t \in \mathbb{N}}$ and $\{\xi_t\}_{t \in \mathbb{N}}$ satisfy $\frac{\xi_t}{\alpha_t} \to 0$, which implies that $\{\xi_t\}$ converges to 0 relatively faster than $\{\alpha_t\}$. This disparity in the learning rates induces an asynchronous convergence behaviour asymptotically (Borkar, 1997), with the feature parameter sequence $\{\bar{\theta}_t\}$ converging more slowly than the TD(λ) sequence $\{\mathbf{w}_t\}$. The rationale is that the increment of the underlying stochastic gradient descent of the neural network is smaller compared to that of the TD(λ) recursion (17), since the neural network SGD is weighted by the step-size schedule $\{\xi_t\}_{t \in \mathbb{N}}$, which is smaller than $\{\alpha_t\}_{t \in \mathbb{N}}$ for all but finitely many $t$. This timescale separation induces multiple perspectives: viewed from the faster timescale recursion (controlled by $\alpha_t$), the slower timescale recursion (controlled by $\xi_t$) appears quasi-static ('almost a constant'), while viewed from the slower timescale, the faster recursion appears equilibrated. Further, it is analytically admissible (Borkar, 1997) to consider the slow timescale stochastic recursion (i.e., the neural network SGD) to be quasi-stationary (i.e., $\bar{\theta}_t \equiv \bar{\theta}$, $\forall t \in \mathbb{N}$) while analyzing the asymptotic behaviour of the relatively faster timescale stochastic recursion (17). Thereupon, we obtain the following directly from Theorem 1 of (Tsitsiklis and Van Roy, 1997).

Lemma 2. Assume $\bar{\theta}_t \equiv \bar{\theta}$, $\forall t \in \mathbb{N}$. Let Assumptions 1-3 and 4-TD(λ) hold. Then for any $\lambda \in [0, 1]$, the stochastic sequence $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit $\mathbf{w}^*$, where $\mathbf{w}^*$ satisfies
$$\Pi_{\bar{\theta}} T^{(\lambda)}(\Phi_{\bar{\theta}}\mathbf{w}^*) = \Phi_{\bar{\theta}}\mathbf{w}^*, \tag{18}$$
with $T^{(\lambda)} J(s) \triangleq (1 - \lambda) \sum_{i=0}^{\infty} \lambda^i\, \mathbb{E}\Big[\sum_{j=0}^{i} \gamma^{[j]} R_{j+1} + \gamma^{[i+1]} J(S_{i+1}) \,\Big|\, S_0 = s\Big]$ and $\gamma^{[j]} = \prod_{i=0}^{j} \gamma_i$ (with $\gamma_0 = 1$). Also, $\Pi_{\bar{\theta}}$ is defined according to Eq. (3) with $\mathcal{F} = \{\Phi_{\bar{\theta}}\mathbf{w} \mid \mathbf{w} \in \mathbb{R}^d\}$.
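For reference, a direct transcription of Algorithm 2 into code (ours; the fixed feature function, constant step size and constant discount are simplifying assumptions):

```python
import numpy as np

def td_lambda(transitions, features, d, alpha=0.05, lam=0.9, gamma=0.99):
    """Linear TD(lambda) on fixed features, following Eqs. (16)-(17).

    transitions: iterable of (s, r, s_next); features: s -> (d,) array.
    """
    w, e = np.zeros(d), np.zeros(d)
    for s, r, s_next in transitions:
        x, x_next = features(s), features(s_next)
        e = x + gamma * lam * e                  # Eq. (16), constant discount assumed
        delta = r + gamma * (x_next @ w) - x @ w
        w += alpha * delta * e                   # Eq. (17)
    return w
```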
For other single-timescale prediction methods like ETD and LSPE, similar results follow. The least-squares method LSTD, which offers the significant advantage of not depending on step sizes (albeit at greater computational expense), couples smoothly with the TTN setting without any additional consideration.

B.5 GTD2 ALGORITHM

One cannot, however, directly apply the original GTD2 and TDC algorithms to the TTN setting, since a necessary condition for the convergence of these algorithms is the non-singularity of the feature-specific matrices $\mathbb{E}[\mathbf{x}_{\theta_t}(S_t)\mathbf{x}_{\theta_t}(S_t)^\top]$ and $\mathbb{E}[(\mathbf{x}_{\theta_t}(S_t) - \gamma_{t+1}\mathbf{x}_{\theta_t}(S_{t+1}))\mathbf{x}_{\theta_t}(S_t)^\top]$; please refer to Theorem 1 and Theorem 2 of (Sutton et al., 2009). Without the non-singularity assumption, it is indeed hard to guarantee the almost sure boundedness of the GTD2/TDC iterates. In the TTN setting that we consider in this paper, one cannot explicitly ensure this condition, since the features are provided by a neural network, and it is not clear how to control the neural network so that it generates a collection of features with the desired non-singularity characteristic. Hence, one has to consider the projected versions of these algorithms. We consider the projected GTD2 algorithm provided in Algorithm 3.

Algorithm 3 GTD2 algorithm
Parameters: $\alpha_t, \beta_t$; Initialization: $\mathbf{u}_0 \in U$, $\mathbf{w}_0 \in W$;
For each transition $(S_t, R_{t+1}, S_{t+1})$ in $\mathcal{O}^\pi$, do:
$$\mathbf{w}_{t+1} = \Gamma^W\Big(\mathbf{w}_t + \alpha_t\big(\delta^{\mathbf{u}_t}_{t+1}\mathbf{x}_{\theta_t}(S_t) - (\mathbf{w}_t^\top \mathbf{x}_{\theta_t}(S_t))\,\mathbf{x}_{\theta_t}(S_t)\big)\Big); \tag{19}$$
$$\mathbf{u}_{t+1} = \Gamma^U\Big(\mathbf{u}_t + \beta_t\big(\mathbf{x}_{\theta_t}(S_t) - \gamma_{t+1}\mathbf{x}_{\theta_t}(S_{t+1})\big)\big(\mathbf{w}_t^\top \mathbf{x}_{\theta_t}(S_t)\big)\Big); \tag{20}$$
Here $\mathbf{u}_t, \mathbf{w}_t \in \mathbb{R}^d$. Further, $\delta^{\mathbf{u}}_{t+1} \triangleq R_{t+1} + \gamma_{t+1}\mathbf{u}^\top \mathbf{x}_{\theta_t}(S_{t+1}) - \mathbf{u}^\top \mathbf{x}_{\theta_t}(S_t)$ is the temporal difference. Here, $\Gamma^W(\cdot)$ is the projection operator onto a pre-determined convex, compact subset $W \subset \mathbb{R}^d$ with a smooth boundary $\partial W$; it maps vectors in $\mathbb{R}^d$ to the nearest vectors in $W$ w.r.t. the Euclidean distance (or an equivalent metric). Convexity and compactness ensure that the projection is unique and belongs to $W$. Similarly, $U$ is a pre-determined convex, compact subset of $\mathbb{R}^d$ with a smooth boundary $\partial U$. Projection is required since the stability of the iterates $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ and $\{\mathbf{u}_t\}_{t \in \mathbb{N}}$ is hard to guarantee otherwise.

Assumption 4-GTD2: The pre-determined, deterministic step-size sequences $\{\alpha_t\}_{t \in \mathbb{N}}$ and $\{\beta_t\}_{t \in \mathbb{N}}$ satisfy
$$\alpha_t, \beta_t > 0\ \forall t \in \mathbb{N}, \quad \sum_{t \in \mathbb{N}} \alpha_t = \sum_{t \in \mathbb{N}} \beta_t = \infty, \quad \sum_{t \in \mathbb{N}} (\alpha_t^2 + \beta_t^2) < \infty, \quad \lim_{t \to \infty} \frac{\beta_t}{\alpha_t} = 0, \quad \lim_{t \to \infty} \frac{\xi_t}{\beta_t} = 0.$$
Define the filtration $\{\mathcal{F}_t\}_{t \in \mathbb{N}}$, a family of increasing natural $\sigma$-fields, where $\mathcal{F}_t \triangleq \sigma(\{\mathbf{w}_i, \mathbf{u}_i, \bar{\theta}_i, S_i, R_i;\ 0 \le i \le t\})$. As in the TD(λ) case, we follow the quasi-stationary argument: we analyze the asymptotic behaviour of the GTD2 algorithm under the assumption that the feature vector $\bar{\theta}_t$ is quasi-static, i.e., $\bar{\theta}_t \equiv \bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$.

Lemma 3. Assume $\bar{\theta}_t \equiv \bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$, $\forall t \in \mathbb{N}$. Let Assumptions 1-3 and 4-GTD2 hold. Then
$$\Big\{(\mathbf{u}, \mathbf{w})^\top \,\Big|\, \liminf_{t \to \infty} \big\|(\mathbf{u}, \mathbf{w})^\top - (\mathbf{u}_t, \mathbf{w}_t)^\top\big\| = 0\Big\} \subseteq \bigcup_{\mathbf{u} \in \mathcal{A}^*} \big\{(\mathbf{u}, \mathbf{w})^\top \,\big|\, \mathbf{w} \in \mathcal{A}_{\mathbf{u}}\big\}, \tag{21}$$
where $\mathcal{A}^*$ is the set of asymptotically stable equilibria of the following ODE:
$$\frac{d}{dt}\mathbf{u}(t) = \hat{\Gamma}^U_{\mathbf{u}(t)}\big(\Phi_{\bar{\theta}}^\top D_{d^\pi}(\mathbf{I} - \gamma_{t+1}P^\pi)\Phi_{\bar{\theta}}\,\mathbf{u}(t)\big), \qquad \mathbf{u}(0) \in \mathring{U},\ t \in \mathbb{R}_+, \tag{22}$$
and $\mathcal{A}_{\mathbf{u}}$ is the set of asymptotically stable equilibria of the following ODE:
$$\frac{d}{dt}\mathbf{w}(t) = \hat{\Gamma}^W_{\mathbf{w}(t)}\big(\Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}} - \Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\,\mathbf{w}(t)\big), \qquad \mathbf{w}(0) \in \mathring{W},\ t \in \mathbb{R}_+,$$
with $\delta^{\mathbf{u}}$ defined in Eq. (29).

Proof. The two recursions of the projected GTD2 algorithm constitute a multi-timescale stochastic approximation scheme, with a bilateral coupling between the stochastic recursions (19) and (20).
Since the step-size sequences $\{\alpha_t\}_{t \in \mathbb{N}}$ and $\{\beta_t\}_{t \in \mathbb{N}}$ satisfy $\frac{\beta_t}{\alpha_t} \to 0$, we have that $\beta_t \to 0$ faster than $\alpha_t \to 0$. This disparity in the learning rates induces a pseudo-heterogeneous rate of convergence (or timescales) between the individual stochastic recursions, which results in a pseudo-asynchronous convergence behaviour when considered over a finite time window. Note also that the coherent long-run behaviour of the multi-timescale stochastic recursion asymptotically follows this short-term behaviour as the window size extends to infinity (Borkar, 1997; 2008). This induces multiple viewpoints: observed from the faster timescale recursion (controlled by $\alpha_t$), the slower timescale recursion (controlled by $\beta_t$) appears quasi-static ('almost a constant'), while observed from the slower timescale, the faster recursion appears equilibrated. Further, it is analytically admissible (Borkar, 1997) to consider the slow timescale stochastic recursion (20) to be quasi-stationary (i.e., $\mathbf{u}_t \equiv \mathbf{u}$, $\forall t \in \mathbb{N}$) while analyzing the limiting behaviour of the relatively faster timescale stochastic recursion (19).

Analysis of the faster timescale recursion: The faster timescale stochastic recursion of the GTD2 algorithm is
$$\mathbf{w}_{t+1} = \Gamma^W\Big(\mathbf{w}_t + \alpha_t\big(\delta^{\mathbf{u}_t}_{t+1}\mathbf{x}_{\theta_t}(S_t) - (\mathbf{w}_t^\top \mathbf{x}_{\theta_t}(S_t))\,\mathbf{x}_{\theta_t}(S_t)\big)\Big). \tag{23}$$
Under the previously mentioned quasi-stationary premise that $\mathbf{u}_t \equiv \mathbf{u}$ and $\bar{\theta}_t \equiv \bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$, $\forall t \in \mathbb{N}$, we analyze the long-term behaviour of the following recursion:
$$\mathbf{w}_{t+1} = \Gamma^W\big(\mathbf{w}_t + \alpha_t(\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top \mathbf{x}_t)\mathbf{x}_t)\big), \tag{24}$$
where $\mathbf{x}_t = \mathbf{x}_\theta(S_t)$ and $\delta^{\mathbf{u}}_{t+1} \triangleq R_{t+1} + \gamma_{t+1}\mathbf{u}^\top \mathbf{x}_{t+1} - \mathbf{u}^\top \mathbf{x}_t$. The above equation can be rearranged as
$$\mathbf{w}_{t+1} = \Gamma^W\big(\mathbf{w}_t + \alpha_t(h^2(\mathbf{w}_t) + \mathbb{M}^2_{t+1} + \ell^2_t)\big),$$
where the noise $\mathbb{M}^2_{t+1} \triangleq \delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top \mathbf{x}_t)\mathbf{x}_t - \mathbb{E}\big[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top \mathbf{x}_t)\mathbf{x}_t \,\big|\, \mathcal{F}_t\big]$, $h^2(\mathbf{w}) \triangleq \mathbb{E}\big[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}^\top \mathbf{x}_t)\mathbf{x}_t\big]$ and the bias $\ell^2_t = \mathbb{E}\big[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top \mathbf{x}_t)\mathbf{x}_t \,\big|\, \mathcal{F}_t\big] - \mathbb{E}\big[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t - (\mathbf{w}_t^\top \mathbf{x}_t)\mathbf{x}_t\big]$. Similar to Equation (12), we can rewrite the above recursion as
$$\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha_t\Big(\hat{\Gamma}^W_{\mathbf{w}_t}(h^2(\mathbf{w}_t)) + \hat{\Gamma}^W_{\mathbf{w}_t}(\mathbb{M}^2_{t+1}) + \hat{\Gamma}^W_{\mathbf{w}_t}(\ell^2_t) + o(\alpha_t)\Big), \tag{25}$$
where $\hat{\Gamma}^W_{\mathbf{w}_t}(\cdot)$ is the Frechet derivative (defined in Equation (8)) of the projection operator $\Gamma^W$. A few observations are in order:

D1: The iterates $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ are stable, i.e., $\sup_{t \in \mathbb{N}} \|\mathbf{w}_t\| < \infty$ a.s. This immediately follows since $W$ is bounded.

D2: $\{\hat{\Gamma}^W_{\mathbf{w}_t}(\mathbb{M}^2_{t+1})\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the filtration $\{\mathcal{F}_{t+1}\}_{t \in \mathbb{N}}$. This follows directly since $\{\mathbb{M}^2_{t+1}\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the same filtration.

D3: $\{\hat{\Gamma}^W_{\mathbf{w}_t}(\mathbb{M}^2_{t+1})\}_{t \in \mathbb{N}}$ is square-integrable and $\exists K_2 \in (0, \infty)$ such that
$$\mathbb{E}\big[\|\hat{\Gamma}^W_{\mathbf{w}_t}(\mathbb{M}^2_{t+1})\|^2 \,\big|\, \mathcal{F}_t\big] \le K_2(1 + \|\mathbf{w}_t\|^2) \quad \text{a.s., } t \in \mathbb{N}. \tag{26}$$
This follows directly from the finiteness of the underlying Markov chain and from the assumption that the boundary $\partial W$ is smooth.

D4: $\hat{\Gamma}^W_{\mathbf{w}}(h^2(\mathbf{w}))$ is Lipschitz continuous with respect to $\mathbf{w}$. The proof is similar to C1.

D5: $\hat{\Gamma}^W_{\mathbf{w}_t}(\ell^2_t) \to 0$ as $t \to \infty$ a.s. The proof is similar to C3.

Now, by appealing to Theorem 2, Chapter 2 of (Borkar, 2008) along with the above observations, we conclude that the stochastic recursion (23) asymptotically tracks the following ODE almost surely:
$$\frac{d}{dt}\mathbf{w}(t) = \hat{\Gamma}^W_{\mathbf{w}(t)}(h^2(\mathbf{w}(t))) = \hat{\Gamma}^W_{\mathbf{w}(t)}\big(\mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t] - \mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]\,\mathbf{w}(t)\big), \qquad \mathbf{w}(0) \in \mathring{W},\ t \in \mathbb{R}_+. \tag{27}$$
Therefore, $\mathbf{w}_t$ converges asymptotically to the stable equilibria of the above ODE contained inside $W$ almost surely.
Qualitative analysis of the solutions of ODE (27): A qualitative analysis of the long-run behaviour of the flow induced by the above ODE shows that the stable limit set consists of the solutions of the following linear system inside $W$ (this follows since $\hat{\Gamma}^W_{\mathbf{w}}(y) = y$ for $\mathbf{w} \in \mathring{W}$, and also because $\hat{\Gamma}^W_{\mathbf{w}}(\cdot)$ does not contribute any additional limit points on the boundary other than the roots of $h^2$, since $\partial W$ is smooth):
$$\mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t] - \mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]\,\mathbf{w} = 0 \;\Leftrightarrow\; \mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]\,\mathbf{w} = \mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t]. \tag{28}$$
Note that $\mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top] = \Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}$.

Claim 1: The above linear system of equations is consistent, i.e., $\mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t] \in \mathcal{R}(\Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}})$, the range space of $\Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}$. To see this, note that the above system can be viewed as the least-squares solution to $\Phi_{\bar{\theta}}\mathbf{w} = \delta^{\mathbf{u}}$ with respect to the weighted norm $\|\cdot\|_{D_{d^\pi}}$, where
$$\delta^{\mathbf{u}}(s) = \bar{R}^\pi(s) + \gamma_{t+1}\sum_{s' \in \mathcal{S}} P^\pi_{s,s'}\,\mathbf{u}^\top \mathbf{x}_\theta(s') - \mathbf{u}^\top \mathbf{x}_\theta(s), \tag{29}$$
where $\bar{R}^\pi$ is the expected reward. (Note that $\mathbb{E}[\delta^{\mathbf{u}}_{t+1}\mathbf{x}_t] = \Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}}$.) The least-squares solution $\mathbf{w}_0 \in \mathbb{R}^d$ (which certainly exists but may not be unique) satisfies
$$\langle \Phi_{\bar{\theta}}\mathbf{w},\, \delta^{\mathbf{u}} - \Phi_{\bar{\theta}}\mathbf{w}_0 \rangle_{D_{d^\pi}} = 0 \quad \forall \mathbf{w} \in \mathbb{R}^d \;\Rightarrow\; \mathbf{w}^\top \Phi_{\bar{\theta}}^\top D_{d^\pi}(\delta^{\mathbf{u}} - \Phi_{\bar{\theta}}\mathbf{w}_0) = 0 \quad \forall \mathbf{w} \in \mathbb{R}^d.$$
Now choosing $\mathbf{w} = \Phi_{\bar{\theta}}^\top D_{d^\pi}(\delta^{\mathbf{u}} - \Phi_{\bar{\theta}}\mathbf{w}_0)$ gives
$$\Phi_{\bar{\theta}}^\top D_{d^\pi}(\delta^{\mathbf{u}} - \Phi_{\bar{\theta}}\mathbf{w}_0) = 0 \;\Rightarrow\; \Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\mathbf{w}_0 = \Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}}.$$
[End of proof of Claim 1]

Since $\Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}$ may be singular (i.e., not invertible), the above least-squares solution may not be unique, and hence the collection of asymptotically stable equilibria of the flow induced by the ODE (27) may not be a singleton for every $\mathbf{u}$. Let the set of asymptotically stable equilibria of the said ODE be denoted $\mathcal{A}_{\mathbf{u}}$, where $\emptyset \ne \mathcal{A}_{\mathbf{u}} \subseteq W$.

Analysis of the slower timescale recursion: The slower timescale stochastic recursion of the GTD2 algorithm is
$$\mathbf{u}_{t+1} = \Gamma^U\big(\mathbf{u}_t + \beta_t(\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})(\mathbf{w}_t^\top \mathbf{x}_t)\big), \qquad \mathbf{u}_t \in \mathbb{R}^d,\ \mathbf{u}_0 \in U. \tag{30}$$
Note that since $\frac{\xi_t}{\beta_t} \to 0$, the stochastic recursion (20) runs on a faster timescale than the neural network stochastic recursion (10), and hence we continue to maintain the quasi-stationary condition $\bar{\theta}_t \equiv \bar{\theta} = (\theta, \bar{\mathbf{w}})^\top$. The above equation can be rearranged as
$$\mathbf{u}_{t+1} = \Gamma^U\big(\mathbf{u}_t + \beta_t(\mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}] + \mathbb{M}^3_{t+1} + \ell^3_t)\big), \tag{31}$$
where $\Delta^{\mathbf{w}_t}_{t+1} \triangleq (\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})(\mathbf{w}_t^\top \mathbf{x}_t) = \big((\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})\mathbf{x}_t^\top\big)\mathbf{w}_t$, the noise term $\mathbb{M}^3_{t+1} \triangleq \Delta^{\mathbf{w}_t}_{t+1} - \mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1} \mid \mathcal{F}_t]$ and the bias $\ell^3_t \triangleq \mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1} \mid \mathcal{F}_t] - \mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}]$. Similar to Equation (12), we can rewrite the above recursion as
$$\mathbf{u}_{t+1} = \mathbf{u}_t + \beta_t\Big(\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}]) + \hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1}) + \hat{\Gamma}^U_{\mathbf{u}_t}(\ell^3_t) + o(\beta_t)\Big), \tag{32}$$
where $\hat{\Gamma}^U_{\mathbf{u}_t}(\cdot)$ is the Frechet derivative (defined in Equation (8)) of the projection operator $\Gamma^U$. The above equation can be interpreted as a stochastic recursive inclusion:
$$\mathbf{u}_{t+1} = \mathbf{u}_t + \beta_t\Big(\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}]) + \hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1}) + \hat{\Gamma}^U_{\mathbf{u}_t}(\ell^3_t) + o(\beta_t)\Big), \quad \text{with } \hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{E}[\Delta^{\mathbf{w}_t}_{t+1}]) \in h^3(\mathbf{u}_t), \tag{33}$$
where the set-valued map $h^3 : \mathbb{R}^d \to \{\text{subsets of } \mathbb{R}^d\}$ is defined as
$$h^3(\mathbf{u}) \triangleq \big\{\hat{\Gamma}^U_{\mathbf{u}}(\mathbb{E}[\Delta^{\mathbf{w}}_{t+1}]),\ \text{where } \mathbf{w} \in \mathcal{A}_{\mathbf{u}}\big\}. \tag{34}$$
Indeed, $h^3(\mathbf{u}) = \{\hat{\Gamma}^U_{\mathbf{u}}(B\mathbf{w}),\ \text{where } B = \mathbb{E}[(\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})\mathbf{x}_t^\top]$ and $\mathbf{w} \in \mathcal{A}_{\mathbf{u}}\}$. It is easy to verify that $B = \Phi_{\bar{\theta}}^\top D_{d^\pi}(\mathbf{I} - \gamma_{t+1}P^\pi)\Phi_{\bar{\theta}}$. Here, one cannot directly apply the multi-timescale stochastic approximation results from (Borkar, 1997), since that work assumes that the limit point of the faster timescale recursion is unique (please see Chapter 6 of (Borkar, 2008)).
In our setting, however, the faster timescale recursion (23) has several limit points (note that the stable limit set $\mathcal{A}_{\mathbf{u}}$ is not a singleton). This is where our analysis differs from that of the seminal paper on the GTD2 algorithm, where it is assumed that both the matrices $\mathbb{E}[\mathbf{x}_t\mathbf{x}_t^\top]$ and $\mathbb{E}[(\mathbf{x}_t - \gamma_{t+1}\mathbf{x}_{t+1})\mathbf{x}_t^\top]$ are non-singular. In our TTN setting, one cannot guarantee this condition, since the features are provided by a neural network and it is hard to force the neural network to generate a collection of features with the desired non-singularity properties. To analyze the limiting behaviour of the GTD2 algorithm under this relaxed setting, one has to view the stochastic recursion (30) as a stochastic recursive inclusion (Benaïm et al., 2005) and apply the recent results of (Ramaswamy and Bhatnagar, 2016), which analyze the asymptotic behaviour of general multi-timescale stochastic recursive inclusions. A few observations are in order:

E1: For each $\mathbf{u} \in U$, $h^3(\mathbf{u})$ is a singleton. This follows from the definition of $h^3$ and Claim 1 above, where we established that each $\mathbf{w} \in \mathcal{A}_{\mathbf{u}}$ is a least-squares solution to the linear system of equations $\Phi_{\bar{\theta}}\mathbf{w} = \delta^{\mathbf{u}}$. It further implies that $h^3$ is a Marchaud map as well.

E2: $\sup_{t \in \mathbb{N}}(\|\mathbf{w}_t\| + \|\mathbf{u}_t\|) < \infty$ a.s. This follows since $W$ and $U$ are bounded sets.

E3: $\{\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1})\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the filtration $\{\mathcal{F}_{t+1}\}_{t \in \mathbb{N}}$. This follows directly since $\{\mathbb{M}^3_{t+1}\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the same filtration.

E4: $\{\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1})\}_{t \in \mathbb{N}}$ is square-integrable and $\exists K_3 \in (0, \infty)$ such that
$$\mathbb{E}\big[\|\hat{\Gamma}^U_{\mathbf{u}_t}(\mathbb{M}^3_{t+1})\|^2 \,\big|\, \mathcal{F}_t\big] \le K_3(1 + \|\mathbf{u}_t\|^2 + \|\mathbf{w}_t\|^2) \quad \text{a.s., } t \in \mathbb{N}. \tag{35}$$
This follows directly from the finiteness of the underlying Markov chain and from the assumption that the boundary $\partial U$ is smooth.

E5: $\hat{\Gamma}^U_{\mathbf{u}_t}(\ell^3_t) \to 0$ as $t \to \infty$ a.s. The proof is similar to C3. This implies that the bias is asymptotically irrelevant.

E6: For each $\mathbf{u} \in U$, the set $\mathcal{A}_{\mathbf{u}}$ is a globally attracting set of the ODE (27) and is also Lyapunov stable. Further, there exists $K_4 \in (0, \infty)$ such that $\sup_{\mathbf{w} \in \mathcal{A}_{\mathbf{u}}} \|\mathbf{w}\| \le K_4(1 + \|\mathbf{u}\|)$. This follows since $\mathcal{A}_{\mathbf{u}} \subseteq W$ and $W$ is bounded.

E7: The set-valued map $q : U \to \{\text{subsets of } \mathbb{R}^d\}$ given by $q(\mathbf{u}) = \mathcal{A}_{\mathbf{u}}$ is upper-semicontinuous. Consider convergent sequences $\{\mathbf{u}_n\}_{n \in \mathbb{N}} \to \mathbf{u}$ and $\{\mathbf{w}_n\}_{n \in \mathbb{N}} \to \mathbf{w}$ with $\mathbf{u}_n \in U$ and $\mathbf{w}_n \in q(\mathbf{u}_n) = \mathcal{A}_{\mathbf{u}_n}$. Note that $\mathbf{w} \in W$ and $\mathbf{u} \in U$, since $W$ and $U$ are compact. Also, $\Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\mathbf{w}_n = \Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}_n}$ (from Claim 1). Taking limits on both sides, we get
$$\lim_{n \to \infty} \Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\mathbf{w}_n = \lim_{n \to \infty} \Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}_n} \;\Rightarrow\; \Phi_{\bar{\theta}}^\top D_{d^\pi}\Phi_{\bar{\theta}}\mathbf{w} = \Phi_{\bar{\theta}}^\top D_{d^\pi}\delta^{\mathbf{u}}.$$
This implies that $\mathbf{w} \in \mathcal{A}_{\mathbf{u}} = q(\mathbf{u})$. The claim thus follows.

We have thus established all the conditions required by Theorem 3.10 of (Ramaswamy and Bhatnagar, 2016) to characterize the limiting behaviour of the stochastic recursive inclusion (33). By appealing to the said theorem, we obtain the following result on the asymptotic behaviour of the GTD2 algorithm:
$$\Big\{(\mathbf{u}, \mathbf{w})^\top \,\Big|\, \liminf_{t \to \infty} \big\|(\mathbf{u}, \mathbf{w})^\top - (\mathbf{u}_t, \mathbf{w}_t)^\top\big\| = 0\Big\} \subseteq \bigcup_{\mathbf{u} \in \mathcal{A}^*} \big\{(\mathbf{u}, \mathbf{w})^\top \,\big|\, \mathbf{w} \in \mathcal{A}_{\mathbf{u}}\big\}, \tag{36}$$
where $\mathcal{A}^*$ is the set of asymptotically stable equilibria of the following ODE:
$$\frac{d}{dt}\mathbf{u}(t) = h^3(\mathbf{u}(t)), \qquad \mathbf{u}(0) \in \mathring{U},\ t \in \mathbb{R}_+. \tag{37}$$
One can obtain similar results for projected TDC. We now state our main result:

Theorem 2. Let $\Theta \subset \mathbb{R}^{m+d}$ be a compact, convex subset with smooth boundary. Let $\Gamma^\Theta$ be Frechet differentiable. Further, let $\hat{\Gamma}^\Theta_{\bar{\theta}}(-\frac{1}{2}\nabla L_{\text{slow}})(\bar{\theta})$ be Lipschitz continuous. Also, let Assumptions 1-3 hold.
Let $K$ be the set of asymptotically stable equilibria of the following ODE contained inside $\Theta$:
$$\frac{d}{dt}\bar{\theta}(t) = \hat{\Gamma}^\Theta_{\bar{\theta}(t)}\big(-\tfrac{1}{2}\nabla_{\bar{\theta}} L_{\text{slow}}\big)(\bar{\theta}(t)), \qquad \bar{\theta}(0) \in \mathring{\Theta},\ t \in \mathbb{R}_+.$$
Then the stochastic sequence $\{\bar{\theta}_t\}_{t \in \mathbb{N}}$ generated by the TTN converges almost surely to $K$ (sample path dependent). Further,

TD(λ) Convergence: Under the additional Assumption 4-TD(λ), we obtain the following result: For any $\lambda \in [0, 1]$, the stochastic sequence $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit $\mathbf{w}^*$, where $\mathbf{w}^*$ satisfies
$$\Pi_{\bar{\theta}^*} T^{(\lambda)}(\Phi_{\bar{\theta}^*}\mathbf{w}^*) = \Phi_{\bar{\theta}^*}\mathbf{w}^*, \tag{38}$$
with $T^{(\lambda)}$ defined in Lemma 2 and $\bar{\theta}^* \in K$ (sample path dependent).

GTD2 Convergence: Let $W, U \subset \mathbb{R}^d$ be compact, convex subsets with smooth boundaries. Let Assumption 4-GTD2 hold. Let $\Gamma^W$ and $\Gamma^U$ be Frechet differentiable. Then the stochastic sequences $\{\mathbf{w}_t\}_{t \in \mathbb{N}}$ and $\{\mathbf{u}_t\}_{t \in \mathbb{N}}$ generated by the GTD2 algorithm (Algorithm 3) within the TTN setting satisfy
$$\Big\{(\mathbf{u}, \mathbf{w})^\top \,\Big|\, \liminf_{t \to \infty} \big\|(\mathbf{u}, \mathbf{w})^\top - (\mathbf{u}_t, \mathbf{w}_t)^\top\big\| = 0\Big\} \subseteq \bigcup_{\mathbf{u} \in \mathcal{A}^*} \big\{(\mathbf{u}, \mathbf{w})^\top \,\big|\, \mathbf{w} \in \mathcal{A}_{\mathbf{u}}\big\},$$
where $\mathcal{A}^*$ is the set of asymptotically stable equilibria of the ODE
$$\frac{d}{dt}\mathbf{u}(t) = \hat{\Gamma}^U_{\mathbf{u}(t)}\big(\Phi_{\bar{\theta}^*}^\top D_{d^\pi}(\mathbf{I} - \gamma_{t+1}P^\pi)\Phi_{\bar{\theta}^*}\,\mathbf{u}(t)\big), \qquad \mathbf{u}(0) \in \mathring{U},\ t \in \mathbb{R}_+,$$
and $\mathcal{A}_{\mathbf{u}}$ is the set of asymptotically stable equilibria of the ODE
$$\frac{d}{dt}\mathbf{w}(t) = \hat{\Gamma}^W_{\mathbf{w}(t)}\big(\Phi_{\bar{\theta}^*}^\top D_{d^\pi}\delta^{\mathbf{u}} - \Phi_{\bar{\theta}^*}^\top D_{d^\pi}\Phi_{\bar{\theta}^*}\,\mathbf{w}(t)\big), \qquad \mathbf{w}(0) \in \mathring{W},\ t \in \mathbb{R}_+,$$
with $\bar{\theta}^* \in K$ (sample path dependent) and $\delta^{\mathbf{u}}$ defined in Eq. (29).

C ADDITIONAL EXPERIMENTS

C.1 NONIMAGE CATCHER

C.2 PUDDLEWORLD

C.3 IMAGE CATCHER

We also ran policy evaluation experiments on image-based Catcher, with 2 stacked 64x64 frames as input. The policy evaluated was the same as in the non-image setting. We include plots analogous to those for the non-image Catcher experiments.

C.4 CARTPOLE

In the classic Cartpole environment, the agent has to balance a pole on a cart. The state is given by a vector of 4 numbers (cart position, cart velocity, pole angle, pole velocity). The two available actions are applying a force towards the left or the right. Rewards are +1 at every timestep, and an episode terminates once the pole dips below a certain angle or the cart moves too far from the center. We use the OpenAI gym implementation (Brockman et al., 2016). The policy to be evaluated consists of applying force in the direction the pole is moving with probability 0.9 (stabilizing the pole) or applying force in the direction of the cart's velocity with probability 0.1. We inject some stochasticity so that the resulting policy does not perform overly well, which would lead to an uninteresting value function.

C.5 ACROBOT

In the classic Acrobot domain, an agent consisting of two links has to swing up past a certain height. The agent observes a 4-dimensional state consisting of the angles and the angular velocities of the two links. The available actions are three possible levels of torque applied to the joint. The evaluated policy is obtained by training an agent with true-online Sarsa on a tile-coding representation and then fixing its learned epsilon-greedy policy.

C.6 PUCK WORLD

In Puck World (Tasfi, 2016), the agent has to move in a two-dimensional box towards a good puck while staying away from a bad puck. The 8-dimensional state consists of (player x location, player y location, player x velocity, player y velocity, good puck x location, good puck y location, bad puck x location, bad puck y location). Each action increases the agent's velocity in one of the four cardinal directions, apart from a "None" action, which does nothing.
The reward is the negative distance to the good puck, plus a penalty of $-10 + x$ if the agent is within a certain radius of the bad puck, where $x \in [-2, 0]$ depends on the distance to the bad puck (the reward is slightly modified from the original game to make the value function more interesting). The policy moves the agent towards the good puck, while maintaining a soft cap on the agent's velocity. In more detail, an action is chosen by the following procedure. First, we determine the eligible actions. The None action is always eligible. The actions which move the agent towards the good puck are also eligible; for example, if the good puck is northeast of the agent, the North and East actions are eligible. If the agent's velocity in a certain direction is above 30, then the action for that direction is no longer eligible. Finally, the agent picks uniformly at random from all eligible actions.

C.7 OFF-POLICY CATCHER

We run a preliminary experiment to check whether TTN can have an advantage in the off-policy setting. The target policy is the same as the one used for the other Catcher experiments (described in Appendix D). The behaviour policy is slightly different: if the apple is within 20 units (the target policy uses 25 units), then the agent takes the action in the direction of the apple with probability 0.7 and one of the other two actions with probability 0.15 each. If the apple is not within range, then the agent takes the None action 10% of the time and one of the other two with equal probability. This combination of behaviour and target policies results in importance sampling ratios in the range of 0 to 8.7, moderately large values. We try TTN with three off-policy algorithms (TD, TDC and LSTD) and compare to off-policy Nonlinear TD. For TTN, the features are learned by optimizing the MSTDE on the behaviour policy while the values are learned off-policy. The main difference between TTN and Nonlinear TD is that Nonlinear TD makes off-policy updates to the entire network, while TTN only changes the linear part.
1. What is the main contribution of the paper regarding Two-Timescale Networks?
2. What are the strengths and weaknesses of the proposed method compared to other non-linear value function approximation methods?
3. How does the reviewer assess the originality and significance of the paper's content?
4. Are there any concerns or questions regarding the theoretical analysis and experimental results?
5. What suggestions does the reviewer have to improve the paper and its contributions?
Review
Review Summary: This paper presents a Two-Timescale Network (TTN) that enables linear methods to be used to learn values. On the slow timescale, non-linear features are learned using a surrogate loss. On the fast timescale, a value function is estimated as a linear function of those features. It appears to be a single network, where one head drives the representation and the second head learns the values. They investigate multiple surrogate losses and end up using the MSTDE for its simplicity, even though it provides worse value estimates than the MSPBE, as detailed in their experiments. They provide convergence results - regular two-timescale stochastic approximation results from Borkar - for the two-timescale procedure, and provide empirical evidence for the benefits of this method compared to other non-linear value function approximation methods.

Clarity and Quality: The paper is well written in general, the mathematics seems to be sound and the experimental results appear to be thorough.

Originality: Using two different heads, one to drive the representation and the second to learn the values, appears to be an architectural detail. The surrogate loss to learn the features, coupled with a linear policy evaluation algorithm, appears to be novel, but does not warrant, in my opinion, the novelty necessary for publication at ICLR. The theoretical results appear to be a straightforward application of Borkar's two-timescale stochastic approximation algorithm to this architecture to get convergence. This, therefore, does not appear to be a novel contribution. You state after equation (3) that non-linear function classes do not have a closed form solution. However, it seems that the paper Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation does indeed have a closed form solution for non-linear function approximators when minimizing the MSPBE (albeit making a linearity assumption, which is something your work seems to make as well). The work done in the control setting appears to be very similar to the experiments performed in the paper: Shallow Updates for Deep Reinforcement Learning.

Significance: Overall, I think that the paper is well written and the experimental evaluation is thorough. However, the novelty is lacking, as it appears to be training using a multi-headed approach (which exists) and the convergence results appear to be a straightforward application of Borkar's two-timescale proof. The novelty therefore appears to be using a surrogate loss function for training the features, which does not possess sufficient novelty, in my opinion, for ICLR. I would suggest the authors detail why their two-timescale approach is different from that of Borkar's, or additionally add some performance guarantee to the convergence results to extend the theory. This would make for a much stronger paper.
ICLR
Title Two-Timescale Networks for Nonlinear Value Function Approximation

Abstract A key component for many reinforcement learning agents is to learn a value function, either for policy evaluation or control. Many of the algorithms for learning values, however, are designed for linear function approximation—with a fixed basis or fixed representation. Though there have been a few sound extensions to nonlinear function approximation, such as nonlinear gradient temporal difference learning, these methods have largely not been adopted, eschewed in favour of simpler but not sound methods like temporal difference learning and Q-learning. In this work, we provide a two-timescale network (TTN) architecture that enables linear methods to be used to learn values, with a nonlinear representation learned at a slower timescale. The approach facilitates the use of algorithms developed for the linear setting, such as data-efficient least-squares methods, eligibility traces and the myriad of recently developed linear policy evaluation algorithms, to provide nonlinear value estimates. We prove convergence for TTNs, with particular care given to ensure convergence of the fast linear component under potentially dependent features provided by the learned representation. We empirically demonstrate the benefits of TTNs, compared to other nonlinear value function approximation algorithms, both for policy evaluation and control.

1 INTRODUCTION

Value function approximation—estimating the expected returns from states for a policy—is heavily reliant on the quality of the representation of state. One strategy has been to design a basis—such as radial basis functions (Sutton and Barto, 1998) or a Fourier basis (Konidaris et al., 2011)—for use with a linear function approximator and temporal difference (TD) learning (Sutton, 1988). For low-dimensional observation vectors, this approach has been effective, but can be onerous to extend to high-dimensional observations, potentially requiring significant domain expertise. Another strategy has been to learn the representation, such as with basis adaptation or neural networks. Though there is still the need to specify the parametric form, learning these representations alleviates the burden of expert specification. Further, it is more feasible to scale to high-dimensional observations, such as images, with neural networks (Mnih et al., 2015; Silver et al., 2016). Learning representations necessitates algorithms for nonlinear function approximation. Despite the deficiencies in specification for fixed bases, linear function approximation for estimating value functions has several benefits over nonlinear estimators. They enable least-squares methods, which can be much more data-efficient for policy evaluation (Bradtke and Barto, 1996; Szepesvari, 2010; van Seijen and Sutton, 2015), as well as robust to meta-parameters (Pan et al., 2017). Linear algorithms can also make use of eligibility traces, which can significantly speed learning (Sutton, 1988; Dann et al., 2014; White and White, 2016), but have not been soundly extended to nonlinear value function approximation. Additionally, there have been a variety of algorithms derived for the linear setting, both for on-policy and off-policy learning (Sutton et al., 2009; Maei, 2011; van Seijen and Sutton, 2014; van Hasselt et al., 2014; Mahadevan et al., 2014; Sutton et al., 2016; Mahmood et al., 2017).
These linear methods have also been well-explored theoretically (Tsitsiklis and Van Roy, 1997; Maei, 2011; Mahmood and Sutton, 2015; Yu, 2015) and empirically (Dann et al., 2014; White and White, 2016), with some insights into improvements from gradient methods (Sutton et al., 2009), true-online traces (van Seijen and Sutton, 2014) and emphatic weightings (Sutton et al., 2016). These algorithms are easy to implement, with relatively simple objectives. Objectives for nonlinear value function approximation, on the other hand, can be quite complex (Maei et al., 2009), resulting in more complex algorithms (Menache et al., 2005; Di Castro and Mannor, 2010; Bhatnagar et al., 2013) or requiring a primal-dual formulation as has been done for control (Dai et al., 2017).

In this work, we pursue a simple strategy to take advantage of the benefits of linear methods, while still learning the representation. The main idea is to run two learning processes in parallel: the first learns nonlinear features using a surrogate loss and the second estimates the value function as a linear function of those features. We show that these Two-timescale Networks (TTNs) converge, because the features change on a sufficiently slow scale, so that they are effectively fixed for the fast linear value function estimator. Similar ideas have previously been explored for basis adaptation, but without this key aspect of TTNs—namely the separation of the loss for the representation and the loss for the value function. This separation is critical because it enables simpler objectives—for which the gradient can be easily sampled—to drive the representation, while still enabling use of the mean squared projected Bellman error (MSPBE)—on which all the above linear algorithms are based. This separation avoids the complexity of the nonlinear MSPBE, but maintains the useful properties of the (linear) MSPBE. A variety of basis adaptation approaches have used a two-timescale approach, but with the same objective for the representation and the values (Menache et al., 2005; Di Castro and Mannor, 2010; Bhatnagar et al., 2013; J et al., 2016). Yu and Bertsekas (2009) provided algorithms for basis adaptation using other losses, such as the Bellman error estimated with Monte Carlo samples, taking derivatives through fixed-point solutions for the value function. Levine et al. (2017) periodically compute a closed-form least-squares solution for the last layer of a neural network, with a Bayesian update to prevent too much change. Because these methods did not separate value learning and basis adaptation, the resulting algorithms are more complex. The strategy of using two different heads—one to drive the representation and one to learn the values—has yet to be systematically explored.

We show that TTNs are a promising direction for nonlinear function approximation, allowing us to leverage linear algorithms while retaining the flexibility of nonlinear function approximators. We first discuss a variety of possible surrogate losses, and their potential for learning a useful representation. We then show that TTNs converge, despite the fact that a linear algorithm is used with a changing representation. This proof is similar to previous convergence proofs for policy evaluation, but with a relaxation of the requirement that features be independent, a requirement that is unlikely to hold for learned features. We then show empirically that TTNs are effective compared to other nonlinear value function approximation methods and that they can exploit several benefits of linear value approximation algorithms.
In particular, for both low-dimensional and high-dimensional (image-based) observations, we show (a) the utility of least-squares (or batch) methods, (b) advantages from eligibility traces and (c) gains from being able to select amongst different linear policy evaluation algorithms. We demonstrate that TTNs can be effective for control with neural networks, enabling use of fitted Q-iteration within TTNs as an alternative to target networks.

2 BACKGROUND

We assume the agent acts in a finite Markov Decision Process (MDP), with notation from (White, 2017). The dynamics of the MDP are defined by the 3-tuple $(S, A, P)$, where $S$ is the set of states, $A$ the set of actions and $P : S \times A \times S \to [0, 1]$ the transition probability function. The task in this environment is defined by a reward function $R : S \times A \times S \to \mathbb{R}$ and a discount function $\gamma : S \times A \times S \to [0, 1]$. At each time step, the agent takes an action $A_t$ according to a policy $\pi : S \times A \to [0, 1]$ and the environment returns reward $R_{t+1}$, next state $S_{t+1}$ and discount $\gamma_{t+1}$. The goal in policy evaluation is to compute the value function: the expected sum of discounted rewards from every state under a fixed policy $\pi$. The value function $V_\pi : S \to \mathbb{R}$ is defined recursively from each state $s \in S$ as

$$V_\pi(s) \doteq \mathbb{E}[R_{t+1} + \gamma_{t+1} V_\pi(S_{t+1}) \mid S_t = s] = \sum_{a \in A} \pi(s, a) \sum_{s' \in S} P(s, a, s')\big(R(s, a, s') + \gamma(s, a, s') V_\pi(s')\big). \quad (1)$$

When using linear function approximation, this goal translates into finding parameters $w \in \mathbb{R}^d$ to approximate the value function

$$\hat{V}(s) \doteq x(s)^\top w \approx V_\pi(s), \quad \text{where } x : S \to \mathbb{R}^d \text{ is a feature function.} \quad (2)$$

More generally, a nonlinear function $\hat V(s)$ could be learned to estimate $V_\pi$. To formulate this learning problem, we need to consider the objective for learning the function $\hat V$. Let $V_\pi, \hat V \in \mathbb{R}^{|S|}$ be the vectors of values for $V_\pi, \hat V$. The recursive formula (1) defines a Bellman operator $B_\pi$ whose fixed point satisfies $B_\pi V_\pi = V_\pi$. Consider a restricted value function class, such as the set of linear value functions $\hat V \in \mathcal{F} = \{Xw \mid w \in \mathbb{R}^d\}$, where $X \in \mathbb{R}^{|S| \times d}$ is a matrix with the $i$-th row set to $x(s_i)$ for the $i$-th state $s_i \in S$. Then it may no longer be possible to satisfy the recursion. Instead, an alternative is to find a projected fixed point $\Pi_{\mathcal{F}} B_\pi \hat V = \hat V$, where the projection operator $\Pi_{\mathcal{F}}$ projects $B_\pi \hat V$ to the space spanned by this linear basis:

$$\Pi_{\mathcal{F}} V \doteq \arg\min_{\bar V \in \mathcal{F}} \|\bar V - V\|_d^2 \quad (3)$$

where $d \in \mathbb{R}^{|S|}$ is a vector which weights each state in the weighted norm $\|V\|_d^2 = \sum_{s \in S} d(s) V(s)^2$. Many linear policy evaluation algorithms estimate this projected fixed point, including TD (Sutton, 1988), least-squares TD (Bradtke and Barto, 1996) and gradient TD (Sutton et al., 2009). The objective formulated for this projected fixed point, however, is more complex for nonlinear function approximation. For linear function approximation, the projection operator simplifies into a closed-form solution involving only the features $X$. Letting $\delta_t = R_{t+1} + \gamma \hat V(S_{t+1}) - \hat V(S_t)$, the resulting mean-squared projected Bellman error (MSPBE) can be written as

$$\text{MSPBE}(w) \doteq \|\Pi_{\mathcal{F}} B_\pi \hat V - \hat V\|_d^2 = \mathbb{E}[\delta_t x_t]^\top \, \mathbb{E}[x_t x_t^\top]^{-1} \, \mathbb{E}[\delta_t x_t] \quad (4)$$

where $\mathbb{E}[\delta_t x_t] = \sum_{s \in S} d(s)\, \mathbb{E}[\delta_t \mid S_t = s]\, x(s)$. For nonlinear function classes, the projection does not have a closed-form solution and may be expensive to compute. Further, the projection involves the value function parameters, so the projection changes as the parameters change. The nonlinear MSPBE and the resulting algorithm are more complex (Maei et al., 2009), and have not seen widespread use. Another option is simply to consider different objectives. However, as we discuss below, other objectives for learning the value function either are similarly difficult to optimize or provide poor value estimates. In the next section, we discuss some of these alternatives and introduce Two-timescale Networks as a different strategy to enable nonlinear value function approximation.
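As a concrete reference point, the following minimal sketch (our own, not code from the paper) shows a linear TD(0) update for $\hat V(s) = x(s)^\top w$ and a sample-based estimate of the MSPBE in Eq. (4); the function names and the small ridge term `reg` for numerical stability are our assumptions.

```python
import numpy as np

def td0_step(w, x, r, gamma, x_next, alpha):
    """One semi-gradient TD(0) update for V_hat(s) = x(s)^T w."""
    delta = r + gamma * (w @ x_next) - (w @ x)
    return w + alpha * delta * x

def mspbe_estimate(w, X, R, gammas, X_next, reg=1e-6):
    """Sample-based MSPBE from Eq. (4): E[dx]^T E[xx^T]^{-1} E[dx].
    X, X_next: (n, d) feature matrices; R, gammas: (n,) arrays."""
    deltas = R + gammas * (X_next @ w) - (X @ w)
    e_dx = (X * deltas[:, None]).mean(axis=0)              # E[delta_t x_t]
    C = (X.T @ X) / X.shape[0] + reg * np.eye(X.shape[1])  # E[x_t x_t^T] (ridged)
    return e_dx @ np.linalg.solve(C, e_dx)
```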
3 TWO-TIMESCALE NETWORKS AND SURROGATE OBJECTIVES

We first introduce Two-timescale Networks (TTNs), and then describe different surrogate objectives that can be used in TTNs. We discuss why these surrogate objectives within TTNs are useful to drive the representation, but are not good replacements for the MSPBE for learning the value function.

TTNs use two concurrent optimization processes: one for the parameters of the network $\theta$ and one for the parameters of the value function $w$. The value function is approximated as $\hat V(s) \doteq x_\theta(s)^\top w$, where the features $x_\theta : S \to \mathbb{R}^d$ are a parametrized function and $\theta \in \mathbb{R}^m$ is adjusted to provide better features. For a neural network, $\theta$ consists of all the parameters in the hidden layers, which produce the final hidden layer $x_\theta(s)$. The two optimization processes maintain different time scales, with the parameters $\theta$ for the representation changed as a slow process, and the parameters $w$ for the value estimate changed as a fast process relative to $\theta$.

The separation between these two processes could be problematic, since the target problem—estimating the value function—is not influencing the representation! The slow process is driven by a completely separate objective than the fast process. However, the key is to select this surrogate loss for the slow process so that it is related to the value estimation process, but still straightforward to compute the gradient of the loss. We use $\hat V(s)$ as the output of the fast part, which corresponds to the value estimate used by the agent. To distinguish, $\hat Y(s)$ denotes the output of the slow part (depicted in Figure 1), which may or may not be an estimate of the value, as we discuss below.

Consider first the mean-squared TD error (MSTDE), which corresponds to $\sum_{s \in S} d(s)\, \mathbb{E}[\delta_t^2 \mid S_t = s]$. Notice that this does not correspond to the mean-squared Bellman error (MSBE), for which it is more difficult to compute gradients: $\|B_\pi \hat V - \hat V\|_d^2 = \sum_{s \in S} d(s)\big(\mathbb{E}[\delta_t \mid S_t = s]\big)^2$. Using the MSTDE as a surrogate loss, with $\hat Y(s) = x_\theta(s)^\top \bar w$, the slow part of the network minimizes

$$L_{\text{slow}}(\theta) = \min_{\bar w \in \mathbb{R}^d} \sum_{s \in S} d(s)\, \mathbb{E}\big[\delta_t(\theta, \bar w)^2 \mid S_t = s\big], \qquad \delta_t(\theta, \bar w) \doteq R_{t+1} + \gamma_{t+1}\, x_\theta(S_{t+1})^\top \bar w - x_\theta(S_t)^\top \bar w.$$

This slow part has its own weights $\bar w$ associated with estimating the value function, but learned according to the MSTDE instead. The advantage here is that stochastic gradient descent on the MSTDE is straightforward, with gradient $\delta_t \nabla_{\{\theta, \bar w\}}\big[\gamma_{t+1}\hat Y(S_{t+1}) - \hat Y(S_t)\big]$, where $\nabla_{\{\theta, \bar w\}}\hat Y(S_t)$ is the gradient of the neural network, including the head of the slow part which uses weights $\bar w$. Using the MSTDE has been found to provide worse value estimates than the MSPBE—which we re-affirm in our experiments. It could, nonetheless, play a useful role as a surrogate loss, where it can inform the representation towards estimating values. A minimal sketch of such a slow-part update is given below.
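The following sketch illustrates one MSTDE stochastic-gradient step for the slow parameters. The single ReLU hidden layer and the sizes are illustrative assumptions on our part, not the paper's architecture; the gradient follows the formula above with manual backpropagation through one layer.

```python
import numpy as np

def relu_features(W1, s):
    """x_theta(s): one hidden ReLU layer (an illustrative choice of architecture)."""
    return np.maximum(0.0, W1 @ s)

def mstde_slow_step(W1, w_bar, s, r, gamma, s_next, xi):
    """One SGD step on delta^2 w.r.t. the slow parameters (theta = W1, and w_bar)."""
    x, x2 = relu_features(W1, s), relu_features(W1, s_next)
    delta = r + gamma * (w_bar @ x2) - (w_bar @ x)
    # d(delta)/d(w_bar) = gamma * x2 - x, so d(delta^2 / 2)/d(w_bar) = delta * (gamma*x2 - x)
    g_wbar = delta * (gamma * x2 - x)
    # Backprop through the ReLU: d x_i / d W1[i, :] = 1{pre-activation > 0} * s
    m, m2 = (x > 0).astype(float), (x2 > 0).astype(float)
    g_W1 = delta * (gamma * np.outer(w_bar * m2, s_next) - np.outer(w_bar * m, s))
    return W1 - xi * g_W1, w_bar - xi * g_wbar
```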
There are a variety of other surrogate losses that could be considered, related to the value function. However, many of these losses are problematic to sample incrementally, without storing large amounts of data. For example, the mean-squared return error (MSRE) could be used, which takes samples of the return and minimizes the mean-squared error to those sampled returns. Obtaining such returns requires waiting many steps, and so delays updating the representation for the current state. Another alternative is the MSBE. The gradient of the nonlinear MSBE is not as complex as the gradient of the nonlinear MSPBE, because it does not involve the gradient of a projection. However, it suffers from the double sampling problem: sampling the gradient requires two independent samples. For these reasons, we explore the MSTDE as the simplest surrogate loss involving the value function.

Finally, surrogate losses could also be defined that are not directly related to the value function. Two natural choices are losses based on predicting the next state and reward. The output of the slow part could correspond to a vector of values, such as $Y_t = S_{t+1} \in \mathbb{R}^n$ or $Y_t = [S_{t+1}; R_{t+1}]$. The ability to predict the next state and reward is intuitively useful for enabling prediction of the value, and also has some theoretical grounding. Szepesvari (2010, Section 3.2.1) shows that the Bellman error is small if the features can capture a horizon of immediate rewards and expected next states. For linear encoders, Song et al. (2016) show that an optimal set of features enables predictions of the next state and reward. More generally, learning representations using auxiliary tasks or self-supervised tasks has had some success in RL, such as using pixel control (Jaderberg et al., 2016) or classifying the temporal distance between frames (Aytar et al., 2018). In computer vision, Gidaris et al. (2018) showed that using rotated images as self-supervised tasks produced a useful representation for the main loss, without training the representation with the main loss. Any of these self-supervised tasks could also be used for the surrogate objective, and they motivate that separating out representation learning does not degrade performance. For now, we restrict focus to simpler surrogate objectives, as the main purpose of this work is to demonstrate that the separation in TTNs is a sound approach for learning values.

4 CONVERGENCE OF TWO-TIMESCALE NETWORK ALGORITHM

Training TTNs is fully online, using a single transition from the environment at a time. Projected stochastic gradient descent is used to reduce the surrogate loss $L_{\text{slow}}(\theta)$, and a linear policy evaluation algorithm, such as GTD2 or TD($\lambda$), is coupled to the network, where the prediction vector $w$ is calibrated proportionally to $-\nabla_w \text{MSPBE}_\theta(w)$. The full procedure is summarized in Algorithm 1, in Appendix A. Regarding the convergence of TTNs, a few remarks are in order:

1. The network needs to evolve sufficiently slowly relative to the linear prediction weights. In our theoretical analysis, this is achieved by ensuring that the step sizes $\xi_t$ and $\alpha_t$ of the network and the linear policy evaluation algorithm, respectively, decay to zero at different rates. In particular, $\xi_t / \alpha_t \to 0$ as $t \to \infty$. With this relative disparity in magnitudes, one can assume that the network is essentially quasi-static, while the faster linear component is equilibrated relative to the static features.

2. The linear prediction algorithms need to converge for any set of features provided by the neural network, particularly linearly dependent features. This induces a technical bottleneck, since linear independence of the features is a necessary condition for the convergence of the prediction methods GTD and GTD2. We overcome this by following a differential-inclusion-based analysis for GTD2.
3. Finally, we need to guarantee the stability of the iterates (both the feature vector $\theta_t$ and the prediction vector $w_t$), and this is ensured by projecting the iterates onto respective compact, convex sets.

The analysis for the convergence of the neural network is general, enabling any network architecture that is twice continuously differentiable. We prove that TTNs converge asymptotically to the stable equilibria of a projected ODE which completely captures the mean dynamics of the algorithm. We now state our main result (for notation and technical details, please refer to Appendix B). The results are provided for the cases when TD($\lambda$) or GTD2 is used as the linear prediction method. However, note that similar results can be obtained for other linear prediction methods.

Theorem 1. Let $\bar\theta = (\theta, \bar w)^\top$ and let $\Theta \subset \mathbb{R}^{m+d}$ be a compact, convex subset with smooth boundary. Let the projection operator $\Gamma^\Theta$ be Frechet differentiable and let $\hat\Gamma^\Theta_{\bar\theta}\big(-\tfrac{1}{2}\nabla L_{\text{slow}}\big)(\bar\theta)$ be Lipschitz continuous. Also, let Assumptions 1-3 hold. Let $K$ be the set of asymptotically stable equilibria of the following ODE contained inside $\Theta$:

$$\frac{d}{dt}\bar\theta(t) = \hat\Gamma^\Theta_{\bar\theta(t)}\big(-\tfrac{1}{2}\nabla_{\bar\theta} L_{\text{slow}}\big)(\bar\theta(t)), \qquad \bar\theta(0) \in \mathring\Theta \text{ and } t \in \mathbb{R}_+.$$

Then the stochastic sequence $\{\bar\theta_t\}_{t \in \mathbb{N}}$ generated by the TTN converges almost surely to $K$ (sample path dependent). Further,

TD($\lambda$) Convergence: Under the additional Assumption 4-TD($\lambda$), we obtain the following result: for any $\lambda \in [0, 1]$, the stochastic sequence $\{w_t\}_{t \in \mathbb{N}}$ generated by the TD($\lambda$) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit $w^*$, where $w^*$ satisfies

$$\Pi_{\bar\theta^*} T^{(\lambda)}(\Phi_{\bar\theta^*} w^*) = \Phi_{\bar\theta^*} w^*, \quad (5)$$

with $\bar\theta^* \in K$ (sample path dependent).

5 EXPERIMENTS

We investigate the performance of TTNs versus a variety of other nonlinear policy evaluation algorithms, as well as the impact of choices within TTNs. We particularly aim to answer: (a) is it beneficial to optimize the MSPBE to obtain value estimates, rather than using value estimates from surrogate losses like the MSTDE; (b) do TTNs provide gains over other nonlinear policy evaluation algorithms; and (c) can TTNs benefit from the variety of options in linear algorithms, including least-squares approaches, eligibility traces and different policy evaluation algorithms. More speculatively, we also investigate if TTNs can provide a competitive alternative to deep Q-learning in control.

Experiments were performed on-policy in five environments. We use three classic continuous-state domains: Puddle World, a continuous-state grid world with high-magnitude negative rewards for walking through a puddle; Acrobot, where a robot has to swing itself up; and Cartpole, which involves balancing a pole. We also use two game domains: Catcher, which involves catching falling apples; and Puck World, in which the agent has to chase a puck (Tasfi, 2016). Catcher includes both a variant with 4-dimensional observations—position and velocity of the paddle, and (x, y) of the apple—and one with image-based observations—with two consecutive 64-by-64 grayscale images as input. This domain enables us to analyze the benefit of the algorithms, on the same domain, both with low-dimensional and high-dimensional observations. We describe the policies evaluated for these domains in Appendix D. We include a subset of results in the main body, with additional results in the appendix. Results in Cartpole are similar to Acrobot; Cartpole results are only in the appendix.

The value estimates are evaluated using the root-mean-squared value error (RMSVE), where the value error is $(V_\pi(s) - \hat V(s))^2$. The optimal values for a set of 500 states are obtained using extensive rollouts from each state, and the RMSVE is computed across these 500 states.
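The evaluation metric just described can be computed as follows (our sketch; the optional state weighting is our addition, and the paper's description suggests a uniform average over the 500 evaluation states).

```python
import numpy as np

def rmsve(V_true, V_hat, weights=None):
    """Root-mean-squared value error over a set of evaluation states.
    V_true: ground-truth values from rollouts; V_hat: predicted values."""
    sq = (np.asarray(V_true) - np.asarray(V_hat)) ** 2
    return float(np.sqrt(np.average(sq, weights=weights)))
```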
For the algorithms, we use the following settings, unless specified otherwise. For the slow part (features), we minimize the mean-squared TD error (MSTDE) using the AMSGrad optimizer (Reddi et al., 2018) with $\beta_1 = 0$ and $\beta_2 = 0.99$. The network weights use Xavier initialization (Glorot and Bengio, 2010); the weights for the fast part were initialized to 0. In Puddle World, the neural network consists of a single hidden layer of 128 units with ReLU activations. In the other environments, we use 256 units instead. To choose hyperparameters, we first did a preliminary sweep on a broad range and then chose a smaller range where the algorithms usually made progress, summarized in Appendix D. Results are reported for hyperparameters in the refined range, chosen based on RMSVE over the latter half of a run, with shaded regions corresponding to one standard error.

TTN vs. competitors. We compare to the following algorithms: nonlinear TD, nonlinear GTD (Maei et al., 2009), Adaptive Bases (ABBE and ABTD) (Di Castro and Mannor, 2010), and nonlinear TD + LSTD regularization (inspired by Levine et al. (2017)). We describe these algorithms in more depth in Appendix D. All of the algorithms involve more complex updates than TTNs, except for nonlinear TD, which corresponds to a semi-gradient TD update with nonlinear function approximation. For TTNs, we use LSTD for the linear, fast part. In Figure 2, TTN is able to perform as well as or better than the competitor algorithms. Especially in Puddle World, its error is significantly lower than that of the second-best algorithm. Interestingly, Nonlinear GTD also performs well across domains, suggesting an advantage for theoretically-sound algorithms.

The utility of optimizing the MSPBE. First, we show that the TTN benefits from having a second head learning at a faster timescale. To do so, we compare the prediction errors of TTN, with the fast process optimizing the MSPBE (using LSTD) and the slow one optimizing the MSTDE, against a network trained end-to-end using the MSTDE with AMSGrad. As a baseline, we include TTN with a fixed representation (a randomly initialized neural network) to highlight that the slow process is indeed improving the representation. We also include results for optimizing the MSTDE with the fixed representation. In Cartpole (Figure 3), we see that optimizing the MSPBE indeed gives better results than optimizing the MSTDE. Additionally, we can conclude that using the MSTDE, despite being a poor objective for learning the value function, can still be effective for driving feature-learning, since it outperforms the fixed representation.

Linear algorithms and eligibility traces. TTNs give us the flexibility to choose any linear policy evaluation algorithm for the fast part. We compare several choices: TD, least-squares TD (LSTD) (Bradtke and Barto, 1996), forgetful LSTD (FLSTD) (van Seijen and Sutton, 2015), emphatic TD (Sutton et al., 2016), gradient TD (the TDC variant) (Sutton et al., 2009) and their true-online versions (van Seijen and Sutton, 2014; van Hasselt et al., 2014) to learn the value function. GTD and ETD are newer temporal difference methods which have better convergence properties and can offer increased stability.
The true-online variants modify the update rules to improve the behaviour of the algorithms when learning online, and seem to outperform their counterparts empirically (van Seijen and Sutton, 2014). Least-squares methods summarize past interaction, but are often avoided due to quadratic computation in the number of features. For TTNs, however, there is no computational disadvantage to using LSTD methods, for two reasons. First, it is common to choose deep but skinny architectures (Mnih et al., 2015; Hessel et al., 2017). Furthermore, if the last layer is fully connected, then we already need to store $O(d^2)$ weights and use $O(d^2)$ time to compute a forward pass—the same as LSTD. We include FLSTD, which progressively forgets older interaction, as this could be advantageous when the feature representation changes over time. For TTN, incremental versions of the least-squares algorithms are used to maintain estimates of the required quantities online (see Appendix D); a sketch of such an incremental update is given at the end of this section.

All of these linear algorithms can use eligibility traces to increase their sample efficiency by propagating TD errors back in time. The trace parameter $\lambda$ can also provide a bias-variance tradeoff for the value estimates (Sutton, 1988; Dann et al., 2014). For nonlinear function approximation, eligibility traces can no longer be derived for TD. Though invalid, we can naively extend them to this case by keeping one trace per weight, giving us nonlinear TD($\lambda$).

The results overall indicate that TTNs can benefit from the ability to use different linear policy evaluation algorithms and traces, in particular from the use of least-squares methods, as shown in Figure 4 for Puddle World and Catcher. The dominance of LSTD over the other linear algorithms, including in terms of parameter sensitivity, persists across the other three domains. We additionally investigated sensitivity to $\lambda$, and found that most of the TTN variants benefit from a nonzero $\lambda$ value and, in many cases, the best setting is high, near 1. One exception is the least-squares methods, where LSTD performs similarly for most values of $\lambda$. Nonlinear TD($\lambda$), on the other hand, performs markedly worse as $\lambda$ increases. This is unsurprising considering that the naive addition of eligibility traces is unsound. We include these sensitivity plots in the appendix, in Figure ??.

Surrogate loss functions. For all the previous experiments, we optimized the MSTDE for the slow part of the network, but as discussed in Section 3, other objectives can be used. We compare a variety of objectives, by choosing different $Y_t$, including $Y_t = R_{t+1}$ (reward); $Y_t = S_{t+1}$ (next state); and $Y_t = R_{t+1} + \hat Y(S_{t+1})$ (semi-gradient MSTDE). In Puck World (Figure 5a), we can see that every auxiliary loss performed well. This does not appear to be universally true, as in Acrobot we found that the MSTDE was a less effective surrogate loss, leading to slower learning (see Figure 5b). Alternate losses such as the semi-gradient MSTDE and next-state prediction were more successful in that domain. These results suggest that there is no universally superior surrogate loss and that choosing the appropriate one can yield benefits in certain domains.
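To make the earlier cost argument concrete, here is a sketch of incremental LSTD(0) maintaining $A^{-1}$ via the Sherman-Morrison identity, so each step costs $O(d^2)$; the class name and the initialization scale `eps` are our assumptions, and forgetting (as in FLSTD) is not included.

```python
import numpy as np

class IncrementalLSTD:
    """LSTD(0) with Sherman-Morrison updates: O(d^2) per transition."""
    def __init__(self, d, eps=1.0):
        self.A_inv = np.eye(d) / eps   # inverse of A = sum of outer(x, x - gamma*x')
        self.b = np.zeros(d)

    def update(self, x, r, gamma, x_next):
        u = x - gamma * x_next         # A grows by outer(x, u)
        Ax = self.A_inv @ x            # A^{-1} x
        uA = u @ self.A_inv            # u^T A^{-1}
        self.A_inv -= np.outer(Ax, uA) / (1.0 + uA @ x)
        self.b += r * x
        return self.A_inv @ self.b     # current weight vector w
```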
Control. Although the focus of this work is policy evaluation, we also provide some preliminary results for the control setting. For control, we include some standard additions to competitor learning algorithms to enable learning with neural networks. The DQN algorithm (Mnih et al., 2015) utilizes two main tricks to stabilize training: experience replay—storing past transitions and replaying them multiple times—and a target network—which keeps the value function in the Q-learning targets fixed, updating the target network infrequently (e.g., every $k = 10{,}000$ steps).

We use an alternative strategy to target networks for TTN. The use of a target network is motivated by fitted Q-iteration (FQI) (Ernst et al., 2005), which updates towards fixed Q-values with one sweep through a batch of data. TTNs provide a straightforward mechanism to instead directly use FQI, where we can solve for the weights on the entire replay buffer, taking advantage of the closed-form solution for linear regression towards the Q-values from the last update. Batch FQI requires storing all data, whereas we instead have a sliding window of experience. We therefore additionally incorporate a regularization term, which prevents the weights from changing too significantly between updates, similarly to Levine et al. (2017). Each FQI iteration requires solving a least-squares problem on the entire buffer, an operation costing $O(nd^2)$ computation, where $d$ is the number of features in the last layer of the network and $n$ is the size of the buffer; we update the network every $k$ steps, which reduces the per-step computation to $O(nd^2/k)$. The slow part drives feature-learning by minimizing the semi-gradient MSTDE for state-action values. As another competitor, we include LS-DQN (Levine et al., 2017), a DQN variant which also adjusts the final layer's weights towards the FQI solution, similar to TTN-FQI.

The experimental details differ for control. On non-image Catcher, we do a sweep over $\alpha_{\text{slow}}$ and $\lambda_{\text{reg}}$, the regularization parameter, for TTN, and sweep over the learning rate and the number of steps over which $\epsilon$ is annealed for DQN. On image Catcher, runs require significantly more computation, so we only tune hyperparameters by hand. The FQI update in TTNs was done every 1000 (10000) steps for non-image (image) Catcher. We run each algorithm 10 times (5 times) for 200 thousand steps (10 million steps) on the non-image (image) Catcher. We see that TTN is able to perform well on both versions of Catcher in Figure 6, particularly learning more quickly than the DQN variants. This difference is especially pronounced in the image version of Catcher, where TTN is also able to achieve much higher average returns than DQN. Both algorithms seem to suffer from catastrophic forgetting later during training, as the performance dips down after an initial rise, although TTN still stabilizes on a better policy. Overall, these results suggest that TTNs are a promising direction for improving sample efficiency in control, whilst still maintaining stability when training neural networks.
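A minimal sketch of one regularized FQI sweep for the last linear layer follows. The exact form of the regularizer used in the paper is not specified beyond "prevents the weights from changing too significantly", so the proximal term toward the previous weights is our assumption, as are the function and argument names.

```python
import numpy as np

def fqi_last_layer(X, R, X_next_greedy, gamma, w_prev, lam_reg):
    """One FQI iteration for the last linear layer over a replay buffer.
    X: (n, d) features of (s, a); X_next_greedy: (n, d) features of the greedy
    next action under the previous weights; R: (n,) rewards.
    Solves min_w ||X w - y||^2 + lam_reg * ||w - w_prev||^2 in closed form."""
    y = R + gamma * (X_next_greedy @ w_prev)   # fixed Q-targets from the last iterate
    d = X.shape[1]
    A = X.T @ X + lam_reg * np.eye(d)
    b = X.T @ y + lam_reg * w_prev
    return np.linalg.solve(A, b)
```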
6 DISCUSSION AND CONCLUSION

In this work, we proposed Two-timescale Networks as a new strategy for policy evaluation with nonlinear function approximation. As opposed to many other algorithms derived for nonlinear value function approximation, TTNs are intentionally designed to be simple, to promote ease of use. The algorithm combines a slow learning process for adapting features and a fast process for learning a linear value function, both of which are straightforward to train. By leveraging these two timescales, we are able to prove convergence guarantees for a broad class of choices for both the fast and slow learning components.

We highlighted several cases where the decoupled architecture in TTNs can improve learning, particularly by enabling the use of linear methods—which facilitates the use of least-squares methods and eligibility traces. This work has only begun the investigation into which combinations of surrogate losses and linear value function approximation algorithms are most effective. We provided some evidence that, when using stochastic approximation algorithms rather than least-squares algorithms, the addition of traces can have a significant effect within TTNs. This contrasts with nonlinear TD, where traces were not effective. The ability to use traces is potentially one of the most exciting outcomes for TTNs, since traces have been so effective for linear methods. More generally, TTNs provide the opportunity to investigate the utility of the many linear value function algorithms in more complex domains with learned representations. For example, emphatic algorithms have improved asymptotic properties (Sutton et al., 2016), but to the best of our knowledge, have not been used with neural networks.

Another promising direction for TTNs is off-policy learning, where many value functions are learned in parallel. Off-policy learning can suffer from variance due to large-magnitude corrections (importance sampling ratios). With a large collection of value functions, it is more likely that some of them will cause large updates, potentially destabilizing learning in the network if trained in an end-to-end fashion. TTNs would not suffer from this problem, because a different objective can be used to drive learning in the network. We provide some preliminary experiments in the appendix supporting this hypothesis (Appendix C.7).

A TTN ALGORITHM

Algorithm 1 Training of TTNs
1: procedure TRAIN(w, θ, w̄, π)    ▷ π is a fixed policy
2:   Initialize θ, w̄ with Xavier initialization, w to 0 and the starting state s according to the environment
3:   while training do
4:     a ← action chosen by π(s)
5:     r, s′ ← Environment(s, a)    ▷ Get reward and next state
6:     θ, w̄ ← GradientDescent on L_slow using sample (s, r, s′)
7:     w ← Update on L_value using sample (s, r, s′)
8:     s ← s′
9:   end while
10:  return learned parameters w, θ, w̄
11: end procedure

A Python rendering of this loop is sketched below.
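This sketch mirrors Algorithm 1; `env` and `policy` are hypothetical stubs of our own (with `env.reset() -> s` and `env.step(a) -> (r, s_next)`, mirroring line 5), and the two callbacks stand in for the slow and fast updates.

```python
def train_ttn(env, policy, slow_step, fast_step, n_steps):
    """Skeleton of Algorithm 1.
    slow_step(s, r, s_next): one SGD step on L_slow (theta and w_bar; line 6).
    fast_step(s, r, s_next): one linear value update on w (e.g., TD or LSTD; line 7)."""
    s = env.reset()
    for _ in range(n_steps):
        a = policy(s)                # line 4: action from the fixed policy pi
        r, s_next = env.step(a)      # line 5: reward and next state
        slow_step(s, r, s_next)      # line 6: surrogate-loss gradient step
        fast_step(s, r, s_next)      # line 7: linear value update
        s = s_next
```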
B CONVERGENCE PROOF OF TWO-TIMESCALE NETWORKS

B.1 DEFINITIONS & NOTATIONS

- Let $\mathbb{R}_+$ denote the set of non-negative real numbers, $\mathbb{N} = \{0, 1, 2, \ldots\}$, and let $\|\cdot\|$ denote the Euclidean norm or any equivalent norm.
- A map $f : \mathbb{R}^d \to \mathbb{R}^d$ is Lipschitz continuous if $\|f(x) - f(y)\| \le L\|x - y\|$ for some $L \in (0, \infty)$, $\forall x, y \in \mathbb{R}^d$.
- A set-valued map $h : \mathbb{R}^d \to \{\text{subsets of } \mathbb{R}^d\}$ is called a Marchaud map if it satisfies the following conditions: 1. For each $x \in \mathbb{R}^d$, $h(x)$ is convex and compact. 2. For each $x \in \mathbb{R}^d$, $\exists K \in (0, \infty)$ such that $\sup_{y \in h(x)} \|y\| \le K(1 + \|x\|)$. 3. $h$ is upper-semicontinuous, i.e., if $\{x_n\}_{n \in \mathbb{N}} \to x$ and $\{y_n\}_{n \in \mathbb{N}} \to y$, where $x_n \in \mathbb{R}^d$, $y_n \in h(x_n)$, $\forall n \in \mathbb{N}$, then $y \in h(x)$.
- For $x_1, x_2 \in \mathbb{R}^k$ and $D \in \mathbb{R}^{k \times k}$ a diagonal matrix, we define the inner product $\langle x_1, x_2 \rangle_D \doteq x_1^\top D x_2$. We also define the semi-norm $\|x\|_D \doteq \langle x, x \rangle_D^{1/2}$. If all the diagonal elements of $D$ are strictly positive, then $\|\cdot\|_D$ is a norm.
- For any set $X$, let $\mathring{X}$ denote the interior of $X$ and $\partial X$ denote the boundary of $X$.
- For brevity, let $\bar\theta = (\theta, \bar w)^\top$ and let $\Phi_{\bar\theta}$ be the feature matrix corresponding to the feature parameter $\bar\theta$, i.e., the $|S| \times d$ matrix whose rows are $x_\theta(s_1)^\top, x_\theta(s_2)^\top, \ldots, x_\theta(s_{|S|})^\top$, where $x_\theta(s)^\top$ is the row vector corresponding to state $s$ (6). Further, define the $|S| \times |S|$ matrix $P^\pi$ as $P^\pi_{s,s'} \doteq \sum_{a \in A} \pi(s, a) P(s, a, s')$, $s, s' \in S$ (7).
- Also, recall that $L_{\text{slow}}(\theta) = \text{MSTDE}(\theta) \doteq \mathbb{E}\big[\mathbb{E}[\delta_t^2 \mid S_t]\big]$.
- A function $\Gamma : U \subseteq \mathbb{R}^{d_1} \to X \subseteq \mathbb{R}^{d_2}$ is Frechet differentiable at $x \in U$ if there exists a bounded linear operator $\hat\Gamma_x : \mathbb{R}^{d_1} \to \mathbb{R}^{d_2}$ such that the limit

$$\lim_{\epsilon \downarrow 0} \frac{\Gamma(x + \epsilon y) - x}{\epsilon} \quad (8)$$

exists and is equal to $\hat\Gamma_x(y)$. We say $\Gamma$ is Frechet differentiable if the Frechet derivative of $\Gamma$ exists at every point in its domain.

B.2 ASSUMPTIONS

Assumption 1: The pre-determined, deterministic step-size sequence $\{\xi_t\}_{t \in \mathbb{N}}$ satisfies $\xi_t > 0\ \forall t \in \mathbb{N}$, $\sum_{t \in \mathbb{N}} \xi_t = \infty$ and $\sum_{t \in \mathbb{N}} \xi_t^2 < \infty$.

Assumption 2: The Markov chain induced by the given policy $\pi$ is ergodic, i.e., aperiodic and irreducible. Assumption 2 implies that the underlying Markov chain is asymptotically stationary, and hence it guarantees the existence of a unique steady-state distribution $d^\pi$ over the state space $S$ (Levin and Peres, 2017), i.e., $\lim_{t \to \infty} \mathbb{P}(S_t = s) = d^\pi(s)$, $\forall s \in S$.

Assumption 3: We are given a realization of the transition dynamics of the MDP in the form of a sample trajectory $O^\pi = \{S_0, A_0, R_1, S_1, A_1, R_2, S_2, \ldots\}$, where the initial state $S_0 \in S$ is chosen arbitrarily, while the action $A \ni A_t \sim \pi(S_t, \cdot)$, the transitioned state $S \ni S_{t+1} \sim P(S_t, A_t, \cdot)$ and the reward $R \ni R_{t+1} = R(S_t, A_t, S_{t+1})$.

To analyze the long-run behaviour of our algorithm, we employ the ODE-based analysis (Borkar, 2008; Kushner and Yin, 2003; Ljung, 1977) of stochastic recursive algorithms. Here, we consider a deterministic ordinary differential equation (ODE) whose asymptotic flow is equivalent to the long-run behaviour of the stochastic recursion. Then we analyze the qualitative behaviour of the solutions of the ODE to determine the asymptotically stable sets. The ODE-based analysis is elegant and conclusive, and it further guarantees that the limit points of the stochastic recursion will almost surely belong to the compact connected internally chain transitive invariant set of the equivalent ODE. Since the algorithm follows a multi-timescale stochastic approximation framework, we will also resort to the more generalized multi-timescale differential-inclusion-based analysis proposed in (Borkar, 1997; Ramaswamy and Bhatnagar, 2016).

Note that there exists only a unilateral coupling between the neural network (where the feature vectors $\bar\theta_t$ are calibrated by following stochastic gradient descent w.r.t. $L_{\text{slow}}$) and the various policy evaluation algorithms (see Figure 7). This implies that the policy evaluation algorithms depend on the feature vectors $\bar\theta_t$, but not vice versa. Therefore, one can independently analyze the asymptotic behaviour of the feature vectors $\{\bar\theta_t\}_{t \in \mathbb{N}}$. Also, as a technical requirement, note that since one cannot guarantee the stability (almost sure boundedness) of the iterates $\{\bar\theta_t\}_{t \in \mathbb{N}}$ (a necessary condition for the ODE-based analysis; please refer to Chapter 2 of Borkar (2008)), we consider the following projected stochastic recursion:

$$\bar\theta_{t+1} = \Gamma^\Theta\Big(\bar\theta_t + \xi_t \delta_t \big(\nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_t) - \gamma_{t+1} \nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_{t+1})\big)\Big), \quad (9)$$

where $\Gamma^\Theta(\cdot)$ is the projection onto a pre-determined compact and convex subset $\Theta \subset \mathbb{R}^{m+d}$, i.e., $\Gamma^\Theta(x) = x$ for $x \in \mathring\Theta$, while for $x \notin \mathring\Theta$, it is the nearest point in $\Theta$ w.r.t. the Euclidean distance (or an equivalent metric). Define the filtration $\{\mathcal{F}_t\}_{t \in \mathbb{N}}$, a family of increasing natural $\sigma$-fields, where $\mathcal{F}_t \doteq \sigma\big(\{\bar\theta_i, S_i, R_i;\ 0 \le i \le t\}\big)$. The following lemma characterizes the limiting behaviour of the iterates $\{\bar\theta_t\}_{t \in \mathbb{N}}$:

Lemma 1. Let Assumptions 1-3 hold. Let $\Theta \subset \mathbb{R}^{m+d}$ be a compact, convex subset with smooth boundary. Let $\Gamma^\Theta$ be Frechet differentiable.
Further, let $\hat\Gamma^\Theta_{\bar\theta}\big(-\tfrac{1}{2}\nabla L_{\text{slow}}\big)(\bar\theta)$ be Lipschitz continuous. Let $K$ be the set of asymptotically stable equilibria of the following ODE contained inside $\Theta$:

$$\frac{d}{dt}\bar\theta(t) = \hat\Gamma^\Theta_{\bar\theta(t)}\big(-\tfrac{1}{2}\nabla_{\bar\theta} L_{\text{slow}}\big)(\bar\theta(t)), \qquad \bar\theta(0) \in \mathring\Theta \text{ and } t \in \mathbb{R}_+.$$

Then the stochastic sequence $\{\bar\theta_t\}_{t \in \mathbb{N}}$ generated by the TTN converges almost surely to $K$.

Proof. We employ here the ODE-based analysis as proposed in (Borkar, 2008; Kushner and Clark, 2012). Firstly, we recall the stochastic recursion which updates $\bar\theta_t$:

$$\bar\theta_{t+1} = \Gamma^\Theta\Big(\bar\theta_t + \xi_t \delta_t \big(\nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_t) - \gamma_{t+1} \nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_{t+1})\big)\Big), \quad (10)$$

where $\Gamma^\Theta$ is the projection onto a pre-determined compact and convex subset $\Theta \subset \mathbb{R}^{m+d}$. Here, $\delta_t \doteq R_{t+1} + \gamma_{t+1} \hat Y_{\bar\theta_t}(S_{t+1}) - \hat Y_{\bar\theta_t}(S_t)$ is the temporal difference. Also, $\nabla_{\bar\theta_t} \hat Y_{\bar\theta} \in \mathbb{R}^{(m+d) \times |S|}$ is the Jacobian of $\hat Y_{\bar\theta}$ at $\bar\theta = \bar\theta_t$, and $\nabla_{\bar\theta_t} \hat Y_{\bar\theta}(s)$ is the column corresponding to state $s$. Now the above equation can be rewritten as

$$\bar\theta_{t+1} = \Gamma^\Theta\big(\bar\theta_t + \xi_t (h^1(\bar\theta_t) + \mathbb{M}^1_{t+1} + \ell^1_t)\big), \quad (11)$$

where $h^1(\bar\theta) \doteq \mathbb{E}\big[\delta_t \big(\nabla_{\bar\theta} \hat Y_{\bar\theta}(S_t) - \gamma_{t+1} \nabla_{\bar\theta} \hat Y_{\bar\theta}(S_{t+1})\big)\big]$, the noise term $\mathbb{M}^1_{t+1} \doteq \delta_t \big(\nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_t) - \gamma_{t+1} \nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_{t+1})\big) - \mathbb{E}\big[\delta_t \big(\nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_t) - \gamma_{t+1} \nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_{t+1})\big) \mid \mathcal{F}_t\big]$ and the bias $\ell^1_t \doteq \mathbb{E}\big[\delta_t \big(\nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_t) - \gamma_{t+1} \nabla_{\bar\theta_t} \hat Y_{\bar\theta}(S_{t+1})\big) \mid \mathcal{F}_t\big] - h^1(\bar\theta_t)$. Further,

$$\bar\theta_{t+1} = \bar\theta_t + \xi_t\, \frac{\Gamma^\Theta\big(\bar\theta_t + \xi_t (h^1(\bar\theta_t) + \mathbb{M}^1_{t+1} + \ell^1_t)\big) - \bar\theta_t}{\xi_t} = \bar\theta_t + \xi_t \Big(\hat\Gamma^\Theta_{\bar\theta_t}(h^1(\bar\theta_t)) + \hat\Gamma^\Theta_{\bar\theta_t}(\mathbb{M}^1_{t+1}) + \hat\Gamma^\Theta_{\bar\theta_t}(\ell^1_t) + o(\xi_t)\Big), \quad (12)$$

where $\hat\Gamma^\Theta$ is the Frechet derivative (defined in Eq. (8)). Note that $\Gamma^\Theta$ is single-valued since $\Theta$ is convex, and the above limit exists since the boundary $\partial\Theta$ is assumed smooth. Further, for $x \in \mathring\Theta$, we have

$$\hat\Gamma^\Theta_x(y) = \lim_{\epsilon \to 0} \frac{\Gamma^\Theta(x + \epsilon y) - x}{\epsilon} = \lim_{\epsilon \to 0} \frac{x + \epsilon y - x}{\epsilon} = y \quad \text{(for sufficiently small } \epsilon\text{)}, \quad (13)$$

i.e., $\hat\Gamma^\Theta_x(\cdot)$ is an identity map for $x \in \mathring\Theta$. A few observations are in order:

C1: $\hat\Gamma^\Theta_{\bar\theta}(h^1(\bar\theta))$ is a Lipschitz continuous function of $\bar\theta$. This follows from the hypothesis of the lemma.

C2: $\hat\Gamma^\Theta_{\bar\theta_t}(\mathbb{M}^1_{t+1})$ is a truncated martingale-difference noise. Indeed, it is easy to verify that the noise sequence $\{\mathbb{M}^1_{t+1}\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence w.r.t. the filtration $\{\mathcal{F}_{t+1}\}_{t \in \mathbb{N}}$, i.e., $\mathbb{M}^1_{t+1}$ is $\mathcal{F}_{t+1}$-measurable and integrable, $\forall t \in \mathbb{N}$, and $\mathbb{E}[\mathbb{M}^1_{t+1} \mid \mathcal{F}_t] = 0$ a.s., $\forall t \in \mathbb{N}$. Also, since $\Gamma^\Theta(\cdot)$ is a continuous linear operator, $\hat\Gamma^\Theta(\mathbb{M}^1_{t+1})$ is likewise $\mathcal{F}_{t+1}$-measurable and integrable, $\forall t \in \mathbb{N}$.

C3: $\hat\Gamma^\Theta_{\bar\theta_t}(\ell^1_t) \to 0$ as $t \to \infty$ a.s. Indeed,

$$\big\|\hat\Gamma^\Theta_{\bar\theta_t}(\ell^1_t)\big\| = \Big\|\lim_{\epsilon \to 0} \frac{\Gamma^\Theta(\bar\theta_t + \epsilon \ell^1_t) - \bar\theta_t}{\epsilon}\Big\| \le \lim_{\epsilon \to 0} \frac{\big\|\Gamma^\Theta(\bar\theta_t + \epsilon \ell^1_t) - \Gamma^\Theta(\bar\theta_t)\big\|}{\epsilon} \le \lim_{\epsilon \to 0} \frac{\big\|\bar\theta_t + \epsilon \ell^1_t - \bar\theta_t\big\|}{\epsilon} = \|\ell^1_t\|.$$

Taking $t \to \infty$, C3 follows directly from the ergodicity (Levin and Peres, 2017) (Assumption 2) and the finiteness of the underlying Markov chain.

C4: $o(\xi_t) \to 0$ as $t \to \infty$ (follows from Assumption 1).

C5: The iterates $\{\bar\theta_t\}_{t \in \mathbb{N}}$ are stable (forcefully), i.e., bounded almost surely, since $\bar\theta_t \in \Theta$, $\forall t \in \mathbb{N}$ (ensured by the projection operator $\Gamma^\Theta$) and $\Theta$ is compact (i.e., closed and bounded).

C6: $\exists K_0 \in (0, \infty)$ such that

$$\mathbb{E}\big[\|\hat\Gamma^\Theta_{\bar\theta_t}(\mathbb{M}^1_{t+1})\|^2 \mid \mathcal{F}_t\big] \le K_0 (1 + \|\bar\theta_t\|^2) \ \text{a.s.} \quad (14)$$

This follows directly from the finiteness of the Markov chain and from the assumption that the boundary $\partial\Theta$ is smooth.

Now, by appealing to Theorem 2, Chapter 2 of (Borkar, 2008), we conclude that the stochastic recursion (10) asymptotically tracks the following ODE:

$$\frac{d}{dt}\bar\theta(t) = \hat\Gamma^\Theta_{\bar\theta(t)}(h^1(\bar\theta(t))) = \hat\Gamma^\Theta_{\bar\theta(t)}\big(-\tfrac{1}{2}\nabla_{\bar\theta} L_{\text{slow}}\big)(\bar\theta(t)), \qquad \bar\theta(0) \in \mathring\Theta \text{ and } t \in \mathbb{R}_+. \quad (15)$$

In other words, the stochastic recursion (10) converges to the asymptotically stable equilibria of the ODE (15) contained inside $\Theta$.
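As a small numerical illustration of the identity-map property in Eq. (13) (our own example, with an L2 ball standing in for the abstract set $\Theta$), the directional difference quotient of the projection recovers $y$ at interior points:

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Gamma^Theta for Theta = {x : ||x|| <= radius}, a concrete compact convex set."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def frechet_dir(x, y, eps=1e-8, radius=1.0):
    """Directional difference quotient of the projection, as in Eqs. (8)/(13)."""
    return (proj_ball(x + eps * y, radius) - proj_ball(x, radius)) / eps

x_int = np.array([0.1, 0.2])    # a point in the interior of Theta
y = np.array([1.0, -1.0])
print(frechet_dir(x_int, y))    # ~ y: the derivative is the identity map in the interior
```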
Remark 1. It is indeed non-trivial to determine the constraint set $\Theta$ without prior adequate knowledge about the limit set of the ODE (15). A pragmatic approach to overcome this concern is to initiate the stochastic recursion with an arbitrary convex, compact set $\Theta$ with a smooth boundary, and to gradually spread it to the whole of $\mathbb{R}^{m+d}$ (Chen, 2006).

Remark 2. It is also important to characterize the hypothesis of the above lemma (i.e., that $\hat\Gamma^\Theta_{\bar\theta}\big(-\tfrac{1}{2}\nabla L_{\text{slow}}\big)(\bar\theta)$ is Lipschitz continuous) with respect to the features $\hat Y_{\bar\theta}$. To achieve that, one has to consider the non-projected form of the ODE (15). When one adopts the spreading approach proposed in the above remark, it is natural to consider the non-projected form, since the limiting flow of the ODE arising from the projected stochastic recursion is more likely to lie inside the compact, convex set as $\Theta$ becomes larger. Thereupon, it is easy to observe that the condition that $\hat Y_{\bar\theta}$ is twice continuously differentiable is sufficient to ensure the Lipschitz continuity of $\hat\Gamma^\Theta_{\bar\theta}\big(-\tfrac{1}{2}\nabla L_{\text{slow}}\big)(\bar\theta)$. Additionally, in that case $K = \{\bar\theta \mid \nabla_{\bar\theta} L_{\text{slow}}(\bar\theta) = 0\}$, which is the set of local extrema of $L_{\text{slow}}$.

B.4 TD($\lambda$) ALGORITHM

One can directly apply TD($\lambda$) with linear function approximation to estimate the value function with respect to the features provided by the neural network. The TD($\lambda$) algorithm is provided in Algorithm 2. Here $e_t, w_t \in \mathbb{R}^d$. Further, $\delta_t \doteq R_{t+1} + \gamma_{t+1} w_t^\top x_{\theta_t}(S_{t+1}) - w_t^\top x_{\theta_t}(S_t)$ is the temporal difference.

Algorithm 2 TD($\lambda$) algorithm
Parameters: $\alpha_t > 0$, $\lambda \in [0, 1]$;
Initialization: $w_0 = 0$, $e_0 = 0$;
For each transition $(S_t, R_{t+1}, S_{t+1})$ in $O^\pi$, do:

$$e_{t+1} = x_{\theta_t}(S_t) + \gamma_{t+1} \lambda\, e_t; \quad (16)$$
$$w_{t+1} = w_t + \alpha_t \big(R_{t+1} + \gamma_{t+1} w_t^\top x_{\theta_t}(S_{t+1}) - w_t^\top x_{\theta_t}(S_t)\big)\, e_t; \quad (17)$$

Assumption 4-TD($\lambda$): The pre-determined, deterministic step-size sequence $\{\alpha_t\}_{t \in \mathbb{N}}$ satisfies $\alpha_t > 0\ \forall t \in \mathbb{N}$, $\sum_{t \in \mathbb{N}} \alpha_t = \infty$, $\sum_{t \in \mathbb{N}} \alpha_t^2 < \infty$ and $\lim_{t \to \infty} \xi_t / \alpha_t = 0$.

Note that the step-size schedules $\{\alpha_t\}_{t \in \mathbb{N}}$ and $\{\xi_t\}_{t \in \mathbb{N}}$ satisfy $\xi_t / \alpha_t \to 0$, which implies that $\{\xi_t\}$ converges to 0 relatively faster than $\{\alpha_t\}$. This disparity in the learning rates induces an asynchronous convergence behaviour asymptotically (Borkar, 1997), with the feature parameter sequence $\{\bar\theta_t\}$ converging slowly relative to the TD($\lambda$) sequence $\{w_t\}$. The rationale is that the increment of the underlying stochastic gradient descent of the neural network is smaller than that of the TD($\lambda$) recursion (17), since the neural network SGD is weighted by the step-size schedule $\{\xi_t\}_{t \in \mathbb{N}}$, which is smaller than $\{\alpha_t\}_{t \in \mathbb{N}}$ for all but finitely many $t$. This pseudo-heterogeneity induces multiple perspectives, i.e., when viewed from the faster timescale recursion (the recursion controlled by $\alpha_t$), the slower timescale recursion (controlled by $\xi_t$) appears quasi-static ('almost a constant'), while viewed from the slower timescale, the faster timescale recursion seems equilibrated. Further, it is analytically admissible (Borkar, 1997) to consider the slow timescale stochastic recursion (i.e., the neural network SGD) to be quasi-stationary (i.e., $\bar\theta_t \equiv \bar\theta$, $\forall t \in \mathbb{N}$) while analyzing the asymptotic behaviour of the relatively faster timescale stochastic recursion (17). Thereupon, we obtain the following directly from Theorem 1 of (Tsitsiklis and Van Roy, 1997).

Lemma 2. Assume $\bar\theta_t \equiv \bar\theta$, $\forall t \in \mathbb{N}$. Let Assumptions 1-3 and 4-TD($\lambda$) hold. Then for any $\lambda \in [0, 1]$, the stochastic sequence $\{w_t\}_{t \in \mathbb{N}}$ generated by the TD($\lambda$) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit $w^*$, where $w^*$ satisfies

$$\Pi_{\bar\theta}\, T^{(\lambda)}(\Phi_{\bar\theta} w^*) = \Phi_{\bar\theta} w^*, \quad (18)$$

with $T^{(\lambda)} J(s) \doteq (1 - \lambda) \sum_{i=0}^{\infty} \lambda^i\, \mathbb{E}\Big[\sum_{j=0}^{i} \gamma^{[j]} R_{j+1} + \gamma^{[i+1]} J(S_{i+1}) \,\Big|\, S_0 = s\Big]$ and $\gamma^{[j]} = \prod_{i=0}^{j} \gamma_i$ (with $\gamma_0 = 1$). Also, $\Pi_{\bar\theta}$ is defined according to Eq. (3) with $\mathcal{F} = \{\Phi_{\bar\theta} w \mid w \in \mathbb{R}^d\}$.
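For concreteness, here is a sketch of step-size schedules satisfying Assumption 1 and Assumption 4-TD($\lambda$), together with one linear TD($\lambda$) update following Eqs. (16)-(17). The decay exponents are our own choice (any pair with the stated summability properties and $\xi_t/\alpha_t \to 0$ works), and we read the $e_t$ in Eq. (17) as the freshly updated trace, which is the common convention.

```python
import numpy as np

# sum alpha_t = inf, sum alpha_t^2 < inf; same for xi_t; and xi_t / alpha_t -> 0.
def alpha(t): return 1.0 / (t + 1) ** 0.6   # fast timescale (linear TD(lambda))
def xi(t):    return 1.0 / (t + 1) ** 0.9   # slow timescale (network SGD)

def td_lambda_step(w, e, x, r, gamma, x_next, alpha_t, lam):
    """One linear TD(lambda) update, following Eqs. (16)-(17)."""
    e = x + gamma * lam * e                        # Eq. (16): eligibility trace
    delta = r + gamma * (w @ x_next) - (w @ x)     # TD error under current weights
    return w + alpha_t * delta * e, e              # Eq. (17)
```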
For other single-timescale prediction methods like ETD and LSPE, similar results follow. The least-squares method LSTD, which offers the significant advantage of not depending on step sizes (albeit being computationally expensive), couples smoothly with the TTN setting without any additional considerations.

B.5 GTD2 ALGORITHM

However, one cannot directly apply the original GTD2 and TDC algorithms to the TTN setting, since a necessary condition required for the convergence of these algorithms is the non-singularity of the feature-specific matrices $\mathbb{E}\big[x_{\theta_t}(S_t)\, x_{\theta_t}(S_t)^\top\big]$ and $\mathbb{E}\big[(x_{\theta_t}(S_t) - \gamma_{t+1} x_{\theta_t}(S_{t+1}))\, x_{\theta_t}(S_t)^\top\big]$; please refer to Theorems 1 and 2 of (Sutton et al., 2009). Without the non-singularity assumption, it is indeed hard to guarantee the almost sure boundedness of the GTD2/TDC iterates. In the TTN setting that we consider in this paper, one cannot explicitly assure this condition, since the features are generated by a neural network, and it is not obvious how to control the neural network so that it generates a collection of features with the desired non-singularity characteristic. Henceforth, one has to consider the projected versions of these algorithms. We consider here the projected GTD2 algorithm provided in Algorithm 3.

Algorithm 3 GTD2 algorithm
Parameters: $\alpha_t$, $\beta_t$;
Initialization: $u_0 \in U$, $w_0 \in W$;
For each transition $(S_t, R_{t+1}, S_{t+1})$ in $O^\pi$, do:

$$w_{t+1} = \Gamma^W\Big(w_t + \alpha_t \big(\delta^{u_t}_{t+1}\, x_{\theta_t}(S_t) - (w_t^\top x_{\theta_t}(S_t))\, x_{\theta_t}(S_t)\big)\Big); \quad (19)$$
$$u_{t+1} = \Gamma^U\Big(u_t + \beta_t \big(x_{\theta_t}(S_t) - \gamma_{t+1} x_{\theta_t}(S_{t+1})\big)\big(w_t^\top x_{\theta_t}(S_t)\big)\Big); \quad (20)$$

Here $u_t, w_t \in \mathbb{R}^d$. Further, $\delta^u_{t+1} \doteq R_{t+1} + \gamma_{t+1}\, u^\top x_{\theta_t}(S_{t+1}) - u^\top x_{\theta_t}(S_t)$ is the temporal difference. Here, $\Gamma^W(\cdot)$ is the projection operator onto a pre-determined convex, compact subset $W \subset \mathbb{R}^d$ with a smooth boundary $\partial W$. Therefore, $\Gamma^W$ maps vectors in $\mathbb{R}^d$ to the nearest vectors in $W$ w.r.t. the Euclidean distance (or an equivalent metric). Convexity and compactness ensure that the projection is unique and belongs to $W$. Similarly, $U$ is a pre-determined convex, compact subset of $\mathbb{R}^d$ with a smooth boundary $\partial U$. Projection is required since the stability of the iterates $\{w_t\}_{t \in \mathbb{N}}$ and $\{u_t\}_{t \in \mathbb{N}}$ is hard to guarantee otherwise.

Assumption 4-GTD2: The pre-determined, deterministic step-size sequences $\{\alpha_t\}_{t \in \mathbb{N}}$ and $\{\beta_t\}_{t \in \mathbb{N}}$ satisfy $\alpha_t, \beta_t > 0\ \forall t \in \mathbb{N}$, $\sum_{t \in \mathbb{N}} \alpha_t = \sum_{t \in \mathbb{N}} \beta_t = \infty$, $\sum_{t \in \mathbb{N}} (\alpha_t^2 + \beta_t^2) < \infty$, $\lim_{t \to \infty} \beta_t / \alpha_t = 0$ and $\lim_{t \to \infty} \xi_t / \beta_t = 0$.

Define the filtration $\{\mathcal{F}_t\}_{t \in \mathbb{N}}$, a family of increasing natural $\sigma$-fields, where $\mathcal{F}_t \doteq \sigma\big(\{w_i, u_i, \bar\theta_i, S_i, R_i;\ 0 \le i \le t\}\big)$. Similar to the TD($\lambda$) case, here we also follow the quasi-stationary argument. Henceforth, we analyze the asymptotic behaviour of the GTD2 algorithm under the assumption that the feature vector $\bar\theta_t$ is quasi-static, i.e., $\bar\theta_t \equiv \bar\theta = (\theta, \bar w)^\top$.

Lemma 3. Assume $\bar\theta_t \equiv \bar\theta = (\theta, \bar w)^\top$, $\forall t \in \mathbb{N}$. Let Assumptions 1-3 and 4-GTD2 hold. Then

$$\Big\{(u, w)^\top \,\Big|\, \liminf_{t \to \infty} \big\|(u, w)^\top - (u_t, w_t)^\top\big\| = 0\Big\} \subseteq \bigcup_{u \in A^*} \big\{(u, w)^\top \mid w \in A_u\big\}, \quad (21)$$

where $A^*$ is the set of asymptotically stable equilibria of the following ODE:

$$\frac{d}{dt} u(t) = \hat\Gamma^U_{u(t)}\Big(\Phi_{\bar\theta}^\top D_{d^\pi} (I - \gamma_{t+1} P^\pi)\, \Phi_{\bar\theta}\, u(t)\Big), \qquad u(0) \in \mathring{U},\ t \in \mathbb{R}_+ \quad (22)$$

and $A_u$ is the set of asymptotically stable equilibria of the following ODE:

$$\frac{d}{dt} w(t) = \hat\Gamma^W_{w(t)}\Big(\Phi_{\bar\theta}^\top D_{d^\pi}\, \delta^u - \Phi_{\bar\theta}^\top D_{d^\pi} \Phi_{\bar\theta}\, w(t)\Big), \qquad w(0) \in \mathring{W} \text{ and } t \in \mathbb{R}_+,$$

with $\delta^u$ defined in Eq. (29).
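The following sketch implements one step of the projected recursions (19)-(20). Projecting onto L2 balls is our concrete stand-in for the abstract compact convex sets $W$ and $U$, and the ball radius is an assumption.

```python
import numpy as np

def proj_ball(v, radius):
    """Euclidean projection onto an L2 ball (one choice of compact convex set)."""
    n = np.linalg.norm(v)
    return v if n <= radius else (radius / n) * v

def gtd2_step(w, u, x, r, gamma, x_next, alpha_t, beta_t, radius=100.0):
    """One projected GTD2 update, following Eqs. (19)-(20)."""
    delta_u = r + gamma * (u @ x_next) - (u @ x)   # delta^{u_t}_{t+1}
    w_new = proj_ball(w + alpha_t * (delta_u * x - (w @ x) * x), radius)      # Eq. (19)
    u_new = proj_ball(u + beta_t * (x - gamma * x_next) * (w @ x), radius)    # Eq. (20), uses w_t
    return w_new, u_new
```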
Proof. The two equations in the modified GTD2 algorithm constitute a multi-timescale stochastic approximation recursion, where there exists a bilateral coupling between the stochastic recursions (19) and (20). Since the step-size sequences $\{\alpha_t\}_{t \in \mathbb{N}}$ and $\{\beta_t\}_{t \in \mathbb{N}}$ satisfy $\beta_t / \alpha_t \to 0$, we have that $\beta_t \to 0$ faster than $\alpha_t \to 0$. This disparity in the learning rates induces a pseudo-heterogeneous rate of convergence (or timescales) between the individual stochastic recursions, which results in a pseudo-asynchronous convergence behaviour when considered over a finite time window. Also note that the coherent long-run behaviour of the multi-timescale stochastic recursion will asymptotically follow this short-term behaviour, with the window size extending to infinity (Borkar, 1997; 2008). This pseudo-behaviour induces multiple viewpoints, i.e., when observed from the faster timescale recursion (the recursion controlled by $\alpha_t$), the slower timescale recursion (controlled by $\beta_t$) appears quasi-static ('almost a constant'), while observed from the slower timescale, the faster timescale recursion seems equilibrated. Further, it is analytically admissible (Borkar, 1997) to consider the slow timescale stochastic recursion (20) to be quasi-stationary (i.e., $u_t \equiv u$, $\forall t \in \mathbb{N}$) while analyzing the limiting behaviour of the relatively faster timescale stochastic recursion (19).

Analysis of the faster timescale recursion: The faster timescale stochastic recursion of the GTD2 algorithm is the following:

$$w_{t+1} = \Gamma^W\Big(w_t + \alpha_t \big(\delta^{u_t}_{t+1}\, x_{\theta_t}(S_t) - (w_t^\top x_{\theta_t}(S_t))\, x_{\theta_t}(S_t)\big)\Big). \quad (23)$$

Under the previously mentioned quasi-stationary premise that $u_t \equiv u$ and $\bar\theta_t \equiv \bar\theta = (\theta, \bar w)^\top$, $\forall t \in \mathbb{N}$, we analyze the long-term behaviour of the following recursion:

$$w_{t+1} = \Gamma^W\Big(w_t + \alpha_t \big(\delta^u_{t+1}\, x_t - (w_t^\top x_t)\, x_t\big)\Big), \quad (24)$$

where $x_t = x_\theta(S_t)$ and $\delta^u_{t+1} \doteq R_{t+1} + \gamma_{t+1}\, u^\top x_{t+1} - u^\top x_t$. The above equation can be rearranged as

$$w_{t+1} = \Gamma^W\big(w_t + \alpha_t (h^2(w_t) + \mathbb{M}^2_{t+1} + \ell^2_t)\big),$$

where the noise $\mathbb{M}^2_{t+1} \doteq \delta^u_{t+1} x_t - (w_t^\top x_t) x_t - \mathbb{E}\big[\delta^u_{t+1} x_t - (w_t^\top x_t) x_t \mid \mathcal{F}_t\big]$, $h^2(w) \doteq \mathbb{E}\big[\delta^u_{t+1} x_t - (w^\top x_t) x_t\big]$ and the bias $\ell^2_t \doteq \mathbb{E}\big[\delta^u_{t+1} x_t - (w_t^\top x_t) x_t \mid \mathcal{F}_t\big] - \mathbb{E}\big[\delta^u_{t+1} x_t - (w_t^\top x_t) x_t\big]$. Similar to Equation (12), we can rewrite the above recursion as

$$w_{t+1} = w_t + \alpha_t \Big(\hat\Gamma^W_{w_t}(h^2(w_t)) + \hat\Gamma^W_{w_t}(\mathbb{M}^2_{t+1}) + \hat\Gamma^W_{w_t}(\ell^2_t) + o(\alpha_t)\Big), \quad (25)$$

where $\hat\Gamma^W_{w_t}(\cdot)$ is the Frechet derivative (defined in Equation (8)) of the projection operator $\Gamma^W$. A few observations are in order:

D1: The iterates $\{w_t\}_{t \in \mathbb{N}}$ are stable, i.e., $\sup_{t \in \mathbb{N}} \|w_t\| < \infty$ a.s. This immediately follows since $W$ is bounded.

D2: $\{\hat\Gamma^W_{w_t}(\mathbb{M}^2_{t+1})\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the filtration $\{\mathcal{F}_{t+1}\}_{t \in \mathbb{N}}$. This follows directly since $\{\mathbb{M}^2_{t+1}\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the same filtration.

D3: $\{\hat\Gamma^W_{w_t}(\mathbb{M}^2_{t+1})\}_{t \in \mathbb{N}}$ is square-integrable and $\exists K_2 \in (0, \infty)$ such that

$$\mathbb{E}\big[\|\hat\Gamma^W_{w_t}(\mathbb{M}^2_{t+1})\|^2 \mid \mathcal{F}_t\big] \le K_2 (1 + \|w_t\|^2) \ \text{a.s.}, \quad t \in \mathbb{N}. \quad (26)$$

This follows directly from the finiteness of the underlying Markov chain and from the assumption that the boundary $\partial W$ is smooth.

D4: $\hat\Gamma^W_w(h^2(w))$ is Lipschitz continuous with respect to $w$. The proof is similar to C1.

D5: $\hat\Gamma^W_{w_t}(\ell^2_t) \to 0$ as $t \to \infty$ a.s. The proof is similar to C3.

Now, by appealing to Theorem 2, Chapter 2 of (Borkar, 2008) along with the above observations, we conclude that the stochastic recursion (23) asymptotically tracks the following ODE almost surely:

$$\frac{d}{dt} w(t) = \hat\Gamma^W_{w(t)}(h^2(w(t))) = \hat\Gamma^W_{w(t)}\Big(\mathbb{E}\big[\delta^u_{t+1} x_t\big] - \mathbb{E}\big[x_t x_t^\top\big]\, w(t)\Big), \qquad w(0) \in \mathring{W} \text{ and } t \in \mathbb{R}_+. \quad (27)$$

Therefore, $w_t$ converges asymptotically to the stable equilibria of the above ODE contained inside $W$ almost surely.
Qualitative analysis of the solutions of ODE (27): A direct qualitative analysis of the long-run behaviour of the flow induced by the above ODE shows that the stable limit set is indeed the set of solutions of the following linear system inside $W$ (this follows since $\hat\Gamma^W_w(y) = y$ for $w \in \mathring{W}$, and also because $\hat\Gamma^W_w(\cdot)$ does not contribute any additional limit points on the boundary other than the roots of $h^2$, since $\partial W$ is smooth):

$$\mathbb{E}\big[x_t x_t^\top\big]\, \mathbb{E}\big[\delta^u_{t+1} x_t\big] - \mathbb{E}\big[x_t x_t^\top\big]\, \mathbb{E}\big[x_t x_t^\top\big]\, w = 0 \ \Rightarrow\ \mathbb{E}\big[x_t x_t^\top\big]\, w = \mathbb{E}\big[\delta^u_{t+1} x_t\big]. \quad (28)$$

Note that $\mathbb{E}\big[x_t x_t^\top\big] = \Phi_{\bar\theta}^\top D_{d^\pi} \Phi_{\bar\theta}$.

Claim 1: The above linear system of equations is consistent, i.e., $\mathbb{E}\big[\delta^u_{t+1} x_t\big] \in \mathcal{R}(\Phi_{\bar\theta}^\top D_{d^\pi} \Phi_{\bar\theta})$, the range space of $\Phi_{\bar\theta}^\top D_{d^\pi} \Phi_{\bar\theta}$. To see this, note that the above system can indeed be viewed as the least-squares solution to $\Phi_{\bar\theta} w = \delta^u$ with respect to the weighted norm $\|\cdot\|_{D_{d^\pi}}$, where

$$\delta^u(s) = \bar{R}^\pi(s) + \gamma_{t+1} \sum_{s' \in S} P^\pi_{s,s'}\, u^\top x_\theta(s') - u^\top x_\theta(s), \quad (29)$$

with $\bar{R}$ the expected reward. (Note that $\mathbb{E}\big[\delta^u_{t+1} x_t\big] = \Phi_{\bar\theta}^\top D_{d^\pi}\, \delta^u$.) The least-squares solution $w_0 \in \mathbb{R}^d$ (which certainly exists but may not be unique) satisfies

$$\langle \Phi_{\bar\theta} w,\ \delta^u - \Phi_{\bar\theta} w_0 \rangle_{D_{d^\pi}} = 0, \ \forall w \in \mathbb{R}^d \ \Rightarrow\ \langle w,\ \Phi_{\bar\theta}^\top D_{d^\pi} (\delta^u - \Phi_{\bar\theta} w_0) \rangle = 0, \ \forall w \in \mathbb{R}^d.$$

Now choose $w = \Phi_{\bar\theta}^\top D_{d^\pi} (\delta^u - \Phi_{\bar\theta} w_0)$. Then

$$\Phi_{\bar\theta}^\top D_{d^\pi} (\delta^u - \Phi_{\bar\theta} w_0) = 0 \ \Rightarrow\ \Phi_{\bar\theta}^\top D_{d^\pi} \Phi_{\bar\theta}\, w_0 = \Phi_{\bar\theta}^\top D_{d^\pi}\, \delta^u.$$

[End of proof of Claim 1]

Since $\Phi_{\bar\theta}^\top D_{d^\pi} \Phi_{\bar\theta}$ may be singular (i.e., not invertible), the above least-squares solution may not be unique, and hence the collection of asymptotically stable equilibria of the flow induced by the ODE (27) may not be a singleton for every $u$. We denote the set of asymptotically stable equilibria of the flow induced by the said ODE by $A_u$, where $\emptyset \neq A_u \subseteq W$.

Analysis of the slower timescale recursion: The slower timescale stochastic recursion of the GTD2 algorithm is the following:

$$u_{t+1} = \Gamma^U\Big(u_t + \beta_t \big(x_t - \gamma_{t+1} x_{t+1}\big)\big(w_t^\top x_t\big)\Big), \qquad u_t \in \mathbb{R}^d,\ u_0 \in U. \quad (30)$$

Note that since $\xi_t / \beta_t \to 0$, the stochastic recursion (20) runs on a faster timescale relative to the neural network stochastic recursion (10), and hence we continue to maintain the quasi-stationary condition $\bar\theta_t \equiv \bar\theta = (\theta, \bar w)^\top$. Now the above equation can be rearranged as

$$u_{t+1} = \Gamma^U\Big(u_t + \beta_t \big(\mathbb{E}[\Delta^{w_t}_{t+1}] + \mathbb{M}^3_{t+1} + \ell^3_t\big)\Big), \quad (31)$$

where $\Delta^{w_t}_{t+1} \doteq (x_t - \gamma_{t+1} x_{t+1})(w_t^\top x_t) = \big((x_t - \gamma_{t+1} x_{t+1})\, x_t^\top\big) w_t$, the noise term $\mathbb{M}^3_{t+1} \doteq \Delta^{w_t}_{t+1} - \mathbb{E}\big[\Delta^{w_t}_{t+1} \mid \mathcal{F}_t\big]$ and the bias $\ell^3_t \doteq \mathbb{E}\big[\Delta^{w_t}_{t+1} \mid \mathcal{F}_t\big] - \mathbb{E}\big[\Delta^{w_t}_{t+1}\big]$. Similar to Equation (12), we can rewrite the above recursion as

$$u_{t+1} = u_t + \beta_t \Big(\hat\Gamma^U_{u_t}\big(\mathbb{E}[\Delta^{w_t}_{t+1}]\big) + \hat\Gamma^U_{u_t}(\mathbb{M}^3_{t+1}) + \hat\Gamma^U_{u_t}(\ell^3_t) + o(\beta_t)\Big), \quad (32)$$

where $\hat\Gamma^U_{u_t}(\cdot)$ is the Frechet derivative (defined in Equation (8)) of the projection operator $\Gamma^U$. The above equation can be interpreted as a stochastic recursive inclusion:

$$u_{t+1} = u_t + \beta_t \Big(\hat\Gamma^U_{u_t}\big(\mathbb{E}[\Delta^{w_t}_{t+1}]\big) + \hat\Gamma^U_{u_t}(\mathbb{M}^3_{t+1}) + \hat\Gamma^U_{u_t}(\ell^3_t) + o(\beta_t)\Big), \quad \text{with } \hat\Gamma^U_{u_t}\big(\mathbb{E}[\Delta^{w_t}_{t+1}]\big) \in h^3(u_t), \quad (33)$$

where the set-valued map $h^3 : \mathbb{R}^d \to \{\text{subsets of } \mathbb{R}^d\}$ is defined as

$$h^3(u) \doteq \big\{\hat\Gamma^U_u\big(\mathbb{E}[\Delta^w_{t+1}]\big) \mid w \in A_u\big\}. \quad (34)$$

Indeed, $h^3(u) = \big\{\hat\Gamma^U_u(Bw) \mid w \in A_u\big\}$, where $B = \mathbb{E}\big[(x_t - \gamma_{t+1} x_{t+1})\, x_t^\top\big]$. It is easy to verify that $B = \Phi_{\bar\theta}^\top D_{d^\pi} (I - \gamma_{t+1} P^\pi) \Phi_{\bar\theta}$.

Here, one cannot directly apply the multi-timescale stochastic approximation results from (Borkar, 1997), since that paper assumes that the limit point of the faster timescale recursion is unique (please see Chapter 6 of (Borkar, 2008)).
But in our setting, the faster timescale recursion (23) has several limit points (note that the stable limit set $A_u$ is not a singleton). This is where our analysis differs from that of the seminal paper on the GTD2 algorithm, where it is assumed that both the matrices $\mathbb{E}\big[x_t x_t^\top\big]$ and $\mathbb{E}\big[(x_t - \gamma_{t+1} x_{t+1})\, x_t^\top\big]$ are non-singular. However, in our TTN setting, one cannot guarantee this condition, since the features are provided by a neural network, and it is hard to fabricate the neural network so that it generates a collection of features with the desired non-singularity properties. In order to analyze the limiting behaviour of the GTD2 algorithm under the relaxed singularity setting, one has to view the stochastic recursion (30) as a stochastic recursive inclusion (Benaïm et al., 2005) and apply the recent results from (Ramaswamy and Bhatnagar, 2016), which analyze the asymptotic behaviour of general multi-timescale stochastic recursive inclusions. A few observations are in order:

E1: For each $u \in U$, $h^3(u)$ is a singleton. This follows from the definition of $h^3$ and Claim 1 above, where we established that each $w \in A_u$ is a least-squares solution to the linear system of equations $\Phi_{\bar\theta} w = \delta^u$. It further implies that $h^3$ is a Marchaud map as well.

E2: $\sup_{t \in \mathbb{N}} (\|w_t\| + \|u_t\|) < \infty$ a.s. This follows since $W$ and $U$ are bounded sets.

E3: $\{\hat\Gamma^U_{u_t}(\mathbb{M}^3_{t+1})\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the filtration $\{\mathcal{F}_{t+1}\}_{t \in \mathbb{N}}$. This follows directly since $\{\mathbb{M}^3_{t+1}\}_{t \in \mathbb{N}}$ is a martingale-difference noise sequence with respect to the same filtration.

E4: $\{\hat\Gamma^U_{u_t}(\mathbb{M}^3_{t+1})\}_{t \in \mathbb{N}}$ is square-integrable and $\exists K_3 \in (0, \infty)$ such that

$$\mathbb{E}\Big[\big\|\hat\Gamma^U_{u_t}(\mathbb{M}^3_{t+1})\big\|^2 \,\Big|\, \mathcal{F}_t\Big] \le K_3 \big(1 + \|u_t\|^2 + \|w_t\|^2\big) \ \text{a.s.}, \quad t \in \mathbb{N}. \quad (35)$$

This follows directly from the finiteness of the underlying Markov chain and from the assumption that the boundary $\partial U$ is smooth.

E5: $\hat\Gamma^U_{u_t}(\ell^3_t) \to 0$ as $t \to \infty$ a.s. The proof is similar to C3. This implies that the bias is asymptotically irrelevant.

E6: For each $u \in U$, the set $A_u$ is a globally attracting set of the ODE (27) and is also Lyapunov stable. Further, there exists $K_4 \in (0, \infty)$ such that $\sup_{w \in A_u} \|w\| \le K_4 (1 + \|u\|)$. This follows since $A_u \subseteq W$ and $W$ is bounded.

E7: The set-valued map $q : U \to \{\text{subsets of } \mathbb{R}^d\}$ given by $q(u) = A_u$ is upper-semicontinuous. Consider convergent sequences $\{u_n\}_{n \in \mathbb{N}} \to u$ and $\{w_n\}_{n \in \mathbb{N}} \to w$ with $u_n \in U$ and $w_n \in q(u_n) = A_{u_n}$. Note that $w \in W$ and $u \in U$, since $W$ and $U$ are compact. Also, $\Phi_{\bar\theta}^\top D_{d^\pi} \Phi_{\bar\theta}\, w_n = \Phi_{\bar\theta}^\top D_{d^\pi}\, \delta^{u_n}$ (from Claim 1). Taking limits on both sides, we get

$$\lim_{n \to \infty} \Phi_{\bar\theta}^\top D_{d^\pi} \Phi_{\bar\theta}\, w_n = \lim_{n \to \infty} \Phi_{\bar\theta}^\top D_{d^\pi}\, \delta^{u_n} \ \Rightarrow\ \Phi_{\bar\theta}^\top D_{d^\pi} \Phi_{\bar\theta}\, w = \Phi_{\bar\theta}^\top D_{d^\pi}\, \delta^{u}.$$

This implies that $w \in A_u = q(u)$. The claim thus follows.

Thus we have established all the conditions demanded by Theorem 3.10 of (Ramaswamy and Bhatnagar, 2016) to characterize the limiting behaviour of the stochastic recursive inclusion (33). By appealing to the said theorem, we obtain the following result on the asymptotic behaviour of the GTD2 algorithm:

$$\Big\{(u, w)^\top \,\Big|\, \liminf_{t \to \infty} \big\|(u, w)^\top - (u_t, w_t)^\top\big\| = 0\Big\} \subseteq \bigcup_{u \in A^*} \big\{(u, w)^\top \mid w \in A_u\big\}, \quad (36)$$

where $A^*$ is the set of asymptotically stable equilibria of the following ODE:

$$\frac{d}{dt} u(t) = h^3(u(t)), \qquad u(0) \in \mathring{U},\ t \in \mathbb{R}_+. \quad (37)$$

One can obtain similar results for projected TDC. We now state our main result:

Theorem 2. Let $\Theta \subset \mathbb{R}^{m+d}$ be a compact, convex subset with smooth boundary. Let $\Gamma^\Theta$ be Frechet differentiable. Further, let $\hat\Gamma^\Theta_{\bar\theta}\big(-\tfrac{1}{2}\nabla L_{\text{slow}}\big)(\bar\theta)$ be Lipschitz continuous. Also, let Assumptions 1-3 hold.
Let $K$ be the set of asymptotically stable equilibria of the following ODE contained inside $\Theta$:
$$\frac{d}{dt}\bar{\theta}(t) = \widehat{\Gamma}^{\Theta}_{\bar{\theta}(t)}\left(-\frac{1}{2}\nabla_{\bar{\theta}}L^{\mathrm{slow}}\right)(\bar{\theta}(t)), \quad \bar{\theta}(0) \in \mathring{\Theta},\; t \in \mathbb{R}_{+}.$$
Then the stochastic sequence $\{\bar{\theta}_{t}\}_{t \in \mathbb{N}}$ generated by the TTN converges almost surely to $K$ (sample path dependent). Further,
TD(λ) Convergence: Under the additional Assumption 4-TD(λ), we obtain the following result: for any $\lambda \in [0, 1]$, the stochastic sequence $\{w_{t}\}_{t \in \mathbb{N}}$ generated by the TD(λ) algorithm (Algorithm 2) within the TTN setting converges almost surely to the limit $w^{*}$, where $w^{*}$ satisfies
$$\Pi_{\bar{\theta}^{*}}T^{(\lambda)}\left(\Phi_{\bar{\theta}^{*}}w^{*}\right) = \Phi_{\bar{\theta}^{*}}w^{*}, \qquad (38)$$
with $T^{(\lambda)}$ defined in Lemma 2 and $\bar{\theta}^{*} \in K$ (sample path dependent).
GTD2 Convergence: Let $W, U \subset \mathbb{R}^{d}$ be compact, convex subsets with smooth boundaries. Let Assumption 4-GTD2 hold. Let $\Gamma^{W}$ and $\Gamma^{U}$ be Frechet differentiable. Then the stochastic sequences $\{w_{t}\}_{t \in \mathbb{N}}$ and $\{u_{t}\}_{t \in \mathbb{N}}$ generated by the GTD2 algorithm (Algorithm 3) within the TTN setting satisfy
$$\left\{(u, w)^{\top} \,\middle|\, \liminf_{t \to \infty}\left\|(u, w)^{\top} - (u_{t}, w_{t})^{\top}\right\| = 0\right\} \subseteq \bigcup_{u \in A^{*}}\left\{(u, w)^{\top} \,\middle|\, w \in A_{u}\right\},$$
where $A^{*}$ is the set of asymptotically stable equilibria of the following ODE:
$$\frac{d}{dt}u(t) = \widehat{\Gamma}^{U}_{u(t)}\left(\Phi_{\bar{\theta}^{*}}^{\top}D_{d^{\pi}}\left(\mathbb{I} - \gamma_{t+1}P^{\pi}\right)\Phi_{\bar{\theta}^{*}}u(t)\right), \quad u(0) \in \mathring{U},\; t \in \mathbb{R}_{+},$$
and $A_{u}$ is the set of asymptotically stable equilibria of the following ODE:
$$\frac{d}{dt}w(t) = \widehat{\Gamma}^{W}_{w(t)}\left(\Phi_{\bar{\theta}^{*}}^{\top}D_{d^{\pi}}\delta^{u} - \Phi_{\bar{\theta}^{*}}^{\top}D_{d^{\pi}}\Phi_{\bar{\theta}^{*}}w(t)\right), \quad w(0) \in \mathring{W},\; t \in \mathbb{R}_{+},$$
with $\bar{\theta}^{*} \in K$ (sample path dependent) and $\delta^{u}$ defined in Eq. (29).
C ADDITIONAL EXPERIMENTS
C.1 NONIMAGE CATCHER
C.2 PUDDLEWORLD
C.3 IMAGE CATCHER
We also ran policy evaluation experiments on image-based Catcher with 2 stacked 64x64 frames as input. The policy evaluated was the same as the one used in the non-image setting. We report plots analogous to those of the non-image-based Catcher experiments.
C.4 CARTPOLE
In the classic Cartpole environment, the agent has to balance a pole on a cart. The state is given by a vector of 4 numbers (cart position, cart velocity, pole angle, pole velocity). The two available actions are applying a force towards the left or the right. Rewards are +1 at every timestep and an episode terminates once the pole dips below a certain angle or the cart moves too far from the center. We use the OpenAI gym implementation (Brockman et al., 2016). The policy to be evaluated consists of applying force in the direction the pole is moving with probability 0.9 (stabilizing the pole) or applying force in the direction of the cart’s velocity with probability 0.1. We inject some stochasticity so that the resulting policy does not perform overly well, which would lead to an uninteresting value function.
C.5 ACROBOT
In the classic Acrobot domain, the agent, consisting of two links, has to swing up past a certain height. The agent observes a 4-dimensional state consisting of the angles and the angular velocities of each link. The available actions are three possible levels of torque to be applied to the joint. The evaluated policy is obtained by training an agent with true-online Sarsa on a tile-coding representation and then fixing its learned epsilon-greedy policy.
C.6 PUCK WORLD
In Puck World (Tasfi, 2016), the agent has to move in a two-dimensional box towards a good puck while staying away from a bad puck. The 8-dimensional state consists of (player x location, player y location, player x velocity, player y velocity, good puck x location, good puck y location, bad puck x location, bad puck y location). Each action increases the agent’s velocity in one of the four cardinal directions, apart from a “None” action which does nothing.
The reward is the negative distance to the good puck, plus a penalty of $-10 + x$ if the agent is within a certain radius of the bad puck, where $x \in [-2, 0]$ depends on the distance to the bad puck (the reward is slightly modified from the original game to make the value function more interesting). The policy moves the agent towards the good puck while keeping a soft cap on the agent’s velocity. In more detail, an action is chosen by the following procedure. First, we determine the eligible actions. The None action is always eligible. The actions which move the agent towards the good puck are also eligible; for example, if the good puck is northeast of the agent, the North and East actions are eligible. If the agent’s velocity in a certain direction is above 30, then the action for that direction is no longer eligible. Finally, the agent picks uniformly at random from all eligible actions.
C.7 OFF-POLICY CATCHER
We run a preliminary experiment to check if TTN can have an advantage in the off-policy setting. The target policy is the same as the one used for the other Catcher experiments (described in Appendix D). The behaviour policy is slightly different: if the apple is within 20 units (the target policy uses 25 units), then the agent takes the action in the direction of the apple with probability 0.7 and one of the other two actions with probability 0.15 each. If the apple is not within range, then the agent takes the None action 10% of the time and one of the other two with equal probability. This combination of behaviour and target policies results in importance sampling ratios in the range of 0 to 8.7, moderately large values. We try TTN with three off-policy algorithms (TD, TDC and LSTD) and compare to off-policy Nonlinear TD. For TTN, the features are learned by optimizing the MSTDE on the behaviour policy while the values are learned off-policy. The main difference between TTN and Nonlinear TD is that Nonlinear TD performs off-policy updates to the entire network while TTN only changes the linear part.
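As a concrete illustration of the behaviour policy just described, here is a minimal Python sketch (our own illustration; the function, action names, and 1-D interface are assumptions, not from the TTN codebase):

```python
import random

def behaviour_policy(agent_x, apple_x):
    """Sketch of the off-policy Catcher behaviour policy described above."""
    if abs(apple_x - agent_x) <= 20:  # apple "within range" (the target policy uses 25)
        towards = "left" if apple_x < agent_x else "right"
        others = [a for a in ("left", "right", "none") if a != towards]
        # 0.7 towards the apple, 0.15 each for the two remaining actions
        return random.choices([towards] + others, weights=[0.7, 0.15, 0.15])[0]
    # apple out of range: "none" 10% of the time, the rest split equally
    return random.choices(["none", "left", "right"], weights=[0.1, 0.45, 0.45])[0]
```

With these probabilities, dividing the target-policy action probabilities by the behaviour-policy ones yields the importance sampling ratios reported above (between 0 and 8.7).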
1. What is the main contribution of the paper regarding non-linear online and on-policy value function approximation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical justification and experimental validation?
3. Do you have any concerns or questions regarding the paper's content, such as the projection used in the theoretical analysis or the choice of baselines in the experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review The paper introduces an algorithm (TTN) for non-linear online and on-policy value function approximation. The main novelty of the paper is to view non-linear value estimation as two separate components: one of representation learning from a non-linear mapping and one of linear value function estimation. The soundness of the approach stems from the rate at which each component is updated. The authors argue that if the non-linear component is updated at a slower rate than the linear component, the former can be viewed as fixed in the limit and what remains is a linear value function estimation problem for which several sound algorithms exist. TTN is evaluated on 4 domains and compared to several other value estimation methods, as well as to DQN on a control problem with two variations on the task's state space. I'll start off the review by stating that I find the idea and theoretical justification of separating the non-linear and linear parts of value function estimation to be quite interesting, potentially impacting RL at large. Indeed, this view promises to reconcile the latest developments in deep RL with the long-lasting work on RL with linear function approximators. However, there are a few unclear aspects that do not allow one to be fully convinced that this paper lives up to the aforementioned promise. - For the theoretical contribution: the authors claim that the main challenge was to deal with the potentially dependent features outputted by the neural network. It is dealt with by using a projection that maps the linear parameters of the value function to a compact subset of the parameter space. Apart from the appendix, there is no mention of this projection in the paper: how is this compact subset (which must include the optimal parameter) defined, and is this projection merely a theoretical tool or was it necessary to implement it in practice? There is a projection for the neural net weights too, but I can see how for these it might not be necessary in practice. However, for the linear weights, as their computation potentially involves inverting ill-conditioned matrices, they can indeed blow up relatively fast. - I found the experimental validation to be quite rich but not done in a systematic enough manner. For instance, the experiment "utility of optimizing the MSPBE" demonstrates quite nicely the importance of each component but is only performed on a single task. As the theoretical analysis does not say anything about the improvements the representation learning can have on the linear value estimation, nor whether the loss used for learning the representation effectively yields better features for the MSPBE minimization, this experiment is rather important and should have been performed on more than a single domain. Secondly, I do not find the chosen baselines to be sufficiently competitive. The authors state in Sec. 2 that nonlinear-GTD has not seen widespread use, but having this algorithm as the main competitor does not provide strong evidence that TTN will know a better fate. In the abstract, it is implied that outside of nonlinear-GTD, value function approximation methods are not sound. In approximate policy iteration algorithms such as DDPG or TRPO, there is a need to perform value estimation. It is done essentially by a fitted-Q iteration procedure, which is sound. Why wasn't TTN compared to these methods? If it is because they are not online, why is being online important in the experiments of the paper?
Showing that TTN is competitive with currently widespread methods for value estimation would have been more convincing than the comparison with nonlinear-GTD. Thirdly, for the sake of reproducibility, as LSTD seems to be the method of choice for learning the linear part, it would have been adequate to provide an algorithm box for this version as is done for GTD2/TDC. LSTD is essentially a batch algorithm and there could be many ways to turn it into an online algorithm. With which algorithm were the results in the experimental section obtained? Finally, on the control task, the authors add several modifications to their algorithm, resulting in an algorithm that is very close to that of Levine et al., 2017. Why was the latter not a baseline for this experiment, especially since it was included in other experiments?
ICLR
Title Object Tracking by Hierarchical Part-Whole Attention Abstract We show in this paper that hierarchical representations of objects can provide an informative and low-noise proxy to associate objects of interest in multi-object tracking. This is aligned with our intuition that we usually only need to compare a small region of the body of target objects to distinguish them from other objects. We build the hierarchical representation at the levels of (1) target body parts, (2) the whole target body, and (3) the union area of the target and other objects that overlap with it. Furthermore, with the spatio-temporal attention mechanism of the transformer, we can solve the tracking in a global fashion while keeping the process online. We design our method by combining this representation with the transformer and name it Hierarchical Part-Whole Attention, or HiPWA for short. Experiments on multiple datasets demonstrate its effectiveness. Moreover, previous methods mostly focus on leveraging transformers to exploit long temporal context during association, which requires heavy computation resources. HiPWA instead focuses on a more informative representation of objects on every single frame, so it is more robust to the length of the temporal context and more computationally economical. 1 INTRODUCTION How to represent the visual existence of an object in a discriminative fashion is a core question of computer vision. In this paper, we propose a hierarchical part-whole representation to represent the visual existence of objects. We adopt multi-object tracking as the application area since distinguishable appearance features are critical to avoid mismatches among target objects when tracking across frames. To gather and process the visual information from different levels, we combine the hierarchical part-whole representation with the attention mechanism from transformers to summarize distinguishable and discriminative visual representations for objects of interest. In the task of multi-object tracking, given a bounding box to localize objects of interest, how should we recognize the major object within the box and distinguish it from the background and other objects, especially those that are also partially present in the box? We believe the visual specificity of an object comes from three perspectives: the compositional, the semantic, and the contextual. The compositional refers to the salient and unique visual regions on an object, such as a hat on a pedestrian whose color is different from all others in the same image. With a salient visual composition attached to an object, we can track it across frames even without seeing its full body. The semantic visual information is the one commonly adopted in modern computer vision, such as a tight bounding box or an instance segmentation mask. It defines the occupancy area of the object and the bond between its visual existence and its semantic concept. Finally, contextual visual information describes the surroundings of an object. It helps to distinguish an object via contrast. For example, the bounding box might contain pixels from the background and secondary objects. However, a tight bounding box offers a strong underlying prior when combined with visual context: an object whose parts span across the boundary of the bounding box should not be the major object of this bounding box. Being a secondary object or not an object of interest, it should be regarded as noise when we generate a distinguishable visual representation for the major subject in the bounding box.
The analysis above shows that each level has its value in representing an object discriminatively. Motivated by this insight, we propose to represent an object by a three-level hierarchy: body parts, full body, and the union area including overlapping objects. We summarize it as a “Part-Body-Union” hierarchy. With the hierarchy constructed, an ideal path to solving the target association in multi-object tracking is to leverage the salient information within the body area and avoid mismatches by eliminating the noise revealed by the contextual contrast. Without requiring more fine-grained data annotation, we propose to use transformers to process the hierarchical representation, as the attention mechanism can discover important visual information. So, by combining the hierarchical visual representation and attention-based feature fusion, we finally propose our method as Hierarchical Part-Whole Attention, or HiPWA for short. In this work, we build a baseline model following this design and demonstrate its effectiveness in solving multi-object tracking problems. Through experiments on multiple multi-object tracking datasets, the proposed method achieves comparable or even better performance than state-of-the-art transformer-based methods with a more lightweight implementation and better time efficiency during training and inference. 2 RELATED WORKS 2.1 REPRESENTING OBJECTS BY PARTS The most commonly used object representation for multi-object tracking is the bounding box. However, the bounding box is noisy, containing background pixels and pixels from secondary objects. On the other hand, our life experience demonstrates that, in many scenarios, it is not necessary to observe the full body of an object to specify it visually, and tracking targets by their distinguishable parts is usually more efficient. Therefore, researchers have also been studying object detection and tracking with more fine-grained representations. A common way is to use certain pre-defined parts on target bodies, such as only the human head (Sundararaman et al., 2021; Shao et al., 2018), human joints (Andriluka et al., 2018; Xiu et al., 2018), or even every pixel (Voigtlaender et al., 2019; Weber et al., 2021). However, all these choices require more fine-grained data annotation beyond bounding boxes and more fine-grained perception modules beyond normally available object detectors. In contrast, the part-whole hierarchy we construct requires no additional annotations, and we still solve tracking tasks at the granularity of bounding boxes. The idea of modeling objects at different levels is inspired by the hierarchical modeling of the human body (Marr, 2010) by David Marr, where he explains how to construct the visual structure of an object from the primal sketch to the 2.5D sketch and further to a 3D representation. His classic three-level view of visual information processing captures this at a higher level: the computational, the algorithmic, and the implementational. A similar theory is also introduced by Fodor & Pylyshyn (1988) as the semantic, the syntactic, and the physical. Compared to these cognitive theories aiming to model general visual representation, the three perspectives we propose to recognize an object and distinguish it from others (the compositional, the semantic, and the contextual) only apply to the specific problem of generating an effective visual descriptor to represent the objects of interest.
2.2 TRANSFORMER-BASED MULTI-OBJECT TRACKING The transformer (Vaswani et al., 2017) was originally proposed for natural language processing. It shows a powerful capacity for information representation and processing. Later, DETR (Carion et al., 2020) introduced the transformer to the area of visual perception for object detection. It models object detection as solving a bipartite matching problem. Given that the matching-based strategy of DETR is quite similar to the target matching in the task of multi-object tracking, it is intuitive to further migrate the transformer to this area. TransTrack (Sun et al., 2020) is the first work using the transformer to solve the MOT problem, but it does not introduce a transformer-based association strategy. A concurrent work, TrackFormer (Meinhardt et al., 2021), takes a further step by using the cross-attention in the transformer decoder in the association stage via query passing. On the other hand, VisTR (Wang et al., 2021c) proposes a novel global association scheme upon the transformer where a video clip of multiple frames is forwarded into the transformer at the same time to associate objects within the clip. More recently, many works (Zhou et al., 2022; Zeng et al., 2021) follow the global association scheme in either training or inference and achieve good performance. A key to their success is processing information over a long temporal period, which can hardly be handled without the transformer. GTR (Zhou et al., 2022) builds a baseline model that uses only appearance in associating objects and removes some secondary modules such as positional encoding and the learnable object query. However, a downside of processing multiple frames as a batch with the transformer is the high demand for computation resources. It has become common practice to train the model on at least 4xV100 GPUs (Zhou et al., 2022; Sun et al., 2020; Zeng et al., 2021) or even 8xA100 GPUs (Cai et al., 2022). These methods usually suffer a significant performance drop if only limited computation resources are available. This is because they usually improve association performance by taking advantage of a long temporal window and gathering more visual context within it. In this work, we focus on building a more computation- and memory-efficient visual representation for objects from the scope of a single frame instead. This scheme is flexible to integrate with transformers and more robust to short time windows during object association. 3 METHOD In this section, we introduce the method we propose to leverage a hierarchical part-whole visual representation with the attention mechanism from the transformer for multi-object tracking. In Section 3.1, we describe the overall structure of our method using global association. Then, in Section 3.2, we dive into the details of our proposed part-whole attention module. Finally, we discuss the details of training and inference with HiPWA in Section 3.3. 3.1 GLOBAL ASSOCIATION Before the transformer was introduced into this area, people usually solved multi-object tracking in a frame-by-frame fashion where the association is performed on only two frames. Recently, the transformer has shown an advantage in gathering and processing information from multiple time steps in parallel. To leverage this advantage, previous methods (Wang et al., 2021c; Zhou et al., 2022) propose to perform association over a video clip instead of just two frames. Therefore, the spatio-temporal attention capacity of the transformer leads to a new global association fashion.
We follow this scheme in our proposed method. The overall pipeline of HiPWA is shown in the left-hand half of Figure 1. We now explain its three stages. Detection and Feature Extraction. Given a video clip of $T$ frames, i.e., $\mathcal{T} = \{t, t+1, \ldots, t+T\}$, we have the corresponding images $\mathcal{I} = \{I_t, I_{t+1}, \ldots, I_{t+T}\}$. Given a detector model, we can derive the detections of the object category of interest on all frames in parallel, denoted $\mathcal{O} = \{O^{t_1}_1, \ldots, O^{t_N}_N\}$, where $N$ is the number of detections on the $T$ frames and $t_i \in \mathcal{T}$ ($1 \leq i \leq N$) is the time step at which the $i$-th detection, i.e., $O^{t_i}_i$, is detected. Then, we generate the representations of each detected object and denote them $\mathcal{F} = \{F_1, F_2, \ldots, F_N\}$. The most commonly adopted solution is to use the backbone output on the object area as the representation features, while we adopt our proposed hierarchical part-whole representation instead, whose details are introduced below. Token Generation by Hierarchical Part-Whole Attention. After being projected into vectors, the hierarchical representations of detections become tokens $\mathcal{T}^{det} = \{\mathrm{Tk}^{det}_1, \mathrm{Tk}^{det}_2, \ldots, \mathrm{Tk}^{det}_N\}$, which are also termed “object queries” in previous works (Sun et al., 2020; Meinhardt et al., 2021). Concatenating the tokens makes $Q^{det} \in \mathbb{R}^{N \times D}$, where $D$ is the feature dimension. If we aim to associate the new-coming detections with existing trajectories, we also need tokens to represent the existing $M$ trajectories, i.e., $\mathcal{T}^{traj} = \{\mathrm{Tk}^{traj}_1, \mathrm{Tk}^{traj}_2, \ldots, \mathrm{Tk}^{traj}_M\}$. The transformer has shown good power to generate more discriminative feature tokens for trajectories by iterative query passing (Zeng et al., 2021) or long-time feature buffering (Cai et al., 2022). But to keep our method simple, we directly project the hierarchical representation of objects on existing trajectories to represent the trajectories. Given a historical horizon $H$ to backtrack the objects on the previous time steps of a trajectory, we represent a trajectory $\mathrm{Tk}^{traj}_j$ with the “track query” $Q^{traj}_j \in \mathbb{R}^{H \times D}$. The track query is the combination of the feature tokens of detections within the historical horizon on the corresponding trajectory. Global Association. By cross-attention, we can get the association score between the set of detections and the trajectory $\mathrm{Tk}^{traj}_j$ as $S(Q^{traj}_j, Q^{det}) \in \mathbb{R}^{H \times N}$. In practice, because we aim to associate all $M$ trajectories with all $N$ detections, we perform the cross-attention on all object queries and track queries at the same time, namely $S(Q^{traj}, Q^{det}) \in \mathbb{R}^{HM \times N}$. By averaging the scores over the $H$ frames selected from the historical horizon, we finally get the association score between detections and trajectories as $S \in \mathbb{R}^{M \times N}$. Then, we need to make sure that a trajectory will never be associated with more than one object from the same frame. We normalize the association scores between a trajectory and objects from the same time step by softmax, so the normalized association score between the $j$-th trajectory and the $i$-th detection is
$$P(M^{asso}_{j,i} = 1 \mid Q^{det}, Q^{traj}) = \frac{\exp(S_{j,i})}{\sum_{k \in \{1, 2, \ldots, N\}} \mathbb{1}[t_k = t_i]\, \exp(S_{j,k})}, \qquad (1)$$
where the binary indicator function $\mathbb{1}[t_k = t_i]$ indicates whether the $i$-th detection and the $k$-th detection are on the same time step. $M^{asso} \in \mathbb{R}^{(M+1) \times N}$ is the final global association matrix. Its dimension is $(M+1) \times N$ because each detection can be associated with an “empty trajectory” to start a new track in practice.
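To illustrate the per-timestep normalization of Eq. 1, here is a minimal PyTorch-style sketch (our own illustration, not the authors' released code; tensor names are assumptions):

```python
import torch

def association_probs(S, det_times):
    # S:         (M, N) association scores between M trajectories and N detections,
    #            already averaged over the H historical frames.
    # det_times: (N,) integer time step t_i of each detection.
    # Returns an (M, N) matrix where scores are softmax-normalized only across
    # detections that share the same time step, as in Eq. 1.
    S = S - S.max(dim=1, keepdim=True).values  # per-row shift for numerical stability
    same_frame = (det_times.unsqueeze(0) == det_times.unsqueeze(1)).float()  # (N, N) mask
    expS = S.exp()                             # (M, N)
    denom = expS @ same_frame                  # (M, N): sum of exp over same-frame detections
    return expS / denom
```

The per-row shift does not change the result, since a constant subtracted from a row cancels within each same-frame group of the denominator.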
The query of the “empty trajectory” is represented by a query randomly drawn from previously unassociated tokens during training. Also, after the association, unassociated trajectories are considered absent on the corresponding frames. In this fashion, we can train over a large set of detections and trajectories in parallel and also conduct inference in an online manner by setting $\mathcal{O}$ to be the set of detections only from the new-coming frame. 3.2 HIERARCHICAL PART-WHOLE ATTENTION Finally, we come to the details of constructing hierarchical part-whole visual representations. We name this process hierarchical part-whole attention as we use the attention mechanism to gather and process information from different levels in the hierarchy, which is illustrated in the right-hand half of Figure 1. We design this representation because we believe there are three perspectives from which to describe the existence of an object and its identification over other objects: the compositional, the semantic, and the contextual. Correspondingly, the body part patches, the full object body, and the union of the occupancy of interacting objects provide the knowledge from these three perspectives respectively. The insight behind this module is what we would most like to deliver in this work. Hierarchy Construction. We represent a detected object by a quintuple, i.e., $O = [x, y, w, h, c]$, where the first four values describe its bounding box and $c$ is the detection confidence. So its body area is $B = [x, y, x+w, y+h]$. Next, we divide the body into multiple sub-regions (parts). By default, similar to what ViT (Dosovitskiy et al., 2020) does on images, we divide the bounding box into $2 \times 2$ bins, making a set of body parts $\mathcal{P} = \{P_1, P_2, P_3, P_4\}$. On the other hand, from a global scope, there are other targets interacting with $O$ which are highly likely to be mismatched with $O$ in the association stage. We crop the union area enclosing $O$ and all other targets having overlap with it, and denote this union area $U$. We have now derived the part-whole hierarchy $\{\mathcal{P}, B, U\}$ in a very straightforward way. Feature Fusion. Given the part-whole hierarchy, we have to fuse the features from the different levels to get the final feature tokens for association. With a feature encoder, we can extract the CNN features as $F_{\mathcal{P}} \in \mathbb{R}^{4C \times H \times W}$, $F_B \in \mathbb{R}^{C \times H \times W}$ and $F_U \in \mathbb{R}^{C \times H \times W}$. We simply concatenate the features from the first two levels as $F_{\mathcal{P}+B} \in \mathbb{R}^{5C \times H \times W}$. Then, by a two-layer projection network, we obtain projected features $V_{\mathcal{P}+B} \in \mathbb{R}^{5 \times D}$. We also apply the projection to the union area features and get $V_U \in \mathbb{R}^D$. Finally, we perform cross-attention between $V_{\mathcal{P}+B}$ and $V_U$ and forward the output to an MLP network to get tokens of shape $\mathbb{R}^{5 \times D}$. Before being forwarded to the global association stage, the tokens are projected to the uniform dimension $D$. 3.3 TRAINING AND INFERENCE The method we implement is a baseline model without complicated designs on “queries”. We simply use the hierarchical part-whole features of detected objects as the representations of both detections and trajectories. During training, we can associate between detections in the sampled video clips or between detections and existing trajectories. These two association schemes are thus implemented identically and share all model modules. During inference, to keep the process online, we only perform association between detections from the new-coming frame and existing trajectories.
We realize this by iterating a sliding window with a stride of one frame. Training. We train the association module by maximizing the likelihood of associating detections belonging to the same ground-truth trajectory as expressed in Eq. 1. But Eq. 1 applies to a single time step $t_i$ only. To speed up training, we calculate the association score on all $T$ frames of the sampled video clip at the same time and maximize the likelihood of the association aligned with the ground truths globally in the time window. The objective thus turns into
$$\prod_{q=t}^{t+T} P(M^{asso}_{j,\tau^j_q} = 1 \mid Q^{det}, Q^{traj}), \qquad (2)$$
where $\tau^j_q$ is the ground-truth index of the detection which should be associated with the $j$-th trajectory at time step $q$. Therefore, by traversing the association of all trajectories, the training objective becomes the negative log-likelihood loss
$$L_{asso} = -\sum_{j=1}^{M} \sum_{q=t}^{t+T} \log P(M^{asso}_{j,\tau^j_q} = 1 \mid Q^{det}, Q^{traj}). \qquad (3)$$
On the other hand, trajectories can also be absent on some time steps because of occlusion or target disappearance. So, similar to the practice of DETR (Carion et al., 2020) for detection and GTR (Zhou et al., 2022) for tracking, Eq. 3 includes the situation of associating a trajectory with “empty”. Moreover, the main reason why mismatches happen is that the features of objects with different identities are indiscriminative. Therefore, to encourage the representations of objects with different identities to be distinguishable, we design a feature discrimination loss in the form of a triplet loss:
$$L_{feat} = \max\left(0,\; \min_{u=1}^{N_P} \left\|\mathrm{Att}(f(F_{P_u}), f(F_B)) - f(F_B)\right\|^2 - \left\|\mathrm{Att}(f(F_B), f(F^{bg}_U)) - f(F_B)\right\|^2 + \alpha\right), \qquad (4)$$
where $f(\cdot)$ is the shared projection layers that project CNN features to feature vectors and $N_P$ is the number of part patches ($N_P = 4$ in our default setting). $\mathrm{Att}(\cdot, \cdot)$ is the cross-attention operation that generates attended features. $\alpha$ is the margin controlling the distance between positive and negative pairs. $F_B$ and $F_{P_u}$ ($1 \leq u \leq N_P$) are the extracted features of the body area and the part sub-regions as explained above. $F^{bg}_U$ is the CNN features of the background area in the union box. We obtain the background features by setting the pixels of $B$ within the area of $U$ to 0 and forwarding the processed patch of the union area into the feature encoder. We design Eq. 4 to encourage the projection network to pay more attention to the salient area on the body of target objects and less attention to the background area when processing the hierarchical part-whole representations. It also encourages the features of the background area in the union box, which probably belongs to another target, to be distinguishable from the body features. This can be expected to decrease the chance of mismatches between neighboring objects. Finally, the training objective is
$$L = L_{asso} + L_{feat} + L_{det}, \qquad (5)$$
where $L_{det}$ is an optional detection loss. In our default implementation, we finetune the detector at the same time as training the association modules. Inference. We adopt the traditional sliding-window style to realize online inference. With a window size $T = 24$ and stride 1, we start from the first frame of the input video. On the first frame, every detection initializes a new trajectory. In each time window, we generate trajectories from the detections within it. Then we use the association score in Eq. 1 to associate these trajectories with existing trajectories outside this time window.
By averaging the detection-trajectory association scores along the detections of a trajectory, we get trajectory-trajectory association scores, whose negative values serve as the entries of the cost matrix for the association assignment. We adopt Hungarian matching to ensure a one-to-one mapping. A pair of trajectories is eligible to be associated only if its association score is higher than a threshold $\beta = 0.3$. All unassociated detections on the new-coming frames start new tracks. 4 EXPERIMENTS 4.1 EXPERIMENT SETUPS Datasets. We conduct quantitative experiments on multiple multi-object tracking datasets, including MOT17 (Milan et al., 2016), MOT20 (Dendorfer et al., 2020) and DanceTrack (Sun et al., 2021). We focus on pedestrian tracking in this paper, so pedestrians are the only category of objects of interest on all datasets. MOT17 and MOT20 are classic and popular datasets in the area of pedestrian tracking, but their scales are relatively small and they have no official validation sets. DanceTrack, on the contrary, is a recently proposed dataset that is of a much larger scale and provides an official validation set with no overlap with the training set. DanceTrack focuses on scenarios where targets are in the foreground, so detection is not considered the bottleneck as it is on MOT20. DanceTrack mainly contains videos where targets have heavy occlusion, complex motion patterns, and similar appearances, so it provides a good platform to study the robustness of tracking algorithms. Evaluation Metrics. The popular CLEAR evaluation protocol (Bernardin & Stiefelhagen, 2008) is based on single-frame-wise matching between the ground truth and predictions. This makes the metrics emphasize single-frame detection quality rather than cross-frame association performance. MOTA, the main metric of the CLEAR protocol, is also biased towards detection quality. To provide a more accurate sense of association performance in tracking, we mainly adopt the more recent HOTA (Luiten et al., 2021) metric set, where the metrics are calculated by video-level association between ground truth and predictions. In this set of metrics, AssA emphasizes association performance and DetA stresses detection quality. HOTA is the main metric, taking both detection and association quality into consideration. Implementation. We follow the common practice (Sun et al., 2020; Zeng et al., 2021; Cai et al., 2022) of using ResNet-50 (He et al., 2016) as the backbone network, pretrained on the Crowdhuman (Shao et al., 2018) dataset first. Though an advanced detector (Zhang et al., 2021a) has been demonstrated to be a key to boosting tracking performance, we want our contribution to come more from the improvement of the association stage. Therefore, on MOT17, we follow the practice of another transformer-based global association tracking method, GTR (Zhou et al., 2022), and use the classic CenterNet (Zhou et al., 2019; 2020) as the detector; all training details are aligned with it to make fair comparisons with this close baseline method. The CenterNet detector is pretrained together with the backbone on Crowdhuman to align with the common practice on this dataset. For the fine-tuning of the association modules, we use a 1:1 mixture of MOT17 and Crowdhuman for MOT17. We fine-tune with only the MOT20 training set for evaluation on MOT20. For DanceTrack, we use its official training set as the only training set during finetuning.
The image size is set to 1280 × 1280 during training, and the longer edge is resized to 1560 during testing. During finetuning, the detector head is also finetuned as mentioned above. The training iterations are set to 20k on MOT17/MOT20 and 80k on DanceTrack. We use BiFPN (Tan et al., 2020) for feature upsampling. For the implementation of the transformer, we follow the practice of Zhou et al. (2022) to use a stack of two layers of “Linear + ReLU” as the projection layers and one-layer encoders and decoders. We use the AdamW (Loshchilov & Hutter, 2017) optimizer for training with a base learning rate of 5e-5. The length of the video clip is T = 8 for training and T = 24 for inference in a sliding window. We use 4 × V100 GPUs as the default training device following previous practice (Zhou et al., 2022; Zeng et al., 2021), but we will see that even using only one RTX 3090 GPU for training, our method can still achieve good performance. The training on MOT17 or MOT20 takes only 4 hours and the training on DanceTrack takes 11 hours. 4.2 BENCHMARK RESULTS We now benchmark our proposed method against existing methods. The results on the MOT17-test dataset are shown in Table 1. HiPWA achieves the highest HOTA score among all transformer-based methods. But our method only achieves a MOTA score comparable to TransTrack (Sun et al., 2020) and GTR (Zhou et al., 2022), suggesting that the strength of HiPWA does not lie in the detection stage. The higher AssA score of our method also demonstrates its superior association performance. MOT20 is a challenging dataset containing scenes of crowded pedestrian flows. We report the results on the MOT20-test set in Table 2. Though HiPWA shows better performance than MeMOT (Cai et al., 2022) on MOT17, its performance is inferior on MOT20. This is probably related to the heavy and frequent occlusion on MOT20. It is common on MOT20 that a large portion of a pedestrian’s body is occluded for a long time. If the occlusion period is longer than the horizon for associating existing trajectories and new-coming detections, HiPWA is likely to fail. On the other hand, the much longer temporal buffer of object appearance history maintained by MeMOT turns out to be more effective in such scenarios. However, we note that we design HiPWA with the main goal of demonstrating the hierarchical part-whole representation, and we choose the most naive implementation of the association heads to make it a computationally efficient baseline model. In contrast, MeMOT requires 8×A100 GPUs for training to support its long-time history context buffering (22 frames vs. 8 frames for HiPWA) and uses the COCO (Lin et al., 2014) dataset as additional pretraining data. Next, we come to the benchmark on the DanceTrack dataset in Table 3. HiPWA achieves comparable performance with the best transformer-based methods. The association of HiPWA is inferior to MOTR (Zeng et al., 2021). MOTR has carefully designed global association and optimization modules: the global collective loss and the query interaction module proposed by MOTR to propagate information frame by frame show good effectiveness. However, as a side effect, its training and inference are much slower due to the heavy architecture. For example, training on MOT17 takes MOTR 2.5 days on 8×V100 GPUs, while our proposed method takes only 4 hours on 4×V100 GPUs. The inference speed is 6.3 FPS for MOTR and 17.2 FPS for our method on the same machine (V100 GPU).
Compared to the close baseline GTR (Zhou et al., 2022), HiPWA outperforms by a more significant gap on DanceTrack. This observation suggests that our proposed part-whole hierarchical representation can be more powerful when the occlusion is heavy. Given the results on the three benchmarks, we have demonstrated that the effectiveness of our proposed HiPWA is comparable to state-of-the-art transformer-based multi-object tracking algorithms, with a lightweight design. It builds a new baseline for future research in this line of work. The commonly adopted techniques of query propagation and iteration (Meinhardt et al., 2021; Sun et al., 2020; Zeng et al., 2021), deformable attention (Sun et al., 2020; Cai et al., 2022) and long-time feature buffering (Cai et al., 2022) are all compatible with HiPWA. 4.3 ABLATION STUDY Though we provide results on multiple benchmarks to show the efficiency and effectiveness of our proposed method, there are many variables in the design. We now ablate their contributions to the overall performance of HiPWA. Many previous works in the multi-object tracking community follow the practice of CenterTrack (Zhou et al., 2020) on MOT17 (Milan et al., 2016) and use the latter half of the training video sequences as the validation set. However, this makes ablation studies on the validation set not always convincing, because the data distributions of the training set and validation set are too close, and the performance gap reflected on the validation set might shrink or even disappear on the test set. Therefore, we turn to DanceTrack (Sun et al., 2021) for the ablation study instead, where an independent validation set is provided and is of a much larger scale than previous MOT datasets. In Table 4 and Table 5, we study the influence of the video clip length in the training and inference stages respectively. The results suggest that training the association model with longer video clips can continuously improve performance.

Table 4: Results of using different lengths of video clip during training.
T  | HOTA↑ | DetA↑ | AssA↑ | MOTA↑ | IDF1↑
6  | 47.8  | 70.0  | 32.8  | 81.1  | 49.7
8  | 48.1  | 70.2  | 33.2  | 80.6  | 50.3
10 | 48.7  | 70.0  | 34.0  | 80.3  | 51.7
12 | 49.2  | 71.1  | 34.1  | 82.6  | 52.0

Table 5: Results of using different lengths of video clip during inference.
T  | HOTA↑ | DetA↑ | AssA↑ | MOTA↑ | IDF1↑
8  | 47.5  | 69.8  | 32.5  | 80.1  | 50.3
16 | 47.9  | 70.1  | 32.9  | 81.4  | 50.6
24 | 48.1  | 70.2  | 33.2  | 80.6  | 50.3
32 | 47.8  | 70.1  | 32.8  | 81.2  | 49.8

Limited by GPU memory, we cannot increase the video clip length beyond 12 frames here. On the contrary, during the inference stage, increasing the sliding window size does not significantly influence the tracking performance. The hierarchical part-whole representation is the main contribution of our proposed method. Considering that the hierarchical representation gathers information from three levels (Part, Body, Union), we study the contribution of each of them in Table 6. Compared to only using the features extracted from the bounding box (body) area, our hierarchical representation achieves a performance improvement of 2.4 points of HOTA and 2.9 points of AssA. On the challenging DanceTrack dataset, such an improvement can be considered significant given that they share the same detections. Also, integrating the features of the union area shows better effectiveness than solely integrating the features of body parts.
This is probably because the cross-attention between the object body and union areas can provide critical information to compare object targets with their neighboring objects, which can prevent potential false associations among them. On the other hand, the information about body parts is already contained in the object’s body features; concatenating the part features with the body features thus does not introduce much previously missing information. Finally, as we aim to build a baseline model for future research in this area, we hope the proposed method is accessible and computationally economical. We try different parameter configurations in Table 7. Even with only a single RTX 3090 GPU for training and inference, the performance is still quite close to our default setting, which requires 4 × V100 GPUs. We hope this makes the notorious computation barrier of transformer-based methods less daunting. 5 CONCLUSION In this paper, we propose to build discriminative hierarchical part-whole representations as the visual descriptor for objects in multi-object tracking. The representation is built upon only bounding box annotations and at three levels: Part, Body, and Union. They are designed to provide the visual specificity of the object from the compositional, semantic, and contextual perspectives respectively. We further propose to use attention in the transformer to gather and process the visual features. The combination of these two aspects makes our method, namely Hierarchical Part-Whole Attention, or HiPWA for short. The results on multiple datasets demonstrate its efficiency and effectiveness. We hope the study in this paper can provide new knowledge on the visual representation of objects and an advanced baseline model for solving multi-object tracking problems.
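To make the Part-Body-Union construction of Section 3.2 concrete, below is a minimal, self-contained sketch of how the three levels can be built from bounding boxes alone (our own illustration under the paper's description; the function name and the overlap test are assumptions, not the authors' code):

```python
def part_body_union(box, other_boxes):
    """Build the Part-Body-Union hierarchy for one detection.
    box:         (x, y, w, h) of the target detection.
    other_boxes: list of (x, y, w, h) for the remaining detections in the frame.
    Returns (parts, body, union) as [x1, y1, x2, y2] boxes."""
    x, y, w, h = box
    body = [x, y, x + w, y + h]
    # split the body into 2x2 bins, as in the default setting
    xm, ym = x + w / 2, y + h / 2
    parts = [[x, y, xm, ym], [xm, y, x + w, ym],
             [x, ym, xm, y + h], [xm, ym, x + w, y + h]]
    # grow the union box to enclose every detection overlapping the target
    union = list(body)
    for ox, oy, ow, oh in other_boxes:
        disjoint = (ox + ow <= x or x + w <= ox or oy + oh <= y or y + h <= oy)
        if not disjoint:
            union = [min(union[0], ox), min(union[1], oy),
                     max(union[2], ox + ow), max(union[3], oy + oh)]
    return parts, body, union
```

The parts are the four 2×2 bins of the body box, and the union box encloses the target together with every overlapping detection, matching the default setting described in Section 3.2.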
1. What is the focus and contribution of the paper on multi-object tracking?
2. What are the strengths of the proposed approach, particularly in terms of the Hierarchical Part-Whole Attention module and its integration with transformer networks?
3. What are the weaknesses of the paper regarding the availability of source code and real-time efficiency?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes an interesting 'Hierarchical Part-Whole Attention' for multi-object tracking. The proposed module is integrated with a transformer network and achieves good performance (comparable or even better results than SOTA MOT trackers). The overall training efficiency is also good, i.e., 4 hours on 4*V100 GPUs, while other transformer-based trackers may need days. This paper is well written and organized, and I believe it will be a good baseline for future works to compare against and to develop further. Strengths And Weaknesses Strengths: the idea of a Hierarchical Part-Whole representation of the target object in MOT seems interesting; the combination of Hierarchical Part-Whole Attention and the transformer works well on existing benchmark datasets; the paper is easy to follow and understand. Weaknesses: whether the source code will be released is unclear. It is an interesting idea, but the implementation is also complicated; if the code is not available, it may be hard for other researchers to follow. The running efficiency is not real-time, i.e., not fast enough for practical applications. Clarity, Quality, Novelty And Reproducibility This paper is clearly written and the information is enough to reproduce the experiments.
ICLR
Title Object Tracking by Hierarchical Part-Whole Attention Abstract We present in this paper that hierarchical representations of objects can provide an informative and low-noisy proxy to associate objects of interest in multi-object tracking. This is aligned with our intuition that we usually only need to compare a little region of the body of target objects to distinguish them from other objects. We build the hierarchical representation in levels of (1) target body parts, (2) the whole target body, and (3) the union area of the target and other objects of overlap. Furthermore, with the spatio-temporal attention mechanism by transformer, we can solve the tracking in a global fashion and keeps the process online. We design our method by combining the representation with the transformer and name it Hierarchical Part-Whole Attention, or HiPWA for short. The experiments on multiple datasets suggest its good effectiveness. Moreover, previous methods mostly focus on leveraging transformers to exploit long temporal context during association which requires heavy computation resources. But HiPWA focuses on a more informative representation of objects on every single frame instead. So it is more robust with the length of temporal context and more computationally economic. 1 INTRODUCTION How to represent the visual existence of an object in a discriminative fashion is a core question of computer vision. In this paper, we propose a hierarchical part-whole representation to represent the visual existence of objects. We adopt multi-object tracking as the application area since the distinguishable appearance feature is critical to avoid the mismatch among target objects when tracking across frames. To gather and process the visual information from different levels, we combine the hierarchical part-whole representation with the attention mechanism from transformers to summarize distinguishable and discriminative visual representations for objects of interest. In the task of multi-object tracking, given a bounding box to localize objects of interest, how should we recognize the major object within the box and distinguish it from the background and other objects, especially some also having partial existence in the box? We believe the visual specificity of one object comes from three perspectives: the compositional, the semantic and the contextual. The compositional suggests the salient and unique visual regions on an object, such as a hat on a pedestrian whose color is different from all others in the same image. With a salient visual composition attached to an object, we can track it across frames even without seeing its full body. The semantic visual information is the commonly adopted one in modern computer vision such as a tight bounding box or instance segmentation mask. It defines the occupancy area of the object with the bond between its visual existence and semantic concept. Finally, contextual visual information describes the surroundings of an object. It helps to distinguish an object via contrast. For example, the bounding box might contain pixels from the background and secondary objects. However, a tight bounding box offers a strong underlying prior when combined with visual context: an object whose parts span across the boundary of the bounding box should not be the major object of this bounding box. Being the secondary object or not an object of interest, it should be regarded as noise when we generate a distinguishable visual representation for the major subject in the bounding box. 
The analysis above shows each level has its value to represent an object discriminatively. Motivated by the insight, we propose to represent an object by a three-level hierarchy: body parts, full body, and the union area including objects with overlap. We summarize it as a “Part-Body-Union” hierarchy. With the hierarchy constructed, an ideal path to solving the target association in multi-object tracking is to leverage the salient information within the body area and discard mismatch by eliminating the noise revealed by the contextual contrast. Without requiring more fine-grained data annotation, we propose to use transformers to process the hierarchical representation as the attention mechanism can discover important visual information. So, by combining the hierarchical visual representation and attention-based feature fusion, we finally propose our method as Hierarchical Part-Whole Attention, or HiPWA for short. In this work, we build a baseline model following this design and demonstrate its effectiveness in solving multi-object tracking problems. Through experiments on multiple multiobject tracking datasets, the proposed method achieves comparable or even better performance than the state-of-the-art transformer-based methods with a more lightweight implementation and better time efficiency during training and inference. 2 RELATED WORKS 2.1 REPRESENTING OBJECTS BY PARTS The most commonly used object representation for multi-object tracking is bounding boxes. However, the bounding box is noisy by containing background pixels and pixels from secondary objects. On the other hand, our life experience demonstrates that, in many scenarios, it is not necessary to observe the full body of objects to specify an object visually and tracking targets by the distinguishable parts on it is usually more efficient. Therefore, researchers also have been studying object detection and tracking with more fine-grained representation. A common way is to use pre-defined certain parts on target bodies, such as only human head (Sundararaman et al., 2021; Shao et al., 2018), human joints (Andriluka et al., 2018; Xiu et al., 2018) or even every pixel (Voigtlaender et al., 2019; Weber et al., 2021). However, all these choices require more fine-grained data annotation beyond bounding boxes and more fine-grained perception modules beyond just normally available object detectors. In the contrast, the part-whole hierarchy we construct requires no additional annotations and we still solve tracking tasks at the granularity of bounding boxes. The idea of modeling objects with different levels is inspired by the hierarchical modeling of the human body (Marr, 2010) by David Marr when he explains how to construct the visual structure of an object from primal sketch to 2.5 sketch and further 3D representation. His classic three levels of visual information processing system concludes this in a higher-level: the computational, the algorithmic, and the implementational. A similar theory is also introduced by Fodor & Pylyshyn (1988) as the semantic, the syntactic, and the physical. Compared to these cognitive theories aiming to model general visual representation, the three perspectives we propose to recognize an object and distinguish it from others (the compositional, the semantic and the contextual) only apply to the specific problem of generating an effective visual descriptor to represent the objects of interest. 
2.2 TRANSFORMER-BASED MULTI-OBJECT TRACKING Transformer (Vaswani et al., 2017) is originally proposed for natural language processing. It shows a powerful capacity for information representation and processing. Later, DETR (Carion et al., 2020) introduces the transformer to the area of visual perception for object detection. It models object detection as solving a bipartite matching problem. Given that the matching-based strategy by DETR is quite similar to the target matching in the task of multi-object tracking, it is intuitive to further migrate transformer to this area. TransTrack (Sun et al., 2020) is the first work using the transformer to solve the MOT problem but it does not invent any association strategy by transformers. A concurrent work TrackFormer (Meinhardt et al., 2021) takes a further step to use the cross attention in transformer decoder in the stage of association by query passing. On the other hand, VisTR (Wang et al., 2021c) proposes a novel global association scheme upon transformer where a video clip of multiple frames is forward into the transformer at the same time to associate objects within the clip. More recently, many works (Zhou et al., 2022; Zeng et al., 2021) follow the global association scheme in either training or inference and achieve good performance. A key to their success is to process the information over a long temporal period, which can be hardly handled without the transformer. GTR (Zhou et al., 2022) makes a baseline model of using only appearance in associating objects and removing some secondary modules such as positional encoding and learnable object query. However, a downside of processing multiple frames as a batch by the transformer is the high requirement of computation resources. It has become a common practice to train the model on at least 4xV100 GPUs (Zhou et al., 2022; Sun et al., 2020; Zeng et al., 2021) or even 8xA100 GPUs (Cai et al., 2022). These methods usually suffer from significant performance drop if only limited computation resource is available. This is because they usually make improvements to association performance by taking advantage of a long temporal window and gathering more visual context within it. In this work, we focus on building a more computation and memory-efficient visual representation for objects from the scope of a single frame instead. This scheme is flexible to be integrated with transformers and more robust to short time windows during object association. 3 METHOD In this section, we introduce the method we propose to leverage a hierarchical part-whole visual representation with the attention mechanism from the transformer for multi-object tracking. In Section 3.1, we describe the overview structure of our method using global association. Then, in Section 3.2, we dive into the details of our proposed part-whole attention module. Finally, we talk about the details of training and inference by HiPWA in Section 3.3. 3.1 GLOBAL ASSOCIATION Before the transformer is introduced into this area, people usually solve multi-object tracking in a frame-by-frame fashion where the association is performed on only two frames. Recently, the transformer shows the advantage to gather and process information from multiple steps in parallel. To leverage this advantage, previous methods (Wang et al., 2021c; Zhou et al., 2022) propose to perform association in a video clip instead of just two frames. Therefore, the spatio-temporal attention capacity of the transformer leads to a new global association fashion. 
We follow this scheme in our proposed method. The overall pipeline of HiPWA is shown in the left-hand half of Figure 1. Now, we explain the three stages of it. Detection and Feature Extraction. Given a video clip of T frames, i.e., T = {t, t+1, ..., t+T}, we have the corresponding images I = {It, It+1, ..., It+T }. Given a detector model, we could derive the detections of the object category of interest on all frames in parallel, noted as O = {Ot11 , ..., O tN N }. N is the number of detections on the T frames and ti ∈ T (1 ≤ i ≤ N ) is the time step where the i-th detection, i.e., Otii , is detected. Then, we generate the representations of each detected object and note them as F = {F1, F2, ..., FN}. The most commonly adopted solution is to use the backbone output on the object area as the representation features while we adopt our proposed hierarchical part-whole representation instead whose details are to be introduced soon. Token Generation by Hierarchical Part-Whole Attention. After being projected into vectors, the hierarchical representations of detections become tokens Tdet = {Tkdet1 , Tkdet2 , ..., TkdetN }, which are also terms as “object query” in previous works (Sun et al., 2020; Meinhardt et al., 2021). Concatenating the tokens makes Qdet ∈ RN×D, where D is the feature dimension. If we aim to associate the new-coming detections with existing trajectories, we also need the tokens to represent the existing M trajectories, i.e., Ttraj = {Tktraj1 , Tk traj 2 , ..., Tk traj M }. The transformer has shown good power to generate more discriminative feature tokens for trajectories by iterative query passing (Zeng et al., 2021) or long-time feature buffering (Cai et al., 2022). But to make our method simple, we directly project the hierarchical representation of objects on existing trajectories to represent the trajectories. Given a historical horizon H to backtrack the objects on the previous time steps of a trajectory, we represent a trajectory, Tktrajj , with the “track query” Q traj j ∈ RH×D. The track query is the combination of the feature tokens of detections within the historical horizon on the corresponding trajectory. Global Association. By cross-attention, we could get the association score between the set of detections and the trajectory Tktrajj as S(Q traj j , Q det) ∈ RH×N . In practice, because we aim to associate between all M trajectories and N detections, we perform the cross-attention on all object queries and track queries at the same time, namely S(Qtraj, Qdet) ∈ RHM×N . By averaging the score on the H frames selected from the historical horizon, we could finally get the association score between detections and trajectories as S ∈ RM×N . Then, we need to make sure that a trajectory will never be associated with more than one object from the same frame. We normalize the association scores between a trajectory and objects from the same time step by softmax. So the normalized association score between the j-th trajectory and the i-th detection is P (Massoj,i = 1|Qdet, Qtraj) = exp(Sj,i)∑ k∈{1,2,...,N} 1[tk = ti]exp(Sj,k) , (1) where the binary indicator function 1[tk = ti] indicates whether the i-th detection and the k-th detection are on the same time step. Masso ∈ R(M+1)×N is the final global association matrix. Its dimension is of (M + 1)×N because each detection can be associated with an “empty trajectory” to start a new track in practice. 
The query of the "empty trajectory" is represented by a query randomly drawn from previously unassociated tokens during training. Also, after the association, unassociated trajectories are considered absent on the corresponding frames. In this fashion, we can train over a large set of detections and trajectories in parallel, and can also conduct inference in an online manner by setting $O$ to the set of detections from only the newly arriving frame.

3.2 HIERARCHICAL PART-WHOLE ATTENTION
We now come to the details of constructing the hierarchical part-whole visual representations. We name this process hierarchical part-whole attention, as we use the attention mechanism to gather and process information from the different levels in the hierarchy, as illustrated in the right-hand half of Figure 1. We design this representation because we believe there are three perspectives from which to describe the existence of an object and its identification against other objects: the compositional, the semantic, and the contextual. Correspondingly, we believe the body part patches, the full object body, and the union of the occupancy of interacting objects provide the knowledge from these three perspectives, respectively. The insight behind this module is what we would most like to deliver in this work.

Hierarchy Construction. We represent a detected object by a quintuple, i.e., $O = [x, y, w, h, c]$, where the first four values describe its bounding box and $c$ is the detection confidence. Its body area is thus $B = [x, y, x+w, y+h]$. Next, we divide the body into multiple sub-regions (parts). By default, similar to what ViT (Dosovitskiy et al., 2020) does on images, we divide the bounding box into $2 \times 2$ bins, yielding a set of body parts $P = \{P_1, P_2, P_3, P_4\}$. On the other hand, from a global scope, there are other targets interacting with $O$ that are highly likely to be mismatched with $O$ in the association stage. We crop the union area enclosing $O$ and all other targets that overlap with it, and denote this union area as $U$. We have thus derived the part-whole hierarchy $\{P, B, U\}$ in a very straightforward way.

Feature Fusion. Given the part-whole hierarchy, we have to fuse the features from the different levels to get the final feature tokens for association. With a feature encoder, we extract the CNN features $F_P \in \mathbb{R}^{4C \times H \times W}$, $F_B \in \mathbb{R}^{C \times H \times W}$, and $F_U \in \mathbb{R}^{C \times H \times W}$. We simply concatenate the features from the first two levels into $F_{P+B} \in \mathbb{R}^{5C \times H \times W}$. Then, with a two-layer projection network, we obtain projected features $V_{P+B} \in \mathbb{R}^{5 \times D}$. We also apply the projection to the union area features and get $V_U \in \mathbb{R}^{D}$. Finally, we perform cross-attention between $V_{P+B}$ and $V_U$ and forward the output to an MLP network to get tokens of shape $\mathbb{R}^{5 \times D}$. Before being forwarded to the global association stage, the tokens are projected to the uniform dimension $D$.
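The fusion step above can be sketched as a small PyTorch module. This is an illustration under stated assumptions rather than the authors' implementation: we assume the CNN features of each region have been spatially pooled to vectors, and the layer sizes and single-head attention are our own choices.

```python
import torch
import torch.nn as nn

class PartWholeFusion(nn.Module):
    """Sketch of the Part-Body-Union fusion in Section 3.2 (sizes assumed)."""

    def __init__(self, c, d):
        super().__init__()
        # two-layer projection, mirroring the paper's "Linear + ReLU" stacks
        self.proj = nn.Sequential(nn.Linear(c, d), nn.ReLU(), nn.Linear(d, d))
        self.attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, f_parts, f_body, f_union):
        # f_parts: (4, C); f_body: (1, C); f_union: (1, C) -- pooled features
        v_pb = self.proj(torch.cat([f_parts, f_body], dim=0))[None]  # (1, 5, D)
        v_u = self.proj(f_union)[None]                               # (1, 1, D)
        # cross-attention between the part+body tokens and the union token
        out, _ = self.attn(query=v_pb, key=v_u, value=v_u)           # (1, 5, D)
        return self.mlp(out)[0]                                      # (5, D)
```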
3.3 TRAINING AND INFERENCE
The method we implement is a baseline model without complicated designs on "queries". We simply use the hierarchical part-whole features of detected objects to serve as the representations of both detections and trajectories. During training, we can associate between detections in the sampled video clips or between detections and existing trajectories; these two association schemes are implemented identically and share all model modules. During inference, to keep the process online, we only perform association between the detections from the newly arriving frame and the existing trajectories. We realize this by iterating a sliding window with a stride of one frame.

Training. We train the association module by maximizing the likelihood of associating detections belonging to the same ground-truth trajectory, as expressed in Eq. 1. However, Eq. 1 applies to a single time step $t_i$ only. To speed up training, we calculate the association scores on all $T$ frames of the sampled video clip at the same time and maximize the likelihood of the associations aligned with the ground truths globally in the time window. The objective thus becomes

$$\prod_{q=t}^{t+T} P(M^{asso}_{j,\tau^j_q} = 1 \mid Q^{det}, Q^{traj}), \qquad (2)$$

where $\tau^j_q$ is the ground-truth index of the detection that should be associated with the $j$-th trajectory at time step $q$. Therefore, by traversing the associations of all trajectories, the training objective becomes the negative log-likelihood loss

$$\mathcal{L}_{asso} = -\sum_{j=1}^{M} \sum_{q=t}^{t+T} \log P(M^{asso}_{j,\tau^j_q} = 1 \mid Q^{det}, Q^{traj}). \qquad (3)$$

On the other hand, trajectories can also be absent on some time steps because of occlusion or target disappearance. So, similar to the practice of DETR (Carion et al., 2020) for detection and GTR (Zhou et al., 2022) for tracking, Eq. 3 includes the situation of associating a trajectory with "empty". Moreover, the main reason mismatches happen is that the features of objects with different identities are indiscriminative. Therefore, to encourage the representations of objects with different identities to be distinguishable, we design a feature discrimination loss in the form of a triplet loss:

$$\mathcal{L}_{feat} = \max\Big(0,\; \min_{u=1}^{N_P} \|Att(f(F_{P_u}), f(F_B)) - f(F_B)\|^2 - \|Att(f(F_B), f(F^{bg}_U)) - f(F_B)\|^2 + \alpha\Big), \qquad (4)$$

where $f(\cdot)$ is the shared projection layer that projects CNN features to feature vectors, and $N_P$ is the number of part patches ($N_P = 4$ in our default setting). $Att(\cdot, \cdot)$ is the cross-attention operation that generates attended features, and $\alpha$ is the margin controlling the distance between positive and negative pairs. $F_B$ and $F_{P_u}$ ($1 \le u \le N_P$) are the extracted features of the body area and the part sub-regions, as explained above. $F^{bg}_U$ denotes the CNN features of the background area in the union box; we obtain them by zeroing the pixels of $B$ within $U$ and forwarding the processed patch of the union area through the feature encoder. We design Eq. 4 to encourage the projection network to pay more attention to the salient areas on the body of target objects and less attention to the background area when processing the hierarchical part-whole representations. It also encourages the features of the background area in the union box, which probably belongs to another target, to be distinguishable from the body features, which can be expected to decrease the chance of mismatches between neighboring objects. Finally, the overall training objective is

$$\mathcal{L} = \mathcal{L}_{asso} + \mathcal{L}_{feat} + \mathcal{L}_{det}, \qquad (5)$$

where $\mathcal{L}_{det}$ is an optional detection loss. In our default implementation, we finetune the detector at the same time as training the association modules.
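A sketch of the feature discrimination loss in Eq. 4 follows. This is our reading of the formula, not released code: `att` and `f` stand for the paper's cross-attention and shared projection, and the default value of the margin `alpha` is assumed (the paper does not state it here).

```python
import torch

def feature_discrimination_loss(att, f, F_parts, F_body, F_union_bg, alpha=1.0):
    """Triplet-style loss of Eq. 4 (sketch; alpha's value is an assumption).

    att(q, kv): cross-attention producing attended features.
    f:          shared projection from CNN features to vectors.
    F_parts:    list of N_P part features; F_body / F_union_bg as in the text.
    """
    body = f(F_body)
    # positive term: each part, attended to the body, should stay close to it
    pos = torch.stack([(att(f(p), body) - body).pow(2).sum() for p in F_parts])
    # negative term: the body attended to the union background should not
    neg = (att(body, f(F_union_bg)) - body).pow(2).sum()
    return torch.clamp(pos.min() - neg + alpha, min=0.0)
```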
Inference. We adopt the traditional sliding-window style to realize online inference. With a window size of $T = 24$ and stride 1, we start from the first frame of the input video. On the first frame, every detection is initialized as a new trajectory. In each time window, we generate trajectories from the detections within it. Then we use the association scores in Eq. 1 to associate these trajectories with existing trajectories outside the time window. By averaging the detection-trajectory scores over the detections of a trajectory, we obtain trajectory-trajectory association scores, whose negative values serve as the entries in the cost matrix for the assignment. We adopt Hungarian matching to ensure a one-to-one mapping. A pair of trajectories is eligible to be associated only when its association score is higher than a threshold $\beta = 0.3$. All unassociated detections on the newly arriving frames start new tracks.
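The assignment step above can be sketched in a few lines with SciPy's Hungarian solver; the function and argument names are ours, while the threshold follows the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_trajectories(scores, beta=0.3):
    """One-to-one matching between existing trajectories (rows) and window
    trajectories (cols) from averaged association scores (a sketch)."""
    rows, cols = linear_sum_assignment(-scores)  # Hungarian matching on -scores
    keep = scores[rows, cols] > beta             # keep pairs clearing beta
    return list(zip(rows[keep].tolist(), cols[keep].tolist()))
```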
4 EXPERIMENTS
4.1 EXPERIMENT SETUPS
Datasets. We conduct quantitative experiments on multiple multi-object tracking datasets, including MOT17 (Milan et al., 2016), MOT20 (Dendorfer et al., 2020) and DanceTrack (Sun et al., 2021). We focus on pedestrian tracking in this paper, so pedestrians are the only object category of interest on all datasets. MOT17 and MOT20 are classic and popular datasets in pedestrian tracking, but their scales are relatively small and they have no official validation sets. DanceTrack, on the contrary, is a recently proposed dataset of much larger scale that provides an official validation set with no overlap with the training set. DanceTrack focuses on scenarios where targets are in the foreground, so detection is not considered the bottleneck as it is on MOT20. DanceTrack mainly contains videos in which targets have heavy occlusion, complex motion patterns, and similar appearances, so it provides a good platform to study the robustness of tracking algorithms.

Evaluation Metrics. The popular CLEAR evaluation protocol (Bernardin & Stiefelhagen, 2008) is based on single-frame-wise matching between the ground truth and predictions. This makes the metrics emphasize single-frame detection quality rather than cross-frame association performance; MOTA, the main metric of the CLEAR protocol, is also biased toward detection quality. To provide a more accurate sense of association performance in tracking, we mainly adopt the more recent HOTA (Luiten et al., 2021) metric set, which is calculated from the video-level association between ground truth and predictions. Within this set of metrics, AssA emphasizes association performance and DetA stresses detection quality, while HOTA is the main metric, taking both detection and association quality into consideration.

Implementation. We follow common practice (Sun et al., 2020; Zeng et al., 2021; Cai et al., 2022) in using ResNet-50 (He et al., 2016) as the backbone network, pretrained on the CrowdHuman (Shao et al., 2018) dataset. Though an advanced detector (Zhang et al., 2021a) has been demonstrated to be a key to boosting tracking performance, we want our contribution to come mainly from improving the association stage. Therefore, on MOT17, we follow the practice of another transformer-based global association tracking method, GTR (Zhou et al., 2022), in using the classic CenterNet (Zhou et al., 2019; 2020) as the detector, and all training details are aligned with it to make fair comparisons with this close baseline. The CenterNet detector is pretrained together with the backbone on CrowdHuman, in line with common practice on this dataset. For fine-tuning the association modules, we use a 1:1 mixture of MOT17 and CrowdHuman for MOT17. We fine-tune with only the MOT20 training set for evaluation on MOT20. For DanceTrack, we use its official training set as the only training set during fine-tuning. The image size is set to 1280 × 1280 during training, and at test time the longer edge is resized to 1560. During fine-tuning, the detector head is also fine-tuned, as mentioned above. The training iterations are set to 20k on MOT17/MOT20 and 80k on DanceTrack. We use BiFPN (Tan et al., 2020) for feature upsampling. For the implementation of the transformer, we follow the practice of Zhou et al. (2022) and use a stack of two layers of "Linear + ReLU" as the projection layers, together with one-layer encoders and decoders. We use the AdamW (Loshchilov & Hutter, 2017) optimizer with a base learning rate of 5e-5. The length of the video clip is T = 8 during training and T = 24 for inference in a sliding window. We use 4 V100 GPUs as the default training device, following previous practice (Zhou et al., 2022; Zeng et al., 2021), but we will see that even with only one RTX 3090 GPU for training, our method can still achieve good performance. Training on MOT17 or MOT20 takes only 4 hours, and training on DanceTrack takes 11 hours.
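For convenience, the hyperparameters stated in the implementation paragraph above can be collected into a single configuration; the dictionary layout and field names below are our own, not the authors' actual configuration file.

```python
# Settings gathered from the text; names are ours (a sketch, not released code).
HIPWA_CONFIG = dict(
    backbone="ResNet-50",            # pretrained on CrowdHuman
    neck="BiFPN",                    # feature upsampling
    detector="CenterNet",
    train_image_size=(1280, 1280),
    test_longer_edge=1560,
    optimizer="AdamW",
    base_lr=5e-5,
    train_iters={"MOT17": 20_000, "MOT20": 20_000, "DanceTrack": 80_000},
    clip_len_train=8,                # frames per clip during training
    clip_len_infer=24,               # sliding-window size at inference
    assoc_threshold=0.3,             # beta from Section 3.3
)
```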
4.2 BENCHMARK RESULTS
We now benchmark our proposed method against existing methods. The results on the MOT17-test set are shown in Table 1. HiPWA achieves the highest HOTA score among all transformer-based methods, but only a MOTA score comparable to TransTrack (Sun et al., 2020) and GTR (Zhou et al., 2022), suggesting that the superiority of HiPWA does not lie in the detection stage. The higher AssA score of our method likewise demonstrates its superior association performance.

MOT20 is a challenging dataset containing scenes of crowded pedestrian flows. We report results on the MOT20-test set in Table 2. Though HiPWA shows better performance than MeMOT (Cai et al., 2022) on MOT17, its performance is inferior on MOT20. This is probably related to the heavy and frequent occlusion on MOT20: it is common there for a large portion of a pedestrian's body to be occluded for a long time. If the occlusion period is longer than the horizon for associating existing trajectories with newly arriving detections, HiPWA is likely to fail. On the other hand, the much longer temporal buffer of object appearance history maintained by MeMOT turns out to be more effective in such scenarios. However, we note that we designed HiPWA with the main goal of demonstrating the hierarchical part-whole representation, choosing the most naive implementation for the association heads to make it a computationally efficient baseline model. In contrast, MeMOT requires 8 A100 GPUs for training to support its long history context buffering (22 frames vs. 8 frames for HiPWA) and uses the COCO (Lin et al., 2014) dataset as additional pretraining data.

Next, we come to the benchmark on the DanceTrack dataset in Table 3. HiPWA achieves performance comparable to the best transformer-based methods. The association of HiPWA is inferior to MOTR (Zeng et al., 2021), which has carefully designed global association and optimization modules: its global collective loss and its query interaction module, which propagates information frame by frame, show good effectiveness. However, as a side effect, MOTR's training and inference are much slower due to its heavy architecture. For example, training on MOT17 takes MOTR 2.5 days on 8 V100 GPUs, versus only 4 hours on 4 V100 GPUs for our proposed method, and the inference speed is 6.3 FPS for MOTR versus 17.2 FPS for our method on the same machine (a V100 GPU). Compared to the close baseline GTR (Zhou et al., 2022), HiPWA outperforms by a more significant margin on DanceTrack. This observation suggests that our proposed part-whole hierarchical representation can be more powerful when occlusion is heavy. Given the results on the three benchmarks, we have demonstrated that HiPWA is comparable in effectiveness to state-of-the-art transformer-based multi-object tracking algorithms with a lightweight design, and it establishes a new baseline for future research in this line of work. The commonly adopted techniques of query propagation and iteration (Meinhardt et al., 2021; Sun et al., 2020; Zeng et al., 2021), deformable attention (Sun et al., 2020; Cai et al., 2022) and long-time feature buffering (Cai et al., 2022) are all compatible with HiPWA.

4.3 ABLATION STUDY
Though we have provided results on multiple benchmarks to show the efficiency and effectiveness of our proposed method, there are many variables in the design. We now ablate their contributions to the overall performance of HiPWA. Many previous works in the multi-object tracking community follow the practice of CenterTrack (Zhou et al., 2020) on MOT17 (Milan et al., 2016) of using the latter half of the training video sequences as the validation set. However, this makes ablation studies on the validation set not always convincing, because the data distributions of the training and validation sets are too close, and a performance gap on the validation set might shrink or even disappear on the test set. Therefore, we instead turn to DanceTrack (Sun et al., 2021) for the ablation study, which provides an independent validation set of a much larger scale than previous MOT datasets.

In Table 4 and Table 5, we study the influence of video clip length in the training and inference stages, respectively. The results suggest that training the association model with longer video clips can continuously improve performance.

Table 4: Results of using different lengths of video clip during training.
T   HOTA↑  DetA↑  AssA↑  MOTA↑  IDF1↑
6   47.8   70.0   32.8   81.1   49.7
8   48.1   70.2   33.2   80.6   50.3
10  48.7   70.0   34.0   80.3   51.7
12  49.2   71.1   34.1   82.6   52.0

Table 5: Results of using different lengths of video clip during inference.
T   HOTA↑  DetA↑  AssA↑  MOTA↑  IDF1↑
8   47.5   69.8   32.5   80.1   50.3
16  47.9   70.1   32.9   81.4   50.6
24  48.1   70.2   33.2   80.6   50.3
32  47.8   70.1   32.8   81.2   49.8

Limited by GPU memory, we cannot increase the video clip length beyond 12 frames here. In contrast, during the inference stage, increasing the sliding-window size does not significantly influence tracking performance. The hierarchical part-whole representation is the main contribution of our proposed method. Considering that it gathers information from three levels (Part, Body, Union), we study the contribution of each of them in Table 6. Compared to only using the features extracted from the bounding box (body) area, our hierarchical representation achieves a performance improvement of 2.4 points of HOTA and 2.9 points of AssA. On the challenging DanceTrack dataset, such improvement can be considered significant given that the variants share the same detections. Also, integrating the features of the union area is more effective than solely integrating the features of body parts.
This is probably because the cross attention between the object body and union areas provides critical information for comparing object targets with their neighboring objects, which can prevent potential false associations among them. On the other hand, the information about body parts is already largely contained in the object's body features, so concatenating the part features with the body features does not introduce much previously missing information. Finally, as we aim to build a baseline model for future research in this area, we hope the proposed method is accessible and computationally economical. We try different parameter configurations in Table 7. Even with only a single RTX 3090 GPU for training and inference, the performance is still quite close to our default setting, which requires 4 V100 GPUs. We hope this lowers the notorious computation barrier of transformer-based methods.

5 CONCLUSION
In this paper, we propose to build discriminative hierarchical part-whole representations as the visual descriptor for objects in multi-object tracking. The representation is built upon only bounding box annotations and on three levels: Part, Body, and Union. These are designed to provide the visual specificity of the object from the compositional, semantic, and contextual perspectives, respectively. We further propose to use the attention mechanism of the transformer to gather and process the visual features. The combination of these two aspects constitutes our method, named Hierarchical Part-Whole Attention, or HiPWA for short. The results on multiple datasets demonstrate its efficiency and effectiveness. We hope this study can provide new knowledge about the visual representation of objects and an advanced baseline model for solving multi-object tracking problems.
1. What is the main contribution of the paper regarding multi-object tracking? 2. What are the strengths and weaknesses of the proposed hierarchical representation? 3. Do you have any questions or concerns about the notations and their presentations in the paper? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations or potential improvements regarding the motion features in the association?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a hierarchical representation of objects for multi-object tracking. The hierarchical representation consists of 3 levels: part, whole, and the union of overlapping objects. The proposed approach demonstrated good performance on multiple public pedestrian/dance datasets.

Strengths And Weaknesses
The strength of the paper is the fusing of the three levels of representation for multi-object tracking associations. The weaknesses of the paper are that some of the notations and areas are not clearly presented. The notations M and N in section 3.1 are not defined when they first appear, which causes confusion when reading the paper. N should be T in 1<=i<=N on the third page. On page 4, "we can extract the CNN features from them" — what is the CNN architecture used to extract the features? In the Inference section on page 6, "whose negative value serves as the entries in the cost matrix" — the cost values should not be negative. Another weakness of the approach is the lack of motion features in the association: only appearance features are considered. This can lead to association errors when two objects have similar appearances (e.g., similar colors of clothes) even if they are far apart in space. In the Detection and Feature Extraction section, detection features are extracted from a video clip of T frames and then the detections from each of the T frames are associated with the trajectories. There can be significant motion across the T frames, and appearance alone would be hard-pressed to maintain the discriminability of the objects.

Clarity, Quality, Novelty And Reproducibility
The combination of 3 levels of representation, especially the union of overlaps using attention, is somewhat creative.
ICLR
Title
Object Tracking by Hierarchical Part-Whole Attention

Abstract
We present in this paper that hierarchical representations of objects can provide an informative and low-noise proxy for associating objects of interest in multi-object tracking. This is aligned with our intuition that we usually only need to compare a small region of the body of a target object to distinguish it from other objects. We build the hierarchical representation on the levels of (1) target body parts, (2) the whole target body, and (3) the union area of the target and other overlapping objects. Furthermore, with the spatio-temporal attention mechanism of the transformer, we can solve tracking in a global fashion while keeping the process online. We design our method by combining this representation with the transformer and name it Hierarchical Part-Whole Attention, or HiPWA for short. Experiments on multiple datasets suggest its good effectiveness. Moreover, previous methods mostly focus on leveraging transformers to exploit long temporal context during association, which requires heavy computation resources, whereas HiPWA focuses on a more informative representation of objects on every single frame instead. It is hence more robust to the length of the temporal context and more computationally economical.

1 INTRODUCTION
How to represent the visual existence of an object in a discriminative fashion is a core question of computer vision. In this paper, we propose a hierarchical part-whole representation to represent the visual existence of objects. We adopt multi-object tracking as the application area, since distinguishable appearance features are critical to avoiding mismatches among target objects when tracking across frames. To gather and process the visual information from the different levels, we combine the hierarchical part-whole representation with the attention mechanism of transformers to summarize distinguishable and discriminative visual representations for objects of interest. In the task of multi-object tracking, given a bounding box localizing objects of interest, how should we recognize the major object within the box and distinguish it from the background and other objects, especially those that also partially appear in the box? We believe the visual specificity of an object comes from three perspectives: the compositional, the semantic, and the contextual. The compositional refers to the salient and unique visual regions on an object, such as a hat on a pedestrian whose color differs from all others in the same image. With a salient visual composition attached to an object, we can track it across frames even without seeing its full body. The semantic visual information is the one commonly adopted in modern computer vision, such as a tight bounding box or an instance segmentation mask: it defines the occupancy area of the object and binds its visual existence to a semantic concept. Finally, the contextual visual information describes the surroundings of an object and helps to distinguish it via contrast. For example, a bounding box might contain pixels from the background and from secondary objects. However, a tight bounding box offers a strong underlying prior when combined with visual context: an object whose parts span across the boundary of the bounding box should not be the major object of this bounding box. Being a secondary object, or not an object of interest at all, it should be regarded as noise when we generate a distinguishable visual representation for the major subject in the bounding box.
The analysis above shows that each level has its own value for representing an object discriminatively. Motivated by this insight, we propose to represent an object by a three-level hierarchy: body parts, full body, and the union area including overlapping objects. We summarize it as a "Part-Body-Union" hierarchy. With the hierarchy constructed, an ideal path to solving target association in multi-object tracking is to leverage the salient information within the body area and to discard mismatches by eliminating the noise revealed by contextual contrast. Without requiring more fine-grained data annotation, we propose to use transformers to process the hierarchical representation, as the attention mechanism can discover important visual information. So, by combining the hierarchical visual representation and attention-based feature fusion, we propose our method, Hierarchical Part-Whole Attention, or HiPWA for short. In this work, we build a baseline model following this design and demonstrate its effectiveness in solving multi-object tracking problems. Through experiments on multiple multi-object tracking datasets, the proposed method achieves comparable or even better performance than state-of-the-art transformer-based methods, with a more lightweight implementation and better time efficiency during training and inference.

2 RELATED WORKS
2.1 REPRESENTING OBJECTS BY PARTS
The most commonly used object representation for multi-object tracking is the bounding box. However, a bounding box is noisy, as it contains background pixels and pixels from secondary objects. On the other hand, everyday experience demonstrates that, in many scenarios, it is not necessary to observe the full body of an object to specify it visually, and tracking targets by their distinguishable parts is usually more efficient. Researchers have therefore also studied object detection and tracking with more fine-grained representations. A common way is to use certain pre-defined parts of target bodies, such as only the human head (Sundararaman et al., 2021; Shao et al., 2018), human joints (Andriluka et al., 2018; Xiu et al., 2018), or even every pixel (Voigtlaender et al., 2019; Weber et al., 2021). However, all these choices require more fine-grained data annotation beyond bounding boxes and more fine-grained perception modules beyond normally available object detectors. In contrast, the part-whole hierarchy we construct requires no additional annotations, and we still solve tracking at the granularity of bounding boxes. The idea of modeling objects at different levels is inspired by David Marr's hierarchical modeling of the human body (Marr, 2010), in which he explains how to construct the visual structure of an object from the primal sketch to the 2.5D sketch and further to a 3D representation. His classic three levels of a visual information processing system summarize this at a higher level: the computational, the algorithmic, and the implementational. A similar theory was also introduced by Fodor & Pylyshyn (1988) as the semantic, the syntactic, and the physical. Compared to these cognitive theories, which aim to model general visual representation, the three perspectives we propose for recognizing an object and distinguishing it from others (the compositional, the semantic, and the contextual) apply only to the specific problem of generating an effective visual descriptor to represent the objects of interest.
1. What is the focus and contribution of the paper on multiple object tracking? 2. What are the strengths of the proposed approach, particularly in its modification from the original GTR framework? 3. What are the weaknesses of the paper, especially in terms of experimentation and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the part-whole attention module design and its potential impact on performance?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes modifications to a multiple object tracking framework based on GTR (Zhou et al., 2022). Instead of using the feature representation from detection bounding boxes, this paper proposes to use split bounding boxes and a surrounding image patch when other objects are located close to the current object, which aims to tackle occlusion and avoid identity-switch cases. The proposed method performs comparably with state-of-the-art methods on the MOT17, MOT20 and DanceTrack benchmarks.

Strengths And Weaknesses
Transformer-based tracking frameworks have demonstrated promising performance, and research efforts to improve those frameworks should be encouraged. This paper tries to explicitly guide the model to learn local object patches and surrounding object patches to improve performance. Experiments demonstrated that the proposed part-whole attention improves tracking performance, both in the ablation study and on MOT17 and DanceTrack compared to GTR. How was the structure of the part-whole attention module designed? Did the authors try other designs? Any insights? Given that this paper is based on GTR, experiments should also be conducted on the TAO dataset.

Clarity, Quality, Novelty And Reproducibility
The overall structure is based on GTR but the context was not clear; the authors should add a clear statement to acknowledge the work by the GTR paper. The proposed part-whole attention is well described. The discussion regarding L_feat should be expanded, especially alpha. Experiments are conducted on different hardware setups and report similar performance, which indicates a certain level of reproducibility. But the code is publicly available. The paper modifies a previous work to introduce more feature representation priors; novelty is fair.
ICLR
Title
Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving

Abstract
Detecting objects such as cars and pedestrians in 3D plays an indispensable role in autonomous driving. Existing approaches largely rely on expensive LiDAR sensors for accurate depth information. While pseudo-LiDAR has recently been introduced as a promising, much lower-cost alternative based solely on stereo images, there is still a notable performance gap. In this paper we provide substantial advances to the pseudo-LiDAR framework through improvements in stereo depth estimation. Concretely, we adapt the stereo network architecture and loss function to be more aligned with accurate depth estimation of faraway objects — currently the primary weakness of pseudo-LiDAR. Further, we explore the idea of leveraging cheaper but extremely sparse LiDAR sensors, which alone provide insufficient information for 3D detection, to de-bias our depth estimation. We propose a depth-propagation algorithm, guided by the initial depth estimates, to diffuse these few exact measurements across the entire depth map. We show on the KITTI object detection benchmark that our combined approach yields substantial improvements in depth estimation and stereo-based 3D object detection — outperforming the previous state-of-the-art detection accuracy for faraway objects by 40%. Our code is available at https://github.com/mileyan/Pseudo_Lidar_V2.

1 INTRODUCTION
Safe driving in autonomous cars requires accurate 3D detection and localization of cars, pedestrians and other objects. This in turn requires accurate depth information, which can be obtained from LiDAR (Light Detection And Ranging) sensors. Although highly precise and reliable, LiDAR sensors are notoriously expensive: a 64-beam model can cost around $75,000 (USD).¹ The alternative is to measure depth through inexpensive commodity cameras. However, in spite of recent dramatic progress in stereo-based 3D object detection brought by pseudo-LiDAR (Wang et al., 2019a), a significant performance gap remains, especially for faraway objects (which we want to detect early to allow time for reaction). The trade-off between affordability and safety creates an ethical dilemma.

∗ Equal contributions. ¹ The information is obtained from the automotive LiDAR market report: http://www.woodsidecap.com/wp-content/uploads/2018/04/Yole_WCP-LiDAR-Report_April-2018-FINAL.pdf

In this paper we propose a possible solution to this remaining challenge that combines insights from both perspectives. We observe that the higher 3D object localization error of stereo-based systems, compared to LiDAR-based ones, stems entirely from the higher error in depth estimation (after the 3D point cloud is obtained, the two approaches are identical (Wang et al., 2019a)). Importantly, this error is not random but systematic: we observe that stereo methods do indeed detect objects with high reliability, yet they estimate the depth of the entire object as either too far or too close. See Figure 1 for an illustration: the red stereo points capture the car but are shifted by about 2m, completely outside the ground-truth location (green box). If we can de-bias these depth estimates, it should be possible to obtain accurate 3D localization even for distant objects without exorbitant costs. We start by revisiting the depth estimation routine embedded at the heart of the state-of-the-art stereo-based 3D detection approach (Wang et al., 2019a).
A major contributor to the systematic depth bias comes from the fact that depth is typically not computed directly. Instead, one first estimates the disparity — the horizontal shift of a pixel between the left and right images — and then inverts it to obtain pixel-wise depth. While the use of deep neural networks has largely improved disparity estimation (Chang & Chen, 2018; Cheng et al., 2018; Mayer et al., 2016; Wang et al., 2019b), designing and learning the networks to optimize the accuracy of disparity estimation simply overemphasizes nearby objects due to the reciprocal transformation. For instance, a unit disparity error (in pixels) for a 5-meter-away object means a 10 cm error in depth: the length of a side mirror. The same disparity error for a 50-meter-away object, however, becomes a 5.8 m error in depth: the length of an entire car. Penalizing both errors equally means that the network spends more time correcting subtle errors on nearby objects than gross errors on faraway objects, resulting in degraded depth estimates and ultimately poor detection and localization for faraway objects.
We thus propose to adapt the stereo network architecture and loss function for direct depth estimation. Concretely, the cost volume that fuses the left-right images and the subsequent 3D convolutions are the key components in stereo networks. Taking the central assumption of convolutions — that all neighborhoods can be operated upon in an identical manner — we propose to construct the cost volume on the grid of depth rather than disparity, enabling 3D convolutions and the loss function to perform exactly on the right scale for depth estimation. We refer to our network as stereo depth network (SDN). See Figure 1 for a comparison of 3D points obtained with SDN (purple) and disparity estimation (red).
Although our SDN improves the depth estimates significantly, stereo images are still inherently 2D and it is unclear if they can ever match the accuracy and reliability of a true 3D LiDAR sensor. Although LiDAR sensors with 32 or 64 beams are expensive, LiDAR sensors with only 4 beams are two orders of magnitude cheaper [2] and thus easily affordable. The 4 laser beams are very sparse and ill-suited to capture 3D object shapes by themselves, but if paired with stereo images they become the ideal tool to de-bias our dense stereo depth estimates: a single high-precision laser beam may inform us how to correct the depth of an entire car or pedestrian in its path.
To this end, we present a novel depth-propagation algorithm, inspired by graph-based manifold learning (Weinberger et al., 2005; Roweis & Saul, 2000; Xiaojin & Zoubin, 2002). In a nutshell, we connect our estimated 3D stereo point cloud locally by a nearest neighbor graph, such that points corresponding to the same object will share many local paths with each other. We match the few but exact LiDAR measurements first with pixels (irrespective of depth) and then with their corresponding 3D points to obtain accurate depth estimates for several nodes in the graph. Finally, we propagate this exact depth information along the graph using a label diffusion mechanism — resulting in a dense and accurate depth map at negligible cost. In Figure 1 we see that the few (yellow) LiDAR measurements are sufficient to position almost all final (blue) points of the entire car within the green ground truth box.
We conduct extensive empirical studies of our approaches on the KITTI object detection benchmark (Geiger et al., 2012; 2013) and achieve remarkable results.
With solely stereo images, we outperform the previous state of the art (Wang et al., 2019a) by 10%. Further adding a cheap 4-beam LiDAR brings another 27% relative improvement — on some metrics, our approach is nearly on par with those based on a 64-beam LiDAR but can potentially save 95% in cost.
[2] The Ibeo Wide Angle Scanning (ScaLa) sensor with 4 beams costs $600 (USD). In this paper we simulate the 4-beam LiDAR signal on the KITTI benchmark (Geiger et al., 2012) by sparsifying the original 64-beam signal.
2 BACKGROUND
3D object detection. Most work on 3D object detection operates on 3D point clouds from LiDAR as input (Li, 2017; Li et al., 2016; Meyer et al., 2019b; Yang et al., 2018a; Du et al., 2018; Shi et al., 2019; Engelcke et al., 2017; Yan et al., 2018; Lang et al., 2019). Frustum PointNet (Qi et al., 2018) applies PointNet (Qi et al., 2017a;b) to the points directly, while VoxelNet (Zhou & Tuzel, 2018) quantizes them into 3D grids. For street scenes, several works find that processing points from the bird's-eye view can already capture object contours and locations (Chen et al., 2017; Yang et al., 2018b; Ku et al., 2018). Images have also been used, but mainly to supplement LiDAR (Meyer et al., 2019a; Xu et al., 2018; Liang et al., 2018; Chen et al., 2017; Ku et al., 2018). Early work based solely on images — mostly built on the 2D frontal-view detection pipeline (Ren et al., 2015; He et al., 2017; Lin et al., 2017) — fell far behind in localizing objects in 3D (Li et al., 2019a; Xiang et al., 2015; 2017; Chabot et al., 2017; Mousavian et al., 2017; Chen et al., 2015; Xu & Chen, 2018; Chen et al., 2016; Pham & Jeon, 2017; Chen et al., 2018) [3].
[3] Recently, Srivastava et al. (2019) proposed to lift 2D monocular images to 3D representations (e.g., bird's-eye view (BEV) images) and achieved promising monocular-based 3D object detection results.
Pseudo-LiDAR. This gap has been reduced significantly recently with the introduction of the pseudo-LiDAR framework proposed in (Wang et al., 2019a). This framework applies a drastically different approach from previous image-based 3D object detectors. Instead of directly detecting the 3D bounding boxes from the frontal view of a scene, pseudo-LiDAR begins with image-based depth estimation, predicting the depth $Z(u, v)$ of each image pixel $(u, v)$. The resulting depth map $Z$ is then back-projected into a 3D point cloud: a pixel $(u, v)$ will be transformed to $(x, y, z)$ in 3D by
$$z = Z(u, v), \quad x = \frac{(u - c_U)\, z}{f_U}, \quad y = \frac{(v - c_V)\, z}{f_V}, \qquad (1)$$
where $(c_U, c_V)$ is the camera center and $f_U$ and $f_V$ are the horizontal and vertical focal lengths. The 3D point cloud is then treated exactly as a LiDAR signal — any LiDAR-based 3D detector can be applied seamlessly. By taking the state-of-the-art algorithms from both ends (Chang & Chen, 2018; Ku et al., 2018; Qi et al., 2018), pseudo-LiDAR obtains the highest image-based performance on the KITTI object detection benchmark (Geiger et al., 2012; 2013). Our work builds upon this framework.
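To make the back-projection in Equation 1 concrete, here is a minimal NumPy sketch; the function name and argument order are illustrative, not from the paper's released code:

```python
import numpy as np

def depth_to_point_cloud(Z, f_U, f_V, c_U, c_V):
    """Back-project a dense depth map Z (H x W, in meters) into a
    pseudo-LiDAR point cloud via Equation 1; returns an (H*W, 3) array."""
    H, W = Z.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    x = (u - c_U) * Z / f_U
    y = (v - c_V) * Z / f_V
    return np.stack([x, y, Z], axis=-1).reshape(-1, 3)
```

Any LiDAR-based detector can then consume the returned points directly, which is what makes the framework plug-and-play.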
Stereo disparity estimation. Pseudo-LiDAR relies heavily on the quality of depth estimation. Essentially, if the estimated pixel depths match those provided by LiDAR, pseudo-LiDAR with any LiDAR-based detector should be able to achieve the same performance as that obtained by applying the same detector to the LiDAR signal. According to (Wang et al., 2019a), depth estimation from stereo pairs of images (Mayer et al., 2016; Yamaguchi et al., 2014; Chang & Chen, 2018) is more accurate than that from monocular (i.e., single) images (Fu et al., 2018; Godard et al., 2017) for 3D object detection. We therefore focus on stereo depth estimation, which is routinely obtained from estimating disparity between images.
A disparity estimation algorithm takes a pair of left-right images $I_l$ and $I_r$ as input, captured from a pair of cameras with a horizontal offset (i.e., baseline) $b$. Without loss of generality, we assume that the algorithm treats the left image, $I_l$, as reference and outputs a disparity map $D$ recording the horizontal disparity to $I_r$ for each pixel $(u, v)$. Ideally, $I_l(u, v)$ and $I_r(u, v + D(u, v))$ will picture the same 3D location. We can therefore derive the depth map $Z$ via the following transform,
$$Z(u, v) = \frac{f_U \times b}{D(u, v)} \quad (f_U: \text{horizontal focal length}). \qquad (2)$$
A common pipeline of disparity estimation is to first construct a 4D disparity cost volume $C_{\mathrm{disp}}$, in which $C_{\mathrm{disp}}(u, v, d, :)$ is a feature vector that captures the pixel difference between $I_l(u, v)$ and $I_r(u, v + d)$. It then estimates the disparity $D(u, v)$ for each pixel $(u, v)$ according to the cost volume $C_{\mathrm{disp}}$. One basic algorithm is to build a 3D cost volume with $C_{\mathrm{disp}}(u, v, d) = \|I_l(u, v) - I_r(u, v + d)\|_2$ and determine $D(u, v)$ as $\arg\min_d C_{\mathrm{disp}}(u, v, d)$. Advanced algorithms exploit more robust features in constructing $C_{\mathrm{disp}}$ and perform structured prediction for $D$. In what follows, we give an introduction to PSMNet (Chang & Chen, 2018), a state-of-the-art algorithm used in (Wang et al., 2019a).
PSMNet begins with extracting deep feature maps $h_l$ and $h_r$ from $I_l$ and $I_r$, respectively. It then constructs $C_{\mathrm{disp}}(u, v, d, :)$ by concatenating features of $h_l(u, v)$ and $h_r(u, v + d)$, followed by layers of 3D convolutions. The resulting 3D tensor $S_{\mathrm{disp}}$, with the feature channel size ending up being one, is then used to derive the pixel disparity via the following weighted combination,
$$D(u, v) = \sum_d \mathrm{softmax}(-S_{\mathrm{disp}}(u, v, d)) \times d, \qquad (3)$$
where the softmax is performed along the 3rd dimension of $S_{\mathrm{disp}}$. PSMNet can be learned end-to-end, including the image feature extractor and 3D convolution kernels, to minimize the disparity error
$$\sum_{(u,v) \in \mathcal{A}} \ell(D(u, v) - D^\star(u, v)), \qquad (4)$$
where $\ell$ is the smooth L1 loss, $D^\star$ is the ground truth map, and $\mathcal{A}$ contains pixels with ground truths.
3 STEREO DEPTH NETWORK (SDN)
A stereo network designed and learned to minimize the disparity error (cf. Equation 4) may over-emphasize nearby objects with smaller depths and therefore perform poorly in estimating depths for faraway objects. To see this, note that Equation 2 implies that for a given error in disparity $\delta D$, the error in depth $\delta Z$ increases quadratically with depth:
$$Z \propto \frac{1}{D} \;\Rightarrow\; \delta Z \propto \frac{1}{D^2}\,\delta D \;\Rightarrow\; \delta Z \propto Z^2\,\delta D. \qquad (5)$$
The middle term is obtained by differentiating $Z(D)$ w.r.t. $D$. In particular, using the settings of the KITTI dataset (Geiger et al., 2012; 2013), a single pixel error in disparity implies only a 0.1 m error in depth at a depth of 5 meters, but a 5.8 m error at a depth of 50 meters. See Figure 2 for a mapping from disparity to depth.
Depth Loss. We propose two changes to adapt stereo networks for direct depth estimation. First, we learn stereo networks to directly optimize the depth loss
$$\sum_{(u,v) \in \mathcal{A}} \ell(Z(u, v) - Z^\star(u, v)). \qquad (6)$$
$Z$ and $Z^\star$ can be obtained from $D$ and $D^\star$ using Equation 2. The change from the disparity loss to the depth loss corrects the disproportionately strong emphasis on tiny depth errors of nearby objects — a necessary but still insufficient change to overcome the problems of disparity estimation.
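The quadratic error growth of Equation 5 is easy to verify numerically. Below is a small sketch with illustrative KITTI-like calibration constants; the exact values are assumptions for the example, not taken from the paper:

```python
import numpy as np

f_U, b = 721.5, 0.54  # assumed focal length (px) and baseline (m)

def disparity_to_depth(D):
    return f_U * b / D  # Equation 2

for Z in (5.0, 50.0):
    D = f_U * b / Z                        # true disparity at depth Z
    dZ = Z - disparity_to_depth(D + 1.0)   # effect of a one-pixel error
    print(f"depth {Z:4.0f} m: 1 px disparity error -> {dZ:.2f} m depth error")
# prints roughly 0.06 m at 5 m and 5.7 m at 50 m, the same order of
# magnitude as the 0.1 m and 5.8 m figures quoted in the text
```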
Depth Cost Volume. To facilitate accurate depth learning (rather than disparity) we need to further address the internals of the depth estimation pipeline. A crucial source of error is the 3D convolutions within the 4D disparity cost volume, where the same kernels are applied over the entire cost volume. This is highly problematic as it implicitly assumes that the effect of a convolution is homogeneous throughout — which is clearly violated by the reciprocal depth-to-disparity relation (Figure 2). For example, it may be completely appropriate to locally smooth two neighboring pixels with disparity 85 and 86 (changing the depth by a few cm to smooth out a surface), whereas applying the same kernel to two pixels with disparity 5 and 6 could easily move the 3D points by 10 m or more.
Taking this insight and the central assumption of convolutions — all neighborhoods can be operated upon in an identical manner — into account, we propose to instead construct the depth cost volume $C_{\mathrm{depth}}$, in which $C_{\mathrm{depth}}(u, v, z, :)$ will encode features describing how likely the depth $Z(u, v)$ of pixel $(u, v)$ is $z$. The subsequent 3D convolutions will then operate on the grid of depth, rather than disparity, affecting neighboring depths identically, independent of their location. The resulting 3D tensor $S_{\mathrm{depth}}$ is then used to predict the pixel depth similarly to Equation 3,
$$Z(u, v) = \sum_z \mathrm{softmax}(-S_{\mathrm{depth}}(u, v, z)) \times z.$$
We construct the new depth volume, $C_{\mathrm{depth}}$, based on the intuition that $C_{\mathrm{depth}}(u, v, z, :)$ and $C_{\mathrm{disp}}\left(u, v, \frac{f_U \times b}{z}, :\right)$ should lead to equivalent "cost". To this end, we apply a bilinear interpolation to construct $C_{\mathrm{depth}}$ from $C_{\mathrm{disp}}$ using the depth-to-disparity transform in Equation 2. Specifically, we consider disparity in the range of [0, 191] following PSMNet (Chang & Chen, 2018), consider depth in the range of [1 m, 80 m], and set the grid of depth in $C_{\mathrm{depth}}$ to be 1 m. Figure 5 (top) depicts our stereo depth network (SDN) pipeline. Crucially, all convolution operations are performed on $C_{\mathrm{depth}}$ exclusively.
Figure 4 compares the median values of absolute depth estimation errors using the disparity cost volume (i.e., PSMNet) and the depth cost volume (SDN) (see subsection D.5 for detailed numbers). As expected, for faraway depth, SDN leads to drastically smaller errors with only marginal increases in the very near range (which disparity-based methods over-optimize). See the appendix for the detailed setup and more discussions.
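As a sketch of the resampling step, the following builds $C_{\mathrm{depth}}$ from $C_{\mathrm{disp}}$ by linear interpolation along the disparity axis. The paper's appendix states that the actual implementation uses PyTorch's grid_sample; this equivalent hand-rolled version with assumed tensor shapes is shown only to make the coordinate transform explicit:

```python
import torch

def disp_to_depth_volume(C_disp, f_U, b, z_grid):
    """Resample a disparity cost volume (N, F, D, H, W) onto a grid of
    depths z_grid (meters) using d = f_U * b / z (Equation 2)."""
    D = C_disp.shape[2]
    d = f_U * b / z_grid                       # fractional disparity per depth
    d0 = d.floor().long().clamp(0, D - 2)      # lower disparity index
    w = (d - d0.float()).view(1, 1, -1, 1, 1)  # interpolation weights
    return (1 - w) * C_disp[:, :, d0] + w * C_disp[:, :, d0 + 1]

# e.g. a 1 m grid over [1 m, 80 m], as described in the text:
# C_depth = disp_to_depth_volume(C_disp, f_U, b, torch.arange(1.0, 81.0))
```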
4 DEPTH CORRECTION
Our SDN significantly improves depth estimation and more precisely renders the object contours (see Figure 3). However, there is a fundamental limitation of stereo arising from the discrete nature of pixels: the disparity, being the difference in the horizontal coordinate between corresponding pixels, has to be quantized at the level of individual pixels while depth is continuous. Although the quantization error can be alleviated with higher-resolution images, the computational cost of depth prediction scales cubically with resolution — pushing the limits of GPUs in autonomous vehicles. We therefore explore a hybrid approach by leveraging a cheap LiDAR with extremely sparse (e.g., 4 beams) but accurate depth measurements to correct this bias. We note that such sensors are too sparse to capture object shapes and cannot be used alone for detection. However, by projecting the LiDAR points into the image plane we obtain exact depths on a small portion of "landmark" pixels.
We present a graph-based depth correction (GDC) algorithm that effectively combines the dense stereo depth that renders object shapes and the sparse but accurate LiDAR measurements. Conceptually, we expect the corrected depth map to have the following properties: globally, landmark pixels associated with LiDAR points should possess the exact depths; locally, object shapes captured by neighboring 3D points, back-projected from the input depth map (cf. Equation 1), should be preserved. Figure 5 (bottom) illustrates the algorithm.
Input Matching. We take as input the two point clouds from LiDAR (L) and pseudo-LiDAR (PL) by stereo depth estimation. The latter is obtained by converting pixels $(u, v)$ with depth $z$ to 3D points $(x_u, y_v, z)$. First, we characterize the local shapes by the directed K-nearest-neighbor (KNN) graph in the PL point cloud (using accelerated KD-trees (Shevtsov et al., 2007)) that connects each 3D point to its KNNs with appropriate weights. Similarly, we can project the 3D LiDAR points onto pixel locations $(u, v)$ and match them to the corresponding 3D stereo points. Without loss of generality, we assume that we are given "ground truth" LiDAR depth for the first $n$ points and no ground truth for the remaining $m$ points. We refer to the 3D stereo depth estimates as $Z \in \mathbb{R}^{n+m}$ and the LiDAR depth ground truth as $G \in \mathbb{R}^n$.
Edge weights. To construct the KNN graph in 3D we ignore the LiDAR information on the first $n$ points and only use their predicted stereo depth in $Z$. Let $\mathcal{N}_i$ denote the set of $k$ neighbors of the $i$-th point. Further, let $W \in \mathbb{R}^{(n+m)\times(n+m)}$ denote the weight matrix, where $W_{ij}$ denotes the edge weight between points $i$ and $j$. Inspired by prior work in manifold learning (Roweis & Saul, 2000; Weinberger et al., 2005) we choose the weights to be the coefficients that reconstruct the depth of any point from the depths of its neighbors in $\mathcal{N}_i$. We can solve for these weights with the following constrained quadratic optimization problem:
$$W = \arg\min_W \|Z - WZ\|_2^2, \quad \text{s.t.}\ W\mathbf{1} = \mathbf{1} \ \text{and}\ W_{ij} = 0 \ \text{if}\ j \notin \mathcal{N}_i. \qquad (7)$$
Here $\mathbf{1} \in \mathbb{R}^{n+m}$ denotes the all-ones vector. As long as we pick $k > 3$ and the points are in general position, there are infinitely many solutions that satisfy $Z = WZ$, and we pick the solution with the minimum L2 norm (obtained with slight L2 regularization).
Depth Correction. Let us denote the corrected depth values as $Z' \in \mathbb{R}^{n+m}$, with $Z' = [Z'_L; Z'_{PL}]$, $Z'_L \in \mathbb{R}^n$, and $Z'_{PL} \in \mathbb{R}^m$, where $Z'_L$ are the depth values of points with LiDAR ground truth and $Z'_{PL}$ otherwise. For the $n$ points with LiDAR measurements we update the depth to the (ground truth) values $Z'_L = G$. We then solve for $Z'_{PL}$ given $G$ and the weighted KNN graph encoded in $W$. Concretely, we update the remaining depths $Z'_{PL}$ such that the depth of any point $i$ can still be reconstructed with high fidelity as a weighted sum of its KNNs' depths using the learned weights $W$; i.e., if a point $i$, $1 \le i \le n$, is moved to its new depth $G_i$, then its neighbors in $\mathcal{N}_i$ must also be corrected such that $G_i \approx \sum_{j \in \mathcal{N}_i} W_{ij} Z'_j$. Further, the neighbors' neighbors must be corrected, and the depth of the few $n$ points propagates across the entire graph. We can solve for the final $Z'$ directly with another quadratic optimization:
$$Z' = \arg\min_{Z'} \|Z' - WZ'\|^2, \quad \text{s.t.}\ Z'_{1:n} = G. \qquad (8)$$
To illustrate the correction process, imagine the simplest case where the depth of only a single point ($n = 1$) is updated to $G_1 = Z_1 + \delta$. A new optimal depth for Equation 8 is to move all the remaining points similarly, i.e.,
$Z' = Z + \mathbf{1}\delta$: since $Z = WZ$ and $W\mathbf{1} = \mathbf{1}$, we must have $W(Z + \mathbf{1}\delta) = Z + \mathbf{1}\delta$. In the setting with $n > 1$, the least-squares loss ensures a soft diffusion between the different LiDAR depth estimates. Both optimization problems in Equation 7 and Equation 8 can be solved exactly and efficiently with sparse matrix solvers. We summarize the procedure as an algorithm in the appendix. From the view of graph-based manifold learning, our GDC algorithm is reminiscent of locally linear embeddings (Roweis & Saul, 2000) with landmarks to guide the final solution (Weinberger et al., 2005).
Figure 1 illustrates vividly how the initial 3D point cloud from SDN (purple) of a car in the KITTI dataset is corrected with a few sparse LiDAR measurements (yellow). The resulting points (blue) are right inside the ground-truth box and clearly show the contour of the car. Figure 4 shows the additional improvement from the GDC (blue) over the pure SDN depth estimates (see subsection D.5 for detailed numbers). The error (calculated only on non-landmark pixels) is corrected over the entire image, where many regions have no LiDAR measurements. This is because the pseudo-LiDAR point cloud is sufficiently dense and we choose $k$ to be large enough (in practice, we use $k = 10$) that the KNN graph is typically connected (or consists of a few large connected components). See subsection D.6 for more analysis. For objects such as cars the improvements through GDC are far more pronounced, as these typically are touched by the four LiDAR beams and can be corrected effectively.
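The rigid-shift argument above can be checked numerically on a toy graph. A minimal sketch, with all values made up for illustration (weights are solved exactly per point from the reconstruction and sum-to-one constraints):

```python
import numpy as np

# If W Z = Z and W 1 = 1, then W (Z + 1*delta) = Z + 1*delta: shifting a
# landmark by delta rigidly shifts every reconstructable point with it.
Z = np.array([10.0, 11.0, 12.0, 13.0])         # toy stereo depths on a ring
n = len(Z)
W = np.zeros((n, n))
for i in range(n):
    j, k = (i - 1) % n, (i + 1) % n            # two ring neighbors
    A = np.array([[Z[j], Z[k]], [1.0, 1.0]])   # reconstruct Z[i]; sum(w) = 1
    W[i, [j, k]] = np.linalg.solve(A, [Z[i], 1.0])
delta = 2.0
print(np.allclose(W @ (Z + delta), Z + delta))  # -> True
```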
5 EXPERIMENTS
5.1 SETUP
We refer to our combined method (SDN and GDC) for 3D object detection as PSEUDO-LIDAR++ (PL++ in short). To analyze the contribution of each component, we evaluate SDN and GDC independently and jointly across several settings. For GDC we set $k = 10$ and consider adding signal from a (simulated) 4-beam LiDAR, unless stated otherwise.
Dataset, Metrics, and Baselines. We evaluate on the KITTI dataset (Geiger et al., 2013; 2012), which contains 7,481 and 7,518 images for training and testing. We follow (Chen et al., 2015) to separate the 7,481 images into 3,712 for training and 3,769 for validation. For each (left) image, KITTI provides the corresponding right image, the 64-beam Velodyne LiDAR point cloud, the camera calibration matrices, and the bounding boxes. We focus on 3D object detection and bird's-eye-view (BEV) localization and report results on the validation set. Specifically, we focus on the "car" category, following Chen et al. (2017) and Xu et al. (2018). We report average precision (AP) with IoU (Intersection over Union) thresholds at 0.5 and 0.7. We denote AP for the 3D and BEV tasks by AP3D and APBEV. KITTI defines the easy, moderate, and hard settings, in which objects with 2D box heights smaller than, or occlusion/truncation levels larger than, certain thresholds are disregarded. We compare to four stereo-based detectors: PSEUDO-LIDAR (PL in short) (Wang et al., 2019a), 3DOP (Chen et al., 2015), S-RCNN (Li et al., 2019b), and MLF-STEREO (Xu & Chen, 2018).
Stereo depth network (SDN). We use PSMNET (Chang & Chen, 2018) as the backbone for our stereo depth estimation network (SDN). We follow Wang et al. (2019a) to pre-train SDN on the synthetic Scene Flow dataset (Mayer et al., 2016) and fine-tune it on the 3,712 training images of KITTI. We obtain the depth ground truth by projecting the corresponding LiDAR points onto the images. We also train a PSMNET in the same way for comparison, which minimizes the disparity error.
3D object detection. We apply three algorithms: AVOD (Ku et al., 2018), PIXOR (Yang et al., 2018b), and P-RCNN (Shi et al., 2019). All utilize information from LiDAR and/or monocular images. We use the released implementations of AVOD (specifically, AVOD-FPN) and P-RCNN. We implement PIXOR ourselves with a slight modification to include visual information (denoted as PIXOR*). We train all models on the 3,712 training images from scratch, replacing the LiDAR points with pseudo-LiDAR data generated from stereo depth estimation. See the appendix for details.
Sparser LiDAR. We simulate a sparser LiDAR signal with fewer beams by first projecting the 64-beam LiDAR points onto a 2D plane of horizontal and vertical angles. We quantize the vertical angles into 64 levels with an interval of 0.4°, which is close to the spec of the 64-beam LiDAR. We keep the points falling into a subset of beams to mimic the sparser signal. See the appendix for details.
5.2 EXPERIMENTAL RESULTS
Results on the KITTI val set. We summarize the main results on KITTI object detection in Table 1. Several important trends can be observed:
1) Our PL++ with enhanced depth estimates from SDN and GDC yields consistent improvements over PL across all settings.
2) PL++ with GDC refinement using 4-beam LiDAR (Input: L# + S) performs significantly better than PL++ with only stereo inputs (Input: S).
3) PL experiences a substantial drop in accuracy from IoU at 0.5 to 0.7 in the hard setting. This suggests that while PL detects faraway objects, it mislocalizes them, likely placing them at the wrong depth; the object is then counted as a missed detection at higher overlap thresholds. Interestingly, this is where we experience the largest gain — from PL: P-RCNN (APBEV = 52.7) to PL++: P-RCNN (APBEV = 73.4) with input L# + S. Note that the majority of this gain comes from GDC, as the stereo-only version of PL++ improves the score only to 57.3 APBEV.
4) The gap between PL++ and LiDAR is at most 13% APBEV, even in the hard setting with IoU at 0.7.
5) At IoU 0.5, with the aid of only 4 LiDAR beams, PL++ (SDN + GDC) achieves results comparable to models with 64-beam LiDAR signals.
Results on the KITTI test set. Table 2 summarizes results on the car category on the KITTI test set. We see a similar gap between our methods and LiDAR as on the validation set, suggesting that our improvement is not particular to the validation data. Our approach without LiDAR refinement (pure SDN) holds the top position among all image-based algorithms on the KITTI leaderboard.
In the following, we conduct a series of experiments to analyze the performance gain from our approaches and discuss several key observations. We mainly experiment with P-RCNN: we find that the results with AVOD and PIXOR* follow similar trends and thus include them in the appendix.
Depth loss and depth cost volume. To turn a disparity network (e.g., PSMNET) into SDN, there are two changes: 1) replace the disparity loss with the depth loss; 2) replace the disparity cost volume with the depth cost volume. In Table 3, we study the effect of these two changes separately. On the APBEV/AP3D (moderate) metric, the depth loss gives us a 6%/2% improvement and the depth cost volume brings another 2-3% gain [4].
[4] We note that the degree of improvement brought by the depth loss and the depth cost volume depends on the 3D detector in use. Table 3 suggests that the depth loss provides more gain than the depth cost volume (for P-RCNN). In Table 6, however, we see that the depth cost volume provides a comparable or even bigger gain than the depth loss (for PIXOR* and AVOD). Nevertheless, Table 3 and Table 6 both suggest the compatibility of the two approaches: combining them leads to the best performance.
Impact of sparse LiDAR beams. We leverage the 4-beam LiDAR to correct stereo depth using GDC. However, it is possible that gains in 3D object detection come entirely from the new LiDAR sensor and that the stereo estimates are immaterial. In Table 4, we study this question by comparing the detection results against those of models using 1) sole 4-beam LiDAR point clouds and 2) pseudo-LiDAR point clouds with the depths of landmark pixels replaced by 4-beam LiDAR: i.e., in depth correction, we only correct the depths of the landmark pixels without propagation. It can be seen that 4-beam LiDAR by itself performs fairly well in locating faraway objects but cannot capture nearby objects precisely, while simply replacing pseudo-LiDAR with LiDAR at the landmark pixels prevents the model from detecting faraway objects accurately. In contrast, our proposed GDC method effectively combines the merits of the two signals, achieving superior performance to using either alone.
Pedestrian and cyclist detection. For a fair comparison to (Wang et al., 2019a), we apply F-POINTNET (Qi et al., 2018) for detecting pedestrians and cyclists. Table 5 shows the results: our method significantly boosts the performance.
Qualitative visualization. In Figure 6, we show a qualitative comparison of detection results on a randomly chosen scene in the KITTI object validation set, using P-RCNN (with confidence > 0.95) with different input signals. Specifically, we show the results from the frontal-view images and the bird's-eye-view (BEV) point clouds. In the BEV map, the observer is on the left-hand side looking to the right. It can be seen that the point clouds generated by PSEUDO-LIDAR++ (SDN alone or SDN+GDC) align better with LiDAR than that generated by PSEUDO-LIDAR (PSMNET). For nearby objects (i.e., bounding boxes close to the left in the BEV map), we see that P-RCNN with any point cloud performs fairly well in localization. However, for faraway objects (i.e., bounding boxes close to the right), PSEUDO-LIDAR with depth estimated from PSMNET predicts objects (red boxes) that deviate from the ground truths (green boxes). Moreover, the noisy PSMNET points also lead to false negatives. In contrast, the boxes detected by our PSEUDO-LIDAR++, either with SDN alone or with SDN+GDC, align well with the ground-truth boxes, justifying our targeted improvement in estimating faraway depths.
Additional results, analyses, qualitative visualization and discussions. We provide results of PSEUDO-LIDAR++ with fewer LiDAR beams, comparisons to depth completion methods, analysis of depth quality and detection accuracy, run time, failure cases, and more qualitative results in the appendix. With simple optimizations, GDC runs in 90 ms/frame using a single GPU (7.7 ms for KD-tree construction and search).
6 CONCLUSION
In this paper we made two contributions to improve 3D object detection in autonomous vehicles without expensive LiDAR. First, we identify disparity estimation as a main source of error for stereo-based systems and propose a novel approach to learn depth directly end-to-end instead of through disparity estimates. Second, we advocate that one should not use expensive LiDAR sensors to learn the local structure and depth of objects.
Instead one can use commodity stereo cameras for the former and a cheap sparse LiDAR to correct the systematic bias in the resulting depth estimates. We provide a novel graph propagation algorithm that integrates the two data modalities and propagates the sparse yet accurate depth estimates using two sparse matrix solvers. The resulting system, PSEUDO-LIDAR++ (SDN + GDC), performs almost on par with 64-beam LiDAR systems costing $75,000, but requires only 4 beams and two commodity cameras, which could be obtained at a total cost of less than $1,000.
ACKNOWLEDGMENTS
This research is supported by grants from the National Science Foundation NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, and TRIPODS-1740822), the Office of Naval Research DOD (N00014-17-1-2175), the Bill and Melinda Gates Foundation, and the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875). We are thankful for generous support by Zillow and SAP America Inc. We thank Gao Huang for helpful discussion.
Appendix
We provide details omitted in the main text.
• Appendix A: details on constructing the depth cost volume (section 3 of the main paper).
• Appendix B: detailed implementation of the GDC algorithm (section 4 of the main paper).
• Appendix C: additional details of experimental setups (subsection 5.1 of the main paper).
• Appendix D: additional results, analyses, and discussions (subsection 5.2 of the main paper).
A DEPTH COST VOLUME
With Equation 2, we know where each grid location $(u, v, z)$ in $C_{\mathrm{depth}}$ corresponds to in $C_{\mathrm{disp}}$ (which may not fall on a grid point). We can then obtain features for each grid location in $C_{\mathrm{depth}}$ (i.e., $C_{\mathrm{depth}}(u, v, z, :)$) by bilinear interpolation over features on the grid of $C_{\mathrm{disp}}$ around the non-grid location (i.e., $(u, v, \frac{f_U \times b}{z})$). We applied the "grid_sample" function in PyTorch for bilinear interpolation. We use PSMNET (Chang & Chen, 2018) as the backbone for our stereo depth estimation network (SDN). The only change is to construct the depth cost volume before performing 3D convolutions.
B GRAPH-BASED DEPTH CORRECTION (GDC) ALGORITHM
Here we present the GDC algorithm in detail (see Algorithm 1). The two steps described in the main paper can easily be turned into two (sparse) linear systems and then solved using Lagrange multipliers. For the first step (i.e., Equation 7), we solve the same problem as in the main text but switch the objective to minimizing the L2-norm of $W$ and set $Z - WZ = 0$ as a constraint [5]. We note that Equation 7 is an under-constrained problem with infinitely many solutions; to identify a single solution, we add a small L2 regularization term to the objective (as mentioned in the main text). For the second step (i.e., Equation 8), we use the Conjugate Gradient (CG) method to iteratively solve the sparse linear system.
[5] These two problems yield identical solutions, but we found the second one easier to solve in practice.
Algorithm 1: Graph-based depth correction (GDC). ";" stands for column-wise concatenation.
Input: Stereo depth map $Z \in \mathbb{R}^{(n+m)\times 1}$, the corresponding pseudo-LiDAR (PL) point cloud $P \in \mathbb{R}^{(n+m)\times 3}$, and LiDAR depths $G \in \mathbb{R}^{n\times 1}$ on the first $n$ pixels.
Output: Corrected depth map $Z' \in \mathbb{R}^{(n+m)\times 1}$
function GDC(Z, P, G, K)
  Solve: $W = \arg\min_{W \in \mathbb{R}^{(n+m)\times(n+m)}} \|W\|_2$
    s.t. $Z - W \cdot Z = 0$,
    $W_{ij} = 0$ if $j \notin \mathcal{N}_i$ (the set of K neighbors of the $i$-th point according to $P$),
    $\sum_j W_{ij} = 1$ for all $i = 1, \dots, n + m$.
  Solve: $Z'_{PL} = \arg\min_{Z'_{PL} \in \mathbb{R}^{m\times 1}} \|[G; Z'_{PL}] - W[G; Z'_{PL}]\|_2$
  return $[G; Z'_{PL}]$
end
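For illustration, here is a compact, unoptimized sketch of Algorithm 1 using SciPy's sparse tooling. The function name, the small-norm regularization in step 1, and the least-squares formulation of step 2 are our own choices for a self-contained example; the paper itself solves the systems with Lagrange multipliers and conjugate gradient:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr
from scipy.spatial import cKDTree

def gdc(P, Z, G, k=10, reg=1e-3):
    """P: (n+m, 3) pseudo-LiDAR points, Z: (n+m,) stereo depths,
    G: (n,) exact LiDAR depths for the first n points. Returns Z'."""
    N, n = len(Z), len(G)
    nbrs = cKDTree(P).query(P, k=k + 1)[1][:, 1:]   # drop the self match
    rows, cols, vals = [], [], []
    for i in range(N):
        # Step 1 (Eq. 7): weights with Z[i] = w . Z[nbrs] and sum(w) = 1,
        # picking a small-norm solution via slight L2 regularization.
        A = np.vstack([Z[nbrs[i]], np.ones(k), np.sqrt(reg) * np.eye(k)])
        b = np.concatenate([[Z[i], 1.0], np.zeros(k)])
        w = np.linalg.lstsq(A, b, rcond=None)[0]
        rows += [i] * k; cols += list(nbrs[i]); vals += list(w)
    W = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))
    # Step 2 (Eq. 8): fix Z'[:n] = G and solve min ||(I - W) Z'||^2.
    A = (sp.identity(N, format="csr") - W).tocsc()
    Z_pl = lsqr(A[:, n:], -A[:, :n] @ G)[0]
    return np.concatenate([G, Z_pl])
```

The Python loop over points is for clarity only; the 90 ms/frame figure reported in the paper relies on an optimized implementation.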
C EXPERIMENTAL SETUP
C.1 SPARSE LIDAR GENERATION
In this section, we explain in detail how we generate sparser LiDAR with fewer beams from a 64-beam LiDAR point cloud of the KITTI dataset. For every point $(x_i, y_i, z_i) \in \mathbb{R}^3$ of the point cloud in one scene (in the LiDAR coordinate system: $x$ front, $y$ left, $z$ up, with $(0, 0, 0)$ the location of the LiDAR sensor), we compute the elevation angle $\theta_i$ to the LiDAR sensor as
$$\theta_i = \arccos\left(\frac{\sqrt{x_i^2 + y_i^2}}{\sqrt{x_i^2 + y_i^2 + z_i^2}}\right).$$
We order the points by their elevation angles and slice them into separate lines with a step of 0.4°, starting from -23.6° (close to the spec of the Velodyne 64-beam LiDAR). We select LiDAR points whose elevation angles fall within [-2.4°, -2.0°) ∪ [-0.8°, -0.4°) to be the 2-beam LiDAR signal, and similarly [-2.4°, -2.0°) ∪ [-1.6°, -1.2°) ∪ [-0.8°, -0.4°) ∪ [0.0°, 0.4°) to be the 4-beam LiDAR signal. We choose them in such a way that consecutive lines have a 0.8° interval, following the spec of the "cheap" 4-beam LiDAR ScaLa. We visualize these sparsified LiDAR point clouds from the bird's-eye view on one example scene in Figure 7.
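A sketch of this sparsification follows. It uses a signed elevation via arctan2, which matches the arccos form above up to the sign convention for points below the sensor; the beam indices in the usage comment are derived from the 4-beam angle ranges listed in the text:

```python
import numpy as np

def sparsify_beams(pts, keep_beams, start=-23.6, step=0.4):
    """Keep LiDAR points whose elevation angle (degrees) falls into the
    0.4-degree slots indexed by keep_beams, counted upward from `start`."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    theta = np.degrees(np.arctan2(z, np.hypot(x, y)))   # elevation angle
    beam = np.floor((theta - start) / step).astype(int)
    return pts[np.isin(beam, keep_beams)]

# 4-beam subset: the slots covering [-2.4, -2.0), [-1.6, -1.2),
# [-0.8, -0.4), and [0.0, 0.4) degrees are indices 53, 55, 57, 59.
# four_beam = sparsify_beams(velodyne_points, keep_beams=[53, 55, 57, 59])
```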
C.2 3D OBJECT DETECTION ALGORITHMS
In this section, we provide more details about the way we train 3D object detection models on pseudo-LiDAR point clouds. For AVOD, we use the same model as in (Wang et al., 2019a). For P-RCNN, we use the implementation provided by the authors. Since the P-RCNN model exploits the sparse nature of LiDAR point clouds, when training it with pseudo-LiDAR input we first sparsify the point clouds into 64 beams using the method described in subsection C.1. For PIXOR*, we implement the same base model structure and data augmentation specified by Yang et al. (2018b), but without the "decode fine-tune" step and focal loss. Inspired by the trick in (Liang et al., 2018), we add another image feature branch (ResNet-18 by He et al. (2016)) alongside the LiDAR branch, and concatenate the corresponding image features onto the LiDAR branch at each stage. We train PIXOR* using RMSProp with momentum 0.9 and learning rate 10^-5 (decayed by 10 after 50 and 80 epochs) for 90 epochs. The BEV evaluation results are similar to the reported results (see Table 1).
D ADDITIONAL RESULTS, ANALYSES, AND DISCUSSIONS
D.1 ABLATION STUDY
In Table 6 and Table 7 we provide more experimental results aligned with the experiments in subsection 5.2 of the main paper. We conduct the same experiments on two other models, AVOD and PIXOR*, and observe similar trends of improvement brought by learning with the depth loss (from PSMNET to PSMNET+DL), constructing the depth cost volume (from PSMNET+DL to SDN), and applying GDC to correct the bias in stereo depth estimation (comparing SDN+GDC with SDN). We note that, in Table 7, the results of AVOD (or PIXOR*) with SDN + L# are worse than those with L# in the moderate and hard settings. This observation differs from Table 4, where P-RCNN with SDN + L# outperforms P-RCNN with L# in 5 out of 6 comparisons. We hypothesize that this is because P-RCNN takes sparsified inputs (see subsection C.2) while AVOD and PIXOR* take dense inputs. In the latter case, the four replaced LiDAR beams in SDN + L# will be dominated by the dense stereo depths, so that SDN + L# is worse than L#.
D.2 USING FEWER LIDAR BEAMS
In PL++ (i.e., SDN + GDC), we use 4-beam LiDAR to correct the predicted point cloud. In Table 8, we investigate using fewer (and also potentially cheaper) LiDAR beams for depth correction. We observe that even with 2 beams, GDC can already manage to combine the two signals and yield a better performance than using 2-beam LiDAR or pseudo-LiDAR alone.
D.3 DEPTH CORRECTION VS. DEPTH COMPLETION
Our GDC algorithm is a general, simple, inference-time approach that requires no training, unlike prior learning-based approaches to depth completion. Here we empirically compare to PNP (Wang et al., 2018), a recently proposed depth completion algorithm that, like GDC, is compatible with any (even stereo) depth estimation network. We use SDN for initial depth estimation, and evaluate GDC and PNP by randomly selecting a fraction of LiDAR points as provided ground truths and calculating the median absolute depth errors on the remaining LiDAR points. As shown in Figure 8, GDC outperforms PNP by a large margin. Table 9 shows a further comparison to PNP on 3D object detection. We apply PNP and GDC respectively to correct the depth estimates obtained from SDN, train a P-RCNN or PIXOR* using the resulting pseudo-LiDAR points on the KITTI training set, and compare the detection results on the KITTI validation set. In either case, SDN + GDC outperforms SDN + PNP by a notable margin.
D.4 RUN TIME
With the following optimizations in the implementation:
1. sub-sampling pseudo-LiDAR points, keeping at most one point within each cube of size 0.1 m³;
2. limiting the pseudo-LiDAR points used for depth correction to those whose elevation angles are within [-3.0°, 0.4°) (the range of the 4-beam LiDAR plus 0.6°; see subsection C.1 for details);
3. after performing GDC for depth correction, combining the corrected pseudo-LiDAR points with those outside the elevation angles of [-3.0°, 0.4°);
GDC runs in 90 ms/frame using a single GPU (7.7 ms for KD-tree construction and search, 46.5 ms for solving W, and 26.9 ms for solving Z'_PL) with negligible performance difference (see Table 10). For consistency, all results reported in the main paper are based on the naive implementation. Further speedups can be achieved by CUDA programming for GPUs.
D.5 STEREO DEPTH VS. DETECTION
We quantitatively evaluate the stereo depths by median errors in Figure 4 of the main text (numerical values are listed in Table 11). In Table 12 we further show mean errors with standard deviations (the large standard deviations likely result from outliers such as occluded pixels around object boundaries). For both tables, we divide pixels into beams according to their true depths, and evaluate on pixels not on the 4-beam LiDAR. The improvement of SDN (+ GDC) over PSMNET becomes larger as we consider pixels farther away. Table 13 further demonstrates the relationship between depth quality and detection accuracy: SDN (+ GDC) significantly outperforms PSMNET for detecting faraway cars. We note that, for very faraway cars (i.e., 50-70 m), the number of training object instances is extremely small, which suggests that the very poor performance might partially be caused by over-fitting. Further, we apply the same evaluation procedure but group the errors by the shortest distance between each PSEUDO-LIDAR point and the 4-beam LiDAR points in Figure 9. We can see that the closer the PSEUDO-LIDAR points are to the 4-beam LiDAR points, the bigger the improvement GDC can bring.
D.6 CONNECTED COMPONENTS IN KNN GRAPHS OF PSEUDO-LIDAR POINTS BY SDN
Here, we provide an empirical analysis of the relationship between the k we choose in building the K-nearest-neighbor graph of PSEUDO-LIDAR points by SDN and the number of connected components of that graph (a sketch of this computation is given below).
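For reference, a small sketch of how such a count can be computed with SciPy; the helper name is hypothetical and the paper does not specify its tooling:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def num_components(points, k):
    """Number of connected components of the kNN graph of a point cloud,
    treating edges as undirected for the connectivity count."""
    N = len(points)
    nbrs = cKDTree(points).query(points, k=k + 1)[1][:, 1:]  # drop self
    rows = np.repeat(np.arange(N), k)
    A = sp.csr_matrix((np.ones(N * k), (rows, nbrs.ravel())), shape=(N, N))
    return connected_components(A, directed=False)[0]
```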
We show the results on the KITTI validation set in Figure 11. It can be seen that with k ≥ 9, the average number of connected components in the graph is smaller than 2.
D.7 FAILURE CASES AND WEAKNESSES
There is still a gap between our approach and LiDAR for faraway objects (see Table 13). We further analyze APBEV at different IoU thresholds in Figure 10. For low IoU (0.2-0.5), SDN (+GDC) is on par with LiDAR, but the gap increases significantly at high IoU thresholds. This suggests that the predominant gap between our approach and LiDAR is due to mislocalization, perhaps caused by residual inaccuracies in depth.
D.8 QUALITATIVE RESULTS
In Figures 6, 12, 13, and 14, we show detection results using P-RCNN (with confidence > 0.95) with different input signals on four randomly chosen scenes in the KITTI object validation set. Specifically, we show the results from the frontal-view images and the bird's-eye-view (BEV) point clouds. In the BEV map, the observer is on the left-hand side looking to the right. It can be seen that the point clouds generated by PSEUDO-LIDAR++ (SDN alone or SDN+GDC) align better with LiDAR than those generated by PSEUDO-LIDAR (PSMNET). For nearby objects (i.e., bounding boxes close to the left in the BEV map), we see that P-RCNN with any point cloud performs fairly well in localization. However, for faraway objects (i.e., bounding boxes close to the right), PSEUDO-LIDAR with depth estimated from PSMNET predicts objects (red boxes) that deviate from the ground truths (green boxes). Moreover, the noisy PSMNET points also lead to several false positives or negatives. In contrast, the boxes detected by our PSEUDO-LIDAR++, either with SDN alone or with SDN+GDC, align well with the ground-truth boxes, justifying our targeted improvement in estimating faraway depths. In Figure 12, we see one failure case for both PSEUDO-LIDAR and PSEUDO-LIDAR++: the most faraway car is missed, while the LiDAR signal can still detect it, suggesting that for very faraway objects stereo-based methods may still have limitations.
1. What are the strengths and weaknesses of the proposed approach in the paper regarding stereo-based 3D object detection?
2. Do you think the experiments conducted in the paper are thorough and well-thought-out?
3. How does the reviewer assess the significance and novelty of the three ideas proposed in the paper?
4. What are some minor comments or suggestions the reviewer has regarding the paper's content?
Review
The paper proposes to improve the idea of using stereo + lidar -style object detection to form stereo-based 3D object detection, building off of pseudo-lidar. In particular, it proposes to (a) switch the loss function for stereo deep networks from disparity to depth, (b) do the stereo cost volume analysis in a depth volume (via resampling) rather than a disparity volume, and (c) if sparse lidar is available, align the estimated depth with the sparse lidar. Each seems to improve results, and the resulting system achieves good results on KITTI and outperforms past work in this area.
Positives:
+ The paper proposes three ideas that seem good and lead to improvements that are demonstrated empirically.
+ The paper is well written.
+ The experiments are exceptionally thorough.
+ The ideas seem to me to be of obvious importance, although I realize that I'm likely not qualified to make a statement about this, and this should perhaps be done by a roboticist.
Negatives:
- Most of the heavy lifting in the case without sparse LIDAR is done by tweaking the loss function rather than the cost volume, although the remaining gain is still pretty good.
- I am not sure if this is a negative, but this is really a 3D vision paper. I do some form of 3D vision, but I really don't feel confident about my ability to assess whether doing stereo matching in a depth cost volume as opposed to disparity is correct -- I really haven't worked on stereo. It seems to work well, but I feel as if the wrong people are being asked to review the paper. I leave questions of venue to the area chair though.
Overall, I am inclined to accept the paper. I am a tiny bit worried about venue and whether the right people will check the work, but I don't think this should be decided by reviewers. However, I think the experiments are quite thorough and the paper is clearly above the bar.
In more detail:
Method:
+ The method reads quite well and the idea is clean. I particularly like the graph-based depth correction algorithm, and the LLE-like way of adjusting the estimated depth map. I have a few small comments below that do not affect my judgment, but I think would improve the paper.
= Small thought: the words "systematic bias" throughout primarily refer to a bias for a particular object as opposed to a bias of the system (i.e., any individual object is too far or too close). This seems non-standard to me. A systematic bias would be that everything's too far away by 1m, for instance.
Experiments:
+ The experiments are exceptionally thorough, and of my pile of ICLR papers, this by far has the most thorough and well-thought-out experimental analysis.
+ The system shows systematic improvements on 3 different LIDAR-based object detection systems; I think this is great.
= I'm not sure whether the 64-beam LIDAR can be subsampled to imitate 4-beam LIDAR. I simply don't know enough about the hardware to know if this is a sensible approximation.
- Table 3 primarily suggests that the vast majority of the hard work in the non-sparse-LIDAR case is done by the depth loss rather than the depth cost volume. The resulting change is still pretty good (although I suspect that if you stuck a coordconv in the disparity cost volume, it would handle the fact that you want unequal smoothing).
- The burial of the depth prediction results in the appendix is a little surprising, as is the solitary table on them, but I understand the need to focus on 3D detection.
Small stuff that doesn't affect my review:
1) Framing the problem as having ethical considerations is, in my view, not necessary -- should all network compression papers start arguing that it is of profound ethical importance to figure out your bit quantization?
2) Last paragraph above Section 4: "gird" -> "Grid"
3) Figure 3 caption: "pruple" -> "purple"
4) Figure 4 is suboptimal -- I assume SDN+GDC < SDN < Disparity Net, but this is hard to verify.
5) Calling the network "Disparity Net" is a bit of an issue given that there's DispNet already.
6) "Figure 1 illustrates beautifully how" -> Please don't editorialize like this.
-----------------------
Post-rebuttal update: I have read the rebuttal and maintain my belief that the paper should be accepted.
ICLR
Title Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving Abstract Detecting objects such as cars and pedestrians in 3D plays an indispensable role in autonomous driving. Existing approaches largely rely on expensive LiDAR sensors for accurate depth information. While recently pseudo-LiDAR has been introduced as a promising alternative, at a much lower cost based solely on stereo images, there is still a notable performance gap. In this paper we provide substantial advances to the pseudo-LiDAR framework through improvements in stereo depth estimation. Concretely, we adapt the stereo network architecture and loss function to be more aligned with accurate depth estimation of faraway objects — currently the primary weakness of pseudo-LiDAR. Further, we explore the idea to leverage cheaper but extremely sparse LiDAR sensors, which alone provide insufficient information for 3D detection, to de-bias our depth estimation. We propose a depthpropagation algorithm, guided by the initial depth estimates, to diffuse these few exact measurements across the entire depth map. We show on the KITTI object detection benchmark that our combined approach yields substantial improvements in depth estimation and stereo-based 3D object detection — outperforming the previous state-of-the-art detection accuracy for faraway objects by 40%. Our code is available at https://github.com/mileyan/Pseudo_Lidar_V2. N/A 1 INTRODUCTION Safe driving in autonomous cars requires accurate 3D detection and localization of cars, pedestrians and other objects. This in turn requires accurate depth information, which can be obtained from LiDAR (Light Detection And Ranging) sensors. Although highly precise and reliable, LiDAR sensors are notoriously expensive: a 64-beam model can cost around $75,000 (USD)1. The alternative is to measure depth through inexpensive commodity cameras. However, in spite of recent dramatic progress in stereo-based 3D object detection brought by pseudo-LiDAR (Wang et al., 2019a), a significant performance gap remains especially for faraway objects (which we want to detect early to allow time for reaction). The trade-off between affordability and safety creates an ethical dilemma. ∗ Equal contributions 1The information is obtained from the automotive LiDAR market report: http://www.woodsidecap. com/wp-content/uploads/2018/04/Yole_WCP-LiDAR-Report_April-2018-FINAL.pdf In this paper we propose a possible solution to this remaining challenge that combines insights from both perspectives. We observe that the higher 3D object localization error of stereo-based systems, compared to LiDAR-based ones, stems entirely from the higher error in depth estimation (after the 3D point cloud is obtained the two approaches are identical (Wang et al., 2019a)). Importantly, this error is not random but systematic: we observe that stereo methods do indeed detect objects with high reliability, yet they estimate the depth of the entire object as either too far or too close. See Figure 1 for an illustration: the red stereo points capture the car but are shifted by about 2m completely outside the ground-truth location (green box). If we can de-bias these depth estimates it should be possible to obtain accurate 3D localization even for distant objects without exorbitant costs. We start by revisiting the depth estimation routine embedded at the heart of state-of-the-art stereobased 3D detection approach (Wang et al., 2019a). 
A major contributor to the systematic depth bias comes from the fact that depth is typically not computed directly. Instead, one first estimates the disparity — the horizontal shift of a pixel between the left and right images — and then inverts it to obtain pixel-wise depth. While the use of deep neural networks has largely improved disparity estimation (Chang & Chen, 2018; Cheng et al., 2018; Mayer et al., 2016; Wang et al., 2019b), designing and learning the networks to optimize the accuracy of disparity estimation simply overemphasizes nearby objects due to the reciprocal transformation. For instance, a unit disparity error (in pixels) for a 5-meter-away object means a 10cm error in depth: the length of a side mirror. The same disparity error for a 50-meter-away object, however, becomes a 5.8m error in depth: the length of an entire car. Penalizing both errors equally means that the network spends more time correcting subtle errors on nearby objects than gross errors on faraway objects, resulting in degraded depth estimates and ultimately poor detection and localization for faraway objects. We thus propose to adapt the stereo network architecture and loss function for direct depth estimation. Concretely, the cost volume that fuses the left-right images and the subsequent 3D convolutions are the key components in stereo networks. Taking the central assumption of convolutions — all neighborhoods can be operated in an identical manner — we propose to construct the cost volume on the grid of depth rather than disparity, enabling 3D convolutions and the loss function to perform exactly on the right scale for depth estimation. We refer to our network as stereo depth network (SDN). See Figure 1 for a comparison of 3D points obtained with SDN (purple) and disparity estimation (red). Although our SDN improves the depth estimates significantly, stereo images are still inherently 2D and it is unclear if they can ever match the accuracy and reliability of a true 3D LiDAR sensor. Although LiDAR sensors with 32 or 64 beams are expensive, LiDAR sensors with only 4 beams are two orders of magnitude cheaper2 and thus easily affordable. The 4 laser beams are very sparse and ill-suited to capture 3D object shapes by themselves, but if paired with stereo images they become the ideal tool to de-bias our dense stereo depth estimates: a single high-precision laser beam may inform us how to correct the depth of an entire car or pedestrian in its path. To this end, we present a novel depth-propagation algorithm, inspired by graph-based manifold learning (Weinberger et al., 2005; Roweis & Saul, 2000; Xiaojin & Zoubin, 2002). In a nutshell, we connect our estimated 3D stereo point cloud locally by a nearest neighbor graph, such that points corresponding to the same object will share many local paths with each other. We match the few but exact LiDAR measurements first with pixels (irrespective of depth) and then with their corresponding 3D points to obtain accurate depth estimates for several nodes in the graph. Finally, we propagate this exact depth information along the graph using a label diffusion mechanism — resulting in a dense and accurate depth map at negligible cost. In Figure 1 we see that the few (yellow) LiDAR measurements are sufficient to position almost all final (blue) points of the entire car within the green ground truth box. We conduct extensive empirical studies of our approaches on the KITTI object detection benchmark (Geiger et al., 2012; 2013) and achieve remarkable results. 
With solely stereo images, we outperform the previous state of the art (Wang et al., 2019a) by 10%. Further adding a cheap 4-beam LiDAR brings another 27% relative improvement — on some metrics, our approach is nearly on par with those based on a 64-beam LiDAR but can potentially save 95% in cost. 2The Ibeo Wide Angle Scanning (ScaLa) sensor with 4 beams costs $600 (USD). In this paper we simulate the 4-beam LiDAR signal on KITTI benchmark (Geiger et al., 2012) by sparsifying the original 64-beam signal. 2 BACKGROUND 3D object detection. Most work on 3D object detection operates on 3D point clouds from LiDAR as input (Li, 2017; Li et al., 2016; Meyer et al., 2019b; Yang et al., 2018a; Du et al., 2018; Shi et al., 2019; Engelcke et al., 2017; Yan et al., 2018; Lang et al., 2019). Frustum PointNet (Qi et al., 2018) applies PointNet (Qi et al., 2017a;b) to the points directly, while Voxelnet (Zhou & Tuzel, 2018) quantizes them into 3D grids. For street scenes, several work finds that processing points from the bird’s-eye view can already capture object contours and locations (Chen et al., 2017; Yang et al., 2018b; Ku et al., 2018). Images have also been used, but mainly to supplement LiDAR (Meyer et al., 2019a; Xu et al., 2018; Liang et al., 2018; Chen et al., 2017; Ku et al., 2018). Early work based solely on images — mostly built on the 2D frontal-view detection pipeline (Ren et al., 2015; He et al., 2017; Lin et al., 2017) — fell far behind in localizing objects in 3D (Li et al., 2019a; Xiang et al., 2015; 2017; Chabot et al., 2017; Mousavian et al., 2017; Chen et al., 2015; Xu & Chen, 2018; Chen et al., 2016; Pham & Jeon, 2017; Chen et al., 2018)3. Pseudo-LiDAR. This gap has been reduced significantly recently with the introduction of the pseudoLiDAR framework proposed in (Wang et al., 2019a). This framework applies a drastically different approach from previous image-based 3D object detectors. Instead of directly detecting the 3D bounding boxes from the frontal view of a scene, pseudo-LiDAR begins with image-based depth estimation, predicting the depth Z(u, v) of each image pixel (u, v). The resulting depth map Z is then back-projected into a 3D point cloud: a pixel (u, v) will be transformed to (x, y, z) in 3D by z = Z(u, v), x = (u− cU )× z fU , y = (v − cV )× z fV , (1) where (cU , cV ) is the camera center and fU and fV are the horizontal and vertical focal length. The 3D point cloud is then treated exactly as LiDAR signal — any LiDAR-based 3D detector can be applied seamlessly. By taking the state-of-the-art algorithms from both ends (Chang & Chen, 2018; Ku et al., 2018; Qi et al., 2018), pseudo-LiDAR obtains the highest image-based performance on the KITTI object detection benchmark (Geiger et al., 2012; 2013). Our work builds upon this framework. Stereo disparity estimation. Pseudo-LiDAR relies heavily on the quality of depth estimation. Essentially, if the estimated pixel depths match those provided by LiDAR, pseudo-LiDAR with any LiDAR-based detector should be able to achieve the same performance as that obtained by applying the same detector to the LiDAR signal. According to (Wang et al., 2019a), depth estimation from stereo pairs of images (Mayer et al., 2016; Yamaguchi et al., 2014; Chang & Chen, 2018) are more accurate than that from monocular (i.e., single) images (Fu et al., 2018; Godard et al., 2017) for 3D object detection. We therefore focus on stereo depth estimation, which is routinely obtained from estimating disparity between images. 
A disparity estimation algorithm takes a pair of left-right images Il and Ir as input, captured from a pair of cameras with a horizontal offset (i.e., baseline) b. Without loss of generality, we assume that the algorithm treats the left image, Il, as reference and outputs a disparity map D recording the horizontal disparity to Ir for each pixel (u, v). Ideally, Il(u, v) and Ir(u, v +D(u, v)) will picture the same 3D location. We can therefore derive the depth map Z via the following transform, Z(u, v) = fU × b D(u, v) (fU : horizontal focal length). (2) A common pipeline of disparity estimation is to first construct a 4D disparity cost volume Cdisp, in which Cdisp(u, v, d, :) is a feature vector that captures the pixel difference between Il(u, v) and Ir(u, v+d). It then estimates the disparity D(u, v) for each pixel (u, v) according to the cost volume Cdisp. One basic algorithm is to build a 3D cost volume withCdisp(u, v, d) = ‖Il(u, v)−Ir(u, v+d)‖2 and determine D(u, v) as argmind Cdisp(u, v, d). Advanced algorithms exploit more robust features in constructingCdisp and perform structured prediction forD. In what follows, we give an introduction of PSMNet (Chang & Chen, 2018), a state-of-the-art algorithm used in (Wang et al., 2019a). PSMNet begins with extracting deep feature maps hl and hr from Il and Ir, respectively. It then constructs Cdisp(u, v, d, :) by concatenating features of hl(u, v) and hr(u, v + d), followed by layers 3Recently, Srivastava et al. (2019) proposed to lift 2D monocular images to 3D representations (e.g., bird’s-eye view (BEV) images) and achieved promising monocular-based 3D object detection results. of 3D convolutions. The resulting 3D tensor Sdisp, with the feature channel size ending up being one, is then used to derive the pixel disparity via the following weighted combination, D(u, v) = ∑ d softmax(−Sdisp(u, v, d))× d, (3) where softmax is performed along the 3rd dimension of Sdisp. PSMNet can be learned end-to-end, including the image feature extractor and 3D convolution kernels, to minimize the disparity error∑ (u,v)∈A `(D(u, v)−D?(u, v)), (4) where ` is the smooth L1 loss, D? is the ground truth map, andA contains pixels with ground truths. 3 STEREO DEPTH NETWORK (SDN) A stereo network designed and learned to minimize the disparity error (cf. Equation 4) may over-emphasize nearby objects with smaller depths and therefore perform poorly in estimating depths for faraway objects. To see this, note that Equation 2 implies that for a given error in disparity δD, the error in depth δZ increases quadratically with depth: Z ∝ 1 D ⇒ δZ ∝ 1 D2 δD ⇒ δZ ∝ Z2δD. (5) The middle term is obtained by differentiating Z(D) w.r.t. D. In particular, using the settings on the KITTI dataset (Geiger et al., 2012; 2013), a single pixel error in disparity implies only a 0.1m error in depth at a depth of 5 meters, but a 5.8m error at a depth of 50 meters. See Figure 2 for a mapping from disparity to depth. Depth Loss. We propose two changes to adapt stereo networks for direct depth estimation. First, we learn stereo networks to directly optimize the depth loss∑ (u,v)∈A `(Z(u, v)− Z?(u, v)). (6) Z and Z? can be obtained from D and D? using Equation 2. The change from the disparity loss to the depth loss corrects the disproportionally strong emphasis on tiny depth errors of nearby objects — a necessary but still insufficient change to overcome the problems of disparity estimation. Depth Cost Volume. 
Depth Cost Volume. To facilitate accurate depth learning (rather than disparity), we need to further address the internals of the depth estimation pipeline. A crucial source of error is the 3D convolutions within the 4D disparity cost volume, where the same kernels are applied over the entire cost volume. This is highly problematic, as it implicitly assumes that the effect of a convolution is homogeneous throughout — which is clearly violated by the reciprocal depth-to-disparity relation (Figure 2). For example, it may be completely appropriate to locally smooth two neighboring pixels with disparities 85 and 86 (changing the depth by a few cm to smooth out a surface), whereas applying the same kernel to two pixels with disparities 5 and 6 could easily move the 3D points by 10m or more.

Taking this insight and the central assumption of convolutions — all neighborhoods can be operated upon in an identical manner — into account, we propose to instead construct the depth cost volume $C_{depth}$, in which $C_{depth}(u, v, z, :)$ will encode features describing how likely the depth Z(u, v) of pixel (u, v) is z. The subsequent 3D convolutions will then operate on the grid of depth, rather than disparity, affecting neighboring depths identically, independent of their location. The resulting 3D tensor $S_{depth}$ is then used to predict the pixel depth similarly to Equation 3,
$$Z(u, v) = \sum_z \mathrm{softmax}(-S_{depth}(u, v, z)) \times z.$$
We construct the new depth volume, $C_{depth}$, based on the intuition that $C_{depth}(u, v, z, :)$ and $C_{disp}(u, v, \frac{f_U \times b}{z}, :)$ should lead to equivalent "cost". To this end, we apply bilinear interpolation to construct $C_{depth}$ from $C_{disp}$ using the depth-to-disparity transform in Equation 2. Specifically, we consider disparity in the range of [0, 191] following PSMNet (Chang & Chen, 2018), consider depth in the range of [1m, 80m], and set the grid of depth in $C_{depth}$ to be 1m. Figure 5 (top) depicts our stereo depth network (SDN) pipeline. Crucially, all convolution operations are performed on $C_{depth}$ exclusively.

Figure 4 compares the median values of absolute depth estimation errors using the disparity cost volume (i.e., PSMNet) and the depth cost volume (SDN) (see subsection D.5 for detailed numbers). As expected, for faraway depths, SDN leads to drastically smaller errors with only marginal increases in the very near range (which disparity-based methods over-optimize). See the appendix for the detailed setup and more discussions.
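The construction of $C_{depth}$ from $C_{disp}$ amounts to resampling along the disparity axis. Below is a minimal sketch, assuming PyTorch tensors; since the target disparity $d = f_U b / z$ does not depend on (u, v), plain 1D linear interpolation along the disparity dimension suffices (the paper's implementation uses PyTorch's grid_sample for the same purpose, cf. Appendix A), and all names here are ours:

```python
import torch

def disparity_to_depth_volume(C_disp, f_U, b, z_grid, max_disp=191.0):
    """Resample a disparity cost volume C_disp of shape (B, F, D, H, W)
    into a depth cost volume of shape (B, F, Z, H, W), so that
    C_depth(u, v, z, :) ~ C_disp(u, v, f_U * b / z, :)  (Equation 2)."""
    d = (f_U * b / z_grid).clamp(0.0, max_disp)         # (Z,) target disparities
    d0 = d.floor().long().clamp(max=int(max_disp) - 1)  # lower neighbor index
    w = (d - d0.float()).view(1, 1, -1, 1, 1)           # interpolation weight
    lo = C_disp[:, :, d0, :, :]                         # gather lower neighbors
    hi = C_disp[:, :, d0 + 1, :, :]                     # gather upper neighbors
    return (1.0 - w) * lo + w * hi
```

For the setting in the text, `z_grid = torch.arange(1.0, 81.0)` gives the 1m depth grid in [1m, 80m].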
4 DEPTH CORRECTION

Our SDN significantly improves depth estimation and more precisely renders the object contours (see Figure 3). However, there is a fundamental limitation in stereo because of the discrete nature of pixels: the disparity, being the difference in the horizontal coordinate between corresponding pixels, has to be quantized at the level of individual pixels, while depth is continuous. Although the quantization error can be alleviated with higher-resolution images, the computational cost of depth prediction scales cubically with resolution — pushing the limits of GPUs in autonomous vehicles. We therefore explore a hybrid approach by leveraging a cheap LiDAR with extremely sparse (e.g., 4 beams) but accurate depth measurements to correct this bias. We note that such sensors are too sparse to capture object shapes and cannot be used alone for detection. However, by projecting the LiDAR points into the image plane we obtain exact depths on a small portion of "landmark" pixels.

We present a graph-based depth correction (GDC) algorithm that effectively combines the dense stereo depth that has rendered object shapes and the sparse, accurate LiDAR measurements. Conceptually, we expect the corrected depth map to have the following properties: globally, landmark pixels associated with LiDAR points should possess the exact depths; locally, object shapes captured by neighboring 3D points, back-projected from the input depth map (cf. Equation 1), should be preserved. Figure 5 (bottom) illustrates the algorithm.

Input Matching. We take as input the two point clouds from LiDAR (L) and pseudo-LiDAR (PL) by stereo depth estimation. The latter is obtained by converting pixels (u, v) with depth z to 3D points $(x_u, y_v, z)$. First, we characterize the local shapes by the directed k-nearest-neighbor (KNN) graph in the PL point cloud (using accelerated KD-trees (Shevtsov et al., 2007)) that connects each 3D point to its KNNs with appropriate weights. Similarly, we can project the 3D LiDAR points onto pixel locations (u, v) and match them to corresponding 3D stereo points. Without loss of generality, we assume that we are given "ground truth" LiDAR depth for the first n points and no ground truth for the remaining m points. We refer to the 3D stereo depth estimates as $Z \in \mathbb{R}^{n+m}$ and the LiDAR depth ground truth as $G \in \mathbb{R}^n$.

Edge weights. To construct the KNN graph in 3D we ignore the LiDAR information on the first n points and only use their predicted stereo depths in Z. Let $N_i$ denote the set of k neighbors of the i-th point. Further, let $W \in \mathbb{R}^{(n+m) \times (n+m)}$ denote the weight matrix, where $W_{ij}$ denotes the edge weight between points i and j. Inspired by prior work in manifold learning (Roweis & Saul, 2000; Weinberger et al., 2005), we choose the weights to be the coefficients that reconstruct the depth of any point from the depths of its neighbors in $N_i$. We can solve for these weights with the following constrained quadratic optimization problem:
$$W = \arg\min_W \|Z - WZ\|_2^2, \quad \text{s.t. } W\mathbf{1} = \mathbf{1} \text{ and } W_{ij} = 0 \text{ if } j \notin N_i. \qquad (7)$$
Here $\mathbf{1} \in \mathbb{R}^{n+m}$ denotes the all-ones vector. As long as we pick k > 3 and the points are in general position, there are infinitely many solutions that satisfy Z = WZ, and we pick the solution with the minimum L2 norm (obtained with slight L2 regularization).

Depth Correction. Let us denote the corrected depth values as $Z' \in \mathbb{R}^{n+m}$, with $Z' = [Z'_L; Z'_{PL}]$, $Z'_L \in \mathbb{R}^n$, and $Z'_{PL} \in \mathbb{R}^m$, where $Z'_L$ are the depth values of points with LiDAR ground truth and $Z'_{PL}$ otherwise. For the n points with LiDAR measurements we update the depths to the (ground truth) values $Z'_L = G$. We then solve for $Z'_{PL}$ given G and the weighted KNN graph encoded in W. Concretely, we update the remaining depths $Z'_{PL}$ such that the depth of any point i can still be reconstructed with high fidelity as a weighted sum of its KNNs' depths using the learned weights W; i.e., if point i : 1 ≤ i ≤ n is moved to its new depth $G_i$, then its neighbors in $N_i$ must also be corrected such that $G_i \approx \sum_{j \in N_i} W_{ij} Z'_j$. Further, the neighbors' neighbors must be corrected, and the depth information of the few n points propagates across the entire graph. We can solve for the final Z' directly with another quadratic optimization:
$$Z' = \arg\min_{Z'} \|Z' - WZ'\|^2, \quad \text{s.t. } Z'_{1:n} = G. \qquad (8)$$
To illustrate the correction process, imagine the simplest case where the depth of only a single point (n = 1) is updated to $G_1 = Z_1 + \delta$. A new optimal depth for Equation 8 is to move all the remaining points similarly, i.e., $Z' = Z + \mathbf{1}\delta$: since Z = WZ and $W\mathbf{1} = \mathbf{1}$, we must have $W(Z + \mathbf{1}\delta) = Z + \mathbf{1}\delta$. In the setting with n > 1, the least-squares loss ensures a soft diffusion between the different LiDAR depth estimates.
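Equation 7 decomposes into independent small least-squares problems, one per point, as in locally linear embeddings. Below is a minimal sketch of this step, assuming NumPy/SciPy/scikit-learn; the function name, the per-point solver, and the regularization constant are our own choices, not the authors' released code:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors

def reconstruction_weights(P, Z, k=10, reg=1e-3):
    """Equation 7: for every point, find weights over its k nearest
    neighbors (in the pseudo-LiDAR cloud P) that reconstruct its depth
    and sum to one; slight L2 regularization selects the minimum-norm
    solution among the many that satisfy Z = W Z."""
    N = len(Z)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(P)
    _, idx = nbrs.kneighbors(P)              # idx[:, 0] is the point itself
    rows, cols, vals = [], [], []
    for i in range(N):                       # plain loop; a sketch, not optimized
        J = idx[i, 1:]                       # the k neighbors of point i
        diff = Z[J] - Z[i]                   # depth differences to the neighbors
        C = np.outer(diff, diff) + reg * np.eye(k)
        w = np.linalg.solve(C, np.ones(k))   # standard LLE-style weight solve
        w /= w.sum()                         # enforce the sum-to-one constraint
        rows += [i] * k; cols += list(J); vals += list(w)
    return csr_matrix((vals, (rows, cols)), shape=(N, N))
```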
Both optimization problems in Equation 7 and Equation 8 can be solved exactly and efficiently with sparse matrix solvers. We summarize the procedure as an algorithm in the appendix. From the view of graph-based manifold learning, our GDC algorithm is reminiscent of locally linear embeddings (Roweis & Saul, 2000) with landmarks to guide the final solution (Weinberger et al., 2005).

Figure 1 illustrates vividly how the initial 3D point cloud from SDN (purple) of a car in the KITTI dataset is corrected with a few sparse LiDAR measurements (yellow). The resulting points (blue) are right inside the ground-truth box and clearly show the contour of the car. Figure 4 shows the additional improvement from GDC (blue) over the pure SDN depth estimates (see subsection D.5 for detailed numbers). The error (calculated only on non-landmark pixels) is corrected over the entire image, even though many regions have no LiDAR measurements. This is because the pseudo-LiDAR point cloud is sufficiently dense and we choose k large enough (in practice, we use k = 10) that the KNN graph is typically connected (or consists of a few large connected components). See subsection D.6 for more analysis. For objects such as cars, the improvements through GDC are far more pronounced, as these are typically touched by the four LiDAR beams and can be corrected effectively.

5 EXPERIMENTS

5.1 SETUP

We refer to our combined method (SDN and GDC) for 3D object detection as PSEUDO-LIDAR++ (PL++ in short). To analyze the contribution of each component, we evaluate SDN and GDC independently and jointly across several settings. For GDC we set k = 10 and consider adding signal from a (simulated) 4-beam LiDAR, unless stated otherwise.

Dataset, Metrics, and Baselines. We evaluate on the KITTI dataset (Geiger et al., 2013; 2012), which contains 7,481 and 7,518 images for training and testing. We follow (Chen et al., 2015) to separate the 7,481 images into 3,712 for training and 3,769 for validation. For each (left) image, KITTI provides the corresponding right image, the 64-beam Velodyne LiDAR point cloud, the camera calibration matrices, and the bounding boxes. We focus on 3D object detection and bird's-eye-view (BEV) localization and report results on the validation set. Specifically, we focus on the "car" category, following Chen et al. (2017) and Xu et al. (2018). We report average precision (AP) with IoU (Intersection over Union) thresholds at 0.5 and 0.7. We denote AP for the 3D and BEV tasks by AP_3D and AP_BEV. KITTI defines the easy, moderate, and hard settings, in which objects with 2D box heights smaller than, or occlusion/truncation levels larger than, certain thresholds are disregarded. We compare to four stereo-based detectors: PSEUDO-LIDAR (PL in short) (Wang et al., 2019a), 3DOP (Chen et al., 2015), S-RCNN (Li et al., 2019b), and MLF-STEREO (Xu & Chen, 2018).

Stereo depth network (SDN). We use PSMNET (Chang & Chen, 2018) as the backbone for our stereo depth estimation network (SDN). We follow Wang et al. (2019a) to pre-train SDN on the synthetic Scene Flow dataset (Mayer et al., 2016) and fine-tune it on the 3,712 training images of KITTI. We obtain the depth ground truth by projecting the corresponding LiDAR points onto the images. We also train a PSMNET in the same way for comparison, which minimizes the disparity error.
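The switch from the disparity loss (Equation 4) to the depth loss (Equation 6) amounts to a small change in the training objective. A minimal PyTorch sketch, with tensor shapes and names of our own choosing:

```python
import torch
import torch.nn.functional as F

def depth_loss(S_depth, z_grid, Z_star, mask):
    """Equation 6 as a training objective: soft-argmin depth regression
    over the depth cost volume output, followed by a smooth L1 loss on
    depth rather than disparity.  Shapes: S_depth (B, Z, H, W),
    z_grid (Z,), Z_star (B, H, W), mask (B, H, W) boolean."""
    prob = torch.softmax(-S_depth, dim=1)              # per-pixel depth distribution
    Z = (prob * z_grid.view(1, -1, 1, 1)).sum(dim=1)   # expected depth, (B, H, W)
    return F.smooth_l1_loss(Z[mask], Z_star[mask])     # only pixels with ground truth
```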
3D object detection. We apply three algorithms: AVOD (Ku et al., 2018), PIXOR (Yang et al., 2018b), and P-RCNN (Shi et al., 2019). All utilize information from LiDAR and/or monocular images. We use the released implementations of AVOD (specifically, AVOD-FPN) and P-RCNN. We implement PIXOR ourselves with a slight modification to include visual information (denoted as PIXOR*). We train all models on the 3,712 training images from scratch, replacing the LiDAR points with pseudo-LiDAR data generated from stereo depth estimation. See the appendix for details.

Sparser LiDAR. We simulate sparser LiDAR signals with fewer beams by first projecting the 64-beam LiDAR points onto a 2D plane of horizontal and vertical angles. We quantize the vertical angles into 64 levels with an interval of 0.4°, which is close to the specification of the 64-beam LiDAR. We keep points falling into a subset of beams to mimic the sparser signal. See the appendix for details.

5.2 EXPERIMENTAL RESULTS

Results on the KITTI val set. We summarize the main results on KITTI object detection in Table 1. Several important trends can be observed: 1) Our PL++ with enhanced depth estimation by SDN and GDC yields consistent improvements over PL across all settings; 2) PL++ with GDC refinement from 4-beam LiDAR (Input: L# + S) performs significantly better than PL++ with only stereo inputs (Input: S); 3) PL experiences a substantial drop in accuracy from IoU at 0.5 to 0.7 for the hard setting. This suggests that while PL detects faraway objects, it mislocalizes them, likely placing them at the wrong depth, which causes the objects to be counted as missed detections at higher overlap thresholds. Interestingly, here is where we experience the largest gain — from PL: P-RCNN (AP_BEV = 52.7) to PL++: P-RCNN (AP_BEV = 73.4) with input L# + S. Note that the majority of the gain comes from GDC, as the stereo-only version of PL++ only improves the score to 57.3 AP_BEV. 4) The gap between PL++ and LiDAR is at most 13% AP_BEV, even in the hard setting under IoU at 0.7. 5) For IoU at 0.5, with the aid of only 4 LiDAR beams, PL++ (SDN + GDC) achieves results comparable to models with 64-beam LiDAR signals.

Results on the KITTI test set. Table 2 summarizes results on the car category on the KITTI test set. We see a similar gap between our methods and LiDAR as on the validation set, suggesting that our improvement is not particular to the validation data. Our approach without LiDAR refinement (pure SDN) is placed at the top position among all image-based algorithms on the KITTI leaderboard. In the following, we conduct a series of experiments to analyze the performance gains of our approaches and discuss several key observations. We mainly experiment with P-RCNN: we find that the results with AVOD and PIXOR* follow similar trends and thus include them in the appendix.

Depth loss and depth cost volume. To turn a disparity network (e.g., PSMNET) into SDN, there are two changes: 1) change the disparity loss into the depth loss; 2) change the disparity cost volume into the depth cost volume. In Table 3, we uncover the effects of these two changes separately. On the AP_BEV/AP_3D (moderate) metric, the depth loss gives us a 6%/2% improvement and the depth cost volume brings another 2-3% gain.⁴

Footnote 4: We note that the degree of improvement brought by the depth loss and the depth cost volume depends on the 3D detector in use. Table 3 suggests that the depth loss provides more gain than the depth cost volume (for P-RCNN). In Table 6, however, we see that the depth cost volume provides comparable or even bigger gains than the depth loss (for PIXOR* and AVOD). Nevertheless, Table 3 and Table 6 both suggest the compatibility of the two approaches: combining them leads to the best performance.
Impact of sparse LiDAR beams. We leverage 4-beam LiDAR to correct stereo depth using GDC. However, it is possible that the gains in 3D object detection come entirely from the new LiDAR sensor and that the stereo estimates are immaterial. In Table 4, we study this question by comparing the detection results against those of models using 1) sole 4-beam LiDAR point clouds and 2) pseudo-LiDAR point clouds with the depths of landmark pixels replaced by 4-beam LiDAR: i.e., in depth correction, we only correct the depths of the landmark pixels without propagation. It can be seen that 4-beam LiDAR by itself performs fairly well in locating faraway objects but cannot capture nearby objects precisely, while simply replacing pseudo-LiDAR with LiDAR at the landmark pixels prevents the model from detecting faraway objects accurately. In contrast, our proposed GDC method effectively combines the merits of the two signals, achieving superior performance to using either alone.

Pedestrian and cyclist detection. For a fair comparison to (Wang et al., 2019a), we apply F-POINTNET (Qi et al., 2018) for detecting pedestrians and cyclists. Table 5 shows the results: our methods significantly boost the performance.

Qualitative visualization. In Figure 6, we show a qualitative comparison of detection results on a randomly chosen scene in the KITTI object validation set, using P-RCNN (with confidence > 0.95) with different input signals. Specifically, we show the results from the frontal-view images and the bird's-eye-view (BEV) point clouds. In the BEV map, the observer is on the left-hand side looking to the right. It can be seen that the point clouds generated by PSEUDO-LIDAR++ (SDN alone or SDN + GDC) align better with LiDAR than those generated by PSEUDO-LIDAR (PSMNET). For nearby objects (i.e., bounding boxes close to the left in the BEV map), we see that P-RCNN with any point cloud performs fairly well in localization. However, for faraway objects (i.e., bounding boxes close to the right), PSEUDO-LIDAR with depth estimated from PSMNET predicts objects (red boxes) that deviate from the ground truths (green boxes). Moreover, the noisy PSMNET points also lead to false negatives. In contrast, the boxes detected by our PSEUDO-LIDAR++, either with SDN alone or with SDN + GDC, align well with the ground-truth boxes, justifying our targeted improvement in estimating faraway depths.

Additional results, analyses, qualitative visualization, and discussions. We provide results of PSEUDO-LIDAR++ with fewer LiDAR beams, comparisons to depth completion methods, analyses of depth quality and detection accuracy, run time, failure cases, and more qualitative results in the appendix. With simple optimizations, GDC runs in 90 ms/frame using a single GPU (7.7 ms for KD-tree construction and search).

6 CONCLUSION

In this paper we made two contributions to improve 3D object detection in autonomous vehicles without expensive LiDAR. First, we identify disparity estimation as a main source of error for stereo-based systems and propose a novel approach to learn depth directly end-to-end instead of through disparity estimates. Second, we advocate that one should not use expensive LiDAR sensors to learn the local structure and depth of objects.
Instead, one can use commodity stereo cameras for the former and a cheap sparse LiDAR to correct the systematic bias in the resulting depth estimates. We provide a novel graph propagation algorithm that integrates the two data modalities and propagates the sparse yet accurate depth estimates using two sparse matrix solvers. The resulting system, PSEUDO-LIDAR++ (SDN + GDC), performs almost on par with $75,000 64-beam LiDAR systems but only requires 4 beams and two commodity cameras, which could be obtained with a total cost of less than $1,000.

ACKNOWLEDGMENTS

This research is supported by grants from the National Science Foundation NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, and TRIPODS-1740822), the Office of Naval Research DOD (N00014-17-1-2175), the Bill and Melinda Gates Foundation, and the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875). We are thankful for generous support by Zillow and SAP America Inc. We thank Gao Huang for helpful discussion.

Appendix

We provide details omitted in the main text.
• Appendix A: details on constructing the depth cost volume (section 3 of the main paper).
• Appendix B: detailed implementation of the GDC algorithm (section 4 of the main paper).
• Appendix C: additional details of experimental setups (subsection 5.1 of the main paper).
• Appendix D: additional results, analyses, and discussions (subsection 5.2 of the main paper).

A DEPTH COST VOLUME

With Equation 2, we know where each grid location (u, v, z) in $C_{depth}$ corresponds to in $C_{disp}$ (the corresponding location $(u, v, \frac{f_U \times b}{z})$ may not fall on a grid point). We can then obtain the features for each grid location in $C_{depth}$ (i.e., $C_{depth}(u, v, z, :)$) by bilinear interpolation over the features on the grid points of $C_{disp}$ around that non-grid location. We applied the "grid_sample" function in PyTorch for bilinear interpolation. We use PSMNET (Chang & Chen, 2018) as the backbone for our stereo depth estimation network (SDN). The only change is to construct the depth cost volume before performing 3D convolutions.

B GRAPH-BASED DEPTH CORRECTION (GDC) ALGORITHM

Here we present the GDC algorithm in detail (see Algorithm 1). The two steps described in the main paper can easily be turned into two (sparse) linear systems and then solved using Lagrange multipliers. For the first step (i.e., Equation 7), we solve the same problem as in the main text, but we switch the objective to minimizing the L2 norm of W and set Z − WZ = 0 as a constraint.⁵ For the second step (i.e., Equation 8), we use conjugate gradient (CG) to iteratively solve the sparse linear system.

Footnote 5: These two problems yield identical solutions, but we found the second one easier to solve in practice. Note that Equation 7 is an under-constrained problem with infinitely many solutions; to identify a single solution, we add a small L2 regularization term to the objective (as mentioned in the main text).

Algorithm 1: Graph-based depth correction (GDC). ";" stands for column-wise concatenation.
Input: stereo depth map $Z \in \mathbb{R}^{(n+m) \times 1}$, the corresponding pseudo-LiDAR (PL) point cloud $P \in \mathbb{R}^{(n+m) \times 3}$, and LiDAR depths $G \in \mathbb{R}^{n \times 1}$ on the first n pixels.
Output: corrected depth map $Z' \in \mathbb{R}^{(n+m) \times 1}$.
function GDC(Z, P, G, K):
    Solve: $W = \arg\min_{W \in \mathbb{R}^{(n+m) \times (n+m)}} \|W\|^2$
        s.t. $Z - W \cdot Z = 0$;
             $W_{ij} = 0$ if $j \notin N_i$ (the set of K neighbors of the i-th point according to P);
             $\sum_j W_{ij} = 1$ for all $i = 1, \ldots, n+m$.
    Solve: $Z'_{PL} = \arg\min_{Z'_{PL} \in \mathbb{R}^{m \times 1}} \|[G; Z'_{PL}] - W[G; Z'_{PL}]\|^2$.
    return $[G; Z'_{PL}]$
end function
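To complement Algorithm 1, here is a minimal sketch of the second solve and of wiring the two steps together. It uses SciPy's sparse least-squares solver, whereas the paper uses an equivalent Lagrange-multiplier/CG formulation, and it reuses the reconstruction_weights helper sketched in section 4; all names are ours:

```python
import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import lsqr

def diffuse_depth(W, G):
    """Second step of Algorithm 1 (Equation 8): fix the first n depths to
    the LiDAR values G and solve for the remaining depths in the least-
    squares sense, letting the corrections diffuse through the graph."""
    n, N = len(G), W.shape[0]
    A = identity(N, format="csr") - W      # minimize ||(I - W) Z'||^2
    A_L, A_PL = A[:, :n], A[:, n:]         # columns: landmarks vs. the rest
    Z_PL = lsqr(A_PL, -A_L @ G)[0]         # fixed part moved to the RHS
    return np.concatenate([G, Z_PL])

def gdc(Z, P, G, k=10):
    """End-to-end GDC sketch: Equation 7 then Equation 8 (cf. Algorithm 1)."""
    W = reconstruction_weights(P, Z, k=k)  # sketched in section 4
    return diffuse_depth(W, G)
```

The corrected depths returned by gdc can then be back-projected via Equation 1 into the final pseudo-LiDAR point cloud.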
C EXPERIMENTAL SETUP

C.1 SPARSE LIDAR GENERATION

In this section, we explain in detail how we generate sparser LiDAR signals with fewer beams from the 64-beam LiDAR point clouds of the KITTI dataset. For every point $(x_i, y_i, z_i) \in \mathbb{R}^3$ of the point cloud in one scene (in the LiDAR coordinate system: x front, y left, z up, with (0, 0, 0) the location of the LiDAR sensor), we compute the elevation angle $\theta_i$ to the LiDAR sensor as
$$\theta_i = \arccos\left(\frac{\sqrt{x_i^2 + y_i^2}}{\sqrt{x_i^2 + y_i^2 + z_i^2}}\right).$$
We order the points by their elevation angles and slice them into separate lines in steps of 0.4°, starting from −23.6° (close to the specification of the Velodyne 64-beam LiDAR). We select LiDAR points whose elevation angles fall within [−2.4°, −2.0°) ∪ [−0.8°, −0.4°) to be the 2-beam LiDAR signal, and similarly [−2.4°, −2.0°) ∪ [−1.6°, −1.2°) ∪ [−0.8°, −0.4°) ∪ [0.0°, 0.4°) to be the 4-beam LiDAR signal. We choose them in such a way that consecutive lines have a 0.8° interval, following the specification of the "cheap" 4-beam LiDAR ScaLa. We visualize these sparsified LiDAR point clouds from the bird's-eye view on one example scene in Figure 7.

C.2 3D OBJECT DETECTION ALGORITHMS

In this section, we provide more details about the way we train 3D object detection models on pseudo-LiDAR point clouds. For AVOD, we use the same model as in (Wang et al., 2019a). For P-RCNN, we use the implementation provided by the authors. Since the P-RCNN model exploits the sparse nature of LiDAR point clouds, when training it with pseudo-LiDAR input we first sparsify the point clouds into 64 beams using the method described in subsection C.1. For PIXOR*, we implement the same base model structure and data augmentation specified by Yang et al. (2018b), but without the "decode fine-tune" step and focal loss. Inspired by the trick in (Liang et al., 2018), we add an image-feature branch (ResNet-18 (He et al., 2016)) alongside the LiDAR branch, and concatenate the corresponding image features onto the LiDAR branch at each stage. We train PIXOR* using RMSProp with momentum 0.9 and learning rate 10^{-5} (decayed by 10 after 50 and 80 epochs) for 90 epochs. The BEV evaluation results are similar to the reported results (see Table 1).

D ADDITIONAL RESULTS, ANALYSES, AND DISCUSSIONS

D.1 ABLATION STUDY

In Table 6 and Table 7 we provide more experimental results aligned with the experiments in subsection 5.2 of the main paper. We conduct the same experiments on two other models, AVOD and PIXOR*, and observe similar trends of improvement brought by learning with the depth loss (from PSMNET to PSMNET + DL), constructing the depth cost volume (from PSMNET + DL to SDN), and applying GDC to correct the bias in stereo depth estimation (comparing SDN + GDC with SDN). We note that, in Table 7, the results of AVOD (or PIXOR*) with SDN + L# are worse than those with L# at the moderate and hard settings. This observation differs from Table 4, where P-RCNN with SDN + L# outperforms P-RCNN with L# in 5 out of 6 comparisons. We hypothesize that this is because P-RCNN takes sparsified inputs (see subsection C.2) while AVOD and PIXOR* take dense inputs. In the latter case, the four replaced LiDAR beams in SDN + L# are dominated by the dense stereo depths, so that SDN + L# is worse than L#.
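As an aside, the sparsification recipe of subsection C.1 can be summarized in a few lines. A sketch, assuming points in the KITTI LiDAR frame; we use the signed elevation angle arctan2(z, sqrt(x² + y²)), which matches the signed angle ranges quoted above, and all names are ours:

```python
import numpy as np

def sparsify_beams(points, keep_intervals):
    """Keep only LiDAR points whose elevation angle falls in one of the
    given [lo, hi) intervals (in degrees), mimicking a sparser sensor."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.degrees(np.arctan2(z, np.sqrt(x**2 + y**2)))  # signed elevation
    mask = np.zeros(len(points), dtype=bool)
    for lo, hi in keep_intervals:
        mask |= (theta >= lo) & (theta < hi)
    return points[mask]

# The 4-beam selection used in the paper:
four_beam = [(-2.4, -2.0), (-1.6, -1.2), (-0.8, -0.4), (0.0, 0.4)]
```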
D.2 USING FEWER LIDAR BEAMS

In PL++ (i.e., SDN + GDC), we use 4-beam LiDAR to correct the predicted point cloud. In Table 8, we investigate using fewer (and also potentially cheaper) LiDAR beams for depth correction. We observe that even with 2 beams, GDC can already manage to combine the two signals and yield better performance than using 2-beam LiDAR or pseudo-LiDAR alone.

D.3 DEPTH CORRECTION VS. DEPTH COMPLETION

Our GDC algorithm is a general, simple, inference-time approach that requires no training, unlike prior learning-based approaches to depth completion. Here we empirically compare to PNP (Wang et al., 2018), a recently proposed depth completion algorithm that, like GDC, is compatible with any (even stereo) depth estimation network. We use SDN for the initial depth estimation, and evaluate GDC and PNP by randomly selecting a fraction of LiDAR points as provided ground truths and calculating the median absolute depth errors on the remaining LiDAR points. As shown in Figure 8, GDC outperforms PNP by a large margin. Table 9 shows a further comparison to PNP on 3D object detection. We apply PNP and GDC respectively to correct the depth estimates obtained from SDN, train a P-RCNN or PIXOR* using the resulting pseudo-LiDAR points on the KITTI training set, and compare the detection results on the KITTI validation set. In either case, SDN + GDC outperforms SDN + PNP by a notable margin.

D.4 RUN TIME

With the following optimizations for implementation,
1. sub-sampling the pseudo-LiDAR points: keeping at most one point within each cube of size 0.1 m³;
2. limiting the pseudo-LiDAR points used for depth correction: keeping only those whose elevation angles are within [−3.0°, 0.4°) (the range of the 4-beam LiDAR plus 0.6°; see subsection C.1 for details);
3. after performing GDC for depth correction, combining the corrected pseudo-LiDAR points with those outside the elevation angles of [−3.0°, 0.4°),
GDC runs in 90 ms/frame using a single GPU (7.7 ms for KD-tree construction and search, 46.5 ms for solving W, and 26.9 ms for solving $Z'_{PL}$) with negligible performance difference (see Table 10). For consistency, all results reported in the main paper are based on the naive implementation. Further speedups can be achieved with CUDA programming for GPUs.

D.5 STEREO DEPTH VS. DETECTION

We quantitatively evaluate the stereo depths by median errors in Figure 4 of the main text (numerical values are listed in Table 11). In Table 12 we further show mean errors with standard deviations (the large standard deviations likely result from outliers such as occluded pixels around object boundaries). For both tables, we divide pixels into beams according to their ground-truth depths and evaluate on pixels not on the 4-beam LiDAR. The improvement of SDN (+ GDC) over PSMNET becomes larger as we consider pixels farther away. Table 13 further demonstrates the relationship between depth quality and detection accuracy: SDN (+ GDC) significantly outperforms PSMNET for detecting faraway cars. We note that, for very faraway cars (i.e., 50-70 m), the number of training object instances is extremely small, which suggests that the very poor performance might be partially caused by over-fitting. Further, we apply the same evaluation procedure but group the errors by the shortest distance between each pseudo-LiDAR point and the 4-beam LiDAR points in Figure 9. We can see that the closer the pseudo-LiDAR points are to the 4-beam LiDAR points, the bigger the improvement GDC can bring.

D.6 CONNECTED COMPONENTS IN KNN GRAPHS OF PSEUDO-LIDAR POINTS BY SDN

Here, we provide an empirical analysis of the relationship between the k we choose in building the k-nearest-neighbor graph of pseudo-LiDAR points by SDN and the number of connected components of that graph.
We show the results on the KITTI validation set in Figure 11. It can be seen that with k ≥ 9, the average number of connected components in the graph is smaller than 2.

D.7 FAILURE CASES AND WEAKNESSES

There is still a gap between our approach and LiDAR for faraway objects (see Table 13). We further analyze AP_BEV at different IoU thresholds in Figure 10. For low IoU (0.2-0.5), SDN (+ GDC) is on par with LiDAR, but the gap increases significantly at high IoU thresholds. This suggests that the predominant gap between our approach and LiDAR is due to mislocalization, perhaps caused by residual inaccuracies in depth.

D.8 QUALITATIVE RESULTS

In Figures 6, 12, 13, and 14, we show detection results using P-RCNN (with confidence > 0.95) with different input signals on four randomly chosen scenes in the KITTI object validation set. Specifically, we show the results from the frontal-view images and the bird's-eye-view (BEV) point clouds. In the BEV maps, the observer is on the left-hand side looking to the right. It can be seen that the point clouds generated by PSEUDO-LIDAR++ (SDN alone or SDN + GDC) align better with LiDAR than those generated by PSEUDO-LIDAR (PSMNET). For nearby objects (i.e., bounding boxes close to the left in the BEV map), we see that P-RCNN with any point cloud performs fairly well in localization. However, for faraway objects (i.e., bounding boxes close to the right), PSEUDO-LIDAR with depth estimated from PSMNET predicts objects (red boxes) that deviate from the ground truths (green boxes). Moreover, the noisy PSMNET points also lead to several false positives and negatives. In contrast, the boxes detected by our PSEUDO-LIDAR++, either with SDN alone or with SDN + GDC, align well with the ground-truth boxes, justifying our targeted improvement in estimating faraway depths. In Figure 12, we see one failure case for both PSEUDO-LIDAR and PSEUDO-LIDAR++: the most faraway car is missed, while the LiDAR signal can still detect it, suggesting that stereo-based methods may still have limitations for very faraway objects.
1. What are the main contributions of the paper, particularly in extending the work of Wang et al. (2019)?
2. What are the strengths of the proposed methods, especially in improving 3D object detection using pseudo LiDAR data?
3. Do you have any concerns regarding the proposed approach, such as the conversion of disparity cost volume to depth cost volume?
4. How does the graph diffusion algorithm (GDC) handle the issue of at least one beam of the LiDAR hitting the k-connected local point cloud?
5. Can you provide more details on optimizing the objective functions (7) and (8), including the use of L2 regularization?
6. How do the authors justify using median error in meters for evaluating the performance of different variants of the stereo network? Are there severe outliers, and how do they compare between different methods?
7. Does the paper oversell its results by claiming PL++ with GDC performs significantly better than PL++ without GDC while also achieving comparable results to models with access to full 64-beam LiDAR data?
Review
The paper proposes two extensions to the recent work of (Wang et al., 2019) on 3D object detection with pseudo-LiDAR data. Wang et al. showed that 3D object detection using stereo images as inputs can be significantly improved if the depth map is projected to 3D and treated like a LiDAR point cloud (i.e., using methods that utilize the LiDAR point cloud). This paper shows that one shortcoming of this approach is that the depth uncertainty increases the farther away the objects are. To remedy this, the authors propose to train the stereo estimation network (based on Chang & Chen, 2018) directly with depth outputs, instead of disparity values (inverse depth), by rewriting the loss and converting the cost volume. This already boosts the performance for far-away objects. The authors demonstrate that the usage of a (simulated) low-cost 4-beam LiDAR can further facilitate the detection. For this purpose a graph diffusion algorithm is listed that aligns the pseudo-LiDAR point cloud from the stereo set-up with the depth estimates from the low-cost LiDAR. Simulating the low-cost LiDAR on the KITTI benchmarks shows that this approach further increases the performance of the object detection methods.

In general, I am in favour of accepting the paper as it shows two orthogonal and interesting additions to the pseudo-LiDAR paper of Wang et al. that improve its performance. However, I would like to see some clarifications in the rebuttal.

The proposed stereo network converts a disparity cost volume to a depth cost volume using bilinear interpolation. I agree that the 3D convolutions are more meaningful (given the spacing of the grid cells) on the latter, but why the detour over the disparity cost volume? It should be possible to build the depth cost volume directly, which would lead to decreased memory consumption and speed up the method without any loss in accuracy?

One assumption of the second contribution (GDC) is that at least one beam of the LiDAR will hit the k-connected local point cloud. Can you give some bounds on the likelihood that this happens? Especially for far-away objects it could be unlikely, although it is most beneficial for those objects.

Further, I am missing details on the optimization of (7) and (8). What is meant by slight L2 regularization? In the appendix it is also stated that a slightly different objective is optimized? Finally, the notation could also be improved. The authors use L and G for the LiDAR point cloud and PL and Z for the pseudo-LiDAR point cloud, and then Z' is used for both.

Fig. 4 shows the median error in meters for the different variants of the stereo network. Why has the median been used? Are there severe outliers? If yes, it would also be interesting to quantify those and compare them (e.g., box plots).

In the abstract and in the discussion the authors oversell their results a bit. On the one hand they state that PL++ with GDC performs significantly better than PL++ w/o GDC; on the other hand they also claim that PL++ achieves comparable results to models that have access to the full 64-beam LiDAR data. However, if you compare the differences, the gaps are in several cases almost as big, or bigger, as in the former claim.

Things to improve the paper that did not impact the score:
- In equation (2) you could replace the x with a . (\cdot), or completely remove it
- On page 5: KNN neighbors -> k-nearest neighbors
- Also on page 5: write out W.l.o.g.
- In Tab. 1 it would help to highlight (bold) the best entries per column
ICLR
Title Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving Abstract Detecting objects such as cars and pedestrians in 3D plays an indispensable role in autonomous driving. Existing approaches largely rely on expensive LiDAR sensors for accurate depth information. While recently pseudo-LiDAR has been introduced as a promising alternative, at a much lower cost based solely on stereo images, there is still a notable performance gap. In this paper we provide substantial advances to the pseudo-LiDAR framework through improvements in stereo depth estimation. Concretely, we adapt the stereo network architecture and loss function to be more aligned with accurate depth estimation of faraway objects — currently the primary weakness of pseudo-LiDAR. Further, we explore the idea to leverage cheaper but extremely sparse LiDAR sensors, which alone provide insufficient information for 3D detection, to de-bias our depth estimation. We propose a depthpropagation algorithm, guided by the initial depth estimates, to diffuse these few exact measurements across the entire depth map. We show on the KITTI object detection benchmark that our combined approach yields substantial improvements in depth estimation and stereo-based 3D object detection — outperforming the previous state-of-the-art detection accuracy for faraway objects by 40%. Our code is available at https://github.com/mileyan/Pseudo_Lidar_V2. N/A 1 INTRODUCTION Safe driving in autonomous cars requires accurate 3D detection and localization of cars, pedestrians and other objects. This in turn requires accurate depth information, which can be obtained from LiDAR (Light Detection And Ranging) sensors. Although highly precise and reliable, LiDAR sensors are notoriously expensive: a 64-beam model can cost around $75,000 (USD)1. The alternative is to measure depth through inexpensive commodity cameras. However, in spite of recent dramatic progress in stereo-based 3D object detection brought by pseudo-LiDAR (Wang et al., 2019a), a significant performance gap remains especially for faraway objects (which we want to detect early to allow time for reaction). The trade-off between affordability and safety creates an ethical dilemma. ∗ Equal contributions 1The information is obtained from the automotive LiDAR market report: http://www.woodsidecap. com/wp-content/uploads/2018/04/Yole_WCP-LiDAR-Report_April-2018-FINAL.pdf In this paper we propose a possible solution to this remaining challenge that combines insights from both perspectives. We observe that the higher 3D object localization error of stereo-based systems, compared to LiDAR-based ones, stems entirely from the higher error in depth estimation (after the 3D point cloud is obtained the two approaches are identical (Wang et al., 2019a)). Importantly, this error is not random but systematic: we observe that stereo methods do indeed detect objects with high reliability, yet they estimate the depth of the entire object as either too far or too close. See Figure 1 for an illustration: the red stereo points capture the car but are shifted by about 2m completely outside the ground-truth location (green box). If we can de-bias these depth estimates it should be possible to obtain accurate 3D localization even for distant objects without exorbitant costs. We start by revisiting the depth estimation routine embedded at the heart of state-of-the-art stereobased 3D detection approach (Wang et al., 2019a). 
A major contributor to the systematic depth bias comes from the fact that depth is typically not computed directly. Instead, one first estimates the disparity — the horizontal shift of a pixel between the left and right images — and then inverts it to obtain pixel-wise depth. While the use of deep neural networks has largely improved disparity estimation (Chang & Chen, 2018; Cheng et al., 2018; Mayer et al., 2016; Wang et al., 2019b), designing and learning the networks to optimize the accuracy of disparity estimation simply overemphasizes nearby objects due to the reciprocal transformation. For instance, a unit disparity error (in pixels) for a 5-meter-away object means a 10cm error in depth: the length of a side mirror. The same disparity error for a 50-meter-away object, however, becomes a 5.8m error in depth: the length of an entire car. Penalizing both errors equally means that the network spends more time correcting subtle errors on nearby objects than gross errors on faraway objects, resulting in degraded depth estimates and ultimately poor detection and localization for faraway objects. We thus propose to adapt the stereo network architecture and loss function for direct depth estimation. Concretely, the cost volume that fuses the left-right images and the subsequent 3D convolutions are the key components in stereo networks. Taking the central assumption of convolutions — all neighborhoods can be operated in an identical manner — we propose to construct the cost volume on the grid of depth rather than disparity, enabling 3D convolutions and the loss function to perform exactly on the right scale for depth estimation. We refer to our network as stereo depth network (SDN). See Figure 1 for a comparison of 3D points obtained with SDN (purple) and disparity estimation (red). Although our SDN improves the depth estimates significantly, stereo images are still inherently 2D and it is unclear if they can ever match the accuracy and reliability of a true 3D LiDAR sensor. Although LiDAR sensors with 32 or 64 beams are expensive, LiDAR sensors with only 4 beams are two orders of magnitude cheaper2 and thus easily affordable. The 4 laser beams are very sparse and ill-suited to capture 3D object shapes by themselves, but if paired with stereo images they become the ideal tool to de-bias our dense stereo depth estimates: a single high-precision laser beam may inform us how to correct the depth of an entire car or pedestrian in its path. To this end, we present a novel depth-propagation algorithm, inspired by graph-based manifold learning (Weinberger et al., 2005; Roweis & Saul, 2000; Xiaojin & Zoubin, 2002). In a nutshell, we connect our estimated 3D stereo point cloud locally by a nearest neighbor graph, such that points corresponding to the same object will share many local paths with each other. We match the few but exact LiDAR measurements first with pixels (irrespective of depth) and then with their corresponding 3D points to obtain accurate depth estimates for several nodes in the graph. Finally, we propagate this exact depth information along the graph using a label diffusion mechanism — resulting in a dense and accurate depth map at negligible cost. In Figure 1 we see that the few (yellow) LiDAR measurements are sufficient to position almost all final (blue) points of the entire car within the green ground truth box. We conduct extensive empirical studies of our approaches on the KITTI object detection benchmark (Geiger et al., 2012; 2013) and achieve remarkable results. 
With solely stereo images, we outperform the previous state of the art (Wang et al., 2019a) by 10%. Further adding a cheap 4-beam LiDAR brings another 27% relative improvement — on some metrics, our approach is nearly on par with those based on a 64-beam LiDAR but can potentially save 95% in cost. 2The Ibeo Wide Angle Scanning (ScaLa) sensor with 4 beams costs $600 (USD). In this paper we simulate the 4-beam LiDAR signal on KITTI benchmark (Geiger et al., 2012) by sparsifying the original 64-beam signal. 2 BACKGROUND 3D object detection. Most work on 3D object detection operates on 3D point clouds from LiDAR as input (Li, 2017; Li et al., 2016; Meyer et al., 2019b; Yang et al., 2018a; Du et al., 2018; Shi et al., 2019; Engelcke et al., 2017; Yan et al., 2018; Lang et al., 2019). Frustum PointNet (Qi et al., 2018) applies PointNet (Qi et al., 2017a;b) to the points directly, while Voxelnet (Zhou & Tuzel, 2018) quantizes them into 3D grids. For street scenes, several work finds that processing points from the bird’s-eye view can already capture object contours and locations (Chen et al., 2017; Yang et al., 2018b; Ku et al., 2018). Images have also been used, but mainly to supplement LiDAR (Meyer et al., 2019a; Xu et al., 2018; Liang et al., 2018; Chen et al., 2017; Ku et al., 2018). Early work based solely on images — mostly built on the 2D frontal-view detection pipeline (Ren et al., 2015; He et al., 2017; Lin et al., 2017) — fell far behind in localizing objects in 3D (Li et al., 2019a; Xiang et al., 2015; 2017; Chabot et al., 2017; Mousavian et al., 2017; Chen et al., 2015; Xu & Chen, 2018; Chen et al., 2016; Pham & Jeon, 2017; Chen et al., 2018)3. Pseudo-LiDAR. This gap has been reduced significantly recently with the introduction of the pseudoLiDAR framework proposed in (Wang et al., 2019a). This framework applies a drastically different approach from previous image-based 3D object detectors. Instead of directly detecting the 3D bounding boxes from the frontal view of a scene, pseudo-LiDAR begins with image-based depth estimation, predicting the depth Z(u, v) of each image pixel (u, v). The resulting depth map Z is then back-projected into a 3D point cloud: a pixel (u, v) will be transformed to (x, y, z) in 3D by z = Z(u, v), x = (u− cU )× z fU , y = (v − cV )× z fV , (1) where (cU , cV ) is the camera center and fU and fV are the horizontal and vertical focal length. The 3D point cloud is then treated exactly as LiDAR signal — any LiDAR-based 3D detector can be applied seamlessly. By taking the state-of-the-art algorithms from both ends (Chang & Chen, 2018; Ku et al., 2018; Qi et al., 2018), pseudo-LiDAR obtains the highest image-based performance on the KITTI object detection benchmark (Geiger et al., 2012; 2013). Our work builds upon this framework. Stereo disparity estimation. Pseudo-LiDAR relies heavily on the quality of depth estimation. Essentially, if the estimated pixel depths match those provided by LiDAR, pseudo-LiDAR with any LiDAR-based detector should be able to achieve the same performance as that obtained by applying the same detector to the LiDAR signal. According to (Wang et al., 2019a), depth estimation from stereo pairs of images (Mayer et al., 2016; Yamaguchi et al., 2014; Chang & Chen, 2018) are more accurate than that from monocular (i.e., single) images (Fu et al., 2018; Godard et al., 2017) for 3D object detection. We therefore focus on stereo depth estimation, which is routinely obtained from estimating disparity between images. 
A disparity estimation algorithm takes a pair of left-right images Il and Ir as input, captured from a pair of cameras with a horizontal offset (i.e., baseline) b. Without loss of generality, we assume that the algorithm treats the left image, Il, as reference and outputs a disparity map D recording the horizontal disparity to Ir for each pixel (u, v). Ideally, Il(u, v) and Ir(u, v +D(u, v)) will picture the same 3D location. We can therefore derive the depth map Z via the following transform, Z(u, v) = fU × b D(u, v) (fU : horizontal focal length). (2) A common pipeline of disparity estimation is to first construct a 4D disparity cost volume Cdisp, in which Cdisp(u, v, d, :) is a feature vector that captures the pixel difference between Il(u, v) and Ir(u, v+d). It then estimates the disparity D(u, v) for each pixel (u, v) according to the cost volume Cdisp. One basic algorithm is to build a 3D cost volume withCdisp(u, v, d) = ‖Il(u, v)−Ir(u, v+d)‖2 and determine D(u, v) as argmind Cdisp(u, v, d). Advanced algorithms exploit more robust features in constructingCdisp and perform structured prediction forD. In what follows, we give an introduction of PSMNet (Chang & Chen, 2018), a state-of-the-art algorithm used in (Wang et al., 2019a). PSMNet begins with extracting deep feature maps hl and hr from Il and Ir, respectively. It then constructs Cdisp(u, v, d, :) by concatenating features of hl(u, v) and hr(u, v + d), followed by layers 3Recently, Srivastava et al. (2019) proposed to lift 2D monocular images to 3D representations (e.g., bird’s-eye view (BEV) images) and achieved promising monocular-based 3D object detection results. of 3D convolutions. The resulting 3D tensor Sdisp, with the feature channel size ending up being one, is then used to derive the pixel disparity via the following weighted combination, D(u, v) = ∑ d softmax(−Sdisp(u, v, d))× d, (3) where softmax is performed along the 3rd dimension of Sdisp. PSMNet can be learned end-to-end, including the image feature extractor and 3D convolution kernels, to minimize the disparity error∑ (u,v)∈A `(D(u, v)−D?(u, v)), (4) where ` is the smooth L1 loss, D? is the ground truth map, andA contains pixels with ground truths. 3 STEREO DEPTH NETWORK (SDN) A stereo network designed and learned to minimize the disparity error (cf. Equation 4) may over-emphasize nearby objects with smaller depths and therefore perform poorly in estimating depths for faraway objects. To see this, note that Equation 2 implies that for a given error in disparity δD, the error in depth δZ increases quadratically with depth: Z ∝ 1 D ⇒ δZ ∝ 1 D2 δD ⇒ δZ ∝ Z2δD. (5) The middle term is obtained by differentiating Z(D) w.r.t. D. In particular, using the settings on the KITTI dataset (Geiger et al., 2012; 2013), a single pixel error in disparity implies only a 0.1m error in depth at a depth of 5 meters, but a 5.8m error at a depth of 50 meters. See Figure 2 for a mapping from disparity to depth. Depth Loss. We propose two changes to adapt stereo networks for direct depth estimation. First, we learn stereo networks to directly optimize the depth loss∑ (u,v)∈A `(Z(u, v)− Z?(u, v)). (6) Z and Z? can be obtained from D and D? using Equation 2. The change from the disparity loss to the depth loss corrects the disproportionally strong emphasis on tiny depth errors of nearby objects — a necessary but still insufficient change to overcome the problems of disparity estimation. Depth Cost Volume. 
To facilitate accurate depth learning (rather than disparity) we need to further address the internals of the depth estimation pipeline. A crucial source of error is the 3D convolutions within the 4D disparity cost volume, where the same kernels are applied for the entire cost volume. This is highly problematic as it implicitly assumes that the effect of a convolution is homogeneous throughout — which is clearly violated by the reciprocal depth to disparity relation (Figure 2). For example, it may be completely appropriate to locally smooth two neighboring pixels with disparity 85 and 86 (changing the depth by a few cm to smooth out a surface), whereas applying the same kernel for two pixels with disparity 5 and 6 could easily move the 3D points by 10m or more. Taking this insight and the central assumption of convolutions — all neighborhoods can be operated upon in an identical manner — into account, we propose to instead construct the depth cost volume Cdepth, in which Cdepth(u, v, z, :) will encode features describing how likely the depth Z(u, v) of pixel (u, v) is z. The subsequent 3D convolutions will then operate on the grid of depth, rather than disparity, affecting neighboring depths identically, independent of their location. The resulting 3D tensor Sdepth is then used to predict the pixel depth similar to Equation 3 Z(u, v) = ∑ z softmax(−Sdepth(u, v, z))× z. We construct the new depth volume, Cdepth, based on the intuition that Cdepth(u, v, z, :) and Cdisp ( u, v, fU × b z , : ) should lead to equivalent “cost”. To this end, we apply a bilinear interpolation to construct Cdepth from Cdisp using the depth-to-disparity transform in Equation 2. Specifically, we consider disparity in the range of [0, 191] following PSMNet (Chang & Chen, 2018), and consider depth in the range of [1m, 80m] and set the grid of depth in Cdepth to be 1m. Figure 5 (top) depicts our stereo depth network (SDN) pipeline. Crucially, all convolution operations are operated on Cdepth exclusively. Figure 4 compares the median values of absolute depth estimation errors using the disparity cost volume (i.e., PSMNet) and the depth cost volume (SDN) (see subsection D.5 for detailed numbers). As expected, for faraway depth, SDN leads to drastically smaller errors with only marginal increases in the very near range (which disparity based methods over-optimize). See the appendix for the detailed setup and more discussions. 4 DEPTH CORRECTION Our SDN significantly improves depth estimation and more precisely renders the object contours (see Figure 3). However, there is a fundamental limitation in stereo because of the discrete nature of pixels: the disparity, being the difference in the horizontal coordinate between corresponding pixels, has to be quantized at the level of individual pixels while the depth is continuous. Although the quantization error can be alleviated with higher resolution images, the computational depth prediction cost scales cubically with resolution— pushing the limits of GPUs in autonomous vehicles. We therefore explore a hybrid approach by leveraging a cheap LiDAR with extremely sparse (e.g., 4 beams) but accurate depth measurements to correct this bias. We note that such sensors are too sparse to capture object shapes and cannot be used alone for detection. However, by projecting the LiDAR points into the image plane we obtain exact depths on a small portion of “landmark” pixels. 
We present a graph-based depth correction (GDC) algorithm that effectively combines the dense stereo depth that has rendered object shapes and the sparse accurate LiDAR measurements. Conceptually, we expect the corrected depth map to have the following properties: globally, landmark pixels associated with LiDAR points should possess the exact depths; locally, object shapes captured by neighboring 3D points, back-projected from the input depth map (cf. Equation 1), should be preserved. Figure 5 (bottom) illustrates the algorithm. Input Matching. We take as input the two point clouds from LiDAR (L) and Pseudo-LiDAR (PL) by stereo depth estimation. The latter is obtained by converting pixels (u, v) with depth z to 3D points (xu, yv, z). First, we characterize the local shapes by the directed K-nearest-neighbor (KNN) graph in the PL point cloud (using accelerated KD-Trees (Shevtsov et al., 2007)) that connects each 3D point to its KNNs with appropriate weights. Similarly, we can project the 3D LiDAR points onto pixel locations (u, v) and match them to corresponding 3D stereo points. Without loss of generality, we assume that we are given “ground truth” LiDAR depth for the first n points and no ground truth for the remaining m points. We refer to the 3D stereo depth estimates as Z ∈ Rn+m and the LiDAR depth ground-truth as G ∈ Rn. Edge weights. To construct the KNN graph in 3D we ignore the LiDAR information on the first n points and only use their predicted stereo depth in Z. Let Ni denote the set of k neighbors of the ith point. Further, let W ∈ R(n+m)×(n+m) denote the weight matrix, where Wij denotes the edge-weight between points i and j. Inspired by prior work in manifold learning (Roweis & Saul, 2000; Weinberger et al., 2005) we choose the weights to be the coefficients that reconstruct the depth of any point from the depths of its neighbors inNi. We can solve for these weights with the following constrained quadratic optimization problem: W = argminW ‖Z −WZ‖22, s.t. W1 = 1 and Wij = 0 if j /∈ Ni. (7) Here 1 ∈ Rn+m denotes the all-ones vector. As long as we pick k > 3 and the points are in general position there are infinitely many solutions that satisfy Z =WZ, and we pick the solution with the minimum L2 norm (obtained with slight L2 regularization). Depth Correction. Let us denote the corrected depth values as Z ′ ∈ Rn+m, with Z ′ = [Z ′L;Z ′PL] and Z ′L ∈ Rn and Z ′PL ∈ Rm, where Z ′L are the depth values of points with LiDAR ground-truth and Z ′PL otherwise. For the n points with LiDAR measurements we update the depth to the (ground truth) values Z ′L = G. We then solve for Z ′ PL given G and the weighted KNN graph encoded in W . Concretely, we update the remaining depths Z ′PL such that the depth of any point i can still be be reconstructed with high fidelity as a weighted sum of its KNNs’ depths using the learned weights W ; i.e. if point i : 1 ≤ i ≤ n is moved to its new depth Gi, then its neighbors in Ni must also be corrected such that Gi ≈ ∑ j∈Ni WijZ ′ j . Further, the neighbors’ neighbors must be corrected and the depth of the few n points propagates across the entire graph. We can solve for the final Z ′ directly with another quadratic optimization: Z ′ = argminZ′ ‖Z ′ −WZ ′‖2, s.t. Z ′1:n = G. (8) To illustrate the correction process, imagine the simplest case where the depth of only a single point (n = 1) is updated to G1 = Z1 + δ. A new optimal depth for Equation 8 is to move all the remaining points similarly, i.e. 
Z ′ = Z + 1δ: as Z =WZ and W1 = 1 we must have W (Z + 1δ) = Z + 1δ. In the setting with n > 1, the least-squares loss ensures a soft diffusion between the different LiDAR depth estimates. Both optimization problems in Equation 7 and Equation 8 can be solved exactly and efficiently with sparse matrix solvers. We summarize the procedure as an algorithm in the appendix. From the view of graph-based manifold learning, our GDC algorithm is reminiscent of locally linear embeddings (Roweis & Saul, 2000) with landmarks to guide the final solution (Weinberger et al., 2005). Figure 1 illustrates vividly how the initial 3D point cloud from SDN (purple) of a car in the KITTI dataset is corrected with a few sparse LiDAR measurements (yellow). The resulting points (blue) are right inside the ground-truth box and clearly show the contour of the car. Figure 4 shows the additional improvement from the GDC (blue) over the pure SDN depth estimates (see subsection D.5 for detailed numbers). The error (calculated only on non-landmark pixels) is corrected over the entire image where many regions have no LiDAR measurements. This is because that the pseudo-LiDAR point cloud is sufficiently dense and we choose k to be large enough (in practice, we use k = 10) such that the KNN graph is typically connected (or consists of few large connected components). See subsection D.6 for more analysis. For objects such as cars the improvements through GDC are far more pronounced, as these typically are touched by the four LiDAR beams and can be corrected effectively. 5 EXPERIMENTS 5.1 SETUP We refer to our combined method (SDN and GDC) for 3D object detection as PSEUDO-LIDAR++ (PL++ in short). To analyze the contribution of each component, we evaluate SDN and GDC independently and jointly across several settings. For GDC we set k = 10 and consider adding signal from a (simulated) 4-beam LiDAR, unless stated otherwise. Dataset, Metrics, and Baselines. We evaluate on the KITTI dataset (Geiger et al., 2013; 2012), which contains 7,481 and 7,518 images for training and testing. We follow (Chen et al., 2015) to separate the 7,481 images into 3,712 for training and 3,769 validation. For each (left) image, KITTI provides the corresponding right image, the 64-beam Velodyne LiDAR point cloud, the camera calibration matrices, and the bounding boxes. We focus on 3D object detection and bird’s-eye-view (BEV) localization and report results on the validation set. Specifically, we focus on the “car” category, following Chen et al. (2017) and Xu et al. (2018). We report average precision (AP) with IoU (Intersection over Union) thresholds at 0.5 and 0.7. We denote AP for the 3D and BEV tasks by AP3D and APBEV. KITTI defines the easy, moderate, and hard settings, in which objects with 2D box heights smaller than or occlusion/truncation levels larger than certain thresholds are disregarded. We compare to four stereo-based detectors: PSEUDO-LIDAR (PL in short) (Wang et al., 2019a), 3DOP (Chen et al., 2015), S-RCNN (Li et al., 2019b), and MLF-STEREO (Xu & Chen, 2018). Stereo depth network (SDN). We use PSMNET (Chang & Chen, 2018) as the backbone for our stereo depth estimation network (SDN). We follow Wang et al. (2019a) to pre-train SDN on the synthetic Scene Flow dataset (Mayer et al., 2016) and fine-tune it on the 3,712 training images of KITTI. We obtain the depth ground truth by projecting the corresponding LiDAR points onto images. We also train a PSMNET in the same way for comparison, which minimizes disparity error. 3D object detection. 
3D object detection. We apply three algorithms: AVOD (Ku et al., 2018), PIXOR (Yang et al., 2018b), and P-RCNN (Shi et al., 2019). All utilize information from LiDAR and/or monocular images. We use the released implementations of AVOD (specifically, AVOD-FPN) and P-RCNN. We implement PIXOR ourselves with a slight modification to include visual information (denoted as PIXOR*). We train all models from scratch on the 3,712 training images, replacing the LiDAR points with pseudo-LiDAR data generated from stereo depth estimation. See the appendix for details.

Sparser LiDAR. We simulate a sparser LiDAR signal with fewer beams by first projecting the 64-beam LiDAR points onto a 2D plane of horizontal and vertical angles. We quantize the vertical angles into 64 levels with an interval of 0.4°, which is close to the specification of the 64-beam LiDAR. We keep the points that fall into a subset of beams to mimic the sparser signal. See the appendix for details.

5.2 EXPERIMENTAL RESULTS

Results on the KITTI val set. We summarize the main results on KITTI object detection in Table 1. Several important trends can be observed: 1) Our PL++ with enhanced depth estimations by SDN and GDC yields consistent improvement over PL across all settings; 2) PL++ with GDC refinement of 4-beam LiDAR (Input: L# + S) performs significantly better than PL++ with only stereo inputs (Input: S); 3) PL experiences a substantial drop in accuracy from IoU at 0.5 to 0.7 for the hard setting. This suggests that while PL detects faraway objects, it mislocalizes them, likely placing them at the wrong depth. This causes the object to be considered a missed detection at higher overlap thresholds. Interestingly, this is where we see the largest gain, from PL: P-RCNN (APBEV = 52.7) to PL++: P-RCNN (APBEV = 73.4) with input L# + S. Note that the majority of the gain comes from GDC, as the stereo-only version of PL++ only improves the score to 57.3 APBEV. 4) The gap between PL++ and LiDAR is at most 13% APBEV, even at the hard setting under IoU at 0.7. 5) For IoU at 0.5, with the aid of only 4 LiDAR beams, PL++ (SDN + GDC) achieves results comparable to models with 64-beam LiDAR signals.

Results on the KITTI test set. Table 2 summarizes results on the car category on the KITTI test set. We see a similar gap between our methods and LiDAR as on the validation set, suggesting that our improvement is not particular to the validation data. Our approach without LiDAR refinement (pure SDN) ranks first among all image-based algorithms on the KITTI leaderboard. In the following, we conduct a series of experiments to analyze the performance gain from our approaches and discuss several key observations. We mainly experiment with P-RCNN: we find that the results with AVOD and PIXOR* follow similar trends and thus report them in the appendix.

Depth loss and depth cost volume. To turn a disparity network (e.g., PSMNET) into SDN, there are two changes: 1) change the disparity loss into the depth loss; 2) change the disparity cost volume into the depth cost volume. In Table 3, we uncover the effect of these two changes separately. On the APBEV/AP3D (moderate) metric, the depth loss gives us a 6%/2% improvement and the depth cost volume brings another 2-3% gain4.

4We note that the degree of improvement brought by the depth loss and the depth cost volume depends on the 3D detector in use. Table 3 suggests that the depth loss provides more gain than the depth cost volume (for P-RCNN). In Table 6, however, we see that the depth cost volume provides comparable or even bigger gains than the depth loss (for PIXOR* and AVOD). Nevertheless, Table 3 and Table 6 both suggest the compatibility of the two approaches: combining them leads to the best performance.
Impact of sparse LiDAR beams. We leverage 4-beam LiDAR to correct stereo depth using GDC. However, it is possible that the gains in 3D object detection come entirely from the new LiDAR sensor and that the stereo estimates are immaterial. In Table 4, we study this question by comparing the detection results against those of models using 1) the 4-beam LiDAR point clouds alone and 2) pseudo-LiDAR point clouds with the depths of landmark pixels replaced by 4-beam LiDAR: i.e., in depth correction, we only correct the depths of the landmark pixels without propagation. It can be seen that 4-beam LiDAR by itself performs fairly well at locating faraway objects but cannot capture nearby objects precisely, while simply replacing pseudo-LiDAR with LiDAR at the landmark pixels prevents the model from detecting faraway objects accurately. In contrast, our proposed GDC method effectively combines the merits of the two signals, achieving performance superior to using either alone.

Pedestrian and cyclist detection. For a fair comparison to (Wang et al., 2019a), we apply F-POINTNET (Qi et al., 2018) for detecting pedestrians and cyclists. Table 5 shows the results: our methods significantly boost the performance.

Qualitative visualization. In Figure 6, we show a qualitative comparison of detection results on a randomly chosen scene in the KITTI object validation set, using P-RCNN (with confidence > 0.95) with different input signals. Specifically, we show the results from the frontal-view images and the bird's-eye-view (BEV) point clouds. In the BEV map, the observer is on the left-hand side looking to the right. It can be seen that the point clouds generated by PSEUDO-LIDAR++ (SDN alone or SDN +GDC) align better with LiDAR than that generated by PSEUDO-LIDAR (PSMNET). For nearby objects (i.e., bounding boxes close to the left in the BEV map), we see that P-RCNN with any point cloud performs fairly well in localization. However, for faraway objects (i.e., bounding boxes close to the right), PSEUDO-LIDAR with depth estimated by PSMNET predicts objects (red boxes) that deviate from the ground truths (green boxes). Moreover, the noisy PSMNET points also lead to false negatives. In contrast, the boxes detected by our PSEUDO-LIDAR++, either with SDN alone or with SDN +GDC, align well with the ground-truth boxes, justifying our targeted improvement in estimating faraway depths.

Additional results, analyses, qualitative visualization and discussions. We provide results of PSEUDO-LIDAR++ with fewer LiDAR beams, comparisons to depth completion methods, analysis of depth quality and detection accuracy, run time, failure cases, and more qualitative results in the appendix. With simple optimizations, GDC runs in 90 ms/frame using a single GPU (7.7 ms for KD-tree construction and search).

6 CONCLUSION

In this paper we made two contributions to improve 3D object detection in autonomous vehicles without expensive LiDAR. First, we identify disparity estimation as a main source of error for stereo-based systems and propose a novel approach to learn depth directly end-to-end instead of through disparity estimates. Second, we advocate that one should not use expensive LiDAR sensors to learn the local structure and depth of objects.
Instead, one can use commodity stereo cameras for the former and a cheap sparse LiDAR to correct the systematic bias in the resulting depth estimates. We provide a novel graph propagation algorithm that integrates the two data modalities and propagates the sparse yet accurate depth estimates using two sparse matrix solvers. The resulting system, PSEUDO-LIDAR++ (SDN + GDC), performs almost on par with $75,000 64-beam LiDAR systems while requiring only 4 beams and two commodity cameras, which could be obtained at a total cost of less than $1,000.

ACKNOWLEDGMENTS

This research is supported by grants from the National Science Foundation NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, and TRIPODS-1740822), the Office of Naval Research DOD (N00014-17-1-2175), the Bill and Melinda Gates Foundation, and the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875). We are thankful for generous support by Zillow and SAP America Inc. We thank Gao Huang for helpful discussion.

Appendix

We provide details omitted in the main text.
• Appendix A: details on constructing the depth cost volume (section 3 of the main paper).
• Appendix B: detailed implementation of the GDC algorithm (section 4 of the main paper).
• Appendix C: additional details of experimental setups (subsection 5.1 of the main paper).
• Appendix D: additional results, analyses, and discussions (subsection 5.2 of the main paper).

A DEPTH COST VOLUME

With Equation 2, we know where each grid (u, v, z) in Cdepth corresponds to in Cdisp (the location may not be on a grid). We can then obtain the features for each grid in Cdepth (i.e., Cdepth(u, v, z, :)) by bilinear interpolation over the features on the grids of Cdisp around the non-grid location (i.e., (u, v, fU × b / z)). We apply the "grid_sample" function in PyTorch for the bilinear interpolation. We use PSMNET (Chang & Chen, 2018) as the backbone for our stereo depth estimation network (SDN). The only change is to construct the depth cost volume before performing the 3D convolutions.

B GRAPH-BASED DEPTH CORRECTION (GDC) ALGORITHM

Here we present the GDC algorithm in detail (see Algorithm 1). The two steps described in the main paper can easily be turned into two (sparse) linear systems and then solved using Lagrange multipliers. For the first step (i.e., Equation 7), we solve the same problem as in the main text but switch the objective to minimizing the L2-norm of W and set Z − WZ = 0 as a constraint5. We note that Equation 7 is an under-constrained problem with infinitely many solutions; to identify a single solution, we add a small L2 regularization term to the objective (as mentioned in the main text). For the second step (i.e., Equation 8), we use Conjugate Gradient (CG) to iteratively solve the sparse linear system.

5These two problems yield identical solutions, but we found the second one easier to solve in practice.

Algorithm 1: Graph-based depth correction (GDC). ";" stands for column-wise concatenation.
Input: Stereo depth map Z ∈ R^((n+m)×1), the corresponding pseudo-LiDAR (PL) point cloud P ∈ R^((n+m)×3), and LiDAR depths G ∈ R^(n×1) on the first n pixels.
Output: Corrected depth map Z′ ∈ R^((n+m)×1)
function GDC(Z, P, G, K)
  Solve: W = argmin_{W ∈ R^((n+m)×(n+m))} ‖W‖_2
    s.t. Z − W·Z = 0;
    W_ij = 0 if j ∉ N_i (i.e., the set of neighbors of the i-th point according to P);
    Σ_j W_ij = 1 for all i = 1, ..., n+m.
  Solve: Z′_PL = argmin_{Z′_PL ∈ R^(m×1)} ‖[G; Z′_PL] − W[G; Z′_PL]‖_2
  return [G; Z′_PL]
end
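As a concrete illustration of Algorithm 1, the sketch below solves step 1 row by row (each row of W only involves the k neighbors of a point, so the minimum-norm weights under the two equality constraints have a small closed form) and step 2 as a sparse least-squares problem. This is a simplified reading, not the authors' released implementation: the eps regularizer reflects the under-constrained note above, and lsqr stands in for the conjugate-gradient solve.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr
from sklearn.neighbors import NearestNeighbors

def gdc(Z, P, G, k=10, eps=1e-6):
    """Graph-based depth correction, a simplified sketch of Algorithm 1.
    Z: (n+m,) stereo depths; P: (n+m, 3) pseudo-LiDAR points;
    G: (n,) LiDAR depths for the first n pixels (the 'landmarks')."""
    N, n = Z.shape[0], G.shape[0]
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(P)
    _, idx = nbrs.kneighbors(P)           # idx[:, 0] is the point itself

    rows, cols, vals = [], [], []
    for i in range(N):
        nb = idx[i, 1:]                   # the k neighbors of point i
        C = np.vstack([Z[nb], np.ones(k)])            # constraints: w.z = z_i, w.1 = 1
        d = np.array([Z[i], 1.0])
        # Minimum-norm w subject to Cw = d (eps regularizes the tiny 2x2 system).
        w = C.T @ np.linalg.solve(C @ C.T + eps * np.eye(2), d)
        rows += [i] * k; cols += list(nb); vals += list(w)

    W = sparse.csr_matrix((vals, (rows, cols)), shape=(N, N))
    A = sparse.eye(N, format="csr") - W   # step 2: minimize ||A [G; Z']||^2 over Z'
    A_landmark, A_unknown = A[:, :n], A[:, n:]
    Z_pl = lsqr(A_unknown, -(A_landmark @ G))[0]
    return np.concatenate([G, Z_pl])
```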
C EXPERIMENTAL SETUP

C.1 SPARSE LIDAR GENERATION

In this section, we explain in detail how we generate sparser LiDAR signals with fewer beams from the 64-beam LiDAR point clouds of the KITTI dataset. For every point (x_i, y_i, z_i) ∈ R³ of the point cloud in one scene (in the LiDAR coordinate system: x front, y left, z up, with (0, 0, 0) at the location of the LiDAR sensor), we compute the elevation angle θ_i to the LiDAR sensor as

θ_i = arccos( √(x_i² + y_i²) / √(x_i² + y_i² + z_i²) ).

We order the points by their elevation angles and slice them into separate lines with a step of 0.4°, starting from −23.6° (close to the specification of the Velodyne 64-beam LiDAR). We select the LiDAR points whose elevation angles fall within [−2.4°, −2.0°) ∪ [−0.8°, −0.4°) as the 2-beam LiDAR signal, and similarly those within [−2.4°, −2.0°) ∪ [−1.6°, −1.2°) ∪ [−0.8°, −0.4°) ∪ [0.0°, 0.4°) as the 4-beam LiDAR signal. We choose them such that consecutive lines have a 0.8° interval, following the specification of the "cheap" 4-beam LiDAR ScaLa. We visualize these sparsified LiDAR point clouds from the bird's-eye view on one example scene in Figure 7.
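A sketch of this sparsification procedure follows; the beam intervals mirror the description above, while the function name and the signing of the arccos output (so that beams below the horizon get negative angles) are our own additions.

```python
import numpy as np

def sparsify_lidar(points, beams):
    """Keep only points whose elevation angles fall in the given beam intervals.
    points: (N, 3) array in the LiDAR frame (x: front, y: left, z: up).
    beams: list of (low_deg, high_deg) elevation intervals, e.g. the 4-beam set
    [(-2.4, -2.0), (-1.6, -1.2), (-0.8, -0.4), (0.0, 0.4)]."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    horiz = np.sqrt(x**2 + y**2)
    theta = np.degrees(np.arccos(horiz / np.sqrt(horiz**2 + z**2)))
    theta = np.where(z < 0, -theta, theta)   # sign the angle: below horizon -> negative
    mask = np.zeros(len(points), dtype=bool)
    for lo, hi in beams:
        mask |= (theta >= lo) & (theta < hi)
    return points[mask]
```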
C.2 3D OBJECT DETECTION ALGORITHMS

In this section, we provide more details on how we train the 3D object detection models on pseudo-LiDAR point clouds. For AVOD, we use the same model as in (Wang et al., 2019a). For P-RCNN, we use the implementation provided by the authors. Since the P-RCNN model exploits the sparse nature of LiDAR point clouds, when training it with pseudo-LiDAR input we first sparsify the point clouds into 64 beams using the method described in subsection C.1. For PIXOR*, we implement the same base model structure and data augmentation specified by Yang et al. (2018b), but without the "decode fine-tune" step and focal loss. Inspired by the trick in (Liang et al., 2018), we add an image feature branch (ResNet-18 by He et al. (2016)) alongside the LiDAR branch, and concatenate the corresponding image features onto the LiDAR branch at each stage. We train PIXOR* using RMSProp with momentum 0.9 and learning rate 10⁻⁵ (decayed by 10 after 50 and 80 epochs) for 90 epochs. The BEV evaluation results are similar to the reported results (see Table 1).

D ADDITIONAL RESULTS, ANALYSES, AND DISCUSSIONS

D.1 ABLATION STUDY

In Table 6 and Table 7 we provide more experimental results aligned with the experiments in subsection 5.2 of the main paper. We conduct the same experiments on two other models, AVOD and PIXOR*, and observe similar trends of improvement brought by learning with the depth loss (from PSMNET to PSMNET +DL), constructing the depth cost volume (from PSMNET +DL to SDN), and applying GDC to correct the bias in stereo depth estimation (comparing SDN +GDC with SDN). We note that, in Table 7, the results of AVOD (or PIXOR*) with SDN + L# are worse than those with L# at the moderate and hard settings. This observation differs from that in Table 4, where P-RCNN with SDN + L# outperforms P-RCNN with L# in 5 out of 6 comparisons. We hypothesize that this is because P-RCNN takes sparsified inputs (see subsection C.2) while AVOD and PIXOR* take dense inputs. In the latter case, the four replaced LiDAR beams in SDN + L# are dominated by the dense stereo depths, so that SDN + L# is worse than L#.

D.2 USING FEWER LIDAR BEAMS

In PL++ (i.e., SDN + GDC), we use 4-beam LiDAR to correct the predicted point cloud. In Table 8, we investigate using fewer (and also potentially cheaper) LiDAR beams for depth correction. We observe that even with 2 beams, GDC can already manage to combine the two signals and yield better performance than using the 2-beam LiDAR or pseudo-LiDAR alone.

D.3 DEPTH CORRECTION VS. DEPTH COMPLETION

Our GDC algorithm is a general, simple, inference-time approach that requires no training, unlike prior learning-based approaches to depth completion. Here we empirically compare to PNP (Wang et al., 2018), a recently proposed depth completion algorithm that, like GDC, is compatible with any (even stereo) depth estimation network. We use SDN for the initial depth estimation, and evaluate GDC and PNP by randomly selecting a fraction of LiDAR points as provided ground truths and calculating the median absolute depth errors on the remaining LiDAR points. As shown in Figure 8, GDC outperforms PNP by a large margin. Table 9 shows a further comparison to PNP on 3D object detection. We apply PNP and GDC respectively to correct the depth estimates obtained from SDN, train a P-RCNN or PIXOR* using the resulting pseudo-LiDAR points on the KITTI training set, and compare the detection results on the KITTI validation set. In either case, SDN + GDC outperforms SDN + PNP by a notable margin.

D.4 RUN TIME

With the following implementation optimizations:
1. sub-sampling the pseudo-LiDAR points: keeping at most one point within each cube of size 0.1 m³;
2. limiting the pseudo-LiDAR points used for depth correction: keeping only those whose elevation angles are within [−3.0°, 0.4°) (the range of the 4-beam LiDAR plus 0.6°; see subsection C.1 for details);
3. after performing GDC for depth correction, combining the corrected pseudo-LiDAR points with those outside the elevation range [−3.0°, 0.4°);
GDC runs in 90 ms/frame using a single GPU (7.7 ms for KD-tree construction and search, 46.5 ms for solving W, and 26.9 ms for solving Z′_PL) with negligible performance difference (see Table 10). For consistency, all results reported in the main paper are based on the naive implementation. Further speedups can be achieved with CUDA programming for GPUs.

D.5 STEREO DEPTH VS. DETECTION

We quantitatively evaluate the stereo depths by median errors in Figure 4 of the main text (numerical values are listed in Table 11). In Table 12 we further show the mean errors with standard deviations (the large standard deviations likely result from outliers such as occluded pixels around object boundaries). For both tables, we divide the pixels into beams according to their ground-truth depths, and evaluate on pixels not on the 4-beam LiDAR. The improvement of SDN (+GDC) over PSMNET becomes larger as we consider pixels farther away. Table 13 further demonstrates the relationship between depth quality and detection accuracy: SDN (+GDC) significantly outperforms PSMNET for detecting faraway cars. We note that, for very faraway cars (i.e., 50-70 m), the number of training object instances is extremely small, which suggests that the very poor performance might be partially caused by over-fitting. Further, we apply the same evaluation procedure but group the errors by the shortest distance between each PSEUDO-LIDAR point and the 4-beam LiDAR points in Figure 9. We can see that the closer the PSEUDO-LIDAR points are to the 4-beam LiDAR points, the bigger the improvement GDC can bring.

D.6 CONNECTED COMPONENTS IN KNN GRAPHS OF PSEUDO-LIDAR POINTS BY SDN

Here, we provide an empirical analysis of the relationship between the k we choose in building the K-nearest-neighbor graph of PSEUDO-LIDAR points by SDN and the number of connected components of that graph.
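This relationship is straightforward to measure with standard tools; one possible sketch (function and variable names ours) is:

```python
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

def num_components(points, k):
    """Number of connected components of the (symmetrized) kNN graph over 3D points."""
    A = kneighbors_graph(points, n_neighbors=k, mode="connectivity")
    A = A + A.T                                  # treat neighbor edges as undirected
    n_comp, _ = connected_components(A, directed=False)
    return n_comp

# e.g., sweep k to see when the pseudo-LiDAR graph becomes (nearly) connected:
# for k in range(2, 16): print(k, num_components(cloud, k))
```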
We show the results on the KITTI validation set in Figure 11. It can be seen that with k ≥ 9, the average number of connected components in the graph is smaller than 2.

D.7 FAILURE CASES AND WEAKNESSES

There is still a gap between our approach and LiDAR for faraway objects (see Table 13). We further analyze APBEV at different IoU thresholds in Figure 10. For low IoU (0.2-0.5), SDN (+GDC) is on par with LiDAR, but the gap increases significantly at high IoU thresholds. This suggests that the predominant gap between our approach and LiDAR is due to mislocalization, perhaps caused by residual inaccuracies in depth.

D.8 QUALITATIVE RESULTS

In Figures 6, 12, 13, and 14, we show detection results using P-RCNN (with confidence > 0.95) with different input signals on four randomly chosen scenes in the KITTI object validation set. Specifically, we show the results from the frontal-view images and the bird's-eye-view (BEV) point clouds. In the BEV map, the observer is on the left-hand side looking to the right. It can be seen that the point clouds generated by PSEUDO-LIDAR++ (SDN alone or SDN +GDC) align better with LiDAR than those generated by PSEUDO-LIDAR (PSMNET). For nearby objects (i.e., bounding boxes close to the left in the BEV map), we see that P-RCNN with any point cloud performs fairly well in localization. However, for faraway objects (i.e., bounding boxes close to the right), PSEUDO-LIDAR with depth estimated by PSMNET predicts objects (red boxes) that deviate from the ground truths (green boxes). Moreover, the noisy PSMNET points also lead to several false positives or negatives. In contrast, the boxes detected by our PSEUDO-LIDAR++, either with SDN alone or with SDN +GDC, align well with the ground-truth boxes, justifying our targeted improvement in estimating faraway depths. In Figure 12, we see one failure case for both PSEUDO-LIDAR and PSEUDO-LIDAR++: the most faraway car is missed, while the LiDAR signal can still detect it, suggesting that for very faraway objects stereo-based methods may still have limitations.
1. What is the focus of the paper regarding Pseudo-Lidar?
2. What are the strengths of the proposed approach, particularly in terms of accuracy and 3D object detection performance?
3. Are there any concerns or areas for improvement regarding the method's comparison with other works and its visual representation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Summary: This paper describes a new method for Pseudo-Lidar, that is, the reliable recovery of a 3D point cloud from 2D inputs and the subsequent detection of 3D objects from the point cloud. The authors focus on improving the accuracy of the reconstructed point cloud by formulating a loss in depth, rather than disparity, space, and by using sparse true lidar readings to align the estimates. These techniques lead to a boost in 3D object detection performance.

Strengths: The Pseudo-Lidar method has been well received, and it appears that this paper makes a nice improvement on the previous one in terms of 3D point cloud accuracy. While the image results here are convincing, I would have liked to see an added empirical evaluation of precisely how accurate the resulting 3D reconstructions are, measured against ground-truth 3D lidar on a test set. Do the point clouds only look accurate locally (and perhaps give good shape near known objects due to regularity), or are the metric results also quite strong? I found the authors' technical analysis and method description to be clear and well motivated. None of the math or formulations is entirely surprising, but they are new to this area, so this appeared to me as nice, sensible progress. This area is closely tied to the self-driving car application, and thus bottom-line performance is the key measuring stick for impact on practitioners. For 3D object detection, the main goal of interest, the authors show up to 20% improvements for their combined method over quite recent and strong PL methods (although the new method uses sparse lidar, which is a great advantage, so the comparison is not entirely equal). This is the main impact of the paper, as I see it, and enough reason for acceptance.

Areas for Improvement: I found that the authors did not sufficiently recognize that there has been a wide variety of methods utilizing sparse 3D along with dense 2D images to interpolate to full 3D. For example, [A] is one I recall well from 15 years back, and at that time there was a strong community in this area, so I encourage the authors to do a more thorough review. This paper has the fewest qualitative examples of detected 3D objects among the recent papers I've read. The final pages of the Appendix contain a few more of these visuals, but there are too few in the main paper for the reader to get any intuitive feeling for the physical meaning of the performance improvement. I'd like to see several examples, even if small, added to the paper to aid in this understanding.

Decision: weak accept, due to the nice clear method that gives a strong improvement in an area important to industry today.

[A] Statistical Inference and Synthesis in the Image Domain for Mobile Robot Environment Modeling, Luz Abril Torres-Méndez and Gregory Dudek. In Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 2004.
ICLR
Title Answer-based Adversarial Training for Generating Clarification Questions Abstract We propose a generative adversarial training approach for the problem of clarification question generation. Our approach generates clarification questions with the goal of eliciting new information that would make the given context more complete. We develop a Generative Adversarial Network (GAN) where the generator is a sequence-to-sequence model and the discriminator is a utility function that models the value of updating the context with the answer to the clarification question. We evaluate on two datasets, using both automatic metrics and human judgments of usefulness, specificity and relevance, showing that our approach outperforms both a retrieval-based model and ablations that exclude the utility model and the adversarial training. 1 INTRODUCTION A goal of natural language processing is to develop techniques that enable machines to process naturally occurring language. However, not all language is clear and, as humans, we may not always understand each other (Grice, 1975); in cases of gaps or mismatches in knowledge, we tend to ask questions (Graesser et al., 2008). In this work, we focus on the task of automatically generating clarification questions: questions that ask for information that is missing from a given linguistic context. Our clarification question generation model builds on the sequence-to-sequence approach that has proven effective for several language generation tasks (Sutskever et al., 2014; Serban et al., 2016; Yin et al., 2016; Du et al., 2017). Unfortunately, training a sequence-to-sequence model directly on context/question pairs yields generated questions that are highly generic1, corroborating a common finding in dialog systems (Li et al., 2016b). Our goal is to be able to generate questions that are useful and specific. To achieve this, we begin with a recent observation of Rao & Daumé III (2018), who considered the task of question reranking: the system should learn to generate clarification questions whose answers have high utility, which they defined as the likelihood that this question would lead to an answer that will make the context more complete (§2.3). Inspired by this, we construct a question generation model that first generates a question given a context, and then generates a hypothetical answer to that question. Given this (context, question, answer) tuple, we train a utility calculator to estimate the usefulness of this question. We then show that this utility calculator can be generalized using ideas for generative adversarial networks (Goodfellow et al., 2014) for text (Yu et al., 2017), wherein the utility predictor plays the role of the “discriminator” and the question generator is the “generator” (§2.2), which we train using the MIXER algorithm (Ranzato et al., 2015). We evaluate our approach on two question generation datasets: for posts on Stack Exchange and for Amazon product descriptions (Figure 1). Using both automatic metrics and human evaluation, we demonstrate that our adversarially trained model generates a more diverse set of questions than all the baseline models. 
Furthermore, we find that although all models generate questions that are relevant to the context at hand, our adversarially-trained model generates questions that are more specific to the context.2

1For instance, in the context of asking questions about home appliances, frequently asked questions like "What are the dimensions?" or "Is it made in China?"
2Code and data release: All code will be released under a license at least as permissive as MIT; all data will be made available after publication subject to allowance by the original licenses.

2 TRAINING A CLARIFICATION QUESTION GENERATOR

Our goal is to build a model that, given a context, can generate an appropriate clarification question. As a running example, we will use the Amazon setting, where the dataset consists of (context, question, answer) triples: the context is the product description, the question is a clarification question about that product that (preferably) is not already answered in the description, and the answer is the seller's (or other users') reply to the question. Representationally, our question generator is a standard sequence-to-sequence model with attention (§2.1). The learning problem is: how to train the sequence-to-sequence model to produce good questions. An overview of our training setup is shown in Figure 2. Given a context, our question generator outputs a question. In order to evaluate the usefulness of this question, we then have a second sequence-to-sequence model called the "answer generator" that generates a hypothetical answer based on the context and the question (§2.5). This (context, question, answer) triple is fed into a UTILITY calculator, whose initial goal is to estimate the probability that this question/answer pair is useful in this context (§2.3). This UTILITY is treated as a reward, which is used to update the question generator using the MIXER (Ranzato et al., 2015) algorithm (§2.2). Finally, we reinterpret the answer-generator-plus-utility-calculator component as a discriminator for differentiating between true (context, question, answer) triples and synthetic triples (§2.4), and optimize this adversarial objective using MIXER.

2.1 SEQUENCE-TO-SEQUENCE MODEL FOR QUESTION GENERATION

We use a standard attention-based sequence-to-sequence model (Luong et al., 2015) for our question generator. Given an input sequence (context) c = (c_1, c_2, ..., c_N), this model generates an output sequence (question) q = (q_1, q_2, ..., q_T). The architecture of this model is an encoder-decoder with attention. The encoder is a recurrent neural network (RNN) operating over the input word embeddings to compute a source context representation c̃. The decoder uses this source representation to generate the target sequence one word at a time:

p(q | c̃) = ∏_{t=1}^{T} p(q_t | q_1, q_2, ..., q_{t−1}, c̃_t) = ∏_{t=1}^{T} softmax(W_s h̃_t), where h̃_t = tanh(W_c [c̃_t; h_t])  (1)

In Eq 1, h̃_t is the attentional hidden state of the RNN at time t, and W_s and W_c are parameters of the model (details in Appendix A). The predicted token q_t is the token in the vocabulary that is assigned the highest probability under the softmax function.
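A minimal sketch of one decoder step of Eq 1, together with the global attention of Eqs 7-8 detailed in Appendix A, is given below in illustrative PyTorch; the class and tensor names are ours, not the authors' code.

```python
import torch
import torch.nn as nn

class LuongAttentionStep(nn.Module):
    """One decoder time step of Luong-style global attention (Eqs 1, 7, 8)."""
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.W_a = nn.Linear(hidden_size, hidden_size, bias=False)   # alignment
        self.W_c = nn.Linear(2 * hidden_size, hidden_size, bias=False)
        self.W_s = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, h_t, enc_states):
        # enc_states: (N, hidden) encoder hidden states h_n; h_t: (hidden,)
        scores = enc_states @ self.W_a(h_t)             # a_nt ∝ exp(h_t^T W_a h_n)
        a = torch.softmax(scores, dim=0)
        c_t = (a.unsqueeze(1) * enc_states).sum(dim=0)  # context vector (Eq 7)
        h_tilde = torch.tanh(self.W_c(torch.cat([c_t, h_t])))  # attentional state
        return torch.softmax(self.W_s(h_tilde), dim=0)  # p(q_t | q_<t, c) (Eq 1)
```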
The standard training objective for a sequence-to-sequence model is to maximize the log-likelihood of all (c, q) pairs in the training data D, which is equivalent to minimizing the loss

L_mle(D) = − Σ_{(c,q)∈D} Σ_{t=1}^{T} log p(q_t | q_1, q_2, ..., q_{t−1}, c)  (2)

2.2 TRAINING THE GENERATOR TO OPTIMIZE QUESTION UTILITY

Training sequence-to-sequence models for the task of clarification question generation (with the context as input and the question as output) using the maximum likelihood objective unfortunately leads to the generation of highly generic questions, such as "What are the dimensions?" when asking questions about home appliances. This issue has been observed in dialog generation as well (Li et al., 2016b). Recently, Rao & Daumé III (2018) observed that the usefulness of a question can be better measured as the utility that would be obtained if the context were updated with the answer to the proposed question. We use this observation to define a UTILITY based reward function and train the question generator to optimize this reward. We train the UTILITY reward to predict the likelihood that a question would generate an answer that would increase the utility of the context by adding useful information to it (see §2.3 for details). Similar to optimizing metrics like BLEU and ROUGE, this UTILITY function also operates on discrete text outputs, which makes optimization difficult due to non-differentiability. A successful recent approach that deals with the non-differentiability while retaining some advantages of maximum likelihood training is the Mixed Incremental Cross-Entropy Reinforce (Ranzato et al., 2015) algorithm (MIXER). In MIXER, the overall loss L is differentiated as in REINFORCE (Williams, 1992):

L(θ) = −E_{q^s ∼ p_θ}[r(q^s)];  ∇_θ L(θ) = −E_{q^s ∼ p_θ}[r(q^s) ∇_θ log p_θ(q^s)]  (3)

where q^s is a random output sample according to the model p_θ and θ are the parameters of the network. We then approximate the expected gradient using a single sample q^s = (q^s_1, q^s_2, ..., q^s_T) from the model distribution p_θ. In REINFORCE, the policy is initialized randomly, which can cause long convergence times. To solve this, MIXER starts by optimizing maximum likelihood and slowly shifts to optimizing the expected reward from Eq 3. For the initial ∆ time steps, MIXER optimizes L_mle, and for the remaining (T − ∆) time steps it optimizes the external reward. In our model, we minimize the UTILITY-based loss L_max-utility defined as

L_max-utility = −(r(q^p) − r(q^b)) Σ_{t=1}^{T} log p(q_t | q_1, q_2, ..., q_{t−1}, c_t)  (4)

where r(q^p) is the UTILITY-based reward on the predicted question and r(q^b) is a baseline reward introduced to reduce the high variance otherwise observed when using REINFORCE. In MIXER, the baseline is estimated using a linear regressor that takes the current hidden states of the model as input and is trained to minimize the mean squared error ‖r(q^p) − r(q^b)‖². Instead, we use a self-critical training approach (Rennie et al., 2017) where the baseline is estimated using the reward obtained by the current model under greedy decoding at test time.
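The self-critical variant of the update in Eq 4 can be sketched as follows; sample_fn, greedy_fn and the reward r are hypothetical stand-ins for the decoders and the UTILITY reward described here.

```python
import torch

def self_critical_loss(log_probs, r_sample, r_greedy):
    """Eq 4: scale the sampled question's log-likelihood by the advantage,
    i.e. sample reward minus the greedy 'self-critical' baseline reward.
    log_probs: (T,) values of log p(q_t | q_<t, c) for the sampled question."""
    advantage = r_sample - r_greedy          # r(q^p) - r(q^b)
    return -advantage * log_probs.sum()

# Schematic usage with hypothetical helpers:
# q_sample, log_probs = sample_fn(context)   # stochastic decode
# q_greedy = greedy_fn(context)              # baseline decode (no gradient)
# loss = self_critical_loss(log_probs, r(q_sample), r(q_greedy))
```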
2.3 ESTIMATING A UTILITY FUNCTION FROM HISTORICAL DATA

Given a (context, question, answer) triple, Rao & Daumé III (2018) introduce a utility function UTILITY(c, q, a) to calculate the value of updating a context c with the answer a to a clarification question q. The inspiration for their utility function is to estimate the probability that an answer would be a meaningful addition to a context, treating this as a binary classification problem where the positive instances are the true (context, question, answer) triples in the dataset and the negative instances are contexts paired with a random (question, answer) from the dataset. Our model first embeds the words in the context c, then uses an LSTM (long short-term memory) (Hochreiter & Schmidhuber, 1997) to generate a neural representation c̄ of the context by averaging the outputs of the hidden states. Similarly, we obtain neural representations q̄ and ā of q and a, respectively, using question and answer LSTM models. Finally, a feed-forward neural network F_UTILITY(c̄, q̄, ā) predicts the usefulness of the question.
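One plausible reading of this utility model in PyTorch is sketched below; the layer sizes are illustrative placeholders rather than the paper's reported settings.

```python
import torch
import torch.nn as nn

class UtilityModel(nn.Module):
    """Binary classifier over (context, question, answer) triples."""
    def __init__(self, vocab_size, emb=200, hidden=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.enc_c = nn.LSTM(emb, hidden, batch_first=True)
        self.enc_q = nn.LSTM(emb, hidden, batch_first=True)
        self.enc_a = nn.LSTM(emb, hidden, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def _encode(self, lstm, ids):
        out, _ = lstm(self.embed(ids))   # (B, T, hidden)
        return out.mean(dim=1)           # average hidden states -> c̄, q̄ or ā

    def forward(self, c, q, a):
        reps = [self._encode(e, x) for e, x in
                [(self.enc_c, c), (self.enc_q, q), (self.enc_a, a)]]
        return torch.sigmoid(self.mlp(torch.cat(reps, dim=1)))  # P(useful)
```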
2.4 UTILITY GAN FOR CLARIFICATION QUESTION GENERATION

The UTILITY function trained on true vs. random samples from real data (as described in the previous section) can be a weak reward signal for questions generated by a model, due to the large discrepancy between the true data and the model's outputs. In order to strengthen the reward signal, we reinterpret the UTILITY function (coupled with the answer generator) as a discriminator in an adversarial learning setting. That is, instead of taking the UTILITY calculator to be a fixed model that outputs the expected quality of a question/answer pair, we additionally optimize it to distinguish between true question/answer pairs and model-generated ones. This reinterpretation turns our model into a form of generative adversarial network (GAN) (Goodfellow et al., 2014). A GAN is a training procedure for "generative" models that can be interpreted as a game between a generator and a discriminator. The generator is an arbitrary model g ∈ G that produces outputs (in our case, questions). The discriminator is another model d ∈ D that attempts to classify between true outputs and model-generated outputs. The goal of the generator is to generate data such that it can fool the discriminator; the goal of the discriminator is to be able to successfully distinguish between real and generated data. In the process of trying to fool the discriminator, the generator produces data that is as close as possible to the real data distribution. Generically, the GAN objective is:

L_GAN(D, G) = max_{d∈D} min_{g∈G} E_{x∼p̂}[log d(x)] + E_{z∼p_z}[log(1 − d(g(z)))]  (5)

where x is sampled from the true data distribution p̂, and z is sampled from a prior defined on the input noise variables p_z. Although GANs have been successfully used for image tasks, training GANs for text generation is challenging due to the discrete nature of text outputs. The discrete outputs from the generator make it difficult to pass the gradient update from the discriminator to the generator. Recently, Yu et al. (2017) proposed a sequence GAN model for text generation to overcome this issue. They treat their generator as an agent and use the discriminator as a reward function to update the generative model using reinforcement learning techniques. Our GAN-based approach is inspired by this sequence GAN model with two main modifications: a) we use the MIXER algorithm as our generator (§2.2) instead of a policy gradient approach; and b) we use the UTILITY function (§2.3) as our discriminator instead of a convolutional neural network (CNN). In our model, the answer is a latent variable: we do not actually use it anywhere except to train the discriminator. Because of this, we train our discriminator using (context, true question, generated answer) triples as positive instances and (context, generated question, generated answer) triples as negative instances. Formally, our objective function is:

L_GAN-U(U, M) = max_{u∈U} min_{m∈M} E_{q∼p̂}[log u(c, q, A(c, q))] + E_{c∼p̂}[log(1 − u(c, m(c), A(c, m(c))))]  (6)

where U is the UTILITY discriminator, M is the MIXER generator, p̂ is our data of (context, question, answer) triples, and A is our answer generator.

2.5 PRETRAINING

Question Generator. We pretrain our question generator using the sequence-to-sequence model (§2.1), where we define the input sequence as the context and the output sequence as the question. This question generator is trained to maximize the log-likelihood of all (context, question) pairs in the training data. Parameters of this model are updated during adversarial training.

Answer Generator. We pretrain our answer generator using the sequence-to-sequence model (§2.1), where we define the input sequence as the concatenation of the context and the question, and the output sequence as the answer. This answer generator is trained to maximize the log-likelihood of all ([context+question], answer) pairs in the training data. Unlike the question generator, the parameters of the answer generator are kept fixed during adversarial training.

Discriminator. We pretrain the discriminator using (context, question, answer) triples from the training data. For positive instances, we use a context and its true question and answer; for negative instances, we use the same context but randomly sample a question from the training data (and use the answer paired with that random question).
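Putting the pieces together, a schematic adversarial update for Eq 6 might look like the sketch below; the generator, answer_gen and utility objects and all of their methods are hypothetical stand-ins for the components described above.

```python
import torch

def adversarial_step(ctx, true_q, generator, answer_gen, utility, opt_g, opt_d):
    """One GAN-U update (Eq 6): the utility model is the discriminator and the
    MIXER-trained question generator is the generator."""
    fake_q = generator.sample(ctx)                    # m(c)
    a_true = answer_gen(ctx, true_q)                  # A(c, q)
    a_fake = answer_gen(ctx, fake_q)                  # A(c, m(c))

    # Discriminator: real = (c, true q, generated a), fake = (c, generated q, generated a)
    d_real = utility(ctx, true_q, a_true).squeeze(-1)
    d_fake = utility(ctx, fake_q, a_fake).squeeze(-1)
    d_loss = -(torch.log(d_real) + torch.log(1 - d_fake)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: the utility score of the sampled question is the MIXER reward,
    # with a self-critical greedy baseline.
    reward = utility(ctx, fake_q, a_fake).squeeze(-1).detach()
    greedy_q = generator.greedy(ctx)
    baseline = utility(ctx, greedy_q, answer_gen(ctx, greedy_q)).squeeze(-1).detach()
    g_loss = -((reward - baseline) * generator.log_prob(ctx, fake_q)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```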
3 EXPERIMENTAL RESULTS

We base our experimental design on the following research questions:
1. Do generation models outperform simpler retrieval baselines?
2. Does optimizing the UTILITY reward improve over maximum likelihood training?
3. Does using adversarial training improve over optimizing the pretrained UTILITY?
4. How do the models perform when evaluated for nuances such as specificity and usefulness?

3.1 DATASETS

We evaluate our model on two datasets. The first is from StackExchange and was curated by Rao & Daumé III (2018); the second is from Amazon, curated by McAuley & Yang (2016), and has not previously been used for the task of question generation.

StackExchange. This dataset consists of posts, questions asked about those posts (and answers), collected from three related subdomains on stackexchange.com (askubuntu, unix and superuser). Additionally, for 500 instances each from the tune and the test set, the dataset includes 1 to 5 other questions identified as valid questions by expert human annotators from a pool of candidate questions. This dataset consists of 61,681 training, 7,710 validation and 7,709 test examples.

Amazon. Each instance consists of a question asked about a product on amazon.com combined with other information (product ID, question type "Yes/No", answer type, answer and answer time). To obtain the description of the product, we use the metadata information contained in the Amazon reviews dataset (McAuley et al., 2015). We consider at most 10 questions for each product. This dataset includes several different product categories. We choose the Home and Kitchen category since it contains a high number of questions and is a relatively easy category for human-based evaluation. This dataset consists of 19,119 training, 2,435 validation and 2,305 test examples, and each product description contains between 3 and 10 questions (average: 7).

3.2 BASELINES AND ABLATED MODELS

We compare three variants (ablations) of our proposed approach, together with an information retrieval baseline: GAN-Utility is our full model with UTILITY-function-based GAN training (§2.4), including the UTILITY discriminator, a MIXER question generator and a sequence-to-sequence based answer generator. Max-Utility is our reinforcement learning baseline with a pretrained question generator (§2.2) but without the adversarial training. MLE is the question generator model pretrained on (context, question) pairs using the maximum likelihood objective (§2.1). Lucene3 is a TF-IDF (term frequency-inverse document frequency) based document ranking system which, given a document, retrieves N other documents that are most similar to the given document. Given a context, we use Lucene to retrieve the top 10 contexts that are most similar to the given context. We randomly choose a question from the 10 questions paired with these contexts to construct our Lucene baseline4. Experimental details of all our models are described in Appendix B.

3.3 EVALUATION METRICS

We evaluate initially with several automated evaluation metrics, and then more substantially based on crowdsourced human judgments. Automatic metrics include: Diversity, which calculates the proportion of unique trigrams5 in the output to measure diversity, as commonly used to evaluate dialogue generation (Li et al., 2016b); BLEU (Papineni et al., 2002), which evaluates n-gram precision between a predicted sentence and reference sentences; and METEOR (Banerjee & Lavie, 2005), which is similar to BLEU but includes stemmed and synonym matches when measuring the similarity between the predicted sequence and the reference sequences.

3https://lucene.apache.org/
4For the Amazon dataset, we ignore questions asked about products of the same brand as the given product, since Amazon replicates questions across the same brand, which would allow the true question to be included in that set.
5We report trigrams, but bigrams and unigrams follow similar trends.
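The DIVERSITY metric is simple to state precisely; the following sketch is our own formulation of the unique-trigram proportion described above.

```python
def diversity(questions, n=3):
    """Proportion of unique n-grams among all n-grams in the generated outputs."""
    ngrams = []
    for q in questions:
        toks = q.split()
        ngrams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

# diversity(["what are the dimensions", "what are the colors"])  # -> 0.75
```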
Human judgments involve showing contexts and generated questions to crowdworkers6 and asking them to evaluate the questions along several axes. Roughly, we ask for the following five judgments for each question (exact wordings in Appendix C): Is it relevant (yes/no); Is it grammatical (yes/comprehensible/incomprehensible); How specific is it to this product (four options from "specific to only this product" to "generic to any product"); Does this question ask for new information not contained in the description (completely/somewhat/no); and How useful is this question to a potential buyer (four options from "should be included in the description" to "useful only to the person asking"). For the last three questions, we also allowed a "not applicable" response in the case that the question was either ungrammatical or irrelevant.

3.4 AUTOMATIC METRIC RESULTS

Table 1 shows the results on the two datasets when evaluated according to the automatic metrics. On the Amazon dataset, GAN-Utility outperforms all ablations on DIVERSITY, suggesting that it produces more diverse outputs. Lucene, on the other hand, has the highest DIVERSITY, since it consists of human-generated questions, which tend to be more diverse because they are much longer than model-generated questions. This comes at the cost of a lower match with the references, as visible in the BLEU and METEOR scores. In terms of BLEU and METEOR, the results are inconsistent. Although GAN-Utility outperforms all baselines according to METEOR, the fully ablated MLE model has a higher BLEU score. This is because the BLEU score looks for exact n-gram matches, and since MLE produces more generic outputs, it is much more likely to match one of the 10 references than the specific/diverse outputs of GAN-Utility, since one of those ten is highly likely to itself be generic. On the StackExchange dataset, GAN-Utility outperforms all ablations on both BLEU and METEOR. Unlike on the Amazon dataset, MLE does not outperform GAN-Utility in BLEU. This is because the MLE outputs in this dataset are not as generic as in the Amazon dataset, due to the highly technical nature of StackExchange contexts. As on the Amazon dataset, GAN-Utility outperforms MLE on DIVERSITY. Interestingly, the Max-Utility ablation achieves a higher DIVERSITY score than GAN-Utility. On manual analysis, we find that Max-Utility produces longer outputs than GAN-Utility, but at the cost of being less grammatical.

3.5 HUMAN JUDGEMENTS ANALYSIS

Table 2 shows the numeric results of the human-based evaluation performed on the reference and the system outputs on 500 random samples from the test set of the Amazon dataset.7 These results overall show that the GAN-Utility model successfully generates the most specific questions, while being equally good at seeking new information and being useful to potential buyers. All approaches produce relevant, grammatical questions. Our models are all equally good at seeking new information, but are weaker than Lucene, which performs better on new information, but at the cost of much lower specificity and slightly lower relevance. Our models are also all equally good at generating useful questions: their usefulness score is significantly better than both Lucene and Reference, largely because Lucene and Reference tend to ask questions that are more often useful only to the person asking the question, making them less useful for other potential buyers (see Figure 4). Our full model, GAN-Utility, performs significantly better when measured by specificity to the product, which aligns with the higher DIVERSITY score obtained by GAN-Utility under the automatic metric evaluation.

6We use Figure-Eight, https://www.figure-eight.com. We paid crowdworkers 5 cents per judgment.
7We could not ask crowdworkers to evaluate the StackExchange data due to its highly technical nature.

4 RELATED WORK

Question Generation. Most previous work on question generation has been on generating reading comprehension style questions, i.e., questions that ask about information present in a given text (Heilman, 2011; Rus et al., 2010; 2011; Duan et al., 2017). Outside reading comprehension questions, Labutov et al. (2015) use crowdsourcing to generate question templates, Liu et al. (2010) use templated questions to help authors write better related work sections, and Mostafazadeh et al. (2016) introduced a visual question generation task that focuses on generating natural and engaging questions about an image. Mostafazadeh et al. (2017) introduced an extension of this task called the Image Grounded Conversation task, where they use both the image and some initial textual context to generate a natural follow-up question and a response to that question.
Buck et al. (2017) propose an active question answering model where they build an agent that learns to reformulate the question asked of a question-answering system so as to elicit the best possible answers. Duan et al. (2017) extract a large number of question-answer pairs from community question answering forums and use them to train a model that can generate a natural question given a passage.

Neural Models and Adversarial Training for Text Generation. Neural network based models have had significant success at a variety of text generation tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), summarization (Nallapati et al., 2016), dialog (Serban et al., 2016; Bordes et al., 2016; Li et al., 2016a; Serban et al., 2017), textual style transfer (Jhamtani et al., 2017; Kabbara & Cheung, 2016; Rao & Tetreault, 2018) and question answering (Yin et al., 2016; Serban et al., 2016). Our task is most similar to dialog, in which a wide variety of possible outputs are acceptable and where lack of specificity in generated outputs is common. We address this challenge using an adversarial network approach (Goodfellow et al., 2014), a training procedure that can generate natural-looking outputs and that has been effective for natural image generation (Denton et al., 2015). Due to the challenges of optimizing over discrete output spaces like text, Yu et al. (2017) introduced a Seq(uence)GAN approach where they overcome this issue by using REINFORCE for optimization. Li et al. (2017) train an adversarial model similar to SeqGAN for generating the next utterance in a dialog given a context. However, unlike our work, their discriminator is a binary classifier trained to distinguish between human- and machine-generated utterances. Finally, Fedus et al. (2018) introduce an actor-critic conditional GAN for filling in missing text conditioned on the surrounding context.

5 CONCLUSION

In this work, we describe a novel approach to the problem of clarification question generation. Given a context, we use the observation of Rao & Daumé III (2018) that the usefulness of a clarification question can be measured by the value of updating the context with an answer to the question. We use a sequence-to-sequence model to generate a question given a context and a second sequence-to-sequence model to generate an answer given the context and the question. Given the (context, predicted question, predicted answer) triple, we calculate the utility of this triple and use it as a reward to retrain the question generator using the reinforcement-learning-based MIXER algorithm. Further, to improve upon the utility function, we reinterpret it as a discriminator in an adversarial setting and train both the utility function and the MIXER model in a minimax fashion. We find that our adversarial training approach produces more diverse questions compared to both a model trained using the maximum likelihood objective and a model trained using the utility-reward-based reinforcement learning. There are several avenues of future work in this area. Following Mostafazadeh et al. (2016), we could combine text input with image input to generate more relevant questions. Because some questions can be answered by looking at the product image in the Amazon dataset (McAuley & Yang, 2016), this could help generate more relevant and useful questions.
As in most free-text generation problems where the set of possible outputs is large, one significant research challenge is that of automatic evaluation (Lowe et al., 2016): in our results we saw some correlation between human judgments and automatic metrics, but not enough to trust the automatic metrics completely. Lastly, we would like to integrate such a question generation model into a real-world platform like StackExchange or Amazon, to understand the real utility of such models and to unearth additional research questions.

A DETAILS OF SEQUENCE-TO-SEQUENCE MODEL

In this section, we describe the attention-based sequence-to-sequence model introduced in §2.1 of the main paper. In Eq 1, h̃_t is the attentional hidden state of the RNN at time t, obtained by concatenating the target hidden state h_t and the source-side context vector c̃_t, and W_s is a linear transformation that maps h̃_t to an output vocabulary-sized vector. The predicted token q_t is the token in the vocabulary that is assigned the highest probability under the softmax function. Each attentional hidden state h̃_t depends on a distinct input context vector c̃_t computed using a global attention mechanism over the input hidden states as:

c̃_t = Σ_{n=1}^{N} a_{nt} h_n  (7)

a_{nt} = align(h_n, h_t) = exp[h_tᵀ W_a h_n] / Σ_{n′} exp[h_tᵀ W_a h_{n′}]  (8)

The attention weights a_{nt} are calculated based on the alignment score between the source hidden state h_n and the current target hidden state h_t.

B EXPERIMENTAL DETAILS

In this section, we describe the details of our experimental setup. We preprocess all inputs (contexts, questions and answers) with tokenization and lowercasing. We set the maximum length of the context to 100, and of the question and the answer to 20 each. Our sequence-to-sequence model (§2.1) operates on word embeddings that are pretrained on in-domain data using GloVe (Pennington et al., 2014). We use embeddings of size 200 and a vocabulary with the cutoff frequency set to 10. At training time, we use teacher forcing. At test time, we use beam search decoding with beam size 5. We use two hidden layers for both the encoder and decoder recurrent neural network models, with the hidden unit size set to 100. We use a dropout of 0.5 and a learning rate of 0.0001. In MIXER, we start with ∆ = T and decrease it by 2 every epoch (we found that decreasing ∆ to 0 is ineffective for our task, hence we stop at 2).
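For reference, the settings above can be collected into a single configuration object; the sketch below simply restates them (names ours), together with the ∆ annealing schedule.

```python
from dataclasses import dataclass

@dataclass
class Config:
    max_context_len: int = 100
    max_question_len: int = 20
    max_answer_len: int = 20
    emb_size: int = 200        # GloVe, pretrained on in-domain data
    vocab_min_freq: int = 10
    num_layers: int = 2
    hidden_size: int = 100
    dropout: float = 0.5
    lr: float = 1e-4
    beam_size: int = 5         # test-time decoding

def mixer_delta_schedule(T, step=2, floor=2):
    """MIXER annealing: start at delta = T, decrease by `step` each epoch, stop at `floor`."""
    d = T
    while d > floor:
        yield d
        d = max(d - step, floor)
    yield floor
```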
C DETAILS OF HUMAN BASED EVALUATION

In this section, we describe in detail the human-based evaluation methodology introduced in §3.3 of the main paper.

Relevance: We ask a yes-no question: "Is the question on topic?"

Grammaticality: We ask "Is the question grammatical?" and let workers choose from: [Grammatical, Comprehensible, Incomprehensible].

Specificity: We ask "How specific is the question?" and let workers choose from:
1. Specific pretty much only to this product
2. Specific to this and other very similar products (same product from a different manufacturer)
3. Generic enough to be applicable to many other products of this type
4. Generic enough to be applicable to any product under Home and Kitchen
5. N/A (Not applicable): Question is not on topic OR is incomprehensible

Seeking new information: We ask "Does the question ask for new information currently not included in the description?" and let workers choose from: [Completely, Somewhat, No, N/A].

Usefulness: We ask "How useful is the question to a potential buyer (or a current user) of the product?" and let workers choose from:
1. Useful enough to be included in the product description
2. Useful to a large number of potential buyers (or current users)
3. Useful to a small number of potential buyers (or current users)
4. Useful only to the person asking the question
5. N/A (Not applicable): Not on topic OR incomprehensible OR not asking new information
1. What are the strengths and weaknesses of the proposed approach for generating clarification questions?
2. How does the reviewer assess the evaluation setup and its usefulness in evaluating clarification questions?
3. What is the main concern regarding the objective of generating clarification questions?
4. Is it possible to evaluate the model trained in such a setup, and how can utility be naturally defined?
5. What is the significance of the combination of SeqGANs, MIXER, and self-critical baseline for policy gradient updates?
6. Can the approach be applied to a task that might actually require asking clarification questions?
Review
The paper proposes a new approach for the problem of clarification question generation. It is based on training a question-answer generator model jointly with a utility function (or discriminator) using a GAN-like objective. The system is compared to a variety of baselines and is shown to generate slightly more diverse questions than competing baselines, but is otherwise quite comparable. The paper is overall well presented and introduces an interesting approach that combines SeqGANs with MIXER training and a self-critical baseline. The authors also took care to establish reasonable baselines given the novelty of the task. The evaluations are carried out on a very artificial task setup, however, that is overall not very useful for evaluating clarification questions. Therefore, I believe that this paper needs a completely different evaluation setup, and I can unfortunately not recommend it for acceptance without that.

Detailed comments: It is unclear to me whether it is really possible to evaluate a model trained in such a setup, because it is impossible to say what is a useful clarification question without establishing an information need first. Just asking random questions about a product, which is the best we can hope to learn, is not a very interesting task. Clarification is usually a means to an end goal, but in this paper it is established as the final goal, which doesn't make too much sense to me. This leads to the rather artificial treatment of "Utility", which is impossible to define without a clear down-stream task. The paper relabels generating human-like questions as the utility to optimize towards in order to ultimately fool a discriminator. I don't see how this can be viewed as defining utility. So I strongly suggest evaluating the approach on a task that might actually require asking clarification questions, in which case utility is naturally defined. GAN training could still be used to make the generated questions more diverse.

Strengths:
- clearly written and well presented
- learning to generate clarification questions is an important topic
- interesting combination of SeqGANs, MIXER and self-critical baseline for policy gradient updates
- a range of good baselines for this novel task setup

Weaknesses:
- minor: automatic evaluations are kind of useless here and the datasets are rather artificial for this task
- major: generating clarification questions cannot be the end goal in and of itself (see above explanation)

Other comments:
- section pretraining, paragraph question generator: I do not understand the reference to the answer generator in this paragraph. I think something got mixed up in this section.
- needs some proof reading: some spelling mistakes (e.g., p3 thier -> their), missing spaces (e.g., p.4 "model§2.1").

Questions: Why is specificity such an important aspect if we mainly care about usefulness? In other words, is usefulness not capturing specificity to a certain degree? The goal of this paper is to generate clarification questions, so I do not really understand why synthetic answers are also being generated. Why not just train the system on (context, question) tuples?
ICLR
Title Answer-based Adversarial Training for Generating Clarification Questions Abstract We propose a generative adversarial training approach for the problem of clarification question generation. Our approach generates clarification questions with the goal of eliciting new information that would make the given context more complete. We develop a Generative Adversarial Network (GAN) where the generator is a sequence-to-sequence model and the discriminator is a utility function that models the value of updating the context with the answer to the clarification question. We evaluate on two datasets, using both automatic metrics and human judgments of usefulness, specificity and relevance, showing that our approach outperforms both a retrieval-based model and ablations that exclude the utility model and the adversarial training. 1 INTRODUCTION A goal of natural language processing is to develop techniques that enable machines to process naturally occurring language. However, not all language is clear and, as humans, we may not always understand each other (Grice, 1975); in cases of gaps or mismatches in knowledge, we tend to ask questions (Graesser et al., 2008). In this work, we focus on the task of automatically generating clarification questions: questions that ask for information that is missing from a given linguistic context. Our clarification question generation model builds on the sequence-to-sequence approach that has proven effective for several language generation tasks (Sutskever et al., 2014; Serban et al., 2016; Yin et al., 2016; Du et al., 2017). Unfortunately, training a sequence-to-sequence model directly on context/question pairs yields generated questions that are highly generic1, corroborating a common finding in dialog systems (Li et al., 2016b). Our goal is to be able to generate questions that are useful and specific. To achieve this, we begin with a recent observation of Rao & Daumé III (2018), who considered the task of question reranking: the system should learn to generate clarification questions whose answers have high utility, which they defined as the likelihood that this question would lead to an answer that will make the context more complete (§2.3). Inspired by this, we construct a question generation model that first generates a question given a context, and then generates a hypothetical answer to that question. Given this (context, question, answer) tuple, we train a utility calculator to estimate the usefulness of this question. We then show that this utility calculator can be generalized using ideas for generative adversarial networks (Goodfellow et al., 2014) for text (Yu et al., 2017), wherein the utility predictor plays the role of the “discriminator” and the question generator is the “generator” (§2.2), which we train using the MIXER algorithm (Ranzato et al., 2015). We evaluate our approach on two question generation datasets: for posts on Stack Exchange and for Amazon product descriptions (Figure 1). Using both automatic metrics and human evaluation, we demonstrate that our adversarially trained model generates a more diverse set of questions than all the baseline models. 
Furthermore, we find that although all models generate questions that are relevant to the context at hand, our adversarially-trained model generates questions that are more specific to the context.2 1For instance, in the context of asking questions about home appliances, frequently asked questions like “What are the dimensions?” or “Is it made in China?” 2Code and data release: All code will be released under a license at least as permissive as MIT; all data will be made available after publication subject to allowance by the original licenses. 2 TRAINING A CLARIFICATION QUESTION GENERATOR Our goal is to build a model that, given a context, can generate an appropriate clarification question. As a running example, we will use the Amazon setting: the dataset consists of (context, question, answer) triples where the context is the product description, the question is a clarification question about that product that (preferably) is not already answered in the description, and the answer is the seller’s (or other users’) reply to the question. Representationally, our question generator is a standard sequence-to-sequence model with attention (§2.1). The learning problem is: how to train the sequence-to-sequence model to produce good questions. An overview of our training setup is shown in Figure 2. Given a context, our question generator outputs a question. In order to evaluate the usefulness of this question, we then have a second sequence-to-sequence model called the “answer generator” that generates a hypothetical answer based on the context and the question (§2.5). This (context, question, answer) triple is fed into a UTILITY calculator, whose initial goal is to estimate the probability that this question/answer pair is useful in this context (§2.3). This UTILITY is treated as a reward, which is used to update the question generator using the MIXER (Ranzato et al., 2015) algorithm (§2.2). Finally, we reinterpret the answer-generator-plus-utility-calculator component as a discriminator for differentiating between true (context, question, answer) triples and synthetic triples (§2.4), and optimize this adversarial objective using MIXER. 2.1 SEQUENCE-TO-SEQUENCE MODEL FOR QUESTION GENERATION We use a standard attention based sequence-to-sequence model (Luong et al., 2015) for our question generator. Given an input sequence (context) c = (c1, c2, ..., cN ), this model generates an output sequence (question) q = (q1, q2, ..., qT ). The architecture of this model is an encoder-decoder with attention. The encoder is a recurrent neural network (RNN) operating over the input word embeddings to compute a source context representation c̃. The decoder uses this source representation to generate the target sequence one word at a time: $p(q \mid c) = \prod_{t=1}^{T} p(q_t \mid q_1, q_2, \ldots, q_{t-1}, \tilde{c}_t) = \prod_{t=1}^{T} \mathrm{softmax}(W_s \tilde{h}_t)$, where $\tilde{h}_t = \tanh(W_c[\tilde{c}_t; h_t])$ (1) In Eq 1, h̃t is the attentional hidden state of the RNN at time t and Ws and Wc are parameters of the model (details in Appendix A). The predicted token qt is the token in the vocabulary that is assigned the highest probability using the softmax function.
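To make Eq 1 and the global attention mechanism detailed in Appendix A concrete, here is a minimal PyTorch-style sketch of a single decoding step. This is our illustration under stated assumptions (module names and tensor shapes are ours), not the authors' implementation:

```python
# Minimal sketch of one attentional decoding step (Eq 1, with the global
# attention of Eqs 7-8 from Appendix A). Shapes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LuongAttentionStep(nn.Module):
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.W_a = nn.Linear(hidden_size, hidden_size, bias=False)  # alignment (Eq 8)
        self.W_c = nn.Linear(2 * hidden_size, hidden_size, bias=False)
        self.W_s = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, h_t, encoder_states):
        # h_t: (batch, hidden) current decoder state; encoder_states: (batch, N, hidden)
        scores = torch.bmm(self.W_a(encoder_states), h_t.unsqueeze(2)).squeeze(2)
        a_t = F.softmax(scores, dim=1)                                # a_nt (Eq 8)
        c_t = torch.bmm(a_t.unsqueeze(1), encoder_states).squeeze(1)  # c̃_t (Eq 7)
        h_tilde = torch.tanh(self.W_c(torch.cat([c_t, h_t], dim=1)))  # h̃_t
        return F.log_softmax(self.W_s(h_tilde), dim=1)  # log p(q_t | q_<t, c), Eq 1
```

At test time, greedy decoding simply takes the argmax of the returned distribution at each step.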
The standard training objective for sequence-to-sequence models is to maximize the log-likelihood of all (c, q) pairs in the training data D, which is equivalent to minimizing the loss $\mathcal{L}_{\text{mle}}(D) = -\sum_{(c,q) \in D} \sum_{t=1}^{T} \log p(q_t \mid q_1, q_2, \ldots, q_{t-1}, c)$ (2) 2.2 TRAINING THE GENERATOR TO OPTIMIZE QUESTION UTILITY Training sequence-to-sequence models for the task of clarification question generation (with context as input and question as output) using the maximum likelihood objective unfortunately leads to the generation of highly generic questions, such as “What are the dimensions?” when asking questions about home appliances. This issue has been observed in dialog generation as well (Li et al., 2016b). Recently Rao & Daumé III (2018) observed that the usefulness of a question can be better measured as the utility that would be obtained if the context were updated with the answer to the proposed question. We use this observation to define a UTILITY based reward function and train the question generator to optimize this reward. We train the UTILITY reward to predict the likelihood that a question would generate an answer that would increase the utility of the context by adding useful information to it (see §2.3 for details). Similar to optimizing metrics like BLEU and ROUGE, this UTILITY function also operates on discrete text outputs, which makes optimization difficult due to non-differentiability. A successful recent approach dealing with the non-differentiability while also retaining some advantages of maximum likelihood training is the Mixed Incremental Cross-Entropy Reinforce (Ranzato et al., 2015) algorithm (MIXER). In MIXER, the overall loss L is differentiated as in REINFORCE (Williams, 1992): $\mathcal{L}(\theta) = -\mathbb{E}_{q^s \sim p_\theta}[r(q^s)]\,; \quad \nabla_\theta \mathcal{L}(\theta) = -\mathbb{E}_{q^s \sim p_\theta}[r(q^s)\,\nabla_\theta \log p_\theta(q^s)]$ (3) where $q^s$ is a random output sample according to the model $p_\theta$, and θ are the parameters of the network. We then approximate the expected gradient using a single sample $q^s = (q^s_1, q^s_2, \ldots, q^s_T)$ from the model distribution $p_\theta$. In REINFORCE, the policy is initialized randomly, which can cause long convergence times. To solve this, MIXER starts by optimizing maximum likelihood and slowly shifts to optimizing the expected reward from Eq 3. For the initial ∆ time steps, MIXER optimizes Lmle and for the remaining (T − ∆) time steps, it optimizes the external reward. In our model, we minimize the UTILITY-based loss Lmax-utility defined as: $\mathcal{L}_{\text{max-utility}} = -\left(r(q^p) - r(q^b)\right) \sum_{t=1}^{T} \log p(q_t \mid q_1, q_2, \ldots, q_{t-1}, c_t)$ (4) where $r(q^p)$ is the UTILITY based reward on the predicted question and $r(q^b)$ is a baseline reward introduced to reduce the high variance otherwise observed when using REINFORCE. In MIXER, the baseline is estimated using a linear regressor that takes the current hidden states of the model as input and is trained to minimize the mean squared error $\|r(q^p) - r(q^b)\|^2$. Instead, we use a self-critical training approach (Rennie et al., 2017) where the baseline is estimated using the reward obtained by the current model under greedy decoding at test time (see the sketch below). 2.3 ESTIMATING A UTILITY FUNCTION FROM HISTORICAL DATA Given a (context, question, answer) triple, Rao & Daumé III (2018) introduce a utility function UTILITY(c, q, a) to calculate the value of updating a context c with the answer a to a clarification question q.
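As a concrete illustration of the self-critical update from §2.2 above, the following is a minimal sketch under our assumptions; it is not the authors' code, and `utility_reward` is a hypothetical callable standing in for the trained UTILITY function:

```python
# Sketch of the self-critical loss of Eq 4 (our reading). The baseline reward
# r(q^b) is the reward of the model's own greedy decode (Rennie et al., 2017),
# so only sampled questions that beat the greedy decode get reinforced.
import torch

def self_critical_loss(log_probs, sampled_q, greedy_q, utility_reward):
    # log_probs: (T,) tensor of log p(q_t | q_<t, c) for the *sampled* question
    # sampled_q / greedy_q: token sequences from sampling / greedy decoding
    advantage = utility_reward(sampled_q) - utility_reward(greedy_q)  # r(q^p) - r(q^b)
    return -advantage * log_probs.sum()  # minimizing this implements Eq 4
```

Here the advantage is treated as a plain (non-differentiable) scalar, so gradients flow only through the log-probabilities, as in REINFORCE.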
The inspiration for their utility function is to estimate the probability that an answer would be a meaningful addition to a context, and to treat this as a binary classification problem where the positive instances are the true (context, question, answer) triples in the dataset whereas the negative instances are contexts paired with a random (question, answer) from the dataset. The model we use first embeds the words in the context c, then uses an LSTM (long short-term memory) (Hochreiter & Schmidhuber, 1997) to generate a neural representation c̄ of the context by averaging the output of each of the hidden states. Similarly, we obtain neural representations q̄ and ā of q and a respectively using question and answer LSTM models. Finally, a feed-forward neural network F_UTILITY(c̄, q̄, ā) predicts the usefulness of the question (a code sketch of this calculator appears below). 2.4 UTILITY GAN FOR CLARIFICATION QUESTION GENERATION The UTILITY function trained on true vs. random samples from real data (as described in the previous section) can be a weak reward signal for questions generated by a model due to the large discrepancy between the true data and the model’s outputs. In order to strengthen the reward signal, we reinterpret the UTILITY function (coupled with the answer generator) as a discriminator in an adversarial learning setting. That is, instead of taking the UTILITY calculator to be a fixed model that outputs the expected quality of a question/answer pair, we additionally optimize it to distinguish between true question/answer pairs and model-generated ones. This reinterpretation turns our model into a form of a generative adversarial network (GAN) (Goodfellow et al., 2014). A GAN is a training procedure for “generative” models that can be interpreted as a game between a generator and a discriminator. The generator is an arbitrary model g ∈ G that produces outputs (in our case, questions). The discriminator is another model d ∈ D that attempts to classify between true outputs and model-generated outputs. The goal of the generator is to generate data such that it can fool the discriminator; the goal of the discriminator is to be able to successfully distinguish between real and generated data. In the process of trying to fool the discriminator, the generator produces data that is as close as possible to the real data distribution. Generically, the GAN objective is: $\mathcal{L}_{\text{GAN}}(\mathcal{D}, \mathcal{G}) = \max_{d \in \mathcal{D}} \min_{g \in \mathcal{G}} \mathbb{E}_{x \sim \hat{p}} \log d(x) + \mathbb{E}_{z \sim p_z} \log(1 - d(g(z)))$ (5) where x is sampled from the true data distribution p̂, and z is sampled from a prior defined on input noise variables pz. Although GANs have been successfully used for image tasks, training GANs for text generation is challenging due to the discrete nature of outputs in text. The discrete outputs from the generator make it difficult to pass the gradient update from the discriminator to the generator. Recently, Yu et al. (2017) proposed a sequence GAN model for text generation to overcome this issue. They treat their generator as an agent and use the discriminator as a reward function to update the generative model using reinforcement learning techniques. Our GAN-based approach is inspired by this sequence GAN model with two main modifications: a) We use the MIXER algorithm as our generator (§2.2) instead of a policy gradient approach; and b) We use the UTILITY function (§2.3) as our discriminator instead of a convolutional neural network (CNN). In our model, the answer is a latent variable: we do not actually use it anywhere except to train the discriminator.
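Before describing how the discriminator is trained on these triples, here is the promised minimal sketch of the UTILITY calculator from §2.3 under our assumptions; layer sizes and names are illustrative, not the authors' implementation:

```python
# Minimal sketch of the UTILITY calculator of §2.3: three LSTM encoders whose
# hidden states are mean-pooled into c̄, q̄, ā, followed by a feed-forward
# scorer F_UTILITY. Embedding/hidden sizes mirror Appendix B but are assumptions.
import torch
import torch.nn as nn

class UtilityCalculator(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, hid_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.enc_c = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.enc_q = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.enc_a = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.f_utility = nn.Sequential(
            nn.Linear(3 * hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

    def _encode(self, lstm, tokens):
        outputs, _ = lstm(self.embed(tokens))  # (batch, len, hid_dim)
        return outputs.mean(dim=1)             # average of the hidden states

    def forward(self, context, question, answer):
        reps = torch.cat([self._encode(self.enc_c, context),
                          self._encode(self.enc_q, question),
                          self._encode(self.enc_a, answer)], dim=1)
        return torch.sigmoid(self.f_utility(reps))  # probability the question is useful
```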
Because of this, we train our discriminator using (context, true question, generated answer) triples as positive instances and (context, generated question, generated answer) triples as the negative instances. Formally, our objective function is: $\mathcal{L}_{\text{GAN-U}}(\mathcal{U}, \mathcal{M}) = \max_{u \in \mathcal{U}} \min_{m \in \mathcal{M}} \mathbb{E}_{q \sim \hat{p}} \log u(c, q, \mathcal{A}(c, q)) + \mathbb{E}_{c \sim \hat{p}} \log(1 - u(c, m(c), \mathcal{A}(c, m(c))))$ (6) where U is the UTILITY discriminator, M is the MIXER generator, p̂ is our data of (context, question, answer) triples and A is our answer generator. 2.5 PRETRAINING Question Generator. We pretrain our question generator using the sequence-to-sequence model (§2.1) where we define the input sequence as the context and the output sequence as the question. This question generator is trained to maximize the log-likelihood of all (context, question) pairs in the training data. Parameters of this model are updated during adversarial training. Answer Generator. We pretrain our answer generator using the sequence-to-sequence model (§2.1) where we define the input sequence as the concatenation of the context and the question and the output sequence as the answer. This answer generator is trained to maximize the log-likelihood of all ([context+question], answer) pairs in the training data. Unlike the question generator, the parameters of the answer generator are kept fixed during the adversarial training. Discriminator. We pretrain the discriminator using (context, question, answer) triples from the training data. For positive instances, we use a context and its true (question, answer) and for negative instances, we use the same context but randomly sample a question from the training data (and use the answer paired with that random question). 3 EXPERIMENTAL RESULTS We base our experimental design on the following research questions: 1. Do generation models outperform simpler retrieval baselines? 2. Does optimizing the UTILITY reward improve over maximum likelihood training? 3. Does using adversarial training improve over optimizing the pretrained UTILITY? 4. How do the models perform when evaluated for nuances such as specificity and usefulness? 3.1 DATASETS We evaluate our model on two datasets. The first is from StackExchange and was curated by Rao & Daumé III (2018); the second is from Amazon, curated by McAuley & Yang (2016), and has not previously been used for the task of question generation. StackExchange. This dataset consists of posts on stackexchange.com, questions asked about those posts, and their answers, collected from three related subdomains of stackexchange.com (askubuntu, unix and superuser). Additionally, for 500 instances each from the tune and the test set, the dataset includes 1 to 5 other questions identified as valid questions by expert human annotators from a pool of candidate questions. This dataset consists of 61,681 training, 7,710 validation and 7,709 test examples. Amazon. Each instance consists of a question asked about a product on amazon.com combined with other information (product ID, question type “Yes/No”, answer type, answer and answer time). To obtain the description of the product, we use the metadata information contained in the Amazon reviews dataset (McAuley et al., 2015). We consider at most 10 questions for each product. This dataset includes several different product categories. We choose the Home and Kitchen category since it contains a high number of questions and is a relatively easy category for human-based evaluation.
This dataset consists of 19,119 training, 2,435 validation and 2,305 test examples, and each product description contains between 3 and 10 questions (average: 7). 3.2 BASELINES AND ABLATED MODELS We compare three variants (ablations) of our proposed approach, together with an information retrieval baseline: GAN-Utility is our full model which is a UTILITY function based GAN training (§2.4) including the UTILITY discriminator, a MIXER question generator and a sequence-to-sequence based answer generator. Max-Utility is our reinforcement learning baseline: the pretrained question generator model described in §2.2, trained with the UTILITY reward but without the adversarial training. MLE is the question generator model pretrained on (context, question) pairs using the maximum likelihood objective (§2.1). Lucene3 is a TF-IDF (term frequency-inverse document frequency) based document ranking system which, given a document, retrieves N other documents that are most similar to the given document. Given a context, we use Lucene to retrieve the top 10 contexts that are most similar to the given context. We randomly choose a question from the 10 questions paired with these contexts to construct our Lucene baseline4. Experimental details of all our models are described in Appendix B. 3.3 EVALUATION METRICS We evaluate initially with several automated evaluation metrics, and then more substantially based on crowdsourced human judgments. Automatic metrics include: Diversity, which calculates the proportion of unique trigrams5 in the output to measure diversity, as commonly used to evaluate dialogue generation (Li et al., 2016b); BLEU (Papineni et al., 2002), which evaluates n-gram precision between a predicted sentence and reference sentences; and METEOR (Banerjee & Lavie, 2005), which is similar to BLEU but includes stemmed and synonym matches when measuring the similarity between the predicted sequence and the reference sequences. 3https://lucene.apache.org/ 4For the Amazon dataset, we ignore questions asked about products of the same brand as the given product, since Amazon replicates questions across the same brand, allowing the true question to be included in that set. 5We report trigrams, but bigrams and unigrams follow similar trends. Human judgments involve showing contexts and generated questions to crowdworkers6 and asking them to evaluate the questions along several axes. Roughly, we ask for the following five judgments for each question (exact wordings in Appendix C): Is it relevant (yes/no); Is it grammatical (yes/comprehensible/incomprehensible); How specific is it to this product (four options from “specific to only this product” to “generic to any product”); Does this question ask for new information not contained in the description (completely/somewhat/no); and How useful is this question to a potential buyer (four options from “should be included in the description” to “useful only to the person asking”). For the last three questions, we also allowed a “not applicable” response in the case that the question was either ungrammatical or irrelevant. 3.4 AUTOMATIC METRIC RESULTS Table 1 shows the results on the two datasets when evaluated according to the automatic metrics. In the Amazon dataset, GAN-Utility outperforms all ablations on DIVERSITY, suggesting that it produces more diverse outputs. Lucene, on the other hand, has the highest DIVERSITY since it consists of human generated questions, which tend to be more diverse because they are much longer compared to model generated questions.
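For clarity, the DIVERSITY metric from §3.3 can be computed as in the following sketch; this is our reading of "proportion of unique trigrams", and whitespace tokenization is an assumption:

```python
# Sketch of the DIVERSITY metric (distinct-3 in the dialogue literature):
# unique trigrams divided by total trigrams across all generated questions.
def diversity(questions, n=3):
    ngrams = [tuple(toks[i:i + n])
              for q in questions
              for toks in [q.split()]
              for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

# e.g. diversity(["what are the dimensions ?", "is it dishwasher safe ?"]) == 1.0
```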
This comes at the cost of a lower match with the references, as visible in the BLEU and METEOR scores. In terms of BLEU and METEOR, the results are inconsistent. Although GAN-Utility outperforms all baselines according to METEOR, the fully ablated MLE model has a higher BLEU score. This is because the BLEU score looks for exact n-gram matches, and since MLE produces more generic outputs, it is much more likely to match one of the 10 references compared to the specific/diverse outputs of GAN-Utility, since one of those ten is highly likely to itself be generic. In the StackExchange dataset GAN-Utility outperforms all ablations on both BLEU and METEOR. Unlike in the Amazon dataset, MLE does not outperform GAN-Utility in BLEU. This is because the MLE outputs in this dataset are not as generic as in the Amazon dataset due to the highly technical nature of contexts in StackExchange. As in the Amazon dataset, GAN-Utility outperforms MLE on DIVERSITY. Interestingly, the Max-Utility ablation achieves a higher DIVERSITY score than GAN-Utility. On manual analysis we find that Max-Utility produces longer outputs compared to GAN-Utility, but at the cost of being less grammatical. 3.5 HUMAN JUDGEMENTS ANALYSIS Table 2 shows the numeric results of the human-based evaluation performed on the reference and the system outputs on 500 random samples from the test set of the Amazon dataset.7 These results overall show that the GAN-Utility model successfully generates the most specific questions, while being equally good at seeking new information and being useful to potential buyers. All approaches produce relevant, grammatical questions. Our models are all equally good at seeking new information, but are weaker than Lucene, which performs better on new information but at the cost of much lower specificity and slightly lower relevance. 6We use Figure-Eight, https://www.figure-eight.com. We paid crowdworkers 5 cents per judgment. 7We could not ask crowdworkers to evaluate the StackExchange data due to its highly technical nature. Our models are also all equally good at generating useful questions: their usefulness score is significantly better than both Lucene and Reference, largely because Lucene and Reference tend to ask questions that are more often useful only to the person asking the question, making them less useful for other potential buyers (see Figure 4). Our full model, GAN-Utility, performs significantly better when measured by specificity to the product, which aligns with the higher DIVERSITY score obtained by GAN-Utility under the automatic metric evaluation. 4 RELATED WORK Question Generation. Most previous work on question generation has been on generating reading comprehension style questions, i.e. questions that ask about information present in a given text (Heilman, 2011; Rus et al., 2010; 2011; Duan et al., 2017). Outside reading comprehension questions, Labutov et al. (2015) use crowdsourcing to generate question templates, Liu et al. (2010) use templated questions to help authors write better related work sections, and Mostafazadeh et al. (2016) introduced a visual question generation task that focuses on generating natural and engaging questions about an image. Mostafazadeh et al. (2017) introduced an extension of this task called the Image Grounded Conversation task where they use both the image and some initial textual context to generate a natural follow-up question and a response to that question. Buck et al.
(2017) propose an active question answering model where they build an agent that learns to reformulate the question to be asked to a question-answering system so as to elicit the best possible answers. Duan et al. (2017) extract a large number of question-answer pairs from community question answering forums and use them to train a model that can generate a natural question given a passage. Neural Models and Adversarial Training for Text Generation. Neural network based models have had significant success at a variety of text generation tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), summarization (Nallapati et al., 2016), dialog (Serban et al., 2016; Bordes et al., 2016; Li et al., 2016a; Serban et al., 2017), textual style transfer (Jhamtani et al., 2017; Kabbara & Cheung, 2016; Rao & Tetreault, 2018) and question answering (Yin et al., 2016; Serban et al., 2016). Our task is most similar to dialog, in which a wide variety of possible outputs are acceptable, and where lack of specificity in generated outputs is common. We address this challenge using an adversarial network approach (Goodfellow et al., 2014), a training procedure that can generate natural-looking outputs and has been effective for natural image generation (Denton et al., 2015). Due to the challenges in optimizing over discrete output spaces like text, Yu et al. (2017) introduced a Seq(uence)GAN approach where they overcome this issue by using REINFORCE to optimize the generator. Li et al. (2017) train an adversarial model similar to SeqGAN for generating the next utterance in a dialog given a context. However, unlike our work, their discriminator is a binary classifier trained to distinguish between human and machine generated utterances. Finally, Fedus et al. (2018) introduce an actor-critic conditional GAN for filling in missing text conditioned on the surrounding context. 5 CONCLUSION In this work, we describe a novel approach to the problem of clarification question generation. Given a context, we use the observation of Rao & Daumé III (2018) that the usefulness of a clarification question can be measured by the value of updating the context with an answer to the question. We use a sequence-to-sequence model to generate a question given a context and a second sequence-to-sequence model to generate an answer given the context and the question. Given the (context, predicted question, predicted answer) triple, we calculate the utility of this triple and use it as a reward to retrain the question generator using the reinforcement-learning-based MIXER algorithm. Further, to improve upon the utility function, we reinterpret it as a discriminator in an adversarial setting and train both the utility function and the MIXER model in a minimax fashion. We find that our adversarial training approach produces more diverse questions compared to both a model trained using the maximum likelihood objective and a model trained using utility-reward-based reinforcement learning. There are several avenues of future work in this area. Following Mostafazadeh et al. (2016), we could combine text input with image input to generate more relevant questions. Because some questions can be answered by looking at the product image in the Amazon dataset (McAuley & Yang, 2016), this could help generate more relevant and useful questions.
As in most free text generation problems where the set of possible outputs is large, one significant research challenge is that of automatic evaluation (Lowe et al., 2016): in our results we saw some correlation between human judgments and automatic metrics, but not enough to trust the automatic metrics completely. Lastly, we would like to integrate such a question generation model into a real-world platform like StackExchange or Amazon to understand the real utility of such models and to unearth additional research questions. A DETAILS OF SEQUENCE-TO-SEQUENCE MODEL In this section, we describe the attention based sequence-to-sequence model introduced in §2.1 of the main paper. In Eq 1, h̃t is the attentional hidden state of the RNN at time t obtained by concatenating the target hidden state ht and the source-side context vector c̃t, and Ws is a linear transformation that maps ht to an output vocabulary-sized vector. The predicted token qt is the token in the vocabulary that is assigned the highest probability using the softmax function. Each attentional hidden state h̃t depends on a distinct input context vector c̃t computed using a global attention mechanism over the input hidden states as: $\tilde{c}_t = \sum_{n=1}^{N} a_{nt} h_n$ (7) $a_{nt} = \mathrm{align}(h_n, h_t) = \exp\left[h_t^\top W_a h_n\right] \Big/ \sum_{n'} \exp\left[h_t^\top W_a h_{n'}\right]$ (8) The attention weights $a_{nt}$ are calculated based on the alignment score between the source hidden state hn and the current target hidden state ht. B EXPERIMENTAL DETAILS In this section, we describe the details of our experimental setup. We preprocess all inputs (context, question and answers) using tokenization and lowercasing. We set the max length of the context to 100, the question to 20 and the answer to 20. Our sequence-to-sequence model (§2.1) operates on word embeddings which are pretrained on in-domain data using GloVe (Pennington et al., 2014). We use embeddings of size 200 and a vocabulary with cut-off frequency set to 10. During training, we use teacher forcing. During testing, we use beam search decoding with beam size 5. We use two hidden layers for both the encoder and decoder recurrent neural network models, with hidden unit size set to 100. We use a dropout of 0.5 and a learning rate of 0.0001. In the MIXER model, we start with ∆ = T and decrease it by 2 for every epoch (we found decreasing ∆ to 0 is ineffective for our task, hence we stop at 2). C DETAILS OF HUMAN BASED EVALUATION In this section, we describe in detail the human based evaluation methodology introduced in §3.3 of the main paper. Relevance: We ask a Yes-No question: “Is the question on topic?” Grammaticality: We ask “Is the question grammatical?”, and let workers choose from: [Grammatical, Comprehensible, Incomprehensible]. Specificity: We ask “How specific is the question?” and let workers choose from: 1. Specific pretty much only to this product 2. Specific to this and other very similar products (same product from different manufacturer) 3. Generic enough to be applicable to many other products of this type 4. Generic enough to be applicable to any product under Home and Kitchen 5. N/A (Not applicable): Question is not on topic OR is incomprehensible Seeking new information: We ask “Does the question ask for new information currently not included in the description?” and let workers choose from: [Completely, Somewhat, No, N/A] Usefulness: We ask “How useful is the question to a potential buyer (or a current user) of the product?” and let workers choose from: 1. Useful enough to be included in the product description 2.
Useful to a large number of potential buyers (or current users) 3. Useful to a small number of potential buyers (or current users) 4. Useful only to the person asking the question 5. N/A (Not applicable): Not on topic OR incomprehensible OR not asking new information
1. What is the main contribution of the paper regarding clarification question generation? 2. What are the strengths and weaknesses of the proposed GAN-based approach? 3. Do you have any concerns about the experimental setup or results? 4. How does the reviewer assess the novelty and effectiveness of the proposed method? 5. Are there any suggestions for improving the paper's content or research methodology?
Review
Review This paper addresses an interesting task of clarification question generation by proposing a GAN-based approach. It mainly builds on the ideas of Rao & Daumé III (2018) to understand the usefulness of generated questions via a utility function that acts as the discriminator, while a simple seq-to-seq model is used to generate questions in the generator module. The proposed GAN model is inspired by the sequence GAN model of Yu et al. (2017) with simple variations such as using MIXER (Ranzato et al., 2015) as the generator and not using a CNN-based discriminator. Experiments were conducted on two datasets, and the obtained results were mixed and not conclusive. Overall, due to the lack of novelty and unconvincing results, I feel the paper needs more work before it is ready for publication. My detailed comments are below: - "As a running example, we will use the Amazon setting: ..." --> The example in Figure 1 is only referred to once and not even in Table 3 to show the related model predictions. I would suggest truly considering it as a running example to clarify the training and testing procedure better. Also, an example of the StackExchange dataset would be helpful. - The utility function (Section 2.3) seems to be simple. Did you evaluate the effectiveness of this function solely in predicting the usefulness of a question? How would a binary classifier work instead? I also wonder why simple seq-to-seq models were used as question/answer generators while there exists a lot of work that already outperforms these models for similar text generation tasks. - "In our model, the answer is an latent variable: we do not actually use it anywhere except to train the discriminator. Because of this, we train our discriminator using (context, true question, generated answer) triples as positive instances and (context, generated question, generated answer) triples as the negative instances." --> This part is not clear. Did you use generated answers or the true answers as part of the positive instances? Please clarify across the paper when you used the generated answer/question and when you used the true answer/question. - "Unlike the question generator, the parameters of the answer generator are kept fixed during the adversarial training" --> please explain why. - I like that the experiments were carried out on multiple datasets. What are the average lengths (in number of words) of the contexts, questions, and answers for both datasets? What are the impacts of the length restrictions of 100, 20, and 20 you set for context, question, and answer on the evaluation results? How did you come up with these numbers? I would suggest including an analysis of the impact of variable lengths of context, question, and answer on the model performance. - It's not clear how the Lucene system was built with human generated questions. Please clarify. - Table 2 shows mixed results; what should we conclude from this? - How many crowdworkers were used for the human judgements? What was the inter-annotator agreement? How did you convert the human answers into the numeric scores of Table 2? Without this information, it is not possible to judge the utility of the human evaluation. StackExchange results could have been annotated via other crowdsourcing venues, e.g., Upwork. - The related work should be better compared and contrasted with the proposed work; especially, the main contributions of the paper should be clearly highlighted. - Table 3 is not referred to in the text. I would suggest including the names of the products for better context.
The human evaluation scores look very subjective; hence, the inter-annotator agreement is an essential factor. - There are a lot of grammatical mistakes and inconsistencies across the paper that need to be corrected.
ICLR
Title Answer-based Adversarial Training for Generating Clarification Questions Abstract We propose a generative adversarial training approach for the problem of clarification question generation. Our approach generates clarification questions with the goal of eliciting new information that would make the given context more complete. We develop a Generative Adversarial Network (GAN) where the generator is a sequence-to-sequence model and the discriminator is a utility function that models the value of updating the context with the answer to the clarification question. We evaluate on two datasets, using both automatic metrics and human judgments of usefulness, specificity and relevance, showing that our approach outperforms both a retrieval-based model and ablations that exclude the utility model and the adversarial training. 1 INTRODUCTION A goal of natural language processing is to develop techniques that enable machines to process naturally occurring language. However, not all language is clear and, as humans, we may not always understand each other (Grice, 1975); in cases of gaps or mismatches in knowledge, we tend to ask questions (Graesser et al., 2008). In this work, we focus on the task of automatically generating clarification questions: questions that ask for information that is missing from a given linguistic context. Our clarification question generation model builds on the sequence-to-sequence approach that has proven effective for several language generation tasks (Sutskever et al., 2014; Serban et al., 2016; Yin et al., 2016; Du et al., 2017). Unfortunately, training a sequence-to-sequence model directly on context/question pairs yields generated questions that are highly generic1, corroborating a common finding in dialog systems (Li et al., 2016b). Our goal is to be able to generate questions that are useful and specific. To achieve this, we begin with a recent observation of Rao & Daumé III (2018), who considered the task of question reranking: the system should learn to generate clarification questions whose answers have high utility, which they defined as the likelihood that this question would lead to an answer that will make the context more complete (§2.3). Inspired by this, we construct a question generation model that first generates a question given a context, and then generates a hypothetical answer to that question. Given this (context, question, answer) tuple, we train a utility calculator to estimate the usefulness of this question. We then show that this utility calculator can be generalized using ideas for generative adversarial networks (Goodfellow et al., 2014) for text (Yu et al., 2017), wherein the utility predictor plays the role of the “discriminator” and the question generator is the “generator” (§2.2), which we train using the MIXER algorithm (Ranzato et al., 2015). We evaluate our approach on two question generation datasets: for posts on Stack Exchange and for Amazon product descriptions (Figure 1). Using both automatic metrics and human evaluation, we demonstrate that our adversarially trained model generates a more diverse set of questions than all the baseline models. 
Furthermore, we find that although all models generate questions that are relevant to the context at hand, our adversarially-trained model generates questions that are more specific to the context.2 1For instance, in the context of asking questions about home appliances, frequently asking like “What are the dimensions?” or “Is it made in China?” 2Code and data release: All code will be released under a license at least as permissive as MIT; all data will be made available after publication subject to allowance by the original licenses. 2 TRAINING A CLARIFICATION QUESTION GENERATOR Our goal is to build a model that, given a context, can generate an appropriate clarification question. As a running example, we will use the Amazon setting: where the dataset consists of (context, question, answer) triples where the context is the product description, question is clarification question about that product that (preferably) is not already answered in the description and answer is the seller’s (or other users’) reply to the question. Representationally, our question generator is a standard sequence-to-sequence model with attention (§2.1). The learning problem is: how to train the sequence-to-sequence model to produce good question. An overview of our training setup is shown in Figure 2. Given a context, our question generator outputs a question. In order to evaluate the usefulness of this question, we then have a second sequence-to-sequence model called the “answer generator” that generates a hypothetical answer based on the context and the question (§2.5). This (context, question and answer) triple is fed into a UTILITY calculator, whose initial goal is to estimate the probability that this question/answer pair is useful in this context (§2.3). This UTILITY is treated as a reward, which is used to update the question generator using the MIXER (Ranzato et al., 2015) algorithm (§2.2). Finally, we reinterpret the answer-generator-plus-utility-calculator component as a discriminator for differentiating between true (context, question, answer) triples and synthetic triples (§ 2.4), and optimize this adversarial objective using MIXER. 2.1 SEQUENCE-TO-SEQUENCE MODEL FOR QUESTION GENERATION We use a standard attention based sequence-to-sequence model (Luong et al., 2015) for our question generator. Given an input sequence (context) c = (c1, c2, ..., cN ), this model generates an output sequence (question) q = (q1, q2, ..., qT ). The architecture of this model is an encoder-decoder with attention. The encoder is a recurrent neural network (RNN) operating over the input word embeddings to compute a source context representation c̃. The decoder uses this source representation to generate the target sequence one word at a time: p(q|c̃t) = T∏ t=1 p(qt|q1, q2, ..., qt−1, c̃t) = T∏ t=1 softmax(Wsh̃t) ; where h̃t = tanh(Wc[c̃t;ht]) (1) In Eq 1, h̃t is the attentional hidden state of the RNN at time t and Ws and Wc are parameters of the model (details in Appendix A). The predicted token qt is the token in the vocabulary that is assigned the highest probability using the softmax function. 
The standard training objective for sequence-tosequence model is to maximize the log-likelihood of all (c, q) pairs in the training data D which is equivalent to minimizing the loss, Lmle(D) = − ∑ (c,q)∈D T∑ t=1 log p(qt|q1, q2, ..., qt−1, c) (2) 2.2 TRAINING THE GENERATOR TO OPTIMIZE QUESTION UTILITY Training sequence-to-sequence models for the task of clarification question generation (with context as input and question as output) using maximum likelihood objective unfortunately leads to the generation of highly generic questions, such as “What are the dimensions?” when asking questions about home appliances. This issue has been observed in dialog generation as well (Li et al., 2016b). Recently Rao & Daumé III (2018) observed that usefulness of a question can be better measured as the utility that would be obtained if the context were updated with the answer to the proposed question. We use this observation to define a UTILITY based reward function and train the question generator to optimize this reward. We train the UTILITY reward to predict the likelihood that a question would generate an answer that would increase the utility of the context by adding useful information to it (see §2.3 for details). Similar to optimizing metrics like BLEU and ROUGE, this UTILITY function also operates on discrete text outputs, which makes optimization difficult due to non-differentiability. A successful recent approach dealing with the non-differentiability while also retaining some advantages of maximum likelihood training is the Mixed Incremental Cross-Entropy Reinforce (Ranzato et al., 2015) algorithm (MIXER). In MIXER, the overall loss L is differentiated as in REINFORCE (Williams, 1992): L(θ) = −Eqs∼pθr(qs) ; ∇θL(θ) = −Eqs∼pθr(qs)∇θ log pθ(qs) (3) where ys is a random output sample according to the model pθ, where θ are the parameters of the network. We then approximate the expected gradient using a single sample qs = (qs1, q s 2, ..., q s T ) from the model distribution (pθ). In REINFORCE, the policy is initialized random, which can cause long convergence times. To solve this, MIXER starts by optimizing maximum likelihood and slowly shifts to optimizing the expected reward from Eq 3. For the initial ∆ time steps, MIXER optimizes Lmle and for the remaining (T −∆) time steps, it optimizes the external reward. In our model, we minimize the UTILITY-based loss Lmax-utility defined as: Lmax-utility = −(r(qp)− r(qb)) T∑ t=1 log p(qt|q1, q2, ..., qt−1, ct) (4) where r(qp) is the UTILITY based reward on the predicted question and r(qb) is a baseline reward introduced to reduce the high variance otherwise observed when using REINFORCE. In MIXER, the baseline is estimated using a linear regressor that takes in the current hidden states of the model as input and is trained to minimize the mean squared error (||r(qp) − r(qb)||)2. Instead we use a self-critical training approach Rennie et al. (2017) where the baseline is estimated using the reward obtained by the current model under greedy decoding during test time. 2.3 ESTIMATING A UTILITY FUNCTION FROM HISTORICAL DATA Given a (context, question, answer) triple, Rao & Daumé III (2018) introduce a utility function UTILITY(c, q, a) to calculate the value of updating a context c with the answer a to a clarification question q. 
The inspiration for thier utility function is to estimate the probability that an answer would be a meaningful addition to a context, and treat this as a binary classification problem where the positive instances are the true (context, question, answer) triples in the dataset whereas the negative instances are contexts paired with a random (question, answer) from the dataset. The model we use is to first embed of the words in the context c, then use an LSTM (long-short term memory) (Hochreiter & Schmidhuber, 1997) to generate a neural representation c̄ of the context by averaging the output of each of the hidden states. Similarly, we obtain a neural representation q̄ and ā of q and a respectively using question and answer LSTM models. Finally, a feed forward neural network FUTILITY(c̄, q̄, ā) predicts the usefulness of the question. 2.4 UTILITY GAN FOR CLARIFICATION QUESTION GENERATION The UTILITY function trained on true vs random samples from real data (as described in the previous section) can be a weak reward signal for questions generated by a model due to the large discrepancy between the true data and the model’s outputs. In order to strengthen the reward signal, we reinterpret the UTILITY function (coupled with the answer generator) as a discriminator in an adversarial learning setting. That is, instead of taking the UTILITY calculator to be a fixed model that outputs the expected quality of a question/answer pair, we additionally optimize it to distinguish between true question/answer pairs and model-generated ones. This reinterpretation turns our model into a form of a generative adversarial network (GAN) (Goodfellow et al., 2014). A GAN is a training procedure for “generative” models that can be interpreted as a game between a generator and a discriminator. The generator is an arbitrary model g ∈ G that produces outputs (in our case, questions). The discriminator is another model d ∈ D that attempts to classify between true outputs and model-generated outputs. The goal of the generator is to generate data such that it can fool the discriminator; the goal of the discriminator is to be able to successfully distinguish between real and generated data. In the process of trying to fool the discriminator, the generator produces data that is as close as possible to the real data distribution. Generically, the GAN objective is: LGAN(D,G) = max d∈D min g∈G Ex∼p̂ log d(x) + Ez∼pz log(1− d(g(z))) (5) where x is sampled from the true data distribution p̂, and z is sampled from a prior defined on input noise variables pz . Although GANs have been successfully used for image tasks, training GANs for text generation is challenging due to the discrete nature of outputs in text. The discrete outputs from the generator make it difficult to pass the gradient update from the discriminator to the generator. Recently, Yu et al. (2017) proposed a sequence GAN model for text generation to overcome this issue. They treat their generator as an agent and use the discriminator as a reward function to update the generative model using reinforcement learning techniques. Our GAN-based approach is inspired by this sequence GAN model with two main modifications: a) We use the MIXER algorithm as our generator (§2.2) instead of policy gradient approach; and b) We use the UTILITY function (§2.3) as our discriminator instead of a convolutional neural network (CNN). In our model, the answer is an latent variable: we do not actually use it anywhere except to train the discriminator. 
Because of this, we train our discriminator using (context, true question, generated answer) triples as positive instances and (context, generated question, generated answer) triples as the negative instances. Formally, our objective function is: LGAN-U(U ,M) = max u∈U min m∈M Eq∼p̂ log u(c, q,A(c, q)) + Ec∼p̂ log(1− u(c,m(c),A(c,m(c)))) (6) where U is the UTILITY discriminator,M is the MIXER generator, p̂ is our data of (context, question, answer) triples and A is our answer generator. 2.5 PRETRAINING Question Generator. We pretrain our question generator using the sequence-to-sequence model§2.1 where we define the input sequence as the context and the output sequence as the question. This answer generator is trained to maximize the log-likelihood of all ([context+question], answer) pairs in the training data. Parameters of this model are updated during adversarial training. Answer Generator. We pretrain our answer generator using the sequence-to-sequence model§2.1 where we define the input sequence as the concatenation of the context and the question and the output sequence as the answer. This answer generator is trained to maximize the log-likelihood of all (context, question) pairs in the training data. Unlike the question generator, the parameters of the answer generator are kept fixed during the adversarial training. Discriminator. We pretrain the discriminator using (context, question, answer) triples from the training data. For positive instances, we use a context and its true question, answer and for negative instances, we use the same context but randomly sample a question from the training data (and use the answer paired with that random question). 3 EXPERIMENTAL RESULTS We base our experimental design on the following research questions: 1. Do generation models outperform simpler retrieval baselines? 2. Does optimizing the UTILITY reward improve over maximum likelihood training? 3. Does using adversarial training improve over optimizing the pretrained UTILITY? 4. How do the models perform when evaluated for nuances such as specificity and usefulness? 3.1 DATASETS We evaluate our model on two datasets. The first is from StackExchange and was curated by Rao & Daumé III (2018); the second is from Amazon, curated by McAuley & Yang (2016), and has not previously been used for the task of question generation. StackExchange. This dataset consists of posts, questions asked to that post on stackexchange.com (and answers) collected from three related subdomains on stackexchage.com (askubuntu, unix and superuser). Additionally, for 500 instances each from the tune and the test set, the dataset includes 1 to 5 other questions identified as valid questions by expert human annotators from a pool of candidate questions. This dataset consists of 61, 681 training, 7710 validation and 7709 test examples. Amazon. Each instance consists of a question asked about a product on amazon.com combined with other information (product ID, question type “Yes/No”, answer type, answer and answer time). To obtain the description of the product, we use the metadata information contained in the amazon reviews dataset (McAuley et al., 2015). We consider at most 10 questions for each product. This dataset includes several different product categories. We choose the Home and Kitchen category since it contains a high number of questions and is relatively easy category for human based evaluation. 
This dataset consists of 19, 119 training, 2435 validation and 2305 test examples, and each product description contains between 3 and 10 questions (average: 7). 3.2 BASELINES AND ABLATED MODELS We compare three variants (ablations) of our proposed approach, together with an information retrieval baseline: GAN-Utility is our full model which is a UTILITY function based GAN training (§2.4) including the UTILITY discriminator, a MIXER question generator and a sequence-tosequence based answer generator. Max-Utility is our reinforcement learning baseline with a pretrained question generator described model (§ 2.2) without the adversarial training. MLE is the question generator model pretrained on context, question pairs using maximum likelihood objective (§2.1). Lucene3 is a TF-IDF (term frequency-inverse document frequency) based document ranking system which given a document, retrieves N other documents that are most similar to the given document. Given a context, we use Lucene to retrieve top 10 contexts that are most similar to the given context. We randomly choose a question from the 10 questions paired with these contexts to construct our Lucene baseline4. Experimental details of all our models are described in Appendix B. 3.3 EVALUATION METRICS We evaluate initially with several automated evaluation metrics, and then more substantially based on crowdsourced human judgments. Automatic metrics include: Diversity, which calculates the proportion of unique trigrams5 in the output to measure the diversity as commonly used to evaluate dialogue generation (Li et al., 2016b); BLEU (Papineni et al., 2002), which evaluate n-gram precision between a predicted sentence and reference sentences; and METEOR (Banerjee & Lavie, 2005), which is similar to BLEU but includes stemmed and synonym matches when measuring the similarity between the predicted sequence and the reference sequences. 3https://lucene.apache.org/ 4For the Amazon dataset, we ignore questions asked to products of the same brand as the given product since Amazon replicates questions across same brand allowing the true question to be included in that set. 5We report trigrams, but bigrams and unigrams follow similar trends. Human judgments involve showing contexts and generated questions to crowdworkers6 and asking them to evaluate the questions along several axes. Roughly, we ask for the following five judgments for each question (exact wordings in Appendix C): Is it relevant (yes/no); Is it grammatical (yes/comprehensible/incomprehensible); How specific is it to this product (four options from “specific to only this product” to “generic to any product”); Does this question ask for new information not contained in the discription (completely/somewhat/no); and How useful is this question to a potential buyer (four options from “should be included in the description” to “useful only to the person asking”). For the last three questions, we also allowed a “not applicable” response in the case that the question was either ungrammatical or irrelevant. 3.4 AUTOMATIC METRIC RESULTS Table 1 shows the results on the two datasets when evaluated according to automatic metrics. In the Amazon dataset, GAN-Utility outperforms all ablations on DIVERSITY, suggesting that it produces more diverse outputs. Lucene, on the other hand, has the highest DIVERSITY since it consists of human generated questions, which tend to be more diverse because they are much longer compared to model generated questions. 
This comes at the cost of lower match with the reference as visible in the BLEU and METEOR scores. In terms of BLEU and METEOR, there is inconsistency. Although GAN-Utility outperforms all baselines according to METEOR, the fully ablated MLE model has a higher BLEU score. This is because BLEU score looks for exact n-gram matches and since MLE produces more generic outputs, it is much more likely that it will match one of 10 references compared to the specific/diverse outputs of GAN-Utility, since one of those ten is highly likely to itself be generic. In the StackExchange dataset GAN-Utility outperforms all ablations on both BLEU and METEOR. Unlike in the Amazon dataset, MLE does not outperform GAN-Utility in BLEU. This is because the MLE outputs in this dataset are not as generic as in the amazon dataset due to the highly technical nature of contexts in StackExchange. As in the Amazon dataset, GAN-Utility outperforms MLE on DIVERSITY. Interestingly, the Max-Utility ablation achieves a higher DIVERSITY score than GAN-Utility. On manual analysis we find that Max-Utility produces longer outputs compared to GAN-Utility but at the cost of being less grammatical. 3.5 HUMAN JUDGEMENTS ANALYSIS Table 2 shows the numeric results of human-based evaluation performed on the reference and the system outputs on 500 random samples from the test set of the Amazon dataset.7 These results overall show that the GAN-Utility model successfully generates the most specific questions, while being equally good at seeking new information and being useful to potential buyers. All approaches produce relevant, grammatical questions. All our models are all equally good at seeking new information, but are weaker than Lucene, which performs better according to new information but at 6We use Figure-Eight, https://www.figure-eight.com. We paid crowdworkers 5 cents per judgment. 7We could not ask crowdworkers evaluate the StackExchange data due to its highly technical nature. the cost of much lower specificity and slightly lower relevance. Our models are all equally good also at generating useful questions: their usefulness score is significantly better than both Lucene and Reference, largely because Lucene and Reference tend to ask questions that are more often useful only to the person asking the question, making them less useful for potential other buyers (see Figure 4). Our full model, GAN-Utility, performs significantly better when measured by specificity to the product, which aligns with the higher DIVERSITY score obtained by GAN-Utility under automatic metric evaluation. 4 RELATED WORK Question Generation. Most previous work on question generation has been on generating reading comprehension style questions i.e. questions that ask about information present in a given text (Heilman, 2011; Rus et al., 2010; 2011; Duan et al., 2017). Outside reading comprehension questions, Labutov et al. (2015) use crowdsourcing to generate question templates, Liu et al. (2010) use templated questions to help authors write better related work sections, Mostafazadeh et al. (2016) introduced visual question answer tasking that focuses on generating natural and engaging questions about an image. Mostafazadeh et al. (2017) introduced an extension of this task called the Image Grounded Conversation task where they use both the image and some initial textual context to generate a natural follow-up question and a response to that question. Buck et al. 
Buck et al. (2017) propose an active question answering model where they build an agent that learns to reformulate the question to be asked to a question-answering system so as to elicit the best possible answers. Duan et al. (2017) extract a large number of question-answer pairs from community question answering forums and use them to train a model that can generate a natural question given a passage.

Neural Models and Adversarial Training for Text Generation. Neural network based models have had significant success at a variety of text generation tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), summarization (Nallapati et al., 2016), dialog (Serban et al., 2016; Bordes et al., 2016; Li et al., 2016a; Serban et al., 2017), textual style transfer (Jhamtani et al., 2017; Kabbara & Cheung, 2016; Rao & Tetreault, 2018) and question answering (Yin et al., 2016; Serban et al., 2016). Our task is most similar to dialog, in which a wide variety of possible outputs are acceptable, and where lack of specificity in generated outputs is common. We address this challenge using an adversarial network approach (Goodfellow et al., 2014), a training procedure that can generate natural-looking outputs and which has been effective for natural image generation (Denton et al., 2015). Due to the challenges in optimizing over discrete output spaces like text, Yu et al. (2017) introduced a Seq(uence)GAN approach where they overcome this issue by using REINFORCE for optimization. Li et al. (2017) train an adversarial model similar to SeqGAN for generating the next utterance in a dialog given a context. However, unlike our work, their discriminator is a binary classifier trained to distinguish between human and machine generated utterances. Finally, Fedus et al. (2018) introduce an actor-critic conditional GAN for filling in missing text conditioned on the surrounding context.

5 CONCLUSION

In this work, we describe a novel approach to the problem of clarification question generation. Given a context, we use the observation of Rao & Daumé III (2018) that the usefulness of a clarification question can be measured by the value of updating the context with an answer to the question. We use a sequence-to-sequence model to generate a question given a context, and a second sequence-to-sequence model to generate an answer given the context and the question. Given the (context, predicted question, predicted answer) triple, we calculate the utility of this triple and use it as a reward to retrain the question generator using the reinforcement-learning-based MIXER model. Further, to improve upon the utility function, we reinterpret it as a discriminator in an adversarial setting and train both the utility function and the MIXER model in a minimax fashion. We find that our adversarial training approach produces more diverse questions compared to both a model trained using a maximum likelihood objective and a model trained using utility-reward-based reinforcement learning. There are several avenues of future work in this area. Following Mostafazadeh et al. (2016), we could combine text input with image input to generate more relevant questions. Because some questions can be answered by looking at the product image in the Amazon dataset (McAuley & Yang, 2016), this could help generate more relevant and useful questions.
As in most free text generation problems where the set of possible outputs is large, one significant research challenge is that of automatic evaluation (Lowe et al., 2016): in our results we saw some correlation between human judgments and automatic metrics, but not enough to trust the automatic metrics completely. Lastly, we would like to integrate such a question generation model into a real world platform like StackExchange or Amazon, to understand the real utility of such models and to unearth additional research questions.

A DETAILS OF SEQUENCE-TO-SEQUENCE MODEL

In this section, we describe the attention based sequence-to-sequence model introduced in §2.1 of the main paper. In Eq 1, h̃_t is the attentional hidden state of the RNN at time t obtained by concatenating the target hidden state h_t and the source-side context vector c̃_t, and W_s is a linear transformation that maps h_t to an output vocabulary-sized vector. The predicted token q_t is the token in the vocabulary that is assigned the highest probability using the softmax function. Each attentional hidden state h̃_t depends on a distinct input context vector c̃_t computed using a global attention mechanism over the input hidden states as:

c̃_t = ∑_{n=1}^{N} a_{nt} h_n  (7)
a_{nt} = align(h_n, h_t) = exp[h_t^T W_a h_n] / ∑_{n′} exp[h_t^T W_a h_{n′}]  (8)

The attention weight a_{nt} is calculated based on the alignment score between the source hidden state h_n and the current target hidden state h_t (we sketch this computation in code at the end of Appendix C).

B EXPERIMENTAL DETAILS

In this section, we describe the details of our experimental setup. We preprocess all inputs (context, question and answers) using tokenization and lowercasing. We set the max length of the context to be 100, the question to be 20 and the answer to be 20. Our sequence-to-sequence model (§2.1) operates on word embeddings which are pretrained on in-domain data using Glove (Pennington et al., 2014). We use embeddings of size 200 and a vocabulary with cut-off frequency set to 10. During train time, we use teacher forcing. During test time, we use beam search decoding with beam size 5. We use two hidden layers for both the encoder and decoder recurrent neural networks, with the hidden unit size set to 100. We use a dropout of 0.5 and a learning rate of 0.0001. In the MIXER model, we start with ∆ = T and decrease it by 2 every epoch (we found decreasing ∆ to 0 is ineffective for our task, hence we stop at 2).

C DETAILS OF HUMAN BASED EVALUATION

In this section, we describe in detail the human based evaluation methodology introduced in §3.3 of the main paper.
Relevance: We ask a Yes-No question: “Is the question on topic?”
Grammaticality: We ask “Is the question grammatical?”, and let workers choose from: [Grammatical, Comprehensible, Incomprehensible].
Specificity: We ask “How specific is the question?” and let workers choose from:
1. Specific pretty much only to this product
2. Specific to this and other very similar products (same product from a different manufacturer)
3. Generic enough to be applicable to many other products of this type
4. Generic enough to be applicable to any product under Home and Kitchen
5. N/A (Not applicable): Question is not on topic OR is incomprehensible
Seeking new information: We ask “Does the question ask for new information currently not included in the description?” and let workers choose from: [Completely, Somewhat, No, N/A]
Usefulness: We ask “How useful is the question to a potential buyer (or a current user) of the product?” and let workers choose from:
1. Useful enough to be included in the product description
2. Useful to a large number of potential buyers (or current users)
3. Useful to a small number of potential buyers (or current users)
4. Useful only to the person asking the question
5. N/A (Not applicable): Not on topic OR incomprehensible OR not asking new information
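As a concrete illustration of the global attention computation in Equations 7 and 8 of Appendix A, here is a minimal NumPy sketch; the dimensions and variable names are our own, and a real implementation would operate on batches inside the decoder loop.

```python
import numpy as np

def global_attention(h_t, H_src, W_a):
    """Global attention over source hidden states (Equations 7-8).

    h_t:   (d,)   current target hidden state
    H_src: (N, d) source hidden states h_1..h_N
    W_a:   (d, d) bilinear alignment parameters
    Returns the context vector c_t (Eq 7) and weights a_t (Eq 8).
    """
    scores = H_src @ (W_a @ h_t)                 # h_t^T W_a h_n for each n
    scores -= scores.max()                       # numerical stability
    a_t = np.exp(scores) / np.exp(scores).sum()  # softmax over source positions
    c_t = a_t @ H_src                            # weighted sum of source states
    return c_t, a_t

# Example usage with random states:
rng = np.random.default_rng(0)
c, a = global_attention(rng.normal(size=8), rng.normal(size=(5, 8)), rng.normal(size=(8, 8)))
```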
1. What is the focus of the paper regarding generating clarification questions?
2. What are the strengths of the proposed approach, particularly in using reinforcement learning and a GAN-like setting?
3. What are the weaknesses of the paper, especially regarding the discriminator's role and the small improvements in results?
4. How does the reviewer assess the suitability of the automated metrics used in the paper?
5. Does the reviewer believe that the setup created by the authors can achieve the goal of creating useful questions?
Review
Review
In the paper, the authors try to improve the generation of clarification questions using reinforcement learning against a discriminator, creating a GAN-like setting. They train two sequence-to-sequence models that i) generate questions from context, and ii) answers from (context, question) pairs. They also train a discriminator model on (question, answer, context) triples. I believe the task and the setup created by the authors are very interesting and novel, and the paper is very well written. They show that the additional training against the discriminator leads to a (very small) increase in diversity and question specificity.
On the negative side:
- It is not clear to me that the discriminator acts as a utility function as they claim, at least in the way they define utility. It is only trained to distinguish real questions about a context from random ones.
- The results presented in Tables 1 and 2 show only very small differences from the other approaches. I wonder why that is, and how much the model actually changes in the reinforcement learning tuning step.
- The automated metrics do not seem suitable for the task, since they can only measure how close a generated example is to some gold example.
- The only significant improvement is on specificity, with the much more important goal of creating useful questions not achieved. I am actually not sure if increased utility (i.e. identifying missing information and asking about it) can be achieved with a setup like this.
But despite these weaknesses I still think this is a very interesting contribution.
ICLR
Title
On the Origin of Implicit Regularization in Stochastic Gradient Descent

Abstract
For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full batch loss function. However, moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximizes test accuracy is often larger than the learning rate which minimizes training loss. To interpret this phenomenon, we prove that for SGD with random shuffling, the mean SGD iterate also stays close to the path of gradient flow if the learning rate is small and finite, but on a modified loss. This modified loss is composed of the original loss function and an implicit regularizer, which penalizes the norms of the minibatch gradients. Under mild assumptions, when the batch size is small the scale of the implicit regularization term is proportional to the ratio of the learning rate to the batch size. We verify empirically that explicitly including the implicit regularizer in the loss can enhance the test accuracy when the learning rate is small.

1 INTRODUCTION

In the limit of vanishing learning rates, stochastic gradient descent with minibatch gradients (SGD) follows the path of gradient flow on the full batch loss function (Yaida, 2019). However, in deep networks, SGD often achieves higher test accuracies when the learning rate is moderately large (LeCun et al., 2012; Keskar et al., 2017). This generalization benefit is not explained by convergence rate bounds (Ma et al., 2018; Zhang et al., 2019), because it arises even for large compute budgets for which smaller learning rates often achieve lower training losses (Smith et al., 2020). Although many authors have studied this phenomenon (Jastrzębski et al., 2018; Smith & Le, 2018; Chaudhari & Soatto, 2018; Shallue et al., 2018; Park et al., 2019; Li et al., 2019; Lewkowycz et al., 2020), it remains poorly understood, and is an important open question in the theory of deep learning. In a recent work, Barrett & Dherin (2021) analyzed the influence of finite learning rates on the iterates of gradient descent (GD). Their approach is inspired by backward error analysis, a method for the numerical analysis of ordinary differential equation (ODE) solvers (Hairer et al., 2006). The key insight of backward error analysis is that we can describe the bias introduced when integrating an ODE with finite step sizes by introducing an ancillary modified flow. This modified flow is derived to ensure that discrete iterates of the original ODE lie on the path of the continuous solution to the modified flow. Using this technique, the authors show that if the learning rate is not too large, the discrete iterates of GD lie close to the path of gradient flow on a modified loss C̃_GD(ω) = C(ω) + (ε/4)||∇C(ω)||². This modified loss is composed of the original loss C(ω) and an implicit regularizer proportional to the learning rate which penalizes the Euclidean norm of the gradient. However, these results only hold for full batch GD, while in practice SGD with small or moderately large batch sizes usually achieves higher test accuracies (Keskar et al., 2017; Smith et al., 2020). In this work, we devise an alternative approach to backward error analysis, which accounts for the correlations between minibatches during one epoch of training.
Using this novel approach, we prove that for small finite learning rates, the mean SGD iterate after one epoch, averaged over all possible sequences of minibatches, lies close to the path of gradient flow on a second modified loss C̃_SGD(ω), which we define in Equation 1. This new modified loss is also composed of the full batch loss function and an implicit regularizer, however the structure of the implicit regularizers for GD and SGD differ, and their modified losses can have different local and global minima. Our analysis therefore helps explain both why finite learning rates can aid generalization, and why SGD can achieve higher test accuracies than GD. We assume that each training example is sampled once per epoch, in line with best practice (Bottou, 2012), and we confirm empirically that explicitly including the implicit regularization term of SGD in the training loss can enhance the test accuracy when the learning rate is small. Furthermore, we prove that if the batch size is small and the gradients are sufficiently diverse, then the expected magnitude of the implicit regularization term of SGD is proportional to the ratio of the learning rate to the batch size (Goyal et al., 2017; Smith et al., 2018). We note that many previous authors have sought to explain the generalization benefit of SGD using an analogy between SGD and stochastic differential equations (SDEs) (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Chaudhari & Soatto, 2018). However, this SDE analogy assumes that each minibatch is randomly sampled from the full dataset, which implies that some examples will be sampled multiple times in one epoch. Furthermore, the most common SDE analogy holds only for vanishing learning rates (Yaida, 2019) and therefore misses the generalization benefits of finite learning rates which we identify in this work. An important exception is Li et al. (2017), who applied backward error analysis to identify a modified SDE which holds when the learning rate is finite. However, this work still relies on the assumption that minibatches are sampled randomly. It also focused on the convergence rate, and did not discuss the performance of SGD on the test set.

Main Result. We now introduce our main result. We define the cost function over parameters ω as C(ω) = (1/N) ∑_{j=1}^{N} C_j(ω), which is the mean of the per-example costs C_j(ω), where N denotes the training set size. Gradient flow follows the ODE ω̇ = −∇C(ω), while gradient descent computes discrete updates ω_{i+1} = ω_i − ε∇C(ω_i), where ε is the learning rate. For simplicity, we assume that the batch size B perfectly splits the training set such that N%B = 0, where % denotes the modulo operation, and for convenience we define the number of batches per epoch m = N/B. We can therefore re-write the cost function as a sum over minibatches, C(ω) = (1/m) ∑_{k=0}^{m−1} Ĉ_k(ω), where the minibatch cost Ĉ_k(ω) = (1/B) ∑_{j=kB+1}^{kB+B} C_j(ω). In order to guarantee that we sample each example precisely once per epoch, we define SGD by the discrete update ω_{i+1} = ω_i − ε∇Ĉ_{i%m}(ω_i). Informally, our main result is as follows. After one epoch, the mean iterate of SGD with a small but finite learning rate ε, averaged over all possible shuffles of the batch indices, stays close to the path of gradient flow on a modified loss, ω̇ = −∇C̃_SGD(ω), where the modified loss C̃_SGD is given by:

C̃_SGD(ω) = C(ω) + (ε/4m) ∑_{k=0}^{m−1} ||∇Ĉ_k(ω)||².  (1)

We emphasize that our analysis studies the mean evolution of SGD, not the path of individual trajectories.
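To make Equation 1 concrete, here is a minimal NumPy sketch that evaluates the modified loss on a toy problem; the per-example losses C_j(ω) = ½||ω − x_j||² are our illustrative assumption, not anything from the paper's experiments.

```python
import numpy as np

def modified_loss_sgd(w, X, eps, B):
    """C̃_SGD(w) = C(w) + (eps / 4m) * sum_k ||grad Ĉ_k(w)||²  (Equation 1).

    Per-example losses are C_j(w) = 0.5 * ||w - x_j||², so grad C_j = w - x_j.
    X: (N, d) data, split into m = N // B consecutive minibatches.
    """
    N, d = X.shape
    m = N // B
    full_loss = 0.5 * np.mean(np.sum((w - X) ** 2, axis=1))
    reg = 0.0
    for k in range(m):
        batch = X[k * B:(k + 1) * B]
        g_k = w - batch.mean(axis=0)     # minibatch gradient grad Ĉ_k(w)
        reg += np.sum(g_k ** 2)
    return full_loss + (eps / (4 * m)) * reg

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
print(modified_loss_sgd(np.zeros(4), X, eps=0.1, B=8))
```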
The modified loss C̃_SGD(ω) is composed of the original loss C(ω) and an implicit regularizer C_reg(ω) = (1/4m) ∑_{k=0}^{m−1} ||∇Ĉ_k(ω)||². The scale of this implicit regularization term is proportional to the learning rate ε, and it penalizes the mean squared norm of the gradient evaluated on a batch of B examples. To help us compare the modified losses of GD and SGD, we can expand,

C̃_SGD(ω) = C(ω) + (ε/4)||∇C(ω)||² + (ε/4m) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω) − ∇C(ω)||².  (2)

We arrive at Equation 2 from Equation 1 by noting that ∑_{i=0}^{m−1} (∇Ĉ_i(ω) − ∇C(ω)) = 0. In the limit B → N, we identify the modified loss of gradient descent, C̃_GD(ω) = C(ω) + (ε/4)||∇C(ω)||², which penalizes “sharp” regions where the norm of the full-batch gradient (||∇C(ω)||²) is large. However, as shown by Equation 2, the modified loss of SGD penalizes both sharp regions where the full-batch gradient is large, and also “non-uniform” regions where the norms of the errors in the minibatch gradients (||∇Ĉ(ω) − ∇C(ω)||²) are large (Wu et al., 2018). Although global minima of C(ω) are global minima of C̃_GD(ω), global minima of C(ω) may not be global (or even local) minima of C̃_SGD(ω). Note however that C(ω) and C̃_SGD(ω) do share the same global minima on over-parameterized models which can interpolate the training set (Ma et al., 2018). We verify in our experiments that the implicit regularizer can enhance the test accuracy of models trained with SGD.

Paper structure. In Section 2, we derive our main result (Equation 1), and we confirm empirically that we can close the generalization gap between small and large learning rates by including the implicit regularizer explicitly in the loss function. In Section 3, we confirm that Equation 1 satisfies the linear scaling rule between learning rate and batch size (Goyal et al., 2017). In Section 4, we provide additional experiments which challenge the prevailing view that the generalization benefit of small batch SGD arises from the temperature of an associated SDE (Mandt et al., 2017; Park et al., 2019).

2 A BACKWARD ERROR ANALYSIS OF STOCHASTIC GRADIENT DESCENT

Backward error analysis has great potential to clarify the role of finite learning rates, and to help identify the implicit biases of different optimizers. We therefore give a detailed introduction to the core methodology in Section 2.1, before deriving our main result in Section 2.2. In Section 2.3, we confirm empirically that the implicit regularizer can enhance the test accuracy of deep networks.

2.1 AN INTRODUCTION TO BACKWARD ERROR ANALYSIS

In numerical analysis, we often wish to integrate ODEs of the form ω̇ = f(ω). This system usually cannot be solved analytically, forcing us to simulate the continuous flow with discrete updates, like the Euler step ω(t+ε) ≈ ω(t) + εf(ω(t)). However, discrete updates will introduce approximation error when the step size is finite. In order to study the bias introduced by this approximation error, we assume the learning rate is relatively small, and introduce a modified flow ω̇ = f̃(ω), where,

f̃(ω) = f(ω) + εf₁(ω) + ε²f₂(ω) + ... .  (3)

The modified flow of f̃(ω) is equal to the original flow of f(ω) when ε → 0, but it differs from the original flow if ε is finite. The goal of backward error analysis is to choose the correction terms f_i(ω) such that the iterates obtained from discrete updates of the original flow with small finite step sizes lie on the path taken by the continuous solution to the modified flow with vanishing step sizes.
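As a preview of the gradient descent result derived below (Equations 9-11), the following NumPy sketch checks this idea numerically on a one-dimensional quartic loss of our own choosing, C(ω) = ω⁴/4: a single GD step should track gradient flow on the modified loss C̃_GD = C + (ε/4)||∇C||² to O(ε³), but gradient flow on the original loss only to O(ε²).

```python
import numpy as np

def flow(grad_fn, w0, total_time, n_steps=100_000):
    """Integrate gradient flow dω/dt = -grad_fn(ω) with tiny Euler steps."""
    w, dt = w0, total_time / n_steps
    for _ in range(n_steps):
        w -= dt * grad_fn(w)
    return w

grad_C = lambda w: w ** 3                          # C(ω) = ω⁴/4
w0 = 1.0
for eps in [0.1, 0.05, 0.025]:
    gd = w0 - eps * grad_C(w0)                     # one discrete GD step
    # ∇C̃_GD = ∇C + (ε/4)∇||∇C||², and here ∇||∇C||² = 2(ω³)(3ω²) = 6ω⁵
    grad_mod = lambda w: grad_C(w) + (eps / 4) * 6 * w ** 5
    err_orig = abs(gd - flow(grad_C, w0, eps))     # shrinks like ε²
    err_mod = abs(gd - flow(grad_mod, w0, eps))    # shrinks like ε³
    print(f"eps={eps}: vs flow(C) {err_orig:.2e}, vs flow(C_mod) {err_mod:.2e}")
```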
The standard derivation of backward error analysis begins by taking a Taylor expansion in ε of the solution to the modified flow ω(t+ε). We obtain the derivatives of ω(t+ε) recursively using the modified flow equation ω̇ = f̃(ω) (see Hairer et al. (2006)), and we identify the correction terms f_i(ω) by ensuring this Taylor expansion matches the discrete update (e.g., ω_{t+1} = ω_t + εf(ω_t)) for all powers of ε. However, this approach does not clarify why these correction terms arise. To build our intuition for the origin of the correction terms, and to clarify how we might apply this analysis to SGD, we take a different approach. First, we will identify the path taken by the continuous modified flow by considering the combined influence of an infinite number of discrete steps in the limit of vanishing learning rates, and then we will compare this continuous path to either a single step of GD or a single epoch of SGD. Imagine taking n Euler steps on the modified flow f̃(ω) with step size α,

ω_{t+n} = ω_t + αf̃(ω_t) + αf̃(ω_{t+1}) + αf̃(ω_{t+2}) + ...  (4)
        = ω_t + αf̃(ω_t) + αf̃(ω_t + αf̃(ω_t)) + αf̃(ω_t + αf̃(ω_t) + αf̃(ω_t + αf̃(ω_t))) + ...  (5)
        = ω_t + nαf̃(ω_t) + (n/2)(n−1)α²∇f̃(ω_t)f̃(ω_t) + O(n³α³).  (6)

We arrived at Equation 6 by taking the Taylor expansion of f̃ and then counting the number of terms of type ∇f̃(ω_t)f̃(ω_t) using the formula for an arithmetic series. Note that we assume ∇f̃ exists. Next, to ensure ω_{t+n} in Equation 6 coincides with the solution ω(t+ε) of the continuous modified flow ω̇ = f̃(ω) for small but finite ε, we let the number of steps n → ∞ while setting α = ε/n,

ω(t+ε) = ω(t) + εf̃(ω(t)) + (ε²/2)∇f̃(ω(t))f̃(ω(t)) + O(ε³)  (7)
       = ω(t) + εf(ω(t)) + ε²(f₁(ω(t)) + (1/2)∇f(ω(t))f(ω(t))) + O(ε³).  (8)

We have replaced f̃(ω) with its definition from Equation 3. As we will see below, Equation 8 is the key component of backward error analysis, which describes the path taken when integrating the continuous modified flow f̃(ω) with vanishing learning rates over a discrete time step of length ε. Notice that we have assumed that the Taylor expansion in Equation 8 converges, while the higher order terms at O(ε³) will contain higher order derivatives of the original flow f(ω). Backward error analysis therefore implicitly assumes that f(ω) is an analytic function in the vicinity of the current parameters ω. We refer the reader to Hairer et al. (2006) for a detailed introduction.

Gradient descent: As a simple example, we will now derive the first order correction f₁(ω) of the modified flow for GD. First, we recall that the discrete updates obey ω_{i+1} = ω_i − ε∇C(ω_i), and we therefore fix f(ω) = −∇C(ω). In order to ensure that the continuous modified flow coincides with this discrete update, we need all terms at O(ε²) and above in Equation 8 to vanish. At order ε², this implies that f₁(ω) + (1/2)∇∇C(ω)∇C(ω) = 0, which yields the first order correction,

f₁(ω) = −(1/2)∇∇C(ω)∇C(ω) = −(1/4)∇(||∇C(ω)||²).  (9)

We conclude that, if the learning rate is sufficiently small such that we can neglect higher order terms in Equation 3, then the discrete GD iterates lie on the path of the following ODE,

ω̇ = −∇C(ω) − (ε/4)∇(||∇C(ω)||²)  (10)
  = −∇C̃_GD(ω).  (11)

Equation 11 corresponds to gradient flow on the modified loss, C̃_GD(ω) = C(ω) + (ε/4)||∇C(ω)||².

2.2 BACKWARD ERROR ANALYSIS AND STOCHASTIC GRADIENT DESCENT

We now derive our main result (Equation 1). As described in the introduction, we assume N%B = 0, where N is the training set size, B is the batch size, and % denotes the modulo operation.
The number of updates per epoch is m = N/B, and the minibatch costs are Ĉ_k(ω) = (1/B) ∑_{j=kB+1}^{kB+B} C_j(ω). SGD with a constant learning rate obeys ω_{i+1} = ω_i − ε∇Ĉ_{i%m}(ω_i). It is standard practice to shuffle the dataset once per epoch, but we omit this step here and instead perform our analysis over a single epoch. In Equation 6 we derived the influence of n Euler steps on the flow f̃(ω) with step size α. Following a similar approach, we now derive the influence of m SGD updates with learning rate ε,

ω_m = ω_0 − ε∇Ĉ_0(ω_0) − ε∇Ĉ_1(ω_1) − ε∇Ĉ_2(ω_2) − ...  (12)
    = ω_0 − ε ∑_{j=0}^{m−1} ∇Ĉ_j(ω_0) + ε² ∑_{j=0}^{m−1} ∑_{k<j} ∇∇Ĉ_j(ω_0)∇Ĉ_k(ω_0) + O(m³ε³)  (13)
    = ω_0 − mε∇C(ω_0) + ε²ξ(ω_0) + O(m³ε³).  (14)

The error in Equation 14 is O(m³ε³) since there are O(m³) terms in the Taylor expansion proportional to ε³. Notice that a single epoch of SGD is equivalent to a single GD update with learning rate mε, up to first order in ε. Remarkably, this implies that when the learning rate is sufficiently small, there is no noise in the iterates of SGD after completing one epoch. For clarity, this observation arises because we require that each training example is sampled once per epoch. However the second order correction ξ(ω) = ∑_{j=0}^{m−1} ∑_{k<j} ∇∇Ĉ_j(ω)∇Ĉ_k(ω) does not appear in the GD update, and it is a random variable which depends on the order of the mini-batches. In order to identify the bias introduced by SGD, we will evaluate the mean correction E(ξ), where we take the expectation across all possible sequences of the (non-overlapping) mini-batches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. Note that we hold the composition of the batches fixed, averaging only over their order. We conclude that,

E(ξ(ω)) = (1/2) ∑_{j=0}^{m−1} ∑_{k≠j} ∇∇Ĉ_j(ω)∇Ĉ_k(ω)  (15)
        = (m²/2)∇∇C(ω)∇C(ω) − (1/2) ∑_{j=0}^{m−1} ∇∇Ĉ_j(ω)∇Ĉ_j(ω)  (16)
        = (m²/4)∇(||∇C(ω)||² − (1/m²) ∑_{j=0}^{m−1} ||∇Ĉ_j(ω)||²).  (17)

For clarity, in Equation 15 we exploit the fact that every sequence of batches has a corresponding sequence in reverse order. Combining Equations 14 and 17, we conclude that after one epoch,

E(ω_m) = ω_0 − mε∇C(ω_0) + (m²ε²/4)∇(||∇C(ω_0)||² − (1/m²) ∑_{j=0}^{m−1} ||∇Ĉ_j(ω_0)||²) + O(m³ε³).  (18)

Having identified the expected value of the SGD iterate after one epoch, E(ω_m) (for small but finite learning rates), we can now use this expression to identify the corresponding modified flow. First, we set f(ω) = −∇C(ω), t = 0, ω(0) = ω_0, and let ε → mε in Equations 3 and 8 to obtain,

ω(mε) = ω_0 − mε∇C(ω_0) + m²ε²(f₁(ω_0) + (1/4)∇||∇C(ω_0)||²) + O(m³ε³).  (19)

Next, we equate Equations 18 and 19 by setting ω(mε) = E(ω_m). We immediately identify the first order correction to the modified flow, f₁(ω) = −(1/4m²)∇ ∑_{j=0}^{m−1} ||∇Ĉ_j(ω)||². We therefore conclude that, after one epoch, the expected SGD iterate E(ω_m) = ω(mε) + O(m³ε³), where ω(0) = ω_0 and ω̇ = −∇C(ω) + mεf₁(ω). Simplifying, we conclude ω̇ = −∇C̃_SGD(ω), where,

C̃_SGD(ω) = C(ω) + (ε/4m) ∑_{k=0}^{m−1} ||∇Ĉ_k(ω)||².  (20)

Equation 20 is identical to Equation 1, and this completes the proof of our main result. We emphasize that C̃_SGD assumes a fixed set of minibatches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. We will evaluate the expected modified loss after shuffling the dataset and sampling a new set of minibatches in Section 3.

REMARKS ON THE ANALYSIS

The phrase “for small finite learning rates” has a precise meaning in our analysis. It implies ε is large enough that terms of O(m²ε²) may be significant, but small enough that terms of O(m³ε³) are negligible. Our analysis is unusual, because we consider the mean evolution of the SGD iterates but ignore the variance of individual training runs.
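The mean-iterate prediction of Equations 15-18 can be checked numerically on a toy problem. For a small number of batches we can enumerate every batch ordering, run one exact SGD epoch for each, and compare the average final iterate against Equation 18. A minimal NumPy sketch, with per-example losses C_j(ω) = ½||ω − x_j||² as our illustrative choice (so ∇Ĉ_k(ω) = ω − μ_k, where μ_k is the batch mean):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, B, d, eps = 4, 2, 3, 0.01
X = rng.normal(size=(m * B, d))
mu_k = X.reshape(m, B, d).mean(axis=1)       # minibatch means
mu = X.mean(axis=0)                          # full dataset mean
w0 = rng.normal(size=d)

# Average the exact SGD epoch over all m! batch orderings.
finals = []
for order in itertools.permutations(range(m)):
    w = w0.copy()
    for k in order:
        w -= eps * (w - mu_k[k])             # grad Ĉ_k(w) = w - mu_k
    finals.append(w)
mean_iterate = np.mean(finals, axis=0)

# Equation 18, using grad C = w - mu and grad ||grad Ĉ_k||² = 2(w - mu_k),
# where sum_k (w0 - mu_k) = m (w0 - mu) for our quadratic losses.
pred = (w0 - m * eps * (w0 - mu)
        + (eps ** 2 / 4) * (2 * m ** 2 * (w0 - mu) - 2 * m * (w0 - mu)))
print(np.abs(mean_iterate - pred).max())     # residual is O(m³ε³): tiny
```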
Previous analyses of SGD have usually focused on the variance of the iterates in the limit of vanishing learning rates (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018). However, these works assume that each minibatch is randomly sampled from the full dataset. Under this assumption, the variance in the iterates arises at O(ε), while the bias arises at O(ε²). By contrast, in our analysis each example is sampled once per epoch, and both the variance and the bias arise at O(ε²) (for simplicity, we assume m = N/B is constant). We therefore anticipate that the variance will play a less important role than is commonly supposed. Furthermore, we can construct a specific sequence of minibatches for which the variance at O(m²ε²) vanishes, such that the evolution of a specific training run will coincide exactly with gradient flow on the modified loss of Equation 1 for small finite learning rates. To achieve this, we perform two training epochs with the sequence of minibatches (Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}, Ĉ_{m−1}, ..., Ĉ_1, Ĉ_0) (i.e., the second epoch iterates through the same set of minibatches as the first, but in the opposite order). If one inspects Equations 13 to 15, one will see that reversing the second epoch has the same effect as taking the mean across all possible sequences of minibatches (it replaces the ∑_{k<j} by a ∑_{k≠j}).

The key limitation of our analysis is that we assume mε = Nε/B is small, in order to neglect terms at O(m³ε³). This is an extreme approximation, since we typically expect that N/B is large. Therefore, while our work identifies the first order correction to the bias arising from finite learning rates, higher order terms in the modified flow may also play an important role at practical learning rates. We note however that previous theoretical analyses have made even more extreme assumptions. For instance, most prior work studying SGD in the small learning rate limit neglects all terms at O(ε²) (for an exception, see Li et al. (2017)). Furthermore, as we show in Section 3, the learning rate often scales proportional to the batch size, such that ε/B is constant (Goyal et al., 2017; McCandlish et al., 2018). Therefore the accuracy of our approximations does not necessarily degrade as the batch size falls, but higher order terms may play an increasingly important role as the dataset size increases. Our experimental results in Section 2.3 and Section 4 suggest that our analysis can explain most of the generalization benefit of finite learning rate SGD for Wide-ResNets trained on CIFAR-10.

We note that to achieve the highest test accuracies, practitioners usually decay the learning rate during training. Under this scheme, the modified loss would change as training proceeds. However, it is widely thought that the generalization benefit of SGD arises from the use of large learning rates early in training (Smith et al., 2018; Li et al., 2019; Jastrzebski et al., 2020; Lewkowycz et al., 2020), and popular schedules hold the learning rate constant or approximately constant for several epochs. Finally, we emphasize that our primary goal in this work is to identify the influence of finite learning rates on training. The implicit regularization term may not be beneficial in all models and datasets.
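Returning to the variance-cancelling schedule described above, it is easy to express in code. A minimal sketch on the toy quadratic setup from the previous sketch (our illustrative assumption, not the paper's experiment): running each shuffled epoch forward and then in reverse makes the final iterate nearly independent of the shuffle.

```python
import numpy as np

def palindromic_epochs(w0, mu_k, eps, rng):
    """One forward epoch followed by the same minibatches in reverse order."""
    order = rng.permutation(len(mu_k))
    w = w0.copy()
    for k in list(order) + list(order[::-1]):
        w -= eps * (w - mu_k[k])       # grad Ĉ_k(w) = w - mu_k for our quadratic
    return w

rng = np.random.default_rng(0)
mu_k = rng.normal(size=(8, 3))         # minibatch gradient means
w0 = rng.normal(size=3)
runs = [palindromic_epochs(w0, mu_k, 0.01, np.random.default_rng(s)) for s in range(5)]
print(np.ptp(runs, axis=0))            # spread across shuffles is O(eps³)
```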
2.3 AN EMPIRICAL EVALUATION OF THE MODIFIED LOSS

In order to confirm that the modified loss C̃_SGD(ω) can help explain why large learning rates enhance generalization, we now verify empirically that the implicit regularizer inherent in constant learning rate SGD, C_reg(ω) = (1/4m) ∑_{k=0}^{m−1} ||∇Ĉ_k(ω)||², can enhance the test accuracy of deep networks. To this end, we train the same model with two different (explicit) loss functions. The first loss function C(ω) represents the original loss, while the second, C_mod(ω) = C(ω) + λC_reg(ω), is obtained from the modified loss C̃_SGD(ω) by replacing the learning rate with an explicit regularization coefficient λ. Notice that C_mod(ω) = (1/m) ∑_{k=0}^{m−1} (Ĉ_k(ω) + (λ/4)||∇Ĉ_k(ω)||²), which ensures that it is straightforward to minimize the modified loss C_mod(ω) with minibatch gradients. Since the implicit regularization term C_reg(ω) is expensive to differentiate (typically a 5-10x overhead), we consider a 10-1 Wide-ResNet model (Zagoruyko & Komodakis, 2016) for classification on CIFAR-10. To ensure close agreement with our theoretical analysis, we train without batch normalization using SkipInit initialization (De & Smith, 2020). We train for 6400 epochs at batch size 32 without learning rate decay using SGD without Momentum. We use standard data augmentation including crops and random flips, and we use weight decay with L2 coefficient 5 × 10^{−4}. We emphasize that, since we train using a finite (though very large) compute budget, the final networks may not have fully converged. This is particularly relevant when training with small learning rates. Note that we provide additional experiments on Fashion-MNIST (Xiao et al., 2017) in Appendix D.

In Figure 1(a), we compare two training runs, one minimizing the modified loss C_mod(ω) with λ = 2^{−6}, and one minimizing the original loss C(ω). For both runs we use a small constant learning rate ε = 2^{−9}. As expected, the regularized training run achieves significantly higher test accuracies late in training. This confirms that the implicit regularizer, which arises as a consequence of using SGD with finite learning rates, can also enhance the test accuracy if it is included explicitly in the loss. In Figure 1(b), we provide the test accuracy for a range of regularization strengths λ (orange line). We provide the mean test accuracy of the best 5 out of 7 training runs at each regularization strength, and for each run we take the highest test accuracy achieved during the entire training run. We use a fixed learning rate ε = 2^{−9} for all λ. For comparison, we also provide the test accuracy achieved with the original loss C(ω) for a range of learning rates (blue line). In both cases, the test accuracy rises initially, before falling for large regularization strengths or large learning rates. Furthermore, in this network the optimal regularization strength on the modified loss, λ_opt = 2^{−6}, is equal to the optimal learning rate on the original loss, ε_opt = 2^{−6}. Meanwhile when λ → 0 the performance of the modified loss approaches the performance of the original loss at ε = 2^{−9} (dotted green line). We provide the corresponding training accuracies in Appendix C. Finally, in Figure 1(c), we provide the values of the implicit regularizer C_reg(ω) at the end of training. As predicted by our analysis, training with larger learning rates reduces the value of the implicit regularization term.
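For reference, a minimal PyTorch sketch of one training step on C_mod, i.e. Ĉ_k(ω) + (λ/4)||∇Ĉ_k(ω)||², via double backpropagation; the model, data and hyperparameters below are placeholders, and this is our reading of the procedure rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def regularized_step(model, optimizer, x, y, lam):
    """One SGD step on Ĉ_k(w) + (lam / 4) * ||grad Ĉ_k(w)||²."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)                 # minibatch loss Ĉ_k
    params = [p for p in model.parameters()]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    penalty = sum(g.pow(2).sum() for g in grads)        # ||grad Ĉ_k||²
    (loss + 0.25 * lam * penalty).backward()            # double backprop
    optimizer.step()

# Example with a tiny placeholder model and random data:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32 * 3, 10))
opt = torch.optim.SGD(model.parameters(), lr=2 ** -9)
regularized_step(model, opt, torch.randn(32, 3, 32, 32),
                 torch.randint(0, 10, (32,)), lam=2 ** -6)
```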
In Figure 2, we take the same 10-1 Wide-ResNet model and provide the mean training and test accuracies achieved at a range of learning rates for two regularization coefficients (following the experimental protocol above). In Figure 2(a), we train on the original loss C(ω) (λ = 0), while in Figure 2(b), we train on the modified loss C_mod(ω) with regularization coefficient λ = 2^{−6}. From Figure 2(a), when λ = 0 there is a clear generalization benefit to large learning rates, as the learning rate that maximizes test accuracy (2^{−6}) is 16 times larger than the learning rate that maximizes training accuracy (2^{−10}). However, in Figure 2(b) with λ = 2^{−6}, the learning rates that maximize the test and training accuracies are equal (2^{−8}). This suggests that when we include the implicit regularizer explicitly in the loss, the generalization benefit of large learning rates is diminished.

3 IMPLICIT REGULARIZATION AND THE BATCH SIZE

In Section 2.2, we derived the modified loss by considering the expected SGD iterate after one epoch. We held the composition of the batches fixed, averaging only over the order in which the batches are seen. This choice helped make clear how to explicitly include the implicit regularizer in the loss function in Section 2.3. However, in order to clarify how the implicit regularizer depends on the batch size, we now evaluate the expected modified loss after randomly shuffling the dataset and sampling a new set of m non-overlapping minibatches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. Since the minibatch losses Ĉ_i(ω) are all identically distributed by symmetry, we recall Equation 2 and conclude that,

E(C̃_SGD(ω)) = C(ω) + (ε/4)||∇C(ω)||² + (ε/4)E(||∇Ĉ(ω) − ∇C(ω)||²),  (21)

where Ĉ(ω) denotes a batch of B non-overlapping examples, drawn randomly from the full dataset. To simplify Equation 21, we prove in Appendix A that E(||∇Ĉ(ω) − ∇C(ω)||²) = ((N−B)/(N−1)) (Γ(ω)/B), where Γ(ω) = (1/N) ∑_{i=1}^{N} ||∇C_i(ω) − ∇C(ω)||². We therefore obtain,

E(C̃_SGD(ω)) = C(ω) + (ε/4)||∇C(ω)||² + ((N−B)/(N−1)) (ε/4B) Γ(ω).  (22)

Note that Γ(ω) is the trace of the empirical covariance matrix of the per-example gradients. We have not assumed that the minibatch gradients are Gaussian distributed, however if the per-example gradients are heavy tailed (Simsekli et al., 2019) then Γ(ω) may diverge, in which case the expected value of the modified loss is ill-defined. Equation 22 shows that the implicit regularization term of SGD has two contributions. The first term is proportional to the learning rate ε, and it penalizes the norm of the full batch gradient. The second term is proportional to the ratio of the learning rate to the batch size, ε/B (assuming N ≫ B), and it penalizes the trace of the covariance matrix. To interpret this result, we assume that the minibatch gradients are diverse, such that (Γ(ω)/B) ≫ ||∇C(ω)||². This assumption guarantees that increasing the batch size reduces the error in the gradient estimate. In this limit, the second term above will dominate, and therefore different batch sizes will experience the same implicit regularization so long as the ratio of the learning rate to the batch size is constant. To verify this claim, in Figure 3 we plot the mean test accuracies achieved on a 10-1 Wide-ResNet, trained on CIFAR-10 with a constant learning rate, for a range of learning rates ε, regularization coefficients λ and batch sizes B.
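(As an aside, the Appendix A identity used in Equation 22, E||∇Ĉ − ∇C||² = ((N−B)/(N−1)) Γ/B, is easy to check by Monte Carlo with arbitrary synthetic per-example gradients; the Student-t gradients below are our own choice.)

```python
import numpy as np

rng = np.random.default_rng(0)
N, B, d = 60, 12, 5
G = rng.standard_t(df=8, size=(N, d))               # per-example gradients
g_full = G.mean(axis=0)
Gamma = np.mean(np.sum((G - g_full) ** 2, axis=1))  # trace of gradient covariance

# Monte Carlo over random batches of B non-overlapping examples.
errs = []
for _ in range(100_000):
    idx = rng.choice(N, size=B, replace=False)
    errs.append(np.sum((G[idx].mean(axis=0) - g_full) ** 2))
print(np.mean(errs), (N - B) / (N - 1) * Gamma / B)  # these should agree closely
```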
As expected, in Figure 3(a), training on the original loss C(ω) for 6400 epochs, we see that different batch sizes achieve similar test accuracies so long as the ratio ε/B is constant and the batch size is not too large. We note that this linear scaling rule is well known and has been observed in prior work (Goyal et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Zhang et al., 2019). To confirm that this behaviour is consistent with the modified loss, in Figure 3(b) we fix the learning rate ε = 2^{−9} and train on C_mod(ω) at a range of regularization strengths λ for 10 million steps. As expected, different batch sizes achieve similar test accuracy so long as the ratio λ/B is constant. We note that we expect this phenomenon to break down for very large batch sizes, however we were not able to run experiments in this limit due to computational constraints. For very large batch sizes, the first implicit regularization term in Equation 22 dominates, the linear scaling rule breaks down, and the bias of SGD is similar to the bias of GD identified by Barrett & Dherin (2021). We expect the optimal learning rate to be independent of the batch size in this limit, as observed by McCandlish et al. (2018) and Smith et al. (2020). Convergence bounds also predict a transition between a small batch regime where the optimal learning rate is proportional to B and a large batch regime where the optimal learning rate is constant (Ma et al., 2018; Zhang et al., 2019). However, these analyses identify the learning rate which minimizes the training loss. Our analysis complements these claims by explaining why similar conclusions hold when maximizing test accuracy.

4 FINITE LEARNING RATES AND STOCHASTIC DIFFERENTIAL EQUATIONS

In the previous two sections, we argued that the use of finite learning rates and small batch sizes introduces implicit regularization, which can enhance the test accuracy of deep networks. We analyzed this effect using backward error analysis (Hairer et al., 2006; Li et al., 2017; Barrett & Dherin, 2021), but many previous papers have argued that this effect can be understood by interpreting small batch SGD as the discretization of an SDE (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Park et al., 2019). In this section, we compare this popular perspective with our main results from Sections 2 and 3. To briefly recap, in the SDE analogy a single gradient update is given by ω_{i+1} = ω_i − ε∇Ĉ(ω_i), where Ĉ denotes a random batch of B non-overlapping training examples. Notice that in the SDE analogy, since examples are drawn randomly from the full dataset, there is no guarantee that each training example is sampled once per epoch. Assuming N ≫ B ≫ 1 and that the gradients are not heavy tailed, the central limit theorem is applied to model the noise in an update by a Gaussian noise source ξ whose covariance is inversely proportional to the batch size:

ω_{i+1} = ω_i − ε(∇C(ω_i) + ξ_i/√B) = ω_i − ε∇C(ω_i) + √(εT) ξ_i.  (23)

This assumes E(ξ_i) = 0 and E(ξ_i ξ_j^T) = F(ω)δ_{ij}, where F(ω) is the covariance matrix of the per-example gradients, and we define the “temperature” T = ε/B. The SDE analogy notes that Equation 23 is identical to the Euler discretization of an SDE with step size ε and temperature T (Gardiner et al., 1985). Therefore one might expect the SGD iterates to remain close to this underlying SDE in the limit of small learning rates (ε → 0).
In this limit, the temperature defines the influence of mini-batching on the dynamics, and it is therefore often assumed that the temperature also governs the generalization benefit of SGD (Smith & Le, 2018; Jastrzębski et al., 2018; Park et al., 2019). However, this conclusion from the SDE analogy is inconsistent with our analysis in Section 2. To see this, note that in Section 2 we assumed that each training example is sampled once per epoch, as recommended by practitioners (Bottou, 2012), and showed that under this assumption there is no noise in the dynamics of SGD up to first order in ε after one epoch of training. The SDE analogy therefore relies on the assumption that minibatches are sampled randomly from the full dataset. Furthermore, SGD only converges to the underlying SDE when the learning rate ε → 0, but in this limit the temperature T → 0 and SGD converges to gradient flow (Yaida, 2019). We must use a finite learning rate to preserve a finite temperature, but at any finite learning rate the distributions of the SGD iterates and the underlying SDE may differ. We now provide intriguing empirical evidence to support our contention that the generalization benefit of SGD arises from finite learning rates, not the temperature of an associated stochastic process. First, we introduce a modified SGD update rule:

n-step SGD: Apply n gradient descent updates sequentially on the same minibatch with bare learning rate α, effective learning rate ε = nα and batch size B. Sample the next minibatch and repeat.

To analyze n-step SGD, we consider the combined influence of n updates on the same minibatch:

ω_{i+1} = ω_i − α∇Ĉ(ω_i) − α∇Ĉ(ω_i − α∇Ĉ(ω_i)) + ...  (24)
        = ω_i − nα∇Ĉ(ω_i) + O(n²α²)  (25)
        = ω_i − ε∇C(ω_i) + √(εT) ξ_i + O(ε²).  (26)

Equations 23 and 26 are identical up to first order in ε, but they differ at O(ε²) and above. Therefore, if minibatches are randomly sampled from the full dataset, then the dynamics of standard SGD and n-step SGD should remain close to the same underlying SDE in the limit ε → 0, but their dynamics will differ when the learning rate is finite. We conclude that if the dynamics of SGD is close to the continuum limit of the associated SDE, then standard SGD and n-step SGD ought to achieve similar test accuracies after the same number of training epochs. However if, as we argued in Section 2, the generalization benefit of SGD arises from finite learning rate corrections at O(ε²) and above, then we should expect the performance of standard SGD and n-step SGD to differ. For completeness, we provide a backward error analysis of n-step SGD in Appendix B. In line with Section 2 (and best practice), we assume each training example is sampled once per epoch. We find that after one epoch, the expected n-step SGD iterate E(ω_m) = ω(mε) + O(m³ε³), where ω(0) = ω_0, ω̇ = −∇C̃_nSGD(ω) and C̃_nSGD(ω) = C(ω) + (ε/4mn) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω)||². The scale of the implicit regularizer is proportional to α = ε/n, which implies that the implicit regularization is suppressed as n increases if we hold ε constant. As expected, we recover Equation 1 when n = 1. In Figure 4(a), we plot the performance of n-step SGD at a range of bare learning rates α, when training a 16-4 Wide-ResNet on CIFAR-10 for 400 epochs using SkipInit (De & Smith, 2020) at batch size 32. Each example is sampled once per epoch.
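For concreteness, a sketch of the n-step SGD update rule in NumPy, again on the toy quadratic per-example losses from our earlier sketches (an assumption; the paper's experiments use a Wide-ResNet on CIFAR-10):

```python
import numpy as np

def n_step_sgd_epoch(w, mu_k, alpha, n):
    """n-step SGD: n updates per minibatch with bare learning rate alpha.

    The effective learning rate is eps = n * alpha. mu_k holds minibatch
    means, so grad Ĉ_k(w) = w - mu_k for C_j(w) = 0.5 ||w - x_j||².
    """
    for mu in mu_k:                     # sample the next minibatch
        for _ in range(n):              # repeat n updates on the same batch
            w = w - alpha * (w - mu)
    return w

rng = np.random.default_rng(0)
mu_k = rng.normal(size=(10, 4))
print(n_step_sgd_epoch(np.zeros(4), mu_k, alpha=0.05, n=4))   # eps = 0.2
```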
We introduce a learning rate decay schedule, whereby we hold the learning rate constant for the first half of training, before decaying the learning rate by a factor of 2 every remaining tenth of training, and we provide the mean test accuracy of the best 5 out of 7 training runs for each value of α. The optimal test accuracy drops from 93.5% when n = 1 (standard SGD) to 88.8% when n = 16. This occurs even though all values of n perform the same number of training epochs, indicating that 16-step SGD performed 16 times more gradient updates. These results suggest that, at least for this model and dataset, the generalization benefit of SGD is not controlled by the temperature of the associated SDE, but instead arises from the implicit regularization associated with finite learning rates. When we increase n we reduce the largest stable bare learning rate α, and this suppresses the implicit regularization benefit, which reduces the test accuracy. We also verify in Figure 4(b) that similar conclusions arise if we hold the number of parameter updates fixed (such that the number of training epochs is inversely proportional to n). Smaller values of n are stable at larger bare learning rates and achieve higher test accuracies. Finally, we confirm in Figure 4(c) that the test accuracy degrades as n increases even if one tunes both the learning rate and the epoch budget independently for each value of n, thus demonstrating that n-step SGD consistently achieves lower test accuracies as n increases. Note that we provide additional experiments on Fashion-MNIST (Xiao et al., 2017) in Appendix D.

5 DISCUSSION

Many authors have observed that large learning rates (Li et al., 2019; Lewkowycz et al., 2020) and small batch sizes (Keskar et al., 2017; Smith et al., 2020) can enhance generalization. Most theoretical work has sought to explain this by observing that increasing the learning rate, or reducing the batch size, increases the variance of the SGD iterates (Smith & Le, 2018; Jastrzębski et al., 2018; Chaudhari & Soatto, 2018). We take a different approach, and note that when the learning rate is finite, the SGD iterates are also biased (Roberts, 2018). Backward error analysis (Hairer et al., 2006; Li et al., 2017; Barrett & Dherin, 2021) provides a powerful tool that computes how this bias accumulates over multiple parameter updates. Although this work focused on GD and SGD, we anticipate that backward error analysis could also be used to clarify the role of finite learning rates in adaptive optimizers like Adam (Kingma & Ba, 2015) or Natural Gradient Descent (Amari, 1998). We note however that backward error analysis assumes that the learning rate is small (though finite). It therefore does not capture the chaotic or oscillatory dynamics which arise when the learning rate is close to instability. At these very large learning rates the modified loss, which is defined as a Taylor series in powers of the learning rate, does not converge. Lewkowycz et al. (2020) recently argued that the test accuracies of wide networks trained with full batch gradient descent on quadratic losses are maximized for large learning rates close to divergence. In this “catapult” regime, the GD iterates oscillate along high curvature directions and the loss may increase early in training. It remains an open question to establish whether backward error analysis fully describes the generalization benefit of small batch SGD, or if these chaotic or oscillatory effects also play a role in some networks.
ACKNOWLEDGMENTS

We would like to thank Jascha Sohl-Dickstein, Razvan Pascanu, Alex Botev, Yee Whye Teh and the anonymous reviewers for helpful discussions and feedback on earlier versions of this manuscript.

A THE EXPECTED NORM OF A MINIBATCH GRADIENT

To keep the notation clean, we define X_i = (∇C_i(ω) − ∇C(ω)). We also recall for clarity that the expectation E(...) is taken over all possible random shuffles of the indices i. Therefore,

E(||∇Ĉ(ω) − ∇C(ω)||²) = (1/B²) E(∑_{i=1}^{B} ∑_{j=1}^{B} X_i · X_j)  (27)
  = (B/B²) E(X_i · X_i) + (B(B−1)/B²) E(X_i · X_{j≠i})  (28)
  = (1/NB) ∑_{i=1}^{N} X_i · X_i + ((B−1)/B)(1/(N(N−1))) ∑_{i=1}^{N} ∑_{j≠i} X_i · X_j  (29)
  = (1/NB) ∑_{i=1}^{N} X_i · X_i + ((B−1)/(BN(N−1))) ∑_{i=1}^{N} ∑_{j=1}^{N} X_i · X_j (1 − δ_{ij}).

Note that we obtain Equation 28 by counting the number of diagonal and off-diagonal terms in the sum in Equation 27. Next, we recall that ∑_{i=1}^{N} X_i = ∑_{i=1}^{N} (∇C_i(ω) − ∇C(ω)) = 0. Therefore,

E(||∇Ĉ(ω) − ∇C(ω)||²) = (1/NB) ∑_{i=1}^{N} X_i · X_i − ((B−1)/(BN(N−1))) ∑_{i=1}^{N} X_i · X_i  (30)
  = (1/NB)(1 − (B−1)/(N−1)) ∑_{i=1}^{N} X_i · X_i  (31)
  = ((N−B)/(N−1)) (Γ(ω)/B),  (32)

where Γ(ω) = (1/N) ∑_{i=1}^{N} X_i · X_i = (1/N) ∑_{i=1}^{N} ||∇C_i(ω) − ∇C(ω)||². We can immediately identify Γ(ω) as the trace of the empirical covariance matrix of the per-example gradients.

B A BACKWARD ERROR ANALYSIS FOR N-STEP SGD

Under n-step SGD, we apply n gradient descent updates on the same minibatch with bare learning rate α and batch size B. After n updates, we sample the next minibatch and repeat. For convenience, we define the effective learning rate ε = nα. After one minibatch (n parameter updates),

ω_{i+1} = ω_i − α∇Ĉ_i(ω_i) − α∇Ĉ_i(ω_i − α∇Ĉ_i(ω_i)) + ...  (33)
        = ω_i − nα∇Ĉ_i(ω_i) + (n/2)(n−1)α²∇∇Ĉ_i(ω_i)∇Ĉ_i(ω_i) + O(n³α³)  (34)
        = ω_i − ε∇Ĉ_i(ω_i) + (1/4)(1 − 1/n)ε²∇(||∇Ĉ_i(ω_i)||²) + O(ε³).  (35)

After one epoch (including terms up to second order in ε),

ω_m = ω_0 − ε∇Ĉ_0(ω_0) + (1/4)(1 − 1/n)ε²∇(||∇Ĉ_0(ω_0)||²)
      − ε∇Ĉ_1(ω_1) + (1/4)(1 − 1/n)ε²∇(||∇Ĉ_1(ω_1)||²) − ...
      − ε∇Ĉ_{m−1}(ω_{m−1}) + (1/4)(1 − 1/n)ε²∇(||∇Ĉ_{m−1}(ω_{m−1})||²) + O(ε³).  (36)

To simplify this expression, we note that ω_{i+1} = ω_i − ε∇Ĉ_i(ω_i) + O(ε²). We can therefore re-use our earlier analysis from Section 2.2 of the main text (see Equation 13 for comparison) to obtain,

ω_m = ω_0 − mε∇C(ω_0) + ε² ∑_{j=0}^{m−1} ∑_{k<j} ∇∇Ĉ_j(ω_0)∇Ĉ_k(ω_0) + (1/4)(1 − 1/n)ε² ∑_{i=0}^{m−1} ∇(||∇Ĉ_i(ω_0)||²) + O(ε³).  (37)

Taking the expectation over all possible batch orderings (see Equations 15 to 18), we obtain,

E(ω_m) = ω_0 − mε∇C(ω_0) + (m²ε²/4)∇(||∇C(ω_0)||² − (1/(m²n)) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω_0)||²) + O(m³ε³).  (38)

Fixing f(ω) = −∇C(ω) and equating Equation 38 with the continuous modified flow in Equation 19 by setting E(ω_m) = ω(mε), we identify the modified flow ω̇ = −∇C̃_nSGD(ω) + O(m²ε²), where,

C̃_nSGD(ω) = C(ω) + (ε/4mn) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω)||².  (39)

Comparing Equation 39 to Equation 1, we note that the modified losses of SGD and n-step SGD coincide when n = 1. However, for n-step SGD when n > 1, the strength of the implicit regularization term is proportional to the scale of the bare learning rate α = ε/n, not the effective learning rate ε.

C TRAINING LOSSES

In Figure 1(b) of Section 2.3 in the main text, we compared the test accuracies achieved when training on the original loss C(ω) at a range of learning rates ε, to the test accuracies achieved when training on the modified loss C_mod(ω) at fixed learning rate ε = 2^{−9} and a range of regularization coefficients λ. For completeness, in Figure 5, we provide the corresponding training accuracies, as well as the final values of the original loss C(ω).
Remarkably, large learning rates and large regularization coefficients achieve similar training accuracies and similar original losses. This suggests that the implicit regularization term in the modified loss of SGD (C̃_SGD(ω)) may help explain why the training accuracies and losses often exhibit plateaus when training with large learning rates.

D ADDITIONAL RESULTS ON FASHION-MNIST

In this section we provide additional experiments on the Fashion-MNIST dataset (Xiao et al., 2017), which comprises 10 classes, 60000 training examples and 10000 examples in the test set. We consider a simple fully connected MLP which comprises 3 nonlinear layers, each with width 4096 and ReLU activations, and a final linear softmax layer. We apply a simple data pipeline which first applies per-image standardization and then flattens the input to a 784 dimensional vector. We do not apply data augmentation and we train using vanilla SGD without learning rate decay for all experiments. We perform seven training runs for each combination of hyper-parameters and show the mean performance of the best five (to ensure our results are not skewed by a single failed run). We use a batch size B = 16 unless otherwise specified, and we do not use weight decay. We note that this model is highly over-parameterized. Unlike the Wide-ResNet we considered in the main text, we consistently achieve 100% training accuracy if the learning rate is not too large.

In Figure 6(a), we train for 400 epochs, and we compare the effect of tuning the learning rate when training on the original loss to the effect of tuning the explicit regularization strength λ (with ε = 2^{−9}). As observed in the main text, tuning the explicit regularizer has a similar effect on the test accuracy to tuning the learning rate. Surprisingly, the optimal values of ε and λ differ by a factor of 8. However, we note that the optimal learning rate is ε = 2^{−5}, while the explicit regularizer already achieves a very similar test accuracy at λ = 2^{−4} (just a factor of two larger), before reaching a higher maximum test accuracy at λ = 2^{−2}. In Figure 6(b), we train for 400 epochs on the original loss and compare the test accuracies achieved for a range of batch sizes at different learning rates. As observed in the main text, the test accuracy is determined by the ratio of the learning rate to the batch size. Meanwhile in Figure 6(c), we plot the test accuracy achieved after training for 1.5 million steps on the modified loss with learning rate ε = 2^{−9} and regularization coefficient λ. Once again, we find that the test accuracy achieved is primarily determined by the ratio of the regularization coefficient to the batch size, although smaller batch sizes also achieve slightly higher accuracies.

Finally, in Figure 7 we train using n-step SGD (see Section 4) on the original loss at a range of bare learning rates α. In Figure 7(a) we train for 400 epochs, while in Figure 7(b) we train for 6 million updates. We recall that the SDE analogy predicts that the generalization benefit of n-step SGD would be determined by the effective learning rate ε = nα. By contrast, backward error analysis predicts that the generalization benefit for small learning rates would be controlled by the bare learning rate α, but that higher order terms may be larger for larger values of n. We find that the test accuracy in both figures is governed by the bare learning rate α, not the effective learning rate ε = nα, and therefore these results are inconsistent with the predictions from the SDE analysis in prior work.
Note that Figure 7 has a surprising implication. It suggests that, for this model, while there is a largest stable bare learning rate we cannot exceed, we can repeatedly apply updates obtained on the same batch of training examples without suffering a significant degradation in test accuracy. We speculate that this may indicate that the gradients of different examples in this over-parameterized model are close to orthogonal (Sankararaman et al., 2020).
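For reference, a minimal sketch of the fully connected Fashion-MNIST MLP described in this appendix (three ReLU layers of width 4096 and a final linear softmax layer, with per-image standardization); this is our reconstruction from the text, and the framework choice is ours.

```python
import torch

# 784 -> 4096 -> 4096 -> 4096 -> 10, matching the architecture described above.
def make_mlp(width=4096, n_hidden=3, n_in=784, n_out=10):
    layers, d = [], n_in
    for _ in range(n_hidden):
        layers += [torch.nn.Linear(d, width), torch.nn.ReLU()]
        d = width
    layers.append(torch.nn.Linear(d, n_out))   # logits; softmax applied in the loss
    return torch.nn.Sequential(*layers)

def preprocess(images):
    """Per-image standardization, then flatten to 784-dim vectors."""
    x = images.reshape(images.shape[0], -1).float()
    return (x - x.mean(dim=1, keepdim=True)) / x.std(dim=1, keepdim=True).clamp_min(1e-6)

model = make_mlp()
logits = model(preprocess(torch.rand(16, 28, 28)))
```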
1. What is the focus of the paper regarding SGD's implicit regularization?
2. What are the strengths of the proposed approach compared to prior works, particularly in its application in practical scenarios?
3. How does the reviewer assess the clarity and quality of the paper's content, especially in the theoretical analysis and numerical experiments?
4. Are there any differences between the proposed method and previous analyses using SDE? If so, what are they?
5. Can the reviewer think of any potential applications or future research directions related to this work?
Review
Review
This paper analyzes the implicit regularization in SGD with finite learning rates via backward error analysis. The modified flow introduced in this paper better approximates the practical behavior of SGD, as it does not require vanishing learning rates and it allows the use of random shuffling instead of i.i.d. sampling. The numerical experiments validate the existence of the implicit regularization and how it affects the generalization of the model trained by SGD. The difference from SDE analysis is also discussed.
Reason for score: The paper is well organized. Especially, I enjoyed reading Section 2. The tool of backward error analysis and the derivation of the implicit regularization in the SGD flow are introduced clearly and concisely. The analysis is based on random shuffling instead of i.i.d. sampling, which matches the practical use of SGD. The numerical experiments are very convincing. The consistency of SGD with a larger lr and SGD with a smaller lr plus explicit regularization validates the results of the theoretical analysis. The numerical experiments also provide some insights into tuning hyper-parameters such as learning rate and batch size.
ICLR
Title On the Origin of Implicit Regularization in Stochastic Gradient Descent Abstract For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full batch loss function. However moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximizes test accuracy is often larger than the learning rate which minimizes training loss. To interpret this phenomenon we prove that for SGD with random shuffling, the mean SGD iterate also stays close to the path of gradient flow if the learning rate is small and finite, but on a modified loss. This modified loss is composed of the original loss function and an implicit regularizer, which penalizes the norms of the minibatch gradients. Under mild assumptions, when the batch size is small the scale of the implicit regularization term is proportional to the ratio of the learning rate to the batch size. We verify empirically that explicitly including the implicit regularizer in the loss can enhance the test accuracy when the learning rate is small. 1 INTRODUCTION In the limit of vanishing learning rates, stochastic gradient descent with minibatch gradients (SGD) follows the path of gradient flow on the full batch loss function (Yaida, 2019). However in deep networks, SGD often achieves higher test accuracies when the learning rate is moderately large (LeCun et al., 2012; Keskar et al., 2017). This generalization benefit is not explained by convergence rate bounds (Ma et al., 2018; Zhang et al., 2019), because it arises even for large compute budgets for which smaller learning rates often achieve lower training losses (Smith et al., 2020). Although many authors have studied this phenomenon (Jastrzębski et al., 2018; Smith & Le, 2018; Chaudhari & Soatto, 2018; Shallue et al., 2018; Park et al., 2019; Li et al., 2019; Lewkowycz et al., 2020), it remains poorly understood, and is an important open question in the theory of deep learning. In a recent work, Barrett & Dherin (2021) analyzed the influence of finite learning rates on the iterates of gradient descent (GD). Their approach is inspired by backward error analysis, a method for the numerical analysis of ordinary differential equation (ODE) solvers (Hairer et al., 2006). The key insight of backward error analysis is that we can describe the bias introduced when integrating an ODE with finite step sizes by introducing an ancillary modified flow. This modified flow is derived to ensure that discrete iterates of the original ODE lie on the path of the continuous solution to the modified flow. Using this technique, the authors show that if the learning rate is not too large, the discrete iterates of GD lie close to the path of gradient flow on a modified loss C̃GD(ω) = C(ω) + ( /4)||∇C(ω)||2. This modified loss is composed of the original loss C(ω) and an implicit regularizer proportional to the learning rate which penalizes the euclidean norm of the gradient. However these results only hold for full batch GD, while in practice SGD with small or moderately large batch sizes usually achieves higher test accuracies (Keskar et al., 2017; Smith et al., 2020). In this work, we devise an alternative approach to backward error analysis, which accounts for the correlations between minibatches during one epoch of training. 
Using this novel approach, we prove that for small finite learning rates, the mean SGD iterate after one epoch, averaged over all possible sequences of minibatches, lies close to the path of gradient flow on a second modified loss C̃_SGD(ω), which we define in Equation 1. This new modified loss is also composed of the full batch loss function and an implicit regularizer, however the structure of the implicit regularizers for GD and SGD differ, and their modified losses can have different local and global minima. Our analysis therefore helps explain both why finite learning rates can aid generalization, and why SGD can achieve higher test accuracies than GD. We assume that each training example is sampled once per epoch, in line with best practice (Bottou, 2012), and we confirm empirically that explicitly including the implicit regularization term of SGD in the training loss can enhance the test accuracy when the learning rate is small. Furthermore, we prove that if the batch size is small and the gradients are sufficiently diverse, then the expected magnitude of the implicit regularization term of SGD is proportional to the ratio of the learning rate to the batch size (Goyal et al., 2017; Smith et al., 2018).

We note that many previous authors have sought to explain the generalization benefit of SGD using an analogy between SGD and stochastic differential equations (SDEs) (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Chaudhari & Soatto, 2018). However this SDE analogy assumes that each minibatch is randomly sampled from the full dataset, which implies that some examples will be sampled multiple times in one epoch. Furthermore, the most common SDE analogy holds only for vanishing learning rates (Yaida, 2019) and therefore misses the generalization benefits of finite learning rates which we identify in this work. An important exception is Li et al. (2017), who applied backward error analysis to identify a modified SDE which holds when the learning rate is finite. However this work still relies on the assumption that minibatches are sampled randomly. It also focused on the convergence rate, and did not discuss the performance of SGD on the test set.

Main Result. We now introduce our main result. We define the cost function over parameters ω as C(ω) = (1/N) ∑_{j=1}^{N} C_j(ω), which is the mean of the per-example costs C_j(ω), where N denotes the training set size. Gradient flow follows the ODE ω̇ = −∇C(ω), while gradient descent computes discrete updates ω_{i+1} = ω_i − ε∇C(ω_i), where ε is the learning rate. For simplicity, we assume that the batch size B perfectly splits the training set such that N%B = 0, where % denotes the modulo operation, and for convenience we define the number of batches per epoch m = N/B. We can therefore re-write the cost function as a sum over minibatches C(ω) = (1/m) ∑_{k=0}^{m−1} Ĉ_k(ω), where the minibatch cost Ĉ_k(ω) = (1/B) ∑_{j=kB+1}^{kB+B} C_j(ω). In order to guarantee that we sample each example precisely once per epoch, we define SGD by the discrete update ω_{i+1} = ω_i − ε∇Ĉ_{i%m}(ω_i). Informally, our main result is as follows. After one epoch, the mean iterate of SGD with a small but finite learning rate ε, averaged over all possible shuffles of the batch indices, stays close to the path of gradient flow on a modified loss ω̇ = −∇C̃_SGD(ω), where the modified loss C̃_SGD is given by:

C̃_SGD(ω) = C(ω) + (ε/4m) ∑_{k=0}^{m−1} ||∇Ĉ_k(ω)||². (1)

We emphasize that our analysis studies the mean evolution of SGD, not the path of individual trajectories.
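To make Equation 1 concrete, here is a minimal sketch in JAX that evaluates the modified loss on a toy least-squares problem. The toy model, the function names (batch_loss, full_loss, modified_loss), and the hyper-parameter values are illustrative assumptions, not part of the paper.

```python
import jax
import jax.numpy as jnp

N, B, eps = 32, 4, 1e-2           # dataset size, batch size, learning rate
m = N // B                        # batches per epoch
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (N, 3))
y = X @ jnp.array([1.0, -2.0, 0.5])

def batch_loss(w, xb, yb):        # \hat{C}_k(w): mean loss over one minibatch
    return jnp.mean((xb @ w - yb) ** 2)

def full_loss(w):                 # C(w): mean loss over the whole training set
    return batch_loss(w, X, y)

def modified_loss(w):             # Eq. 1: C(w) + (eps / 4m) sum_k ||grad C_k(w)||^2
    reg = 0.0
    for k in range(m):
        xb, yb = X[k * B:(k + 1) * B], y[k * B:(k + 1) * B]
        g = jax.grad(batch_loss)(w, xb, yb)
        reg += jnp.sum(g ** 2)
    return full_loss(w) + (eps / (4 * m)) * reg

w = jnp.zeros(3)
print(full_loss(w), modified_loss(w))
```

Note that evaluating the regularizer requires one extra gradient per minibatch; differentiating it (as done in Section 2.3) additionally requires second-order automatic differentiation.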
The modified loss C̃_SGD(ω) is composed of the original loss C(ω) and an implicit regularizer C_reg(ω) = (1/4m) ∑_{k=0}^{m−1} ||∇Ĉ_k(ω)||². The scale of this implicit regularization term is proportional to the learning rate ε, and it penalizes the mean squared norm of the gradient evaluated on a batch of B examples. To help us compare the modified losses of GD and SGD, we can expand,

C̃_SGD(ω) = C(ω) + (ε/4)||∇C(ω)||² + (ε/4m) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω) − ∇C(ω)||². (2)

We arrive at Equation 2 from Equation 1 by noting that ∑_{i=0}^{m−1}(∇Ĉ_i(ω) − ∇C(ω)) = 0, so the cross terms in the expansion of ||∇Ĉ_i(ω)||² vanish. In the limit B → N, we identify the modified loss of gradient descent, C̃_GD(ω) = C(ω) + (ε/4)||∇C(ω)||², which penalizes “sharp” regions where the norm of the full-batch gradient (||∇C(ω)||²) is large. However, as shown by Equation 2, the modified loss of SGD penalizes both sharp regions where the full-batch gradient is large, and also “non-uniform” regions where the norms of the errors in the minibatch gradients (||∇Ĉ(ω) − ∇C(ω)||²) are large (Wu et al., 2018). Although global minima of C(ω) are global minima of C̃_GD(ω), global minima of C(ω) may not be global (or even local) minima of C̃_SGD(ω). Note however that C(ω) and C̃_SGD(ω) do share the same global minima on over-parameterized models which can interpolate the training set (Ma et al., 2018). We verify in our experiments that the implicit regularizer can enhance the test accuracy of models trained with SGD.

Paper structure. In Section 2, we derive our main result (Equation 1), and we confirm empirically that we can close the generalization gap between small and large learning rates by including the implicit regularizer explicitly in the loss function. In Section 3, we confirm Equation 1 satisfies the linear scaling rule between learning rate and batch size (Goyal et al., 2017). In Section 4, we provide additional experiments which challenge the prevailing view that the generalization benefit of small batch SGD arises from the temperature of an associated SDE (Mandt et al., 2017; Park et al., 2019).

2 A BACKWARD ERROR ANALYSIS OF STOCHASTIC GRADIENT DESCENT
Backward error analysis has great potential to clarify the role of finite learning rates, and to help identify the implicit biases of different optimizers. We therefore give a detailed introduction to the core methodology in Section 2.1, before deriving our main result in Section 2.2. In Section 2.3, we confirm empirically that the implicit regularizer can enhance the test accuracy of deep networks.

2.1 AN INTRODUCTION TO BACKWARD ERROR ANALYSIS
In numerical analysis, we often wish to integrate ODEs of the form ω̇ = f(ω). This system usually cannot be solved analytically, forcing us to simulate the continuous flow with discrete updates, like the Euler step ω(t+ε) ≈ ω(t) + εf(ω(t)). However discrete updates will introduce approximation error when the step size ε is finite. In order to study the bias introduced by this approximation error, we assume the learning rate is relatively small, and introduce a modified flow ω̇ = f̃(ω), where,

f̃(ω) = f(ω) + εf_1(ω) + ε²f_2(ω) + ... . (3)

The modified flow f̃(ω) is equal to the original flow f(ω) when ε → 0, but it differs from the original flow if ε is finite. The goal of backward error analysis is to choose the correction terms f_i(ω) such that the iterates obtained from discrete updates of the original flow with small finite step sizes lie on the path taken by the continuous solution to the modified flow with vanishing step sizes.
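As a quick illustration of this idea, consider the linear test equation f(ω) = −λω (an illustrative assumption, not an example from the paper). Anticipating the correction derived below (f_1 = −(1/2)∇f f for Euler updates), the modified flow becomes ω̇ = −λ(1 + ελ/2)ω, and its exact solution tracks a discrete Euler step far more closely than the original flow does. A minimal JAX sketch:

```python
import jax.numpy as jnp

lam, eps, w0 = 1.0, 0.1, 1.0
euler = w0 * (1 - eps * lam)                    # one discrete Euler step
flow = w0 * jnp.exp(-lam * eps)                 # original flow at time eps
mod_flow = w0 * jnp.exp(-lam * eps * (1 + eps * lam / 2))  # modified flow at time eps

print(abs(euler - flow))      # error of order eps^2 (about 5e-3 here)
print(abs(euler - mod_flow))  # error of order eps^3 (about 3e-4 here)
```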
The standard derivation of backward error analysis begins by taking a Taylor expansion in ε of the solution to the modified flow ω(t+ε). We obtain the derivatives of ω(t+ε) recursively using the modified flow equation ω̇ = f̃(ω) (see Hairer et al. (2006)), and we identify the correction terms f_i(ω) by ensuring this Taylor expansion matches the discrete update (e.g., ω_{t+1} = ω_t + εf(ω_t)) for all powers of ε. However, this approach does not clarify why these correction terms arise. To build our intuition for the origin of the correction terms, and to clarify how we might apply this analysis to SGD, we take a different approach. First, we will identify the path taken by the continuous modified flow by considering the combined influence of an infinite number of discrete steps in the limit of vanishing learning rates, and then we will compare this continuous path to either a single step of GD or a single epoch of SGD. Imagine taking n Euler steps on the modified flow f̃(ω) with step size α,

ω_{t+n} = ω_t + αf̃(ω_t) + αf̃(ω_{t+1}) + αf̃(ω_{t+2}) + ... (4)
= ω_t + αf̃(ω_t) + αf̃(ω_t + αf̃(ω_t)) + αf̃(ω_t + αf̃(ω_t) + αf̃(ω_t + αf̃(ω_t))) + ... (5)
= ω_t + nαf̃(ω_t) + (n/2)(n−1)α²∇f̃(ω_t)f̃(ω_t) + O(n³α³). (6)

We arrived at Equation 6 by taking the Taylor expansion of f̃ and then counting the number of terms of type ∇f̃(ω_t)f̃(ω_t) using the formula for an arithmetic series. Note that we assume ∇f̃ exists. Next, to ensure ω_{t+n} in Equation 6 coincides with the solution ω(t+ε) of the continuous modified flow ω̇ = f̃(ω) for small but finite ε, we let the number of steps n → ∞ while setting α = ε/n,

ω(t+ε) = ω(t) + εf̃(ω(t)) + (ε²/2)∇f̃(ω(t))f̃(ω(t)) + O(ε³) (7)
= ω(t) + εf(ω(t)) + ε²(f_1(ω(t)) + (1/2)∇f(ω(t))f(ω(t))) + O(ε³). (8)

We have replaced f̃(ω) with its definition from Equation 3. As we will see below, Equation 8 is the key component of backward error analysis, which describes the path taken when integrating the continuous modified flow f̃(ω) with vanishing learning rates over a discrete time step of length ε. Notice that we have assumed that the Taylor expansion in Equation 8 converges, while the higher order terms at O(ε³) will contain higher order derivatives of the original flow f(ω). Backward error analysis therefore implicitly assumes that f(ω) is an analytic function in the vicinity of the current parameters ω. We refer the reader to Hairer et al. (2006) for a detailed introduction.

Gradient descent: As a simple example, we will now derive the first order correction f_1(ω) of the modified flow for GD. First, we recall that the discrete updates obey ω_{i+1} = ω_i − ε∇C(ω_i), and we therefore fix f(ω) = −∇C(ω). In order to ensure that the continuous modified flow coincides with this discrete update, we need all terms at O(ε²) and above in Equation 8 to vanish. At order ε², this implies that f_1(ω) + (1/2)∇∇C(ω)∇C(ω) = 0, which yields the first order correction,

f_1(ω) = −(1/2)∇∇C(ω)∇C(ω) = −(1/4)∇(||∇C(ω)||²). (9)

We conclude that, if the learning rate ε is sufficiently small such that we can neglect higher order terms in Equation 3, then the discrete GD iterates lie on the path of the following ODE,

ω̇ = −∇C(ω) − (ε/4)∇(||∇C(ω)||²) (10)
= −∇C̃_GD(ω). (11)

Equation 11 corresponds to gradient flow on the modified loss, C̃_GD(ω) = C(ω) + (ε/4)||∇C(ω)||². A numerical sketch of this claim follows below.
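As a sanity check, the following sketch compares one discrete GD step against a fine Euler integration of gradient flow on C̃_GD, on an arbitrary smooth toy function (the function, step size, and step counts are all illustrative assumptions):

```python
import jax
import jax.numpy as jnp

def C(w):  # arbitrary smooth toy loss
    return jnp.sum(w ** 4) + jnp.sum(jnp.sin(w))

eps = 0.05
grad_C = jax.grad(C)

def C_gd(w):  # modified loss of Equation 11
    return C(w) + (eps / 4) * jnp.sum(grad_C(w) ** 2)

w0 = jnp.array([0.7, -0.3])
w_gd = w0 - eps * grad_C(w0)  # one discrete GD step

def integrate(loss, w, time, steps=1000):  # fine Euler integration of gradient flow
    g = jax.grad(loss)
    for _ in range(steps):
        w = w - (time / steps) * g(w)
    return w

print(jnp.abs(w_gd - integrate(C, w0, eps)).max())     # O(eps^2) gap to the original flow
print(jnp.abs(w_gd - integrate(C_gd, w0, eps)).max())  # O(eps^3) gap to the modified flow
```

The residual in the second line is dominated by the neglected O(ε³) terms, far below the O(ε²) gap to the unmodified flow.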
2.2 BACKWARD ERROR ANALYSIS AND STOCHASTIC GRADIENT DESCENT
We now derive our main result (Equation 1). As described in the introduction, we assume N%B = 0, where N is the training set size, B is the batch size, and % denotes the modulo operation. The number of updates per epoch m = N/B, and the minibatch costs Ĉ_k(ω) = (1/B) ∑_{j=kB+1}^{kB+B} C_j(ω). SGD with constant learning rates obeys ω_{i+1} = ω_i − ε∇Ĉ_{i%m}(ω_i). It is standard practice to shuffle the dataset once per epoch, but we omit this step here and instead perform our analysis over a single epoch. In Equation 6 we derived the influence of n Euler steps on the flow f̃(ω) with step size α. Following a similar approach, we now derive the influence of m SGD updates with learning rate ε,

ω_m = ω_0 − ε∇Ĉ_0(ω_0) − ε∇Ĉ_1(ω_1) − ε∇Ĉ_2(ω_2) − ... (12)
= ω_0 − ε ∑_{j=0}^{m−1} ∇Ĉ_j(ω_0) + ε² ∑_{j=0}^{m−1} ∑_{k<j} ∇∇Ĉ_j(ω_0)∇Ĉ_k(ω_0) + O(m³ε³) (13)
= ω_0 − mε∇C(ω_0) + ε²ξ(ω_0) + O(m³ε³). (14)

The error in Equation 14 is O(m³ε³) since there are O(m³) terms in the Taylor expansion proportional to ε³. Notice that a single epoch of SGD is equivalent to a single GD update with learning rate mε up to first order in ε. Remarkably, this implies that when the learning rate is sufficiently small, there is no noise in the iterates of SGD after completing one epoch. For clarity, this observation arises because we require that each training example is sampled once per epoch. However the second order correction ξ(ω) = ∑_{j=0}^{m−1} ∑_{k<j} ∇∇Ĉ_j(ω)∇Ĉ_k(ω) does not appear in the GD update, and it is a random variable which depends on the order of the mini-batches. In order to identify the bias introduced by SGD, we will evaluate the mean correction E(ξ), where we take the expectation across all possible sequences of the (non-overlapping) mini-batches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. Note that we hold the composition of the batches fixed, averaging only over their order. We conclude that,

E(ξ(ω)) = (1/2) ∑_{j=0}^{m−1} ∑_{k≠j} ∇∇Ĉ_j(ω)∇Ĉ_k(ω) (15)
= (m²/2)∇∇C(ω)∇C(ω) − (1/2) ∑_{j=0}^{m−1} ∇∇Ĉ_j(ω)∇Ĉ_j(ω) (16)
= (m²/4)∇(||∇C(ω)||² − (1/m²) ∑_{j=0}^{m−1} ||∇Ĉ_j(ω)||²). (17)

For clarity, in Equation 15 we exploit the fact that every sequence of batches has a corresponding sequence in reverse order. Combining Equations 14 and 17, we conclude that after one epoch,

E(ω_m) = ω_0 − mε∇C(ω_0) + (m²ε²/4)∇(||∇C(ω_0)||² − (1/m²) ∑_{j=0}^{m−1} ||∇Ĉ_j(ω_0)||²) + O(m³ε³). (18)

Having identified the expected value of the SGD iterate after one epoch E(ω_m) (for small but finite learning rates), we can now use this expression to identify the corresponding modified flow. First, we set f(ω) = −∇C(ω), t = 0, ω(0) = ω_0, and let ε → mε in Equations 3 and 8 to obtain,

ω(mε) = ω_0 − mε∇C(ω_0) + m²ε²(f_1(ω_0) + (1/4)∇||∇C(ω_0)||²) + O(m³ε³). (19)

Next, we equate Equations 18 and 19 by setting ω(mε) = E(ω_m). We immediately identify the first order correction to the modified flow f_1(ω) = −(1/(4m²))∇ ∑_{j=0}^{m−1} ||∇Ĉ_j(ω)||². We therefore conclude that, after one epoch, the expected SGD iterate E(ω_m) = ω(mε) + O(m³ε³), where ω(0) = ω_0 and ω̇ = −∇C(ω) + mεf_1(ω). Simplifying, we conclude ω̇ = −∇C̃_SGD(ω), where,

C̃_SGD(ω) = C(ω) + (ε/4m) ∑_{k=0}^{m−1} ||∇Ĉ_k(ω)||². (20)

Equation 20 is identical to Equation 1, and this completes the proof of our main result. We emphasize that C̃_SGD assumes a fixed set of minibatches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. We will evaluate the expected modified loss after shuffling the dataset and sampling a new set of minibatches in Section 3.
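Because the expectation in Equation 15 is over a finite set of batch orderings, the result can be checked directly by brute force on a small problem. The sketch below (toy data, small m, all names illustrative assumptions) averages the one-epoch SGD iterate over every permutation of the batch order and compares it with a fine integration of gradient flow on the modified loss of Equation 20; the two agree up to the neglected O(m³ε³) terms.

```python
import itertools
import jax
import jax.numpy as jnp

N, B, eps = 12, 3, 0.02
m = N // B
k1, k2 = jax.random.split(jax.random.PRNGKey(1))
X = jax.random.normal(k1, (N, 2))
y = X @ jnp.array([0.5, -1.0]) + 0.1 * jax.random.normal(k2, (N,))

def batch_loss(w, k):  # \hat{C}_k(w)
    return jnp.mean((X[k * B:(k + 1) * B] @ w - y[k * B:(k + 1) * B]) ** 2)

grad_b = jax.grad(batch_loss)
w0 = jnp.zeros(2)

# mean SGD iterate after one epoch, over all m! batch orderings
iterates = []
for order in itertools.permutations(range(m)):
    w = w0
    for k in order:
        w = w - eps * grad_b(w, k)
    iterates.append(w)
mean_iterate = jnp.stack(iterates).mean(axis=0)

def C_sgd(w):  # modified loss, Equation 20
    C = sum(batch_loss(w, k) for k in range(m)) / m
    reg = sum(jnp.sum(grad_b(w, k) ** 2) for k in range(m))
    return C + (eps / (4 * m)) * reg

w_flow, steps = w0, 2000  # integrate the modified flow to time m * eps
for _ in range(steps):
    w_flow = w_flow - (m * eps / steps) * jax.grad(C_sgd)(w_flow)

print(mean_iterate, w_flow)  # agree up to O(m^3 eps^3)
```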
REMARKS ON THE ANALYSIS
The phrase “for small finite learning rates” has a precise meaning in our analysis. It implies ε is large enough that terms of O(m²ε²) may be significant, but small enough that terms of O(m³ε³) are negligible. Our analysis is unusual, because we consider the mean evolution of the SGD iterates but ignore the variance of individual training runs. Previous analyses of SGD have usually focused on the variance of the iterates in the limit of vanishing learning rates (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018). However these works assume that each minibatch is randomly sampled from the full dataset. Under this assumption, the variance in the iterates arises at O(ε), while the bias arises at O(ε²). By contrast, in our analysis each example is sampled once per epoch, and both the variance and the bias arise at O(ε²) (for simplicity, we assume m = N/B is constant). We therefore anticipate that the variance will play a less important role than is commonly supposed.

Furthermore, we can construct a specific sequence of minibatches for which the variance at O(m²ε²) vanishes, such that the evolution of a specific training run will coincide exactly with gradient flow on the modified loss of Equation 1 for small finite learning rates. To achieve this, we perform two training epochs with the sequence of minibatches (Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}, Ĉ_{m−1}, ..., Ĉ_1, Ĉ_0) (i.e., the second epoch iterates through the same set of minibatches as the first but in the opposite order). If one inspects Equations 13 to 15, one will see that reversing the second epoch has the same effect as taking the mean across all possible sequences of minibatches (it replaces the ∑_{k<j} by a ∑_{k≠j}); see the sketch at the end of this subsection.

The key limitation of our analysis is that we assume mε = Nε/B is small, in order to neglect terms at O(m³ε³). This is an extreme approximation, since we typically expect that N/B is large. Therefore, while our work identifies the first order correction to the bias arising from finite learning rates, higher order terms in the modified flow may also play an important role at practical learning rates. We note however that previous theoretical analyses have made even more extreme assumptions. For instance, most prior work studying SGD in the small learning rate limit neglects all terms at O(ε²) (for an exception, see Li et al. (2017)). Furthermore, as we show in Section 3, the learning rate often scales proportional to the batch size, such that ε/B is constant (Goyal et al., 2017; McCandlish et al., 2018). Therefore the accuracy of our approximations does not necessarily degrade as the batch size falls, but higher order terms may play an increasingly important role as the dataset size increases. Our experimental results in Section 2.3 and Section 4 suggest that our analysis can explain most of the generalization benefit of finite learning rate SGD for Wide-ResNets trained on CIFAR-10.

We note that to achieve the highest test accuracies, practitioners usually decay the learning rate during training. Under this scheme, the modified loss would change as training proceeds. However it is widely thought that the generalization benefit of SGD arises from the use of large learning rates early in training (Smith et al., 2018; Li et al., 2019; Jastrzebski et al., 2020; Lewkowycz et al., 2020), and popular schedules hold the learning rate constant or approximately constant for several epochs. Finally, we emphasize that our primary goal in this work is to identify the influence of finite learning rates on training. The implicit regularization term may not be beneficial in all models and datasets.
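The reversed-epoch construction above can be probed numerically. In the sketch below (toy regression data and batch orders are illustrative assumptions), two different palindromic two-epoch schedules land on nearly the same iterate, since both match the mean flow at O(ε²) and so differ only at O(ε³), whereas two plain two-epoch schedules typically differ at O(ε²).

```python
import jax
import jax.numpy as jnp

N, B, eps = 12, 3, 0.02
m = N // B
key = jax.random.PRNGKey(1)
X = jax.random.normal(key, (N, 2))
y = X @ jnp.array([0.5, -1.0])

def batch_loss(w, k):
    return jnp.mean((X[k * B:(k + 1) * B] @ w - y[k * B:(k + 1) * B]) ** 2)

grad_b = jax.grad(batch_loss)

def run_epoch(order, w):  # one epoch of SGD with a fixed batch order
    for k in order:
        w = w - eps * grad_b(w, k)
    return w

w0 = jnp.zeros(2)
a, b = (0, 1, 2, 3), (2, 0, 3, 1)

# plain two-epoch schedules: the order matters at O(eps^2)
plain_gap = run_epoch(b, run_epoch(a, w0)) - run_epoch(a, run_epoch(b, w0))
# palindromic schedules (an epoch then its reverse): the order only matters at O(eps^3)
palin_gap = run_epoch(a[::-1], run_epoch(a, w0)) - run_epoch(b[::-1], run_epoch(b, w0))

print(jnp.abs(plain_gap).max(), jnp.abs(palin_gap).max())
```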
2.3 AN EMPIRICAL EVALUATION OF THE MODIFIED LOSS
In order to confirm that the modified loss C̃_SGD(ω) can help explain why large learning rates enhance generalization, we now verify empirically that the implicit regularizer inherent in constant learning rate SGD, C_reg(ω) = (1/4m) ∑_{k=0}^{m−1} ||∇Ĉ_k(ω)||², can enhance the test accuracy of deep networks. To this end, we train the same model with two different (explicit) loss functions. The first loss function C(ω) represents the original loss, while the second C_mod(ω) = C(ω) + λC_reg(ω) is obtained from the modified loss C̃_SGD(ω) by replacing the learning rate ε with an explicit regularization coefficient λ. Notice that C_mod(ω) = (1/m) ∑_{k=0}^{m−1} (Ĉ_k(ω) + (λ/4)||∇Ĉ_k(ω)||²), which ensures that it is straightforward to minimize the modified loss C_mod(ω) with minibatch gradients.

Since the implicit regularization term C_reg(ω) is expensive to differentiate (typically 5-10x overhead), we consider a 10-1 Wide-ResNet model (Zagoruyko & Komodakis, 2016) for classification on CIFAR-10. To ensure close agreement with our theoretical analysis, we train without batch normalization using SkipInit initialization (De & Smith, 2020). We train for 6400 epochs at batch size 32 without learning rate decay using SGD without momentum. We use standard data augmentation including crops and random flips, and we use weight decay with L2 coefficient 5 × 10⁻⁴. We emphasize that, since we train using a finite (though very large) compute budget, the final networks may not have fully converged. This is particularly relevant when training with small learning rates. Note that we provide additional experiments on Fashion-MNIST (Xiao et al., 2017) in appendix D.

In Figure 1(a), we compare two training runs, one minimizing the modified loss C_mod(ω) with λ = 2⁻⁶, and one minimizing the original loss C(ω). For both runs we use a small constant learning rate ε = 2⁻⁹. As expected, the regularized training run achieves significantly higher test accuracies late in training. This confirms that the implicit regularizer, which arises as a consequence of using SGD with finite learning rates, can also enhance the test accuracy if it is included explicitly in the loss. In Figure 1(b), we provide the test accuracy for a range of regularization strengths λ (orange line). We provide the mean test accuracy of the best 5 out of 7 training runs at each regularization strength, and for each run we take the highest test accuracy achieved during the entire training run. We use a fixed learning rate ε = 2⁻⁹ for all λ. For comparison, we also provide the test accuracy achieved with the original loss C(ω) for a range of learning rates (blue line). In both cases, the test accuracy rises initially, before falling for large regularization strengths or large learning rates. Furthermore, in this network the optimal regularization strength on the modified loss λ_opt = 2⁻⁶ is equal to the optimal learning rate on the original loss ε_opt = 2⁻⁶. Meanwhile when λ → 0 the performance of the modified loss approaches the performance of the original loss at ε = 2⁻⁹ (dotted green line). We provide the corresponding training accuracies in appendix C. Finally, in Figure 1(c), we provide the values of the implicit regularizer C_reg(ω) at the end of training. As predicted by our analysis, training with larger learning rates reduces the value of the implicit regularization term.
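The per-minibatch form of C_mod above is what makes the explicit regularizer practical: each update only needs the gradient of Ĉ_k(ω) + (λ/4)||∇Ĉ_k(ω)||², at the cost of double backpropagation. A minimal sketch in JAX (the linear toy model stands in for the paper's Wide-ResNet; names and values are illustrative assumptions):

```python
import jax
import jax.numpy as jnp

lam, eps = 2.0 ** -6, 2.0 ** -9
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (32, 10))
y = jax.random.normal(key, (32,))

def batch_loss(w, xb, yb):
    return jnp.mean((xb @ w - yb) ** 2)

def regularized_loss(w, xb, yb):  # C_mod restricted to one minibatch
    g = jax.grad(batch_loss)(w, xb, yb)
    return batch_loss(w, xb, yb) + (lam / 4) * jnp.sum(g ** 2)

w = jnp.zeros(10)
for k in range(0, 32, 8):         # one epoch at batch size 8
    xb, yb = X[k:k + 8], y[k:k + 8]
    w = w - eps * jax.grad(regularized_loss)(w, xb, yb)  # double backprop
```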
In Figure 2, we take the same 10-1 Wide-ResNet model and provide the mean training and test accuracies achieved at a range of learning rates for two regularization coefficients (following the experimental protocol above). In Figure 2(a), we train on the original loss C(ω) (λ = 0), while in Figure 2(b), we train on the modified loss C_mod(ω) with regularization coefficient λ = 2⁻⁶. From Figure 2(a), when λ = 0 there is a clear generalization benefit to large learning rates, as the learning rate that maximizes test accuracy (2⁻⁶) is 16 times larger than the learning rate that maximizes training accuracy (2⁻¹⁰). However in Figure 2(b) with λ = 2⁻⁶, the learning rates that maximize the test and training accuracies are equal (2⁻⁸). This suggests that when we include the implicit regularizer explicitly in the loss, the generalization benefit of large learning rates is diminished.

3 IMPLICIT REGULARIZATION AND THE BATCH SIZE
In Section 2.2, we derived the modified loss by considering the expected SGD iterate after one epoch. We held the composition of the batches fixed, averaging only over the order in which the batches are seen. This choice helped make clear how to explicitly include the implicit regularizer in the loss function in Section 2.3. However, in order to clarify how the implicit regularization term depends on the batch size, we now evaluate the expected modified loss after randomly shuffling the dataset and sampling a new set of m non-overlapping minibatches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. Since the minibatch losses Ĉ_i(ω) are all identically distributed by symmetry, we recall Equation 2 and conclude that,

E(C̃_SGD(ω)) = C(ω) + (ε/4)||∇C(ω)||² + (ε/4)E(||∇Ĉ(ω) − ∇C(ω)||²), (21)

where Ĉ(ω) denotes a batch of B non-overlapping examples, drawn randomly from the full dataset. To simplify Equation 21, we prove in appendix A that E(||∇Ĉ(ω) − ∇C(ω)||²) = ((N−B)/(N−1)) (Γ(ω)/B), where Γ(ω) = (1/N) ∑_{i=1}^{N} ||∇C_i(ω) − ∇C(ω)||². We therefore obtain,

E(C̃_SGD(ω)) = C(ω) + (ε/4)||∇C(ω)||² + (ε/4) ((N−B)/(B(N−1))) Γ(ω). (22)

Note that Γ(ω) is the trace of the empirical covariance matrix of the per-example gradients. We have not assumed that the minibatch gradients are Gaussian distributed, however if the per-example gradients are heavy tailed (Simsekli et al., 2019) then Γ(ω) may diverge, in which case the expected value of the modified loss is ill-defined. Equation 22 shows that the implicit regularization term of SGD has two contributions. The first term is proportional to the learning rate ε, and it penalizes the norm of the full batch gradient. The second term is proportional to the ratio of the learning rate to the batch size ε/B (assuming N ≫ B), and it penalizes the trace of the covariance matrix. To interpret this result, we assume that the minibatch gradients are diverse, such that (Γ(ω)/B) ≫ ||∇C(ω)||². This assumption guarantees that increasing the batch size reduces the error in the gradient estimate. In this limit, the second term above will dominate, and therefore different batch sizes will experience the same implicit regularization so long as the ratio of the learning rate to the batch size is constant.
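Γ(ω) is cheap to estimate on small models from per-example gradients. A sketch using jax.vmap (the toy linear model and all names are illustrative assumptions):

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (64, 5))
y = jax.random.normal(key, (64,))

def example_loss(w, x, yi):  # C_i(w) for a single training example
    return (x @ w - yi) ** 2

w = jnp.ones(5) * 0.1
per_example_grads = jax.vmap(jax.grad(example_loss), in_axes=(None, 0, 0))(w, X, y)
mean_grad = per_example_grads.mean(axis=0)
gamma = jnp.mean(jnp.sum((per_example_grads - mean_grad) ** 2, axis=1))
print(gamma)  # trace of the per-example gradient covariance
```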
To verify this claim, in Figure 3 we plot the mean test accuracies achieved on a 10-1 Wide-ResNet, trained on CIFAR-10 with a constant learning rate, for a range of learning rates ε, regularization coefficients λ and batch sizes B. As expected, in Figure 3(a), training on the original loss C(ω) for 6400 epochs, we see that different batch sizes achieve similar test accuracies so long as the ratio ε/B is constant and the batch size is not too large. We note that this linear scaling rule is well known and has been observed in prior work (Goyal et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Zhang et al., 2019). To confirm that this behaviour is consistent with the modified loss, in Figure 3(b) we fix the learning rate ε = 2⁻⁹ and train on C_mod(ω) at a range of regularization strengths λ for 10 million steps. As expected, different batch sizes achieve similar test accuracy so long as the ratio λ/B is constant. We note that we expect this phenomenon to break down for very large batch sizes, however we were not able to run experiments in this limit due to computational constraints. For very large batch sizes, the first implicit regularization term in Equation 22 dominates, the linear scaling rule breaks down, and the bias of SGD is similar to the bias of GD identified by Barrett & Dherin (2021). We expect the optimal learning rate to be independent of the batch size in this limit, as observed by McCandlish et al. (2018) and Smith et al. (2020). Convergence bounds also predict a transition between a small batch regime where the optimal learning rate ∝ B and a large batch regime where the optimal learning rate is constant (Ma et al., 2018; Zhang et al., 2019). However these analyses identify the learning rate which minimizes the training loss. Our analysis complements these claims by explaining why similar conclusions hold when maximizing test accuracy.

4 FINITE LEARNING RATES AND STOCHASTIC DIFFERENTIAL EQUATIONS
In the previous two sections, we argued that the use of finite learning rates and small batch sizes introduces implicit regularization, which can enhance the test accuracy of deep networks. We analyzed this effect using backward error analysis (Hairer et al., 2006; Li et al., 2017; Barrett & Dherin, 2021), but many previous papers have argued that this effect can be understood by interpreting small batch SGD as the discretization of an SDE (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Park et al., 2019). In this section, we compare this popular perspective with our main results from Sections 2 and 3. To briefly recap, in the SDE analogy a single gradient update is given by ω_{i+1} = ω_i − ε∇Ĉ(ω_i), where Ĉ denotes a random batch of B non-overlapping training examples. Notice that in the SDE analogy, since examples are drawn randomly from the full dataset, there is no guarantee that each training example is sampled once per epoch. Assuming N ≫ B ≫ 1 and that the gradients are not heavy tailed, the central limit theorem is applied to model the noise in an update by a Gaussian noise source ξ whose covariance is inversely proportional to the batch size:

ω_{i+1} = ω_i − ε(∇C(ω_i) + ξ_i/√B) = ω_i − ε∇C(ω_i) + √(εT)ξ_i. (23)

This assumes E(ξ_i) = 0 and E(ξ_iξ_jᵀ) = F(ω)δ_{ij}, where F(ω) is the covariance matrix of the per-example gradients (the sign of the zero-mean noise is absorbed into ξ_i), and we define the “temperature” T = ε/B. The SDE analogy notes that Equation 23 is identical to the Euler discretization of an SDE with step size ε and temperature T (Gardiner et al., 1985). Therefore one might expect the SGD iterates to remain close to this underlying SDE in the limit of small learning rates (ε → 0).
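For concreteness, a sketch of the Euler–Maruyama update in Equation 23, with isotropic Gaussian noise standing in for F(ω) (a simplifying assumption; the temperature T = ε/B is as defined above):

```python
import jax
import jax.numpy as jnp

eps, B = 0.05, 16
T = eps / B                        # temperature of the associated SDE
key = jax.random.PRNGKey(0)

def C(w):  # toy full-batch loss
    return jnp.sum(w ** 2)

w = jnp.ones(4)
for i in range(100):
    key, sub = jax.random.split(key)
    noise = jax.random.normal(sub, w.shape)   # isotropic stand-in for F(w)
    w = w - eps * jax.grad(C)(w) + jnp.sqrt(eps * T) * noise
```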
In this limit, the temperature defines the influence of mini-batching on the dynamics, and it is therefore often assumed that the temperature also governs the generalization benefit of SGD (Smith & Le, 2018; Jastrzębski et al., 2018; Park et al., 2019). However this conclusion from the SDE analogy is inconsistent with our analysis in Section 2. To see this, note that in Section 2 we assumed that each training example is sampled once per epoch, as recommended by practitioners (Bottou, 2012), and showed that under this assumption there is no noise in the dynamics of SGD up to first order in ε after one epoch of training. The SDE analogy therefore relies on the assumption that minibatches are sampled randomly from the full dataset. Furthermore, SGD only converges to the underlying SDE when the learning rate ε → 0, but in this limit the temperature T → 0 and SGD converges to gradient flow (Yaida, 2019). We must use a finite learning rate to preserve a finite temperature, but at any finite learning rate the distributions of the SGD iterates and the underlying SDE may differ.

We now provide intriguing empirical evidence to support our contention that the generalization benefit of SGD arises from finite learning rates, not the temperature of an associated stochastic process. First, we introduce a modified SGD update rule:

n-step SGD: Apply n gradient descent updates sequentially on the same minibatch with bare learning rate α, effective learning rate ε = nα and batch size B. Sample the next minibatch and repeat.

To analyze n-step SGD, we consider the combined influence of n updates on the same minibatch:

ω_{i+1} = ω_i − α∇Ĉ(ω_i) − α∇Ĉ(ω_i − α∇Ĉ(ω_i)) − ... (24)
= ω_i − nα∇Ĉ(ω_i) + O(n²α²) (25)
= ω_i − ε∇C(ω_i) + √(εT)ξ_i + O(ε²). (26)

Equations 23 and 26 are identical up to first order in ε but they differ at O(ε²) and above. Therefore, if minibatches are randomly sampled from the full dataset, then the dynamics of standard SGD and n-step SGD should remain close to the same underlying SDE in the limit ε → 0, but their dynamics will differ when the learning rate is finite. We conclude that if the dynamics of SGD is close to the continuum limit of the associated SDE, then standard SGD and n-step SGD ought to achieve similar test accuracies after the same number of training epochs. However if, as we argued in Section 2, the generalization benefit of SGD arises from finite learning rate corrections at O(ε²) and above, then we should expect the performance of standard SGD and n-step SGD to differ.

For completeness, we provide a backward error analysis of n-step SGD in appendix B. In line with Section 2 (and best practice), we assume each training example is sampled once per epoch. We find that after one epoch, the expected n-step SGD iterate E(ω_m) = ω(mε) + O(m³ε³), where ω(0) = ω_0, ω̇ = −∇C̃_nSGD(ω) and C̃_nSGD(ω) = C(ω) + (ε/4mn) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω)||². The scale of the implicit regularizer is proportional to α = ε/n, which implies that the implicit regularization is suppressed as n increases if we hold ε constant. As expected, we recover Equation 1 when n = 1.
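A minimal sketch of the n-step SGD update rule (the toy linear model and all names are illustrative assumptions; the paper's experiments use Wide-ResNets):

```python
import jax
import jax.numpy as jnp

n, eps = 4, 0.02
alpha = eps / n                    # bare learning rate
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (32, 6))
y = jax.random.normal(key, (32,))

def batch_loss(w, xb, yb):
    return jnp.mean((xb @ w - yb) ** 2)

w = jnp.zeros(6)
for k in range(0, 32, 8):          # one epoch, batch size 8
    xb, yb = X[k:k + 8], y[k:k + 8]
    for _ in range(n):             # n updates on the same minibatch
        w = w - alpha * jax.grad(batch_loss)(w, xb, yb)
```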
In Figure 4(a), we plot the performance of n-step SGD at a range of bare learning rates α, when training a 16-4 Wide-ResNet on CIFAR-10 for 400 epochs using SkipInit (De & Smith, 2020) at batch size 32. Each example is sampled once per epoch. We introduce a learning rate decay schedule, whereby we hold the learning rate constant for the first half of training, before decaying the learning rate by a factor of 2 every remaining tenth of training, and we provide the mean test accuracy of the best 5 out of 7 training runs for each value of α. The optimal test accuracy drops from 93.5% when n = 1 (standard SGD) to 88.8% when n = 16. This occurs even though all values of n perform the same number of training epochs, which means that 16-step SGD performed 16 times more gradient updates. These results suggest that, at least for this model and dataset, the generalization benefit of SGD is not controlled by the temperature of the associated SDE, but instead arises from the implicit regularization associated with finite learning rates. When we increase n we reduce the largest stable bare learning rate α, and this suppresses the implicit regularization benefit, which reduces the test accuracy. We also verify in Figure 4(b) that similar conclusions arise if we hold the number of parameter updates fixed (such that the number of training epochs is inversely proportional to n). Smaller values of n are stable at larger bare learning rates and achieve higher test accuracies. Finally we confirm in Figure 4(c) that the test accuracy degrades as n increases even if one tunes both the learning rate and the epoch budget independently for each value of n, thus demonstrating that n-step SGD consistently achieves lower test accuracies as n increases. Note that we provide additional experiments on Fashion-MNIST (Xiao et al., 2017) in appendix D.

5 DISCUSSION
Many authors have observed that large learning rates (Li et al., 2019; Lewkowycz et al., 2020), and small batch sizes (Keskar et al., 2017; Smith et al., 2020), can enhance generalization. Most theoretical work has sought to explain this by observing that increasing the learning rate, or reducing the batch size, increases the variance of the SGD iterates (Smith & Le, 2018; Jastrzębski et al., 2018; Chaudhari & Soatto, 2018). We take a different approach, and note that when the learning rate is finite, the SGD iterates are also biased (Roberts, 2018). Backward error analysis (Hairer et al., 2006; Li et al., 2017; Barrett & Dherin, 2021) provides a powerful tool that computes how this bias accumulates over multiple parameter updates. Although this work focused on GD and SGD, we anticipate that backward error analysis could also be used to clarify the role of finite learning rates in adaptive optimizers like Adam (Kingma & Ba, 2015) or Natural Gradient Descent (Amari, 1998). We note however that backward error analysis assumes that the learning rate is small (though finite). It therefore does not capture the chaotic or oscillatory dynamics which arise when the learning rate is close to instability. At these very large learning rates the modified loss, which is defined as a Taylor series in powers of the learning rate, does not converge. Lewkowycz et al. (2020) recently argued that the test accuracies of wide networks trained with full batch gradient descent on quadratic losses are maximized for large learning rates close to divergence. In this “catapult” regime, the GD iterates oscillate along high curvature directions and the loss may increase early in training. It remains an open question to establish whether backward error analysis fully describes the generalization benefit of small batch SGD, or if these chaotic or oscillatory effects also play a role in some networks.
ACKNOWLEDGMENTS
We would like to thank Jascha Sohl-Dickstein, Razvan Pascanu, Alex Botev, Yee Whye Teh and the anonymous reviewers for helpful discussions and feedback on earlier versions of this manuscript.

A THE EXPECTED NORM OF A MINIBATCH GRADIENT
To keep the notation clean, we define X_i = (∇C_i(ω) − ∇C(ω)). We also recall for clarity that the expectation value E(...) is taken over all possible random shuffles of the indices i. Therefore,

E(||∇Ĉ(ω) − ∇C(ω)||²) = (1/B²) E(∑_{i=1}^{B} ∑_{j=1}^{B} X_i · X_j) (27)
= (B/B²) E(X_i · X_i) + (B(B−1)/B²) E(X_i · X_{j≠i}) (28)
= (1/NB) ∑_{i=1}^{N} X_i · X_i + ((B−1)/B) (1/(N(N−1))) ∑_{i=1}^{N} ∑_{j≠i} X_i · X_j (29)
= (1/NB) ∑_{i=1}^{N} X_i · X_i + ((B−1)/(BN(N−1))) ∑_{i=1}^{N} ∑_{j=1}^{N} X_i · X_j (1 − δ_{ij}).

Note that we obtain Equation 28 by counting the number of diagonal and off-diagonal terms in the sum in Equation 27. Next, we recall that ∑_{i=1}^{N} X_i = ∑_{i=1}^{N} (∇C_i(ω) − ∇C(ω)) = 0. Therefore,

E(||∇Ĉ(ω) − ∇C(ω)||²) = (1/NB) ∑_{i=1}^{N} X_i · X_i − ((B−1)/(BN(N−1))) ∑_{i=1}^{N} X_i · X_i (30)
= (1/NB) (1 − (B−1)/(N−1)) ∑_{i=1}^{N} X_i · X_i (31)
= ((N−B)/(N−1)) (Γ(ω)/B), (32)

where Γ(ω) = (1/N) ∑_{i=1}^{N} X_i · X_i = (1/N) ∑_{i=1}^{N} ||∇C_i(ω) − ∇C(ω)||². We can immediately identify Γ(ω) as the trace of the empirical covariance matrix of the per-example gradients.
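Equation 32 is easy to verify by Monte Carlo. The sketch below (synthetic vectors g_i standing in for per-example gradients; all names and sizes are illustrative assumptions) draws random batches without replacement and compares the empirical expectation against the closed form:

```python
import jax
import jax.numpy as jnp

N, B, d = 40, 8, 3
key = jax.random.PRNGKey(0)
g = jax.random.normal(key, (N, d))  # stand-ins for per-example gradients
g_full = g.mean(axis=0)
gamma = jnp.mean(jnp.sum((g - g_full) ** 2, axis=1))  # trace of gradient covariance

trials, acc = 2000, 0.0
for _ in range(trials):
    key, sub = jax.random.split(key)
    idx = jax.random.permutation(sub, N)[:B]  # a batch drawn without replacement
    acc += jnp.sum((g[idx].mean(axis=0) - g_full) ** 2)

print(acc / trials)                   # Monte Carlo estimate
print((N - B) / (N - 1) * gamma / B)  # closed form of Equation 32
```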
B A BACKWARD ERROR ANALYSIS FOR N-STEP SGD
Under n-step SGD, we apply n gradient descent updates on the same minibatch with bare learning rate α and batch size B. After n updates, we sample the next minibatch and repeat. For convenience, we define the effective learning rate ε = nα. After one minibatch (n parameter updates),

ω_{i+1} = ω_i − α∇Ĉ_i(ω_i) − α∇Ĉ_i(ω_i − α∇Ĉ_i(ω_i)) − ... (33)
= ω_i − nα∇Ĉ_i(ω_i) + (n/2)(n−1)α² ∇∇Ĉ_i(ω_i)∇Ĉ_i(ω_i) + O(n³α³) (34)
= ω_i − ε∇Ĉ_i(ω_i) + (1/4)(1 − 1/n) ε² ∇(||∇Ĉ_i(ω_i)||²) + O(ε³). (35)

After one epoch (including terms up to second order in ε),

ω_m = ω_0 − ε∇Ĉ_0(ω_0) + (1/4)(1 − 1/n) ε² ∇(||∇Ĉ_0(ω_0)||²)
− ε∇Ĉ_1(ω_1) + (1/4)(1 − 1/n) ε² ∇(||∇Ĉ_1(ω_1)||²) − ...
− ε∇Ĉ_{m−1}(ω_{m−1}) + (1/4)(1 − 1/n) ε² ∇(||∇Ĉ_{m−1}(ω_{m−1})||²) + O(ε³). (36)

To simplify this expression, we note that ω_{i+1} = ω_i − ε∇Ĉ_i(ω_i) + O(ε²). We can therefore re-use our earlier analysis from Section 2.2 of the main text (see Equation 13 for comparison) to obtain,

ω_m = ω_0 − mε∇C(ω_0) + ε² ∑_{j=0}^{m−1} ∑_{k<j} ∇∇Ĉ_j(ω_0)∇Ĉ_k(ω_0) + (1/4)(1 − 1/n) ε² ∑_{i=0}^{m−1} ∇(||∇Ĉ_i(ω_0)||²) + O(ε³). (37)

Taking the expectation over all possible batch orderings (see Equations 15 to 18), we obtain,

E(ω_m) = ω_0 − mε∇C(ω_0) + (m²ε²/4) ∇(||∇C(ω_0)||² − (1/(m²n)) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω_0)||²) + O(m³ε³). (38)

Fixing f(ω) = −∇C(ω) and equating Equation 38 with the continuous modified flow in Equation 19 by setting E(ω_m) = ω(mε), we identify the modified flow ω̇ = −∇C̃_nSGD(ω) + O(m²ε²), where,

C̃_nSGD(ω) = C(ω) + (ε/4mn) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω)||². (39)

Comparing Equation 39 to Equation 1, we note that the modified losses of SGD and n-step SGD coincide when n = 1. However for n-step SGD when n > 1, the strength of the implicit regularization term is proportional to the scale of the bare learning rate α = ε/n, not the effective learning rate ε.

C TRAINING LOSSES
In Figure 1(b) of Section 2.3 in the main text, we compared the test accuracies achieved when training on the original loss C(ω) at a range of learning rates ε, to the test accuracies achieved when training on the modified loss C_mod(ω) at fixed learning rate ε = 2⁻⁹ and a range of regularization coefficients λ. For completeness, in Figure 5, we provide the corresponding training accuracies, as well as the final values of the original loss C(ω). Remarkably, large learning rates and large regularization coefficients achieve similar training accuracies and similar original losses. This suggests that the implicit regularization term in the modified loss of SGD (C̃_SGD(ω)) may help explain why the training accuracies and losses often exhibit plateaus when training with large learning rates.

D ADDITIONAL RESULTS ON FASHION-MNIST
In this section we provide additional experiments on the Fashion-MNIST dataset (Xiao et al., 2017), which comprises 10 classes, 60000 training examples and 10000 examples in the test set. We consider a simple fully connected MLP which comprises 3 nonlinear layers, each with width 4096 and ReLU activations, and a final linear softmax layer. We apply a simple data pipeline which first applies per-image standardization and then flattens the input to a 784 dimensional vector. We do not apply data augmentation and we train using vanilla SGD without learning rate decay for all experiments. We perform seven training runs for each combination of hyper-parameters and show the mean performance of the best five (to ensure our results are not skewed by a single failed run). We use a batch size B = 16 unless otherwise specified, and we do not use weight decay. We note that this model is highly over-parameterized. Unlike the Wide-ResNet we considered in the main text, we consistently achieve 100% training accuracy if the learning rate is not too large.

In Figure 6(a), we train for 400 epochs, and we compare the effect of tuning the learning rate when training on the original loss, to the effect of tuning the explicit regularization strength λ (with ε = 2⁻⁹). As observed in the main text, tuning the explicit regularizer has a similar effect on the test accuracy to tuning the learning rate. Surprisingly, the optimal values of ε and λ differ by a factor of 8. However we note that the optimal learning rate is ε = 2⁻⁵, while the explicit regularizer already achieves a very similar test accuracy at λ = 2⁻⁴ (just a factor of two larger), before reaching a higher maximum test accuracy at λ = 2⁻².

In Figure 6(b), we train for 400 epochs on the original loss and compare the test accuracies achieved for a range of batch sizes at different learning rates. As observed in the main text, the test accuracy is determined by the ratio of the learning rate to the batch size. Meanwhile in Figure 6(c), we plot the test accuracy achieved after training for 1.5 million steps on the modified loss with learning rate ε = 2⁻⁹ and regularization coefficient λ. Once again, we find that the test accuracy achieved is primarily determined by the ratio of the regularization coefficient to the batch size, although smaller batch sizes also achieve slightly higher accuracies.

Finally, in Figure 7 we train using n-step SGD (see Section 4) on the original loss at a range of bare learning rates α. In Figure 7(a) we train for 400 epochs, while in Figure 7(b) we train for 6 million updates. We recall that the SDE analogy predicts that the generalization benefit of n-step SGD would be determined by the effective learning rate ε = nα. By contrast, backward error analysis predicts that the generalization benefit for small learning rates would be controlled by the bare learning rate α, but that higher order terms may be larger for larger values of n. We find that the test accuracy in both figures is governed by the bare learning rate α, not the effective learning rate ε = nα, and therefore these results are inconsistent with the predictions from the SDE analysis in prior work.
Note that Figure 7 has a surprising implication. It suggests that, for this model, while there is a largest stable bare learning rate we cannot exceed, we can repeatedly apply updates obtained on the same batch of training examples without suffering a significant degradation in test accuracy. We speculate that this may indicate that the gradients of different examples in this over-parameterized model are close to orthogonal (Sankararaman et al., 2020).
1. What is the focus of the paper regarding implicit regularization in deep learning? 2. What are the strengths of the proposed approach, particularly in extending prior works? 3. What are the weaknesses or limitations of the paper, especially in experimentation? 4. How does the reviewer assess the clarity and relevance of the paper's content?
Review
Review This paper studies an implicit regularization mechanism of finite learning rate SGD by introducing the regularization term explicitly, using the framework of backward error analysis. The authors theoretically motivate their analysis, then empirically demonstrate it on CIFAR-10 using a Wide ResNet architecture. This extends a previous analysis (Barrett and Dherin, preprint) of GD using the same framework, but limited to full batch GD. Notably, this new analysis using minibatch SGD highlights an additional regularization of the trace of the covariance of per-example gradients. In Section 2, however, I think it should be made clear that the setup is slightly different from minibatch SGD, even when trained for a single epoch, in that there is an expectation across permutations of sequences of minibatches. Can you discuss this assumption a bit more? In terms of experiments, it would be useful to include other architectures/tasks, even toy ones, in order to appreciate the generality of the empirical evaluation. Overall, I think this contributes new interesting insights which are very relevant for studying minibatch SGD in deep learning.
ICLR
Title On the Origin of Implicit Regularization in Stochastic Gradient Descent Abstract For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full batch loss function. However moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximizes test accuracy is often larger than the learning rate which minimizes training loss. To interpret this phenomenon we prove that for SGD with random shuffling, the mean SGD iterate also stays close to the path of gradient flow if the learning rate is small and finite, but on a modified loss. This modified loss is composed of the original loss function and an implicit regularizer, which penalizes the norms of the minibatch gradients. Under mild assumptions, when the batch size is small the scale of the implicit regularization term is proportional to the ratio of the learning rate to the batch size. We verify empirically that explicitly including the implicit regularizer in the loss can enhance the test accuracy when the learning rate is small. 1 INTRODUCTION In the limit of vanishing learning rates, stochastic gradient descent with minibatch gradients (SGD) follows the path of gradient flow on the full batch loss function (Yaida, 2019). However in deep networks, SGD often achieves higher test accuracies when the learning rate is moderately large (LeCun et al., 2012; Keskar et al., 2017). This generalization benefit is not explained by convergence rate bounds (Ma et al., 2018; Zhang et al., 2019), because it arises even for large compute budgets for which smaller learning rates often achieve lower training losses (Smith et al., 2020). Although many authors have studied this phenomenon (Jastrzębski et al., 2018; Smith & Le, 2018; Chaudhari & Soatto, 2018; Shallue et al., 2018; Park et al., 2019; Li et al., 2019; Lewkowycz et al., 2020), it remains poorly understood, and is an important open question in the theory of deep learning. In a recent work, Barrett & Dherin (2021) analyzed the influence of finite learning rates on the iterates of gradient descent (GD). Their approach is inspired by backward error analysis, a method for the numerical analysis of ordinary differential equation (ODE) solvers (Hairer et al., 2006). The key insight of backward error analysis is that we can describe the bias introduced when integrating an ODE with finite step sizes by introducing an ancillary modified flow. This modified flow is derived to ensure that discrete iterates of the original ODE lie on the path of the continuous solution to the modified flow. Using this technique, the authors show that if the learning rate is not too large, the discrete iterates of GD lie close to the path of gradient flow on a modified loss C̃GD(ω) = C(ω) + ( /4)||∇C(ω)||2. This modified loss is composed of the original loss C(ω) and an implicit regularizer proportional to the learning rate which penalizes the euclidean norm of the gradient. However these results only hold for full batch GD, while in practice SGD with small or moderately large batch sizes usually achieves higher test accuracies (Keskar et al., 2017; Smith et al., 2020). In this work, we devise an alternative approach to backward error analysis, which accounts for the correlations between minibatches during one epoch of training. 
Using this novel approach, we prove that for small finite learning rates, the mean SGD iterate after one epoch, averaged over all possible sequences of minibatches, lies close to the path of gradient flow on a second modified loss C̃SGD(ω), which we define in equation 1. This new modified loss is also composed of the full batch loss function and an implicit regularizer, however the structure of the implicit regularizers for GD and SGD differ, and their modified losses can have different local and global minima. Our analysis therefore helps explain both why finite learning rates can aid generalization, and why SGD can achieve higher test accuracies than GD. We assume that each training example is sampled once per epoch, in line with best practice (Bottou, 2012), and we confirm empirically that explicitly including the implicit regularization term of SGD in the training loss can enhance the test accuracy when the learning rate is small. Furthermore, we prove that if the batch size is small and the gradients are sufficiently diverse, then the expected magnitude of the implicit regularization term of SGD is proportional to the ratio of the learning rate to the batch size (Goyal et al., 2017; Smith et al., 2018). We note that many previous authors have sought to explain the generalization benefit of SGD using an analogy between SGD and stochastic differential equations (SDEs) (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Chaudhari & Soatto, 2018). However this SDE analogy assumes that each minibatch is randomly sampled from the full dataset, which implies that some examples will be sampled multiple times in one epoch. Furthermore, the most common SDE analogy holds only for vanishing learning rates (Yaida, 2019) and therefore misses the generalization benefits of finite learning rates which we identify in this work. An important exception is Li et al. (2017), who applied backward error analysis to identify a modified SDE which holds when the learning rate is finite. However this work still relies on the assumption that minibatches are sampled randomly. It also focused on the convergence rate, and did not discuss the performance of SGD on the test set. Main Result. We now introduce our main result. We define the cost function over parameters ω as C(ω) = (1/N) ∑N j=1 Cj(ω), which is the mean of the per-example costs Cj(ω), where N denotes the training set size. Gradient flow follows the ODE ω̇ = −∇C(ω), while gradient descent computes discrete updates ωi+1 = ωi − ∇C(ωi), where is the learning rate. For simplicity, we assume that the batch size B perfectly splits the training set such that N%B = 0, where % denotes the modulo operation, and for convenience we define the number of batches per epoch m = N/B. We can therefore re-write the cost function as a sum over minibatches C(ω) = (1/m) ∑m−1 k=0 Ĉk(ω), where the minibatch cost Ĉk(ω) = (1/B) ∑kB+B j=kB+1 Cj(ω). In order to guarantee that we sample each example precisely once per epoch, we define SGD by the discrete update ωi+1 = ωi− ∇Ĉi%m(ωi). Informally, our main result is as follows. After one epoch, the mean iterate of SGD with a small but finite learning rate , averaged over all possible shuffles of the batch indices, stays close to the path of gradient flow on a modified loss ω̇ = −∇C̃SGD(ω), where the modified loss C̃SGD is given by: C̃SGD(ω) = C(ω) + 4m ∑m−1 k=0 ||∇Ĉk(ω)||2. (1) We emphasize that our analysis studies the mean evolution of SGD, not the path of individual trajectories. 
The modified loss C̃SGD(ω) is composed of the original loss C(ω) and an implicit regularizer Creg(ω) = (1/4m) ∑m−1 k=0 ||∇Ĉk(ω)||2. The scale of this implicit regularization term is proportional to the learning rate , and it penalizes the mean squared norm of the gradient evaluated on a batch of B examples. To help us compare the modified losses of GD and SGD, we can expand, C̃SGD(ω) = C(ω) + 4 ||∇C(ω)||2 + 4m ∑m−1 i=0 ||∇Ĉi(ω)−∇C(ω)||2. (2) We arrive at Equation 2 from Equation 1 by noting that ∑m−1 i=0 (∇Ĉi(ω) − ∇C(ω)) = 0. In the limit B → N , we identify the modified loss of gradient descent, C̃GD = C(ω) + ( /4)||∇C(ω)||2, which penalizes “sharp” regions where the norm of the full-batch gradient (||∇C(ω)||2) is large. However, as shown by Equation 2, the modified loss of SGD penalizes both sharp regions where the full-batch gradient is large, and also “non-uniform” regions where the norms of the errors in the minibatch gradients (||∇Ĉ(ω) − ∇C(ω)||2) are large (Wu et al., 2018). Although global minima of C(ω) are global minima of C̃GD(ω), global minima of C(ω) may not be global (or even local) minima of C̃SGD(ω). Note however that C(ω) and C̃SGD(ω) do share the same global minima on over-parameterized models which can interpolate the training set (Ma et al., 2018). We verify in our experiments that the implicit regularizer can enhance the test accuracy of models trained with SGD. Paper structure. In Section 2, we derive our main result (Equation 1), and we confirm empirically that we can close the generalization gap between small and large learning rates by including the implicit regularizer explicitly in the loss function. In Section 3, we confirm Equation 1 satisfies the linear scaling rule between learning rate and batch size (Goyal et al., 2017). In Section 4, we provide additional experiments which challenge the prevailing view that the generalization benefit of small batch SGD arises from the temperature of an associated SDE (Mandt et al., 2017; Park et al., 2019). 2 A BACKWARD ERROR ANALYSIS OF STOCHASTIC GRADIENT DESCENT Backward error analysis has great potential to clarify the role of finite learning rates, and to help identify the implicit biases of different optimizers. We therefore give a detailed introduction to the core methodology in Section 2.1, before deriving our main result in Section 2.2. In Section 2.3, we confirm empirically that the implicit regularizer can enhance the test accuracy of deep networks. 2.1 AN INTRODUCTION TO BACKWARD ERROR ANALYSIS In numerical analysis, we often wish to integrate ODEs of the form ω̇ = f(ω). This system usually cannot be solved analytically, forcing us to simulate the continuous flow with discrete updates, like the Euler step ω(t+ ) ≈ ω(t) + f(ω(t)). However discrete updates will introduce approximation error when the step size is finite. In order to study the bias introduced by this approximation error, we assume the learning rate is relatively small, and introduce a modified flow ω̇ = f̃(ω), where, f̃(ω) = f(ω) + f1(ω) + 2f2(ω) + ... . (3) The modified flow of f̃(ω) is equal to the original flow of f(ω) when → 0, but it differs from the original flow if is finite. The goal of backward error analysis is to choose the correction terms fi(ω) such that the iterates obtained from discrete updates of the original flow with small finite step sizes lie on the path taken by the continuous solution to the modified flow with vanishing step sizes. 
The standard derivation of backward error analysis begins by taking a Taylor expansion in of the solution to the modified flow ω(t + ). We obtain the derivatives of ω(t + ) recursively using the modified flow equation ω̇ = f̃(ω) (see Hairer et al. (2006)), and we identify the correction terms fi(ω) by ensuring this Taylor expansion matches the discrete update (e.g., ωt+1 = ωt + f(ωt)) for all powers of . However, this approach does not clarify why these correction terms arise. To build our intuition for the origin of the corrections terms, and to clarify how we might apply this analysis to SGD, we take a different approach. First, we will identify the path taken by the continuous modified flow by considering the combined influence of an infinite number of discrete steps in the limit of vanishing learning rates, and then we will compare this continuous path to either a single step of GD or a single epoch of SGD. Imagine taking n Euler steps on the modified flow f̃(ω) with step size α, ωt+n = ωt + αf̃(ωt) + αf̃(ωt+1) + αf̃(ωt+2) + ... (4) = ωt + αf̃(ωt) + αf̃(ωt + αf̃(ωt)) + αf̃(ωt + αf̃(ωt) + αf̃(ωt + αf̃(ωt))) + ... (5) = ωt + nαf̃(ωt) + (n/2)(n− 1)α2∇f̃(ωt)f̃(ωt) +O(n3α3). (6) We arrived at Equation 6 by taking the Taylor expansion of f̃ and then counting the number of terms of type ∇f̃(ωt)f̃(ωt) using the formula for an arithmetic series. Note that we assume ∇f̃ exists. Next, to ensure ωt+n in Equation 6 coincides with the solution ω(t+ ) of the continuous modified flow ω̇ = f̃(ω) for small but finite , we let the number of steps n→∞ while setting α = /n, ω(t+ ) = ω(t) + f̃(ω(t)) + ( 2/2)∇f̃(ω(t))f̃(ω(t)) +O( 3) (7) = ω(t) + f(ω(t)) + 2 (f1(ω(t)) + (1/2)∇f(ω(t))f(ω(t))) +O( 3). (8) We have replaced f̃(ω) with its definition from Equation 3. As we will see below, Equation 8 is the key component of backward error analysis, which describes the path taken when integrating the continuous modified flow f̃(ω) with vanishing learning rates over a discrete time step of length . Notice that we have assumed that the Taylor expansion in Equation 8 converges, while the higher order terms at O( 3) will contain higher order derivatives of the original flow f(ω). Backward error analysis therefore implicitly assumes that f(ω) is an analytic function in the vicinity of the current parameters ω. We refer the reader to Hairer et al. (2006) for a detailed introduction. Gradient descent: As a simple example, we will now derive the first order correction f1(ω) of the modified flow for GD. First, we recall that the discrete updates obey ωi+1 = ωi− ∇C(ωi), and we therefore fix f(ω) = −∇C(ω). In order to ensure that the continuous modified flow coincides with this discrete update, we need all terms at O( 2) and above in Equation 8 to vanish. At order 2, this implies that f1(ω) + (1/2)∇∇C(ω)∇C(ω) = 0, which yields the first order correction, f1(ω) = −(1/2)∇∇C(ω)∇C(ω) = −(1/4)∇ ( ||∇C(ω)||2 ) . (9) We conclude that, if the learning rate is sufficiently small such that we can neglect higher order terms in Equation 3, then the discrete GD iterates lie on the path of the following ODE, ω̇ = −∇C(ω)− ( /4)∇ ( ||∇C(ω)||2 ) (10) = −∇C̃GD(ω). (11) Equation 11 corresponds to gradient flow on the modified loss, C̃GD(ω) = C(ω)+( /4)||∇C(ω)||2. 2.2 BACKWARD ERROR ANALYSIS AND STOCHASTIC GRADIENT DESCENT We now derive our main result (Equation 1). As described in the introduction, we assume N%B = 0, where N is the training set size, B is the batch size, and % denotes the modulo operation. 
The number of updates per epoch is m = N/B, and the minibatch costs are Ĉ_k(ω) = (1/B) ∑_{j=kB+1}^{kB+B} C_j(ω). SGD with constant learning rates obeys ω_{i+1} = ω_i − ϵ∇Ĉ_{i%m}(ω_i). It is standard practice to shuffle the dataset once per epoch, but we omit this step here and instead perform our analysis over a single epoch. In Equation 6 we derived the influence of n Euler steps on the flow f̃(ω) with step size α. Following a similar approach, we now derive the influence of m SGD updates with learning rate ϵ,

$$\begin{aligned} \omega_m &= \omega_0 - \epsilon\nabla \hat{C}_0(\omega_0) - \epsilon\nabla \hat{C}_1(\omega_1) - \epsilon\nabla \hat{C}_2(\omega_2) - \dots &(12)\\ &= \omega_0 - \epsilon\sum_{j=0}^{m-1}\nabla \hat{C}_j(\omega_0) + \epsilon^2\sum_{j=0}^{m-1}\sum_{k<j}\nabla\nabla \hat{C}_j(\omega_0)\nabla \hat{C}_k(\omega_0) + O(m^3\epsilon^3) &(13)\\ &= \omega_0 - m\epsilon\nabla C(\omega_0) + \epsilon^2\xi(\omega_0) + O(m^3\epsilon^3). &(14) \end{aligned}$$

The error in Equation 14 is O(m³ϵ³) since there are O(m³) terms in the Taylor expansion proportional to ϵ³. Notice that a single epoch of SGD is equivalent to a single GD update with learning rate mϵ up to first order in ϵ. Remarkably, this implies that when the learning rate is sufficiently small, there is no noise in the iterates of SGD after completing one epoch. For clarity, this observation arises because we require that each training example is sampled once per epoch. However the second order correction ξ(ω) = ∑_{j=0}^{m−1} ∑_{k<j} ∇∇Ĉ_j(ω)∇Ĉ_k(ω) does not appear in the GD update, and it is a random variable which depends on the order of the mini-batches. In order to identify the bias introduced by SGD, we will evaluate the mean correction E(ξ), where we take the expectation across all possible sequences of the (non-overlapping) mini-batches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. Note that we hold the composition of the batches fixed, averaging only over their order. We conclude that,

$$\begin{aligned} \mathbb{E}(\xi(\omega)) &= \frac{1}{2}\Big(\sum_{j=0}^{m-1}\sum_{k\neq j}\nabla\nabla \hat{C}_j(\omega)\nabla \hat{C}_k(\omega)\Big) &(15)\\ &= \frac{m^2}{2}\nabla\nabla C(\omega)\nabla C(\omega) - \frac{1}{2}\sum_{j=0}^{m-1}\nabla\nabla \hat{C}_j(\omega)\nabla \hat{C}_j(\omega) &(16)\\ &= \frac{m^2}{4}\nabla\Big(\|\nabla C(\omega)\|^2 - \frac{1}{m^2}\sum_{j=0}^{m-1}\|\nabla \hat{C}_j(\omega)\|^2\Big). &(17) \end{aligned}$$

For clarity, in Equation 15 we exploit the fact that every sequence of batches has a corresponding sequence in reverse order. Combining Equations 14 and 17, we conclude that after one epoch,

$$\mathbb{E}(\omega_m) = \omega_0 - m\epsilon\nabla C(\omega_0) + \frac{m^2\epsilon^2}{4}\nabla\Big(\|\nabla C(\omega_0)\|^2 - \frac{1}{m^2}\sum_{j=0}^{m-1}\|\nabla \hat{C}_j(\omega_0)\|^2\Big) + O(m^3\epsilon^3). \tag{18}$$

Having identified the expected value of the SGD iterate after one epoch E(ω_m) (for small but finite learning rates), we can now use this expression to identify the corresponding modified flow. First, we set f(ω) = −∇C(ω), t = 0, ω(0) = ω_0, and let ϵ → mϵ in Equations 3 and 8 to obtain,

$$\omega(m\epsilon) = \omega_0 - m\epsilon\nabla C(\omega_0) + m^2\epsilon^2\big(f_1(\omega_0) + (1/4)\nabla\|\nabla C(\omega_0)\|^2\big) + O(m^3\epsilon^3). \tag{19}$$

Next, we equate Equations 18 and 19 by setting ω(mϵ) = E(ω_m). We immediately identify the first order correction to the modified flow, f₁(ω) = −(1/(4m²))∇∑_{j=0}^{m−1} ||∇Ĉ_j(ω)||². We therefore conclude that, after one epoch, the expected SGD iterate E(ω_m) = ω(mϵ) + O(m³ϵ³), where ω(0) = ω_0 and ω̇ = −∇C(ω) + mϵf₁(ω). Simplifying, we conclude ω̇ = −∇C̃_SGD(ω), where,

$$\tilde{C}_{SGD}(\omega) = C(\omega) + \frac{\epsilon}{4m}\sum_{k=0}^{m-1}\|\nabla \hat{C}_k(\omega)\|^2. \tag{20}$$

Equation 20 is identical to Equation 1, and this completes the proof of our main result. We emphasize that C̃_SGD assumes a fixed set of minibatches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. We will evaluate the expected modified loss after shuffling the dataset and sampling a new set of minibatches in Section 3.

REMARKS ON THE ANALYSIS

The phrase “for small finite learning rates” has a precise meaning in our analysis. It implies ϵ is large enough that terms of O(m²ϵ²) may be significant, but small enough that terms of O(m³ϵ³) are negligible. Our analysis is unusual, because we consider the mean evolution of the SGD iterates but ignore the variance of individual training runs.
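Equation 18 can also be verified directly by simulation. The sketch below is our illustration, using arbitrary 1-D quadratic minibatch losses Ĉ_j(ω) = (h_j/2)(ω − c_j)² (not losses from the paper): it averages one epoch of SGD over many random batch orderings and compares the result to the prediction of Equation 18.

```python
# Monte-Carlo check of Equation 18 on 1-D quadratic minibatch losses.
import numpy as np

rng = np.random.default_rng(0)
m, eps, w0, trials = 8, 0.01, 2.0, 100000
h = rng.uniform(0.5, 2.0, m)            # curvatures h_j of the minibatch losses
c = rng.normal(0.0, 1.0, m)             # minima c_j of the minibatch losses

mean_wm = 0.0                           # empirical mean of w_m over orderings
for _ in range(trials):
    w = w0
    for j in rng.permutation(m):        # one epoch, random batch order
        w -= eps * h[j] * (w - c[j])
    mean_wm += w / trials

dC = np.mean(h * (w0 - c))              # full-batch gradient C'(w0)
ddC = np.mean(h)                        # full-batch curvature C''(w0)
pred = (w0 - m * eps * dC
        + (m * eps)**2 / 4 * 2 * dC * ddC             # from the ||C'(w)||^2 term
        - eps**2 / 4 * np.sum(2 * h**2 * (w0 - c)))   # from the sum over minibatches
print(mean_wm, pred)                    # agree up to O(m^3 eps^3) and sampling noise
```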
Previous analyses of SGD have usually focused on the variance of the iterates in the limit of vanishing learning rates (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018). However these works assume that each minibatch is randomly sampled from the full dataset. Under this assumption, the variance in the iterates arises at O(ϵ), while the bias arises at O(ϵ²). By contrast, in our analysis each example is sampled once per epoch, and both the variance and the bias arise at O(ϵ²) (for simplicity, we assume m = N/B is constant). We therefore anticipate that the variance will play a less important role than is commonly supposed. Furthermore, we can construct a specific sequence of minibatches for which the variance at O(m²ϵ²) vanishes, such that the evolution of a specific training run will coincide exactly with gradient flow on the modified loss of Equation 1 for small finite learning rates. To achieve this, we perform two training epochs with the sequence of minibatches (Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}, Ĉ_{m−1}, ..., Ĉ_1, Ĉ_0) (i.e., the second epoch iterates through the same set of minibatches as the first but in the opposite order). If one inspects Equations 13 to 15, one will see that reversing the second epoch has the same effect as taking the mean across all possible sequences of minibatches (it replaces the ∑_{k<j} by a ∑_{k≠j}).

The key limitation of our analysis is that we assume mϵ = Nϵ/B is small, in order to neglect terms at O(m³ϵ³). This is an extreme approximation, since we typically expect that N/B is large. Therefore, while our work identifies the first order correction to the bias arising from finite learning rates, higher order terms in the modified flow may also play an important role at practical learning rates. We note however that previous theoretical analyses have made even more extreme assumptions. For instance, most prior work studying SGD in the small learning rate limit neglects all terms at O(ϵ²) (for an exception, see Li et al. (2017)). Furthermore, as we show in Section 3, the learning rate often scales proportional to the batch size, such that ϵ/B is constant (Goyal et al., 2017; McCandlish et al., 2018). Therefore the accuracy of our approximations does not necessarily degrade as the batch size falls, but higher order terms may play an increasingly important role as the dataset size increases. Our experimental results in Section 2.3 and Section 4 suggest that our analysis can explain most of the generalization benefit of finite learning rate SGD for Wide-ResNets trained on CIFAR-10.

We note that to achieve the highest test accuracies, practitioners usually decay the learning rate during training. Under this scheme, the modified loss would change as training proceeds. However it is widely thought that the generalization benefit of SGD arises from the use of large learning rates early in training (Smith et al., 2018; Li et al., 2019; Jastrzebski et al., 2020; Lewkowycz et al., 2020), and popular schedules hold the learning rate constant or approximately constant for several epochs. Finally, we emphasize that our primary goal in this work is to identify the influence of finite learning rates on training. The implicit regularization term may not be beneficial in all models and datasets.
2.3 AN EMPIRICAL EVALUATION OF THE MODIFIED LOSS

In order to confirm that the modified loss C̃_SGD(ω) can help explain why large learning rates enhance generalization, we now verify empirically that the implicit regularizer inherent in constant learning rate SGD, C_reg(ω) = (1/4m) ∑_{k=0}^{m−1} ||∇Ĉ_k(ω)||², can enhance the test accuracy of deep networks. To this end, we train the same model with two different (explicit) loss functions. The first loss function C(ω) represents the original loss, while the second, C_mod(ω) = C(ω) + λC_reg(ω), is obtained from the modified loss C̃_SGD(ω) by replacing the learning rate ϵ with an explicit regularization coefficient λ. Notice that C_mod(ω) = (1/m) ∑_{k=0}^{m−1} (Ĉ_k(ω) + (λ/4)||∇Ĉ_k(ω)||²), which ensures that it is straightforward to minimize the modified loss C_mod(ω) with minibatch gradients.

Since the implicit regularization term C_reg(ω) is expensive to differentiate (typically 5-10x overhead), we consider a 10-1 Wide-ResNet model (Zagoruyko & Komodakis, 2016) for classification on CIFAR-10. To ensure close agreement with our theoretical analysis, we train without batch normalization using SkipInit initialization (De & Smith, 2020). We train for 6400 epochs at batch size 32 without learning rate decay using SGD without Momentum. We use standard data augmentation including crops and random flips, and we use weight decay with L2 coefficient 5 × 10⁻⁴. We emphasize that, since we train using a finite (though very large) compute budget, the final networks may not have fully converged. This is particularly relevant when training with small learning rates. Note that we provide additional experiments on Fashion-MNIST (Xiao et al., 2017) in appendix D.

In Figure 1(a), we compare two training runs, one minimizing the modified loss C_mod(ω) with λ = 2⁻⁶, and one minimizing the original loss C(ω). For both runs we use a small constant learning rate ϵ = 2⁻⁹. As expected, the regularized training run achieves significantly higher test accuracies late in training. This confirms that the implicit regularizer, which arises as a consequence of using SGD with finite learning rates, can also enhance the test accuracy if it is included explicitly in the loss. In Figure 1(b), we provide the test accuracy for a range of regularization strengths λ (orange line). We provide the mean test accuracy of the best 5 out of 7 training runs at each regularization strength, and for each run we take the highest test accuracy achieved during the entire training run. We use a fixed learning rate ϵ = 2⁻⁹ for all λ. For comparison, we also provide the test accuracy achieved with the original loss C(ω) for a range of learning rates ϵ (blue line). In both cases, the test accuracy rises initially, before falling for large regularization strengths or large learning rates. Furthermore, in this network the optimal regularization strength on the modified loss, λ_opt = 2⁻⁶, is equal to the optimal learning rate on the original loss, ϵ_opt = 2⁻⁶. Meanwhile when λ → 0 the performance of the modified loss approaches the performance of the original loss at ϵ = 2⁻⁹ (dotted green line). We provide the corresponding training accuracies in appendix C. Finally, in Figure 1(c), we provide the values of the implicit regularizer C_reg(ω) at the end of training. As predicted by our analysis, training with larger learning rates reduces the value of the implicit regularization term.
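In practice, minimizing C_mod(ω) requires differentiating through the squared norm of the minibatch gradient. Below is a minimal PyTorch sketch of one such training step, written by us to illustrate the standard double-backprop pattern; the model, data, and names are placeholders rather than the paper's actual training code.

```python
# One SGD step on C_mod = C_k(w) + (lam/4)*||grad C_k(w)||^2 via double backprop.
import torch

def regularized_step(model, loss_fn, optimizer, x, y, lam):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    # create_graph=True keeps the graph so the gradient-norm penalty is differentiable
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    penalty = sum(g.pow(2).sum() for g in grads)
    (loss + (lam / 4) * penalty).backward()
    optimizer.step()
    return loss.item()

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=2**-9)
x, y = torch.randn(32, 10), torch.randn(32, 1)
regularized_step(model, torch.nn.MSELoss(), optimizer, x, y, lam=2**-6)
```

The extra backward pass through the gradient norm is what produces the 5-10x overhead mentioned above.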
In Figure 2, we take the same 10-1 Wide-ResNet model and provide the mean training and test accuracies achieved at a range of learning rates for two regularization coefficients (following the experimental protocol above). In Figure 2(a), we train on the original loss C(ω) (λ = 0), while in Figure 2(b), we train on the modified loss C_mod(ω) with regularization coefficient λ = 2⁻⁶. From Figure 2(a), when λ = 0 there is a clear generalization benefit to large learning rates, as the learning rate that maximizes test accuracy (2⁻⁶) is 16 times larger than the learning rate that maximizes training accuracy (2⁻¹⁰). However in Figure 2(b) with λ = 2⁻⁶, the learning rates that maximize the test and training accuracies are equal (2⁻⁸). This suggests that when we include the implicit regularizer explicitly in the loss, the generalization benefit of large learning rates is diminished.

3 IMPLICIT REGULARIZATION AND THE BATCH SIZE

In Section 2.2, we derived the modified loss by considering the expected SGD iterate after one epoch. We held the composition of the batches fixed, averaging only over the order in which the batches are seen. This choice helped make clear how to explicitly include the implicit regularizer in the loss function in Section 2.3. However, in order to clarify how the implicit regularizer depends on the batch size, we now evaluate the expected modified loss after randomly shuffling the dataset and sampling a new set of m non-overlapping minibatches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. Since the minibatch losses Ĉ_i(ω) are all identically distributed by symmetry, we recall Equation 2 and conclude that,

$$\mathbb{E}(\tilde{C}_{SGD}(\omega)) = C(\omega) + (\epsilon/4)\|\nabla C(\omega)\|^2 + (\epsilon/4)\,\mathbb{E}(\|\nabla \hat{C}(\omega) - \nabla C(\omega)\|^2), \tag{21}$$

where Ĉ(ω) denotes a batch of B non-overlapping examples, drawn randomly from the full dataset. To simplify Equation 21, we prove in appendix A that E(||∇Ĉ(ω) − ∇C(ω)||²) = ((N−B)/(N−1)) · Γ(ω)/B, where Γ(ω) = (1/N) ∑_{i=1}^{N} ||∇C_i(ω) − ∇C(ω)||². We therefore obtain,

$$\mathbb{E}(\tilde{C}_{SGD}(\omega)) = C(\omega) + \frac{\epsilon}{4}\|\nabla C(\omega)\|^2 + \frac{(N-B)}{(N-1)}\frac{\epsilon}{4B}\Gamma(\omega). \tag{22}$$

Note that Γ(ω) is the trace of the empirical covariance matrix of the per-example gradients. We have not assumed that the minibatch gradients are Gaussian distributed, however if the per-example gradients are heavy tailed (Simsekli et al., 2019) then Γ(ω) may diverge, in which case the expected value of the modified loss is ill-defined. Equation 22 shows that the implicit regularization term of SGD has two contributions. The first term is proportional to the learning rate ϵ, and it penalizes the norm of the full batch gradient. The second term is proportional to the ratio of the learning rate to the batch size ϵ/B (assuming N ≫ B), and it penalizes the trace of the covariance matrix. To interpret this result, we assume that the minibatch gradients are diverse, such that (Γ(ω)/B) ≫ ||∇C(ω)||². This assumption guarantees that increasing the batch size reduces the error in the gradient estimate. In this limit, the second term above will dominate, and therefore different batch sizes will experience the same implicit regularization so long as the ratio of the learning rate to the batch size is constant. To verify this claim, in Figure 3 we plot the mean test accuracies achieved on a 10-1 Wide-ResNet, trained on CIFAR-10 with a constant learning rate, for a range of learning rates ϵ, regularization coefficients λ and batch sizes B.
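The finite-population identity quoted from appendix A is straightforward to check by simulation. The sketch below is our illustration with synthetic per-example gradients; it compares the empirical expectation against ((N−B)/(N−1))Γ(ω)/B.

```python
# Numerical check of E||grad C_hat - grad C||^2 = (N-B)/(N-1) * Gamma / B
# for batches of B examples drawn without replacement (synthetic gradients).
import numpy as np

rng = np.random.default_rng(0)
N, B, d = 1000, 25, 5
g = rng.normal(size=(N, d))              # per-example gradients grad C_i(w)
g_mean = g.mean(axis=0)                  # full-batch gradient grad C(w)
Gamma = np.mean(np.sum((g - g_mean)**2, axis=1))  # trace of the covariance

emp = np.mean([
    np.sum((g[rng.choice(N, B, replace=False)].mean(axis=0) - g_mean)**2)
    for _ in range(100000)
])
print(emp, (N - B) / (N - 1) * Gamma / B)  # the two values should agree closely
```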
As expected, in Figure 3(a), training on the original loss C(ω) for 6400 epochs, we see that different batch sizes achieve similar test accuracies so long as the ratio ϵ/B is constant and the batch size is not too large. We note that this linear scaling rule is well known and has been observed in prior work (Goyal et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Zhang et al., 2019). To confirm that this behaviour is consistent with the modified loss, in Figure 3(b) we fix the learning rate ϵ = 2⁻⁹ and train on C_mod(ω) at a range of regularization strengths λ for 10 million steps. As expected, different batch sizes achieve similar test accuracy so long as the ratio λ/B is constant. We note that we expect this phenomenon to break down for very large batch sizes, however we were not able to run experiments in this limit due to computational constraints. For very large batch sizes, the first implicit regularization term in Equation 22 dominates, the linear scaling rule breaks down, and the bias of SGD is similar to the bias of GD identified by Barrett & Dherin (2021). We expect the optimal learning rate to be independent of the batch size in this limit, as observed by McCandlish et al. (2018) and Smith et al. (2020). Convergence bounds also predict a transition between a small batch regime where the optimal learning rate ∝ B and a large batch regime where the optimal learning rate is constant (Ma et al., 2018; Zhang et al., 2019). However these analyses identify the learning rate which minimizes the training loss. Our analysis complements these claims by explaining why similar conclusions hold when maximizing test accuracy.

4 FINITE LEARNING RATES AND STOCHASTIC DIFFERENTIAL EQUATIONS

In the previous two sections, we argued that the use of finite learning rates and small batch sizes introduces implicit regularization, which can enhance the test accuracy of deep networks. We analyzed this effect using backward error analysis (Hairer et al., 2006; Li et al., 2017; Barrett & Dherin, 2021), but many previous papers have argued that this effect can be understood by interpreting small batch SGD as the discretization of an SDE (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Park et al., 2019). In this section, we compare this popular perspective with our main results from Sections 2 and 3. To briefly recap, in the SDE analogy a single gradient update is given by ω_{i+1} = ω_i − ϵ∇Ĉ(ω_i), where Ĉ denotes a random batch of B non-overlapping training examples. Notice that in the SDE analogy, since examples are drawn randomly from the full dataset, there is no guarantee that each training example is sampled once per epoch. Assuming N ≫ B ≫ 1 and that the gradients are not heavy tailed, the central limit theorem is applied to model the noise in an update by a Gaussian noise source ξ whose covariance is inversely proportional to the batch size:

$$\omega_{i+1} = \omega_i - \epsilon\big(\nabla C(\omega_i) + \xi_i/\sqrt{B}\big) = \omega_i - \epsilon\nabla C(\omega_i) + \sqrt{\epsilon T}\,\xi_i. \tag{23}$$

This assumes E(ξ_i) = 0 and E(ξ_i ξ_jᵀ) = F(ω)δ_{ij}, where F(ω) is the covariance matrix of the per-example gradients, and we define the “temperature” T = ϵ/B. The SDE analogy notes that Equation 23 is identical to the Euler discretization of an SDE with step size ϵ and temperature T (Gardiner et al., 1985). Therefore one might expect the SGD iterates to remain close to this underlying SDE in the limit of small learning rates (ϵ → 0).
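For concreteness, the sketch below simulates the Gaussian-noise model of Equation 23 on a toy 1-D problem with per-example losses C_i(ω) = (1/2)(ω − c_i)²; this is our illustration of the SDE analogy, not an experiment from the paper.

```python
# Euler-Maruyama simulation of the SDE analogy in Equation 23 (1-D toy).
import numpy as np

rng = np.random.default_rng(1)
N, B, eps = 4096, 32, 0.05
c = rng.normal(size=N)                   # per-example minima; C_i = 0.5*(w - c_i)^2
T = eps / B                              # the "temperature" T = eps / B
w = 1.0

for _ in range(1000):
    grad_full = w - c.mean()             # full-batch gradient
    F = np.var(w - c)                    # per-example gradient covariance F(w)
    w = w - eps * grad_full + np.sqrt(eps * T * F) * rng.normal()
print(w)                                 # fluctuates around the minimum c.mean()
```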
In this limit, the temperature defines the influence of mini-batching on the dynamics, and it is therefore often assumed that the temperature also governs the generalization benefit of SGD (Smith & Le, 2018; Jastrzębski et al., 2018; Park et al., 2019). However this conclusion from the SDE analogy is inconsistent with our analysis in Section 2. To see this, note that in Section 2 we assumed that each training example is sampled once per epoch, as recommended by practitioners (Bottou, 2012), and showed that under this assumption there is no noise in the dynamics of SGD up to first order in ϵ after one epoch of training. The SDE analogy therefore relies on the assumption that minibatches are sampled randomly from the full dataset. Furthermore, SGD only converges to the underlying SDE when the learning rate ϵ → 0, but in this limit the temperature T → 0 and SGD converges to gradient flow (Yaida, 2019). We must use a finite learning rate to preserve a finite temperature, but at any finite learning rate the distributions of the SGD iterates and the underlying SDE may differ.

We now provide intriguing empirical evidence to support our contention that the generalization benefit of SGD arises from finite learning rates, not the temperature of an associated stochastic process. First, we introduce a modified SGD update rule:

n-step SGD: Apply n gradient descent updates sequentially on the same minibatch with bare learning rate α, effective learning rate ϵ = nα and batch size B. Sample the next minibatch and repeat.

To analyze n-step SGD, we consider the combined influence of n updates on the same minibatch (see also the sketch after this derivation):

$$\begin{aligned} \omega_{i+1} &= \omega_i - \alpha\nabla \hat{C}(\omega_i) - \alpha\nabla \hat{C}(\omega_i - \alpha\nabla \hat{C}(\omega_i)) + \dots &(24)\\ &= \omega_i - n\alpha\nabla \hat{C}(\omega_i) + O(n^2\alpha^2) &(25)\\ &= \omega_i - \epsilon\nabla C(\omega_i) + \sqrt{\epsilon T}\,\xi_i + O(\epsilon^2). &(26) \end{aligned}$$

Equations 23 and 26 are identical up to first order in ϵ but they differ at O(ϵ²) and above. Therefore, if minibatches are randomly sampled from the full dataset, then the dynamics of standard SGD and n-step SGD should remain close to the same underlying SDE in the limit ϵ → 0, but their dynamics will differ when the learning rate is finite. We conclude that if the dynamics of SGD is close to the continuum limit of the associated SDE, then standard SGD and n-step SGD ought to achieve similar test accuracies after the same number of training epochs. However if, as we argued in Section 2, the generalization benefit of SGD arises from finite learning rate corrections at O(ϵ²) and above, then we should expect the performance of standard SGD and n-step SGD to differ.

For completeness, we provide a backward error analysis of n-step SGD in appendix B. In line with Section 2 (and best practice), we assume each training example is sampled once per epoch. We find that after one epoch, the expected n-step SGD iterate E(ω_m) = ω(mϵ) + O(m³ϵ³), where ω(0) = ω_0, ω̇ = −∇C̃_nSGD(ω) and C̃_nSGD(ω) = C(ω) + (ϵ/4mn) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω)||². The scale of the implicit regularizer is proportional to α = ϵ/n, which implies that the implicit regularization is suppressed as n increases if we hold ϵ constant. As expected, we recover Equation 1 when n = 1.

In Figure 4(a), we plot the performance of n-step SGD at a range of bare learning rates α, when training a 16-4 Wide-ResNet on CIFAR-10 for 400 epochs using SkipInit (De & Smith, 2020) at batch size 32. Each example is sampled once per epoch.
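A minimal implementation of the n-step SGD update rule defined above is sketched below (our code, with placeholder model and data; not the paper's training script). Setting n = 1 recovers standard SGD.

```python
# One epoch of n-step SGD: n updates per minibatch at bare learning rate eps/n.
import torch

def n_step_sgd_epoch(model, loss_fn, batches, eps, n):
    alpha = eps / n                          # bare learning rate
    for x, y in batches:
        for _ in range(n):                   # repeated updates on the same batch
            model.zero_grad()
            loss_fn(model(x), y).backward()
            with torch.no_grad():
                for p in model.parameters():
                    p -= alpha * p.grad

torch.manual_seed(0)
model = torch.nn.Linear(10, 1)
batches = [(torch.randn(32, 10), torch.randn(32, 1)) for _ in range(4)]
n_step_sgd_epoch(model, torch.nn.MSELoss(), batches, eps=2**-6, n=4)
```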
We introduce a learning rate decay schedule, whereby we hold the learning rate constant for the first half of training, before decaying the learning rate by a factor of 2 every remaining tenth of training, and we provide the mean test accuracy of the best 5 out of 7 training runs for each value of α. The optimal test accuracy drops from 93.5% when n = 1 (standard SGD) to 88.8% when n = 16. This occurs even though all values of n perform the same number of training epochs, which means that 16-step SGD actually performed 16 times more gradient updates. These results suggest that, at least for this model and dataset, the generalization benefit of SGD is not controlled by the temperature of the associated SDE, but instead arises from the implicit regularization associated with finite learning rates. When we increase n we reduce the largest stable bare learning rate α, and this suppresses the implicit regularization benefit, which reduces the test accuracy. We also verify in Figure 4(b) that similar conclusions arise if we hold the number of parameter updates fixed (such that the number of training epochs is inversely proportional to n). Smaller values of n are stable at larger bare learning rates and achieve higher test accuracies. Finally we confirm in Figure 4(c) that the test accuracy degrades as n increases even if one tunes both the learning rate and the epoch budget independently for each value of n, thus demonstrating that n-step SGD consistently achieves lower test accuracies as n increases. Note that we provide additional experiments on Fashion-MNIST (Xiao et al., 2017) in appendix D.

5 DISCUSSION

Many authors have observed that large learning rates (Li et al., 2019; Lewkowycz et al., 2020), and small batch sizes (Keskar et al., 2017; Smith et al., 2020), can enhance generalization. Most theoretical work has sought to explain this by observing that increasing the learning rate, or reducing the batch size, increases the variance of the SGD iterates (Smith & Le, 2018; Jastrzębski et al., 2018; Chaudhari & Soatto, 2018). We take a different approach, and note that when the learning rate is finite, the SGD iterates are also biased (Roberts, 2018). Backward error analysis (Hairer et al., 2006; Li et al., 2017; Barrett & Dherin, 2021) provides a powerful tool that computes how this bias accumulates over multiple parameter updates. Although this work focused on GD and SGD, we anticipate that backward error analysis could also be used to clarify the role of finite learning rates in adaptive optimizers like Adam (Kingma & Ba, 2015) or Natural Gradient Descent (Amari, 1998). We note however that backward error analysis assumes that the learning rate is small (though finite). It therefore does not capture the chaotic or oscillatory dynamics which arise when the learning rate is close to instability. At these very large learning rates the modified loss, which is defined as a Taylor series in powers of the learning rate, does not converge. Lewkowycz et al. (2020) recently argued that the test accuracies of wide networks trained with full batch gradient descent on quadratic losses are maximized for large learning rates close to divergence. In this “catapult” regime, the GD iterates oscillate along high curvature directions and the loss may increase early in training. It remains an open question to establish whether backward error analysis fully describes the generalization benefit of small batch SGD, or if these chaotic or oscillatory effects also play a role in some networks.
ACKNOWLEDGMENTS

We would like to thank Jascha Sohl-Dickstein, Razvan Pascanu, Alex Botev, Yee Whye Teh and the anonymous reviewers for helpful discussions and feedback on earlier versions of this manuscript.

A THE EXPECTED NORM OF A MINIBATCH GRADIENT

To keep the notation clean, we define X_i = (∇C_i(ω) − ∇C(ω)). We also recall for clarity that the expectation value E(...) is taken over all possible random shuffles of the indices i. Therefore,

$$\begin{aligned} \mathbb{E}(\|\nabla \hat{C}(\omega) - \nabla C(\omega)\|^2) &= \frac{1}{B^2}\,\mathbb{E}\Big(\sum_{i=1}^{B}\sum_{j=1}^{B} X_i \cdot X_j\Big) &(27)\\ &= \frac{B}{B^2}\,\mathbb{E}(X_i \cdot X_i) + \frac{B(B-1)}{B^2}\,\mathbb{E}(X_i \cdot X_{j\neq i}) &(28)\\ &= \frac{1}{NB}\sum_{i=1}^{N} X_i \cdot X_i + \frac{(B-1)}{B}\frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{j\neq i} X_i \cdot X_j &(29)\\ &= \frac{1}{NB}\sum_{i=1}^{N} X_i \cdot X_i + \frac{(B-1)}{BN(N-1)}\sum_{i=1}^{N}\sum_{j=1}^{N} X_i \cdot X_j (1 - \delta_{ij}). \end{aligned}$$

Note that we obtain Equation 28 by counting the number of diagonal and off-diagonal terms in the sum in Equation 27. Next, we recall that ∑_{i=1}^{N} X_i = ∑_{i=1}^{N} (∇C_i(ω) − ∇C(ω)) = 0. Therefore,

$$\begin{aligned} \mathbb{E}(\|\nabla \hat{C}(\omega) - \nabla C(\omega)\|^2) &= \frac{1}{NB}\sum_{i=1}^{N} X_i \cdot X_i - \frac{(B-1)}{BN(N-1)}\sum_{i=1}^{N} X_i \cdot X_i &(30)\\ &= \frac{1}{NB}\Big(1 - \frac{(B-1)}{(N-1)}\Big)\sum_{i=1}^{N} X_i \cdot X_i &(31)\\ &= \frac{(N-B)}{(N-1)}\frac{\Gamma(\omega)}{B}, &(32) \end{aligned}$$

where Γ(ω) = (1/N) ∑_{i=1}^{N} X_i · X_i = (1/N) ∑_{i=1}^{N} ||∇C_i(ω) − ∇C(ω)||². We can immediately identify Γ(ω) as the trace of the empirical covariance matrix of the per-example gradients.

B A BACKWARD ERROR ANALYSIS FOR N-STEP SGD

Under n-step SGD, we apply n gradient descent updates on the same minibatch with bare learning rate α and batch size B. After n updates, we sample the next minibatch and repeat. For convenience, we define the effective learning rate ϵ = nα. After one minibatch (n parameter updates),

$$\begin{aligned} \omega_{i+1} &= \omega_i - \alpha\nabla \hat{C}_i(\omega_i) - \alpha\nabla \hat{C}_i(\omega_i - \alpha\nabla \hat{C}_i(\omega_i)) + \dots &(33)\\ &= \omega_i - n\alpha\nabla \hat{C}_i(\omega_i) + (n/2)(n-1)\alpha^2\nabla\nabla \hat{C}_i(\omega_i)\nabla \hat{C}_i(\omega_i) + O(n^3\alpha^3) &(34)\\ &= \omega_i - \epsilon\nabla \hat{C}_i(\omega_i) + (1/4)(1 - 1/n)\epsilon^2\nabla\big(\|\nabla \hat{C}_i(\omega_i)\|^2\big) + O(\epsilon^3). &(35) \end{aligned}$$

After one epoch (including terms up to second order in ϵ),

$$\begin{aligned} \omega_m = \omega_0 &- \epsilon\nabla \hat{C}_0(\omega_0) + (1/4)(1 - 1/n)\epsilon^2\nabla\big(\|\nabla \hat{C}_0(\omega_0)\|^2\big)\\ &- \epsilon\nabla \hat{C}_1(\omega_1) + (1/4)(1 - 1/n)\epsilon^2\nabla\big(\|\nabla \hat{C}_1(\omega_1)\|^2\big) - \dots\\ &- \epsilon\nabla \hat{C}_{m-1}(\omega_{m-1}) + (1/4)(1 - 1/n)\epsilon^2\nabla\big(\|\nabla \hat{C}_{m-1}(\omega_{m-1})\|^2\big) + O(\epsilon^3). &(36) \end{aligned}$$

To simplify this expression, we note that ω_{i+1} = ω_i − ϵ∇Ĉ_i(ω_i) + O(ϵ²). We can therefore re-use our earlier analysis from Section 2.2 of the main text (see Equation 13 for comparison) to obtain,

$$\omega_m = \omega_0 - m\epsilon\nabla C(\omega_0) + \epsilon^2\sum_{j=0}^{m-1}\sum_{k<j}\nabla\nabla \hat{C}_j(\omega_0)\nabla \hat{C}_k(\omega_0) + (1/4)(1 - 1/n)\epsilon^2\sum_{i=0}^{m-1}\nabla\big(\|\nabla \hat{C}_i(\omega_0)\|^2\big) + O(\epsilon^3). \tag{37}$$

Taking the expectation over all possible batch orderings (see Equations 15 to 18), we obtain,

$$\mathbb{E}(\omega_m) = \omega_0 - m\epsilon\nabla C(\omega_0) + \frac{m^2\epsilon^2}{4}\nabla\Big(\|\nabla C(\omega_0)\|^2 - \frac{1}{m^2 n}\sum_{i=0}^{m-1}\|\nabla \hat{C}_i(\omega_0)\|^2\Big) + O(m^3\epsilon^3). \tag{38}$$

Fixing f(ω) = −∇C(ω) and equating Equation 38 with the continuous modified flow in Equation 19 by setting E(ω_m) = ω(mϵ), we identify the modified flow ω̇ = −∇C̃_nSGD + O(m²ϵ²), where,

$$\tilde{C}_{nSGD}(\omega) = C(\omega) + \frac{\epsilon}{4mn}\sum_{i=0}^{m-1}\|\nabla \hat{C}_i(\omega)\|^2. \tag{39}$$

Comparing Equation 39 to Equation 1, we note that the modified losses of SGD and n-step SGD coincide when n = 1. However for n-step SGD when n > 1, the strength of the implicit regularization term is proportional to the scale of the bare learning rate α = ϵ/n, not the effective learning rate ϵ.

C TRAINING LOSSES

In Figure 1(b) of Section 2.3 in the main text, we compared the test accuracies achieved when training on the original loss C(ω) at a range of learning rates ϵ, to the test accuracies achieved when training on the modified loss C_mod(ω) at fixed learning rate ϵ = 2⁻⁹ and a range of regularization coefficients λ. For completeness, in Figure 5, we provide the corresponding training accuracies, as well as the final values of the original loss C(ω).
Remarkably, large learning rates and large regularization coefficients achieve similar training accuracies and similar original losses. This suggests that the implicit regularization term in the modified loss of SGD (C̃_SGD(ω)) may help explain why the training accuracies and losses often exhibit plateaus when training with large learning rates.

D ADDITIONAL RESULTS ON FASHION-MNIST

In this section we provide additional experiments on the Fashion-MNIST dataset (Xiao et al., 2017), which comprises 10 classes, 60000 training examples and 10000 examples in the test set. We consider a simple fully connected MLP which comprises 3 nonlinear layers, each with width 4096 and ReLU activations, and a final linear softmax layer. We apply a simple data pipeline which first applies per-image standardization and then flattens the input to a 784 dimensional vector. We do not apply data augmentation and we train using vanilla SGD without learning rate decay for all experiments. We perform seven training runs for each combination of hyper-parameters and show the mean performance of the best five (to ensure our results are not skewed by a single failed run). We use a batch size B = 16 unless otherwise specified, and we do not use weight decay. We note that this model is highly over-parameterized. Unlike the Wide-ResNet we considered in the main text, we consistently achieve 100% training accuracy if the learning rate is not too large.

In Figure 6(a), we train for 400 epochs, and we compare the effect of tuning the learning rate ϵ when training on the original loss, to the effect of tuning the explicit regularization strength λ (with ϵ = 2⁻⁹). As observed in the main text, tuning the explicit regularizer has a similar effect on the test accuracy to tuning the learning rate. Surprisingly, the optimal values of ϵ and λ differ by a factor of 8. However we note that the optimal learning rate is ϵ = 2⁻⁵, while the explicit regularizer already achieves a very similar test accuracy at λ = 2⁻⁴ (just a factor of two larger), before reaching a higher maximum test accuracy at λ = 2⁻². In Figure 6(b), we train for 400 epochs on the original loss and compare the test accuracies achieved for a range of batch sizes at different learning rates. As observed in the main text, the test accuracy is determined by the ratio of the learning rate to the batch size. Meanwhile in Figure 6(c), we plot the test accuracy achieved after training for 1.5 million steps on the modified loss with learning rate ϵ = 2⁻⁹ and regularization coefficient λ. Once again, we find that the test accuracy achieved is primarily determined by the ratio of the regularization coefficient to the batch size, although smaller batch sizes also achieve slightly higher accuracies.

Finally, in Figure 7 we train using n-step SGD (see Section 4) on the original loss at a range of bare learning rates α. In Figure 7(a) we train for 400 epochs, while in Figure 7(b) we train for 6 million updates. We recall that the SDE analogy predicts that the generalization benefit of n-step SGD would be determined by the effective learning rate ϵ = nα. By contrast, backward error analysis predicts that the generalization benefit for small learning rates would be controlled by the bare learning rate α, but that higher order terms may be larger for larger values of n. We find that the test accuracy in both figures is governed by the bare learning rate α, not the effective learning rate ϵ = nα, and therefore these results are inconsistent with the predictions from the SDE analysis in prior work.
Note that Figure 7 has a surprising implication. It suggests that, for this model, while there is a largest stable bare learning rate we cannot exceed, we can repeatedly apply updates obtained on the same batch of training examples without suffering a significant degradation in test accuracy. We speculate that this may indicate that the gradients of different examples in this over-parameterized model are close to orthogonal (Sankararaman et al., 2020).
1. What is the focus of the paper regarding SGD's generalization error? 2. What are the strengths of the proposed backward error analysis? 3. What are the weaknesses of the paper, particularly in experimentation? 4. How does the reviewer assess the novelty and interest of the paper's content? 5. Are there any concerns regarding the theoretical analysis, specifically on the variance and deviation of individual SGD flows? 6. Is the assumption of small m reasonable in practical scenarios?
Review
Review Summary: To analyze why the generalization error of SGD with larger learning rates achieves better test error, this paper analyzes the implicit regularization of SGD (with a finite step size) via a first order backward error analysis. Under this analysis the paper shows that the mean position of SGD with m minibatches effectively follows the flow according to Eq (20) for a small but finite step size, while GD effectively follows the last inline equation in section 2.1. The paper shows empirically on an image classification task that by explicitly including the (implicit SGD) regularizer, SGD on the modified loss behaves similarly to using a larger learning rate when evaluating on the test set. The paper then extends this result to consider varying the batch size in section 3, showing that for small batch sizes the implicit regularization scales with the ratio of learning rate and batch size ϵ/B. Finally in section 4, the paper analyzes SGD when, for each sampled minibatch in an epoch, we apply n gradient steps with a stepsize ϵ/n, and shows that performance degrades as n increases, suggesting that the benefit of SGD with larger learning rates is due to the implicit regularizer and not the temperature of an associated SDE. This paper is clearly written and well edited. I find the main result and the analysis technique interesting and novel. Although the experiments are well explained and help support the theory developed, there is only one experiment setting, making it difficult to believe strong general claims such as those in section 4. I do have concerns about equating the "mean" behavior of SGD with the actual behavior of SGD.

Recommendation: I recommend accepting this paper. As it currently stands, this paper is borderline on the acceptance threshold for me. I like the novel use of the backward error analysis to gain insight into the behavior of SGD and I believe it would be of interest to ICLR readers. My main concerns are the paper's narrow focus on the mean behavior of SGD and the single experiment setting used to validate results. I would much more strongly support this paper if the theoretical analysis was stronger (e.g. analyzing the variance of individual SGD flows/regularizers relative to the mean SGD flow/regularizer) or if more experiments (in different settings) supported the results.

Questions: If we don't take the expectation over ξ(ω) in Section 2.2, the theory suggests that there exists a (random) modified flow for each (random) ordering of minibatches Ĉ_0, …, Ĉ_m, obtained by equating equations (14) and (19). The main result Eq (20) would correspond to the expected value over the (random) modified flow. I believe this paper would be much stronger if there was some discussion of how the variance / deviations of these random flows from the mean flow (i.e. the variance of ξ(ω)) affects the implicit regularization and how this scales with batch size and properties of the loss. Would the implicit regularization break down for some experiments? Is the assumption that mϵ is small reasonable (so that we can ignore the higher order O(m³ϵ³) terms in the analysis)? Isn't m = N/B, the number of updates per epoch, very large in practice, since N ≫ B?
ICLR
Title
On the Origin of Implicit Regularization in Stochastic Gradient Descent

Abstract
For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full batch loss function. However moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximizes test accuracy is often larger than the learning rate which minimizes training loss. To interpret this phenomenon we prove that for SGD with random shuffling, the mean SGD iterate also stays close to the path of gradient flow if the learning rate is small and finite, but on a modified loss. This modified loss is composed of the original loss function and an implicit regularizer, which penalizes the norms of the minibatch gradients. Under mild assumptions, when the batch size is small the scale of the implicit regularization term is proportional to the ratio of the learning rate to the batch size. We verify empirically that explicitly including the implicit regularizer in the loss can enhance the test accuracy when the learning rate is small.

1 INTRODUCTION

In the limit of vanishing learning rates, stochastic gradient descent with minibatch gradients (SGD) follows the path of gradient flow on the full batch loss function (Yaida, 2019). However in deep networks, SGD often achieves higher test accuracies when the learning rate is moderately large (LeCun et al., 2012; Keskar et al., 2017). This generalization benefit is not explained by convergence rate bounds (Ma et al., 2018; Zhang et al., 2019), because it arises even for large compute budgets for which smaller learning rates often achieve lower training losses (Smith et al., 2020). Although many authors have studied this phenomenon (Jastrzębski et al., 2018; Smith & Le, 2018; Chaudhari & Soatto, 2018; Shallue et al., 2018; Park et al., 2019; Li et al., 2019; Lewkowycz et al., 2020), it remains poorly understood, and is an important open question in the theory of deep learning.

In a recent work, Barrett & Dherin (2021) analyzed the influence of finite learning rates on the iterates of gradient descent (GD). Their approach is inspired by backward error analysis, a method for the numerical analysis of ordinary differential equation (ODE) solvers (Hairer et al., 2006). The key insight of backward error analysis is that we can describe the bias introduced when integrating an ODE with finite step sizes by introducing an ancillary modified flow. This modified flow is derived to ensure that discrete iterates of the original ODE lie on the path of the continuous solution to the modified flow. Using this technique, the authors show that if the learning rate is not too large, the discrete iterates of GD lie close to the path of gradient flow on a modified loss C̃_GD(ω) = C(ω) + (ϵ/4)||∇C(ω)||². This modified loss is composed of the original loss C(ω) and an implicit regularizer proportional to the learning rate ϵ which penalizes the Euclidean norm of the gradient. However these results only hold for full batch GD, while in practice SGD with small or moderately large batch sizes usually achieves higher test accuracies (Keskar et al., 2017; Smith et al., 2020). In this work, we devise an alternative approach to backward error analysis, which accounts for the correlations between minibatches during one epoch of training.

Using this novel approach, we prove that for small finite learning rates, the mean SGD iterate after one epoch, averaged over all possible sequences of minibatches, lies close to the path of gradient flow on a second modified loss C̃_SGD(ω), which we define in Equation 1. This new modified loss is also composed of the full batch loss function and an implicit regularizer, however the structure of the implicit regularizers for GD and SGD differ, and their modified losses can have different local and global minima. Our analysis therefore helps explain both why finite learning rates can aid generalization, and why SGD can achieve higher test accuracies than GD. We assume that each training example is sampled once per epoch, in line with best practice (Bottou, 2012), and we confirm empirically that explicitly including the implicit regularization term of SGD in the training loss can enhance the test accuracy when the learning rate is small. Furthermore, we prove that if the batch size is small and the gradients are sufficiently diverse, then the expected magnitude of the implicit regularization term of SGD is proportional to the ratio of the learning rate to the batch size (Goyal et al., 2017; Smith et al., 2018).

We note that many previous authors have sought to explain the generalization benefit of SGD using an analogy between SGD and stochastic differential equations (SDEs) (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Chaudhari & Soatto, 2018). However this SDE analogy assumes that each minibatch is randomly sampled from the full dataset, which implies that some examples will be sampled multiple times in one epoch. Furthermore, the most common SDE analogy holds only for vanishing learning rates (Yaida, 2019) and therefore misses the generalization benefits of finite learning rates which we identify in this work. An important exception is Li et al. (2017), who applied backward error analysis to identify a modified SDE which holds when the learning rate is finite. However this work still relies on the assumption that minibatches are sampled randomly. It also focused on the convergence rate, and did not discuss the performance of SGD on the test set.

Main Result. We now introduce our main result. We define the cost function over parameters ω as C(ω) = (1/N) ∑_{j=1}^{N} C_j(ω), which is the mean of the per-example costs C_j(ω), where N denotes the training set size. Gradient flow follows the ODE ω̇ = −∇C(ω), while gradient descent computes discrete updates ω_{i+1} = ω_i − ϵ∇C(ω_i), where ϵ is the learning rate. For simplicity, we assume that the batch size B perfectly splits the training set such that N%B = 0, where % denotes the modulo operation, and for convenience we define the number of batches per epoch m = N/B. We can therefore re-write the cost function as a sum over minibatches C(ω) = (1/m) ∑_{k=0}^{m−1} Ĉ_k(ω), where the minibatch cost Ĉ_k(ω) = (1/B) ∑_{j=kB+1}^{kB+B} C_j(ω). In order to guarantee that we sample each example precisely once per epoch, we define SGD by the discrete update ω_{i+1} = ω_i − ϵ∇Ĉ_{i%m}(ω_i). Informally, our main result is as follows. After one epoch, the mean iterate of SGD with a small but finite learning rate ϵ, averaged over all possible shuffles of the batch indices, stays close to the path of gradient flow on a modified loss, ω̇ = −∇C̃_SGD(ω), where the modified loss C̃_SGD is given by:

$$\tilde{C}_{SGD}(\omega) = C(\omega) + \frac{\epsilon}{4m}\sum_{k=0}^{m-1}\|\nabla \hat{C}_k(\omega)\|^2. \tag{1}$$

We emphasize that our analysis studies the mean evolution of SGD, not the path of individual trajectories.
The modified loss C̃SGD(ω) is composed of the original loss C(ω) and an implicit regularizer Creg(ω) = (1/4m) ∑m−1 k=0 ||∇Ĉk(ω)||2. The scale of this implicit regularization term is proportional to the learning rate , and it penalizes the mean squared norm of the gradient evaluated on a batch of B examples. To help us compare the modified losses of GD and SGD, we can expand, C̃SGD(ω) = C(ω) + 4 ||∇C(ω)||2 + 4m ∑m−1 i=0 ||∇Ĉi(ω)−∇C(ω)||2. (2) We arrive at Equation 2 from Equation 1 by noting that ∑m−1 i=0 (∇Ĉi(ω) − ∇C(ω)) = 0. In the limit B → N , we identify the modified loss of gradient descent, C̃GD = C(ω) + ( /4)||∇C(ω)||2, which penalizes “sharp” regions where the norm of the full-batch gradient (||∇C(ω)||2) is large. However, as shown by Equation 2, the modified loss of SGD penalizes both sharp regions where the full-batch gradient is large, and also “non-uniform” regions where the norms of the errors in the minibatch gradients (||∇Ĉ(ω) − ∇C(ω)||2) are large (Wu et al., 2018). Although global minima of C(ω) are global minima of C̃GD(ω), global minima of C(ω) may not be global (or even local) minima of C̃SGD(ω). Note however that C(ω) and C̃SGD(ω) do share the same global minima on over-parameterized models which can interpolate the training set (Ma et al., 2018). We verify in our experiments that the implicit regularizer can enhance the test accuracy of models trained with SGD. Paper structure. In Section 2, we derive our main result (Equation 1), and we confirm empirically that we can close the generalization gap between small and large learning rates by including the implicit regularizer explicitly in the loss function. In Section 3, we confirm Equation 1 satisfies the linear scaling rule between learning rate and batch size (Goyal et al., 2017). In Section 4, we provide additional experiments which challenge the prevailing view that the generalization benefit of small batch SGD arises from the temperature of an associated SDE (Mandt et al., 2017; Park et al., 2019). 2 A BACKWARD ERROR ANALYSIS OF STOCHASTIC GRADIENT DESCENT Backward error analysis has great potential to clarify the role of finite learning rates, and to help identify the implicit biases of different optimizers. We therefore give a detailed introduction to the core methodology in Section 2.1, before deriving our main result in Section 2.2. In Section 2.3, we confirm empirically that the implicit regularizer can enhance the test accuracy of deep networks. 2.1 AN INTRODUCTION TO BACKWARD ERROR ANALYSIS In numerical analysis, we often wish to integrate ODEs of the form ω̇ = f(ω). This system usually cannot be solved analytically, forcing us to simulate the continuous flow with discrete updates, like the Euler step ω(t+ ) ≈ ω(t) + f(ω(t)). However discrete updates will introduce approximation error when the step size is finite. In order to study the bias introduced by this approximation error, we assume the learning rate is relatively small, and introduce a modified flow ω̇ = f̃(ω), where, f̃(ω) = f(ω) + f1(ω) + 2f2(ω) + ... . (3) The modified flow of f̃(ω) is equal to the original flow of f(ω) when → 0, but it differs from the original flow if is finite. The goal of backward error analysis is to choose the correction terms fi(ω) such that the iterates obtained from discrete updates of the original flow with small finite step sizes lie on the path taken by the continuous solution to the modified flow with vanishing step sizes. 
The standard derivation of backward error analysis begins by taking a Taylor expansion in of the solution to the modified flow ω(t + ). We obtain the derivatives of ω(t + ) recursively using the modified flow equation ω̇ = f̃(ω) (see Hairer et al. (2006)), and we identify the correction terms fi(ω) by ensuring this Taylor expansion matches the discrete update (e.g., ωt+1 = ωt + f(ωt)) for all powers of . However, this approach does not clarify why these correction terms arise. To build our intuition for the origin of the corrections terms, and to clarify how we might apply this analysis to SGD, we take a different approach. First, we will identify the path taken by the continuous modified flow by considering the combined influence of an infinite number of discrete steps in the limit of vanishing learning rates, and then we will compare this continuous path to either a single step of GD or a single epoch of SGD. Imagine taking n Euler steps on the modified flow f̃(ω) with step size α, ωt+n = ωt + αf̃(ωt) + αf̃(ωt+1) + αf̃(ωt+2) + ... (4) = ωt + αf̃(ωt) + αf̃(ωt + αf̃(ωt)) + αf̃(ωt + αf̃(ωt) + αf̃(ωt + αf̃(ωt))) + ... (5) = ωt + nαf̃(ωt) + (n/2)(n− 1)α2∇f̃(ωt)f̃(ωt) +O(n3α3). (6) We arrived at Equation 6 by taking the Taylor expansion of f̃ and then counting the number of terms of type ∇f̃(ωt)f̃(ωt) using the formula for an arithmetic series. Note that we assume ∇f̃ exists. Next, to ensure ωt+n in Equation 6 coincides with the solution ω(t+ ) of the continuous modified flow ω̇ = f̃(ω) for small but finite , we let the number of steps n→∞ while setting α = /n, ω(t+ ) = ω(t) + f̃(ω(t)) + ( 2/2)∇f̃(ω(t))f̃(ω(t)) +O( 3) (7) = ω(t) + f(ω(t)) + 2 (f1(ω(t)) + (1/2)∇f(ω(t))f(ω(t))) +O( 3). (8) We have replaced f̃(ω) with its definition from Equation 3. As we will see below, Equation 8 is the key component of backward error analysis, which describes the path taken when integrating the continuous modified flow f̃(ω) with vanishing learning rates over a discrete time step of length . Notice that we have assumed that the Taylor expansion in Equation 8 converges, while the higher order terms at O( 3) will contain higher order derivatives of the original flow f(ω). Backward error analysis therefore implicitly assumes that f(ω) is an analytic function in the vicinity of the current parameters ω. We refer the reader to Hairer et al. (2006) for a detailed introduction. Gradient descent: As a simple example, we will now derive the first order correction f1(ω) of the modified flow for GD. First, we recall that the discrete updates obey ωi+1 = ωi− ∇C(ωi), and we therefore fix f(ω) = −∇C(ω). In order to ensure that the continuous modified flow coincides with this discrete update, we need all terms at O( 2) and above in Equation 8 to vanish. At order 2, this implies that f1(ω) + (1/2)∇∇C(ω)∇C(ω) = 0, which yields the first order correction, f1(ω) = −(1/2)∇∇C(ω)∇C(ω) = −(1/4)∇ ( ||∇C(ω)||2 ) . (9) We conclude that, if the learning rate is sufficiently small such that we can neglect higher order terms in Equation 3, then the discrete GD iterates lie on the path of the following ODE, ω̇ = −∇C(ω)− ( /4)∇ ( ||∇C(ω)||2 ) (10) = −∇C̃GD(ω). (11) Equation 11 corresponds to gradient flow on the modified loss, C̃GD(ω) = C(ω)+( /4)||∇C(ω)||2. 2.2 BACKWARD ERROR ANALYSIS AND STOCHASTIC GRADIENT DESCENT We now derive our main result (Equation 1). As described in the introduction, we assume N%B = 0, where N is the training set size, B is the batch size, and % denotes the modulo operation. 
The number of updates per epochm = N/B, and the minibatch costs Ĉk(ω) = (1/B) ∑kB+B j=kB+1 Cj(ω). SGD with constant learning rates obeys ωi+1 = ωi− ∇Ĉi%m(ωi). It is standard practice to shuffle the dataset once per epoch, but we omit this step here and instead perform our analysis over a single epoch. In Equation 6 we derived the influence of n Euler steps on the flow f̃(ω) with step size α. Following a similar approach, we now derive the influence of m SGD updates with learning rate , ωm = ω0 − ∇Ĉ0(ω0)− ∇Ĉ1(ω1)− ∇Ĉ2(ω2)− ... (12) = ω0 − m−1∑ j=0 ∇Ĉj(ω0) + 2 m−1∑ j=0 ∑ k<j ∇∇Ĉj(ω0)∇Ĉk(ω0) +O(m3 3) (13) = ω0 −m ∇C(ω0) + 2ξ(ω0) +O(m3 3). (14) The error in Equation 14 is O(m3 3) since there are O(m3) terms in the Taylor expansion proportional to 3. Notice that a single epoch of SGD is equivalent to a single GD update with learning rate m up to first order in . Remarkably, this implies that when the learning rate is sufficiently small, there is no noise in the iterates of SGD after completing one epoch. For clarity, this observation arises because we require that each training example is sampled once per epoch. However the second order correction ξ(ω) = ∑m−1 j=0 ∑ k<j ∇∇Ĉj(ω)∇Ĉk(ω) does not appear in the GD update, and it is a random variable which depends on the order of the mini-batches. In order to identify the bias introduced by SGD, we will evaluate the mean correction E(ξ), where we take the expectation across all possible sequences of the (non-overlapping) mini-batches {Ĉ0, Ĉ1, ..., Ĉm−1}. Note that we hold the composition of the batches fixed, averaging only over their order. We conclude that, E(ξ(ω)) = 1 2 (∑m−1 j=0 ∑ k 6=j ∇∇Ĉj(ω)∇Ĉk(ω) ) (15) = m2 2 ∇∇C(ω)∇C(ω)− 1 2 ∑m−1 j=0 ∇∇Ĉj∇Ĉj (16) = m2 4 ∇ ( ||∇C(ω)||2 − 1 m2 ∑m−1 j=0 ||∇Ĉj(ω)||2 ) . (17) For clarity, in Equation 15 we exploit the fact that every sequence of batches has a corresponding sequence in reverse order. Combining Equations 14 and 17, we conclude that after one epoch, E(ωm) = ω0 −m ∇C(ω0) + m2 2 4 ∇ ( ||∇C(ω0)||2 − (1/m2) ∑m−1 j=0 ||∇Ĉj(ω0)||2 ) +O(m3 3). (18) Having identified the expected value of the SGD iterate after one epoch E(ωm) (for small but finite learning rates), we can now use this expression to identify the corresponding modified flow. First, we set f(ω) = −∇C(ω), t = 0, ω(0) = ω0, and let → m in Equations 3 and 8 to obtain, ω(m ) = ω0 −m ∇C(ω0) +m2 2 ( f1(ω0) + (1/4)∇||∇C(ω0)||2 ) +O(m3 3). (19) Next, we equate Equations 18 and 19 by setting ω(m ) = E(ωm). We immediately identify the first order correction to the modified flow f1(ω) = −(1/(4m2))∇ ∑m−1 j=0 ||∇Ĉj(ω)||2. We therefore conclude that, after one epoch, the expected SGD iterate E(ωm) = ω(m ) + O(m3 3), where ω(0) = ω0 and ω̇ = −∇C(ω) +m f1(ω). Simplifying, we conclude ω̇ = −∇C̃SGD(ω), where, C̃SGD(ω) = C(ω) + ( /4m) ∑m−1 k=0 ||∇Ĉk(ω)||2. (20) Equation 20 is identical to Equation 1, and this completes the proof of our main result. We emphasize that C̃SGD assumes a fixed set of minibatches {Ĉ0, Ĉ1, ..., Ĉm−1}. We will evaluate the expected modified loss after shuffling the dataset and sampling a new set of minibatches in Section 3. REMARKS ON THE ANALYSIS The phrase “for small finite learning rates” has a precise meaning in our analysis. It implies is large enough that terms of O(m2 2) may be significant, but small enough that terms of O(m3 3) are negligible. Our analysis is unusual, because we consider the mean evolution of the SGD iterates but ignore the variance of individual training runs. 
Previous analyses of SGD have usually focused on the variance of the iterates in limit of vanishing learning rates (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018). However these works assume that each minibatch is randomly sampled from the full dataset. Under this assumption, the variance in the iterates arises at O( ), while the bias arises at O( 2). By contrast, in our analysis each example is sampled once per epoch, and both the variance and the bias arise at O( 2) (for simplicity, we assume m = N/B is constant). We therefore anticipate that the variance will play a less important role than is commonly supposed. Furthermore, we can construct a specific sequence of minibatches for which the variance atO(m2 2) vanishes, such that the evolution of a specific training run will coincide exactly with gradient flow on the modified loss of Equation 1 for small finite learning rates. To achieve this, we perform two training epochs with the sequence of minibatches (Ĉ0, Ĉ1, ..., Ĉm−1, Ĉm−1, ..., Ĉ1, Ĉ0) (i.e., the second epoch iterates through the same set of minibatches as the first but in the opposite order). If one inspects Equations 13 to 15, one will see that reversing the second epoch has the same effect as taking the mean across all possible sequences of minibatches (it replaces the ∑ k<j by a ∑ k 6=j). The key limitation of our analysis is that we assume m = N /B is small, in order to neglect terms at O(m3 3). This is an extreme approximation, since we typically expect that N/B is large. Therefore, while our work identifies the first order correction to the bias arising from finite learning rates, higher order terms in the modified flow may also play an important role at practical learning rates. We note however that previous theoretical analyses have made even more extreme assumptions. For instance, most prior work studying SGD in the small learning rate limit neglects all terms at O( 2) (for an exception, see Li et al. (2017)). Furthermore, as we show in Section 3, the learning rate often scales proportional to the batch size, such that /B is constant (Goyal et al., 2017; McCandlish et al., 2018). Therefore the accuracy of our approximations does not necessarily degrade as the batch size falls, but higher order terms may play an increasingly important role as the dataset size increases. Our experimental results in Section 2.3 and Section 4 suggest that our analysis can explain most of the generalization benefit of finite learning rate SGD for Wide-ResNets trained on CIFAR-10. We note that to achieve the highest test accuracies, practitioners usually decay the learning rate during training. Under this scheme, the modified loss would change as training proceeds. However it is widely thought that the generalization benefit of SGD arises from the use of large learning rates early in training (Smith et al., 2018; Li et al., 2019; Jastrzebski et al., 2020; Lewkowycz et al., 2020), and popular schedules hold the learning rate constant or approximately constant for several epochs. Finally, we emphasize that our primary goal in this work is to identify the influence of finite learning rates on training. The implicit regularization term may not be beneficial in all models and datasets. 
2.3 AN EMPIRICAL EVALUATION OF THE MODIFIED LOSS In order to confirm that the modified loss C̃SGD(ω) can help explain why large learning rates enhance generalization, we now verify empirically that the implicit regularizer inherent in constant learning rate SGD, Creg(ω) = (1/4m) ∑m−1 k=0 ||∇Ĉk(ω)||2, can enhance the test accuracy of deep networks. To this end, we train the same model with two different (explicit) loss functions. The first loss function C(ω) represents the original loss, while the second Cmod(ω) = C(ω) + λCreg(ω) is obtained from the modified loss C̃SGD(ω) by replacing the learning rate with an explicit regular- ization coefficient λ. Notice that Cmod(ω) = (1/m) ∑m−1 k=0 ( Ĉk(ω) + (λ/4)||∇Ĉk(ω)||2 ) , which ensures that it is straightforward to minimize the modified loss Cmod(ω) with minibatch gradients. Since the implicit regularization term Creg(ω) is expensive to differentiate (typically 5-10x overhead), we consider a 10-1 Wide-ResNet model (Zagoruyko & Komodakis, 2016) for classification on CIFAR-10. To ensure close agreement with our theoretical analysis, we train without batch normalization using SkipInit initialization (De & Smith, 2020). We train for 6400 epochs at batch size 32 without learning rate decay using SGD without Momentum. We use standard data augmentation including crops and random flips, and we use weight decay with L2 coefficient 5 × 10−4. We emphasize that, since we train using a finite (though very large) compute budget, the final networks may not have fully converged. This is particularly relevant when training with small learning rates. Note that we provide additional experiments on Fashion-MNIST (Xiao et al., 2017) in appendix D. In Figure 1(a), we compare two training runs, one minimizing the modified loss Cmod(ω) with λ = 2−6, and one minimizing the original loss C(ω). For both runs we use a small constant learning rate = 2−9. As expected, the regularized training run achieves significantly higher test accuracies late in training. This confirms that the implicit regularizer, which arises as a consequence of using SGD with finite learning rates, can also enhance the test accuracy if it is included explicitly in the loss. In Figure 1(b), we provide the test accuracy for a range of regularization strengths λ (orange line). We provide the mean test accuracy of the best 5 out of 7 training runs at each regularization strength, and for each run we take the highest test accuracy achieved during the entire training run. We use a fixed learning rate = 2−9 for all λ. For comparison, we also provide the test accuracy achieved with the original loss C(ω) for a range of learning rates (blue line). In both cases, the test accuracy rises initially, before falling for large regularization strengths or large learning rates. Furthermore, in this network the optimal regularization strength on the modified loss λopt = 2−6 is equal to the optimal learning rate on the original loss opt = 2−6. Meanwhile when λ → 0 the performance of the modified loss approaches the performance of the original loss at = 2−9 (dotted green line). We provide the corresponding training accuracies in appendix C. Finally, in Figure 1(c), we provide the values of the implicit regularizer Creg(ω) at the end of training. As predicted by our analysis, training with larger learning rates reduces the value of the implicit regularization term. 
In Figure 2, we take the same 10-1 Wide-ResNet model and provide the mean training and test accuracies achieved at a range of learning rates for two regularization coefficients (following the experimental protocol above). In Figure 2(a), we train on the original loss C(ω) (λ = 0), while in Figure 2(b), we train on the modified loss C_mod(ω) with regularization coefficient λ = 2^{−6}. From Figure 2(a), when λ = 0 there is a clear generalization benefit to large learning rates, as the learning rate that maximizes test accuracy (2^{−6}) is 16 times larger than the learning rate that maximizes training accuracy (2^{−10}). However in Figure 2(b) with λ = 2^{−6}, the learning rates that maximize the test and training accuracies are equal (2^{−8}). This suggests that when we include the implicit regularizer explicitly in the loss, the generalization benefit of large learning rates is diminished.

3 IMPLICIT REGULARIZATION AND THE BATCH SIZE

In Section 2.2, we derived the modified loss by considering the expected SGD iterate after one epoch. We held the composition of the batches fixed, averaging only over the order in which the batches are seen. This choice helped make clear how to explicitly include the implicit regularizer in the loss function in Section 2.3. However, in order to clarify how the implicit regularizer term depends on the batch size, we now evaluate the expected modified loss after randomly shuffling the dataset and sampling a new set of m non-overlapping minibatches {Ĉ_0, Ĉ_1, ..., Ĉ_{m−1}}. Since the minibatch losses Ĉ_i(ω) are all identically distributed by symmetry, we recall Equation 2 and conclude that,

E(C̃_SGD(ω)) = C(ω) + (ε/4)||∇C(ω)||² + (ε/4)E(||∇Ĉ(ω) − ∇C(ω)||²),   (21)

where Ĉ(ω) denotes a batch of B non-overlapping examples, drawn randomly from the full dataset. To simplify Equation 21, we prove in Appendix A that E(||∇Ĉ(ω) − ∇C(ω)||²) = ((N − B)/(N − 1)) (Γ(ω)/B), where Γ(ω) = (1/N) ∑_{i=1}^{N} ||∇C_i(ω) − ∇C(ω)||². We therefore obtain,

E(C̃_SGD(ω)) = C(ω) + (ε/4)||∇C(ω)||² + ((N − B)/(N − 1)) (ε/4B) Γ(ω).   (22)

Note that Γ(ω) is the trace of the empirical covariance matrix of the per-example gradients. We have not assumed that the minibatch gradients are Gaussian distributed; however, if the per-example gradients are heavy tailed (Simsekli et al., 2019) then Γ(ω) may diverge, in which case the expected value of the modified loss is ill-defined.

Equation 22 shows that the implicit regularization term of SGD has two contributions. The first term is proportional to the learning rate ε, and it penalizes the norm of the full batch gradient. The second term is proportional to the ratio of the learning rate to the batch size, ε/B (assuming N ≫ B), and it penalizes the trace of the covariance matrix. To interpret this result, we assume that the minibatch gradients are diverse, such that (Γ(ω)/B) ≫ ||∇C(ω)||². This assumption guarantees that increasing the batch size reduces the error in the gradient estimate. In this limit, the second term above will dominate, and therefore different batch sizes will experience the same implicit regularization so long as the ratio of the learning rate to the batch size is constant.
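Both terms of the implicit regularizer in Equation 22 can be estimated directly from per-example gradients. A small numpy sketch (the array layout and function names are our assumptions):

```python
import numpy as np

def gamma_trace(per_example_grads):
    """Gamma(w): trace of the empirical covariance of the per-example
    gradients; `per_example_grads` has shape (N, D)."""
    centred = per_example_grads - per_example_grads.mean(axis=0)
    return float(np.mean(np.sum(centred ** 2, axis=1)))

def expected_minibatch_error(per_example_grads, B):
    """E||grad C_hat - grad C||^2 for batches of size B drawn without
    replacement, i.e. (N - B)/(N - 1) * Gamma(w)/B (see Appendix A)."""
    N = per_example_grads.shape[0]
    return (N - B) / (N - 1) * gamma_trace(per_example_grads) / B
```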
To verify this claim, in Figure 3 we plot the mean test accuracies achieved on a 10-1 Wide-ResNet, trained on CIFAR-10 with a constant learning rate, for a range of learning rates ε, regularization coefficients λ and batch sizes B. As expected, in Figure 3(a), training on the original loss C(ω) for 6400 epochs, we see that different batch sizes achieve similar test accuracies so long as the ratio ε/B is constant and the batch size is not too large. We note that this linear scaling rule is well known and has been observed in prior work (Goyal et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Zhang et al., 2019). To confirm that this behaviour is consistent with the modified loss, in Figure 3(b) we fix the learning rate ε = 2^{−9} and train on C_mod(ω) at a range of regularization strengths λ for 10 million steps. As expected, different batch sizes achieve similar test accuracy so long as the ratio λ/B is constant. We note that we expect this phenomenon to break down for very large batch sizes; however, we were not able to run experiments in this limit due to computational constraints.

For very large batch sizes, the first implicit regularization term in Equation 22 dominates, the linear scaling rule breaks down, and the bias of SGD is similar to the bias of GD identified by Barrett & Dherin (2021). We expect the optimal learning rate to be independent of the batch size in this limit, as observed by McCandlish et al. (2018) and Smith et al. (2020). Convergence bounds also predict a transition between a small batch regime where the optimal learning rate ε ∝ B and a large batch regime where the optimal learning rate is constant (Ma et al., 2018; Zhang et al., 2019). However these analyses identify the learning rate which minimizes the training loss. Our analysis complements these claims by explaining why similar conclusions hold when maximizing test accuracy.

4 FINITE LEARNING RATES AND STOCHASTIC DIFFERENTIAL EQUATIONS

In the previous two sections, we argued that the use of finite learning rates and small batch sizes introduces implicit regularization, which can enhance the test accuracy of deep networks. We analyzed this effect using backward error analysis (Hairer et al., 2006; Li et al., 2017; Barrett & Dherin, 2021), but many previous papers have argued that this effect can be understood by interpreting small batch SGD as the discretization of an SDE (Mandt et al., 2017; Smith & Le, 2018; Jastrzębski et al., 2018; Park et al., 2019). In this section, we compare this popular perspective with our main results from Sections 2 and 3.

To briefly recap, in the SDE analogy a single gradient update is given by ω_{i+1} = ω_i − ε∇Ĉ(ω_i), where Ĉ denotes a random batch of B non-overlapping training examples. Notice that in the SDE analogy, since examples are drawn randomly from the full dataset, there is no guarantee that each training example is sampled once per epoch. Assuming N ≫ B ≫ 1 and that the gradients are not heavy tailed, the central limit theorem is applied to model the noise in an update by a Gaussian noise source ξ whose covariance is inversely proportional to the batch size:

ω_{i+1} = ω_i − ε(∇C(ω_i) + ξ_i/√B) = ω_i − ε∇C(ω_i) + √(εT) ξ_i.   (23)

This assumes E(ξ_i) = 0 and E(ξ_i ξ_j^T) = F(ω)δ_ij, where F(ω) is the covariance matrix of the per-example gradients, and we define the "temperature" T = ε/B. The SDE analogy notes that Equation 23 is identical to the Euler discretization of an SDE with step size ε and temperature T (Gardiner et al., 1985). Therefore one might expect the SGD iterates to remain close to this underlying SDE in the limit of small learning rates (ε → 0).
In this limit, the temperature defines the influence of mini-batching on the dynamics, and it is therefore often assumed that the temperature also governs the generalization benefit of SGD (Smith & Le, 2018; Jastrzębski et al., 2018; Park et al., 2019). However this conclusion from the SDE analogy is inconsistent with our analysis in Section 2. To see this, note that in Section 2 we assumed that each training example is sampled once per epoch, as recommended by practitioners (Bottou, 2012), and showed that under this assumption there is no noise in the dynamics of SGD up to first order in ε after one epoch of training. The SDE analogy therefore relies on the assumption that minibatches are sampled randomly from the full dataset. Furthermore, SGD only converges to the underlying SDE when the learning rate ε → 0, but in this limit the temperature T → 0 and SGD converges to gradient flow (Yaida, 2019). We must use a finite learning rate to preserve a finite temperature, but at any finite learning rate the distributions of the SGD iterates and the underlying SDE may differ.

We now provide intriguing empirical evidence to support our contention that the generalization benefit of SGD arises from finite learning rates, not the temperature of an associated stochastic process. First, we introduce a modified SGD update rule (sketched in code below):

n-step SGD: Apply n gradient descent updates sequentially on the same minibatch with bare learning rate α, effective learning rate ε = nα and batch size B. Sample the next minibatch and repeat.

To analyze n-step SGD, we consider the combined influence of n updates on the same minibatch:

ω_{i+1} = ω_i − α∇Ĉ(ω_i) − α∇Ĉ(ω_i − α∇Ĉ(ω_i)) + ...   (24)
= ω_i − nα∇Ĉ(ω_i) + O(n²α²)   (25)
= ω_i − ε∇C(ω_i) + √(εT) ξ_i + O(ε²).   (26)

Equations 23 and 26 are identical up to first order in ε, but they differ at O(ε²) and above. Therefore, if minibatches are randomly sampled from the full dataset, then the dynamics of standard SGD and n-step SGD should remain close to the same underlying SDE in the limit ε → 0, but their dynamics will differ when the learning rate is finite. We conclude that if the dynamics of SGD is close to the continuum limit of the associated SDE, then standard SGD and n-step SGD ought to achieve similar test accuracies after the same number of training epochs. However if, as we argued in Section 2, the generalization benefit of SGD arises from finite learning rate corrections at O(ε²) and above, then we should expect the performance of standard SGD and n-step SGD to differ.

For completeness, we provide a backward error analysis of n-step SGD in Appendix B. In line with Section 2 (and best practice), we assume each training example is sampled once per epoch. We find that after one epoch, the expected n-step SGD iterate E(ω_m) = ω(mε) + O(m³ε³), where ω(0) = ω_0, ω̇ = −∇C̃_{nSGD}(ω) and C̃_{nSGD}(ω) = C(ω) + (ε/4mn) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω)||². The scale of the implicit regularizer is proportional to α = ε/n, which implies that the implicit regularization is suppressed as n increases if we hold ε constant. As expected, we recover Equation 1 when n = 1.
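For clarity, the n-step SGD update rule can be stated in a few lines. A minimal sketch (our own notation, with minibatch-gradient callables) making explicit that each minibatch receives n consecutive updates at bare learning rate α:

```python
def n_step_sgd_epoch(w, minibatch_grads, alpha, n):
    """One epoch of n-step SGD: n consecutive updates on each minibatch at
    bare learning rate alpha (effective learning rate eps = n * alpha).
    Each example is seen once per epoch, as in Section 2."""
    for grad_k in minibatch_grads:
        for _ in range(n):
            w = w - alpha * grad_k(w)
    return w
```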
In Figure 4(a), we plot the performance of n-step SGD at a range of bare learning rates α, when training a 16-4 Wide-ResNet on CIFAR-10 for 400 epochs using SkipInit (De & Smith, 2020) at batch size 32. Each example is sampled once per epoch. We introduce a learning rate decay schedule, whereby we hold the learning rate constant for the first half of training, before decaying the learning rate by a factor of 2 every remaining tenth of training, and we provide the mean test accuracy of the best 5 out of 7 training runs for each value of α. The optimal test accuracy drops from 93.5% when n = 1 (standard SGD) to 88.8% when n = 16. This occurs even though all values of n perform the same number of training epochs, which means that 16-step SGD performed 16 times more gradient updates. These results suggest that, at least for this model and dataset, the generalization benefit of SGD is not controlled by the temperature of the associated SDE, but instead arises from the implicit regularization associated with finite learning rates. When we increase n we reduce the largest stable bare learning rate α, and this suppresses the implicit regularization benefit, which reduces the test accuracy.

We also verify in Figure 4(b) that similar conclusions arise if we hold the number of parameter updates fixed (such that the number of training epochs is inversely proportional to n). Smaller values of n are stable at larger bare learning rates and achieve higher test accuracies. Finally we confirm in Figure 4(c) that the test accuracy degrades as n increases even if one tunes both the learning rate and the epoch budget independently for each value of n, thus demonstrating that n-step SGD consistently achieves lower test accuracies as n increases. Note that we provide additional experiments on Fashion-MNIST (Xiao et al., 2017) in Appendix D.

5 DISCUSSION

Many authors have observed that large learning rates (Li et al., 2019; Lewkowycz et al., 2020) and small batch sizes (Keskar et al., 2017; Smith et al., 2020) can enhance generalization. Most theoretical work has sought to explain this by observing that increasing the learning rate, or reducing the batch size, increases the variance of the SGD iterates (Smith & Le, 2018; Jastrzębski et al., 2018; Chaudhari & Soatto, 2018). We take a different approach, and note that when the learning rate is finite, the SGD iterates are also biased (Roberts, 2018). Backward error analysis (Hairer et al., 2006; Li et al., 2017; Barrett & Dherin, 2021) provides a powerful tool that computes how this bias accumulates over multiple parameter updates. Although this work focused on GD and SGD, we anticipate that backward error analysis could also be used to clarify the role of finite learning rates in adaptive optimizers like Adam (Kingma & Ba, 2015) or Natural Gradient Descent (Amari, 1998).

We note however that backward error analysis assumes that the learning rate is small (though finite). It therefore does not capture the chaotic or oscillatory dynamics which arise when the learning rate is close to instability. At these very large learning rates the modified loss, which is defined as a Taylor series in powers of the learning rate, does not converge. Lewkowycz et al. (2020) recently argued that the test accuracies of wide networks trained with full batch gradient descent on quadratic losses are maximized for large learning rates close to divergence. In this "catapult" regime, the GD iterates oscillate along high curvature directions and the loss may increase early in training. It remains an open question to establish whether backward error analysis fully describes the generalization benefit of small batch SGD, or if these chaotic or oscillatory effects also play a role in some networks.
ACKNOWLEDGMENTS

We would like to thank Jascha Sohl-Dickstein, Razvan Pascanu, Alex Botev, Yee Whye Teh and the anonymous reviewers for helpful discussions and feedback on earlier versions of this manuscript.

A THE EXPECTED NORM OF A MINIBATCH GRADIENT

To keep the notation clean, we define X_i = (∇C_i(ω) − ∇C(ω)). We also recall for clarity that the expectation value E(...) is taken over all possible random shuffles of the indices i. Therefore,

E(||∇Ĉ(ω) − ∇C(ω)||²) = (1/B²) E(∑_{i=1}^{B} ∑_{j=1}^{B} X_i · X_j)   (27)
= (B/B²) E(X_i · X_i) + (B(B − 1)/B²) E(X_i · X_{j≠i})   (28)
= (1/NB) ∑_{i=1}^{N} X_i · X_i + ((B − 1)/B) (1/(N(N − 1))) ∑_{i=1}^{N} ∑_{j≠i} X_i · X_j   (29)
= (1/NB) ∑_{i=1}^{N} X_i · X_i + ((B − 1)/(BN(N − 1))) ∑_{i=1}^{N} ∑_{j=1}^{N} X_i · X_j (1 − δ_ij).

Note that we obtain Equation 28 by counting the number of diagonal and off-diagonal terms in the sum in Equation 27. Next, we recall that ∑_{i=1}^{N} X_i = ∑_{i=1}^{N} (∇C_i(ω) − ∇C(ω)) = 0. Therefore,

E(||∇Ĉ(ω) − ∇C(ω)||²) = (1/NB) ∑_{i=1}^{N} X_i · X_i − ((B − 1)/(BN(N − 1))) ∑_{i=1}^{N} X_i · X_i   (30)
= (1/NB) (1 − (B − 1)/(N − 1)) ∑_{i=1}^{N} X_i · X_i   (31)
= ((N − B)/(N − 1)) (Γ(ω)/B),   (32)

where Γ(ω) = (1/N) ∑_{i=1}^{N} X_i · X_i = (1/N) ∑_{i=1}^{N} ||∇C_i(ω) − ∇C(ω)||². We can immediately identify Γ(ω) as the trace of the empirical covariance matrix of the per-example gradients (a numerical check of this identity is sketched at the end of Appendix B).

B A BACKWARD ERROR ANALYSIS FOR N-STEP SGD

Under n-step SGD, we apply n gradient descent updates on the same minibatch with bare learning rate α and batch size B. After n updates, we sample the next minibatch and repeat. For convenience, we define the effective learning rate ε = nα. After one minibatch (n parameter updates),

ω_{i+1} = ω_i − α∇Ĉ_i(ω_i) − α∇Ĉ_i(ω_i − α∇Ĉ_i(ω_i)) + ...   (33)
= ω_i − nα∇Ĉ_i(ω_i) + (n/2)(n − 1)α² ∇∇Ĉ_i(ω_i)∇Ĉ_i(ω_i) + O(n³α³)   (34)
= ω_i − ε∇Ĉ_i(ω_i) + (1/4)(1 − 1/n)ε² ∇(||∇Ĉ_i(ω_i)||²) + O(ε³).   (35)

After one epoch (including terms up to second order in ε),

ω_m = ω_0 − ε∇Ĉ_0(ω_0) + (1/4)(1 − 1/n)ε² ∇(||∇Ĉ_0(ω_0)||²)
− ε∇Ĉ_1(ω_1) + (1/4)(1 − 1/n)ε² ∇(||∇Ĉ_1(ω_1)||²)
− ...
− ε∇Ĉ_{m−1}(ω_{m−1}) + (1/4)(1 − 1/n)ε² ∇(||∇Ĉ_{m−1}(ω_{m−1})||²) + O(ε³).   (36)

To simplify this expression, we note that ω_{i+1} = ω_i − ε∇Ĉ_i(ω_i) + O(ε²). We can therefore re-use our earlier analysis from Section 2.2 of the main text (see Equation 13 for comparison) to obtain,

ω_m = ω_0 − mε∇C(ω_0) + ε² ∑_{j=0}^{m−1} ∑_{k<j} ∇∇Ĉ_j(ω_0)∇Ĉ_k(ω_0) + (1/4)(1 − 1/n)ε² ∑_{i=0}^{m−1} ∇(||∇Ĉ_i(ω_0)||²) + O(ε³).   (37)

Taking the expectation over all possible batch orderings (see Equations 15 to 18), we obtain,

E(ω_m) = ω_0 − mε∇C(ω_0) + (m²ε²/4) ∇(||∇C(ω_0)||² − (1/(m²n)) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω_0)||²) + O(m³ε³).   (38)

Fixing f(ω) = −∇C(ω) and equating Equation 38 with the continuous modified flow in Equation 19 by setting E(ω_m) = ω(mε), we identify the modified flow ω̇ = −∇C̃_{nSGD} + O(m²ε²), where,

C̃_{nSGD}(ω) = C(ω) + (ε/4mn) ∑_{i=0}^{m−1} ||∇Ĉ_i(ω)||².   (39)

Comparing Equation 39 to Equation 1, we note that the modified losses of SGD and n-step SGD coincide when n = 1. However for n-step SGD when n > 1, the strength of the implicit regularization term is proportional to the scale of the bare learning rate α = ε/n, not the effective learning rate ε.
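The identity of Equation 32 is straightforward to verify numerically. A minimal Monte Carlo check with synthetic per-example gradients (all sizes and values here are placeholders of our choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, B = 40, 5, 8
G = rng.normal(size=(N, D))                    # synthetic per-example gradients
centred = G - G.mean(axis=0)
Gamma = np.mean(np.sum(centred ** 2, axis=1))  # trace of the gradient covariance

# Monte Carlo estimate of E||grad C_hat - grad C||^2, with batches drawn
# without replacement, compared against (N - B)/(N - 1) * Gamma/B.
errs = [
    np.sum((G[rng.choice(N, size=B, replace=False)].mean(axis=0)
            - G.mean(axis=0)) ** 2)
    for _ in range(100_000)
]
print(np.mean(errs), (N - B) / (N - 1) * Gamma / B)  # the two should agree closely
```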
C TRAINING LOSSES

In Figure 1(b) of Section 2.3 in the main text, we compared the test accuracies achieved when training on the original loss C(ω) at a range of learning rates ε, to the test accuracies achieved when training on the modified loss C_mod(ω) at fixed learning rate ε = 2^{−9} and a range of regularization coefficients λ. For completeness, in Figure 5, we provide the corresponding training accuracies, as well as the final values of the original loss C(ω). Remarkably, large learning rates and large regularization coefficients achieve similar training accuracies and similar original losses. This suggests that the implicit regularization term in the modified loss of SGD (C̃_SGD(ω)) may help explain why the training accuracies and losses often exhibit plateaus when training with large learning rates.

D ADDITIONAL RESULTS ON FASHION-MNIST

In this section we provide additional experiments on the Fashion-MNIST dataset (Xiao et al., 2017), which comprises 10 classes, 60000 training examples and 10000 examples in the test set. We consider a simple fully connected MLP which comprises 3 nonlinear layers, each with width 4096 and ReLU activations, and a final linear softmax layer. We apply a simple data pipeline which first applies per-image standardization and then flattens the input to a 784-dimensional vector. We do not apply data augmentation and we train using vanilla SGD without learning rate decay for all experiments. We perform seven training runs for each combination of hyper-parameters and show the mean performance of the best five (to ensure our results are not skewed by a single failed run). We use a batch size B = 16 unless otherwise specified, and we do not use weight decay. We note that this model is highly over-parameterized. Unlike the Wide-ResNet we considered in the main text, here we consistently achieve 100% training accuracy if the learning rate is not too large.

In Figure 6(a), we train for 400 epochs, and we compare the effect of tuning the learning rate when training on the original loss, to the effect of tuning the explicit regularization strength λ (with ε = 2^{−9}). As observed in the main text, tuning the explicit regularizer has a similar effect on the test accuracy to tuning the learning rate. Surprisingly, the optimal values of ε and λ differ by a factor of 8. However we note that the optimal learning rate is ε = 2^{−5}, while the explicit regularizer already achieves a very similar test accuracy at λ = 2^{−4} (just a factor of two larger), before reaching a higher maximum test accuracy at λ = 2^{−2}. In Figure 6(b), we train for 400 epochs on the original loss and compare the test accuracies achieved for a range of batch sizes at different learning rates. As observed in the main text, the test accuracy is determined by the ratio of the learning rate to the batch size. Meanwhile in Figure 6(c), we plot the test accuracy achieved after training for 1.5 million steps on the modified loss with learning rate ε = 2^{−9} and regularization coefficient λ. Once again, we find that the test accuracy achieved is primarily determined by the ratio of the regularization coefficient to the batch size, although smaller batch sizes also achieve slightly higher accuracies.

Finally, in Figure 7 we train using n-step SGD (see Section 4) on the original loss at a range of bare learning rates α. In Figure 7(a) we train for 400 epochs, while in Figure 7(b) we train for 6 million updates. We recall that the SDE analogy predicts that the generalization benefit of n-step SGD would be determined by the effective learning rate ε = nα. By contrast, backward error analysis predicts that the generalization benefit for small learning rates would be controlled by the bare learning rate α, but that higher-order terms may be larger for larger values of n. We find that the test accuracy in both figures is governed by the bare learning rate α, not the effective learning rate ε = nα, and therefore these results are inconsistent with the predictions from the SDE analysis in prior work.
Note that Figure 7 has a surprising implication. It suggests that, for this model, while there is a largest stable bare learning rate we cannot exceed, we can repeatedly apply updates obtained on the same batch of training examples without suffering a significant degradation in test accuracy. We speculate that this may indicate that the gradients of different examples in this over-parameterized model are close to orthogonal (Sankararaman et al., 2020).
1. What is the main contribution of the paper on explaining the positive effect of large step sizes on generalization performance in SGD?
2. How does the paper account for finite step sizes in its analysis, and how does it compare to previous works that assume infinitesimal step sizes?
3. What are some limitations of the paper's characterization of SGD using a deterministic gradient flow ODE, and how might these limitations affect practical applications?
4. Why does the paper focus on a single epoch of SGD when analyzing its behavior, and what insights can be gained from studying multiple epochs?
5. How does the paper handle the composition of minibatches, and why does it present results for fixed and random minibatch compositions separately?
6. Does the paper's analysis assume too small step sizes compared to practical settings, and how might this affect its conclusions?
7. Can the paper's analysis be extended to sampling data points with replacement, and what differences might there be in terms of generalization performance between sampling with and without replacement?
8. How do the smoothness properties of the problem not enter into the analysis, and why does the paper not consider rescaling the step size with a constant while keeping the loss function unchanged?
9. What attention is given to the regularization term in the paper, and how might this relate to recent literature on implicit regularizers?
10. What experiments or references would support the conjecture that the optimal learning rate is independent of batch size in the large batch size regime?
Review
Review Summary

Using backward error analysis, the paper argues that SGD with small but finite step sizes stays on the path of a gradient flow ODE of a modified loss, which penalizes the squared norms of the mini-batch gradients. This offers a possible explanation of the empirically observed positive effect of (relatively) large step sizes on generalization performance. The paper further contests previous findings based on a vanishing step size assumption.

Rating

Similar to several recent works, this paper tries to explain certain aspects of stochastic gradient descent using a continuous time approximation. In contrast to existing works, it explicitly accounts for the effect of finite step sizes, which I think is a very interesting direction and surfaces several interesting aspects. I also welcome and endorse the critical discussion of prior work based on infinitesimal step size assumptions. Overall, the paper was interesting and pleasant to read. To the very best of my knowledge, all mathematical derivations are technically correct. However—as the authors themselves note in their critique of SDE approximations to SGD—the devil is in the details with continuous time approximations. In my opinion, that makes it absolutely crucial to discuss the scope of the results carefully and transparently, including a critical discussion of the assumptions made and the simplifications that go into the continuous-time model. In my opinion, this paper fails to deliver that, which is why I recommend rejection. Below, I am asking for clarification on various points and would encourage the authors to respond to the major points in the rebuttal phase.

Major Comments

1. The main result says that the expected SGD iterate after a single epoch lands close to the path of a gradient flow ODE on a modified loss. Unless I am missing something, this fundamentally fails to capture the behavior over multiple epochs. The analysis only guarantees that, from any given starting point ω_0, the expected iterate after one epoch of SGD ends up close to the ODE path starting from ω_0. Unless I am missing something, this does not imply that two epochs of SGD starting from ω_0 end up on that path. We cannot simply chain two epochs together: the first epoch only stays on the path in expectation, but any realization of that random variable will deviate from the path, which affects the initial condition of the next epoch. Intuitively, one needs to get a handle on the variance of the iterate as well in order to give guarantees for multiple epochs. Is this understanding correct? If so, to what extent can insights about a single epoch of SGD be transferred to practical settings?

2. Comment (1) hints at a larger (but vague) point: the paper is trying to characterize a stochastic optimization procedure with a solution of a deterministic gradient flow ODE. It does so by focusing on the expectation of the iterate, which might be an approach to highlight certain aspects, but it will never give a full picture. Why wouldn't we also be interested in the covariance of the iterates? The limitations of this characterization should be discussed thoroughly in the paper.

3. In Section 2, the composition of the minibatches is assumed to be fixed and the randomness only comes from their ordering. The paper says: "It is standard practice to shuffle the dataset once per epoch, but this step does not affect our analysis and we omit it for brevity." I don't think that statement is justified with respect to the result in Eq. (1), given that the modified loss depends on the minibatch composition. Therefore, were we to reshuffle the dataset after each epoch, the modified loss would change from one epoch to the next.
Later, in Section 3, the expectation is additionally taken over the composition of the batches. Why is the result presented in these two distinct steps? None of the key findings of the paper seems to rely on the intermediate fixed-composition result. It also doesn't reflect the common practice of reshuffling the entire dataset and then traversing it, which simultaneously randomizes the composition and ordering of batches. So why not give the result of Eq. (22) directly? It is also the more intuitive result, invoking the trace of the gradient covariance matrix, which also appears in prior work on continuous time approximations of SGD.

4. While the analysis tries to account for finite step sizes, it still seems to assume step sizes that are orders of magnitude smaller than those used in practice. In particular, when going from Eq. (12) to Eq. (13), each minibatch cost function is equated with its second-order Taylor approximation around the starting point ω_0. This is a drastic approximation and I don't see any justification for why this should be anywhere near accurate for practical settings. For large datasets and moderate batch sizes, the number of updates in one epoch will be in the thousands. For realistic step size choices, a second-order Taylor expansion around the starting point will probably be rather poor after a handful of SGD updates, no?

5. The paper strongly emphasizes the assumption of sampling data points without replacement. While sampling without replacement is indeed the usual setting in practice, most of the stochastic optimisation literature builds on the assumption of sampling with replacement. And to my knowledge, no major differences (in terms of generalization performance) have been reported in the literature between the two approaches.
a) Can the analysis presented in the paper be extended to the setting of sampling with replacement? It seems to me that this should be straightforward. Equations (12) and (13) should hold also when each minibatch is obtained from sampling with replacement. In that case, the expectation of the second-order correction term should directly give a result akin to Eq. (22). If that is in fact possible, it should definitely be added to the paper.
b) If that is not possible, what prevents the application, and is this a technicality or would you actually expect substantially different behavior in terms of generalization?
c) It would also have been nice to see the experiments repeated with sampling with replacement, to check empirically whether the findings hold in that case.

6. Something that bugs me from an optimization perspective is that the smoothness properties of the problem do not enter this analysis at all. For example, you write (near the bottom of page 4) that "our analysis assumes mε = Nε/B is small." However, any given loss function C(w) can be rescaled by a constant M ≫ 1 while scaling the step size with 1/M. This leaves the behavior of SGD unaffected while making the step size arbitrarily small. Why does that not enter into the analysis? It probably relates to my comment (4), seeing that the step sizes are assumed to be so small that they are not restricted by the smoothness of the function.
Minor Comments

- The paper derives the implicit regularizer and provides empirical evidence that it can partially explain the benefits of large step sizes for generalization. However, very little attention is given to the regularization term itself and to the question of why this regularizer might be beneficial. The only comment speaking to that is that the regularizer penalizes "sharp" regions. I would like to see this discussion expanded and connected to the recent literature.
- At the end of page 6, you write about the large batch size regime and say that "we expect the optimal learning rate to be independent of the batch size in this limit." It would have been great to substantiate that conjecture with an experiment and/or to refer to specific experiments done in prior work.
- You repeatedly use the phrase "small but finite learning rates". If my understanding is correct, that phrase has a very precise meaning in the context of this work, namely that terms of order O(ε³) are vanishingly small while terms that are quadratic or linear in ε cannot be ignored. (This is in contrast to prior work that also ignores quadratic terms.) Maybe this could be stated clearly the first time you use this phrase.

Typos / Style

- I think you should capitalize references to sections, equations, figures, et cetera.
- The bib file could really use some love. You are citing the arXiv versions of several papers that have been published in peer-reviewed venues. Capitalization in paper titles is messed up (e.g., "sgd").

Edit after Rebuttal

I thank the authors for their engagement with my review. Many of my comments and questions have been resolved and, consequently, I have increased my score and recommend accepting this paper.
ICLR
Title
Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning

Abstract
Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery. However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations. To address this issue we introduce Morpho-MNIST, a framework that aims to answer: "to what extent has my model learned to represent specific factors of variation in the data?" We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity. We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation.

1 INTRODUCTION

A key factor for progress in machine learning has been the availability of well curated, easy-to-use, standardised and sufficiently large annotated datasets for benchmarking different algorithms and models. This has led to major advances in speech recognition, computer vision, and natural language processing. A commonality between these tasks is their natural formulation as supervised learning tasks, wherein performance can be measured in terms of accuracy on a test set.

The general problem of representation learning (i.e. to reveal latent structure in data) is more difficult to assess due to the lack of suitable benchmarks. Although the field is very active, with many recently proposed techniques such as probabilistic autoencoders and adversarial learning, it is less clear where the field stands in terms of progress or which approaches are more expressive for specific tasks. The lack of reproducible ways to quantify performance has led to subjective means of evaluation: visualisation techniques have been used to show low-dimensional projections of the latent space, and visual inspection of generated or reconstructed samples is popular to provide subjective measures of descriptiveness. On the other hand, the quality of sampled images generally tells us little about how well the learned representations capture known factors of variation in the training distribution. In order to advance progress, the availability of tools for objective assessment of representation learning methods seems essential yet lacking.

This paper introduces Morpho-MNIST, a collection of shape metrics and perturbations, in a step towards quantitative assessment of representation learning. We build upon one of the most popular machine learning benchmarks, MNIST, which despite its shortcomings remains widely used. While MNIST was originally constructed to facilitate research in image classification, in the form of recognising handwritten digits (LeCun et al., 1998), it has found its use in representation learning, for example, to demonstrate that the learned latent space yields clusters consistent with digit labels. Methods aiming to disentangle the latent space claim success if individual latent variables capture specific style variations (e.g. stroke thickness, sidewards-leaning digits and other visual characteristics).
The main appeal of selecting MNIST as a benchmark for representation learning is that, while manifesting complex interactions between pixel intensities and underlying shapes, it has well-understood and easily measurable factors of variation. More generally, MNIST remains popular in practice due to several factors: it allows reproducible comparisons with previous results reported in the literature; the dataset is sufficiently large for its complexity and consists of small, two-dimensional greyscale images defining a tractable ten-class classification problem; computation and memory requirements are low; and most popular deep learning frameworks and libraries offer tutorials using MNIST, which makes it straightforward for new researchers to enter the field, experiment with new ideas and explore the latest developments. We take advantage of these qualities and extend MNIST in multiple ways, as summarised in the following.

1.1 CONTRIBUTIONS

Our aim is to bridge the gap between methodology-focused research and critical real-world applications that could benefit from the latest machine learning methods. As we preserve the general properties of MNIST—such as image size, file format, numbers of training and test images, and the original ten-class classification problem—we believe this new quantitative framework for assessing representation learning will experience widespread use in the community and may inspire further extensions facilitated by a publicly available Morpho-MNIST code base.

Morphometrics: We propose to describe true and generated digit images in terms of measurable shape attributes. These include stroke thickness and length, and the width, height, and slant of digits (cf. Fig. 1, left). Whereas some of these properties have been analysed qualitatively in previous work, we demonstrate that objectively quantifying each of them allows us to identify the roles of inferred representations. Moreover, these tools can be used to measure model samples, enabling assessment of generative performance with respect to sample diversity (Section 4.1) and disentanglement of latent variables (Section 4.2). These measurements can be directly employed to re-evaluate existing models and may be added retrospectively to previous experiments involving the original MNIST dataset. Adoption of our morphometric analysis may provide new insights into the effectiveness of representation learning methods in terms of revealing meaningful latent structures. Furthermore, for other datasets it suffices to design the relevant scalar metrics and include them in the very same evaluation framework.

Perturbations: We introduce a set of parametrisable global and local perturbations, inspired by natural and pathological variability in medical images. Global changes involve overall thinning and thickening of digits, while local changes include both swelling and fractures (see examples on the right in Fig. 1 and many more in Appendix B). Injecting these perturbations into the dataset adds a new type of complexity to the data manifold and opens up a variety of interesting applications. The proposed perturbations are designed to enable a wide range of new studies and applications for both supervised and unsupervised tasks. Detection of 'abnormalities' (i.e. local perturbations) is an evident application, although more challenging tasks can also be defined, such as classification from noisy/corrupted data, domain adaptation, localisation of perturbations, characterising semantics of learned latent representations, and more.
We explore a few supplementary examples of supervised tasks in Appendix D.

1.2 RELATED WORK: DATASETS

In this section, we provide an overview of some datasets that are related to MNIST, by either sharing its original source content, containing transformations of the original MNIST images, or being distributed in the same format for easy replacement. We also mention a few prevalent datasets of images with generative factor annotations, similar to the morphometrics proposed in this paper.

NIST datasets: The MNIST (modified NIST) dataset (LeCun et al., 1998) was constructed from handwritten digits in NIST Special Databases 1 and 3, now released as Special Database 19 (Grother and Hanaoka, 2016). Cohen et al. (2017) generated a much larger dataset based on the same NIST database, containing additional upper- and lower-case letters, called EMNIST (extended MNIST).

MNIST perturbations: The seminal paper by LeCun et al. (1998) employed data augmentation using planar affine transformations including translation, scaling, squeezing, and shearing. Loosli et al. (2007) employed random elastic deformations to construct the Infinite MNIST dataset. Other MNIST variations include rotations and insertion of random and structured background (Larochelle et al., 2007), and Tieleman (2013) applied spatial affine transformations and provided ground-truth transformation parameters.

MNIST format: Due to the ubiquity of the MNIST dataset in machine learning research and the resulting multitude of compatible model architectures available, it is appealing to release new datasets in the same format (28×28, 8-bit grayscale images). One such effort is Fashion-MNIST (Xiao et al., 2017), containing images of clothing articles from ten distinct classes, adapted from an online shopping catalogue. Another example is notMNIST (Bulatov, 2011), a dataset of character glyphs for letters 'A'–'J' (also ten classes), in a challengingly diverse collection of typefaces.

Annotated datasets: Computer vision datasets that are popular for evaluating disentanglement of learned latent factors of variation include those from Paysan et al. (2009) and Aubry et al. (2014). They contain 2D renderings of 3D faces and chairs, respectively, with ground-truth pose parameters (azimuth, elevation) and lighting conditions (faces only). A further initiative in that direction is the dSprites dataset (Matthey et al., 2017), which consists of binary images containing three types of shapes with varying location, orientation and size. The availability of the ground-truth values of such attributes has motivated the accelerated adoption of these datasets in the evaluation of representation learning algorithms.

1.3 RELATED WORK: QUANTITATIVE EVALUATION

Evaluation of representation learning is a notoriously challenging task and remains an open research problem. Numerous solutions have been proposed, with many of the earlier ones focusing on the test log-likelihood under the model (Kingma and Welling, 2013) or, for likelihood-free models, under a kernel density estimate (KDE) of generated samples (Goodfellow et al., 2014; Makhzani et al., 2015)—both of which were later shown not to be reliable proxies for the true model likelihood (Theis et al., 2016).

Another perspective for evaluation of generative models of images is the visual fidelity of their samples to the training data, which would normally require manual inspection. To address this issue, a successful family of metrics has been proposed, based on visual features extracted by the Inception network (Szegedy et al., 2016).
The original Inception score (Salimans et al., 2016) relies on the 'crispness' of class predictions, whereas the Fréchet Inception distance (FID) (Heusel et al., 2017) and the kernel Inception distance (KID) (Bińkowski et al., 2018) statistically compare high-level representations instead of the final network outputs.

Although the approaches above can reveal vague signs of mode collapse, it may be useful to diagnose this phenomenon on its own. With this objective, Arora et al. (2018) proposed to estimate the support of the learned distribution (assumed discrete) using the birthday paradox test, by counting pairs of visual duplicates among model samples. Unfortunately, the adoption of this technique is hindered by its reliance on manual visual inspection to flag identical images.

There have been several attempts at quantifying representation disentanglement performance. For example, Higgins et al. (2017) proposed to use the accuracy of a simple classifier trained to predict which factor of variation was held fixed in a simulated dataset. There exist further information-theoretic approaches, involving the KL divergence contribution from each latent dimension (Dupont, 2018) or their mutual information with each known generative factor (Chen et al., 2018). Yet another method, explored in Kumar et al. (2018), is based on the predictive accuracy of each latent variable to each generative factor (continuous or discrete).

2 MORPHOMETRY

Meaningful morphometrics are instrumental in characterising distributions of rasterised shapes, such as MNIST digits, and can be useful as additional data for downstream learning tasks. We begin this section by describing the image processing pipeline employed for extracting the metrics and for applying perturbations (Section 3), followed by details on the computation of each measurement.

2.1 PROCESSING PIPELINE

The original 28×28 resolution of the MNIST images is generally not high enough to enable satisfactory morphological processing: stroke properties (e.g. length, thickness) measured directly on the binarised images would likely be inaccurate and heavily quantised. To mitigate this issue and enable sub-pixel accuracy in the measurements, we propose to use the following processing steps:

1. upscale (e.g. ×4, to 112×112)¹;
2. binarise (e.g. threshold ≥ 128);
3. compute the Euclidean distance transform (EDT) from the boundaries;
4. skeletonise (medial axis, i.e. ridges of the EDT);
5. apply perturbation (cf. Section 3); and
6. downscale to the original resolution.

We illustrate the pipeline in Fig. 2. The binary high-resolution digits have smooth boundaries and faithfully capture subtle variations in contour shape and stroke thickness that are only vaguely discernible in the low-resolution images. Additionally, note how the final downscaled image is almost indistinguishable from the original. All morphometric attributes described below are calculated for each digit after applying steps 1–4 of this pipeline (a code sketch follows below). The distributions for the plain MNIST training set are plotted in Fig. 3, and the distributions after applying each type of perturbation can be found in Appendix A.

¹Up- and downscaling by a factor of f are done with bicubic interpolation and Gaussian smoothing (bandwidth σ = 2f/6), following scikit-image defaults (van der Walt et al., 2014).
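A minimal sketch of steps 1–4 using scikit-image follows; the function name is ours, the threshold follows the text, and the remaining details (e.g. omitting the Gaussian smoothing of the footnote) are simplifying assumptions:

```python
import numpy as np
from skimage.transform import rescale
from skimage.morphology import medial_axis

def morphometry_preprocess(img28):
    """Steps 1-4 of the pipeline on a 28x28 uint8 MNIST digit: upscale x4
    with bicubic interpolation, binarise at threshold >= 128, and compute
    the medial-axis skeleton together with the Euclidean distance transform."""
    hires = rescale(img28.astype(float) / 255.0, 4, order=3)  # bicubic upscale
    binary = hires >= 0.5                                     # 128/255 in 8-bit terms
    skeleton, edt = medial_axis(binary, return_distance=True)
    return binary, skeleton, edt
```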
2.2 STROKE LENGTH

Here we approximate the trace of the pen tip, as a digit was being written, by the computed morphological skeleton. In this light, the total length of the skeleton is an estimate of the length of the pen stroke, which in turn is a measure of shape complexity. It can be computed in a single pass by accumulating the Euclidean distance of each skeleton pixel to its immediate neighbours, taking care to only count the individual contributions once. This approach is more robust against rotations than a naïve estimate obtained by simply counting the pixels.

2.3 STROKE THICKNESS

A prominent factor of style variation in the MNIST digits is the overall thickness of the strokes, due both to legitimate differences in pen thickness and force applied, and to the rescaling of the original NIST images by different factors. We estimate it by exploiting the computed distance transform. By virtue of how the image skeleton is computed, its pixels are approximately equidistant to the nearest boundaries; therefore we take twice the mean value of the EDT over all skeleton pixels as our global estimate (see the sketch at the end of this section).

2.4 SLANT

The extent by which handwritten symbols lean right or left (forward and backward slant, respectively) is a further notorious and quantifiable dimension of handwriting style. It introduces so much variation in the appearance of characters in images that it is common practice in OCR systems to 'deslant' them, in an attempt to reduce within-class variance (LeCun et al., 1998; Teow and Loe, 2002). We adapt the referred deslanting methodology to describe the slant angle of the handwritten digits. After estimating the second-order image moments, we define the slant based on the horizontal shear:

α = arctan( −∑_{i,j} x_ij (i − ī)(j − j̄) / ∑_{i,j} x_ij (i − ī)² ),   (1)

where x_ij is the intensity of pixel (i, j), and (ī, j̄) are the centroid coordinates. The minus sign ensures that positive and negative values correspond to forward and backward slant, respectively.

2.5 WIDTH AND HEIGHT

It is useful to measure other general shape attributes, such as width, height, and aspect ratio, which also present substantial variation related to personal handwriting style.² To this end, we propose to fit a bounding parallelogram to each digit, with horizontal and slanted sides (cf. Fig. 1). We sweep the image top-to-bottom with a horizontal boundary to compute a vertical marginal cumulative distribution function (CDF), and likewise left-to-right with a slanted boundary for a horizontal marginal CDF, with angle α as computed above. The bounds are then chosen based on equal-tailed intervals containing a given proportion of the image mass—98% in both directions (1% from each side) proved accurate and robust in our experiments.

²Note that little variation in height is expected, since the original handwritten digits were scaled to fit a 20×20 box (LeCun et al., 1998). Nevertheless, a minority of digits were originally wider than they were tall, which explains the long tails in the distribution of heights (Fig. 3).
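Given the outputs of the preprocessing sketch above, the thickness (Section 2.3) and slant (Equation 1) measurements reduce to a few lines; a sketch under our own naming conventions:

```python
import numpy as np

def mean_thickness(skeleton, edt):
    """Twice the mean distance-to-boundary over skeleton pixels (Section 2.3)."""
    return 2.0 * edt[skeleton].mean()

def slant(img):
    """Slant angle from second-order image moments (Equation 1), applied to
    the intensity image; i indexes rows, j indexes columns, and positive
    values correspond to forward (rightward) slant."""
    i, j = np.indices(img.shape)
    mass = img.sum()
    i_bar = (img * i).sum() / mass
    j_bar = (img * j).sum() / mass
    num = (img * (i - i_bar) * (j - j_bar)).sum()
    den = (img * (i - i_bar) ** 2).sum()
    return np.arctan2(-num, den)
```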
3 PERTURBATIONS

As discussed in Section 1, we bring forward a number of morphological perturbations for MNIST digits, to enable interesting applications and experimentation. In this section, we detail these parametrisable transformations, categorised as global or local.

3.1 GLOBAL: THINNING AND THICKENING

The first pair of transformations we present is based on simple morphological operations: the binarised image of a digit is dilated or eroded with a circular structuring element. Its radius is set proportionally to the estimated stroke thickness (Section 2.3), so that the overall thickness of each digit will decrease or increase by an approximately fixed factor (here, −70% and +100%; see Figs. B.1 and B.2). Since there is substantial thickness variability in the original MNIST data (cf. Fig. 3) and most thinned and thickened digits look very plausible, we believe that these perturbations can constitute a powerful form of data augmentation for training. For the same reason, we have not included these perturbations in the abnormality detection experiments (Appendix D).

3.2 LOCAL: SWELLING

In addition to the global transformations above, we introduce local perturbations with variable location and extent, which are harder to detect automatically. Given a radius R, a centre location r₀ and a strength parameter γ > 1, the coordinates r of pixels within distance R of r₀ are nonlinearly warped according to a radial power transform:

r ↦ r₀ + (r − r₀) (‖r − r₀‖ / R)^{γ−1},   (2)

leaving the remaining portions of the image untouched and resampling with bicubic interpolation (a code sketch of this warp is given at the end of this section). In the experiments and released dataset, we set γ = 7 and R = 3√θ/2, where θ is the estimated stroke thickness. Unlike simple linear scaling with θ, this choice for R produces noticeable but not exaggerated effects across the thickness range observed in the dataset (cf. Fig. B.3). The centre location, r₀, is picked uniformly at random from the pixels along the estimated skeleton.

3.3 LOCAL: FRACTURES

We describe the proposed procedure for adding fractures to an MNIST digit, where we define a fracture as a break in the continuity of a pen stroke. Because single fractures can in many cases be easily mistaken for true gaps between strokes, we add multiple fractures to each affected digit. When selecting the location for a fracture, we attempt to avoid getting too close to stroke tips (points on the skeleton with a single neighbour) or fork points (more than two neighbours). This is achieved by sampling only among those skeleton pixels above a certain distance to these detected points. In addition, we would like fractures to be transversal to the pen strokes. Local orientation is determined based on second-order moments of the skeleton inside a window centred at the chosen location, and the length of the fracture is estimated from the boundary EDT. Finally, the fracture is drawn onto the high-resolution binary image with a circular brush along the estimated normal. In practice, we found that adding three fractures with 1.5 px thickness, 2 px minimum distance to tips and forks, and an angle-estimation window of 5×5 px² ('px' as measured in the low-resolution image) produces detectable but not too obvious perturbations (see Fig. B.4). We also extend the lines on both ends by 0.5 px to add some tolerance.
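The swelling warp of Equation 2 can be implemented by passing the radial power transform as the inverse map of a resampler. A sketch with scikit-image; the function name, parameter names, and the (col, row) centre convention are our assumptions:

```python
import numpy as np
from skimage.transform import warp

def swell(img, centre, radius, gamma=7.0):
    """Local swelling (Equation 2), used as the inverse map for resampling:
    an output pixel at r samples the input at
    r0 + (r - r0) * (||r - r0|| / R)^(gamma - 1) whenever ||r - r0|| < R."""
    r0 = np.asarray(centre, dtype=float)  # (col, row), as expected by warp

    def inverse_map(coords):
        offset = coords - r0              # coords: (M, 2) output coordinates
        dist = np.linalg.norm(offset, axis=1)
        factor = np.ones_like(dist)
        inside = dist < radius
        factor[inside] = (dist[inside] / radius) ** (gamma - 1)
        return r0 + offset * factor[:, None]

    return warp(img, inverse_map, order=3)  # bicubic resampling
```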
4 EVALUATION CASE STUDIES

In this section, we demonstrate potential uses of the proposed framework: using morphometrics to characterise the distribution of samples from generative models and finding associations between learned latent representations and morphometric attributes. In addition, we exemplify in Appendix D a variety of supervised tasks on the MNIST dataset augmented with perturbations.

4.1 SAMPLE DIVERSITY

Here we aim to illustrate ways in which the proposed MNIST morphometrics may be used to visualise distributions learned by generative models and to quantify their agreement with the true data distribution in terms of these semantic attributes. We also believe that extracting such measurements from model samples is a step toward diagnosing the issue of mode collapse. We exemplify this scenario with a vanilla GAN (Goodfellow et al., 2014) and a β-VAE (Higgins et al., 2017), both with generator (resp. decoder) and discriminator architectures as used in the MNIST experiments in Chen et al. (2016), and with an encoder mirroring the decoder. We train a β-VAE with β = 4 and a GAN, both with 64-dimensional latent spaces. To explore the behaviour of a much less expressive model, we additionally train a GAN with only two latent dimensions.

Visualisation: Figure 4 illustrates the morphometric distributions of the plain MNIST test images and of 10,000 samples from each of these three models. As can be seen, morphometrics provide interpretable low-dimensional statistics which allow comparing distributions learned by generative models between each other and with true datasets. While Figs. 4b and 4c show model samples roughly as diverse as the true images, the samples from the low-dimensional GAN in Fig. 4d seem concentrated on certain regions, covering a distribution that is less faithful to the true one in Fig. 4a.

Statistical comparison: We argue that in this lower-dimensional space of morphometrics it is possible to statistically compare the distributions, since this was shown not to be effective directly in image space (e.g. Theis et al., 2016). To this end, we propose to use kernel two-sample tests based on the maximum mean discrepancy (MMD) between morphometrics of the test data and of each of the sample distributions. Here, we performed the linear-time asymptotic test described in Gretton et al. (2012, §6) (details and further considerations in Appendix C; a code sketch follows below). The test results in Table 1 seem to confirm the mismatch of the low-dimensional GAN's samples, whereas the β-VAE and the larger GAN do not show a significant departure from the data distribution.

Finding replicas: One potentially fruitful suggestion would be to use a variant of hierarchical agglomerative clustering on sample morphometric attributes (e.g. using standardised Euclidean distance, or other suitable metrics). With a low enough distance threshold, it would be possible to identify groups of near-replicas, the abundance of which would signify mode collapse. Alternatively, this could be applicable as a heuristic in the birthday paradox test for estimating the support of the learned distribution (Arora et al., 2018).
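The linear-time MMD test used in the statistical comparison above admits a short implementation. A sketch (bandwidth selection as in Appendix C is omitted, and the function name is ours):

```python
import numpy as np
from scipy.stats import norm

def linear_time_mmd_test(X, Y, bw):
    """One-sided linear-time MMD^2 test (Gretton et al., 2012, Section 6)
    between equally sized samples of morphometric vectors X and Y, with a
    Gaussian kernel of bandwidth bw."""
    k = lambda a, b: np.exp(-np.sum((a - b) ** 2, axis=1) / (2 * bw ** 2))
    m = (min(len(X), len(Y)) // 2) * 2
    x1, x2, y1, y2 = X[0:m:2], X[1:m:2], Y[0:m:2], Y[1:m:2]
    h = k(x1, x2) + k(y1, y2) - k(x1, y2) - k(x2, y1)
    mmd2 = h.mean()
    z = mmd2 / (h.std(ddof=1) / np.sqrt(len(h)))  # asymptotically N(0, 1) under H0
    return mmd2, 1.0 - norm.cdf(z)                # statistic and p-value
```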
4.2 DISENTANGLEMENT

In this experiment, we demonstrate that: (a) standard MNIST can be augmented with morphometric attributes to quantitatively study representations computed by an inference model (as is already possible with e.g. dSprites and 3D faces); (b) we can measure shape attributes of samples to assess disentanglement of a generative model, which is unprecedented to the best of our knowledge; and (c) this analysis can also diagnose when a model unexpectedly fails to learn a known aspect of the data.

Methodology: We take MAP estimates of the latent codes for each image (i.e. the maximal logit for categorical codes and the mean for continuous codes), as predicted by the variational recognition network. Using an approach related to the disentanglement measure introduced in Kumar et al. (2018), we study the correlation structures between known generative factors and latent codes learned by an InfoGAN. Specifically, we compute the partial correlation between each latent code variable and each morphometric attribute, controlling for the variation in the remaining latent variables (disregarding the noise vector).³ As opposed to the simple correlation, this technique allows us to study the net first-order effect of each latent code, all else being equal (a code sketch of this computation is given at the end of this section). Models were trained for 20 epochs using 64 images per batch, with no hyperparameter tuning. We emphasize that our goal was to illustrate how the proposed morphometrics can serve as tools to better understand whether models behave as intended, and not to optimally train the models in each scenario.

³For the categorical code, c₁, we take a single binary dummy variable for each category, c₁^(k), while controlling only for the remaining codes (c₂, c₃, etc.) to avoid multicollinearity.

Inferential disentanglement: To illustrate how this methodology can be applied in practice to assess disentanglement, we consider two settings. The first is the same as in the MNIST experiment from Chen et al. (2016), with a 10-way categorical and two continuous latent codes, trained and evaluated on the plain MNIST digits, which we will refer to as INFOGAN-A. The second setting was designed to investigate whether the model could disentangle the concept of thickness, by including an additional continuous latent code and training on a dataset with exaggerated thickness variations. We constructed this dataset by randomly interleaving plain, thinned and thickened digit images in equal proportions. Since the perturbations were applied completely at random, we expect a trained generative model to identify that thickness should be largely independent of the other morphological attributes. We refer to this set-up as INFOGAN-B. Table 2 summarises the different experimental settings, for reference.

In Fig. 5a, we see that INFOGAN-A learned to encode slant mostly in c₃, while c₁^(8) clearly relates to the '1' class (much narrower digit shape and shorter pen stroke; cf. Fig. 3). Figure 5b quantitatively confirms the hypothesis that INFOGAN-B's recognition network would learn to separate slant and thickness (in c₄ and c₃, resp.), the most prominent factors of style variation in this dataset. Interestingly, it shows that c₃ also associates with height, as thicker digits tend to be taller.

Generative disentanglement: The evaluation methodology described above is useful to investigate the behaviour of the inference direction of a model, and can readily be used with datasets which include ground-truth generative factor annotations. On the other hand, unless we trust that the inference approximation is highly accurate, this tells us little about the generative expressiveness of the model. This is where computed metrics truly show their potential: we can measure generated samples, and see how their attributes relate to the latent variables used to create them. Figure 6 shows results for a similar analysis to Fig. 5, but now evaluated on samples from that model. As the tables are mostly indistinguishable, we may argue that in this case the inference and generator networks have learned to consistently encode and decode the digit shape attributes. As further illustration, Fig. 7 displays traversals of the latent space, obtained by varying a subset of the latent variables while holding the remaining ones (including noise) constant. With these examples, we are able to qualitatively verify the quantitative results in Fig. 6.
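The partial-correlation analysis described above amounts to correlating regression residuals. A minimal sketch (our own function, not the released code):

```python
import numpy as np

def partial_correlation(code, attribute, controls):
    """Partial correlation between one latent code and one morphometric
    attribute, controlling for the remaining codes: regress both on
    `controls` (with intercept) and correlate the residuals."""
    Z = np.column_stack([np.ones(len(code)), controls])
    resid = lambda v: v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]
    rc, ra = resid(code), resid(attribute)
    return float(rc @ ra / np.sqrt((rc @ rc) * (ra @ ra)))
```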
Note that, until now, visual inspection was typically the only means of evaluating disentanglement and expressiveness of the generative direction of image models (e.g. Chen et al., 2016; Dupont, 2018). Diagnosing failure: We also attempted to detect whether an InfoGAN had learned to discover local perturbations (swelling and fractures). To this end, we extended the model formulation with additional Bernoulli latent codes, which would hopefully learn to encode presence/absence of each type of local perturbation. The model investigated here, dubbed INFOGAN-C (cf. Table 2), had a 10-way categorical, two continuous and two binary codes, and was trained with a dataset of plain, swollen and fractured digits (randomly mixed as above). Again via inferential partial correlation analysis—now including ground-truth perturbation annotations—we can quantitatively verify that this particular model instance was unable to meaningfully capture the perturbations (Fig. 8, bottom-right block). In fact, it appears that the addition of the binary variables did not lead to more expressive representations in this case, even impairing the disentanglement of the categorical variables, if compared to Figs. 5a and 5b, for example. 5 CONCLUSION With Morpho-MNIST we provide a number of mechanisms to quantitatively assess representation learning with respect to measurable factors of variation in the data. We believe that this is an important asset for future research on generative models, and we would like to emphasize that the proposed morphometrics can be used post hoc to evaluate already trained models, potentially revealing novel insights and interesting observations. A similar morphometry approach could be used with other datasets such as dSprites, e.g. estimating shape location and size, number of objects/connected components. Perhaps some generic image metrics may be useful for analysis on other datasets, e.g. relating to sharpness or colour diversity, or we could even consider using the output of object detectors (analogously to the Inception-based scores; e.g. number/class of objects, bounding boxes etc.). In future work we plan to include additional perturbations, for example, mimicking imaging artefacts commonly observed in medical imaging modalities to add further complexity and realism.
A MORPHOMETRICS OF PLAIN AND PERTURBED DATASETS
B PERTURBATION EXAMPLES
C MMD DETAILS
We employed a Gaussian product kernel with bandwidths derived from Scott’s rule, analogously to the KDE plots in Fig. 4. Scott’s rule of thumb defines the bandwidth for a density estimation kernel as $N^{-1/(D+4)}$ times the standard deviation in each dimension, where N and D denote sample size and number of dimensions (Scott, 1992, Eq. (6.42)). We determine the KDE bandwidths separately for real and sample data, then add their squares to obtain the squared bandwidth of the MMD’s Gaussian kernel, as it corresponds to the convolution of the density estimation kernels chosen for each set of data. See Gretton et al. (2012, §3.3.1) for further details on the relation between MMD and the $L_2$ distance of kernel density estimates. Whereas the bandwidth heuristic used here is fairly crude, much more sophisticated kernel selection procedures are available, e.g. by explicitly optimising the test power (Sutherland et al., 2017). A further analysis tool in a similar vein would be to apply a relative MMD similarity test (Bounliphone et al., 2016), to rank trained models based on sample fidelity. 
It would also be possible to adopt a model criticism methodology based on the MMD witness function (Lloyd and Ghahramani, 2015), to identify over- and under-represented regions in morphometric space (and corresponding generated image exemplars could be inspected as well). D SUPERVISED TASKS Although the driving motivation for introducing Morpho-MNIST has been the lack of means for quantitative evaluation of generative models, the proposed framework may also be a valuable resource in the context of supervised learning. We conducted several experiments to demonstrate potential applications of these datasets with increased difficulty due to the injected perturbations: standard digit recognition, supervised abnormality detection, and thickness regression. Note that such experiments can later serve as baselines for unsupervised tasks such as outlier detection and domain adaptation. We evaluated four different models: k-nearest-neighbours (kNN) using k = 5 neighbours and $\ell_1$ distance weighting, a support vector machine (SVM) with polynomial kernel and penalty parameter C = 100, a multi-layer perceptron (MLP) with 784–200–200–L architecture (L: number of outputs), and a LeNet-5 convolutional neural network (LeCun et al., 1998). Here, we use the same datasets as in the disentanglement experiments (Section 4.2): plain digits (PLAIN), plain mixed with thinned and thickened digits (GLOBAL), and plain mixed with swollen and fractured digits (LOCAL). For digit recognition, each model is trained once on PLAIN, then tested on both PLAIN and LOCAL test datasets, to investigate the effect of domain shift. All methods suffer a drop in test accuracy on LOCAL (Table 3, first two columns). kNN appears to be the most robust to the local perturbations, perhaps because they affect only a few pixels, leaving the image distance between neighbours largely unchanged. On the other hand, local patterns that LeNet-5 relies on may have changed considerably. The abnormality detection task, on the LOCAL dataset, is to predict whether a digit is normal or perturbed (swollen or fractured)—compare with lesion detection in medical scans. Table 3 (third column) indicates that LeNet-5 is able to detect abnormalities with high accuracy, likely thanks to local invariances of its convolutional architecture. Note that all scores (especially the simpler models’) are lower than digit classification accuracy, revealing the (possibly surprising) higher difficulty of this binary classification problem compared to the ten-class digit classification. Finally, we also constructed a regression task for digit thickness using the GLOBAL dataset, mimicking medical imaging tasks such as estimating brain age from cortical grey matter maps. Since this is a non-trivial task, requiring some awareness of local geometry, it is perhaps unsurprising that the convolutional model outperformed the others, which rely on holistic features (Table 3, last column).
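To make the Appendix D set-up more tangible, the following is a minimal scikit-learn sketch of the three non-convolutional baselines; the random arrays are hypothetical stand-ins for the actual PLAIN and LOCAL splits, and the LeNet-5 baseline is omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder data: flattened 28x28 images in [0, 1]; in practice these
# would be loaded from the released PLAIN and LOCAL Morpho-MNIST files.
X_train, y_train = rng.random((1000, 784)), rng.integers(0, 10, 1000)
X_test_plain, y_test_plain = rng.random((200, 784)), rng.integers(0, 10, 200)
X_test_local, y_test_local = rng.random((200, 784)), rng.integers(0, 10, 200)

models = {  # hyperparameters as described above
    "kNN": KNeighborsClassifier(n_neighbors=5, weights="distance", p=1),  # p=1: l1 metric
    "SVM": SVC(kernel="poly", C=100),
    "MLP": MLPClassifier(hidden_layer_sizes=(200, 200), max_iter=50),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)  # train on PLAIN only
    print(f"{name}: plain={clf.score(X_test_plain, y_test_plain):.3f}, "
          f"local={clf.score(X_test_local, y_test_local):.3f}")  # domain shift
```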
1. What are the limitations of the paper regarding its experimental evaluation and technical aspects?
2. How does the reviewer assess the assumption made by the authors about the latent space of generative models?
3. What are the challenges in studying the properties of a generative model on datasets like CIFAR or real-world images?
4. Is there a guarantee that a GAN model learning good representations in one dataset will generalize well to other datasets?
Review
This paper discusses the problem of evaluating and diagnosing the representations learnt using a generative model. This is a very important and necessary problem. However, this paper is lacking in terms of experimental evaluation and has some technical flaws.
1. Morphological properties deal only with the "shape" properties of the image object. However, when the entire image is fed to the generative model, it learns multiple properties from the image apart from shape, such as texture and color. Additionally, there are a lot of low-level pixel relations that the model learns in order to fit the distribution of the given images. However, here the authors have assumed that the latent space of the generative models is influenced only by the morphological properties of the image, which is wrong. Latent-space features could be affected by the color or texture of the image as well.
2. Extracting morphological properties of the image is straightforward for MNIST-like objects. However, it becomes really difficult for other datasets such as CIFAR or real-world images. Studying the properties of a generative model on such datasets is very challenging, and the authors have not added a discussion around that.
3. Now, assuming that my GAN model has learnt good representations on the Morpho-MNIST dataset, is it guaranteed to learn good representations on other datasets as well? There is no guarantee on the generalizability or extensibility of the work.
ICLR
Title Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning Abstract Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery. However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations. To address this issue, we introduce Morpho-MNIST, a framework that aims to answer: “to what extent has my model learned to represent specific factors of variation in the data?” We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity. We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation. 1 INTRODUCTION A key factor for progress in machine learning has been the availability of well curated, easy-to-use, standardised and sufficiently large annotated datasets for benchmarking different algorithms and models. This has led to major advances in speech recognition, computer vision, and natural language processing. A commonality between these tasks is their natural formulation as supervised learning tasks, wherein performance can be measured in terms of accuracy on a test set. The general problem of representation learning (i.e. to reveal latent structure in data) is more difficult to assess due to the lack of suitable benchmarks. Although the field is very active, with many recently proposed techniques such as probabilistic autoencoders and adversarial learning, it is less clear where the field stands in terms of progress or which approaches are more expressive for specific tasks. The lack of reproducible ways to quantify performance has led to subjective means of evaluation: visualisation techniques have been used to show low-dimensional projections of the latent space, and visual inspection of generated or reconstructed samples is popular for providing subjective measures of descriptiveness. On the other hand, the quality of sampled images generally tells us little about how well the learned representations capture known factors of variation in the training distribution. In order to advance progress, the availability of tools for objective assessment of representation learning methods seems essential yet lacking. This paper introduces Morpho-MNIST, a collection of shape metrics and perturbations, in a step towards quantitative assessment of representation learning. We build upon one of the most popular machine learning benchmarks, MNIST, which despite its shortcomings remains widely used. While MNIST was originally constructed to facilitate research in image classification, in the form of recognising handwritten digits (LeCun et al., 1998), it has found its use in representation learning, for example, to demonstrate that the learned latent space yields clusters consistent with digit labels. Methods aiming to disentangle the latent space claim success if individual latent variables capture specific style variations (e.g. stroke thickness, sidewards leaning digits and other visual characteristics). 
The main appeal of selecting MNIST as a benchmark for representation learning is that, while manifesting complex interactions between pixel intensities and underlying shapes, it has well-understood and easily measurable factors of variation. More generally, MNIST remains popular in practice due to several factors: it allows reproducible comparisons with previous results reported in the literature; the dataset is sufficiently large for its complexity and consists of small, two-dimensional greyscale images defining a tractable ten-class classification problem; computation and memory requirements are low; most popular deep learning frameworks and libraries offer tutorials using MNIST, which makes it straightforward for new researchers to enter the field and to experiment with new ideas and explore the latest developments. We take advantage of these qualities and extend MNIST in multiple ways, as summarised in the following. 1.1 CONTRIBUTIONS Our aim is to bridge the gap between methodology-focused research and critical real-world applications that could benefit from latest machine learning methods. As we preserve the general properties of MNIST—such as image size, file format, numbers of training and test images, and the original ten-class classification problem—we believe this new quantitative framework for assessing representation learning will experience widespread use in the community and may inspire further extensions facilitated by a publicly available Morpho-MNIST code base. Morphometrics: We propose to describe true and generated digit images in terms of measurable shape attributes. These include stroke thickness and length, and the width, height, and slant of digits (cf. Fig. 1, left). Whereas some of these properties have been analysed qualitatively in previous work, we demonstrate that objectively quantifying each of them allows us to identify the role of inferred representations. Moreover, these tools can be used to measure model samples, enabling assessment of generative performance with respect to sample diversity (Section 4.1) and disentanglement of latent variables (Section 4.2). These measurements can be directly employed to re-evaluate existing models and may be added retrospectively to previous experiments involving the original MNIST dataset. Adoption of our morphometric analysis may provide new insights into the effectiveness of representation learning methods in terms of revealing meaningful latent structures. Furthermore, for other datasets it suffices to design the relevant scalar metrics and include them in the very same evaluation framework. Perturbations: We introduce a set of parametrisable global and local perturbations, inspired by natural and pathological variability in medical images. Global changes involve overall thinning and thickening of digits, while local changes include both swelling and fractures (see examples on the right in Fig. 1 and many more in Appendix B). Injecting these perturbations into the dataset adds a new type of complexity to the data manifold and opens up a variety of interesting applications. The proposed perturbations are designed to enable a wide range of new studies and applications for both supervised and unsupervised tasks. Detection of ‘abnormalities’ (i.e. local perturbations) is an evident application, although more challenging tasks can also be defined, such as classification from noisy/corrupted data, domain adaptation, localisation of perturbations, characterising semantics of learned latent representations, and more. 
We explore a few supplementary examples of supervised tasks in Appendix D. 1.2 RELATED WORK: DATASETS In this section, we provide an overview of some datasets that are related to MNIST, by either sharing its original source content, containing transformations of the original MNIST images or being distributed in the same format for easy replacement. We also mention a few prevalent datasets of images with generative factor annotations, akin to the morphometrics proposed in this paper. NIST datasets: The MNIST (modified NIST) dataset (LeCun et al., 1998) was constructed from handwritten digits in NIST Special Databases 1 and 3, now released as Special Database 19 (Grother and Hanaoka, 2016). Cohen et al. (2017) generated a much larger dataset based on the same NIST database, containing additional upper- and lower-case letters, called EMNIST (extended MNIST). MNIST perturbations: The seminal paper by LeCun et al. (1998) employed data augmentation using planar affine transformations including translation, scaling, squeezing, and shearing. Loosli et al. (2007) employed random elastic deformations to construct the Infinite MNIST dataset. Other MNIST variations include rotations and insertion of random and structured background (Larochelle et al., 2007), and Tieleman (2013) applied spatial affine transformations and provided ground-truth transformation parameters. MNIST format: Due to the ubiquity of the MNIST dataset in machine learning research and the resulting multitude of compatible model architectures available, it is appealing to release new datasets in the same format (28×28, 8-bit grayscale images). One such effort is Fashion-MNIST (Xiao et al., 2017), containing images of clothing articles from ten distinct classes, adapted from an online shopping catalogue. Another example is notMNIST (Bulatov, 2011), a dataset of character glyphs for letters ‘A’–‘J’ (also ten classes), in a challengingly diverse collection of typefaces. Annotated datasets: Computer vision datasets that are popular for evaluating disentanglement of learned latent factors of variation include those from Paysan et al. (2009) and Aubry et al. (2014). They contain 2D renderings of 3D faces and chairs, respectively, with ground-truth pose parameters (azimuth, elevation) and lighting conditions (faces only). A further initiative in that direction is the dSprites dataset (Matthey et al., 2017), which consists of binary images containing three types of shapes with varying location, orientation and size. The availability of the ground-truth values of such attributes has motivated the accelerated adoption of these datasets in the evaluation of representation learning algorithms. 1.3 RELATED WORK: QUANTITATIVE EVALUATION Evaluation of representation learning is a notoriously challenging task and remains an open research problem. Numerous solutions have been proposed, with many of the earlier ones focusing on the test log-likelihood under the model (Kingma and Welling, 2013) or, for likelihood-free models, under a kernel density estimate (KDE) of generated samples (Goodfellow et al., 2014; Makhzani et al., 2015)—which were shown not to be reliable proxies for the true model likelihood (Theis et al., 2016). Another perspective for evaluation of generative models of images is the visual fidelity of their samples to the training data, which would normally require manual inspection. To address this issue, a successful family of metrics has been proposed, based on visual features extracted by the Inception network (Szegedy et al., 2016). 
The original Inception score (Salimans et al., 2016) relies on the ‘crispness’ of class predictions, whereas the Fréchet Inception distance (FID) (Heusel et al., 2017) and the kernel Inception distance (KID) (Bińkowski et al., 2018) statistically compare high-level representations instead of the final network outputs. Although the approaches above can reveal vague signs of mode collapse, it may be useful to diagnose this phenomenon on its own. With this objective, Arora et al. (2018) proposed to estimate the support of the learned distribution (assumed discrete) using the birthday paradox test, by counting pairs of visual duplicates among model samples. Unfortunately, the adoption of this technique is hindered by its reliance on manual visual inspection to flag identical images. There have been several attempts at quantifying representation disentanglement performance. For example, Higgins et al. (2017) proposed to use the accuracy of a simple classifier trained to predict which factor of variation was held fixed in a simulated dataset. There exist further information-theoretic approaches, involving the KL divergence contribution from each latent dimension (Dupont, 2018) or their mutual information with each known generative factor (Chen et al., 2018). Yet another method, explored in Kumar et al. (2018), is based on the predictive accuracy of each latent variable for each generative factor (continuous or discrete). 2 MORPHOMETRY Meaningful morphometrics are instrumental in characterising distributions of rasterised shapes, such as MNIST digits, and can be useful as additional data for downstream learning tasks. We begin this section by describing the image processing pipeline employed for extracting the metrics and for applying perturbations (Section 3), followed by details on the computation of each measurement. 2.1 PROCESSING PIPELINE The original 28×28 resolution of the MNIST images is generally not high enough to enable satisfactory morphological processing: stroke properties (e.g. length, thickness) measured directly on the binarised images would likely be inaccurate and heavily quantised. To mitigate this issue and enable sub-pixel accuracy in the measurements, we propose to use the following processing steps:
1. upscale (e.g. ×4, to 112×112; see Footnote 1);
2. binarise (e.g. threshold ≥ 128);
3. compute the Euclidean distance transform (EDT) from boundaries;
4. skeletonise (medial axis, i.e. ridges of the EDT);
5. apply perturbation (cf. Section 3); and
6. downscale to the original resolution.
We illustrate the pipeline in Fig. 2. The binary high-resolution digits have smooth boundaries and faithfully capture subtle variations in contour shape and stroke thickness that are only vaguely discernible in the low-resolution images. Additionally, note how the final downscaled image is almost indistinguishable from the original. All morphometric attributes described below are calculated for each digit after applying steps 1–4 of this pipeline. The distributions for the plain MNIST training set are plotted in Fig. 3, and the distributions after applying each type of perturbation can be found in Appendix A.
Footnote 1: Up- and downscaling by a factor of f are done with bicubic interpolation and Gaussian smoothing (bandwidth σ = 2f/6), following scikit-image defaults (van der Walt et al., 2014).
2.2 STROKE LENGTH Here we approximate the trace of the pen tip, as a digit was being written, by the computed morphological skeleton. 
In this light, the total length of the skeleton is an estimate of the length of the pen stroke, which in turn is a measure of shape complexity. It can be computed in a single pass by accumulating the Euclidean distance of each skeleton pixel to its immediate neighbours, taking care to only count the individual contributions once. This approach is more robust against rotations than a naïve estimate by simply counting the pixels. 2.3 STROKE THICKNESS A prominent factor of style variation in the MNIST digits is the overall thickness of the strokes, due both to legitimate differences in pen thickness and force applied, and to the rescaling of the original NIST images by different factors. We estimate it by exploiting the computed distance transform. By virtue of how the image skeleton is computed, its pixels are approximately equidistant to the nearest boundaries, therefore we take twice the mean value of the EDT over all skeleton pixels as our global estimate. 2.4 SLANT The extent to which handwritten symbols lean right or left (forward and backward slant, respectively) is a further notorious and quantifiable dimension of handwriting style. It introduces so much variation in the appearance of characters in images that it is common practice in OCR systems to ‘deslant’ them, in an attempt to reduce within-class variance (LeCun et al., 1998; Teow and Loe, 2002). We adapt the referred deslanting methodology to describe the slant angle of the handwritten digits. After estimating the second-order image moments, we define the slant based on the horizontal shear:
$$\alpha = \arctan\left(-\frac{\sum_{i,j} x_{ij}\,(i - \bar{i})(j - \bar{j})}{\sum_{i,j} x_{ij}\,(i - \bar{i})^2}\right), \quad (1)$$
where $x_{ij}$ is the intensity of pixel $(i, j)$, and $(\bar{i}, \bar{j})$ are the centroid coordinates. The minus sign ensures that positive and negative values correspond to forward and backward slant, respectively. 2.5 WIDTH AND HEIGHT It is useful to measure other general shape attributes, such as width, height, and aspect ratio, which also present substantial variation related to personal handwriting style (see Footnote 2). To this end, we propose to fit a bounding parallelogram to each digit, with horizontal and slanted sides (cf. Fig. 1). We sweep the image top-to-bottom with a horizontal boundary to compute a vertical marginal cumulative distribution function (CDF), and likewise left-to-right with a slanted boundary for a horizontal marginal CDF, with angle α as computed above. The bounds are then chosen based on equal-tailed intervals containing a given proportion of the image mass—98% in both directions (1% from each side) proved accurate and robust in our experiments. 3 PERTURBATIONS As discussed in Section 1, we bring forward a number of morphological perturbations for MNIST digits, to enable interesting applications and experimentation. In this section, we detail these parametrisable transformations, categorised as global or local.
Footnote 2: Note that little variation in height is expected, since the original handwritten digits were scaled to fit a 20×20 box (LeCun et al., 1998). Nevertheless, a minority of digits were originally wider than they were tall, which explains the long tails in the distribution of heights (Fig. 3).
3.1 GLOBAL: THINNING AND THICKENING The first pair of transformations we present is based on simple morphological operations: the binarised image of a digit is dilated or eroded with a circular structuring element. 
Its radius is set proportionally to the estimated stroke thickness (Section 2.3), so that the overall thickness of each digit will decrease or increase by an approximately fixed factor (here, -70% and +100%; see Figs. B.1 and B.2). Since there is substantial thickness variability in the original MNIST data (cf. Fig. 3) and most thinned and thickened digits look very plausible, we believe that these perturbations can constitute a powerful form of data augmentation for training. For the same reason, we have not included these perturbations in the abnormality detection experiments (Appendix D). 3.2 LOCAL: SWELLING In addition to the global transformations above, we introduce local perturbations with variable location and extent, which are harder to detect automatically. Given a radius R, a centre location r0 and a strength parameter γ > 1, the coordinates r of pixels within distance R of r0 are nonlinearly warped according to a radial power transform:
$$\mathbf{r} \mapsto \mathbf{r}_0 + (\mathbf{r} - \mathbf{r}_0) \left(\frac{\|\mathbf{r} - \mathbf{r}_0\|}{R}\right)^{\gamma - 1}, \quad (2)$$
leaving the remaining portions of the image untouched and resampling with bicubic interpolation. In the experiments and released dataset, we set $\gamma = 7$ and $R = 3\sqrt{\theta}/2$, where θ is the thickness. Unlike simple linear scaling with θ, this choice for R produces noticeable but not exaggerated effects across the thickness range observed in the dataset (cf. Fig. B.3). The centre location, r0, is picked uniformly at random from the pixels along the estimated skeleton. 3.3 LOCAL: FRACTURES We describe the proposed procedure for adding fractures to an MNIST digit, where we define a fracture as a break in the continuity of a pen stroke. Because single fractures can in many cases be easily mistaken for true gaps between strokes, we add multiple fractures to each affected digit. When selecting the location for a fracture, we attempt to avoid getting too close to stroke tips (points on the skeleton with a single neighbour) or fork points (more than two neighbours). This is achieved by sampling only among those skeleton pixels above a certain distance to these detected points. In addition, we would like fractures to be transversal to the pen strokes. Local orientation is determined based on second-order moments of the skeleton inside a window centred at the chosen location, and the length of the fracture is estimated from the boundary EDT. Finally, the fracture is drawn onto the high-resolution binary image with a circular brush along the estimated normal. In practice, we found that adding three fractures with 1.5 px thickness, 2 px minimum distance to tips and forks and an angle window of 5×5 px² (‘px’ as measured in the low-resolution image) produces detectable but not too obvious perturbations (see Fig. B.4). We also extend the lines on both ends by 0.5 px to add some tolerance. 4 EVALUATION CASE STUDIES In this section, we demonstrate potential uses of the proposed framework: using morphometrics to characterise the distribution of samples from generative models and finding associations between learned latent representations and morphometric attributes. In addition, we exemplify in Appendix D a variety of supervised tasks on the MNIST dataset augmented with perturbations. 4.1 SAMPLE DIVERSITY Here we aim to illustrate ways in which the proposed MNIST morphometrics may be used to visualise distributions learned by generative models and to quantify their agreement with the true data distribution in terms of these semantic attributes. 
We also believe that extracting such measurements from model samples is a step toward diagnosing the issue of mode collapse. We exemplify this scenario with a vanilla GAN (Goodfellow et al., 2014) and a β-VAE (Higgins et al., 2017), both with generator (resp. decoder) and discriminator architecture as used in the MNIST experiments in Chen et al. (2016), and encoder mirroring the decoder. We train a β-VAE with β = 4 and a GAN, both with 64-dimensional latent spaces. To explore the behaviour of a much less expressive model, we additionally train a GAN with only two latent dimensions. Visualisation: Figure 4 illustrates the morphometric distributions of the plain MNIST test images and of 10,000 samples from each of these three models. As can be seen, morphometrics provide interpretable low-dimensional statistics which allow comparing the distributions learned by generative models with each other and with true datasets. While Figs. 4b and 4c show model samples roughly as diverse as the true images, the samples from the low-dimensional GAN in Fig. 4d seem concentrated on certain regions, covering a distribution that is less faithful to the true one in Fig. 4a. Statistical comparison: We argue that the distributions can be statistically compared in this lower-dimensional space of morphometrics, as doing so directly in image space has been shown not to be effective (e.g. Theis et al., 2016). To this end, we propose to use kernel two-sample tests based on maximum mean discrepancy (MMD) between morphometrics of the test data and of each of the sample distributions. Here, we performed the linear-time asymptotic test described in Gretton et al. (2012, §6) (details and further considerations in Appendix C). The test results in Table 1 seem to confirm the mismatch of the low-dimensional GAN’s samples, whereas the β-VAE and larger GAN do not show a significant departure from the data distribution. Finding replicas: One potentially fruitful suggestion would be to use a variant of hierarchical agglomerative clustering on sample morphometric attributes (e.g. using standardised Euclidean distance, or other suitable metrics). With a low enough distance threshold, it would be possible to identify groups of near-replicas, the abundance of which would signify mode collapse. Alternatively, this could be applicable as a heuristic in the birthday paradox test for estimating the support of the learned distribution (Arora et al., 2018). 4.2 DISENTANGLEMENT In this experiment, we demonstrate that: (a) standard MNIST can be augmented with morphometric attributes to quantitatively study representations computed by an inference model (as already possible with e.g. dSprites and 3D faces); (b) we can measure shape attributes of samples to assess disentanglement of a generative model, which, to the best of our knowledge, is unprecedented; and (c) this analysis can also diagnose when a model unexpectedly fails to learn a known aspect of the data. Methodology: We take MAP estimates of latent codes for each image (i.e. maximal logit for categorical codes and mean for continuous codes), as predicted by the variational recognition network. Using an approach related to the disentanglement measure introduced in Kumar et al. (2018), we study the correlation structures between known generative factors and latent codes learned by an InfoGAN. 
Specifically, we compute the partial correlation between each latent code variable and each morphometric attribute, controlling for the variation in the remaining latent variables (disregarding the noise vector; see Footnote 3). As opposed to the simple correlation, this technique allows us to study the net first-order effect of each latent code, all else being equal. Models were trained for 20 epochs using 64 images per batch, with no hyperparameter tuning. We emphasize that our goal was to illustrate how the proposed morphometrics can serve as tools to better understand whether the models behave as intended, not to train the models optimally in each scenario. Inferential disentanglement: To illustrate how this methodology can be applied in practice to assess disentanglement, we consider two settings. The first is the same as in the MNIST experiment from Chen et al. (2016), with a 10-way categorical and two continuous latent codes, trained and evaluated on the plain MNIST digits, which we will refer to as INFOGAN-A.
Footnote 3: For the categorical code, c1, we take a single binary dummy variable for each category, c1^(k), while controlling only for the remaining codes (c2, c3, etc.) to avoid multicollinearity.
The second setting was designed to investigate whether the model could disentangle the concept of thickness, by including an additional continuous latent code and training on a dataset with exaggerated thickness variations. We constructed this dataset by randomly interleaving plain, thinned and thickened digit images in equal proportions. Since the perturbations were applied completely at random, we expect a trained generative model to identify that thickness should be largely independent of the other morphological attributes. We refer to this set-up as INFOGAN-B. Table 2 summarises the different experimental settings, for reference. In Fig. 5a, we see that INFOGAN-A learned to encode slant mostly in c3, while c1^(8) clearly relates to the ‘1’ class (much narrower digit shape and shorter pen stroke; cf. Fig. 3). Figure 5b quantitatively confirms the hypothesis that INFOGAN-B’s recognition network would learn to separate slant and thickness (in c4 and c3, resp.), the most prominent factors of style variation in this dataset. Interestingly, it shows that c3 also associates with height, as thicker digits tend to be taller. Generative disentanglement: The evaluation methodology described above is useful to investigate the behaviour of the inference direction of a model, and can readily be used with datasets which include ground-truth generative factor annotations. On the other hand, unless we trust that the inference approximation is highly accurate, this tells us little about the generative expressiveness of the model. This is where computed metrics truly show their potential: we can measure generated samples, and see how their attributes relate to the latent variables used to create them. Figure 6 shows results for a similar analysis to Fig. 5, but now evaluated on samples from that model. As the tables are mostly indistinguishable, we may argue that in this case the inference and generator networks have learned to consistently encode and decode the digit shape attributes. As further illustration, Fig. 7 displays traversals of the latent space, obtained by varying a subset of the latent variables while holding the remaining ones (including noise) constant. With these examples, we are able to qualitatively verify the quantitative results in Fig. 6. 
Note that, until now, visual inspection was typically the only means of evaluating disentanglement and expressiveness of the generative direction of image models (e.g. Chen et al., 2016; Dupont, 2018). Diagnosing failure: We also attempted to detect whether an InfoGAN had learned to discover local perturbations (swelling and fractures). To this end, we extended the model formulation with additional Bernoulli latent codes, which would hopefully learn to encode presence/absence of each type of local perturbation. The model investigated here, dubbed INFOGAN-C (cf. Table 2), had a 10-way categorical, two continuous and two binary codes, and was trained with a dataset of plain, swollen and fractured digits (randomly mixed as above). Again via inferential partial correlation analysis—now including ground-truth perturbation annotations—we can quantitatively verify that this particular model instance was unable to meaningfully capture the perturbations (Fig. 8, bottom-right block). In fact, it appears that the addition of the binary variables did not lead to more expressive representations in this case, even impairing the disentanglement of the categorical variables, if compared to Figs. 5a and 5b, for example. 5 CONCLUSION With Morpho-MNIST we provide a number of mechanisms to quantitatively assess representation learning with respect to measurable factors of variation in the data. We believe that this is an important asset for future research on generative models, and we would like to emphasize that the proposed morphometrics can be used post hoc to evaluate already trained models, potentially revealing novel insights and interesting observations. A similar morphometry approach could be used with other datasets such as dSprites, e.g. estimating shape location and size, number of objects/connected components. Perhaps some generic image metrics may be useful for analysis on other datasets, e.g. relating to sharpness or colour diversity, or we could even consider using the output of object detectors (analogously to the Inception-based scores; e.g. number/class of objects, bounding boxes etc.). In future work we plan to include additional perturbations, for example, mimicking imaging artefacts commonly observed in medical imaging modalities to add further complexity and realism.
A MORPHOMETRICS OF PLAIN AND PERTURBED DATASETS
B PERTURBATION EXAMPLES
C MMD DETAILS
We employed a Gaussian product kernel with bandwidths derived from Scott’s rule, analogously to the KDE plots in Fig. 4. Scott’s rule of thumb defines the bandwidth for a density estimation kernel as $N^{-1/(D+4)}$ times the standard deviation in each dimension, where N and D denote sample size and number of dimensions (Scott, 1992, Eq. (6.42)). We determine the KDE bandwidths separately for real and sample data, then add their squares to obtain the squared bandwidth of the MMD’s Gaussian kernel, as it corresponds to the convolution of the density estimation kernels chosen for each set of data. See Gretton et al. (2012, §3.3.1) for further details on the relation between MMD and the $L_2$ distance of kernel density estimates. Whereas the bandwidth heuristic used here is fairly crude, much more sophisticated kernel selection procedures are available, e.g. by explicitly optimising the test power (Sutherland et al., 2017). A further analysis tool in a similar vein would be to apply a relative MMD similarity test (Bounliphone et al., 2016), to rank trained models based on sample fidelity. 
It would also be possible to adopt a model criticism methodology based on the MMD witness function (Lloyd and Ghahramani, 2015), to identify over- and under-represented regions in morphometric space (and corresponding generated image exemplars could be inspected as well). D SUPERVISED TASKS Although the driving motivation for introducing Morpho-MNIST has been the lack of means for quantitative evaluation of generative models, the proposed framework may also be a valuable resource in the context of supervised learning. We conducted several experiments to demonstrate potential applications of these datasets with increased difficulty due to the injected perturbations: standard digit recognition, supervised abnormality detection, and thickness regression. Note that such experiments can later serve as baselines for unsupervised tasks such as outlier detection and domain adaptation. We evaluated four different models: k-nearest-neighbours (kNN) using k = 5 neighbours and $\ell_1$ distance weighting, a support vector machine (SVM) with polynomial kernel and penalty parameter C = 100, a multi-layer perceptron (MLP) with 784–200–200–L architecture (L: number of outputs), and a LeNet-5 convolutional neural network (LeCun et al., 1998). Here, we use the same datasets as in the disentanglement experiments (Section 4.2): plain digits (PLAIN), plain mixed with thinned and thickened digits (GLOBAL), and plain mixed with swollen and fractured digits (LOCAL). For digit recognition, each model is trained once on PLAIN, then tested on both PLAIN and LOCAL test datasets, to investigate the effect of domain shift. All methods suffer a drop in test accuracy on LOCAL (Table 3, first two columns). kNN appears to be the most robust to the local perturbations, perhaps because they affect only a few pixels, leaving the image distance between neighbours largely unchanged. On the other hand, local patterns that LeNet-5 relies on may have changed considerably. The abnormality detection task, on the LOCAL dataset, is to predict whether a digit is normal or perturbed (swollen or fractured)—compare with lesion detection in medical scans. Table 3 (third column) indicates that LeNet-5 is able to detect abnormalities with high accuracy, likely thanks to local invariances of its convolutional architecture. Note that all scores (especially the simpler models’) are lower than digit classification accuracy, revealing the (possibly surprising) higher difficulty of this binary classification problem compared to the ten-class digit classification. Finally, we also constructed a regression task for digit thickness using the GLOBAL dataset, mimicking medical imaging tasks such as estimating brain age from cortical grey matter maps. Since this is a non-trivial task, requiring some awareness of local geometry, it is perhaps unsurprising that the convolutional model outperformed the others, which rely on holistic features (Table 3, last column).
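As a concrete sketch of the MMD procedure from Appendix C, the snippet below implements the linear-time MMD² estimate with a Gaussian product kernel and Scott's-rule bandwidths; the example data are hypothetical, and this is an illustrative reconstruction rather than the authors' released code.

```python
import numpy as np

def scott_bandwidth(X):
    # Scott's rule: N^(-1/(D+4)) times the per-dimension standard deviation.
    N, D = X.shape
    return N ** (-1.0 / (D + 4)) * X.std(axis=0)

def gaussian_kernel(A, B, bw):
    # Gaussian product kernel with per-dimension bandwidths bw.
    return np.exp(-0.5 * np.sum(((A - B) / bw) ** 2, axis=-1))

def linear_time_mmd2(X, Y):
    """Linear-time MMD^2 estimate (Gretton et al., 2012, Sec. 6) between
    two sets of morphometric vectors, e.g. test data vs. model samples."""
    n = (min(len(X), len(Y)) // 2) * 2  # even number of paired points
    X, Y = X[:n], Y[:n]
    # Combine the two KDE bandwidths by adding their squares, as in Appendix C.
    bw = np.sqrt(scott_bandwidth(X) ** 2 + scott_bandwidth(Y) ** 2)
    h = (gaussian_kernel(X[0::2], X[1::2], bw) + gaussian_kernel(Y[0::2], Y[1::2], bw)
         - gaussian_kernel(X[0::2], Y[1::2], bw) - gaussian_kernel(X[1::2], Y[0::2], bw))
    return h.mean()  # values far above zero indicate a distribution mismatch

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 6))        # e.g. six morphometric attributes
fake = rng.normal(size=(1000, 6)) + 0.2  # a slightly shifted "model"
print("linear-time MMD^2:", linear_time_mmd2(real, fake))
```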
1. What are the key contributions and novel aspects introduced by the paper in MNIST dataset analysis?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its application to natural image generation tasks?
3. Do you have any concerns regarding the correlation between MNIST generation and more complex natural image generation tasks?
4. How does the reviewer assess the effectiveness of the suggested criteria and perturbations in decreasing the dimension of data?
5. Are there any open questions or areas for future research regarding the evaluation of generative models on MNIST and other datasets?
Review
The authors present a set of criteria to categorize MNIST digits (e.g. slant, stroke length, ...) and a set of interesting perturbations (swelling, fractures, ...) to modify the MNIST dataset. They suggest analysing the performance of generative models based on these tools. By extracting such features, they effectively decrease the dimension of the data. Therefore, statistically comparing the distribution of generated vs. test data and binning the generated data are now possible. They perform a thorough study on MNIST. Their tools are a handy addition to analytical surveys in several applications (e.g. how classification fails), but less convincingly so for generation. Since their method is manually designed for MNIST, the manuscript would benefit from a justification or discussion of the common pitfalls and of the correlation between MNIST generation and more complex natural image generation tasks. Since the presented metrics do not show a significant difference between the VAE and the vanilla GAN model, the question remains whether evaluating on MNIST is a good proxy for the performance of a model on colored images with backgrounds. For example, sharpness and attention to detail are typically not a challenge in MNIST generation, whereas in other datasets they are usually the first challenge to be addressed. I'm not convinced that a model's ability to disentangle thickness correlates with its ability in natural image generation.
ICLR
Title Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning Abstract Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery. However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations. To address this issue, we introduce Morpho-MNIST, a framework that aims to answer: “to what extent has my model learned to represent specific factors of variation in the data?” We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity. We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation. 1 INTRODUCTION A key factor for progress in machine learning has been the availability of well curated, easy-to-use, standardised and sufficiently large annotated datasets for benchmarking different algorithms and models. This has led to major advances in speech recognition, computer vision, and natural language processing. A commonality between these tasks is their natural formulation as supervised learning tasks, wherein performance can be measured in terms of accuracy on a test set. The general problem of representation learning (i.e. to reveal latent structure in data) is more difficult to assess due to the lack of suitable benchmarks. Although the field is very active, with many recently proposed techniques such as probabilistic autoencoders and adversarial learning, it is less clear where the field stands in terms of progress or which approaches are more expressive for specific tasks. The lack of reproducible ways to quantify performance has led to subjective means of evaluation: visualisation techniques have been used to show low-dimensional projections of the latent space, and visual inspection of generated or reconstructed samples is popular for providing subjective measures of descriptiveness. On the other hand, the quality of sampled images generally tells us little about how well the learned representations capture known factors of variation in the training distribution. In order to advance progress, the availability of tools for objective assessment of representation learning methods seems essential yet lacking. This paper introduces Morpho-MNIST, a collection of shape metrics and perturbations, in a step towards quantitative assessment of representation learning. We build upon one of the most popular machine learning benchmarks, MNIST, which despite its shortcomings remains widely used. While MNIST was originally constructed to facilitate research in image classification, in the form of recognising handwritten digits (LeCun et al., 1998), it has found its use in representation learning, for example, to demonstrate that the learned latent space yields clusters consistent with digit labels. Methods aiming to disentangle the latent space claim success if individual latent variables capture specific style variations (e.g. stroke thickness, sidewards leaning digits and other visual characteristics). 
The main appeal of selecting MNIST as a benchmark for representation learning is that, while manifesting complex interactions between pixel intensities and underlying shapes, it has well-understood and easily measurable factors of variation. More generally, MNIST remains popular in practice due to several factors: it allows reproducible comparisons with previous results reported in the literature; the dataset is sufficiently large for its complexity and consists of small, two-dimensional greyscale images defining a tractable ten-class classification problem; computation and memory requirements are low; most popular deep learning frameworks and libraries offer tutorials using MNIST, which makes it straightforward for new researchers to enter the field and to experiment with new ideas and explore the latest developments. We take advantage of these qualities and extend MNIST in multiple ways, as summarised in the following. 1.1 CONTRIBUTIONS Our aim is to bridge the gap between methodology-focused research and critical real-world applications that could benefit from latest machine learning methods. As we preserve the general properties of MNIST—such as image size, file format, numbers of training and test images, and the original ten-class classification problem—we believe this new quantitative framework for assessing representation learning will experience widespread use in the community and may inspire further extensions facilitated by a publicly available Morpho-MNIST code base. Morphometrics: We propose to describe true and generated digit images in terms of measurable shape attributes. These include stroke thickness and length, and the width, height, and slant of digits (cf. Fig. 1, left). Whereas some of these properties have been analysed qualitatively in previous work, we demonstrate that objectively quantifying each of them allows us to identify the role of inferred representations. Moreover, these tools can be used to measure model samples, enabling assessment of generative performance with respect to sample diversity (Section 4.1) and disentanglement of latent variables (Section 4.2). These measurements can be directly employed to re-evaluate existing models and may be added retrospectively to previous experiments involving the original MNIST dataset. Adoption of our morphometric analysis may provide new insights into the effectiveness of representation learning methods in terms of revealing meaningful latent structures. Furthermore, for other datasets it suffices to design the relevant scalar metrics and include them in the very same evaluation framework. Perturbations: We introduce a set of parametrisable global and local perturbations, inspired by natural and pathological variability in medical images. Global changes involve overall thinning and thickening of digits, while local changes include both swelling and fractures (see examples on the right in Fig. 1 and many more in Appendix B). Injecting these perturbations into the dataset adds a new type of complexity to the data manifold and opens up a variety of interesting applications. The proposed perturbations are designed to enable a wide range of new studies and applications for both supervised and unsupervised tasks. Detection of ‘abnormalities’ (i.e. local perturbations) is an evident application, although more challenging tasks can also be defined, such as classification from noisy/corrupted data, domain adaptation, localisation of perturbations, characterising semantics of learned latent representations, and more. 
We explore a few supplementary examples of supervised tasks in Appendix D. 1.2 RELATED WORK: DATASETS In this section, we provide an overview of some datasets that are related to MNIST, by either sharing its original source content, containing transformations of the original MNIST images or being distributed in the same format for easy replacement. We also mention a few prevalent datasets of images with generative factor annotations, akin to the morphometrics proposed in this paper. NIST datasets: The MNIST (modified NIST) dataset (LeCun et al., 1998) was constructed from handwritten digits in NIST Special Databases 1 and 3, now released as Special Database 19 (Grother and Hanaoka, 2016). Cohen et al. (2017) generated a much larger dataset based on the same NIST database, containing additional upper- and lower-case letters, called EMNIST (extended MNIST). MNIST perturbations: The seminal paper by LeCun et al. (1998) employed data augmentation using planar affine transformations including translation, scaling, squeezing, and shearing. Loosli et al. (2007) employed random elastic deformations to construct the Infinite MNIST dataset. Other MNIST variations include rotations and insertion of random and structured background (Larochelle et al., 2007), and Tieleman (2013) applied spatial affine transformations and provided ground-truth transformation parameters. MNIST format: Due to the ubiquity of the MNIST dataset in machine learning research and the resulting multitude of compatible model architectures available, it is appealing to release new datasets in the same format (28×28, 8-bit grayscale images). One such effort is Fashion-MNIST (Xiao et al., 2017), containing images of clothing articles from ten distinct classes, adapted from an online shopping catalogue. Another example is notMNIST (Bulatov, 2011), a dataset of character glyphs for letters ‘A’–‘J’ (also ten classes), in a challengingly diverse collection of typefaces. Annotated datasets: Computer vision datasets that are popular for evaluating disentanglement of learned latent factors of variation include those from Paysan et al. (2009) and Aubry et al. (2014). They contain 2D renderings of 3D faces and chairs, respectively, with ground-truth pose parameters (azimuth, elevation) and lighting conditions (faces only). A further initiative in that direction is the dSprites dataset (Matthey et al., 2017), which consists of binary images containing three types of shapes with varying location, orientation and size. The availability of the ground-truth values of such attributes has motivated the accelerated adoption of these datasets in the evaluation of representation learning algorithms. 1.3 RELATED WORK: QUANTITATIVE EVALUATION Evaluation of representation learning is a notoriously challenging task and remains an open research problem. Numerous solutions have been proposed, with many of the earlier ones focusing on the test log-likelihood under the model (Kingma and Welling, 2013) or, for likelihood-free models, under a kernel density estimate (KDE) of generated samples (Goodfellow et al., 2014; Makhzani et al., 2015)—which were shown not to be reliable proxies for the true model likelihood (Theis et al., 2016). Another perspective for evaluation of generative models of images is the visual fidelity of their samples to the training data, which would normally require manual inspection. To address this issue, a successful family of metrics has been proposed, based on visual features extracted by the Inception network (Szegedy et al., 2016). 
The original Inception score (Salimans et al., 2016) relies on the ‘crispness’ of class predictions, whereas the Fréchet Inception distance (FID) (Heusel et al., 2017) and the kernel Inception distance (KID) (Bińkowski et al., 2018) statistically compare high-level representations instead of the final network outputs. Although the approaches above can reveal vague signs of mode collapse, it may be useful to diagnose this phenomenon on its own. With this objective, Arora et al. (2018) proposed to estimate the support of the learned distribution (assumed discrete) using the birthday paradox test, by counting pairs of visual duplicates among model samples. Unfortunately, the adoption of this technique is hindered by its reliance on manual visual inspection to flag identical images. There have been several attempts at quantifying representation disentanglement performance. For example, Higgins et al. (2017) proposed to use the accuracy of a simple classifier trained to predict which factor of variation was held fixed in a simulated dataset. There exist further information-theoretic approaches, involving the KL divergence contribution from each latent dimension (Dupont, 2018) or their mutual information with each known generative factor (Chen et al., 2018). Yet another method, explored in Kumar et al. (2018), is based on the predictive accuracy of each latent variable for each generative factor (continuous or discrete). 2 MORPHOMETRY Meaningful morphometrics are instrumental in characterising distributions of rasterised shapes, such as MNIST digits, and can be useful as additional data for downstream learning tasks. We begin this section by describing the image processing pipeline employed for extracting the metrics and for applying perturbations (Section 3), followed by details on the computation of each measurement. 2.1 PROCESSING PIPELINE The original 28×28 resolution of the MNIST images is generally not high enough to enable satisfactory morphological processing: stroke properties (e.g. length, thickness) measured directly on the binarised images would likely be inaccurate and heavily quantised. To mitigate this issue and enable sub-pixel accuracy in the measurements, we propose to use the following processing steps:
1. upscale (e.g. ×4, to 112×112; see Footnote 1);
2. binarise (e.g. threshold ≥ 128);
3. compute the Euclidean distance transform (EDT) from boundaries;
4. skeletonise (medial axis, i.e. ridges of the EDT);
5. apply perturbation (cf. Section 3); and
6. downscale to the original resolution.
We illustrate the pipeline in Fig. 2. The binary high-resolution digits have smooth boundaries and faithfully capture subtle variations in contour shape and stroke thickness that are only vaguely discernible in the low-resolution images. Additionally, note how the final downscaled image is almost indistinguishable from the original. All morphometric attributes described below are calculated for each digit after applying steps 1–4 of this pipeline. The distributions for the plain MNIST training set are plotted in Fig. 3, and the distributions after applying each type of perturbation can be found in Appendix A.
Footnote 1: Up- and downscaling by a factor of f are done with bicubic interpolation and Gaussian smoothing (bandwidth σ = 2f/6), following scikit-image defaults (van der Walt et al., 2014).
2.2 STROKE LENGTH Here we approximate the trace of the pen tip, as a digit was being written, by the computed morphological skeleton. 
In this light, the total length of the skeleton is an estimate of the length of the pen stroke, which in turn is a measure of shape complexity. It can be computed in a single pass by accumulating the Euclidean distance of each skeleton pixel to its immediate neighbours, taking care to only count the individual contributions once. This approach is more robust against rotations than a naïve estimate obtained by simply counting the pixels.

2.3 STROKE THICKNESS

A prominent factor of style variation in the MNIST digits is the overall thickness of the strokes, due to both legitimate differences in pen thickness and force applied, and also to the rescaling of the original NIST images by different factors. We estimate it by exploiting the computed distance transform. By virtue of how the image skeleton is computed, its pixels are approximately equidistant to the nearest boundaries; therefore, we take twice the mean value of the EDT over all skeleton pixels as our global estimate.

2.4 SLANT

The extent by which handwritten symbols lean right or left (forward and backward slant, respectively) is a further well-known and quantifiable dimension of handwriting style. It introduces so much variation in the appearance of characters in images that it is common practice in OCR systems to ‘deslant’ them, in an attempt to reduce within-class variance (LeCun et al., 1998; Teow and Loe, 2002). We adapt the referred deslanting methodology to describe the slant angle of the handwritten digits. After estimating the second-order image moments, we define the slant based on the horizontal shear:

α = arctan( −(Σ_{i,j} x_{ij}(i − ī)(j − j̄)) / (Σ_{i,j} x_{ij}(i − ī)²) ),  (1)

where x_{ij} is the intensity of pixel (i, j), and (ī, j̄) are the centroid coordinates. The minus sign ensures that positive and negative values correspond to forward and backward slant, respectively.

2.5 WIDTH AND HEIGHT

It is useful to measure other general shape attributes, such as width, height, and aspect ratio, which also present substantial variation related to personal handwriting style.² To this end, we propose to fit a bounding parallelogram to each digit, with horizontal and slanted sides (cf. Fig. 1). We sweep the image top-to-bottom with a horizontal boundary to compute a vertical marginal cumulative distribution function (CDF), and likewise left-to-right with a slanted boundary for a horizontal marginal CDF, with angle α as computed above. The bounds are then chosen based on equal-tailed intervals containing a given proportion of the image mass: 98% in both directions (1% from each side) proved accurate and robust in our experiments.

3 PERTURBATIONS

As discussed in Section 1, we bring forward a number of morphological perturbations for MNIST digits, to enable interesting applications and experimentation. In this section, we detail these parametrisable transformations, categorised as global or local.

²Note that little variation in height is expected, since the original handwritten digits were scaled to fit a 20×20 box (LeCun et al., 1998). Nevertheless, a minority of digits were originally wider than they were tall, which explains the long tails in the distribution of heights (Fig. 3).

3.1 GLOBAL: THINNING AND THICKENING

The first pair of transformations we present is based on simple morphological operations: the binarised image of a digit is dilated or eroded with a circular structuring element.
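A minimal sketch of these operations follows; the helper name, the `rel_change` parameter and the exact radius rounding are our assumptions, with the actual radius rule described just below:

```python
from skimage.morphology import binary_dilation, binary_erosion, disk

def thin_or_thicken(binary_hi, thickness, rel_change=1.0, thicken=True):
    """Dilate (thicken) or erode (thin) a high-resolution binary digit with a
    circular structuring element whose radius follows the stroke thickness."""
    radius = max(1, int(round(rel_change * thickness / 2)))  # assumed radius rule
    footprint = disk(radius)
    op = binary_dilation if thicken else binary_erosion
    return op(binary_hi, footprint)
```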
Its radius is set proportionally to the estimated stroke thickness (Section 2.3), so that the overall thickness of each digit will decrease or increase by an approximately fixed factor (here, −70% and +100%; see Figs. B.1 and B.2). Since there is substantial thickness variability in the original MNIST data (cf. Fig. 3) and most thinned and thickened digits look very plausible, we believe that these perturbations can constitute a powerful form of data augmentation for training. For the same reason, we have not included these perturbations in the abnormality detection experiments (Appendix D).

3.2 LOCAL: SWELLING

In addition to the global transformations above, we introduce local perturbations with variable location and extent, which are harder to detect automatically. Given a radius R, a centre location r₀ and a strength parameter γ > 1, the coordinates r of pixels within distance R of r₀ are nonlinearly warped according to a radial power transform:

r ↦ r₀ + (r − r₀)(‖r − r₀‖ / R)^{γ−1},  (2)

leaving the remaining portions of the image untouched and resampling with bicubic interpolation. In the experiments and released dataset, we set γ = 7 and R = 3√θ/2, where θ is thickness. Unlike simple linear scaling with θ, this choice for R produces noticeable but not exaggerated effects across the thickness range observed in the dataset (cf. Fig. B.3). The centre location, r₀, is picked uniformly at random from the pixels along the estimated skeleton.

3.3 LOCAL: FRACTURES

We describe the proposed procedure for adding fractures to an MNIST digit, where we define a fracture as a break in the continuity of a pen stroke. Because single fractures can in many cases be easily mistaken for true gaps between strokes, we add multiple fractures to each affected digit. When selecting the location for a fracture, we attempt to avoid getting too close to stroke tips (points on the skeleton with a single neighbour) or fork points (more than two neighbours). This is achieved by sampling only among those skeleton pixels above a certain distance to these detected points. In addition, we would like fractures to be transversal to the pen strokes. Local orientation is determined based on second-order moments of the skeleton inside a window centred at the chosen location, and the length of the fracture is estimated from the boundary EDT. Finally, the fracture is drawn onto the high-resolution binary image with a circular brush along the estimated normal. In practice, we found that adding three fractures with 1.5 px thickness, 2 px minimum distance to tips and forks and an angle window of 5×5 px² (‘px’ as measured in the low resolution image) produces detectable but not too obvious perturbations (see Fig. B.4). We also extend the lines on both ends by 0.5 px to add some tolerance.

4 EVALUATION CASE STUDIES

In this section, we demonstrate potential uses of the proposed framework: using morphometrics to characterise the distribution of samples from generative models and finding associations between learned latent representations and morphometric attributes. In addition, we exemplify in Appendix D a variety of supervised tasks on the MNIST dataset augmented with perturbations.

4.1 SAMPLE DIVERSITY

Here we aim to illustrate ways in which the proposed MNIST morphometrics may be used to visualise distributions learned by generative models and to quantify their agreement with the true data distribution in terms of these semantic attributes.
We also believe that extracting such measurements from model samples is a step toward diagnosing the issue of mode collapse. We exemplify this scenario with a vanilla GAN (Goodfellow et al., 2014) and a β-VAE (Higgins et al., 2017), both with generator (resp. decoder) and discriminator architecture as used in the MNIST experiments in Chen et al. (2016), and with an encoder mirroring the decoder. We train a β-VAE with β = 4 and a GAN, both with 64-dimensional latent space. To explore the behaviour of a much less expressive model, we additionally train a GAN with only two latent dimensions.

Visualisation: Figure 4 illustrates the morphometric distributions of the plain MNIST test images and of 10,000 samples from each of these three models. As can be seen, morphometrics provide interpretable low-dimensional statistics which allow comparing distributions learned by generative models with each other and with true datasets. While Figs. 4b and 4c show model samples roughly as diverse as the true images, the samples from the low-dimensional GAN in Fig. 4d seem concentrated on certain regions, covering a distribution that is less faithful to the true one in Fig. 4a.

Statistical comparison: We argue that in this lower-dimensional space of morphometrics it is possible to statistically compare the distributions, whereas this was shown not to be effective directly in image space (e.g. Theis et al., 2016). To this end, we propose to use kernel two-sample tests based on maximum mean discrepancy (MMD) between morphometrics of the test data and of each of the sample distributions. Here, we performed the linear-time asymptotic test described in Gretton et al. (2012, §6) (details and further considerations in Appendix C). The test results in Table 1 seem to confirm the mismatch of the low-dimensional GAN's samples, whereas the β-VAE and larger GAN do not show a significant departure from the data distribution.

Finding replicas: One potentially fruitful suggestion would be to use a variant of hierarchical agglomerative clustering on sample morphometric attributes (e.g. using standardised Euclidean distance, or other suitable metrics). With a low enough distance threshold, it would be possible to identify groups of near-replicas, the abundance of which would signify mode collapse. Alternatively, this could be applicable as a heuristic in the birthday paradox test for estimating the support of the learned distribution (Arora et al., 2018).

4.2 DISENTANGLEMENT

In this experiment, we demonstrate that: (a) standard MNIST can be augmented with morphometric attributes to quantitatively study representations computed by an inference model (as already possible with e.g. dSprites and 3D faces); (b) we can measure shape attributes of samples to assess disentanglement of a generative model, which is unprecedented to the best of our knowledge; and (c) this analysis can also diagnose when a model unexpectedly fails to learn a known aspect of the data.

Methodology: We take MAP estimates of latent codes for each image (i.e. maximal logit for categorical codes and mean for continuous codes), as predicted by the variational recognition network. Using an approach related to the disentanglement measure introduced in Kumar et al. (2018), we study the correlation structures between known generative factors and latent codes learned by an InfoGAN.
Specifically, we compute the partial correlation between each latent code variable and each morphometric attribute, controlling for the variation in the remaining latent variables (disregarding the noise vector).³ As opposed to the simple correlation, this technique allows us to study the net first-order effect of each latent code, all else being equal. Models were trained for 20 epochs using 64 images per batch, with no hyperparameter tuning. We emphasize that our goal was to illustrate how the proposed morphometrics can serve as tools to better understand whether such models behave as intended, and not to optimally train the models in each scenario.

³For the categorical code, c1, we take a single binary dummy variable for each category, c1^(k), while controlling only for the remaining codes (c2, c3, etc.) to avoid multicollinearity.

Inferential disentanglement: To illustrate how this methodology can be applied in practice to assess disentanglement, we consider two settings. The first is the same as in the MNIST experiment from Chen et al. (2016), with a 10-way categorical and two continuous latent codes, trained and evaluated on the plain MNIST digits, which we will refer to as INFOGAN-A. The second setting was designed to investigate whether the model could disentangle the concept of thickness, by including an additional continuous latent code and training on a dataset with exaggerated thickness variations. We constructed this dataset by randomly interleaving plain, thinned and thickened digit images in equal proportions. Since the perturbations were applied completely at random, we expect a trained generative model to identify that thickness should be largely independent of the other morphological attributes. We refer to this set-up as INFOGAN-B. Table 2 summarises the different experimental settings, for reference.

In Fig. 5a, we see that INFOGAN-A learned to encode slant mostly in c3, while c1^(8) clearly relates to the ‘1’ class (much narrower digit shape and shorter pen stroke; cf. Fig. 3). Figure 5b quantitatively confirms the hypothesis that INFOGAN-B's recognition network would learn to separate slant and thickness (in c4 and c3, resp.), the most prominent factors of style variation in this dataset. Interestingly, it shows that c3 also associates with height, as thicker digits tend to be taller.

Generative disentanglement: The evaluation methodology described above is useful to investigate the behaviour of the inference direction of a model, and can readily be used with datasets which include ground-truth generative factor annotations. On the other hand, unless we trust that the inference approximation is highly accurate, this tells us little about the generative expressiveness of the model. This is where computed metrics truly show their potential: we can measure generated samples, and see how their attributes relate to the latent variables used to create them. Figure 6 shows results for a similar analysis to Fig. 5, but now evaluated on samples from that model. As the tables are mostly indistinguishable, we may argue that in this case the inference and generator networks have learned to consistently encode and decode the digit shape attributes. As further illustration, Fig. 7 displays traversals of the latent space, obtained by varying a subset of the latent variables while holding the remaining ones (including noise) constant. With these examples, we are able to qualitatively verify the quantitative results in Fig. 6.
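As a sketch of the partial-correlation computation used throughout this section (a straightforward residual-based implementation; variable names are ours):

```python
import numpy as np

def partial_corr(x, y, Z):
    """Partial correlation of x and y controlling for the covariates in Z,
    computed as the correlation of the residuals after regressing out Z."""
    Z1 = np.column_stack([np.ones(len(x)), Z])            # design matrix with intercept
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]   # residual of x given Z
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]   # residual of y given Z
    return np.corrcoef(rx, ry)[0, 1]
```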
Note that, until now, visual inspection was typically the only means of evaluating disentanglement and expressiveness of the generative direction of image models (e.g. Chen et al., 2016; Dupont, 2018).

Diagnosing failure: We also attempted to detect whether an InfoGAN had learned to discover local perturbations (swelling and fractures). To this end, we extended the model formulation with additional Bernoulli latent codes, which would hopefully learn to encode presence/absence of each type of local perturbation. The model investigated here, dubbed INFOGAN-C (cf. Table 2), had a 10-way categorical, two continuous and two binary codes, and was trained with a dataset of plain, swollen and fractured digits (randomly mixed as above). Again via inferential partial correlation analysis, now including ground-truth perturbation annotations, we can quantitatively verify that this particular model instance was unable to meaningfully capture the perturbations (Fig. 8, bottom-right block). In fact, it appears that the addition of the binary variables did not lead to more expressive representations in this case, even impairing the disentanglement of the categorical variables, if compared to Figs. 5a and 5b, for example.

5 CONCLUSION

With Morpho-MNIST we provide a number of mechanisms to quantitatively assess representation learning with respect to measurable factors of variation in the data. We believe that this is an important asset for future research on generative models, and we would like to emphasize that the proposed morphometrics can be used post hoc to evaluate already trained models, potentially revealing novel insights and interesting observations. A similar morphometry approach could be used with other datasets such as dSprites, e.g. estimating shape location and size, or the number of objects/connected components. Perhaps some generic image metrics may be useful for analysis on other datasets, e.g. relating to sharpness or colour diversity, or we could even consider using the output of object detectors (analogously to the Inception-based scores; e.g. number/class of objects, bounding boxes etc.). In future work we plan to include additional perturbations, for example mimicking imaging artefacts commonly observed in medical imaging modalities, to add further complexity and realism.

A MORPHOMETRICS OF PLAIN AND PERTURBED DATASETS

B PERTURBATION EXAMPLES

C MMD DETAILS

We employed a Gaussian product kernel with bandwidths derived from Scott's rule, analogously to the KDE plots in Fig. 4. Scott's rule of thumb defines the bandwidth for a density estimation kernel as N^{−1/(D+4)} times the standard deviation in each dimension, where N and D denote sample size and number of dimensions (Scott, 1992, Eq. (6.42)). We determine the KDE bandwidths separately for real and sample data, then add their squares to obtain the squared bandwidth of the MMD's Gaussian kernel, as it corresponds to the convolution of the density estimation kernels chosen for each set of data. See Gretton et al. (2012, §3.3.1) for further details on the relation between MMD and the L2 distance of kernel density estimates. Whereas the bandwidth heuristic used here is fairly crude, much more sophisticated kernel selection procedures are available, e.g. by explicitly optimising the test power (Sutherland et al., 2017). A further analysis tool in a similar vein would be to apply a relative MMD similarity test (Bounliphone et al., 2016), to rank trained models based on sample fidelity.
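To make the bandwidth recipe concrete, here is a small sketch (ours; for brevity it shows the biased quadratic-time MMD² estimate rather than the linear-time test actually used in the experiments):

```python
import numpy as np

def scott_bandwidths(X):
    """Scott's rule: N^(-1/(D+4)) times the standard deviation in each dimension."""
    n, d = X.shape
    return X.std(axis=0, ddof=1) * n ** (-1.0 / (d + 4))

def gaussian_mmd2(X, Y):
    """Biased MMD^2 with a Gaussian product kernel whose squared bandwidth is the
    sum of the squared per-sample Scott bandwidths, as described above."""
    bw2 = scott_bandwidths(X) ** 2 + scott_bandwidths(Y) ** 2

    def k(A, B):
        # Product Gaussian kernel with per-dimension bandwidths.
        sq = ((A[:, None, :] - B[None, :, :]) ** 2 / bw2).sum(axis=2)
        return np.exp(-0.5 * sq)

    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```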
It would also be possible to adopt a model criticism methodology based on the MMD witness function (Lloyd and Ghahramani, 2015), to identify over- and under-represented regions in morphometric space (and corresponding generated image exemplars could be inspected as well).

D SUPERVISED TASKS

Although the driving motivation for introducing Morpho-MNIST has been the lack of means for quantitative evaluation of generative models, the proposed framework may also be a valuable resource in the context of supervised learning. We conducted several experiments to demonstrate potential applications of these datasets with increased difficulty due to the injected perturbations: standard digit recognition, supervised abnormality detection, and thickness regression. Note that such experiments can later serve as baselines for unsupervised tasks such as outlier detection and domain adaptation. We evaluated four different models: k-nearest-neighbours (kNN) using k = 5 neighbours and ℓ1 distance weighting, a support vector machine (SVM) with polynomial kernel and penalty parameter C = 100, a multi-layer perceptron (MLP) with 784–200–200–L architecture (L: number of outputs), and a LeNet-5 convolutional neural network (LeCun et al., 1998). Here, we use the same datasets as in the disentanglement experiments (Section 4.2): plain digits (PLAIN), plain mixed with thinned and thickened digits (GLOBAL), and plain mixed with swollen and fractured digits (LOCAL).

For digit recognition, each model is trained once on PLAIN, then tested on both PLAIN and LOCAL test datasets, to investigate the effect of domain shift. All methods suffer a drop in test accuracy on LOCAL (Table 3, first two columns). kNN appears to be the most robust to the local perturbations, perhaps because they affect only a few pixels, leaving the image distance between neighbours largely unchanged. On the other hand, local patterns that LeNet-5 relies on may have changed considerably.

The abnormality detection task is, using the LOCAL dataset, to predict whether a digit is normal or perturbed (swollen or fractured); compare with lesion detection in medical scans. Table 3 (third column) indicates that LeNet-5 is able to detect abnormalities with high accuracy, likely thanks to local invariances of its convolutional architecture. Note that all scores (especially the simpler models') are lower than digit classification accuracy, revealing the (possibly surprising) higher difficulty of this binary classification problem compared to the ten-class digit classification.

Finally, we also constructed a regression task for digit thickness using the GLOBAL dataset, mimicking medical imaging tasks such as estimating brain age from cortical grey matter maps. Since this is a non-trivial task, requiring some awareness of local geometry, it is perhaps unsurprising that the convolutional model outperformed the others, which rely on holistic features (Table 3, last column).
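For reference, three of the four baselines can be assembled in a few lines of scikit-learn; hyperparameters are taken from the description above, LeNet-5 is omitted as it requires a deep learning framework, and the exact reading of "ℓ1 distance weighting" is our assumption:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def build_baselines():
    """kNN, SVM and MLP roughly as configured in Appendix D."""
    return {
        "kNN": KNeighborsClassifier(n_neighbors=5, weights="distance", p=1),
        "SVM": SVC(kernel="poly", C=100),
        "MLP": MLPClassifier(hidden_layer_sizes=(200, 200)),  # 784-200-200-L
    }
```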
1. What is the focus and contribution of the paper on semantic correspondence?
2. What are the strengths of the proposed approach, particularly in terms of neural representation?
3. What are the weaknesses of the paper, especially for the experiment section?
4. Do you have any concerns about the semantic correspondence representation?
5. What are the limitations of the NeMF approach?

Review:

Summary Of The Paper
This paper introduces Neural Matching Fields (NeMF) for semantic correspondence. To the best of my knowledge, this approach should be the first method to tackle the task using implicit neural representation. There are two problems: the computation of the 4D matching field and inference efficiency. The authors provide effective methods to address these problems.

Strengths And Weaknesses
This paper employs implicit neural representation for semantic correspondence. This should be the major contribution. Based on the authors' statements, I can follow the idea easily and this idea should work. The weakness of this work is the experiments. There are too many quantitative comparisons. Judging by the numbers, the performance of this method seems OK. However, the authors should provide more visual experiments to convince readers.

Questions
I only have one concern. Traditional implicit neural representation methods such as LIIF and NeRF record images in the weights of a neural network. One neural network represents one image or one scene. Does NeMF take a neural network to represent a semantic correspondence or a matching cost? If so, how much time does your method take to train a network? If not, what is the difference between your method and other semantic correspondence methods?

Limitations
According to my understanding, NeMF takes a network to represent a matching cost. In practice, people need a method to compute different matching costs for different image pairs. How does NeMF deal with this situation?

Questions to address:
1. What is the focus and contribution of the paper on semantic correspondence?
2. What are the strengths of the proposed approach, particularly in terms of neural representation?
3. What are the weaknesses of the paper, especially for the experiment section?
4. Do you have any concerns about the semantic correspondence representation?
5. What are the limitations of the NeMF approach?
Review
Review
The authors proposed an extended version of MNIST where they introduce thickening, thinning, swelling and fracture perturbations. The operations are done using binary morphological operations.
* Providing benchmark data for tasks such as disentanglement is important, but I am not sure that generating data is a sufficient contribution for a paper.
* I am not sure what conclusion I should draw from Fig 5 and Fig 6 about the data.
* Eventually this data can become benchmark data when it is paired with a method; then that method/data pair forms a benchmark.
ICLR
Title
Adversarial reading networks for machine comprehension

Abstract
Machine reading has recently shown remarkable progress thanks to differentiable reasoning models. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural language based reasoning tasks such as factual reasoning and basic deduction. However, the task of machine comprehension is currently bounded to a supervised setting and to the available question answering datasets. In this paper we explore the paradigm of adversarial learning and self-play for the task of machine reading comprehension. Inspired by successful propositions in the domain of game learning, we present a novel approach of training for this task that is based on the definition of a coupled attention-based memory model. On one hand, a reader network is in charge of finding answers regarding a passage of text and a question. On the other hand, a narrator network is in charge of obfuscating spans of text in order to minimize the probability of success of the reader. We evaluated the model on several question-answering corpora. The proposed learning paradigm and associated models present encouraging results.

1 INTRODUCTION

Automatic comprehension of text is one of the main goals of natural language processing. While the ability of a machine to understand text can be assessed in many different ways, several benchmark datasets have recently been created to focus on answering questions as a way to evaluate machine comprehension (Richardson et al., 2013); (Hermann et al., 2015); (Hill et al., 2015a); (Weston et al., 2015); (Rajpurkar et al., 2016); (Nguyen et al., 2016). In this setup, the machine is presented with a piece of text such as a news article or a story. Then, the machine is expected to answer one or multiple questions related to the text. The task is linked to several important outcomes. First, it provides tools that will shortly help users with efficient access to large amounts of information. Also, it acts as an important proxy task to assess models of natural language understanding and reasoning. In this context, numerous large-scale machine comprehension/QA datasets (Hermann et al., 2015); (Rajpurkar et al., 2016); (Trischler et al., 2016a); (Nguyen et al., 2016) have been recently released and have contributed to significant advancement. From a model perspective, neural models are now approaching human parity on some of these benchmarks, and a large corpus of novel and promising research has been produced in the domain of attention, memory and parametric models with so-called reasoning capabilities. However, the field is currently bounded to the paradigm of supervised learning and strictly linked to the currently available annotated datasets. In contrast, an increasing research activity has been dedicated since the 1990s to self-play and adversarial training to overcome this boundary and allow a model to exploit its own decisions to improve itself. Two famous examples are related to policy learning in games. Indeed, TD-Gammon (Tesauro, 1995) was a neural network controller for backgammon which achieved near top player performance using self-play as its learning paradigm. More recently, DeepMind's AlphaGo used the same paradigm to win against the world's best human Go player. The major advantage of such a setting is that it partially frees the learning procedure from the limits of an available dataset. The dual models learn and improve their performance by acting one against the other as so-called sparring partners.
In this paper, we adapt this paradigm to the domain of machine reading. On the one hand, a reader network is trained to answer questions regarding a passage of text. On the other hand, a narrator network learns to obfuscate words of a given passage in order to minimize the probability of successful answering by the reader model. We developed a sequential learning protocol in order to gradually improve the quality of the models. This paradigm differs from the current research direction of joint question and answer learning from text as proposed in Wang et al. (2017). Indeed, while question generation as a regularizer of a reader model sounds promising, we believe adversarial training frees the model from the constraint of strict and bounded supervision and brings robustness to the answering model. Our contributions can be summarized as follows: (1) we propose a new learning paradigm for machine comprehension based on adversarial training; (2) we show, through a set of experiments on several machine reading corpora, that this methodology allows us to overcome the boundaries of strict supervision and provides robustness to noise in question-answering settings; and (3) visualizations of the models reveal some useful insights into the attention mechanism used for reasoning over the questions and extracting meaningful passages of a text given a question.

Roadmap: In Section 2, we formalize our adversarial learning protocol and present the reader and narrator networks. In Section 3 the corpora used for evaluation are detailed. Section 4 presents our current experimental results. Section 5 details several visualizations of the decisions and attention values computed by the coupled models. Finally, Section 6 reviews the state of the art in machine reading comprehension, Memory Network models, and the paradigm of self-play and its links to adversarial learning.

2 ADVERSARIAL READING NETWORKS

Several studies have recently challenged deep machine reading models with adversarial examples, such as Miyato et al. (2016) and Jia & Liang (2017). This kind of approach is well known in computer vision (Goodfellow et al., 2014) but seems to also affect natural language processing. More precisely, Jia & Liang (2017) demonstrate that a large majority of recent state-of-the-art deep machine reading models suffer from a lack of robustness to adversarial examples because of their so-called over-sensitivity. Indeed, average accuracies were decreased by half when these models were tested on corrupted data, i.e. a document with an additional sentence at the end which normally does not affect the answer. The model we propose is built to use this adversariality as an adaptive dropout, by challenging the reader with more and more difficult tasks during learning. Indeed, we extend the concept of asymmetric self-play to train a model that we call the narrator through an adversarial game with a reader. The narrator acquires knowledge about the reader's behaviour during training and generates increasingly hard adversarial examples. Beyond artificially increasing the size of the available dataset, this adaptive behaviour of the narrator prevents catastrophic forgetting in the reader. In this section, we explain the protocol of adversarial training we developed for robust machine comprehension. Then, we describe the reader and narrator models used.

2.1 MAIN LEARNING PROTOCOL

The overall framework is a turn-based question answering game described in Figure 1.
At the beginning of each round, the narrator obfuscates one word for each document sampled from the training corpus. We fix the fraction of corrupted data to λ ∈ [0, 1] of the dataset. Indeed, too low a percentage of corrupted data might not have any effect on the training, and too high a percentage will prevent the reader from learning well. Then, the reader is trained on a subset of this obfuscated corpus and tested on the remaining subset. Note that both train and test sets contain corrupted data. Finally, the narrator receives a set of rewards based on the reader's performance on the obfuscated stories. Given a tuple (d, d_obf, q), where d is the original document, d_obf the document with an obfuscated word proposed by the narrator, and q the associated question, the reward r given to the narrator is defined as follows:

r = { 1 if the reader answers correctly on d and fails on d_obf
      0 otherwise

The reward given to the narrator is a direct measurement of the impact of the obfuscation on the reader's performance. All the previously collected rewards are stored and used for experience replay throughout the turns. After each learning turn, all the parameters of the narrator are reinitialized and retrained on all the recorded rewards. Throughout the turns, the narrator accumulates information about the reader's behaviour and proposes more challenging tasks as the game goes on. Each narrator dataset is chosen to maximize the narrator's expected rewards for 80% of the stories, while a word is randomly obfuscated in the remaining 20% in order to ensure exploration. Finally, the reader keeps improving through the turns, and any catastrophic forgetting is compensated for at the next turn, as the narrator especially focuses on these flaws.

Algorithm 1: Pseudo-code of the adversarial training

    Split dataset into 3 pieces: (A) train (80%), (B) valid (10%) and (C) test (10%)
    Create D, an empty dataset
    epoch ← 0
    while epoch < NB_MAX_EPOCHS do
        Split A into A1 (80%) and A2 (20%)
        if epoch = 0 then
            Randomly corrupt 20% of A1 and 100% of A2
        else
            Reinitialize all the parameters of the narrator
            Train the narrator on D
            The narrator corrupts 20% of A1 and 100% of A2
        end if
        Train one epoch of the reader on A1
        Let A2_clear be the dataset that contains the same data as A2 but without corruption
        Test the reader on A2 and on A2_clear
        for all (d ∈ A2, d_clear ∈ A2_clear) do
            Let r be the reward given to the narrator
            if the reader succeeds on d_clear and fails on d then
                D ← D ∪ {(d, r = 1)}
            else if the reader succeeds on d_clear and succeeds on d then
                D ← D ∪ {(d, r = 0)}
            end if
        end for
        Test the reader on B and decide whether to early stop
        epoch ← epoch + 1
    end while
    Test the reader on C and report the results

Finally, let â be the predicted distribution and a the ground truth. The categorical cross-entropy L_Reader = −Σ_{i=1}^{N} Σ_{j=1}^{v} a_{ij} log(â_{ij}) is the loss function for the reader network, as the model's decision is a distribution over a vocabulary. The binary cross-entropy L_Narrator = −Σ_{i=1}^{N} [a_i log(â_i) + (1 − a_i) log(1 − â_i)] is used as the loss function for the narrator network.

2.2 BASELINE PROTOCOL

As a reference protocol, one word is obfuscated in several stories of the dataset using a uniform sampling strategy. This is a naive variation of the first protocol in which the narrator does not learn from the reader's feedback. In fact, this protocol is similar to a dropout regularization that helps avoid overfitting the training set.
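A minimal sketch of this uniform baseline, together with the narrator reward of Section 2.1 (function names are ours, not the authors' code):

```python
import random

UNK = "unk"  # obfuscation token used throughout the paper

def uniform_obfuscate(tokens):
    """Baseline narrator: replace one uniformly sampled word with unk."""
    corrupted = list(tokens)
    corrupted[random.randrange(len(corrupted))] = UNK
    return corrupted

def narrator_reward(answer_clear, answer_obf, gold):
    """r = 1 iff the obfuscation flips a correct answer into a wrong one."""
    return int(answer_clear == gold and answer_obf != gold)
```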
However, without the narrator learning of the first protocol, we lose the adaptive dropout and the curriculum learning notion of easier and harder inputs. In practice, this simple adversarial protocol improves the robustness of the results compared to a standard learning protocol. This learning protocol has strong similarities with the one proposed by Maaten et al. (2013).

2.3 READER NETWORK

We use a Gated End-to-End Memory Network (GMemN2N), first introduced by Perez & Liu (2016), as the reader. This architecture is based on two different memory cells and an output prediction. An input memory representation {m_i} and an output representation {c_i} are used to store embedding representations of inputs. Suppose that an input of the model is a tuple (d, q), where d is a document, i.e. a set of sentences {s_i}, and q a query about d. The entire set of sentences is converted into input memory vectors m_i = AΦ(s_i) and output memory vectors c_i = CΦ(s_i) using two embedding matrices A and C. The question q is also embedded, using a third matrix B of the same dimension as A and C: u = BΨ(q), where Φ and Ψ are respectively the sentence embedding function and the question embedding function described in the next paragraph. The input memory is used to compute the relevance of each sentence in its context regarding the question, by computing the inner product of the input memory sentence representation with the query; a softmax is then used to obtain a probability distribution: p_i = softmax(u^T m_i), where softmax(a_i) = e^{a_i} / Σ_{j∈[1,n]} e^{a_j}. The response from the output memory, o = Σ_i p_i c_i, is the sum of the output memory vectors {c_i} weighted by the sentence relevances computed before. A gated mechanism is used to update the value of the controller u:

T^k(u^k) = σ(W_T^k u^k + b_T^k),  u^{k+1} = o^k ⊙ T^k(u^k) + u^k ⊙ (1 − T^k(u^k))  (1)

Finally, assuming we use a model with K hops of memory, the final prediction is â = softmax(W(o^K + u^K)), where W is a matrix of size d × v and v is the number of candidate answers. In this model, we do not use the adjacent or layer-wise weight tying schemes, and all the matrices A^k and B^k of the multiple hops are different.

Text and question representations: To build the sentence representations, we use a 1-dimensional Convolutional Neural Network (CNN) with a list of filter sizes over all the sentences, as proposed in Kim (2014). Let [s_1, ..., s_N] be the vectorial representation of a document with N sentences, where s_i = [w_{i,1}, w_{i,2}, ..., w_{i,n}] is the i-th sentence, which contains n words. Given a convolutional filter F ∈ R^{h×d}, where h is the width of the convolutional window, i.e. the number of words it overlaps, the convolutional layer produces c_{i,j} = f(F ⊙ [Ew_{i,j}, ..., Ew_{i,j+h}] + b), ∀j ∈ [1, n − h], where ⊙ is the element-wise multiplication, f a rectified linear unit (ReLU), b a bias term and E the embedding matrix of size d × V, where V is the vocabulary size and d the word embedding size. Then, a max pooling operator is applied to this vector to extract features. Given a filter F, after a convolution and a max pooling operation, we obtain a feature ĉ_i = max_j(c_{i,j}) from the i-th sentence of the text. Multiple filters with varying sizes are used. Assuming that our model uses N_s different filter sizes and N_f filters per size, we are able to extract N_s × N_f features for one sentence. The final representation of the sentence, Φ(s_i) = [ĉ_{i,F_1}, ĉ_{i,F_2}, ..., ĉ_{i,F_{N_s×N_f}}], is the concatenation of the extracted features from all the filters.
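To make the hop computation concrete, here is a minimal PyTorch sketch of one gated memory hop implementing Eq. (1); tensor shapes and the module name are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMemoryHop(nn.Module):
    """One hop of a GMemN2N: attention read followed by the gated controller update."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)  # W_T^k and b_T^k of Eq. (1)

    def forward(self, u, m, c):
        # u: (batch, dim) controller; m, c: (batch, n_sentences, dim) memories
        p = F.softmax(torch.bmm(m, u.unsqueeze(2)).squeeze(2), dim=1)  # p_i = softmax(u^T m_i)
        o = torch.bmm(p.unsqueeze(1), c).squeeze(1)                    # o = sum_i p_i c_i
        t = torch.sigmoid(self.gate(u))                                # T^k(u^k)
        return o * t + u * (1.0 - t)                                   # u^{k+1}
```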
2.4 NARRATOR NETWORK

The objective of this model is to predict the probability that the reader fails to respond correctly to a question given a document with an obfuscated word. This information will be used by the narrator to determine the position of the obfuscated word in the document which maximizes the probability of the reader failing its task. We use a GMemN2N, similarly to the reader. However, on the last layer a sigmoid function is used to predict the probability of the reader failing on this input: â = σ(W(o^K + u^K)), where σ(x) = 1/(1 + e^{−x}), â ∈ [0, 1] is the predicted probability of failure of the reader, and W a matrix of size d × 1. An input of the narrator is a tuple (d_obf, q), where d_obf is a document with an obfuscated word. To obfuscate a word, we replace it with the token unk, for unknown. The output of the narrator is a real number r ∈ [0, 1], which is the expected probability of the reader failing on the question. The objective of the narrator is to select the stories which maximize this reward. Finally, we use the same text passage and query representations as for the reader, based on a CNN with different filter sizes for the document, and on the two last hidden states of a bidirectional Gated Recurrent Unit (GRU) network for the question encoding. Both models are fully differentiable.

3 DATASETS AND DATA PREPROCESSING

Cambridge Dialogs: the transactional dialog corpus proposed by Wen et al. (2016) was produced with a crowdsourced version of the Wizard-of-Oz paradigm. It was originally designed for dialog state tracking, but Perez (2016) has shown that this task can also be considered as a reading task. In such a setting, the informable slots provided as metadata for each dialog were used to produce questions for a dialog comprehension task. The dataset deals with an agent assisting a user to find a restaurant in Cambridge, UK. To propose the best matching restaurant, the system needs to extract 3 constraints which correspond to the informable slots in the dialog state tracking task: Food, Pricerange, Area. Given a dialog between an agent and a user, these informable slots become questions for the model we propose. The dataset contains 680 different dialogs about 99 different restaurants. We preprocess the dataset to transform it into a question answering dataset by using the three informable slot types as questions about a given dialog. After this preprocessing operation, we end up with a question-answering-formatted dataset which contains 1352 possible answers.

TripAdvisor aspect-based sentiment analysis: the dataset contains hotel reviews from the TripAdvisor website (Wang et al., 2010). This dataset contains a total of 235K detailed reviews about 1850 hotels. Each review is associated with an overall rating, between 0 and 5 stars. Furthermore, 7 aspects are available: value, room, location, cleanliness, check-in/front desk, service, and business service. We transform the dataset into a question answering task over a given review. Concretely, for each review a question is an aspect, and we use the number of stars as the answer. This kind of machine reading approach to sentiment analysis was previously proposed in Tang et al. (2016).

Children's Book Test (CBT): the dataset is built from freely available books (Hill et al., 2015b) thanks to Project Gutenberg¹. The training data consists of tuples (S, q, C, a), where S is the context composed of 20 consecutive sentences from the book, q is the query, C a set of 10 candidate answers and a the answer.
The query q is the 21st sentence, i.e. the sentence that directly follows the 20 sentences of the context, in which one word is removed and replaced with a missing-word symbol. Questions are grouped into 4 distinct categories depending on the type of the removed word: Named Entities (NE), (Common) Nouns (CN), Verbs (V) and Prepositions (P). The training set contains 669,343 inputs (context + query), and we evaluated our models on the provided test set, which contains 10,000 inputs, 2,500 per category.

¹https://www.gutenberg.org

4 EXPERIMENTS

4.1 TRAINING DETAILS

10% of the dataset was randomly held out to create a test set. We split the dataset before all the training operations, and each of the protocols we propose was tested on the same test dataset. For the training phase, we split the training dataset to extract a validation set used to perform early stopping. We use the Adam optimizer (Kingma & Ba, 2014) with a starting learning rate of 0.0005. We set the dropout keep rate to 0.9, which means that during training a randomly selected 10% of the parameters are not used during the forward pass and not updated during the backward propagation of error. We also added the gated memory mechanism of Perez & Liu (2016), which dynamically regulates the access to the memory blocks. This mechanism had a very positive effect on the overall performance of our models. All weights are initialized randomly from a Gaussian distribution with zero mean and σ = 0.1. Moreover, we penalize the loss with the sum of the L2 norms of the parameters of the models. We set the batch size to 16 inputs and use word embeddings of size 300. We initialize all the embedding matrices with pre-trained GloVe word vectors (Pennington et al., 2014), and we randomly initialize the embeddings of the words of our documents that are not in the GloVe model. In our experiments, CNN encoding improves not only the overall accuracy of the model compared to an LSTM but also its stability, by decreasing the variance of the results. In practice, we use 128 filters of each size 2, 3, 5 and 8, for a total of 512 filters in the one-dimensional convolutional layer. We repeat each training 10 times for the first two datasets and report maximum and average accuracy on the test set. The maximum is the test-set score of the best of the 10 trained models, selected on the validation set. During the adversarial learning, the dataset contains 70% clear dialogs and 30% corrupted dialogs, i.e. λ = 0.3. Within these corrupted data, 20% are randomly obfuscated by the narrator in order to make it learn from exploration, and the narrator maximizes its reward on the remaining 80%. Finally, to fit the format of the dataset, we slightly modified the output layer of our reader for the CBT task. Instead of projecting onto a set of candidate answers, the last layer of the reader projects onto the entire vocabulary: â = σ(M ⊙ W(o^K + u^K)), where W is a matrix of size V × d with V the vocabulary size, ⊙ the element-wise product, and M the mask vector of size V containing 1 if the corresponding word is proposed in the candidate answers and 0 otherwise.
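A small sketch of this masked output layer; the helper name and shapes are our assumptions, and the code mirrors the formula above literally:

```python
import torch

def masked_candidate_probs(W, o_K, u_K, candidate_ids, vocab_size):
    """CBT output layer: project onto the vocabulary, masked to the candidates."""
    logits = W @ (o_K + u_K)             # W: (V, d) -> scores over the whole vocabulary
    mask = torch.zeros(vocab_size)
    mask[candidate_ids] = 1.0            # M: 1 for the 10 proposed candidates
    return torch.sigmoid(mask * logits)  # a_hat = sigma(M ⊙ W(o^K + u^K))
```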
Each experiment was run 10 times, and we report in this table the maximum score on the test set (selected on the validation set) and the average score. The precise number of hops needed to achieve the best performance with such models is not obvious, so we present all the results for readers and narrators with between 4 and 6 hops. The adaptive adversarial GMemN2N improves the accuracy of the model on the Cambridge task by 2.3 points for a model with 6 hops. The best performance on the TripAdvisor dataset was achieved by the adversarial GMemN2N with 4 hops; it improves the accuracy by 1.5 points. The uniform protocol improves the stability of the performance compared to a standard reader, but we went further with the adversarial protocol, which improves both the overall accuracy and the stability of the performance. It is not clear for this task that the number of hops, between 4 and 6, has an influence on the general behaviour, but we achieve the best performance with our adversarial protocol and a reader with 6 hops. All the average values of the models trained with the adversarial protocol are higher than the others, even for the 5-hop model, which does not achieve a very good maximum performance over the 10 replications we have run. Performance on the CBT dataset is displayed in Table 3. Because of the size of this dataset, we ran the training only once instead of repeating it 10 times. Results of the uniform training seem similar to the performance of the standard reader in this case, but the accuracy of the models trained with our adversarial protocol remains higher than the others.

5 VISUALIZATIONS AND ANALYSIS

5.1 NARRATOR PREDICTIONS

In order to better understand what the narrator learns from the reader's behaviour during the adversarial protocol, Figure 2 depicts the rewards that the narrator expects for each word of a document after several rounds of the game. Given a tuple (d, q), where d is a clear document and q a query, and assuming the document contains k words, we generate k corrupted documents, with one word obfuscated in each of them. We then feed the narrator with these corrupted data and report the results. The y-axis represents the document and the x-axis the expected reward from the reader if the narrator decides to generate a corrupted document by obfuscating this word. The words of the document that correspond to the answer to the question are highlighted in red. The narrator tends to obfuscate some important keywords of the dialogs. Furthermore, the narrator does not point at a single word but at a word and its neighborhood. This might be a consequence of the encoding, which is not only a representation of a word but a representation of a word in its context.

5.2 STANDARD VS ADVERSARIAL READER ATTENTION

Figure 3 depicts the attention values, presented by hop, over a document from the Cambridge dataset. The document was chosen because only the reader trained with the adversarial protocol answers the question correctly. It displays attention distributions for a reader trained with the three different protocols: [top] standard, [middle] uniform, [bottom] adversarial. The overall behaviour of the first two readers is comparable. The readers quickly focus on what we assume to be an important span of text. After two hops the readers start looking at the same position in the document. In contrast, the reader trained with the adversarial protocol seems to have a very different behavior regarding the attention mechanism.
It captures the important part of the sentence directly at the first hop and uses the 4 remaining hops to focus more broadly on the end of the document. We might interpret this as a consequence of the obfuscation protocol, which forces the reader to look at different parts of the sentence instead of focusing on one precise point during the learning process.

6 RELATED WORK

6.1 END-TO-END MACHINE READING

The task of end-to-end machine reading consists of learning to select an answer to a question given a passage of text, in a supervised manner. One popular formal setting of the problem, the cloze-style QA task, involves tuples of the form (d, q, a, C), where d is a document (context), q is a query over the contents of d, in which a phrase is replaced with a placeholder, and a is the answer to q, which comes from a set of candidates C. In this work we consider datasets where each candidate c ∈ C has at least one token which also appears in the document. The task can then be described as: given a document-query pair (d, q), find a ∈ C which answers q. Below we provide an overview of representative neural network architectures which have been applied to this problem.

LSTMs with Attention: Several architectures introduced in Hermann et al. (2015) employ LSTM units to compute a combined document-query representation g(d, q), which is used to rank the candidate answers. These include the DeepLSTM Reader, which performs a single forward pass through the concatenated (document, query) pair to obtain g(d, q); the Attentive Reader, which first computes a document vector d(q) by a weighted aggregation of words according to attentions based on q, and then combines d(q) and q to obtain their joint representation g(d(q), q); and the Impatient Reader, where the document representation is built incrementally. The architecture of the Attentive Reader has been simplified recently in the Stanford Attentive Reader, where shallower recurrent units were used with a bilinear form for the query-document attention (Chen et al., 2016).

Attention Sum: The Attention-Sum (AS) Reader (Kadlec et al., 2016) uses two bidirectional GRU networks to encode both d and q into vectors. A probability distribution over the entities in d is obtained by computing dot products between q and the entity embeddings and taking a softmax. Then, an aggregation scheme named pointer-sum attention is further applied to sum the probabilities of the same entity, so that frequent entities in the document will be favored over rare ones. Building on the AS Reader, the Attention-over-Attention (AoA) Reader (Cui et al., 2016) introduces a two-way attention mechanism where the query and the document are mutually attentive to each other.

Multi-hop Architectures: Memory Networks (MemNets) were proposed in Weston et al. (2014), where each sentence in the document is encoded to a memory by aggregating nearby words. Attention over the memory slots given the query is used to compute an overall memory and to renew the query representation over multiple iterations, allowing certain types of reasoning over the salient facts in the memory and the query. Neural Semantic Encoders (NSE) (Munkhdalai & Yu, 2016) extended MemNets by introducing a write operation which can evolve the memory over time during the course of reading. Iterative reasoning has been found effective in several more recent models, including the Iterative Attentive Reader (Sordoni et al., 2016) and ReasoNet (Shen et al., 2016).
The latter allows dynamic reasoning steps and is trained with reinforcement learning. Other related works include EpiReader (Trischler et al., 2016b), which consists of two networks, where one proposes a small set of candidate answers and the other reranks the proposed candidates conditioned on the query and the context, and the Bi-Directional Attention Flow network (BiDAF) (Seo et al., 2016), which adopts a multi-stage hierarchical architecture along with a flow-based attention mechanism.

6.2 ADVERSARIAL LEARNING AND SELF-PLAY

The main principle of self-play consists of defining a learning task where two, possibly antagonistic, behaviours are learnt jointly by competing against one another. In the context of two-player zero-sum games, such a setting arises quite naturally. Two models of the same nature compete under the rules of the considered game and learn from their successive performances. A majority of prior work has focused on learning from self-play data using temporal-difference learning in backgammon (Tesauro, 1995) and chess (Mannen, 2003), or using linear regression in Othello (van der Ree & Wiering, 2013) and, more recently, Go (Silver et al., 2016). In the general context of board games, the main advantage of self-play as a method of training neural network controllers lies in the fact that every position will be the result of a game position from an actual board, rather than a contrived position that may fail to teach the network about probabilities or prevent the network from properly generalizing from the results. In other words, self-play helps exhibit challenging configurations for the controller to overcome. In such a setting, the network has the advantage of having seen several million different board positions, which would hardly be feasible for a network trained on a crafted set of training data.

In the domain of reading, it has recently been observed that the tasks of answering a question given a passage of text and predicting the question for a text passage are interesting tasks to model jointly. Several papers have thus recently proposed to use question generation as a regularization task to improve the passage encoding model of a neural reader (Yuan et al., 2017; Wang et al., 2017). In this paper, we claim these two tasks are indeed complementary, but we think adversarial training of the kind used in two-player games will lead to the same advantages as those observed previously. As generating a question given a passage and an answer is hard, we took inspiration from the recent work proposed in Guo et al. (2017) and define a narrator network as a task complementary to the reader's learning task. Such a narrator has the task of finding the most meaningful spans of text to obfuscate in a given passage, given a question, in order to minimize the probability of the reader answering successfully.

6.3 ADAPTIVE DROPOUT

Recent deep neural networks contain a large number of parameters and tend to easily overfit the training set. One of the main ideas developed to prevent this overfitting is to randomly drop units from the network during training (Srivastava et al., 2014). Such an approach amounts to combining many different neural networks to make a prediction. With the same goal of avoiding overfitting the training data, training a model on a dataset which contains corrupted data has proven useful, as studied in Maaten et al. (2013).
They developed different ways to corrupt a document, for example by adding noise to the input features; our work relates to what they call blankout corruption, which consists of randomly deleting features from the input documents (texts or images in this case) with probability q.

7 CONCLUSION AND FUTURE WORK

In this paper, we propose an adversarial protocol to train coupled deep memory networks for the task of machine comprehension. In all reported experiments, the models trained with this novel protocol outperform the equivalent models trained using a standard supervised protocol. Moreover, our adversarial protocol seems to reduce the variance of the models' performances. In future work, we plan to continue studying this novel protocol using an active question answering task. Moreover, we are currently investigating an adaptation of this protocol to Visual Question Answering.
1. What is the novel idea proposed by the paper in question answering?
2. What are the concerns regarding the experimental results presented in the paper?
3. How does the reviewer assess the impact of the proposed approach if it works as intended?
4. What recommendation does the reviewer have for improving the paper's evaluation?
5. Are there any typos or errors in the paper that the reviewer noticed?
Review
Review
The paper aims to improve the accuracy of a reading model on question answering datasets by playing against an adversarial agent (which the authors call a narrator) that "obfuscates" the document, i.e. changes words in the document. The authors mention that word dropout can be considered a special case of this, which randomly drops words without any prior. The authors then claim that smartly choosing the words to drop can make a stronger adversarial agent, which in turn would improve the performance of the reader as well. Hence the adversarial agent is trained and is architecturally similar to the reader but just has a different last layer, which predicts the word that would make the reader fail if the word is obfuscated.

I think the idea is interesting and novel. While there have been numerous GAN-like approaches for language understanding, very few, if any, have shown worthy results. So if this works, it could be an impactful achievement.

However, I am concerned with the experimental results. First, CBT: NE and CN numbers are too low. Even a pure LSTM (no attention, no memory) achieves 44% and 45%, respectively (Yu et al., 2017). These are 9% and 6% higher than the reported numbers for adversarial GMemN2N. So it is very difficult to determine if the model is appropriate for the dataset in the first place, and whether the gain over the non-adversarial setting is due to the adversarial setup or not. Second, Cambridge dialogs: the dataset's metric is not accuracy-based (while the paper reports accuracy), so I assume some preprocessing and altering have been done on the dataset. So there is no baseline to compare against. Though I understand that the point of the paper is the improvement via the adversarial setting, it is hard to gauge how good the numbers are. Third, TripAdvisor: the dataset paper by Wang et al. (2010) is not evaluated on accuracy (rather on ranking, etc.). Did you also make changes to the dataset? Again, this makes the paper less strong because there is no baseline to compare against.

In short, the only comparable dataset is CBT, which has too low accuracy compared to a very simple baseline. In order to improve the paper, I recommend the authors to evaluate on more common datasets and/or use more appropriate reading models.

---

Typos:
page 1 first para: "One the first hand" -> "On the first hand"
page 1 first para: "minimize to probability" -> "minimize the probability"
page 3 first para: "compensate" -> "compensated"
page 3 last para: "softmaxis" -> "softmax is"
page 4 sec 2.4: "similar to the reader" -> "similarly to the reader"
page 4 sec 2.4: "unknow" -> "unknown"
page 4 sec 3 first para: missing reference at "a given dialog"
page 5 first para: "Concretly" -> "Concretely"
Table 1: "GMenN2N" -> "GMemN2N"
Table 1: what is the difference between "mean" and "average"?
page 8 last para: missing reference at "Iterative Attentive Reader"
page 9 sec 6.2 last para: several citations missing, e.g. which paper is by "Tesauro"?

[Yu et al. 2017] Adams Wei Yu, Hongrae Kim, and Quoc V. Le. Learning to Skim Text. ACL 2017
ICLR
Title Adversarial reading networks for machine comprehension Abstract Machine reading has recently shown remarkable progress thanks to differentiable reasoning models. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural language based reasoning tasks such as factual reasoning and basic deduction. However, the task of machine comprehension is currently bound to a supervised setting and to the available question answering datasets. In this paper we explore the paradigm of adversarial learning and self-play for the task of machine reading comprehension. Inspired by successful propositions in the domain of game learning, we present a novel approach to training for this task that is based on the definition of a coupled attention-based memory model. On one hand, a reader network is in charge of finding answers regarding a passage of text and a question. On the other hand, a narrator network is in charge of obfuscating spans of text in order to minimize the probability of success of the reader. We experimented with the model on several question-answering corpora. The proposed learning paradigm and associated models present encouraging results. 1 INTRODUCTION Automatic comprehension of text is one of the main goals of natural language processing. While the ability of a machine to understand text can be assessed in many different ways, several benchmark datasets have recently been created to focus on answering questions as a way to evaluate machine comprehension (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2015a; Weston et al., 2015; Rajpurkar et al., 2016; Nguyen et al., 2016). In this setup, the machine is presented with a piece of text such as a news article or a story. Then, the machine is expected to answer one or multiple questions related to the text. The task is linked to several important outcomes. First, it provides tools that will shortly help users with efficient access to large amounts of information. Also, it acts as an important proxy task to assess models of natural language understanding and reasoning. In this context, numerous large-scale machine comprehension/QA datasets (Hermann et al., 2015; Rajpurkar et al., 2016; Trischler et al., 2016a; Nguyen et al., 2016) have recently been released and have contributed to significant advancement. From a model perspective, neural models are now approaching human parity on some of these benchmarks, and a large corpus of novel and promising research has been produced in the domain of attention, memory and parametric models with so-called reasoning capabilities. However, the field is currently confined to the paradigm of supervised learning and strictly linked to the currently available annotated datasets. As a counterpart, an increasing research activity has been dedicated since the 90's to self-play and adversariality to overcome this boundary and allow a model to exploit its own decisions to improve itself. Two famous examples are related to policy learning in games. Indeed, TD-Gammon (Tesauro, 1995) was a neural network controller for backgammon which achieved near-top-player performance using self-play as a learning paradigm. More recently, DeepMind's AlphaGo used the same paradigm to win against the world's best human Go player. The major advantage of such a setting is that it partially releases the learning procedure from the limits of an available dataset. The dual models learn and improve their performance by acting one against the other, as so-called sparring partners.
In this paper, we adapt this paradigm to the domain of machine reading. On the one hand, a reader network is trained to learn to answer questions regarding a passage of text. On the other hand, a narrator network learns to obfuscate words of a given passage in order to minimize the probability of successful answering by the reader model. We developed a sequential learning protocol in order to gradually improve the quality of the models. This paradigm separates itself from the current research direction of joint question and answer learning from text, as proposed in Wang et al. (2017). Indeed, in comparison to question generation as a regularizer of a reader model, which sounds promising, we believe adversarial training frees the model from the constraint of strict and bounded supervision and brings robustness to the answering model. Our contributions can be summarized as follows: (1) We propose a new learning paradigm for machine comprehension based on adversarial training. (2) We show this methodology allows us to overcome the boundaries of strict supervision and provides robustness to noise in question-answering settings, through a set of experiments on several machine reading corpora. (3) Visualizations of the models reveal some useful insights into the attention mechanism used for reasoning about the questions and extracting meaningful passages of a text given a question. Roadmap: In Section 2, we formalize our adversarial learning protocol. Also, the reader and narrator networks are presented. In Section 3 the corpora used for evaluation are detailed. Section 4 presents our current experimental results. Section 5 details several visualizations of the decisions and attention values computed by the coupled models. Finally, Section 6 reviews the state of the art in machine reading comprehension, Memory Network models, the paradigm of self-play and its links to adversarial learning. 2 ADVERSARIAL READING NETWORKS Several studies have recently challenged deep machine reading models with adversarial examples, such as Miyato et al. (2016) and Jia & Liang (2017). This kind of approach is well known in computer vision (Goodfellow et al., 2014) but seems to also affect natural language processing. More precisely, Jia & Liang (2017) demonstrate that a large majority of the recent state-of-the-art deep machine reading models suffer from a lack of robustness to adversarial examples because of their so-called oversensitivity. Indeed, average accuracies were halved when these models were tested on corrupted data, i.e., a document with an additional sentence at the end which normally does not affect the answer. The model we propose is built to use this adversariality as an adaptive dropout, by challenging the reader with more and more difficult tasks during learning. Indeed, we extend the concept of asymmetric self-play to train a model, which we call the narrator, during an adversarial game with a reader. The narrator acquires knowledge about the reader's behaviour during training and generates increasingly hard adversarial examples. Beyond artificially increasing the size of the available dataset, this adaptive behaviour of the narrator prevents catastrophic forgetting in the reader. In this section, we explain the protocol of adversarial training we developed for robust machine comprehension. Then, we describe the reader and narrator models used. 2.1 MAIN LEARNING PROTOCOL The overall framework is a turn-based question answering game described in Figure 1.
At the beginning of each round, the narrator obfuscates one word in each document sampled from the training corpus. We fix the fraction of corrupted (versus clear) data in the dataset to λ ∈ [0, 1]. Indeed, too low a percentage of corrupted data might have no effect on the training, and too high a percentage will prevent the reader from learning well. Then, the reader is trained on a subset of this obfuscated corpus and tested on the remaining subset. Note that both train and test sets contain corrupted data. Finally, the narrator gets back a set of rewards regarding the reader's performance on the obfuscated stories. Given a tuple (d, d_obf, q), where d is the original document, d_obf the document with an obfuscated word proposed by the narrator, and q the associated question, the reward r given to the narrator is defined as follows:

$$r = \begin{cases} 1 & \text{if the reader answers correctly on } d \text{ and fails on } d_{\text{obf}} \\ 0 & \text{otherwise} \end{cases}$$

The reward given to the narrator is a direct measurement of the impact of the obfuscation on the reader's performance. All the previously collected rewards are stored and used for experience replay throughout the turns. After each learning turn, all the parameters of the narrator are reinitialized and retrained on all the recorded rewards. Throughout the turns, the narrator accumulates information about the reader's behaviour and proposes more challenging tasks as the game goes on. Each narrator dataset is chosen to maximize the narrator's expected rewards on 80% of the stories, while a word is randomly obfuscated in the remaining 20% in order to ensure exploration. Finally, the reader keeps improving through the turns, and any catastrophic forgetting is compensated for at the narrator's next turn by especially focusing on these flaws.

Algorithm 1 Pseudo-code of the adversarial training
  Split the dataset into 3 pieces: (A) train (80%), (B) valid (10%) and (C) test (10%)
  Create D, an empty dataset
  epoch = 0
  while epoch < NB_MAX_EPOCHS do
    Split A into A1 (80%) and A2 (20%)
    if epoch = 0 then
      Randomly corrupt 20% of A1 and 100% of A2
    else
      Reinitialize all the parameters of the narrator
      Train the narrator on D
      The narrator corrupts 20% of A1 and 100% of A2
    end if
    Train one epoch of the reader on A1
    Let A2_clear be the dataset that contains the same data as A2 but without corruption
    Test the reader on A2 and on A2_clear
    for all (d ∈ A2, d_clear ∈ A2_clear) do
      Let r be the reward given to the narrator
      if the reader succeeds on d_clear and fails on d then
        D ← D ∪ {(d, r = 1)}
      else if the reader succeeds on d_clear and succeeds on d then
        D ← D ∪ {(d, r = 0)}
      end if
    end for
    Test the reader on B and decide whether to early-stop
    epoch ← epoch + 1
  end while
  Test the reader on C and report the results

Finally, let â be the predicted distribution and a the ground truth. Categorical cross-entropy, $\mathcal{L}_{\text{Reader}} = -\sum_{i=1}^{N}\sum_{j=1}^{v} a_{ij}\log(\hat{a}_{ij})$, is the loss function for the reader network, as the model's decision is a distribution over a vocabulary. Binary cross-entropy, $\mathcal{L}_{\text{Narrator}} = -\sum_{i=1}^{N}\left[a_i\log(\hat{a}_i) + (1 - a_i)\log(1 - \hat{a}_i)\right]$, is used as the loss function for the narrator network. 2.2 BASELINE PROTOCOL As a reference protocol, one word is obfuscated in several stories of the dataset using a uniform sampling strategy. This is a naive variation of the first protocol in which the narrator doesn't learn from the reader's feedback. In fact, this protocol is similar to a dropout regularization that helps avoid overfitting the training set.
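To make the turn structure concrete, here is a minimal Python sketch of one turn of Algorithm 1. The `reader`/`narrator` objects and their methods (`train`, `answer`, `pick_word_to_obfuscate`, `reset_parameters`, `with_words`) are hypothetical stand-ins, not the authors' code; batching, validation and early stopping are omitted.

```python
import random

def choose_position(narrator, doc, explore_prob):
    """Pick an obfuscation position: with prob. explore_prob a random word,
    otherwise the position the narrator expects to hurt the reader most."""
    if random.random() < explore_prob:
        return random.randrange(len(doc.words))
    return narrator.pick_word_to_obfuscate(doc)  # hypothetical API

def obfuscate(doc, position):
    """Return a copy of `doc` with the word at `position` replaced by unk."""
    words = list(doc.words)
    words[position] = "unk"
    return doc.with_words(words)  # hypothetical copy helper

def adversarial_turn(reader, narrator, train_docs, replay_buffer, epoch):
    """One turn of the adversarial game, following Algorithm 1."""
    random.shuffle(train_docs)
    split = int(0.8 * len(train_docs))
    a1, a2 = train_docs[:split], train_docs[split:]

    if epoch > 0:
        # The narrator is reinitialized and retrained on all recorded rewards.
        narrator.reset_parameters()
        narrator.train(replay_buffer)

    explore = 1.0 if epoch == 0 else 0.2  # first turn: purely random corruption
    # Corrupt 20% of A1 and 100% of A2.
    a1_cor = [obfuscate(d, choose_position(narrator, d, explore))
              if random.random() < 0.2 else d for d in a1]
    a2_cor = [obfuscate(d, choose_position(narrator, d, explore)) for d in a2]

    reader.train(a1_cor, epochs=1)

    # Reward r = 1 only when the obfuscation flips a correct answer.
    for d_clear, d_obf in zip(a2, a2_cor):
        if reader.answer(d_clear) == d_clear.gold_answer:
            r = 0 if reader.answer(d_obf) == d_obf.gold_answer else 1
            replay_buffer.append((d_obf, r))
```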
However, without the narrator learning of the first protocol, we lose the adaptive dropout and all the curriculum-learning notions of easier and harder inputs. In practice, this simple adversarial protocol improves the robustness of the results compared to a standard learning protocol. This learning protocol has strong similarities with the one proposed by Maaten et al. (2013). 2.3 READER NETWORK We use a Gated End-to-End Memory Network, GMemN2N, as the reader, first introduced by Perez & Liu (2016). This architecture is based on two different memory cells and an output prediction. An input memory representation {m_i} and an output representation {c_i} are used to store embedding representations of the inputs. Suppose an input of the model is a tuple (d, q), where d is a document, i.e. a set of sentences {s_i}, and q a query about d. The entire set of sentences is converted into input memory vectors m_i = AΦ(s_i) and output memory vectors c_i = CΦ(s_i) using two embedding matrices A and C. The question q is also embedded, using a third matrix B of the same dimension as A and C: u = BΨ(q), where Φ and Ψ are respectively the sentence embedding function and the question embedding function described in the next paragraph. The input memory is used to compute the relevance of each sentence in its context regarding the question, by taking the inner product of the input memory sentence representation with the query; a softmax then yields the probability distribution $p_i = \mathrm{softmax}(u^\top m_i)$, where $\mathrm{softmax}(a_i) = e^{a_i} / \sum_{j \in [1,n]} e^{a_j}$. The response from the output memory, $o = \sum_i p_i c_i$, is the sum of the output memory vectors {c_i} weighted by the sentence relevances computed before. A gated mechanism is used to update the value of the controller u:

$$T^k(u^k) = \sigma(W_T^k u^k + b_T^k), \qquad u^{k+1} = o^k \odot T^k(u^k) + u^k \odot (1 - T^k(u^k)) \quad (1)$$

Finally, assuming we use a model with K hops of memory, the final prediction is $\hat{a} = \mathrm{softmax}(W(o^K + u^K))$, where W is a matrix of size d × v and v is the number of candidate answers. In this model, we do not use the adjacent or layer-wise weight-tying schemes, and all the matrices A^k and B^k of the multiple hops are different. Text and question representations: To build the sentence representations, we use a 1-dimensional Convolutional Neural Network (CNN) with a list of filter sizes over all the sentences, as proposed in Kim (2014). Let [s_1, . . . , s_N] be the vectorial representation of a document with N sentences, where s_i = [w_{i,1}, w_{i,2}, . . . , w_{i,n}] is the i-th sentence, which contains n words. Given a convolutional filter F ∈ R^{h×d}, where h is the width of the convolutional window, i.e. the number of words it overlaps, the convolutional layer produces $c_{i,j} = f(F \odot [Ew_{i,j}, \ldots, Ew_{i,j+h}] + b), \ \forall j \in [1, n - h]$, where ⊙ is the elementwise multiplication, f a rectified linear unit (ReLU), b a bias term and E the embedding matrix of size d × V, where V is the vocabulary size and d the word embedding size. Then, a max-pooling operator is applied to this vector to extract features. Given a filter F, after a convolutional operation and a max-pooling operation, we obtain a feature $\hat{c}_i = \max_j(c_{i,j})$ from the i-th sentence of the text. Multiple filters with varying sizes are used. Assuming our model uses N_s different filter sizes and N_f filters per size, we are able to extract N_s × N_f features for one sentence. The final representation of the sentence, $\Phi(s_i) = [\hat{c}_i^{F_1}, \hat{c}_i^{F_2}, \ldots, \hat{c}_i^{F_{N_s \times N_f}}]$, is the concatenation of the extracted features from all the filters.
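For concreteness, the two building blocks above can be sketched in a few lines of PyTorch. This is a minimal illustration of the Kim (2014)-style encoder and of the gated hop in Eq. (1), with illustrative names and sizes, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNSentenceEncoder(nn.Module):
    """Sentence encoder Φ: 1-D convolutions of several widths followed by
    max-pooling over time. Sizes mirror Section 4.1 but are illustrative."""
    def __init__(self, emb_dim=300, widths=(2, 3, 5, 8), n_filters=128):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, w) for w in widths)

    def forward(self, word_embs):            # (n_words, emb_dim)
        x = word_embs.t().unsqueeze(0)       # (1, emb_dim, n_words)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1).squeeze(0)  # (len(widths)*n_filters,)

class GatedMemoryHop(nn.Module):
    """One gated hop, Eq. (1): u^{k+1} = o^k ⊙ T^k(u^k) + u^k ⊙ (1 − T^k(u^k))."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)      # W_T^k, b_T^k

    def forward(self, u, m, c):              # u: (dim,); m, c: (n_sents, dim)
        p = F.softmax(m @ u, dim=0)          # sentence relevances p_i
        o = p @ c                            # response o = Σ_i p_i c_i
        t = torch.sigmoid(self.gate(u))      # gate T^k(u^k)
        return o * t + u * (1 - t)           # next controller u^{k+1}
```

A K-hop reader stacks K such hops and projects the final controller onto the candidate answers via softmax(W(o^K + u^K)).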
2.4 NARRATOR NETWORK The objective of this model is to predict the probability that the reader fails to respond to a question given a document with an obfuscated word. This information is used by the narrator to determine the position of the obfuscated word in the document that maximizes the probability of the reader failing its task. We use a GMemN2N, similarly to the reader. However, on the last layer a sigmoid function is used to predict the probability of the reader failing on this input: $\hat{a} = \sigma(W(o^K + u^K))$, where $\sigma(x) = \frac{1}{1+e^{-x}}$, $\hat{a} \in [0, 1]$ is the predicted probability of failure of the reader, and W is a matrix of size d × 1. An input of the narrator is a tuple (d_obf, q), where d_obf is a document with an obfuscated word. To obfuscate a word, we replace it with the token unk, for unknown. The output of the narrator is a real number r ∈ [0, 1], the expected probability of the reader failing on the question. The objective of the narrator is to select the stories which maximize this reward. Finally, we use the same text passage and query representation as for the reader, based on a CNN with different filter sizes for the document, and the two last hidden states of a bidirectional Gated Recurrent Unit (GRU) network for the question encoding. Both models are fully differentiable. 3 DATASETS AND DATA PREPROCESSING Cambridge Dialogs: the transactional dialog corpus proposed by Wen et al. (2016) was produced through a crowdsourced version of the Wizard-of-Oz paradigm. It was originally designed for dialog state tracking, but Perez (2016) has shown that this task can also be considered a reading task. In this setting, the informable slots provided as metadata for each dialog are used to produce questions for a dialog comprehension task. The dataset deals with an agent assisting a user to find a restaurant in Cambridge, UK. To propose the best matching restaurant, the system needs to extract 3 constraints, which correspond to the informable slots in the dialog state tracking task: Food, Pricerange, Area. Given a dialog between an agent and a user, these informable slots become questions for the model we propose. The dataset contains 680 different dialogs about 99 different restaurants. We preprocess the dataset to transform it into a question answering dataset by using the three informable slot types as questions about a given dialog. After this preprocessing operation, we end up with a question-answering-formatted dataset which contains 1352 possible answers. TripAdvisor aspect-based sentiment analysis: the dataset contains hotel reviews from the TripAdvisor website (Wang et al., 2010), with a total of 235K detailed reviews about 1850 hotels. Each review is associated with an overall rating, between 0 and 5 stars. Furthermore, 7 aspects are available: value, room, location, cleanliness, check-in/front desk, service, and business service. We transform the dataset into a question answering task over a given review. Concretely, for each review a question is an aspect and we use the number of stars as the answer. This kind of machine reading approach to sentiment analysis was previously proposed in Tang et al. (2016). Children's Book Test (CBT): the dataset is built from freely available books (Hill et al., 2015b) thanks to Project Gutenberg (https://www.gutenberg.org). The training data consists of tuples (S, q, C, a), where S is the context composed of 20 consecutive sentences from the book, q is the query, C a set of 10 candidate answers and a the answer.
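The paper leaves the position-selection mechanism implicit; one plausible reading, consistent with the per-word reward maps of Section 5.1, is that the narrator scores every single-word obfuscation of a document and picks the highest-scoring position. A hypothetical sketch, where `failure_prob` wraps the sigmoid head described above and `obfuscate` is the unk-replacement helper from the earlier sketch:

```python
def pick_word_to_obfuscate(narrator, doc, query):
    """Exhaustively score every single-word obfuscation of `doc` and return
    the position with the highest predicted failure probability."""
    best_pos, best_score = 0, float("-inf")
    for pos in range(len(doc.words)):
        d_obf = obfuscate(doc, pos)                  # replace word by "unk"
        score = narrator.failure_prob(d_obf, query)  # a_hat in [0, 1]
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```

This exhaustive scan is O(document length) forward passes per document, which also explains why Figure 2 can display one expected reward per word.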
The query q is the 21st sentence, i.e. the sentence that directly follows the 20 sentences of the context, in which one word is removed and replaced with a missing-word symbol. Questions are grouped into 4 distinct categories depending on the type of the removed word: Named Entities (NE), (Common) Nouns (CN), Verbs (V) and Prepositions (P). The training set contains 669,343 inputs (context+query), and we evaluated our models on the provided test set, which contains 10,000 inputs, 2,500 per category. 4 EXPERIMENTS 4.1 TRAINING DETAILS 10% of the dataset was randomly held out to create a test set. We split the dataset before all the training operations, and each of the protocols we propose was tested on the same test dataset. For the training phase, we split the training dataset to extract a validation set used for early stopping. We use the Adam optimizer (Kingma & Ba, 2014) with a starting learning rate of 0.0005. We set the dropout keep probability to 0.9, which means that during training 10% of the units, randomly selected, are not used during the forward pass and not updated during the backward propagation of error. We also added the gated memory mechanism of Perez & Liu (2016) that dynamically regulates the access to the memory blocks. This mechanism had a very positive effect on the overall performance of our models. All weights are initialized randomly from a Gaussian distribution with zero mean and σ = 0.1. Moreover, we penalize the loss with the sum of the L2 norms of the parameters of the models. We set the batch size to 16 inputs and we use word embeddings of size 300. We initialize all the embedding matrices with pre-trained GloVe word vectors (Pennington et al., 2014), and we randomly initialize the words of our documents that are not in the GloVe model. It seems that, in our experiments, CNN encoding improves not only the overall accuracy of the model compared to an LSTM but also its stability, by decreasing the variance of the results. So in practice we use 128 filters of sizes 2, 3, 5 and 8, for a total of 512 filters in the one-dimensional convolutional layer. We repeat each training 10 times for the first two datasets and report maximum and average accuracy on the test set. The maximum is the test-set score of the best of the 10 trained models, selected on the validation set. During the adversarial learning, the dataset contains 70% clear dialogs and 30% corrupted dialogs, λ = 0.3. Inside the corrupted data, 20% are randomly obfuscated by the narrator in order to make it learn from exploration, and the narrator maximizes its reward on the remaining 80%. Finally, to fit the format of the dataset, we slightly modified the output layer of our reader for the CBT task. Instead of projecting onto a set of candidate answers, the last layer of the reader projects onto the entire vocabulary, $\hat{a} = \sigma(M \odot W(o^K + u^K))$, where W is a matrix of size V × d with V the vocabulary size, ⊙ the elementwise product, and M the mask vector of size V containing 1 if the corresponding word is proposed among the candidate answers and 0 otherwise. 4.2 RESULTS Performance results on the Cambridge dataset and TripAdvisor are displayed in Table 2. We present the results of our implementation of a standard GMemN2N, a uniform GMemN2N, which is the reader trained with the baseline protocol of Section 2.2, and the GMemN2N trained with the adversarial protocol of Section 2.1 against the narrator.
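As a small illustration of the CBT output layer, here is a hedged PyTorch sketch of the masked projection; tensor names are ours, and we read the prediction off among the candidates only:

```python
import torch

def cbt_prediction(W, o_K, u_K, candidate_ids, vocab_size):
    """Masked output layer for CBT: a_hat = sigma(M ⊙ W(o^K + u^K)).
    W: (V, d) tensor; o_K, u_K: (d,) tensors; candidate_ids: LongTensor of
    the 10 candidate word ids. A minimal sketch, not the authors' code."""
    logits = W @ (o_K + u_K)              # (V,) one score per vocabulary word
    mask = torch.zeros(vocab_size)
    mask[candidate_ids] = 1.0             # M: 1 only on the candidate answers
    a_hat = torch.sigmoid(mask * logits)  # as written in the paper
    # The answer is the best-scoring word among the candidates.
    return candidate_ids[a_hat[candidate_ids].argmax()]
```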
Each of the experiments was run 10 times, and we report in this table the maximum score on the test set (selected on the validation set) and the average score. The precise number of hops needed to achieve the best performance with such models is not obvious, so we present all the results for readers and narrators with between 4 and 6 hops. The adaptive adversarial GMemN2N improves the accuracy of the model on the Cambridge task by 2.3 points for a model with 6 hops. The best performance on the TripAdvisor dataset was achieved by the adversarial GMemN2N with 4 hops; it improves the accuracy by 1.5 points. The uniform protocol improves the stability of the performances compared to a standard reader, but we go further with the adversarial protocol, which improves both the overall accuracy and the stability of the performances. It is not clear for this task that the number of hops, between 4 and 6, has an influence on the general behaviour, but we achieve the best performance with our adversarial protocol and a reader with 6 hops. All the average values of the models trained with the adversarial protocol are higher than the others, even for the 5-hop model, which doesn't achieve a very good maximum performance over the 10 replications we have run. Performances on the CBT dataset are displayed in Table 3. Because of the size of this dataset, we didn't repeat the training 10 times but only once. The results of the uniform training seem similar to the performances of the standard reader in this case, but the accuracy of the models trained with our adversarial protocol remains higher than the others. 5 VISUALIZATIONS AND ANALYSIS 5.1 NARRATOR PREDICTIONS In order to better understand what the narrator learns from the reader's behaviour during the adversarial protocol, Figure 2 depicts the rewards that the narrator expects for each word of a document after several rounds of the game. Given a tuple (d, q), where d is a clear document and q a query, and assuming the document contains k words, we generate k corrupted documents, obfuscating one word in each. We then feed the narrator with these corrupted data and report the results. The y-axis represents the document and the x-axis the expected reward from the reader if the narrator decides to generate a corrupted document by obfuscating this word. In red, the words of the document that correspond to the answer to the question are highlighted. The narrator tends to obfuscate some important keywords of the dialogs. Furthermore, the narrator does not point at a single word but at a word and its neighbourhood. This might be a consequence of the encoding, which is not only a representation of a word but a representation of a word in its context. 5.2 STANDARD VS ADVERSARIAL READER ATTENTION Figure 3 depicts the attention values, presented by hop, over a document from the Cambridge dataset. The document was chosen because only the adversarial protocol answers the question correctly. It displays attention distributions for a reader trained with the three different protocols: [top] standard, [middle] uniform, [bottom] adversarial. The overall aspect of the first two readers is comparable. The readers quickly focus on what we assume to be an important span of text. After two hops the readers start looking at the same position in the document. On the contrary, the reader trained with the adversarial protocol seems to have a very different behaviour regarding the attention mechanism.
It captures the important part of the sentence directly at the first hop and uses the 4 remaining hops to focus more broadly on the end of the document. We might interpret this as a consequence of the obfuscation protocol, which forces the reader to look at different parts of the sentence instead of focusing on one precise point during the learning process. 6 RELATED WORK 6.1 END-TO-END MACHINE READING The task of end-to-end machine reading consists of learning to select an answer to a question given a passage of text, in a supervised manner. One popular formal setting of the problem, the cloze-style QA task, involves tuples of the form (d, q, a, C), where d is a document (context), q is a query over the contents of d in which a phrase is replaced with a placeholder, and a is the answer to q, which comes from a set of candidates C. In this work we consider datasets where each candidate c ∈ C has at least one token which also appears in the document. The task can then be described as: given a document-query pair (d, q), find a ∈ C which answers q. Below we provide an overview of representative neural network architectures which have been applied to this problem. LSTMs with Attention: Several architectures introduced in Hermann et al. (2015) employ LSTM units to compute a combined document-query representation g(d, q), which is used to rank the candidate answers. These include the Deep LSTM Reader, which performs a single forward pass through the concatenated (document, query) pair to obtain g(d, q); the Attentive Reader, which first computes a document vector d(q) by a weighted aggregation of words according to attentions based on q, and then combines d(q) and q to obtain their joint representation g(d(q), q); and the Impatient Reader, where the document representation is built incrementally. The architecture of the Attentive Reader has recently been simplified in the Stanford Attentive Reader, where shallower recurrent units were used with a bilinear form for the query-document attention (Chen et al., 2016). Attention Sum: The Attention-Sum (AS) Reader (Kadlec et al., 2016) uses two bidirectional GRU networks to encode both d and q into vectors. A probability distribution over the entities in d is obtained by computing dot products between q and the entity embeddings and taking a softmax. Then, an aggregation scheme named pointer-sum attention is further applied to sum the probabilities of the same entity, so that frequent entities in the document are favored over rare ones (a short sketch follows this overview). Building on the AS Reader, the Attention-over-Attention (AoA) Reader (Cui et al., 2016) introduces a two-way attention mechanism where the query and the document are mutually attentive to each other. Multi-hop Architectures: Memory Networks (MemNets) were proposed in Weston et al. (2014), where each sentence in the document is encoded to a memory by aggregating nearby words. Attention over the memory slots given the query is used to compute an overall memory and to renew the query representation over multiple iterations, allowing certain types of reasoning over the salient facts in the memory and the query. Neural Semantic Encoders (NSE) (Munkhdalai & Yu, 2016) extended MemNets by introducing a write operation which can evolve the memory over time during the course of reading. Iterative reasoning has been found effective in several more recent models, including the Iterative Attentive Reader (Sordoni et al., 2016) and ReasoNet (Shen et al., 2016).
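As an aside, the pointer-sum aggregation of the AS Reader can be sketched in a few lines; this is our illustrative reading of the mechanism, not the original code, and all names and shapes are assumptions:

```python
import torch

def pointer_sum_attention(query_vec, token_embs, entity_ids, num_entities):
    """Pointer-sum attention: token-level attention probabilities are summed
    per entity. query_vec: (d,); token_embs: (n_tokens, d); entity_ids:
    LongTensor of length n_tokens mapping each token to its entity index."""
    probs = torch.softmax(token_embs @ query_vec, dim=0)  # (n_tokens,)
    scores = torch.zeros(num_entities)
    scores.index_add_(0, entity_ids, probs)  # sum repeated occurrences
    return scores.argmax()                   # predicted entity index
```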
The latter allows dynamic reasoning steps and is trained with reinforcement learning. Other related works include EpiReader (Trischler et al., 2016b), which consists of two networks, where one proposes a small set of candidate answers and the other reranks the proposed candidates conditioned on the query and the context; and the Bi-Directional Attention Flow network (BiDAF) (Seo et al., 2016), which adopts a multi-stage hierarchical architecture along with a flow-based attention mechanism. 6.2 ADVERSARIAL LEARNING AND SELF-PLAY The main principle of self-play consists of defining a learning task where two, possibly antagonistic, behaviours are learnt jointly by competing one against the other. In the context of two-player zero-sum games, such a setting falls out quite naturally. Two models of the same nature compete according to the rules of the considered game and learn from their successive performances. A majority of prior work has focused on learning from self-play data using temporal-difference learning in backgammon (Tesauro, 1995) and chess (Mannen, 2003), or using linear regression in Othello (van der Ree & Wiering, 2013) and, more recently, Go (Silver et al., 2016). In the general context of board games, the main advantage of self-play as a method of training neural network controllers lies in the fact that every position is the result of a game position from an actual board, rather than being a contrived position that may fail to teach the network about probabilities or may prevent the network from properly generalizing from the results. In other words, self-play contributes to exhibiting challenging configurations for a controller to overcome. In such a setting, the network has the advantage of having seen several million different board positions, which would hardly have been feasible with a network trained on a crafted set of training data. In the domain of reading, it has recently been observed that the tasks of answering a question given a passage of text and predicting the question given a text passage are interesting to model jointly. Thus, several papers have recently proposed to use question generation as a regularization task to improve the passage encoding model of a neural reader (Yuan et al., 2017; Wang et al., 2017). In this paper, we agree these two tasks are indeed complementary, but we think adversarial training of the kind used in two-player games will lead to the same advantages as those observed previously. As generating a question given a passage and an answer is hard, we took inspiration from the recent work proposed in Guo et al. (2017) and define a narrator network as a task complementary to the reader's. Such a narrator has the task of finding the most meaningful spans of text to obfuscate in a given passage, given a question, in order to minimize the probability of successful answering by the reader. 6.3 ADAPTIVE DROPOUT Recent deep neural networks are composed of many parameters and tend to easily overfit the training set. One of the main ideas developed to prevent this overfitting is to randomly drop units from the network during the training session (Srivastava et al., 2014). Such an approach amounts to combining many different neural networks to make a prediction. With the same idea of avoiding overfitting the training data, training a model on a dataset which contains corrupted data is a useful approach, which has been studied in Maaten et al. (2013).
They have developed different ways to corrupt a document, for example by adding noise to the input features, and our work refers to what they call blankout corruption, which consists of randomly deleting features from the input documents (texts or images in this case) with probability q. 7 CONCLUSION AND FUTURE WORK In this paper, we propose an adversarial protocol to train coupled deep memory networks for the task of machine comprehension. On all reported experiments, the models trained with this novel protocol outperform the equivalent models trained using a standard supervised protocol. Moreover, our adversarial protocol seems to reduce the variance of the models' performances. In future work, we plan to continue studying this novel protocol on an active question answering task. Moreover, we are currently investigating an adaptation of such a protocol to Visual Question Answering.
1. What is the main contribution of the paper in terms of adversarial learning for machine comprehension tasks?
2. How effective is the proposed framework in improving performance compared to previous methods?
3. Can you provide more clarity on why the narrator network is named so when its job is to obfuscate passages?
4. Is there any specific reason for using self-play as motivation for the learning method?
5. Could you elaborate more on how the narrator prevents catastrophic forgetting, and how does reinitializing and retraining the narrator work?
6. How does the narrator choose which words to obfuscate, and do you consider treating the number of hops as a hyperparameter?
7. Can you explain how rounds are constructed in Figure 2, and provide a pseudo-code for the learning procedure?
8. What is the justification behind Figure 3, and what does it represent?
9. Are you willing to release the code for reproducing results?
10. Minor comments include grammatical errors and suggestions for phrasing changes throughout the review.
Review
Summary: This paper proposes an adversarial learning framework for the machine comprehension task. Specifically, the authors consider a reader network, which learns to answer the question by reading the passage, and a narrator network, which learns to obfuscate the passage so that the reader fails in its task. The authors report results on 3 different reading comprehension datasets, and the proposed learning framework improves the performance of GMemN2N.

My Comments: This paper is a direct application of adversarial learning to the task of reading comprehension. It is a reasonable idea and the authors indeed show that it works.
1. The paper needs a lot of editing. Please check the minor comments.
2. Why is the adversary called the narrator network? It is a bit confusing because the job of that network is to obfuscate the passage.
3. Why do you motivate the learning method using self-play? This is just using the idea of adversarial learning (like GAN) and it is not related to self-play.
4. In section 2, first paragraph, the authors mention that the narrator prevents catastrophic forgetting. How is this happening? Can you elaborate more?
5. The learning framework is not explained in a precise way. What do you mean by re-initializing and retraining the narrator? Isn't it costly to reinitialize the network and retrain it for every turn? How many such epochs are done? You say that the test set also contains obfuscated documents. Is it only for the validation set? Can you please explain if you use obfuscation when you report the final test performance too? It would be clearer if you could provide a complete pseudo-code of the learning procedure.
6. How does the narrator choose which word to obfuscate? Do you run the narrator model with all possible obfuscations and pick the best choice?
7. Why don't you treat the number of hops as a hyper-parameter and choose it based on the validation set? I would like to see the results in Table 1 where you choose the number of hops for each of the three models based on the validation set.
8. In figure 2, how are rounds constructed? Does the model see the same document again and again for 100 times, or does it see a random document each time, with documents sampled with replacement? This will be clear if you provide the pseudo-code for learning.
9. I do not understand the authors' justification for figure 3. Is it the case that the model learns to attend to the last sentences for all the questions? Or does where it attends vary across examples?
10. Are you willing to release the code for reproducing the results?

Minor comments:
Page 1, “exploit his own decision” should be “exploit its own decision”
In page 2, section 2.1, the sentence starting with “Indeed, a too low percentage …” needs to be fixed.
Page 3, “forgetting is compensate” should be “forgetting is compensated”.
Page 4, “for one sentences” needs to be fixed.
Page 4, “unknow” should be “unknown”.
Page 4, “??” needs to be fixed.
Page 5, “for the two first datasets” needs to be fixed.
Table 1, “GMenN2N” should be “GMemN2N”. In the caption, is it mean accuracy or maximum accuracy?
Page 6, “dataset was achieves” needs to be fixed.
Page 7, “document by obfuscated this word” needs to be fixed.
Page 7, “overall aspect of the two first readers” needs to be fixed.
Page 8, last para, references need to be fixed.
Page 9, first sentence, please check grammar.
Section 6.2, last sentence is irrelevant.
ICLR
Title Adversarial reading networks for machine comprehension Abstract Machine reading has recently shown remarkable progress thanks to differentiable reasoning models. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural language based reasoning tasks such as factual reasoning and basic deduction. However, the task of machine comprehension is currently bounded to a supervised setting and available question answering dataset. In this paper we explore the paradigm of adversarial learning and self-play for the task of machine reading comprehension. Inspired by the successful propositions in the domain of game learning, we present a novel approach of training for this task that is based on the definition of a coupled attention-based memory model. On one hand, a reader network is in charge of finding answers regarding a passage of text and a question. On the other hand, a narrator network is in charge of obfuscating spans of text in order to minimize the probability of success of the reader. We experimented the model on several question-answering corpora. The proposed learning paradigm and associated models present encouraging results. 1 INTRODUCTION Automatic comprehension of text is one of the main goals of natural language processing. While the ability of a machine to understand text can be assessed in many different ways, several benchmark datasets have recently been created to focus on answering questions as a way to evaluate machine comprehension (Richardson et al., 2013); (Hermann et al., 2015); (Hill et al., 2015a); (Weston et al., 2015); (Rajpurkar et al., 2016); (Nguyen et al., 2016). In this setup, the machine is presented with a piece of text such as a news article or a story. Then, the machine is expected to answer one or multiple questions related to the text. The task is linked to several important incomes. First, it provides tools that will shortly help users with efficient access to large amounts of information. Also, it acts as an important proxy task to assess model of natural language understanding and reasoning. In this context, numerous large-scale machine comprehension/QA datasets (Hermann et al., 2015); (Rajpurkar et al., 2016); (Trischler et al., 2016a); (Nguyen et al., 2016) have been recently released and have contributed to significant advancement. From a model perspective, neural models are now approaching human parity on some of these benchmarks and a large corpus of novel and promissing research has been produced in the domain of attention, memory and parametric model with socalled reasoning capabilities. However, the field is currently bounded to the paradigm of supervised learning and strictly linked to the current annotated dataset. As a counterpart, an increasing research activity has been dedicated since the 90’s to self-play and adversariality to overcome this boundary and allow a model to exploit its own decision to improve itself. Two famous examples are related to policy learning in games. Indeed, TD-Gammon (Tesauro, 1995) was a neural network controller for backgammon which achieved near top player performance using self-play as learning paradigm. More recently, DeepMind AlphaGo uses the same paradigm to win against the current world best human go player. The major advantage of such setting is to partially release the learning procedure to the limit of an available dataset. The dual models learn and improve their performance by acting one against the other as so-called sparing patterns. 
In this paper, we adapt this paradigm to the domain of machine reading. On the first hand, a reader network is trained to learn to answer question regarding a passage of text. On the other hand, a narrator network learns to obfuscate words of a given passage in order to minimize the probability of successfull answering of the reader model. We developed a sequential learning protocol in order to gradually improved the quality of the models. This paradigm separates itself from the current research direction of joint question and answer learning from text as proposed on Wang et al. (2017). Indeed, in comparison to question generation as regularizer of a reader model that sounds promising, we believe adversarial training unleashs from the constraint of strict and bounded supervision and brings robustness to the answering model. Our contributions can be summarized as follows: (1) We propose a new learning paradigm for machine comprehension based on adversarial training. (2) We show this methodology allows to overcome the boundaries of strict supervision and provides robustness to noise in question-answering settings through a set of experiments in several machine reading corpora and (3) visualizations of the models reveals some useful insights of the attention mechanism for reasoning the questions and extracting meaning passage of a text given a question. Roadmap: In Section 2, we formalize our adversarial learning protocol. Also, the reader and narrator networks are presented. In Section 3 the corpora used for evaluation are detailed. Section 4 presents our current experimental results. Section 5 details several vizualizations of the decisions and attention values computed by the coupled models. Finally, Section 6 reviews the state-of-the-art of machine reading comprehension, Memory Network models, the paradigm of self-play and its links to adversarial learning. 2 ADVERSARIAL READING NETWORKS Several studies have recently challenged deep machine reading models with adversarial examples as Miyato et al. (2016) and Jia & Liang (2017). This kind of approach is well known in computer vision (Goodfellow et al., 2014) but seems to also affects natural language processing. More precisely, Jia & Liang (2017) demonstrates that a large majority of the recent state of the art deep machine reading models suffers from a lack of robustness regarding adversarial examples because of their so-called oversensibility. Indeed average accuracies were decreased by half when these models were tested on corrupted data, i.e a document with an additional sentence at the end which normally does not affect the answer. The model we propose is built to use this adversariality as an adaptive dropout by challenging the reader with more and more difficult tasks during the learning. Indeed, we extend the concept of asymmetric self-play to train a model that we called the narrator during an adversarial game with a reader. The narrator is acquiring knowledge about the reader behaviour during the training and it generates harder adversarial examples. Beyond increasing artificially the size of the available dataset, this adaptive behaviour of the narrator prevents catastrophic forgetting phenomena from the reader. In this section, we explain the protocol of adversarial training we developed for robust machine comprehension. Then, we describe the reader and narrator models used. 2.1 MAIN LEARNING PROTOCOL The overall framework is a turn-based question answering game described in Figure 1. 
At the beginning of each round, the narrator obfuscates one word for each document sampled from the training corpus. We fix the ratio of corrupted data / clear data to a ratio λ ∈ R[0,1] of the dataset. Indeed, a too low percentage of corrupted data might not have any effect on the training and a too high one will prevent the reader of learning well. Then, the reader is trained on a subset of this obfuscated corpus and tested on the remaining subset. Note that both train and test sets contain corrupted data. Finally the narrator gets back a set of rewards regarding the reader performances on the obfuscated stories. Given a tuple (d, dobf, q) where d is the original document, dobf the document with an obfuscated word proposed by the narrator and q the associated question, the reward r given to the narrator is defined as follow: r = { 1 if the reader answer well on d and fail on dobf 0 otherwise The reward given to the narrator is a direct measurement of the impact of the obfuscation on the reader performance. All the previously collected rewards are stored and used for experience replay throughout the turns. After each learning turn, all the parameters of the narrator are reinitialized and retrained on all the recorded rewards. Throughout the turns, the narrator accumulates information about the reader behaviour and proposes more challenging tasks as the game is playing. Each narrator’s dataset is choosen to maximizes its expected rewards for 80% of the stories and randomly obfuscates a word in the remaining 20% in order to ensure exploration. Finally, the reader keeps improving through the turn and any catastrophic forgetting is compensated at the next turn of the narrator by especially focusing on these flaws. Algorithm 1 Pseudo-code of the adversarial training Split dataset into 3 pieces (A) train (80%), (B) valid (10%) and (C) test (10%) Create D an empty dataset epoch = 0 while epoch < NB MAX EPOCHS do Split A into A1 (80%) and A2 (20%) if epoch = 0 then Randomly corrupt 20% of A1 and 100% of A2 else Reinitialize all the parameters of the narrator Train the narrator on D The narrator corrupts 20% of A1 and 100% of A2 end if Train one epoch of the reader on A1 Let A2 clear be the dataset that contains the same data as in A2 but without corruption Test the reader on A2 and on A2 clear for all (d ∈ A2, d clear ∈ A2 clear) do Let r be the reward given to the narrator if The reader succeed on d clear and fails on d then D ← {D ∪ (d, r = 1)} else if The reader succeed on d clear and succeed on d then D ← {D ∪ (d, r = 0)} end if end for Test the reader on B and see if it should early stop or not epoch← epoch + 1 end while Test the reader on C and report the results Finally let â be the predicted distribution and a the the ground-truth. Categorical cross entropy LNarrator = − ∑N i=1 ∑v j=1 aij log(âij), is the loss function for the reader network as the model decision is a distribution over a vocabulary. Then, binary cross entropy LReader = − ∑N i=1[ailog(âi) + (1 − ai)log(1 − âi)] is used as loss function for the narrator network. 2.2 BASELINE PROTOCOL As a reference protocol, one word is obfuscated in several stories of the dataset using a uniform sampling strategy. This is a naive variation of the first protocol where the narrator doesn’t learn from the reader feedbacks. In fact, this protocol is similar to a dropout regularization that allows to avoid overfitting the training set. 
However without the narrator learning of the first protocol, we lost the adaptive dropout and all the curriculum learning notions of easier and harder inputs. In practice this simple adversarial protocol improves the robustness of the results compared to a standard learning protocol. This learning protocol have strong similarities with the one proposed by Maaten et al. (2013). 2.3 READER NETWORK We use a Gated End-to-End Memory Network, GMemN2N, as reader which was first introduced by Perez & Liu (2016). This architecture is based on two different memory cells and an output prediction. An input memory representation {mi} and an output representation {ci} are used to store embedding representations of inputs. Suppose that an input of the model is a tuple (d, q) where d is a document, i.e. a set of sentences {si} and q a query about d, the entire set of sentences is converted into input memory vectors mi = AΦ(si) and output memory vectors ci = CΦ(si) by using two embedding matrixA and C. The question q is also embedded using a third matrixB, u = BΨ(q) of the same dimension as A and C. where Φ and Ψ are respectively the sentence embedding function and the question embedding function described in the next paragraph. The input memory is used to compute the relevance of each sentence in its context regarding the question, by computing the inner product of the input memory sentence representation with the query. Then a softmax is used to compute the probability distribution. The response o = ∑ i pici from the output memory is the sum of the output memory vectors {ci} weighted with the sentence relevances calculated before pi = softmax(uTmi), where softmax(ai) = eai/ ∑ j∈[1,n] e aj . A gated mechanism is used when we updated the value of the controller u: T k(uk) = σ(W kTu k + bkT )u k+1 = ok T k(uk) + uk (1− T k(uk)) (1) Finally, assuming we use a model with K hops of memory, the final prediction is â = softmax(W (oK +uK)) whereW is a matrix of size d×v and v is the number of candidate answers. In this model, we do not use the adjacent or layer-wise weight tying scheme and all the matrix Ak and Bk of the multiple hops are different. Text and question representations: To build the sentence representations, we use a 1-dimensional Convolutional Neural Network (CNN) with a list of filter sizes over all the sentences as proposed in Kim (2014). Let [s1, . . . , sN ] be the vectorial representation of a document with N sentences where si = [wi,1, wi,2, . . . , wi,n] is the i − th sentence which contains n words. Given a convolutional filter F ∈ Rh∗d where h is the width of the convolutional window, i.e the number words it overlaps, the convolutional layer produces: ci,j = f(F [Ewi,j , . . . , Ewi,j+h]),∀j ∈ [1, n− j] where is the elementwise multiplication, f a rectified linear unit (ReLU), b a bias term and E the embedding matrix of size d ∗ V where V is the vocabulary size and d the word embedding size. Then, a max pooling operator is applied to this vector to extract features. Given a filter F , after a convolutional operation and a max pooling operation, we obtain a feature ĉi = maxj(ci,j) from the i − th sentence of the text. Multiple filters with varying sizes are used. Assume that our model uses Ns different filter sizes and Nf for each size, we are able to extract Ns × Nf features for one sentence. The final representation of the sentence Φ(si) = [ĉiF1 , ĉiF2 , . . . , ĉiFNs∗Nf ] is the concatenation of the extracted features from all the filters. 
2.4 NARRATOR NETWORK The objective of this model is to predict the probability of the reader to successfully respond to a question given a document with an obfuscated word. This information will be use by the narrator to determine the position of the obfuscated word in the document which maximizes the probability of the reader to fail its task. We use a GMemN2N similarly to the reader. However, on the last layer a sigmoid function is used to predict the probability of the reader to fail on this input: â = σ(W (oK +uK)) where σ = 11+e−x and â ∈ [0, 1] is the predicted probability of failure of the reader and W a matrix of size d× 1. An input of the reader is a tuple (dobf, q) where dobf is a document with an obfuscated word. To obfuscate a word, we replace it by the word unk for unknown. The output of the narrator is a real number r ∈ R[0,1] which is the expected probability of the reader to fail on the question. The objective of the narrator is to select the stories which maximize this reward. Finally, we use the same text passage and query representation than for the reader, based on a CNN with different filter sizes for the document and the two last hidden states of a bidirectional Gated Rectified Unit (GRU) recurrent network for the question encoding. Both models are fully-differentiable. 3 DATASETS AND DATA PREPROCESSING Cambridge Dialogs: the transactional dialog corpus proposed by Wen et al. (2016) has been produced by a crowdsourced version of the Wizard-of-Oz paradigm. It was originally designed for dialog state tracking but Perez (2016)) have shown that this task could also be considered as a reading task. In such setting, the informable slots provided as metadata to each dialog were used to produce questions for a dialog comprehension task. The dataset deals with an agent assisting a user to find a restaurant in Cambridge, UK. To propose the best matching restaurant the system needs to extract 3 constraints which correspond to the informable slots in the dialog state tracking task: Food, Pricerange, Area. Given a dialog between an agent and a user, this informable slots become questions for the model we propose. The dataset contains 680 different dialogs about 99 different restaurants. We preprocess the dataset to transform it into a question answering dataset by using the three informable slot types as questions about a given dialog. After this preprocessing operation, we end up with our question answering formatted dataset which contains 1352 possible answers. TripAdvisor aspect-based sentiment analysis: the dataset contains hotel reviews from the TripAdvisor website (Wang et al., 2010). This dataset contains a total of 235K detailed reviews about 1850 hotels. Each review is associated to an overall rating, between 0 and 5 stars. Furthermore, 7 aspects: value, room, location, cleanliness, checkin/front desk, service, and business service are available. We transform the dataset into a question answering task over a given review. Concretely, for each review a question is an aspect and we use the number of stars as answer. This kind of machine reading approach to sentiment analysis was previously proposed in Tang et al. (2016). Children’s Book Test (CBT): the dataset is built from freely available books (Hill et al., 2015b) thanks to Project Gutenberg1. The training data consists of tuples (S, q, C, a) where S is the context composed by 20 consecutive sentences from the book, q is the query, C a set of 10 candidate answers and a the answer. 
The query q is the 21st sentence, i.e the sentence that directly follows the 20 sentences of the context and where one word is removed and replaced with a missing word symbol. Questions are grouped into 4 distinct categories depending of the type of the removed word: Named Entities (NE), (Common) Nouns (CN), Verbs (V) and Prepositions (P). The training contains 669, 343 inputs (context+query) and we evaluated our models on the provided test set which contains 10, 000 inputs, 2, 500 per category. 4 EXPERIMENTS 4.1 TRAINING DETAILS 10% of the dataset was randomly held-out to create a test set. We split the dataset before all the training operations and each of the protocol we propose was tested on the same test dataset. For the training phase, we split the training dataset to extract a validation set to perform early stopping. We use Adam optimizer (Kingma & Ba, 2014) with a starting learning rate at 0.0005. We set the dropout to 0.9 which means that during training, 10%, randomly selected, of the parameters are not used during the forward pass and not updated during the backward propagation of error. We also added the gated memory mecanism of Perez & Liu (2016) that dynamically regulates the access 1https://www.gutenberg.org to the memory blocks. This mechanism had a very positive effect on the overall performances of our models. All weights are initialized randomly from a Gaussian distribution with zero mean and σ = 0.1. Moreover, we penalize the loss with the sum of the L2 of the parameters of the models. We set the batch size to 16 inputs and we use embedding word of size 300. We initialize all the embedding matrix with pre-trained GloVe word vectors (Pennington et al., 2014) and we randomly initialize the words of our document that are not in the GloVe model. It seems that for our experiments CNN encoding doesn’t improve only the overall accuracy of the model compared to LSTM but also the stability by decreasing the variance of the results. So in practice we use 128 filters of size 2, 3, 5 and 8 so a total of 512 filters for the one dimensional convolutional layer. We repeat each training 10 times for the two first datasets and report maximum and average accuracy on the test set. The maximum is the score on the test set of the best of the 10 trained models based on the validation set. During the adversarial learning, the dataset contains 70% of clear dialogs and 30% of corrupted dialogs, λ = 0.3. Inside these corrupted data, 20% are randomly obfuscated by the narrator in order to make it learn from exploration and the narrator maximizes his reward for the remaining 80%. Eventually to fit with the format of the dataset, we slightly modified the output layer of our reader for the CBT task. Instead of projecting on a set of candidate answers the last layer of the reader makes a projection on the entire vocabulary â = σ(M W (oK + uK)) where W is a matrix of size V ∗ d with V the vocabulary size, the elementwise product and M the mask vector of size V containing 1 if the corresponding word is proposed in the candidate answers 0 otherwise. 4.2 RESULTS Performance results on the Cambridge dataset and TripAdvisor are displayed in table 2. We present the results of our implementation of a standard GMemN2N, a uniform GMemN2N which is the reader trained with the baseline protocol 2.2 and the GMemN2N trained in the adversarial protocol 2.1 against the narrator. 
4.2 RESULTS Performance results on the Cambridge and TripAdvisor datasets are displayed in Table 2. We present the results of our implementation of a standard GMemN2N, a uniform GMemN2N, which is the reader trained with the baseline protocol of Section 2.2, and the GMemN2N trained with the adversarial protocol of Section 2.1 against the narrator. Each experiment was run 10 times and we report in this table the maximum score on the test set (selected on the validation set) and the average score. The precise number of hops needed to achieve the best performance with such models is not obvious, so we present all results for reader and narrator between 4 and 6 hops. The adaptive adversarial GMemN2N improves the accuracy of the model on the Cambridge task by 2.3 points for a model with 6 hops. The best performance on the TripAdvisor dataset was achieved by the adversarial GMemN2N with 4 hops, which improves the accuracy by 1.5 points. The uniform protocol improves the stability of the performance compared to a standard reader, but we went further with the adversarial protocol, which improves both the overall accuracy and the stability of the performance. It is not clear for this task that the number of hops, between 4 and 6, influences the general behaviour, but we achieve the best performance with our adversarial protocol and a reader with 6 hops. All the average values of the models trained with the adversarial protocol are higher than the others, even for the 5-hop model, which does not achieve a very good maximum performance over the 10 replications we have run. Performances on the CBT dataset are displayed in Table 3. Because of the size of this dataset, we did not repeat the training 10 times but ran it only once. Results of the uniform training are similar to the performance of the standard reader in this case, but the accuracy of the models trained with our adversarial protocol remains higher than the others. 5 VISUALIZATIONS AND ANALYSIS 5.1 NARRATOR PREDICTIONS In order to better understand what the narrator learns from the reader's behaviour during the adversarial protocol, Figure 2 depicts the rewards that the narrator expects for each word of a document after several rounds of the game. Given a tuple (d, q) where d is a clear document and q a query, and assuming the document contains k words, we generate k corrupted documents with one word obfuscated in each of them. We then feed the narrator with these corrupted data and report the results. The y-axis represents the document and the x-axis the expected reward from the reader if the narrator decides to generate a corrupted document by obfuscating this word. The words of the document that correspond to the answer of the question are highlighted in red. The narrator tends to obfuscate some important keywords of the dialogs. Furthermore, the narrator does not point at a single word but at a word and its neighborhood. This might be a consequence of the encoding, which is not only a representation of a word but a representation of a word in its context. 5.2 STANDARD VS ADVERSARIAL READER ATTENTION Figure 3 depicts the attention values, presented by hops, over a document from the Cambridge dataset. The document was chosen because only the reader trained with the adversarial protocol answers the question correctly. It displays attention distributions for a reader trained with the three different protocols: [top] standard, [middle] uniform, [bottom] adversarial. The overall behaviour of the first two readers is comparable. These readers quickly focus on what we assume to be an important span of text. After two hops they start looking at the same position in the document. On the contrary, the reader trained with the adversarial protocol seems to have a very different behavior regarding the attention mechanism.
It captures the important part of the sentence directly at the first hop and uses the 4 remaining hops to focus more broadly on the end of the document. We might interpret this as a consequence of the obfuscation protocol, which forces the reader to look at different parts of the sentence instead of focusing on one precise point during the learning process. 6 RELATED WORK 6.1 END-TO-END MACHINE READING The task of end-to-end machine reading consists in learning to select an answer to a question given a passage of text, in a supervised manner. One popular formal setting of the problem, the cloze-style QA task, involves tuples of the form (d, q, a, C), where d is a document (context), q is a query over the contents of d in which a phrase is replaced with a placeholder, and a is the answer to q, which comes from a set of candidates C. In this work we consider datasets where each candidate c ∈ C has at least one token which also appears in the document. The task can then be described as: given a document-query pair (d, q), find a ∈ C which answers q. Below we provide an overview of representative neural network architectures which have been applied to this problem. LSTMs with Attention: Several architectures introduced in Hermann et al. (2015) employ LSTM units to compute a combined document-query representation g(d, q), which is used to rank the candidate answers. These include the DeepLSTM Reader, which performs a single forward pass through the concatenated (document, query) pair to obtain g(d, q); the Attentive Reader, which first computes a document vector d(q) by a weighted aggregation of words according to attentions based on q, and then combines d(q) and q to obtain their joint representation g(d(q), q); and the Impatient Reader, where the document representation is built incrementally. The architecture of the Attentive Reader has recently been simplified in the Stanford Attentive Reader, where shallower recurrent units were used with a bilinear form for the query-document attention (Chen et al., 2016). Attention Sum: The Attention-Sum (AS) Reader (Kadlec et al., 2016) uses two bidirectional GRU networks to encode both d and q into vectors. A probability distribution over the entities in d is obtained by computing dot products between q and the entity embeddings and taking a softmax. Then, an aggregation scheme named pointer-sum attention is applied to sum the probabilities of the same entity, so that frequent entities in the document are favored compared to rare ones. Building on the AS Reader, the Attention-over-Attention (AoA) Reader (Cui et al., 2016) introduces a two-way attention mechanism where the query and the document are mutually attentive to each other. Multi-hop Architectures: Memory Networks (MemNets) were proposed in Weston et al. (2014), where each sentence in the document is encoded to a memory by aggregating nearby words. Attention over the memory slots given the query is used to compute an overall memory and to renew the query representation over multiple iterations, allowing certain types of reasoning over the salient facts in the memory and the query. Neural Semantic Encoders (NSE) (Munkhdalai & Yu, 2016) extended MemNets by introducing a write operation which can evolve the memory over time during the course of reading. Iterative reasoning has been found effective in several more recent models, including the Iterative Attentive Reader (Sordoni et al., 2016) and ReasoNet (Shen et al., 2016).
The latter allows dynamic reasoning steps and is trained with reinforcement learning. Other related works include EpiReader (Trischler et al., 2016b), which consists of two networks, where one proposes a small set of candidate answers and the other reranks the proposed candidates conditioned on the query and the context, and the Bi-Directional Attention Flow network (BiDAF) (Seo et al., 2016), which adopts a multi-stage hierarchical architecture along with a flow-based attention mechanism. 6.2 ADVERSARIAL LEARNING AND SELF-PLAY The main principle of self-play consists in defining a learning task where two, possibly antagonistic, behaviours are learnt jointly by competing against one another. In the context of two-player zero-sum games, such a setting arises quite naturally. Two models of the same nature compete according to the rules of the considered game and learn from their successive performances. A majority of prior work has focused on learning from self-play data using temporal-difference learning in backgammon (Tesauro, 1995) and chess (Mannen, 2003), or using linear regression in Othello (van der Ree & Wiering, 2013) and, more recently, Go (Silver et al., 2016). In the general context of board games, the main advantage of self-play as a method for training neural network controllers lies in the fact that every position is the result of an actual game, rather than a contrived position that may fail to teach the network about probabilities or prevent the network from properly generalizing from the results. In other words, self-play naturally exposes challenging configurations for the controller to overcome. In such a setting, the network has the advantage of having seen several million different board positions, which would hardly be feasible for a network trained on a hand-crafted set of training data. In the domain of reading, it has recently been observed that the tasks of answering a question given a passage of text and predicting the question from a text passage are interesting to model jointly. Several papers have thus proposed to use question generation as a regularization task to improve the passage encoding model of a neural reader (Yuan et al., 2017; Wang et al., 2017). In this paper, we agree that these two tasks are indeed complementary, but we think adversarial training of the nature used in two-player games will lead to the same advantages as those observed previously. As generating a question given a passage is hard, we take inspiration from the recent work of Guo et al. (2017) and define a narrator network as a task complementary to the reader's learning task. Such a narrator has the task of finding the most meaningful spans of text to obfuscate in a given passage, given a question, in order to minimize the probability of a successful answer by the reader. 6.3 ADAPTIVE DROPOUT Recent deep neural networks are composed of a large number of parameters and tend to easily overfit the training set. One of the main ideas developed to prevent this overfitting is to randomly drop units from the network during training (Srivastava et al., 2014). Such an approach amounts to combining many different neural networks to make a prediction. With the same idea of avoiding overfitting the training data, training a model on a dataset which contains corrupted data is a useful technique that has been studied in Maaten et al. (2013).
They developed different ways to corrupt a document, for example by adding noise to the input features; our work relates to what they call blankout corruption, which consists of randomly deleting features from the input documents (texts or images in their case) with probability q. 7 CONCLUSION AND FUTURE WORK In this paper, we propose an adversarial protocol to train coupled deep memory networks for the task of machine comprehension. In all reported experiments, the models trained with this novel protocol outperform the equivalent models trained using a standard supervised protocol. Moreover, our adversarial protocol seems to reduce the variance of the models' performance. In future work, we plan to continue studying this novel protocol on an active question answering task. Moreover, we are currently investigating an adaptation of this protocol to Visual Question Answering.
1. What is the main contribution of the paper in the field of adversarial reading comprehension? 2. What are the limitations of the proposed approach, particularly in terms of evaluating its effectiveness and comparing it to previous works? 3. How does the reviewer suggest improving the proposed method, and what are the potential challenges in implementing those suggestions? 4. What is the significance of the fact that the proposed method can potentially destroy factual information needed to answer questions? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
The main idea of this paper is to automate the construction of adversarial reading comprehension problems in the spirit of Jia and Liang, EMNLP 2017. In that work a "distractor sentence" is manually added to a passage to superficially, but not logically, support an incorrect answer. It was shown that these distractor sentences largely fool existing reading comprehension systems although they do not fool human readers. This paper replaces the manual addition of a distractor sentence with a single word replacement where a "narrator" is trained adversarially to select a replacement to fool the question answering system. This idea seems interesting but very difficult to evaluate. An adversarial word replacement may in fact destroy the factual information needed to answer the question, and there is no control for this. The performance of the question answering system in the presence of this adversarial narrator is of unclear significance, and the empirical results in the paper are very difficult to interpret. No comparisons with previous work are given (and perhaps cannot be given). A better model would be the addition of a distractor sentence, as this preserves the information in the original passage. A language model could probably be used to generate a compelling distractor. But we want the corrupted passage to have the same correct answer as the uncorrupted passage, and this is difficult to guarantee. A trained "narrator" could learn to actually change the correct answer.
ICLR
Title Causal Probabilistic Spatio-temporal Fusion Transformers in Two-sided Ride-Hailing Markets Abstract Achieving accurate spatio-temporal predictions in large-scale systems is extremely valuable in many real-world applications, such as weather forecasts, retail forecasting, and urban traffic forecasting. So far, most existing methods for multi-horizon, multi-task and multi-target predictions select important predicting variables via their correlations with responses of interest, and thus it is highly possible that many forecasting models generated from those methods are not causal, leading to poor interpretability. The aim of this paper is to develop a collaborative causal spatio-temporal fusion transformer, named CausalTrans, to establish the collaborative causal effects of predictors on multiple forecasting targets, such as supply and demand in ride-sharing platforms. Specifically, we integrate the causal attention with the Conditional Average Treatment Effect (CATE) estimation method in causal inference. Moreover, we propose a novel and fast multi-head attention evolved from Taylor's expansion instead of softmax, reducing time complexity from O(V²) to O(V), where V is the number of nodes in a graph. We further design a spatial graph fusion mechanism to significantly reduce the parameters' scale. We conduct a wide range of experiments to demonstrate the interpretability of causal attention, the effectiveness of various model components, and the time efficiency of our CausalTrans. As shown in these experiments, our CausalTrans framework can achieve up to 15% error reduction compared with various baseline methods. 1 INTRODUCTION This paper is motivated by solving a collaborative probabilistic forecasting problem of both supply and demand in two-sided ride-hailing platforms, such as Uber and DiDi. Collaborative supply and demand relationships are common in various two-sided markets, such as Amazon, Airbnb, and eBay. We consider two-sided ride-hailing platforms as an example. In this case, we denote supply and demand as the number of online drivers and the number of call orders, respectively, on the platform at a specific time in a city. Some major factors for demand include rush hours, weekdays, weather conditions, the transportation network, points of interest, and holidays. For instance, if it rains during peak hours on weekdays, demand will dramatically increase and last for a certain time period. In contrast, some major factors for supply include weather, holidays, traffic conditions, weekdays, and the platform's dispatching and repositioning policies. Moreover, supply tends to gradually cover areas with many unsatisfied orders; that is, the distribution of supply tends to match that of demand. We are interested in establishing collaborative causal forecasting models for demand and supply by using various predictors (or covariates). Although many learning methods have been developed to address various collaborative prediction tasks, such as spatio-temporal traffic flow prediction (Zhu & Laptev, 2017; Du et al., 2018; Zhang et al., 2019b; Ermagun & Levinson, 2018; Luo et al., 2019), multivariate prediction (Bahadori et al., 2014; Liang et al., 2018), multi-task prediction (Tang et al., 2018; Chen et al., 2018; Chandra et al., 2017), multi-view prediction (Yao et al., 2018), and multi-horizon prediction (Lim et al., 2019; Yu et al., 2020), these existing methods primarily select important predictors via their correlations with responses, leading to many forecasting models with poor interpretability.
In contrast, we propose CausalTrans, a Collaborative Spatio-temporal Fusion Transformer that generates causal probabilistic multi-horizon forecasts. To the best of our knowledge, this is the first work that captures collaborative causal effects of external covariates on multiple forecasting targets. Building such models is not only essential to enhancing forecasting performance, but also helps the platform utilize various platform policies to match the distribution of supply with that of demand in two-sided markets. In the CausalTrans framework, our major contributions are summarized as follows: • We design the causal attention based on double machine learning (Chernozhukov et al., 2018) with two-layer fully connected neural networks, and successfully apply it to various large-scale time series forecasting problems. We conduct a wide range of experiments on real-world datasets with multiple covariates and demonstrate that CausalTrans with causal attention outperforms many baseline models in various ride-hailing scenarios. • We propose a spatial fusion mechanism based on graph attention networks (GAT) (Veličković et al., 2017) to gather local regions and enhance robustness, as adjacent regions always share similar supply and demand patterns. • We propose an approximate time-efficient Taylor expansion attention to replace the softmax in the multi-head attention of Transformers (Vaswani et al., 2017), such that the time complexity reduces from O(V²) to O(V). We carry out two groups of experiments, with three and five attention heads, to verify this efficiency improvement. 2 RELATED WORK There is a large body of literature on vehicle flow forecasting (Zhu & Laptev, 2017; Bahadori et al., 2014; Tang et al., 2018; Lim et al., 2019; Yao et al., 2018). We selectively review several major methods as follows. In Zhu & Laptev (2017), the time series forecasting task is treated as a two-step procedure consisting of offline pre-training and online forecasting. The offline pre-training step is an encoder-decoder framework for compressing sequential features and extracting principal components, whereas the second step gives explainable prediction changes under external variables. Bahadori et al. (2014) proposed a unified low-rank tensor learning framework for multivariate spatio-temporal analysis by combining various attributes of spatio-temporal data, including spatial clustering and shared variable structure. For multi-step traffic flow prediction, Tang et al. (2018) proposed a spatio-temporal multi-task collaborative learning model to extract and learn shared information among multiple prediction tasks collaboratively. For example, such a model combines spatial features collected from offline observation stations and inherent information between blended time granularities. Lim et al. (2019) proposed a temporal fusion transformer (TFT) to capture temporal correlations at each position, which is similar to the self-attention mechanism and is expected to capture long-term and short-term dependencies. Yao et al. (2018) proposed a deep multi-view spatio-temporal network (DMVST-Net), including a speed viewpoint (modeling the correlation between historical and future demand by LSTM (Gers & Schmidhuber, 2001)), a spatial viewpoint (modeling local spatial correlation by CNN), and a contextual viewpoint (modeling regional correlations in local temporal patterns). Overall, all of the above methods improve time series fitting by learning and predicting correlations across multiple spatio-temporal perspectives, targets, and tasks.
However, those methods lack convincing interpretability of "how and to what extent external variables affect supply and demand". Achieving good demand forecasting involves not only historical demand targets, but also various current external variables (e.g., weather conditions, traffic conditions, holidays, and driver repositioning). Historical demand observations were themselves affected by historical external factors, so demand forecasting based only on correlations between variables is hardly convincing. Furthermore, supply forecasting is empirically affected by the distribution of demand in addition to current external variables. Establishing causal relationships between (supply, demand) and multiple external variables is critically important for accurate supply and demand forecasting. 3 METHODOLOGY We introduce the CausalTrans framework to efficiently establish the collaborative causal effects of multiple predictors on spatio-temporal supply and demand below. 3.1 COLLABORATIVE SUPPLY AND DEMAND FORECASTING We consider all related observations, including supply, demand, and external variables, collected in a city. Each day is divided into 24 hour segments and a city is divided into non-overlapping hexagonal regions (side length ranging from 600 to 1000 meters). The complete data consist of demand xv(t) ∈ R, supply yv(t) ∈ R, and dynamic covariates zv(t) ∈ R^z, where t is a specific hour segment and v ∈ V is a specific hexagon of the set of hexagonal regions, denoted as V. Dynamic covariates include weather, holidays, social events, POIs (Points Of Interest), and government policies. Weather features consist of temperature (°C), rainfall (mm), wind level and PM2.5 (mg/m³). Holiday features are represented by one-hot boolean vectors, including seasons, weekdays, and national and popular holidays, such as Christmas Day. POI features are represented by the number of various positions, including traffic stations, business districts, communities, hospitals and schools. More detailed cases about collaborative supply and demand are provided in Appendix A. The problem of interest is to use all available observations in {(xv(: t), yv(: t), zv(: t)), v ∈ V} to predict {(xv(t+1 : t+τmax), yv(t+1 : t+τmax)), v ∈ V}, where τmax is a pre-specified time length, xv(t1 : t2) and yv(t1 : t2) are the demand and supply vectors from time point t1 to time point t2, and xv(: t2) and yv(: t2) are the demand and supply vectors from the earliest time point to time point t2. The demand xv may depend on historical supply yv from several weeks (or even longer) ago. But in the latest several weeks (the training period), based on our understanding of the ride-sharing business, demand xv is primarily influenced by its own recent historical patterns. Based on the above description, we formulate the learning problem of collaborative demand and supply forecasting as follows:
P(xv(t+1 : t+τmax) | xv(: t), zv(: t+τmax)), (1)
P(yv(t+1 : t+τmax) | yv(: t), xv(: t+τmax), zv(: t+τmax)), (2)
where P(·|·) is a conditional distribution. In (1), it is assumed that xv(t+1 : t+τmax) is primarily affected by the historical demands in xv(: t) and the external covariates in zv(: t+τmax). Furthermore, in (2), it is assumed that future supplies in yv(t+1 : t+τmax) are primarily affected by historical supplies in yv(: t), demand patterns in xv(: t+τmax), and external covariates in zv(: t+τmax). Comparing (1) with (2), we assume that the distribution of supply during [t+1, t+τmax] is driven by the historical and current distributions of demand, in addition to the historical information in yv(: t) and the external covariates in zv(: t+τmax).
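To fix ideas, the following is a minimal sketch of the data interface implied by (1) and (2); all shapes, sizes, and names (T_hist, tau_max, n_regions, z_dim) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

T_hist, tau_max, n_regions, z_dim = 24 * 28, 24, 500, 16   # assumed sizes

demand = np.random.poisson(5.0, size=(n_regions, T_hist))        # x_v(:t)
supply = np.random.poisson(5.0, size=(n_regions, T_hist))        # y_v(:t)
covariates = np.random.rand(n_regions, T_hist + tau_max, z_dim)  # z_v(:t+tau_max)

def demand_inputs(v, t):
    """Conditioning set of (1): historical demand plus covariates up to t+tau_max
    (future covariates such as weather forecasts and holidays are assumed known)."""
    return demand[v, :t], covariates[v, :t + tau_max]

def supply_inputs(v, t, demand_forecast):
    """Conditioning set of (2): historical supply, demand patterns, covariates."""
    x_full = np.concatenate([demand[v, :t], demand_forecast])    # x_v(:t+tau_max)
    return supply[v, :t], x_full, covariates[v, :t + tau_max]
```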
3.2 PROBABILISTIC FORECASTING Most time series forecasting methods produce deterministic values, whereas such point forecasts can have large variance and are hardly robust to variation in the covariates and in the training process. To enhance forecasting reliability, we adopt the quantile loss function with the Poisson distribution as our final optimization function.1 Empirically, following (Salinas et al., 2020; Wen et al., 2017; Li et al., 2019; Lim et al., 2019), we choose three quantile points q ∈ Q = {10%, 50%, 90%}, in which the gap between the forecast values at the 90% and 10% percentiles can be regarded as a confidence interval. Taking demand xt forecasting at time point t as an example, the final quantile loss function is given by
L_Q = \sum_{x_t \in \Omega} \sum_{q \in Q} \sum_{\tau=1}^{\tau_{max}} \frac{QL_q(x_t, \hat{x}_{t-\tau}^q)}{M \cdot \tau_{max}}, (3)
where QL_q(x_t, \hat{x}_t^q) = \{q - I(x_t \le \hat{x}_t^q)\}(x_t - \hat{x}_t^q), Ω is the training dataset, τmax is the maximum prediction step, and I(·) is an indicator function. For a fair comparison, given the test dataset Ω̃, we employ the q-risk (Salinas et al., 2020; Lim et al., 2019; Li et al., 2019), denoted as R_q, to evaluate the risk level of each quantile point as follows:
R_q = \frac{2 \sum_{x_t \in \tilde{\Omega}} \sum_{\tau=1}^{\tau_{max}} QL_q(x_t, \hat{x}_{t-\tau}^q)}{\sum_{x_t \in \tilde{\Omega}} \sum_{\tau=1}^{\tau_{max}} |x_t|}. (4)
1Ride-hailing supply and demand variables approximately follow the Poisson distribution.
There are at least two advantages of using the quantile loss function. First, the quantile loss function is more robust and stable than the mean square error or the hinge loss, especially when forecasting targets have large variation. Second, we can modify external covariates to change the confidence interval of causal attention and analyze real-world cases. 3.3 CAUSAL TRANSFORMER FRAMEWORK Our CausalTrans is a novel combination of causal estimators and the encoder-decoder architecture. Figure 1 shows the overview of the CausalTrans framework. The three key novel contributions of CausalTrans are fast spatial graph fusion, causal attention, and temporal attention units. First, from the spatial perspective, CausalTrans gathers a set of graph attention (GAT) kernels by using assignment scores extracted from temporal patterns. Moreover, we apply a first-order Taylor expansion to the multi-head attention of the Transformer to reduce the time complexity from quadratic to linear. Second, from the temporal perspective, causal attention based on sufficient historical observations is trained offline to evaluate the causal weights of peak time slots, weather conditions, and holidays, denoted as θT, θW, and θH, respectively, under diverse spatio-temporal conditions. Furthermore, we simplify three seasonal perspectives (week, month, and holidays) to represent multi-view position encoding (MVPE). Third, temporal attention is used to fill the gap between the encoder and the decoder, in which we add a sequence mask to ensure that the prediction at time point t only uses observations from time points earlier than t. We set masked-out entries to −∞ so that illegal connection weights become zero. In the following subsections, we introduce the main components of CausalTrans, fast spatial graph fusion and causal attention, and show how they work together as a causal spatio-temporal predictor. For notational simplicity, we describe those components for forecasting demand xv, and avoid repeating the same components for supply yv.
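Before detailing the individual components, the following is a minimal sketch of the quantile loss (3) and the q-risk (4) from Section 3.2; the array shapes and names are illustrative assumptions.

```python
import numpy as np

QUANTILES = (0.1, 0.5, 0.9)

def quantile_loss(y, y_hat, q):
    """QL_q(y, y_hat) = (q - I(y <= y_hat)) * (y - y_hat), elementwise and >= 0."""
    return (q - (y <= y_hat)) * (y - y_hat)

def q_risk(y, y_hat_q, q):
    """Normalized q-risk (4); y and y_hat_q are (n_series, tau_max) arrays."""
    return 2.0 * quantile_loss(y, y_hat_q, q).sum() / np.abs(y).sum()

# Example: training objective averaged over all quantiles, in the spirit of (3).
y = np.random.poisson(5.0, size=(32, 24)).astype(float)       # targets
y_hat = {q: y + np.random.normal(size=y.shape) for q in QUANTILES}
L_Q = np.mean([quantile_loss(y, y_hat[q], q).mean() for q in QUANTILES])
```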
3.4 FAST SPATIAL GRAPH FUSION ATTENTION In this subsection, we describe the fast graph fusion attention unit based on region clustering and fast multi-head attention. See Figure 1 (b) for the architecture of Fast S.F. Since GAT has achieved impressive results in traffic forecasting (Park et al., 2019; Kosaraju et al., 2019; Zhang et al., 2019a), we use GAT to extract contextual features in huge graphs. However, directly applying GAT to large-scale forecasting problems is challenging, so we design spatial fusion subgraphs that share local supply and demand information. Moreover, we build our framework on transformers (Vaswani et al., 2017). Transformers have been the state-of-the-art architecture in various natural language processing (NLP) tasks (Wolf et al., 2019; Wang et al., 2019) and in time series forecasting, due to their prominent power of long-term feature extraction and parallel computing. However, the multi-head attention in transformers is a key bottleneck for time efficiency. We design an approximate Taylor expansion attention, instead of the softmax function, to accelerate matrix products. More detailed results on fast attention can be found in Appendices C.2 and C.3. We briefly describe the fast spatio-temporal fusion graph attention procedure below. First, let X_t be the spatio-temporal demand feature matrix of all grids V before time t; the temporal patterns of V are represented as assignment scores given by
C = (c_{x_v,k}) = [\sigma_s(\sigma_r(X_t W_v) W_t)]_{Batch}, (5)
where [·]_{Batch} is the mean operator over the batch dimension, k indexes a K-dimensional cluster vector, v ∈ V, W_v and W_t are, respectively, the spatial and temporal weight matrices corresponding to X_t, and σ_s(·) and σ_r(·) are the sigmoid and relu activation functions, respectively. Second, we use the k-th spatial learner G_k(x_v) to extract spatial features of the sequential data x_v in grid v, and the summed outputs of the K clusters are given by
h_v = \sum_{k \in K} G_k(x_v) \, c_{x_v,k}. (6)
The softmax function is used to obtain attention weights among regions as follows:
\hat{\alpha}_v = \sum_{v' \in N_v} \alpha_{v,v'} \cdot x_{v'} = \sum_{v' \in N_v} \frac{\exp(\sigma_r(a^T [W x_v \,\|\, W x_{v'}])) \cdot x_{v'}}{\sum_{v'' \in N_v} \exp(\sigma_r(a^T [W x_v \,\|\, W x_{v''}]))}, (7)
where α_{v,v'} is the correlation weight between v and v', a and W are network parameters, the superscript T denotes the transpose of a vector or matrix, N_v = {v' | v' ∈ V, v' ≠ v} is the neighboring region set of region v, and [· \| ·] is the concatenation operation. In (7), the time complexity of computing exp(σ_r(a^T [W x_v \| W x_{v'}])) for all pairs is O(V²). Specifically, the exponent operation in exp(a^T W) · X of the softmax function limits the efficiency of attention. Moreover, the cluster number K ≪ V, and the time complexity of a^T W X is O(K² · V) ≈ O(V). Many recent studies find that linear attention is feasible for tasks whose primary focus is short-term dependence. More details are discussed in Appendix D. Our novel linear attention is easy to implement and interpret. It follows from the Taylor expansion that exp(a^T W) ≈ 1 + a^T W when a^T W is small. Analogous to the self-attention in the original Transformer (Vaswani et al., 2017), the approximate mean and variance of QK^T/\sqrt{d_k} are 0 and 1, respectively, so a^T W here is limited to small values. We introduce L2 normalization to ensure a small a^T W and 1 + a^T W ≥ 0, such that
\exp(a^T W) \approx T(a^T W) = 1 + \left(\frac{a}{\|a\|_2}\right)^T \left(\frac{W}{\|W\|_2}\right), (8)
where T denotes the approximate first-order Taylor expansion.
Equation (8) reduces attention to inner products, which have advantages in parallel implementation and linear time complexity. Finally, α̂_v can be transformed into
\hat{\alpha}_v = \sum_{v' \in N_v} \frac{T(\sigma_r(a^T [W x_v \,\|\, W x_{v'}])) \cdot x_{v'}}{\sum_{v'' \in N_v} T(\sigma_r(a^T [W x_v \,\|\, W x_{v''}]))}. (9)
3.5 CAUSAL ATTENTION MECHANISM Many external covariates causally change the distribution of demand and supply, as shown in Figures 2 and 3 of the supplementary document. Meanwhile, many existing works focus on finding the correlation between external covariates and forecasting targets. For example, Li et al. (2019) designed causal convolution to enhance the locality of attention, whereas Lim et al. (2019) added variable selection networks and a gate mechanism to train attention weights. These two studies (Lim et al., 2019; Li et al., 2019) intend to calculate correlations among variables, but not causal effects under counterfactual conditions. Statistically, such an issue can be regarded as a heterogeneous treatment effect (HTE) problem. See Figure 1 (c) for the architecture of C.A. To the best of our knowledge, causal attention methods for HTE have not been proposed for large-scale spatio-temporal forecasting problems. First, we briefly describe the conditional average treatment effect (CATE) (Abrevaya et al., 2015). We again take the demand vectors xv(t1 : t2) (abbreviated as x in the following) of grid v from time point t1 to time point t2 as an example; X represents a set of such x. The treatments we consider include weather (rainfall, temperature and wind level), peak time slots and holidays. Let x(s) be the target variable under treatment s ∈ S, and let z be a vector of other covariates. The HTE for comparing two treatment levels s0 and s1 is defined as
τ(s0, s1; z) = E[X(s1) − X(s0) | z]. (10)
If the treatment s is continuous, then the treatment effect is defined to be E[∇_s X(s) | z], where ∇_s = ∂/∂s. To unbiasedly estimate treatment effects, we propose a causal attention module based on double machine learning (DML) (Chernozhukov et al., 2017) with two-layer non-parametric fully connected neural networks. Specifically, we assume
X(S) = θ(z) · S + g_0(z) + ε and S = g_1(z) + η, (11)
where ε and η are independent random variables such that E[ε | z] = E[η | z] = 0, g_0(·) and g_1(·) are two non-parametric neural networks, and θ(z) is the constant marginal CATE. Letting X̃ = X − E(X | z) and S̃ = S − E(S | z), we get
X̃ = X − E(X | z) = θ(z) · {S − E(S | z)} + ε = θ(z) · S̃ + ε. (12)
Therefore, we can compute θ(z) by solving
θ̂(z) = arg min_{θ ∈ Θ} E_n[(X̃ − θ(z) · S̃)²], (13)
where E_n denotes the empirical expectation. The large historical data source contains all kinds of experimental environments and treatments. According to Algorithm 1, given a time series xv(: t) at grid v (v is dropped for readability) and a treatment s1 ∈ S, we loop and search over the historical timeline for two treatment levels s0 and s1 to construct the AB groups {x(t0) | s0} and {x(t1) | s1}. Then, we construct the AA groups {x(t0 − τ : t0)} and {x(t1 − τ : t1)} using a look-back window of the same length τ before t0 and t1, and make sure that both are stationary processes with equal mean (P_KPSS > 0.05 in the KPSS test (Shin & Schmidt, 1992) and P_T-Test > 0.05 in the t-test on both AA groups' first-order differences). Based on the selected AA/AB groups, we employ DML to estimate the causal attention. In our method, the trained causal attention θ̂ is inserted into the transformer, and clustered regions share the global θ̂ with each other.
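As a concrete illustration of the residual-on-residual DML step in (11)-(13), the following is a minimal sketch with two-layer neural network nuisance models and two-fold cross-fitting; the data, hyperparameters, and names are hypothetical, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, z_dim = 2000, 8
Z = rng.normal(size=(n, z_dim))                        # covariates (time slots, holidays, ...)
S = Z[:, 0] + rng.normal(size=n)                       # treatment, e.g., rainfall
X = 2.0 * S + np.sin(Z[:, 1]) + rng.normal(size=n)     # outcome, e.g., demand

# Stage 1: estimate E[X|z] and E[S|z] with cross-fitting (2 folds for brevity).
X_res, S_res = np.zeros(n), np.zeros(n)
folds = np.array_split(rng.permutation(n), 2)
for fit_idx, pred_idx in (folds, folds[::-1]):
    g0 = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(Z[fit_idx], X[fit_idx])
    g1 = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(Z[fit_idx], S[fit_idx])
    X_res[pred_idx] = X[pred_idx] - g0.predict(Z[pred_idx])
    S_res[pred_idx] = S[pred_idx] - g1.predict(Z[pred_idx])

# Stage 2: solve (13) for a constant theta via least squares on the residuals.
theta_hat = (S_res @ X_res) / (S_res @ S_res)          # should be close to the true 2.0
```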
Algorithm 1 Causal Attention Algorithm with DML
Input: a demand matrix x(: t) at a grid v before time t; three kinds of treatments: weekday and hour slots T(: t) = {W(: t), H(: t)}, weather vectors W(: t), and holiday one-hot vectors H(: t)
Output: causal effect coefficients θT for T(: t), θW for W(: t), and θH for H(: t)
1: Take θT as an example, and initialize the AA group and AB group on T(: t) as TAA = TAB = {}
2: for all {Tw(t0), Tw(t1)} ∈ {Mon, Tue, ..., Sun}, {Th(t0), Th(t1)} ∈ {1, ..., 24} do
3: if Tw(t0) = Tw(t1), Th(t0) = Th(t1), and PT-Test(x(t0), x(t1)) < 0.05 then
4: for all t′0 ∈ {: t0} and t′1 ∈ {: t1} do
5: Calculate the first-order differences x̃(t′0 : t0) and x̃(t′1 : t1)
6: if PKPSS(x̃(t′0 : t0)), PKPSS(x̃(t′1 : t1)) and PT-Test(x̃(t′0 : t0), x̃(t′1 : t1)) are all > 0.05 then
7: TAA.append([(x(t′0 : t0), x(t′1 : t1))])
8: TAB.append([(x(t0), x(t1))])
9: end if
10: end for
11: end if
12: end for
13: Run DML on the TAA and TAB datasets and estimate the treatment coefficients θT
14: Repeat from Step 2 and estimate θW and θH by separate DML runs
15: return θT, θW, and θH
4 EXPERIMENTS 4.1 DATASETS We consider four datasets (Electricity, Traffic, Retail2 and Ride-hailing) in our experiments, as follows. Electricity. Electricity contains the hourly univariate electricity consumption of 370 customers. Following (Salinas et al., 2020), the weekly observations before t are used as inputs to predict the next 24 hours' series.
2https://www.kaggle.com/c/favorita-grocery-sales-forecasting/
Traffic. Traffic contains the hourly univariate occupancy rates of 963 San Francisco Bay Area freeways, where the look-back rolling window and prediction step are the same as for Electricity. Retail. Retail is the Favorita Grocery Sales dataset from a Kaggle competition (Lim et al., 2019), including daily metadata with diverse products, stores and external variables. To compare with some state-of-the-art methods (Lim et al., 2019; Salinas et al., 2020), historical observations across 90 days are used to forecast product sales in the next 30 days. Ride-hailing. The Ride-hailing dataset contains real supply, demand, and various metadata at the hourly and hexagonal-grid scale between June 2018 and June 2020 in two big cities (city A and city B), obtained from a ride-hailing company. The first 70%, the next 10% and the remaining 20% are used for training, validation and testing, respectively. We group the first two datasets into the univariate group and the last two datasets into the multivariate group. 4.2 BENCHMARKS In this section, two different families of forecasting methods, iterative methods and multi-horizon methods, are compared in a wide range of experiments. For our method CausalTrans, a pre-defined search space is used to determine the optimal hyperparameters. Experimental details are included in Appendix B. Iterative methods. Iterative methods generate multi-step prediction results by step-by-step rolling windows, where the results of previous steps are used as inputs to the next step. Typical iterative methods include DeepAR†, Deep State Space Models (DeepState†) (Rangapuram et al., 2018), ARIMA† (Zhang, 2003), ETS (Jain & Mallick, 2017) and TRMF (Yu et al., 2016). Multi-horizon methods. The multi-horizon methods considered here include ConvTrans (Li et al., 2019), MQRNN† (Wen et al., 2017), Seq2Seq† (Sutskever et al., 2014), DMVST (Yao et al., 2018), ST-MGCN (Geng et al., 2019), and TFT (Lim et al., 2019). The † methods are trained using the GluonTS (Alexandrov et al., 2019) package.
DMVST and ST-MGCN are spatial baselines. 4.3 RESULTS AND DISCUSSION We adopt the quantile loss as the optimization function, and compare results by the q-risks R50/R90 at the 50%/90% quantile points. More detailed descriptions of probabilistic forecasting are provided in subsection 3.2. Table 1 reports the R50/R90 losses of all forecasting methods on the Electricity and Traffic datasets. The Electricity dataset does not have any covariates and lacks spatial information, whereas the Traffic dataset does have spatial information, even without multiple covariates. We observe that ConvTrans and TFT are comparable with each other and both outperform all other methods. We believe that, compared with TFT, ConvTrans is able to take advantage of the spatial information in the Traffic dataset; this is not the case for the Electricity dataset. Table 2 and Table 3 report the R50 and R90 losses of all multi-horizon methods in the multivariate group. We consider both one-day and seven-day predictions and optimize the hyperparameters of all methods by grid search. We have several important observations. First, for the one-day prediction, the iterative DeepAR outperforms Seq2Seq and MQRNN, due to the use of the Poisson distribution and weather conditions. Second, for the spatial baselines DMVST and ST-MGCN, the R50 and R90 losses increase with longer forecasting horizons, as such methods may overfit biased weights of external covariates. Third, CausalTrans outperforms all other competing methods, primarily due to the use of the causal estimator DML. For instance, compared with the second-best method, CausalTrans yields up to 9.3% lower R50 and 15.2% lower R90 on the Ride-hailing (7d, city A, Supply) dataset. Fourth, CausalTrans achieves lower losses when forecasting supply than when forecasting demand, since we explicitly model the causal relationship between supply and demand in (2). Fifth, as expected, unlike the one-day prediction, the seven-day prediction focuses on unbiased distribution estimation in order to alleviate error accumulation. This point of view is further reinforced by the results of the ablation study reported in Appendix C.2, and causal attention is visualized in Appendix C.1. 5 CONCLUSION Based on causal inference theory, we develop the CausalTrans framework to address collaborative supply and demand forecasting in large-scale two-sided markets. We design the fast multi-head attention to improve the computational complexity to nearly linear O(V). CausalTrans achieves performance similar to TFT on the two datasets in the univariate group and outperforms all competing methods, including TFT, in the nine different experiments for the multivariate group. In particular, for our Ride-hailing datasets, CausalTrans can achieve up to 15% error reduction compared with various baseline methods. In the future, we will continue to integrate causal inference with existing deep learning methods to deal with large-scale spatio-temporal forecasting problems. A RIDE-HAILING DATASET DETAILS Taking city A3 as an example, the supply, demand, delta4 and rainfall trends (January 1st, 2018 to January 1st, 2020) are plotted at daily scale in Figure 2. We conclude that the variance of demand is larger than that of supply, especially during rainy rush hours. Taking August 17th, 2018 in city A as another example in Figure 3, we observe that the delta in dark red regions does not persist for long, as spatio-temporal supply is changed by the corresponding demand and the repositioning of drivers.
The ride-hailing platform releases useful strategies to promote orders. Collaborative demand and supply implies that the distribution of supply corresponds to the distribution of demand. B TRAINING DETAILS Empirically, we determine the optimal hyperparameters via a pre-defined random search space. For reproducibility, we include the essential hyperparameters for our Ride-hailing dataset in Table 4. C INTERPRETABILITY CASES In this section, we analyze the impact of the essential components of CausalTrans and focus on what causal attention learns. First, since causal demand and supply can hardly be assembled with unbiased estimation (Figure 2), we demonstrate attention-based interpretability on instance-specific significant events like frequent rainfall, holidays and peak time slots. Second, we perform an ablation analysis of the target probabilistic distribution (PoissonOutput), causal attention with DML and Uplift, FastAttention, and SpatialFusion. Finally, we compare the speed improvements of multi-head attention on CPU (Intel Xeon E5-2630 2.20GHz) and GPU (Tesla P40), respectively. C.1 CAUSAL ATTENTION VISUALIZATION As one of the most essential components, causal attention employs stationarity tests on differences and double machine learning to estimate the coefficients θ(s) of treatment effects. In this section, we visualize the causal attention distribution through sample-specific cases, including rainfall, weekdays, and time slots. Frequent rainfall is the most significant weather event for demand, as described in Appendix A. Unlike rainfall events, which are plentiful, there are only a dozen holidays in one year. If the sequential context before a holiday fails to pass the KPSS stationarity test, the causal estimator is not applied when training the attention weights. A large-scale dataset is fundamental to our method. For the diverse peak time slots, Section 3.1 notes that demand and supply are distributed differently at commuting peaks and night hours. In addition, seasonal fluctuations and government policies (e.g., traffic restrictions on National Day) are considerable factors. Rainfall. Taking demand forecasting for an anonymous region in city A as an example, the treatment is rainfall s, the target is demand x, and the other covariates z include the region id, time slots and holidays. For convenience, we select a group of adjacent AB groups from the abundant rainfall cases to give an interpretation. In Figure 4, we backtrack rainfall treatments to fix AB Group 2, and search for AB Group 1 by controlling for similar covariates. Similarity means that both first-order differences are stationary; we then construct a group of simple randomized controlled experiments. Given θ(z) estimated by running DML, we plot the distribution of causal attention on the right side of the green line. In practice, large amounts of incoming data would iteratively enhance the robustness of the causal evaluation. Collaborative demand and supply. As described in Section 3.1 and equation (2), the distribution of supply is driven by the spatio-temporal patterns of demand. Similar to the rainfall analysis above, we take another anonymous region in city A as an example. In this case, the forecasting target is supply, the causal treatment is demand, and the external variables include weather, time slots and holidays. According to Algorithm 1, our method constructs AB groups and the corresponding look-back AA groups from large-scale historical data (see the sketch below). For the AB controlled experiments, the average demands of the AB and AA groups should be significantly different, while supply is left unconstrained.
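The following is a minimal sketch of the AA-group screening used in Algorithm 1 (KPSS test on first-order differences plus a t-test for equal means), covering only the AA side; the function name and thresholds follow the paper, while the library choices are assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.tsa.stattools import kpss

def aa_groups_valid(x0, x1):
    """x0, x1: 1-D look-back windows of equal length tau.
    Valid if both windows have stationary first-order differences (KPSS p > 0.05)
    and the differences pass a t-test for equal means (p > 0.05)."""
    d0, d1 = np.diff(x0), np.diff(x1)
    p_kpss0 = kpss(d0, regression="c", nlags="auto")[1]
    p_kpss1 = kpss(d1, regression="c", nlags="auto")[1]
    p_ttest = ttest_ind(d0, d1).pvalue
    return min(p_kpss0, p_kpss1, p_ttest) > 0.05
```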
In the AA experiments, we empirically suggest that the time span lasts for at least one day. We trace back through past data, but the selected AA groups should satisfy the randomized-grouping hypothesis, verified by a t-test. Such periods with stable supply are abundant in recent years, which implies that we can easily find proper evaluation datasets for diverse regions. In Figure 5, the trained causal attention demonstrates how demand's causal weights are reflected in supply forecasting. Additionally, more novel causal modules similar to equation (2) can be designed to enhance interpretability and robustness, and such modules support end-to-end training in CausalTrans as well. C.2 ABLATION ANALYSIS This subsection focuses on the performance of CausalTrans when some components are excluded. The essential components are PoissonOutput, Causal Attention (C.A.), FastAttention and SpatialFusion. C.A. can be implemented by different causal algorithms, such as DML and Uplift (Künzel et al., 2019). As shown in Table 5, we list the R50 (50% quantile point) losses on the eight Ride-hailing datasets above. Table 5 demonstrates that C.A. (DML) outperforms all other components, and that causal supply is clearly influenced by causal demand. Finally, neither FastAttention nor SpatialFusion is harmful to forecasting performance. Furthermore, spatial fusion shows a small improvement (+0.3% on average) in Table 5. We believe that spatial fusion aggregates adjacent hexagonal grids, thereby reducing statistical noise in both demand and supply. For instance, in some cases, the boundary (usually around 800 meters) of adjacent grids separates large demand hotspots (e.g., large shopping malls), resulting in some noise when counting supply and demand. Spatial fusion can reduce the influence of such noise while improving the probabilistic forecasting performance. According to Table 5, the longer the forecasting horizon (e.g., 7 days versus 1 day), the more significant the gain from spatial fusion. We consider the use of spatial fusion a trick for enhancing the robustness of forecasting. The hyperparameter of spatial fusion is the K used in the k-means method. In this paper, we set K ∈ {3, 4, 5}. More ablation analysis of K is shown in Table 6. C.3 TIME EFFICIENCY IMPROVEMENT One of the innovations proposed in this paper is to shorten the running time of attention without degrading the overall quantile loss. The long experiment cycle suggests that we should choose a representative dataset, such as one-day demand prediction in city A. The data size of city A is large enough to reflect robust attention weights. On this dataset, we are interested in the decrease in running time as the number of heads in the multi-head attention decreases. As shown in Figure 6, with 3 heads, the running-time reduction ratios of CPU(20), GPU(1) and GPU(2) compared with softmax are 58%, 70%, and 68%, respectively. Similarly, with 5 heads, the corresponding reduction ratios are 49%, 58% and 60%, respectively. The exact time complexity is O(K²V) (see Section 3.4): the smaller K, the longer the running time. In summary, the proposed time-efficient attention significantly outperforms the default softmax attention. D DISCUSSIONS ON LINEAR ATTENTION In subsection 3.4, we propose a novel linear attention based on an approximate Taylor expansion of the exponential function. In contrast, other important methods have also been developed to reduce the attention cost. These attention acceleration methods can be roughly classified into two groups.
The first one is to construct kernel functions to approximate the softmax function, denoted as
softmax(Q^T K) ≈ ϕ(Q)^T · φ(K), (14)
where Q and K are the query and key matrices, respectively. For instance, Katharopoulos et al. (2020) construct a kernel with the basis function ϕ(x) = φ(x) = elu(x) + 1 and reduce the computational complexity from O(N²) to O(N), but this performance was only demonstrated on image datasets. Shen et al. (2018) further explore a series of kernel forms to dissect the Transformer's attention. They proposed a new variant of the Transformer's attention by modeling the input as a product of symmetric kernels. This approach changes the computation order of the softmax, which is equivalent to the basis functions φ(x) = softmax(x) and ϕ(x) = e^x. The second one is to modify the definition of attention. Child et al. (2019) develop sparse factorizations of the attention matrix, which reduce the computation to O(N√N), but the attention hyperparameters are very hard to initialize and the actual efficiency is hard to ensure. Kitaev et al. (2020) propose Reformer, which replaces dot-product attention with one that uses locality-sensitive hashing, changing the complexity from O(N²) to O(N log N), where N is the length of the sequence. Furthermore, they use reversible residual layers instead of standard residuals, allowing activations to be stored only once during training instead of L times, where L is the number of layers. However, Reformer is difficult to implement and to apply to different tasks. Wang et al. (2020) demonstrate that the self-attention mechanism can be approximated by a low-rank matrix, and further propose the Linformer mechanism to reduce the overall self-attention complexity to O(N). Linformer uses two additional matrices E and F to project K and V, respectively, in order to get Attention(Q, K, V) = softmax(Q(EK)^T)FV. But the MLM experiment in Linformer does not need to extract long-term dependence and cannot verify its linear time complexity for capturing long-term attention. Eliminating redundant vectors from the self-attention is a key design idea. Goyal et al. (2020) exploit redundancy pertaining to word vectors, and propose PoWER-BERT to achieve up to a 4.5x reduction in inference time over BERT with <1% loss in accuracy on the standard GLUE benchmark. Similarly, Dai et al. (2020) propose Funnel-Transformer, which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. Finally, for our approximate Taylor expansion of the softmax attention, if the feature maps (i.e., Q, K and V in self-attention) meet the positive-definiteness and normalization conditions and the task focuses on short-term dependence, then our linear attention is useful in this setting.
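To close this discussion, the following is a minimal sketch of the first-order Taylor approximation in (8), written for generic self-attention rather than the exact GAT form in (9); the shapes and names are illustrative assumptions. After L2 normalization, exp(a^T w) ≈ 1 + (a/||a||₂)·(w/||w||₂), so the attention weights factor into inner products and can be computed in time linear in the sequence length.

```python
import numpy as np

def taylor_attention(Q, K, V, eps=1e-8):
    """Linear attention with weights proportional to 1 + <q_hat, k_hat> >= 0."""
    Qn = Q / (np.linalg.norm(Q, axis=-1, keepdims=True) + eps)
    Kn = K / (np.linalg.norm(K, axis=-1, keepdims=True) + eps)
    # Numerator: sum_j (1 + q_i . k_j) v_j = sum_j v_j + Qn @ (Kn^T V)
    num = V.sum(axis=0, keepdims=True) + Qn @ (Kn.T @ V)
    # Denominator: sum_j (1 + q_i . k_j) = N + Qn @ sum_j k_j
    den = K.shape[0] + Qn @ Kn.sum(axis=0)
    return num / den[:, None]

N, d = 512, 64
Q, K, V = (np.random.randn(N, d) for _ in range(3))
out = taylor_attention(Q, K, V)   # (N, d); Kn.T @ V costs O(N d^2), linear in N
```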
1. What is the focus and contribution of the paper on causal spatial-temporal prediction? 2. What are the strengths of the proposed approach, particularly in reducing complexity and introducing causal attention? 3. What are the weaknesses of the paper, especially regarding assumptions and limitations of the proposed method? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or questions about the applicability and effectiveness of the proposed method in different domains?
Review
This paper proposes a new framework for causal spatio-temporal prediction with high interpretability. Specifically, it first proposes a causal transformer including fast spatial graph fusion, causal attention, and temporal attention units. The authors conduct extensive experiments on data from different domains (electricity, traffic, etc.) and show the effectiveness, efficiency, and interpretability of their proposed method. Strengths: The authors propose to reduce the complexity of the computation of the attention module. The authors are the first to propose a causal attention method for HTE in large-scale spatio-temporal prediction problems. The experiments on datasets from various domains are adequate and support the authors' claims. Weaknesses: Both equations (1) and (2) are based on the authors' assumptions. Do such assumptions have any support (either from previous literature or from the data)? In my opinion, some of the assumptions do not make sense. For example, one assumption is that x_v(t+1) is primarily affected by the historical demands in x_v(:t) and the external covariates in z, without the historical supply y. However, if the historical supply y is not enough, then demand may rise because more and more demand accumulates. It seems that the authors' proposed approximate Taylor's expansion attention can be used for attention models or even softmax functions in any scenario. Does it have any limitations? For example, some of the coefficients may be very large so that the higher-order Taylor terms cannot be overlooked. Please explain why such problems do not exist in the spatio-temporal prediction scenario of this paper.
ICLR
Title Causal Probabilistic Spatio-temporal Fusion Transformers in Two-sided Ride-Hailing Markets Abstract Achieving accurate spatio-temporal predictions in large-scale systems is extremely valuable in many real-world applications, such as weather forecasts, retail forecasting, and urban traffic forecasting. So far, most existing methods for multi-horizon, multi-task and multitarget predictions select important predicting variables via their correlations with responses of interest, and thus it is highly possible that many forecasting models generated from those methods are not causal, leading to poor interpretability. The aim of this paper is to develop a collaborative causal spatio-temporal fusion transformer, named CausalTrans, to establish the collaborative causal effects of predictors on multiple forecasting targets, such as supply and demand in ride-sharing platforms. Specifically, we integrate the causal attention with the Conditional Average Treatment Effect (CATE) estimation method in causal inference. Moreover, we propose a novel and fast multi-head attention evolved from Taylor’s expansion instead of softmax, reducing time complexity from O(V) to O(V), where V is the number of nodes in a graph. We further design a spatial graph fusion mechanism to significantly reduce the parameters’ scale. We conduct a wide range of experiments to demonstrate the interpretability of causal attention, the effectiveness of various model components, and the time efficiency of our CausalTrans. As shown in these experiments, our CausalTrans framework can achieve up to 15% error reduction compared with various baseline methods. 1 INTRODUCTION This paper is motivated by solving a collaborative probabilistic forecasting problem of both supply and demand in two-sided ride-hailing platforms, such as Uber and DiDi. Collaborative supply and demand relationships are common in various two-sided markets, such as Amazon, Airbnb, and eBay. We consider two-sided ride-hailing platforms as an example. In this case, we denote supply and demand as online driver number and call orders, respectively, on the platform at a specific time in a city. Some major factors for demand include rush hours, weekdays, weather conditions, transportation network, points of interest, and holidays. For instance, if it rains during peak hours in weekdays, demand will dramatically increase and last for a certain time period. In contrast, some major factors for supply include weather, holidays, traffic condition, weekdays, and platform’s dispatching and repositioning policies. Moreover, supply tends to gradually cover the area with many unsatisfied orders, that is, the distribution of supply tends to match with that of demand. We are interested in establishing collaborative causal forecasting models for demand and supply by using various predictors (or covariates). Although many learning methods have been developed to address various collaborative prediction tasks, such as spatio-temporal traffic flow prediction (Zhu & Laptev, 2017; Du et al., 2018; Zhang et al., 2019b; Ermagun & Levinson, 2018; Luo et al., 2019), multivariate prediction (Bahadori et al., 2014; Liang et al., 2018), multi-task prediction (Tang et al., 2018; Chen et al., 2018; Chandra et al., 2017), multi-view prediction (Yao et al., 2018), and multi-horizon prediction (Lim et al., 2019; Yu et al., 2020), these existing methods primarily select important predictors via their correlations with responses, leading to many forecasting models with poor interpretability. 
In contrast, we propose CausalTrans, a Collaborative Spatio-temporal Fusion Transformer that generates causal probabilistic multi-horizon forecasts. To the best of our knowledge, this is the first work that captures collaborative causal effects of external covariates on multiple forecasting targets. Building such models is not only essential to enhancing forecasting performance, but also helps the platform utilize various platform policies to match the distribution of supply with that of demand in two-sided markets. In the CausalTrans framework, our major contributions are summarized as follows:
• We design the causal attention based on double machine learning (Chernozhukov et al., 2018) with two-layer fully connected neural networks, and successfully apply it to various large-scale time series forecasting problems. We conduct a wide range of experiments on real-world datasets with multiple covariates and demonstrate that CausalTrans with causal attention outperforms many baseline models in various ride-hailing scenarios.
• We propose a spatial fusion mechanism based on graph attention networks (GAT) (Veličković et al., 2017) to gather local regions and enhance robustness, as adjacent regions always share similar supply and demand patterns.
• We propose an approximate time-efficient Taylor expansion attention to replace softmax in the multi-head attention of Transformers (Vaswani et al., 2017), such that time complexity reduces from $O(V^2)$ to $O(V)$. We carry out two groups of experiments, with three and five attention heads, to verify this efficiency improvement.

2 RELATED WORK
There is a large body of literature on vehicle flow forecasting (Zhu & Laptev, 2017; Bahadori et al., 2014; Tang et al., 2018; Lim et al., 2019; Yao et al., 2018). We selectively review several major methods as follows. Zhu & Laptev (2017) formulate the time series forecasting task as a two-step procedure consisting of offline pre-training and online forecasting. The offline pre-training step is an encoder-decoder framework for compressing sequential features and extracting principal components, whereas the second step gives explainable prediction changes under external variables. Bahadori et al. (2014) proposed a unified low-rank tensor learning framework for multivariate spatio-temporal analysis by combining various attributes of spatio-temporal data, including spatial clustering and shared variable structure. For multi-step traffic flow prediction, Tang et al. (2018) proposed a spatio-temporal multi-task collaborative learning model to extract and learn shared information among multiple prediction tasks collaboratively. For example, the model combines spatial features collected from offline observation stations with inherent information between blended time granularities. Lim et al. (2019) proposed a temporal fusion transformer (TFT) to capture temporal correlations at each position, which is similar to the self-attention mechanism and is expected to capture long-term and short-term dependencies. Yao et al. (2018) proposed a deep multi-view spatio-temporal network (DMVST-Net), including a speed viewpoint (modeling the correlation between historical and future demand by LSTM (Gers & Schmidhuber, 2001)), a spatial viewpoint (modeling local spatial correlation by CNN), and a contextual viewpoint (modeling regional correlations in local temporal patterns). Overall, all of the above methods improve time series fitting by learning and predicting correlations across multiple spatio-temporal perspectives, targets, and tasks.
However, those methods lack convincing interpretability of "how and to what extent external variables affect supply and demand". Achieving good demand forecasting involves not only historical demand targets, but also various current external variables (e.g., weather conditions, traffic conditions, holidays, and driver repositioning). Those historical demand observations were themselves affected by historical external factors, so demand forecasting based only on correlations between variables is hardly convincing. Furthermore, supply forecasting is empirically affected by the distribution of demand in addition to current external variables. Establishing causal relationships between (supply, demand) and multiple external variables is therefore critically important for accurate supply and demand forecasting.

3 METHODOLOGY
We introduce the CausalTrans framework to efficiently establish the collaborative causal effects of multiple predictors on spatio-temporal supply and demand below.

3.1 COLLABORATIVE SUPPLY AND DEMAND FORECASTING
We consider all related observations, including supply, demand, and external variables, collected in a city. Each day is divided into 24 hour segments, and a city is divided into non-overlapping hexagonal regions (side lengths range from 600 to 1000 meters). The complete data consists of demand $x_v(t) \in \mathbb{R}$, supply $y_v(t) \in \mathbb{R}$, and dynamic covariates $z_v(t) \in \mathbb{R}^z$, where $t$ is a specific hour segment and $v \in V$ is a specific hexagon of the set of hexagonal regions, denoted as $V$. Dynamic covariates include weather, holidays, social events, POIs (Points Of Interest), and government policies. Weather features consist of temperature (°C), rainfall (mm), wind level, and PM2.5 (mg/m³). Holiday features are represented by one-hot boolean vectors, including seasons, weekdays, and national and popular holidays, such as Christmas Day. POI features are represented by the counts of various positions, including traffic stations, business districts, communities, hospitals, and schools. More detailed cases about collaborative supply and demand are provided in Appendix A.

The problem of interest is to use all available observations in $\{(x_v(:t), y_v(:t), z_v(:t)), v \in V\}$ to predict $\{(x_v(t+1:t+\tau_{max}), y_v(t+1:t+\tau_{max})), v \in V\}$, where $\tau_{max}$ is a pre-specified time length, $x_v(t_1:t_2)$ and $y_v(t_1:t_2)$ are the demand and supply vectors from time point $t_1$ to time point $t_2$, and $x_v(:t_2)$ and $y_v(:t_2)$ are the demand and supply vectors from the earliest time point to time point $t_2$. The demand $x_v$ may depend on historical supply $y_v$ from several weeks (or even longer) ago. But in the latest several weeks (the training period), based on our understanding of the ride-sharing business, demand $x_v$ is primarily influenced by its own recent historical patterns. Based on the above description, we formulate the learning problem of collaborative demand and supply forecasting as follows:
$$P(x_v(t+1:t+\tau_{max}) \mid x_v(:t),\, z_v(:t+\tau_{max})), \quad (1)$$
$$P(y_v(t+1:t+\tau_{max}) \mid y_v(:t),\, x_v(:t+\tau_{max}),\, z_v(:t+\tau_{max})), \quad (2)$$
where $P(\cdot \mid \cdot)$ is a conditional distribution. In (1), it is assumed that $x_v(t+1:t+\tau_{max})$ is primarily affected by historical demands in $x_v(:t)$ and external covariates in $z_v(:t+\tau_{max})$. Furthermore, in (2), it is assumed that future supplies in $y_v(t+1:t+\tau_{max})$ are primarily affected by historical supplies in $y_v(:t)$, demand patterns in $x_v(:t+\tau_{max})$, and external covariates in $z_v(:t+\tau_{max})$. Comparing (1) with (2), we assume that the distribution of supply during $[t+1, t+\tau_{max}]$ is driven by the historical and current distributions of demand, besides the historical information in $y_v(:t)$ and external covariates in $z_v(:t+\tau_{max})$.
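As a concrete illustration of the factorization in (1), the following is a minimal numpy sketch of how a single region's hourly series could be sliced into training pairs. The function name, the one-week history length, and the synthetic Poisson data are illustrative assumptions, not the authors' actual data pipeline.

```python
import numpy as np

def make_demand_windows(x, z, t_hist, tau_max):
    """Builds (input, target) pairs matching the factorization in (1): inputs are
    historical demand x_v(:t) plus known covariates z_v(:t+tau_max); targets are
    the future demand x_v(t+1 : t+tau_max)."""
    inputs, targets = [], []
    for t in range(t_hist, len(x) - tau_max):
        hist_demand = x[t - t_hist:t]            # x_v(:t), truncated to t_hist steps
        known_covs = z[t - t_hist:t + tau_max]   # z_v(:t+tau_max), incl. future covariates
        inputs.append((hist_demand, known_covs))
        targets.append(x[t:t + tau_max])         # x_v(t+1 : t+tau_max), 0-indexed
    return inputs, targets

# Illustrative usage for one region: a week of hourly history, one day ahead
rng = np.random.default_rng(0)
x = rng.poisson(lam=20, size=500).astype(float)   # hourly demand counts
z = rng.normal(size=(500, 4))                     # 4 dynamic covariates
pairs, ys = make_demand_windows(x, z, t_hist=168, tau_max=24)
```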
3.2 PROBABILISTIC FORECASTING
Most time series forecasting methods produce deterministic values, whereas such forecasts can have large variation and are hardly robust due to the variation of covariates and of the training process. To enhance forecasting reliability, we adopt the quantile loss function with the Poisson distribution as our final optimization function.¹ Empirically, following (Salinas et al., 2020; Wen et al., 2017; Li et al., 2019; Lim et al., 2019), we choose three quantile points $q \in Q = \{10\%, 50\%, 90\%\}$, in which the gap between the forecast values at the 90% and 10% percentiles can be regarded as a confidence interval. Taking the forecast of demand $x_t$ at time point $t$ as an example, the final quantile loss function is given by
$$\mathcal{L}_Q = \sum_{x_t \in \Omega} \sum_{q \in Q} \sum_{\tau=1}^{\tau_{max}} \frac{QL_q(x_t, \hat{x}^q_{t-\tau})}{M \cdot \tau_{max}}, \quad (3)$$
where $QL_q(x_t, \hat{x}^q_t) = \{q - \mathbb{I}(x_t \le \hat{x}^q_t)\}(x_t - \hat{x}^q_t)$, $\Omega$ is the training dataset, $\tau_{max}$ is the maximum prediction step, and $\mathbb{I}(\cdot)$ is an indicator function. For a fair comparison, given the test dataset $\tilde{\Omega}$, we employ the q-risk (Salinas et al., 2020; Lim et al., 2019; Li et al., 2019), denoted as $\mathcal{R}_q$, to evaluate the risk level of each quantile point as follows:
$$\mathcal{R}_q = \frac{2 \sum_{x_t \in \tilde{\Omega}} \sum_{\tau=1}^{\tau_{max}} QL_q(x_t, \hat{x}^q_{t-\tau})}{\sum_{x_t \in \tilde{\Omega}} \sum_{\tau=1}^{\tau_{max}} |x_t|}. \quad (4)$$
There are at least two advantages of using the quantile loss function. First, the quantile loss function is more robust and stable than the mean squared error or the hinge loss, especially when forecasting targets have large variation. Second, we can modify external covariates to change the confidence interval of causal attention and analyze real-world cases.

¹ Ride-hailing supply and demand variables approximately follow the Poisson distribution.
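To make (3) and (4) concrete, here is a minimal numpy sketch of the per-quantile loss and the q-risk metric; the synthetic Poisson targets and the function names are illustrative assumptions.

```python
import numpy as np

def quantile_loss(y, y_hat, q):
    """QL_q(x, x_hat) = (q - I(x <= x_hat)) * (x - x_hat), as in (3)."""
    return (q - (y <= y_hat).astype(float)) * (y - y_hat)

def q_risk(y, y_hat, q):
    """Normalized q-risk R_q from (4) over a set of test forecasts."""
    return 2.0 * quantile_loss(y, y_hat, q).sum() / np.abs(y).sum()

# Illustrative usage with the three quantile heads Q = {10%, 50%, 90%}
rng = np.random.default_rng(0)
y = rng.poisson(lam=20, size=(64, 24)).astype(float)          # true demand
preds = {q: y + rng.normal(size=(64, 24)) for q in (0.1, 0.5, 0.9)}
train_loss = sum(quantile_loss(y, preds[q], q).mean() for q in preds)
print({q: round(float(q_risk(y, preds[q], q)), 4) for q in preds})
```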
3.3 CAUSAL TRANSFORMER FRAMEWORK
Our CausalTrans is a novel combination of causal estimators and the encoder-decoder architecture. Figure 1 shows an overview of the CausalTrans framework. The three key novel contributions of CausalTrans are the fast spatial graph fusion, causal attention, and temporal attention units. First, from the spatial perspective, CausalTrans gathers a set of graph attention kernels (GAT) by using assignment scores extracted from temporal patterns. Moreover, we apply a first-order Taylor's expansion to the multi-head attention of the transformer to reduce time complexity from quadratic to linear. Second, from the temporal perspective, causal attention based on sufficient historical observations is trained offline to evaluate the causal weights on peak time slots, on weather conditions, and on holidays, which are denoted as $\theta_T$, $\theta_W$, and $\theta_H$, respectively, under diverse spatio-temporal conditions. Furthermore, we simplify three seasonal perspectives (week, month, and holidays) to represent a multi-view position encoding (MVPE). Third, temporal attention is used to fill the gap between encoder and decoder, in which we add a sequence mask to ensure that the prediction at time point $t$ only uses observations before $t$. We set masked-out entries to $-\infty$ and illegal connection weights to zero. In the following subsections, we introduce the main components of CausalTrans, fast spatial graph fusion and causal attention, and show how they work together as a causal spatio-temporal predictor. For notational simplicity, we describe those components for forecasting demand $x_v$ only, and avoid repeating the same components for supply $y_v$.

3.4 FAST SPATIAL GRAPH FUSION ATTENTION
In this subsection, we describe the fast graph fusion attention unit based on region clustering and fast multi-head attention. See Figure 1 (b) for the architecture of Fast S.F.. Since GAT has achieved impressive results in traffic forecasting (Park et al. (2019); Kosaraju et al. (2019); Zhang et al. (2019a)), we use GAT to extract contextual features in huge graphs. However, directly applying GAT to large-scale forecasting problems is challenging, so we design spatial fusion subgraphs that share local supply and demand information. Moreover, we build our framework on transformers (Vaswani et al., 2017). Transformers have been the state-of-the-art architecture in various natural language processing (NLP) tasks (Wolf et al., 2019; Wang et al., 2019) and in time series forecasting due to their prominent power of long-term feature extraction and parallel computing. However, the multi-head attention in transformers becomes a key bottleneck for time efficiency. We design an approximate Taylor's expansion attention that replaces the softmax function to accelerate matrix products. More detailed results on fast attention can be found in Appendices C.2 and C.3.

We briefly describe the fast spatio-temporal fusion graph attention procedure below. First, let $X_t$ be the spatio-temporal demand feature matrix of all grids $V$ before time $t$; the temporal patterns of $V$ are represented as assignment scores given by
$$C = (c_{x_v,k}) = [\sigma_s(\sigma_r(X_t W_v) W_t)]_{Batch}, \quad (5)$$
where $[\cdot]_{Batch}$ is the mean operator over the batch mode, $k$ indexes a $K$-dimensional cluster vector, $v \in V$, $W_v$ and $W_t$ are, respectively, spatial and temporal weight matrices corresponding to $X_t$, and $\sigma_s(\cdot)$ and $\sigma_r(\cdot)$ are the sigmoid and ReLU activation functions, respectively. Second, we use the $k$-th spatial learner $G_k(x_v)$ to extract spatial features of the sequential data $x_v$ in grid $v$, and the summed outputs of the $K$ clusters are given as follows:
$$h_v = \sum_{k \in K} G_k(x_v) c_{x_v,k}. \quad (6)$$
The softmax function is used to get attention weights among regions as follows:
$$\hat{\alpha}_v = \sum_{v' \in N_v} \alpha_{v,v'} \cdot x_{v'} = \frac{\sum_{v' \in N_v} \exp(\sigma_r(a^T [W x_v \,\|\, W x_{v'}])) \cdot x_{v'}}{\sum_{v' \in N_v} \exp(\sigma_r(a^T [W x_v \,\|\, W x_{v'}]))}, \quad (7)$$
where $\alpha_{v,v'}$ is the correlation weight between $v$ and $v'$, $a$ and $W$ are network parameters, the superscript $T$ denotes the transpose of a vector or matrix, $N_v = \{v' \mid v' \in V, v' \ne v\}$ is the neighboring region set of region $v$, and $[\cdot\|\cdot]$ is the concatenation operation. In (7), the time complexity of computing $\exp(\sigma_r(a^T [W x_v \| W x_{v'}]))$ is $O(V^2)$. Specifically, the exponent operation in $\exp(a^T W) \cdot X$ of the softmax function limits the efficiency of attention. Moreover, the cluster number $K \ll V$, and the time complexity of $a^T W X$ is $O(K^2 \cdot V) \approx O(V)$. Many recent studies find that linear attention is feasible for tasks whose primary focus is short-term dependence. More details are discussed in Appendix D.

Our novel linear attention is easy to implement and interpret. It follows from the Taylor expansion that $\exp(a^T W) \approx 1 + a^T W$ under the condition of small $a^T W$. Analogous to the self-attention in the original Transformer (Vaswani et al., 2017), the approximate mean and variance of $QK^T/\sqrt{d_k}$ are 0 and 1, respectively, so $a^T W$ here is limited to small values. We introduce L2 normalization to ensure small $a^T W$ and $1 + a^T W \ge 0$, such that
$$\exp(a^T W) \approx \mathcal{T}(a^T W) = 1 + \left(\frac{a}{\|a\|_2}\right)^T \left(\frac{W}{\|W\|_2}\right), \quad (8)$$
where $\mathcal{T}$ is the approximate Taylor expansion. Equation (8) reduces to inner dot products, which have advantages in parallel implementation and linear time complexity. Finally, $\hat{\alpha}_v$ can be transformed into
$$\hat{\alpha}_v = \frac{\sum_{v' \in N_v} \mathcal{T}(\sigma_r(a^T [W x_v \,\|\, W x_{v'}])) \cdot x_{v'}}{\sum_{v' \in N_v} \mathcal{T}(\sigma_r(a^T [W x_v \,\|\, W x_{v'}]))}. \quad (9)$$
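The speed-up in (8)-(9) comes from the fact that a first-order score factorizes across the pair $(v, v')$, so the neighbor sums can be shared across all queries. The sketch below is written under our reading of (8)-(9): it drops the ReLU (which would break the factorization) and assumes $a$ and $W$ are kept small, mimicking the L2 normalization in (8). It is an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np

def taylor_linear_attention(x, a, W):
    """O(V) aggregation with first-order scores s(v, v') = 1 + a1.(W x_v) + a2.(W x_v'),
    obtained by splitting a = [a1; a2] over the concatenation [W x_v || W x_v'].
    The sums over v' in (9) then factor into quantities computed once for all v."""
    V, d = x.shape
    h = W.shape[0]
    a1, a2 = a[:h], a[h:]
    Wx = x @ W.T                           # (V, h) projected features
    p = Wx @ a1                            # a1 . (W x_v) for every v
    q = Wx @ a2                            # a2 . (W x_v') for every v'
    sum_x = x.sum(axis=0)                  # sum over v' of x_v'
    sum_qx = (q[:, None] * x).sum(axis=0)  # sum over v' of q_v' x_v'
    sum_q = q.sum()
    out = np.zeros_like(x)
    for v in range(V):
        # exclude v' = v from the neighbor sums, as N_v does in (7)
        num = (1.0 + p[v]) * (sum_x - x[v]) + (sum_qx - q[v] * x[v])
        den = (V - 1) * (1.0 + p[v]) + (sum_q - q[v])
        out[v] = num / den
    return out

# Illustrative shapes: 6 regions, 4-dim features; small weights keep scores near 1
x = np.random.randn(6, 4)
out = taylor_linear_attention(x, a=0.1 * np.random.randn(8), W=0.1 * np.random.randn(4, 4))
```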
3.5 CAUSAL ATTENTION MECHANISM
Many external covariates causally change the distribution of demand and supply, as shown in Figures 2 and 3 of the supplementary document. Meanwhile, many existing works focus on finding correlations between external covariates and forecasting targets. For example, Li et al. (2019) designed causal convolution to enhance the locality of attention, whereas Lim et al. (2019) added variable selection networks and a gate mechanism to train attention weights. These two studies (Lim et al., 2019; Li et al., 2019) calculate correlations among variables, but not causal effects under counterfactual conditions. Statistically, this issue can be regarded as a heterogeneous treatment effect (HTE) problem. See Figure 1 (c) for the architecture of C.A.. To the best of our knowledge, causal attention methods for HTE have not previously been proposed for large-scale spatio-temporal forecasting problems.

First, we briefly describe the conditional average treatment effect (CATE) (Abrevaya et al., 2015). We again take the demand vectors $x_v(t_1:t_2)$ (abbreviated as $x$ in the following) of grid $v$ from time point $t_1$ to time point $t_2$ as an example, and let $X$ represent a set of such $x$. The treatments we consider include weather (rainfall, temperature, and wind level), peak time slots, and holidays. Let $x(s)$ be the target variable under treatment $s \in S$, and let $z$ be a vector of other covariates. The HTE for comparing two treatment levels $s_0$ and $s_1$ is defined as
$$\tau(s_0, s_1; z) = E[X(s_1) - X(s_0) \mid z]. \quad (10)$$
If the treatment $s$ is continuous, then the treatment effect is defined to be $E[\nabla_s X(s) \mid z]$, where $\nabla_s = \partial/\partial s$. To unbiasedly estimate treatment effects, we propose a causal attention module based on double machine learning (DML) (Chernozhukov et al., 2017) with two-layer non-parametric fully connected neural networks. Specifically, we assume
$$X(S) = \theta(z) \cdot S + g_0(z) + \epsilon \quad \text{and} \quad S = g_1(z) + \eta, \quad (11)$$
where $\epsilon$ and $\eta$ are independent random variables such that $E[\epsilon \mid z] = E[\eta \mid z] = 0$, $g_0(\cdot)$ and $g_1(\cdot)$ are two non-parametric neural networks, and $\theta(z)$ is the constant marginal CATE. Letting $\tilde{X} = X - E(X \mid z)$ and $\tilde{S} = S - E(S \mid z)$, we get
$$\tilde{X} = X - E(X \mid z) = \theta(z) \cdot \{S - E(S \mid z)\} + \epsilon = \theta(z) \cdot \tilde{S} + \epsilon. \quad (12)$$
Therefore, we can compute $\theta(z)$ by solving
$$\hat{\theta}(z) = \arg\min_{\theta \in \Theta} E_n\left[(\tilde{X} - \theta(z) \cdot \tilde{S})^2\right], \quad (13)$$
where $E_n$ denotes the empirical expectation.

A large historical data source contains all kinds of experimental environments and treatments. According to Algorithm 1, given a time series $x_v(:t)$ at grid $v$ ($v$ is dropped for readability) and a treatment $s_1 \in S$, we loop and search for two treatment levels $s_0$ and $s_1$ along the historical timeline to construct the AB groups $\{x(t_0) \mid s_0\}$ and $\{x(t_1) \mid s_1\}$. Then, we construct the AA groups $\{x(t_0-\tau:t_0)\}$ and $\{x(t_1-\tau:t_1)\}$ using a look-back window of the same length $\tau$ before $t_0$ and $t_1$, and make sure that both are stationary processes with equal mean ($P_{KPSS} > 0.05$ in the KPSS test (Shin & Schmidt, 1992) and $P_{\text{T-Test}} > 0.05$ in the t-test on both AA groups' first-order differences). Based on the selected AA/AB groups, we employ DML to estimate causal attention. In our method, the trained causal attention $\hat{\theta}$ is inserted into the transformer, and clustered regions share the global $\hat{\theta}$ with each other.
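Below is a minimal sketch of the residual-on-residual estimation in (11)-(13). For brevity it replaces the two-layer neural nuisance models with ordinary least squares and omits the cross-fitting used in full DML; the data-generating process is synthetic and purely illustrative.

```python
import numpy as np

def fit_linear(z, y):
    """Least-squares nuisance model standing in for the two-layer networks g0, g1."""
    Z = np.column_stack([z, np.ones(len(z))])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return lambda z_new: np.column_stack([z_new, np.ones(len(z_new))]) @ coef

def dml_cate(x, s, z):
    """Residual-on-residual estimate of the constant marginal CATE theta,
    following (11)-(13): regress x_tilde = x - E[x|z] on s_tilde = s - E[s|z]."""
    x_res = x - fit_linear(z, x)(z)
    s_res = s - fit_linear(z, s)(z)
    return (s_res @ x_res) / (s_res @ s_res)   # closed-form minimizer of (13)

# Synthetic check: a rainfall-like treatment s with true effect 2.5 on demand x
rng = np.random.default_rng(0)
z = rng.normal(size=(2000, 3))                          # other covariates
s = z @ np.array([0.5, -0.2, 0.1]) + rng.normal(size=2000)
x = 2.5 * s + z @ np.array([1.0, 0.3, -0.7]) + rng.normal(size=2000)
print(dml_cate(x, s, z))   # should be close to 2.5
```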
Algorithm 1 Causal Attention Algorithm with DML
Input: demand matrix $x(:t)$ at a grid $v$ before time $t$; three kinds of treatments: weekday and hour slots $T(:t) = \{T_w(:t), T_h(:t)\}$, weather vectors $W(:t)$, and holiday one-hot vectors $H(:t)$
Output: causal effect coefficients $\theta_T$ for $T(:t)$, $\theta_W$ for $W(:t)$, and $\theta_H$ for $H(:t)$
1: Take $\theta_T$ as an example, and initialize the AA group and AB group on $T(:t)$ as $T_{AA} = T_{AB} = \{\}$
2: for all $\{T_w(t_0), T_w(t_1)\} \in$ {Mon, Tue, ..., Sun}, $\{T_h(t_0), T_h(t_1)\} \in \{1, ..., 24\}$ do
3:   if $T_w(t_0) = T_w(t_1)$, $T_h(t_0) = T_h(t_1)$, and $P_{\text{T-Test}}(x(t_0), x(t_1)) < 0.05$ then
4:     for all $t'_0 \in \{: t_0\}$ and $t'_1 \in \{: t_1\}$ do
5:       Calculate first-order differences $\tilde{x}(t'_0 : t_0)$ and $\tilde{x}(t'_1 : t_1)$
6:       if $P_{\text{KPSS}}(\tilde{x}(t'_0 : t_0))$, $P_{\text{KPSS}}(\tilde{x}(t'_1 : t_1))$, and $P_{\text{T-Test}}(\tilde{x}(t'_0 : t_0), \tilde{x}(t'_1 : t_1))$ are all $> 0.05$ then
7:         $T_{AA}$.append($[(x(t'_0 : t_0), x(t'_1 : t_1))]$)
8:         $T_{AB}$.append($[(x(t_0), x(t_1))]$)
9:       end if
10:     end for
11:   end if
12: end for
13: Run DML on the $T_{AA}$ and $T_{AB}$ datasets and estimate treatment coefficients $\theta_T$
14: Repeat from Step 2 to estimate $\theta_W$ and $\theta_H$ with separate DML runs
15: return $\theta_T$, $\theta_W$, and $\theta_H$
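The stationarity and equal-mean gating in steps 5-6 of Algorithm 1 can be sketched as below, using the standard KPSS and t-test implementations; the window length, significance level, and synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.tsa.stattools import kpss

def aa_group_ok(w0, w1, alpha=0.05):
    """Gate from steps 5-6 of Algorithm 1: both look-back windows must be
    stationary in first differences (KPSS p > alpha) and the differences must
    have equal means (t-test p > alpha)."""
    d0, d1 = np.diff(w0), np.diff(w1)
    p_kpss0 = kpss(d0, regression="c", nlags="auto")[1]
    p_kpss1 = kpss(d1, regression="c", nlags="auto")[1]
    p_t = ttest_ind(d0, d1, equal_var=False)[1]
    return min(p_kpss0, p_kpss1, p_t) > alpha

# Illustrative 48-hour look-back windows before two candidate time points
rng = np.random.default_rng(1)
w0 = rng.poisson(lam=20, size=48).astype(float)
w1 = rng.poisson(lam=20, size=48).astype(float)
print(aa_group_ok(w0, w1))
```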
4 EXPERIMENTS
4.1 DATASETS
We consider four datasets (Electricity, Traffic, Retail, and Ride-hailing) in our experiments, as follows.
Electricity. Electricity contains the hourly univariate electricity consumption of 370 customers. Following (Salinas et al., 2020), weekly observations before $t$ are used as inputs to predict the next 24 hours' series.
Traffic. Traffic contains the hourly univariate occupancy rates of 963 San Francisco Bay Area freeways, where the look-back rolling window and prediction step are the same as for Electricity.
Retail. Retail is the Favorita Grocery Sales dataset from a Kaggle competition (https://www.kaggle.com/c/favorita-grocery-sales-forecasting/) (Lim et al., 2019), including daily metadata with diverse products, stores, and external variables. To compare with some state-of-the-art methods (Lim et al., 2019; Salinas et al., 2020), historical observations across 90 days are used to forecast product sales in the next 30 days.
Ride-hailing. The Ride-hailing dataset contains real supply, demand, and various metadata at the hourly and hexagonal-grid scale between June 2018 and June 2020 in two big cities (city A and city B), obtained from a ride-hailing company. The first 70%, the next 10%, and the remaining 20% are used for training, validation, and testing, respectively.
We group the first two datasets into the univariate group and the last two datasets into the multivariate group.

4.2 BENCHMARKS
In this section, two different kinds of forecasting methods, iterative methods and multi-horizon methods, are compared in a wide range of experiments. For our method CausalTrans, a pre-defined search space is used to determine optimal hyperparameters. Experimental details are included in Appendix B.
Iterative methods. Iterative methods generate multi-step prediction results with step-by-step rolling windows, where results from previous steps are used as inputs at the next step. Typical iterative methods include DeepAR†, Deep State Space Models (DeepState†) (Rangapuram et al., 2018), ARIMA† (Zhang, 2003), ETS (Jain & Mallick, 2017), and TRMF (Yu et al., 2016).
Multi-horizon methods. Multi-horizon methods considered here include ConvTrans (Li et al., 2019), MQRNN† (Wen et al., 2017), Seq2Seq† (Sutskever et al., 2014), DMVST (Yao et al., 2018), ST-MGCN (Geng et al., 2019), and TFT (Lim et al., 2019). The † methods are trained using the GluonTS (Alexandrov et al., 2019) package. DMVST and ST-MGCN are spatial baselines.

4.3 RESULTS AND DISCUSSION
We adopt the quantile loss as the optimization function and compare the results by the q-risks $\mathcal{R}_{50}/\mathcal{R}_{90}$ at quantile points 50%/90%. More detailed descriptions of probabilistic forecasting are provided in subsection 3.2. Table 1 reports the $\mathcal{R}_{50}/\mathcal{R}_{90}$ losses of all forecasting methods on the Electricity and Traffic datasets. The Electricity dataset does not have any covariates and lacks spatial information, whereas the Traffic dataset does have spatial information, even without multiple covariates. We observe that ConvTrans and TFT are comparable with each other, and both outperform all other methods. We believe that, compared with TFT, ConvTrans is able to take advantage of the spatial information in the Traffic dataset; this is not the case for the Electricity dataset.

Table 2 and Table 3 report the $\mathcal{R}_{50}$ and $\mathcal{R}_{90}$ losses of all multi-horizon methods in the multivariate group. We consider both one-day and seven-day predictions and optimize the hyperparameters of all methods using grid search. We have several important observations. First, for the one-day prediction, the iterative DeepAR outperforms Seq2Seq and MQRNN due to the use of the Poisson distribution and weather conditions. Second, for the spatial baselines DMVST and ST-MGCN, the $\mathcal{R}_{50}$ and $\mathcal{R}_{90}$ losses increase with longer forecasting horizons, as such methods may overfit biased weights of external covariates. Third, CausalTrans outperforms all other competing methods, primarily due to the use of the causal estimator DML. For instance, compared with the second-best method, CausalTrans yields up to 9.3% lower $\mathcal{R}_{50}$ and 15.2% lower $\mathcal{R}_{90}$ on the Ride-hailing (7d, city A, Supply) dataset. Fourth, CausalTrans achieves lower losses when forecasting supply than when forecasting demand, since we explicitly model the causal relationship between supply and demand in (2). Fifth, as expected and unlike the one-day prediction, the seven-day prediction relies on unbiased distribution estimation to alleviate error accumulation. This point of view is further reinforced by the results of the ablation study reported in Appendix C.2, and causal attention is visualized in Appendix C.1.

5 CONCLUSION
Based on causal inference theory, we develop the CausalTrans framework to address collaborative supply and demand forecasting in large-scale two-sided markets. We design the fast multi-head attention to improve the computational complexity to nearly linear $O(V)$. CausalTrans achieves similar performance to TFT on the two datasets in the univariate group and outperforms all competing methods, including TFT, in the nine different experiments of the multivariate group. In particular, for our Ride-hailing datasets, CausalTrans can achieve up to 15% error reduction compared with various baseline methods. In the future, we will continue to integrate causal inference with existing deep learning methods to deal with large-scale spatio-temporal forecasting problems.

A RIDE-HAILING DATASET DETAILS
Taking city A as an example, the supply, demand, delta, and rainfall trends (January 1st, 2018 to January 1st, 2020) are plotted at the daily scale in Figure 2. We conclude that the variance of demand is bigger than that of supply, especially during rainy rush hours. Taking August 17th, 2018 in city A as another example, in Figure 3 we observe that the delta in dark red regions does not last long, as spatio-temporal supply is changed by the corresponding demand and the repositioning of drivers.
The ride-hailing platform releases useful strategies to promote orders. Collaborative demand and supply implies that the distribution of supply corresponds to the distribution of demand.

B TRAINING DETAILS
Empirically, we determine optimal hyperparameters via a pre-defined random search space. For reproducibility, we include the essential hyperparameters for our Ride-hailing dataset in Table 4.

C INTERPRETABILITY CASES
In this section, we analyze the impacts of essential components in CausalTrans and focus on what causal attention learns. First, since unbiased estimates of causal demand and supply are hard to obtain (Figure 2), we demonstrate attention-based interpretability on instance-specific significant events such as frequent rainfall, holidays, and peak time slots. Second, we perform an ablation analysis of the target probabilistic distribution PoissonOutput, causal attention with DML and Uplift, FastAttention, and SpatialFusion. Finally, we compare the speed improvements in multi-head attention on CPU (Intel Xeon E5-2630 2.20GHz) and GPU (Tesla P40), respectively.

C.1 CAUSAL ATTENTION VISUALIZATION
As one of the most essential components, causal attention employs difference stationarity tests and double machine learning to estimate the coefficients $\theta(s)$ of treatment effects. In this section, we visualize the causal attention distribution through sample-specific cases, including rainfall, weekdays, and time slots. Frequent rainfall is the most significant weather event for demand, as described in Section A. Unlike rainfall events, which are plentiful, there are only a dozen holidays in one year. If the sequential context before a holiday fails to pass the KPSS stationarity test, the causal estimator is not applied when training attention weights; a large-scale dataset is therefore fundamental to our method. For the diverse peak time slots, Section 3.1 notes that demand and supply are distributed differently at commuting peaks and night hours. In addition, seasonal fluctuations and government policies (e.g., traffic restrictions on National Day) are considerable factors.

Rainfall. Take demand forecasting at an anonymous region in city A as an example: the treatment is rainfall $s$, the target is demand $x$, and the other covariates $z$ include the regional id, time slots, and holidays. For convenience, we select a group of adjacent AB groups from the abundant rainfall cases to give an interpretation. In Figure 4, we backtrack rainfall treatments to fix AB Group 2, and search for AB Group 1 by controlling for similar covariates. Similarity means that both first-order differences are stationary, so that we obtain a group of simple randomized controlled experiments. Given the $\theta(z)$ estimated by running DML, we plot the distribution of causal attention on the right side of the green line. In practice, the growing volume of data iteratively enhances the robustness of the causal evaluation.

Collaborative demand and supply. As described in Section 3.1 and equation (2), the distribution of supply is driven by the spatio-temporal patterns of demand. Similar to the rainfall analysis above, we take another anonymous region in city A as an example. In this case, the forecasting target is supply, the causal treatment is demand, and the external variables include weather, time slots, and holidays. According to Algorithm 1, our method needs to construct AB groups and corresponding look-back AA groups from large-scale historical data. For both AB controlled experiments, the average demands of the AB and AA groups should be significantly different, while supply is unrestricted.
In the AA experiments, we empirically suggest that the time span lasts for at least one day. We trace back through the data, but selected AA groups should satisfy the randomized-grouping hypothesis as verified by a t-test. Such periods with stable supply are abundant in recent years, which implies that we can easily find proper evaluation datasets for diverse regions. In Figure 5, the trained causal attention demonstrates how the demand's causal weights are reflected in supply forecasting. Additionally, more causal modules similar to equation (2) can be designed to enhance interpretability and robustness, and such modules support end-to-end training in CausalTrans as well.

C.2 ABLATION ANALYSIS
This subsection focuses on the performance of CausalTrans when some components are excluded. The essential items are the PoissonOutput trick, Causal Attention (C.A.), FastAttention, and SpatialFusion. C.A. can be implemented by different causal algorithms, such as DML and Uplift (Künzel et al., 2019). In Table 5, we list the $\mathcal{R}_{50}$ (50% quantile point) losses on the previous eight Ride-hailing datasets. Table 5 demonstrates that C.A. (DML) outperforms all other components, and causal supply is clearly influenced by causal demand. Finally, neither FastAttention nor SpatialFusion is harmful to forecasting performance. Furthermore, spatial fusion shows a tiny improvement (+0.3% on average) in Table 5. We believe that spatial fusion aggregates adjacent hexagonal grids, reducing statistical noise in both demand and supply. For instance, in some cases, the boundary (usually around 800 meters) of adjacent grids separates large demand hotspots (e.g., large shopping malls), resulting in some noise when counting supply and demand. Spatial fusion can reduce the influence of such noise while improving the probabilistic forecasting performance. According to Table 5, the longer the forecasting horizon (e.g., 7 days versus 1 day), the more significant the gain from using spatial fusion. We consider the use of spatial fusion a trick for enhancing the robustness of forecasting. The hyperparameter of spatial fusion is the $K$ used in the k-means method. In this paper, we set $K \in \{3, 4, 5\}$. More ablation analysis of $K$ is shown in Table 6.

C.3 TIME EFFICIENCY IMPROVEMENT
One of the innovations proposed in this paper is to shorten the running time of attention without losing overall quantile loss. The long experiment cycle suggests that we should choose a representative dataset, such as one-day demand prediction in city A; the data size of city A is large enough to reflect robust attention weights. On this dataset, we are only interested in the decrease of running time as the number of heads in multi-head attention decreases. As shown in Figure 6, with 3 heads, the reduction ratios of CPU(20), GPU(1), and GPU(2) compared with softmax are 58%, 70%, and 68%, respectively. Similarly, with 5 heads, the corresponding reduction ratios are 49%, 58%, and 60%, respectively. The exact time complexity is $O(K^2 V)$ (see Section 3.4): the larger $K$, the longer the running time. In summary, the proposed time-efficient attention significantly outperforms the default softmax attention.
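As a toy illustration of why the first-order scores are cheaper than softmax (this is not the paper's CPU/GPU benchmark), the snippet below compares materializing the full $V \times V$ softmax matrix against computing the $O(V)$ normalizers of the factorized scores:

```python
import time
import numpy as np

V = 3000
p = 0.1 * np.random.randn(V)   # query-side score components
q = 0.1 * np.random.randn(V)   # key-side score components

# O(V^2): materialize all pairwise softmax weights exp(p_v + q_v')
t0 = time.perf_counter()
s = np.exp(p[:, None] + q[None, :])
w = s / s.sum(axis=1, keepdims=True)
t1 = time.perf_counter()

# O(V): row normalizers of the first-order scores 1 + p_v + q_v'
z = V * (1.0 + p) + q.sum()
t2 = time.perf_counter()

print(f"softmax O(V^2): {t1 - t0:.4f}s   taylor O(V): {t2 - t1:.6f}s")
```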
D DISCUSSIONS ON LINEAR ATTENTION
In subsection 3.4, we propose a novel linear attention based on an approximate Taylor expansion of the exponential function. In contrast, other important methods have also been developed to reduce attention cost. These attention acceleration methods can be roughly classified into two groups.

The first group constructs kernel functions to approximate the softmax function, denoted as
$$\text{softmax}(Q^T K) = \varphi(Q)^T \cdot \phi(K), \quad (14)$$
where $Q$ and $K$ are the query and key matrices, respectively. For instance, Katharopoulos et al. (2020) construct a kernel function with basis function $\varphi(x) = \phi(x) = \text{elu}(x) + 1$ and reduce the computational complexity from $O(N^2)$ to $O(N)$, but this performance is only demonstrated on image datasets. Shen et al. (2018) further explore a series of kernel forms to dissect the Transformer's attention. They propose a new variant of the Transformer's attention by modeling the input as a product of symmetric kernels. This approach changes the calculation order of softmax, which is equivalent to the basis functions $\phi(x) = \text{softmax}(x)$ and $\varphi(x) = e^x$.

The second group modifies the definition of attention. Child et al. (2019) develop sparse factorizations of the attention matrix, which reduce the computation to $O(N\sqrt{N})$, but the attention hyperparameters are very hard to initialize and the actual efficiency is hard to ensure. Kitaev et al. (2020) propose the Reformer, which replaces dot-product attention with one that uses locality-sensitive hashing, changing the complexity from $O(N^2)$ to $O(N \log N)$, where $N$ is the length of the sequence. Furthermore, they use reversible residual layers instead of standard residuals, which allows storing activations only once during training instead of $L$ times, where $L$ is the number of layers. However, the Reformer is difficult to implement and apply to different tasks. Wang et al. (2020) demonstrate that the self-attention mechanism can be approximated by a low-rank matrix, and further propose the Linformer mechanism to reduce the overall self-attention complexity to $O(N)$. Linformer uses two additional matrices $E$ and $F$ to project $K$ and $V$, respectively, in order to get $\text{Attention}(Q, K, V) = \text{softmax}(Q(EK)^T) F V$. But the MLM experiment in Linformer does not need to extract long-term dependence and cannot verify its linear time complexity for capturing long-term attention. Eliminating redundant vectors from self-attention is another key design idea. Goyal et al. (2020) exploit redundancy pertaining to word vectors and propose PoWER-BERT, which achieves up to a 4.5x reduction in inference time over BERT with <1% loss in accuracy on the standard GLUE benchmark. Similarly, Dai et al. (2020) propose the Funnel-Transformer, which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computational cost.

Finally, for our approximate Taylor expansion of softmax attention, if the feature maps (i.e., $Q$, $K$, and $V$ in self-attention) meet the positive-definiteness and normalization conditions and the task focuses on short-term dependence, then our linear attention is useful in this setting.
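To make the kernel-trick reordering in (14) concrete, here is a minimal numpy sketch in the style of Katharopoulos et al. (2020): applying the feature map and multiplying right-to-left turns the $O(N^2)$ attention into $O(N)$ in sequence length. Shapes and data are illustrative.

```python
import numpy as np

def elu_feature(x):
    """phi(x) = elu(x) + 1, a positive feature map (Katharopoulos et al., 2020)."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def kernel_linear_attention(Q, K, Val):
    """Approximates softmax attention via (14): phi(Q) @ (phi(K)^T @ Val),
    multiplied right-to-left so the cost is O(N d^2) instead of O(N^2 d)."""
    Qf, Kf = elu_feature(Q), elu_feature(K)
    KV = Kf.T @ Val              # (d, d_v), computed once for all queries
    Z = Qf @ Kf.sum(axis=0)      # per-query normalizers, shape (N,)
    return (Qf @ KV) / Z[:, None]

# Illustrative shapes: sequence length 512, head dimension 16
N, d = 512, 16
Q, K, Val = (np.random.randn(N, d) for _ in range(3))
out = kernel_linear_attention(Q, K, Val)
```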
1. What is the main contribution of the paper, and how does it tackle the demand and supply problem in the ride-hailing market?
2. What are the strengths of the proposed Transformer-like framework, CausalTrans, particularly in terms of its submodules?
3. Do you have any concerns or questions regarding the ablation analysis, especially regarding spatial fusion and fast attention?
4. How do the paper's experimental results demonstrate the effectiveness of the proposed method, especially compared to state-of-the-art methods?
5. Are there any limitations or areas for improvement regarding the experimental details and reproducibility of the results?
Review
This paper proposes a Transformer-like framework, named CausalTrans, to tackle the demand and supply problem in the ride-hailing market. The problem is formulated by training two probabilistic models which forecast collaborative demand and supply, given historical observations and dynamic covariates. The paper leverages the Transformer encoder-decoder architecture and proposes submodules (Fast S.F., C.A. and T.A.) for different functionalities. Many experiments and ablation analyses are conducted to show that the proposed method outperforms the state-of-the-art. The paper also provides good visualizations of the causal attention model, which facilitate understanding of the proposed idea. Overall, the paper is well organized and easy to read. The proposed architecture has merit: Fast S.F. uses an approximate Taylor's expansion instead of the softmax function, showing lower computational cost with little performance drop; C.A. is proposed to deal with the HTE problem in large-scale spatio-temporal forecasting problems, where a DML algorithm is also proposed to learn the C.A. model. The experimental results are comprehensive and promising. Still, I have some questions: The ablation analysis shows that spatial fusion yields a tiny improvement (+0.3% on average). I'd like the authors to elaborate more on the reason. If there are hyperparameters for the spatial fusion, please do some ablation analysis in this regard. From the experiments, the fast attention improves computation a lot with only a 0.2% performance drop in multi-horizon methods. Is it the same in iterative methods? From Table 1, the results show that, in the case of Electricity, the proposed method can't outperform the state-of-the-art (TFT) due to the lack of covariates and spatial information. I'd like to see an ablation analysis (like Table 3) in the case of Traffic, showing that, in the case of the iterative method, performance can be gained by each submodule. Lack of experimental details. For example, learning rate/strategy, batch size, optimizer, architecture settings, etc. If the author(s) plan not to release the code in the future, it's better to list the experimental details in the appendix, for reproducing the performance. Overall, I think that the proposed causal attention is valuable. By adapting the C.A. to the transformer architecture, the proposed method is comparable to the state-of-the-art. Moreover, sufficient experiments demonstrate that the proposed method can achieve decent performance on the demand-supply problem. If the paper can better clarify the points mentioned above, I'll vote positively.
ICLR
Title Causal Probabilistic Spatio-temporal Fusion Transformers in Two-sided Ride-Hailing Markets Abstract Achieving accurate spatio-temporal predictions in large-scale systems is extremely valuable in many real-world applications, such as weather forecasts, retail forecasting, and urban traffic forecasting. So far, most existing methods for multi-horizon, multi-task and multitarget predictions select important predicting variables via their correlations with responses of interest, and thus it is highly possible that many forecasting models generated from those methods are not causal, leading to poor interpretability. The aim of this paper is to develop a collaborative causal spatio-temporal fusion transformer, named CausalTrans, to establish the collaborative causal effects of predictors on multiple forecasting targets, such as supply and demand in ride-sharing platforms. Specifically, we integrate the causal attention with the Conditional Average Treatment Effect (CATE) estimation method in causal inference. Moreover, we propose a novel and fast multi-head attention evolved from Taylor’s expansion instead of softmax, reducing time complexity from O(V) to O(V), where V is the number of nodes in a graph. We further design a spatial graph fusion mechanism to significantly reduce the parameters’ scale. We conduct a wide range of experiments to demonstrate the interpretability of causal attention, the effectiveness of various model components, and the time efficiency of our CausalTrans. As shown in these experiments, our CausalTrans framework can achieve up to 15% error reduction compared with various baseline methods. 1 INTRODUCTION This paper is motivated by solving a collaborative probabilistic forecasting problem of both supply and demand in two-sided ride-hailing platforms, such as Uber and DiDi. Collaborative supply and demand relationships are common in various two-sided markets, such as Amazon, Airbnb, and eBay. We consider two-sided ride-hailing platforms as an example. In this case, we denote supply and demand as online driver number and call orders, respectively, on the platform at a specific time in a city. Some major factors for demand include rush hours, weekdays, weather conditions, transportation network, points of interest, and holidays. For instance, if it rains during peak hours in weekdays, demand will dramatically increase and last for a certain time period. In contrast, some major factors for supply include weather, holidays, traffic condition, weekdays, and platform’s dispatching and repositioning policies. Moreover, supply tends to gradually cover the area with many unsatisfied orders, that is, the distribution of supply tends to match with that of demand. We are interested in establishing collaborative causal forecasting models for demand and supply by using various predictors (or covariates). Although many learning methods have been developed to address various collaborative prediction tasks, such as spatio-temporal traffic flow prediction (Zhu & Laptev, 2017; Du et al., 2018; Zhang et al., 2019b; Ermagun & Levinson, 2018; Luo et al., 2019), multivariate prediction (Bahadori et al., 2014; Liang et al., 2018), multi-task prediction (Tang et al., 2018; Chen et al., 2018; Chandra et al., 2017), multi-view prediction (Yao et al., 2018), and multi-horizon prediction (Lim et al., 2019; Yu et al., 2020), these existing methods primarily select important predictors via their correlations with responses, leading to many forecasting models with poor interpretability. 
In contrast, we propose CausalTrans: a Collaborative Spatio-temporal Fusion Transformer, that generates causal probabilistic multi-horizon forecasts. To the best of our knowledge, this is the first work that captures collaborative causal effects of external covariates on multiple forecasting targets. Building such models is not only essential to enhancing forecasting performance, but also helps the platform to utilize various platform policies to match the distribution of supply with that of demand in two-sided markets. In the CausalTrans framework, our major contributions are summarized as follows: • We design the causal attention based on double machine learning (Chernozhukov et al., 2018) with two layers fully connected neural networks, and successful apply it to various large-scale time series forecasting problems. We conduct a wide range of experiments on real world datasets with multiple covariates and demonstrate that CausalTrans with causal attention outperforms many baseline models in various Ride-hailing scenarios. • We propose a spatial fusion mechanism based on graph attention networks (GAT) (Veličković et al., 2017) to gather local regions and enhance robustness as adjacent regions always share similar supply and demand patterns. • We propose an approximate time-efficient Taylor expansion attention to replace softmax in multihead attention of Transformers (Vaswani et al., 2017) such that time complexity reduces from O(V2) to O(V). We carry out two groups of experiments with three multi-heads and five multi-heads to verify such efficiency improvement. 2 RELATED WORK There is a large body of literature on vehicle flow forecasting (Zhu & Laptev, 2017; Bahadori et al., 2014; Tang et al., 2018; Lim et al., 2019; Yao et al., 2018). We selectively review several major methods as follows. In Zhu & Laptev (2017), the time series forecasting task as a two-step procedure includes offline pre-training and online forecasting. The offline pre-training step is an encoder-decoder framework for compressing sequential features and extracting principal components, whereas the second step gives explainable prediction changes under external variables. Bahadori et al. (2014) proposed a unified low-rank tensor learning framework for multivariate spatio-temporal analysis by combining various attributes of spatio-temporal data including spatial clustering and shared variables structure. For multi-step traffic flow prediction, Tang et al. (2018) proposed a spatio-temporal multi-task collaborative learning model to extract and learn shared information among multiple prediction tasks collaboratively. For example, such model combines spatial features collected from offline observation stations and inherent information between blended time granularities. Lim et al. (2019) proposed a temporal fusion transformer (TFT) to capture temporal correlations at each position, which was similar to self-attention mechanism and expected to capture long-term and short-term dependencies. Yao et al. (2018) proposed a deep multi-view spatio-temporal network (DMVST-Net), including a speed viewpoint (modeling the correlation between historical and future demand by LSTM (Gers & Schmidhuber, 2001)), a spatial viewpoint (modeling local spatial correlation by CNN), and a contextual viewpoint (modeling regional correlations in local temporal patterns). Overall, all above methods improve time series fitting by learning and predicting correlations across multiple spatio-temporal perspectives, targets, and tasks. 
However, those methods lack convincing interpretability of "how and to what extent external variables affect supply and demand". Achieving good demand forecasting involves not only historical demand targets, but also various current external variables (e.g., weather conditions, traffic conditions, holidays, and driver reposition). Those historical demand observations were affected by historical external factors, so the demand forecasting only based on correlation between variables is hardly convincing. Furthermore, supply forecasting is empirically affected by the distribution of demand besides current external variables. Establishing causal relationship between (supply, demand) and multiple external variables is critically important for accurate supply and demand forecasting. 3 METHODOLOGY We introduce the CausalTrans framework to efficiently establish the collaborative causal effects of multiple predictors on spatio-temporal supply and demand below. 3.1 COLLABORATIVE SUPPLY AND DEMAND FORECASTING We consider all related observations including supply, demand, and external variables collected in a city. Each day is divided into 24 hour segments and a city is divided into non-overlapping hexagonal regions (side length ranges from 600 to 1000 meters). The complete data consists of demand xv(t) ∈ R, supply yv(t) ∈ R, and dynamic covariates zv(t) ∈ Rz , where t is a specific hour segment and v ∈ V is a specific hexagon of the set of hexagonal regions, denoted as V . Dynamic covariates includes weather, holidays, social events, POI (Point Of Interests), and government policies. Weather features consist of temperature (◦C), rainfall (mm), wind level and PM2.5 (mg/m3). Holiday features are represented by one-hot boolean vectors, including seasons, weekdays, and national and popular holidays, such as Christmas Day. POI features are represented by the number of various positions, including traffic stations, business districts, communities, hospitals and schools. More detail cases about collaborative supply and demand are provided in Appendix A. The problem of interest is to use all available observations in {(xv(: t), yv(: t), zv(:, t)), v ∈ V} to predict {(xv(t+ 1 : t+ τmax), yv(t+ 1 : t+ τmax)), v ∈ V}, where τmax is a pre-specified time length, xv(t1 : t2) and yv(t1 : t2) are the demand and supply vectors starting from time point t1 to time point t2, and xv(: t2) and yv(: t2) are the demand and supply vectors starting from the earliest time point to time point t. The demand xv may depend on historical supply yv that happens several weeks (or even longer) ago. But in the latest several weeks (training period), based on our understanding of ride-sharing business, demand xv may be primarily influenced by its own recent historical patterns. Based on the above description, we formulate the learning problem of collaborative demand and supply forecasting as follows: P (xv(t+ 1 : t+ τmax)|xv(: t), zv(: t+ τmax)), (1) P (yv(t+ 1 : t+ τmax)|yv(: t), xv(: t+ τmax), zv(: t+ τmax)), (2) where P (·|·) is a conditional distribution. In (1), it is assumed that xv(t+ 1 : t+ τmax) is primarily affected by historical demands in xv(: t) and external covariates in zv(: t+ τmax). Furthermore, in (2), it is assumed that future supplies in yv(t+ 1 : t+ τmax) are primarily affected by historical supplies in yv(: t), demand patterns in xv(: t + τmax), and external covariates in zv(: t + τmax). 
Comparing (1) with (2), we assume that the distribution of supply during [t+ 1, t+ τmax] is driven by the historical and current distributions of demand besides the historical information in yv(: t) and external covariates in zv(: t+ τmax). 3.2 PROBABILISTIC FORECASTING Most time series forecasting methods produce deterministic values, whereas forecasting results might have large variation and were hardly robust due to the variation of covariates and training process. To enhance forecasting reliability, we adapt the quantile loss function with the Poisson distribution as our final optimization function 1. Empirically, following (Salinas et al., 2020; Wen et al., 2017; Li et al., 2019; Lim et al., 2019), we choose three quantile points q ∈ Q = {10%, 50%, 90%}, in which the gap between forecasting values at 1Ride-hailing supply and demand variables approximately follow with the Poisson distribution. 90% and 10% percentiles can be regarded as the confidence interval. Take demand xt forecasting at time point t as an example, the final quantile loss function is given by LQ = ∑ xt∈Ω ∑ q∈Q τmax∑ τ=1 QLq(xt, x̂qt−τ ) M · τmax , (3) whereQLq(xt, x̂qt ) = {q− I(xt ≤ x̂ q t )}(xt− x̂ q t ), Ω is the training dataset, τmax is the maximum prediction step, and I(·) is an indicator function. For a fair comparison, given the test dataset Ω̃, we employ q-risk (Salinas et al., 2020; Lim et al., 2019; Li et al., 2019), denoted asRq , to evaluate the risk level of each quantile point as follows: Rq = 2 ∑ xt∈Ω̃ ∑τmax τ=1 QLq(xt, x̂ q t−τ )∑ xt∈Ω̃ ∑τmax τ=1 |xt| . (4) There are at least two advantages of using the quantile loss function. First, the quantile loss function is more robust and stable than the mean square error or the hinge loss, especially when forecasting targets have large variation. Second, we can modify external covariates to change the confidence interval of causal attention and analyze real-world cases. 3.3 CAUSAL TRANSFORMER FRAMEWORK Our CausalTrans is a novel combination of causal estimators and the encoder-decoder architecture. Figure 1 shows the overview of the CausalTrans framework. The three key novel contributions of CausalTrans include fast spatial graph fusion, causal attention, and temporal attention units. First, from the spatial perspective, CausalTrans gathers a set of graph attention kernels (GAT) by using assignment scores extracted from temporal patterns. Moreover, we adapt the first-order Taylor’s expansion on multi-head attention from transformer to reduce time complexity from square complexity to linear complexity. Second, from the temporal perspective, causal attention based on sufficient historical observations is trained offline to evaluate the causal weights on peek time slots, those on weather conditions, and those on holidays, which are denoted as θT , θW , and θH , respectively, under diverse spatio-temporal conditions. Furthermore, we simplify three seasonal perspectives (week, month, and holidays) to represent multi-view position encoding (MVPE). Third, temporal attention is used to fill the gap between encoder and decoder, in which we add a sequence mask to ensure that the historical observations of time point t only uses observations smaller than t. We set mask out to be −∞ and illegal connection weights to be zero. In the following subsections, we introduce the main components of CausalTrans: fast spatial graph fusion and causal attention and show how they works together as a causal spatio-temporal predictor. 
Moreover, for notational simplicity, we focus on describing those components for forecasting demand xv in the following subsections, while avoid repeating the same components for supply yv . 3.4 FAST SPATIAL GRAPH FUSION ATTENTION In this subsection, we describe the fast graph fusion attention unit based on region clustering and fast multihead attention. See Figure 1 (b) for the architecture of Fast S.F.. Since GAT has achieved impressive results in traffic forecasts (Park et al. (2019); Kosaraju et al. (2019); Zhang et al. (2019a)), we use GAT to extract contextual features in huge graphs. However, directly applying GAT to large-scale forecasting problems is a challenging task, so we design spatial fusion subgraphs that share local supply and demand information. Moreover, we build our framework based on transformers (Vaswani et al., 2017). Transformers have been state-of-the-art structure in various natural language processing (NLP) tasks (Wolf et al., 2019; Wang et al., 2019) and time series forecasting due to its prominent powers of long-term feature extraction and parallel computing. However, the multi-head attention in transformers becomes a key bottleneck for time efficiency. We design an approximate Taylor’s expansion attention instead of using softmax function to accelerate matrix products. More detail results of fast attention can be found in Appendices C.2 and C.3. We briefly describe the fast spatio-temporal fusion graph attention procedure below. First, let Xt be the spatio-temporal demand feature matrix of all grids V before time t, the temporal patterns of V are represented as assignment scores given by C = (cxv,k) = [σs(σr(XtWv)Wt)]Batch, (5) where [·]Batch is the mean operator on the batch mode, k belongs to a K-dimensional cluster vector, v ∈ V , Wv and Wt are, respectively, spatial and temporal weight matrices corresponding to Xt, and σs(·) and σr(·) are sigmoid and relu activation functions, respectively. Second, we use the k-th spatial learner Gk(xv) to extract spatial features of sequential data xv in grid v, and the summed outputs of K clusters are given as follows: hv = ∑ k∈K Gk(xv)cxv,k. (6) The softmax function is used to get attention weights among regions as follows: α̂v = ∑ v′∈Nv αv,v′ · xv′ = ∑ v′∈Nv exp(σr(a T [W · xv||W · xv′ ])) · xv′∑ v′∈Nv exp(σr(a T [W · xv||W · xv′ ])) , (7) where αv,v′ is the correlation weight between v and v′, a and W are network parameters, the superscript T denotes the transpose of a vector or matrix, Nv = {v′|v′ ∈ V, v′ 6= v} is the neighboring region set of region v, and [·||·] is the concatenation operation. In (7), the time complexity of computing exp(σr(aT [W · xv||W · xv′ ])) is O(V2). Specifically, the exponent operation in exp(aT ·W ) ·X of the softmax function limits the efficiency of attention. Moreover, cluster number K V , and the time complexity of aTWX is O(K2 · V) ≈ O(V). Many recent studies find that linear attention is feasible for tasks, whose primary focus is on short-term dependence. More details are discussed in Appendix D. Our novel linear attention is easy to implement and interpret. It follows from Taylor expansion that exp(aTW ) ≈ 1 + aTW under the condition of small aTW . Analogous to the self-attention in original Transformer (Vaswani et al., 2017), the approximate mean and variance of QK T √ dk are 0 and 1, respectively, so aTW here is limited to small values. We introduce L2 normalization to ensure small aTW and 1 + aTW ≥ 0 such that exp(aTW ) ≈ T (aTW ) = 1 + ( a ||a||2 )T ( W ||W ||2 ) . 
(8) where T is an approximate Taylor expansion. Equation (8) is close to inner dot products, which have advantages on parallel implementation and linear time complexity. Finally, α̂v can be transformed into α̂v = ∑ v′∈Nv T (σr(a T [W · xv||W · xv′ ])) · xv′∑ v′∈Nv T (σr(a T [W · xv||W · xv′ ])) . (9) 3.5 CAUSAL ATTENTION MECHANISM Many external covariates causally change the distribution of demand and supply as shown in Figures 2 and 3 of the supplementary document. Meanwhile, many existing works focus on finding the correlation between external covariates and forecasting targets. For example, Li et al. (2019) designed causal convolution to enhance the locality of attention, whereas Lim et al. (2019) added the variables selection networks and gate mechanism to train attention weights. These two studies (Lim et al., 2019; Li et al., 2019) intend to calculate correlations among variables, but not causal effects under counterfactual conditions. Statistically, such issue can be regarded as a heterogeneous treatment effect (HTE) problem. See Figure 1 (c) for the architecture of C.A.. To the best of our knowledge, causal attention methods for HTE have not been proposed in large-scale spatio-temporal forecasting problems. First, we briefly describe the conditional average treatment effect (CATE) (Abrevaya et al., 2015). We still take demand vectors xv(t1 : t2) (abbreviated as x in the following) of grid v starting from time point t1 to time point t2 as an example. The X represents a set of x. The treatments we consider include weather (rainfall, temperature and wind level), peek time slots and holidays. Let x(s) be the target variable under treatment s ∈ S , and z is a vector of other covariates. The HTE for comparing two treatment levels s0 and s1 is defined as τ(s0, s1; z) = E[X(s1)−X(s0)|z]. (10) If treatment s is continuous, then the treatment effect is defined to be E[∇sX(s)|z], where ∇s = ∂/∂s. To unbiasedly estimate treatment effects, we propose a causal attention module based on double machine learning (DML) (Chernozhukov et al., 2017) based on two layers non-parametric fully connected neural networks. Specifically, we assume X(S) = θ(z) · S + g0(z) + and S = g1(z) + η, (11) where and η are independent random variables such that E[ |z] = E[η|z] = 0, g0(·) and g1(·) are two non-parametric neural networks, and θ(z) is the constant marginal CATE. Let X̃ = X − E(X|z) and S̃ = S − E(S|z), we can get X̃ = X − E(X|z) = θ(z) · {S − E(S|z)}+ = θ(z) · S̃ + . (12) Therefore, we can compute θ(z) by solving θ̂(z) = arg min θ∈Θ En [ (X̃ − θ(z) · S̃)2 ] , (13) where En denotes the empirical expectation. Large historical data source contains all kinds of experimental environment and treatments. According to Algorithm 1, given time series xv(: t) at grid v (v is dropped for readability) and treatment s1 ∈ S , loop and search two treatment levels s0 and s1 along with the historical timeline to construct the AB groups {x(t0)|s0} and {x(t1)|s1}. Then, we construct the AA groups {x(t0 − τ : t0)} and {x(t1 − τ : t1)} by a look-back window with the same length τ before t0 and t1, and make sure that both are both stationary processes with equal mean (PKPSS > 0.05 in KPSS test (Shin & Schmidt, 1992) and PT-Test > 0.05 in T-Test on both AA groups’ first-order differences). Based on the selected AA/AB groups, we employ DML to estimate causal attention. In our method, trained causal attention θ̂ will be inserted to transformer, and clustered regions share global θ̂ each other. 
Algorithm 1 Causal Attention Algorithm with DML
Input: demand matrix $x(:t)$ at a grid $v$ before time $t$; three kinds of treatments: weekday and hour slots $T(:t) = \{T_w(:t), T_h(:t)\}$, weather vectors $W(:t)$, and holiday one-hot vectors $H(:t)$
Output: causal effect coefficients $\theta_T$ for $T(:t)$, $\theta_W$ for $W(:t)$, and $\theta_H$ for $H(:t)$
1: Take $\theta_T$ as an example, and initialize the AA group and AB group on $T(:t)$ as $T_{AA} = T_{AB} = \{\}$
2: for all $\{T_w(t_0), T_w(t_1)\} \in \{\mathrm{Mon}, \mathrm{Tue}, \ldots, \mathrm{Sun}\}$, $\{T_h(t_0), T_h(t_1)\} \in \{1, \ldots, 24\}$ do
3:  if $T_w(t_0) = T_w(t_1)$, $T_h(t_0) = T_h(t_1)$, and $P_{\mathrm{T\text{-}Test}}(x(t_0), x(t_1)) < 0.05$ then
4:   for all $t'_0 \in \{:t_0\}$ and $t'_1 \in \{:t_1\}$ do
5:    Calculate the first-order differences $\tilde{x}(t'_0 : t_0)$ and $\tilde{x}(t'_1 : t_1)$
6:    if $P_{\mathrm{KPSS}}(\tilde{x}(t'_0 : t_0))$, $P_{\mathrm{KPSS}}(\tilde{x}(t'_1 : t_1))$, and $P_{\mathrm{T\text{-}Test}}(\tilde{x}(t'_0 : t_0), \tilde{x}(t'_1 : t_1))$ are all $> 0.05$ then
7:     $T_{AA}$.append($[(x(t'_0 : t_0), x(t'_1 : t_1))]$)
8:     $T_{AB}$.append($[(x(t_0), x(t_1))]$)
9:    end if
10:   end for
11:  end if
12: end for
13: Run DML on the $T_{AA}$ and $T_{AB}$ datasets and estimate the treatment coefficients $\theta_T$
14: Repeat from Step 2 to estimate $\theta_W$ and $\theta_H$ with separate DML runs
15: return $\theta_T$, $\theta_W$, and $\theta_H$
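The stationarity screening at the heart of Algorithm 1 (Steps 5-6) can be sketched as below, using the KPSS test from statsmodels and a t-test from scipy. The function only checks the AA-group acceptance criterion for one candidate pair of look-back windows; the surrounding search over treatment levels is omitted.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.tsa.stattools import kpss

def aa_pair_accepted(x0, x1, alpha=0.05):
    """Accept a candidate AA pair if the first-order differences of both
    look-back windows pass the KPSS stationarity test and a t-test for
    equal means, i.e., all p-values exceed alpha (Algorithm 1, Step 6)."""
    d0, d1 = np.diff(x0), np.diff(x1)
    p_kpss0 = kpss(d0, regression="c", nlags="auto")[1]
    p_kpss1 = kpss(d1, regression="c", nlags="auto")[1]
    p_t = ttest_ind(d0, d1, equal_var=False).pvalue
    return min(p_kpss0, p_kpss1, p_t) > alpha
```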
4 EXPERIMENTS
4.1 DATASETS
We consider four datasets (Electricity, Traffic, Retail², and Ride-hailing) in our experiments, as follows.
Electricity. Electricity contains the hourly univariate electricity consumption of 370 customers. Following Salinas et al. (2020), weekly observations before t are used as inputs to predict the next 24 hours' series.
²https://www.kaggle.com/c/favorita-grocery-sales-forecasting/
Traffic. Traffic contains the hourly univariate occupancy rates of 963 San Francisco Bay Area freeways, where the look-back rolling window and prediction step are the same as for Electricity.
Retail. Retail is the Favorita Grocery Sales dataset from a Kaggle competition (Lim et al., 2019), including daily metadata with diverse products, stores, and external variables. To compare with state-of-the-art methods (Lim et al., 2019; Salinas et al., 2020), historical observations across 90 days are used to forecast product sales over the next 30 days.
Ride-hailing. The Ride-hailing dataset contains real supply, demand, and various metadata at the hourly and hexagonal-grid scale between June 2018 and June 2020 in two big cities (city A and city B), obtained from a ride-hailing company. The first 70%, the next 10%, and the remaining 20% are used for training, validation, and testing, respectively.
We group the first two datasets into the univariate group and the last two datasets into the multivariate group.
4.2 BENCHMARKS
In this section, two kinds of forecasting methods, iterative methods and multi-horizon methods, are compared in a wide range of experiments. For our method CausalTrans, a pre-defined search space is used to determine the optimal hyperparameters. Experimental details are included in Appendix B.
Iterative methods. Iterative methods generate multi-step predictions by step-by-step rolling windows, where the results of previous steps are used as inputs to the next step. Typical iterative methods include DeepAR†, Deep State Space Models (DeepState†) (Rangapuram et al., 2018), ARIMA† (Zhang, 2003), ETS (Jain & Mallick, 2017), and TRMF (Yu et al., 2016).
Multi-horizon methods. The multi-horizon methods considered here include ConvTrans (Li et al., 2019), MQRNN† (Wen et al., 2017), Seq2Seq† (Sutskever et al., 2014), DMVST (Yao et al., 2018), ST-MGCN (Geng et al., 2019), and TFT (Lim et al., 2019). The † methods are trained using the GluonTS (Alexandrov et al., 2019) package. DMVST and ST-MGCN are spatial baselines.
4.3 RESULTS AND DISCUSSION
We adopt the quantile loss as the optimization function and compare results by the q-risks R50/R90 at the 50%/90% quantile points. More detailed descriptions of probabilistic forecasting are provided in Subsection 3.2. Table 1 includes the R50/R90 losses of all forecasting methods on the Electricity and Traffic datasets. The Electricity dataset does not have any covariates and lacks spatial information, whereas the Traffic dataset has spatial information but no multiple covariates. We observe that ConvTrans and TFT are comparable with each other and both outperform all other methods. We believe that, compared with TFT, ConvTrans is able to take advantage of the spatial information in the Traffic dataset; this is not the case for the Electricity dataset.
Tables 2 and 3 include the R50 and R90 losses of all multi-horizon methods in the multivariate group. We consider both one-day and seven-day predictions and optimize the hyperparameters of all methods by grid search. We have several important observations. First, for the one-day prediction, the iterative DeepAR outperforms Seq2Seq and MQRNN due to the use of the Poisson distribution and weather conditions. Second, for the spatial baselines DMVST and ST-MGCN, the R50 and R90 losses increase with longer forecasting horizons, as such methods may overfit biased weights of external covariates. Third, CausalTrans outperforms all other competing methods, primarily due to the use of the causal estimator DML. For instance, compared with the second-best method, CausalTrans yields up to 9.3% lower R50 and 15.2% lower R90 on the Ride-hailing (7d, city A, Supply) dataset. Fourth, CausalTrans achieves lower losses on forecasting supply than on forecasting demand, since we explicitly model the causal relationship between supply and demand in (2). Fifth, as expected, unlike the one-day prediction, the seven-day prediction relies on unbiased distribution estimation to alleviate error accumulation. This point of view is further reinforced by the results of the ablation study reported in Appendix C.2, and causal attention is visualized in Appendix C.1.
5 CONCLUSION
Based on causal inference theory, we develop the CausalTrans framework to address collaborative supply and demand forecasting in large-scale two-sided markets. We design a fast multi-head attention that reduces the computational complexity to nearly linear O(V). CausalTrans achieves performance similar to TFT on the two datasets in the univariate group and outperforms all competing methods, including TFT, in the nine experiments of the multivariate group. In particular, on our Ride-hailing datasets, CausalTrans achieves up to 15% error reduction compared with various baseline methods. In the future, we will continue to integrate causal inference with existing deep learning methods to deal with large-scale spatio-temporal forecasting problems.
A RIDE-HAILING DATASET DETAILS
Taking city A as an example, the supply, demand, delta, and rainfall trends (January 1st, 2018 to January 1st, 2020) are plotted at a daily scale in Figure 2. We conclude that the variance of demand is larger than that of supply, especially during rainy rush hours. Taking August 17th, 2018 in city A as another example in Figure 3, we observe that the delta in dark-red regions does not last for long, as the spatio-temporal supply was changed by the corresponding demand and the repositioning of drivers.
The ride-hailing platform releases useful strategies to promote orders. Collaborative demand and supply implies that the distribution of supply corresponds to the distribution of demand.
B TRAINING DETAILS
Empirically, we determine optimal hyperparameters via a pre-defined random search space. For reproducibility, we include the essential hyperparameters for our Ride-hailing dataset in Table 4.
C INTERPRETABILITY CASES
In this section, we analyze the impacts of the essential components of CausalTrans and focus on what the causal attention learns. First, since causal demand and supply can hardly be estimated without bias (Figure 2), we demonstrate attention-based interpretability on instance-specific significant events such as frequent rainfall, holidays, and peak time slots. Second, we perform an ablation analysis of the target probabilistic distribution PoissonOutput, causal attention with DML and Uplift, FastAttention, and SpatialFusion. Finally, we compare the speed improvements of the multi-head attention on a CPU (Intel Xeon E5-2630, 2.20 GHz) and a GPU (Tesla P40), respectively.
C.1 CAUSAL ATTENTION VISUALIZATION
As one of the most essential components, causal attention employs difference stationarity tests and double machine learning to estimate the coefficients θ(s) of treatment effects. In this section, we visualize the causal attention distribution through sample-specific cases, including rainfall, weekdays, and time slots. Frequent rainfall is the most significant weather event for demand, as described in Section A. Unlike the plentiful rainfall events, there are only a dozen holidays in one year; if the sequential context before a holiday fails to pass the KPSS stationarity test, the causal estimator is not applied in training the attention weights. A large-scale dataset is fundamental to our method. For the diverse peak time slots, Section 3.1 concludes that demand and supply distribute differently at commuting peaks and night hours. In addition, seasonal fluctuations and government policies (e.g., traffic restrictions on National Day) are considerable factors.
Rainfall. Taking demand forecasting at an anonymous region in city A as an example, the treatment is the rainfall s, the target is the demand x, and the other covariates z include the regional id, time slots, and holidays. For convenience, we select a group of adjacent AB groups from sufficient rainfall cases to give an interpretation. In Figure 4, we backtrack rainfall treatments to fix AB Group 2, and search for AB Group 1 by controlling for similar covariates. Similarity means that both first-order differences are stationary, so that we construct a group of simple randomized controlled experiments. Given θ(z) estimated by running DML, we plot the distribution of the causal attention on the right side of the green line. In practice, large amounts of incoming data would iteratively enhance the robustness of the causal evaluation.
Collaborative demand and supply. As described in Section 3.1 and equation (2), the distribution of supply is driven by the spatio-temporal patterns of demand. Similar to the Rainfall analysis above, we take another anonymous region in city A as an example. In this case, the forecasting target is supply, the causal treatment is demand, and the external variables include weather, time slots, and holidays. According to Algorithm 1, our method constructs AB groups and the corresponding look-back AA groups from large-scale historical data. For the AB controlled experiments, the average demands of the AB and AA groups should be significantly different, while supply is unrestricted.
In the AA experiments, we empirically suggest that the time span lasts for at least one day. We trace back through the historical data, and the selected AA groups should satisfy the randomized-grouping hypothesis as verified by the t-test. Such periods with stable supply are abundant in recent years, which implies that we can easily find proper evaluation data for diverse regions. In Figure 5, the trained causal attention demonstrates how the demand's causal weights are reflected in supply forecasting. Additionally, more causal modules similar to equation (2) can be designed to enhance interpretability and robustness, and such modules support end-to-end training in CausalTrans as well.
C.2 ABLATION ANALYSIS
This subsection focuses on the performance of CausalTrans when some components are excluded. The essential components are the PoissonOutput target distribution, Causal Attention (C.A.), FastAttention, and SpatialFusion. C.A. can be implemented with different causal algorithms, such as DML and Uplift (Künzel et al., 2019). As shown in Table 5, we list the R50 (50% quantile point) losses on the previous eight Ride-hailing datasets. Table 5 demonstrates that C.A. (DML) outperforms all other components, and that causal supply is clearly influenced by causal demand. Finally, neither FastAttention nor SpatialFusion is harmful to forecasting performance. Furthermore, spatial fusion shows a small improvement (+0.3% on average) in Table 5. We believe that spatial fusion aggregates adjacent hexagonal grids, reducing statistical noise in both demand and supply. For instance, in some cases, the boundary (usually around 800 meters) between adjacent grids separates large demand hotspots (e.g., large shopping malls), resulting in noise when counting supply and demand. Spatial fusion can reduce the influence of such noise while improving the probabilistic forecasting performance. According to Table 5, the longer the forecasting horizon (e.g., 7 days versus 1 day), the more significant the gain from spatial fusion. We consider spatial fusion a trick for enhancing the robustness of forecasting. The hyperparameter of spatial fusion is the K used in the k-means method; in this paper, we set K ∈ {3, 4, 5}. More ablation analysis of K is shown in Table 6.
C.3 TIME EFFICIENCY IMPROVEMENT
One of the innovations proposed in this paper is to shorten the running time of attention without degrading the overall quantile loss. The long experiment cycle suggests choosing a representative dataset, such as one-day demand prediction in city A, whose data size is large enough to reflect robust attention weights. On this dataset, we are interested in the decrease of running time as the number of heads in the multi-head attention decreases. As shown in Figure 6, with 3 heads, the running-time reduction ratios of CPU(20), GPU(1), and GPU(2) relative to softmax attention are 58%, 70%, and 68%, respectively. Similarly, with 5 heads, the corresponding reduction ratios are 49%, 58%, and 60%, respectively. The exact time complexity is O(K²·V) (see Section 3.4): the smaller K, the longer the running time. In summary, the proposed time-efficient attention significantly outperforms the default softmax attention.
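As a rough illustration of why removing the exponent helps, the toy micro-benchmark below times softmax row-normalization against the Taylor variant on random scores. It is not the paper's CPU(20)/GPU(1)/GPU(2) benchmark; the matrix size and repetition count are arbitrary assumptions.

```python
import time
import numpy as np

def softmax_rows(S):
    E = np.exp(S - S.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def taylor_rows(S):
    T = 1.0 + S / (np.abs(S).max() + 1e-8)   # scaled so that 1 + u >= 0
    return T / T.sum(axis=1, keepdims=True)

S = np.random.randn(4096, 4096).astype(np.float32)
for name, fn in [("softmax", softmax_rows), ("taylor", taylor_rows)]:
    t0 = time.perf_counter()
    for _ in range(10):
        fn(S)
    print(name, f"{(time.perf_counter() - t0) / 10:.4f} s per call")
```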
D DISCUSSIONS ON LINEAR ATTENTION
In Subsection 3.4, we propose a novel linear attention based on an approximate Taylor expansion of the exponential function. Other important methods have also been developed to reduce the cost of attention; these attention acceleration methods can be roughly classified into two groups.
The first group constructs kernel functions to approximate the softmax function, denoted as

$$\mathrm{softmax}(Q^T K) = \varphi(Q)^T \cdot \phi(K), \quad (14)$$

where $Q$ and $K$ are the query and key matrices, respectively. For instance, Katharopoulos et al. (2020) construct a kernel function with basis function $\varphi(x) = \phi(x) = \mathrm{elu}(x) + 1$ and reduce the computational complexity from $O(N^2)$ to $O(N)$, but this performance is only demonstrated on image datasets. Shen et al. (2018) further explore a series of kernel forms to dissect the Transformer's attention. They propose a new variant of the Transformer's attention by modeling the input as a product of symmetric kernels. This approach changes the calculation order of the softmax, which is equivalent to the basis functions $\phi(x) = \mathrm{softmax}(x)$ and $\varphi(x) = e^x$.
The second group modifies the definition of attention. Child et al. (2019) develop sparse factorizations of the attention matrix, which reduce the computation to $O(N\sqrt{N})$, but the attention hyperparameters are very hard to initialize and the actual efficiency is hard to guarantee. Kitaev et al. (2020) propose Reformer, which replaces dot-product attention with one that uses locality-sensitive hashing, changing the complexity from $O(N^2)$ to $O(N \log N)$, where $N$ is the length of the sequence. Furthermore, they use reversible residual layers instead of standard residuals, allowing activations to be stored only once during training instead of $L$ times, where $L$ is the number of layers. However, Reformer is difficult to implement and apply across different tasks. Wang et al. (2020) demonstrate that the self-attention mechanism can be approximated by a low-rank matrix, and propose the Linformer mechanism to reduce the overall self-attention complexity to $O(N)$. Linformer uses two additional matrices $E$ and $F$ to project $K$ and $V$, respectively, in order to get $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(Q(EK)^T)FV$. However, the MLM experiment in Linformer does not need to extract long-term dependence and cannot verify the linear time complexity for capturing long-term attention.
Eliminating redundant vectors from the self-attention is another key design idea. Goyal et al. (2020) exploit redundancy pertaining to word vectors and propose PoWER-BERT, achieving up to a 4.5x reduction in inference time over BERT with <1% loss in accuracy on the standard GLUE benchmark. Similarly, Dai et al. (2020) propose Funnel-Transformer, which gradually compresses the sequence of hidden states into a shorter one, reducing the computation cost.
Finally, for our approximate Taylor expansion of the softmax attention, if the feature maps (i.e., $Q$, $K$, and $V$ in self-attention) satisfy the positive-definiteness and normalization conditions and the task focuses on short-term dependence, then our linear attention is useful in this setting.
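For comparison with (14), here is a minimal sketch of the kernelized linear attention of Katharopoulos et al. (2020): with the feature map phi(x) = elu(x) + 1, the products can be reassociated as phi(Q)(phi(K)^T V), which avoids forming the N-by-N attention matrix. Shapes are illustrative assumptions.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-8):
    """Kernelized attention: Q, K are (N, d); V is (N, d_v).
    Computing phi(K)^T V first costs O(N * d * d_v) instead of the
    O(N^2 * d) of materializing softmax(Q K^T)."""
    def phi(x):
        return np.where(x > 0, x + 1.0, np.exp(x))      # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    numerator = Qp @ (Kp.T @ V)                          # right-to-left product
    denominator = Qp @ Kp.sum(axis=0)[:, None] + eps     # (N, 1) normalizer
    return numerator / denominator
```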
1. What is the focus and contribution of the paper on predicting supply and demand in ride-hailing platforms?
2. What are the strengths of the proposed approach, particularly in its ability to extend to other two-sided markets?
3. Do you have any concerns about the assumption that the supply is always conditioned on the demand?
4. How does the reviewer assess the interpretability of the predictions, and what specific results would help demonstrate this aspect of the proposed solution?
5. What are the limitations of using higher-order Taylor terms to approximate the attention procedure, and how does it impact the novelty of the paper?
6. How could the authors improve the clarity of their exposition, specifically regarding the use of the term "collaborative" in the model and the significance of the results?
7. What additional suggestions does the reviewer have for improving the paper, and how might the authors address the concerns raised by the reviewer?
Review
The authors propose an interpretable spatio-temporal fusion transformer for predicting supply and demand in ride-hailing platforms. More generally, the authors claim that their approach extends to other two-sided markets such as electric grids, retail, etc., showing empirical results of their approach on data from these markets. The paper is well-written and easy to follow.
The assumption that the supply is always conditioned on the demand x_v(: t + \tau_max) is too strong. Is there a smoothing of this assumption where the demand depends on a moving window over the past and future supply?
The experimental results show the efficacy of the proposed ML architecture. However, one result that would greatly help the conclusions is one that clearly shows the interpretability of the predictions, given that the authors state this as one of the main differentiators of their proposed solution.
The use of higher-order Taylor terms to approximate the attention procedure is interesting, but the time complexity reductions are obvious and therefore do not meet the novelty bar for an ICLR submission.
Other suggestions: make the use of the word 'collaborative' in the model clearer. The significance of the results is not completely clear, nor how the interpretability helps the understanding of the results. Adding clarifications will help the exposition.
I still have some concerns regarding the paper: the lack of baseline comparisons with spatio-temporal data (as also observed by fellow reviewers). My other concern also remains: from the authors' response, it is not clear how one can clearly attribute explainability of the results from their analysis of the model.
ICLR
Title Causal Probabilistic Spatio-temporal Fusion Transformers in Two-sided Ride-Hailing Markets
Abstract Achieving accurate spatio-temporal predictions in large-scale systems is extremely valuable in many real-world applications, such as weather forecasting, retail forecasting, and urban traffic forecasting. So far, most existing methods for multi-horizon, multi-task, and multi-target predictions select important predictor variables via their correlations with the responses of interest, and thus it is highly possible that many forecasting models generated by those methods are not causal, leading to poor interpretability. The aim of this paper is to develop a collaborative causal spatio-temporal fusion transformer, named CausalTrans, to establish the collaborative causal effects of predictors on multiple forecasting targets, such as supply and demand in ride-sharing platforms. Specifically, we integrate the causal attention with the Conditional Average Treatment Effect (CATE) estimation method from causal inference. Moreover, we propose a novel and fast multi-head attention derived from Taylor's expansion instead of the softmax, reducing the time complexity from O(V²) to O(V), where V is the number of nodes in a graph. We further design a spatial graph fusion mechanism to significantly reduce the parameter scale. We conduct a wide range of experiments to demonstrate the interpretability of causal attention, the effectiveness of the various model components, and the time efficiency of our CausalTrans. As shown in these experiments, our CausalTrans framework can achieve up to 15% error reduction compared with various baseline methods.
1 INTRODUCTION
This paper is motivated by solving a collaborative probabilistic forecasting problem for both supply and demand in two-sided ride-hailing platforms, such as Uber and DiDi. Collaborative supply and demand relationships are common in various two-sided markets, such as Amazon, Airbnb, and eBay. We consider two-sided ride-hailing platforms as an example. In this case, we define supply and demand as the number of online drivers and the number of call orders, respectively, on the platform at a specific time in a city. Major factors for demand include rush hours, weekdays, weather conditions, the transportation network, points of interest, and holidays. For instance, if it rains during peak hours on weekdays, demand will dramatically increase and last for a certain time period. In contrast, major factors for supply include weather, holidays, traffic conditions, weekdays, and the platform's dispatching and repositioning policies. Moreover, supply tends to gradually cover areas with many unsatisfied orders; that is, the distribution of supply tends to match that of demand. We are interested in establishing collaborative causal forecasting models for demand and supply using various predictors (or covariates). Although many learning methods have been developed to address various collaborative prediction tasks, such as spatio-temporal traffic flow prediction (Zhu & Laptev, 2017; Du et al., 2018; Zhang et al., 2019b; Ermagun & Levinson, 2018; Luo et al., 2019), multivariate prediction (Bahadori et al., 2014; Liang et al., 2018), multi-task prediction (Tang et al., 2018; Chen et al., 2018; Chandra et al., 2017), multi-view prediction (Yao et al., 2018), and multi-horizon prediction (Lim et al., 2019; Yu et al., 2020), these existing methods primarily select important predictors via their correlations with responses, leading to many forecasting models with poor interpretability.
In contrast, we propose CausalTrans, a Collaborative Spatio-temporal Fusion Transformer that generates causal probabilistic multi-horizon forecasts. To the best of our knowledge, this is the first work that captures the collaborative causal effects of external covariates on multiple forecasting targets. Building such models is not only essential to enhancing forecasting performance, but also helps the platform utilize various policies to match the distribution of supply with that of demand in two-sided markets. Within the CausalTrans framework, our major contributions are summarized as follows:
• We design a causal attention based on double machine learning (Chernozhukov et al., 2018) with two-layer fully connected neural networks, and successfully apply it to various large-scale time series forecasting problems. We conduct a wide range of experiments on real-world datasets with multiple covariates and demonstrate that CausalTrans with causal attention outperforms many baseline models in various Ride-hailing scenarios.
• We propose a spatial fusion mechanism based on graph attention networks (GAT) (Veličković et al., 2017) to gather local regions and enhance robustness, as adjacent regions always share similar supply and demand patterns.
• We propose an approximate time-efficient Taylor-expansion attention to replace the softmax in the multi-head attention of Transformers (Vaswani et al., 2017), so that the time complexity reduces from $O(\mathcal{V}^2)$ to $O(\mathcal{V})$. We carry out two groups of experiments, with three and with five attention heads, to verify this efficiency improvement.
2 RELATED WORK
There is a large body of literature on vehicle flow forecasting (Zhu & Laptev, 2017; Bahadori et al., 2014; Tang et al., 2018; Lim et al., 2019; Yao et al., 2018). We selectively review several major methods as follows. In Zhu & Laptev (2017), the time series forecasting task is treated as a two-step procedure that includes offline pre-training and online forecasting: the offline pre-training step is an encoder-decoder framework for compressing sequential features and extracting principal components, whereas the online step gives explainable prediction changes under external variables. Bahadori et al. (2014) proposed a unified low-rank tensor learning framework for multivariate spatio-temporal analysis by combining various attributes of spatio-temporal data, including spatial clustering and a shared variable structure. For multi-step traffic flow prediction, Tang et al. (2018) proposed a spatio-temporal multi-task collaborative learning model to extract and learn shared information among multiple prediction tasks collaboratively; for example, such a model combines spatial features collected from offline observation stations with inherent information between blended time granularities. Lim et al. (2019) proposed a temporal fusion transformer (TFT) to capture the temporal correlations at each position, which is similar to the self-attention mechanism and is expected to capture long-term and short-term dependencies. Yao et al. (2018) proposed a deep multi-view spatio-temporal network (DMVST-Net), including a speed viewpoint (modeling the correlation between historical and future demand by LSTM (Gers & Schmidhuber, 2001)), a spatial viewpoint (modeling local spatial correlation by CNN), and a contextual viewpoint (modeling regional correlations in local temporal patterns). Overall, all of the above methods improve time series fitting by learning and predicting correlations across multiple spatio-temporal perspectives, targets, and tasks.
However, those methods lack convincing interpretability of "how and to what extent external variables affect supply and demand". Achieving good demand forecasting involves not only historical demand targets, but also various current external variables (e.g., weather conditions, traffic conditions, holidays, and driver repositioning). Those historical demand observations were affected by historical external factors, so demand forecasting based only on correlations between variables is hardly convincing. Furthermore, supply forecasting is empirically affected by the distribution of demand in addition to the current external variables. Establishing causal relationships between (supply, demand) and multiple external variables is critically important for accurate supply and demand forecasting.
3 METHODOLOGY
We introduce the CausalTrans framework for efficiently establishing the collaborative causal effects of multiple predictors on spatio-temporal supply and demand below.
3.1 COLLABORATIVE SUPPLY AND DEMAND FORECASTING
We consider all related observations, including supply, demand, and external variables, collected in a city. Each day is divided into 24 hour segments, and a city is divided into non-overlapping hexagonal regions (with side lengths ranging from 600 to 1000 meters). The complete data consist of demand $x_v(t) \in \mathbb{R}$, supply $y_v(t) \in \mathbb{R}$, and dynamic covariates $z_v(t) \in \mathbb{R}^z$, where $t$ is a specific hour segment and $v \in \mathcal{V}$ is a specific hexagon in the set of hexagonal regions, denoted as $\mathcal{V}$. Dynamic covariates include weather, holidays, social events, POIs (Points Of Interest), and government policies. Weather features consist of temperature (°C), rainfall (mm), wind level, and PM2.5 (mg/m³). Holiday features are represented by one-hot boolean vectors, including seasons, weekdays, and national and popular holidays, such as Christmas Day. POI features are represented by the counts of various positions, including traffic stations, business districts, communities, hospitals, and schools. More detailed cases of collaborative supply and demand are provided in Appendix A.
The problem of interest is to use all available observations in $\{(x_v(:t), y_v(:t), z_v(:t)),\, v \in \mathcal{V}\}$ to predict $\{(x_v(t+1:t+\tau_{\max}), y_v(t+1:t+\tau_{\max})),\, v \in \mathcal{V}\}$, where $\tau_{\max}$ is a pre-specified time length, $x_v(t_1:t_2)$ and $y_v(t_1:t_2)$ are the demand and supply vectors from time point $t_1$ to time point $t_2$, and $x_v(:t)$ and $y_v(:t)$ are the demand and supply vectors from the earliest time point to time point $t$. The demand $x_v$ may depend on historical supply $y_v$ from several weeks (or even longer) ago. But within the latest several weeks (the training period), based on our understanding of the ride-sharing business, demand $x_v$ is primarily influenced by its own recent historical patterns. Based on the above description, we formulate the learning problem of collaborative demand and supply forecasting as follows:

$$P(x_v(t+1:t+\tau_{\max}) \mid x_v(:t),\, z_v(:t+\tau_{\max})), \quad (1)$$
$$P(y_v(t+1:t+\tau_{\max}) \mid y_v(:t),\, x_v(:t+\tau_{\max}),\, z_v(:t+\tau_{\max})), \quad (2)$$

where $P(\cdot \mid \cdot)$ is a conditional distribution. In (1), it is assumed that $x_v(t+1:t+\tau_{\max})$ is primarily affected by the historical demands in $x_v(:t)$ and the external covariates in $z_v(:t+\tau_{\max})$. Furthermore, in (2), it is assumed that the future supplies in $y_v(t+1:t+\tau_{\max})$ are primarily affected by the historical supplies in $y_v(:t)$, the demand patterns in $x_v(:t+\tau_{\max})$, and the external covariates in $z_v(:t+\tau_{\max})$. Comparing (1) with (2), we assume that the distribution of supply during $[t+1, t+\tau_{\max}]$ is driven by the historical and current distributions of demand, in addition to the historical information in $y_v(:t)$ and the external covariates in $z_v(:t+\tau_{\max})$.
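A minimal sketch of how one training example per grid can be assembled under the factorization (1)-(2). The array layout is an illustrative assumption: x and y are the demand and supply series of one grid, z its covariates, and covariates are taken as known over the forecast horizon, as (1)-(2) assume.

```python
import numpy as np

def make_example(x, y, z, t, tau_max):
    """x, y: (T,) demand/supply series; z: (T, p) covariates; t: history cutoff.
    Returns the conditioning sets and targets of (1) and (2) for one grid."""
    demand_inputs = (x[:t], z[: t + tau_max])            # x_v(:t), z_v(:t+tau_max)
    demand_target = x[t : t + tau_max]                   # x_v(t+1 : t+tau_max)
    # per (2), supply additionally conditions on the full demand path
    supply_inputs = (y[:t], x[: t + tau_max], z[: t + tau_max])
    supply_target = y[t : t + tau_max]
    return (demand_inputs, demand_target), (supply_inputs, supply_target)
```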
3.2 PROBABILISTIC FORECASTING
Most time series forecasting methods produce deterministic values, whereas the forecasting results may have large variation and are hardly robust due to the variation of the covariates and of the training process. To enhance forecasting reliability, we adopt the quantile loss function with the Poisson distribution as our final optimization function¹. Empirically, following (Salinas et al., 2020; Wen et al., 2017; Li et al., 2019; Lim et al., 2019), we choose three quantile points $q \in Q = \{10\%, 50\%, 90\%\}$, in which the gap between the forecast values at the 90% and 10% percentiles can be regarded as a confidence interval.
¹Ride-hailing supply and demand variables approximately follow the Poisson distribution.
Taking the demand forecast $x_t$ at time point $t$ as an example, the final quantile loss function is given by

$$\mathcal{L}_Q = \sum_{x_t \in \Omega} \sum_{q \in Q} \sum_{\tau=1}^{\tau_{\max}} \frac{QL_q(x_t, \hat{x}^q_{t-\tau})}{M \cdot \tau_{\max}}, \quad (3)$$

where $QL_q(x_t, \hat{x}^q_t) = \{q - \mathbb{I}(x_t \le \hat{x}^q_t)\}(x_t - \hat{x}^q_t)$, $\Omega$ is the training dataset, $\tau_{\max}$ is the maximum prediction step, and $\mathbb{I}(\cdot)$ is an indicator function. For a fair comparison, given the test dataset $\tilde{\Omega}$, we employ the q-risk (Salinas et al., 2020; Lim et al., 2019; Li et al., 2019), denoted as $\mathcal{R}_q$, to evaluate the risk level at each quantile point as follows:

$$\mathcal{R}_q = \frac{2 \sum_{x_t \in \tilde{\Omega}} \sum_{\tau=1}^{\tau_{\max}} QL_q(x_t, \hat{x}^q_{t-\tau})}{\sum_{x_t \in \tilde{\Omega}} \sum_{\tau=1}^{\tau_{\max}} |x_t|}. \quad (4)$$

There are at least two advantages of using the quantile loss function. First, the quantile loss function is more robust and stable than the mean squared error or the hinge loss, especially when the forecasting targets have large variation. Second, we can modify external covariates to change the confidence interval of the causal attention and analyze real-world cases.
3.3 CAUSAL TRANSFORMER FRAMEWORK
Our CausalTrans is a novel combination of causal estimators and the encoder-decoder architecture. Figure 1 shows an overview of the CausalTrans framework. The three key novel components of CausalTrans are the fast spatial graph fusion, causal attention, and temporal attention units. First, from the spatial perspective, CausalTrans gathers a set of graph attention kernels (GAT) using assignment scores extracted from temporal patterns. Moreover, we adopt a first-order Taylor expansion of the transformer's multi-head attention to reduce the time complexity from quadratic to linear. Second, from the temporal perspective, causal attention based on sufficient historical observations is trained offline to evaluate the causal weights of peak time slots, weather conditions, and holidays, denoted as $\theta_T$, $\theta_W$, and $\theta_H$, respectively, under diverse spatio-temporal conditions. Furthermore, we simplify three seasonal perspectives (week, month, and holidays) to represent the multi-view position encoding (MVPE). Third, temporal attention is used to bridge the encoder and decoder, in which we add a sequence mask to ensure that predictions at time point $t$ use only observations earlier than $t$; we set masked-out entries to $-\infty$ and illegal connection weights to zero. In the following subsections, we introduce the main components of CausalTrans, fast spatial graph fusion and causal attention, and show how they work together as a causal spatio-temporal predictor.
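To make the objective of Section 3.2 concrete, here is a minimal NumPy sketch of the quantile loss $QL_q$ in (3) and the q-risk in (4); the flattened array layout over series and horizons is an illustrative assumption.

```python
import numpy as np

def quantile_loss(x, x_hat_q, q):
    """QL_q(x, x_hat) = (q - 1{x <= x_hat}) * (x - x_hat), elementwise."""
    return (q - (x <= x_hat_q)) * (x - x_hat_q)

def q_risk(x, x_hat_q, q):
    """Normalized q-risk of (4), e.g. q = 0.5 for R50 and q = 0.9 for R90."""
    return 2.0 * quantile_loss(x, x_hat_q, q).sum() / np.abs(x).sum()
```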
1. What is the main contribution of the paper regarding probabilistic forecasting?
2. What are the strengths and weaknesses of the proposed approach in handling causality and spatial aspects?
3. How does the reviewer assess the clarity and readability of the paper's content, particularly in section 3.3?
4. What are the concerns regarding the experimental design and metrics used in the paper?
5. Are there any missing details or discussion related to the Taylor expansion and attention cost in the paper?
6. How does the reviewer evaluate the relevance and impact of the paper's contributions in the context of ride-hailing forecasting?
7. What are some additional suggestions for improving the paper, such as using public datasets or comparing with relevant baselines?
Review
Review

The paper is concerned with multivariate probabilistic forecasting applied in the context of ride-hailing. The goal is to handle the spatial aspect, causal effects of external covariates (e.g., the causal impact of rain or Christmas on supply/demand), and the dependency between supply and demand. To this end, the authors propose a causal attention mechanism (to not just detect correlation with covariates) and a custom transformer architecture where the quadratic attention cost is avoided. Experiments are performed on public and private datasets against a set of (mostly) univariate baselines.

Strong points:
- highly relevant problem
- novel consideration of handling causality instead of simple covariate correlation in this context

Weak points:
- very hard to read, many details missing and notation not introduced
- lack of relevant baselines (e.g., baselines using spatial information)
- lack of relevant metrics to illustrate the benefit of the contribution
- missing related-work discussion for efficient attention computation

I recommend a reject for this paper. While the problem presented by the paper is highly relevant for the community, the paper has several issues that make it not ready for publication. The first issue is the clarity of the description, in particular section 3.3, where many terms and notations are not introduced at all, making the paper very hard to read (see detailed comments). The experiments are also problematic, as many details were unclear (see detailed comments) and would be far from reproducible. But the most problematic bit is in their design: key aspects introduced in the paper are not asserted in the experiments. 1) While the method claims ~5% error improvement on the ride-hailing benchmark, numerous methods have been proposed to handle spatio-temporal forecasting, yet only one baseline (TFT) is provided with spatial information (with no detail). 2) Since you are interested in the probabilistic forecast of the joint demand/supply targets (collaborative), your experimental setup should consider a joint metric (CRPS-sum for instance [1,2]) rather than the average of R50/R90 metrics over the demand/supply targets, which is "blind" to the correlation between the two targets. Finally, I had issues with the Taylor expansion, as its description was not clear (see detailed comments), and I also found the related-work discussion missing in this respect. Attention cost has been decreased to $O(N\sqrt{N})$, $O(N\log N)$ and $O(N)$ respectively in [3, 4, 5]; this discussion is clearly needed to relate your contribution.

Additional questions to the author
- 3.1: at no point in the paper do you say explicitly what loss you are minimizing, which makes it harder to read. I would recommend specifying in 3.1 directly that you minimize the NLL of a Poisson distribution (which is what I understood) and also stating clearly how predictions are made.
- 3.3: the beginning of the section mixes model description and related work.
- 3.3: Eq. (3) has two pieces of unintroduced notation: what is the \bar? what is "batch"?
- 3.3: Eq. (6) is confusing: you are mixing norms and vectors (unintroduced), T is not explicitly introduced, and there is no clear explanation of why the approximation holds (e.g., why a^T W would be small).
- 4.2: DeepState is not an auto-regressive model; it belongs to the second category.
- 4.3: "CausalTrans outperforms all other competing methods primarily due to the use of the causal estimator DML and spatial information".
This is very problematic as you don't compare with methods designed to handle spatial information.

Additional feedback (not part of the decision assessment)
- 3.1: eqs. (1)-(2) assume that only demand impacts supply; it would be good to discuss this at least, as one could imagine cases where it breaks.
- 3.1: weather features: you should specify whether they are known in advance.
- 3.3: the section would be easier to read if you indicated variable dimensions.

Finally, I would recommend using a public dataset rather than the private one for your benchmark (e.g., the NY taxi dataset or Uber). You also have covariates such as date or weather forecast, and this would make your work comparable and reproducible in the future.

[1] High-dimensional multivariate forecasting with low-rank Gaussian Copula Processes. http://papers.nips.cc/paper/8907-high-dimensional-multivariate-forecasting-with-low-rank-gaussian-copula-processes
[2] Multi-variate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows. https://arxiv.org/abs/2002.06103
[3] Generating Long Sequences with Sparse Transformers. https://arxiv.org/pdf/1904.10509.pdf
[4] Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. https://arxiv.org/abs/2006.16236
[5] Reformer: The Efficient Transformer. https://arxiv.org/abs/2001.04451
ICLR
Title
Some Practical Concerns and Solutions for Using Pretrained Representation in Industrial Systems

Abstract
Deep learning has dramatically changed the way data scientists and engineers craft features – the once tedious process of measuring and constructing can now be achieved by training learnable representations. Recent work shows pretraining can endow representations with relevant signals, and in practice they are often used as feature vectors in downstream models. In real-world production, however, we have encountered key problems that cannot be explained by existing knowledge. They raise the concern that naive use of pretrained representations as feature vectors could lead to unwarranted and suboptimal solutions. Our investigation reveals critical insights into the gap of uniform convergence for analyzing pretrained representations, their stochastic nature under gradient-descent optimization, what model convergence means for them, and how they might interact with downstream tasks. Inspired by our analysis, we explore a simple yet powerful approach that can refine pretrained representations in multiple ways, which we call Featurizing Pretrained Representations. Our work balances practicality and rigor, and contributes to both applied and theoretical research on representation learning.

1 INTRODUCTION

The ability of neural networks to learn predictive feature representations from data has always fascinated practitioners and researchers (Bengio et al., 2013). The learnt representations, if proven reliable, can potentially renovate the entire life cycle and workflow of industrial machine learning. Behind reliability are the three core principles for extracting information from data, namely stability, predictability, and computability (Yu, 2020). These three principles can not only justify the practical value of learnt representations, but also lead to the efficiency, interpretability, and reproducibility that are cherished in real-world production. Since pretrained representations are optimized to align with the given task, intuitively, they should satisfy all three principles in a reasonable setting.

However, when productionizing an automated pipeline for pretrained representations in an industrial system, we encountered key problems that cannot be explained by existing knowledge. In particular, while the daily refresh follows the same modelling and training configurations and uses essentially the same data1, downstream model owners reported unexpectedly high fluctuations in performance when retraining their models.

1Since the pretraining uses years of history data, the proportion of new daily data is quite small.

For illustration purposes, here we reproduce the issue using benchmark data, and take one further step: the pretraining is repeated on exactly the same data, under the same model configuration, training setup, and stopping criteria. We run ten independent trials to essentially generate i.i.d. versions of the pretrained representation. We first visualize the dimension-wise empirical variances of the pretrained representations, provided in Figure 1a. It is surprising to find that while the pretraining losses converge to almost the same value in each run (Figure 1b), there is a high degree of uncertainty about the exact value of each dimension. Further, in Figure 1c, we observe that the uncertainty (empirical variance) of the pretrained representation increases as the pretraining progresses.
In the downstream task where pretrained representations are used as feature vectors (see the right figure), we observe that performance does fluctuate wildly from run to run. Since we use logistic regression as the downstream model, the fluctuation can only be caused by the instability of the pretrained representations, because we can effectively optimize the downstream model to its global optimum. To demonstrate that the above phenomenon is not caused by a specific model or dataset, we also experiment with a completely different pretraining model and benchmark data from another domain. We perform the same analysis, and unfortunately the same issues persist (Figure A.1 in the Appendix).

Existing deep learning theory, both the convergence and the generalization results (we discuss them more in Section 2), fails to explain why we should expect pretrained representations to work well in a downstream task when their exact values are so unstable. This is especially concerning for industrial systems, as the issue can lead to unwarranted and suboptimal downstream solutions. We experienced this issue firsthand in production, so we are motivated to crack the mysteries behind pretrained representations, and to understand if and how their stability can be improved without sacrificing predictability and computability. We summarize our contributions below.

• We provide a novel uniform convergence result for pretrained representations, which points out gaps relating to the stability and predictability issues.
• We break down and clarify the stability issue by revealing the stochastic nature of pretrained representations, the convergence of the model output, and the stable and unstable components involved.
• We investigate the interaction between pretrained representations and downstream tasks in both parametric and non-parametric settings, each revealing how predictability can benefit or suffer from stability (or instability) for particular usages of pretrained representations.
• We discuss the idea of featurizing pretrained representations, and propose a highly practical solution that has nice guarantees and balances stability, predictability, and computability. We also examine its effectiveness in real-world experiments and online testing.

2 RELATED WORK

It is not until recent years that deep learning theory has seen major progress. Zhang et al. (2016) observed that the parameters of neural networks stay close to initialization during training. At initialization, wide neural networks with random weights and biases are Gaussian processes, a phenomenon first discussed by Neal (1995) and recently refined by Lee et al. (2017); Yang (2019). However, these works do not consider the effect of optimization. The Neural Tangent Kernel provides a powerful tool to study the limiting convergence and generalization behavior of gradient descent optimization (Jacot et al., 2018; Allen-Zhu et al., 2019), but it sometimes fails to capture meaningful characteristics of practical neural networks (Woodworth et al., 2020; Fort et al., 2020). Moreover, those works require parameters to stay close to initialization, a regime in which useful representation learning would not take place.
Indeed, it has also caught people's attention that representation learning can go beyond the neural tangent kernel regime (Yehudai & Shamir, 2019; Wei et al., 2019; Allen-Zhu & Li, 2019; Malach et al., 2021), among which a line of work connects the continuous-time training dynamics with mean-field approximation (Mei et al., 2018; Sirignano & Spiliopoulos, 2020), and another direction studies the lazy training regime (Chizat et al., 2019; Ghorbani et al., 2019) where only the last layer of a neural network is trained. Unfortunately, their assumed training schemas all deviate from practical representation learning. Still, part of our analysis in Section 4.2 can be viewed as a practical discrete-time extension of the mean-field method. Perhaps the most practical setting for studying pretrained representations is Arora et al. (2019), which analyzes contrastive representation learning under a particular data-generating mechanism. However, their results do not generalize to broader settings, and they cannot explain the stability issue of pretrained representations.

3 PRELIMINARIES

Notations. We use $x \in \mathcal{X} \subseteq \mathbb{R}^{d_0}$ and $y \in \mathbb{R}$ to denote the raw feature and outcome, uppercase letters to denote random variables and measures, and bold-font letters to denote matrices. Let $h: \mathcal{X} \to \mathbb{R}^d$ be the representation hypothesis, and $f: \mathbb{R}^d \to \mathbb{R}$ be the prediction hypothesis. The hypothesis classes are given by $\mathcal{H}$ and $\mathcal{F}$ respectively. Denote by $\circ$ the operator for function composition, and $\ell: \mathbb{R}\times\mathbb{R} \to [0,1]$ the loss function. We assume $\ell$ is 1-Lipschitz without loss of generality. Then the risk for a pair $(h \in \mathcal{H}, f \in \mathcal{F})$ is given by:
$$R(h, f) := \mathbb{E}_{(X,Y)\sim P}\big[\ell\big(f \circ h(X), Y\big)\big],$$
where $P$ is a measure on $(\mathcal{X}, \mathbb{R})$. We also use $P^n$ to denote the corresponding product measure for $(X_1, Y_1), \ldots, (X_n, Y_n)$.

The one-layer multi-layer perceptron (MLP) is perhaps the most fundamental representation learning model, given by: $f \circ h(x) = \Theta\sigma(\mathbf{W}x)$. Here, $\sigma$ is the activation function, and $\mathbf{W} \in \mathbb{R}^{d\times d_0}$, $\Theta \in \mathbb{R}^{k\times d}$. We mention that adding bias terms will not affect our analysis, so we drop them for brevity. In practice, $\Theta$ and $\mathbf{W}$ are often initialized as scaled i.i.d. Gaussian random variables that follow $N(0, 1/d)$. We use $[\mathbf{W}]_i$ to denote the $i$th row of a matrix. The popular contrastive representation learning can also be considered a special case of this configuration2. Define the shorthand $g(x) := \Theta\sigma(\mathbf{W}x)$.

A typical pretraining process involves optimizing the risk function defined for pretraining and extracting the hidden representation. The optimization is done via stochastic gradient descent (SGD), e.g.
$$\mathbf{W}^{(t+1)} = \mathbf{W}^{(t)} - \alpha\nabla_{\mathbf{W}}\,\ell\big(g(x^{(t)}), y^{(t)}\big),$$
where $\alpha$ is the learning rate. For convenience, we consider each mini-batch to contain one random sample, denoted by $(x^{(t)}, y^{(t)})$ for the $t$th step. Given a representation hypothesis $h$, we define:
$$f_{h,n} := \arg\min_{f\in\mathcal{F}}\ \frac{1}{n}\sum_{i=1}^n \ell\big(f(h(x_i)), y_i\big).$$
In the sequel, how well $f_{h,n}\circ h$ generalizes to a new i.i.d. sample of the downstream task is measured by:
$$R(h) := \mathbb{E}_{(X,Y)\sim P}\,\mathbb{E}_{P^n}\big[\ell\big(f_{h,n}\circ h(X), Y\big)\big],$$
where the second expectation $\mathbb{E}_{P^n}$ is taken with respect to the downstream data $\{X_i, Y_i\}_{i=1}^n$ underlying $f_{h,n}$. Its empirical version is given by $R_n(h) := \frac{1}{n}\sum_i \ell\big(f_{h,n}\circ h(X_i), Y_i\big)$.

4 MAIN ANALYSIS

4.1 THE GAP OF UNIFORM CONVERGENCE FOR PRETRAINED REPRESENTATION

Suppose $h$ and $f$ are optimized jointly (end-to-end) via empirical risk minimization (ERM), which amounts to solving: $\arg\min_{h\in\mathcal{H}, f\in\mathcal{F}} \frac{1}{n}\sum_i \ell(f\circ h(x_i), y_i)$. In this setting, the generalization behavior of the solution is well-studied.
In particular, using the notion of Gaussian (or Rademacher) complexity3, the generalization error can be bounded by $O\big(G_n(\mathcal{F}\circ\mathcal{H})/n + \sqrt{\log(1/\delta)/n}\big)$ with probability at least $1-\delta$ (Bartlett & Mendelson, 2002). This result, known as uniform convergence, is especially appealing because it both includes problem-specific aspects and applies to all functions in the composite hypothesis class $\mathcal{F}\circ\mathcal{H} := \{f\circ h : f\in\mathcal{F}, h\in\mathcal{H}\}$. Is it possible to achieve a comparable result for pretrained representations? Perhaps the most ideal setting for uniform convergence to hold under pretrained representations is:

C1: the pretraining and the downstream training use the same data $\{(X_i, Y_i)\}_{i=1}^n$, i.e.
$$\hat h, \hat f := \arg\min_{h\in\mathcal{H}, f\in\mathcal{F}}\ \frac{1}{n}\sum_{i=1}^n \ell\big(f\circ h(X_i), Y_i\big), \qquad f_{\hat h,n} = \arg\min_{f\in\mathcal{F}}\ \frac{1}{n}\sum_{i=1}^n \ell\big(f(\hat h(X_i)), Y_i\big).$$

2We can simply set $x_i \in \mathbb{R}^n$ as one-hot encodings, and $\mathbf{W}, \Theta \in \mathbb{R}^{d_0\times d}$ where they are allowed to coincide. Then we let $h(x_i) = [\mathbf{W}]_i$ or $[\Theta]_i$ depending on the context. The activation becomes the identity function, and $\ell(f(x_i), x_j) = \log\big(1-\sigma(h(x_i)^\top h(x_j))\big)$ (or $\log\sigma(h(x_i)^\top h(x_j))$), with $\sigma(\cdot)$ being the sigmoid function.
3We will use Gaussian complexity $G(\cdot)$ here for some of its technical conveniences. We let $G_n$ be the empirical Gaussian complexity. See Appendix A for detail.

C2: they rely on the same prediction function class $\mathcal{F}$.

These two conditions essentially eliminate the confounding effects of model and data mismatch. Thus, if uniform convergence cannot hold in this setting, it is unlikely to serve more general use cases. We first summarize the common intuition behind why pretrained representations might work:

• the pretraining objective, when well-designed, reasonably predicts the empirical downstream risk of $f_{h,n}$ (intuition 1);
• $f_{h,n}$'s empirical downstream risk generalizes to the true downstream risk (intuition 2).

These two intuitions have also been exemplified for contrastive representation learning in Arora et al. (2019) and its follow-up work. Our main contribution here is to make the above intuitions rigorous, and to reveal whether they are indeed sufficient for uniform convergence in general settings. Recall that, given complete information on a downstream task, the best we can do is: $\min_{h\in\mathcal{H},f\in\mathcal{F}} R(h, f)$. We denote the representation hypothesis that achieves this minimum by $h^*$. Let $\hat h$ be given as in C1. Then the generalization error is simply: $R(\hat h) - \min_{h\in\mathcal{H},f\in\mathcal{F}} R(h, f)$. Following the standard derivation, which decomposes the generalization error and takes the supremum to upper bound each term, we run into terms that exactly characterize the above two intuitions. As we show in Appendix B, it holds that:
$$R(\hat h) - \min_{h\in\mathcal{H},f\in\mathcal{F}} R(h,f) \le \sup_h\big\{\mathbb{E}_{P^n}R_n(h) - R_n(h)\big\} + \sup_h \mathbb{E}_{P^n}\Big[\mathbb{E}_{(X,Y)\sim P}\big[\ell\big(f_{h,n}\circ h(X), Y\big)\big] - R_n(h)\Big] + \text{remainder},$$
where the first term $\sup_h\{\mathbb{E}_{P^n}R_n(h) - R_n(h)\}$ exactly seeks to match intuition 1, and the second term can be further upper bounded using:
$$\mathbb{E}_{(X,Y)\sim P}\big[\ell\big(f_{h,n}\circ h(X), Y\big)\big] - R_n(h) \le \sup_f\Big\{\mathbb{E}_{(X,Y)\sim P}\big[\ell\big(f\circ h(X), Y\big)\big] - R_n(h)\Big\},$$
which underlies intuition 2. The remainder terms can be bounded using standard concentration results. However, we also spot a critical issue with the first term, which we first expand for clarity:
$$\sup_h\Big\{\mathbb{E}_{P^n}\Big[\frac{1}{n}\sum_i \ell\big(f_{h,n}\circ h(X_i), Y_i\big)\Big] - \frac{1}{n}\sum_i \ell\big(f_{h,n}\circ h(X_i), Y_i\big)\Big\}.$$
Notice that this is not the typical empirical process encountered in a standard generalization setting, and we show that its upper bound is actually given by $O\big(G_n(\mathcal{H})/\sqrt{n} + \sqrt{\log 1/\delta}\big)$, following the same procedure as Bartlett & Mendelson (2002).
Compared with the standard generalization bound, here the slack term $\sqrt{\log 1/\delta}$ does not vanish as we increase $n$. Therefore, there exist gaps between the common intuitions and achieving uniform convergence. Before we discuss the cause of the gaps and their implications, we first present the complete result below.

Proposition 1. Let $G'_n(\cdot)$ be a slightly modified Gaussian complexity term. Under the conditions and definitions in C1 and C2, it holds with probability at least $1-\delta$ that:
$$R(\hat h) - \min_{h\in\mathcal{H},f\in\mathcal{F}} R(h,f) \lesssim \frac{G_n(\mathcal{H})}{\sqrt n} + \frac{G'_n(\mathcal{F})\,\sup_h\sqrt{\mathbb{E}\|h(X)\|_2^2}}{\sqrt n} + \sqrt{\log 1/\delta}.$$

The proof is deferred to Appendix B. Proposition 1 can be viewed as a "no free lunch" result for using pretrained representations: even in the most ideal setting we study here, uniform convergence cannot be expected for all representation hypotheses. The gap is that not for every $h\in\mathcal{H}$ can the pretraining objective be predictive of $f_{h,n}$'s empirical downstream risk. Imagine an entirely random $\tilde h$: both its pretraining objective and the empirical downstream risk of $f_{\tilde h,n}$ may have high variances that do not scale with $n$, so the prediction will not concentrate whatsoever.

Takeaway. The implications of this gap are twofold. Firstly, it does not suffice to study only $\mathcal{H}$ and the data distribution – the statistical and algorithmic convergence properties of $\hat h(X)$ could be more relevant, as they suggest its stability. Secondly, we cannot take the performance of $f_{\hat h,n}$ for granted, at least not without understanding how $\hat h(X)$ interacts with the downstream learning and generates $f_{\hat h,n}$ – which ultimately relates to its predictability. Unfortunately, we find a lack of discussion on these two issues in the existing literature, so we investigate them in the next two sections.

4.2 STOCHASTIC NATURE OF PRETRAINED REPRESENTATION, AND THE CONVERGENCE OF PRETRAINING MODEL

In this section, we reveal two important statistical and algorithmic properties of pretrained representations. We show that while they persist as random vectors during SGD optimization (as shown in Figure 1), the output of the pretraining model can be deterministic and converge to some optimal solution. Two contributing factors are the scaled i.i.d. initialization and the inductive bias of gradient descent. Our findings provide critical insight into the stability of pretrained representations.

We motivate our statistical analysis by deriving the optimization path of the one-layer MLP introduced in Section 3. For notational convenience, we introduce $\tilde\Theta$ and $\tilde{\mathbf{W}}$ as the rescaled versions of $\Theta$ and $\mathbf{W}$ such that $\tilde\Theta^{(0)}, \tilde{\mathbf{W}}^{(0)} \overset{\text{i.i.d.}}{\sim} N(0, 1)$. We let $\ell'(g(x), y)$ be the derivative of the loss function, and similarly for other functions. In contrast to the existing theoretical work that studies the optimization path under gradient flow or an infinitesimal learning rate, we fix the learning rate at $\alpha = 1$ to reflect real-world practice. The output dimension is also set to $k = 1$ without loss of generality. In the first forward pass, since $\sigma(\mathbf{W}^{(0)}x^{(0)})$ has i.i.d. coordinates, as $d\to\infty$ it holds that:
$$g^{(0)}(x^{(0)}) := \frac{1}{d}\sum_{i=1}^d \big[\tilde\Theta^{(0)}\big]_i\big[\sigma\big(\tilde{\mathbf{W}}^{(0)}x^{(0)}\big)\big]_i \overset{a.s.}{\longrightarrow} \mathbb{E}\,\Theta^{(0)}\sigma\big(W^{(0)}x^{(0)}\big) \quad\big(\text{denoted } g_*^{(0)}(x^{(0)})\big),$$
where we use $\Theta^{(t)}, W^{(t)}$ to denote an i.i.d. element (or row) of $\tilde\Theta^{(t)}$ and $\tilde{\mathbf{W}}^{(t)}$. As a result, $\ell'\big(g^{(0)}(x^{(0)}), y^{(0)}\big)$ also converges to the deterministic value $L^{(0)} := \ell'\big(g_*^{(0)}(x^{(0)}), y^{(0)}\big)$. Then in the first backward pass, the updated parameters follow:
$$\tilde\Theta^{(1)} = \tilde\Theta^{(0)} - L^{(0)}\sigma\big(\tilde{\mathbf{W}}^{(0)}x^{(0)}\big), \qquad \tilde{\mathbf{W}}^{(1)} = \tilde{\mathbf{W}}^{(0)} - L^{(0)}x^{(0)}\tilde\Theta^{(0)}\sigma'\big(\tilde{\mathbf{W}}^{(0)}x^{(0)}\big).$$
An important observation is that the updated parameters remain element-wise i.i.d. Consequently, the model output of the second forward pass will also converge to a deterministic value:
$$g^{(1)}(x^{(1)}) \overset{a.s.}{\longrightarrow} \mathbb{E}\,\Big(\Theta^{(0)} - L^{(0)}\sigma\big(W^{(0)}x^{(0)}\big)\Big)\Big(W^{(0)}x^{(1)} - L^{(0)}x^{(0)}\Theta^{(0)}\sigma'\big(W^{(0)}x^{(0)}\big)x^{(1)}\Big).$$
As we show in the following proposition, this (statistical) convergence holds for any $t$, and there exists a general iterative update rule for $g^{(t)}(x)$. For some intuition, suppose $\sigma(\cdot)$ is the identity function; then $\Theta^{(t)}, \mathbf{W}^{(t)}$ are simply linear combinations of $\Theta^{(0)}, \mathbf{W}^{(0)}$.

Proposition 2. For the one-layer MLP we consider, with learning rate $\alpha = 1$, for any step $t > 1$, as $d\to\infty$, the model output $g^{(t)}(x)$ converges almost surely to $g_*^{(t)}(x)$ defined as follows:
$$g_*^{(t)}(x) = \big(C_1^{(t)}C_2^{(t)} + C_3^{(t)}C_4^{(t)}\big)x, \quad\text{with}\quad \big(C_1^{(t+1)}, C_2^{(t+1)}, C_3^{(t+1)}, C_4^{(t+1)}\big) = \big(C_1^{(t)}, C_2^{(t)}, C_3^{(t)}, C_4^{(t)}\big) + L^{(t)}x^{(t)}\big(C_3^{(t)}, C_4^{(t)}, C_1^{(t)}, C_2^{(t)}\big).$$

As a corollary, while the hidden representations remain random vectors throughout the SGD process (as can be seen from the update rule):
$$h^{(t)}(x) := \sigma\big(\mathbf{W}^{(t)}x\big) = \sigma\Big(\tilde{\mathbf{W}}^{(t-1)}x - L^{(t-1)}x^{(t-1)}\tilde\Theta^{(t-1)}\sigma'\big(\tilde{\mathbf{W}}^{(t-1)}x^{(t-1)}\big)x\Big),$$
$\langle h^{(t)}(x), h^{(t)}(x')\rangle$ will nevertheless converge to some deterministic value as $d\to\infty$. The proof and details are deferred to Appendix C. In Figure 1d, we see that the statistical convergence of the model output is indeed evident even with moderately small $d$, and its variance is orders of magnitude smaller than the variance of the hidden representation $\sigma(\mathbf{W}^{(t)}x)$ (see the x-axes of Figures 1c and 1d).

On the other hand, the algorithmic convergence of the model prediction has received considerable attention. It has been shown that over-parameterized models converge to minimum-norm interpolants due to the inductive bias of gradient descent (Bartlett et al., 2021; Soudry et al., 2018). For the sake of space, we focus here on the implications and leave the details to Appendix C. Roughly speaking, among the many locally optimal solutions that interpolate the training data, gradient descent converges to the one with the smallest norm, which usually has nice properties such as smoothness. We let $g_0$ be that particular solution, so that $\lim_{t\to\infty} g^{(t)}(x) = g_0(x)$. Since $\langle h^{(t)}, h^{(t)}\rangle$ converges statistically to a deterministic value at every optimization step, we can immediately conclude that:

• if $g^{(t)}$ takes the form of $\langle h^{(t)}, h^{(t)}\rangle$, such as in contrastive representation learning, the inner product between hidden representations also converges algorithmically to $g_0$'s prediction;
• if $g^{(t)} = \theta h^{(t)}$, i.e. the last hidden layer is used as the representation, note that a necessary but not sufficient condition for $\|g^{(t)}(x) - g^{(t)}(x')\|$ to be small is that $\|h^{(t)}(x) - h^{(t)}(x')\|$ is small as well. Suppose the $h^{(t)}$ are normalized; then upon algorithmic convergence, $\langle h^{(t)}(x), h^{(t)}(x')\rangle$ is likely to be larger if $x, x'$ are close to each other under $g_0$'s prediction.

Takeaway. The stochastic nature of $\hat h := \lim_{t\to\infty} h^{(t)}$ and the (approximate) convergence of $\langle\hat h(x), \hat h(x')\rangle$ under gradient descent reveal two important properties of pretrained representations:
1. Instability of $\hat h(x)$: the exact position of $\hat h(x)$ in $\mathbb{R}^d$ is stochastic, depending on the initialization and on the order in which the pretraining data is fed to SGD;
2. Stability of $\langle\hat h(x), \hat h(x')\rangle$: the pairwise inner product $\langle\hat h(x), \hat h(x')\rangle$ converges (approximately) to a value that is consistent with the minimum-norm interpolant of the pretraining task.
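The two properties above can be checked numerically. Below is a toy simulation of ours (not the paper's code): identity activation, squared loss, a fixed data stream, and a small learning rate standing in for the paper's $\alpha = 1$. Across initialization seeds, the coordinates of $h(x)$ are essentially arbitrary, while $\langle h(x), h(x')\rangle$ concentrates.

```python
import numpy as np

d0, d, steps, alpha = 8, 4000, 100, 0.05   # small alpha keeps this unscaled toy stable
data_rng = np.random.default_rng(0)        # the data stream is fixed across runs
stream = [(data_rng.standard_normal(d0), data_rng.standard_normal()) for _ in range(steps)]
x1 = data_rng.standard_normal(d0)
x2 = x1 + 0.3 * data_rng.standard_normal(d0)   # a nearby test point

def pretrain(seed):
    rng = np.random.default_rng(seed)      # only the initialization varies across runs
    Theta, W = rng.standard_normal(d), rng.standard_normal((d, d0))  # rescaled N(0,1) init
    for x, y in stream:                    # per-sample SGD, identity activation
        L = alpha * (Theta @ (W @ x) / d - y)              # squared-loss derivative
        Theta, W = Theta - L * (W @ x), W - L * np.outer(Theta, x)
    return W @ x1 / np.sqrt(d), W @ x2 / np.sqrt(d)        # h(x1), h(x2), rescaled

runs = [pretrain(s) for s in range(10)]
H1 = np.stack([h1 for h1, _ in runs])
inner = np.array([h1 @ h2 for h1, h2 in runs])
print("relative variance of h(x) coords  :", H1.var(axis=0).mean() / (H1 ** 2).mean())
print("relative variance of <h(x), h(x')>:", inner.var() / inner.mean() ** 2)
```

Under these assumptions the first ratio is close to 1 (the coordinates are fully seed-dependent), while the second is orders of magnitude smaller, mirroring the instability/stability split stated above.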
These results will also play a crucial role in understanding how $\hat h$ interacts with downstream learning, which we study in the next section.

4.3 INTERACTION WITH DOWNSTREAM TASK

To be comprehensive, we consider both a parametric and a non-parametric setup for the downstream task. Interestingly, they reveal different aspects of the predictability of $\hat h$.

Parametric setup. To eliminate the interference of label noise, we consider the noiseless setting where the output of the downstream task is generated by: $y_i = f^*\big(\mathbb{E}[h(x_i)]\big)$, $i = 1,\ldots,n$. Because $h(x)$ might be high-dimensional, we assume there is some sparsity in $f^*$. The conditions below provide perhaps the easiest parametric setup for pretrained representations to perform well.

C3: Let $f^*(h) := \langle\theta^*, h\rangle$, $\|\theta^*\|_0 \le q$, and let the inputs $h_i := \mathbb{E}\,h(x_i)$ be sampled from $N(0, \sigma_h^2 I)$, where $\sigma_h$ is the strength of the signal.

We showed previously that $\hat h$ is stochastic, so we simply set $\hat h_i := h_i + \epsilon_i$, where $\epsilon_i \sim N(0, \sigma_\epsilon^2 I)$ captures the variance of the pretrained representation. Intuitively, since the $\epsilon_i$ are i.i.d., it holds that $\mathbb{E}_\epsilon\big[\langle\hat h(x_i), \hat h(x_j)\rangle\big] = \langle h(x_i), h(x_j)\rangle$, so recovering $\theta^*$ should be less challenging. However, we show that the variance will again prohibit efficient learning, and the best $f_{\hat h,n}$ can do is controlled by $\sigma_\epsilon/\sigma_h$ – a notion of signal-to-noise ratio for pretrained representations. The result below takes the form of a minimax lower bound: an information-theoretical quantity that characterizes the inherent difficulty of a problem. Our proof (in Appendix D) is based on Le Cam's method, which was previously used to prove a lower bound under label noise (Raskutti et al., 2011) – a setting very different from ours.

Proposition 3. Under C3, it holds with probability at least 1/2 that:
$$\inf_{\hat\theta}\ \sup_{\|\theta^*\|_0\le q}\ \|\hat\theta - \theta^*\|_2 \gtrsim \big(\sigma_\epsilon^2/\sigma_h^2\big)\cdot q\,n^{-1}\log(d/q),$$
where $\inf_{\hat\theta}$ is taken with respect to any learning procedure based on $\{\hat h(x_i), y_i\}_{i=1}^n$.

Takeaway. The result in Proposition 3 is alarming because, during pretraining, the variance of $h(x)$ might increase as more and more stochastic terms are added (suggested by both the derivations in Section 4.2 and the empirical result in Figure 1c). The above lower bound shows that the predictability of $\hat h(x)$ can be compromised by its variance inherited from pretraining. This also explains the instability in downstream machine learning that we experienced in real-world production.

Non-parametric setup. Among non-parametric regression estimators, the Nadaraya-Watson (NW) estimator has received considerable attention due to its simplicity and effectiveness (Nadaraya, 1964). It can be thought of as a smoothing nearest-neighbor estimator under a weighting schema:
$$f_{h,n}\circ h(x) := \sum_{i=1}^n y_i\,w_h(x, x_i), \qquad w_h(x, x_i) := K\big(h(x) - h(x_i)\big)/z,$$
where $K: \mathbb{R}^d\to\mathbb{R}_+$ is a kernel and $z$ is a normalizing constant. Here we omit the bandwidth parameter for convenience. The Gaussian kernel $K(u)\propto\exp(-\|u\|_2^2)$ is a common choice, so when pretrained representations are normalized, it depends on $h$ only via $\langle h(x), h(x')\rangle$ – a more stable quantity according to the previous section. We denote this kernel by $K\big(\langle h(x), h(x')\rangle\big)$. It is well understood that the generalization of a kernel support vector machine is controlled by the kernel-target alignment (Cristianini et al., 2001), i.e. $\langle\vec y, \mathbf{K}\vec y\rangle$, where $\vec y = [y_1,\ldots,y_n]^\top$ and $\mathbf{K}_{i,j} = K\big(\langle h(x_i), h(x_j)\rangle\big)$. We prove that this is also the case for the NW estimator, with a simple result that does not resort to concentration arguments.
The proof is in Appendix D.

Lemma 1. Under the 0-1 loss, with probability at least $1-\delta$, the risk of the NW estimator satisfies:
$$R(f_{h,n}\circ h) \le 1 - \sqrt\delta\cdot\mathbb{E}\big[\mathbb{1}[Y = Y']\,K\big(\langle h(X), h(X')\rangle\big)\big],$$
where the expectation is taken with respect to $(X,Y)\sim P$, $(X',Y')\sim P$.

Takeaway. Lemma 1 shows that the predictability of $h(x)$, when expressed and measured through the more stable $\langle h(x), h(x')\rangle$, is strictly guaranteed. Therefore, using $h(x)$ in a downstream task in the form of $\vec h(x) := \big[e^{\langle h(x), h(x_1)\rangle}, \ldots, e^{\langle h(x), h(x_n)\rangle}\big]$ can be beneficial, and it can be interpreted as a representation of the weights in the NW estimator. Further, $\vec h(x)$ contains all the pairwise relationships, which can be more closely related to the pretraining objective. Note that $h(x)$ can also be viewed as a compression of $\vec h(x)$ because $[\vec h(x_i)]_j = \exp(\langle h(x_i), h(x_j)\rangle)$. Nevertheless, $\vec h(x)$ and $h(x)$ cannot be compared directly because they have different intrinsic dimensionalities. In terms of computability, $\vec h(x)\in\mathbb{R}^n$ is also no match for $h(x)\in\mathbb{R}^d$ – computing $\vec h(x)$ itself can be non-trivial for large-scale applications. We aim to resolve these issues in the next section.

5 FEATURIZING PRETRAINED REPRESENTATION

Our next goal is to build, on top of $h(x)$, features or representations that are comparable to $\vec h(x)$ in terms of stability and predictability, while having computability similar to $h(x)$. Suppose $\{h(x_i)\}_{i=1}^n$ are normalized. Then $\vec h(x_i)$ is simply the exponential of the pairwise cosine distances between $h(x_i)$ and all the pretrained representations. Notice that the angle between any pair $(h(x_i), h(x_j))$ can be decomposed into their respective angles with a baseline direction $u\in\mathbb{R}^d$, $\|u\|_2 = 1$. When the set of baseline directions is rich enough, we can recover all the pairwise cosine distances in $\vec h(x_i)$ from the angles with the baseline directions. Given $U := [u_1,\ldots,u_m]\in\mathbb{R}^{d\times m}$, the set of angles between $h(x_i)$ and $U$ forms a measurement of the relative location of $h(x)\in\mathbb{R}^d$. We refer to such a measurement process as featurizing the pretrained representation, as it resembles how features are constructed by measuring experimental subjects.

While featurizing $h(x)$ according to its geometric properties is an appealing solution, it is unknown how many baseline directions are needed to preserve the stability and predictability of $\vec h$, or how to choose those directions optimally. Fortunately, Bochner's Theorem (Loomis, 2013) from harmonic analysis lays a solid foundation for selecting the directions and providing approximation and learning guarantees. Moreover, the resulting measurements coincide with the random Fourier features (Rahimi & Recht, 2007; Liu et al., 2021) that play a critical role in many machine learning communities. For the Gaussian kernel we studied, Bochner's Theorem states that there exists a measure $Q$ on $\mathbb{R}^d$ such that:
$$K(h(x), h(x')) = \int_{\mathbb{R}^d} e^{iu^\top(h(x)-h(x'))}q(u)\,du \overset{\text{real part}}{=} \mathbb{E}_{u\sim Q}\Big[\cos\big(u^\top\big(h(x) - h(x')\big)\big)\Big].$$
Since $\cos(a-a') = \cos(a)\cos(a') + \sin(a)\sin(a')$, we can approximate the kernel value using the Monte Carlo method as below:
$$K(h(x), h(x')) \approx \frac{1}{m}\sum_{i=1}^m \cos\big(u_i^\top h(x)\big)\cos\big(u_i^\top h(x')\big) + \sin\big(u_i^\top h(x)\big)\sin\big(u_i^\top h(x')\big), \qquad u_i \overset{\text{i.i.d.}}{\sim} Q.$$
Let
$$\phi_m\big(h(x), Q\big) := \frac{1}{\sqrt m}\big[\cos\big(u_1^\top h(x)\big), \sin\big(u_1^\top h(x)\big), \ldots, \cos\big(u_m^\top h(x)\big), \sin\big(u_m^\top h(x)\big)\big]$$
be the featurization of $h(x)$ according to Bochner's Theorem. Note that it amounts to measuring $h(x)$'s positions with respect to random directions drawn from $Q(u)$, and then transforming them through trigonometric functions.
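A minimal sketch of the featurization $\phi_m(h, Q)$ for the Gaussian kernel, with $Q$ taken to be a standard Gaussian (the practical proxy discussed below); all variable names are ours.

```python
import numpy as np

def featurize(H, m, bandwidth=1.0, seed=0):
    """Map pretrained representations H (n, d) to random Fourier features
    phi_m(h, Q) of dimension 2m, so that <phi(h), phi(h')> approximates
    the Gaussian kernel exp(-||h - h'||^2 / (2 * bandwidth^2))."""
    n, d = H.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((d, m)) / bandwidth   # u_i ~ N(0, I / bandwidth^2)
    proj = H @ U                                  # (n, m) projections u_i^T h
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(m)

# Sanity check: the feature inner product tracks the Gaussian kernel.
rng = np.random.default_rng(1)
H = rng.standard_normal((2, 16))
H /= np.linalg.norm(H, axis=1, keepdims=True)     # normalized representations
phi = featurize(H, m=2000)
print(phi[0] @ phi[1], np.exp(-np.linalg.norm(H[0] - H[1]) ** 2 / 2))
```

With $m = 2000$ the two printed values agree to roughly $1/\sqrt{m}$, consistent with the Monte Carlo rate of the approximation above.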
Furthermore, $\big\langle\phi_m\big(h(\cdot), Q\big), \phi_m\big(h(\cdot), Q\big)\big\rangle$ can approximate any entry of $\vec h$. To be more precise, Rahimi & Recht (2007) show that it only requires $m = \Omega\big(d/\epsilon^2\,\log(\sigma_Q/\epsilon)\big)$ to achieve
$$\big|K(h(x), h(x')) - \big\langle\phi_m(h(x), Q), \phi_m(h(x'), Q)\big\rangle\big| \le \epsilon,$$
where $\sigma_Q^2$ is the second moment of $Q$. Therefore, when $m$ is comparable to $d$, the featurized $\phi_m\big(h(x), Q\big)$ achieves the stability and predictability of $\vec h$, as well as the computability of $h$. Converting $h(x)$ to $\phi_m\big(h(x), Q\big)$ is computationally efficient, since $u_1,\ldots,u_m$ only need to be drawn from $Q$ once and then applied to all $h(x_i)$, $i = 1,\ldots,n$.

However, there remains the obstacle of finding the optimal $Q^*$. Strictly speaking, $Q^*$ is obtained from the inverse Fourier transform, but in practice the standard Gaussian distribution is often used. Indeed, computing the inverse Fourier transform and sampling from it poses another challenging task. To our knowledge, there is no existing study on whether we can safely sample $u$ from a proxy $Q$. In the following proposition, we show that using $Q$ instead of $Q^*$ will not cost stability as long as their discrepancy is bounded. In particular, we state our result in the context of Lemma 1, that is, the downstream risk is controlled by the alignment $A := \mathbb{E}\big[\mathbb{1}[Y=Y']\,K\big(\langle h(X), h(X')\rangle\big)\big]$. We use $D_s(Q, Q^*) := \int s\big(dQ/dQ^*\big)\,dQ^*$ to denote the f-divergence induced by $s(\cdot)$.

Proposition 4. Let $\mathcal{Q}(Q^*;\delta) := \{Q : D_s(Q, Q^*)\le\delta\}$ be a $D_s$-ball with radius $\delta$ centered at $Q^*$. Let $\{h(x_i), y_i\}_{i=1}^n$ be the downstream data, and $A_n(Q) := \frac{1}{n(n-1)}\sum_{i\ne j}\mathbb{1}[y_i = y_j]\,\big\langle\phi_m(h(x_i), Q), \phi_m(h(x_j), Q)\big\rangle$. It holds that:
$$\Pr\Big(\sup_{Q\in\mathcal{Q}(Q^*;\delta)}\big|A_n(Q) - A_n(Q^*)\big| \ge \epsilon\Big) \lesssim \frac{\sigma_{\mathcal{Q}}^2}{\epsilon^2}\exp\Big(-\frac{m\epsilon^2}{16(d+2)}\Big) + \exp\Big(-\frac{n\epsilon^2}{64(1+\delta)}\Big),$$
where $\sigma_{\mathcal{Q}} := \max_{Q\in\mathcal{Q}}\sigma_Q$.

The significance of Proposition 4 is that even if the optimal $Q^*$ is not used, in the worst-case scenario the instability caused by the mismatch (reflected via $\delta$) vanishes quickly as the sample size grows. Similarly, increasing the dimension of the featurized representation $\phi_m$ also speeds up the convergence exponentially. Together, these provide guarantees for predictability even when $Q^*$ is not used. The proof is provided in Appendix E.

Takeaway. Featurizing pretrained representations as $\phi_m(h, Q)$ offers a simple and practical solution that balances stability, predictability, and computability. We just showed that $Q$ can simply be the standard Gaussian distribution, and the dimension of $\phi_m(h)$ can be chosen to satisfy a specific approximation threshold $\epsilon$. It can also be treated as a tuning parameter in downstream tasks.

6 BENCHMARK AND REAL-WORLD EXPERIMENTS

We conduct experiments on the benchmark dataset MovieLens-1m (ML-1m) for illustration and reproducibility purposes. The real-world production experiments took place at a major US e-commerce platform anonymized as Ecom. The detailed description of ML-1m and the introduction of Ecom's production environment are provided in Appendix F.

On ML-1m. The dataset supports two types of pretraining-downstream task combinations: (a) leverage the sequences of user viewing data to pretrain movie embeddings, then use the embeddings to predict the genre of the movie (ML-1m task 1); (b) pretrain movie embeddings using the title and other descriptions, then use the embeddings for downstream sequential recommendation (ML-1m task 2). The detailed data processing, model and pretraining configurations, downstream training/testing setup, evaluation metrics, and sensitivity analysis are deferred to Appendix F.
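As a schematic of the downstream usage evaluated next (a sketch of ours with synthetic stand-ins for the ML-1m artifacts, not the actual experiment code): draw the directions once, featurize the pretrained embeddings with $Q$ = standard Gaussian, and fit the downstream model on $\phi_m(h)$.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins for pretrained movie embeddings and genre labels (both synthetic here).
h = rng.standard_normal((1000, 32))
h /= np.linalg.norm(h, axis=1, keepdims=True)
genres = rng.integers(0, 18, size=1000)

# Featurize: phi_m(h, Q) with Q = standard Gaussian, output dimension 2m.
m = 16
U = rng.standard_normal((32, m))
phi = np.hstack([np.cos(h @ U), np.sin(h @ U)]) / np.sqrt(m)

Xtr, Xte, ytr, yte = train_test_split(phi, genres, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("downstream accuracy:", clf.score(Xte, yte))  # ~chance here, since labels are random stand-ins
```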
On ML-1m task 1, we use contrastive representation learning to pretrain the movie embeddings, and employ logistic regression to predict the genre using the movie embeddings as features. On ML-1m task 2, we use a bidirectional-RNN-type structure on the movies' NLP data and extract the final hidden layer as the pretrained representation. The downstream sequential recommendation task employs a two-tower structure, and an RNN is used to aggregate the history viewing sequence.

In Table 1, we first see that $\phi_m(h)$ improves the stability of $h$ by at least 10x in both tasks. Even under the same dimension, $\phi_m(h)$ outperforms $h$, and is highly comparable to avg(h) – the manually stabilized version of $h$ obtained by averaging it over ten independent runs. Note that avg(h) is almost never a good practical solution because it requires repeating the same pretraining process multiple times. Here, we use it as an analytical baseline and show that $\phi_m(h)$ is just as good. When the dimension of $\phi_m(h)$ increases, it delivers far superior results. Changing the dimension can also change the downstream model complexity, but as we discuss below, it offers more flexibility for real-world problems.

On Ecom. The item representation learning pipeline is used by several downstream productions: item-page recommendation (Task1), search ranking (Task2), email recommendation (Task3), and home-page marketing (Task4). They all have task-specific features and non-trivial, mutually different model architectures. The refreshing of the pretrained item embeddings is done on a daily basis, and downstream model owners may have separate schedules for updating and refreshing the relevant parts of their models. In Appendix F.4, we describe our engineering solutions for deploying the featurization process on the frontend and backend. During A/B testing, we observed performance lifts (in terms of click-through rate) that are statistically significant for all four downstream applications. The average revenue-per-visitor lift was also positive during the testing period. The detailed online results and analysis are provided in Appendix F.

Lessons learnt. In addition to improved stability and performance, an important piece of feedback we received from downstream model owners is that the flexibility in choosing $\phi_m(h)$'s dimension is very helpful for their tasks. Prior to our featurization technique, it was almost impossible to personalize the dimension of the pretrained representation for different applications, let alone tune it in downstream tasks. Now, knowing that the predictability will not vary much, experimenting with different dimensions often allows them to find a better bias-variance tradeoff for their downstream tasks.

7 DISCUSSION

The analytical results and the proposed featurization method in our work apply to a broad range of applications and research problems. Nevertheless, our results may still be rudimentary and far from providing the complete picture or the optimal practice for using pretrained representations. We hope the progress we made will lead to more advanced future research and applications.

Scope and limitation. Most of our analysis is performed in basic settings: while this ensures the results hold in generality, advanced methods for pretraining representations are not considered. Also, we do not include additional downstream features and their correlation with pretrained representations, or connections between the pretraining objective and the downstream task. Such additional knowledge can be useful for deriving task-specific results (Arora et al., 2019).
For application, our featurization technique may be less helpful if the downstream task simply uses embedding distances, such as in KNN search. Optimizing the space and time complexity via, e.g., embedding quantization might be more useful for such tasks (Chen et al., 2020), which is not discussed in our paper.

A future direction. While our work studies $h(x)$ as a whole, it can be inferred from Figure 1c that the element-wise variance of $\hat h(x)$ is bimodal, which suggests heterogeneity within $h(x)$. Possible explanations are that a (random) subset of $h(x)$ is responsible for overfitting the pretraining task (Bartlett et al., 2020), or that some dimensions are forced to become more independent of others so that the representation matrix has nice spectral properties (Hastie et al., 2022). It is thus an important future direction to identify the internal structure of $h(x)$ in order to better featurize pretrained representations.

A TECHNICAL BACKGROUND

In this part of the paper we provide the technical background for both the discussions in the main text and the following proofs. A central object for proving uniform convergence results is the Gaussian / Rademacher complexity. For a set $A\subset\mathbb{R}^n$, it is defined as:
$$G(A) := \mathbb{E}_\epsilon\Big[\sup_{a\in A}\sum_{i=1}^n \epsilon_i a_i\Big],$$
where the $\epsilon_i$ are i.i.d. Gaussian / Rademacher random variables. It essentially measures how well a function class can interpolate a random sign pattern assigned to a set of points. Given a function class $\mathcal{F}$ and $n$ samples $(x_1,\ldots,x_n)$, the empirical Gaussian / Rademacher complexity is given by:
$$G_n(\mathcal{F}) := \mathbb{E}_\epsilon\Big[\sup_{f\in\mathcal{F}}\sum_{i=1}^n \epsilon_i f(x_i)\Big].$$

Remark A.1. We mention that in some versions of the definition, there is a $1/n$ factor in the complexity term. Here, we explicitly pull that factor out and place it in the resulting bound.

As mentioned earlier, an important reason for using the Gaussian complexity is some of its technical properties, namely Slepian's Lemma (Slepian, 1962) and its corollary, which we state below:

Lemma A.1 (From Slepian's Lemma). Suppose $\phi: A\to\mathbb{R}^q$ has Lipschitz constant $L$. Then it holds that: $G(\phi(A)) \le L\,G(A)$.

This result can be viewed as the contraction lemma for Gaussian complexity (Ledoux & Talagrand, 1991).

A.1 INDUCTIVE BIAS OF GRADIENT DESCENT

Our introduction primarily follows Soudry et al. (2018); Ji & Telgarsky (2019); Gunasekar et al. (2018) and their follow-up works. The key factor that contributes to the implicit bias of gradient descent is the divergence of the model parameters after separating the data, under loss functions with exponential-tail behavior. When the predictor $f\in\mathcal{F}$ parameterized by $\theta$ is over-parameterized, other than certain degenerate cases, the data can be separated at some point if the predictor class satisfies some regularity assumptions (Lyu & Li, 2019), e.g.:
• $f\in\mathcal{F}$ is homogeneous, such that $f(x; c\cdot\theta) = c^\beta f(x;\theta)$, $\forall c>0$;
• $f\in\mathcal{F}$ is smooth and has bounded Lipschitz constant.
These conditions can be met for many neural network structures and activation functions. The exponential tail of the loss function is satisfied by the common exponential loss and logistic loss (which we use throughout our discussions and experiments). To see why the norm of the model parameters diverges, simply note that under, e.g., the exponential loss, both the risk and the gradient take the form $\sum_i c_i\exp(-y_i f(x_i;\theta))$, where the $c_i$ are lower-order terms. Since gradient descent converges to a stationary point due to the nice properties of $\mathcal{F}$, we expect $\sum_i c_i\exp(-y_i f(x_i;\theta)) = 0$ to hold upon convergence.
A necessary condition for that is $\exp(-y_i f(x_i;\theta)) = 0$, $i = 1,\ldots,n$, and this condition is actually sufficient with high probability (Soudry et al., 2018). Therefore, for all $\exp(-y_i f(x_i;\theta))$ to reach 0, $\|\theta\|$ must diverge so that $|f(\cdot;\theta)|\to\infty$. With that said, since the loss function decays exponentially fast around 0, the data points with the largest margin dominate both the gradient and the loss function. As a direct consequence, the decision boundary shares characteristics with the hard-margin SVM, given by:
$$\min\ \|\theta\|^2 \quad\text{s.t.}\quad y_i f(x_i;\theta)\ge 1,\ \forall i = 1,\ldots,n.$$
Indeed, recent work shows that the optimization path of over-parameterized models converges to a minimum-norm predictor:

Corollary A.1 (Chizat et al. (2019); Woodworth et al. (2020), and others). Under the conditions specified in the referenced work, which are mostly exponential loss, scaled initialization, appropriate learning rate, and regularity conditions on the predictor class, it holds that:
$$\lim_{t\to\infty}\lim_{d\to\infty} F\big(\theta^{(t)}/\|\theta^{(t)}\|\big) \longrightarrow \text{stationary points of } \big\{\arg\min\|f\|_K\ \text{s.t.}\ y_i f(x_i)\ge 1,\ \forall i\in[n]\big\},$$
where $F$ is the decision boundary of $f$, $d$ is the dimension of the hidden layer(s) of $f$, and $\|\cdot\|_K$ is an appropriate RKHS norm.

Note that in Section 4.2 we use $g_0$ to denote the converged result, and the above corollary guarantees its existence and uniqueness. However, one open question is which particular RKHS norm best describes the solution, as this particularly affects the convergence of the parameters. Therefore, in our work, we leave the convergence of the parameters out of our discussion.

Remark A.2. It is also worth mentioning that the convergence of $\mathbb{E}[h^{(t)}(x)]$ plays no part in our arguments and results. Indeed, it does not change the stochasticity of $h^{(t)}(x)$, and (in some cases) can be implied from the convergence of $g^{(t)}(x)$ (Lyu & Li, 2019). Therefore, we do not discuss it in our work.

B PROOF OF THE RESULTS IN SECTION 4.1

We prove Proposition 1 in this part of the appendix. An important result we will use is the Gaussian complexity bound for empirical risk minimization, in the version of Bartlett & Mendelson (2002).

Lemma A.2. Let $\mathcal{F}$ be a real-valued function class from $\mathcal{X}$ to $[0,1]$. Let $(X_1,\ldots,X_n)$ be i.i.d. random variables. Then for all $f\in\mathcal{F}$, it holds with probability at least $1-\delta$ that:
$$\mathbb{E}\big[f(X)\big] \le \frac{1}{n}\sum_i f(X_i) + \frac{\sqrt{2\pi}\,G_n(\mathcal{F})}{n} + \sqrt{\frac{9\log 2/\delta}{2n}}.$$

We now provide the proof, which will use Lemma A.1 and Lemma A.2. We also assume $\mathcal{F}$ has a Lipschitz constant of at most $L$.

Proof. Recall that $h^*, f^* := \arg\min_{h\in\mathcal{H},f\in\mathcal{F}} R(h,f)$. We decompose the generalization error via:
$$\begin{aligned} R(\hat h) - \min_{h\in\mathcal{H},f\in\mathcal{F}} R(h,f) &= \Big(R(\hat h) - \min_{f\in\mathcal{F}}\frac{1}{n}\sum_i\ell\big(f\circ\hat h(X_i), Y_i\big)\Big) \\ &\quad + \Big(\min_{f\in\mathcal{F}}\frac{1}{n}\sum_i\ell\big(f\circ\hat h(X_i), Y_i\big) - \min_{f\in\mathcal{F}}\frac{1}{n}\sum_i\ell\big(f\circ h^*(X_i), Y_i\big)\Big) \\ &\quad + \Big(\min_{f\in\mathcal{F}}\frac{1}{n}\sum_i\ell\big(f\circ h^*(X_i), Y_i\big) - \mathbb{E}_{P^n}\Big[\min_{f\in\mathcal{F}}\frac{1}{n}\sum_i\ell\big(f\circ h^*(X_i), Y_i\big)\Big]\Big) \\ &\quad + \Big(\mathbb{E}_{P^n}\Big[\min_{f\in\mathcal{F}}\frac{1}{n}\sum_i\ell\big(f\circ h^*(X_i), Y_i\big)\Big] - \min_{f\in\mathcal{F}}\mathbb{E}_{(X,Y)\sim P}\,\ell\big(f\circ h^*(X), Y\big)\Big). \end{aligned} \tag{A.1}$$
We first discuss the first term, which incurred a major discussion in Section 4.1.
By a standard argument, the first term can be bounded via:
$$R(\hat h) - \min_{f\in\mathcal{F}}\frac{1}{n}\sum_i\ell\big(f\circ\hat h(X_i), Y_i\big) \le \sup_{h\in\mathcal{H}}\Big\{R(h) - \min_{f\in\mathcal{F}}\frac{1}{n}\sum_i\ell\big(f\circ h(X_i), Y_i\big)\Big\}$$
$$\le \underbrace{\sup_{h\in\mathcal{H}}\mathbb{E}_{P^n}\Big[\mathbb{E}_{(X,Y)\sim P}\big[\ell\big(f_{h,n}\circ h(X), Y\big)\big] - R_n(h)\Big]}_{(a)} + \underbrace{\sup_{h\in\mathcal{H}}\Big\{\mathbb{E}_{P^n}\Big[\frac{1}{n}\sum_i\ell\big(f_{h,n}\circ h(X_i), Y_i\big)\Big] - \frac{1}{n}\sum_i\ell\big(f_{h,n}\circ h(X_i), Y_i\big)\Big\}}_{(b)}.$$
Using Lemma A.2, term (b) can be bounded as:
$$\sup_{h\in\mathcal{H}}\Big\{\mathbb{E}_{P^n}\Big[\frac{1}{n}\sum_i\ell\big(f_{h,n}\circ h(X_i), Y_i\big)\Big] - \frac{1}{n}\sum_i\ell\big(f_{h,n}\circ h(X_i), Y_i\big)\Big\} \le \sqrt{2\pi}\,G_n(A(\mathcal{H})) + \sqrt{9\log 2/\delta},$$
where the set $A(\mathcal{H})$ is given by:
$$\Big\{\Big(\tfrac{1}{n}\ell\big(f_{h,n}\circ h(X_1), Y_1\big), \ldots, \tfrac{1}{n}\ell\big(f_{h,n}\circ h(X_n), Y_n\big)\Big) : h\in\mathcal{H}\Big\}.$$
It is easy to check that $A(\mathcal{H})$ invokes Slepian's lemma, so we can use the contraction result from Lemma A.1 to further bound it: $G_n(A(\mathcal{H})) \le \frac{L}{\sqrt n}G_n(\mathcal{H})$. Combined, term (b) is upper bounded by: $\frac{\sqrt{2\pi}L\,G_n(\mathcal{H})}{\sqrt n} + \sqrt{9\log 2/\delta}$.

Now we bound term (a) as below. Define the shorthand $\ell(\mathcal{F}(h)) := \big\{\big(\ell(f(h(X_1)), Y_1), \ldots, \ell(f(h(X_n)), Y_n)\big) : f\in\mathcal{F}\big\}$. It holds that:
$$\begin{aligned} \sup_{h\in\mathcal{H}}\mathbb{E}_{P^n}\Big[\mathbb{E}_{(X,Y)\sim P}\big[\ell\big(f_{h,n}\circ h(X), Y\big)\big] - R_n(h)\Big] &\le \sup_{h\in\mathcal{H}}\mathbb{E}_{P^n}\sup_{f\in\mathcal{F}}\Big\{\mathbb{E}_{(X,Y)\sim P}\,\ell\big(f\circ h(X), Y\big) - \frac{1}{n}\sum_i\ell\big(f\circ h(X_i), Y_i\big)\Big\} \\ &\le \sqrt{2\pi}\,\sup_{h\in\mathcal{H}}\mathbb{E}_{P^n}\frac{G_n(\ell(\mathcal{F}(h)))}{n} \quad\text{(using Lemmas A.2 and A.1)} \\ &= \sqrt{2\pi}\,n^{-1}\sup_{h\in\mathcal{H}}\mathbb{E}_{P^n}\frac{G_n(\ell(\mathcal{F}(h)))}{\|h(X)\|}\,\|h(X)\| \quad\big(\text{where } h(X) := [h(X_1),\ldots,h(X_n)]\big) \\ &\le \sqrt{2\pi}\,n^{-1}\sup_{h\in\mathcal{H}}\sqrt{\mathbb{E}\|h(X)\|^2}\cdot\sup_{A\in\mathbb{R}^{n\times d}}\frac{1}{\|A\|}\,\mathbb{E}\sup_{f\in\mathcal{F}}\sum_i\epsilon_i f([A]_i), \qquad \epsilon_i \overset{\text{i.i.d.}}{\sim} N(0,1). \end{aligned} \tag{A.2}$$
We let $G'_n(\mathcal{F}) := \sup_{A\in\mathbb{R}^{n\times d}}\frac{1}{\|A\|}\mathbb{E}\sup_{f\in\mathcal{F}}\sum_i\epsilon_i f([A]_i)$ be the modified Gaussian complexity, so term (a) is finally bounded by: $\frac{\sqrt{2\pi}}{n}G'_n(\mathcal{F})\sup_{h\in\mathcal{H}}\sqrt{\mathbb{E}\|h(X)\|^2}$.

Next, notice in the last term that:
$$\mathbb{E}_{P^n}\Big[\min_{f\in\mathcal{F}}\frac{1}{n}\sum_i\ell\big(f\circ h^*(X_i), Y_i\big)\Big] \le \mathbb{E}_{P^n}\frac{1}{n}\sum_i\ell\big(f^*\circ h^*(X_i), Y_i\big) = \mathbb{E}_{(X,Y)\sim P}\,\ell\big(f^*\circ h^*(X), Y\big).$$
Therefore, the last term is always non-positive. Similarly, by definition, the second term is non-positive as well. Finally, as for the third term, since non-concentrating terms already appear in the bound of the first term, it does not hurt to simply bound it using Hoeffding's bound, i.e. it will not exceed $O(\sqrt{\log 1/\delta})$ with probability at least $1-\delta$. Putting things together, we conclude the final result.

C TECHNICAL DETAILS FOR SECTION 4.2

We first restate the proposition:

Proposition A.1. For the one-layer MLP we consider, with learning rate $\alpha = 1$, for any step $t > 1$, as $d\to\infty$, the model output $g^{(t)}(x)$ converges almost surely to $g_*^{(t)}(x)$ defined as follows:
$$g_*^{(t)}(x) = \big(C_1^{(t)}C_2^{(t)} + C_3^{(t)}C_4^{(t)}\big)x, \quad\text{with}\quad \big(C_1^{(t+1)}, C_2^{(t+1)}, C_3^{(t+1)}, C_4^{(t+1)}\big) = \big(C_1^{(t)}, C_2^{(t)}, C_3^{(t)}, C_4^{(t)}\big) + L^{(t)}x^{(t)}\big(C_3^{(t)}, C_4^{(t)}, C_1^{(t)}, C_2^{(t)}\big).$$

The above iterative update result can be shown by making explicit the terms in the forward and backward passes of the $t$th gradient step. In particular, it holds that:
$$g^{(t)}(x) \overset{a.s.}{\to} \mathbb{E}\,\Theta^{(t)}\sigma\big(W^{(t)}x\big) \;\big(\overset{\text{def}}{=} g_*^{(t)}(x)\big), \qquad \ell'\big(g^{(t)}(x^{(t)}), y^{(t)}\big) \overset{a.s.}{\to} \ell'\big(g_*^{(t)}(x^{(t)}), y^{(t)}\big) \;\big(\overset{\text{def}}{=} L^{(t)}\big),$$
$$\tilde\Theta^{(t+1)} = \tilde\Theta^{(t)} - L^{(t)}\sigma\big(\tilde{\mathbf{W}}^{(t)}x^{(t)}\big), \qquad \tilde{\mathbf{W}}^{(t+1)} = \tilde{\mathbf{W}}^{(t)} - L^{(t)}x^{(t)}\tilde\Theta^{(t)}\sigma'\big(\tilde{\mathbf{W}}^{(t)}x^{(t)}\big).$$
The only extra requirement for the above convergence to hold is that the activation function be well-behaved (see Yang (2019) for a detailed description). To see how the above system of equations leads to the result in Proposition A.1, imagine the activation is the identity function. In this case, $\tilde\Theta^{(t)}$ and $\tilde{\mathbf{W}}^{(t)}$ are always deterministic linear combinations of $\tilde\Theta^{(0)}$ and $\tilde{\mathbf{W}}^{(0)}$. Observe that the update becomes:
$$\tilde\Theta^{(t)} = C_1\tilde\Theta^{(0)} + C_2\tilde{\mathbf{W}}^{(0)}, \qquad \tilde{\mathbf{W}}^{(t)} = C_3\tilde\Theta^{(0)} + C_4\tilde{\mathbf{W}}^{(0)}.$$
We mention that, as a corollary, $\mathbf{W}^{(t+1)}$ is also element-wise i.i.d., so the inner product of the hidden representations satisfies $\big\langle\mathbf{W}^{(t+1)}x, \mathbf{W}^{(t+1)}x'\big\rangle \overset{a.s.}{\to} \mathbb{E}\big[W^{(t)}x\cdot W^{(t)}x'\big]$, where $W^{(t)}$ is an i.i.d. row of $\tilde{\mathbf{W}}^{(t+1)}$, the rescaled version of $\mathbf{W}^{(t+1)}$.

D PROOFS OF THE RESULTS IN SECTION 4.3

Proof for Proposition 3

Proof. Proofs of lower bounds often start by converting the problem into a hypothesis testing task. Denote our parameter space by $B(k) = \{\theta\in\mathbb{R}^d : \|\theta\|_0\le k\}$. The intuition is to suppose the data is generated by: 1. drawing $\theta$ according to a uniform distribution on the parameter space; 2. conditioned on the particular $\theta$, drawing the observed data. Then the problem is converted into determining from the data whether we can recover the underlying $\theta$, as a canonical hypothesis testing problem. For any $\delta$-packing $\{\theta_1,\ldots,\theta_M\}$ of $B(k)$, suppose $B$ is sampled uniformly from the $\delta$-packing. Then, following a standard argument of the Fano method (Wainwright, 2019), it holds that:
$$P\Big(\min_{\hat\theta}\sup_{\|\theta^*\|_0\le k}\|\hat\theta - \theta^*\|_2 \ge \delta/2\Big) \ge \min_{\tilde\theta} P\big(\tilde\theta \ne B\big), \tag{A.3}$$
where $\tilde\theta$ is a testing function that decides from the data whether some estimated $\theta$ equals an element sampled from the $\delta$-packing. The next step is to bound $\min_{\tilde\theta} P(\tilde\theta\ne B)$, for which the information-theoretical lower bound (Fano's Lemma) gives:
$$\min_{\tilde\theta} P\big(\tilde\theta\ne B\big) \ge 1 - \frac{I(y, B) + \log 2}{\log M}, \tag{A.4}$$
where $I(\cdot,\cdot)$ denotes the mutual information. Then we only need to bound the mutual information term. Let $P_\theta$ be the distribution of $y$ (the vector consisting of the $n$ samples) given $B = \theta$. Since $y$ is distributed according to the mixture $\frac{1}{M}\sum_i P_{\theta_i}$, it holds that:
$$I(y, B) = \frac{1}{M}\sum_i D_{KL}\Big(P_{\theta_i}\,\Big\|\,\frac{1}{M}\sum_j P_{\theta_j}\Big) \le \frac{1}{M^2}\sum_{i,j} D_{KL}\big(P_{\theta_i}\|P_{\theta_j}\big),$$
where $D_{KL}$ is the Kullback-Leibler divergence. The next step is to determine $M$, the size of the $\delta$-packing, and the upper bound on $D_{KL}(P_{\theta_i}\|P_{\theta_j})$ for elements $P_{\theta_i}, P_{\theta_j}$ of the $\delta$-packing. For the first part, it has been shown that there exists a 1/2-packing of $B(k)$ in $\ell_2$-norm with $\log M \ge \frac{k}{2}\log\frac{d-k}{k/2}$ (Raskutti et al., 2011). As for the bound on the KL-divergence term, note that given $\theta$, $P_\theta$ is a product distribution of the conditional Gaussian:
$$y\,|\,\epsilon \sim N\Big(\theta^\top\epsilon\,\frac{\sigma_h^2}{\bar\sigma^2},\ \theta^\top\theta\big(\sigma_z^2 - \sigma_z^4/\bar\sigma^2\big)\Big), \quad\text{where } \bar\sigma^2 := \sigma_h^2 + \sigma_\epsilon^2.$$
Henceforth, for any $\theta_1, \theta_2\in B(k)$, it is straightforward to compute:
$$D_{KL}(P_{\theta_1}\|P_{\theta_2}) = \mathbb{E}_{P_{\theta_1}}\Big[\frac{n}{2}\log\Big(\frac{\theta_1^\top\theta_1(\sigma_z^2 - \sigma_z^4/\bar\sigma^2)}{\theta_2^\top\theta_2(\sigma_z^2 - \sigma_z^4/\bar\sigma^2)}\Big) + \frac{\big\|y - \theta_2^\top\epsilon\,\sigma_h^2/\bar\sigma^2\big\|_2^2}{2\theta_2^\top\theta_2(\sigma_z^2 - \sigma_z^4/\bar\sigma^2)} - \frac{\big\|y - \theta_1^\top\epsilon\,\sigma_h^2/\bar\sigma^2\big\|_2^2}{2\theta_1^\top\theta_1(\sigma_z^2 - \sigma_z^4/\bar\sigma^2)}\Big] = \frac{\sigma_z^2}{2\bar\sigma^2}\|\epsilon(\theta_1 - \theta_2)\|_2^2,$$
where $y$ and $\epsilon$ are the vector and matrix consisting of the $n$ samples, i.e. $y\in\mathbb{R}^n$ and $\epsilon\in\mathbb{R}^{n\times d}$. Since each row of the matrix $\epsilon$ is drawn from $N(0, \sigma_\epsilon^2 I_{d\times d})$, a standard concentration result shows that, with high probability, $\|\epsilon(\theta_1-\theta_2)\|_2^2$ is bounded by $C\|\theta_1-\theta_2\|_2^2$ for some constant $C$. This gives the final upper bound on the KL divergence term:
$$D_{KL}(P_{\theta_1}\|P_{\theta_2}) \lesssim \frac{n\sigma_z^2\delta^2}{2\sigma_\epsilon^2}.$$
Substituting this result into (A.4) and (A.3), choosing $\delta^2 = \frac{Ck\sigma_\epsilon^2}{\sigma_z^2 n}\log\frac{d-k}{k/2}$ and rearranging terms, we obtain the desired result that, with probability at least 1/2:
$$\inf_{\hat\theta}\sup_{\theta^*:\|\theta^*\|_0\le k}\|\hat\theta - \theta^*\|_2 \gtrsim \frac{\sigma_\epsilon^2}{\sigma_h^2}\,k\,n^{-1}\log(d/k).$$

Proof of Lemma 1

Proof. We first express the NW predictor in its expectation form: $f_\phi(X) = \mathbb{E}_{X'}\big[y' K(X, X')\big]/Z$, where $Z$ is the normalization constant. Recall that $y\in\{-1,+1\}$ and $R(\cdot)$ is the risk associated with the 0-1 classification loss. We first define, for $x\in\mathcal{X}$:
$$\gamma_\phi(X) := \sqrt{\frac{\mathbb{E}_{X'}\big[K(X, X')\big]}{Z}},$$
where the expectation is taken with respect to the underlying distribution.
Using the Markov inequality, we immediately have $|\gamma(X)|\le 1/\sqrt\delta$ with probability at least $1-\delta$. It then holds, with probability $1-\delta$, that:
$$1 - R(f) = P\big(Yf(X)\ge 0\big) \ge \mathbb{E}\Big[\frac{Yf(X)}{\gamma(X)}\cdot\mathbb{1}[Yf(X)\ge 0]\Big] \ge \mathbb{E}\Big[\frac{Yf(X)}{\gamma(X)}\Big] \ge \sqrt\delta\,\frac{\mathbb{E}\big[\mathbb{1}[Y=Y']K(X,X')\big]}{Z},$$
which concludes the proof.

E PROOF OF THE RESULT IN SECTION 5

The proof of Proposition 4 relies on two important results, which we state below.

Lemma A.3 (Ben-Tal et al. (2013)). Let $c$ be any closed convex function with domain $[0,+\infty)$, and let its conjugate be given by $c^*(s) = \sup_{t\ge0}\{ts - c(t)\}$. Then for any distribution $Q^*$ and any function $g: \mathbb{R}^d\to\mathbb{R}$, it holds that:
$$\sup_{Q\in\mathcal{Q}(Q^*;\delta)}\int g(u)\,dQ(u) = \inf_{\lambda\ge0,\eta}\Big\{\lambda\int c^*\Big(\frac{g(u)-\eta}{\lambda}\Big)dQ^*(u) + \delta\lambda + \eta\Big\}. \tag{A.5}$$

The next lemma is adapted from the concentration result for random Fourier features in Rahimi & Recht (2007). Recall that $\phi_m\big(h(x), Q\big) := \frac{1}{\sqrt m}\big[\cos(u_1^\top h(x)), \sin(u_1^\top h(x)), \ldots, \cos(u_m^\top h(x)), \sin(u_m^\top h(x))\big]$ comes from the Monte Carlo approximation of $K(h(x), h(x'))$.

Lemma A.4. Let $A\subset\mathbb{R}^d$ have diameter $d_A$ such that $h(x)\in A$ for all $x\in\mathcal{X}$. It holds that:
$$\Pr\Big(\sup_{h(x),h(x')}\big|K(h(x), h(x')) - \big\langle\phi_m(h(x), Q), \phi_m(h(x'), Q)\big\rangle\big| \ge \epsilon\Big) \le 2^8\Big(\frac{\sigma_Q d_A}{\epsilon}\Big)^2\exp\Big(-\frac{m\epsilon^2}{4(d+2)}\Big), \tag{A.6}$$
where $Q$ is given by the inverse Fourier transform of $K$, and $\sigma_Q$ is the second moment of $Q$.

Recall that $A_n(Q) := \frac{1}{n(n-1)}\sum_{i\ne j}\mathbb{1}[y_i=y_j]\,\big\langle\phi_m(h(x_i), Q), \phi_m(h(x_j), Q)\big\rangle$. For notational convenience, in what follows we let $h_i := h(x_i)$, and further define $\tilde\phi(h, U) := [\cos(U^\top h), \sin(U^\top h)]$ as the random Fourier feature underlying $\phi_m(h, Q)$, where $U\sim Q$. Also, we let $K(Y, Y') := \mathbb{1}[Y=Y']$ be the labelling kernel of the downstream task.

Proof. Following Lemma A.3, we work with a scaled version of the f-divergence under $c(t) = \frac{1}{k}(t^k - 1)$ (because its dual function has a cleaner form). It is easy to check that $c^*(s) = \frac{1}{k'}[s]_+^{k'} + \frac{1}{k}$ with $\frac{1}{k'} + \frac{1}{k} = 1$. First note that the sampling error of the alignment $\mathbb{E}\big[K(Y_i, Y_j)K_Q(H_i, H_j)\big]$, i.e. replacing the expectation by the sample average, is given by:
$$\Delta_n(U) := \frac{1}{n(n-1)}\sum_{i\ne j}K(y_i, y_j)\,\tilde\phi(h_i, U)^\top\tilde\phi(h_j, U) - \mathbb{E}\big[K(Y_i, Y_j)\,\tilde\phi(H_i, U)^\top\tilde\phi(H_j, U)\big].$$
We show that $\Delta_n(U)$ is sub-Gaussian. Let $\{h'_i, y'_i\}_{i=1}^n$ be an i.i.d. copy of the observations, identical except for one element $(h_j, y_j)\ne(h'_j, y'_j)$. Without loss of generality, assume the last element differs: $(h_n, y_n)\ne(h'_n, y'_n)$. Let $\Delta'_n(U)$ be computed by replacing $\{h_i, y_i\}_{i=1}^n$ with $\{h'_i, y'_i\}_{i=1}^n$; the difference can be bounded via:
$$\begin{aligned} |\Delta_n(U) - \Delta'_n(U)| &= \frac{1}{n(n-1)}\Big|\sum_{i\ne j}K(y_i, y_j)\,\tilde\phi(h_i, U)^\top\tilde\phi(h_j, U) - K(y'_i, y'_j)\,\tilde\phi(h'_i, U)^\top\tilde\phi(h'_j, U)\Big| \\ &\le \frac{1}{n(n-1)}\Big(\sum_{i<n}\big|K(y_i, y_n)\,\tilde\phi(h_i, U)^\top\tilde\phi(h_n, U) - K(y_i, y'_n)\,\tilde\phi(h_i, U)^\top\tilde\phi(h'_n, U)\big| \\ &\qquad\quad + \sum_{j<n}\big|K(y_n, y_j)\,\tilde\phi(h_n, U)^\top\tilde\phi(h_j, U) - K(y'_n, y_j)\,\tilde\phi(h'_n, U)^\top\tilde\phi(h_j, U)\big|\Big) \le \frac{4}{n}, \end{aligned}$$
where the last inequality comes from the fact that the random Fourier features $\tilde\phi$ and the labelling kernel $K(y, y')$ are both bounded by 1. Therefore, the above bounded-difference result tells us that $\Delta_n(U)$ is a $\frac{4}{n}$-sub-Gaussian random variable. To bound $\Delta_n(U)$, we use:
$$\begin{aligned} \sup_{Q\in\mathcal{Q}(Q^*;\delta)}\Big|\int\Delta_n(U)\,dQ\Big| &\le \sup_{Q\in\mathcal{Q}(Q^*;\delta)}\int|\Delta_n(U)|\,dQ \\ &\le \inf_{\lambda\ge0}\Big\{\frac{\lambda^{1-k'}}{k'}\,\mathbb{E}_{Q^*}\big[|\Delta_n(U)|^{k'}\big] + \frac{\lambda(\delta+1)}{k}\Big\} \quad\text{(using Lemma A.3)} \\ &= (\delta+1)^{1/k}\,\mathbb{E}_{Q^*}\big[|\Delta_n(U)|^{k'}\big]^{1/k'} \quad\text{(solving for $\lambda^*$ above)} \\ &= \sqrt{\delta+1}\;\mathbb{E}_{Q^*}\big[|\Delta_n(U)|^2\big]^{1/2} \quad\text{(letting $k = k' = 2$)}. \end{aligned} \tag{A.7}$$
It means that in order to bound $\sup_{Q\in\mathcal{Q}(Q^*;\delta)}\big|\int\Delta_n(U)\,dQ\big|$, we only need to bound $|\Delta_n(U)|^2$.
Using classical results for sub-Gaussian random variables (Boucheron et al., 2013), it holds for λ ≤ n/8 that:

$$ E\big[ \exp\big( \lambda\, \Delta_n(U)^2 \big) \big] \le \exp\Big( -\frac{1}{2} \log\big(1 - 8\lambda/n\big) \Big). $$

We can integrate over Q and further upper bound the result with:

$$ \Pr\Big( \int \Delta_n(U)^2\, dQ \ge \frac{\epsilon^2}{\delta + 1} \Big) \le E\Big[ \exp\Big( \lambda \int \Delta_n(U)^2\, dQ \Big) \Big] \exp\Big( -\frac{\lambda \epsilon^2}{\delta + 1} \Big) \quad \text{(Chernoff bound)} $$
$$ \le \exp\Big( -\frac{1}{2}\log\Big(1 - \frac{8\lambda}{n}\Big) - \frac{\lambda\epsilon^2}{\delta + 1} \Big) \quad \text{(applying Jensen's inequality)}. $$

Hence, it holds that:

$$ \Pr\Big( \sup_{Q \in \mathcal{Q}(Q^*;\delta)} \Delta_n(U) \ge \epsilon \Big) \le \exp\Big( -\frac{n\epsilon^2}{16(1+\delta)} \Big). $$

Combining this result with the approximation error of the random Fourier features in Lemma A.4, we obtain the desired result.

F SUPPLEMENTARY MATERIAL FOR THE EXPERIMENTS

We provide the descriptions, details, and additional results of our experiments in this part of the appendix.

F.1 REPLICATING THE INSTABILITY ISSUE WITH THE IMDB DATASET

The IMDB dataset is a binary sentiment analysis dataset consisting of 50,000 reviews from the Internet Movie Database (IMDb), labeled as positive or negative4. We consider this dataset as an additional proof of concept in particular because it appears in the official TensorFlow tutorial5. We directly adopt the implementation from the tutorial, including the text preprocessing pipeline and model architecture. In particular, the raw input texts are passed to a text vectorization layer, an embedding layer, a bidirectional RNN layer, and finally two dense layers that produce the final score for binary classification. We extract the output of the last hidden layer as the hidden representation. In our experiments, we set the hidden dimension to 32.

The results are provided in Figure A.1, where we observe patterns highly similar to the ML-1m data. In particular, the pretrained embeddings have high variances in their exact values even though their pretraining objectives converge to similar loss and accuracy, and the variances get larger as the pretraining progresses. Two minor differences from the ML-1m result are that the pretraining process is less stable for IMDB (Figure A.1b), and that the variance distribution here is unimodal instead of the bimodal distribution we observed in Figure 1c.

F.2 DETAILS OF THE BENCHMARK EXPERIMENTS

The main benchmark experiments in our paper are conducted on the MovieLens-1m6 dataset, which is a well-established public dataset for movie & user contextual analysis and for examining recommendation. The ML-1m dataset consists of 1 million movie ratings from 6,000 users on 4,000 movies, on a one-to-five rating scale. According to Harper & Konstan (2015), the data is collected in an initial and a follow-up stage: the initial stage mainly involves popularity-based exposure (a very small proportion involves random exposure), while in the follow-up stage, rating feedback is collected under some deterministic recommender systems. By convention, we convert the dataset to implicit feedback, which amounts to treating all rating events as clicks. For contextual information, each movie is provided with its title and genre, in the form of English words or sentences. There are 18 genres in total.

4https://www.imdb.com/interfaces/
5https://www.tensorflow.org/text/tutorials/text_classification_rnn
6https://grouplens.org/datasets/movielens/1m/

Pretraining movie embeddings from user behavior data

We use Item2vec (Barkan & Koenigstein, 2016) to train movie embeddings from users' consecutive viewing data.
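A minimal sketch of this pretraining step, assuming Gensim's Word2vec implementation (the movie ids and sequences below are illustrative; the exact training schema is described next):

```python
from gensim.models import Word2Vec  # Gensim >= 4.0 (uses vector_size instead of size)

# Hypothetical input: one viewing sequence (list of movie ids) per user,
# built from the ML-1m ratings ordered by timestamp.
viewing_sequences = [
    ["m_1193", "m_661", "m_914"],
    ["m_1357", "m_3068", "m_1193", "m_647"],
]

# Item2vec = Word2vec with skip-gram over viewing sequences;
# window=3 (#ws) and negative=3 (#ns) match the schema fixed in the text.
model = Word2Vec(
    sentences=viewing_sequences,
    vector_size=64,   # embedding dimension d (varied in our experiments)
    window=3,         # consecutive viewing window #ws
    negative=3,       # negative samples per positive pair #ns
    sg=1,             # skip-gram objective, as in Word2vec/Item2vec
    min_count=1,
)

movie_embedding = model.wv["m_1193"]  # pretrained representation h(x) for one movie
```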
Item2vec uses the same objective function as Word2vec (Mikolov et al., 2013), where the words become movies and the corpus becomes each user's viewing sequence. Movies belonging to a consecutive viewing window of size #ws are treated as positive pairs, and for each positive pair, we randomly sample #ns negative movies. Together with the embedding dimension d and the ℓ2-regularization parameter (weight decay) λ, the training schema is described by the quadruplet (#ws, #ns, d, λ). Since our goal is not to find the best pretraining schema, we fix #ws=3 and #ns=3, and focus on studying how our results may change under different d.

Pretraining movie embeddings from movie contextual data

Since the movie titles and other contextual information can be relatively short, large NLP models may not be appropriate. Therefore, we use the Doc2vec model (Dai et al., 2015) to pretrain the movie embeddings. Since Doc2vec is built on top of Word2vec, the training schema can also be described by the quadruplet (#ws, #ns, d, λ), and we likewise fix #ws=3 and #ns=3.

Using pretrained movie embeddings for downstream genre prediction

Given pretrained movie embeddings ĥ(x), we employ logistic regression to predict the score for a movie belonging to a particular genre, i.e. p(Y_i = k) ∝ exp(θ_k ĥ(x)). Due to its simplicity, we use the logistic regression subroutine from the scikit-learn package.

Using pretrained movie embeddings for downstream sequential recommendation

We employ a two-tower model structure (Figure A.2) for the downstream sequential recommendation, which is very common in the recommendation community. In particular, we use an RNN to aggregate the past interaction sequence, so the whole model is very similar to GRU4Rec (Jannach & Ludewig, 2017). We use the sigmoid function as the activation function for the dense layers. The model training is done in a seq-to-seq fashion, where for each positive target we randomly sample 3 negative targets. We fix the hidden dimension of both the RNN and the dense layers to 16.

Model Training

Besides Doc2vec and the logistic regression, all of our models are optimized using the Adam optimizer with early stopping, which stops the training if the improvement in the loss is less than 1e-4 for three consecutive epochs. For all the experiments, we set the initial learning rate to 0.001 and the weight decay to 1e-4. Our main implementation is in TensorFlow, and all computations are conducted on a 16-core Linux cluster with 128 GB memory and two Nvidia Tesla V100 GPUs, each with 16 GB memory. We use the Doc2vec subroutine from the Gensim package7 to pretrain the movie embeddings for ML-1m task2.

Train/test split and metrics

Since the goal of our experiments is not to find the best modelling and training configuration, we do not use a validation set to tune the hyperparameters. Instead, we provide a sensitivity analysis on certain parameters of interest in Appendix F.3. For ML-1m task1, we randomly split the movies 80%-20% to construct the training and testing sets for genre classification. For evaluation, we use accuracy and the Macro F1 score as metrics. For ML-1m task2, we follow the convention of using the last user-movie interaction for testing and all previous interactions for training. For evaluation, we use Recall@5, i.e. whether the movie that the user truly viewed is among the top-5 recommendations, and NDCG@5, which further discounts the position of the viewed movie within the top-5 recommendations.
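As a reference for how these two metrics are computed with a single relevant item per test interaction (a minimal sketch; the variable names are ours, not from our codebase):

```python
import numpy as np

def recall_and_ndcg_at_k(scores, target, k=5):
    # scores: model scores over all candidate movies for one test interaction;
    # target: index of the movie the user truly viewed.
    top_k = np.argsort(-scores)[:k]              # indices of the top-k recommendations
    if target not in top_k:
        return 0.0, 0.0                           # both metrics are 0 if the movie is missed
    rank = int(np.where(top_k == target)[0][0])   # 0-based position within the top-k
    recall = 1.0                                  # Recall@k with one relevant item
    ndcg = 1.0 / np.log2(rank + 2)                # DCG of one relevant item; IDCG = 1
    return recall, ndcg

scores = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.8])
print(recall_and_ndcg_at_k(scores, target=3))     # target ranked 3rd -> (1.0, 0.5)
```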
7https://radimrehurek.com/gensim/models/doc2vec.html

F.3 SUPPLEMENTARY RESULTS

We provide a sensitivity analysis for the featurization method. We focus on two variables: the dimension d and the variance of Q (denoted by σ²_Q). Recall that we consider Q to be a Gaussian distribution. We vary d in {16, 32, 64}, and vary σ²_Q in {0.2, 0.4, 0.6, 0.8}.

In particular, we first compare side-by-side h_d, ϕ_d(h), and ϕ_{2d}(h), while fixing Q as the standard Gaussian distribution. We see from Figure A.3 that ϕ_d(h) consistently outperforms h_d on both ML-1m task1 and ML-1m task2. ϕ_{2d}(h) also significantly improves upon the performance of ϕ_d(h), which suggests the benefits of allowing extra model complexity in the particular tasks we consider. Further, the performances of both ϕ_d(h) and ϕ_{2d}(h) have considerably smaller variances than h(x).

We then examine the sensitivity of the downstream performance w.r.t. Q – the sampling distribution for constructing ϕ_d(h). As stated before, we let Q be a zero-mean Gaussian distribution and vary its variance. From Figure A.4, we observe that for all the dimensions we consider, the downstream task under ϕ_d(h) is very stable under different σ_Q. This echoes Corollary 4, which states that our approach enjoys robustness to the selection of Q. In real-world production, we have been using the standard Gaussian distribution and have observed no issues.

F.4 ONLINE DEPLOYMENT

To avoid potential conflicts of interest, we provide an overview of our production experiments. We aim to provide enough detail for interested practitioners to draw inspiration for both developing their own solutions and replicating ours.

Some background introduction. In e-commerce applications, the representation of items serves as a central component for almost all machine learning algorithms (Wang et al., 2018; Xu et al., 2021). In the past few years, we have built a dedicated item representation learning pipeline that uses multiple sources of data to optimize item embeddings. Since there are billions of items on our platform Ecom, it took us considerable effort to optimize the data pipeline and training routines such that the model refresh can be done on a daily basis. We point out that the daily refresh is necessary for item representation learning because the catalog of items, which is a major source of pretraining data, also gets minor updates on a daily basis. For example, new items can be added, and the features of items (e.g. title, price, description) can be modified by the vendors. The other major source of pretraining data is the historical customer behavior data. It is critical for revealing the relationships (e.g. similarity, complementariness, compatibility, substitutability) among items. These relationships are relatively stable in the customer population, so the more data we use, the more likely we are to discover useful signals. Our model for pretraining item embeddings has feed-forward, recurrent-unit, and contrastive learning components; the reason for using these different components is to effectively handle data with different structures.

The pretrained item embeddings are expected to be stable. As we mentioned above, the relationships among items are quite stable, and the catalog data has very minor differences over a limited span of time. Therefore, downstream machine learning models may follow a weekly or bi-weekly refresh schedule while expecting very stable performance.
The four major applications that depend on our pretrained item embeddings, first introduced in Section 6, are item-page recommendation (Task1), search ranking (Task2), email recommendation (Task3), and home-page marketing (Task4). Each of the four tasks uses both item embeddings and task-specific features to optimize its objectives. Most of them use model structures similar to the Wide and Deep network (Covington et al., 2016) to effectively combine information from different sources. Item-page recommendation aims to provide items related to the anchor item on the particular page the customer is viewing; item embeddings are used in both the recall and reranking stages. Search ranking is a huge system that combines multiple components; the item embeddings are used in a particular recall stage. Email recommendation is a simpler task that aims to recommend items related to what customers recently interacted with, or are expected to interact with again; item embeddings are used along with other features to build a model that optimizes CTR. Marketing is also a huge system in Ecom, and the particular component that uses item embeddings is the predicted click-through rate model that supports bidding and placement ranking.

Brief summary of the production environment and implementation. Our production environment is relatively standard in the e-commerce industry, with Hive/Spark supporting the offline data streaming and TensorFlow Serving supporting the online inference of deep learning models. Featurizing h(·) via ϕ(h(·), Q) can be easily implemented in production. Some practical advantages are:

• the algorithm is very simple and requires no training;
• it fits seamlessly into the current big-data infrastructure and frontend service;
• it requires no change to the downstream model;
• the overheads for both training and inference time are small;
• the communication can be done by simply recording the dimension and random seed under which a particular U is generated.

On the backend, featurizing pretrained representations is engineered into a subroutine (on top of the original automated representation learning pipeline) callable by downstream applications. For instance, it can be a simple PySpark function if the end point of the automated representation learning pipeline is a feature store in HDFS. The dimension m and the random seed for generating the random directions U = [u1, . . . , um] are the two main inputs. Configuring and logging the random seed used by each experiment is important because U might be reused for reproducing the exact same featurization in later experiments.
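A minimal NumPy sketch of such a subroutine (the function name and the standard-Gaussian choice of Q are ours; the two inputs mirror the dimension m and the random seed described above):

```python
import numpy as np

def featurize_embeddings(H, m, seed):
    """Map pretrained embeddings H (num_items x d) to phi_m(h, Q) (num_items x 2m).

    The random directions U are fully determined by (m, seed), so logging these
    two values is enough to reproduce the exact same features in later runs.
    """
    d = H.shape[1]
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((d, m))          # u_1..u_m ~ Q = N(0, I), drawn once
    Z = H @ U                                # projections u_i^T h(x)
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(m)

# Same (m, seed) -> identical features, even across separate pipeline runs.
H = np.random.default_rng(0).standard_normal((1000, 64))
F1 = featurize_embeddings(H, m=128, seed=42)
F2 = featurize_embeddings(H, m=128, seed=42)
assert np.allclose(F1, F2)
```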
1. What are the insights revealed by the authors regarding pre-trained representations and their stochastic nature?
2. How does the proposed approach contribute to both applied and theoretical research of representation learning?
3. What are the limitations of the setting considered in the paper?
4. How could the connection between theoretical results and empirical results be improved?
5. What is the significance of the paper's contributions to the field of representation learning?
Summary Of The Paper
The authors' investigation reveals critical insights into the gap of uniform convergence for analyzing pre-trained representations, their stochastic nature under gradient descent optimization, what model convergence means for them, and how they might interact with downstream tasks. The authors propose a simple approach that contributes to both applied and theoretical research of representation learning.

Strengths And Weaknesses
Strengths:
- The authors explain the concerns with existing methods clearly.
- The authors break down each of their theoretical results and give a takeaway summary for better reader understanding.
- The authors back their theoretical claims with empirical results.

Weaknesses:
- The setting considered in the paper is too simplistic to be broadly used in industry.
- The connection between theoretical results and empirical results is not discussed well enough.

Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and the contribution is novel.
ICLR
Title
Some Practical Concerns and Solutions for Using Pretrained Representation in Industrial Systems

Abstract
Deep learning has dramatically changed the way data scientists and engineers craft features – the once tedious process of measuring and constructing can now be achieved by training learnable representations. Recent work shows pretraining can endow representations with relevant signals, and in practice they are often used as feature vectors in downstream models. In real-world production, however, we have encountered key problems that cannot be justified by existing knowledge. They raise concerns that the naive use of pretrained representations as feature vectors could lead to unwarranted and suboptimal solutions. Our investigation reveals critical insights into the gap of uniform convergence for analyzing pretrained representations, their stochastic nature under gradient descent optimization, what model convergence means for them, and how they might interact with downstream tasks. Inspired by our analysis, we explore a simple yet powerful approach that can refine pretrained representations in multiple ways, which we call Featurizing Pretrained Representations. Our work balances practicality and rigor, and contributes to both applied and theoretical research of representation learning.

1 INTRODUCTION
The ability of neural networks to learn predictive feature representations from data has always fascinated practitioners and researchers (Bengio et al., 2013). The learnt representations, if proved reliable, can potentially renovate the entire life cycle and workflow of industrial machine learning. Behind reliability are three core principles for extracting information from data, namely stability, predictability, and computability (Yu, 2020). These three principles not only justify the practical value of learnt representations, but also lead to the efficiency, interpretability, and reproducibility that are cherished in real-world production.

Since pretrained representations are optimized to align with the given task, intuitively, they should satisfy all three principles in a reasonable setting. However, when productionizing an automated pipeline for pretrained representations in an industrial system, we encountered key problems that cannot be justified by existing knowledge. In particular, while the daily refresh follows the same modelling and training configurations and uses essentially the same data1, downstream model owners reported unexpectedly high fluctuations in performance when retraining their models.

1Since the pretraining uses years of history data, the proportion of new daily data is quite small.

For illustration purposes, here we reproduce the issue using benchmark data, and take one further step where the pretraining is repeated on exactly the same data, under the same model configuration, training setup, and stopping criteria. We implement ten independent runs to essentially generate i.i.d. versions of the pretrained representation. We first visualize the dimension-wise empirical variances of the pretrained representations, provided in Figure 1a. It is surprising to find that while the pretraining losses almost converge to the same value in each run (Figure 1b), there is such a high degree of uncertainty about the exact values of each dimension. Further, in Figure 1c, we observe that the uncertainty (empirical variance) of the pretrained representation increases as the pretraining progresses.
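For concreteness, the dimension-wise variance statistic behind Figure 1a can be computed as follows (a schematic sketch; the file layout is hypothetical):

```python
import numpy as np

# Hypothetical layout: run_k/embeddings.npy holds the (num_items x d) embedding
# matrix produced by the k-th of the ten independent pretraining runs.
runs = np.stack([np.load(f"run_{k}/embeddings.npy") for k in range(10)])

# Variance across runs for each (item, dimension) entry, averaged over items:
# high values mean the exact coordinates of h(x) are unstable from run to run.
dim_variance = runs.var(axis=0).mean(axis=0)   # shape: (d,)
print(dim_variance)
```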
In the downstream task where pretrained representations are used as feature vectors (see the right figure), we observe that the performance does fluctuate wildly from run to run. Since we use logistic regression as the downstream model, the fluctuation can only be caused by the instability of the pretrained representations, because we can effectively optimize the downstream model to the global optimum. To demonstrate that the above phenomenon is not caused by using a specific model or dataset, we also experiment with a completely different pretraining model and benchmark data from another domain. We perform the same analysis, and unfortunately the same issues persist (Figure A.1 in the Appendix).

Existing deep learning theory, both the convergence and the generalization results (we will discuss them more in Section 2), can fail to explain why we should expect pretrained representations to work well in a downstream task when their exact values are so unstable. This is especially concerning for industrial systems, as the issue can lead to unwarranted and suboptimal downstream solutions. We experienced this issue firsthand in production, so we are motivated to crack the mysteries behind pretrained representations, and to understand if and how their stability can be improved without sacrificing predictability and computability. We summarize our contributions as below.

• We provide a novel uniform convergence result for pretrained representations, which points out gaps that relate to the stability and predictability issues.
• We break down and clarify the stability issue by revealing the stochastic nature of pretrained representations, the convergence of model output, and the stable and unstable components involved.
• We investigate the interaction between pretrained representations and downstream tasks in both parametric and non-parametric settings, each revealing how predictability can benefit or suffer from stability (or instability) for particular usages of pretrained representations.
• We discuss the idea of featurizing pretrained representations, and propose a highly practical solution that has nice guarantees and balances stability, predictability, and computability. We also examine its effectiveness in real-world experiments and online testing.

2 RELATED WORK

It is not until recent years that deep learning theory has seen major progress. Zhang et al. (2016) observed that parameters of neural networks stay close to initialization during training. At initialization, wide neural networks with random weights and biases are Gaussian processes, a phenomenon first discussed by Neal (1995) and recently refined by Lee et al. (2017); Yang (2019); these works, however, do not consider the effect of optimization. The Neural Tangent Kernel provides a powerful tool to study the limiting convergence and generalization behavior of gradient descent optimization (Jacot et al., 2018; Allen-Zhu et al., 2019), but it sometimes fails to capture meaningful characteristics of practical neural networks (Woodworth et al., 2020; Fort et al., 2020). Moreover, those works require parameters to stay close to initialization, in which case useful representation learning would not take place.
Indeed, it has also caught people's attention that representation learning can go beyond the neural tangent kernel regime (Yehudai & Shamir, 2019; Wei et al., 2019; Allen-Zhu & Li, 2019; Malach et al., 2021). Among these, a line of work connects the continuous-time training dynamics with mean field approximation (Mei et al., 2018; Sirignano & Spiliopoulos, 2020), and another direction studies the lazy training regime (Chizat et al., 2019; Ghorbani et al., 2019) where only the last layer of a neural network is trained. Unfortunately, their assumed training schemas all deviate from practical representation learning. Still, part of our analysis in Section 4.2 can be viewed as a practical discrete-time extension of the mean-field method. Perhaps the most practical setting for studying pretrained representations is Arora et al. (2019), which analyzes contrastive representation learning under a particular data generating mechanism. However, their results do not generalize to broader settings, and they cannot justify the stability issue of pretrained representations.

3 PRELIMINARIES

Notations. We use x ∈ X ⊆ R^{d0} and y ∈ R to denote the raw feature and outcome, uppercase letters to denote random variables and measures, and bold-font letters to denote matrices. Let h : X → R^d be the representation hypothesis, and f : R^d → R be the prediction hypothesis. The hypothesis classes are given by H and F respectively. Denote by ◦ the operator for function composition, and by ℓ : R × R → [0, 1] the loss function. We assume ℓ is 1-Lipschitz without loss of generality. Then the risk for a pair (h ∈ H, f ∈ F) is given by:

$$ R(h, f) := E_{(X,Y)\sim P}\big[ \ell\big( f \circ h(X), Y \big) \big], $$

where P is a measure on (X, R). We also use P^n to denote the corresponding product measure for (X1, Y1), . . . , (Xn, Yn).

The one-layer multi-layer perceptron (MLP) is perhaps the most fundamental representation learning model, given by: f ◦ h(x) = Θσ(Wx). Here, σ is the activation function, W ∈ R^{d×d0}, and Θ ∈ R^{k×d}. We mention that adding bias terms will not affect our analysis, so we drop them for brevity. In practice, Θ and W are often initialized as scaled i.i.d. Gaussian random variables following N(0, 1/d). We use [W]_i to denote the ith row of a matrix. The popular contrastive representation learning can also be considered a special case of this configuration2. Define the shorthand g(x) := Θσ(Wx). A typical pretraining process involves optimizing the risk function defined for pretraining and extracting the hidden representation. The optimization is done via stochastic gradient descent (SGD), e.g. W(t+1) = W(t) − α∇_W ℓ(g(x(t)), y(t)), where α is the learning rate. For convenience, we consider each mini-batch to contain one random sample, denoted by (x(t), y(t)) for the tth step.

Given a representation hypothesis h, we define: f_{h,n} := argmin_{f∈F} (1/n) Σ_{i=1}^n ℓ(f(h(x_i)), y_i). In the sequel, how well f_{h,n} ◦ h can generalize to a new i.i.d. sample of the downstream task is measured by:

$$ R(h) := E_{(X,Y)\sim P}\, E_{P^n}\big[ \ell\big( f_{h,n} \circ h(X), Y \big) \big], $$

where the second expectation E_{P^n} is taken with respect to the downstream data {X_i, Y_i}_{i=1}^n underlying f_{h,n}. Its empirical version is given by R_n(h) := (1/n) Σ_i ℓ(f_{h,n} ◦ h(X_i), Y_i).

4 MAIN ANALYSIS

4.1 THE GAP OF UNIFORM CONVERGENCE FOR PRETRAINED REPRESENTATION

Suppose h and f are optimized jointly (end-to-end) via empirical risk minimization (ERM), which amounts to solving: argmin_{h∈H,f∈F} (1/n) Σ_i ℓ(f ◦ h(x_i), y_i). In this setting, the generalization behavior of the solution is well-studied.
In particular, using the notion of Gaussian (or Rademacher) complexity3, the generalization error can be bounded by O(G_n(F ◦ H)/n + √(log(1/δ)/n)) with probability at least 1 − δ (Bartlett & Mendelson, 2002). This result, known as uniform convergence, is especially appealing because it both includes problem-specific aspects and applies to all functions in the composite hypothesis class F ◦ H := {f ◦ h : f ∈ F, h ∈ H}. Is it possible to achieve a comparable result for pretrained representations? Perhaps the most ideal setting for uniform convergence to hold under pretrained representations is:

C1: the pretraining and downstream training use the same data {(X_i, Y_i)}_{i=1}^n, i.e.

$$ \hat h, \hat f := \operatorname*{argmin}_{h\in\mathcal H,\, f\in\mathcal F} \frac{1}{n}\sum_{i=1}^n \ell\big( f \circ h(X_i), Y_i \big), \qquad f_{\hat h, n} = \operatorname*{argmin}_{f\in\mathcal F} \frac{1}{n}\sum_{i=1}^n \ell\big( f(\hat h(X_i)), Y_i \big); $$

C2: they rely on the same prediction function class F.

2We can simply set x_i ∈ R^n as one-hot encodings, and W, Θ ∈ R^{d0×d} where they are allowed to coincide. Then we let h(x_i) = [W]_i or [Θ]_i depending on the context. The activation becomes the identity function, and ℓ(f(x_i), x_j) = log(1 − σ(h(x_i)ᵀh(x_j))) (or log σ(h(x_i)ᵀh(x_j))), with σ(·) being the sigmoid function.
3We use Gaussian complexity G(·) here for some of its technical conveniences, and let G_n be the empirical Gaussian complexity. See Appendix A for detail.

These two conditions essentially eliminate the confounding effects of model and data mismatch. Thus, if uniform convergence cannot hold in this setting, it is unlikely to serve more general use cases. We first summarize the common intuition behind why pretrained representations might work:

• the pretraining objective, when well-designed, reasonably predicts the empirical downstream risk of f_{h,n} (intuition 1);
• f_{h,n}'s empirical downstream risk generalizes to the true downstream risk (intuition 2).

These two intuitions have also been exemplified for contrastive representation learning in Arora et al. (2019) and its follow-up work. Our main contribution here is to make the above intuitions rigorous, and to reveal whether they are indeed sufficient for uniform convergence in general settings. Recall that, given complete information on a downstream task, the best we can do is: min_{h∈H,f∈F} R(h, f). We denote the representation hypothesis that achieves this minimum by h*. Let ĥ be given as in C1. Then the generalization error is simply given by: R(ĥ) − min_{h∈H,f∈F} R(h, f). Following the standard derivation, which decomposes the generalization error and takes the supremum to upper bound each term, we run into terms that exactly characterize the above two intuitions. As we show in Appendix B, it holds that:

$$ R(\hat h) - \min_{h\in\mathcal H,\, f\in\mathcal F} R(h, f) \le \sup_h \big\{ E_{P^n} R_n(h) - R_n(h) \big\} + \sup_h E_{P^n}\Big[ E_{(X,Y)\sim P}\big[ \ell\big( f_{h,n} \circ h(X), Y \big) - R_n(h) \big] \Big] + \text{remainder}, $$

where the first term sup_h {E_{P^n} R_n(h) − R_n(h)} exactly matches intuition 1, and the second term can be further upper bounded using:

$$ E_{(X,Y)\sim P}\big[ \ell\big( f_{h,n} \circ h(X), Y \big) - R_n(h) \big] \le \sup_f \Big\{ E_{(X,Y)\sim P}\big[ \ell\big( f \circ h(X), Y \big) \big] - R_n(h) \Big\}, $$

which underlies intuition 2. The remainder terms can be bounded using standard concentration results. However, we also spot a critical issue with the first term, which we first expand for clarity:

$$ \sup_h \Big\{ E_{P^n}\Big[ \frac{1}{n}\sum_i \ell\big( f_{h,n} \circ h(X_i), Y_i \big) \Big] - \frac{1}{n}\sum_i \ell\big( f_{h,n} \circ h(X_i), Y_i \big) \Big\}. $$

Notice that this is not the typical empirical process encountered in a standard generalization setting, and we show that its upper bound is actually given by O(G_n(H)/√n + √(log 1/δ)) following the same procedure as Bartlett & Mendelson (2002).
Compared with the standard generalization bound, here the slack term √(log 1/δ) does not vanish as we increase n. Therefore, there exist gaps between the common intuitions and achieving uniform convergence. Before we discuss the cause of the gaps and their implications, we first present the complete result below.

Proposition 1. Let G′_n(·) be a slightly modified Gaussian complexity term. Under the conditions and definitions in C1 and C2, it holds with probability at least 1 − δ that:

$$ R(\hat h) - \min_{h\in\mathcal H,\, f\in\mathcal F} R(h, f) \;\lesssim\; \frac{G_n(\mathcal H)}{\sqrt n} + \frac{G'_n(\mathcal F)\, \sup_h \sqrt{E\|h(X)\|_2^2}}{\sqrt n} + \sqrt{\log 1/\delta}. $$

The proof is deferred to Appendix B. Proposition 1 can be viewed as a "no free lunch" result for using pretrained representations: even in the most ideal setting we study here, uniform convergence cannot be expected for all representation hypotheses. The gap is that not for every h ∈ H can the pretraining objective be predictive of f_{h,n}'s empirical downstream risk. Imagine an entirely random h̃. Then both its pretraining objective and the empirical downstream risk of f_{h̃,n} may have high variances that do not scale with n, so the prediction will not concentrate whatsoever.

Takeaway. The implications of this gap are two-fold. Firstly, it does not suffice to only study H and the data distribution – the statistical and algorithmic convergence properties of ĥ(X) could be more relevant, as they suggest its stability. Secondly, we cannot take the performance of f_{ĥ,n} for granted, at least not without understanding how ĥ(X) interacts with the downstream learning and generates f_{ĥ,n} – which ultimately relates to its predictability. Unfortunately, we find a lack of discussion on these two issues in the existing literature, so we investigate them in the next two sections.

4.2 STOCHASTIC NATURE OF PRETRAINED REPRESENTATION, AND THE CONVERGENCE OF THE PRETRAINING MODEL

In this section, we reveal two important statistical and algorithmic properties of pretrained representations. We show that while they persist as random vectors during SGD optimization (as shown in Figure 1), the output of the pretraining model can be deterministic and converge to some optimal solution. Two contributing factors are the scaled i.i.d. initialization and the inductive bias of gradient descent. Our findings provide critical insight into the stability of pretrained representations.

We motivate our statistical analysis by deriving the optimization path of the one-layer MLP introduced in Section 3. For notational convenience, we introduce Θ̃ and W̃ as the rescaled versions of Θ and W, such that Θ̃(0), W̃(0) are i.i.d. N(0, 1). We let ℓ′(g(x), y) be the derivative of the loss function, and similarly for other functions. In contrast to existing theoretical work that studies the optimization path under gradient flow or an infinitesimal learning rate, we fix the learning rate at α = 1 to reflect real-world practice. The output dimension is also set to k = 1 without loss of generality. In the first forward pass, since σ(W(0)x(0)) has i.i.d. coordinates, as d → ∞ it holds that:

$$ g^{(0)}(x^{(0)}) := \frac{1}{d} \sum_{i=1}^d \big[\tilde\Theta^{(0)}\big]_i \big[\sigma\big(\tilde{\mathbf W}^{(0)} x^{(0)}\big)\big]_i \;\xrightarrow{a.s.}\; E\big[\Theta^{(0)} \sigma\big(W^{(0)} x^{(0)}\big)\big] \quad \big(\text{denoted by } g_*^{(0)}(x^{(0)})\big), $$

where we use Θ(t), W(t) to denote an i.i.d. element (or row) of Θ̃(t) and W̃(t). As a result, ℓ′(g(0)(x(0)), y(0)) also converges to the deterministic value L(0) := ℓ′(g_*^{(0)}(x(0)), y(0)). Then, in the first backward pass, the updated parameters follow:

$$ \tilde\Theta^{(1)} = \tilde\Theta^{(0)} - L^{(0)} \sigma\big(\tilde{\mathbf W}^{(0)} x^{(0)}\big), \qquad \tilde{\mathbf W}^{(1)} = \tilde{\mathbf W}^{(0)} - L^{(0)} x^{(0)} \tilde\Theta^{(0)} \sigma'\big(\tilde{\mathbf W}^{(0)} x^{(0)}\big). $$
An important observation is that the updated parameters remain element-wise i.i.d. Consequently, the model output of the second forward pass will also converge to a deterministic value:

$$ g^{(1)}(x^{(1)}) \xrightarrow{a.s.} E\Big[ \big( \Theta^{(0)} - L^{(0)}\sigma( W^{(0)} x^{(0)} ) \big)\, \sigma\big( W^{(0)} x^{(1)} - L^{(0)} x^{(0)} \Theta^{(0)} \sigma'( W^{(0)} x^{(0)} )\, x^{(1)} \big) \Big]. $$

As we show in the following proposition, this (statistical) convergence result holds for any t, and there exists a general iterative update rule for g(t)(x). For some intuition, suppose σ(·) is the identity function; then Θ(t) and W(t) are simply linear combinations of Θ(0) and W(0).

Proposition 2. For the one-layer MLP we consider, with learning rate α = 1, for any step t > 1, as d → ∞, the model output g(t)(x) converges almost surely to g_*^{(t)}(x) defined as follows:

$$ g_*^{(t)}(x) = \big( C_1^{(t)} C_2^{(t)} + C_3^{(t)} C_4^{(t)} \big)\, x, \quad \text{with} \quad \big( C_1^{(t+1)}, C_2^{(t+1)}, C_3^{(t+1)}, C_4^{(t+1)} \big) = \big( C_1^{(t)}, C_2^{(t)}, C_3^{(t)}, C_4^{(t)} \big) + L^{(t)} x^{(t)} \big( C_3^{(t)}, C_4^{(t)}, C_1^{(t)}, C_2^{(t)} \big). $$

As a corollary, while the hidden representations remain random vectors throughout the SGD process (which can be seen from the update rule):

$$ h^{(t)}(x) := \sigma\big(\mathbf W^{(t)} x\big) = \sigma\Big( \tilde{\mathbf W}^{(t-1)} x - L^{(t-1)} x^{(t-1)} \tilde\Theta^{(t-1)} \sigma'\big( \tilde{\mathbf W}^{(t-1)} x^{(t-1)} \big)\, x \Big), $$

⟨h(t)(x), h(t)(x′)⟩ nevertheless also converges to some deterministic value as d → ∞. The proof and details are deferred to Appendix C. In Figure 1d, we see that the statistical convergence of the model output is indeed evident even with moderately small d, and its variance is by magnitudes smaller than the variance of the hidden representation σ(W(t)x) (compare the x-axes of Figures 1c and 1d).

On the other hand, the algorithmic convergence of model prediction has received considerable attention. It has been shown that over-parameterized models converge to minimum-norm interpolants due to the inductive bias of gradient descent (Bartlett et al., 2021; Soudry et al., 2018). For the sake of space, here we focus on their implications and leave the details to Appendix C. Roughly speaking, among the many locally optimal solutions that interpolate the training data, gradient descent converges to the one with the smallest norm, which usually has nice properties such as smoothness. We let g0 be that particular solution, so that lim_{t→∞} g(t)(x) = g0(x). Since ⟨h(t), h(t)⟩ converges statistically to a deterministic value at every optimization step, we can immediately conclude that:

• if g(t) takes the form of ⟨h(t), h(t)⟩, such as in contrastive representation learning, the inner product between hidden representations also converges algorithmically to g0's prediction;
• if g(t) = θh(t), i.e. the last hidden layer is used as the representation, note that a necessary but not sufficient condition for ‖g(t)(x) − g(t)(x′)‖ to be small is that ‖h(t)(x) − h(t)(x′)‖ is small as well. Suppose the h(t) are normalized; then upon algorithmic convergence, ⟨h(t)(x), h(t)(x′)⟩ is likely to be larger if x and x′ are close to each other under g0's prediction.

Takeaway. The stochastic nature of ĥ := lim_{t→∞} h(t) and the (approximate) convergence of ⟨ĥ(x), ĥ(x′)⟩ under gradient descent reveal two important properties of pretrained representations:

1. Instability of ĥ(x): the exact position of ĥ(x) in R^d is stochastic, depending on the initialization and the order in which the pretraining data is fed to SGD;
2. Stability of ⟨ĥ(x), ĥ(x′)⟩: the pairwise inner product ⟨ĥ(x), ĥ(x′)⟩ converges (approximately) to a value that is consistent with the minimum-norm interpolant of the pretraining task.
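As a quick numerical illustration of these two properties, the following sketch runs the α = 1 update rule above from several independent initializations over the same data stream (squared loss and tanh activation are our choices for concreteness; this is not the paper's experimental code):

```python
import numpy as np

def pretrain(d, init_seed, steps=20, d0=5, data_seed=0):
    # Rescaled parameterization: g(x) = (1/d) * theta @ tanh(W @ x),
    # with theta, W i.i.d. N(0, 1) and the alpha = 1 updates from Section 4.2.
    data = np.random.default_rng(data_seed)      # same data stream for every run
    init = np.random.default_rng(init_seed)      # initialization differs per run
    W, theta = init.standard_normal((d, d0)), init.standard_normal(d)
    xs, ys = data.standard_normal((steps + 1, d0)), data.standard_normal(steps)
    for t in range(steps):
        h = np.tanh(W @ xs[t])
        L = 2.0 * (theta @ h / d - ys[t])        # squared-loss derivative (assumption)
        # tuple assignment: the W update uses the pre-update theta, as in the text
        theta, W = theta - L * h, W - L * np.outer(theta * (1 - h ** 2), xs[t])
    h_test = np.tanh(W @ xs[-1])                 # representation of a held-out input
    return theta @ h_test / d, h_test[0]

for d in [100, 1000, 10000]:
    outs, coords = zip(*(pretrain(d, seed) for seed in range(10)))
    # var(g(x)) across initializations shrinks as d grows (Proposition 2), while a
    # single coordinate of h(x) keeps O(1) variance (instability of h itself).
    print(d, np.var(outs), np.var(coords))
```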
These results will also play a crucial role in understanding how ĥ interacts with the downstream learning, which we study in the next section.

4.3 INTERACTION WITH DOWNSTREAM TASK

To be comprehensive, we consider both the parametric and the non-parametric setup for the downstream task. Interestingly, they reveal different aspects of the predictability of ĥ.

Parametric setup. To eliminate the interference of label noise, we consider the noiseless setting where the output of the downstream task is generated by: y_i = f*(E[h(x_i)]), i = 1, . . . , n. Because h(x) might be high-dimensional, we assume there is some sparsity in f*. The conditions below provide perhaps the easiest parametric setup for pretrained representations to perform well.

C3: Let f*(h) := ⟨θ*, h⟩ with ‖θ*‖₀ ≤ q, and let the inputs h_i := E[h(x_i)] be sampled from N(0, σ_h² I), where σ_h is the strength of the signal. We showed previously that ĥ is stochastic, so we simply set ĥ_i := h_i + ϵ_i, where ϵ_i ∼ N(0, σ_ϵ² I) captures the variance of the pretrained representation.

Intuitively, since the ϵ_i are i.i.d., it holds that E_ϵ[⟨ĥ(x_i), ĥ(x_j)⟩] = ⟨h(x_i), h(x_j)⟩, so recovering θ* should be less challenging. However, we show that the variance will again prohibit efficient learning, and the best f_{ĥ,n} can do is controlled by σ_ϵ/σ_h – a notion of signal-to-noise ratio for pretrained representations. The result below takes the form of a minimax lower bound: an information-theoretical quantity that characterizes the inherent difficulty of a problem. Our proof (in Appendix D) is based on the Fano method that was previously used to prove a lower bound under label noise (Raskutti et al., 2011), which is very different from our setting.

Proposition 3. Under C3, it holds with probability at least 1/2 that:

$$ \inf_{\hat\theta} \sup_{\|\theta^*\|_0 \le q} \|\hat\theta - \theta^*\|_2 \;\gtrsim\; \big( \sigma_\epsilon^2 / \sigma_h^2 \big) \cdot q\, n^{-1} \log(d/q), $$

where the infimum over θ̂ is taken with respect to any learning procedure based on {ĥ(x_i), y_i}_{i=1}^n.

Takeaway. The result in Proposition 3 is alarming because during pretraining, the variance of h(x) might increase as more and more stochastic terms are added (suggested by both the derivations in Section 4.2 and the empirical result in Figure 1c). The above lower bound shows that the predictability of ĥ(x) can be compromised by the variance it inherits from pretraining. This also explains the instability in downstream machine learning that we experienced in real-world production.

Non-parametric setup. Among the non-parametric regression estimators, the Nadaraya-Watson (NW) estimator has received considerable attention due to its simplicity and effectiveness (Nadaraya, 1964). It can be thought of as a smoothing nearest-neighbor estimator under a weighting schema:

$$ f_{h,n} \circ h(x) := \sum_{i=1}^n y_i\, w_h(x, x_i), \qquad w_h(x, x_i) := K\big( h(x) - h(x_i) \big) / Z, $$

where K : R^d → R₊ is a kernel and Z is a normalizing constant. Here, we omit the bandwidth parameter for convenience. The Gaussian kernel K(u) ∝ exp(−‖u‖²₂) is a common choice, so when pretrained representations are normalized, it depends on h only via ⟨h(x), h(x′)⟩ – a more stable quantity according to the previous section. We denote this kernel by K(⟨h(x), h(x′)⟩). It is well understood that the generalization of a kernel support vector machine is controlled by the kernel-target alignment (Cristianini et al., 2001), i.e. ⟨y⃗, K y⃗⟩, where y⃗ = [y_1, . . . , y_n]ᵀ and K_{i,j} = K(⟨h(x_i), h(x_j)⟩). We prove that this is also the case for the NW estimator, with a simple result that does not resort to concentration arguments.
The proof is in Appendix D.

Lemma 1. Under the 0–1 loss, with probability at least 1 − δ, the risk of the NW estimator satisfies:

$$ R(f_{h,n} \circ h) \le 1 - \sqrt{\delta} \cdot E\big[ \mathbb{1}[Y = Y'] K\big( \langle h(X), h(X') \rangle \big) \big], $$

where the expectation is taken with respect to (X, Y) ∼ P and (X′, Y′) ∼ P.

Takeaway. Lemma 1 shows that the predictability of h(x), when expressed and measured through the more stable ⟨h(x), h(x′)⟩, is strictly guaranteed. Therefore, using h(x) in a downstream task in the form of h⃗(x) := [e^{⟨h(x),h(x_1)⟩}, . . . , e^{⟨h(x),h(x_n)⟩}] can be beneficial, and it can be interpreted as a representation of the weights in the NW estimator. Further, h⃗(x) contains all the pairwise relationships, which can be more closely related to the pretraining objective. Note that h(x) can also be viewed as a compression of h⃗(x), because [h⃗(x_i)]_j = exp(⟨h(x_i), h(x_j)⟩). Nevertheless, h⃗(x) and h(x) cannot be compared directly because they have different intrinsic dimensionality. In terms of computability, h⃗(x) ∈ R^n is also no match for h(x) ∈ R^d – computing h⃗(x) itself can be non-trivial for large-scale applications. We aim to resolve these issues in the next section.

5 FEATURIZING PRETRAINED REPRESENTATION

Our next goal is to build, on top of h(x), features or representations that are comparable to h⃗(x) in terms of stability and predictability, and have computability similar to h(x). Suppose {h(x_i)}_{i=1}^n are normalized. Then h⃗(x_i) is simply the exponential of the pairwise cosine distances between h(x_i) and all the pretrained representations. Notice that the angle between any pair (h(x_i), h(x_j)) can be decomposed into their respective angles with a baseline direction u ∈ R^d, ‖u‖₂ = 1. When the set of baseline directions is rich enough, we can recover all the pairwise cosine distances in h⃗(x_i) from their angles with the baseline directions. Given U := [u_1, . . . , u_m] ∈ R^{d×m}, the set of angles between h(x_i) and U forms a measurement of the relative location of h(x) ∈ R^d. We refer to such a measurement process as featurizing the pretrained representation, as it is similar to how features are constructed by measuring experimental subjects.

While featurizing h(x) according to its geometric properties is an appealing solution, it is unknown how many baseline directions are needed to preserve the stability and predictability of h⃗, and what the optimal way to choose those directions is. Fortunately, Bochner's Theorem (Loomis, 2013) from harmonic analysis lays a solid foundation for selecting the directions and providing approximation and learning guarantees. Moreover, the resulting measurements coincide with the random Fourier features (Rahimi & Recht, 2007; Liu et al., 2021) that play a critical role in many machine learning communities. For the Gaussian kernel we studied, Bochner's Theorem states that there exists a measure Q on R^d such that:

$$ K(h(x), h(x')) = \int_{\mathbb R^d} e^{iu( h(x) - h(x') )}\, q(u)\, du \;\overset{\text{real part}}{=}\; E_{u\sim Q}\big[ \cos\big( u( h(x) - h(x') ) \big) \big]. $$

Since cos(a − a′) = cos(a)cos(a′) + sin(a)sin(a′), we can approximate the kernel value using the Monte Carlo method as below:

$$ K(h(x), h(x')) \approx \frac{1}{m} \sum_{i=1}^m \cos\big( u_i h(x) \big) \cos\big( u_i h(x') \big) + \sin\big( u_i h(x) \big) \sin\big( u_i h(x') \big), \qquad u_i \overset{i.i.d.}{\sim} Q. $$

Let ϕ_m(h(x), Q) := (1/√m)[cos(u_1 h(x)), sin(u_1 h(x)), . . . , cos(u_m h(x)), sin(u_m h(x))] be the featurization of h(x) according to Bochner's Theorem. Note that it amounts to measuring h(x)'s distances with respect to random directions drawn from Q(u), and then transforming them through trigonometric functions.
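A minimal numerical check of this construction (our own sketch; with Q = N(0, I) the Monte Carlo average approximates the Gaussian kernel exp(−‖h − h′‖²₂/2), a bandwidth-variance correspondence that the text leaves implicit, and the binary labels are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 32, 4000

H = rng.standard_normal((n, d))
H /= np.linalg.norm(H, axis=1, keepdims=True)    # normalized representations

# phi_m(h, Q): draw u_1..u_m once from Q = N(0, I), then take cos/sin projections.
U = rng.standard_normal((d, m))
Z = H @ U
Phi = np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(m)

# <phi(h), phi(h')> should approximate K(h, h') = exp(-||h - h'||^2 / 2).
sq_dists = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)
print("max kernel approximation error:", np.abs(Phi @ Phi.T - K_exact).max())

# Kernel-target alignment A_n(Q) from Section 5, with hypothetical binary labels.
y = rng.integers(0, 2, size=n)
same = (y[:, None] == y[None, :]) & ~np.eye(n, dtype=bool)
A_n = (same * (Phi @ Phi.T)).sum() / (n * (n - 1))
print("alignment A_n(Q):", A_n)
```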
Furthermore, ⟨ϕ_m(h(·), Q), ϕ_m(h(·), Q)⟩ can approximate any entry of h⃗. To be more precise, Rahimi & Recht (2007) show that it only requires m = Ω((d/ϵ²) log(σ_Q/ϵ)) to achieve |K(h(x), h(x′)) − ⟨ϕ_m(h(x), Q), ϕ_m(h(x′), Q)⟩| ≤ ϵ, where σ_Q² is the second moment of Q. Therefore, when m is comparable to d, the featurized ϕ_m(h(x), Q) achieves the stability and predictability of h⃗, as well as the computability of h. Converting h(x) to ϕ_m(h(x), Q) is computationally efficient, since u_1, . . . , u_m only need to be drawn from Q once and then applied to all h(x_i), i = 1, . . . , n.

However, there is still the obstacle of finding the optimal Q*. Strictly speaking, Q* is obtained from the inverse Fourier transform, but in practice the standard Gaussian distribution is often used. Indeed, computing the inverse Fourier transform and sampling from it pose another challenging task. To our knowledge, there is no existing study on whether we can safely sample u from a proxy Q. In the following proposition, we show that using Q instead of Q* will not cost stability as long as their discrepancy is bounded. In particular, we state our result in the context of Lemma 1, that is, the downstream risk is controlled by the alignment A := E[1[Y = Y′] K(⟨h(X), h(X′)⟩)]. We use D_s(Q, Q*) := ∫ s(dQ/dQ*) dQ* to denote the f-divergence induced by s(·).

Proposition 4. Let Q(Q*; δ) := {Q : D_s(Q, Q*) ≤ δ} be a D_s-ball of radius δ centered at Q*. Let {h(x_i), y_i}_{i=1}^n be the downstream data, and A_n(Q) := (1/(n(n−1))) Σ_{i≠j} 1[y_i = y_j] ⟨ϕ_m(h(x_i), Q), ϕ_m(h(x_j), Q)⟩. It holds that:

$$ \Pr\Big( \sup_{Q \in \mathcal{Q}(Q^*; \delta)} \big| A_n(Q) - A_n(Q^*) \big| \ge \epsilon \Big) \;\lesssim\; \frac{\sigma_{\mathcal Q}^2}{\epsilon^2} \exp\Big( -\frac{m\epsilon^2}{16(d+2)} \Big) + \exp\Big( -\frac{n\epsilon^2}{64(1+\delta)} \Big), $$

where σ_Q := max_{Q∈Q} σ_Q.

The significance of Proposition 4 is that even if the optimal Q* is not used, in the worst-case scenario, the instability it causes (reflected via δ) vanishes quickly as the sample size gets larger. Similarly, increasing the dimension of the featurized representation ϕ_m also speeds up the convergence exponentially. Together, they guarantee predictability even if Q* is not used. The proof is provided in Appendix E.

Takeaway. Featurizing a pretrained representation as ϕ_m(h, Q) offers a simple and practical solution that balances stability, predictability, and computability. We just showed that Q can simply be the standard Gaussian distribution, and that the dimension of ϕ_m(h) can be chosen to satisfy a specific approximation threshold ϵ; it can also be treated as a tuning parameter in downstream tasks.

6 BENCHMARK AND REAL-WORLD EXPERIMENTS

We conduct experiments on the benchmark dataset MovieLens-1m (ML-1m) for illustration and reproducibility purposes. The real-world production experiments took place at a major US e-commerce platform anonymized as Ecom. The detailed descriptions of ML-1m and the introduction of Ecom's production environment are provided in Appendix F.

On ML-1m. The dataset supports two types of pretraining-downstream task combination: (a) leverage the sequences of user viewing data to pretrain movie embeddings, then use the embeddings to predict the genre of each movie (ML-1m task 1); (b) pretrain movie embeddings using the title and other descriptions, then use the embeddings for downstream sequential recommendation (ML-1m task 2). The detailed data processing, model and pretraining configurations, downstream training/testing setup, evaluation metrics, and sensitivity analysis are deferred to Appendix F.
On ML-1m task 1, we use contrastive representation learning to pretrain the movie embeddings, and employ logistic regression to predict the genre using the movie embeddings as features. On ML-1m task 2, we use a bidirectional-RNN-type structure on the movies' NLP data, and extract the final hidden layer as the pretrained representation. The downstream sequential recommendation task employs a two-tower structure, and an RNN is used to aggregate the history viewing sequence.

In Table 1, we first see that ϕ_m(h) improves the stability of h by at least ×10 in both tasks. Even under the same dimension, ϕ_m(h) outperforms h, and is highly comparable to avg(h) – the manually stabilized version of h obtained by averaging it over ten independent runs. Note that avg(h) is almost never a good practical solution, because it requires repeating the same pretraining process multiple times; here, we use it as an analytical baseline and show that ϕ_m(h) is just as good. When the dimension increases, ϕ_m(h) delivers much superior results. Changing the dimension also changes the downstream model complexity, but as we discuss below, it offers more flexibility for real-world problems.

On Ecom. The item representation learning pipeline is used by several downstream productions: item-page recommendation (Task1), search ranking (Task2), email recommendation (Task3), and home-page marketing (Task4). They all have task-specific features and non-trivial, distinct model architectures. The refresh of the pretrained item embeddings is done on a daily basis, and downstream model owners may have separate schedules to update and refresh the relevant parts of their models. In Appendix F.4, we describe our engineering solutions for deploying the featurization process on the frontend and backend. During A/B testing, we observe performance lifts (in terms of click-through rate) that are statistically significant for all four downstream applications. The average revenue-per-visitor lift is also positive during the testing period. The detailed online results and analysis are provided in Appendix F.

Lessons learnt. In addition to the improved stability and performance, an important piece of feedback we received from downstream model owners is that the flexibility in choosing ϕ_m(h)'s dimension is very helpful for their tasks. Prior to our featurization technique, it was almost impossible to personalize the dimension of the pretrained representation for different applications, let alone tune it in downstream tasks. Now, knowing that the predictability will not vary much, experimenting with different dimensions often allows them to find a better bias-variance tradeoff for downstream tasks.

7 DISCUSSION

The analytical results and the proposed featurization method in our work can apply to a broad range of applications and research problems. Nevertheless, our results may still be rudimentary and far from providing the complete picture or the optimal practice for using pretrained representations. We hope the progress we made will lead to more advanced future research and applications.

Scope and limitation. Most of our analysis is performed in basic settings: while this ensures the results hold in generality, advanced methods for pretraining representations are not considered. Also, we do not include additional downstream features and their correlation with pretrained representations, or connections between the pretraining objective and the downstream task. Such additional knowledge can be useful for deriving task-specific results (Arora et al., 2019).
For applications, our featurization technique may be less helpful if the downstream task simply uses embedding distances, as in KNN search. Optimizing the space and time complexity by, for example, embedding quantization might be more useful for such tasks (Chen et al., 2020), which is not discussed in our paper.

A future direction. While our work studies h(x) as a whole, it can be inferred from Figure 1c that the element-wise variance of ĥ(x) is bimodal, which suggests heterogeneity within h(x). Possible explanations are that a (random) subset of h(x) is responsible for overfitting the pretraining task (Bartlett et al., 2020), or that some dimensions are forced to become more independent of others so that the representation matrix has nice spectral properties (Hastie et al., 2022). It is thus an important future direction to identify the internal structure of h(x) in order to better featurize pretrained representations.

A TECHNICAL BACKGROUND

In this part of the paper we provide the technical background for both the discussions in the paper and the following proofs. A central object for proving uniform convergence results is the Gaussian / Rademacher complexity. For a set A ⊂ R^n, it is defined as:

$$ G(A) := E_\epsilon\Big[ \sup_{a\in A} \sum_{i=1}^n \epsilon_i a_i \Big], $$

where the ϵ_i are i.i.d. Gaussian / Rademacher random variables. It essentially measures how well a function class can interpolate a random sign pattern assigned to a set of points. Given a function class F and n samples (x_1, . . . , x_n), the empirical Gaussian / Rademacher complexity is given by:

$$ G_n(\mathcal F) := E_\epsilon\Big[ \sup_{f\in\mathcal F} \sum_{i=1}^n \epsilon_i f(x_i) \Big]. $$

Remark A.1. We mention that in some versions of the definition, there is a 1/n factor in the complexity term. Here, we explicitly pull that factor out and place it in the resulting bound.

As we mentioned earlier, an important reason for our using Gaussian complexity is one of its technical properties, namely Slepian's Lemma (Slepian, 1962) and its corollary, which we state below:

Lemma A.1 (From Slepian's Lemma). Suppose ϕ : A → R^q has Lipschitz constant L. Then it holds that: G(ϕ(A)) ≤ L · G(A).

This result can be viewed as the contraction lemma for Gaussian complexity (Ledoux & Talagrand, 1991).

A.1 INDUCTIVE BIAS OF GRADIENT DESCENT

Our introduction primarily follows Soudry et al. (2018); Ji & Telgarsky (2019); Gunasekar et al. (2018) and their follow-up works. The key factor that contributes to the implicit bias of gradient descent is the divergence of the model parameters after separating the data under loss functions that have exponential-tail behavior. When the predictor f ∈ F parameterized by θ is over-parameterized, other than in certain degenerate cases, the data can be separated at some point if the predictor class satisfies some regularity assumptions (Lyu & Li, 2019), e.g.

• f ∈ F is homogeneous, such that f(x; c · θ) = c^β f(x; θ), ∀c > 0;
• f ∈ F is smooth and has bounded Lipschitz constant.

These conditions can be met by many neural network structures and activation functions. The exponential tail of the loss function is satisfied by the common exponential loss and logistic loss (which we use throughout our discussions and experiments). To see why the norm of the model parameters diverges, simply note that under, say, the exponential loss, both the risk and the gradient take the form Σ_i c_i exp(−y_i f(x_i; θ)), where the c_i are lower-order terms. Since gradient descent will converge to a stationary point due to the nice properties of F, we expect Σ_i c_i exp(−y_i f(x_i; θ)) = 0 to hold upon convergence.
A necessary condition for this is exp(−y_i f(x_i; θ)) = 0, i = 1, . . . , n, and this condition is actually sufficient with high probability (Soudry et al., 2018). Therefore, for all exp(−y_i f(x_i; θ)) to reach 0, ‖θ‖ must diverge so that |f(·; θ)| → ∞. With that said, since the loss function decays exponentially fast, the data points with the smallest margin dominate both the gradient and the loss function. As a direct consequence, the decision boundary shares characteristics with the hard-margin SVM, given by:

$$ \min \|\theta\|_2 \quad \text{s.t.} \quad y_i f(x_i; \theta) \ge 1, \;\; \forall i = 1, \ldots, n. $$

Indeed, recent work shows that the optimization path of over-parameterized models converges to some minimum-norm predictor:

Corollary A.1 (Chizat et al. (2019); Woodworth et al. (2020), and others). Under the conditions specified in the referenced work, which are mostly exponential loss, scaled initialization, an appropriate learning rate, and regularity conditions on the predictor class, it holds that:

$$ \lim_{t\to\infty} \lim_{d\to\infty} F\big( \theta^{(t)} / \|\theta^{(t)}\| \big) \;\xrightarrow{\text{stationary points of}}\; \big\{ \operatorname{argmin} \|f\|_K \;\; \text{s.t.} \;\; y_i f(x_i) \ge 1, \;\forall i \in [n] \big\}, $$

where F is the decision boundary of f, d is the dimension of the hidden layer(s) of f, and ‖·‖_K is an appropriate RKHS norm.

Note that in Section 4.2 we use g0 to denote the converged result, and the above corollary guarantees its existence and uniqueness. One open question, however, is which particular RKHS norm best describes the solution, because this particularly affects the convergence of the parameters. Therefore, in our work, we leave the convergence of the parameters out of our discussion.

Remark A.2. It is also worth mentioning that the convergence of E[h(t)(x)] plays no part in our arguments and results. Indeed, it does not change the stochasticity of h(t)(x), and (in some cases) can be implied from the convergence of g(t)(x) (Lyu & Li, 2019). Therefore, we do not discuss it in our work.

B PROOF OF THE RESULTS IN SECTION 4.1

We prove Proposition 1 in this part of the appendix. An important result we will be using is the Gaussian complexity bound for empirical risk minimization, in the version of Bartlett & Mendelson (2002).

Lemma A.2. Let F be a real-valued function class from X to [0, 1], and let (X_1, . . . , X_n) be i.i.d. random variables. Then for all f ∈ F, it holds with probability at least 1 − δ that:

$$ E[f(X)] \le \frac{1}{n}\sum_i f(X_i) + \frac{\sqrt{2\pi}\, G_n(\mathcal F)}{n} + \sqrt{\frac{9\log 2/\delta}{2n}}. $$

We now provide the proof, part of which uses Lemma A.1 and Lemma A.2. We also assume F has a Lipschitz constant of at most L.

Proof. Recall that h*, f* := argmin_{h∈H,f∈F} R(h, f). We decompose the generalization error via:

$$ R(\hat h) - \min_{h\in\mathcal H,\, f\in\mathcal F} R(h, f) = \Big( R(\hat h) - \min_{f\in\mathcal F} \frac{1}{n}\sum_i \ell\big( f \circ \hat h(X_i), Y_i \big) \Big) + \Big( \min_{f\in\mathcal F} \frac{1}{n}\sum_i \ell\big( f \circ \hat h(X_i), Y_i \big) - \min_{f\in\mathcal F} \frac{1}{n}\sum_i \ell\big( f \circ h^*(X_i), Y_i \big) \Big) $$
$$ + \Big( \min_{f\in\mathcal F} \frac{1}{n}\sum_i \ell\big( f \circ h^*(X_i), Y_i \big) - E_{P^n}\Big[ \min_{f\in\mathcal F} \frac{1}{n}\sum_i \ell\big( f \circ h^*(X_i), Y_i \big) \Big] \Big) + \Big( E_{P^n}\Big[ \min_{f\in\mathcal F} \frac{1}{n}\sum_i \ell\big( f \circ h^*(X_i), Y_i \big) \Big] - \min_{f\in\mathcal F} E_{(X,Y)\sim P}\, \ell\big( f \circ h^*(X), Y \big) \Big). \tag{A.1} $$

We first discuss the first term, which incurred a major discussion in Section 4.1.
By standard practice, the first term can be bounded via:

$$ R(\hat h) - \min_{f\in\mathcal F} \frac{1}{n}\sum_i \ell\big( f \circ \hat h(X_i), Y_i \big) \le \sup_{h\in\mathcal H} \Big\{ R(h) - \min_{f\in\mathcal F} \frac{1}{n}\sum_i \ell\big( f \circ h(X_i), Y_i \big) \Big\} $$
$$ \le \underbrace{\sup_{h\in\mathcal H} E_{P^n}\Big[ E_{(X,Y)\sim P}\big[ \ell\big( f_{h,n} \circ h(X), Y \big) \big] - R_n(h) \Big]}_{(a)} + \underbrace{\sup_{h\in\mathcal H} \Big\{ E_{P^n}\Big[ \frac{1}{n}\sum_i \ell\big( f_{h,n} \circ h(X_i), Y_i \big) \Big] - \frac{1}{n}\sum_i \ell\big( f_{h,n} \circ h(X_i), Y_i \big) \Big\}}_{(b)}. $$

Using Lemma A.2, term (b) can be bounded as:

$$ \sup_{h\in\mathcal H} \Big\{ E_{P^n}\Big[ \frac{1}{n}\sum_i \ell\big( f_{h,n} \circ h(X_i), Y_i \big) \Big] - \frac{1}{n}\sum_i \ell\big( f_{h,n} \circ h(X_i), Y_i \big) \Big\} \le \sqrt{2\pi}\, G_n(A(\mathcal H)) + \sqrt{9 \log 2/\delta}, $$

where the set A(H) is given by:

$$ A(\mathcal H) := \Big\{ \Big( \tfrac{1}{n} \ell\big( f_{h,n} \circ h(X_1), Y_1 \big), \ldots, \tfrac{1}{n} \ell\big( f_{h,n} \circ h(X_n), Y_n \big) \Big) : h \in \mathcal H \Big\}. $$

It is easy to check that A(H) invokes Slepian's lemma, so we can use the contraction result from Lemma A.1 to further bound it: G_n(A(H)) ≤ (L/√n) G_n(H). Combined, term (b) is upper bounded by: √(2π) L G_n(H)/√n + √(9 log 2/δ).

Now we bound term (a) as below. Define the shorthand ℓ(F(h)) := {(ℓ(f(h(X_1)), Y_1), . . . , ℓ(f(h(X_n)), Y_n)) : f ∈ F}. It holds that:

$$ \sup_{h\in\mathcal H} E_{P^n}\Big[ E_{(X,Y)\sim P}\big[ \ell\big( f_{h,n} \circ h(X), Y \big) \big] - R_n(h) \Big] \le \sup_{h\in\mathcal H} E_{P^n} \sup_{f\in\mathcal F} \Big\{ E_{(X,Y)\sim P}\, \ell\big( f \circ h(X), Y \big) - \frac{1}{n}\sum_i \ell\big( f \circ h(X_i), Y_i \big) \Big\} $$
$$ \le \sqrt{2\pi}\, \sup_{h\in\mathcal H} E_{P^n} \frac{G_n(\ell(\mathcal F(h)))}{n} \quad \text{(using Lemmas A.2 and A.1)} $$
$$ = \sqrt{2\pi}\, n^{-1} \sup_{h\in\mathcal H} E_{P^n} \frac{G_n(\ell(\mathcal F(h)))}{\|h(\mathbf X)\|}\, \|h(\mathbf X)\| \quad \big(\text{where } h(\mathbf X) := [h(X_1), \ldots, h(X_n)]\big) $$
$$ \le \sqrt{2\pi}\, n^{-1} \sup_{h\in\mathcal H} \sqrt{E\|h(\mathbf X)\|^2} \cdot \sup_{\mathbf A \in \mathbb R^{n\times d}} \frac{1}{\|\mathbf A\|}\, E \sup_{f\in\mathcal F} \sum_i \epsilon_i f([\mathbf A]_i), \qquad \epsilon_i \overset{i.i.d.}{\sim} N(0,1). \tag{A.2} $$

We let G′_n(F) := sup_{A∈R^{n×d}} (1/‖A‖) E sup_{f∈F} Σ_i ϵ_i f([A]_i) be the modified Gaussian complexity, so term (a) is finally bounded by (√(2π)/n) G′_n(F) sup_{h∈H} √(E‖h(X)‖²).

Next, notice for the last term that:

$$ E_{P^n}\Big[ \min_{f\in\mathcal F} \frac{1}{n}\sum_i \ell\big( f \circ h^*(X_i), Y_i \big) \Big] \le E_{P^n} \frac{1}{n}\sum_i \ell\big( f^* \circ h^*(X_i), Y_i \big) = E_{(X,Y)\sim P}\, \ell\big( f^* \circ h^*(X), Y \big). $$

Therefore, the last term is always non-positive. Similarly, by definition, the second term is non-positive as well. Finally, as for the third term, since non-concentrating terms already appear in the bound of the first term, it does no harm to simply bound it using Hoeffding's inequality, i.e. it will not exceed O(√(log 1/δ)) with probability at least 1 − δ. Putting things together, we conclude the final result.

C TECHNICAL DETAILS FOR SECTION 4.2

We first restate the proposition:

Proposition A.1. For the one-layer MLP we consider, with learning rate α = 1, for any step t > 1, as d → ∞, the model output g(t)(x) converges almost surely to g_*^{(t)}(x) defined as follows:

$$ g_*^{(t)}(x) = \big( C_1^{(t)} C_2^{(t)} + C_3^{(t)} C_4^{(t)} \big)\, x, \quad \text{with} \quad \big( C_1^{(t+1)}, C_2^{(t+1)}, C_3^{(t+1)}, C_4^{(t+1)} \big) = \big( C_1^{(t)}, C_2^{(t)}, C_3^{(t)}, C_4^{(t)} \big) + L^{(t)} x^{(t)} \big( C_3^{(t)}, C_4^{(t)}, C_1^{(t)}, C_2^{(t)} \big). $$

The above iterative update result can be shown by making explicit the terms following the forward and backward passes of the tth gradient step. In particular, it holds that:

$$ g^{(t)}(x) \xrightarrow{a.s.} E\big[\Theta^{(t)} \sigma( W^{(t)} x )\big] \;\big(\overset{def}{=} g_*^{(t)}(x)\big), \qquad \ell'\big( g^{(t)}(x^{(t)}), y^{(t)} \big) \xrightarrow{a.s.} \ell'\big( g_*^{(t)}(x^{(t)}), y^{(t)} \big) \;\big(\overset{def}{=} L^{(t)}\big), $$
$$ \tilde\Theta^{(t+1)} = \tilde\Theta^{(t)} - L^{(t)} \sigma\big( \tilde{\mathbf W}^{(t)} x^{(t)} \big), \qquad \tilde{\mathbf W}^{(t+1)} = \tilde{\mathbf W}^{(t)} - L^{(t)} x^{(t)} \tilde\Theta^{(t)} \sigma'\big( \tilde{\mathbf W}^{(t)} x^{(t)} \big). $$

The only extra requirement for the above convergence to hold is that the activation function is well-behaved (see Yang (2019) for a detailed description). To see how the above system of equations leads to the result in Proposition A.1, imagine the activation is the identity function. In this case, Θ̃(t) and W̃(t) are always deterministic linear combinations of Θ̃(0) and W̃(0), and the update takes the form:

$$ \tilde\Theta^{(t)} = C_1 \tilde\Theta^{(0)} + C_2 \tilde{\mathbf W}^{(0)}, \qquad \tilde{\mathbf W}^{(t)} = C_3 \tilde\Theta^{(0)} + C_4 \tilde{\mathbf W}^{(0)}. $$
We mention that, as a corollary, $\tilde{\mathbf{W}}^{(t+1)}x$ is also element-wise i.i.d., so the inner product of the hidden representations satisfies $\big\langle \mathbf{W}^{(t+1)}x,\, \mathbf{W}^{(t+1)}x' \big\rangle \xrightarrow{a.s.} \mathbb{E}\big[ W^{(t+1)}x \cdot W^{(t+1)}x' \big]$, where $W^{(t+1)}$ is an i.i.d. row of $\tilde{\mathbf{W}}^{(t+1)}$, the rescaled version of $\mathbf{W}^{(t+1)}$.

D PROOFS OF THE RESULTS IN SECTION 4.3

Proof for Proposition 3

Proof. Proofs of lower bounds often start by converting the problem to a hypothesis testing task. Denote our parameter space by $B(k) = \{\theta \in \mathbb{R}^d : \|\theta\|_0 \le k\}$. The intuition is to suppose the data is generated by: 1. drawing $\theta$ according to a uniform distribution on the parameter space; 2. conditioned on the particular $\theta$, drawing the observed data. Then the problem is converted to determining from the data whether we can recover the underlying $\theta$, as a canonical hypothesis testing problem. For any $\delta$-packing $\{\theta_1, \dots, \theta_M\}$ of $B(k)$, suppose $B$ is sampled uniformly from the $\delta$-packing. Then following a standard argument of the Fano method (Wainwright, 2019), it holds that:
$$P\Big( \min_{\hat\theta} \sup_{\|\theta^*\|_0 \le k} \|\hat\theta - \theta^*\|_2 \ge \delta/2 \Big) \ge \min_{\tilde\theta} P\big( \tilde\theta \ne B \big), \tag{A.3}$$
where $\tilde\theta$ is a testing function that decides from the data whether some estimated $\theta$ equals an element sampled from the $\delta$-packing. The next step is to bound $\min_{\tilde\theta} P(\tilde\theta \ne B)$, where by the information-theoretic lower bound (Fano's Lemma) we have:
$$\min_{\tilde\theta} P\big( \tilde\theta \ne B \big) \ge 1 - \frac{I(y, B) + \log 2}{\log M}, \tag{A.4}$$
where $I(\cdot,\cdot)$ denotes the mutual information. Then we only need to bound the mutual information term. Let $P_\theta$ be the distribution of $y$ (the vector consisting of the $n$ samples) given $B = \theta$. Since $y$ is distributed according to the mixture $\frac{1}{M}\sum_i P_{\theta_i}$, it holds that:
$$I(y, B) = \frac{1}{M}\sum_i D_{KL}\Big( P_{\theta_i} \,\Big\|\, \frac{1}{M}\sum_j P_{\theta_j} \Big) \le \frac{1}{M^2}\sum_{i,j} D_{KL}\big( P_{\theta_i} \,\|\, P_{\theta_j} \big),$$
where $D_{KL}$ is the Kullback-Leibler divergence. The next step is to determine $M$, the size of the $\delta$-packing, and to upper bound $D_{KL}(P_{\theta_i} \| P_{\theta_j})$ for elements $P_{\theta_i}, P_{\theta_j}$ of the $\delta$-packing. For the first part, it has been shown that there exists a $1/2$-packing of $B(k)$ in $\ell_2$-norm with $\log M \ge \frac{k}{2}\log\frac{d-k}{k/2}$ (Raskutti et al., 2011). As for the bound on the KL-divergence term, note that given $\theta$, $P_\theta$ is a product distribution of the conditional Gaussian:
$$y \,\big|\, \boldsymbol{\epsilon} \sim N\Big( \theta^\intercal \boldsymbol{\epsilon}\, \frac{\sigma_h^2}{\bar\sigma_\epsilon^2},\;\; \theta^\intercal\theta \big(\sigma_z^2 - \sigma_z^4/\bar\sigma_\epsilon^2\big) \Big), \quad \text{where } \bar\sigma_\epsilon^2 := \sigma_h^2 + \sigma_\epsilon^2.$$
Henceforth, for any $\theta_1, \theta_2 \in B(k)$, it is easy to compute that:
$$D_{KL}(P_{\theta_1} \| P_{\theta_2}) = \mathbb{E}_{P_{\theta_1}}\Bigg[ \frac{n}{2}\log\Big( \frac{\theta_1^\intercal\theta_1 (\sigma_z^2 - \sigma_z^4/\bar\sigma_\epsilon^2)}{\theta_2^\intercal\theta_2 (\sigma_z^2 - \sigma_z^4/\bar\sigma_\epsilon^2)} \Big) + \frac{\big\| y - \theta_2^\intercal\boldsymbol{\epsilon}\,\sigma_h^2/\bar\sigma_\epsilon^2 \big\|_2^2}{2\,\theta_2^\intercal\theta_2 (\sigma_z^2 - \sigma_z^4/\bar\sigma_\epsilon^2)} - \frac{\big\| y - \theta_1^\intercal\boldsymbol{\epsilon}\,\sigma_h^2/\bar\sigma_\epsilon^2 \big\|_2^2}{2\,\theta_1^\intercal\theta_1 (\sigma_z^2 - \sigma_z^4/\bar\sigma_\epsilon^2)} \Bigg] = \frac{\sigma_z^2}{2\bar\sigma_\epsilon^2}\, \|\boldsymbol{\epsilon}(\theta_1 - \theta_2)\|_2^2,$$
where $y$ and $\boldsymbol{\epsilon}$ are the vector and matrix consisting of the $n$ samples, i.e. $y \in \mathbb{R}^n$ and $\boldsymbol{\epsilon} \in \mathbb{R}^{n\times d}$. Since each row of the matrix $\boldsymbol{\epsilon}$ is drawn from $N(0, \bar\sigma_\epsilon^2 I_{d\times d})$, a standard concentration result shows that with high probability, $\|\boldsymbol{\epsilon}(\theta_1 - \theta_2)\|_2^2$ can be bounded by $C\|\theta_1 - \theta_2\|_2^2$ for some constant $C$. This gives the final upper bound on the KL divergence term:
$$D_{KL}(P_{\theta_1} \| P_{\theta_2}) \lesssim \frac{n \sigma_z^2 \delta^2}{2 \bar\sigma_\epsilon^2}.$$
Substituting this result into (A.4) and (A.3), choosing $\delta^2 = C \frac{k \bar\sigma_\epsilon^2}{\sigma_z^2 n}\log\frac{d-k}{k/2}$ and rearranging terms, we obtain the desired result that with probability at least $1/2$:
$$\inf_{\hat\theta} \sup_{\theta^*: \|\theta^*\|_0 \le k} \|\hat\theta - \theta^*\|_2 \gtrsim \frac{\sigma_\epsilon^2}{\sigma_h^2}\, k n^{-1} \log(d/k).$$

Proof of Lemma 1

Proof. We first express the NW predictor in its expectation form:
$$f_\phi(X) = \frac{\mathbb{E}_{X'}\big[ y' K(X, X') \big]}{Z},$$
where $Z$ is the normalization constant. Recall that $y \in \{-1, +1\}$ and $R(\cdot)$ is the risk associated with the 0-1 classification loss. We first define, for $x \in \mathcal{X}$:
$$\gamma_\phi(X) := \frac{\sqrt{\mathbb{E}_{X'}\big[ K(X, X') \big]}}{Z},$$
where the expectation is taken w.r.t. the underlying distribution.
Using the Markov inequality, we immediately have $|\gamma(X)| \le \frac{1}{\sqrt\delta}$ with probability at least $1-\delta$. It then holds, with probability $1-\delta$, that:
$$1 - R(f) = P\big( Y f(X) \ge 0 \big) \ge \mathbb{E}\Big[ \frac{Y f(X)}{\gamma(X)} \cdot \mathbb{1}[Y f(X) \ge 0] \Big] \ge \mathbb{E}\Big[ \frac{Y f(X)}{\gamma(X)} \Big] \ge \sqrt\delta\, \frac{\mathbb{E}\big[ \mathbb{1}[Y = Y'] K(X, X') \big]}{Z},$$
which concludes the proof.

E PROOF OF THE RESULT IN SECTION 5

The proof of Proposition 4 relies on two important results, which we state below.

Lemma A.3 (Ben-Tal et al. (2013)). Let $c$ be any closed convex function with domain $[0, +\infty)$, whose conjugate is given by $c^*(s) = \sup_{t\ge 0}\{ts - c(t)\}$. Then for any distribution $Q^*$ and any function $g: \mathbb{R}^d \to \mathbb{R}$, it holds:
$$\sup_{Q \in \mathcal{Q}(Q^*; \delta)} \int g(u)\, dQ(u) = \inf_{\lambda \ge 0,\, \eta} \Big\{ \lambda \int c^*\Big( \frac{g(u) - \eta}{\lambda} \Big) dQ^*(u) + \delta\lambda + \eta \Big\}. \tag{A.5}$$

The next lemma is adapted from the concentration of random Fourier features in Rahimi & Recht (2007). Recall that $\phi_m\big(h(x), Q\big) := 1/\sqrt m\, \big[ \cos(u_1 h(x)), \sin(u_1 h(x)), \dots, \cos(u_m h(x)), \sin(u_m h(x)) \big]$ comes from the Monte Carlo approximation of $K(h(x), h(x'))$.

Lemma A.4. Let $\mathcal{A} \subset \mathbb{R}^d$ have diameter $d_{\mathcal{A}}$ such that $h(x) \in \mathcal{A}$ for all $x \in \mathcal{X}$. It holds that:
$$\Pr\Big( \sup_{h(x), h(x')} \big| K(h(x), h(x')) - \langle \phi_m(h(x), Q), \phi_m(h(x'), Q) \rangle \big| \ge \epsilon \Big) \le 2^8 \Big( \frac{\sigma_Q d_{\mathcal{A}}}{\epsilon} \Big) \exp\Big( -\frac{m\epsilon^2}{4(d+2)} \Big), \tag{A.6}$$
where $Q$ is given by the inverse Fourier transform of $K$, and $\sigma_Q$ is the second moment of $Q$.

Recall that $A_n(Q) := \frac{1}{n(n-1)} \sum_{i\ne j} \mathbb{1}[y_i = y_j]\, \langle \phi_m(h(x_i), Q), \phi_m(h(x_j), Q) \rangle$. For notational convenience, in what follows we let $h_i := h(x_i)$, and further define $\tilde\phi(h, U) := [\cos(U^T h), \sin(U^T h)]$ as the actual random Fourier feature underlying $\phi_m(h, Q)$, where $U \sim Q$. Also, we let $K(Y, Y') := \mathbb{1}[Y = Y']$ be the labelling kernel of the downstream task.

Proof. Following Lemma A.3, we work with a scaled version of the f-divergence under $c(t) = \frac{1}{k}(t^k - 1)$ (because its dual function has a cleaner form). It is easy to check that $c^*(s) = \frac{1}{k'}[s]_+^{k'} + \frac{1}{k}$ with $\frac{1}{k'} + \frac{1}{k} = 1$. First note that the sampling error of the alignment $\mathbb{E}\big[ K(Y_i, Y_j) K_Q(H_i, H_j) \big]$, i.e. replacing the expectation by the sample average, can be written as:
$$\Delta_n(U) := \frac{1}{n(n-1)} \sum_{i\ne j} K(y_i, y_j)\, \tilde\phi(h_i, U)^T \tilde\phi(h_j, U) - \mathbb{E}\big[ K(Y_i, Y_j) K_Q(H_i, H_j) \big] = \frac{1}{n(n-1)} \sum_{i\ne j} K(y_i, y_j)\, \tilde\phi(h_i, U)^T \tilde\phi(h_j, U) - \mathbb{E}\big[ K(Y_i, Y_j)\, \tilde\phi(H_i, U)^T \tilde\phi(H_j, U) \big].$$
We show that $\Delta_n(U)$ is sub-Gaussian. Let $\{h'_i, y'_i\}_{i=1}^n$ be an i.i.d. copy of the observations, except for one element such that $(h_j, y_j) \ne (h'_j, y'_j)$. Without loss of generality, we assume the last element is different: $(h_n, y_n) \ne (h'_n, y'_n)$. Let $\Delta'_n(U)$ be computed by replacing $\{h_i, y_i\}_{i=1}^n$ with $\{h'_i, y'_i\}_{i=1}^n$; their difference can be bounded via:
$$\begin{aligned} |\Delta_n(U) - \Delta'_n(U)| &= \frac{1}{n(n-1)} \Big| \sum_{i\ne j} K(y_i, y_j)\, \tilde\phi(h_i, U)^T \tilde\phi(h_j, U) - K(y'_i, y'_j)\, \tilde\phi(h'_i, U)^T \tilde\phi(h'_j, U) \Big| \\ &\le \frac{1}{n(n-1)} \Big( \sum_{i<n} \big| K(y_i, y_n)\, \tilde\phi(h_i, U)^T \tilde\phi(h_n, U) - K(y_i, y'_n)\, \tilde\phi(h_i, U)^T \tilde\phi(h'_n, U) \big| \\ &\qquad + \sum_{j<n} \big| K(y_n, y_j)\, \tilde\phi(h_n, U)^T \tilde\phi(h_j, U) - K(y'_n, y_j)\, \tilde\phi(h'_n, U)^T \tilde\phi(h_j, U) \big| \Big) \le \frac{4}{n}, \end{aligned}$$
where the last inequality comes from the fact that the random Fourier features $\tilde\phi$ and the labelling kernel $K(y, y')$ are both bounded by 1. Therefore, the above bounded-difference result tells us that $\Delta_n(U)$ is a $\frac{4}{n}$-sub-Gaussian random variable. To bound $\Delta_n(U)$, we use:
$$\begin{aligned} \sup_{Q \in \mathcal{Q}(Q^*; \delta)} \Big| \int \Delta_n(U)\, dQ \Big| &\le \sup_{Q \in \mathcal{Q}(Q^*; \delta)} \int |\Delta_n(U)|\, dQ \\ &\le \inf_{\lambda \ge 0} \Big\{ \frac{\lambda^{1-k'}}{k'}\, \mathbb{E}_{Q^*}\big[ |\Delta_n(U)|^{k'} \big] + \frac{\lambda(\delta + 1)}{k} \Big\} \quad \text{(using Lemma A.3)} \\ &= (\delta + 1)^{1/k}\, \mathbb{E}_{Q^*}\big[ |\Delta_n(U)|^{k'} \big]^{1/k'} \quad \text{(solving for } \lambda^* \text{ from above)} \\ &= \sqrt{\delta + 1}\, \mathbb{E}_{Q^*}\big[ |\Delta_n(U)|^2 \big]^{1/2} \quad \text{(letting } k = k' = 2\text{)}. \end{aligned} \tag{A.7}$$
It means that in order to bound $\sup_{Q \in \mathcal{Q}(Q^*;\delta)} \big| \int \Delta_n(U)\, dQ \big|$, we only need to bound $|\Delta_n(U)|^2$.
Using classical results for sub-Gaussian random variables (Boucheron et al., 2013), it holds for $\lambda \le n/8$ that:
$$\mathbb{E}\big[ \exp\big( \lambda \Delta_n(U)^2 \big) \big] \le \exp\Big( -\frac{1}{2}\log(1 - 8\lambda/n) \Big).$$
We can take its integral and further upper bound the result with:
$$\begin{aligned} p\Big( \int \Delta_n(U)^2\, dQ \ge \frac{\epsilon^2}{\delta + 1} \Big) &\le \mathbb{E}\Big[ \exp\Big( \lambda \int \Delta_n(U)^2\, dQ \Big) \Big] \exp\Big( -\frac{\lambda\epsilon^2}{\delta + 1} \Big) \quad \text{(Chernoff bound)} \\ &\le \exp\Big( -\frac{1}{2}\log\Big( 1 - \frac{8\lambda}{n} \Big) - \frac{\lambda\epsilon^2}{\delta + 1} \Big) \quad \text{(applying Jensen's inequality)}. \end{aligned}$$
Hence, it holds that:
$$\Pr\Big( \sup_{Q \in \mathcal{Q}(Q^*;\delta)} \Delta_n(U) \ge \epsilon \Big) \le \exp\Big( -\frac{n\epsilon^2}{16(1 + \delta)} \Big).$$
Combining this result with the approximation error of random Fourier features in Lemma A.4, we obtain the desired result.

F SUPPLEMENTARY MATERIAL FOR THE EXPERIMENTS

We provide the descriptions, details, and additional results of our experiments in this part of the appendix.

F.1 REPLICATING THE INSTABILITY ISSUE WITH THE IMDB DATASET

The IMDB dataset is a binary sentiment analysis dataset consisting of 50,000 reviews from the Internet Movie Database (IMDb) labeled as positive or negative⁴. We consider this dataset for an additional proof of concept in particular because it appears in the official Tensorflow tutorial⁵. We directly adopt the implementation from the tutorial, including the text preprocessing pipeline and model architecture. In particular, the raw input texts are passed to a text vectorization layer, an embedding layer, a bidirectional RNN layer, and finally two dense layers to produce the final score for binary classification. We extract the output of the last hidden layer as the hidden representation. In our experiments, we set the number of hidden dimensions to 32. The results are provided in Figure A.1, where we observe patterns highly similar to the ML-1m data. In particular, the pretrained embeddings have high variances in their exact values even though their pretraining objectives converge to similar loss and accuracy, and the variances get larger as the pretraining progresses. Two minor differences from the ML-1m results are that the pretraining process is less stable for IMDB (Figure A.1b), and that the variance distribution here is unimodal instead of the bimodal distribution we observed in Figure 1c.

F.2 DETAILS OF THE BENCHMARK EXPERIMENTS

The main benchmark experiments in our paper are conducted on the Movielens-1m⁶ dataset, which is a well-established public dataset for movie and user contextual analysis and for examining recommendation. The ML-1m dataset consists of 1 million movie ratings from 6,000 users on 4,000 movies, on a one-to-five rating scale. According to Harper & Konstan (2015), the data is collected in initial and follow-up stages, where the initial stage mainly involves popularity-based exposure (a very small proportion involves random exposure), while in the follow-up stage, rating feedback is collected under some deterministic recommender systems. By convention, we convert the dataset to implicit feedback, which amounts to treating all rating events as clicks. For contextual information, each movie is provided with its title and genre, in the form of English words or sentences. There are 18 genres in total.

Pretraining movie embeddings from user behavior data

4 https://www.imdb.com/interfaces/
5 https://www.tensorflow.org/text/tutorials/text_classification_rnn
6 https://grouplens.org/datasets/movielens/1m/

We use Item2vec (Barkan & Koenigstein, 2016) to train movie embeddings from users' consecutive viewing data.
Item2vec uses the same objective function as Word2vec (Mikolov et al., 2013), where the words become movies and the corpus becomes each user's viewing sequence. Movies belonging to a consecutive viewing window of #ws are treated as positive pairs, and for each positive pair, we randomly sample #ns negative movies. Together with the embedding dimension $d$ and the $\ell_2$-regularization parameter (weight decay) $\lambda$, the training schema is described by the quadruplet (#ws, #ns, $d$, $\lambda$). Since our goal is not to find the best pretraining schema, we fix #ws=3 and #ns=3, and focus on studying how our results may change under different $d$.

Pretraining movie embeddings from movie contextual data

Since the movie titles and other contextual information can be relatively short, large NLP models may not be appropriate. Therefore, we use the Doc2vec model (Dai et al., 2015) to pretrain the movie embeddings. Since Doc2vec is built on top of Word2vec, the training schema can also be described by the quadruplet (#ws, #ns, $d$, $\lambda$), so we also fix #ws=3 and #ns=3.

Using pretrained movie embeddings for downstream genre prediction

Given pretrained movie embeddings $\hat h(x)$, we employ logistic regression to predict the score for the movie belonging to a particular genre, i.e. $p(Y_i = k) \propto \exp(\theta_k \hat h(x))$. Due to its simplicity, we use the logistic regression subroutine from the scikit-learn package.

Using pretrained movie embeddings for downstream sequential recommendation

We employ a two-tower model structure (Figure A.2) for the downstream sequential recommendation, which is very common in the recommendation community. In particular, we use an RNN to aggregate the past interaction sequence, so the whole model is very similar to GRU4Rec (Jannach & Ludewig, 2017). We use the sigmoid function as the activation function for the dense layers. The model training can be done in a seq-to-seq fashion, where for each positive target, we randomly sample 3 negative targets. We fix the hidden dimension of both the RNN and the dense layers to 16.

Model Training

Besides Doc2vec and the logistic regression, all of our models are optimized using the Adam optimizer with early stopping, which stops the training if the improvement in the loss is less than 1e-4 for three consecutive epochs. For all the experiments, we set the initial learning rate to 0.001, and set the weight decay to 1e-4. Our main implementation is in Tensorflow, and all the computations are conducted on a 16-core Linux cluster with 128 GB memory and two Nvidia Tesla V100 GPUs, each with 16 GB memory. We use the Doc2vec subroutine from the Gensim package⁷ to pretrain the movie embeddings for ML-1m task 2.

Train/test split and metrics

Since the goal of our experiments is not to find the best modelling and training configuration, we do not use a validation set to tune the hyperparameters. Instead, we provide sensitivity analysis on certain parameters of interest in Appendix F.3. For ML-1m task 1, we randomly split the movies 80%-20% to construct the training and testing sets for genre classification. For evaluation, we use accuracy and the Macro F1 score as metrics. For ML-1m task 2, we follow the convention of using the last user-movie interaction for testing, and use all previous interactions for training. For evaluation, we use Recall@5, i.e. whether the movie that the user truly viewed is among the top-5 recommendations, and NDCG@5, which further discounts the position of the viewed movie in the top-5 recommendations.
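For reference, the skeleton of the ML-1m task 1 pipeline can be sketched in a few lines. The snippet below is illustrative rather than our exact implementation: `user_sequences` and `genre_of` are hypothetical placeholders for the preprocessed viewing sequences and genre labels, #ws=3 and #ns=3 follow the schema above, and since Gensim's Word2Vec does not expose an explicit $\ell_2$ term, $\lambda$ is omitted here.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

user_sequences = [["m1", "m2", "m3"], ["m2", "m4", "m1"]]   # placeholder viewing data
genre_of = {"m1": 0, "m2": 1, "m3": 0, "m4": 2}             # placeholder genre labels

# Item2vec-style pretraining: skip-gram with negative sampling over the sequences.
w2v = Word2Vec(sentences=user_sequences, vector_size=64, window=3,
               negative=3, sg=1, min_count=1, epochs=5, seed=0)

movies = sorted(genre_of)
X = np.stack([w2v.wv[m] for m in movies])   # pretrained movie embeddings h(x)
y = np.array([genre_of[m] for m in movies])

# Downstream genre prediction with logistic regression (80%-20% split).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```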
7 https://radimrehurek.com/gensim/models/doc2vec.html

F.3 SUPPLEMENTARY RESULTS

We provide the sensitivity analysis for the featurization method. We focus on two variables: the dimension $d$ and the variance of $Q$ (denoted by $\sigma_Q^2$). Recall that we consider $Q$ as a Gaussian distribution. We vary $d$ in $\{16, 32, 64\}$, and vary $\sigma_Q^2$ in $\{0.2, 0.4, 0.6, 0.8\}$. In particular, we first compare side-by-side $h_d$, $\phi_d(h)$, and $\phi_{2d}(h)$, while fixing $Q$ as the standard Gaussian distribution. We see from Figure A.3 that $\phi_d(h)$ consistently outperforms $h_d$ on both ML-1m task 1 and ML-1m task 2. $\phi_{2d}(h)$ also significantly improves upon the performance of $\phi_d(h)$, which suggests the benefit of allowing extra model complexity in the particular tasks we consider. Further, the performance of both $\phi_d(h)$ and $\phi_{2d}(h)$ has considerably smaller variance than $h(x)$.

We then examine the sensitivity of the downstream performance w.r.t. $Q$ – the sampling distribution for constructing $\phi_d(h)$. As stated before, we let $Q$ be a zero-mean Gaussian distribution, and vary its variance. From Figure A.4, we observe that for all the dimensions we consider, the downstream task under $\phi_d(h)$ is very stable under different $\sigma_Q$. This echoes Corollary 4, i.e. that our approach enjoys robustness to the selection of $Q$. In real-world production, we have been using the standard Gaussian distribution and observed no issues.

F.4 ONLINE DEPLOYMENT

To avoid potential conflicts of interest, we provide an overview of our production experiments. We aim to provide enough detail for interested practitioners to draw inspiration for both developing their own solutions and replicating ours.

Some background introduction. In e-commerce applications, the representation of items serves as a central component for almost all machine learning algorithms (Wang et al., 2018; Xu et al., 2021). In the past few years, we have built a dedicated item representation learning pipeline that uses multiple sources of data to optimize item embeddings. Since there are billions of items on our platform Ecom, it took us considerable effort to optimize the data pipeline and training routines so that the model refresh can be done on a daily basis. We point out that the daily refresh is necessary for item representation learning because the catalog of items, which is a major source of pretraining data, also gets minor updates on a daily basis. For example, new items can be added, and the features of items (e.g. title, price, description) can be modified by the vendors. The other major source of pretraining data is the historical customer behavior data. It is critical for revealing the relationships (e.g. similarity, complementariness, compatibility, substitutability) among items. These relationships are relatively stable in the customer population, so the more data we use, the more likely we are to discover useful signals. Our model for pretraining item embeddings has feed-forward components, recurrent-unit components, as well as contrastive learning components. The reason for using these different components is to effectively handle data that has different structures. The pretrained item embeddings are expected to be stable: as we mentioned above, the relationships among items are quite stable, and the catalog data has very minor differences within a limited span of time. Therefore, downstream machine learning models may follow a weekly or bi-weekly refresh schedule while expecting very stable performance.
The four major applications that depend on our pretrained item embeddings, which we first introduced in Section 6, are item-page recommendation (Task 1), search ranking (Task 2), email recommendation (Task 3), and home-page marketing (Task 4). Each of the four tasks uses both item embeddings and task-specific features to optimize its objectives. Most of them use model structures similar to the Wide and Deep network (Covington et al., 2016) to effectively combine information from different sources. Item-page recommendation aims to provide items that are related to the anchor item on the particular page that the customer is viewing; item embeddings are used in both the recall and reranking stages. Search ranking is a huge system that combines multiple components; the item embeddings are used in a particular recall stage. Email recommendation is a simpler task that aims to recommend items related to what the customers recently interacted with, or are expected to interact with again; item embeddings are used along with other features to build a model that optimizes CTR. Marketing is also a huge system in Ecom, and the particular component that uses item embeddings builds the predicted click-through rate model to support bidding and placement ranking.

Brief summary of the production environment and implementation. Our production environment is relatively standard in the e-commerce industry, with Hive/Spark supporting the offline data streaming and Tensorflow Server supporting the online inference of deep learning models. Featurizing $h(\cdot)$ via $\phi(h(\cdot), Q)$ can be easily implemented in production. Some practical advantages are:
• the algorithm is very simple and requires no training;
• it fits seamlessly into the current big-data infrastructure and frontend service;
• it requires no change to the downstream model;
• the overheads for both training and inference time are small;
• the communication can be easily done by simply recording the dimension and random seed under which a particular U is generated.
On the backend, featurizing pretrained representations is engineered into a subroutine (on top of the original automated representation learning pipeline) callable by downstream applications. For instance, it can be a simple PySpark function if the end point of the automated representation learning pipeline is a feature store in HDFS. The dimension $m$ and the random seed for generating the random directions $U = [u_1, \dots, u_m]$ are the two main inputs. Configuring and logging the random seed used by each experiment is important because U might be reused for
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper observes an instability issue in downstream prediction based on features from pretrained models (i.e., the accuracy of a downstream task varies depending on the pretrained representation, even when trained on the same data, model configuration, training setup, and stopping criteria). This empirical observation is theoretically analyzed via a novel uniform convergence bound for pretrained representations, and the paper eventually proposes a solution, called featurizing pretrained representations, which is empirically justified to outperform a baseline (i.e., a downstream task with the usual pretrained features) using the MovieLens-1m dataset and an in-house real-world production dataset.

Strengths And Weaknesses
Strengths:
- Provides theoretical interpretations of empirical observations.
- The solution to address the claimed instability issue is simple.

Weaknesses:
- The main empirical observations (Figure 1) are made in weak setups.
- Proposition 1 is not necessarily true.
- The experiment results are weak.

(Weakness 1) The paper first makes empirical observations on MovieLens-1m with the traditional Doc2vec model as a pretrained representation, along with logistic regression for a downstream task. Considering the huge advances in large language models, this experiment setup does not represent the current state of representation learning. If this observation were made with IMDB (or any larger dataset) along with BERT (or RoBERTa), the story would be much stronger. Moreover, the following statement needs to be more carefully explained: "Since we use logistic regression as the downstream model, the fluctuation can only be caused by the instability of pretrained representations." Why can logistic regression not be the source of the instability? In particular, do you mean the optimization for the logistic regression is done until it converges to the global optimum?

(Weakness 2) Proposition 1 implies that the uniform convergence bound for pretrained representations has an irreducible term, contributing to the instability of the downstream task performance. However, I think Proposition 1 is not necessarily true. In particular, (A.1) decomposes the generalization bound, and the proof claims some terms are non-negative, which is not always true. Consider the second term in (A.1), where the proof said "by definition, the second term is non-positive". First, I think "non-positive" is a typo: if it is non-positive, it can cancel out the irreducible term, and thus the claim does not make sense; so I assume that the term is "non-negative". Here, $h^*$ is the optimal representation w.r.t. the expected loss $R(h, f)$ instead of the empirical loss, thus the second term can be negative. In other words, if $h^* = \operatorname{argmin}_h \sum_i \ell(f \circ h(X_i), Y_i)$, the second term cannot be negative; but the if statement does not hold. Additionally, the argument for the last term has the same issue. This proof and the affected arguments in the following sections need to be fixed and adjusted.

(Weakness 3) I liked the experiments on real-world production, which demonstrate how knowledge from academia is actually used in practice. But, to justify the efficacy of the proposed approach, it is always good to evaluate on multiple public datasets; currently only one is used.
For ML-1m, I'm not sure if this is a widely accepted dataset in representation learning; the main reason is that its input dimension looks not large enough (e.g., based on the paper, "each movie is provided with its title and genre, in the form of English words or sentences. There are 18 genres in total."). Based on this limited information, classifying five ratings sounds very challenging (in contrast, a human rates each movie based on richer information). I personally think this unusual property of the dataset may lead to the unstable results in Figure 1. Appendix F.1 includes some results on IMDB, but I cannot see the downstream accuracy results as in the figure on page 2. In short, evaluating the proposed approach on at least two widely accepted datasets would make the experiment results stronger. I guess IMDB's downstream accuracy results need to be presented to strongly show the observed instability issue. Moreover, I'd recommend using features from BERT or RoBERTa as the pretrained model features. Based on Table 1, featurizing pretrained representations looks promising, and I hope that the same trend holds for other datasets and representations.

Clarity, Quality, Novelty And Reproducibility
(clarity) I think the paper is well written. It would be better if details on the data were provided (e.g., the dataset size), as the stability issues may be related to a small number of samples.
(quality) If my concerns are addressed, the paper quality is good, as the main claim is supported by both theories and empirical results.
(novelty) I think the claim could be novel if my concerns about the proof are addressed.
ICLR
Title
Some Practical Concerns and Solutions for Using Pretrained Representation in Industrial Systems

Abstract
Deep learning has dramatically changed the way data scientists and engineers craft features – the once tedious process of measuring and constructing can now be achieved by training learnable representations. Recent work shows pretraining can endow representations with relevant signals, and in practice they are often used as feature vectors in downstream models. In real-world production, however, we have encountered key problems that cannot be justified by existing knowledge. They raise concerns that the naive use of pretrained representations as feature vectors could lead to unwarranted and suboptimal solutions. Our investigation reveals critical insights into the gap of uniform convergence for analyzing pretrained representations, their stochastic nature under gradient descent optimization, what model convergence means for them, and how they might interact with downstream tasks. Inspired by our analysis, we explore a simple yet powerful approach that can refine pretrained representations in multiple ways, which we call Featurizing Pretrained Representations. Our work balances practicality and rigor, and contributes to both applied and theoretical research on representation learning.

1 INTRODUCTION

The ability of neural networks to learn predictive feature representations from data has always fascinated practitioners and researchers (Bengio et al., 2013). The learnt representations, if proved reliable, can potentially renovate the entire life cycle and workflow of industrial machine learning. Behind reliability are the three core principles for extracting information from data, namely stability, predictability, and computability (Yu, 2020). These three principles can not only justify the practical value of learnt representations, but also lead to the efficiency, interpretability, and reproducibility that are cherished in real-world production. Since pretrained representations are optimized to align with the given task, intuitively, they should satisfy all three principles in a reasonable setting. However, when productionizing an automated pipeline for pretrained representations in an industrial system, we encountered key problems that cannot be justified by existing knowledge. In particular, while the daily refresh follows the same modelling and training configurations and uses essentially the same data¹, downstream model owners reported unexpectedly high fluctuations in performance when retraining their models. For illustration purposes, here we reproduce the issue using benchmark data, and take one further step where the pretraining is repeated on exactly the same data, under the same model configuration, training setup, and stopping criteria. We implement ten independent runs to essentially generate i.i.d. versions of the pretrained representation. We first visualize the dimension-wise empirical variances of the pretrained representations, provided in Figure 1a. It is surprising to find that while the pretraining losses almost converge to the same value in each run (Figure 1b), there is such a high degree of uncertainty about the exact values of each dimension. Further, in Figure 1c, we observe that the uncertainty (empirical variance) of the pretrained representation increases as the pretraining progresses.

1 Since the pretraining uses years of history data, the proportion of new daily data is quite small.
In the downstream task where pretrained representations are used as feature vectors (see the right figure), we observe that the performance does fluctuate wildly from run to run. Since we use logistic regression as the downstream model, the fluctuation can only be caused by the instability of the pretrained representations, because we can effectively optimize the downstream model to its global optimum. To demonstrate that the above phenomenon is not caused by a specific model or dataset, we also experiment with a completely different pretraining model and benchmark data from another domain. We perform the same analysis, and unfortunately the same issues persist (Figure A.1 in the Appendix). Existing deep learning theory, both the convergence and the generalization results (we discuss them further in Section 2), can fail to explain why we should expect pretrained representations to work well in a downstream task when their exact values are so unstable. This is especially concerning for industrial systems, as the issue can lead to unwarranted and suboptimal downstream solutions. We experienced this issue firsthand in production, so we are motivated to crack the mysteries behind pretrained representations, and to understand if and how their stability can be improved without sacrificing predictability and computability. We summarize our contributions below.
• We provide a novel uniform convergence result for pretrained representations, which points out gaps that relate to the stability and predictability issues.
• We break down and clarify the stability issue by revealing the stochastic nature of pretrained representations, the convergence of the model output, and the stable and unstable components involved.
• We investigate the interaction between pretrained representations and downstream tasks in both parametric and non-parametric settings, each revealing how predictability can benefit or suffer from stability (or instability) for particular usages of pretrained representations.
• We discuss the idea of featurizing pretrained representations, and propose a highly practical solution that has nice guarantees and balances stability, predictability, and computability. We also examine its effectiveness in real-world experiments and online testing.

2 RELATED WORK

It was not until recent years that deep learning theory saw major progress. Zhang et al. (2016) observed that parameters of neural networks stay close to initialization during training. At initialization, wide neural networks with random weights and biases are Gaussian processes, a phenomenon first discussed by Neal (1995) and recently refined by Lee et al. (2017); Yang (2019); however, these results do not consider the effect of optimization. The Neural Tangent Kernel provides a powerful tool to study the limiting convergence and generalization behavior of gradient descent optimization (Jacot et al., 2018; Allen-Zhu et al., 2019), but it sometimes fails to capture meaningful characteristics of practical neural networks (Woodworth et al., 2020; Fort et al., 2020). Moreover, those works require the parameters to stay close to initialization, in which case useful representation learning would not take place.
Indeed, it has also caught people's attention that representation learning can go beyond the neural tangent kernel regime (Yehudai & Shamir, 2019; Wei et al., 2019; Allen-Zhu & Li, 2019; Malach et al., 2021), among which a line of work connects the continuous-time training dynamics with mean-field approximation (Mei et al., 2018; Sirignano & Spiliopoulos, 2020), and another direction is to study the lazy training regime (Chizat et al., 2019; Ghorbani et al., 2019) where only the last layer of a neural network is trained. Unfortunately, their assumed training schemas all deviate from practical representation learning. Still, part of our analysis in Section 4.2 can be viewed as a practical discrete-time extension of the mean-field method. Perhaps the most practical setting for studying pretrained representations is Arora et al. (2019), which analyzes contrastive representation learning under a particular data generating mechanism. However, their results do not generalize to broader settings, and they cannot justify the stability issue of pretrained representations.

3 PRELIMINARIES

Notations. We use $x \in \mathcal X \subseteq \mathbb R^{d_0}$ and $y \in \mathbb R$ to denote the raw feature and outcome, uppercase letters to denote random variables and measures, and bold-font letters to denote matrices. Let $h: \mathcal X \to \mathbb R^d$ be the representation hypothesis, and $f: \mathbb R^d \to \mathbb R$ be the prediction hypothesis. The hypothesis classes are given by $\mathcal H$ and $\mathcal F$, respectively. Denote by $\circ$ the operator for function composition, and $\ell: \mathbb R\times\mathbb R \to [0,1]$ the loss function. We assume $\ell$ is 1-Lipschitz without loss of generality. Then the risk for a pair $(h \in \mathcal H, f \in \mathcal F)$ is given by: $R(h,f) := \mathbb E_{(X,Y)\sim P}\big[\ell\big(f\circ h(X), Y\big)\big]$, where $P$ is a measure on $(\mathcal X, \mathbb R)$. We also use $P^n$ to denote the corresponding product measure for $(X_1,Y_1),\dots,(X_n,Y_n)$.

The one-layer multi-layer perceptron (MLP) is perhaps the most fundamental representation learning model, given by: $f\circ h(x) = \Theta\,\sigma(\mathbf W x)$. Here, $\sigma$ is the activation function, and $\mathbf W \in \mathbb R^{d_0\times d}$, $\Theta \in \mathbb R^{d\times k}$. We mention that adding bias terms does not affect our analysis, so we drop them here for brevity. In practice, $\Theta$ and $\mathbf W$ are often initialized as scaled i.i.d. Gaussian random variables that follow $N(0, 1/d)$. We use notation such as $[\mathbf W]_i$ to denote the $i$-th row of a matrix. The popular contrastive representation learning can also be considered a special case of this configuration².

Define the shorthand $g(x) := \Theta\,\sigma(\mathbf W x)$. A typical pretraining process involves optimizing the risk function defined for pretraining and then extracting the hidden representation. The optimization is done via stochastic gradient descent (SGD), e.g. $\mathbf W^{(t+1)} = \mathbf W^{(t)} - \alpha\nabla_{\mathbf W}\,\ell\big(g(x^{(t)}), y^{(t)}\big)$, where $\alpha$ is the learning rate. For convenience, we consider each mini-batch to contain one random sample, denoted by $(x^{(t)}, y^{(t)})$ for the $t$-th step. Given a representation hypothesis $h$, we define: $f_{h,n} := \operatorname{argmin}_{f\in\mathcal F} \frac1n\sum_{i=1}^n \ell\big(f(h(x_i)), y_i\big)$. In the sequel, how well $f_{h,n}\circ h$ generalizes to a new i.i.d. sample of the downstream task is measured by: $R(h) := \mathbb E_{(X,Y)\sim P}\,\mathbb E_{P^n}\big[\ell\big(f_{h,n}\circ h(X), Y\big)\big]$, where the second expectation $\mathbb E_{P^n}$ is taken with respect to the downstream data $\{X_i, Y_i\}_{i=1}^n$ underlying $f_{h,n}$. Its empirical version is given by $R_n(h) := \frac1n\sum_i \ell\big(f_{h,n}\circ h(X_i), Y_i\big)$.

4 MAIN ANALYSIS

4.1 THE GAP OF UNIFORM CONVERGENCE FOR PRETRAINED REPRESENTATION

Suppose $h$ and $f$ are optimized jointly (end-to-end) via empirical risk minimization (ERM), which amounts to solving: $\operatorname{argmin}_{h\in\mathcal H, f\in\mathcal F} \frac1n\sum_i \ell(f\circ h(x_i), y_i)$. In this setting, the generalization behavior of the solution is well-studied.
In particular, using the notion of Gaussian (or Rademacher) complexity³, the generalization error can be bounded by $O\big(G_n(\mathcal F\circ\mathcal H)/n + \sqrt{(\log 1/\delta)/n}\big)$ with probability at least $1-\delta$ (Bartlett & Mendelson, 2002). This result, known as uniform convergence, is especially appealing because it both includes problem-specific aspects and applies to all functions in the composite hypothesis class $\mathcal F\circ\mathcal H := \{f\circ h: f\in\mathcal F, h\in\mathcal H\}$. Is it possible to achieve a comparable result for pretrained representations? Perhaps the most ideal setting for uniform convergence to hold under pretrained representations is:

C1: the pretraining and downstream training use the same data $\{(X_i, Y_i)\}_{i=1}^n$, i.e.
$$\hat h, \hat f := \operatorname*{argmin}_{h\in\mathcal H,\, f\in\mathcal F} \frac1n\sum_{i=1}^n \ell\big(f\circ h(X_i), Y_i\big), \qquad f_{\hat h, n} = \operatorname*{argmin}_{f\in\mathcal F} \frac1n\sum_{i=1}^n \ell\big(f(\hat h(X_i)), Y_i\big);$$

C2: they rely on the same prediction function class $\mathcal F$.

2 We can simply set $x_i \in \mathbb R^n$ as one-hot encodings, and $\mathbf W, \Theta \in \mathbb R^{d_0\times d}$, where they are allowed to coincide. Then we let $h(x_i) = [\mathbf W]_i$ or $[\Theta]_i$ depending on the context. The activation becomes the identity function, and $\ell(f(x_i), x_j) = \log(1 - \sigma(h(x_i)^T h(x_j)))$ (or $\log\sigma(h(x_i)^T h(x_j))$), with $\sigma(\cdot)$ being the sigmoid function.
3 We use Gaussian complexity $G(\cdot)$ here for some of its technical conveniences, and let $G_n$ be the empirical Gaussian complexity. See Appendix A for details.

These two conditions essentially eliminate the confounding effects of model and data mismatch. Thus, if uniform convergence cannot hold in this setting, it is unlikely to serve more general use cases. We first summarize the common intuition behind why pretrained representations might work:
• the pretraining objective, when well-designed, reasonably predicts the empirical downstream risk of $f_{h,n}$ (intuition 1);
• $f_{h,n}$'s empirical downstream risk generalizes to the true downstream risk (intuition 2).
These two intuitions have also been exemplified for contrastive representation learning in Arora et al. (2019) and its follow-up work. Our main contribution here is to make the above intuitions rigorous, and to reveal whether they are indeed sufficient for uniform convergence in general settings. Recall that, given complete information on a downstream task, the best we can do is: $\min_{h\in\mathcal H, f\in\mathcal F} R(h,f)$. We denote the representation hypothesis that achieves this minimum by $h^*$. Let $\hat h$ be given as in C1. Then the generalization error is simply given by: $R(\hat h) - \min_{h\in\mathcal H, f\in\mathcal F} R(h,f)$. Following the standard derivation, which decomposes the generalization error and takes the supremum to upper bound each term, we run into terms that exactly characterize the above two intuitions. As we show in Appendix B, it holds that:
$$R(\hat h) - \min_{h\in\mathcal H, f\in\mathcal F} R(h,f) \le \sup_h\big\{\mathbb E_{P^n} R_n(h) - R_n(h)\big\} + \sup_h \mathbb E_{P^n}\Big[\mathbb E_{(X,Y)\sim P}\big[\ell\big(f_{h,n}\circ h(X), Y\big)\big] - R_n(h)\Big] + \text{remainder},$$
where the first term $\sup_h\{\mathbb E_{P^n} R_n(h) - R_n(h)\}$ exactly seeks to match intuition 1, and the second term can be further upper bounded using:
$$\mathbb E_{(X,Y)\sim P}\big[\ell\big(f_{h,n}\circ h(X), Y\big)\big] - R_n(h) \le \sup_{f\in\mathcal F}\Big\{\mathbb E_{(X,Y)\sim P}\big[\ell\big(f\circ h(X), Y\big)\big] - \frac1n\sum_i \ell\big(f\circ h(X_i), Y_i\big)\Big\},$$
which underlies intuition 2. The remainder terms can be bounded using standard concentration results. However, we also spot a critical issue with the first term, which we first expand for clarity:
$$\sup_h\Big\{\mathbb E_{P^n}\Big[\frac1n\sum_i \ell\big(f_{h,n}\circ h(X_i), Y_i\big)\Big] - \frac1n\sum_i \ell\big(f_{h,n}\circ h(X_i), Y_i\big)\Big\}.$$
Notice that this is not the typical empirical process encountered in a standard generalization setting, and we show that its upper bound is actually given by $O\big(G_n(\mathcal H)/\sqrt n + \sqrt{\log 1/\delta}\big)$, following the same procedure as Bartlett & Mendelson (2002).
Compared with the standard generalization bound, here the slack term $\sqrt{\log 1/\delta}$ does not vanish as we increase $n$. Therefore, there exist gaps between the common intuitions and achieving uniform convergence. Before we discuss the cause of the gaps and their implications, we first present the complete result below.

Proposition 1. Let $G'_n(\cdot)$ be a slightly modified Gaussian complexity term. Under the conditions and definitions in C1 and C2, it holds with probability at least $1-\delta$ that:
$$R(\hat h) - \min_{h\in\mathcal H, f\in\mathcal F} R(h,f) \lesssim \frac{G_n(\mathcal H)}{\sqrt n} + \frac{G'_n(\mathcal F)\,\sup_h\sqrt{\mathbb E\|h(X)\|_2^2}}{\sqrt n} + \sqrt{\log 1/\delta}.$$

The proof is deferred to Appendix B. Proposition 1 can be viewed as a "no free lunch" result for using pretrained representations: even in the most ideal setting we study here, uniform convergence cannot be expected for all representation hypotheses. The gap is that not for every $h \in \mathcal H$ can the pretraining objective be predictive of $f_{h,n}$'s empirical downstream risk. Imagine an entirely random $\tilde h$. Then both its pretraining objective and the empirical downstream risk of $f_{\tilde h,n}$ may have high variances that do not scale with $n$, so the prediction will not concentrate whatsoever.

Takeaway. The implications of this gap are twofold. Firstly, it does not suffice to only study $\mathcal H$ and the data distribution – the statistical and algorithmic convergence properties of $\hat h(X)$ could be more relevant, as they suggest its stability. Secondly, we cannot take the performance of $f_{\hat h,n}$ for granted, at least not without understanding how $\hat h(X)$ interacts with the downstream learning and generates $f_{\hat h,n}$ – which ultimately relates to its predictability. Unfortunately, we find a lack of discussion on these two issues in the existing literature, so we investigate them in the next two sections.

4.2 STOCHASTIC NATURE OF PRETRAINED REPRESENTATION, AND THE CONVERGENCE OF THE PRETRAINING MODEL

In this section, we reveal two important statistical and algorithmic properties of pretrained representations. We show that while they persist as random vectors during SGD optimization (as shown in Figure 1), the output of the pretraining model can be deterministic and converge to some optimal solution. Two contributing factors are the scaled i.i.d. initialization and the inductive bias of gradient descent. Our findings provide critical insight into the stability of pretrained representations.

We motivate our statistical analysis by deriving the optimization path of the one-layer MLP introduced in Section 3. For notational convenience, we introduce $\tilde\Theta$ and $\tilde{\mathbf W}$ as the rescaled versions of $\Theta$ and $\mathbf W$ such that $\tilde\Theta^{(0)}, \tilde{\mathbf W}^{(0)} \overset{\text{i.i.d}}{\sim} N(0,1)$. We let $\ell'(g(x), y)$ be the derivative of the loss function, and similarly for other functions. In contrast to existing theoretical work that studies the optimization path under gradient flow or an infinitesimal learning rate, we fix the learning rate at $\alpha = 1$ to reflect real-world practice. The output dimension is also set to $k = 1$ without loss of generality. In the first forward pass, since $\sigma(\mathbf W^{(0)}x^{(0)})$ has i.i.d. coordinates, as $d \to \infty$ it holds that:
$$g^{(0)}(x^{(0)}) := \frac{1}{d}\sum_{i=1}^d \big[\tilde\Theta^{(0)}\big]_i \big[\sigma\big(\tilde{\mathbf W}^{(0)}x^{(0)}\big)\big]_i \xrightarrow{a.s.} \mathbb E\,\Theta^{(0)}\sigma\big(W^{(0)}x^{(0)}\big) \quad \big(\text{denoted by } g^{(0)}_*(x^{(0)})\big),$$
where we use $\Theta^{(t)}, W^{(t)}$ to denote an i.i.d. element (or row) of $\tilde\Theta^{(t)}$ and $\tilde{\mathbf W}^{(t)}$. As a result, $\ell'\big(g^{(0)}(x^{(0)}), y^{(0)}\big)$ also converges to the deterministic value $L^{(0)} := \ell'\big(g^{(0)}_*(x^{(0)}), y^{(0)}\big)$. Then in the first backward pass, the updated parameters will follow:
$$\tilde\Theta^{(1)} = \tilde\Theta^{(0)} - L^{(0)}\sigma\big(\tilde{\mathbf W}^{(0)}x^{(0)}\big), \qquad \tilde{\mathbf W}^{(1)} = \tilde{\mathbf W}^{(0)} - L^{(0)}x^{(0)}\tilde\Theta^{(0)}\sigma'\big(\tilde{\mathbf W}^{(0)}x^{(0)}\big).$$
An important observation is that the updated parameters remain element-wise i.i.d. Consequently, the model output of the second forward pass will also converge to a deterministic value:
$$g^{(1)}(x^{(1)}) \xrightarrow{a.s.} \mathbb E\Big[\Big( \Theta^{(0)} - L^{(0)}\sigma\big(W^{(0)}x^{(0)}\big) \Big)\,\sigma\Big( W^{(0)}x^{(1)} - L^{(0)}x^{(0)}\Theta^{(0)}\sigma'\big(W^{(0)}x^{(0)}\big)x^{(1)} \Big)\Big].$$
As we show in the following proposition, the (statistical) convergence result holds for any $t$, and there exists a general iterative update rule for $g^{(t)}(x)$. For some intuition, suppose $\sigma(\cdot)$ is the identity function; then $\Theta^{(t)}, W^{(t)}$ are simply linear combinations of $\Theta^{(0)}, W^{(0)}$.

Proposition 2. For the one-layer MLP we consider, with the learning rate $\alpha = 1$, for any step $t > 1$, as $d \to \infty$, the model output $g^{(t)}(x)$ converges almost surely to $g^{(t)}_*(x)$ defined as follows:
$$g^{(t)}_*(x) = \big( C^{(t)}_1 C^{(t)}_2 + C^{(t)}_3 C^{(t)}_4 \big)\, x, \quad \text{with} \quad \big( C^{(t+1)}_1, C^{(t+1)}_2, C^{(t+1)}_3, C^{(t+1)}_4 \big) = \big( C^{(t)}_1, C^{(t)}_2, C^{(t)}_3, C^{(t)}_4 \big) + L^{(t)} x^{(t)} \big( C^{(t)}_3, C^{(t)}_4, C^{(t)}_1, C^{(t)}_2 \big).$$

As a corollary, while the hidden representations remain random vectors throughout the SGD process (which can be seen from the update rule):
$$h^{(t)}(x) := \sigma\big(\mathbf W^{(t)}x\big) = \sigma\Big( \tilde{\mathbf W}^{(t-1)}x - L^{(t-1)}x^{(t-1)}\tilde\Theta^{(t-1)}\sigma'\big(\tilde{\mathbf W}^{(t-1)}x^{(t-1)}\big)x \Big),$$
$\langle h^{(t)}(x), h^{(t)}(x')\rangle$ will nevertheless converge to some deterministic value as $d\to\infty$. The proof and details are deferred to Appendix C. In Figure 1d, we see that the statistical convergence of the model output is indeed evident even with moderately small $d$, and its variance is smaller by magnitudes than the variance of the hidden representation $\sigma(\mathbf W^{(t)}x)$ (see the x-axes of Figures 1c and 1d). On the other hand, the algorithmic convergence of model predictions has received considerable attention. It has been shown that over-parameterized models converge to minimum-norm interpolants due to the inductive bias of gradient descent (Bartlett et al., 2021; Soudry et al., 2018). For the sake of space, here we focus on their implications and leave the details to Appendix C. Roughly speaking, among the many locally optimal solutions that interpolate the training data, gradient descent will converge to the one with the smallest norm, which usually has nice properties such as smoothness. We let $g_0$ be that particular solution, such that $\lim_{t\to\infty} g^{(t)}(x) = g_0(x)$. Since $\langle h^{(t)}, h^{(t)}\rangle$ converges statistically to a deterministic value at every optimization step, we can immediately conclude that:
• if $g^{(t)}$ takes the form of $\langle h^{(t)}, h^{(t)}\rangle$, such as in contrastive representation learning, the inner product between hidden representations also converges algorithmically to $g_0$'s prediction;
• if $g^{(t)} = \theta h^{(t)}$, i.e. the last hidden layer is used as the representation, note that a necessary but not sufficient condition for $\|g^{(t)}(x) - g^{(t)}(x')\|$ to be small is that $\|h^{(t)}(x) - h^{(t)}(x')\|$ is small as well. Suppose the $h^{(t)}$ are normalized; then upon algorithmic convergence, $\langle h^{(t)}(x), h^{(t)}(x')\rangle$ is likely to be larger if $x, x'$ are close to each other under $g_0$'s prediction.

Takeaway. The stochastic nature of $\hat h := \lim_{t\to\infty} h^{(t)}$ and the (approximate) convergence of $\langle\hat h(x), \hat h(x')\rangle$ under gradient descent reveal two important properties of pretrained representations:
1. Instability of $\hat h(x)$: the exact position of $\hat h(x)$ in $\mathbb R^d$ is stochastic, depending on the initialization and the order in which the pretraining data is fed to SGD;
2. Stability of $\langle\hat h(x), \hat h(x')\rangle$: the pairwise inner product $\langle\hat h(x), \hat h(x')\rangle$ converges (approximately) to a value that is consistent with the minimum-norm interpolant of the pretraining task.
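To make the two takeaways tangible, here is a small numerical sketch (with illustrative settings only, not our production model): we pretrain the same one-layer MLP ten times on an identical data stream, varying only the initialization, and compare the seed-to-seed standard deviation of the coordinates of $\hat h(x)$ with that of the normalized inner product $\langle\hat h(x), \hat h(x')\rangle/d$. The synthetic target and all hyperparameters below are hypothetical.

```python
import numpy as np

def pretrain(seed, d=4000, steps=100, d0=5):
    rng = np.random.default_rng(seed)           # only the initialization varies
    W = rng.standard_normal((d, d0))            # rescaled W~(0), entries N(0, 1)
    theta = rng.standard_normal(d)              # rescaled Theta~(0)
    data = np.random.default_rng(123)           # identical data stream in every run
    for _ in range(steps):
        x = data.standard_normal(d0)
        y = x[0]                                # a fixed synthetic pretraining target
        h = np.tanh(W @ x)
        L = theta @ h / d - y                   # ell'(g, y) under squared loss
        theta, W = theta - L * h, W - L * np.outer(theta * (1.0 - h**2), x)
    return np.tanh(W @ x1), np.tanh(W @ x2)     # h_hat at two probe points

probe = np.random.default_rng(7)
x1, x2 = probe.standard_normal(5), probe.standard_normal(5)
reps = [pretrain(s) for s in range(10)]
H1 = np.array([r[0] for r in reps])
H2 = np.array([r[1] for r in reps])
print("coordinate std of h_hat(x):    ", H1.std(axis=0).mean())          # stays O(1)
print("std of <h_hat(x), h_hat(x')>/d:", np.std((H1 * H2).sum(1) / 4000)) # far smaller
```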
These results will also play a crucial role in understanding how $\hat h$ interacts with the downstream learning, which we study in the next section.

4.3 INTERACTION WITH DOWNSTREAM TASKS

To be comprehensive, we consider both the parametric and non-parametric setups for the downstream task. Interestingly, they reveal different aspects of the predictability of $\hat h$.

Parametric setup. To eliminate the interference of label noise, we consider the noiseless setting where the output of the downstream task is generated by: $y_i = f^*\big(\mathbb E[h(x_i)]\big)$, $i = 1, \dots, n$. Because $h(x)$ might be high-dimensional, we assume there is some sparsity in $f^*$. The conditions below provide perhaps the easiest parametric setup for pretrained representations to perform well.

C3: Let $f^*(h) := \langle\theta^*, h\rangle$, $\|\theta^*\|_0 \le q$, and let the inputs $h_i := \mathbb E\, h(x_i)$ be sampled from $N(0, \sigma_h^2 I)$, where $\sigma_h$ is the strength of the signal. We showed previously that $\hat h$ is stochastic, so we simply set $\hat h_i := h_i + \epsilon_i$, where $\epsilon_i \sim N(0, \sigma_\epsilon^2 I)$ captures the variance of the pretrained representation.

Intuitively, since the $\epsilon_i$ are i.i.d., it holds that $\mathbb E_\epsilon\big[\langle\hat h(x_i), \hat h(x_j)\rangle\big] = \langle h(x_i), h(x_j)\rangle$, so recovering $\theta^*$ should be less challenging. However, we show that the variance will again prohibit efficient learning, and the best $f_{\hat h,n}$ can do is controlled by $\sigma_\epsilon/\sigma_h$ – a notion of signal-to-noise ratio for pretrained representations. The result below takes the form of a minimax lower bound: an information-theoretic quantity that characterizes the inherent difficulty of a problem. Our proof (in Appendix D) is based on Le Cam's method, which was previously used to prove a lower bound result under label noise (Raskutti et al., 2011) – a setting very different from ours.

Proposition 3. Under C3, it holds with probability at least $1/2$ that:
$$\inf_{\hat\theta} \sup_{\|\theta^*\|_0 \le q} \|\hat\theta - \theta^*\|_2 \gtrsim \big(\sigma_\epsilon^2/\sigma_h^2\big) \cdot q\, n^{-1}\log(d/q),$$
where $\inf_{\hat\theta}$ is taken with respect to any learning procedure that is based on $\{\hat h(x_i), y_i\}_{i=1}^n$.

Takeaway. The result in Proposition 3 is alarming because during pretraining, the variance of $h(x)$ might increase as more and more stochastic terms are added (suggested by both the derivations in Section 4.2 and the empirical result in Figure 1c). The above lower bound shows that the predictability of $\hat h(x)$ can be compromised by the variance it inherits from pretraining. This also explains the instability in downstream machine learning that we experienced in real-world production.

Non-parametric setup. Among non-parametric regression estimators, the Nadaraya-Watson (NW) estimator has received considerable attention due to its simplicity and effectiveness (Nadaraya, 1964). It can be thought of as a smoothing nearest-neighbor estimator under a weighting schema:
$$f_{h,n}\circ h(x) := \sum_{i=1}^n y_i\, w_h(x, x_i), \qquad w_h(x, x_i) := K\big( h(x) - h(x_i) \big)/z,$$
where $K: \mathbb R^d \to \mathbb R_+$ is a kernel, and $z$ is a normalizing constant. Here, we omit the bandwidth parameter for convenience. The Gaussian kernel $K(u) \propto \exp(-\|u\|_2^2)$ is a common choice, so when pretrained representations are normalized, it depends on $h$ only via $\langle h(x), h(x')\rangle$ – a more stable quantity according to the previous section. We effectively denote this kernel by $K\big(\langle h(x), h(x')\rangle\big)$. It is well understood that the generalization of a kernel support vector machine is controlled by the kernel-target alignment (Cristianini et al., 2001), i.e. $\langle \vec y, \mathbf K\vec y\rangle$, where $\vec y = [y_1, \dots, y_n]^T$ and $\mathbf K_{i,j} = K\big(\langle h(x_i), h(x_j)\rangle\big)$. We prove that this is also the case for the NW estimator, with a simple result that does not resort to concentration arguments.
The proof is in Appendix D.

Lemma 1. Under the 0-1 loss, with probability at least $1-\delta$, the risk of the NW estimator satisfies:
$$R(f_{h,n}\circ h) \le 1 - \sqrt\delta \cdot \mathbb E\big[ \mathbb 1[Y = Y']\, K\big(\langle h(X), h(X')\rangle\big) \big],$$
where the expectation is taken with respect to $(X,Y)\sim P$, $(X',Y')\sim P$.

Takeaway. Lemma 1 shows that the predictability of $h(x)$, when expressed and measured through the more stable $\langle h(x), h(x')\rangle$, is strictly guaranteed. Therefore, using $h(x)$ in the downstream task in the form of $\vec h(x) := \big[e^{\langle h(x), h(x_1)\rangle}, \dots, e^{\langle h(x), h(x_n)\rangle}\big]$ can be beneficial, and it can be interpreted as a representation of the weights in the NW estimator. Further, $\vec h(x)$ contains all the pairwise relationships, which can be more closely related to the pretraining objective. Note that $h(x)$ can also be viewed as the compression of $\vec h(x)$ because $[\vec h(x_i)]_j = \exp(\langle h(x_i), h(x_j)\rangle)$. Nevertheless, $\vec h(x)$ and $h(x)$ cannot be compared directly because they have different intrinsic dimensionality. In terms of computability, $\vec h(x) \in \mathbb R^n$ is also no match for $h(x) \in \mathbb R^d$ – computing $\vec h(x)$ itself can be non-trivial for large-scale applications. We aim to resolve these issues in the next section.

5 FEATURIZING PRETRAINED REPRESENTATIONS

Our next goal is to build, on top of $h(x)$, features or representations that are comparable to $\vec h(x)$ in terms of stability and predictability, and have computability similar to $h(x)$. Suppose $\{h(x_i)\}_{i=1}^n$ are normalized. Then $\vec h(x_i)$ is simply the exponential of the pairwise cosine distances between $h(x_i)$ and all the pretrained representations. Notice that the angle between any pair $(h(x_i), h(x_j))$ can be decomposed into their respective angles with a baseline direction $u \in \mathbb R^d$, $\|u\|_2 = 1$. When the set of baseline directions is rich enough, we can recover all the pairwise cosine distances in $\vec h(x_i)$ using their angles with the baseline directions. Given $U := [u_1, \dots, u_m] \in \mathbb R^{d\times m}$, the set of angles between $h(x_i)$ and $U$ forms a measurement of the relative location of $h(x) \in \mathbb R^d$. We refer to such a measurement process as featurizing the pretrained representation, as it is similar to how features are constructed by measuring experimental subjects. While featurizing $h(x)$ according to its geometric properties is an appealing solution, it is unknown how many baseline directions are needed to preserve the stability and predictability of $\vec h$, as well as the optimal way to choose those directions. Fortunately, Bochner's Theorem (Loomis, 2013) from harmonic analysis lays a solid foundation for selecting the directions and providing approximation and learning guarantees. Also, the resulting measurements coincide with the random Fourier features (Rahimi & Recht, 2007; Liu et al., 2021) that play a critical role in many machine learning communities. For the Gaussian kernel we study, Bochner's Theorem states that there exists a measure $Q$ on $\mathbb R^d$ such that:
$$K(h(x), h(x')) = \int_{\mathbb R^d} e^{iu(h(x) - h(x'))} q(u)\, du \overset{\text{real part}}{=} \mathbb E_{u\sim Q}\big[ \cos\big( u\big(h(x_1) - h(x_2)\big) \big) \big].$$
Since $\cos(a - a') = \cos(a)\cos(a') + \sin(a)\sin(a')$, we can approximate the kernel value using the Monte Carlo method as below:
$$K(h(x), h(x')) \approx \frac{1}{m}\sum_{i=1}^m \cos\big(u_i h(x)\big)\cos\big(u_i h(x')\big) + \sin\big(u_i h(x)\big)\sin\big(u_i h(x')\big), \qquad u_i \overset{\text{i.i.d}}{\sim} Q.$$
Let $\phi_m\big(h(x), Q\big) := 1/\sqrt m\,\big[\cos(u_1 h(x)), \sin(u_1 h(x)), \dots, \cos(u_m h(x)), \sin(u_m h(x))\big]$ be the featurization of $h(x)$ according to Bochner's Theorem. Note that it amounts to measuring $h(x)$'s distances with respect to random directions drawn from $Q(u)$, and then transforming them through trigonometric functions.
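To make the construction concrete, a minimal sketch of $\phi_m(h(x), Q)$ is given below (illustrative, not our production code): $Q$ is taken as the standard Gaussian, the kernel bandwidth is fixed at 1, and `H` is a hypothetical array of normalized pretrained representations, one row per item. The inner products of the featurized vectors recover the Gaussian kernel up to an error that shrinks as $m$ grows.

```python
import numpy as np

def featurize(H, m, seed=0, sigma_q=1.0):
    rng = np.random.default_rng(seed)            # record (m, seed) to reproduce U later
    U = rng.normal(0.0, sigma_q, size=(H.shape[1], m))   # random directions u_1..u_m
    P = H @ U
    return np.hstack([np.cos(P), np.sin(P)]) / np.sqrt(m)

rng = np.random.default_rng(1)
H = rng.standard_normal((500, 32))
H /= np.linalg.norm(H, axis=1, keepdims=True)    # normalized h(x)

phi = featurize(H, m=256)
approx = phi @ phi.T                             # <phi_m(h), phi_m(h')>
exact = np.exp(-0.5 * ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1))  # Gaussian kernel
print("max abs error:", np.abs(approx - exact).max())   # decreases as m grows
```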
Furthermore, $\langle\phi_m(h(\cdot), Q), \phi_m(h(\cdot), Q)\rangle$ can approximate any entry of $\vec h$. To be more precise, Rahimi & Recht (2007) show that it only requires $m = \Omega\big( d/\epsilon^2 \log(\sigma_Q/\epsilon) \big)$ to achieve $\big| K(h(x), h(x')) - \langle\phi_m(h(x), Q), \phi_m(h(x'), Q)\rangle \big| \le \epsilon$, where $\sigma_Q^2$ is the second moment of $Q$. Therefore, when $m$ is comparable to $d$, the featurized $\phi_m(h(x), Q)$ achieves the stability and predictability of $\vec h$, as well as the computability of $h$. Converting $h(x)$ to $\phi_m(h(x), Q)$ is computationally efficient, since $u_1, \dots, u_m$ only need to be drawn from $Q$ once and then applied to all $h(x_i)$, $i = 1, \dots, n$. However, there is still the obstacle of finding the optimal $Q^*$. Strictly speaking, $Q^*$ is obtained from the inverse Fourier transform, but in practice the standard Gaussian distribution is often used. Indeed, computing the inverse Fourier transform and sampling from it pose another challenging task. To our knowledge, there is no existing study on whether we can safely sample $u$ from a proxy $Q$. In the following proposition, we show that using $Q$ instead of $Q^*$ will not cost stability as long as their discrepancy is bounded. In particular, we state our result in the context of Lemma 1, that is, the downstream risk is controlled by the alignment $A := \mathbb E\big[\mathbb 1[Y = Y']\, K\big(\langle h(X), h(X')\rangle\big)\big]$. We use $D_s(Q, Q^*) := \int s(dQ/dQ^*)\, dQ^*$ to denote the f-divergence induced by $s(\cdot)$.

Proposition 4. Let $\mathcal Q(Q^*; \delta) := \{Q : D_s(Q, Q^*) \le \delta\}$ be a $D_s$-ball with radius $\delta$ centered at $Q^*$. Let $\{h(x_i), y_i\}_{i=1}^n$ be the downstream data, and $A_n(Q) := \frac{1}{n(n-1)}\sum_{i\ne j} \mathbb 1[y_i = y_j]\,\langle\phi_m(h(x_i), Q), \phi_m(h(x_j), Q)\rangle$. It holds that:
$$\Pr\Big( \sup_{Q\in\mathcal Q(Q^*;\delta)} \big| A_n(Q) - A_n(Q^*) \big| \ge \epsilon \Big) \lesssim \frac{\sigma_{\mathcal Q}^2}{\epsilon^2}\exp\Big( -\frac{m\epsilon^2}{16(d+2)} \Big) + \exp\Big( -\frac{n\epsilon^2}{64(1+\delta)} \Big),$$
where $\sigma_{\mathcal Q} := \max_{Q\in\mathcal Q} \sigma_Q$.

The significance of Proposition 4 is that even if the optimal $Q^*$ is not used, in the worst-case scenario, the instability it causes (reflected via $\delta$) vanishes quickly as the sample size gets larger. Similarly, increasing the dimension of the featurized representation $\phi_m$ also speeds up the convergence exponentially. These results guarantee predictability even if $Q^*$ is not used. The proof is provided in Appendix E.

Takeaway. Featurizing the pretrained representation as $\phi_m(h, Q)$ offers a simple and practical solution to balance stability, predictability, and computability. We just showed that $Q$ can simply be the standard Gaussian distribution, and the dimension of $\phi_m(h)$ can be obtained by satisfying a specific approximation threshold $\epsilon$. It can also be treated as a tuning parameter in downstream tasks.

6 BENCHMARK AND REAL-WORLD EXPERIMENTS

We conduct experiments on the benchmark dataset MovieLens-1m (ML-1m) for illustration and reproducibility purposes. The real-world production experiments took place at a major US e-commerce platform anonymized as Ecom. The detailed description of ML-1m and the introduction to Ecom's production environment are provided in Appendix F.

On ML-1m. The dataset supports two types of pretraining-downstream task combinations: (a) leverage the sequences of user viewing data to pretrain movie embeddings, then use the embeddings to predict the genre of the movie (ML-1m task 1); (b) pretrain movie embeddings using the title and other descriptions, then use the embeddings for downstream sequential recommendation (ML-1m task 2). The detailed data processing, model and pretraining configurations, downstream training/testing setup, evaluation metrics, and sensitivity analysis are deferred to Appendix F.
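Before turning to the individual tasks, here is a quick numerical illustration of Proposition 4's robustness claim (on synthetic placeholder data, with $Q$ a zero-mean Gaussian): the empirical alignment $A_n(Q)$ barely moves across independent draws of $U$ and across $Q$'s in a neighborhood of the standard Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((300, 16))
H /= np.linalg.norm(H, axis=1, keepdims=True)   # placeholder normalized h(x)
y = (H[:, 0] > 0).astype(int)                   # placeholder labels

def alignment(H, y, sigma_q, m=512, seed=0):
    U = np.random.default_rng(seed).normal(0.0, sigma_q, (H.shape[1], m))
    P = H @ U
    phi = np.hstack([np.cos(P), np.sin(P)]) / np.sqrt(m)
    K = phi @ phi.T
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)  # i != j, y_i = y_j
    return (K * same).sum() / (len(y) * (len(y) - 1))                # A_n(Q)

for sigma_q in (0.8, 1.0, 1.2):                 # Q's near the standard Gaussian
    vals = [alignment(H, y, sigma_q, seed=k) for k in range(5)]
    print(sigma_q, round(float(np.mean(vals)), 4), round(float(np.std(vals)), 5))
```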
6 BENCHMARK AND REAL-WORLD EXPERIMENTS

We conduct experiments on the benchmark dataset MovieLens-1m (ML-1m) for illustration and reproducibility purposes. The real-world production experiments took place at a major US e-commerce platform anonymized as Ecom. Detailed descriptions of ML-1m and an introduction to Ecom's production environment are provided in Appendix F.

On ML-1m. The dataset supports two types of pretraining-downstream task combinations: (a) leveraging the sequences of user viewing data to pretrain movie embeddings, then using the embeddings to predict the genre of the movie (ML-1m task 1); (b) pretraining movie embeddings from titles and other descriptions, then using the embeddings for downstream sequential recommendation (ML-1m task 2). The detailed data processing, model and pretraining configurations, downstream training/testing setup, evaluation metrics, and sensitivity analysis are deferred to Appendix F. On ML-1m task 1, we use contrastive representation learning to pretrain the movie embeddings, and employ logistic regression to predict the genre using the movie embeddings as features. On ML-1m task 2, we use a bidirectional-RNN-type structure on the movies' NLP data, and extract the final hidden layer as the pretrained representation. The downstream sequential recommendation task employs a two-tower structure, with an RNN used to aggregate the historical viewing sequence. In Table 1, we first see that ϕ_m(h) improves the stability of h by at least ×10 in both tasks. Even under the same dimension, ϕ_m(h) outperforms h and is highly comparable to avg(h), the manually stabilized version of h obtained by averaging over ten independent runs. Note that avg(h) is almost never a good practical solution because it requires repeating the same pretraining process multiple times; here we use it as an analytical baseline and show that ϕ_m(h) is just as good. When the dimension increases, ϕ_m(h) delivers far stronger results. Although changing the dimension also changes the downstream model complexity, as we discuss below it offers more flexibility for real-world problems.

On Ecom. The item representation learning pipeline is used by several downstream productions: item-page recommendation (Task1), search ranking (Task2), email recommendation (Task3), and home-page marketing (Task4). They all have task-specific features and non-trivial, mutually distinct model architectures. The pretrained item embeddings are refreshed on a daily basis, while downstream model owners may have separate schedules for updating and refreshing the relevant parts of their models. In Appendix F.4, we describe our engineering solutions for deploying the featurization process on the frontend and backend. During A/B testing, we observed statistically significant performance lifts (in terms of click-through rate) for all four downstream applications. The average revenue-per-visitor lift was also positive during the testing period. The detailed online results and analysis are provided in Appendix F.

Lessons learnt. In addition to the improved stability and performance, important feedback we received from downstream model owners is that the flexibility in choosing ϕ_m(h)'s dimension is very helpful for their tasks. Prior to our featurization technique, it was almost impossible to personalize the dimension of the pretrained representation for different applications, let alone tune it in downstream tasks. Now, knowing that predictability will not vary much, experimenting with different dimensions often allows them to find a better bias-variance tradeoff for their downstream tasks.

7 DISCUSSION

The analytical results and the proposed featurization method in our work apply to a broad range of applications and research problems. Nevertheless, our results may still be rudimentary and far from providing the complete picture or the optimal practice for using pretrained representations. We hope the progress we made will lead to more advanced future research and applications.

Scope and limitation. Most of our analyses are performed in basic settings: while this ensures the results hold in generality, advanced methods for pretraining representations are not considered. Also, we do not include additional downstream features and their correlation with pretrained representations, or connections between the pretraining objective and the downstream task. Such additional knowledge can be useful for deriving task-specific results (Arora et al., 2019).
For applications, our featurization technique may be less helpful if the downstream task simply uses embedding distances, e.g., KNN search. Optimizing the space and time complexity via techniques such as embedding quantization might be more useful for such tasks (Chen et al., 2020); this is not discussed in our paper.

A future direction. While our work studies h(x) as a whole, it can be inferred from Figure 1c that the element-wise variance of ĥ(x) is bimodal, which suggests heterogeneity within h(x). Possible explanations are that a (random) subset of h(x) is responsible for overfitting the pretraining task (Bartlett et al., 2020), or that some dimensions are forced to become more independent of others so that the representation matrix has nice spectral properties (Hastie et al., 2022). It is thus an important future direction to identify the internal structure of h(x) in order to better featurize pretrained representations.

A TECHNICAL BACKGROUND

In this part of the paper we provide the technical background for both the discussions in the paper and the following proofs. A central role in proving uniform convergence results is played by the Gaussian / Rademacher complexity. For a set A ⊂ Rⁿ, it is defined as:

G(A) := E_ϵ[ sup_{a∈A} Σᵢ₌₁ⁿ ϵᵢ aᵢ ],

where the ϵᵢ are i.i.d. Gaussian / Rademacher random variables. It essentially measures how well a function class can interpolate a random sign pattern assigned to a set of points. Given a function class F and n samples (x₁, …, xₙ), the empirical Gaussian / Rademacher complexity is given by:

Gₙ(F) := E_ϵ[ sup_{f∈F} Σᵢ₌₁ⁿ ϵᵢ f(xᵢ) ].

Remark A.1. We mention that some versions of the definition include a 1/n factor in the complexity term. Here, we explicitly pull that factor out and place it in the resulting bound.

As we mentioned earlier, an important reason for using the Gaussian complexity is its technical properties, in particular Slepian's Lemma (Slepian, 1962) and its corollary, which we state below:

Lemma A.1 (From Slepian's Lemma). Suppose ϕ : A → R^q has Lipschitz constant L. Then it holds that: G(ϕ(A)) ≤ L · G(A).

This result can be viewed as the contraction lemma for the Gaussian complexity (Ledoux & Talagrand, 1991).

A.1 INDUCTIVE BIAS OF GRADIENT DESCENT

Our introduction primarily follows Soudry et al. (2018); Ji & Telgarsky (2019); Gunasekar et al. (2018) and their follow-up works. The key factor contributing to the implicit bias of gradient descent is the divergence of the model parameters after separating the data, under loss functions with exponential-tail behavior. When the predictor f ∈ F parameterized by θ is over-parameterized, then apart from certain degenerate cases, the data can be separated at some point provided the predictor class satisfies some regularity assumptions (Lyu & Li, 2019), e.g.
• f ∈ F is homogeneous, such that f(x; c · θ) = c^β f(x; θ), ∀c > 0;
• f ∈ F is smooth and has a bounded Lipschitz constant.
These conditions are met by many neural network structures and activation functions. The exponential tail of the loss function is satisfied by the common exponential loss and the logistic loss (which we use throughout our discussions and experiments). To see why the norm of the model parameters diverges, simply note that under, e.g., the exponential loss, both the risk and the gradient take the form Σᵢ cᵢ exp(−yᵢ f(xᵢ; θ)), where the cᵢ are lower-order terms. Since gradient descent will converge to a stationary point due to the nice properties of F, we expect Σᵢ cᵢ exp(−yᵢ f(xᵢ; θ)) = 0 to hold upon convergence.
A necessary condition for this is exp(−yᵢ f(xᵢ; θ)) → 0 for i = 1, …, n, and this condition is actually sufficient with high probability (Soudry et al., 2018). Therefore, for all exp(−yᵢ f(xᵢ; θ)) to reach 0, ∥θ∥ must diverge so that |f(·; θ)| → ∞. With that said, since the loss function decays exponentially fast around 0, the data points with the smallest margin will dominate both the gradient and the loss function. As a direct consequence, the decision boundary shares characteristics with the hard-margin SVM, given by:

min ∥θ∥₂  s.t.  yᵢ f(xᵢ; θ) ≥ 1, ∀i = 1, …, n.

Indeed, recent work shows that the optimization path of over-parameterized models converges to some minimum-norm predictor:

Corollary A.1 (Chizat et al. (2019); Woodworth et al. (2020), and others). Under the conditions specified in the referenced work, which are mostly the exponential loss, scaled initialization, an appropriate learning rate, and regularity conditions on the predictor class, it holds that:

lim_{t→∞} lim_{d→∞} F( θ(t)/∥θ(t)∥ ) converges to the stationary points of { argmin ∥f∥_K s.t. yᵢ f(xᵢ) ≥ 1, ∀i ∈ [n] },

where F is the decision boundary of f, d is the dimension of the hidden layer(s) of f, and ∥·∥_K is an appropriate RKHS norm.

Note that in Section 4.2 we use g₀ to denote the converged result, and the above corollary guarantees its existence and uniqueness. However, one open question is which particular RKHS norm best describes the solution, because it particularly affects the convergence of the parameters. Therefore, in our work, we leave the convergence of the parameters out of our discussion.

Remark A.2. It is also worth mentioning that the convergence of E[h(t)(x)] plays no part in our arguments and results. Indeed, it does not change the stochasticity of h(t)(x), and (in some cases) it can be implied from the convergence of g(t)(x) (Lyu & Li, 2019). Therefore, we do not discuss it in our work.

B PROOF OF THE RESULTS IN SECTION 4.1

We prove Proposition 1 in this part of the appendix. An important result we will use is the Gaussian complexity bound for empirical risk minimization; we use the version of Bartlett & Mendelson (2002).

Lemma A.2. Let F be a real-valued function class from X to [0, 1]. Let (X₁, …, Xₙ) be i.i.d. random variables. Then for all f ∈ F, it holds with probability at least 1 − δ that:

E[f(X)] ≤ (1/n) Σᵢ f(Xᵢ) + √(2π) Gₙ(F)/n + √( 9 log(2/δ) / (2n) ).

We now provide the proof, part of which uses Lemma A.1 and Lemma A.2. We also assume F has a Lipschitz constant of at most L.

Proof. Recall that h*, f* := argmin_{h∈H, f∈F} R(h, f). We decompose the generalization error via:

R(ĥ) − min_{h∈H, f∈F} R(h, f)
= ( R(ĥ) − min_{f∈F} (1/n) Σᵢ ℓ(f ∘ ĥ(Xᵢ), Yᵢ) )
+ ( min_{f∈F} (1/n) Σᵢ ℓ(f ∘ ĥ(Xᵢ), Yᵢ) − min_{f∈F} (1/n) Σᵢ ℓ(f ∘ h*(Xᵢ), Yᵢ) )
+ ( min_{f∈F} (1/n) Σᵢ ℓ(f ∘ h*(Xᵢ), Yᵢ) − E_{Pⁿ}[ min_{f∈F} (1/n) Σᵢ ℓ(f ∘ h*(Xᵢ), Yᵢ) ] )
+ ( E_{Pⁿ}[ min_{f∈F} (1/n) Σᵢ ℓ(f ∘ h*(Xᵢ), Yᵢ) ] − min_{f∈F} E_{(X,Y)∼P} ℓ(f ∘ h*(X), Y) ).   (A.1)

We first discuss the first term, which occupies a major part of the discussion in Section 4.1.
By standard practice, the first term can be bounded via:

R(ĥ) − min_{f∈F} (1/n) Σᵢ ℓ(f ∘ ĥ(Xᵢ), Yᵢ)
≤ sup_{h∈H} { R(h) − min_{f∈F} (1/n) Σᵢ ℓ(f ∘ h(Xᵢ), Yᵢ) }
≤ sup_{h∈H} E_{Pⁿ}[ E_{(X,Y)∼P}[ ℓ(f_{h,n} ∘ h(X), Y) ] − Rₙ(h) ]   (a)
+ sup_{h∈H} { E_{Pⁿ}[ (1/n) Σᵢ ℓ(f_{h,n} ∘ h(Xᵢ), Yᵢ) ] − (1/n) Σᵢ ℓ(f_{h,n} ∘ h(Xᵢ), Yᵢ) }.   (b)

Using Lemma A.2, term (b) can be bounded as:

sup_{h∈H} { E_{Pⁿ}[ (1/n) Σᵢ ℓ(f_{h,n} ∘ h(Xᵢ), Yᵢ) ] − (1/n) Σᵢ ℓ(f_{h,n} ∘ h(Xᵢ), Yᵢ) } ≤ √(2π) Gₙ(A(H)) + √(9 log(2/δ)),

where the set A(H) is given by:

{ ( (1/n) ℓ(f_{h,n} ∘ h(X₁), Y₁), …, (1/n) ℓ(f_{h,n} ∘ h(Xₙ), Yₙ) ) : h ∈ H }.

It is easy to verify that A(H) invokes Slepian's lemma, so we can use the contraction result from Lemma A.1 to further bound it: Gₙ(A(H)) ≤ (L/√n) Gₙ(H). Combined, term (b) is upper bounded by:

√(2π) L Gₙ(H)/√n + √(9 log(2/δ)).

Now we bound term (a) as below. Define the shorthand ℓ(F(h)) := { ( ℓ(f(h(X₁)), Y₁), …, ℓ(f(h(Xₙ)), Yₙ) ) : f ∈ F }. It holds that:

sup_{h∈H} E_{Pⁿ}[ E_{(X,Y)∼P}[ ℓ(f_{h,n} ∘ h(X), Y) ] − Rₙ(h) ]
≤ sup_{h∈H} E_{Pⁿ} sup_{f∈F} { E_{(X,Y)∼P} ℓ(f ∘ h(X), Y) − (1/n) Σᵢ ℓ(f ∘ h(Xᵢ), Yᵢ) }
≤ √(2π) sup_{h∈H} E_{Pⁿ} [ Gₙ(ℓ(F(h)))/n ]   (using Lemma A.2 and Lemma A.1)
= √(2π) n⁻¹ sup_{h∈H} E_{Pⁿ} [ (Gₙ(ℓ(F(h)))/∥h(X)∥) · ∥h(X)∥ ]   (where h(X) := [h(X₁), …, h(Xₙ)])
≤ √(2π) n⁻¹ sup_{h∈H} √(E∥h(X)∥²) · sup_{A∈R^{n×d}} (1/∥A∥) E sup_{f∈F} Σᵢ ϵᵢ f([A]ᵢ),  ϵᵢ ~ i.i.d. N(0, 1).   (A.2)

We let G′ₙ(F) := sup_{A∈R^{n×d}} (1/∥A∥) E sup_{f∈F} Σᵢ ϵᵢ f([A]ᵢ) be the modified Gaussian complexity, so term (a) is finally bounded by (√(2π)/n) G′ₙ(F) sup_{h∈H} √(E∥h(X)∥²).

Next, notice for the last term that:

E_{Pⁿ}[ min_{f∈F} (1/n) Σᵢ ℓ(f ∘ h*(Xᵢ), Yᵢ) ] ≤ E_{Pⁿ} (1/n) Σᵢ ℓ(f* ∘ h*(Xᵢ), Yᵢ) = E_{(X,Y)∼P} ℓ(f* ∘ h*(X), Y).

Therefore, the last term is always non-positive. Similarly, by definition, the second term is non-positive as well. Finally, as for the third term, since non-concentrating terms already appear in the bound of the first term, it does not hurt to simply bound it using Hoeffding's inequality, i.e., it will not exceed O(√(log(1/δ))) with probability at least 1 − δ. Putting things together, we conclude the final result.

C TECHNICAL DETAILS FOR SECTION 4.2

We first restate the proposition:

Proposition A.1. For the one-layer MLP we consider, with learning rate α = 1, for any step t > 1, as d → ∞, the model output g(t)(x) converges almost surely to g*(t)(x) defined as follows:

g*(t)(x) = ( C₁(t) C₂(t) + C₃(t) C₄(t) ) x,  with  ( C₁(t+1), C₂(t+1), C₃(t+1), C₄(t+1) ) = ( C₁(t), C₂(t), C₃(t), C₄(t) ) + L(t) x(t) ( C₃(t), C₄(t), C₁(t), C₂(t) ).

The above iterative update can be shown by making explicit the terms in the forward and backward pass of the t-th gradient step. In particular, it holds that:

g(t)(x) →a.s. E[ Θ(t) σ(W(t) x) ]  (=: g*(t)(x)),
ℓ′( g(t)(x(t)), y(t) ) →a.s. ℓ′( g*(t)(x(t)), y(t) )  (=: L(t)),
Θ̃(t+1) = Θ̃(t) − L(t) σ( W̃(t) x(t) ),
W̃(t+1) = W̃(t) − L(t) x(t) Θ̃(t) σ′( W̃(t) x(t) ).

The only extra requirement for the above convergence to hold is that the activation function is well-behaved (see Yang (2019) for a detailed description). To see how the above system of equations leads to the result in Proposition A.1, imagine the activation is the identity function. In this case, Θ̃(t) and W̃(t) are always deterministic linear combinations of Θ̃(0) and W̃(0). Observe that the update becomes:

Θ̃(t) = C₁Θ̃(0) + C₂W̃(0),  W̃(t) = C₃Θ̃(0) + C₄W̃(0).
We mention that, as a corollary, W(t+1)(x) is also element-wise i.i.d., so the inner product of the hidden representations satisfies ⟨W(t+1)(x), W(t+1)(x′)⟩ →a.s. E[ W(t)x · W(t)x′ ], where W(t) is an i.i.d. row of W̃(t+1), the rescaled version of W(t+1).

D PROOFS OF THE RESULTS IN SECTION 4.3

Proof of Proposition 3.

Proof. Proofs of lower bounds often start by converting the problem into a hypothesis testing task. Denote our parameter space by B(k) = {θ ∈ R^d : ∥θ∥₀ ≤ k}. The intuition is to suppose the data is generated as follows: 1. draw θ from a uniform distribution on the parameter space; 2. conditioned on that particular θ, draw the observed data. The problem is then converted into determining, from the data, whether we can recover the underlying θ, as a canonical hypothesis testing problem. For any δ-packing {θ₁, …, θ_M} of B(k), suppose B is sampled uniformly from the δ-packing. Then, following a standard argument of the Fano method (Wainwright, 2019), it holds that:

P( min_{θ̂} sup_{∥θ*∥₀≤k} ∥θ̂ − θ*∥₂ ≥ δ/2 ) ≥ min_{θ̃} P( θ̃ ≠ B ),   (A.3)

where θ̃ is a testing function that decides from the data whether some estimated θ equals an element sampled from the δ-packing. The next step is to bound min_{θ̃} P(θ̃ ≠ B); by the information-theoretic lower bound (Fano's inequality), we have:

min_{θ̃} P( θ̃ ≠ B ) ≥ 1 − ( I(y, B) + log 2 ) / log M,   (A.4)

where I(·, ·) denotes the mutual information. It remains to bound the mutual information term. Let P_θ be the distribution of y (the vector consisting of the n samples) given B = θ. Since y is distributed according to the mixture (1/M) Σᵢ P_{θᵢ}, it holds that:

I(y, B) = (1/M) Σᵢ D_KL( P_{θᵢ} ∥ (1/M) Σⱼ P_{θⱼ} ) ≤ (1/M²) Σ_{i,j} D_KL( P_{θᵢ} ∥ P_{θⱼ} ),

where D_KL is the Kullback-Leibler divergence. The next step is to determine M, the size of the δ-packing, and an upper bound on D_KL(P_{θᵢ} ∥ P_{θⱼ}) for elements P_{θᵢ}, P_{θⱼ} of the δ-packing. For the first part, it has been shown that there exists a 1/2-packing of B(k) in ℓ₂-norm with log M ≥ (k/2) log( (d−k)/(k/2) ) (Raskutti et al., 2011). As for the bound on the KL-divergence term, note that given θ, P_θ is a product distribution of the conditional Gaussian:

y | ϵ ∼ N( ϵθ · σ_h²/σ̃_ϵ² , θ⊤θ (σ_z² − σ_z⁴/σ̃_ϵ²) ),  where σ̃_ϵ² := σ_h² + σ_ϵ².

Hence, for any θ₁, θ₂ ∈ B(k), it is easy to compute that:

D_KL(P_{θ₁} ∥ P_{θ₂}) = E_{P_{θ₁}}[ (n/2) log( θ₁⊤θ₁(σ_z² − σ_z⁴/σ̃_ϵ²) / (θ₂⊤θ₂(σ_z² − σ_z⁴/σ̃_ϵ²)) ) + ∥y − ϵθ₂ σ_h²/σ̃_ϵ²∥₂² / ( 2θ₂⊤θ₂(σ_z² − σ_z⁴/σ̃_ϵ²) ) − ∥y − ϵθ₁ σ_h²/σ̃_ϵ²∥₂² / ( 2θ₁⊤θ₁(σ_z² − σ_z⁴/σ̃_ϵ²) ) ] = ( σ_z²/(2σ̃_ϵ²) ) ∥ϵ(θ₁ − θ₂)∥₂²,

where y and ϵ are the vector and matrix consisting of the n samples, i.e., y ∈ Rⁿ and ϵ ∈ R^{n×d}. Since each row of the matrix ϵ is drawn from N(0, σ_ϵ² I_{d×d}), a standard concentration result shows that, with high probability, ∥ϵ(θ₁ − θ₂)∥₂² can be bounded by C∥θ₁ − θ₂∥₂² for some constant C. This gives the final upper bound on the KL-divergence term:

D_KL(P_{θ₁} ∥ P_{θ₂}) ≲ nσ_z²δ² / (2σ̃_ϵ²).

Substituting this result into (A.4) and (A.3), choosing δ² = C (kσ_ϵ²/(σ_z² n)) log( (d−k)/(k/2) ) and rearranging terms, we obtain the desired result: with probability at least 1/2,

inf_{θ̂} sup_{θ*: ∥θ*∥₀≤k} ∥θ̂ − θ*∥₂ ≳ (σ_ϵ²/σ_h²) · k n⁻¹ log(d/k).

Proof of Lemma 1.

Proof. We first express the NW predictor in its expectation form: f_ϕ(X) = E_{X′}[ y′ K(X, X′) ] / Z, where Z is the normalization constant. Recall that y ∈ {−1, +1} and that R(·) is the risk associated with the 0-1 classification loss. We first define, for x ∈ X:

γ_ϕ(X) := √( E_{X′}[ K(X, X′) ] ) / Z,

where the expectation is taken w.r.t. the underlying distribution.
Using Markov's inequality, we immediately have |γ(X)| ≤ 1/√δ with probability at least 1 − δ. It then holds that:

1 − R(f) = P( Y f(X) ≥ 0 ) ≥ E[ (Y f(X)/γ(X)) · 1[Y f(X) ≥ 0] ] ≥ E[ Y f(X)/γ(X) ] ≥ √δ · E[ 1[Y = Y′] K(X, X′) ] / Z,  with probability 1 − δ,

which concludes the proof.

E PROOF OF THE RESULT IN SECTION 5

The proof of Proposition 4 relies on two important results, which we state below.

Lemma A.3 (Ben-Tal et al. (2013)). Let c be any closed convex function with domain [0, +∞), whose conjugate is given by c*(s) = sup_{t≥0}{ts − c(t)}. Then for any distribution Q* and any function g : R^d → R, it holds that:

sup_{Q∈Q(Q*;δ)} ∫ g(u) dQ(u) = inf_{λ≥0, η} { λ ∫ c*( (g(u) − η)/λ ) dQ*(u) + δλ + η }.   (A.5)

The next lemma is adapted from the concentration of random Fourier features in Rahimi & Recht (2007). Recall that ϕ_m(h(x), Q) := (1/√m)[ cos(u₁⊤h(x)), sin(u₁⊤h(x)), …, cos(u_m⊤h(x)), sin(u_m⊤h(x)) ] comes from the Monte Carlo approximation of K(h(x), h(x′)).

Lemma A.4. Let A ⊂ R^d have diameter d_A and suppose h(x) ∈ A for all x ∈ X. It holds that:

Pr( sup_{h(x), h(x′)} | K(h(x), h(x′)) − ⟨ϕ_m(h(x), Q), ϕ_m(h(x′), Q)⟩ | ≥ ϵ ) ≤ 2⁸ ( σ_Q d_A/ϵ )² exp( −mϵ²/(4(d+2)) ),   (A.6)

where Q is given by the inverse Fourier transform of K, and σ_Q is the second moment of Q.

Recall that A_n(Q) := 1/(n(n−1)) Σ_{i≠j} 1[yᵢ = yⱼ] ⟨ϕ_m(h(xᵢ), Q), ϕ_m(h(xⱼ), Q)⟩. For notational convenience, in what follows we let hᵢ := h(xᵢ), and further define ϕ̃(h, U) := [cos(U⊤h), sin(U⊤h)] as the actual random Fourier feature underlying ϕ_m(h, Q), where U ∼ Q. Also, we let K(Y, Y′) := 1[Y = Y′] be the labelling kernel of the downstream task.

Proof. Following Lemma A.3, we work with a scaled version of the f-divergence under c(t) = (1/k)(t^k − 1), because its dual function has a cleaner form. It is easy to check that c*(s) = (1/k′)[s]₊^{k′} + 1/k, with 1/k′ + 1/k = 1. First note that the sampling error of the alignment E[ K(Yᵢ, Yⱼ) K_Q(Hᵢ, Hⱼ) ], i.e., the error from replacing the expectation by the sample average, is given by:

∆_n(U) := 1/(n(n−1)) Σ_{i≠j} K(yᵢ, yⱼ) ϕ̃(hᵢ, U)⊤ϕ̃(hⱼ, U) − E[ K(Yᵢ, Yⱼ) ϕ̃(Hᵢ, U)⊤ϕ̃(Hⱼ, U) ].

We show that ∆_n(U) is sub-Gaussian. Let {h′ᵢ, y′ᵢ}ᵢ₌₁ⁿ be an i.i.d. copy of the observations, except for one element, such that (hⱼ, yⱼ) ≠ (h′ⱼ, y′ⱼ). Without loss of generality, we assume the last element is different: (hₙ, yₙ) ≠ (h′ₙ, y′ₙ). Let ∆′_n(U) be computed by replacing {hᵢ, yᵢ}ᵢ₌₁ⁿ with {h′ᵢ, y′ᵢ}ᵢ₌₁ⁿ. Their difference can be bounded via:

|∆_n(U) − ∆′_n(U)| = 1/(n(n−1)) | Σ_{i≠j} K(yᵢ, yⱼ)ϕ̃(hᵢ, U)⊤ϕ̃(hⱼ, U) − K(y′ᵢ, y′ⱼ)ϕ̃(h′ᵢ, U)⊤ϕ̃(h′ⱼ, U) |
≤ 1/(n(n−1)) ( Σ_{i<n} | K(yᵢ, yₙ)ϕ̃(hᵢ, U)⊤ϕ̃(hₙ, U) − K(yᵢ, y′ₙ)ϕ̃(hᵢ, U)⊤ϕ̃(h′ₙ, U) | + Σ_{j<n} | K(yₙ, yⱼ)ϕ̃(hₙ, U)⊤ϕ̃(hⱼ, U) − K(y′ₙ, yⱼ)ϕ̃(h′ₙ, U)⊤ϕ̃(hⱼ, U) | ) ≤ 4/n,

where the last inequality follows from the fact that the random Fourier features ϕ̃ and the labelling kernel K(y, y′) are both bounded by 1. This bounded-difference result tells us that ∆_n(U) is a (4/n)-sub-Gaussian random variable. To bound ∆_n(U), we use:

sup_{Q∈Q(Q*;δ)} | ∫ ∆_n(U) dQ | ≤ sup_{Q∈Q(Q*;δ)} ∫ |∆_n(U)| dQ
≤ inf_{λ≥0} { (λ^{1−k′}/k′) E_{Q*}[ |∆_n(U)|^{k′} ] + λ(δ + 1)/k }   (using Lemma A.3)
= (δ + 1)^{1/k} E_{Q*}[ |∆_n(U)|^{k′} ]^{1/k′}   (solving for the optimal λ* above)
= √(δ + 1) · E_{Q*}[ |∆_n(U)|² ]^{1/2}   (taking k = k′ = 2).   (A.7)

It means that, in order to bound sup_{Q∈Q(Q*;δ)} |∫ ∆_n(U) dQ|, we only need to bound |∆_n(U)|².
Using classical results for sub-Gaussian random variables (Boucheron et al., 2013), it holds for λ ≤ n/8 that:

E[ exp( λ∆_n(U)² ) ] ≤ exp( −(1/2) log(1 − 8λ/n) ).

We can take its integral and further upper bound the result as:

P( ∫ ∆_n(U)² dQ ≥ ϵ²/(δ + 1) ) ≤ E[ exp( λ ∫ ∆_n(U)² dQ ) ] · exp( −λϵ²/(δ + 1) )   (Chernoff bound)
≤ exp( −(1/2) log(1 − 8λ/n) − λϵ²/(δ + 1) )   (applying Jensen's inequality).

Hence, it holds that:

Pr( sup_{Q∈Q(Q*;δ)} | ∫ ∆_n(U) dQ | ≥ ϵ ) ≤ exp( −nϵ²/(16(1 + δ)) ).

Combining this result with the approximation error of random Fourier features in Lemma A.4, we obtain the desired result.

F SUPPLEMENTARY MATERIAL FOR THE EXPERIMENTS

We provide the descriptions, details, and additional results of our experiments in this part of the appendix.

F.1 REPLICATING THE INSTABILITY ISSUE WITH THE IMDB DATASET

The IMDB dataset is a binary sentiment analysis dataset consisting of 50,000 reviews from the Internet Movie Database (IMDb), labeled as positive or negative⁴. We consider this dataset as an additional proof of concept in particular because it appears in the official TensorFlow tutorial⁵. We directly adopt the implementation from the tutorial, including the text preprocessing pipeline and the model architecture. In particular, the raw input texts are passed through a text vectorization layer, an embedding layer, a bidirectional RNN layer, and finally two dense layers that produce the final score for binary classification. We extract the output of the last hidden layer as the hidden representation. In our experiments, we set the number of hidden dimensions to 32. The results are provided in Figure A.1, where we observe patterns highly similar to those on the ML-1m data. In particular, the pretrained embeddings have high variance in their exact values even when their pretraining objectives converge to similar loss and accuracy, and the variance gets larger as pretraining progresses. Two minor differences from the ML-1m results are that the pretraining process is less stable for IMDB (Figure A.1b), and that the variance distribution here is unimodal instead of the bimodal distribution we observed in Figure 1c.

4 https://www.imdb.com/interfaces/
5 https://www.tensorflow.org/text/tutorials/text_classification_rnn
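For completeness, the replication setup can be written down in a few lines of TensorFlow following the tutorial's structure; this is a hedged sketch rather than the exact tutorial code, and names such as train_texts and train_labels are placeholders for the IMDB data.

```python
import tensorflow as tf

train_texts = tf.constant(["a wonderful film", "a terrible waste of time"])  # toy stand-in
train_labels = tf.constant([1, 0])

encoder = tf.keras.layers.TextVectorization(max_tokens=10000)
encoder.adapt(train_texts)

model = tf.keras.Sequential([
    encoder,                                                    # text vectorization
    tf.keras.layers.Embedding(len(encoder.get_vocabulary()),
                              64, mask_zero=True),              # embedding layer
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),    # bidirectional RNN
    tf.keras.layers.Dense(32, activation="relu"),               # last hidden layer: 32-d h(x)
    tf.keras.layers.Dense(1),                                   # sentiment logit
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_texts, train_labels, epochs=1)

# drop the classification head to read out the 32-dimensional representation
feature_extractor = tf.keras.Sequential(model.layers[:-1])
h = feature_extractor.predict(train_texts)
```

Repeating the fit with different random seeds and comparing the extracted h across runs reproduces the instability pattern discussed above.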
F.2 DETAILS OF THE BENCHMARK EXPERIMENTS

The main benchmark experiments in our paper are conducted on the Movielens-1m⁶ dataset, which is a well-established public dataset for movie and user contextual analysis and for examining recommendation. The ML-1m dataset consists of 1 million movie ratings from 6,000 users on 4,000 movies, on a one-to-five rating scale. According to Harper & Konstan (2015), the data is collected in an initial and a follow-up stage, where the initial stage mainly involves popularity-based exposure (a very small proportion involves random exposure), while in the follow-up stage, rating feedback is collected under some deterministic recommender systems. By convention, we convert the dataset to implicit feedback, which amounts to treating all rating events as clicks. For contextual information, each movie is provided with its title and genre, in the form of English words or sentences. There are 18 genres in total.

6 https://grouplens.org/datasets/movielens/1m/

Pretraining movie embeddings from user behavior data. We use Item2vec (Barkan & Koenigstein, 2016) to train movie embeddings from users' consecutive viewing data. Item2vec uses the same objective function as Word2vec (Mikolov et al., 2013), where the words become movies and the corpus becomes each user's viewing sequence. Movies belonging to a consecutive viewing window of size #ws are treated as positive pairs, and for each positive pair we randomly sample #ns negative movies. Together with the embedding dimension d and the ℓ₂-regularization parameter (weight decay) λ, the training schema is described by the quadruplet (#ws, #ns, d, λ). Since our goal is not to find the best pretraining schema, we fix #ws = 3 and #ns = 3, and focus on studying how our results change under different d.

Pretraining movie embeddings from movie contextual data. Since movie titles and other contextual information can be relatively short, large NLP models may not be appropriate. Therefore, we use the Doc2vec model (Dai et al., 2015) to pretrain the movie embeddings. Since Doc2vec is built on top of Word2vec, the training schema can also be described by the quadruplet (#ws, #ns, d, λ); we again set #ws = 3 and #ns = 3.

Using pretrained movie embeddings for downstream genre prediction. Given the pretrained movie embeddings ĥ(x), we employ logistic regression to predict the score of a movie belonging to a particular genre, i.e., p(Yᵢ = k) ∝ exp(θₖ⊤ĥ(x)). Due to its simplicity, we use the logistic regression subroutine from the scikit-learn package.

Using pretrained movie embeddings for downstream sequential recommendation. We employ a two-tower model structure (Figure A.2) for the downstream sequential recommendation, which is very common in the recommendation community. In particular, we use an RNN to aggregate the past interaction sequence, so the whole model is very similar to GRU4Rec (Jannach & Ludewig, 2017). We use the sigmoid function as the activation for the dense layers. Model training is done in a seq-to-seq fashion, where for each positive target we randomly sample 3 negative targets. We fix the hidden dimension of both the RNN and the dense layers to 16.

Model training. Besides Doc2vec and the logistic regression, all of our models are optimized using the Adam optimizer with early stopping, which stops training if the improvement in the loss is less than 1e-4 for three consecutive epochs. For all experiments, we set the initial learning rate to 0.001 and the weight decay to 1e-4. Our main implementation is in TensorFlow, and all computations are conducted on a 16-core Linux cluster with 128 GB of memory and two NVIDIA Tesla V100 GPUs, each with 16 GB of memory. We use the Doc2vec subroutine from the Gensim package⁷ to pretrain the movie embeddings for ML-1m task 2.

7 https://radimrehurek.com/gensim/models/doc2vec.html

Train/test split and metrics. Since the goal of our experiments is not to find the best modelling and training configuration, we do not use a validation set to tune the hyperparameters. Instead, we provide sensitivity analyses for the parameters of interest in Appendix F.3. For ML-1m task 1, we randomly split the movies 80%-20% to construct the training and testing sets for genre classification, and use accuracy and the macro F1 score as evaluation metrics. For ML-1m task 2, we follow the convention of using the last user-movie interaction for testing and all previous interactions for training. For evaluation, we use Recall@5, i.e., whether the movie the user truly viewed is among the top-5 recommendations, and NDCG@5, which further discounts the position of the viewed movie within the top-5 recommendations.
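Since Item2vec shares Word2vec's objective, pretraining under the (#ws, #ns, d, λ) schema can be illustrated with Gensim's Word2Vec; this is a sketch under that assumption, with toy movie-id sequences, and is not the paper's exact training script.

```python
from gensim.models import Word2Vec

# each "sentence" is one user's chronologically ordered viewing sequence of movie ids
sequences = [["m318", "m593", "m260"],
             ["m1", "m260", "m1196", "m1210"]]          # toy data

model = Word2Vec(
    sentences=sequences,
    vector_size=64,   # embedding dimension d
    window=3,         # viewing window, #ws = 3
    negative=3,       # negative samples per positive pair, #ns = 3
    sg=1,             # skip-gram variant, the usual choice for Item2vec
    min_count=1,
)
h = {m: model.wv[m] for m in model.wv.index_to_key}     # pretrained movie embeddings h(x)
```

The resulting embeddings h can then be fed to the scikit-learn logistic regression (task 1) or to the two-tower recommender (task 2) described above.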
F.3 SUPPLEMENTARY RESULTS

We provide a sensitivity analysis for the featurization method. We focus on two variables: the dimension d and the variance of Q (denoted by σ_Q²). Recall that we take Q to be a Gaussian distribution. We vary d in {16, 32, 64} and σ_Q² in {0.2, 0.4, 0.6, 0.8}. In particular, we first compare side-by-side h_d, ϕ_d(h), and ϕ_{2d}(h), while fixing Q as the standard Gaussian distribution. We see from Figure A.3 that ϕ_d(h) consistently outperforms h_d on both ML-1m task 1 and ML-1m task 2. ϕ_{2d}(h) also significantly improves upon the performance of ϕ_d(h), which suggests the benefit of allowing extra model complexity in the particular tasks we consider. Further, the performance of both ϕ_d(h) and ϕ_{2d}(h) has considerably smaller variance than that of h(x). We then examine the sensitivity of the downstream performance w.r.t. Q, the sampling distribution for constructing ϕ_d(h). As stated before, we let Q be a zero-mean Gaussian distribution and vary its variance. From Figure A.4, we observe that for all the dimensions we consider, the downstream performance under ϕ_d(h) is very stable under different σ_Q. This echoes Proposition 4, which shows that our approach enjoys robustness to the selection of Q. In real-world production, we have been using the standard Gaussian distribution and have observed no issues.

F.4 ONLINE DEPLOYMENT

To avoid potential conflicts of interest, we provide an overview of our production experiments. We aim to provide enough detail for interested practitioners to draw inspiration, both for developing their own solutions and for replicating ours.

Some background introduction. In e-commerce applications, the representation of items serves as a central component for almost all machine learning algorithms (Wang et al., 2018; Xu et al., 2021). In the past few years, we have built a dedicated item representation learning pipeline that uses multiple sources of data to optimize item embeddings. Since there are billions of items on our platform Ecom, it took us considerable effort to optimize the data pipeline and training routines so that the model refresh can be done on a daily basis. We point out that the daily refresh is necessary for item representation learning because the catalog of items, which is a major source of pretraining data, also gets minor updates on a daily basis. For example, new items can be added, and the features of items (e.g., title, price, description) can be modified by the vendors. The other major source of pretraining data is the historical customer behavior data, which is critical for revealing the relationships (e.g., similarity, complementariness, compatibility, substitutability) among items. These relationships are relatively stable in the customer population, so the more data we use, the more likely we are to discover useful signals. Our model for pretraining item embeddings has feed-forward components, recurrent units, and a contrastive learning component; the reason for using these different components is to effectively handle data with different structures. The pretrained item embeddings are expected to be stable: as mentioned above, the relationships among items are quite stable, and the catalog data shows very minor differences within a limited span of time. Therefore, downstream machine learning models may follow a weekly or bi-weekly refresh schedule while expecting very stable performance.
The four major applications that depend on our pretrained item embeddings, first introduced in Section 6, are item-page recommendation, search ranking, email recommendation, and home-page marketing. Each of the four tasks uses both item embeddings and task-specific features to optimize its objective. Most of them use model structures similar to the Wide and Deep network (Covington et al., 2016) to effectively combine information from different sources. Item-page recommendation aims to provide items related to the anchor item on the particular page the customer is viewing; item embeddings are used in both the recall and reranking stages. Search ranking is a huge system that combines multiple components; the item embeddings are used in a particular recall stage. Email recommendation is a simpler task that aims to recommend items related to what customers recently interacted with, or are expected to interact with again; item embeddings are used along with other features to build a model that optimizes CTR. Marketing is also a huge system in Ecom, and the particular component that uses item embeddings builds the predicted click-through-rate model to support bidding and placement ranking.

Brief summary of the production environment and implementation. Our production environment is relatively standard in the e-commerce industry, with Hive/Spark supporting the offline data streaming and TensorFlow Serving supporting the online inference of deep learning models. Featurizing h(·) via ϕ(h(·), Q) can be easily implemented in production. Some practical advantages are:
• the algorithm is very simple and requires no training;
• it fits seamlessly into the current big-data infrastructure and frontend service;
• it requires no change to the downstream model;
• the overhead in both training and inference time is small;
• communication can be easily handled by simply recording the dimension and random seed under which a particular U is generated.
On the backend, featurizing the pretrained representation is engineered into a subroutine (on top of the original automated representation learning pipeline) callable by downstream applications. For instance, it can be a simple PySpark function if the endpoint of the automated representation learning pipeline is a feature store in HDFS. The dimension m and the random seed for generating the random directions U = [u₁, …, u_m] are the two main inputs. Configuring and logging the random seed used by each experiment is important because U might be reused for
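To make the backend subroutine concrete, here is a hypothetical PySpark sketch of the featurization as a pandas UDF; the names and the schema are our own illustration of the idea, with the dimension m and the logged random seed as the two main inputs.

```python
import numpy as np
import pandas as pd
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import ArrayType, FloatType

def make_featurizer(d, m, seed):
    """Build a pandas UDF mapping an embedding column to phi_m(h, Q)."""
    U = np.random.default_rng(seed).normal(size=(d, m))   # reproducible via the logged seed

    @pandas_udf(ArrayType(FloatType()))
    def featurize(emb: pd.Series) -> pd.Series:
        H = np.stack(emb.to_numpy())                      # (batch, d) pretrained embeddings
        proj = H @ U
        phi = np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(m)
        return pd.Series([row.astype(np.float32).tolist() for row in phi])

    return featurize

# hypothetical usage on an item table with an array<float> column "embedding":
# items = items.withColumn("phi", make_featurizer(d=128, m=64, seed=2023)("embedding"))
```

Because U is a pure function of (d, m, seed), downstream teams can regenerate exactly the same featurization from the logged configuration.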
1. What is the main contribution of the paper regarding improving the stability of downstream task models? 2. What are the strengths of the proposed method, particularly in its effectiveness and intuition? 3. What are the weaknesses of the paper, especially regarding the suitability of the chosen h(x) method and the comprehensiveness of the analysis? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper aims to solve the suboptimality problems caused by pretraining models in industrial systems. The authors analyze the problem from four aspects (the gap in uniform convergence for analyzing pretrained representations, their stochastic nature under gradient descent optimization, what model convergence means for them, and how they might interact with downstream tasks). They then propose a method to refine the features of pretraining models in various ways to improve the stability of the downstream task model. Experiments on six tasks demonstrate the effectiveness of the proposed method.
Summary Of The Paper Strengths And Weaknesses
Pros:
- The problem addressed has high practical value: it tries to make pre-trained models more accessible to a range of industrial systems. The "Featurizing Pretrained Representations" idea improves the performance of downstream tasks compared to other approaches.
- This paper supports its claims with detailed derivations. By analyzing the features of the pre-training models, the authors prove that the features from the pre-training models are related to the stability of the performance of the downstream tasks. In detail, the variance of the pre-training features h(x) plays an essential role in the learning process of downstream tasks, and the variance increases with the number of random variables in the pre-training model.
- The proposed method (Featurizing Pretrained Representations) is intuitive and effective: it uses a more stable form of h(x), e.g., the representation of weights in the NW estimator in Section 4.3, in downstream tasks. Moreover, experiments on six downstream tasks indicate that the method has superior performance and stability.
Cons:
- There may be a more suitable h(x). The authors suggest the optimal h(x) can be obtained through the Fourier transform. But it is not easy to obtain the representation of a sample by the inverse Fourier transform, so they replace this approach by increasing the number of samples and the dimension of the features. Nevertheless, this method may not be the most suitable one in industrial systems due to limited samples and computing resources. Maybe the authors can seek a better method from the perspective of mathematics or the Fourier transform.
- The analysis is not comprehensive enough. Although the paper proves that the features of the pre-training model do affect the stability of downstream tasks, it ignores the features of the downstream tasks themselves, which are even more essential for those tasks. Have the authors tried to analyze the downstream tasks, and what does the result look like?
- The essence of h(x) is still not clear. The paper discusses the impact of the variance of h(x) on downstream tasks, and the authors then improve performance by optimizing h(x). However, they do not describe why the pretraining models generate such features h(x). Maybe that is more important.
Clarity, Quality, Novelty And Reproducibility
Have met the standards.
ICLR
Title One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks

Abstract Unlearnable examples (ULEs) aim to protect data from unauthorized usage for training DNNs. Existing work adds ℓ∞-bounded perturbations to the original sample so that the trained model generalizes poorly. Such perturbations, however, are easy to eliminate by adversarial training and data augmentations. In this paper, we resolve this problem from a novel perspective by perturbing only one pixel in each image. Interestingly, such a small modification could effectively degrade model accuracy to almost that of an untrained counterpart. Moreover, our produced One-Pixel Shortcut (OPS) could not be erased by adversarial training and strong augmentations. To generate OPS, we perturb in-class images at the same position to the same target value that could mostly and stably deviate from all the original images. Since such generation is based only on images, OPS needs significantly less computational cost than the previous methods using DNN generators. Based on OPS, we introduce an unlearnable dataset called CIFAR-10-S, which is indistinguishable from CIFAR-10 by humans but induces the trained model to reach extremely low accuracy. Even under adversarial training, a ResNet-18 trained on CIFAR-10-S has only 10.61% accuracy, compared to 83.02% by the existing error-minimizing method.

∗equal contribution; correspondence to Xiaolin Huang ([email protected]).
†code available at https://github.com/cychomatica/One-Pixel-Shotcut.

1 INTRODUCTION

Deep neural networks (DNNs) have successfully promoted the computer vision field in the past decade. As DNNs are scaling up unprecedentedly (Brock et al., 2018; Huang et al., 2019; Riquelme et al., 2021; Zhang et al., 2022), data becomes increasingly vital. For example, ImageNet (Russakovsky et al., 2015) fostered the development of AlexNet (Krizhevsky et al., 2017). Besides, people or organizations also collect online data to train DNNs, e.g., IG-3.5B-17k (Mahajan et al., 2018) and JFT-300M (Sun et al., 2017). This practice, however, raises privacy concerns among Internet users. Motivated by this concern, researchers have made substantial efforts to protect personal data from abuse in model learning without affecting user experience (Feng et al., 2019; Huang et al., 2020a; Fowl et al., 2021; Yuan & Wu, 2021; Yu et al., 2021). Among the proposed methods, unlearnable examples (ULEs) (Huang et al., 2020a) take a great step by injecting original images with protective but imperceptible perturbations from bi-level error minimization (EM). DNNs trained on ULEs generalize very poorly on normal images. However, such perturbations can be completely canceled out by adversarial training, which defeats the protection and limits the practicality of ULEs.

We view the data protection problem from the perspective of shortcut learning (Geirhos et al., 2020), which shows that DNN training is "lazy" (Chizat et al., 2019; Caron & Chrétien, 2020), i.e., it converges to the solution with the minimum norm when optimized by gradient descent (Wilson et al., 2017; Shah et al., 2018; Zhang et al., 2021). In this case, a DNN will rely on every accessible feature to minimize the training loss, no matter whether it is semantic or not (Ilyas et al., 2019; Geirhos et al., 2018; Baker et al., 2018). Thus, DNNs tend to ignore semantic features if there are other easy-to-learn shortcuts that are sufficient for distinguishing examples from different classes. Such shortcuts can arise naturally or be crafted manually.
In data collection, for example, cows may mostly appear together with grasslands, misleading a DNN to predict cows from large areas of green, because the color is easier to learn than the semantic features and is also sufficient for correctly classifying images of cows during training. Such natural shortcuts have been illustrated in detail in datasets such as ImageNet-A (Hendrycks et al., 2021) and ObjectNet (Barbu et al., 2019). Besides, shortcuts can also be manually crafted. For instance, EM-based ULEs (Huang et al., 2020a) mislead DNNs into learning features belonging to the perturbations, which falls in the category of shortcut learning (Yu et al., 2021).

In this paper, we are surprised to find that shortcuts can be so small in area that they can even be instantiated as a single pixel. By perturbing one pixel of each training sample, our method, namely One-Pixel Shortcut (OPS), degrades the model accuracy on clean data to almost that of an untrained counterpart. Moreover, the small, unbounded noise we generate cannot be erased by adversarial training (Madry et al., 2018), which is otherwise effective in mitigating existing ULEs (Huang et al., 2020a). To make the specific pixel stand out in the view of DNNs, OPS perturbs in-class images at the same position to the same target value, chosen as the boundary value that deviates most, and most stably, from all the original images. Specifically, the difference between the perturbed pixel and the original one should be large with low variance across all in-class images. Since such generation is based only on images, OPS needs significantly less computational cost than the previous methods based on DNN generators. We evaluate OPS and its counterparts over 6 architectures, 6 model sizes, and 8 training strategies on CIFAR-10 (Krizhevsky et al., 2009) and an ImageNet (Russakovsky et al., 2015) subset, and find that OPS is always superior to EM ULEs in degrading the model's test accuracy. In this regard, we introduce a new unlearnable dataset named CIFAR-10-S, which combines EM and OPS to craft stronger imperceptible ULEs. Even under adversarial training, a ResNet-18 (He et al., 2016) trained on CIFAR-10-S has 10.61% test accuracy, compared to 83.02% by the existing error-minimizing method. Different from existing datasets like ImageNet-A (Hendrycks et al., 2021) or ObjectNet (Barbu et al., 2019), which place objects into special environments to remove shortcuts, CIFAR-10-S injects shortcuts to evaluate a model's resistance to them. Altogether, our contributions are summarized as follows:
• We analyze unlearnable examples from the perspective of shortcut learning, and demonstrate that a strong shortcut for DNNs can be as small as a single pixel.
• We propose a novel data protection method named One-Pixel Shortcut (OPS), which perturbs in-class images at the pixel that deviates most, and most stably, from the original images. OPS is a model-free method that is significantly faster than previous work.
• We extensively evaluate OPS on various models and training strategies, and find it outperforms baselines by a large margin in its ability to degrade DNN training. Besides, we introduce CIFAR-10-S to assess a model's ability to learn essential semantic features.
2 RELATED WORK

2.1 ADVERSARIAL ATTACK AND DATA POISONING

Adversarial examples are produced by small perturbations that are indistinguishable from the original examples by humans but can make DNNs give wrong predictions (Szegedy et al., 2014). Many different adversarial attacks have been proposed in recent years. Generally, most adversarial attacks aim to perturb the whole image with a constrained intensity (usually bounded in ℓp norm), e.g., PGD (Madry et al., 2018), C&W (Carlini & Wagner, 2017), and AutoAttack (Croce & Hein, 2020). Besides, there are also methods that only perturb a small part of an image (Croce & Hein, 2019; Dong et al., 2020) or even a single pixel (Su et al., 2019). The existence of adversarial examples and their transferability (Chen et al., 2022) indicates that DNNs do not sufficiently learn critical semantic information as we would wish, but more or less depend on some non-robust features.

Data poisoning aims to modify the training data in order to affect the performance of models. Usually, the poisoned examples are notably modified and make up only a part of the whole dataset (Yang et al., 2017; Koh & Liang, 2017). But those methods cannot degrade the performance of models to a low enough level, and the poisoned examples are easily distinguishable. Recently, researchers have paid great attention to imperceptible poisoning, which modifies examples slightly and does not damage their semantic information (Huang et al., 2020a; Fowl et al., 2021; Huang et al., 2020b; Doan et al., 2021; Geiping et al., 2020; Chen et al., 2023). Fowl et al. (2021) use adversarial perturbations that contain the information of wrong labels to poison the training data, which is equivalent to random label fitting. On the contrary, Huang et al. (2020a) attack the training examples inversely, i.e., using error-minimizing perturbations, to craft unlearnable examples.

2.2 SHORTCUT LEARNING

Recent research on deep neural networks indicates that, as long as correct classification can be achieved, DNNs tend to learn easier features instead of the semantic features that constitute the object itself. To be more specific, the same object in different environments may get different predictions, which means the DNN overly relies on features that do not belong to the object (Beery et al., 2018). Geirhos et al. (2020) investigate this phenomenon in different fields of deep learning and explain why shortcuts exist and how to understand them. Lapuschkin et al. (2019) also observe this problem and attribute it to the unsuitable performance evaluation metrics that we generally use. The existence of natural adversarial examples (Hendrycks et al., 2021) also indicates that DNNs do not sufficiently learn the real semantic information during training; instead, they may learn to use the background or texture of an image to predict. Unlearnable examples (ULEs) (Huang et al., 2020a), which are manually crafted by error-minimizing noises and lead models trained on them to terrible generalization on test data, are believed to be a kind of shortcut that provides textures that are easy to learn (Yu et al., 2021). Generally, if we have enough data, the interconnection of different features is enhanced, so that shortcuts may no longer be sufficient for the classification task; i.e., the model will have to use more complicated composed features in order to minimize the risk.
However, when the collected data exhibits some specific bias (e.g., similar backgrounds), shortcut learning will not be mitigated effectively.

2.3 DATA AUGMENTATION

Data augmentation aims to enhance the generalization ability of models. It is usually implemented by applying transformations to the training data, e.g., random stretching, random cropping, or color changing. Nowadays, various data augmentation policies (Zhang et al., 2018; DeVries & Taylor, 2017; Cubuk et al., 2019; 2020) are proven to effectively boost the generalization ability of DNNs. Sometimes adversarial training (Madry et al., 2018; Li et al., 2022) is also regarded as a kind of data augmentation (Shorten & Khoshgoftaar, 2019; Xie et al., 2020). Dao et al. (2019) view data augmentation as a kind of data-dependent regularization term. Since data augmentations are believed to improve the generalization ability of DNNs, we use different augmentations to evaluate the effectiveness of different data protection methods.

3 ONE-PIXEL SHORTCUT

3.1 PRELIMINARIES

Unlearnable examples (ULEs) are a data protection method created by error-minimizing (EM) noises (Huang et al., 2020a). Models trained on examples perturbed by those noises get almost zero training error, but perform like random guessing on clean test data. Due to the imperceptibility of the noise, this method can prevent the abuse of data by unauthorized users who attempt to train deep models for improper purposes, without affecting normal usage. EM noise is generated by a bi-level error minimization of the form min_θ E_{(x,y)} min_{∥δ∥∞ ≤ ε} L(f_θ(x + δ), y), which can be solved by optimizing the inner minimization and the outer minimization alternately. It has been proved that the perturbations belonging to the same class are well clustered and linearly separable (Yu et al., 2021). Thus, EM provides easy-to-learn features that are closely interconnected with the labels.

We design a shuffling experiment to demonstrate that DNNs learn the shortcuts instead of the images when trained on data that contain shortcuts. Denote D̂ = {(xᵢ + δᵢ, yᵢ)} as the perturbed training set, where δᵢ is the perturbation associated with the i-th example. After shuffling, the training set becomes D̂′ = {(xⱼ + δᵢ, yᵢ)}; a minimal sketch of this evaluation is given at the end of this subsection. We are curious how DNNs trained on different data (with or without shortcuts) would predict the shuffled perturbed training set. As shown in Table 1, the DNN trained on clean data performs like a random guess on the shuffled perturbed data while keeping high accuracy on the unshuffled data, indicating that it successfully learns representations from the images {xᵢ} rather than giving predictions according to the perturbations {δᵢ} from EM or OPS. In contrast, the DNN trained on the EM training data tends to memorize a significant proportion of the perturbations, since it reaches 48.97% accuracy on the shuffled EM training data. Moreover, the OPS training data produces a DNN with 72.43% accuracy on the shuffled set, reflecting that OPS forces the DNN to learn the perturbations to an even greater extent. Our study illustrates the learning characteristics of DNNs trained with or without shortcuts, and also shows that OPS is a more effective shortcut than EM.
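A minimal sketch of the shuffling evaluation is as follows; predict stands for any trained classifier's inference function, and the arrays are placeholders for the perturbed training data.

```python
import numpy as np

def shuffled_accuracy(predict, X, delta, y, seed=0):
    """Accuracy on the shuffled perturbed set D' = {(x_j + delta_i, y_i)}.

    predict: maps a batch of images to predicted labels.
    X, delta, y: training images, per-example perturbations, and labels."""
    perm = np.random.default_rng(seed).permutation(len(X))  # re-pair delta_i with image x_j
    X_shuffled = np.clip(X[perm] + delta, 0.0, 1.0)         # labels stay tied to delta
    return float((predict(X_shuffled) == y).mean())         # high accuracy => delta was learned
```

Under this protocol, a model that learned the images should score near chance, while a model that memorized the perturbations keeps a large fraction of its training accuracy.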
3.2 FOOLING DNN TRAINING BY A PIXEL

Following the discussion above, for the purpose of data protection, we need to craft shortcuts that are easy enough to learn and thus fool network training. According to previous studies, shortcuts can come from background environments that naturally exist inside our datasets (Beery et al., 2018), or be manually crafted like EM (Huang et al., 2020a). Unlike those shortcuts, which might occupy the whole image or a notable part of it, we investigate how a single pixel, the minimum unit of a digital image, can affect the learning process of deep neural networks. Thus, we propose One-Pixel Shortcut (OPS), which modifies only a single pixel of each image. Images belonging to the same category are perturbed at the same position, which means the perturbed pixel is interconnected with the category label. Although minuscule, it is sufficient to fool the training of deep learning models. We use a heuristic but effective method to generate perturbations for the images of each category: we search for the pixel position and value that result in the most significant change for the whole category. Denoting D as the clean dataset and Dk as the clean subset containing all the examples of class k, the problem can be formulated as:

argmax_{σₖ, ξₖ} E_{(x,y)∈Dₖ}[ Gₖ(x, σₖ, ξₖ) ]  s.t.  ∥σₖ∥₀ = 1,  Σ_{i,j} σₖ(i, j) = 1,   (1)

where σₖ ∈ R^{H×W} is the perturbed-position mask with σₖ(i, j) its element at the i-th row and j-th column, ξₖ ∈ R^C is the perturbed target color (C = 3 for RGB images), and Gₖ is the objective function. Since the optimization above is an NP-hard problem, we cannot solve it directly. Thus we constrain the feasible region to a limited discrete search space, where we search over the boundary values of each color channel, i.e., ξₖ ∈ {0, 1}³, at every point of an image. Specifically, for CIFAR-10 images, the discrete search space contains 32 × 32 × 2³ = 8192 elements. To ensure that the pixel is stably perturbed, we also want the variance of the deviation to be small. Accordingly, we design the objective function Gₖ for class k as:

Gₖ = E_{(x,y)∈Dₖ}[ Σ_{j=1}^{C} | ∥xⱼ · σₖ∥F − ξₖⱼ | ] / Var_{(x,y)∈Dₖ}( Σ_{j=1}^{C} | ∥xⱼ · σₖ∥F − ξₖⱼ | ),   (2)

where xⱼ ∈ R^{H×W} denotes the j-th channel of x, and ξₖⱼ ∈ R is the j-th channel of ξₖ. After solving for the position mask and the color, we get the perturbation δ for each example (x, y) as:

δ = [ ξ_{y1}σ_y − x₁ · σ_y, ξ_{y2}σ_y − x₂ · σ_y, …, ξ_{yC}σ_y − x_C · σ_y ]⊤.   (3)

Details can be found in Algorithm 1. The resulting One-Pixel Shortcut is illustrated in Figure 2.

Algorithm 1 Model-Free Searching for One-Pixel Shortcut
Input: Clean dataset D = D₁ ∪ · · · ∪ D_M
Output: One-Pixel Shortcut dataset D̂ = D̂₁ ∪ · · · ∪ D̂_M
1: for k = 1, 2, 3, …, M do
2:   solve Eq. (1) and Eq. (2) to get σₖ and ξₖ  # calculate the best perturbed point for class k
3:   for each x ∈ Dₖ do
4:     for i = 1, 2, 3 do
5:       x̂ᵢ = xᵢ · (I − σₖ) + ξₖᵢ · σₖ  # modify the optimal pixel for every image in class k
6:     end for
7:   end for
8:   D̂ₖ = {x̂}
9: end for
10: return D̂ = D̂₁ ∪ · · · ∪ D̂_M
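Algorithm 1 is compact enough to sketch directly; the following NumPy version (with our own function names, and a small constant added to the variance for numerical stability, which is not part of Eq. (2)) enumerates the H × W positions and the 2^C boundary colors.

```python
import itertools
import numpy as np

def ops_search(X):
    """Exhaustive search of Eqs. (1)-(2) for one class.

    X: float array of shape (n, C, H, W) in [0, 1], all images of class k.
    Returns the pixel position (i, j) and boundary color xi maximizing G_k."""
    n, C, H, W = X.shape
    best_score, best = -np.inf, None
    for i, j in itertools.product(range(H), range(W)):
        pix = X[:, :, i, j]                                 # (n, C) original pixel values
        for xi in itertools.product((0.0, 1.0), repeat=C):  # 2^C boundary colors
            dev = np.abs(pix - np.asarray(xi)).sum(axis=1)  # per-image channel-wise deviation
            score = dev.mean() / (dev.var() + 1e-12)        # mean over variance, Eq. (2)
            if score > best_score:
                best_score, best = score, ((i, j), xi)
    return best

def ops_perturb(X, pos, xi):
    """Apply Eq. (3): overwrite the chosen pixel of every image with the target color."""
    X = X.copy()
    i, j = pos
    X[:, :, i, j] = np.asarray(xi)[None, :]
    return X

# per class k: (pos, xi) = ops_search(X_k); X_k_unlearnable = ops_perturb(X_k, pos, xi)
```

For 32 × 32 RGB images this scans the 32 × 32 × 2³ = 8192 candidates per class within seconds, consistent with the model-free generation cost reported in Section 4.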
3.3 PROPERTIES OF ONE-PIXEL SHORTCUT

Since it is commonly believed that convolutional networks tend to capture textures (Hermann et al., 2020) or shapes (Geirhos et al., 2018; Zhang & Zhu, 2019), it is surprising that convolutional networks can be affected so severely by just one pixel. As illustrated by Figure 1, the network indeed tends to learn the less complicated non-semantic features brought by the One-Pixel Shortcut. Besides convolutional networks, we observe that compact vision transformers (Hassani et al., 2021) are also attracted by the One-Pixel Shortcut and ignore other semantic features. This indicates that shortcuts are not particular to some specific architecture.

We also visualize the loss landscape of ResNet-18 trained on clean CIFAR-10 data and on One-Pixel Shortcut data. As illustrated in Figure 3, when trained on OPS data, the loss surface is much flatter, which means that the minima found by the network are more difficult to escape. Even if we use a ResNet-18 pretrained on clean CIFAR-10 and then fine-tune it on the OPS data, the network still falls into these poorly generalizing minima. In addition, we record the trajectories of the training accuracy and the Frobenius norm of the parameter difference, ∥θ − θ₀∥F, which reflects the magnitude of the network's parameter change; here θ and θ₀ respectively denote the parameters after training and at initialization. We draw the relation curve between training accuracy and ∥θ − θ₀∥F in Figure 4. When training accuracy rises to 90% for the first time, the model trained on OPS data has a much smaller ∥θ − θ₀∥F than that trained on clean data, which indicates that the OPS-trained model gets stuck in an optimum closer to the initialization. It is widely known that overparameterized DNNs optimized by gradient descent converge to a solution close to the initialization, i.e., with the minimum norm of parameter difference (Wilson et al., 2017; Shah et al., 2018; Li & Liang, 2018; Zhang et al., 2021). Since OPS only perturbs a single pixel, the original representations of the images are not damaged, and a model trained on clean data can still keep great performance on OPS data; this indicates that the well-generalized solution far from the initialization still exists but is not reached, due to the preference for a close solution. The close solution is believed to have better generalization ability. Nevertheless, this argument is true only under the assumption that the training data and test data come from the exact same distribution and have the exact same features. The existence of OPS forces the model to converge to an optimum where it generalizes well on OPS features, which are not contained in the test data. From our experimental results in Table 2, OPS degrades test accuracy to a lower level than EM. This is because EM requires a generator model, so its perturbations more or less contain features depending on that generator, which constrains their effectiveness on other models. On the other hand, OPS is a universal model-free method, and its shortcuts are crafted based on the inherent learning preference of DNNs.
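As a side note, the quantity ∥θ − θ₀∥F tracked in Figure 4 is easy to log during training; a minimal PyTorch sketch (our own helper, assuming a snapshot of the initial parameters) is:

```python
import torch

def param_distance(model, init_state):
    """Frobenius norm ||theta - theta_0||_F over all floating-point parameters/buffers.

    init_state: a snapshot taken at initialization, e.g.
    init_state = {k: v.detach().clone() for k, v in model.state_dict().items()}"""
    total = 0.0
    for name, p in model.state_dict().items():
        if p.dtype.is_floating_point:               # skip integer buffers (e.g. BN counters)
            total += (p - init_state[name]).float().pow(2).sum().item()
    return total ** 0.5
```

Plotting this value against training accuracy reproduces the comparison above: the OPS-trained model reaches high training accuracy while ∥θ − θ₀∥F stays much smaller than on clean data.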
4 EXPERIMENTS

4.1 SETTING

Our experiments are implemented on CIFAR-10 and an ImageNet subset, using 4 NVIDIA RTX 2080Ti GPUs. We investigate how the One-Pixel Shortcut affects the training of different models, covering both different architectures and different capacities. We evaluate our method on convolutional networks (He et al., 2016; Zagoruyko & Komodakis, 2016; Huang et al., 2017) and the recently proposed compact vision transformers (Hassani et al., 2021). For all convolutional networks, we use an SGD optimizer with learning rate 0.1, momentum 0.9, and weight decay 5e-4. For all compact vision transformers, we use an AdamW optimizer with β_1 = 0.9, β_2 = 0.999, learning rate 5e-4, and weight decay 3e-2. The batch size is 128 for all models except WideResNet-28-10, where it is 64. Each model is trained with a learning rate schedule, and its training accuracy is guaranteed to reach near 100%.

Table 2: Clean test accuracy (%) of models trained on clean, EM, and OPS CIFAR-10 data.

Model               Training Data
                    Clean    EM       OPS (Ours)
LeNet-5             70.27    26.98    22.19
CVT-7-4             87.46    27.60    18.21
CCT-7-3×1           88.98    27.06    17.95
DenseNet-121        94.10    23.72    11.45
ResNet-18           94.01    19.58    15.56
WideResNet-28-10    96.08    23.96    12.76

We additionally test our method on an ImageNet subset (the first 100 classes). We center-crop all images to 224 × 224 and train common DNNs, with results in Table 4. We adopt an initial learning rate of 0.1 with a multi-step learning rate scheduler and train models for 200 epochs. One-Pixel Shortcut remains effective at protecting large-scale datasets: networks trained on OPS data obtain much lower clean test accuracy than those trained on clean data.

4.2 EFFECTIVENESS ON DIFFERENT MODELS

We train different convolutional networks and vision transformers on the One-Pixel Shortcut CIFAR-10 training set and evaluate their performance on the unmodified CIFAR-10 test set; details are shown in Table 2. Every model reaches very high training accuracy after only a few epochs, much faster than when training on clean data. Meanwhile, they all obtain very low test accuracy (about 15%) on clean test data, indicating that they do not generalize at all. Although the perturbed images look virtually the same as the originals and all models quickly reach near 100% training accuracy, they capture no semantic information, only the pixels we modify. We also train models on the EM training set, generated by a ResNet-18 using the official implementation of Huang et al. (2020a), with the ℓ∞ bound of the EM noises set to 8/255. Generating OPS costs only about 30 seconds, much faster than EM, which costs about half an hour. Across the different networks, OPS degrades test accuracy to a lower level than EM. EM works best on ResNet-18 (19.58% test accuracy), which has the same architecture as its generator; on other models, EM yields higher test accuracy than on ResNet-18. Since OPS is a model-free method that exploits the natural learning preference of neural networks, it transfers better across different models.

Besides different architectures, we also explore the impact on models with the same architecture but different capacities by training several WideResNets (Zagoruyko & Komodakis, 2016) of different sizes; the results can be found in Table 3. From our observation, overparameterization, which is generally believed to enhance the ability to capture complicated features, does not circumvent the shortcut features. Moreover, we observe that vision transformers are easily affected by manually crafted shortcuts, even though their self-attention mechanism is believed to make them less sensitive to data distribution shifts (Shao et al., 2021; Bhojanapalli et al., 2021). For CCT-7-3×1 and CVT-7-4 (Hassani et al., 2021), EM and OPS degrade test accuracy below 30% and 20%, respectively. This indicates that vision transformers may not generalize on out-of-distribution data as well as we might expect: if the training data is heavily biased, i.e., contains notable shortcuts, vision transformers do not perform much better than convolutional networks.

4.3 EFFECTIVENESS UNDER DIFFERENT TRAINING STRATEGIES

To evaluate the effectiveness of OPS under different training strategies, we train models on OPS-perturbed data using adversarial training and different data augmentations such as Mixup (Zhang et al., 2018), Cutout (DeVries & Taylor, 2017), and RandAugment (Cubuk et al., 2020). Simple augmentations like random crop and flip are used by default in standard training. Models are also trained on EM-perturbed data for comparison. An illustrative Cutout implementation is sketched below.
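To make the local-augmentation comparison concrete, here is a minimal Cutout sketch in the spirit of DeVries & Taylor (2017). The patch length of 16 is the common CIFAR-10 choice from the original Cutout paper, and the pipeline in the comment is our assumption about the default crop-and-flip setup described above.

```python
import torch

class Cutout:
    """Zero out one random square patch of side `length` in a (C, H, W)
    tensor, clipping the patch at image borders."""
    def __init__(self, length=16):
        self.length = length

    def __call__(self, img):
        _, h, w = img.shape
        cy = torch.randint(h, (1,)).item()
        cx = torch.randint(w, (1,)).item()
        y1, y2 = max(0, cy - self.length // 2), min(h, cy + self.length // 2)
        x1, x2 = max(0, cx - self.length // 2), min(w, cx + self.length // 2)
        img = img.clone()
        img[:, y1:y2, x1:x2] = 0.0
        return img

# Assumed default pipeline (random crop and flip), with Cutout appended:
#   from torchvision import transforms
#   train_tf = transforms.Compose([
#       transforms.RandomCrop(32, padding=4),
#       transforms.RandomHorizontalFlip(),
#       transforms.ToTensor(),
#       Cutout(length=16),
#   ])
```

Because the erased patch occasionally covers the single shortcut pixel, such local augmentations are a plausible reason why OPS is more sensitive to Cutout than EM is.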
As shown in Table 5, both EM and OPS protect data effectively under standard training, degrading test accuracy to 19.58% and 15.56%, respectively. As noted in previous works (Huang et al., 2020a; Fu et al., 2021), EM does not work effectively under adversarial training, where the model can even reach higher accuracy than when adversarially trained on clean data. OPS, in contrast, remains effective under adversarial training. When it comes to data augmentation, however, EM is more impervious, while OPS is more sensitive, especially to Cutout and RandAugment. This is because EM injects global noise into images, while OPS modifies only a single pixel, which is equivalent to adding a very local perturbation. Adversarial training, which can be regarded as a kind of global augmentation, attenuates the dependence on global shortcuts; local data augmentations like Cutout make models less sensitive to local shortcuts.

Naturally, so that the two methods complement each other, we can combine EM and our proposed OPS to craft an ensemble shortcut. Since OPS modifies only a single pixel, imperceptibility is still guaranteed after it is applied to EM-perturbed images. We evaluate this ensemble method under different training strategies and find that it always remains effective: even with adversarial training and strong data augmentation like RandAugment, it still degrades test accuracy to a relatively low level. Based on this property, we introduce CIFAR-10-S, where all images are perturbed by the composed EM-OPS noise (a sketch of this composition is given at the end of this subsection). It can serve as a new benchmark for evaluating the ability to learn critical information under the disturbance of composed non-semantic representations.

We also extend our method to multi-pixel scenarios. According to Table 6, as the number of perturbed pixels increases, the test accuracy can be degraded to a lower level; nevertheless, the more pixels are perturbed, the weaker the imperceptibility becomes, as illustrated in Figure 5. In our experiment on ResNet-18, a 3-Pixel Shortcut easily degrades the test accuracy to 9.74%. Moreover, more perturbed pixels alleviate the sensitivity to data augmentations: under RandAugment, one additional perturbed pixel degrades the test accuracy to 46.45%, much lower than the 71.18% of OPS.
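A minimal sketch of how the EM-OPS composition behind CIFAR-10-S could be assembled, reusing apply_ops from the Section 3.2 sketch. It assumes em_noise holds precomputed per-image EM perturbations, and the composition order (EM noise first, then the OPS pixel) is our reading of the text, not a stated detail.

```python
import numpy as np

def make_cifar10_s(images, em_noise, labels, positions, targets, eps=8/255):
    """Add the l_inf-bounded EM perturbation, then overwrite the class-wise
    OPS pixel, so both the global and the local shortcut are present."""
    x = np.clip(images + np.clip(em_noise, -eps, eps), 0.0, 1.0)
    return apply_ops(x, labels, positions, targets)  # from the Sec. 3.2 sketch
```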
5 DISCUSSION AND CONCLUSION

In this paper, we study the mechanism of the recently proposed unlearnable examples (ULEs) based on error-minimizing (EM) noises. We find that, after training, DNNs mainly learn features from the EM noises rather than the semantic features of the images themselves. Such easy-to-learn representations act as shortcuts, which can exist naturally or be crafted manually. Since DNNs optimized by gradient descent tend to find the minimum-norm solution, shortcuts take precedence over semantic features during training. We find that a shortcut can be as small as a single pixel, and accordingly propose One-Pixel Shortcut (OPS), an imperceptible and effective data protection method. OPS does not require a generator model, so it needs very little computational cost and transfers better between different models; it is also less sensitive to adversarial training than EM ULEs. We investigate the effectiveness of OPS and EM under different training strategies and find that they have complementary strengths and weaknesses: while EM does not remain effective under global augmentations like adversarial training, OPS is sensitive to local data augmentations like Cutout. Based on this investigation, we combine EM and OPS to craft stronger unlearnable examples that remain imperceptible yet are more impervious, and consequently introduce CIFAR-10-S, which can serve as a new benchmark. We also extend our method to multi-pixel scenarios.

There remain questions for future work. Besides shortcuts crafted deliberately for data protection, there are also shortcuts that exist naturally due to the inevitable bias in data collection; they can be the crux of network generalization on unseen data. How to identify and avoid them (e.g., by designing data-dependent augmentation) is a challenging problem. We believe our work sheds light on the important impacts of shortcuts and provides inspiration for harnessing them in more practical applications.

ACKNOWLEDGEMENT

This work is partly supported by National Natural Science Foundation of China (61977046, 61876107, U1803261), Shanghai Science and Technology Program (22511105600), and Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102). Cihang Xie is supported by a gift from Open Philanthropy. In addition, we sincerely thank the reviewers for their valuable discussions.

A FURTHER INVESTIGATION ON SHORTCUT GENERATION

In Sec. 3.2, we craft OPS by perturbing a fixed position to a fixed color target for each class. What if we constrain only one of these two properties? In other words, does perturbing a fixed position to different color targets, or different positions to a fixed color target, still create a strong shortcut? We explore these two properties individually, denoting the former setting OPS-Position and the latter OPS-Color. For OPS-Position, after finding the optimal position for each class via Algorithm 1, we perturb the pixel at this position to a random color. For OPS-Color, we assign each class a predefined color (adding two colors, since there are only 8 boundary colors) and perturb a different position in each image to this color. Since there may be many positions whose original color is already the same as the predefined color, for each sample we perturb the position whose original color is furthest from the predefined color (measured by the ℓ2 norm). As shown in Table A, constraining only one property yields shortcuts that are harder for the network to capture; moreover, random perturbations at a fixed position form a stronger shortcut than a fixed color at varying positions.

Table A: One-Pixel Shortcut generated under different constraints. Perturbing a fixed position to different color targets, or different positions to a fixed color target, degrades DNN training less strongly.

Training Data      Clean    OPS      OPS-Position    OPS-Color
Clean Test Acc.    94.01    15.56    46.22           85.34

B BASELINE OF A RANDOM PIXEL

To evaluate the searching algorithm proposed in Sec. 3.2, we compare it with a baseline that randomly chooses a fixed position and fixed color (denoted RandPix) for each class to craft the shortcut training set, and report the clean accuracy over 10 runs in Table B (a sketch of this baseline is given below). Since 15.56% lies outside the three-standard-deviation interval of RandPix, our OPS design is validated to be meaningful.

Table B: Comparison with a randomly chosen perturbed position and target color.

Training Data      Clean    OPS      RandPix
Clean Test Acc.    94.01    15.56    50.67 ± 11.29
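A minimal sketch of the RandPix baseline, assuming the same (positions, targets) interface as the earlier OPS sketch; the seed handling is our addition.

```python
import numpy as np

def randpix(num_classes, H=32, W=32, C=3, seed=0):
    """A uniformly random fixed position and boundary color per class,
    with no searching (the RandPix baseline of Appendix B)."""
    rng = np.random.default_rng(seed)
    positions = {k: (int(rng.integers(H)), int(rng.integers(W)))
                 for k in range(num_classes)}
    targets = {k: rng.integers(0, 2, size=C).astype(np.float64)
               for k in range(num_classes)}
    return positions, targets

# The poisoned set is then built exactly as for OPS:
#   pos, tgt = randpix(10)
#   poisoned = apply_ops(images, labels, pos, tgt)
```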
C SEARCHING OPS ON A SMALL PROPORTION OF DATA

In Algorithm 1, we search the entire dataset to generate OPS. What if data is added to the dataset gradually? In other words, will a perturbed position and target color found on a small proportion of the data still serve as a strong shortcut on the entire dataset? Moreover, users might want to add their own noise before uploading their data. From this practical standpoint, we further explore the potential of OPS with the following experiments. First, we take different proportions of the CIFAR-10 training set to search for the perturbed position and target color, then perturb the whole training set based on the results obtained from the selected subset, and train DNNs on the perturbed data. Second, to model users adding their own noise before uploading, we additionally inject random noise sampled from the uniform distribution [−ϵ, ϵ] (with ϵ = 8/255, following the common setting in adversarial learning) into the unselected data. The results are shown in Table C, where the first setting is denoted Clean-Upload and the second Noisy-Upload. Even when only 1% of the whole dataset is used to search the perturbed position and target color, OPS still degrades the clean accuracy of the trained DNN to 32.87%, and the additional noise does not degrade its effectiveness. To some degree, these experiments demonstrate the generalization capability and robustness of OPS from a practical standpoint.

Table C: Different proportions of data used to search the perturbed position and target color of OPS. Even with only 1% of the data, OPS still largely degrades the clean accuracy of the trained network.

Updating Type    Searching Proportion
                 1%       10%      100%
Clean-Upload     32.87    17.07    15.56
Noisy-Upload     32.10    17.30    15.56

D EXPLORATIONS ON ℓ2 ADVERSARIAL TRAINING

In Sec. 4.3, we studied the effectiveness of different types of shortcuts under different training strategies, including ℓ∞ adversarial training (AT) and various data augmentations. For a more comprehensive exploration, we additionally evaluate them under ℓ2 adversarial training. Besides the commonly used setting (ϵ = 0.5), we also try larger attack budgets, as shown in Table D. Compared to ℓ∞, ℓ2 proves to be a more effective AT scheme for alleviating OPS. Nevertheless, when the perturbation budget is not large enough, the network is still affected by the shortcut, and further enlarging ϵ hurts the clean test accuracy to a greater extent. As a local shortcut, OPS displays stronger inertia under ℓ2 AT, which can be viewed as a global data augmentation; EM, although generated with ℓ∞ perturbations, remains sensitive to ℓ2 AT. These results conform to our discussion of local and global shortcuts versus local and global data augmentations in Sec. 4.3.

Table D: ResNet-18 trained on different training data using ℓ2 adversarial training with different attack budgets.

Training Strategy    Training Data
                     Clean    OPS      EM
Standard             94.01    15.56    19.58
ℓ2 AT (ϵ=0.5)        87.68    43.26    71.00
ℓ2 AT (ϵ=1)          82.58    50.29    81.75
ℓ2 AT (ϵ=2)          73.45    73.70    74.51
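For reference, here is a hedged sketch of the ℓ2-PGD inner maximization (Madry et al., 2018) that such adversarial training would use. The step size alpha and number of steps are illustrative defaults we chose, since the appendix specifies only the budget ϵ.

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps=0.5, alpha=0.1, steps=10):
    """l2-PGD inner maximization for adversarial training."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Normalized gradient ascent step.
        gnorm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta + alpha * grad / gnorm
        # Project onto the l2 ball of radius eps, keep the image in [0, 1].
        dnorm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta * (eps / dnorm).clamp(max=1.0)
        delta = ((x + delta).clamp(0.0, 1.0) - x).detach().requires_grad_(True)
    return (x + delta).detach()

# Adversarial training replaces each clean batch with its attacked version:
#   x_adv = pgd_l2(model, x, y, eps=0.5)
#   loss = F.cross_entropy(model(x_adv), y)
```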
1. What is the focus of the paper regarding unlearnable examples in neural networks?
2. What are the strengths of the proposed method in creating unlearnable examples?
3. What are the weaknesses or areas that need further study in the paper's approach?
4. How do the authors analyze the feature changes in ULEs, and what does this result indicate for convolutional layers?
5. Does the reviewer have any questions or concerns regarding the paper's content or methodology?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The paper presents a method to generate unlearnable examples (ULEs): if one trains a neural network on them, that network will not perform well (sometimes at random-guess level) on the clean test set. The authors show that changing even a single pixel value, provided the change is made consistently at the same location in all images of the same class, can produce surprisingly effective unlearnable examples. The application of such unlearnable examples is shown in many image classification settings (e.g., CIFAR-10), with superior performance over the existing paradigm for creating ULEs.

Strengths And Weaknesses

The method is well motivated, very simple, and most importantly, surprisingly effective at creating unlearnable examples. The results are not simply empirical; the authors also analyze what goes on with the created ULEs. For example, they visualize the features computed from ULEs and show that with just a one-pixel change, all the intermediate features change their properties. This result will be surprising to the community. That being said, it also needs to be studied a bit more: in what way has the community understood convolutional layers incorrectly, which led us to assume something like Fig. 1 could not happen? The phenomenon discovered by the authors seems to generalize well, in that it affects not only traditional convolutional architectures (e.g., ResNet) but also transformer-based architectures (ViT). This corroborates that the result is not an exploitation of a specific weakness, but a bug (or feature) of neural networks in general.

Comments/Questions

1. "Images belonging to the same category are perturbed at the same position." If I understand correctly, two images of dogs will both be perturbed at the same location with the same perturbation. How are these two properties related? In other words, what happens if you enforce one constraint (the perturbation being at the same location) but not the other, and vice versa? This property should be studied better.
2. Mathematical notations are sometimes confusing. For example, in Eq. 1, it is not clear what c (in D_c, k) and i, j (in sigma_k(i, j)) mean.
3. Is there any trend in the discovered perturbation locations? For instance, is it the case that for certain classes the perturbed locations are near the center, whereas for others they are on the edge?

Clarity, Quality, Novelty And Reproducibility

Apart from a few issues surrounding some mathematical notations (please see the comments section), the paper is well written, with motivations explained clearly. To the best of my knowledge, the idea that perturbing one pixel can drastically change the behavior of a network is novel, and it has been shown to be generalizable enough.
ICLR
1. What is the focus of the paper regarding generating unlearnable examples?
2. What are the strengths and weaknesses of the proposed one-pixel shortcut (OPS) method?
3. Do you have any concerns or suggestions regarding the method's complexity, perceptibility, and robustness?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any relevant works or methods that could be compared or combined with OPS, such as Noise2Void?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The paper proposes a one-pixel shortcut (OPS) method to generate unlearnable examples that render a trained model no better than a random network on test examples. It outperforms the existing unlearnable-example generation method based on error minimization (EM) across different settings. EM and OPS are combined to create a dataset named CIFAR-10-S, which injects shortcuts into the dataset to evaluate different models' resistance to them.

Strengths And Weaknesses

Strengths:
- The paper provides a new perspective on learning a per-class noise that minimizes test accuracy while achieving good training accuracy. Shortcuts like these are easy for a network to pick up, especially when they are consistent across images of the same class.
- The evaluation is very thorough, covering different architectures and showing that the method is not dependent on or sensitive to architecture choices, augmentations, or training strategies.

Weaknesses:
- The method is rather complex in the sense that, since neural networks are good at picking up shortcuts, it should be possible to perturb any pixel with any color consistently across all images in the same class, as long as the values of σ_k, ξ_k differ between classes. This should be an initial baseline to compare against the OPS obtained by optimizing Eq. (2).
- One major downside of OPS is that, regardless of the claims about the perceptibility of the augmented images, it is extremely easy for a human (the reviewer in this case) to see exactly which pixel is perturbed.
- Unlike the EM method, which can learn a per-image noise, OPS performs a class-wise perturbation. This is problematic for a few reasons. First, the easy perceptibility of the perturbed pixel invites manual defenses: one way to combat OPS, if it is applied across the dataset, is to find the pixel whose standard deviation is 0 across the images of a class and replace it with, e.g., a mean of neighboring pixels; an analogous way to "undo the noise" in EM is not easy to find. Second, if part of the information is perturbed, visual inspection is enough to discard the perturbed data and then train a network, which is not possible with EM.
- There are no comments about the "generalization capability" of the noise location and color. How robust is this in settings where more data is gradually added to the dataset (e.g., continual learning)?
- The OPS shortcut is learned on the entire dataset, which is not what would be done from a practical standpoint (an end user would want to add their own noise before uploading a picture to the internet). This issue need not be solved in the paper, but it should be addressed. Moreover, it would be interesting to see how a method like Noise2Void [1], which performs selective masking on the image, could be modified into an adversarial method to combat OPS.

[1] https://arxiv.org/pdf/1811.10980.pdf

Clarity, Quality, Novelty And Reproducibility

The paper is generally very clear, and I had no issues understanding it. One thing I did not find very useful is Fig. 4 and its relevance to the overall theme of the paper. The method is proposed on publicly available datasets, and the pseudocode/algorithm (Alg. 1) is very clear too.
ICLR
Title One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks Abstract Unlearnable examples (ULEs) aim to protect data from unauthorized usage for training DNNs. Existing work adds l∞-bounded perturbations to the original sample so that the trained model generalizes poorly. Such perturbations, however, are easy to eliminate by adversarial training and data augmentations. In this paper, we resolve this problem from a novel perspective by perturbing only one pixel in each image. Interestingly, such a small modification could effectively degrade model accuracy to almost an untrained counterpart. Moreover, our produced One-Pixel Shortcut (OPS) could not be erased by adversarial training and strong augmentations. To generate OPS, we perturb in-class images at the same position to the same target value that could mostly and stably deviate from all the original images. Since such generation is only based on images, OPS needs significantly less computational cost than the previous methods using DNN generators. Based on OPS, we introduce an unlearnable dataset called CIFAR-10-S, which is indistinguishable from CIFAR-10 by humans but induces the trained model to extremely low accuracy. Even under adversarial training, a ResNet-18 trained on CIFAR-10-S has only 10.61% accuracy, compared to 83.02% by the existing error-minimizing method. 1 INTRODUCTION Deep neural networks (DNNs) have successfully promoted the computer vision field in the past decade. As DNNs are scaling up unprecedentedly (Brock et al., 2018; Huang et al., 2019; Riquelme et al., 2021; Zhang et al., 2022), data becomes increasingly vital. For example, ImageNet (Russakovsky et al., 2015) fostered the development of AlexNet (Krizhevsky et al., 2017). Besides, people or organizations also collect online data to train DNNs, e.g., IG-3.5B-17k (Mahajan et al., 2018) and JFT-300M (Sun et al., 2017). This practice, however, raises the privacy concerns of Internet users. In this concern, researchers have made substantial efforts to protect personal data from abuse in model learning without affecting user experience (Feng et al., 2019; Huang et al., 2020a; Fowl et al., 2021; Yuan & Wu, 2021; Yu et al., 2021). Among those proposed methods, unlearnable examples (ULEs) (Huang et al., 2020a) take a great step to inject original images with protective but imperceptible perturbations from bi-level error minimization (EM). DNNs trained on ULEs generalize very poorly on normal images. However, such perturbations could be completely canceled out by adversarial training, which fails the protection, limiting the practicality of ULEs. We view the data protection problem from the perspective of shortcut learning (Geirhos et al., 2020), which shows that DNN training is “lazy” (Chizat et al., 2019; Caron & Chrétien, 2020), i.e., converges to the solution with the minimum norm when optimized by gradient descent (Wilson et al., 2017; Shah et al., 2018; Zhang et al., 2021). In this case, a DNN would rely on every accessible feature to minimize the training loss, no matter whether it is semantic or not (Ilyas et al., 2019; Geirhos et al., 2018; Baker et al., 2018). Thus, DNNs tend to ignore semantic features if there are other easy-to-learn shortcuts that are sufficient for distinguishing examples from different classes. Such shortcuts exist naturally or manually. 
In data collection, for example, cows may mostly appear against grasslands, misleading a DNN to predict cows from large areas of green, because this color is easier to learn than the semantic features and is also sufficient to correctly classify images of cows during training. Such natural shortcuts have been illustrated in detail in datasets such as ImageNet-A (Hendrycks et al., 2021) and ObjectNet (Barbu et al., 2019). Besides, shortcuts can also be manually crafted. For instance, EM-based ULEs (Huang et al., 2020a) mislead DNNs into learning features belonging to the perturbations, which falls into the category of shortcut learning (Yu et al., 2021). In this paper, we are surprised to find that a shortcut can be so small in area that it can be instantiated as a single pixel. By perturbing one pixel of each training sample, our method, namely One-Pixel Shortcut (OPS), degrades model accuracy on clean data to almost that of an untrained counterpart. Moreover, our generated small but unbounded noise cannot be erased by adversarial training (Madry et al., 2018), which is effective in mitigating existing ULEs (Huang et al., 2020a). To make the specific pixel stand out in the view of DNNs, OPS perturbs in-class images at the same position to the same target value which, when changed to a boundary value, deviates most strongly and stably from all the original images; specifically, the difference between the perturbed pixel and the original one should be large with low variance across all in-class images. Since this generation is based only on the images, OPS needs significantly less computational cost than previous methods based on DNN generators. We evaluate OPS and its counterparts on 6 architectures, 6 model sizes, and 8 training strategies on CIFAR-10 (Krizhevsky et al., 2009) and an ImageNet (Russakovsky et al., 2015) subset, and find that OPS is consistently superior to EM ULEs in degrading a model's test accuracy. Building on this, we introduce a new unlearnable dataset named CIFAR-10-S, which combines EM and OPS to craft stronger imperceptible ULEs. Even under adversarial training, a ResNet-18 (He et al., 2016) trained on CIFAR-10-S has 10.61% test accuracy, compared to 83.02% with the existing error-minimizing method. Unlike existing datasets such as ImageNet-A (Hendrycks et al., 2021) or ObjectNet (Barbu et al., 2019), which place objects into special environments to remove shortcuts, CIFAR-10-S injects shortcuts to evaluate a model's resistance to them. Altogether, our contributions are summarized as follows:
• We analyze unlearnable examples from the perspective of shortcut learning, and demonstrate that a strong shortcut for DNNs can be as small as a single pixel.
• We propose a novel data protection method named One-Pixel Shortcut (OPS), which perturbs in-class images at the pixel that deviates most strongly and stably from the original images. OPS is a model-free method that is significantly faster than previous work.
• We extensively evaluate OPS on various models and training strategies and find that it outperforms baselines by a large margin in its ability to degrade DNN training. Besides, we introduce CIFAR-10-S to assess a model's ability to learn essential semantic features.
∗ Equal contribution; correspondence to Xiaolin Huang ([email protected]). † Code available at https://github.com/cychomatica/One-Pixel-Shotcut.
2 RELATED WORK

2.1 ADVERSARIAL ATTACK AND DATA POISONING

Adversarial examples carry small perturbations that are indistinguishable from the original examples to humans but can make DNNs give wrong predictions (Szegedy et al., 2014). Many different adversarial attacks have been proposed in recent years. Generally, most adversarial attacks perturb the whole image with a constrained intensity (usually bounded by an ℓp norm), e.g., PGD (Madry et al., 2018), C&W (Carlini & Wagner, 2017) and AutoAttack (Croce & Hein, 2020). Besides, there are also methods that perturb only a small part of an image (Croce & Hein, 2019; Dong et al., 2020) or even a single pixel (Su et al., 2019). The existence of adversarial examples and their transferability (Chen et al., 2022) indicates that DNNs do not sufficiently learn critical semantic information as we wish, but more or less depend on non-robust features. Data poisoning aims to modify the training data in order to affect the performance of models. Usually, the poisoned examples are notably modified and constitute only part of the whole dataset (Yang et al., 2017; Koh & Liang, 2017), but such methods cannot degrade model performance to a low enough level, and the poisoned examples are easily distinguishable. Recently, researchers have paid great attention to imperceptible poisoning, which modifies examples slightly without damaging their semantic information (Huang et al., 2020a; Fowl et al., 2021; Huang et al., 2020b; Doan et al., 2021; Geiping et al., 2020; Chen et al., 2023). Fowl et al. (2021) use adversarial perturbations that contain the information of wrong labels to poison the training data, which is equivalent to random label fitting. On the contrary, Huang et al. (2020a) attack the training examples inversely, i.e., with error-minimizing perturbations, to craft unlearnable examples.

2.2 SHORTCUT LEARNING

Recent research on deep neural networks indicates that, as long as correct classification remains achievable, DNNs tend to learn easier features instead of the semantic features that characterize the object itself. More specifically, the same object in different environments can receive different predictions, which means the DNN relies excessively on features that do not belong to the object (Beery et al., 2018). Geirhos et al. (2020) investigate this phenomenon across different fields of deep learning and explain why shortcuts exist and how to understand them. Lapuschkin et al. (2019) also observe this problem and attribute it to the unsuitable performance evaluation metrics we generally use. The existence of natural adversarial examples (Hendrycks et al., 2021) likewise indicates that DNNs do not sufficiently learn the real semantic information during training; instead, they may learn to predict from the background or texture of an image. Unlearnable examples (ULEs) (Huang et al., 2020a), which are crafted with error-minimizing noises and lead models trained on them to generalize terribly on test data, are believed to be a kind of shortcut that provides textures that are easy to learn (Yu et al., 2021). Generally, if we have enough data, the interconnection of different features is enhanced, so that shortcuts may no longer be sufficient for classification, i.e., the model will have to use more complicated composed features in order to minimize the risk.
However, when the data we collect contains some specific bias (e.g., similar backgrounds), shortcut learning will not be mitigated effectively.

2.3 DATA AUGMENTATION

Data augmentation aims to enhance the generalization ability of models. It is usually implemented by applying transformations to the training data, e.g., random stretching, random cropping, or color changes. Nowadays, various data augmentation policies (Zhang et al., 2018; DeVries & Taylor, 2017; Cubuk et al., 2019; 2020) are proven to effectively boost the generalization ability of DNNs. Adversarial training (Madry et al., 2018; Li et al., 2022) is also sometimes regarded as a kind of data augmentation (Shorten & Khoshgoftaar, 2019; Xie et al., 2020), and Dao et al. (2019) regard data augmentation as a data-dependent regularization term. Since data augmentations are believed to improve the generalization ability of DNNs, we use different augmentations to evaluate the effectiveness of the different data protection methods.

3 ONE-PIXEL SHORTCUT

3.1 PRELIMINARIES

Unlearnable examples (ULEs) are a data protection method created by error-minimizing (EM) noises (Huang et al., 2020a). Models trained on examples perturbed by these noises reach almost zero training error but perform like random guessing on clean test data. Owing to the imperceptibility of the noise, this method can prevent the abuse of data by unauthorized users who attempt to train deep models for improper purposes, without affecting normal usage. The underlying bi-level problem can be solved by optimizing the inner and outer minimization alternately. It has been proved that perturbations belonging to the same class are well clustered and linearly separable (Yu et al., 2021); thus, EM provides easy-to-learn features that are closely interconnected with the labels.

We design a shuffling experiment to demonstrate that DNNs learn the shortcuts instead of the images when trained on data that contains shortcuts. Denote D̂ = {(x_i + δ_i, y_i)} as the perturbed training set, where δ_i is the perturbation associated with the i-th example. After shuffling, the training set becomes D̂′ = {(x_j + δ_i, y_i)}. We are curious how DNNs trained on different data (with or without shortcuts) would predict the shuffled perturbed training set. As shown in Table 1, the DNN trained on clean data performs like a random guess on the shuffled perturbed data while keeping high accuracy on the unshuffled data, indicating that it learns representations from the images {x_i} rather than making predictions from the perturbations {δ_i} of EM or OPS. In contrast, the DNN trained on the EM training data tends to memorize a significant proportion of the perturbations, as it achieves 48.97% accuracy on the shuffled EM training data. Moreover, the OPS training data produces a DNN with 72.43% accuracy on the shuffled set, reflecting that OPS forces the DNN to learn the perturbations to an even greater extent. This study illustrates the learning characteristics of DNNs trained with or without shortcuts, and also shows that OPS is a more effective shortcut than EM.
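To make the construction of D̂′ concrete, here is a minimal NumPy sketch of the shuffling step (the function name and the [0, 1] image range are our assumptions; the paper's exact implementation may differ):

import numpy as np

def shuffled_perturbed_set(images, deltas, labels, seed=0):
    # Build D-hat' = {(x_j + delta_i, y_i)}: each perturbation delta_i
    # keeps its own label y_i but is pasted onto a randomly drawn image
    # x_j. A model that still predicts y_i correctly on this set is
    # classifying the perturbation, not the image content.
    j = np.random.default_rng(seed).permutation(len(images))
    return np.clip(images[j] + deltas, 0.0, 1.0), labels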
3.2 FOOLING DNN TRAINING BY A PIXEL

Following the discussion above, for the purpose of data protection, we need to craft shortcuts that are easy enough to learn and thus fool network training. According to previous studies, shortcuts can come from background environments that naturally exist inside our datasets (Beery et al., 2018), or can be manually crafted like EM (Huang et al., 2020a). Unlike those shortcuts, which might occupy the whole image or a notable part of it, we investigate how a single pixel, the minimum unit of a digital image, can affect the learning process of deep neural networks. Thus, we propose One-Pixel Shortcut (OPS), which modifies only a single pixel of each image. Images belonging to the same category are perturbed at the same position, which means the perturbed pixel is interconnected with the category label. Although minuscule, it is effective enough to fool the training of deep learning models. We use a heuristic but effective method to generate perturbations for the images of each category: we search for the position and value of the pixel that results in the most significant change for the whole category. Denoting D as the clean dataset and D_k as the clean subset containing all examples of class k, the problem can be formulated as:

$$\operatorname*{argmax}_{\sigma_k,\,\xi_k}\ \mathbb{E}_{(x,y)\in D_k}\!\left[G_k(x,\sigma_k,\xi_k)\right] \quad \text{s.t.}\quad \|\sigma_k\|_0 = 1,\ \sum_{i,j}\sigma_k(i,j)=1, \tag{1}$$

where σ_k ∈ ℝ^{H×W} represents the perturbed-position mask with σ_k(i, j) its element at the i-th row and j-th column, ξ_k ∈ ℝ^C is the perturbation target color (C = 3 for RGB images), and G_k is the objective function. Since the optimization above is NP-hard, we cannot solve it directly. We therefore constrain the feasible region to a limited discrete search space, in which we search the boundary value of each color channel, i.e., ξ_k ∈ {0, 1}^3, at every position of an image. Specifically, for CIFAR-10 images, the discrete search space contains 32 × 32 × 2^3 = 8192 elements. To ensure that the pixel is stably perturbed, we also want the variance of the deviation to be small. Accordingly, we design the objective function G_k for class k as:

$$G_k = \frac{\mathbb{E}_{(x,y)\in D_k}\left(\sum_{j=1}^{C}\left|\,\|x_j\cdot\sigma_k\|_F - \xi_{kj}\right|\right)}{\operatorname{Var}_{(x,y)\in D_k}\left(\sum_{j=1}^{C}\left|\,\|x_j\cdot\sigma_k\|_F - \xi_{kj}\right|\right)} \tag{2}$$

where x_j ∈ ℝ^{H×W} denotes the j-th channel of x, and ξ_{kj} ∈ ℝ is the j-th channel of ξ_k. After solving for the position mask and color, we obtain the perturbation δ for each example (x, y) as:

$$\delta = \left[\xi_{y1}\sigma_y - x_1\cdot\sigma_y,\ \xi_{y2}\sigma_y - x_2\cdot\sigma_y,\ \ldots,\ \xi_{yC}\sigma_y - x_C\cdot\sigma_y\right]^{\top} \tag{3}$$

Details can be found in Algorithm 1, and the resulting One-Pixel Shortcut is illustrated in Figure 2.

Algorithm 1 Model-Free Searching for One-Pixel Shortcut
Input: Clean dataset D = D_1 ∪ · · · ∪ D_M
Output: One-Pixel Shortcut dataset D̂ = D̂_1 ∪ · · · ∪ D̂_M
 1: for k = 1, 2, 3, ..., M do
 2:   solve Eq. 1 and Eq. 2 to get σ_k and ξ_k        # find the best perturbed point for class k
 3:   for each x ∈ D_k do
 4:     for i = 1, 2, 3 do
 5:       x̂_i = x_i · (I − σ_k) + ξ_{ki} · σ_k        # modify the optimal pixel for every image in class k
 6:     end for
 7:   end for
 8:   D̂_k = {x̂}
 9: end for
10: return D̂ = D̂_1 ∪ · · · ∪ D̂_M
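As a concrete reference, the per-class search of Eqs. (1)-(2) and Algorithm 1 can be sketched in NumPy as follows; the function names and the small epsilon added for numerical stability are our additions, and images are assumed to lie in [0, 1]. With a one-hot position mask, the Frobenius norm in Eq. (2) is simply the pixel value at that position.

import numpy as np
from itertools import product

def search_ops(images, eps=1e-8):
    # images: (N, H, W, C) array holding one class. Per-image deviation is
    # d = sum_j |x_j(pos) - xi_j|; we maximize mean(d) / var(d) over all
    # positions and all 2^C boundary colors.
    N, H, W, C = images.shape
    best_score, best_pos, best_xi = -np.inf, None, None
    for xi in product([0.0, 1.0], repeat=C):             # 2^C boundary colors
        d = np.abs(images - np.array(xi)).sum(axis=-1)   # (N, H, W)
        score = d.mean(axis=0) / (d.var(axis=0) + eps)   # (H, W)
        pos = np.unravel_index(score.argmax(), score.shape)
        if score[pos] > best_score:
            best_score, best_pos, best_xi = score[pos], pos, np.array(xi)
    return best_pos, best_xi

def apply_ops(images, pos, xi):
    # Eq. (3): overwrite the selected pixel with the target color.
    out = images.copy()
    out[:, pos[0], pos[1], :] = xi
    return out

For CIFAR-10, this scans the same 32 × 32 × 2^3 = 8192 candidates discussed above, in line with the roughly 30-second generation cost reported later.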
3.3 PROPERTIES OF ONE-PIXEL SHORTCUT

Since convolutional networks are widely believed to capture textures (Hermann et al., 2020) or shapes (Geirhos et al., 2018; Zhang & Zhu, 2019), it is surprising that they can be affected so severely by just one pixel. As illustrated by Figure 1, the network indeed tends to learn the less complicated, non-semantic features introduced by One-Pixel Shortcut. Besides convolutional networks, we observe that compact vision transformers (Hassani et al., 2021) are also attracted by One-Pixel Shortcut and ignore other semantic features. This indicates that shortcut learning is not tied to a specific architecture. We also visualize the loss landscape of ResNet-18 trained on clean CIFAR-10 data and on One-Pixel Shortcut data. As illustrated in Figure 3, when trained on OPS data, the loss surface is much flatter, which means that the minima found by the network are more difficult to escape. Even if we use a ResNet-18 pretrained on clean CIFAR-10 and then fine-tune it on OPS data, the network still falls into these poorly generalizing minima. In addition, we record the trajectories of training accuracy and the Frobenius norm of the parameter difference, ∥θ − θ_0∥_F, which reflects the magnitude of network parameter change; here θ and θ_0 denote the parameters after training and at initialization, respectively. We draw the relation curve between training accuracy and ∥θ − θ_0∥_F in Figure 4. When training accuracy first reaches 90%, the model trained on OPS data has a much smaller ∥θ − θ_0∥_F than the model trained on clean data, indicating that the OPS-trained model gets stuck in an optimum closer to the initialization. It is widely known that overparameterized DNNs optimized by gradient descent converge to a solution close to the initialization, i.e., with the minimum norm of parameter difference (Wilson et al., 2017; Shah et al., 2018; Li & Liang, 2018; Zhang et al., 2021). Since OPS perturbs only a single pixel, the original representations of the images are not damaged, and a model trained on clean data still performs well on OPS data; this indicates that a well-generalizing solution far from the initialization still exists but is not reached, owing to the tendency toward a close solution. The close solution is generally believed to generalize better. Nevertheless, this argument holds only under the assumption that training and test data come from exactly the same distribution and share exactly the same features. The existence of OPS forces the model to converge to an optimum where it generalizes well on OPS features, which are not contained in the test data. From our experimental results in Table 2, OPS degrades test accuracy to a lower level than EM. This is because EM requires a generator model and its noises thus more or less contain features depending on that model, which constrains its effectiveness on other models. In contrast, OPS is a universal, model-free method whose shortcuts are crafted based on the inherent learning preference of DNNs.

4 EXPERIMENTS

4.1 SETTING

Our experiments are implemented on CIFAR-10 and an ImageNet subset, using 4 NVIDIA RTX 2080Ti GPUs. We investigate how the One-Pixel Shortcut affects the training of different models (including different architectures and different capacities). We evaluate our method on convolutional networks (He et al., 2016; Zagoruyko & Komodakis, 2016; Huang et al., 2017) and the recently proposed compact vision transformers (Hassani et al., 2021). For all convolutional networks, we use an SGD optimizer with learning rate 0.1, momentum 0.9, and weight decay 5e-4. For all compact vision transformers, we use the AdamW optimizer with β_1 = 0.9, β_2 = 0.999, learning rate 5e-4, and weight decay 3e-2. The batch size is 128 for all models except WideResNet-28-10, where it is 64. All models are trained with a learning rate schedule, and the training accuracy of each model is guaranteed to reach nearly 100%.

Table 2: Clean test accuracy (%) of models trained on different training data.
Model            | Clean | EM    | OPS (Ours)
LeNet-5          | 70.27 | 26.98 | 22.19
CVT-7-4          | 87.46 | 27.60 | 18.21
CCT-7-3×1        | 88.98 | 27.06 | 17.95
DenseNet-121     | 94.10 | 23.72 | 11.45
ResNet-18        | 94.01 | 19.58 | 15.56
WideResNet-28-10 | 96.08 | 23.96 | 12.76
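For reference, a minimal PyTorch sketch of the optimizer settings just described (the helper name and the boolean switch are ours; the model constructor is a placeholder):

import torch

def make_optimizer(model, is_vision_transformer):
    # Sec. 4.1 settings: SGD for convolutional networks,
    # AdamW for compact vision transformers.
    if is_vision_transformer:
        return torch.optim.AdamW(model.parameters(), lr=5e-4,
                                 betas=(0.9, 0.999), weight_decay=3e-2)
    return torch.optim.SGD(model.parameters(), lr=0.1,
                           momentum=0.9, weight_decay=5e-4)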
We additionally test our method on an ImageNet subset (the first 100 classes). We center-crop all images to 224 × 224 and train common DNNs, with results in Table 4. We adopt an initial learning rate of 0.1 with a multi-step learning rate scheduler and train the models for 200 epochs. Our One-Pixel Shortcut remains effective in protecting large-scale datasets: networks trained on OPS data obtain much lower clean test accuracy than those trained on clean data.

4.2 EFFECTIVENESS ON DIFFERENT MODELS

We train different convolutional networks and vision transformers on the One-Pixel Shortcut CIFAR-10 training set and evaluate their performance on the unmodified CIFAR-10 test set. Details are shown in Table 2. Every model reaches very high training accuracy after only several epochs, much faster than when training on clean data. Meanwhile, they all obtain very low test accuracy (about 15%) on clean test data, indicating that they do not generalize at all. Although the perturbed images look virtually the same as the originals and all models quickly reach near-100% training accuracy, they capture no semantic information, only the pixels we modify in the images. We also train models on the EM training set, generated by a ResNet-18 using the official implementation of Huang et al. (2020a); the ℓ∞ bound of the EM noises is set to 8/255. Generating OPS costs only about 30 seconds, much faster than EM, which costs about half an hour. Across different networks, OPS degrades test accuracy to a lower level than EM. EM works best on ResNet-18 (19.58% test accuracy), which has the same architecture as its generator; on other models, EM leaves higher test accuracy than on ResNet-18. Meanwhile, since OPS is a model-free method that exploits the natural learning preference of neural networks, it transfers better across different models. Besides different architectures, we also explore the impact on models with the same architecture but different capacities: we train several WideResNets (Zagoruyko & Komodakis, 2016) of different sizes, with results in Table 3. From our observations, overparameterization, which is generally believed to enhance the ability to capture complicated features, does not circumvent the shortcut features. Moreover, we observe that vision transformers are easily affected by manually crafted shortcuts, even though their self-attention mechanism is believed to make them less sensitive to data distribution shifts (Shao et al., 2021; Bhojanapalli et al., 2021). For CCT-7-3×1 and CVT-7-4 (Hassani et al., 2021), EM and OPS degrade test accuracy below 30% and 20%, respectively. This indicates that vision transformers may not generalize on out-of-distribution data as well as we might expect: if the training data is largely biased, i.e., has notable shortcuts, vision transformers will not perform much better than convolutional networks.

4.3 EFFECTIVENESS UNDER DIFFERENT TRAINING STRATEGIES

To evaluate the effectiveness of OPS under different training strategies, we train models on OPS-perturbed data using adversarial training and different data augmentations, such as Mixup (Zhang et al., 2018), Cutout (DeVries & Taylor, 2017) and RandAugment (Cubuk et al., 2020). Simple augmentations like random crop and flip are used by default in standard training. Models are also trained on EM-perturbed data.
As shown in Table 5, both EM and OPS protect data well, degrading test accuracy to 19.58% and 15.56%, respectively. As noted in previous works (Huang et al., 2020a; Fu et al., 2021), EM does not work as effectively under adversarial training, where the model can even reach higher accuracy than when adversarially trained on clean data; meanwhile, OPS remains effective under adversarial training. When it comes to data augmentation, however, EM seems more impervious, while OPS is more sensitive, especially to Cutout and RandAugment. This is because EM injects global noises into images, while OPS modifies only a single pixel, which is equivalent to adding a very local perturbation. Adversarial training, which can be regarded as a kind of global augmentation, attenuates the dependence on global shortcuts; on the other hand, local data augmentations like Cutout make models less sensitive to local shortcuts. Naturally, since the two complement each other, we can combine EM and our proposed OPS to craft an ensemble shortcut. Since OPS modifies only a single pixel, imperceptibility is still guaranteed after it is applied to EM-perturbed images. We evaluate the effectiveness of this ensemble method under different training strategies and find that it always remains effective: even with adversarial training and strong data augmentation like RandAugment, it still degrades test accuracy to a relatively low level. Based on this property, we introduce CIFAR-10-S, in which all images are perturbed by the EM-OPS composed noises. It can serve as a new benchmark to evaluate the ability to learn critical information under the disturbance of composed non-semantic representations. We also extend our method to multi-pixel scenarios. According to Table 6, as the number of perturbed pixels increases, test accuracy can be degraded to a lower level; nevertheless, the more pixels are perturbed, the weaker the imperceptibility becomes, as illustrated in Figure 5. In our experiment on ResNet-18, a 3-Pixel Shortcut easily degrades test accuracy to 9.74%. Moreover, more perturbed pixels alleviate the sensitivity to different data augmentations: under RandAugment, one additional perturbed pixel degrades test accuracy to 46.45%, much lower than the 71.18% of OPS.

5 DISCUSSION AND CONCLUSION

In this paper, we study the mechanism of the recently proposed unlearnable examples (ULEs), which use error-minimizing (EM) noises. We find that, after training, DNNs mainly learn features from the EM noises rather than the semantic features contained in the images themselves. These easy-to-learn representations work as shortcuts, which can exist naturally or be manually crafted. Since DNNs optimized by gradient descent always find the solution with the minimum norm, shortcuts take precedence over semantic features during training. We find that a shortcut can be as small as a single pixel. Thus, we propose One-Pixel Shortcut (OPS), an imperceptible and effective data protection method. OPS does not require a generator model and therefore incurs very little computational cost and transfers better across models. Besides, OPS is less sensitive to adversarial training than EM ULEs. We investigate the effectiveness of OPS and EM under different training strategies and find that EM and OPS have their respective advantages and disadvantages.
While EM cannot remain effective under global data augmentations such as adversarial training, OPS is sensitive to local data augmentations such as Cutout. Based on our investigation, we combine EM and OPS to craft stronger unlearnable examples that remain imperceptible yet are more impervious, and consequently introduce CIFAR-10-S, which can serve as a new benchmark. Besides, we also discuss our method in multi-pixel scenarios. There are still questions to be addressed in the future. Besides shortcuts crafted deliberately for the purpose of data protection, there are also shortcuts that exist naturally due to the inevitable bias introduced during data collection. They can be the crux of network generalization on unseen data, and how to identify and avoid them (e.g., by designing data-dependent augmentation) is a challenging problem. We believe our work will shed light on the important impacts of shortcuts and provide inspiration for harnessing them in more practical applications.

ACKNOWLEDGEMENT

This work is partly supported by the National Natural Science Foundation of China (61977046, 61876107, U1803261), the Shanghai Science and Technology Program (22511105600), and the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102). Cihang Xie is supported by a gift from Open Philanthropy. In addition, we sincerely thank the reviewers for their valuable discussions.

A FURTHER INVESTIGATION ON SHORTCUT GENERATION

In Sec. 3.2, we craft OPS by perturbing a fixed position to a fixed color target for each class. What if we constrain only one of these two properties? In other words, if we perturb a fixed position to different color targets, or perturb different positions to a fixed color target, will a strong shortcut still be created? Out of curiosity, we explore these two properties individually. We use OPS-Position and OPS-Color to denote the former and the latter setting, respectively. For OPS-Position, after finding the optimal position for each class via Algorithm 1, we perturb the pixels at this position to random colors. For OPS-Color, we assign each class a predefined color (we add two colors, since there are only 8 boundary colors) and perturb pixels at different positions to this color. Note that there may be many positions where the original color is already the same as the predefined color; for each sample, we perturb the position where the original color is furthest from the predefined color (measured by the ℓ2 norm). As shown in Table A, the results indicate that if we constrain only one property, the resulting shortcuts are harder to capture. Besides, random perturbations at a fixed position serve as a stronger shortcut than a fixed color at different positions.

Table A: One-Pixel Shortcut generated under different constraints. If we perturb a fixed position to different color targets, or different positions to a fixed color target, the ability to degrade DNN training is less strong.
Training Data   | Clean | OPS   | OPS-Position | OPS-Color
Clean Test Acc. | 94.01 | 15.56 | 46.22        | 85.34

B BASELINE OF A RANDOM PIXEL

To evaluate the searching algorithm proposed in Sec. 3.2, we compare it with a baseline that randomly chooses a fixed position and a fixed color (denoted RandPix) for each class to craft the shortcut training set, and we report the clean accuracy over 10 runs in Table B. Since 15.56% lies outside the three-standard-deviation interval of RandPix, our OPS design is validated to be meaningful.
Table B: Comparison with a randomly chosen perturbed position and target color.
Training Data   | Clean | OPS   | RandPix
Clean Test Acc. | 94.01 | 15.56 | 50.67 ± 11.29

C SEARCHING OPS ON A SMALL PROPORTION OF DATA

In Algorithm 1, we search the entire dataset to generate OPS. What if data is gradually added to the dataset? In other words, will a perturbed position and target color found on a small proportion of the data serve as a strong shortcut on the entire dataset? Moreover, users might want to add their own noise before uploading their data to the dataset. From a practical standpoint, we further explore the potential of OPS with the following experiments. First, we take different proportions of training data from the CIFAR-10 training set to search for the perturbed position and target color; we then perturb the whole training set based on the result obtained from this small proportion and train DNNs on the perturbed data. Second, considering the scenario in which users add their own noise before uploading data, we additionally inject random noise sampled from the uniform distribution [−ϵ, ϵ] (here ϵ = 8/255, following the common setting in adversarial learning) into the unselected data. The results are shown in Table C, where we denote the first setting as Clean-Upload and the second as Noisy-Upload. Even when we use only 1% of the whole dataset to search for the perturbed position and target color, OPS is still able to degrade the clean accuracy of the trained DNN to 32.87%. Besides, the additional noise does not degrade the effectiveness of OPS. To some degree, these experiments demonstrate the generalization capability and robustness of OPS from a practical standpoint.

Table C: We use different proportions of data to search for the perturbed position and target color of OPS. Even with only 1% of the data, OPS still largely degrades the clean accuracy of the trained network.
Updating Type (searching proportion) | 1%    | 10%   | 100%
Clean-Upload                         | 32.87 | 17.07 | 15.56
Noisy-Upload                         | 32.10 | 17.30 | 15.56

D EXPLORATIONS ON L2 ADVERSARIAL TRAINING

In Sec. 4.3, we studied the effectiveness of different types of shortcuts under different training strategies, including ℓ∞ adversarial training and various data augmentations. For a more comprehensive exploration, we additionally evaluate them under ℓ2 adversarial training. Besides the commonly used setting (ϵ = 0.5), we also try larger attack budgets, as shown in Table D. Compared with ℓ∞, ℓ2 proves to be a more effective AT regime for alleviating OPS. Nevertheless, when the perturbation budget is not large enough, the network is still affected by the shortcuts, and further enlarging ϵ hurts the clean test accuracy to a greater extent. As a local shortcut, OPS displays stronger inertia under ℓ2 AT, which can be viewed as a global data augmentation. EM, although generated with ℓ∞ perturbations, is still sensitive to ℓ2 AT. These results conform to our discussion of the properties of local/global shortcuts and local/global data augmentations in Sec. 4.3.

Table D: ResNet-18 trained on different training data using ℓ2 adversarial training with different attack budgets.
Training Strategy | Clean | OPS   | EM
Standard          | 94.01 | 15.56 | 19.58
ℓ2 AT (ϵ = 0.5)   | 87.68 | 43.26 | 71.00
ℓ2 AT (ϵ = 1)     | 82.58 | 50.29 | 81.75
ℓ2 AT (ϵ = 2)     | 73.45 | 73.70 | 74.51
1. What is the focus and contribution of the paper regarding preventing unauthorized usage of data for DNN training?
2. What are the strengths of the proposed One-Pixel Shortcut (OPS) method?
3. What are the weaknesses of the paper, particularly in terms of clarity and experimentation?
4. Do you have any concerns or suggestions regarding the effectiveness of the OPS method against modern architectures and data augmentation strategies?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper aims at preventing data from being used (without authorization) for training DNNs. The authors propose the One-Pixel Shortcut (OPS) method, where all images of the same class have the same pixel location replaced with the same color. This fools the network during training, as it will use this replaced pixel, shared within the class, as a shortcut to learn the class instead of using actual discriminative features. The authors evaluate this method on CIFAR-10 with several architectures, from ResNets to vision transformers.

Strengths And Weaknesses
Strengths: The authors evaluate their method on several architectures. The authors evaluate various training strategies as defenses against their proposed attack, investigating data augmentations and adversarial training.
Weaknesses: Subsection 3.1 is unclear and could be reworked. Table 1 is not clear and could be better explained: the metric could be stated more clearly. In Table 5, Cutout and RandAugment individually seem quite effective against the one-pixel shortcut method. This table is missing the combination of Cutout and RandAugment; it seems that such a combination could strongly affect the proposed method (as the individual components are already effective). Furthermore, modern architectures such as ViT use a considerable amount of data augmentation, so it makes sense to study combinations of augmentations. As an easily overlooked training strategy that could serve as a means of defense against the proposed one-pixel shortcut, I would point to median filters and Gaussian blurring (a sketch follows this review). Using l-inf perturbations for adversarial training seems unfair, as by construction such perturbations cannot compensate for the pixel replacement of the proposed OPS method. Using l2 perturbations with medium-to-strong perturbation radii seems much more appropriate and interesting in this setting, especially as adversarial training with l-inf perturbations already lowers clean test accuracy due to the robustness/accuracy tradeoff observed with adversarial training.

Clarity, Quality, Novelty And Reproducibility
The paper is novel, clear and easy to read, except for Subsection 3.1. The paper gives implementation details and code, so it can be reproduced.
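To make the suggestion concrete, a sketch of the kind of preprocessing defense I have in mind, using standard SciPy filters (parameter choices are illustrative, not tuned):

import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def smooth_defense(image, mode="median"):
    # image: (H, W, C) array. A 3x3 median filter should overwrite an
    # isolated one-pixel outlier with a neighboring value; Gaussian
    # blurring spreads it out instead (sigma is per-axis, none across
    # channels).
    if mode == "median":
        return median_filter(image, size=(3, 3, 1))
    return gaussian_filter(image, sigma=(1.0, 1.0, 0.0))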
ICLR
Title
RPM: Generalizable Multi-Agent Policies for Multi-Agent Reinforcement Learning

Abstract
Despite recent advances in multi-agent reinforcement learning (MARL), MARL agents easily overfit the training environment and perform poorly in evaluation scenarios where other agents behave differently. Obtaining generalizable policies for MARL agents is thus necessary but challenging, mainly due to complex multi-agent interactions. In this work, we model the MARL problem with Markov Games and propose a simple yet effective method, called ranked policy memory (RPM), i.e., maintaining a look-up memory of policies to achieve good generalizability. The main idea of RPM is to train MARL policies by gathering massive multi-agent interaction data. In particular, we first rank each agent's policies by its training episode return, i.e., the episode return of each agent in the training environment; we then save the ranked policies in the memory; when an episode starts, each agent can randomly select a policy from the RPM as its behavior policy. Each agent uses the behavior policy to gather multi-agent interaction data for MARL training. This innovative self-play framework guarantees the diversity of multi-agent interactions in the training data. Experimental results on Melting Pot demonstrate that RPM enables MARL agents to interact with unseen agents in multi-agent generalization evaluation scenarios and to complete the given tasks, boosting performance by up to 818% on average.

1 INTRODUCTION

In Multi-Agent Reinforcement Learning (MARL) (Yang & Wang, 2020), each agent acts in a decentralized manner and interacts with other agents to complete given tasks or achieve specified goals via reinforcement learning (RL) (Sutton & Barto, 2018). In recent years, much progress has been achieved in MARL research (Vinyals et al., 2019; Jaderberg et al., 2019; Perolat et al., 2022). However, MARL agents trained with current methods tend to suffer from poor generalizability (Hupkes et al., 2020) in new environments. The generalizability issue is critical to real-world MARL applications (Leibo et al., 2021) but is mostly neglected in current research. In this work, we aim to train MARL agents that can adapt to new scenarios where other agents' policies are unseen during training. We illustrate a two-agent hunting game as an example in Fig. 1. The objective of the game is for the two agents to catch the stag together, as one agent acting alone cannot catch the stag and risks being killed. The agents may perform well in evaluation scenarios similar to the training environment, as shown in Fig. 1 (a) and (b), respectively, but when evaluated in scenarios different from the training ones, they often fail. As shown in Fig. 1 (c), the learning agent (called the focal agent, following Leibo et al. (2021)) is supposed to work together with the other agent (called the background agent, also following Leibo et al. (2021)), which is pre-trained and can capture both the hare and the stag. In this case, the focal agent will fail to capture the stag without help from its teammate. The teammate may be tempted to catch the hare alone and not cooperate, or may choose to cooperate with the focal agent only after capturing the hare. Thus, the focal agent should adapt to its teammate's behavior to catch the stag. However, the policy of the background agent is unseen to the focal agent during training. Therefore, without generalization, the agents trained as in Fig.
1 (left) cannot achieve an optimal policy in the new evaluation scenario.

∗ Wei Qiu did the work while interning at Sea AI Lab. Corresponding author.

Inspired by the fact that human learning is often accelerated by interacting with individuals of diverse skills and experiences (Meltzoff et al., 2009; Tomasello, 2010), we propose a novel method aimed at improving the generalization of MARL through the collection of diverse multi-agent interactions. Concretely, we first model the MARL problem with Markov Games (Littman, 1994) and then propose a simple yet effective method called ranked policy memory (RPM) to attain generalizable policies. The core idea of RPM is to maintain a look-up memory of the agents' policies during training. In particular, we first evaluate the trained agents' policies after each training update; we then rank the policies by their training episode returns and save them in the memory. In this way, we obtain policies at various performance levels. When starting an episode, an agent can access the memory and load a randomly sampled policy to replace its current behavior policy. The new ensemble of policies enables the agents in self-play to collect diversified experiences in the training environment. These diversified experiences contain many novel multi-agent interactions that can enhance the extrapolation capacity of MARL, thus boosting generalization performance. We note that an easy extension that incorporates different behavior properties as keys in RPM could potentially further enrich generalization, but we leave this for future work. We implement RPM on top of the state-of-the-art MARL algorithm MAPPO (Yu et al., 2021). To verify its effectiveness, we conduct large-scale experiments on Melting Pot (Leibo et al., 2021), a well-recognized benchmark for MARL generalization evaluation. The experimental results demonstrate that RPM significantly boosts the performance of generalized social behaviors by up to 818% on average and outperforms many baselines in a variety of multi-agent generalization evaluation scenarios. Our code, pictorial examples, videos and experimental results are available at this link: https://sites.google.com/view/rpm-iclr2023/.

2 PRELIMINARIES

Markov Games. We consider Markov Games (Littman, 1994) represented by a tuple G = ⟨N, S, A, O, P, R, γ, ρ⟩. N is a set of agents with size |N| = N; S is a set of states; A = ×_{i=1}^{N} A_i is the set of joint actions, with A_i denoting the action set of agent i; O = ×_{i=1}^{N} O_i is the observation set, with O_i denoting the observation set of agent i; P : S × A → S is the transition function; R = ×_{i=1}^{N} r_i is the reward function, where r_i : S × A → ℝ specifies the reward for agent i given the state and the joint action; γ is the discount factor; and the initial states are determined by a distribution ρ : S → [0, 1]. Given a state s ∈ S, each agent i ∈ N chooses its action u_i and obtains the reward r(s, u) along with its private observation o_i ∈ O_i, where u = {u_i}_{i=1}^{N} is the joint action. The joint policy of the agents is denoted π_θ = {π_{θ_i}}_{i=1}^{N}, where π_{θ_i} : S × A_i → [0, 1] is the policy of agent i. The objective of each agent is to maximize its total expected return R_i = Σ_{t=0}^{∞} γ^t r_t^i.

Multi-Agent RL. In MARL, multiple agents act in a multi-agent system to maximize their respective returns with RL.
Each agent's policy π_i is optimized by maximizing the following objective:

$$\mathcal{J}(\pi_i) \triangleq \mathbb{E}_{s_{0:\infty}\sim\rho_{G}^{0:\infty},\, a^{i}_{0:\infty}\sim\pi_i}\!\left[\sum_{t=0}^{\infty}\gamma^{t}\, r^{i}_{t}\right],$$

where J(π_i) is a performance measure for policy-gradient RL methods (Williams, 1992; Lillicrap et al., 2016; Fujimoto et al., 2018). Each policy's Q-value Q_i is optimized by minimizing the following regression loss (Mnih et al., 2015) with TD-learning (Sutton, 1984):

$$\mathcal{L}(\theta_i) \triangleq \mathbb{E}_{D'\sim\mathcal{D}}\!\left[\left(y^{i}_{t} - Q^{i}_{\theta_i}\!\left(s_t,\mathbf{u}_t,s^{i}_t,u^{i}_t\right)\right)^{2}\right], \quad \text{where}\ \ y^{i}_{t} = r^{i}_{t} + \gamma \max_{\mathbf{u}'} Q^{i}_{\bar{\theta}_i}\!\left(s_{t+1},\mathbf{u}',s^{i}_t,u^{i,\prime}\right).$$

Here θ_i are the parameters of agent i; θ̄_i are the parameters of the target Q_i, periodically copied from θ_i; and D′ is a sample from the replay buffer D.

3 PROBLEM FORMULATION

We introduce the formulation of MARL for training and evaluation in our problem. Our goal is to improve the generalizability of MARL policies in scenarios where the policies of teammates or opponents are unseen during training while the physical environment is unchanged. Following Leibo et al. (2021), the training environment is defined as a substrate. Each substrate is an N-agent partially observable Markov game G. Each agent optimizes its policy π_{θ_i} via the following protocol.

Definition 1 (Multi-Agent Training). N agents act in the substrate, which is denoted G. Each agent receives a partial environmental observation not known to other agents and aims to optimize its policy π_{θ_i} by maximizing its accumulated reward Σ_{t=0}^{∞} γ^t r_t^i. The performance of the joint policy π_θ = {π_{θ_i}}_{i=1}^{N} is measured by the mean individual return R̄(π_θ) = (1/N) Σ_{i=1}^{N} R(π_{θ_i}; G), where R(π_{θ_i}; G) measures the episode return of policy π_{θ_i} in game G for agent i.

To evaluate the trained MARL policies in an evaluation scenario G′, we follow the evaluation protocol defined by Leibo et al. (2021):

Definition 2 (Multi-Agent Evaluation). M (1 ≤ M ≤ N − 1) focal agents are selected from the N agents; these are the agents to be evaluated in evaluation scenarios. They are paired with N − M background agents, whose policies π_ϕ = {π_{ϕ_j}}_{j=1}^{N−M} were pre-trained with pseudo rewards in the same physical environment in which the policies π_θ are trained. To measure generalization performance in evaluation scenarios, we use the mean individual return of the focal agents as the performance measure: R̄({π_{θ_i}}_{i=1}^{M}) = (1/M) Σ_{i=1}^{M} R(π_{θ_i}; G′).

We show an example of our formulation in Fig. 2. Note that the focal agents cannot utilize the interaction data collected during evaluation to train or finetune their policies. Without training on the trajectories collected during evaluation, the focal agents must behave adaptively when interacting with the background agents to complete challenging multi-agent tasks. It is also worth noting that ad-hoc team building (Stone & Kraus, 2010; Gu et al., 2021) differs from our formulation in both training and evaluation; we discuss the differences in the related-works section (Paragraph 3, Sec. 7). In MARL, the focal agents need to interact adaptively with the background agents to complete given tasks. Formally, we define the objective of optimizing the performance of the focal agents, without exploiting their trajectories in the evaluation scenario to train the policies {π_{θ_j}}_{j=1}^{M}:

$$\max \mathcal{J}\left(\{\pi_{\theta_j}\}_{j=1}^{M}\right) \triangleq \max \mathbb{E}_{s_{0:\infty}\sim\rho_{G'}^{0:\infty},\; a^{j}_{0:\infty}\sim\{\pi_{\theta_j}\}_{j=1}^{M}}\!\left[\left.\sum_{t=0}^{\infty}\gamma^{t}\,\frac{1}{M}\sum_{j=1}^{M} r^{j}_{t}\;\right|\; G'\right]. \tag{1}$$

4 RANKED POLICY MEMORY

To improve the generalization of MARL, agents in the substrate must cover as many multi-agent interactions, i.e., as much data, as possible that resemble the unseen multi-agent interactions in the evaluation scenario.
However, current training paradigms, such as independent learning (Tampuu et al., 2017) and centralized training with decentralized execution (CTDE) (Oliehoek et al., 2008), cannot produce diversified multi-agent interactions, as all agents' policies are trained at the same pace. To this end, we propose the Ranked Policy Memory (RPM) method to provide diversified multi-agent behaviors.

RPM Building & Updating. We denote an RPM by Ψ, which consists of |R_max| entries, i.e., ranks, where R_max is the maximum training episode return (the episode return in the substrate). When the agents act in the substrate, they receive the training episode return R of all agents with policies {π_θ^i}_{i=1}^{N}. Then {π_θ^i}_{i=1}^{N} are saved into Ψ by appending the agents' policies to the corresponding memory slot: Ψ[R_e].add({π_e^i}_{i=1}^{N}). To avoid having too many entries in the policy memory, caused by continuous episode-return values, we discretize the training episode return. Each discretized entry κ covers a range [κ, κ + ψ), where ψ > 0 can be either an integer or a float. For a training episode return R, the corresponding entry κ is computed as:

$$\kappa = \begin{cases} \lfloor R/\psi \rfloor \cdot \mathbb{1}\{(R \bmod \psi) \neq 0\} \cdot \psi, & \text{if } R \ge 0,\\ \lfloor R/\psi \rfloor \cdot \psi, & \text{otherwise}, \end{cases} \tag{2}$$

where 1{·} is the indicator function and ⌊·⌋ is the floor function. Intuitively, discretizing R saves memory and groups policies of similar performance into the same rank, so that diversified policies can be saved and later sampled by the agents.

RPM Sampling. The memory Ψ stores diversified policies at different performance levels. We can sample policies from different ranks and assign one to each agent in the substrate to collect multi-agent trajectories for training. These diversified multi-agent trajectories can resemble the trajectories generated by interacting with agents that possess unseen policies in the evaluation scenario. At the beginning of an episode, we first randomly sample N keys with replacement and then randomly sample one policy for each key from the corresponding list. All agents' behavior policies are then replaced with the newly sampled policies for multi-agent interaction in the substrate, thus generating diversified multi-agent trajectories.

Algorithm 1: MARL with RPM
 1: Input: Initialize π_θ, Ψ, D, G and G′;
 2: Input: Initialize behavior policy π_θb ← π_θ;
 3: for each update do
 4:   if RPM sampling then
 5:     π_θb ← SamplingRPM(Ψ);
 6:   D ← GatherTrajectories(π_θb, G);
 7:   π_θ ← MARLTraining(π_θ, D);
 8:   Ψ ← UpdateRPM(π_θ, Ψ, G);
 9:   R̄ ← Evaluate(π_θ, G′);
10:   π_θb ← π_θ;
11: Output: π_θ.

The Workflow of RPM. We showcase an example of the RPM workflow in Fig. 3. There are three agents in training. The agents sample policies from RPM and then collect data in the substrate for training; the training episode return is then used to update RPM. During evaluation, agents 1 and 2 are selected as focal agents and agent 3 as the background agent. We present the pseudo-code of MARL training with RPM in Algorithm 1. In Lines 4-5, π_θb is updated by sampling policies from RPM. New trajectories D are collected in Line 6. π_θ is trained in Line 7 with the MARL method using the newly collected trajectories, and π_θb is then updated with the newly updated π_θ (Line 10). RPM is updated in Line 8. After that, the performance of π_θ is evaluated in the evaluation scenario G′ and the evaluation score R̄ is returned in Line 9.
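A minimal Python sketch of the memory operations described above (the class and method names are ours; the rank function simplifies Eq. (2)'s handling of exact multiples of ψ):

import math
import random
from collections import defaultdict

class RankedPolicyMemory:
    # Maps a discretized training episode return (the rank kappa) to a
    # list of saved policy checkpoints.
    def __init__(self, psi):
        self.psi = psi                    # rank interval of Eq. (2)
        self.memory = defaultdict(list)   # kappa -> list of policies

    def rank(self, episode_return):
        # Floor-discretization: each rank covers a return range of
        # width psi.
        return math.floor(episode_return / self.psi) * self.psi

    def update(self, episode_return, policies):
        # Save each agent's current policy under the episode-return rank.
        self.memory[self.rank(episode_return)].extend(policies)

    def sample(self, n_agents):
        # For each agent, sample a rank uniformly with replacement, then
        # one policy uniformly from that rank, as behavior policies.
        keys = list(self.memory)
        return [random.choice(self.memory[random.choice(keys)])
                for _ in range(n_agents)]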
Discussion. RPM leverages agents' previously trained models in substrates to cover as many patterns of multi-agent interaction as possible, so as to achieve generalization of MARL agents when they are paired with agents possessing unseen policies in evaluation scenarios. It uses the self-play framework for data collection. Self-play (Brown, 1951; Heinrich et al., 2015; Silver et al., 2018; Baker et al., 2019) maintains a memory of the opponent's previous policies for acquiring equilibria. RPM differs from other self-play methods in three aspects: (i) self-play utilizes an agent's previous policies to create fictitious opponents when real opponents are not available; by playing against these fictitious opponents, much fictitious data is generated for training the agents. In RPM, agents load their previous policies to diversify the multi-agent interactions, such as multi-agent coordination and social dilemmas, and all agents' policies are trained using the diversified multi-agent data. (ii) Self-play does not maintain explicit ranks for policies, while RPM maintains ranks of policies. (iii) Self-play was not introduced for the generalization of MARL, while RPM aims to improve the generalization of MARL. In Sec. 6, we also present the evaluation results of a self-play method.

5 MARL TRAINING

We incorporate RPM into the MARL training pipeline. We instantiate our method with MAPPO (Yu et al., 2021), a multi-agent variant of PPO (Schulman et al., 2017) that outperforms many MARL methods (Rashid et al., 2018; 2020; Wang et al., 2021a) in various complex multi-agent domains. In MAPPO, a central critic is maintained to utilize the concealed information of agents and boost multi-agent learning under non-stationarity. RPM introduces a novel way for agents to collect experiences/trajectories τ = {τ_i}_{i=1}^{N}. Each agent optimizes the following objective:

$$\mathcal{J}(\theta_i) = \mathbb{E}\left[\min\left(\eta^{t}_{i}(\theta^{t}_{i})\cdot A^{t}_{i},\ \operatorname{clip}\!\left(\eta^{t}_{i}(\theta^{t}_{i}),\,1-\epsilon,\,1+\epsilon\right)\cdot A^{t}_{i}\right)\right], \tag{3}$$

where η_i^t(θ_i^t) = π_{θ_i^t}(u_i^t | τ_i^t) / π_{θ_i^{old}}(u_i^t | τ_i^t) denotes the importance-sampling weight, clip(·) clips values outside the range [1 − ϵ, 1 + ϵ] with hyperparameter ϵ, and A_i^t is a generalized advantage estimate (GAE) (Schulman et al., 2015). To optimize the central critic V_ψ({o_i^t, u_i^t}_{i=1}^{N}), we mix the agents' observation-action pairs and output an N-head vector in which each head is the corresponding agent's value:

$$\mathcal{L}(\psi) := \mathbb{E}_{D'\sim\mathcal{D}}\left[\left(y_t - V_{\psi}\!\left(\{o^{t}_{i},u^{t}_{i}\}_{i=1}^{N}\right)\right)^{2}\right], \tag{4}$$

where y_t = [Σ_{l=0}^{k−1} γ^l r_i^{t+l} + γ^k V_ψ̄({o_i^{t+k}, u_i^{t+k}}_{i=1}^{N})[i]]_{i=1}^{N} is a vector of k-step returns, ψ̄ denotes the target critic parameters, and D′ is a sample from the replay buffer D. In complex scenarios such as Melting Pot, we use an agent's observation as the critic's input, since one agent's action would not impact other agents' returns and the global states contain redundant information that deteriorates multi-agent learning. We present the whole training process and the network architectures of the agent and the central critic in Appx. D.
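For reference, a minimal PyTorch sketch of the per-agent clipped objective in Eq. (3) (tensor names are ours, and advantages are assumed to be precomputed with GAE):

import torch

def clipped_policy_loss(log_probs, old_log_probs, advantages, eps=0.2):
    # log_probs / old_log_probs: log pi(u_t | tau_t) under the current
    # and behavior policies; advantages: GAE estimates A_t. Returns a
    # loss to minimize (the negative of the clipped surrogate).
    ratio = torch.exp(log_probs - old_log_probs)   # importance weight eta
    surrogate = torch.min(ratio * advantages,
                          torch.clamp(ratio, 1 - eps, 1 + eps) * advantages)
    return -surrogate.mean()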
6 EXPERIMENTS

In this section, to verify the effectiveness of RPM in improving the generalization of MARL, we conduct extensive experiments on Melting Pot and present the empirical results. We first introduce Melting Pot, the baselines and the experimental setup; we then present the main results of RPM; to demonstrate that ψ is important for RPM, we conduct ablation studies; and we finally showcase a case study to visualize RPM. To sum up, we answer the following questions: Q1: Is RPM effective in boosting the generalization performance of MARL agents? Q2: How does the value of ψ impact RPM training? Q3: Does RPM gather diversified policies and trajectories?

6.1 EXPERIMENTAL SETUP

Melting Pot. To demonstrate that RPM enables MARL agents to learn generalizable behaviors, we carry out extensive experiments on DeepMind's Melting Pot (Leibo et al., 2021). Melting Pot is a suite of testbeds for the generalization of MARL methods. It proposes a novel evaluation pipeline across various domains: all MARL agents are trained in the substrate; during evaluation, some agents are selected as focal agents while the remaining agents become background agents (whose pre-trained MARL models are loaded); and the evaluation scenarios share the same physical properties as the substrates. Melting Pot environments possess many properties, such as temporal coordination and free riding, as depicted in Table 1. An agent performing well in such environments indicates that its behaviors exhibit these properties. In Fig. 4, the agent's observation is shown in the green box to the lower left of the state (i.e., the whole image); the agent is in the lower middle of the observation; and the deep neural network architecture of the agent's policy is shown on the left. More information about substrates, scenarios, neural network architectures and training details can be found in Appx. D.

Baselines. Our baselines are MAPPO (Yu et al., 2021), MAA2C (Papoudakis et al., 2021), OPRE (Vezhnevets et al., 2020), heuristic fictitious self-play (HFSP) (Heinrich, 2017; Berner et al., 2019) and RandNet (Lee et al., 2019). MAPPO and MAA2C are MARL methods that achieve outstanding performance in various multi-agent scenarios (Papoudakis et al., 2021). OPRE was proposed for the generalization of MARL. RandNet is a general method for the generalization of RL that introduces a novel component into the convolutional neural network. HFSP is a general self-play method for obtaining equilibria in competitive games; we apply it using the policies saved by RPM.

Training setup. We use 6 representative substrates (Fig. 5) to train MARL policies and choose evaluation scenarios from each substrate as our evaluation testbed. The properties of the environments are listed in Table 1. We train agents in Melting Pot substrates for 200 million frames with 3 random seeds for all methods. Our training framework is distributed, with 30 CPU actors to collect experiences and 1 GPU for the learner (a sketch of this loop follows below). We implement our actors with Ray (Moritz et al., 2018) and the learner with EPyMARL (Papoudakis et al., 2021). We use the mean and standard deviation to measure the performance of all methods: the bold lines in all figures are mean values, and the shaded areas denote standard deviations. Due to a limited computation budget, it would be redundant to compare our method with other methods such as QMIX (Rashid et al., 2018) and MADDPG (Lowe et al., 2017), as MAPPO outperforms them. All experiments are conducted on NVIDIA A100 GPUs.
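As referenced above, a highly simplified sketch of the distributed collection loop (only the Ray actor pattern is real; make_env, the policy act interface, the environment API, and rpm.sample are placeholders, and ray.init() is assumed to have been called):

import ray

@ray.remote
class RolloutActor:
    # One CPU worker: runs an episode in the substrate with behavior
    # policies sampled from RPM and returns the trajectory plus its
    # training episode return.
    def __init__(self, make_env):
        self.env = make_env()

    def rollout(self, policies):
        obs = self.env.reset()
        done, traj, ep_return = False, [], 0.0
        while not done:
            actions = [pi.act(o) for pi, o in zip(policies, obs)]
            next_obs, rewards, done, _ = self.env.step(actions)
            traj.append((obs, actions, rewards))
            obs = next_obs
            ep_return += sum(rewards) / len(rewards)
        return traj, ep_return

# Learner side (illustrative): fan out the 30 actors per update.
# actors = [RolloutActor.remote(make_env) for _ in range(30)]
# batches = ray.get([a.rollout.remote(rpm.sample(n_agents)) for a in actors])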
6.2 EXPERIMENT RESULTS

To answer Q1, we present the evaluation results on 17 Melting Pot evaluation scenarios in Fig. 6. Our method boosts MARL in various evaluation scenarios with different properties, as shown in Table 1. In Chicken Game (CG) 1-2 (the number denotes the index of the Chicken Game evaluation scenario), RPM outperforms its counterparts by a convincing margin. HFSP performs no better than RPM. RandNet obtains a mean evaluation return of around 15 on Chicken Game (CG) 1. MAA2C and OPRE perform nearly randomly (the red dashed lines indicate the random result) in the two scenarios. In Pure Coordination (PC) 1-3, Rational Coordination (RC) 1-3 and Prisoners' Dilemma (PD) 1-3, most baselines perform poorly. In Stag Hunt (SH) 1-3 and Clean Up (CU) 1-2, MAPPO and MAA2C perform unsatisfactorily. We also find that HFSP attains competitive performance in Stag Hunt (SH) 1-3; however, it performs poorly in Pure Coordination (PC) 1-3, Rational Coordination (RC) 1-3 and Prisoners' Dilemma (PD) 1-3. Therefore, the vanilla self-play method cannot be directly applied to improve the generalization of MARL methods. In summary, RPM boosts performance by up to around 818% on average compared with MAPPO on 6 evaluation scenarios. To answer Q2, we present experimental results on the impact of ψ and on the sampling ratio in HFSP in the following.

Figure 6: Evaluation results of RPM and baselines in 17 scenarios (x-axis: training steps in millions; y-axis: mean evaluation return). The red dashed horizontal lines indicate the results of a random policy. The optimal (opt) values shown in each sub-figure were gathered from (Leibo et al., 2021) and were generated by an exploiter, trained in the evaluation scenarios with RL methods for 1,000M training time steps.

Figure 7: Histograms of training episode returns (one panel per substrate: Chicken Game, Stag Hunt, Clean Up, Pure Coordination, Prisoners' Dilemma, Rational Coordination).

6.3 ABLATION STUDY

The Impact of ψ. To investigate how the value of ψ affects RPM's performance, we conduct ablation studies by (i) removing the ranks and sampling directly from the checkpoints, and (ii) reducing the number of ranks by changing the value of ψ. As shown in Fig. 8, without ranks (i.e., sampling policies randomly without ranks), RPM cannot attain stable performance in some evaluation scenarios; especially in Pure Coordination (PC) 1-3, the result is low and has a large variance. In RPM, choosing the right interval ψ improves performance, as shown in the results for Pure Coordination (PC) 1-3 and Prisoners' Dilemma (PD) 1-3, demonstrating that the value of ψ is important for RPM. We summarize the results and values of ψ in Table 2 and Table 3.

The Sampling Ratio in HFSP. HFSP shows comparable results in some scenarios in Figure 6, where its sampling ratio is 0.3. We are interested in studying the impact of the sampling ratio in HFSP on evaluation performance. We conduct experiments in CU 1 and 2, PC 1 and 3, and PD 1 and 3, with the sampling-ratio list [0.9, 0.7, 0.5, 0.3, 0.1]. We use the default training setup and 3 random seeds.
HFSP shows comparable results in PC 2 and 3, but its performance is poor in CU 1 and 2 and PD 2 and 3. As shown in Figure 9, HFSP relies heavily on the sampling ratio and would have to be carefully tuned on each substrate to attain good performance, which is not feasible. In contrast, RPM is stable (with a sampling ratio of 0.5) on all substrates. HFSP can also perform well in substrates such as PC and PD, where the return-checkpoint count distribution is more uniform. The absence of ranks leads to the frequent sampling of policies with high count values in substrates that have a skewed return-checkpoint count distribution, thereby reducing the diversity of training data; such distributions typically comprise a large number of policies with suboptimal performance.

6.4 CASE STUDY

We showcase how RPM helps to train the focal agents to choose the right behaviors in the evaluation scenario after training in the substrate. To illustrate the trained performance of RPM agents, we use the RPM agent trained on Stag Hunt and run the evaluation on Stag Hunt 1. In Stag Hunt, there are 8 agents. Each agent collects resources that represent ‘hare’ (red) or ‘stag’ (green) and compares inventories in an interaction, i.e., an encounter. The outcome of an encounter is resolved as in the classic Stag Hunt matrix game. In this environment, agents face a tension between the reward for the team and the risk for the individual. In Stag Hunt 1, one focal agent interacts with seven pretrained background agents. All background agents were trained to play the ‘stag’ strategy during the interaction.¹ The optimal policy for the focal agent is also to play ‘stag’. However, it is challenging for agents to detect other agents’ strategies, since such behavior may not persist in the substrate. Luckily, RPM enables focal agents to behave correctly in this scenario.

To answer Q3, we present the analysis of RPM on the substrate Stag Hunt and its evaluation scenario SH 1 in Fig. 10. In Fig. 10 (b), the number of keys in RPM grows monotonically during training and the maximum number of keys exceeds 20, showing that agents trained with RPM discover many novel patterns of multi-agent interaction; new keys are created and the corresponding trained models are saved in RPM. Meanwhile, the evaluation performance on SH 1 is also increasing, as depicted in Fig. 10 (a). In Fig. 10 (c), it is interesting to see that the distribution of RPM keys expands during training: over the last 25 million training steps, the final distribution of RPM keys covers policies of all performance levels, ranging from 0 to 14. By utilizing RPM, agents can collect diversified multi-agent trajectories for multi-agent training. Fig. 10 (d) shows the final histogram of RPM keys after training. Over 600 trained policies have small key values. Since agents should explore the environment at the early stage of training, it is reasonable to find that many policies stored under RPM keys have low training episode returns. After 50 million training steps, RPM holds more policies with higher training episode returns. Note that the maximum training episode return over RPM keys is above 14, while the maximum mean evaluation return of RPM shown in Fig. 10 (a) is around 14.

¹This preference was trained with pseudo rewards by Leibo et al. (2021); the trained models are available at https://github.com/deepmind/meltingpot
Our experiments show that training policies with good performance in the substrate is crucial for improving generalization performance in the evaluation scenarios. When MARL agents perform poorly in the substrate, the evaluation performance will also be inferior or random, making it hard to obtain diversified policies. We show the results in Appx. E.

7 RELATED WORKS

Recent advances in MARL (Yang & Wang, 2020; Zhang et al., 2021) have demonstrated success in various complex multi-agent domains, including multi-agent coordination (Lowe et al., 2017; Rashid et al., 2018; Wang et al., 2021b), real-time strategy (RTS) games (Jaderberg et al., 2019; Berner et al., 2019; Vinyals et al., 2019), social dilemmas (Leibo et al., 2017; Wang et al., 2018; Jaques et al., 2019; Vezhnevets et al., 2020), multi-agent communication (Foerster et al., 2016; Yuan et al., 2022), asynchronous multi-agent learning (Amato et al., 2019; Qiu et al., 2022), open-ended environments (Stooke et al., 2021), autonomous systems (Hüttenrauch et al., 2017; Peng et al., 2021) and game-theoretic equilibrium solving (Lanctot et al., 2017; Perolat et al., 2022). Despite these strides, training generalizable behaviors in MARL remains largely uninvestigated. Recently, generalization in RL (Packer et al., 2018; Song et al., 2019; Ghosh et al., 2021; Lyle et al., 2022) has achieved much progress in domain adaptation (Higgins et al., 2017) and procedurally generated environments (Lee et al., 2019; Igl et al., 2020; Zha et al., 2020). However, there are few works on generalization in MARL domains (Carion et al., 2019; Vezhnevets et al., 2020; Mahajan et al., 2022; McKee et al., 2022). Vezhnevets et al. (2020) propose a hierarchical MARL method for agents to play against opponents they have not seen during training; however, the evaluation is limited to simple competitive scenarios. Mahajan et al. (2022) studied generalization in MARL empirically and proposed theoretical findings based on successor features (Dayan, 1993), but no method for achieving generalization in MARL was proposed in that work.

Ad-hoc team building (Stone & Kraus, 2010; Gu et al., 2021) models the multi-agent problem as a single-agent learning task. In ad-hoc team building, one ad-hoc agent is trained by interacting with agents that have fixed pretrained policies, so the non-stationarity issue is not severe; in our formulation, non-stationarity is the main obstacle to MARL training. In addition, in ad-hoc team building only one ad-hoc agent is evaluated by interacting with agents unseen during training, whereas there can be more than one focal agent in our formulation as defined in Definition 2, making our formulation more general and challenging. There has been growing interest in applying self-play to solve complex games (Heinrich et al., 2015; Silver et al., 2018; Hernandez et al., 2019; Baker et al., 2019); however, its value in enhancing the generalization of MARL agents has yet to be examined. Due to space constraints, we discuss meta-learning (Al-Shedivat et al., 2018; Kim et al., 2021) and population-based training (Strouse et al., 2021; Lupu et al., 2021; Tang et al., 2021) works in Appx. F.

8 CONCLUSION, LIMITATIONS AND FUTURE WORK

In this paper, we consider the problem of achieving generalizable behaviors in MARL. We first model the problem as a Markov game. To train agents that can interact with agents possessing unseen policies,
we propose a simple yet effective method, RPM, that collects diversified multi-agent interaction data. We save policies in RPM, ranked by their training episode returns. Empirically, RPM significantly boosts the performance of MARL agents in various Melting Pot evaluation scenarios. RPM’s performance depends on an appropriate value of ψ, and several attempts may be needed to determine it. We are interested in discovering broader measures for ranking policies that do not explicitly rely on the training episode return. Given the growing interest in planning in RL, especially model-based RL, we are also interested in applying planning and opponent/teammate modelling to attain generalizable MARL policies in future work. Since agents engage in complex interactions in multi-agent scenarios, devising novel self-play methods is another future direction.

ETHICS STATEMENT We addressed the relevant aspects in our conclusion and have no conflicts of interest to declare.

REPRODUCIBILITY STATEMENT We provide detailed descriptions of our experiments in the appendix and list all relevant parameters in Table 4 and Table 5 in Appx. D. The code can be found at this link: https://sites.google.com/view/rpm-iclr2023/.

ACKNOWLEDGMENTS We would like to thank the anonymous reviewers for their suggestions. We thank Xinyi Wan, Jiahao Ji and Xiangfan Li of the infrastructure team at Sea AI Lab for their support. Wei Qiu and Bo An are supported by the National Research Foundation, Singapore under its Industry Alignment Fund – Pre-positioning (IAF-PP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
1. What is the main contribution of the paper regarding multi-behavior policies in a single agent? 2. What are the strengths and weaknesses of the proposed approach, particularly in its assumption about returns and its computational intensity? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any questions or concerns regarding the methodology, such as agent selection and handling instability during training?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors propose a method for containing multiple distinct behaviours (or policies) in a single agent. To do this they store a dictionary of policies observed during training, where the key relates to returns observed during training (this is discretised to keep the buffer size reasonable). This is actually very similar to simply holding a population of agents during training. At the start of training the dictionary (size N) is initialised with N random policies, which are all selected randomly and trained with. RPM uses self-play from its buffer to train more agents. Unlike self-play or population-play, there is no risk of degeneracy, as values are only overwritten for agents which produce the same return. Strengths And Weaknesses Strengths Idea is simple and a good first step in the Melting Pot game Good ablations Weaknesses No explanation of the RPM update rule. Method makes a major assumption that return is enough to distinguish behaviour. A toy example of where this fails would be that grim trigger and tit-for-tat would get similar returns in IPD against a defecting agent but intuitively have some variance in behaviour (e.g. tit-for-tat is forgiving but grim trigger is not) Unclear how large an RPM buffer should be This method is computationally much more intensive than the baselines; some analysis counting the number of timesteps used in training, or when convergence is achieved, would also be reasonable. Clarity, Quality, Novelty And Reproducibility Clarity It is unclear how agents are chosen at evaluation time to be entered into the substrate. The focal agent's return is clearly dependent on the co-players it is trained with; how do you handle this instability during training? Quality Typos in "newly collected trajecotries and πθb" In the ablation diagrams you refer to \phi type III but do not explain what this is. Novelty Method is sufficiently novel; however, it is very similar to Fictitious Co-Play: the agents stored in the buffer are very similar to a population of agents, and thus explaining the differences in the related work could be useful. Reproducibility No code is provided, nor is the method clear enough for reproducibility.
ICLR
Title RPM: Generalizable Multi-Agent Policies for Multi-Agent Reinforcement Learning Abstract Despite recent advances in multi-agent reinforcement learning (MARL), MARL agents easily overfit the training environment and perform poorly in evaluation scenarios where other agents behave differently. Obtaining generalizable policies for MARL agents is thus necessary but challenging, mainly due to complex multi-agent interactions. In this work, we model the MARL problem with Markov Games and propose a simple yet effective method, called ranked policy memory (RPM), which maintains a look-up memory of policies to achieve good generalizability. The main idea of RPM is to train MARL policies by gathering massive multi-agent interaction data. In particular, we first rank each agent’s policies by its training episode return, i.e., the episode return of each agent in the training environment; we then save the ranked policies in the memory; when an episode starts, each agent can randomly select a policy from the RPM as its behavior policy. Each agent uses the behavior policy to gather multi-agent interaction data for MARL training. This self-play framework ensures the diversity of multi-agent interactions in the training data. Experimental results on Melting Pot demonstrate that RPM enables MARL agents to interact with unseen agents in multi-agent generalization evaluation scenarios and complete the given tasks, significantly boosting performance by up to 818% on average. 1 INTRODUCTION In Multi-Agent Reinforcement Learning (MARL) (Yang & Wang, 2020), each agent acts in a decentralized manner and interacts with other agents to complete given tasks or achieve specified goals via reinforcement learning (RL) (Sutton & Barto, 2018). In recent years, much progress has been achieved in MARL research (Vinyals et al., 2019; Jaderberg et al., 2019; Perolat et al., 2022). However, MARL agents trained with current methods tend to suffer from poor generalizability (Hupkes et al., 2020) in new environments. The generalizability issue is critical to real-world MARL applications (Leibo et al., 2021), but is mostly neglected in current research. In this work, we aim to train MARL agents that can adapt to new scenarios where other agents’ policies are unseen during training. We illustrate a two-agent hunting game as an example in Fig. 1. The game’s objective is for the two agents to catch the stag together, as one agent acting alone cannot catch the stag and risks being killed. The agents may perform well in evaluation scenarios similar to the training environment, as shown in Fig. 1 (a) and (b), respectively, but when evaluated in scenarios different from the training ones, they often fail. As shown in Fig. 1 (c), the learning agent (called the focal agent, following Leibo et al. (2021)) is supposed to work together with the other agent (called the background agent, also following Leibo et al. (2021)), which is pre-trained and can capture both the hare and the stag. In this case, the focal agent would fail to capture the stag without help from its teammate. The teammate may be tempted to catch the hare alone and not cooperate, or may only choose to cooperate with the focal agent after capturing the hare. Thus, the focal agent should adapt to its teammate’s behavior to catch the stag. However, the policy of the background agent is unseen by the focal agent during training. Therefore, without generalization, the agents trained as in Fig.
1 (left) cannot achieve an optimal policy in the new evaluation scenario. (∗Wei Qiu did the work while interning at Sea AI Lab. Corresponding author.) Inspired by the fact that human learning is often accelerated by interacting with individuals of diverse skills and experiences (Meltzoff et al., 2009; Tomasello, 2010), we propose a novel method aimed at improving the generalization of MARL through the collection of diverse multi-agent interactions. Concretely, we first model the MARL problem with Markov Games (Littman, 1994) and then propose a simple yet effective method called ranked policy memory (RPM) to attain generalizable policies. The core idea of RPM is to maintain a look-up memory of policies for the agents during training. In particular, we first evaluate the trained agents’ policies after each training update. We then rank the trained agents’ policies by their training episode returns and save them in the memory. In this way, we obtain policies of various performance levels. When starting an episode, each agent can access the memory and load a randomly sampled policy to replace its current behavior policy. The new ensemble of policies enables the agents in self-play to collect diversified experiences in the training environment. These diversified experiences contain many novel multi-agent interactions that can enhance the extrapolation capacity of MARL, thus boosting generalization performance. We note that an easy extension incorporating different behavior properties as the keys in RPM could further enrich generalization, but we leave it for future work. We implement RPM on top of the state-of-the-art MARL algorithm MAPPO (Yu et al., 2021). To verify its effectiveness, we conduct large-scale experiments with Melting Pot (Leibo et al., 2021), a well-recognized benchmark for MARL generalization evaluation. The experimental results demonstrate that RPM significantly boosts the performance of generalized social behaviors by up to 818% on average and outperforms many baselines in a variety of multi-agent generalization evaluation scenarios. Our code, pictorial examples, videos and experimental results are available at this link: https://sites.google.com/view/rpm-iclr2023/. 2 PRELIMINARIES Markov Games. We consider Markov Games (Littman, 1994) represented by a tuple $\mathcal{G} = \langle \mathcal{N}, \mathcal{S}, \mathcal{A}, \mathcal{O}, P, R, \gamma, \rho \rangle$. $\mathcal{N}$ is a set of agents with size $|\mathcal{N}| = N$; $\mathcal{S}$ is a set of states; $\mathcal{A} = \times_{i=1}^{N} \mathcal{A}_i$ is the set of joint actions, with $\mathcal{A}_i$ denoting the action set of agent $i$; $\mathcal{O} = \times_{i=1}^{N} \mathcal{O}_i$ is the observation set, with $\mathcal{O}_i$ denoting the observation set of agent $i$; $P: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ is the transition function; $R = \times_{i=1}^{N} r_i$ is the reward function, where $r_i: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ specifies the reward for agent $i$ given the state and the joint action; $\gamma$ is the discount factor; and the initial states are determined by a distribution $\rho: \mathcal{S} \rightarrow [0, 1]$. Given a state $s \in \mathcal{S}$, each agent $i \in \mathcal{N}$ chooses its action $u_i$ and obtains the reward $r(s, \mathbf{u})$ along with its private observation $o_i \in \mathcal{O}_i$, where $\mathbf{u} = \{u_i\}_{i=1}^{N}$ is the joint action. The joint policy of the agents is denoted $\pi_\theta = \{\pi_{\theta_i}\}_{i=1}^{N}$, where $\pi_{\theta_i}: \mathcal{S} \times \mathcal{A}_i \rightarrow [0, 1]$ is the policy of agent $i$. The objective of each agent is to maximize its total expected return $R_i = \sum_{t=0}^{\infty} \gamma^t r^i_t$. Multi-Agent RL. In MARL, multiple agents act in a shared multi-agent system to maximize their respective returns with RL.
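For concreteness, the tuple above can be written as a minimal interface sketch (our own illustration, not code from the paper; all names are hypothetical):

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

State = Any
Action = Any
Obs = Any

@dataclass
class MarkovGame:
    """G = <N, S, A, O, P, R, gamma, rho> from the definition above, as callables."""
    n_agents: int                                                   # |N| = N
    transition: Callable[[State, Sequence[Action]], State]          # P : S x A -> S
    rewards: Callable[[State, Sequence[Action]], Sequence[float]]   # r_i(s, u) for each agent i
    observe: Callable[[State, int], Obs]                            # o_i in O_i for agent i
    gamma: float                                                    # discount factor
    sample_initial_state: Callable[[], State]                       # draws s_0 ~ rho
```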
Each agent’s policy $\pi_i$ is optimized by maximizing the following objective: $\mathcal{J}(\pi_i) \triangleq \mathbb{E}_{s_{0:\infty} \sim \rho^{0:\infty}_{\mathcal{G}},\, a^i_{0:\infty} \sim \pi_i} \left[ \sum_{t=0}^{\infty} \gamma^t r^i_t \right]$, where $\mathcal{J}(\pi_i)$ is a performance measure for policy-gradient RL methods (Williams, 1992; Lillicrap et al., 2016; Fujimoto et al., 2018). Each policy’s Q-value $Q^i$ is optimized by minimizing the following regression loss (Mnih et al., 2015) with TD-learning (Sutton, 1984): $\mathcal{L}(\theta_i) \triangleq \mathbb{E}_{D' \sim \mathcal{D}} \left[ \left( y^i_t - Q^i_{\theta_i}(s_t, \mathbf{u}_t, s^i_t, u^i_t) \right)^2 \right]$, where $y^i_t = r^i_t + \gamma \max_{\mathbf{u}'} Q^i_{\bar{\theta}_i}(s_{t+1}, \mathbf{u}', s^i_t, u^{i\prime})$. Here $\theta_i$ are the parameters of the agents, $\bar{\theta}_i$ are the parameters of the target network $Q^i$ and are periodically copied from $\theta_i$, and $D'$ is a sample from the replay buffer $\mathcal{D}$. 3 PROBLEM FORMULATION We introduce the formulation of MARL for training and evaluation in our problem. Our goal is to improve the generalizability of MARL policies in scenarios where the policies of other agents or opponents are unseen during training while the physical environment is unchanged. Following Leibo et al. (2021), the training environment is called a substrate. Each substrate is an N-agent partially observable Markov game $\mathcal{G}$. Each agent optimizes its policy $\pi_{\theta_i}$ via the following protocol. Definition 1 (Multi-Agent Training). N agents act in the substrate, denoted $\mathcal{G}$. Each agent receives a partial environmental observation not known to other agents and aims to optimize its policy $\pi_{\theta_i}$ by maximizing its accumulated rewards $\sum_{t=0}^{\infty} \gamma^t r^i_t$. The performance of the joint policy $\pi_\theta = \{\pi_{\theta_i}\}_{i=1}^{N}$ is measured by the mean individual return $\bar{R}(\pi_\theta) = \frac{1}{N} \sum_{i=1}^{N} R(\pi_{\theta_i}; \mathcal{G})$, where $R(\pi_{\theta_i}; \mathcal{G})$ is the episode return of policy $\pi_{\theta_i}$ in game $\mathcal{G}$ for agent $i$. In order to evaluate the trained MARL policies in an evaluation scenario $\mathcal{G}'$, we follow the evaluation protocol defined by Leibo et al. (2021): Definition 2 (Multi-Agent Evaluation). M ($1 \leq M \leq N-1$) focal agents are selected from the N agents. The focal agents are the agents to be evaluated in evaluation scenarios. They are paired with $N-M$ background agents whose policies $\pi_\phi = \{\pi_{\phi_j}\}_{j=1}^{N-M}$ were pre-trained with pseudo rewards in the same physical environment where the policies $\pi_\theta$ are trained. To measure generalized performance in evaluation scenarios, we use the mean individual return of the focal agents: $\bar{R}(\{\pi_{\theta_i}\}_{i=1}^{M}) = \frac{1}{M} \sum_{i=1}^{M} R(\pi_{\theta_i}; \mathcal{G}')$. We show an example of our formulation in Fig. 2. Note that the focal agents cannot utilise the interaction data collected during evaluation to train or finetune their policies. Without training on the trajectories collected during evaluation, the focal agents must behave adaptively when interacting with the background agents to complete challenging multi-agent tasks. It is also worth noting that ad-hoc team building (Stone & Kraus, 2010; Gu et al., 2021) differs from our formulation in both training and evaluation; we discuss the differences in the related works section (Paragraph 3, Sec. 7). In MARL, the focal agents need to adaptively interact with background agents to complete given tasks. Formally, we define the objective of optimizing the performance of the focal agents, without exploiting their trajectories in the evaluation scenario, for training the policies $\{\pi_{\theta_j}\}_{j=1}^{M}$: $\max \mathcal{J}(\{\pi_{\theta_j}\}_{j=1}^{M}) \triangleq \max \mathbb{E}_{s_{0:\infty} \sim \rho^{0:\infty}_{\mathcal{G}'},\, a^j_{0:\infty} \sim \{\pi_{\theta_j}\}_{j=1}^{M}} \left[ \sum_{t=0}^{\infty} \gamma^t \frac{1}{M} \sum_{j=1}^{M} r^j_t \,\middle|\, \mathcal{G}' \right]$. (1) 4 RANKED POLICY MEMORY To improve the generalization of MARL, agents in the substrate must cover as many multi-agent interactions, i.e., as much data, as possible that resemble the unseen multi-agent interactions in the evaluation scenario.
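To make the evaluation protocol of Definition 2 and Eq. (1) concrete, here is a minimal sketch of computing the mean individual return of the focal agents (our illustration; the `env.reset`/`env.step` and `policy.act` interfaces, and the convention that focal agents occupy the first M slots, are assumptions, not the authors' API):

```python
import numpy as np

def evaluate_focal_agents(env, focal_policies, background_policies,
                          episodes: int = 10, gamma: float = 1.0) -> float:
    """Mean individual return of the M focal agents (Definition 2).

    Assumes env.reset() returns a list of per-agent observations and
    env.step(actions) returns (observations, rewards, done).
    """
    policies = list(focal_policies) + list(background_policies)  # N agents total
    M = len(focal_policies)
    totals = np.zeros(M)
    for _ in range(episodes):
        obs = env.reset()
        done, t = False, 0
        while not done:
            actions = [pi.act(o) for pi, o in zip(policies, obs)]
            obs, rewards, done = env.step(actions)
            totals += (gamma ** t) * np.asarray(rewards[:M])  # focal rewards only
            t += 1
    return float(totals.mean()) / episodes  # bar{R}, averaged over agents and episodes
```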
However, current training paradigms, like independent learning (Tampuu et al., 2017) and centralized training with decentralized execution (CTDE) (Oliehoek et al., 2008), cannot yield diversified multi-agent interactions, as the agents’ policies are trained at the same pace. To this end, we propose the Ranked Policy Memory (RPM) method to provide diversified multi-agent behaviors.

RPM Building & Updating. We denote an RPM by $\Psi$, which consists of $|R_{\max}|$ entries, i.e., ranks, where $R_{\max}$ is the maximum training episode return (the episode return in the substrate). When the agents act in the substrate, they receive the training episode return $R$ of all agents with policies $\{\pi^i_\theta\}_{i=1}^{N}$. Then $\{\pi^i_\theta\}_{i=1}^{N}$ are saved into $\Psi$ by appending the agents’ policies to the corresponding memory slot, $\Psi[\kappa].\text{add}(\{\pi^i_\theta\}_{i=1}^{N})$. To avoid having too many entries in the policy memory due to continuous episode return values, we discretize the training episode return. Each discretized entry $\kappa$ covers a range $[\kappa, \kappa + \psi)$, where $\psi > 0$ can be either an integer or a float. For a training episode return $R$, the corresponding entry $\kappa$ is calculated by:

$\kappa = \begin{cases} \lfloor R/\psi \rfloor \times \mathbb{1}\{(R \bmod \psi) \neq 0\} \times \psi, & \text{if } R \geq 0, \\ \lfloor R/\psi \rfloor \times \psi, & \text{otherwise}, \end{cases}$ (2)

where $\mathbb{1}\{\cdot\}$ is the indicator function and $\lfloor \cdot \rfloor$ is the floor function. Intuitively, discretizing $R$ saves memory and groups policies of similar performance into the same rank. Diversified policies can therefore be saved and later sampled for the agents.

RPM Sampling. The memory $\Psi$ stores diversified policies with different levels of performance. We can sample policies of different ranks and assign one to each agent in the substrate to collect multi-agent trajectories for training. These diversified multi-agent trajectories can resemble trajectories generated by interaction with agents possessing unknown policies in the evaluation scenario. At the beginning of an episode, we first randomly sample N keys with replacement and then randomly sample one policy for each key from the corresponding list. All agents’ policies are replaced with the newly sampled policies for multi-agent interaction in the substrate, thus generating diversified multi-agent trajectories.

Algorithm 1: MARL with RPM
1 Input: Initialize πθ, Ψ, D, G and G′;
2 Input: Initialize behavior policy πθb ← πθ;
3 for each update do
4   if RPM sampling then
5     πθb ← SamplingRPM(Ψ);
6   D ← GatherTrajectories(πθb, G);
7   πθ ← MARLTraining(πθ, D);
8   Ψ ← UpdateRPM(πθ, Ψ, G);
9   R̄ ← Evaluate(πθ, G′);
10  πθb ← πθ;
11 Output: πθ.

The Workflow of RPM. We showcase an example of the workflow of RPM in Fig. 3. There are three agents in training. The agents sample policies from RPM, then all agents collect data in the substrate for training. The training episode return is then used to update RPM. During evaluation, agents 1 and 2 are selected as focal agents and agent 3 is selected as the background agent. We present the pseudo-code of MARL training with RPM in Algorithm 1. In Lines 4-5, πθb is updated by sampling policies from RPM. New trajectories D are collected in Line 6. πθ is trained in Line 7 with the MARL method using the newly collected trajectories, and RPM is updated in Line 8. After that, the performance of πθ is evaluated in the evaluation scenario G′ and the evaluation score R̄ is returned in Line 9; πθb is then updated with the newly updated πθ in Line 10. Discussion.
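The memory itself is small enough to sketch in full. Below is a minimal Python rendering of RPM building, updating and sampling (our illustration, not the released code; we use the plain floor discretization κ = ⌊R/ψ⌋ψ, which agrees with Eq. (2) except for the indicator's corner case where R is an exact non-negative multiple of ψ):

```python
import math
import random
from collections import defaultdict

class RankedPolicyMemory:
    """Minimal sketch of RPM: a look-up memory of policy checkpoints
    keyed by the discretized training episode return (the rank)."""

    def __init__(self, psi: float):
        assert psi > 0
        self.psi = psi
        self.memory = defaultdict(list)  # rank kappa -> list of policy checkpoints

    def rank(self, episode_return: float) -> float:
        # Discretize the return so each rank covers [kappa, kappa + psi).
        # math.floor rounds toward -inf, matching Eq. (2) for negative returns.
        return math.floor(episode_return / self.psi) * self.psi

    def add(self, episode_return: float, policies):
        # Save all N agents' current policies under the episode's rank.
        self.memory[self.rank(episode_return)].extend(policies)

    def sample(self, n_agents: int):
        # Sample n_agents keys with replacement, then one policy per key;
        # assumes at least one policy has been added.
        keys = random.choices(list(self.memory.keys()), k=n_agents)
        return [random.choice(self.memory[k]) for k in keys]
```

A sampled list of N policies then serves as the behavior policies πθb for one episode of data collection (Lines 4-5 of Algorithm 1).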
RPM leverages agents’ previously trained models in substrates to cover as many patterns of multi-agent interaction as possible, so as to achieve generalization of MARL agents when they are paired with agents with unseen policies in evaluation scenarios. It uses the self-play framework for data collection. Self-play (Brown, 1951; Heinrich et al., 2015; Silver et al., 2018; Baker et al., 2019) maintains a memory of the opponent’s previous policies for acquiring equilibria. RPM differs from other self-play methods in three aspects: (i) self-play utilizes an agent’s previous policies to create fictitious opponents when real opponents are not available; by playing against these fictitious opponents, much fictitious data is generated for training the agents. In RPM, agents load their previous policies to diversify the multi-agent interactions, such as multi-agent coordination and social dilemmas, and all agents’ policies are trained using the diversified multi-agent data. (ii) Self-play does not maintain explicit ranks for policies, while RPM maintains ranks of policies. (iii) Self-play was not introduced for the generalization of MARL, while RPM aims to improve the generalization of MARL. In Sec. 6, we also present the evaluation results of a self-play method. 5 MARL TRAINING We incorporate RPM into the MARL training pipeline. We instantiate our method with MAPPO (Yu et al., 2021), a multi-agent variant of PPO (Schulman et al., 2017) that outperforms many MARL methods (Rashid et al., 2018; 2020; Wang et al., 2021a) in various complex multi-agent domains. In MAPPO, a central critic is maintained to utilize information concealed from individual agents, in order to counteract non-stationarity in multi-agent learning. RPM introduces a novel way for agents to collect experiences/trajectories $\tau = \{\tau_i\}_{i=1}^{N}$. Each agent optimizes the following objective: $\mathcal{J}(\theta_i) = \mathbb{E}\left[ \min\left( \eta^t_i(\theta^t_i) \cdot A^t_i,\ \text{clip}\left(\eta^t_i(\theta^t_i), 1-\epsilon, 1+\epsilon\right) \cdot A^t_i \right) \right]$, (3) where $\eta^t_i(\theta^t_i) = \frac{\pi_{\theta^t_i}(u^t_i \mid \tau^t_i)}{\pi_{\theta^{\text{old}}_i}(u^t_i \mid \tau^t_i)}$ denotes the importance sampling weight, $\text{clip}(\cdot)$ clips the importance weight to the range $[1-\epsilon, 1+\epsilon]$, and $\epsilon$ is a hyperparameter. $A^t_i$ is a generalized advantage estimate (GAE) (Schulman et al., 2015). To optimize the central critic $V_\psi(\{o^t_i, u^t_i\}_{i=1}^{N})$, we mix the agents’ observation-action pairs and output an N-head vector where each head corresponds to one agent’s value: $\mathcal{L}(\psi) := \mathbb{E}_{D' \sim \mathcal{D}}\left[ \left( y_t - V_{\bar{\psi}}(\{o^t_i, u^t_i\}_{i=1}^{N}) \right)^2 \right]$, (4) where $y_t = \left[ \sum_{l=0}^{k-1} \gamma^l r^{t+l}_i + \gamma^k V_{\bar{\psi}}(\{o^{t+k}_i, u^{t+k}_i\}_{i=1}^{N})[i] \right]_{i=1}^{N}$ is a vector of k-step returns and $D'$ is a sample from the replay buffer $\mathcal{D}$. In complex scenarios such as Melting Pot, we feed the critic agents’ observation-action pairs rather than global states: the global states contain redundant information that deteriorates multi-agent learning, and with only an agent’s observation as input its action would not impact other agents’ returns. We present the whole training process and the network architectures of the agent and the central critic in Appx. D. 6 EXPERIMENTS In this section, to verify the effectiveness of RPM in improving the generalization of MARL, we conduct extensive experiments on Melting Pot and present the empirical results. We first introduce Melting Pot, the baselines and the experimental setup. Then we present the main results of RPM. To demonstrate that ψ is important for RPM, we conduct ablation studies. We finally showcase a case study to visualize RPM. In summary, we answer the following questions: Q1: Is RPM effective in boosting the generalization performance of MARL agents? Q2: How does the value of ψ impact RPM training?
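As a reference point for Eq. (3), a minimal PyTorch-style sketch of the per-agent clipped surrogate loss (our illustration; the tensor names are hypothetical):

```python
import torch

def mappo_policy_loss(logp_new: torch.Tensor,
                      logp_old: torch.Tensor,
                      adv: torch.Tensor,
                      eps: float = 0.2) -> torch.Tensor:
    """Negative clipped surrogate objective of Eq. (3) for one agent.

    logp_new: log pi_theta(u_t | tau_t) under the current policy
    logp_old: log pi_theta_old(u_t | tau_t) under the data-collecting policy
    adv:      GAE advantage estimates A_t
    """
    ratio = torch.exp(logp_new - logp_old)                 # importance weight eta
    surr1 = ratio * adv
    surr2 = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv
    return -torch.min(surr1, surr2).mean()                 # minimize the negative objective
```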
Q3: Does RPM gather diversified policies and trajectories?

6.1 EXPERIMENTAL SETUP

Melting Pot. To demonstrate that RPM enables MARL agents to learn generalizable behaviors, we carry out extensive experiments on DeepMind’s Melting Pot (Leibo et al., 2021). Melting Pot is a suite of testbeds for the generalization of MARL methods, providing a novel pipeline for evaluating MARL methods in various domains. That is, all MARL agents are trained in the substrate; during evaluation, some agents are selected as the focal agents and the remaining agents become the background agents (pre-trained policies of MARL models are loaded); the evaluation scenarios share the same physical properties as the substrates. Melting Pot environments possess many properties, such as temporal coordination and free riding, as depicted in Table 1. An agent performing well in such environments indicates that its behaviors exhibit these properties. In Fig. 4, the agent’s observation is shown in the green box to the lower left of the state (i.e., the whole image); the agent is in the lower middle of the observation. The deep neural network architecture of the agent’s policy is shown on the left. More information about substrates, scenarios, neural network architectures and training details can be found in Appx. D.

Baselines. Our baselines are MAPPO (Yu et al., 2021), MAA2C (Papoudakis et al., 2021), OPRE (Vezhnevets et al., 2020), heuristic fictitious self-play (HFSP) (Heinrich, 2017; Berner et al., 2019) and RandNet (Lee et al., 2019). MAPPO and MAA2C are MARL methods that have achieved outstanding performance in various multi-agent scenarios (Papoudakis et al., 2021). OPRE was proposed for the generalization of MARL. RandNet is a general method for the generalization of RL that introduces a novel component into the convolutional neural network. HFSP is a general self-play method for obtaining equilibria in competitive games; we instantiate it with the policies saved by RPM.

Training setup. We use 6 representative substrates (Fig. 5) to train MARL policies and choose evaluation scenarios from each substrate as our evaluation testbed. The properties of the environments are listed in Table 1. We train agents in Melting Pot substrates for 200 million frames with 3 random seeds for all methods. Our training framework is distributed, with 30 CPU actors to collect experiences and 1 GPU for the learner to update policies. We implement our actors with Ray (Moritz et al., 2018) and the learner with EPyMARL (Papoudakis et al., 2021). We report the mean and standard deviation of performance for all methods: the bold lines in all figures are mean values, and the shaded regions show the standard deviation. Given our limited computation budget, we do not compare against other methods such as QMIX (Rashid et al., 2018) and MADDPG (Lowe et al., 2017), as MAPPO outperforms them. All experiments are conducted on NVIDIA A100 GPUs.

6.2 EXPERIMENT RESULTS

To answer Q1, we present the evaluation results on 17 Melting Pot evaluation scenarios in Fig. 6. Our method boosts MARL in various evaluation scenarios with different properties, as shown in Table 1. In Chicken Game (CG) 1-2 (the number indexes the evaluation scenario of Chicken Game), RPM outperforms its counterparts by a convincing margin. HFSP performs no better than RPM. RandNet attains a mean evaluation return of around 15 on Chicken Game (CG) 1. MAA2C and OPRE perform nearly randomly (the red dashed lines indicate the random result) in the two scenarios.
In Pure Coordination (PC) 1-3, Rational Coordination (RC) 1-3 and Prisoners’ Dilemma (PD) 1-3, most baselines perform poorly. In Stag Hunt (SH) 1-3 and Clean Up (CU) 1-2, MAPPO and MAA2C perform unsatisfactorily. HFSP does attain competitive performance in Stag Hunt (SH) 1-3; however, it performs poorly in Pure Coordination (PC) 1-3, Rational Coordination (RC) 1-3 and Prisoners’ Dilemma (PD) 1-3. Therefore, the vanilla self-play method cannot be directly applied to improve the generalization of MARL methods. In summary, RPM boosts performance by up to around 818% on average compared with MAPPO on 6 evaluation scenarios. To answer Q2, we present experimental results on the impact of ψ and on the sampling ratio in HFSP in the following.

[Figure 6 panels: learning curves for CG 1-3, SH 1-3, CU 1-2, PC 1-3, PD 1-3 and RC 1-3, comparing RPM (ours), MAPPO, MAA2C, RandNet, OPRE, HFSP and a random policy; x-axis: training steps (million), y-axis: mean evaluation return.] Figure 6: Evaluation results of RPM and baselines in 17 scenarios. The red dashed horizontal lines indicate the results of a random policy. The optimal (opt) values shown in each sub-figure were gathered from Leibo et al. (2021); they were generated by an exploiter trained in the evaluation scenarios with RL methods for 1,000M time steps.

[Figure 7 panels: per-substrate histograms for Chicken Game, Stag Hunt, Clean Up, Pure Coordination, Prisoners’ Dilemma and Rational Coordination; x-axis: training episode returns, y-axis: counts.] Figure 7: Histograms of training episode returns.

6.3 ABLATION STUDY

The Impact of ψ. To investigate how the value of ψ impacts RPM performance, we conduct ablation studies by (i) removing ranks and sampling from the checkpoints directly, and (ii) reducing the number of ranks by changing the value of ψ. As shown in Fig. 8, without ranks (i.e., sampling policies randomly without ranks), RPM cannot attain stable performance in some evaluation scenarios; especially in Pure Coordination (PC) 1-3, the result is low and has a large variance. In RPM, choosing the right interval ψ improves performance, as shown in the results of Pure Coordination (PC) 1-3 and Prisoners’ Dilemma (PD) 1-3, confirming that the value of ψ is important for RPM. We summarize the results and the values of ψ in Table 2 and Table 3.

The Sampling Ratio in HFSP. HFSP shows comparable results in some scenarios in Figure 6, where its sampling ratio is 0.3. We are interested in studying the impact of the sampling ratio in HFSP on evaluation performance. We conduct experiments in CU 1 and 2, PC 1 and 3 and PD 1 and 3, with sampling ratios [0.9, 0.7, 0.5, 0.3, 0.1]. We use the default training setup and 3 random seeds. [Figure 9 legend: RPM (ours), HFSP-0.9, HFSP-0.7, HFSP-0.5, HFSP-0.3, HFSP-0.1, Random Policy.]
HFSP shows comparable results in PC 2 and 3, but its performance is poor in CU 1 and 2 and PD 2 and 3. As shown in Figure 9, HFSP relies heavily on the sampling ratio and would have to be carefully tuned on each substrate to attain good performance, which is not feasible. In contrast, RPM is stable (with a sampling ratio of 0.5) on all substrates. HFSP can also perform well in substrates such as PC and PD, where the return-checkpoint count distribution is more uniform. The absence of ranks leads to the frequent sampling of policies with high count values in substrates that have a skewed return-checkpoint count distribution, thereby reducing the diversity of training data; such distributions typically comprise a large number of policies with suboptimal performance.

6.4 CASE STUDY

We showcase how RPM helps to train the focal agents to choose the right behaviors in the evaluation scenario after training in the substrate. To illustrate the trained performance of RPM agents, we use the RPM agent trained on Stag Hunt and run the evaluation on Stag Hunt 1. In Stag Hunt, there are 8 agents. Each agent collects resources that represent ‘hare’ (red) or ‘stag’ (green) and compares inventories in an interaction, i.e., an encounter. The outcome of an encounter is resolved as in the classic Stag Hunt matrix game. In this environment, agents face a tension between the reward for the team and the risk for the individual. In Stag Hunt 1, one focal agent interacts with seven pretrained background agents. All background agents were trained to play the ‘stag’ strategy during the interaction.¹ The optimal policy for the focal agent is also to play ‘stag’. However, it is challenging for agents to detect other agents’ strategies, since such behavior may not persist in the substrate. Luckily, RPM enables focal agents to behave correctly in this scenario.

To answer Q3, we present the analysis of RPM on the substrate Stag Hunt and its evaluation scenario SH 1 in Fig. 10. In Fig. 10 (b), the number of keys in RPM grows monotonically during training and the maximum number of keys exceeds 20, showing that agents trained with RPM discover many novel patterns of multi-agent interaction; new keys are created and the corresponding trained models are saved in RPM. Meanwhile, the evaluation performance on SH 1 is also increasing, as depicted in Fig. 10 (a). In Fig. 10 (c), it is interesting to see that the distribution of RPM keys expands during training: over the last 25 million training steps, the final distribution of RPM keys covers policies of all performance levels, ranging from 0 to 14. By utilizing RPM, agents can collect diversified multi-agent trajectories for multi-agent training. Fig. 10 (d) shows the final histogram of RPM keys after training. Over 600 trained policies have small key values. Since agents should explore the environment at the early stage of training, it is reasonable to find that many policies stored under RPM keys have low training episode returns. After 50 million training steps, RPM holds more policies with higher training episode returns. Note that the maximum training episode return over RPM keys is above 14, while the maximum mean evaluation return of RPM shown in Fig. 10 (a) is around 14.

¹This preference was trained with pseudo rewards by Leibo et al. (2021); the trained models are available at https://github.com/deepmind/meltingpot
Our experiments show that training policies with good performance in the substrate is crucial for improving generalization performance in the evaluation scenarios. When MARL agents perform poorly in the substrate, the evaluation performance will also be inferior or random, making it hard to obtain diversified policies. We show the results in Appx. E.

7 RELATED WORKS

Recent advances in MARL (Yang & Wang, 2020; Zhang et al., 2021) have demonstrated success in various complex multi-agent domains, including multi-agent coordination (Lowe et al., 2017; Rashid et al., 2018; Wang et al., 2021b), real-time strategy (RTS) games (Jaderberg et al., 2019; Berner et al., 2019; Vinyals et al., 2019), social dilemmas (Leibo et al., 2017; Wang et al., 2018; Jaques et al., 2019; Vezhnevets et al., 2020), multi-agent communication (Foerster et al., 2016; Yuan et al., 2022), asynchronous multi-agent learning (Amato et al., 2019; Qiu et al., 2022), open-ended environments (Stooke et al., 2021), autonomous systems (Hüttenrauch et al., 2017; Peng et al., 2021) and game-theoretic equilibrium solving (Lanctot et al., 2017; Perolat et al., 2022). Despite these strides, training generalizable behaviors in MARL remains largely uninvestigated. Recently, generalization in RL (Packer et al., 2018; Song et al., 2019; Ghosh et al., 2021; Lyle et al., 2022) has achieved much progress in domain adaptation (Higgins et al., 2017) and procedurally generated environments (Lee et al., 2019; Igl et al., 2020; Zha et al., 2020). However, there are few works on generalization in MARL domains (Carion et al., 2019; Vezhnevets et al., 2020; Mahajan et al., 2022; McKee et al., 2022). Vezhnevets et al. (2020) propose a hierarchical MARL method for agents to play against opponents they have not seen during training; however, the evaluation is limited to simple competitive scenarios. Mahajan et al. (2022) studied generalization in MARL empirically and proposed theoretical findings based on successor features (Dayan, 1993), but no method for achieving generalization in MARL was proposed in that work.

Ad-hoc team building (Stone & Kraus, 2010; Gu et al., 2021) models the multi-agent problem as a single-agent learning task. In ad-hoc team building, one ad-hoc agent is trained by interacting with agents that have fixed pretrained policies, so the non-stationarity issue is not severe; in our formulation, non-stationarity is the main obstacle to MARL training. In addition, in ad-hoc team building only one ad-hoc agent is evaluated by interacting with agents unseen during training, whereas there can be more than one focal agent in our formulation as defined in Definition 2, making our formulation more general and challenging. There has been growing interest in applying self-play to solve complex games (Heinrich et al., 2015; Silver et al., 2018; Hernandez et al., 2019; Baker et al., 2019); however, its value in enhancing the generalization of MARL agents has yet to be examined. Due to space constraints, we discuss meta-learning (Al-Shedivat et al., 2018; Kim et al., 2021) and population-based training (Strouse et al., 2021; Lupu et al., 2021; Tang et al., 2021) works in Appx. F.

8 CONCLUSION, LIMITATIONS AND FUTURE WORK

In this paper, we consider the problem of achieving generalizable behaviors in MARL. We first model the problem as a Markov game. To train agents that can interact with agents possessing unseen policies,
we propose a simple yet effective method, RPM, that collects diversified multi-agent interaction data. We save policies in RPM, ranked by their training episode returns. Empirically, RPM significantly boosts the performance of MARL agents in various Melting Pot evaluation scenarios. RPM’s performance depends on an appropriate value of ψ, and several attempts may be needed to determine it. We are interested in discovering broader measures for ranking policies that do not explicitly rely on the training episode return. Given the growing interest in planning in RL, especially model-based RL, we are also interested in applying planning and opponent/teammate modelling to attain generalizable MARL policies in future work. Since agents engage in complex interactions in multi-agent scenarios, devising novel self-play methods is another future direction.

ETHICS STATEMENT We addressed the relevant aspects in our conclusion and have no conflicts of interest to declare.

REPRODUCIBILITY STATEMENT We provide detailed descriptions of our experiments in the appendix and list all relevant parameters in Table 4 and Table 5 in Appx. D. The code can be found at this link: https://sites.google.com/view/rpm-iclr2023/.

ACKNOWLEDGMENTS We would like to thank the anonymous reviewers for their suggestions. We thank Xinyi Wan, Jiahao Ji and Xiangfan Li of the infrastructure team at Sea AI Lab for their support. Wei Qiu and Bo An are supported by the National Research Foundation, Singapore under its Industry Alignment Fund – Pre-positioning (IAF-PP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
1. What is the focus of the paper regarding multi-agent reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its simplicity and applicability? 3. Do you have any concerns or questions about the method's effectiveness when interacting with unseen agents? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any potential benefits to using more sophisticated ranking systems like TrueSkill? 6. Would adding lines to denote optimal values in Figure 6 help identify the gap and amount of generalization performed by RPM? 7. How does the reviewer compare this work with related research in MARL (Al-Shedivat et al., ICLR 2018; Kim et al., ICML 2021)?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a new self-play training framework called RPM that focuses on the diversity of the multi-agent population. Specifically, RPM builds the population by maintaining policies of various performance levels via ranking. Then, RPM trains focal agents that behave well with background agents sampled from the population. The extensive evaluations on the Melting Pot domain show the generalization of RPM when interacting with unseen agents in the evaluation scenarios. Strengths And Weaknesses Strengths: The paper is generally well-written and conveys the method clearly. The figures are also helpful in understanding the method. While RPM is a relatively simple algorithm based on the episodic return ranking system, it shows effectiveness in multiple scenarios. As such, the algorithm is directly applicable to other settings/methods. Questions: The problem formulation and objective in Section 3 are closely related to meta-learning in MARL (Al-Shedivat et al., ICLR 2018; Kim et al., ICML 2021), where the goal is to train a meta-agent with a population of other agents such that the meta-agent can adapt well when interacting with a new agent at meta-testing. The main difference between this paper and meta-MARL is that the focal agents are not allowed to fine-tune their policies during evaluation in this paper, while meta-agents are allowed to fine-tune their policies. Because both settings are concerned with generalization, I would like to see a discussion comparing the two settings. Would RPM benefit from using more complicated ranking systems (e.g., TrueSkill)? Adding lines that denote optimal values in Figure 6 (i.e., if ideal generalization is possible) could help identify the gap and the amount of generalization performed by RPM. References: Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, Pieter Abbeel. Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments. ICLR 2018. Dong-Ki Kim, Miao Liu, Matthew Riemer, Chuangchuang Sun, Marwa Abdulhai, Golnaz Habibi, Sebastian Lopez-Cot, Gerald Tesauro, Jonathan P. How. A Policy Gradient Algorithm for Learning to Learn in Multiagent Reinforcement Learning. ICML 2021. Clarity, Quality, Novelty And Reproducibility Clarity: The paper is well-written and conveys the main insights well. Quality & Novelty: I agree that RPM is new regarding the self-play notion. The setting and objective may overlap with multi-agent meta-learning methods. Reproducibility: The source code is provided to reproduce the results.
ICLR
Title RPM: Generalizable Multi-Agent Policies for Multi-Agent Reinforcement Learning Abstract Despite recent advances in multi-agent reinforcement learning (MARL), MARL agents easily overfit the training environment and perform poorly in evaluation scenarios where other agents behave differently. Obtaining generalizable policies for MARL agents is thus necessary but challenging, mainly due to complex multi-agent interactions. In this work, we model the MARL problem with Markov Games and propose a simple yet effective method, called ranked policy memory (RPM), which maintains a look-up memory of policies to achieve good generalizability. The main idea of RPM is to train MARL policies by gathering massive multi-agent interaction data. In particular, we first rank each agent’s policies by its training episode return, i.e., the episode return of each agent in the training environment; we then save the ranked policies in the memory; when an episode starts, each agent can randomly select a policy from the RPM as its behavior policy. Each agent uses the behavior policy to gather multi-agent interaction data for MARL training. This self-play framework ensures the diversity of multi-agent interactions in the training data. Experimental results on Melting Pot demonstrate that RPM enables MARL agents to interact with unseen agents in multi-agent generalization evaluation scenarios and complete the given tasks, significantly boosting performance by up to 818% on average. 1 INTRODUCTION In Multi-Agent Reinforcement Learning (MARL) (Yang & Wang, 2020), each agent acts in a decentralized manner and interacts with other agents to complete given tasks or achieve specified goals via reinforcement learning (RL) (Sutton & Barto, 2018). In recent years, much progress has been achieved in MARL research (Vinyals et al., 2019; Jaderberg et al., 2019; Perolat et al., 2022). However, MARL agents trained with current methods tend to suffer from poor generalizability (Hupkes et al., 2020) in new environments. The generalizability issue is critical to real-world MARL applications (Leibo et al., 2021), but is mostly neglected in current research. In this work, we aim to train MARL agents that can adapt to new scenarios where other agents’ policies are unseen during training. We illustrate a two-agent hunting game as an example in Fig. 1. The game’s objective is for the two agents to catch the stag together, as one agent acting alone cannot catch the stag and risks being killed. The agents may perform well in evaluation scenarios similar to the training environment, as shown in Fig. 1 (a) and (b), respectively, but when evaluated in scenarios different from the training ones, they often fail. As shown in Fig. 1 (c), the learning agent (called the focal agent, following Leibo et al. (2021)) is supposed to work together with the other agent (called the background agent, also following Leibo et al. (2021)), which is pre-trained and can capture both the hare and the stag. In this case, the focal agent would fail to capture the stag without help from its teammate. The teammate may be tempted to catch the hare alone and not cooperate, or may only choose to cooperate with the focal agent after capturing the hare. Thus, the focal agent should adapt to its teammate’s behavior to catch the stag. However, the policy of the background agent is unseen by the focal agent during training. Therefore, without generalization, the agents trained as in Fig.
1 (left) cannot achieve an optimal policy in the new evaluation scenario. (∗Wei Qiu did the work while interning at Sea AI Lab. Corresponding author.) Inspired by the fact that human learning is often accelerated by interacting with individuals of diverse skills and experiences (Meltzoff et al., 2009; Tomasello, 2010), we propose a novel method aimed at improving the generalization of MARL through the collection of diverse multi-agent interactions. Concretely, we first model the MARL problem with Markov Games (Littman, 1994) and then propose a simple yet effective method called ranked policy memory (RPM) to attain generalizable policies. The core idea of RPM is to maintain a look-up memory of policies for the agents during training. In particular, we first evaluate the trained agents’ policies after each training update. We then rank the trained agents’ policies by their training episode returns and save them in the memory. In this way, we obtain policies of various performance levels. When starting an episode, each agent can access the memory and load a randomly sampled policy to replace its current behavior policy. The new ensemble of policies enables the agents in self-play to collect diversified experiences in the training environment. These diversified experiences contain many novel multi-agent interactions that can enhance the extrapolation capacity of MARL, thus boosting generalization performance. We note that an easy extension incorporating different behavior properties as the keys in RPM could further enrich generalization, but we leave it for future work. We implement RPM on top of the state-of-the-art MARL algorithm MAPPO (Yu et al., 2021). To verify its effectiveness, we conduct large-scale experiments with Melting Pot (Leibo et al., 2021), a well-recognized benchmark for MARL generalization evaluation. The experimental results demonstrate that RPM significantly boosts the performance of generalized social behaviors by up to 818% on average and outperforms many baselines in a variety of multi-agent generalization evaluation scenarios. Our code, pictorial examples, videos and experimental results are available at this link: https://sites.google.com/view/rpm-iclr2023/. 2 PRELIMINARIES Markov Games. We consider Markov Games (Littman, 1994) represented by a tuple $\mathcal{G} = \langle \mathcal{N}, \mathcal{S}, \mathcal{A}, \mathcal{O}, P, R, \gamma, \rho \rangle$. $\mathcal{N}$ is a set of agents with size $|\mathcal{N}| = N$; $\mathcal{S}$ is a set of states; $\mathcal{A} = \times_{i=1}^{N} \mathcal{A}_i$ is the set of joint actions, with $\mathcal{A}_i$ denoting the action set of agent $i$; $\mathcal{O} = \times_{i=1}^{N} \mathcal{O}_i$ is the observation set, with $\mathcal{O}_i$ denoting the observation set of agent $i$; $P: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ is the transition function; $R = \times_{i=1}^{N} r_i$ is the reward function, where $r_i: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ specifies the reward for agent $i$ given the state and the joint action; $\gamma$ is the discount factor; and the initial states are determined by a distribution $\rho: \mathcal{S} \rightarrow [0, 1]$. Given a state $s \in \mathcal{S}$, each agent $i \in \mathcal{N}$ chooses its action $u_i$ and obtains the reward $r(s, \mathbf{u})$ along with its private observation $o_i \in \mathcal{O}_i$, where $\mathbf{u} = \{u_i\}_{i=1}^{N}$ is the joint action. The joint policy of the agents is denoted $\pi_\theta = \{\pi_{\theta_i}\}_{i=1}^{N}$, where $\pi_{\theta_i}: \mathcal{S} \times \mathcal{A}_i \rightarrow [0, 1]$ is the policy of agent $i$. The objective of each agent is to maximize its total expected return $R_i = \sum_{t=0}^{\infty} \gamma^t r^i_t$. Multi-Agent RL. In MARL, multiple agents act in a shared multi-agent system to maximize their respective returns with RL.
Each agent’s policy $\pi_i$ is optimized by maximizing the following objective: $\mathcal{J}(\pi_i) \triangleq \mathbb{E}_{s_{0:\infty} \sim \rho^{0:\infty}_{\mathcal{G}},\, a^i_{0:\infty} \sim \pi_i} \left[ \sum_{t=0}^{\infty} \gamma^t r^i_t \right]$, where $\mathcal{J}(\pi_i)$ is a performance measure for policy-gradient RL methods (Williams, 1992; Lillicrap et al., 2016; Fujimoto et al., 2018). Each policy’s Q-value $Q^i$ is optimized by minimizing the following regression loss (Mnih et al., 2015) with TD-learning (Sutton, 1984): $\mathcal{L}(\theta_i) \triangleq \mathbb{E}_{D' \sim \mathcal{D}} \left[ \left( y^i_t - Q^i_{\theta_i}(s_t, \mathbf{u}_t, s^i_t, u^i_t) \right)^2 \right]$, where $y^i_t = r^i_t + \gamma \max_{\mathbf{u}'} Q^i_{\bar{\theta}_i}(s_{t+1}, \mathbf{u}', s^i_t, u^{i\prime})$. Here $\theta_i$ are the parameters of the agents, $\bar{\theta}_i$ are the parameters of the target network $Q^i$ and are periodically copied from $\theta_i$, and $D'$ is a sample from the replay buffer $\mathcal{D}$. 3 PROBLEM FORMULATION We introduce the formulation of MARL for training and evaluation in our problem. Our goal is to improve the generalizability of MARL policies in scenarios where the policies of other agents or opponents are unseen during training while the physical environment is unchanged. Following Leibo et al. (2021), the training environment is called a substrate. Each substrate is an N-agent partially observable Markov game $\mathcal{G}$. Each agent optimizes its policy $\pi_{\theta_i}$ via the following protocol. Definition 1 (Multi-Agent Training). N agents act in the substrate, denoted $\mathcal{G}$. Each agent receives a partial environmental observation not known to other agents and aims to optimize its policy $\pi_{\theta_i}$ by maximizing its accumulated rewards $\sum_{t=0}^{\infty} \gamma^t r^i_t$. The performance of the joint policy $\pi_\theta = \{\pi_{\theta_i}\}_{i=1}^{N}$ is measured by the mean individual return $\bar{R}(\pi_\theta) = \frac{1}{N} \sum_{i=1}^{N} R(\pi_{\theta_i}; \mathcal{G})$, where $R(\pi_{\theta_i}; \mathcal{G})$ is the episode return of policy $\pi_{\theta_i}$ in game $\mathcal{G}$ for agent $i$. In order to evaluate the trained MARL policies in an evaluation scenario $\mathcal{G}'$, we follow the evaluation protocol defined by Leibo et al. (2021): Definition 2 (Multi-Agent Evaluation). M ($1 \leq M \leq N-1$) focal agents are selected from the N agents. The focal agents are the agents to be evaluated in evaluation scenarios. They are paired with $N-M$ background agents whose policies $\pi_\phi = \{\pi_{\phi_j}\}_{j=1}^{N-M}$ were pre-trained with pseudo rewards in the same physical environment where the policies $\pi_\theta$ are trained. To measure generalized performance in evaluation scenarios, we use the mean individual return of the focal agents: $\bar{R}(\{\pi_{\theta_i}\}_{i=1}^{M}) = \frac{1}{M} \sum_{i=1}^{M} R(\pi_{\theta_i}; \mathcal{G}')$. We show an example of our formulation in Fig. 2. Note that the focal agents cannot utilise the interaction data collected during evaluation to train or finetune their policies. Without training on the trajectories collected during evaluation, the focal agents must behave adaptively when interacting with the background agents to complete challenging multi-agent tasks. It is also worth noting that ad-hoc team building (Stone & Kraus, 2010; Gu et al., 2021) differs from our formulation in both training and evaluation; we discuss the differences in the related works section (Paragraph 3, Sec. 7). In MARL, the focal agents need to adaptively interact with background agents to complete given tasks. Formally, we define the objective of optimizing the performance of the focal agents, without exploiting their trajectories in the evaluation scenario, for training the policies $\{\pi_{\theta_j}\}_{j=1}^{M}$: $\max \mathcal{J}(\{\pi_{\theta_j}\}_{j=1}^{M}) \triangleq \max \mathbb{E}_{s_{0:\infty} \sim \rho^{0:\infty}_{\mathcal{G}'},\, a^j_{0:\infty} \sim \{\pi_{\theta_j}\}_{j=1}^{M}} \left[ \sum_{t=0}^{\infty} \gamma^t \frac{1}{M} \sum_{j=1}^{M} r^j_t \,\middle|\, \mathcal{G}' \right]$. (1) 4 RANKED POLICY MEMORY To improve the generalization of MARL, agents in the substrate must cover as many multi-agent interactions, i.e., as much data, as possible that resemble the unseen multi-agent interactions in the evaluation scenario.
4 RANKED POLICY MEMORY

To improve the generalization of MARL, agents in the substrate must cover as many multi-agent interactions as possible, i.e., data that resemble the unseen multi-agent interactions in the evaluation scenario. However, current training paradigms, like independent learning (Tampuu et al., 2017) and centralized training with decentralized execution (CTDE) (Oliehoek et al., 2008), cannot produce diversified multi-agent interactions, as the agents' policies are trained at the same pace. To this end, we propose a Ranked Policy Memory (RPM) method to provide diversified multi-agent behaviors.

RPM Building & Updating. We denote an RPM by $\Psi$, which consists of $|R_{\max}|$ entries, i.e., ranks, where $R_{\max}$ is the maximum training episode return (the episode return in the substrate). When an agent is acting in the substrate, it receives the training episode return $R$ of all agents with policies $\{\pi_\theta^i\}_{i=1}^{N}$. Then $\{\pi_\theta^i\}_{i=1}^{N}$ are saved into $\Psi$ by appending the agents' policies to the corresponding memory slot, $\Psi[\kappa].\mathrm{add}(\{\pi_\theta^i\}_{i=1}^{N})$. To avoid having too many entries in the policy memory caused by continuous episode return values, we discretize the training episode return. Each discretized entry $\kappa$ covers a range $[\kappa, \kappa + \psi)$, where $\psi > 0$ can be either an integer or a float. For a training episode return $R$, the corresponding entry $\kappa$ is calculated by:

$$\kappa = \begin{cases} \lfloor R/\psi \rfloor \times \mathbb{1}\{(R \bmod \psi) \neq 0\} \times \psi, & \text{if } R \geq 0, \\ \lfloor R/\psi \rfloor \times \psi, & \text{otherwise}, \end{cases} \quad (2)$$

where $\mathbb{1}\{\cdot\}$ is the indicator function and $\lfloor \cdot \rfloor$ is the floor function. Intuitively, discretizing $R$ saves memory and groups policies of similar performance into the same rank. Therefore, diversified policies can be saved and later sampled for the agents.

RPM Sampling. The memory $\Psi$ stores diversified policies with different levels of performance. We can sample policies of different ranks and assign one to each agent in the substrate to collect multi-agent trajectories for training. These diversified multi-agent trajectories can resemble the trajectories generated by interacting with agents that possess unknown policies in the evaluation scenario. At the beginning of an episode, we first randomly sample $N$ keys with replacement and then randomly sample one policy for each key from the corresponding list. All agents' policies are replaced with the newly sampled policies for multi-agent interaction in the substrate, thus generating diversified multi-agent trajectories.

Algorithm 1: MARL with RPM
1 Input: Initialize $\pi_\theta$, $\Psi$, $\mathcal{D}$, $\mathcal{G}$ and $\mathcal{G}'$;
2 Input: Initialize the behavior policy $\pi_{\theta_b} \leftarrow \pi_\theta$;
3 for each update do
4   if RPM sampling then
5     $\pi_{\theta_b} \leftarrow$ SamplingRPM($\Psi$);
6   $\mathcal{D} \leftarrow$ GatherTrajectories($\pi_{\theta_b}$, $\mathcal{G}$);
7   $\pi_\theta \leftarrow$ MARLTraining($\pi_\theta$, $\mathcal{D}$);
8   $\Psi \leftarrow$ UpdateRPM($\pi_\theta$, $\Psi$, $\mathcal{G}$);
9   $\bar{R} \leftarrow$ Evaluate($\pi_\theta$, $\mathcal{G}'$);
10  $\pi_{\theta_b} \leftarrow \pi_\theta$;
11 Output: $\pi_\theta$.

The Workflow of RPM. We showcase an example of the workflow of RPM in Fig. 3. There are three agents in training. The agents sample policies from RPM. Then all agents collect data in the substrate for training. The training episode return is then used to update RPM. During evaluation, agents 1 and 2 are selected as focal agents and agent 3 is selected as the background agent. We present the pseudo-code of MARL training with RPM in Algorithm 1. In Lines 4-5, $\pi_{\theta_b}$ is updated by sampling policies from RPM. Then, new trajectories are collected into $\mathcal{D}$ in Line 6. $\pi_\theta$ is trained in Line 7 with the MARL method using the newly collected trajectories, and $\pi_{\theta_b}$ is updated with the newly updated $\pi_\theta$ in Line 10. RPM is updated in Line 8. After that, the performance of $\pi_\theta$ is evaluated in the evaluation scenario $\mathcal{G}'$ and the evaluation score $\bar{R}$ is returned in Line 9.
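A minimal Python sketch of RPM may help make Eq. (2) and the sampling step concrete. The class below is our illustration (policy checkpointing and distributed details omitted), not the authors' implementation.

```python
import math
import random
from collections import defaultdict

class RankedPolicyMemory:
    """Minimal sketch of RPM: ranks are discretized training episode returns."""

    def __init__(self, psi):
        assert psi > 0
        self.psi = psi
        self.memory = defaultdict(list)  # rank key kappa -> saved policy checkpoints

    def key(self, ret):
        """Discretized rank key kappa for a training episode return R, following Eq. (2) as written."""
        if ret >= 0:
            indicator = 1.0 if (ret % self.psi) != 0 else 0.0
            return math.floor(ret / self.psi) * indicator * self.psi
        return math.floor(ret / self.psi) * self.psi

    def update(self, policies, ret):
        """Append all N agents' current policy checkpoints under the return's rank."""
        self.memory[self.key(ret)].extend(policies)

    def sample(self, n_agents):
        """Sample N keys with replacement, then one policy per key (RPM Sampling)."""
        keys = random.choices(list(self.memory.keys()), k=n_agents)
        return [random.choice(self.memory[k]) for k in keys]

# usage: store string placeholders as "policies" and draw behavior policies
rpm = RankedPolicyMemory(psi=2.0)
for step, ret in enumerate([0.5, 1.7, 3.2, 6.8]):
    rpm.update([f"ckpt{step}_agent{i}" for i in range(3)], ret)
print(rpm.sample(n_agents=3))
```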
Discussion. RPM leverages the agents' previously trained models in substrates to cover as many patterns of multi-agent interaction as possible, so as to achieve generalization of MARL agents when they are paired with agents with unseen policies in evaluation scenarios. It uses the self-play framework for data collection. Self-play (Brown, 1951; Heinrich et al., 2015; Silver et al., 2018; Baker et al., 2019) maintains a memory of the opponent's previous policies for acquiring equilibria. RPM differs from other self-play methods in three aspects: (i) self-play utilizes an agent's previous policies to create fictitious opponents when the real opponents are not available, and by playing against the fictitious opponents, much fictitious data is generated for training the agents; in RPM, agents load their previous policies to diversify the multi-agent interactions, such as multi-agent coordination and social dilemmas, and all agents' policies are trained on the diversified multi-agent data. (ii) Self-play does not maintain explicit ranks for policies, while RPM maintains ranks of policies. (iii) Self-play was not introduced for the generalization of MARL, while RPM aims to improve the generalization of MARL. In Sec. 6, we also present the evaluation results of a self-play method.

5 MARL TRAINING

We incorporate RPM into the MARL training pipeline. We instantiate our method with MAPPO (Yu et al., 2021), a multi-agent variant of PPO (Schulman et al., 2017) that outperforms many MARL methods (Rashid et al., 2018; 2020; Wang et al., 2021a) in various complex multi-agent domains. In MAPPO, a central critic is maintained to utilize the concealed information of the agents to boost multi-agent learning in the face of non-stationarity. RPM introduces a novel way for agents to collect experiences/trajectories $\tau = \{\tau_i\}_{i=1}^{N}$. Each agent optimizes the following objective:

$$\mathcal{J}(\theta_i) = \mathbb{E}\left[ \min\left( \eta_i^t(\theta_i^t) \cdot A_i^t,\ \mathrm{clip}\left(\eta_i^t(\theta_i^t),\ 1-\epsilon,\ 1+\epsilon\right) \cdot A_i^t \right) \right], \quad (3)$$

where $\eta_i^t(\theta_i^t) = \frac{\pi_{\theta_i^t}(u_i^t \mid \tau_i^t)}{\pi_{\theta_i^{\mathrm{old}}}(u_i^t \mid \tau_i^t)}$ denotes the importance sampling weight, $\mathrm{clip}(\cdot)$ clips values of the ratio $\eta_i^t(\theta_i^t)$ that fall outside the range $[1-\epsilon, 1+\epsilon]$, $\epsilon$ is a hyperparameter, and $A_i^t$ is a generalized advantage estimate (GAE) (Schulman et al., 2015). To optimize the central critic $V_\psi(\{o_i^t, u_i^t\}_{i=1}^{N})$, we mix the agents' observation-action pairs and output an $N$-head vector where each entry corresponds to one agent's value:

$$\mathcal{L}(\psi) := \mathbb{E}_{D' \sim \mathcal{D}}\left[ \left( y_t - V_{\bar{\psi}}(\{o_i^t, u_i^t\}_{i=1}^{N}) \right)^2 \right], \quad (4)$$

where $y_t = \left[ \sum_{l=0}^{k-1} \gamma^l r_i^{t+l} + \gamma^k V_{\bar{\psi}}(\{o_i^{t+k}, u_i^{t+k}\}_{i=1}^{N})[i] \right]_{i=1}^{N}$ is a vector of $k$-step returns and $D'$ is a sample from the replay buffer $\mathcal{D}$. In complex scenarios, e.g., Melting Pot, with only an agent's own observation as input, its action would not impact the other agents' returns, and the global states contain redundant information that deteriorates multi-agent learning. We present the whole training process and the network architectures of the agent and the central critic in Appx. D.
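The per-agent surrogate in Eq. (3) is the standard PPO clipped objective. Below is a small self-contained sketch with dummy data; the function name and batch layout are our assumptions.

```python
import numpy as np

def clipped_surrogate(logp_new, logp_old, advantages, epsilon=0.2):
    """Per-agent clipped objective of Eq. (3), averaged over a batch."""
    ratio = np.exp(logp_new - logp_old)  # eta = pi_theta(u|tau) / pi_theta_old(u|tau)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    return np.mean(np.minimum(unclipped, clipped))  # the agent maximizes this

# usage with dummy data; in practice the advantages are GAE estimates
rng = np.random.default_rng(0)
logp_old = rng.normal(-1.0, 0.3, size=128)
logp_new = logp_old + 0.05 * rng.normal(size=128)
advantages = rng.normal(size=128)
print(clipped_surrogate(logp_new, logp_old, advantages))
```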
6 EXPERIMENTS

In this section, to verify the effectiveness of RPM in improving the generalization of MARL, we conduct extensive experiments on Melting Pot and present the empirical results. We first introduce Melting Pot, the baselines and the experimental setup. Then we present the main results of RPM. To demonstrate that ψ is important for RPM, we conduct ablation studies. We finally showcase a case study to visualize RPM. To sum up, we answer the following questions: Q1: Is RPM effective in boosting the generalization performance of MARL agents? Q2: How does the value of ψ impact RPM training? Q3: Does RPM gather diversified policies and trajectories?

6.1 EXPERIMENTAL SETUP

Melting Pot. To demonstrate that RPM enables MARL agents to learn generalizable behaviors, we carry out extensive experiments on DeepMind's Melting Pot (Leibo et al., 2021). Melting Pot is a suite of testbeds for the generalization of MARL methods and proposes a novel pipeline for evaluating MARL methods in various domains. That is, all MARL agents are trained in the substrate; during evaluation, some agents are selected as the focal agents and the remaining agents become the background agents (pre-trained MARL models are loaded); the evaluation scenarios share the same physical properties as the substrates. Melting Pot environments possess many properties, such as temporal coordination and free riding, as depicted in Table 1. An agent performing well in such environments indicates that its behaviors exhibit these properties. In Fig. 4, the agent's observation is shown in the green box to the lower left of the state (i.e., the whole image). The agent is in the lower middle of the observation. The deep neural network architecture of the agent's policy is shown on the left. More information about the substrates, scenarios, neural network architectures and training details can be found in Appx. D.

Baselines. Our baselines are MAPPO (Yu et al., 2021), MAA2C (Papoudakis et al., 2021), OPRE (Vezhnevets et al., 2020), heuristic fictitious self-play (HFSP) (Heinrich, 2017; Berner et al., 2019) and RandNet (Lee et al., 2019). MAPPO and MAA2C are MARL methods that achieved outstanding performance in various multi-agent scenarios (Papoudakis et al., 2021). OPRE was proposed for the generalization of MARL. RandNet is a general method for the generalization of RL that introduces a novel component into the convolutional neural network. HFSP is a general self-play method for obtaining equilibria in competitive games; we instantiate it using the policies saved by RPM.

Training setup. We use 6 representative substrates (Fig. 5) to train MARL policies and choose several evaluation scenarios from each substrate as our evaluation testbed. The properties of the environments are listed in Table 1. We train agents in Melting Pot substrates for 200 million frames with 3 random seeds for all methods. Our training framework is distributed, with 30 CPU actors to collect experiences and 1 GPU for the learner to train policies. We implement our actors with Ray (Moritz et al., 2018) and the learner with EPyMARL (Papoudakis et al., 2021). We report the mean and standard deviation of the performance of all methods: the bold lines in all figures are mean values and the shades stand for the standard deviation. Due to a limited computation budget, we do not compare with methods such as QMIX (Rashid et al., 2018) and MADDPG (Lowe et al., 2017), as MAPPO outperforms them. All experiments are conducted on NVIDIA A100 GPUs.

6.2 EXPERIMENT RESULTS

To answer Q1, we present the evaluation results on 17 Melting Pot evaluation scenarios in Fig. 6. Our method boosts MARL in various evaluation scenarios with different properties, as shown in Table 1. In Chicken Game (CG) 1-2 (the number denotes the evaluation scenario of Chicken Game), RPM outperforms its counterparts by a convincing margin. HFSP performs no better than RPM. RandNet gets a mean evaluation return of around 15 on Chicken Game (CG) 1. MAA2C and OPRE perform nearly randomly (the red dashed lines indicate the random result) in these two scenarios.
In Pure Coordination (PC) 1-3, Rational Coordination (RC) 1-3 and Prisoners' Dilemma (PD) 1-3, most baselines perform poorly. In Stag Hunt (SH) 1-3 and Clean Up (CU) 1-2, MAPPO and MAA2C perform unsatisfactorily. We also find that HFSP attains competitive performance in Stag Hunt (SH) 1-3. However, HFSP performs poorly in Pure Coordination (PC) 1-3, Rational Coordination (RC) 1-3 and Prisoners' Dilemma (PD) 1-3. Therefore, the vanilla self-play method cannot be directly applied to improve the generalization of MARL methods. In summary, RPM boosts the performance by up to around 818% on average compared with MAPPO on the 6 evaluation scenarios. To answer Q2, we present experimental results on the impact of ψ and of the sampling ratio in HFSP in the following.

Figure 6: Evaluation results of RPM and baselines in 17 scenarios (panels: CG 1-3, SH 1-3, CU 1-2, PC 1-3, PD 1-3, RC 1-3; x-axis: training steps in millions; y-axis: mean evaluation return; curves: RPM (ours), MAPPO, MAA2C, RandNet, OPRE, HFSP, Random Policy; optimal values per panel: CG 98.9/14.3/36.4, SH 65.9/54.4/53.8, CU 722.6/385.9, PC 4.4/3.2/3.2, PD 55.7/60.8/36.8, RC 11.9/7.7/13.1). The red dashed horizontal lines indicate the results of the random policy. The optimal (opt) values shown in each sub-figure were gathered from (Leibo et al., 2021) and were generated by an exploiter; the exploiter was trained in the evaluation scenarios with RL methods for 1,000M time steps.

Figure 7: Histograms of training episode returns (counts vs. returns) for Chicken Game, Stag Hunt, Clean Up, Pure Coordination, Prisoners' Dilemma and Rational Coordination.

6.3 ABLATION STUDY

The Impact of ψ. To investigate the impact of the value of ψ on RPM's performance, we conduct ablation studies by (i) removing the ranks and sampling from the checkpoints directly, and (ii) reducing the number of ranks by changing the value of ψ. As shown in Fig. 8, without ranks (sampling policies randomly without ranks), RPM cannot attain stable performance in some evaluation scenarios. Especially in Pure Coordination (PC) 1-3, the result is low and has a large variance. Choosing the right interval ψ improves RPM's performance, as shown by the results on Pure Coordination (PC) 1-3 and Prisoners' Dilemma (PD) 1-3, so the value of ψ is important for RPM. We summarize the results and the values of ψ in Table 2 and Table 3.

The Sampling Ratio in HFSP. HFSP shows comparable results in some scenarios in Figure 6, where its sampling ratio is 0.3. We are interested in studying the impact of the sampling ratio in HFSP on evaluation performance. We conduct experiments in CU 1 and 2, PC 1 and 3, and PD 1 and 3, with the sampling ratios [0.9, 0.7, 0.5, 0.3, 0.1]. We use the default training setup and 3 random seeds.

Figure 9 (legend): RPM (ours), HFSP-0.9, HFSP-0.7, HFSP-0.5, HFSP-0.3, HFSP-0.1, Random Policy.
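For reference, here is a minimal sketch of the HFSP sampling rule as we read it from the setup above: with probability equal to the sampling ratio, an agent's behavior policy is replaced by a checkpoint drawn uniformly from the saved pool, with no ranks. The function and its interface are illustrative assumptions, not the authors' implementation.

```python
import random

def hfsp_behavior_policies(latest_policies, past_checkpoints, ratio=0.3):
    """With probability `ratio`, replace an agent's behavior policy with a
    uniformly sampled past checkpoint (no ranks); otherwise keep the latest."""
    behavior = []
    for latest in latest_policies:
        if past_checkpoints and random.random() < ratio:
            behavior.append(random.choice(past_checkpoints))
        else:
            behavior.append(latest)
    return behavior

# usage: 3 agents, a flat pool of saved checkpoints, sampling ratio 0.3
print(hfsp_behavior_policies(["latest0", "latest1", "latest2"],
                             [f"ckpt{k}" for k in range(5)], ratio=0.3))
```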
HFSP shows comparable results in PC 2 and 3, but its performance is poor in CU 1 and 2 and PD 2 and 3. As shown in Figure 9, HFSP relies heavily on the sampling ratio. HFSP would have to be carefully tuned on each substrate to attain good performance, which is not feasible. In contrast, RPM is stable (with a sampling ratio of 0.5) on all substrates. HFSP can also perform well in substrates such as PC and PD, where the return-checkpoint count distribution is more uniform. The absence of ranks leads to the frequent sampling of policies with high count values in substrates that have a skewed return-checkpoint count distribution, thereby reducing the diversity of the training data; such distributions typically comprise a large number of policies with suboptimal performance.

6.4 CASE STUDY

We showcase how RPM helps to train the focal agents to choose the right behaviors in the evaluation scenario after training in the substrate. To illustrate the trained performance of RPM agents, we use the RPM agent trained on Stag Hunt and run the evaluation on Stag Hunt 1. In Stag Hunt, there are 8 agents. Each agent collects resources that represent 'hare' (red) or 'stag' (green) and compares inventories in an interaction, i.e., an encounter. The outcome of an encounter is resolved as in the classic Stag Hunt matrix game. In this environment, agents face a tension between the reward for the team and the risk for the individual. In Stag Hunt 1, one focal agent interacts with seven pretrained background agents. All background agents were trained to play the 'stag' strategy during interactions (this preference was trained with pseudo rewards by Leibo et al. (2021), and the trained models are available at this link: https://github.com/deepmind/meltingpot). The optimal policy for the focal agent is also to play 'stag'. However, it is challenging for agents to detect other agents' strategies, since such behavior may not persist in the substrate. Fortunately, RPM enables focal agents to behave correctly in this scenario.

To answer Q3, we present an analysis of RPM on the substrate Stag Hunt and its evaluation scenario SH 1 in Fig. 10. In Fig. 10 (b), the number of keys in RPM grows monotonically during training and the maximum number of keys exceeds 20, showing that agents trained with RPM discover many novel patterns of multi-agent interaction: new keys are created and the trained models are saved in RPM. Meanwhile, the evaluation performance also increases in SH 1, as depicted in Fig. 10 (a). In Fig. 10 (c), it is interesting to see that the distribution of RPM keys expands during training. In the last 25 million training steps, the distribution of RPM keys covers policies of all performance levels, ranging from 0 to 14. By utilizing RPM, agents can collect diversified multi-agent trajectories for multi-agent training. Fig. 10 (d) shows the final histogram of RPM keys after training. Over 600 trained policies have small key values. Since agents explore the environment at the early stage of training, it is reasonable that many policies saved in RPM have low training episode returns. After 50 million training steps, RPM has more policies with higher training episode returns. Note that the maximum training episode return over RPM keys is above 14, while the maximum mean evaluation return of RPM shown in Fig. 10 (a) is around 14.
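The key statistics analyzed in Fig. 10 can be read directly off the memory. A small helper, our addition building on the RankedPolicyMemory sketch from Sec. 4, might look as follows.

```python
from collections import Counter

def rpm_key_histogram(rpm):
    """Count saved checkpoints per rank key, as visualized in Fig. 10 (c)-(d).

    `rpm` is a RankedPolicyMemory instance from the earlier sketch; this
    helper is illustrative and not part of the paper's implementation.
    """
    return Counter({k: len(v) for k, v in sorted(rpm.memory.items())})
```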
Our experiments show that training policies with good performance in the substrate is crucial for improving generalization performance in the evaluation scenarios. When MARL agents perform poorly in the substrate, the evaluation performance is also inferior or random, making it hard to obtain diversified policies. We show the results in Appx. E.

7 RELATED WORKS

Recent advances in MARL (Yang & Wang, 2020; Zhang et al., 2021) have demonstrated its success in various complex multi-agent domains, including multi-agent coordination (Lowe et al., 2017; Rashid et al., 2018; Wang et al., 2021b), real-time strategy (RTS) games (Jaderberg et al., 2019; Berner et al., 2019; Vinyals et al., 2019), social dilemmas (Leibo et al., 2017; Wang et al., 2018; Jaques et al., 2019; Vezhnevets et al., 2020), multi-agent communication (Foerster et al., 2016; Yuan et al., 2022), asynchronous multi-agent learning (Amato et al., 2019; Qiu et al., 2022), open-ended environments (Stooke et al., 2021), autonomous systems (Hüttenrauch et al., 2017; Peng et al., 2021) and game-theoretic equilibrium solving (Lanctot et al., 2017; Perolat et al., 2022). Despite these strides, training generalizable behaviors in MARL remains largely uninvestigated. Recently, generalization in RL (Packer et al., 2018; Song et al., 2019; Ghosh et al., 2021; Lyle et al., 2022) has seen much progress in domain adaptation (Higgins et al., 2017) and procedurally generated environments (Lee et al., 2019; Igl et al., 2020; Zha et al., 2020). However, there are few works on generalization in MARL domains (Carion et al., 2019; Vezhnevets et al., 2020; Mahajan et al., 2022; McKee et al., 2022). Vezhnevets et al. (2020) propose a hierarchical MARL method for agents to play against opponents they have not seen during training, but the evaluation scenarios are limited to simple competitive settings. Mahajan et al. (2022) studied generalization in MARL empirically and proposed theoretical findings based on successor features (Dayan, 1993); however, no method for achieving generalization in MARL was proposed in that work.

Ad-hoc team building (Stone & Kraus, 2010; Gu et al., 2021) models the multi-agent problem as a single-agent learning task. In ad-hoc team building, one ad-hoc agent is trained by interacting with agents that have fixed pretrained policies, so the non-stationarity issue is not severe. In our formulation, by contrast, non-stationarity is the main obstacle to MARL training. In addition, in ad-hoc team building there is only one ad-hoc agent, evaluated by interacting with agents that are unseen during training, while there can be more than one focal agent in our formulation as defined in Definition 2, making our formulation more general and challenging. There has been a growing interest in applying self-play to solve complex games (Heinrich et al., 2015; Silver et al., 2018; Hernandez et al., 2019; Baker et al., 2019); however, its value in enhancing the generalization of MARL agents has yet to be examined. Due to space constraints, we discuss meta-learning (Al-Shedivat et al., 2018; Kim et al., 2021) and population-based training (Strouse et al., 2021; Lupu et al., 2021; Tang et al., 2021) works in Appx. F.

8 CONCLUSION, LIMITATIONS AND FUTURE WORK

In this paper, we consider the problem of achieving generalizable behaviors in MARL. We first model the problem with Markov Games. To train agents that can interact with agents possessing unseen policies,
we propose a simple yet effective method, RPM, to collect diversified multi-agent interaction data. We save policies in RPM by ranking them by training episode return. Empirically, RPM significantly boosts the performance of MARL agents in various Melting Pot evaluation scenarios. RPM's performance depends on an appropriate value of ψ, and several attempts may be needed to determine the right value of ψ. We are interested in discovering broader measures for ranking policies that do not explicitly rely on the training episode return. Recently, there has been growing interest in planning in RL, especially with model-based RL; for future work, we are interested in applying planning and opponent/teammate modelling to attain generalizable MARL policies. Agents are engaged in complex interactions in multi-agent scenarios, and devising novel self-play methods for them is another direction for future work.

ETHICS STATEMENT

We addressed the relevant aspects in our conclusion and have no conflicts of interest to declare.

REPRODUCIBILITY STATEMENT

We provide detailed descriptions of our experiments in the appendix and list all relevant parameters in Table 4 and Table 5 in Appx. D. The code can be found at this link: https://sites.google.com/view/rpm-iclr2023/.

ACKNOWLEDGMENTS

We would like to thank the anonymous reviewers for their suggestions. We thank the support from Xinyi Wan, Jiahao Ji and Xiangfan Li of the infrastructure team at Sea AI Lab. Wei Qiu and Bo An are supported by the National Research Foundation, Singapore under its Industry Alignment Fund – Pre-positioning (IAF-PP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
1. What is the focus and contribution of the paper on multi-agent reinforcement learning?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its algorithmic difference from traditional fictitious self-play?
3. Do you have any concerns or questions about the relationship between the proposed method and fictitious self-play?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

This paper considers improving the generalization capability of MARL agents. The idea is to maintain an archive of policies based on their discretized returns. In each episode, agents from random return categories are selected to perform RL training, which ensures that each episode can contain diversified agent interactions. Experiments are conducted on the Melting Pot environment against a collection of baselines. Although the idea of using policies of different skill levels for improved generalization isn't new in MARL, the execution of this idea in the setting of general multi-agent games is neat and intuitive.

+++++++++++++++++++++++ post rebuttal +++++++++++++++++++++++

The authors have included additional results on the baselines, which makes the paper more complete. Therefore, I decided to update my score from 5 to 6.

Strengths And Weaknesses

Strength

The paper is clearly written and easy to follow. Although the overall idea isn't groundbreaking, it is still novel to store policies in a discretized memory and perform training over randomly sampled policies. This is algorithmically different from the classical way of using policy archives in MARL, which typically follows the framework of fictitious self-play. In addition, the selected testbed is sufficiently challenging. The proposed method could serve well as a baseline for follow-up works.

Weakness

Missing citations. In the related work section, the authors claim that "there is no method proposed to achieve generalization in MARL", which, to the best of my knowledge, is not true. It is true that there isn't a systematic study on the setting of general-sum games with more than two agents, but there are indeed a lot of works on relatively narrower domains. For example, [1] adopts the same idea of using past policies to generate diverse interactions so that the learned policy can generalize to humans. The difference is that [1] conducts two stages: first, create a policy population of different skill levels, and then train an adaptive policy from scratch to compete with diverse partners so that it can generalize during evaluation. [2] also considers the stag hunt game and follows a similar training paradigm to [1] to train a policy that can adapt to cooperative or non-cooperative partners. [3] adopts a population-based training framework similar to this work but improves the generalization ability of policies by promoting policy diversity. I think the paper should carefully discuss these related works.

[1] Collaborating with Humans without Human Data, DJ Strouse, Kevin R. McKee, Matt Botvinick, Edward Hughes, Richard Everett, NeurIPS 2021.
[2] Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization, Zhenggang Tang, Chao Yu, Boyuan Chen, Huazhe Xu, Xiaolong Wang, Fei Fang, Simon Shaolei Du, Yu Wang, Yi Wu, ICLR 2021.
[3] Trajectory Diversity for Zero-Shot Coordination, Andrei Lupu, Brandon Cui, Hengyuan Hu, Jakob Foerster, ICML 2021.

Relationship to Fictitious Self-Play (FSP). I have been carrying the same question throughout my reading of this paper. Although I do see experiments comparing the proposed method and FSP, I could still hardly understand why (at least intuitively) FSP is worse than RPM. I think the paper can be much improved if this question is carefully answered and discussed. Some of my thoughts on this question are listed below.

The performance of FSP should be tuned.
In Fig. 6, the performance of HFSP is substantially worse than MAPPO and even worse than random. I checked the appendix, which states that HFSP uses a surprisingly high past sampling rate of 30%. Particularly in the case of multiple agents, such a hyper-parameter choice could largely hurt MARL training. As a reference, [4] uses a 5% past sampling rate. I do think this hyper-parameter should be carefully tuned to ensure a fair comparison, considering the fact that FSP is perhaps the most important baseline to compare with. In particular, the RPM-random baseline achieves comparable performance to RPM in the prisoner's dilemma game while FSP is even worse than random.

[4] Emergent Tool Use From Multi-Agent Autocurricula, Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, Igor Mordatch, ICLR 2020.

The motivation for using a discretized category (rank). I'm definitely convinced that we should run MARL training with partners of different skill levels. And it is great to see in the ablation studies that the RPM-random baseline works worse. However, I don't think the paper anywhere explains and motivates why the use of a discretized rank is necessary. Intuitively, why wouldn't past sampling achieve the same effect? Note that RPM-random is indeed comparable to RPM in the prisoner's dilemma case. By contrast, why does RPM-random work poorly in pure coordination? Note that random sampling achieves strong performance in the Overcooked game [1], which is also a purely cooperative game. I do think some in-depth analysis should be conducted. A possible reason that I can imagine is as follows. Based on the histogram in Fig. 8(e), the distribution of policy skill levels is not uniform: high-reward policies are much rarer than sub-optimal ones, so naive random sampling may hardly choose the high-reward policies, which accordingly makes training slow. This could be a fair argument. However, if we exclude the policies with the highest rewards (e.g., return > 10), the sub-optimal policies are distributed pretty much uniformly in Fig. 8(e) to some extent. So, couldn't this issue (uneven distribution of skill levels) just be solved by FSP with a properly tuned past sampling rate? With a properly tuned rate, the probability of choosing a recent high-reward policy and that of choosing a poor past policy can be well balanced, if Fig. 8(e) is a generic diagram for most MARL applications. So, in addition to tuning the FSP baseline better, I would suggest the authors study the policy distribution in every scenario to better understand why random sampling fails and why a discretized category is necessary.

A fair comparison. This is a final and possibly repetitive comment. RPM requires careful tuning of the value of ψ, and RPM would simply reduce to RPM-random as ψ approaches 0. Unfortunately, there isn't a generic way to choose a good ψ (at least not in the current draft), so I believe a similar amount of tuning effort should be made for FSP on past sampling rates for a fair comparison.

Clarity, Quality, Novelty And Reproducibility

The paper is in general well written and particularly easy to follow. There is a small typo saying ψ is an integer (two lines above equation (2)); however, according to Table 2, this isn't the case. The reproducibility is good and the website is well-prepared.
Regarding the novelty, I think the paper can be much stronger if the authors could provide an in-depth analysis of the necessity of the discretized categorization of policy skill level.
ICLR
Title RPM: Generalizable Multi-Agent Policies for Multi-Agent Reinforcement Learning

Abstract

Despite the recent advancement in multi-agent reinforcement learning (MARL), MARL agents easily overfit the training environment and perform poorly in evaluation scenarios where other agents behave differently. Obtaining generalizable policies for MARL agents is thus necessary but challenging, mainly due to complex multi-agent interactions. In this work, we model the MARL problem with Markov Games and propose a simple yet effective method, called ranked policy memory (RPM), i.e., maintaining a look-up memory of policies, to achieve good generalizability. The main idea of RPM is to train MARL policies by gathering massive multi-agent interaction data. In particular, we first rank each agent's policies by its training episode return, i.e., the episode return of each agent in the training environment; we then save the ranked policies in the memory; when an episode starts, each agent can randomly select a policy from the RPM as the behavior policy. Each agent uses the behavior policy to gather multi-agent interaction data for MARL training. This innovative self-play framework guarantees the diversity of multi-agent interactions in the training data. Experimental results on Melting Pot demonstrate that RPM enables MARL agents to interact with unseen agents in multi-agent generalization evaluation scenarios and complete given tasks, and it significantly boosts the performance by up to 818% on average.

1 INTRODUCTION

In Multi-Agent Reinforcement Learning (MARL) (Yang & Wang, 2020), each agent acts in a decentralized manner and interacts with other agents to complete given tasks or achieve specified goals via reinforcement learning (RL) (Sutton & Barto, 2018). In recent years, much progress has been achieved in MARL research (Vinyals et al., 2019; Jaderberg et al., 2019; Perolat et al., 2022). However, MARL agents trained with current methods tend to suffer from poor generalizability (Hupkes et al., 2020) in new environments. The generalizability issue is critical to real-world MARL applications (Leibo et al., 2021), but is mostly neglected in current research. In this work, we aim to train MARL agents that can adapt to new scenarios where other agents' policies are unseen during training. We illustrate a two-agent hunting game as an example in Fig. 1. The game's objective for the two agents is to catch the stag together, as one agent acting alone cannot catch the stag and risks being killed. They may perform well in evaluation scenarios similar to the training environment, as shown in Fig. 1 (a) and (b), respectively, but when evaluated in scenarios different from the training ones, these agents often fail. As shown in Fig. 1 (c), the learning agent (called the focal agent following (Leibo et al., 2021)) is supposed to work together with the other agent (called the background agent, also following (Leibo et al., 2021)) that is pre-trained and can capture both the hare and the stag. In this case, the focal agent would fail to capture the stag without help from its teammate. The teammate of the focal agent may be tempted to catch the hare alone and not cooperate, or may only choose to cooperate with the focal agent after capturing the hare. Thus, the focal agent should adapt to its teammate's behavior to catch the stag. However, the policy of the background agent is unseen to the focal agent during training. Therefore, without generalization, the agents trained as in Fig. 1 (left) cannot achieve an optimal policy in the new evaluation scenario.
1 (left) cannot achieve an optimal policy in the new evaluation scenario. ∗Wei Qiu did the work while interning at Sea AI Lab. Corresponding author. Inspired by the fact that human learning is often accelerated by interacting with individuals of diverse skills and experiences (Meltzoff et al., 2009; Tomasello, 2010), we propose a novel method aimed at improving the generalization of MARL through the collection of diverse multi-agent interactions. Concretely, we first model the MARL problem with Markov Games (Littman, 1994) and then propose a simple yet effective method called ranked policy memory (RPM) to attain generalizable policies. The core idea of RPM is to maintain a look-up memory of policies during training for the agents. In particular, we first evaluate the trained agents’ policies after each training update. We then rank the trained agents’ policies by the training episode returns and save them in the memory. In this way, we obtain various levels, i.e., the performance of the policies. When starting an episode, the agent can access the memory and load a randomly sampled policy to replace the current behavior policy. The new ensemble of policies enables the agents in self-play to collect diversified experiences in the training environment. These diversified experiences contain many novel multi-agent interactions that can enhance the extrapolation capacity of MARL, thus boosting the generalization performance. We note that an easy extension by incorporating different behavior properties as the keys in RPM could potentially further enrich the generalization but it is left for future work. We implement RPM on top of the state-of-the-art MARL algorithm MAPPO (Yu et al., 2021). To verify its effectiveness, we conduct large-scale experiments with the Melting Pot (Leibo et al., 2021), which is a well-recognized benchmark for MARL generalization evaluation. The experiment results demonstrate that RPM significantly boosts the performance of generalized social behaviors up to 818% on average and outperforms many baselines in a variety of multi-agent generalization evaluation scenarios. Our code, pictorial examples, videos and experimental results are available at this link: https://sites.google.com/view/rpm-iclr2023/. 2 PRELIMINARIES Markov Games. We consider the Markov Games (Littman, 1994) represented by a tuple G = ⟨N ,S,A,O, P,R, γ, ρ⟩. N is a set of agents with the size |N | = N ; S is a set of states; A = ×Ni=1Ai is a set of joint actions with Ai denoting the set of actions for an agent i; O = ×Ni=1Oi is the observation set, with Oi denoting the observation set of the agent i; P : S ×A → S is the transition function and R = ×Ni=1ri is the reward function where ri : S ×A → R specifies the reward for the agent i given the state and the joint action; γ is the discount factor; the initial states are determined by a distribution ρ : S → [0, 1]. Given a state s ∈ S , each agent i ∈ N chooses its action ui and obtains the reward r(s,u) with the private observation oi ∈ Oi, where u = {ui}Ni=1 is the joint action. The joint policy of agents is denoted as πθ = {πθi}Ni=1 where πθi : S ×Ai → [0, 1] is the policy for the agent i. The objective of each agent is to maximize its total expected return Ri = ∑∞ t=0 γ trti . Multi-Agent RL. In MARL, multiple agents act in the multi-agent systems to maximize their respective returns with RL. 
Each agent’s policy πi is optimized by maximizing the following objective: J (πi) ≜ Es0:∞∼ρ0:∞G ,ai0:∞∼πi [ ∞∑ t=0 γtrit ] , where J (πi) is a performance measure for policy gradient RL methods (Williams, 1992; Lillicrap et al., 2016; Fujimoto et al., 2018). Each policy’s Q value Qi is optimized by minimizing the following regression loss (Mnih et al., 2015) with TD-learning (Sutton, 1984): L(θi) ≜ ED′∼D [( yit −Qiθi ( st,ut, s i t, u i t ))2] , where yit = r i t + γmaxu′ Q i θ̄i ( st+1,u ′, sit, u i,′). θi are the parameters of the agents. θ̄i is the parameter of the target Qi and periodically copied from θ. D′ is a sample from the replay buffer D. 3 PROBLEM FORMULATION We introduce the formulation of MARL for training and evaluation in our problem. Our goal is to improve generalizabiliby of MARL policies in scenarios where policies of agents or opponents are unseen during training while the physical environment is unchanged. Following Leibo et al. (2021), the training environment is defined as substrate. Each substrate is an N -agent partially observable Markov game G. Each agent optimizes its policy πθi via the following protocol. Definition 1 (Multi-Agent Training). There are N agents act in the substrate, which is denoted as G. Each agent receives partial environmental observation not known to other agents and aims to optimizes its policy πθi by optimizing its accumulated rewards: ∑∞ t=0 γ trit. The performance of the joint policy πθ = {πθi}Ni=1 is measured by the mean individual return: R̄(πθ) = 1N ∑N i=1R(πθi ;G). R(πθi ;G) measures the episode return of policy πθi in game G for agent i. In order to evaluate the trained MARL policies in evaluation scenario G′, we follow the evaluation protocol defined by Leibo et al. (2021): Definition 2 (Multi-Agent Evaluation). There are M (1 ≤ M ≤ N − 1) focal agents that are selected from N agents. The focal agents are agents to be evaluated in evaluation scenarios. They are paired with N −M background agents whose policies πϕ = {πϕj}N−Mj=1 were pre-trained with pseudo rewards in the same physical environment where the policies πθ are trained. To measure the generalized performance in evaluation scenarios, we use the mean individual return of focal agents as the performance measure: R̄({πθ}Mi=1) = 1M ∑M i=1R(πθi ;G′). We show an example of our formulation in Fig. 2. Note that the focal agents cannot utilise the interaction data collected during evaluation to train or finetune their policies. Without training the policies of focal agents with the collected trajectories during evaluation, the focal agents should behave adaptively to interact with the background agents to complete challenging multi-agent tasks. It is also worth noting that the ad-hoc team building (Stone & Kraus, 2010; Gu et al., 2021) is different from our formulation both in the training and evaluation. We discuss the differences in the related works section (Paragraph 3, Sec. 7). In MARL, the focal agents need adaptively interact with background agents to complete given tasks. Formally, we define the objective for optimizing performance of the focal agents without exploiting their trajectories in the evaluation scenario for training the policies {πθj}Mj=1: maxJ ({πθj} M j=1) ≜ maxEs0:∞∼ρ0:∞G′ ,a j 0:∞∼{πθj } M j=1 [ ∞∑ t=0 γt 1 M M∑ j=1 rjt ∣∣∣∣∣G′ ] . (1) 4 RANKED POLICY MEMORY To improve the generalization of MARL, agents in the substrate must cover as much as multi-agent interactions, i.e., data, that resemble the unseen multi-agent interactions in the evaluation scenario. 
However, current training paradigms, like independent learning (Tampuu et al., 2017) and centralized training and decentralized execution (CTDE) (Oliehoek et al., 2008), cannot give diversified multiagent interactions, as the agents’ policies are trained at the same pace. To this end, we propose a Ranked Policy Memory (RPM) method to provide diversified multi-agent behaviors. RPM Building & Updating. We denote an RPM with Ψ, which consists of |Rmax| entries, i.e., ranks, where |Rmax| is the maximum training episode return (the episode return in the substrate). When an agent is acting in the substrate, it will receive the training episode return R of all agents with policies {πiθ}Ni=1. Then {πiθ}Ni=1 are saved into Ψ by appending agents’ policies into the corresponding memory slot, Ψ[re].add({πie}Ni=1). To avoid there being too many entries in the policy memory caused by continuous episode return values, we discretize the training episode return. Each discretized entry κ covers a range of [κ, κ+ ψ), where ψ > 0 and it can be either an integer or a float number. For the training episode return R, the corresponding entry κ can be calculated by: κ = { ⌊R/ψ⌋ × 1{(R mod ψ) ̸= 0} × ψ, if R ≥ 0, ⌊R/ψ⌋ × ψ, otherwise. (2) where 1{·} is the indicator function, and ⌊·⌋ is the floor function. Intuitively, discretizing R saves memory and memorize policies of similar performance in to the same rank. Therefore, diversified policies can be saved to be sampled for agents. RPM Sampling. The memory Ψ stores diversified policies with different levels of performance. We can sample various policies of different ranks and assign each policy to each agent in the substrate to collect multi-agent trajectories for training. These diversified multi-agent trajectories can resemble trajectories generated by the interaction with agents possessing unknown policies in the evaluation scenario. At the beginning of an episode, we first randomly sampleN keys with replacement and then randomly sample one policy for each key from the corresponding list. All agents’ policies will be replaced with the newly sampled policies for multi-agent interactions in the substrate, thus generating diversified multi-agent trajectories. Algorithm 1: MARL with RPM 1 Input: Initialize πθ , Ψ, D, G and G′; 2 Input: Initialize behavior policy πθb ← πθ; 3 for each update do 4 if RPM sampling then 5 πθb ← SamplingRPM(Ψ); 6 D ← GatherTrajectories(πθb ,G); 7 πθ ← MARLTrainig(πθ,D); 8 Ψ← UpdateRPM(πθ,Ψ,G); 9 R̄← Evaluate(πθ,G′); 10 πθb ← πθ; 11 Output: πθ . The Workflow of RPM. We showcase an example of the workflow of RPM in Fig. 3. There are three agents in training. Agents sample policies from RPM. Then all agents collect data in the substrate for training. The training episode return is then used to update RPM. During evaluation, agents 1 and 2 are selected as focal agents and agent 3 is selected as the background agent. We present the pseudo-code of MARL training with RPM in Algorithm 1. In Lines 4-5, the πθb is updated by sampling policies from RPM. Then, new trajectories of D are collected in Line 6. πθ is trained in Line 7 with MARL method by using the newly collected trajecotries and πθb is updated with the newly updated πθ. RPM is updated in Line 8. After that, the performance of πθ is evaluated in the evaluation scenario G′ and the evaluation score R̄ is returned in Line 9. Discussion. 
RPM leverages agents’ previously trained models in substrates to cover as many patterns of multi-agent interactions as possible to achieve generalization of MARL agents when paired with agents with unseen policies in evaluation scenarios. It uses the self-play framework for data collection. Self-play (Brown, 1951; Heinrich et al., 2015; Silver et al., 2018; Baker et al., 2019) maintains a memory of the opponent’s previous policies for acquiring equilibria. RPM differs from other self-play methods in four aspects: (i) self-play utilizes agent’s previous policies to create fictitious opponents when the real opponents are not available. By playing with the fictitious opponents, many fictitious data are generated for training the agents. In RPM, agents load their previous policies to diversify the multi-agent interactions, such as multi-agent coordination and social dilemmas, and all agents’ policies are trained by utilizing the diversified multi-agent data. (ii) Self-play does not maintain explicit ranks for policies while RPM maintains ranks of policies. (iii) Self-play was not introduced for generalization of MARL while RPM aims to improve the generalization of MARL. In Sec. 6, we also present the evaluation results of a self-play method. 5 MARL TRAINING We incorporate RPM into the MARL training pipeline. We take MAPPO (Yu et al., 2021) for instantiating our method, which is a multi-agent variant of PPO (Schulman et al., 2017) and outperforms many MARL methods (Rashid et al., 2018; 2020; Wang et al., 2021a) in various complex multi-agent domains. In MAPPO, a central critic is maintained for utilizing the concealed information of agents to boost multi-agent learning due to non-stationarity. RPM introduces a novel method for agents to collect experiences/trajectories τ = {τi}Ni=1. Each agent optimizes the following objective: J (θi) = E [ min ( ηti ( θti ) ·Ati,clip ( ηti ( θti ) , 1− ϵ, 1 + ϵ ) ·Ati )] , (3) where ηti(θ t i) = πθt i (uti|τ t i ) π θold i (uti|τti ) denotes the important sampling weight. The clip (·) clips the values of θi that are outside the range [1− ϵ, 1 + ϵ] and ϵ is a hyperparameter. Ati is a generalized advantage estimator (GAE) (Schulman et al., 2015). To optimize the central critic Vψ({oti, uti}Ni=1), we mix agents’ observation-action pairs and output an N -head vector where each value corresponds to the agent’s value: L(ψ) := ED′∼D [( yt − Vψ̄({oti, uti}Ni=1) )2] , (4) where yt = [∑k−1 l=0 γ lrt+li + γ kVψ̄({ot+ki , u t+k i }Ni=1)[i] ]N i=1 is a vector of k-step returns, and D′ is a sample from the replay buffer D. In complex scenarios, e.g., Melting Pot, with an agent’s observation as input, its action would not impact other agents’ return, since the global states contain redundant information that deteriorates multi-agent learning. We present the whole training process, the network architectures of the agent and the central critic in Appx. D. 6 EXPERIMENTS In this section, to verify the effectiveness of RPM in improving the generalization of MARL, we conduct extensive experiments on Melting Pot and present the empirical results. We first introduce Melting Pot, baselines and experiment setups. Then we present the main results of RPM. To demonstrate that ψ is important for RPM, we conducted ablation studies. We finally showcase a case study to visualize RPM. To sum up, we answer the following questions: Q1: Is RPM effective in boosting the generalization performance of MARL agents? Q2: How does the value of ψ impact RPM training? 
Q3: Does RPM gather diversified policies and trajectories? 6.1 EXPERIMENTAL SETUP Melting Pot. To demonstrate that RPM enables MARL agents to learn generalizable behaviors, we carry out extensive experiments on DeepMind’s Melting Pot (Leibo et al., 2021). Melting Pot is a suite of testbeds for the generalization of MARL methods. It proposes a novel evaluation pipeline for the evaluation of the MARL method in various domains. That is, all MARL agents are trained in the substrate; during evaluation, some agents are selected as the focal agents and the rest agents become the background agents (pre-trained policies of MARL models will be loaded); the evaluation scenarios share the same physical properties as the substrates. Melting Pot environments possess many properties, such as temporal coordination and free riding as depicted in Table 1. An agent performing well in such environments indicates that its behaviors exhibit these properties. In Fig. 4, the agent’s observation is shown in the green box to the lower left of the state (i.e., the whole image). The agent is in the lower middle of the observation. The deep neural network architecture of the agent’s policy is shown on the left. More information about substrates, scenarios, neural network architectures and training details can be found in Appx. D. Baselines. Our baselines are MAPPO (Yu et al., 2021), MAA2C (Papoudakis et al., 2021), OPRE (Vezhnevets et al., 2020), heuristic fictitious self-play (HFSP) (Heinrich, 2017; Berner et al., 2019) and RandNet (Lee et al., 2019). MAPPO and MAA2C are MARL methods that achieved outstanding performance in various multi-agent scenarios (Papoudakis et al., 2021). OPRE was proposed for the generalization of MARL. RandNet is a general method for the generalization of RL by introducing a novel component in the convolutional neural network. HFSP is a general self-play method for obtaining equilibria in competitive games, we use it by using the policies saved by RPM. Training setup. We use 6 representative substrates (Fig. 5) to train MARL policies and choose some evaluation scenarios from each substrate as our evaluation testbed. The properties of the environments are listed in Table 1. We train agents in Melting Pot substrates for 200 million frames with 3 random seeds for all methods. Our training framework is distributed with 30 CPU actors to collect experiences and 1 GPU for the learner to learn policies. We implement our actors with Ray (Moritz et al., 2018) and the learner with EPyMARL (Papoudakis et al., 2021). We use mean-std to measure the performance of all methods. The bold lines in all figures are mean values, and the shades stand for the standard deviation. Due to a limited computation budget, it is redundant to compare our method with other methods, such as QMIX (Rashid et al., 2018) and MADDPG (Lowe et al., 2017) as MAPPO outperforms them. All experiments are conducted on NVIDIA A100 GPUs. 6.2 EXPERIMENT RESULTS To answer Q1, we present the evaluation results of 17 Melting Pot evaluation scenarios in Fig. 6. Our method can boost MARL in various evaluation scenarios, which have different properties, as shown in Table 1. In Chicken Game (CG) 1-2 (the number stands for the number of the evaluation scenario of Chicken Game), RPM outperforms its counterparts by a convincing margin. HFSP performs no better than RPM. RandNet gets around 15 evaluation mean returns on Chicken Game (CG) 1. MAA2C and OPRE perform nearly random (the red dash lines indicate the random result) in the two scenarios. 
In Pure Coordination (PC) 1-3, Rational Coordination (PC) 1-3 and Prisoners’ Dilemma (PD) 1-3, most baselines perform poorly. In Stag Hunt (SH) 1-3 and Clean Up (CU) 1-2, MAPPO and MAA2C perform unsatisfactorily. We can also find that HFSP even gets competitive performance in Stag Hunt (SH) 1-3. However, HFSP performs poorly in Pure Coordination (PC) 1-3, Rational Coordination (RC) 1-3 and Prisoners’ Dilemma (PD) 1-3. Therefore, the vanilla self-play method cannot directly be applied to improve the generalization of MARL methods. In summary, RPM boosts the performance up to around 818% on average compared with MAPPO on 6 evaluation scenarios. To answer Q2, we present experimental results of the impact of ψ and the sampling ratio in HFSP in the following. 0 50 100 150 200 0 20 40 60 opt: 98.9 CG 1 0 50 100 150 200 0 5 10 opt: 14.3 CG 2 0 50 100 150 200 0 10 opt: 36.4 CG 3 0 50 100 150 200 0 5 10 15 opt: 65.9 SH 1 0 50 100 150 200 0 10 opt: 54.4 SH 2 0 50 100 150 200 0 10 20 opt: 53.8 SH 3 0 50 100 150 200 0 200 400 opt: 722.6 CU 1 0 50 100 150 200 0 100 opt: 385.9 CU 2 0 50 100 150 200 0.00 0.25 0.50 0.75 opt: 4.4 PC 1 0 50 100 150 200 0.0 0.2 0.4 opt: 3.2 PC 2 0 50 100 150 200 0.0 0.5 1.0 opt: 3.2 PC 3 0 50 100 150 200 0 10 opt: 55.7 PD 1 0 50 100 150 200 0 10 20 30 opt: 60.8 PD 2 0 50 100 150 200 0 10 20 opt: 36.8 PD 3 0 50 100 150 200 0 1 2 3 opt: 11.9 RC 1 0 50 100 150 200 0 1 opt: 7.7 RC 2 0 50 100 150 200 0 2 4 opt: 13.1 RC 3 RPM (ours) MAPPO MAA2C RandNet OPRE HFSP Random Policy Training Steps (million) Ev al R et ur n M ea n Figure 6: Evaluation results of RPM and baselines in 17 scenarios. The red dash horizontal lines indicate the results of random policy. The optimal (opt) values are shown in each sub-figure and were gathered from (Leibo et al., 2021), which an exploiter generated. The exploiter was trained in the evaluation scenarios with RL methods, and the training time steps were 1,000 M. 0 10 0 250 500 750 Co un ts Chicken Game 0 10 0 250 500 750 Stag Hunt 0 100 0 2000 4000 Clean Up 0 1 0 500 1000 1500 Pure Coordination 0 10 0 250 500 750 Prisoners Dilemma 0 2 0 1000 2000 Rational Coordianation Training Episode Returns Figure 7: Histograms of training episode returns. 6.3 ABLATION STUDY The Impact of ψ. To investigate which value of ψ has the greatest impact on RPM performance, we conduct ablation studies by (i) removing ranks and sampling from the checkpoint directly; (ii) reducing the number of ranks by changing the value of ψ. As shown in Fig. 8, without ranks (sampling policies without ranks randomly), RPM cannot attain stable performance in some evaluation scenarios. Especially in Pure Coordination (PC) 1-3, the result is low and has a large variance. In RPM, choosing the right interval ψ can improve the performance, as shown in the results of Pure Coordination (PC) 1- 3 and Prisoners’ Dilemma (PD) 1-3, showing that the value of ψ is important for RPM. We summarize the results and values of ψ in Table 2 and Table 3. The Sampling Ratio in HFSP HFSP shows comparable results in some scenarios in Figure 6. In Figure 6, the sampling ratio of HFSP is 0.3. We are interested in studying the impact of the sampling RPM (ours) HFSP-0.9 HFSP-0.7 HFSP-0.5 HFSP-0.3 HFSP-0.1 Random Policy ratio in HFSP on evaluation performance. We conduct experiments in CU 1 and 2, PC 1 and 3 and PD 1 and 3. The sampling ratio list is [0.9, 0.7, 0.5, 0.3, 0.1]. We use the default training setup and use 3 random seeds. 
HFSP shows comparable results in PC 2 and 3, but its performances are poor in CU 1 and 2 and PD 2 and 3. As shown in Figure 9, HFSP heavily relies on the sampling ratio. HFSP should be carefully tuned on each substrate to attain good performance, which is not feasible. In contrast, RPM is stable (the sampling ratio is 0.5) on all substrates. HFSP can also perform well in substrates such as PC and PD, where the return-checkpoint count distribution is more uniform. The absence of ranks leads to the frequent sampling of policies with high count values in substrates that have skewed return-checkpoint count distribution, thereby reducing the diversity of training data. Such distributions typically comprise a large number of policies with suboptimal performance. 6.4 CASE STUDY We showcase how RPM helps to train the focal agents to choose the right behaviors in the evaluation scenario after training in the substrate. To illustrate the trained performance of RPM agents, we use the RPM agent trained on Stag Hunt and run the evaluation on Stag Hunt 1. In Stag Hunt, there are 8 agents. Each agent collects resources that represent ‘hare’ (red) or ‘stag’ (green) and compares inventories in an interaction, i.e., encounter. The results of solving the encounter are the same as the classic Stag Hunt matrix game. In this environment, agents are facing tension between the reward for the team and the risk for the individual. In Stag Hunt 1, One focal agent interacts with seven pretrained background agents. All background agents were trained to play the ‘stag’ strategy during the interaction1. The optimal policy for the focal agent is also to play ‘stag’. However, it is challenging for agents to detect other agents’ strategy since such a behavior may not persist in the substrate. Luckily, RPM enables focal agents to behave correctly in this scenario. To answer Q3, we present the analysis of RPM on the substrate Stag Hunt and its evaluation scenario SH 1 in Fig. 10. We can find that in Fig. 10 (b), the number of the keys in RPM is growing monotonically during training and the maximum number of the keys in RPM is over 20, showing that agents trained with RPM discover many novel patterns of multi-agent interaction and new keys are created and the trained models are saved in RPM. Meanwhile, the evaluation performance is also increasing in SH 1 as depicted in Fig. 10 (a). In Fig. 10 (c), it is interesting to see that the distribution of the keys of RPM is expanding during training. In the last 25 million training steps, the last distribution of RPM keys covers all policies of different performance levels, ranging from 0 to 14. By utilizing RPM, agents can collect diversified multi-agent trajectories for multi-agent training. Fig. 10 (d) demonstrates the final histogram of RPM keys after training. There are over 600 trained policies that have a small value of keys. Since agents should explore the environment at the early 1This preference was trained with pseudo rewards by Leibo et al. (2021) and the trained models are available at this link: https://github.com/deepmind/meltingpot stage of training, it is reasonable to find that many trained policies of RPM keys have low training episode returns. After 50 million training steps, RPM has more policies with higher training episode returns. Note that the maximum training episode return of RPM keys is over 14 while the maximum mean evaluation return of RPM shown in Fig. 10 (a) is around 14. 
Our experiments show that training policies with good performance in the substrate is crucial for improving generalization performance in the evaluation scenarios. When MARL agents perform poorly in the substrate, the evaluation performance will also be inferior or random, making it hard to obtain diversified policies. We show the results in Appx. E.

7 RELATED WORKS

Recent advances in MARL (Yang & Wang, 2020; Zhang et al., 2021) have demonstrated its success in various complex multi-agent domains, including multi-agent coordination (Lowe et al., 2017; Rashid et al., 2018; Wang et al., 2021b), real-time strategy (RTS) games (Jaderberg et al., 2019; Berner et al., 2019; Vinyals et al., 2019), social dilemmas (Leibo et al., 2017; Wang et al., 2018; Jaques et al., 2019; Vezhnevets et al., 2020), multi-agent communication (Foerster et al., 2016; Yuan et al., 2022), asynchronous multi-agent learning (Amato et al., 2019; Qiu et al., 2022), open-ended environments (Stooke et al., 2021), autonomous systems (Hüttenrauch et al., 2017; Peng et al., 2021) and game-theoretic equilibrium solving (Lanctot et al., 2017; Perolat et al., 2022). Despite these strides, training generalizable behaviors in MARL has yet to be investigated. Recently, generalization in RL (Packer et al., 2018; Song et al., 2019; Ghosh et al., 2021; Lyle et al., 2022) has achieved much progress in domain adaptation (Higgins et al., 2017) and procedurally generated environments (Lee et al., 2019; Igl et al., 2020; Zha et al., 2020). However, there are few works on generalization in MARL domains (Carion et al., 2019; Vezhnevets et al., 2020; Mahajan et al., 2022; McKee et al., 2022). Vezhnevets et al. (2020) proposed a hierarchical MARL method for agents to play against opponents they have not seen during training; however, the evaluation scenarios are limited to simple competitive settings. Mahajan et al. (2022) studied generalization in MARL empirically and presented theoretical findings based on successor features (Dayan, 1993), but no method for achieving generalization in MARL was proposed in that work. Ad-hoc team building (Stone & Kraus, 2010; Gu et al., 2021) models the multi-agent problem as a single-agent learning task. In ad-hoc team building, one ad-hoc agent is trained by interacting with agents that have fixed pretrained policies, so the non-stationarity issue is not severe. In our formulation, by contrast, non-stationarity is the main obstacle to MARL training. In addition, ad-hoc team building evaluates only one ad-hoc agent against agents unseen during training, while in our formulation there can be more than one focal agent, as defined in Definition 2, making our formulation more general and challenging. There has been growing interest in applying self-play to solve complex games (Heinrich et al., 2015; Silver et al., 2018; Hernandez et al., 2019; Baker et al., 2019); however, its value in enhancing the generalization of MARL agents has yet to be examined. Due to space constraints, we discuss meta-learning (Al-Shedivat et al., 2018; Kim et al., 2021) and population-based training (Strouse et al., 2021; Lupu et al., 2021; Tang et al., 2021) works in Appx. F.

8 CONCLUSION, LIMITATIONS AND FUTURE WORK

In this paper, we consider the problem of achieving generalizable behaviors in MARL. We first model the problem as a Markov Game.
To train agents that can interact with agents possessing unseen policies, we propose a simple yet effective method, RPM, to collect diversified multi-agent interaction data. We save policies in RPM by ranking them according to the training episode return. Empirically, RPM significantly boosts the performance of MARL agents in various Melting Pot evaluation scenarios. RPM's performance depends on choosing an appropriate value of ψ, and several attempts may be needed to determine the right value. We are interested in discovering broader measures for ranking policies that do not explicitly rely on the training episode return. Recently, there has been growing interest in planning in RL, especially with model-based RL; for future work, we are interested in applying planning and opponent/teammate modelling to attain generalizable MARL policies. Agents engage in complex interactions in multi-agent scenarios, and devising novel self-play methods is another direction for future work.

ETHICS STATEMENT

We addressed the relevant aspects in our conclusion and have no conflicts of interest to declare.

REPRODUCIBILITY STATEMENT

We provide detailed descriptions of our experiments in the appendix and list all relevant parameters in Table 4 and Table 5 in Appx. D. The code can be found at this link: https://sites.google.com/view/rpm-iclr2023/.

ACKNOWLEDGMENTS

We would like to thank the anonymous reviewers for their suggestions. We thank Xinyi Wan, Jiahao Ji and Xiangfan Li of the infrastructure team at Sea AI Lab for their support. Wei Qiu and Bo An are supported by the National Research Foundation, Singapore under its Industry Alignment Fund – Pre-positioning (IAF-PP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
1. What is the main contribution of the paper, and how does it address the generalization problem in multi-agent reinforcement learning?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its heuristic nature and dependence on a hyperparameter?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the paper's explanation of diversified multi-agent trajectories resembling those generated by unknown policies in the evaluation scenario?
5. Does the reviewer have any suggestions for improving the proposed RPM method or addressing its limitations?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a simple method called ranked policy memory (RPM) that can be plugged into any existing MARL algorithm to address the generalization problem of MARL. The main idea of RPM is to maintain a look-up memory of historical rollout policies ranked by training episode return. At each episode, rollout policies are uniformly sampled from RPM. The objective of this process is to keep the agents' policies diverse and to resemble the unknown policies that may appear in the evaluation scenarios. The performance of RPM is empirically verified by experiments and ablation studies.

Strengths And Weaknesses
Strengths
- This paper is generally well written and the description of the proposed method is clear.
- The motivation of this paper is clear and I believe this is meaningful work.
- The proposed method is demonstrated to improve generalization performance.
- The ablation study clearly exposes the limitations of the proposed method, so that it is transparent and easy for followers to improve on it.
- The comparisons to prior works are sufficient.

Weaknesses
- The proposed method is quite heuristic (though this alone cannot be a reason to reject it), and it is unclear why it works. Although the authors claim that the diversified multi-agent trajectories can resemble trajectories generated by interaction with agents possessing unknown policies in the evaluation scenario, this explanation is unconvincing to me. The most critical reason is that the background agents are pre-trained, so it is possible that the policy space of the trained agents completely deviates from the policy space of the pre-trained agents (with no intersection). The authors should give a better explanation.
- The proposed RPM is highly dependent on the choice of the hyperparameter ψ, as shown in the ablation study. Could the authors give a brief proposal to address this issue?
- As shown in Fig. (e) of the ablation study, the mode of the final distribution of RPM keys is almost the mean of the returns. One can infer that the effect of RPM could make the MARL policies adapt to average performance. This might lead to the issue, as the authors discussed in the paper, that the MARL agents' performance will affect the evaluation performance.

Clarity, Quality, Novelty And Reproducibility
The clarity of this paper is good, since the whole paper is well written. The novelty of this paper may not be adequate, since it is just an intuitive extension of the prior self-play framework with no reasonable explanation. The originality of the paper is moderate as far as I know; however, I am not sure whether a similar idea has appeared in the past (since it is so direct and intuitive). The reproducibility of this paper is good, since it provides both code and experimental setups. Overall, the quality of this paper is not bad.
ICLR
Title Character Generation through Self-Supervised Vectorization Abstract The prevalent approach in self-supervised image generation is to operate on pixel-level representations. While this approach can produce high quality images, it cannot benefit from the simplicity and innate quality of vectorization. Here we present a drawing agent that operates on a stroke-level representation of images. At each time step, the agent first assesses the current canvas and decides whether to stop or keep drawing. When a ‘draw’ decision is made, the agent outputs a program indicating the stroke to be drawn. As a result, it produces a final raster image by drawing the strokes on a canvas, using a minimal number of strokes and dynamically deciding when to stop. We train our agent through reinforcement learning on the MNIST and Omniglot datasets for unconditional generation and parsing (reconstruction) tasks. We utilize our parsing agent for exemplar generation and type-conditioned concept generation in the Omniglot challenge without any further training. We present successful results on all three generation tasks and the parsing task. Crucially, we do not need any stroke-level or vector supervision; we only use raster images for training. Code will be made available upon acceptance.

1 INTRODUCTION

While, innately, humans sketch or write through strokes, this type of visual depiction is a more difficult task for machines. Image generation problems are typically addressed by raster-based algorithms. The introduction of generative adversarial networks (GAN) (Goodfellow et al., 2014), variational autoencoders (VAE) (Kingma & Welling, 2013) and autoregressive models (Van Oord et al., 2016) has led to a variety of applications. Style transfer (Gatys et al., 2015; Isola et al., 2017), photo-realistic image generation (Brock et al., 2018; Karras et al., 2019), and super resolution (Ledig et al., 2017; Bin et al., 2017) are some of the significant instances of the advancing field. Additionally, Hierarchical Bayesian models formulated by deep neural networks are able to use the same generative model for multiple tasks such as classification, conditional and unconditional generation (Hewitt et al., 2018; Edwards & Storkey, 2016). These raster-based algorithms can produce high quality images, yet they cannot benefit from the leverage that higher-level abstractions bring about. Vector-level image representation intrinsically prevents models from generating blurry samples and allows for compositional image generation, which eventually may contribute to our understanding of how humans create or replicate images (Lake et al., 2017). This idea, together with the introduction of sketch-based datasets such as Omniglot (Lake et al., 2012), Sketchy (Sangkloy et al., 2016), and QuickDraw (Ha & Eck, 2017), has triggered a significant body of work in recent years. Stroke-based image generation and parsing have been addressed with both vector-supervised models and self-supervised generation. Of these, one prominent algorithm is Bayesian Program Learning (Lake et al., 2015), where a single model can be utilized for 5 tasks in the Omniglot challenge: (i) parsing, (ii) unconditional generation, (iii) generating exemplars of a given concept, (iv) generating novel concepts of a type, and (v) one-shot classification.
This approach is also shown to be scalable when supported by the representative capabilities of neural networks (Feinman & Lake, 2020b;a); however, it requires stroke-level or vector supervision, which is costly to obtain or simply nonexistent. VAE/RNN (Ha & Eck, 2017; Cao et al., 2019; Chen et al., 2017; Aksan et al., 2020) and Transformer-based models (Ribeiro et al., 2020; Lin et al., 2020) are other common methods applied to vector-based image generation. Although impressive results have been presented, stroke-level supervision is required to train these models.

Figure 1: Our drawing agent can accomplish four different tasks. From left to right: it can generate novel characters, parse a given character into its strokes, generate new exemplars for a given character, and generate novel concepts (i.e. characters) given a type (i.e. alphabet). Ours is the first stroke-based method to tackle all of the generation and parsing tasks in the Omniglot Challenge without requiring any stroke-level supervision.

Recently, self-supervised (i.e. without stroke-level supervision) stroke-based image generation has been addressed with Reinforcement Learning (RL) (Ganin et al., 2018; Mellor et al., 2019; Huang et al., 2019; Schaldenbrand & Oh, 2020). We call this approach self-supervised vectorization, since the vectorization of images is learned using only raster images as supervision. These methods mostly focus on image reconstruction, and their exploration of generation is limited. For example, none of them address the conditional generation problem, or they need the number of actions (i.e. strokes) as input. In this paper, we propose a self-supervised reinforcement learning approach where we train a drawing agent for character generation and parsing. Our drawing agent operates on the stroke-level (i.e. vector) representation of images. At each time step, our agent takes the current canvas as input and dynamically decides whether to continue drawing or stop. When a ‘continue’ decision is made, the agent outputs a program specifying the stroke to be drawn. A non-differentiable renderer takes this program and draws it on the current canvas. Consequently, a raster image is produced stroke-by-stroke. We first train this agent for two tasks by formulating appropriate loss functions: (i) unconditional character generation and (ii) parsing. Unconditional character generation is the task of generating a novel concept (i.e. character; ‘concept’ is Omniglot challenge terminology) given a dataset of concepts. For this task, our loss function includes the following components: an adversarial loss produced by a discriminator to make generated characters as “real” as possible, and two data fidelity losses assessing the conformity of the current canvas with the statistical properties of the overall dataset. We also use an additional entropy loss to prevent mode collapse. In the parsing task, the goal for our agent is to reconstruct a given character (in raster form) by drawing it through strokes, using as few of them as possible. We utilize the same action space and environment as in the unconditional generation model, the only difference being that the input fed to the policy is the complete canvas to be reconstructed. Our reward function in this task has two components: a fidelity reward that indicates how much of a stroke is consistent with the target image, and a penalty that increases with every ‘continue’ action taken. This model explicitly learns the vectorization of the input raster image in a self-supervised manner.
Next, we show that our parsing model can be exploited for exemplar generation (i.e. a novel drawing of a given character) and novel concept generation from type (i.e. novel character generation given an alphabet of 10 characters) without any further training. Given a character, the policy network of our parsing model outputs a distribution over the action space, where the likelihood of actions at each time step eventually allows us to generate variations of the input image. For novel concept generation conditioned on a type (i.e. alphabet), we compose a stroke library by parsing the provided inputs. As we sample strokes from this library, we observe novel samples forming in coherence with the overall structure of the alphabet. To the best of our knowledge, we are the first to tackle these tasks with a self-supervised approach that operates on stroke space. Through experiments we show that our agent can successfully generate novel characters in all three ways (unconditionally, conditioned on a given alphabet, conditioned on a given character), and parse and reconstruct input characters. For both exemplar generation and type-conditioned novel concept generation, we provide LPIPS (Zhang et al., 2018), L2 and SSIM measures between input samples and generated images. Our contributions in this paper are two-fold: (i) we present a drawing agent that can successfully handle all of the generation and parsing tasks in the Omniglot challenge in a self-supervised, stroke-based manner – such a model did not exist; (ii) we provide for the first time perceptual-similarity-based quantitative benchmarks for the ‘exemplar generation’ and ‘type-conditioned novel concept generation’ tasks.

2 RELATED WORK

The main purpose of this work is to present a self-supervised approach to solve the generation and parsing tasks in the Omniglot Challenge (Lake et al., 2015) by capturing the stroke-level representation of images. Here we initially examine the supervised and self-supervised approaches to the Omniglot challenge. Then, we review the work on image vectorization. Lastly, we touch upon the research on program synthesis in the context of this study.

Omniglot Challenge The Omniglot dataset of world alphabets was released with a set of challenges: parsing a given letter, one-shot classification, generating a new letter given an alphabet, generating a novel sample of a character, and unconditional generation. Omniglot letters have samples that are conditionally independent based on the alphabet-character hierarchy; hence, a distinctive approach to achieve all these tasks is Hierarchical Bayesian modeling (Lake et al., 2015; 2013). As the Omniglot letters include human strokes as labels, the compositional and causal nature of letters is leveraged to model the generation process. Later, neurosymbolic models were also shown to be successful for unconditional generation (Feinman & Lake, 2020a) and conceptual compression for multiple tasks presented within the Omniglot Challenge (Feinman & Lake, 2020b). However, without the stroke set that generated a concept, these tasks become more difficult. The idea of sequential image generation has been examined by recurrent VAE models (Rezende et al., 2016; Gregor et al., 2015; 2016). DRAW (Gregor et al., 2015) and Convolutional DRAW (Gregor et al., 2016) were able to generate quality unconditional samples from the MNIST and Omniglot datasets respectively. DRAW is proposed as an algorithm to generate images recurrently.
The network is able to iteratively generate a given image by attending to certain parts of the input at each time step. Convolutional DRAW improved the idea with an RNN/VAE-based algorithm that can capture the global structure and low-level details of an image separately in order to increase the quality of generations. Later, it was shown that Hierarchical Bayesian Modeling can be improved by the representational power of deep learning and attentional mechanisms in order to achieve three of the five Omniglot challenges (Rezende et al., 2016). Another novel idea leveraging Bayesian modeling to tackle the Omniglot Challenge was to modify the VAE architecture to represent hierarchical datasets (Edwards & Storkey, 2016; Hewitt et al., 2018). The significance of these studies is that they were able to obtain latent variables that describe class-level features effectively. Despite the ability to utilize the same model for different problems (one-shot classification, unconditional and conditional generation), raster-based one-step generative models have two disadvantages we want to address. First, they cannot leverage the higher-level abstraction and quality that come with working in a vector space. Secondly, one-step generation does not provide an interpretable compositional and causal process describing how a character is generated. In this work, we combine the advantages of the two groups of aforementioned models with an agent operating on the stroke representation of images that uses only raster images during training. Thus, we aim to solve all three generative tasks and the parsing (reconstruction) task of the Omniglot challenge. We show that the model trained for reconstruction can also be adopted as a tool that captures the compositional structure of a given character. Without any further training, our agent can solve the exemplar generation and type-conditioned novel concept generation problems.

Image Generation by Vectorization — With Stroke Supervision Sketch-RNN (Ha & Eck, 2017) is the first LSTM/VAE-based sketch generation algorithm. It was later improved to generate multi-class samples (Cao et al., 2019) and to increase the quality of generations by representing strokes as Bezier curves (Song, 2020). The idea of obtaining a generalizable latent space by image-stroke mapping has been studied by many (Aksan et al., 2020; Das et al., 2021; Bhunia et al., 2021; Wang et al., 2020). In CoSE (Aksan et al., 2020), the problem is articulated as ‘completion of a partially drawn sketch’. They achieved state-of-the-art reconstruction performance by utilizing variable-length strokes and a novel relational model that is able to capture the global structure of the sketch. Progress in stroke representation continued with the incorporation of variable-degree Bezier curves (Das et al., 2021) and the capture of the Gestalt structure of partially occluded sketches (Lin et al., 2020).

Self-Supervised Vectorization The self-supervised vector-based image generation problem has been approached by RL-based frameworks (Zhou et al., 2018; Ganin et al., 2018; Mellor et al., 2019; Huang et al., 2019; Schaldenbrand & Oh, 2020; Zou et al., 2020). In SPIRAL (Ganin et al., 2018), unconditional generation and reconstruction tasks are tackled with adversarially trained RL agents. Succeeding research enhanced the reconstruction process with a differentiable renderer, making it possible for agents to operate on a continuous space (Huang et al., 2019; Schaldenbrand & Oh, 2020).
In order to avert the computational expense of RL-based algorithms, end-to-end differentiable models have been developed by altering the rendering process (Nakano, 2019) or formulating the generation process as a parameter search (Zou et al., 2020). More recently, a differentiable renderer and compositor were utilized for generating closed Bezier paths and the final image respectively (Reddy et al., 2021). This method led to successful interpolation, reconstruction, and sampling processes. Most related to our work is SPIRAL, where both reconstruction and unconditional generation are studied through self-supervised deep reinforcement learning. However, our approach has some significant differences. First, in SPIRAL each stroke is also represented as a Bezier curve, yet the starting point of each curve is set as the final point of the previous curve. In our model, all control points of the Bezier curve are predicted by the agent at each time step. Hence, the agent has to learn the continuity and the compositionality of the given character in order to produce quality samples. Secondly, SPIRAL provides a generative model that works through a graphics renderer without addressing the conditional generation problem. They show impressive results on both natural images and handwritten characters. While we provide a solution for multiple generative tasks, we have not explored our model in the context of natural images. Another approach that presents a similar scheme for the reconstruction problem is “Learning to Paint” (Huang et al., 2019), where the proposed model is utilized specifically for reconstruction. For reconstruction, the main difference of our model is that, since we try to model a human-like generation process, our agent outputs a single stroke at each time step, with the environment being altered throughout this process, whereas in Learning to Paint 5 strokes are predicted by the agent at each time step. As a major difference from previous studies, our agent decides whether to stop or keep drawing before generating a stroke. This enables the agent to synthesize an image with as few actions as possible when motivated by our reward formulations.

Self-Supervised Program Synthesis Our method essentially outputs a visual program that depends only on the rastered data. In that sense, studies on Constructive Solid Geometry (CSG) are also related. Different RL frameworks have been proposed for the reconstruction of a given CSG image, which is essentially a composition of geometric shapes (Ellis et al., 2019; Zhou et al., 2020). The former considered parsing as a search problem solved by using a read-eval-print loop within a Markov Decision Process. The latter adopted a Tree-LSTM model to eliminate invalid programs, with the reward taken to be the Chamfer distance between the target image and the current canvas.

3 METHOD

Our model consists of a policy network and a (non-differentiable) renderer. At time step $t$, the policy network takes the current canvas $C_t$ – a raster image – as input and outputs two distributions, $\pi_B$ and $\pi_S$. The first distribution, $\pi_B$, is over stroke (i.e. Bezier curve) parameters, and the second one, $\pi_S$, is over the continue/stop decision. From the first distribution, we randomly sample a stroke defined by its 7 parameters (the x-y coordinates of the start, end, and control points of the quadratic Bezier curve, and a brush width). From the second distribution, we randomly sample a decision.
If the decision happens to be ‘continue’, we add the newly sampled stroke to the current canvas $C_t$, increment time (i.e. $t \leftarrow t + 1$) and restart. If the decision was to ‘stop’, then $C_t$ is returned as the final output. Our model is able to handle parsing and different generation tasks, and the processing pipeline we just described is common to all these tasks. What changes among tasks are the reward functions and/or training procedures, which we explain below.

Unconditional Generation The task of ‘generating new concepts’, as it is dubbed in the Omniglot challenge, is essentially unconditional sampling from a distribution obtained from the whole Omniglot training set. Here, the model is asked to generate completely novel samples (i.e. characters) without any constraints. For this task, at each time step $t$, we calculate an instantaneous reward $r_t$ that has three components:

$$r_t = D(C_t) + \lambda_1\,\mathrm{align}(C_t, I) + \lambda_2\,\mathcal{N}(|C_t|; \mu, \sigma). \quad (1)$$

The first term is a reward based on a discriminator to make generated characters as ‘real’ as possible. $D(\cdot)$ is a discriminator that outputs the “realness” score of its input canvas. We train it in an adversarial manner by using the generated examples as negatives and the elements of the input dataset as positives. The second term is a clustering-based data fidelity reward. The function $\mathrm{align}(C_t, I)$ measures the alignment between the current canvas $C_t$ and another canvas $I$, a cluster center randomly selected at the beginning of each episode. The cluster centers are obtained by applying k-means to all characters in the input dataset. align basically counts the number of intersecting on-pixels (between the two canvases) minus the number of non-intersecting on-pixels in $C_t$, and divides this quantity by the number of on-pixels in $I$ (see the sketch below). The final term assesses the conformity of the current canvas with the dataset in terms of the number of on-pixels: $\mathcal{N}(|C_t|; \mu, \sigma)$ evaluates a normal distribution with parameters $(\mu, \sigma)$ at $|C_t|$, the number of on-pixels in the current canvas. We obtain $(\mu, \sigma)$ by fitting a normal distribution to the on-pixel counts of characters in the training set. We observed that the second and third terms accelerate learning as they guide the exploration within the vicinity of real characters. During training, instead of using the instantaneous reward $r_t$, we use the difference of successive rewards, i.e. $r_t - r_{t-1}$. In order to encourage exploration and avoid mode collapse, we use an entropy penalty term

$$\alpha \max\big(0,\ \mathrm{KL}([\pi_B, \pi_S],\ \mathcal{U}) - \tau\big). \quad (2)$$

Here, KL indicates the KL-divergence and $\mathcal{U}$ is the uniform distribution. This term first measures the divergence between the uniform distribution and $\pi_B$, $\pi_S$, the distributions output by the policy network. Then, through the hinge function, if the divergence exceeds a threshold $\tau$, this term activates and increases the penalty. The policy network and the discriminator $D$ are updated alternately after 256 images are generated at each iteration. We employ the REINFORCE algorithm (Williams, 1992) to update the weights of the policy network. The discriminator is trained using the hinge loss. In order to stabilize the discriminator and keep the Lipschitz constant of the whole network equal to 1, Spectral Normalization is applied at each layer (Miyato et al., 2018). Throughout training, we kept the balance ratio between generated and real samples at 3.
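Since the align and pixel-count terms of Eq. (1) are defined verbally above, a short sketch may help; the following follows those definitions directly, with binary 32x32 canvases and a division guard as our own assumptions.

```python
import numpy as np
from scipy.stats import norm

def align(canvas: np.ndarray, reference: np.ndarray) -> float:
    """align(C_t, I) as described above: on-pixels of `canvas` that intersect
    `reference`, minus on-pixels of `canvas` that do not, normalized by the
    number of on-pixels in `reference`."""
    c, r = canvas > 0, reference > 0
    hits = np.logical_and(c, r).sum()
    misses = np.logical_and(c, ~r).sum()
    return float(hits - misses) / max(int(r.sum()), 1)  # guard is our assumption

def count_conformity(canvas: np.ndarray, mu: float, sigma: float) -> float:
    """Third term of Eq. (1): Gaussian density of the on-pixel count, with
    (mu, sigma) fitted to on-pixel counts of the training set."""
    return float(norm.pdf((canvas > 0).sum(), loc=mu, scale=sigma))
```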
Image Reconstruction by Parsing In the “parsing” task, the goal is to reconstruct the given input image by re-drawing it through strokes as accurately as possible. To this end, we formulate a new reward function with two terms: a fidelity reward that indicates how much of a stroke is consistent with the input image (using the “align” function introduced above) and a penalty that grows with the time step $t$, i.e., with every ‘continue’ decision made:

$$r_t = \mathrm{align}(S_t, C_t) - \lambda_1 t, \quad (3)$$

where $S_t$ is the newly sampled stroke and $C_t$ is the current canvas (input). The second term simply acts as a penalty for every ‘continue’ action. The first term ensures the sampled stroke is well-aligned with the input, and the second term forces the model to use as few strokes as possible. There is no need for a discriminator. This model explicitly learns the vectorization of the input raster image in a self-supervised manner. Apart from the different reward function, another crucial difference between the training of the unconditional generation model and the parsing model is how the input and output are handled. In unconditional generation, the newly sampled stroke is added to the current canvas, whereas in parsing we do the opposite: the sampled stroke is removed (masked out) from the current canvas, and the returned final canvas is the combination of all strokes sampled until the ‘stop’ decision. $\lambda$, $\alpha$ and $\tau$ in Equations 1, 2, and 3 are hyperparameters adjusted experimentally (see ‘Training Details’ in Appendix B).

Generating New Exemplars In this task, a model is required to generate a new exemplar (i.e. a variation) of an unseen concept (i.e. character). To the best of our knowledge, we are the first to tackle this task in a self-supervised, stroke-based setting. Most importantly, we do not require any further training to achieve this task. We utilize our parsing network described in the previous section to capture the overall structure of a given letter. In order to produce new exemplars, we randomly sample different parsings (sets of strokes) from the distribution generated by the agent. In order to eliminate ‘unlikely’ samples, we compute the likelihood of the parsing given the resulting policy and apply a threshold.

Generating Novel Concepts from Type In this task, the goal is to generate a novel concept (i.e. character) given a previously unseen type (i.e. alphabet) consisting of 10 concepts. The novel concepts should conform to the overall structure, that is, the stroke formulation and composition of the given type (alphabet). We again tackle this challenge using our parsing network without any further training. To do so, we first parse all input images into their strokes. For each input image, we sample five stroke sets from the stroke-parameter distribution output by the policy network. During the sampling process, we again use the likelihood-based quality function described in the previous section. We add all the strokes sampled during this process to form a stroke library, in which strokes are stored with the time steps at which they were generated. Noting that the number of strokes sampled for a given character is not constant, we approximate a distribution for the stopping actions. This process provides a stroke set representing the structure of the letters and the way they are composed; that is, we can exploit the compositionality and causality of an alphabet. Throughout the character generation process, a stroke is sampled at each time step from the library group belonging to that time step. The sampled strokes are summed together to obtain the final canvas.
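The time-indexed stroke library just described is straightforward to sketch. In the code below, `policy.parse` and `render` are stand-ins for the parsing agent's sampling procedure and the non-differentiable renderer, and the likelihood filtering is assumed to happen inside the parse call.

```python
import random
from collections import defaultdict

def build_stroke_library(policy, inputs, samples_per_input=5):
    """Build the time-indexed stroke library for type-conditioned generation.
    `policy.parse` stands in for sampling one likelihood-filtered parse
    (a stroke sequence) from the parsing agent."""
    library = defaultdict(list)  # time step -> strokes drawn at that step
    lengths = []                 # parse lengths; empirical stop distribution
    for image in inputs:         # the 10 characters of the unseen alphabet
        for _ in range(samples_per_input):
            strokes = policy.parse(image)
            lengths.append(len(strokes))
            for t, stroke in enumerate(strokes):
                library[t].append(stroke)
    return library, lengths

def sample_concept(library, lengths, render):
    """Generate a novel character: sample a length from the empirical stop
    distribution, then one stroke per time step from that step's group;
    `render` stands in for the renderer summing strokes onto a canvas."""
    n = random.choice(lengths)
    strokes = [random.choice(library[t]) for t in range(n) if library[t]]
    return render(strokes)
```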
4 EXPERIMENTS

Datasets and Implementation Details We report generation and reconstruction (parsing) results on the Omniglot dataset (Lake et al., 2015), which includes 1623 characters from 50 different alphabets, with 20 samples for each character. 30 alphabets are used for training and the remaining 20 are used for evaluation. For unconditional generation and reconstruction, we also report results on the MNIST dataset (LeCun, 1998). For both datasets, we rescale input images to 32x32 in order for them to conform with our model. Our policy network is composed of a ResNet feature-extraction backbone and three MLP branches for computing the distributions over the action space. Architectural details can be found in Appendix A. For the Omniglot dataset, we take the brush width as a constant and omit the corresponding MLP branch. We tune the learning rate and weight decay of the generator, the λ hyperparameters in Equations 1 and 3, and the α and τ hyperparameters in Equation 2, using the Tree-structured Parzen Estimator algorithm (Bergstra et al., 2011) in the RayTune library (Liaw et al., 2018). For unconditional generation, we use the discriminator architecture proposed by Miyato et al. (2018). In order to stabilize the discriminator and keep the Lipschitz constant of the whole network equal to 1, Spectral Normalization is applied at each layer. The discriminator is trained using the hinge loss. Throughout training, we set the balance ratio between fake and real samples to 3. We performed hard-negative mining to speed up convergence during this process.

4.1 UNCONDITIONAL GENERATION

We initially tested our approach on the MNIST dataset. Figure 3 presents the improvement in the quality of samples generated throughout the policy network updates. At the beginning, generated characters are mostly random scribbles. Towards the end, they start to look like real digits. Table 1 shows that our method achieves an acceptable FID score (Heusel et al., 2017) given the scores of other prominent GAN and VAE methods. The presented FID values are taken from Lucic et al. (2017). Figure 4 shows sample generations for the Omniglot dataset. To demonstrate that our generations are not duplicates of characters in the training set, we present the four most similar training-set characters to each of our generations, where similarity is computed using pixelwise L2 distance. Finally, Figure 5 presents more generated characters, which demonstrate the variability and quality of the generated concepts. The agent was able to capture the types of strokes, the number of strokes a character has, and letter structures without any stroke supervision.

4.2 IMAGE RECONSTRUCTION BY PARSING

Figure 6 presents sample parsing and reconstruction results on MNIST. Our agent can reconstruct a character from the test set in a minimal number of actions, within the abilities of quadratic Bezier curves. The selected brush widths also conform with the stroke heterogeneity of the dataset. Then, we train our model with the characters in the Omniglot training set. For evaluation, we utilize the evaluation set with completely novel characters from unseen alphabets. Thereby, we can see that our agent has learned how to parse a given character. Due to the penalty term that increases with the number of strokes, there is a trade-off between replicating a character exactly and replicating it in a small number of actions. This indirectly demotivates the agent from retouching the image with small strokes to minimize the difference to the target.
Results in Figure 7 show that the overall structure of the target images is preserved; however, small details are lacking in some of the examples. This is reflected in the distance measures (Table 2).

4.3 GENERATING NEW EXEMPLARS

For this task, we use the evaluation set of the Omniglot dataset. For each character in the test set, we sample 500 different parses from the policy. In Figure 8, it can be observed that given an unseen letter from a novel alphabet, our agent can sample from the resulting distribution and output quality variations. The major indications of variation are the structures of the strokes, the number of actions used to generate a sample, and the fine details of certain characters. We compare each produced character with its corresponding input image using LPIPS, SSIM and L2 distance values. The mean and standard deviation of these values for the whole evaluation set are 0.078 ± 0.002, 0.616 ± 0.018 and 0.08 ± 0.016, respectively. Results per alphabet can be found in Appendix C.3.

4.4 GENERATING NOVEL CONCEPTS FROM TYPE

In order to generate a concept that is likely to belong to a given alphabet, we again leverage our reconstruction model. Given 10 different characters of an unseen alphabet, we are able to generate novel images with similar structural features. The results presented in Figure 9 show that our algorithm can model the compositional pattern of an alphabet in stroke space. In order to obtain quantitative results (e.g. LPIPS, L2 and SSIM), we produce 10000 images conditioned on each input set and randomly sample characters by utilizing the discriminator trained for the unconditional generation model, assuming it has learned which features of a given input imply a real character. We generate a sampling distribution according to the discriminator scores of the generated samples and repeat the sampling process multiple times for each input to obtain a set of outputs to be considered. For each generated sample, we calculate performance metrics with respect to all characters in the input. In order to report the final metrics presented in supplemental Figures 12a and 12b, we consider the most similar input-output pairs. The mean and standard deviation of the LPIPS, SSIM and L2 values for the whole evaluation set are 0.0801 ± 0.003, 0.502 ± 0.068 and 0.1263 ± 0.00086, respectively.

5 CONCLUSION

We proposed a self-supervised reinforcement learning approach for stroke-based image generation. We trained our model for unconditional generation and parsing on handwritten character datasets by defining a single action space and environment. Through experiments, we showed that, given the whole training set, our agent is able to capture the overall distribution and generate quality novel samples for the challenging Omniglot dataset. Then, we trained our agent for the parsing task: given a raster image, the goal is to reconstruct it through as few strokes as possible. We demonstrated that the parsing agent can be utilized for generating exemplars of a concept and creating novel samples conditioned on a type without any further training, the only difference being how it is invoked for each task. To the best of our knowledge, we are the first to tackle these tasks with a self-supervised approach that operates on a stroke level. In this work, we used quadratic Bezier curves as the smallest unit of sketching. However, for human-level generations, the stroke representations should be enhanced to capture more complex structures. We anticipate that this will improve the overall performance.
A NETWORK ARCHITECTURE

The backbone is a ResNet with 3 convolutional layers and 8 residual layers. The first convolutional layer has 32 filters of size 5x5. The second and third convolutional layers have 32 filters of size 4x4 and a stride of 2, resulting in a tensor of dimensions 8x8x32. Then, we use the standard residual layers described in (He et al., 2016). Each convolutional layer is followed by Batch Normalization and a ReLU activation. The output of the final residual layer is flattened to a 2048x1 vector to be processed by the MLPs. The first MLP outputs a set of distributions for each control point of the Bezier curve. It has 1 fully connected layer that outputs a 192x1 vector. This vector is reshaped to a 32x6 matrix where each 32x1 vector defines a distribution over the possible coordinates. The MLPs used for selecting the brush width and sampling the stop/continue decision consist of 2 layers with 64 and 2 neurons. (See the sketch at the end of this appendix.)

B TRAINING DETAILS

The hyperparameters used for unconditional generation and reconstruction are presented in Table 3 and Table 4, respectively.

C EXPERIMENTS: SUPPLEMENTAL FIGURES

C.1 UNCONDITIONAL GENERATION

In Figure 10, we present the FID values of the generated images over the course of training on the Omniglot dataset.

C.2 PARSING

In Table 5, we present the mean number of strokes our agent used to parse the characters of each alphabet in the test set.

C.3 EXEMPLAR GENERATION

In Figure 11a, we demonstrate LPIPS metrics calculated using 3 different backbones (AlexNet, VGG, and SqueezeNet). In Figure 11b, we present L2 and SSIM values. These metrics are calculated over all examples generated for the test set.

C.4 GENERATING NOVEL CONCEPTS FROM TYPE

In Figure 12a, we demonstrate LPIPS metrics calculated using 3 different backbones (AlexNet, VGG, and SqueezeNet). In Figure 12b, we present L2 and SSIM values.
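As a companion to Appendix A, here is a minimal PyTorch sketch of the policy network; the padding choices, residual-block internals, and the orientation of the 32x6 reshape are our assumptions where the text leaves them unspecified — this is a sketch, not the authors' code.

```python
import torch
import torch.nn as nn

class Residual(nn.Module):
    """Standard residual block (He et al., 2016); internals are our assumption."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            *[Residual(32) for _ in range(8)],
            nn.Flatten())                        # 32x32 input -> 8x8x32 -> 2048
        self.bezier = nn.Linear(2048, 192)       # reshaped to 32x6: 32 bins per coordinate
        self.brush = nn.Sequential(nn.Linear(2048, 64), nn.ReLU(), nn.Linear(64, 2))
        self.stop = nn.Sequential(nn.Linear(2048, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, canvas):                   # canvas: (B, 1, 32, 32)
        h = self.backbone(canvas)
        pi_B = self.bezier(h).view(-1, 6, 32).softmax(dim=-1)  # stroke coordinates
        pi_brush = self.brush(h).softmax(dim=-1)  # brush width (constant for Omniglot)
        pi_S = self.stop(h).softmax(dim=-1)       # continue/stop decision
        return pi_B, pi_brush, pi_S
```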
1. What is the focus and contribution of the paper on character generation and parsing?
2. What are the strengths of the proposed approach, particularly in its self-supervised reinforcement learning methodology?
3. What are the weaknesses of the paper regarding its modeling and experimental design?
4. Do you have any concerns about the training process and the choice of rewards?
5. How does the reviewer assess the significance and novelty of the proposed method compared to prior works?
Summary Of The Paper
Review
Summary Of The Paper
The paper proposes a self-supervised reinforcement learning approach to train a drawing agent for character generation and parsing. The drawing agent operates on the stroke-level (i.e. vector) representation of images. Unlike a one-step generative model, the proposed method can capture the compositional structure of a given character. It is interesting to tackle these tasks with a self-supervised, reinforcement-learning approach that operates in stroke space.

Review
[Paper weaknesses]
Modeling:
- The rewards are designed based on a discriminator. As we know, generative adversarial networks are not easy to train, since the generative and discriminative networks are trained alternately. In the proposed method, the policy network and the discriminator are likewise trained alternately, so I doubt the model is easy to train. I would like to see the training curves of the reward values.
- The detailed alignment function used in Eq. (1) and Eq. (3) needs to be provided.

Experiment:
- The results are not satisfying. In the experiments, the generation quality of the proposed method is not as good as that of traditional generative networks in terms of FID. In the image parsing part, the results are far behind the compared methods.
- Since the results are not comparable to those of existing methods, the significance of the proposed method seems limited.
To this end, we formulate a new reward function with two terms: a fidelity reward that indicates how much of a stroke is consistent with the input image (using the “align” function introduced above) and a penalty that is added with every time increment represented by t as ‘continue’ decisions being made: rt = align(St,Ct)− λ1t, (3) where St is the newly sampled stroke and Ct is the current canvas (input). Second term simply acts as a penalty for every ‘continue’ action. The first term ensures the sampled stroke to be well-aligned with the input and the second term forces the model to use as few strokes as possible. There is no need for a discriminator. This model explicitly learns the vectorization of the input raster-image in a self-supervised manner. Apart from the different reward function, another crucial difference between the training of the unconditional generation model and the parsing model is how the input and output are handled. In unconditional generation, the newly-sampled stroke is added to the current canvas, whereas in parsing, we do the opposite: the sampled stroke is removed (masked out) from the current canvas, and the returned final canvas is the combination of all sampled strokes until the ‘stop’ decision. λ, α and τ in Equations 1, 2, and 3 are hyperparameters adjusted experimentally. (see ‘Training Details’ in Appendix B ). Generating New Exemplars In this task, a model is required to generate a new exemplar (i.e. a variation) of an unseen concept (i.e. character). To the best of our knowledge, we are the first to tackle this task in a self-supervised stroke-based setting. Most importantly, we do not require any training to achieve this task. We utilize our parsing network described in the previous section to capture the overall structure of a given letter. In order to produce new exemplars, we randomly sample different parsings (a set of strokes) from the distribution generated by the agent. In order to eliminate ‘unlikely’ samples, we compute the likelihood of the parsing given the resulting policy, and apply a threshold. Generating Novel Concepts from Type In this task, the goal is to to generate a novel concept (i.e. character) given a previously unseen type (i.e. alphabet) consisting of 10 concepts. The novel concepts should conform to the overall structure, that is, the stroke formulation and composition of the given type (alphabet). We, again, tackle this challenge using our parsing network without any further training. To do so, we first parse all input images into its strokes. For each input image, we sample five stroke sets from the stroke-parameters distribution output by the policy network. During the sampling process, we again use the likelihood-based quality function described in the previous section. We add all the strokes sampled during this process to form a stroke library. Here the strokes are stored with the time steps they are generated. Noting that the number of strokes sampled for a given character is not constant, we approximate a distribution for stopping actions. This process provides a stroke set representing the structure of letters and the way they are composed, that is, we can exploit the compositionality and causality of an alphabet. Throughout the character generation process, a stroke is sampled at each time step belonging to that particular group of the library. The sampled strokes are summed together to obtain the final canvas. 
4 EXPERIMENTS Datasets and Implementation Details We report generation and reconstruction (parsing) results on the Omniglot dataset (Lake et al., 2015), which includes 1623 characters from 50 different alphabets, with 20 samples for each character. 30 alphabets are used for training and the remaining 20 are used for evaluation. For unconditional generation and reconstruction, we also report results on the MNIST dataset (LeCun, 1998). For both datasets, we rescale input images to 32x32 in order for them to conform with our model. Our policy network is composed of a ResNet feature extraction backbone and three MLP branches for computing the distributions over the action space. Architectural details can be found in Appendix A. For the Omniglot dataset, we take brush width as a constant and omit the corresponding MLP branch. We tune the learning rate and weight decay of the generator, λ hyperparameters in Equation 1 and Equation 3, α and τ hyperparameters in Equation 2, using the Tree-structured Parzen Estimator algorithm (Bergstra et al., 2011) in the RayTune library (Liaw et al., 2018). For unconditional generation, we use the discriminator architecture proposed by Miyato et al. (2018). In order to stabilize the discriminator and keep the Lipschitz constant for the whole network equal to 1, Spectral Normalization is applied at each layer. Discriminator is trained using the hinge loss. Throughout the training, we set the balance ratio between fake and real samples as 3. We performed hard-negative mining to speed up convergence during this process. 4.1 UNCONDITIONAL GENERATION We initially tested our approach on the MNIST dataset. Figure 3 presents the improvement in the quality of samples generated throughout the policy network updates. At the beginning, generated characters are mostly random scribbles. Towards the end, they start to look like real digits. Table 1 shows that our method achieves an acceptable FID score (Heusel et al., 2017) given the scores of other prominent GAN and VAE methods. Presented FID values are taken from Lucic et al. (2017). Figure 4 shows sample generations for the Omniglot dataset. To demonstrate that our generations are not duplicates of the characters in the training set, we present the four most similar characters from the training set to our generations. Similarity is computed using pixelwise L2 distance. Finally, Figure 5 presents more generated characters, which demonstrate the variability and the quality of generated concepts. The agent was able to capture the type of strokes, number of strokes a character has and letter structures without any stroke supervision. 4.2 IMAGE RECONSTRUCTION BY PARSING Figure 6 presents sample parsing and reconstruction results on MNIST. Our agent can reconstruct a character from the test set in a minimal number of actions within the abilities of quadratic Bezier curves. Selected brush widths also conform with the stroke heterogeneity of the dataset. Then, we train our model with the characters in the Omniglot training set. For evaluation, we utilize the evaluation set with completely novel characters from unseen alphabets. Thereby, we can see that our agent has learned how to parse a given character. Due to the penalty term that increases with the number of strokes, there is a tradeoff for the agent to replicate a character exactly and replicate it in a small number of actions. This indirectly demotivates the agent from retouching the image with small strokes to minimize the difference to the target. 
Results in Figure 7 show that overall structure of the target images are preserved, however, small details are lacking in some of the examples. This reflects on the distance measures (Table 2). 4.3 GENERATING NEW EXEMPLARS For this task, we use the evaluation set of the Omniglot dataset. For each character in the test set, we sample 500 different parses from the policy. In Figure 8, it can be observed that given an unseen letter from a novel alphabet, our agent can sample from the resulting distribution, and output quality variations. The major indications of variation are structures of the strokes, number of actions to generate a sample and the fine details of certain characters. We compare each produced character with its corresponding input image using LPIPS, SSIM and L2 distance values. The mean and standard deviation of these values for the whole evaluation set are 0.078± 0.002, 0.616± 0.018 and 0.08± 0.016, respectively. Results per alphabet can be found in Appendix C.3. 4.4 GENERATING NOVEL CONCEPTS FROM TYPE In order to generate a concept that is likely to belong in a given alphabet, we again leverage our reconstruction model. Given 10 different characters of an unseen alphabet, we are able to generate novel images with similar structural features. Results presented in Figure 9 show that our algorithm can model the compositional pattern of an alphabet in stroke space. In order to obtain quantitative results, (e.g. LPIPS, L2 and SSIM), we produce 10000 images conditioned on each input set and randomly sample characters by utilizing the discriminator trained for the unconditional generation model, assuming it has learned what features of a given input imply a real character. We generate a sampling distribution according to the discriminator scores of generated samples and repeat the sampling process multiple times for each input to obtain a set of outputs to be considered. For a sample generated, we calculate performance metrics with respect to all characters in the input. In order to report final metrics presented in supplemental figures 12b and 12a, we consider the most similar input-output pairs. The mean and standard deviation of LPIPS, SSIM and L2 values for the whole evaluation set are 0.0801± 0.003, 0.502± 0.068 and 0.1263± 0.00086 respectively. 5 CONCLUSION We proposed a self-supervised reinforcement learning approach for stroke based image generation. We trained our model for unconditional generation and parsing on handwritten character datasets by defining a single action space and environment. Through experiments, we showed that, given the whole training set, our agent is able to capture the overall distribution and generate quality novel samples for the challenging Omniglot dataset. Then, we trained our agent for the parsing task; given a raster image, the goal is to reconstruct it through as few strokes as possible. We demonstrated that the parsing agent can be utilized for generating exemplars of a concept and creating novel samples conditioned on a type, without any further training, only difference being how it is called among tasks. To the best of our knowledge, we are the first to tackle these tasks with a self-supervised approach that operates on a stroke level. In this work, we used quadratic Bezier curves as the smallest unit of sketching. However, for human-level generations, the stroke representations should be enhanced to capture more complex structures. We anticipate that this will improve the overall performance. 
A NETWORK ARCHITECTURE The backbone is a ResNet with 3 convolutional layers and 8 residual layers. The first convolutional layer has 32 filters of size 5x5. The second and third convolutional layers have 32 filters of size 4x4 and stride of 2, resulting in a tensor with dimensions 8x8x32. Then, we use standard residual layers described in (He et al., 2016). Each convolutional layer is followed by a Batch Normalization process and ReLU activation. The output of the final residual layer is flattened to a 2048x1 vector to be processed by the MLPs. The first MLP outputs a set of distributions for each control point of the Bezier curve. It has 1 fully connected layer that outputs a 192x1 vector. This vector is reshaped to a 32x6 matrix where each 32x1 vector defines a distribution over the possible coordinates. The MLPs used for selecting the brush width and sampling the stop/continue decision consist of 2 layers with 64 and 2 neurons. B TRAINING DETAILS The hyperparamaters used for unconditional generation and reconstruction are presented in Table 3 and 4, respectively. C EXPERIMENTS: SUPPLEMENTAL FIGURES C.1 UNCONDITIONAL GENERATION In Figure 10, we present the FID values for the generated images along the training on Omniglot dataset. C.2 PARSING In Table 5, we present the mean number of strokes our agent used to parse the characters for each alphabet in the test set. C.3 EXEMPLAR GENERATION In Figure 11a, we demonstrate LPIPS metrics calculated by using 3 different backbones (AlexNet, VGG, and SqueezeNet). In Figure 11b, we present L2 and SSIM values. These metrics are calculated over all examples generated for the test set. C.4 GENERATING NOVEL CONCEPTS FROM TYPE In Figure 12a, we demonstrate LPIPS metrics calculated by using 3 different backbones (AlexNet, VGG, and SqueezeNet). In Figure 12b, we present L2 and SSIM values.
1. What is the focus of the paper regarding character generation?
2. What are the strengths of the proposed approach, particularly in leveraging high-level image information?
3. What are the weaknesses of the paper, especially regarding experiment comparisons and limitations?
4. Do you have any concerns about the method's ability to incorporate higher-level abstractions?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
In this paper, the authors present a method for character generation using self-supervised technology. Different from existing approaches, it can leverage the benefits of higher-level abstraction (due to the use of strokes) and get rid of stroke supervision. In this way, high-quality images are generated while the supervised training data requirements are relieved. Although some comparisons are made on the Omniglot dataset, the advantages of this method are not very clear.
Review
In general, I think this submission is OK due to the following points: The problem that this submission aims to solve, namely how to introduce high-level image information without the data limitation, is important. The proposed method is simple and easy to implement. The writing is good and I can follow it easily.
However, I have the following concerns, mainly about the experiment part: I can understand that using stroke information in the neural network is better than the pixel-based alternatives, but the authors do not demonstrate this advantage empirically. For example, I think there should be some experiments comparing the pixel-based methods and the stroke-based methods (at least the proposed one). Incorporating a higher-level abstraction of image information into the deep framework is widely studied, as also mentioned by the authors in the related work part. So I wonder what the improvement is between the developed method and those methods (like DRAW). Specifically, I am really curious about the difference when comparing with DRAW, which also uses a single image as training data. Additionally, the paper states that DRAW cannot provide an interpretable compositional and causal process describing how a character is generated, which is not true. I am confused about the functionality of each term in Eq. (1). Can the authors conduct some ablation studies to support the role of the last two terms, e.g., why using the align operation can accelerate the training stage? In terms of the baseline comparisons, the chosen algorithms are old (all before 2018). Even so, the proposed method does not enjoy a quantitative evaluation merit in Tables 1-2. After reading the submission, I find the technical contribution minor. Except for the two terms for speeding up the training, I cannot see any specific strategy (insights) to tackle the self-supervised image generation problem. Can the authors explain a bit here?
ICLR
Title
Character Generation through Self-Supervised Vectorization
Abstract
The prevalent approach in self-supervised image generation is to operate on pixel-level representations. While this approach can produce high quality images, it cannot benefit from the simplicity and innate quality of vectorization. Here we present a drawing agent that operates on stroke-level representations of images. At each time step, the agent first assesses the current canvas and decides whether to stop or keep drawing. When a 'draw' decision is made, the agent outputs a program indicating the stroke to be drawn. As a result, it produces a final raster image by drawing the strokes on a canvas, using a minimal number of strokes and dynamically deciding when to stop. We train our agent through reinforcement learning on the MNIST and Omniglot datasets for unconditional generation and parsing (reconstruction) tasks. We utilize our parsing agent for exemplar generation and type-conditioned concept generation in the Omniglot challenge without any further training. We present successful results on all three generation tasks and the parsing task. Crucially, we do not need any stroke-level or vector supervision; we only use raster images for training. Code will be made available upon acceptance.
1 INTRODUCTION
While, innately, humans sketch or write through strokes, this type of visual depiction is a more difficult task for machines. Image generation problems are typically addressed by raster-based algorithms. The introduction of generative adversarial networks (GAN) (Goodfellow et al., 2014), variational autoencoders (VAE) (Kingma & Welling, 2013) and autoregressive models (Van Oord et al., 2016) has led to a variety of applications. Style transfer (Gatys et al., 2015; Isola et al., 2017), photo-realistic image generation (Brock et al., 2018; Karras et al., 2019), and super-resolution (Ledig et al., 2017; Bin et al., 2017) are some of the significant instances of the advancing field. Additionally, Hierarchical Bayesian models formulated by deep neural networks are able to use the same generative model for multiple tasks such as classification, conditional and unconditional generation (Hewitt et al., 2018; Edwards & Storkey, 2016). These raster-based algorithms can produce high quality images, yet they cannot benefit from the leverage that higher-level abstractions bring about. Vector-level image representation intrinsically prevents models from generating blurry samples and allows for compositional image generation, which may eventually contribute to our understanding of how humans create or replicate images (Lake et al., 2017). This idea, with the introduction of sketch-based datasets such as Omniglot (Lake et al., 2012), Sketchy (Sangkloy et al., 2016), and QuickDraw (Ha & Eck, 2017), has triggered a significant body of work in recent years. Stroke-based image generation and parsing have been addressed with both vector-supervised models and self-supervised generation. Of these, one prominent algorithm is Bayesian Program Learning (Lake et al., 2015), where a single model can be utilized for 5 tasks in the Omniglot challenge: (i) parsing, (ii) unconditional generation, (iii) generating exemplars of a given concept, (iv) generating novel concepts of a type, and (v) one-shot classification.
This approach is also shown to be scalable when supported by the representative capabilities of neural networks (Feinman & Lake, 2020a;b); however, it requires stroke-level or vector supervision, which is costly to obtain or simply nonexistent. VAE/RNN (Ha & Eck, 2017; Cao et al., 2019; Chen et al., 2017; Aksan et al., 2020) and Transformer-based models (Ribeiro et al., 2020; Lin et al., 2020) are other common methods applied to vector-based image generation. Although impressive results have been presented, stroke-level supervision is required to train these models.
Figure 1: Our drawing agent can accomplish four different tasks. From left to right: it can generate novel characters, parse a given character into its strokes, generate new exemplars for a given character, and generate novel concepts (i.e. characters) given a type (i.e. alphabet). Ours is the first stroke-based method to tackle all of the generation and parsing tasks in the Omniglot Challenge, without requiring any stroke-level supervision.
Recently, self-supervised (i.e. in the absence of stroke-level supervision) stroke-based image generation has been addressed with Reinforcement Learning (RL) (Ganin et al., 2018; Mellor et al., 2019; Huang et al., 2019; Schaldenbrand & Oh, 2020). We call this approach self-supervised vectorization, since the vectorization of images is learned using only raster images as supervision. These methods mostly focus on image reconstruction and their exploration in generation is limited. For example, none of them address the conditional generation problem, or they need the number of actions (i.e. strokes) as input. In this paper, we propose a self-supervised reinforcement learning approach where we train a drawing agent for character generation and parsing. Our drawing agent operates on the stroke-level (i.e. vector) representation of images. At each time step, our agent takes the current canvas as input and dynamically decides whether to continue drawing or stop. When a 'continue' decision is made, the agent outputs a program specifying the stroke to be drawn. A non-differentiable renderer takes this program and draws it on the current canvas. Consequently, a raster image is produced stroke-by-stroke. We first train this agent for two tasks by formulating appropriate loss functions: (i) unconditional character generation and (ii) parsing. Unconditional character generation is the task of generating a novel concept1 (i.e. character) given a dataset of concepts. For this task, our loss function includes the following components: an adversarial loss produced by a discriminator to make generated characters as "real" as possible, and two data fidelity losses assessing the conformity of the current canvas with the statistical properties of the overall dataset. We also use an additional entropy loss to prevent mode collapse. In the parsing task, the goal for our agent is to reconstruct a given character (in raster-image form) by drawing it through strokes using as few of them as possible. We utilize the same action space and environment as in the unconditional generation model, the only difference being that the input fed to the policy is a complete canvas to be reconstructed. Our reward function in this task has two components: a fidelity reward that indicates how much of a stroke is consistent with the target image and a penalty that increases with every 'continue' action taken. This model explicitly learns the vectorization of the input raster-image in a self-supervised manner.
Next, we show that our parsing model can be exploited for exemplar generation (i.e. a novel drawing of a given character) and novel concept generation from type (i.e. novel character generation given an alphabet of 10 characters) without any further training. Given a character, the policy network of our parsing model outputs a distribution over the action space where the likelihood of actions at each time step eventually allows us to generate variations of the input image. For novel concept generation conditioned on a type (i.e. alphabet), we compose a stroke library by parsing the provided inputs. As we sample strokes from this library, we observe novel samples forming, in coherence with the overall structure of the alphabet. To the best of our knowledge, we are the first to tackle these tasks with a self-supervised approach that operates on stroke space. Through experiments we show that our agent can successfully generate novel characters in all three ways (unconditionally, conditioned on a given alphabet, conditioned on a given character), and parse and reconstruct input characters. For both exemplar generation and type-conditioned novel concept generation, we provide LPIPS (Zhang et al., 2018), L2 and SSIM measures between input samples and generated images. Our contributions in this paper are two-fold: (i) we present a drawing agent that can successfully handle all of the generation and parsing tasks in the Omniglot challenge in a self-supervised, stroke-based manner (such a model did not exist); (ii) we provide for the first time perceptual-similarity-based quantitative benchmarks for the 'exemplar generation' and 'type conditioned novel concept generation' tasks.
1 Omniglot challenge terminology.
2 RELATED WORK
The main purpose of this work is to present a self-supervised approach in order to solve the generation and parsing tasks in the Omniglot Challenge (Lake et al., 2015), by capturing the stroke-level representation of images. Here we initially examine the supervised and self-supervised approaches to the Omniglot challenge. Then, we review the work on image vectorization. Lastly, we touch upon the research on program synthesis in the context of this study.
Omniglot Challenge
The Omniglot dataset of world alphabets was released with a set of challenges: parsing a given letter, one-shot classification, generating a new letter given an alphabet, generating a novel sample of a character, and unconditional generation. Omniglot letters have samples that are conditionally independent based on the alphabet-character hierarchy; hence, a distinctive approach to achieve all these tasks is Hierarchical Bayesian modeling (Lake et al., 2015; Lake et al., 2013). As the Omniglot letters included human strokes as labels, the compositional and causal nature of letters is leveraged to model the generation process. Later, neurosymbolic models were also shown to be successful for unconditional generation (Feinman & Lake, 2020a) and conceptual compression for multiple tasks presented within the Omniglot Challenge (Feinman & Lake, 2020b). However, without the stroke set that generated a concept, these tasks become more difficult. The idea of sequential image generation is examined by recurrent VAE models (Rezende et al., 2016; Gregor et al., 2015; 2016). DRAW (Gregor et al., 2015) and Convolutional DRAW (Gregor et al., 2016) were able to generate quality unconditional samples from the MNIST and Omniglot datasets, respectively. DRAW is proposed as an algorithm to generate images recurrently.
The network is able to iteratively generate a given image by attending to certain parts of the input at each time step. Convolutional DRAW improved the idea with an RNN/VAE based algorithm that can capture the global structure and low-level details of an image separately in order to increase the quality of generations. Later, it was shown that Hierarchical Bayesian Modeling can be improved by the representational power of deep learning and attentional mechanisms in order to achieve three of the five Omniglot challenges (Rezende et al., 2016). Another novel idea to leverage Bayesian modeling to tackle the Omniglot Challenge was performing modifications on the VAE architecture to represent hierarchical datasets (Edwards & Storkey, 2016; Hewitt et al., 2018). The significance of these studies is that they were able to obtain latent variables that describe class-level features effectively. Despite the ability to utilize the same model for different problems (one-shot classification, unconditional and conditional generation), raster-based one-step generative models have two disadvantages we want to address. First, they cannot leverage the higher-level abstraction and quality that come with working in a vector space. Secondly, one-step generation does not provide an interpretable compositional and causal process describing how a character is generated. In this work, we combine the advantages of the two groups of aforementioned models with an agent operating on the stroke representation of images that uses only raster images during training. Thus, we aim to solve all three generative tasks and the parsing (reconstruction) task of the Omniglot challenge. We show that the model trained for reconstruction can also be adopted as a tool that captures the compositional structure of a given character. Without any further training, our agent can solve the exemplar generation and type-conditioned novel concept generation problems.
Image Generation by Vectorization with Stroke Supervision
Sketch-RNN (Ha & Eck, 2017) is the first LSTM/VAE based sketch generation algorithm. It was later improved to generate multiclass samples (Cao et al., 2019) and to increase the quality of generations by representing strokes as Bezier curves (Song, 2020). The idea of obtaining a generalizable latent space by image-stroke mapping has been studied by many (Aksan et al., 2020; Das et al., 2021; Bhunia et al., 2021; Wang et al., 2020). In CoSE (Aksan et al., 2020), the problem is articulated as 'completion of a partially drawn sketch'. They achieved state-of-the-art reconstruction performance by utilizing variable-length strokes and a novel relational model that is able to capture the global structure of the sketch. The progress in stroke representation has continued with the incorporation of variable-degree Bezier curves (Das et al., 2021), and capturing the Gestalt structure of partially occluded sketches (Lin et al., 2020).
Self-Supervised Vectorization
The self-supervised vector-based image generation problem has been approached by RL-based frameworks (Zhou et al., 2018; Ganin et al., 2018; Mellor et al., 2019; Huang et al., 2019; Schaldenbrand & Oh, 2020; Zou et al., 2020). In SPIRAL (Ganin et al., 2018), the unconditional generation and reconstruction tasks are tackled with adversarially trained RL agents. Succeeding research enhanced the reconstruction process with a differentiable renderer, making it possible for agents to operate on a continuous space (Huang et al., 2019; Schaldenbrand & Oh, 2020).
In order to avert the computational expense of RL-based algorithms, end-to-end differentiable models have been developed, either by altering the rendering process (Nakano, 2019) or by formulating the generation process as a parameter search (Zou et al., 2020). More recently, a differentiable renderer and compositor were utilized for generating closed Bezier paths and the final image, respectively (Reddy et al., 2021). This method led to successful interpolation, reconstruction, and sampling processes. Most related to our work is SPIRAL, where both reconstruction and unconditional generation are studied through self-supervised deep reinforcement learning. However, our approach has some significant differences. First, in SPIRAL each stroke is also represented as a Bezier curve, yet the starting point of each curve is set as the final point of the previous curve. In our model, all control points of the Bezier curve are predicted by the agent at each time step. Hence, the agent has to learn the continuity and the compositionality of the given character in order to produce quality samples. Secondly, SPIRAL provides a generative model that works through a graphics renderer without addressing the conditional generation problem. They show impressive results on both natural images and handwritten characters. While we provide a solution for multiple generative tasks, we have not explored our model in the context of natural images. Another approach that presents a similar scheme for the reconstruction problem is "Learning to Paint" (Huang et al., 2019), where the proposed model is utilized specifically for reconstruction. When reconstruction is considered, the main difference of our model is that, since we try to model a human-like generation process, our agent outputs a single stroke at each time step, with the environment being altered throughout this process, whereas in Learning to Paint, 5 strokes are predicted by the agent at each time step. As a major difference from previous studies, our agent decides whether to stop or keep drawing before generating a stroke. This enables the agent to synthesize an image with as few actions as possible when motivated by our reward formulations.
Self-Supervised Program Synthesis
Our method essentially outputs a visual program that depends only on the rastered data. In that sense, studies on Constructive Solid Geometry (CSG) are also related. Different RL frameworks for the reconstruction of a given CSG image, which is essentially a composition of geometric shapes, have been proposed (Ellis et al., 2019; Zhou et al., 2020). The former considered parsing as a search problem that is solved by using a read-eval-print-loop within a Markov Decision Process. The latter adopted a Tree-LSTM model to eliminate invalid programs, and the reward is taken to be the Chamfer distance between the target image and the current canvas.
3 METHOD
Our model consists of a policy network and a (non-differentiable) renderer. At time step t, the policy network takes the current canvas, C_t (a raster image), as input and outputs two distributions, π_B and π_S. The first distribution, π_B, is for the stroke (i.e. Bezier curve) parameters and the second one, π_S, is for the continue/stop decision. From the first distribution, we randomly sample a stroke defined by its 7 parameters (the x-y coordinates of the start, end, and control point of the quadratic Bezier curve, and a brush width). From the second distribution, we randomly sample a decision. If the decision happens to be 'continue', we add the newly sampled stroke to the current canvas, C_t, increment time (i.e. t ← t + 1) and restart. If the decision was 'stop', then C_t is returned as the final output. Our model is able to handle parsing and different generation tasks, and the processing pipeline we just described is common to all these tasks. What changes among tasks is the reward functions and/or training procedures, which we explain below.
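To make the pipeline concrete, the following is a minimal sketch of this drawing loop. The helper names (policy, renderer) and the categorical parameterization of the two distributions are our assumptions for illustration, not the authors' exact implementation; a cap on the number of steps is added for safety.

import torch

def draw_episode(policy, renderer, canvas_size=32, max_steps=50):
    # Start from an empty canvas (a raster image).
    canvas = torch.zeros(1, 1, canvas_size, canvas_size)
    strokes = []
    for t in range(max_steps):
        # The policy maps the current canvas C_t to two distributions:
        # pi_B over the stroke parameters and pi_S over {continue, stop}.
        pi_B, pi_S = policy(canvas)
        # Sample the continue/stop decision (index 1 = 'stop' is assumed).
        if torch.distributions.Categorical(probs=pi_S).sample().item() == 1:
            break  # 'stop': C_t is returned as the final output
        # Sample one stroke: one categorical draw per Bezier parameter.
        stroke = torch.distributions.Categorical(probs=pi_B).sample()
        # Rasterize the stroke onto the canvas (the non-differentiable step).
        canvas = renderer(canvas, stroke)
        strokes.append(stroke)
    return canvas, strokes

Sampling the decision before the stroke is equivalent here and matches the statement that the agent decides whether to stop before generating a stroke.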
Unconditional Generation
The task of 'generating new concepts', as dubbed in the Omniglot challenge, is essentially unconditional sampling from a distribution obtained from the whole Omniglot training set. Here, the model is asked to generate completely novel samples (i.e. characters) without any constraints. For this task, at each time step t, we calculate an instantaneous reward, r_t, that has three components:

r_t = D(C_t) + λ_1 align(C_t, I) + λ_2 N(|C_t|; µ, σ).   (1)

The first term is a reward based on a discriminator, to make generated characters as 'real' as possible. D(·) is a discriminator that outputs the "realness" score of its input canvas. We train it in an adversarial manner by using the generated examples as negatives and the elements of the input dataset as positives. The second term is a clustering-based data fidelity reward. The function align(C_t, I) measures the alignment between the current canvas C_t and another canvas I, which is a randomly selected cluster center at the beginning of each episode. The cluster centers are obtained by applying k-means to all characters in the input dataset. align basically counts the number of intersecting on-pixels (between the two canvases) minus the number of non-intersecting on-pixels in C_t, and divides this quantity by the number of on-pixels in I. The final term assesses the conformity of the current canvas with the dataset in terms of the number of on-pixels. N(|C_t|; µ, σ) evaluates a normal distribution with parameters (µ, σ) at |C_t|, the number of on-pixels in the current canvas. We obtain (µ, σ) by fitting a normal distribution to the on-pixel counts of characters in the training set. We observed that the second and third terms accelerate learning as they guide the exploration within the vicinity of real characters. During training, instead of using the instantaneous reward, r_t, we use the difference of successive rewards, i.e. r_t − r_{t−1}. In order to encourage exploration and avoid mode collapse, we use an entropy penalty term:

α max(0, KL([π_B, π_S], U) − τ).   (2)

Here, KL indicates the KL-divergence and U is the uniform distribution. This term first measures the divergence between the uniform distribution and π_B, π_S, the distributions output by the policy network. Then, through the hinge function, if the divergence exceeds a threshold (τ), this term activates and increases the penalty. The policy network and the discriminator D are updated alternately after 256 images are generated at each iteration. We employ the REINFORCE algorithm (Williams, 1992) to update the weights of the policy network. The discriminator is trained using the hinge loss. In order to stabilize the discriminator and keep the Lipschitz constant of the whole network equal to 1, Spectral Normalization is applied at each layer (Miyato et al., 2018). Throughout the training, we kept the balance ratio between generated and real samples at 3.
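As a concrete reference, here is a minimal sketch of the reward of Eq. (1) and the penalty of Eq. (2), assuming binary canvases and a scalar-valued discriminator D. The binarization threshold and the way the KL terms are aggregated (a sum over the per-parameter distributions) are our assumptions, not details stated in the paper.

import numpy as np
from scipy.stats import norm

def align(C, I, thresh=0.5):
    # On-pixels shared with I, minus stray on-pixels of C, over on-pixels of I.
    c_on, i_on = C > thresh, I > thresh
    intersect = np.logical_and(c_on, i_on).sum()
    stray = np.logical_and(c_on, ~i_on).sum()
    return (intersect - stray) / max(i_on.sum(), 1)

def generation_reward(D, C, I, mu, sigma, lam1, lam2):
    # Eq. (1): realness + clustering fidelity + on-pixel-count conformity.
    n_on = (C > 0.5).sum()
    return D(C) + lam1 * align(C, I) + lam2 * norm.pdf(n_on, mu, sigma)

def entropy_penalty(pi_B, pi_S, alpha, tau, eps=1e-8):
    # Eq. (2): hinged KL between the policy distributions and uniform.
    kl = 0.0
    for p in list(pi_B) + [pi_S]:
        u = 1.0 / len(p)  # uniform probability mass per bin
        kl += float(np.sum(p * np.log((p + eps) / u)))
    return alpha * max(0.0, kl - tau)

During training, the difference r_t − r_{t−1} would be fed to REINFORCE rather than r_t itself.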
Image Reconstruction by Parsing
In the "parsing" task, the goal is to reconstruct the given input image by re-drawing it through strokes as accurately as possible. To this end, we formulate a new reward function with two terms: a fidelity reward that indicates how much of a stroke is consistent with the input image (using the "align" function introduced above) and a penalty that grows with the time step t, i.e., with the number of 'continue' decisions made:

r_t = align(S_t, C_t) − λ_1 t,   (3)

where S_t is the newly sampled stroke and C_t is the current canvas (input). The second term simply acts as a penalty for every 'continue' action. The first term ensures that the sampled stroke is well-aligned with the input, and the second term forces the model to use as few strokes as possible. There is no need for a discriminator. This model explicitly learns the vectorization of the input raster-image in a self-supervised manner. Apart from the different reward function, another crucial difference between the training of the unconditional generation model and the parsing model is how the input and output are handled. In unconditional generation, the newly-sampled stroke is added to the current canvas, whereas in parsing, we do the opposite: the sampled stroke is removed (masked out) from the current canvas, and the returned final canvas is the combination of all sampled strokes until the 'stop' decision. λ, α and τ in Equations 1, 2, and 3 are hyperparameters adjusted experimentally (see 'Training Details' in Appendix B).
Generating New Exemplars
In this task, a model is required to generate a new exemplar (i.e. a variation) of an unseen concept (i.e. character). To the best of our knowledge, we are the first to tackle this task in a self-supervised stroke-based setting. Most importantly, we do not require any further training to achieve this task. We utilize our parsing network described in the previous section to capture the overall structure of a given letter. In order to produce new exemplars, we randomly sample different parsings (sets of strokes) from the distribution generated by the agent. In order to eliminate 'unlikely' samples, we compute the likelihood of each parsing under the resulting policy and apply a threshold.
Generating Novel Concepts from Type
In this task, the goal is to generate a novel concept (i.e. character) given a previously unseen type (i.e. alphabet) consisting of 10 concepts. The novel concepts should conform to the overall structure, that is, the stroke formulation and composition of the given type (alphabet). We, again, tackle this challenge using our parsing network without any further training. To do so, we first parse all input images into their strokes. For each input image, we sample five stroke sets from the stroke-parameter distributions output by the policy network. During the sampling process, we again use the likelihood-based quality function described in the previous section. We add all the strokes sampled during this process to a stroke library, where each stroke is stored with the time step at which it was generated. Noting that the number of strokes sampled for a given character is not constant, we approximate a distribution over stopping actions. This process provides a stroke set representing the structure of the letters and the way they are composed; that is, we can exploit the compositionality and causality of an alphabet. Throughout the character generation process, at each time step a stroke is sampled from the corresponding time-step group of the library. The sampled strokes are summed together to obtain the final canvas.
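The stroke-library procedure above can be summarized with the following sketch. The parse and render helpers are hypothetical stand-ins for the parsing policy and the renderer, the likelihood-threshold filter is omitted for brevity, and sampling the stopping point from the empirical parse lengths is our reading of the approximated stopping distribution.

import random
from collections import defaultdict

def build_stroke_library(parse, images, samples_per_image=5):
    # Parse each of the 10 input characters several times and pool the
    # strokes, grouped by the time step at which they were produced.
    library, lengths = defaultdict(list), []
    for img in images:
        for _ in range(samples_per_image):
            strokes = parse(img)  # one sampled parsing: a list of strokes
            lengths.append(len(strokes))
            for t, s in enumerate(strokes):
                library[t].append(s)
    return library, lengths

def sample_novel_character(library, lengths, render):
    # Draw an episode length from the empirical lengths, then one stroke
    # per time step from the matching group, and composite the result.
    n = random.choice(lengths)
    strokes = [random.choice(library[t]) for t in range(n)]
    return render(strokes)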
4 EXPERIMENTS
Datasets and Implementation Details
We report generation and reconstruction (parsing) results on the Omniglot dataset (Lake et al., 2015), which includes 1623 characters from 50 different alphabets, with 20 samples for each character. 30 alphabets are used for training and the remaining 20 are used for evaluation. For unconditional generation and reconstruction, we also report results on the MNIST dataset (LeCun, 1998). For both datasets, we rescale input images to 32x32 in order for them to conform with our model. Our policy network is composed of a ResNet feature extraction backbone and three MLP branches for computing the distributions over the action space. Architectural details can be found in Appendix A. For the Omniglot dataset, we take the brush width as a constant and omit the corresponding MLP branch. We tune the learning rate and weight decay of the generator, the λ hyperparameters in Equations 1 and 3, and the α and τ hyperparameters in Equation 2, using the Tree-structured Parzen Estimator algorithm (Bergstra et al., 2011) in the RayTune library (Liaw et al., 2018). For unconditional generation, we use the discriminator architecture proposed by Miyato et al. (2018). In order to stabilize the discriminator and keep the Lipschitz constant of the whole network equal to 1, Spectral Normalization is applied at each layer. The discriminator is trained using the hinge loss. Throughout the training, we set the balance ratio between fake and real samples at 3. We performed hard-negative mining to speed up convergence during this process.
4.1 UNCONDITIONAL GENERATION
We initially tested our approach on the MNIST dataset. Figure 3 presents the improvement in the quality of samples generated throughout the policy network updates. At the beginning, generated characters are mostly random scribbles. Towards the end, they start to look like real digits. Table 1 shows that our method achieves an acceptable FID score (Heusel et al., 2017) given the scores of other prominent GAN and VAE methods. The presented FID values are taken from Lucic et al. (2017). Figure 4 shows sample generations for the Omniglot dataset. To demonstrate that our generations are not duplicates of the characters in the training set, we present the four most similar characters from the training set to our generations. Similarity is computed using pixelwise L2 distance. Finally, Figure 5 presents more generated characters, which demonstrate the variability and the quality of the generated concepts. The agent was able to capture the type of strokes, the number of strokes a character has, and letter structures without any stroke supervision.
4.2 IMAGE RECONSTRUCTION BY PARSING
Figure 6 presents sample parsing and reconstruction results on MNIST. Our agent can reconstruct a character from the test set in a minimal number of actions, within the abilities of quadratic Bezier curves. The selected brush widths also conform with the stroke heterogeneity of the dataset. Then, we train our model with the characters in the Omniglot training set. For evaluation, we utilize the evaluation set with completely novel characters from unseen alphabets. In this way, we can verify that our agent has learned how to parse a given character. Due to the penalty term that increases with the number of strokes, there is a tradeoff for the agent between replicating a character exactly and replicating it in a small number of actions. This indirectly demotivates the agent from retouching the image with small strokes to minimize the difference to the target.
Results in Figure 7 show that the overall structure of the target images is preserved; however, small details are lacking in some of the examples. This is reflected in the distance measures (Table 2).
4.3 GENERATING NEW EXEMPLARS
For this task, we use the evaluation set of the Omniglot dataset. For each character in the test set, we sample 500 different parses from the policy. In Figure 8, it can be observed that given an unseen letter from a novel alphabet, our agent can sample from the resulting distribution and output quality variations. The major indications of variation are the structures of the strokes, the number of actions taken to generate a sample, and the fine details of certain characters. We compare each produced character with its corresponding input image using LPIPS, SSIM and L2 distance values. The mean and standard deviation of these values over the whole evaluation set are 0.078 ± 0.002, 0.616 ± 0.018 and 0.08 ± 0.016, respectively. Results per alphabet can be found in Appendix C.3.
4.4 GENERATING NOVEL CONCEPTS FROM TYPE
In order to generate a concept that is likely to belong to a given alphabet, we again leverage our reconstruction model. Given 10 different characters of an unseen alphabet, we are able to generate novel images with similar structural features. The results presented in Figure 9 show that our algorithm can model the compositional pattern of an alphabet in stroke space. In order to obtain quantitative results (e.g. LPIPS, L2 and SSIM), we produce 10000 images conditioned on each input set and randomly sample characters by utilizing the discriminator trained for the unconditional generation model, assuming it has learned what features of a given input imply a real character. We generate a sampling distribution according to the discriminator scores of the generated samples and repeat the sampling process multiple times for each input to obtain a set of outputs to be considered. For each generated sample, we calculate the performance metrics with respect to all characters in the input. In order to report the final metrics presented in supplemental Figures 12a and 12b, we consider the most similar input-output pairs. The mean and standard deviation of the LPIPS, SSIM and L2 values over the whole evaluation set are 0.0801 ± 0.003, 0.502 ± 0.068 and 0.1263 ± 0.00086, respectively.
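Since Sections 4.3 and 4.4 both report LPIPS, SSIM and L2 between an input character and a generated one, a minimal sketch of one such comparison is given below, assuming grayscale images in [0, 1] and the lpips and scikit-image packages. Whether the paper's "L2" is a mean squared difference or a norm is not specified, so the choice here is an assumption.

import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity as ssim

lpips_fn = lpips.LPIPS(net='alex')  # AlexNet backbone; VGG and SqueezeNet are also reported

def compare(gen, ref):
    # gen, ref: numpy arrays of shape (H, W) with values in [0, 1].
    l2 = float(np.mean((gen - ref) ** 2))  # assumed mean squared difference
    s = float(ssim(gen, ref, data_range=1.0))
    # LPIPS expects (N, 3, H, W) tensors scaled to [-1, 1].
    to_t = lambda a: torch.from_numpy(a).float().mul(2).sub(1).repeat(3, 1, 1).unsqueeze(0)
    with torch.no_grad():
        d = float(lpips_fn(to_t(gen), to_t(ref)))
    return {'LPIPS': d, 'SSIM': s, 'L2': l2}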
5 CONCLUSION
We proposed a self-supervised reinforcement learning approach for stroke-based image generation. We trained our model for unconditional generation and parsing on handwritten character datasets by defining a single action space and environment. Through experiments, we showed that, given the whole training set, our agent is able to capture the overall distribution and generate quality novel samples for the challenging Omniglot dataset. Then, we trained our agent for the parsing task; given a raster image, the goal is to reconstruct it through as few strokes as possible. We demonstrated that the parsing agent can be utilized for generating exemplars of a concept and creating novel samples conditioned on a type, without any further training; the only difference among tasks is how the agent is invoked. To the best of our knowledge, we are the first to tackle these tasks with a self-supervised approach that operates on a stroke level. In this work, we used quadratic Bezier curves as the smallest unit of sketching. However, for human-level generations, the stroke representations should be enhanced to capture more complex structures. We anticipate that this will improve the overall performance.
A NETWORK ARCHITECTURE
The backbone is a ResNet with 3 convolutional layers and 8 residual layers. The first convolutional layer has 32 filters of size 5x5. The second and third convolutional layers have 32 filters of size 4x4 and a stride of 2, resulting in a tensor with dimensions 8x8x32. Then, we use the standard residual layers described in He et al. (2016). Each convolutional layer is followed by Batch Normalization and a ReLU activation. The output of the final residual layer is flattened to a 2048x1 vector to be processed by the MLPs. The first MLP outputs a set of distributions, one for each control-point coordinate of the Bezier curve. It has 1 fully connected layer that outputs a 192x1 vector. This vector is reshaped to a 32x6 matrix where each 32x1 column defines a distribution over the possible coordinates. The MLPs used for selecting the brush width and sampling the stop/continue decision consist of 2 layers with 64 and 2 neurons, respectively.
B TRAINING DETAILS
The hyperparameters used for unconditional generation and reconstruction are presented in Tables 3 and 4, respectively.
C EXPERIMENTS: SUPPLEMENTAL FIGURES
C.1 UNCONDITIONAL GENERATION
In Figure 10, we present the FID values for the generated images throughout training on the Omniglot dataset.
C.2 PARSING
In Table 5, we present the mean number of strokes our agent used to parse the characters for each alphabet in the test set.
C.3 EXEMPLAR GENERATION
In Figure 11a, we demonstrate the LPIPS metrics calculated using 3 different backbones (AlexNet, VGG, and SqueezeNet). In Figure 11b, we present the L2 and SSIM values. These metrics are calculated over all examples generated for the test set.
C.4 GENERATING NOVEL CONCEPTS FROM TYPE
In Figure 12a, we demonstrate the LPIPS metrics calculated using 3 different backbones (AlexNet, VGG, and SqueezeNet). In Figure 12b, we present the L2 and SSIM values.
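To make the architecture in Appendix A concrete, a minimal PyTorch sketch is given below. The padding values, the 3x3 kernels inside the residual blocks, and the single-channel input are assumptions not stated in the paper, and the brush-width branch is omitted, as in the Omniglot setup.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.BatchNorm2d(32), nn.ReLU(),            # 32x32
            nn.Conv2d(32, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),  # 16x16
            nn.Conv2d(32, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),  # 8x8
            *[ResBlock(32) for _ in range(8)],
            nn.Flatten())                         # 32 * 8 * 8 = 2048
        self.stroke_head = nn.Linear(2048, 192)   # 6 coordinates x 32 bins
        self.stop_head = nn.Sequential(nn.Linear(2048, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, canvas):
        h = self.backbone(canvas)
        pi_B = torch.softmax(self.stroke_head(h).view(-1, 6, 32), dim=-1)
        pi_S = torch.softmax(self.stop_head(h), dim=-1)
        return pi_B, pi_S

For a 32x32 input, PolicyNet()(torch.zeros(1, 1, 32, 32)) returns a (1, 6, 32) tensor of per-coordinate distributions and a (1, 2) continue/stop distribution.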
1. What is the main contribution of the paper regarding reinforcement learning for character parsing and generation?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to recent advancements in self-supervised vectorization?
3. How does the reviewer assess the quality of the quantitative results, specifically in relation to reconstruction quality?
4. Why did the authors choose to use perceptual models such as SSIM or LPIPS for evaluating performance, and what is the reviewer's opinion on this choice?
Summary Of The Paper Review
Summary Of The Paper
This paper presents an approach using reinforcement learning to parse and generate characters. Experiments are performed with both the Omniglot challenge and MNIST datasets. In the context of Omniglot, in addition to unconditional generation and parsing, results are also shown on the exemplar generation and type-conditioned generation tasks. The proposed approach only uses pixel/raster-level information during training and does not use stroke or vector data.
Review
Positives
The overall approach proposed by the paper seems sound, and the results look visually reasonable in most cases.
Concerns
My biggest concern is that the approach and results do not feel well situated in the context of advancements in self-supervised vectorisation over the last 11 or so months. For example, these papers both propose differentiable approaches to drawing stroked curves (without the requirement that they be closed):
Tzu-Mao Li, Michal Lukáč, Michaël Gharbi, and Jonathan Ragan-Kelley. 2020. Differentiable vector graphics rasterization for editing and learning. ACM Trans. Graph. 39, 6, Article 193 (December 2020), 15 pages. DOI: https://doi.org/10.1145/3414685.3417871
Daniela Mihai and Jonathon Hare. 2021. Differentiable Drawing and Sketching. arXiv preprint arXiv:2103.16194, https://arxiv.org/abs/2103.16194
In both of those papers, experiments were performed with character reconstruction, and the former demonstrates vector generation with a VAE. In both papers, like this one, the training of the models is self-supervised, utilising only the rasters. Whilst I acknowledge that this paper explores more of the Omniglot tasks, it's not obvious to me why the approaches in these other papers could not do the same thing (and they could certainly be compared on the unconditional parsing/reconstruction tasks). My secondary concern would be that whilst many of the qualitative results look okay, many of the quantitative results using the proposed model seem not to be so good; for example, in terms of reconstruction quality in Table 2, the proposed model performs significantly worse than all the compared approaches. As a side note, it's not obvious to me why one would necessarily use perceptual models like SSIM or LPIPS for measuring performance in this particular task. I'd at least like to see some kind of justification for that choice.
ICLR
Title
Non-convex Optimization for Learning a Fair Predictor under Equalized Loss Fairness Constraint

Abstract
Supervised learning models have been increasingly used in various domains such as lending, college admission, natural language processing, face recognition, etc. These models may inherit pre-existing biases from training datasets and exhibit discrimination against protected social groups. Various fairness notions have been introduced to address fairness issues. In general, finding a fair predictor leads to a constrained optimization problem, and depending on the fairness notion, it may be non-convex. In this work, we focus on Equalized Loss (EL), a fairness notion that requires the prediction error/loss to be equalized across different demographic groups. Imposing this constraint on the learning process leads to a non-convex optimization problem even if the loss function is convex. We introduce algorithms that can leverage off-the-shelf convex programming tools and efficiently find the global optimum of this non-convex problem. In particular, we first propose the ELminimizer algorithm, which finds the optimal EL fair predictor by reducing the non-convex optimization problem to a sequence of convex constrained optimizations. We then propose a simple algorithm that is computationally more efficient compared to ELminimizer and finds a sub-optimal EL fair predictor using unconstrained convex programming tools. Experiments on real-world data show the effectiveness of our algorithms.

1 INTRODUCTION

As machine learning (ML) algorithms are increasingly being used in applications such as education, lending, recruitment, healthcare, criminal justice, etc., there is a growing concern that the algorithms may exhibit discrimination against protected population groups. For example, speech recognition products such as Google Home and Amazon Alexa were shown to have accent bias (Harwell, 2018).
The COMPAS recidivism prediction tool, used by courts in the US in parole decisions, has been shown to have a substantially higher false positive rate for African Americans compared to the general population (Dressel & Farid, 2018). Amazon had been using automated software since 2014 to assess applicants' resumes, which was found to be biased against women (Dastin, 2018).

Various fairness notions have been proposed in the literature to measure and remedy the biases in ML systems; they can be roughly classified into two classes: 1) individual fairness focuses on equity at the individual level and requires similar individuals to be treated similarly (Dwork et al., 2012; Biega et al., 2018; Jung et al., 2019; Gupta & Kamble, 2019); 2) group fairness requires certain statistical measures to be (approximately) equalized across different groups distinguished by some sensitive attributes. Their suitability for use is often application dependent, and many of them are incompatible with each other (Zhang et al., 2019; Hardt et al., 2016; Conitzer et al., 2019; Zhang et al., 2020; Khalili et al., 2020). Numerous approaches have been developed to satisfy a given definition of fairness, and they generally fall into three categories: pre-processing, by modifying the original dataset, such as removing certain features and reweighing, e.g., (Kamiran & Calders, 2012; Celis et al., 2020); in-processing, by modifying the algorithms, such as imposing fairness constraints or changing objective functions, e.g., (Zhang et al., 2018; Agarwal et al., 2018; 2019; Reimers et al., 2021; Calmon et al., 2017); post-processing, by adjusting the output of the algorithms based on sensitive attributes, e.g., (Hardt et al., 2016).

In this paper, we focus on group fairness and aim to mitigate unfairness issues in supervised learning using in-processing approaches. The problem can be cast as a constrained optimization problem where a fair predictor is found by minimizing the prediction error (i.e., loss) subject to a certain group fairness constraint. In Section 2.1, we present the definitions of a number of commonly used group fairness notions, namely statistical parity (Dwork et al., 2012), equal opportunity (Hardt et al., 2016), equalized loss (Zhang et al., 2019), and bounded group loss (Agarwal et al., 2019). Here we are particularly interested in equalized loss, which requires the expected loss to be equalized across different groups.

Constrained optimization problems for finding a fair predictor have been studied in the literature. In general, imposing a fairness criterion on the optimization problem may lead to a non-convex optimization problem. Existing works have proposed various approaches to solving such non-convex optimizations in different settings. For example, Komiyama et al. (2018) studied non-convex optimization for regression problems under the coefficient-of-determination constraint. Agarwal et al. (2019) proposed an approach to finding a fair regression model under bounded group loss and statistical parity fairness constraints. Agarwal et al. (2018) studied classification problems and aimed at finding fair classifiers under various fairness notions, including statistical parity and equal opportunity. In particular, they considered the zero-one loss as the objective function and trained a randomized fair classifier over a finite hypothesis space; this problem was reduced to finding the saddle point of a linear Lagrangian function in (Agarwal et al., 2018). Zhang et al.
(2018) proposed an adversarial debiasing technique to find a fair classifier under equalized odds, equal opportunity, and statistical parity. However, there is no guarantee that this technique finds the globally optimal solution.

The main differences between the present work and existing in-processing approaches are as follows: 1) we consider a non-convex problem for finding a fair predictor satisfying the Equalized Loss fairness notion, which, to the best of our knowledge, has not been studied in the literature; 2) we propose algorithms for efficiently finding the global optimal solution to this non-convex problem; 3) our algorithms are easy to implement and are applicable to both regression and classification problems; 4) unlike (Agarwal et al., 2018), our algorithms are not limited to a finite hypothesis space.

Non-convex optimization problems have also been studied in other contexts, such as learning overparametrized models. For example, deep neural networks are typically trained by solving unconstrained, non-convex problems, and methods such as gradient descent may not be suitable as they are likely to find saddle points rather than optima. To address this issue, recent works have proposed approaches that incorporate higher-order derivatives (Celis et al., 2020; Anandkumar & Ge, 2016) or noisy gradients (Ge et al., 2015). However, these methods only find a local minimum (not a global minimum) and are not applicable to our problem with a non-convex constraint. In this work, we develop novel algorithms that efficiently find the fair (sub-)optimal solutions under the Equalized Loss fairness constraint. Note that while our approach and algorithms are presented in the context of fair machine learning, they are applicable to any problem that can be formulated as a constrained optimization problem of the form min_w L0(w) + αL1(w) s.t. |L0(w) − L1(w)| < γ, where α is a constant.

Our main contributions and findings are as follows.

1. We study the relationship between the Equalized Loss (EL) and Bounded Group Loss (BGL) fairness notions. We show that given the existence of feasible solutions satisfying (approximate) BGL fairness, imposing an (approximate) EL fairness constraint never increases the losses of both groups simultaneously (Theorems 1 and 2 in Section 2.1). These results help policy makers better understand these two fairness notions.

2. We develop an algorithm (ELminimizer) to solve a non-convex constrained optimization problem that finds the optimal (approximate) EL fair solution. We show that such a non-convex optimization can be reduced to a sequence of convex constrained optimizations, and we analyze the convergence property of the algorithm (Theorems 3 and 4, Section 3).

3. We develop a simple algorithm for finding a sub-optimal (approximate) EL fair solution. We show that a sub-optimal solution is a linear combination of the optimal solutions to two unconstrained optimizations, and it can be found efficiently without solving constrained optimizations (Theorem 5, Section 4).

4. We conduct a sample complexity analysis and provide a guarantee on generalization performance (Theorem 7, Section 5).

5. We validate the theoretical results by conducting experiments on real-world data (Section 6).

2 PROBLEM FORMULATION

Consider a supervised learning problem where the training dataset consists of triples (X, A, Y) from two social groups.
Random variable X ∈ X ⊂ R^{d_x} is the feature vector (in the form of a column vector), A ∈ {0, 1} is the sensitive attribute (e.g., race, gender) indicating group membership, and Y ∈ Y ⊂ R is the label. The feature vector X may or may not include the sensitive attribute A. Label Y can be either discrete or continuous depending on the given problem: if Y is discrete (resp. continuous), then the problem is a classification (resp. regression) problem. Let F be a set of predictors fw : X → R parameterized by a weight vector w ∈ R^{d_w}.1 Consider a loss function l : Y × X → R, where l(Y, fw(X)) measures the error of fw in predicting label Y. Denote the expected loss with respect to the joint probability distribution of (X, Y) by L(w) := E{l(Y, fw(X))}. Then, La(w) := E{l(Y, fw(X)) | A = a} denotes the expected loss of the group with attribute A = a.

A predictor that minimizes the total expected loss, i.e., argmin_w L(w), can be biased against certain groups. To mitigate the risk of unfairness, various fairness notions have been proposed in the literature. Some of the most commonly used notions of group fairness are as follows: 1) Statistical Parity (SP) (Dwork et al., 2012) requires that the predictor and the sensitive attribute be independent, i.e., fw(X) ⊥ A; 2) Equal Opportunity (EqOpt) (Hardt et al., 2016) requires that, conditional on Y = 1, the prediction and the sensitive attribute are independent, i.e., fw(X) ⊥ A | Y = 1; 3) Equalized Odds (EO) (Hardt et al., 2016) requires conditional independence between the prediction and the sensitive attribute given Y, i.e., fw(X) ⊥ A | Y; 4) Equalized Loss (EL) (Zhang et al., 2019; Berk et al., 2021) requires that the losses experienced by different groups be equalized, i.e., L0(w) = L1(w); 5) Bounded Group Loss (BGL) (Agarwal et al., 2019) requires that the loss experienced by each group be bounded.

With fairness considerations, the goal is to find a weight vector w that minimizes the total expected loss in predicting Y given X, subject to a certain fairness condition, i.e., min_w L(w) s.t. fairness constraint. This is a typical formulation in the fair machine learning literature, and the above method of finding a fair predictor belongs to the in-processing approaches. Because such a constrained optimization can be non-convex, finding the optimal solution efficiently can be challenging. In this work, we develop novel algorithms that solve such an optimization problem under the EL fairness constraint.

2.1 EQUALIZED LOSS (EL) AND BOUNDED GROUP LOSS (BGL)

As mentioned in Section 2, various fairness notions have been introduced in the literature. Among them, Statistical Parity (SP), Equal Opportunity (EqOpt), Equalized Odds (EO), and Bounded Group Loss (BGL) have been studied extensively, and both in-processing and post-processing approaches have been developed to satisfy these constraints (Dwork et al., 2012; Agarwal et al., 2018; Hardt et al., 2016; Zafar et al., 2019; Fitzsimons et al., 2019). Note that different fairness notions may conflict with each other, and which one to adopt is application and context dependent. In this work, we are interested in the Equalized Loss (EL) fairness notion (Zhang et al., 2019; Berk et al., 2021), which requires the prediction error to be the same across different groups,2 and the Bounded Group Loss (BGL) fairness notion (Agarwal et al., 2019), which requires the prediction error of every group to be bounded. We consider a relaxed version of EL fairness defined as follows.
Definition 1 (γ-EL) A predictor f satisfies γ-EL if the expected losses experienced by different demographic groups satisfy the following:

−γ ≤ L0(w) − L1(w) ≤ γ. (1)

Parameter γ controls the degree of fairness; a smaller γ implies stronger fairness. When γ = 0, exact EL fairness is attained. We say a group is disadvantaged if it experiences a larger loss. Similarly, the Bounded Group Loss (BGL) fairness notion is formally defined as follows.

Definition 2 (γ-BGL) A predictor f satisfies γ-BGL if the expected loss of each demographic group is bounded by γ, i.e.,

La(w) ≤ γ, ∀a ∈ {0, 1}. (2)

1 Predictive models such as logistic regression, linear regression, deep learning models, etc., are parameterized by a weight vector.
2 EL has also been referred to as Overall Accuracy Equality in (Berk et al., 2021; Agarwal et al., 2019).

2.2 RELATIONS BETWEEN γ-EL AND γ-BGL

In this section, we formally study the relations between the γ-EL and γ-BGL fairness notions. Under the γ-EL fairness constraint, finding a fair predictor is equivalent to solving the following constrained optimization problem:

min_w L(w) s.t. |L0(w) − L1(w)| ≤ γ. (3)

Let w∗ denote the solution to (3); then fw∗ is the optimal γ-EL fair predictor. Theorem 1 below shows that, given the existence of a feasible point satisfying γ-BGL fairness, it is impossible for both groups to experience a loss larger than γ under the optimal γ-EL fair predictor.

Theorem 1 Consider the following optimization for finding the optimal γ-BGL fair predictor:

min_w L(w) s.t. La(w) ≤ γ, ∀a ∈ {0, 1}. (4)

If L0(w∗) > γ and L1(w∗) > γ, then optimization problem (4) does not have a feasible point.

Proof 1 We prove by contradiction. Assume w̃ is a feasible point of optimization (4). Note that w̃ is a feasible point of optimization problem (3) as well. Since both L0(w∗) and L1(w∗) are larger than γ, we have

E{l(Y, fw∗)} = Pr{A = 0} L0(w∗) + Pr{A = 1} L1(w∗) > γ,
E{l(Y, fw̃)} = Pr{A = 0} L0(w̃) + Pr{A = 1} L1(w̃) ≤ γ.

Therefore, w∗ cannot be the solution to (3). This contradiction proves that optimization problem (4) cannot have a feasible point.

Theorem 1 implies that if the γ-EL notion leads to a loss larger than γ for every demographic group, then there is no feasible predictor under γ-BGL.3 The next theorem further shows that the optimal γ-EL fair predictor must satisfy 2γ-BGL.

Theorem 2 Assume optimization problem (4) has at least one feasible point. Then, we have

min{L0(w∗), L1(w∗)} ≤ γ and max{L0(w∗), L1(w∗)} ≤ 2γ.

Proof 2 Let w̃ be a feasible point of optimization problem (4); then w̃ is also a feasible point of (3). If min{L0(w∗), L1(w∗)} > γ, then L(w∗) > γ ≥ L(w̃) must hold. This is a contradiction because it implies that w∗ is not an optimal solution to (3). Therefore, min{L0(w∗), L1(w∗)} ≤ γ. Similarly, we can prove max{L0(w∗), L1(w∗)} ≤ 2γ by contradiction. Assume max{L0(w∗), L1(w∗)} > 2γ. Then, max{L0(w∗), L1(w∗)} − min{L0(w∗), L1(w∗)} > γ, which shows that w∗ is not a feasible point of (3). This is a contradiction. Therefore, max{L0(w∗), L1(w∗)} ≤ 2γ.

Theorems 1 and 2 investigate the relations between the EL and BGL fairness notions. Since γ-EL (at the optimum) implies 2γ-BGL and additionally requires approximate equality across different groups, we focus on the γ-EL fairness notion in the rest of the paper. Because optimization problem (3) is non-convex, finding the optimal γ-EL fair solution efficiently can be challenging.
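As a quick numerical illustration of Definitions 1 and 2 (a toy sketch with synthetic losses, not code from the paper), both notions can be checked directly from per-group empirical losses:

```python
import numpy as np

def group_losses(y, y_hat, a, loss=lambda y, f: (y - f) ** 2):
    """Empirical per-group losses L0, L1 from labels y, predictions y_hat,
    and a binary group attribute a."""
    l = loss(y, y_hat)
    return l[a == 0].mean(), l[a == 1].mean()

def satisfies_el(L0, L1, gamma):
    return abs(L0 - L1) <= gamma      # Definition 1 (gamma-EL)

def satisfies_bgl(L0, L1, gamma):
    return max(L0, L1) <= gamma       # Definition 2 (gamma-BGL)

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
a = rng.integers(0, 2, size=1000)
y_hat = y + rng.normal(scale=1.0 + 0.2 * a, size=1000)  # group 1 is noisier
L0, L1 = group_losses(y, y_hat, a)
print(satisfies_el(L0, L1, gamma=0.5), satisfies_bgl(L0, L1, gamma=2.0))
```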
In the next sections, we propose a number of algorithms that are easy to implement and can solve optimization (3) efficiently.

3 OPTIMAL FAIR MODEL UNDER EL FAIRNESS

In this section, we consider optimization problem (3) under the EL fairness constraint. Note that this optimization problem is non-convex, and finding the global optimal solution is difficult. However, we propose an algorithm which is able to find the solution to the non-convex optimization (3) by solving a sequence of convex optimization problems. Before presenting the algorithm, we need to introduce two assumptions.

Assumption 1 L0(w), L1(w), and L(w) are strictly convex functions in w.

3 Theorem 1 is related to (Agarwal et al., 2019). In particular, they considered γ-BGL fairness and mentioned that the equalized loss fairness notion may increase the loss of both groups.

Algorithm 1: Function ELminimizer
1 ELminimizer(wG0, wG1, ε, γ):
2 λ_start^(0) = L0(wG0)
3 λ_end^(0) = L0(wG1)
4 Define L̃1(w) = L1(w) + γ
5 i = 0
6 while λ_end^(i) − λ_start^(i) > ε do
7   λ_mid^(i) = (λ_end^(i) + λ_start^(i))/2;
8   Solve the following convex optimization problem:
      w_i^* = argmin_w L̃1(w) s.t. L0(w) ≤ λ_mid^(i)   (5)
9   λ^(i) = L̃1(w_i^*);
10  if λ^(i) ≥ λ_mid^(i) then
11    λ_start^(i+1) = λ_mid^(i); λ_end^(i+1) = λ_end^(i);
12  end
13  else
14    λ_end^(i+1) = λ_mid^(i); λ_start^(i+1) = λ_start^(i);
15  end
16  i = i + 1;
17 end
18 Return w_i^*

Example 1 Consider a linear classifier fw(X) = w^T X with squared loss l(Y, fw(X)) = (w^T X − Y)^2. In this example, E{l(Y, fw(X))} = w^T E{XX^T} w − 2 E{Y X^T} w + E{Y^2} is strictly convex in w if the covariance matrix E{XX^T} is positive definite. Similarly, La(w) is strictly convex if E{XX^T | A = a} is positive definite.

Let wGa be the weight vector minimizing the loss associated with group A = a. That is,

wGa = argmin_w La(w). (6)

Since optimization problem (6) is an unconstrained convex optimization problem, wGa can be found efficiently by the first-order condition or gradient descent. We make the following assumption.

Assumption 2 We assume that the following holds: L0(wG0) ≤ L1(wG0) and L1(wG1) ≤ L0(wG1).

Algorithm 2: Solving Optimization Problem (3)
Input: wG0, wG1, ε, γ
1 wγ = ELminimizer(wG0, wG1, ε, γ);
2 w−γ = ELminimizer(wG0, wG1, ε, −γ);
3 if L(wγ) ≤ L(w−γ) then
4   w∗ = wγ;
5 end
6 else
7   w∗ = w−γ;
8 end
Output: w∗

Assumption 2 implies that when a group experiences its lowest possible loss, it should not be the disadvantaged group. Under Assumption 2, given wG0 and wG1, Algorithm 1 with γ = 0 (i.e., function ELminimizer(wG0, wG1, ε, 0)) finds the optimal 0-EL fair solution, where parameter ε > 0 specifies the stopping criterion; as ε → 0, the output approaches the optimal solution. Intuitively, Algorithm 1 solves the non-convex optimization (3) by solving a sequence of constrained convex optimization problems. If γ > 0, Algorithm 2 finds the optimal predictor under γ-EL using the function ELminimizer. The convergence of Algorithm 1 for finding the optimal 0-EL fair solution, and the convergence of Algorithm 2 for finding the optimal γ-EL fair solution, are proved in the following theorems.

Theorem 3 Consider sequences {λ_mid^(i) | i = 1, 2, . . .} and {w_i^* | i = 1, 2, . . .} generated by Algorithm 1 when γ = 0, i.e., ELminimizer(wG0, wG1, ε → 0, 0). Under Assumptions 1 and 2, we have

lim_{i→∞} w_i^* = w∗ and lim_{i→∞} λ_mid^(i) = E{l(Y, fw∗(X))},

where fw∗ is the optimal 0-EL fair predictor. Similarly, we can prove convergence for approximate EL fairness when γ ≠ 0.

Theorem 4 Assume that L0(wG0) − L1(wG0) < −γ and L0(wG1) − L1(wG1) > γ.
Then, as ε → 0, the output of Algorithm 2 goes to the optimal γ-EL fair solution w∗.

Complexity Analysis: The while loop in Algorithm 1 is executed O(log(1/ε)) times. Therefore, Algorithm 1 needs to solve a constrained convex optimization problem O(log(1/ε)) times. Note that constrained convex optimization problems can be efficiently solved via sub-gradient methods (Nedić & Ozdaglar, 2009), barrier methods (Wright, 2001), stochastic gradient descent with one projection (Mahdavi et al., 2012), etc. For instance, Nedić & Ozdaglar (2009) introduce a subgradient method that finds the saddle point of the Lagrangian function corresponding to (5) and converges at the rate of O(1/k) (k is the number of iterations). Therefore, if ε is the maximum error tolerance for (5), the total time complexity of Algorithm 2 is O((1/ε) log(1/ε)).

4 SUB-OPTIMAL FAIR MODEL UNDER γ-EL

In Section 3, we showed that the non-convex optimization problem (3) can be reduced to a sequence of convex constrained optimizations (5), and based on this we proposed an algorithm (Algorithm 2) that finds the optimal γ-EL fair predictor. However, the proposed algorithm still requires solving a convex constrained optimization in each iteration. In this section, we propose another algorithm which finds a sub-optimal solution to optimization (3) without solving a constrained optimization in each iteration. The algorithm consists of two phases in sequence: (1) finding two weight vectors by solving two unconstrained convex optimization problems; (2) generating a new weight vector satisfying γ-EL fairness from the two weight vectors found in the first phase. Because of convexity, the two unconstrained convex optimization problems in the first phase can be solved efficiently.

Phase 1: Unconstrained optimization. In this phase, we remove the EL fairness constraint and first solve the following unconstrained optimization problem:

wO = argmin_w L(w). (7)

Because L(w) is strictly convex in w, the above optimization problem can be solved efficiently using gradient descent. Predictor fwO is the optimal predictor without the fairness constraint, and L(wO) is the smallest overall expected loss that is attainable. Let â = argmax_{a∈{0,1}} La(wO), i.e., group â is the group that is disadvantaged under predictor fwO. Then, for the disadvantaged group â, we find wGâ by solving the unconstrained optimization problem (6).

Phase 2: Binary search to find the fair predictor. For β ∈ [0, 1], we define the following:

g(β) = Lâ((1 − β)wO + βwGâ) − L1−â((1 − β)wO + βwGâ);
h(β) = L((1 − β)wO + βwGâ),

where function g(β) can be interpreted as the loss disparity between the two demographic groups under predictor f(1−β)wO+βwGâ, and h(β) is the corresponding overall expected loss. Some properties of functions g(.) and h(.) are summarized in the following theorem.

Theorem 5 Under Assumptions 1 and 2, the following hold:
1. There exists β0 ∈ [0, 1] such that g(β0) = 0.
2. h(β) is strictly increasing in β ∈ [0, 1]; g(β) is strictly decreasing in β ∈ [0, 1].

Theorem 5 implies that, in a d_w-dimensional space, if we start from wO and move toward wGâ along a straight line, the overall loss increases and the disparity between the two groups decreases until we reach (1 − β0)wO + β0 wGâ, at which point 0-EL fairness is satisfied. Note that β0 is the unique root of g. Since g(β) is a strictly decreasing function, β0 can be found using binary search. For approximate γ-EL fairness, there are multiple values of β such that (1 − β)wO + β wGâ satisfies γ-EL.
Since h(β) is strictly increasing in β, among all β that satisfy γ-EL fairness, we choose the smallest one. The method for finding a sub-optimal solution to optimization (3) is described in Algorithm 3.

Algorithm 3: Sub-optimal solution to optimization problem (3)
1 Input: wGâ, wO, ε, γ
2 Initialization: gγ(β) = g(β) − γ, i = 0, β_start^(0) = 0, β_end^(0) = 1
3 if gγ(0) ≤ 0 then
4   w = wO, and go to line 16;
5 end
6 while β_end^(i) − β_start^(i) > ε do
7   β_mid^(i) = (β_start^(i) + β_end^(i))/2;
8   if gγ(β_mid^(i)) ≥ 0 then
9     β_start^(i+1) = β_mid^(i), β_end^(i+1) = β_end^(i);
10  end
11  else
12    β_start^(i+1) = β_start^(i), β_end^(i+1) = β_mid^(i);
13  end
14 end
15 w = (1 − β_mid^(i)) wO + β_mid^(i) wGâ;
16 Output: w

Note that the while loop in Algorithm 3 is repeated O(log(1/ε)) times. Since the time complexity of the operations in each loop is O(1), the total time complexity of Algorithm 3 is O(log(1/ε)). We can formally prove that the output returned by Algorithm 3 satisfies the γ-EL fairness constraint.

Theorem 6 Assume that Assumption 1 holds. If gγ(0) ≤ 0, then wO satisfies γ-EL fairness; if gγ(0) > 0, then lim_{i→∞} β_mid^(i) = β_mid^(∞) exists, and (1 − β_mid^(∞)) wO + β_mid^(∞) wGâ satisfies the γ-EL fairness constraint.

It is worth mentioning that, since h(β) is increasing, we are interested in finding the smallest possible β such that (1 − β)wO + β wGâ satisfies γ-EL. Here, β_mid^(∞) is the smallest possible β under which (1 − β)wO + β wGâ satisfies γ-EL.

5 GENERALIZATION PERFORMANCE

So far we have proposed algorithms for solving optimization (3). In practice, the joint probability distribution of (X, A, Y) is often unknown, and the expected loss needs to be estimated using the empirical loss. Specifically, given n samples (X_i, A_i, Y_i), i = 1, . . . , n and predictor fw, the empirical losses of the entire population and of each group are defined as follows:

L̂(w) = (1/n) Σ_{i=1}^{n} l(Y_i, fw(X_i));  L̂a(w) = (1/n_a) Σ_{i:A_i=a} l(Y_i, fw(X_i)), (8)

where n_a = |{i | A_i = a}|. Because the γ-EL fairness constraint is defined in terms of expected loss, the optimization problem for finding an optimal γ-EL fair predictor using empirical losses is as follows:

ŵ = argmin_w L̂(w) s.t. |L̂0(w) − L̂1(w)| ≤ γ̂. (9)

Note that γ̂ ≠ γ, and one goal in this section is to find the relation between γ̂ and γ. We aim to investigate how to determine γ̂ so that, with high probability, the predictor found by solving problem (9) satisfies γ-EL fairness, and meanwhile ŵ is a good estimate of w∗. To present our result, we make the following assumption.

Assumption 3 With probability 1 − δ, we have the following:

sup_{fw∈F} |L(w) − L̂(w)| ≤ B(δ, n, F),

where B(δ, n, F) is a bound that goes to zero as n goes to infinity.

Note that if the class F is learnable with respect to loss function l, then there exists such a bound B(δ, n, F) that goes to zero as n goes to infinity (Shalev-Shwartz & Ben-David, 2014).4

Theorem 7 Let F be a set of learnable functions, and let fŵ and fw∗ be the solutions to (9) and (3), respectively, with γ̂ = γ + Σ_{a∈{0,1}} B(δ, n_a, F). Then, with probability at least 1 − 6δ, the following hold:

L(ŵ) − L(w∗) ≤ 2B(δ, n, F) and |L0(ŵ) − L1(ŵ)| ≤ γ + 2B(δ, n0, F) + 2B(δ, n1, F).

Theorem 7 shows that as n0 and n1 go to infinity, γ̂ → γ, and both the empirical loss and the expected loss satisfy γ-EL. In addition, as n goes to infinity, the expected loss at ŵ goes to the minimum possible expected loss. Therefore, solving (9) using the empirical loss is equivalent to solving (3) if the number of data points from each group is sufficiently large.
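To make Algorithms 1 and 2 concrete, the following is a minimal, self-contained sketch of the ELminimizer bisection for a toy pair of strictly convex losses, using SciPy's SLSQP solver for the inner constrained convex problem (5). The quadratic losses and equal group weights are our own placeholders; the paper's experiments instead solve (5) with a penalty method (see Section 6).

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder strictly convex group losses (stand-ins for L0 and L1).
def L0(w): return (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2
def L1(w): return 2.0 * (w[0] + 3.0) ** 2 + (w[1] - 4.0) ** 2

def el_minimizer(L0, L1, eps=1e-4, gamma=0.0, dim=2):
    """Algorithm 1: bisection on the level lambda, solving the inner convex
    problem  min_w L1(w) + gamma  s.t.  L0(w) <= lambda_mid  at each step."""
    wG0 = minimize(L0, np.zeros(dim)).x        # minimizer of group-0 loss
    wG1 = minimize(L1, np.zeros(dim)).x        # minimizer of group-1 loss
    lam_start, lam_end = L0(wG0), L0(wG1)
    L1_tilde = lambda w: L1(w) + gamma
    w_star = wG0
    while lam_end - lam_start > eps:
        lam_mid = 0.5 * (lam_start + lam_end)
        cons = [{"type": "ineq", "fun": lambda w, lam=lam_mid: lam - L0(w)}]
        w_star = minimize(L1_tilde, w_star, method="SLSQP", constraints=cons).x
        if L1_tilde(w_star) >= lam_mid:        # group losses still cross above lam_mid
            lam_start = lam_mid
        else:
            lam_end = lam_mid
    return w_star

def algorithm2(L0, L1, eps=1e-4, gamma=0.1, p0=0.5):
    """Algorithm 2: run ELminimizer with +gamma and -gamma, keep the better one."""
    L = lambda w: p0 * L0(w) + (1.0 - p0) * L1(w)
    w_plus = el_minimizer(L0, L1, eps, gamma)
    w_minus = el_minimizer(L0, L1, eps, -gamma)
    return w_plus if L(w_plus) <= L(w_minus) else w_minus

w = algorithm2(L0, L1)
print(w, L0(w) - L1(w))   # the loss gap should be close to +/- gamma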
6 EXPERIMENTS

6.1 EXPERIMENT 1: QUADRATIC FUNCTIONS

First, we solve optimization problem (3) given the following quadratic functions:

L0(w) = (w1 + 5)^2 + (w2 + 2)^2 + (w3 + 1)^2 + 4 w1 w3,
L1(w) = (w1 − 9)^2 + (w2 − 9)^2 + (w3 − 9)^2 + w1 w2 + w2 w3 + w1 w3 + 1,
L(w) = L0(w) + L1(w).

By the first-order condition, we obtain wG0, wG1, and wO as follows:

wG0 = [1, −2, −3]^T, wG1 = [4.5, 4.5, 4.5]^T, wO = [24.53, 3.0, 26.53]^T.

We use Algorithm 1 to find the optimal solution to (3) and run Algorithm 3 to find a sub-optimal solution. In particular, we adopt the penalty method (Ben-Tal & Zibulevsky, 1997) to solve the constrained convex optimization (5), i.e., by solving the following unconstrained optimization:

min_w L1(w) + t · max{0, (L0(w) − λ_mid^(i))}^2, (10)

where t is the penalty parameter. We solve optimization problem (10) using gradient descent with learning rate 0.001 and 10000 iterations. We set the penalty parameter t = 0.5 and increase t by 0.1 after every 250 iterations. Note that optimization (5) is convex, and the penalty method for a constrained convex optimization converges to the optimal solution (Ben-Tal & Zibulevsky, 1997). We compare our algorithms with a baseline: the solution to optimization problem (3) found using the penalty method, i.e., by solving the following unconstrained optimization:

min_w L0(w) + L1(w) + t · [max{0, (L0(w) − L1(w) − γ)}^2 + max{0, (L1(w) − L0(w) − γ)}^2]. (11)

When solving optimization problem (11), we use learning rate 0.001. We set the penalty parameter t = 0.5 and increase it by 0.1 every 250 iterations.

Figure 1a illustrates the overall loss L(w) at the (sub-)optimal points obtained from Algorithms 2 and 3 and the baseline. The x-axis represents the fairness parameter γ. Since Algorithm 2 converges to the optimal solution, it achieves the smallest loss. Figure 1b illustrates the distance of the optimal point w∗ from the sub-optimal solutions obtained by Algorithm 3 and the baseline penalty method. It shows that when γ is sufficiently large (a less strict fairness constraint), the sub-optimal solution generated by Algorithm 3 is closer to the optimal solution than the solution found using the baseline method.

4 As an example, if F is a compact subset of linear predictors in a Reproducing Kernel Hilbert Space (RKHS) and loss l(y, f(x)) is Lipschitz in f(x) (the second argument), then Assumption 3 can be satisfied (Bartlett & Mendelson, 2002). The vast majority of linear predictors, such as support vector machines and logistic regression, can be defined in an RKHS.

6.2 EXPERIMENT 2: LOGISTIC REGRESSION AND THE ADULT INCOME DATASET

The adult income dataset is a public dataset containing the information of 48,842 individuals (Kohavi, 1996). Each data point includes 14 features, including age, education, race, etc. Considering race (White or Black) as the sensitive attribute, we denote the White demographic group by A = 0 and the Black group by A = 1. We first pre-process the dataset by removing the data points with a missing value or with a race other than Black or White, obtaining 41,961 data points. Among these data points, 4,585 belong to the Black demographic group. For each data point, we convert all the categorical features to one-hot vectors, resulting in dx = 110 dimensional features. We then normalize the feature vectors to have zero mean and unit variance. Our goal is to find a logistic regression model satisfying γ-EL to predict whether the income of an individual is above $50K or not.
We use Algorithm 2 and Algorithm 3 with ε = 0.01 to find the optimal logistic regression model under EL. We use the penalty method described in equation (11) as the baseline. Similar to Experiment 1, we set the learning rate to 0.001 for solving (10) and (11). Penalty parameter t is set to 0.5 and increases by 0.1 every 250 iterations. Figure 1c illustrates the loss of the logistic regression model trained by Algorithm 2, Algorithm 3, and the baseline. It shows that Algorithm 2 outperforms the baseline; this is because the baseline only finds a sub-optimal solution while Algorithm 2 finds the global optimal solution. As mentioned in Section 4, Algorithm 3 finds a sub-optimal solution that satisfies γ-EL, and its performance can vary from case to case. Even though Algorithm 3 has good performance in Experiment 1, it does not outperform the baseline in Experiment 2. Figure 1d illustrates the distances from the optimal point w∗ to the sub-optimal solutions obtained by Algorithm 3 and the baseline penalty method. It shows that the distance from w∗ to the solution obtained under Algorithm 3 is slightly larger than that from w∗ to the solution obtained under the baseline.

7 CONCLUSION

In this work, we studied the problem of fair supervised learning under the Equalized Loss (EL) fairness notion, which requires the prediction error/loss to be the same across different demographic groups. By imposing the EL constraint, the learning problem can be formulated as a non-convex optimization problem. We introduced a number of algorithms that find the global optimal solution to this non-convex optimization problem. In particular, we showed that the optimal solution to such a non-convex problem can be found by solving a sequence of convex constrained optimizations. We also introduced a simple algorithm for finding a sub-optimal solution to the non-convex problem without solving constrained convex optimization problems. In addition to the theoretical guarantees, we demonstrated the performance of the proposed algorithms through numerical experiments.

8 REPRODUCIBILITY STATEMENT

Regarding the theoretical results: this paper includes seven theorems. The proofs of Theorem 1 and Theorem 2 are provided in the main text. Due to the page limit, the proofs of the other theorems are provided in the appendix. Regarding the numerical examples: the first experiment does not use any dataset, and we study the performance of our proposed method on quadratic objective functions. The values of the hyperparameters (including the learning rate and penalty parameter) are explicitly mentioned in Section 6. In the second numerical example, we used the adult income dataset, which is a well-known public dataset in our community. We explained the data pre-processing procedure in Section 6.2 in detail.

9 ETHICS STATEMENT

In this work, we proposed algorithms to find fair predictors under the EL fairness notion. We want to emphasize that selecting the right fairness notion depends on the application, and the authors do not make any suggestions to policy/law makers about choosing or avoiding this fairness notion.

APPENDIX PROOFS

In order to prove Theorem 3, we first introduce two lemmas.

Lemma 1 Under Assumption 2, there exists w ∈ R^{d_w} such that L0(w) = L1(w) = L(w) and λ_start^(1) ≤ L(w) ≤ λ_end^(1).

Proof. Let h0(β) = L0((1 − β)wG0 + βwG1), h1(β) = L1((1 − β)wG0 + βwG1), and h(β) = h0(β) − h1(β), β ∈ [0, 1]. Note that ∇w La(wGa) = 0 because wGa is the minimizer of La(w). Moreover, ∇w^2 La(w) is positive semi-definite because La(.)
is a strictly convex function. First, we show that L0((1 − β)wG0 + βwG1) is an increasing function in β, and L1((1 − β)wG0 + βwG1) is a decreasing function in β. Note that h0′(0) = (wG1 − wG0)^T ∇w L0(wG0) = 0. Moreover, h0 is convex because L0 is convex: h0′′(β) = (wG1 − wG0)^T ∇w^2 L0((1 − β)wG0 + βwG1)(wG1 − wG0) ≥ 0 for all β. Since h0′ is non-decreasing and h0′(0) = 0, this implies that h0′(β) ≥ 0, ∀β ∈ [0, 1]. Similarly, we can show that h1′(β) ≤ 0, ∀β ∈ [0, 1]. Note that under Assumption 2, h(0) < 0 and h(1) > 0. Therefore, by the intermediate value theorem, there exists β ∈ (0, 1) such that h(β) = 0. Define w = (1 − β)wG0 + βwG1. We have:

h(β) = 0 =⇒ L0(w) = L1(w) = L(w) (12)
wG0 is the minimizer of L0 =⇒ L(w) = L0(w) ≥ λ_start^(1) (13)
h0′(β) ≥ 0, ∀β ∈ [0, 1] =⇒ h0(1) ≥ h0(β) =⇒ λ_end^(1) ≥ L0(w) = L(w) (14)

Lemma 2 L0(w_i^*) = λ_mid^(i), where w_i^* is the solution to (5).

Proof. We proceed by contradiction. Assume that L0(w_i^*) < λ_mid^(i). Since wG1 is not in the feasible set of (5), ∇w L1(w_i^*) ≠ 0. This is a contradiction because w_i^* is an interior point of the feasible set of a convex optimization and cannot be optimal if ∇w L1(w_i^*) is not equal to zero.

Proof [Theorem 3] Let I_i = [λ_start^(i), λ_end^(i)] be a sequence of intervals. It is easy to see that I_1 ⊇ I_2 ⊇ · · · and λ_end^(i) − λ_start^(i) → 0 as i → ∞. Therefore, by the Nested Interval Theorem, ∩_{i=1}^∞ I_i consists of exactly one real number λ∗, and both λ_start^(i) and λ_end^(i) converge to λ∗. Because λ_mid^(i) = (λ_start^(i) + λ_end^(i))/2, λ_mid^(i) also converges to λ∗.

Now, we show that L(w∗) ∈ I_i for all i. Note that L(w∗) = L0(w∗) ≥ λ_start^(1) because wG0 is the minimizer of L0. Moreover, λ_end^(1) ≥ L(w∗); otherwise L(w) < L(w∗) (w is defined in Lemma 1) and w∗ is not the optimal solution under 0-EL. Therefore, L(w∗) ∈ I_1. Now we proceed by induction. Suppose L(w∗) ∈ I_i. We show that L(w∗) ∈ I_{i+1} as well. We consider two cases.

• L(w∗) ≤ λ_mid^(i). In this case, w∗ is a feasible point for (5), and λ^(i) ≤ L(w∗) ≤ λ_mid^(i). Therefore, L(w∗) ∈ I_{i+1}.

• L(w∗) > λ_mid^(i). In this case, we proceed by contradiction to show that λ^(i) ≥ λ_mid^(i). Assume that λ^(i) < λ_mid^(i). Define g(β) = g0(β) − g1(β), where gi(β) = Li((1 − β)wG0 + βw_i^*). Note that λ^(i) = g1(1). By Lemma 2, g0(1) = λ_mid^(i). Therefore, g(1) = λ_mid^(i) − λ^(i) > 0. Moreover, under Assumption 2, g(0) < 0. Therefore, by the intermediate value theorem, there exists β ∈ (0, 1) such that g(β) = 0. Similar to the proof of Lemma 1, we can show that g0(β) is an increasing function for all β ∈ [0, 1]. As a result, g0(β) < g0(1) = λ_mid^(i). Define w = (1 − β)wG0 + βw_i^*. We have:

g0(β) = L0(w) = L1(w) = L(w) < λ_mid^(i) (15)
L(w∗) > λ_mid^(i) (16)

The last two equations imply that w∗ is not an optimal fair solution under the 0-EL fairness constraint. This is a contradiction. Therefore, if L(w∗) > λ_mid^(i), then λ^(i) ≥ λ_mid^(i). As a result, L(w∗) ∈ I_{i+1}.

By the two above cases and the nested interval theorem, we conclude that

L(w∗) ∈ ∩_{i=1}^∞ I_i,  lim_{i→∞} λ_mid^(i) = L(w∗).

For the second part of the theorem, consider the following:

w_∞^* = argmin_w L1(w) s.t. L0(w) ≤ λ_mid^(∞) = L(w∗),  lim_{i→∞} w_i^* = w_∞^*.

In order to show that w_∞^* is equal to w∗, we proceed by contradiction. Suppose w_∞^* ≠ w∗. As a result, L1(w_∞^*) < L(w∗). Define η(β) = η0(β) − η1(β), where ηi(β) = Li((1 − β)wG0 + βw_∞^*). Note that L1(w_∞^*) = η1(1). By Lemma 2, the constraint in (5) is binding and η0(1) = L(w∗). Therefore, η(1) = L(w∗) − L1(w_∞^*) > 0. Moreover, under Assumption 2, η(0) < 0. Therefore, by the intermediate value theorem, there exists β ∈ (0, 1) such that η(β) = 0.
Similar to the proof of Lemma 1, we can show that η0(β) is an increasing function for all β ∈ [0, 1]. As a result, η0(β) < η0(1) = L(w∗). Define w = (1 − β)wG0 + βw_∞^*. We have:

η0(β) = L0(w) = L1(w) = L(w) < L(w∗) (17)

The last equation implies that w∗ is not an optimal fair solution under the 0-EL fairness constraint. This is a contradiction. As a result, w_∞^* = w∗.

Proof [Theorem 4] Let w∗ be the optimal weight vector under γ-EL.

Step 1. We show that one of the following holds:

L0(w∗) − L1(w∗) = γ (18)
L0(w∗) − L1(w∗) = −γ (19)

Proof by contradiction. Assume −γ < L0(w∗) − L1(w∗) < γ. This implies that w∗ is an interior point of the feasible set of optimization problem (3). Since w∗ ≠ wO, we have ∇L(w∗) ≠ 0. As a result, the objective function of (3) can be improved at w∗ by moving toward −∇L(w∗). This is a contradiction. Therefore, |L0(w∗) − L1(w∗)| = γ.

Step 2. Function wγ = ELminimizer(wG0, wG1, ε, γ) is the solution to the following optimization problem:

min_w Pr{A = 0} L0(w) + Pr{A = 1} L1(w), s.t. L0(w) − L1(w) = γ (20)

To show the above claim, notice that the solution to optimization problem (20) is the same as that of the following:

min_w Pr{A = 0} L0(w) + Pr{A = 1} L̃1(w), s.t. L0(w) − L̃1(w) = 0, (21)

where L̃1(w) = L1(w) + γ. Since L0(wG0) − L̃1(wG0) < 0 and L0(wG1) − L̃1(wG1) > 0, by Theorem 3, we know that wγ = ELminimizer(wG0, wG1, ε, γ) finds the solution to (21). Lastly, because |L0(w∗) − L1(w∗)| = γ, we have:

w∗ = { wγ if L(wγ) ≤ L(w−γ); w−γ otherwise. (22)

Thus, Algorithm 2 finds the solution to (3).

Proof [Theorem 5]
1. Under Assumption 2, g(1) < 0. Moreover, g(0) ≥ 0. Therefore, by the intermediate value theorem, there exists β0 ∈ [0, 1] such that g(β0) = 0.
2. Since wO is the minimizer of L(w), h′(0) = 0. Moreover, since L(w) is strictly convex, h is strictly convex and h′′(0) > 0. As a result, h′(β) > 0 for β > 0.
3. Since wGâ is the minimizer of Lâ(w) and Lâ(w) is strictly convex, Lâ((1 − β)wO + βwGâ) is a strictly decreasing function. Note that since h(β) = Pr{A = â} Lâ((1 − β)wO + βwGâ) + Pr{A = 1 − â} L1−â((1 − β)wO + βwGâ) is strictly increasing and Lâ((1 − β)wO + βwGâ) is strictly decreasing, we conclude that L1−â((1 − β)wO + βwGâ) is strictly increasing. As a result, g should be strictly decreasing.

Proof [Theorem 6] First, we show that if gγ(0) ≤ 0, then wO satisfies γ-EL:

gγ(0) ≤ 0 =⇒ g(0) − γ ≤ 0 =⇒ Lâ(wO) − L1−â(wO) ≤ γ.

Moreover, Lâ(wO) − L1−â(wO) ≥ 0 because â = argmax_a La(wO). Therefore, γ-EL is satisfied. Secondly, assume that gγ(0) > 0. Under Assumption 2, gγ(1) = Lâ(wGâ) − L1−â(wGâ) − γ < 0. Therefore, by the intermediate value theorem, there exists β0 such that gγ(β0) = 0. Moreover, gγ is a strictly decreasing function. Therefore, the binary search proposed in Algorithm 3 converges to the root of gγ(β). As a result, (1 − β_mid^(∞)) wO + β_mid^(∞) wGâ satisfies γ-EL. Note that since g(β) is decreasing, β_mid^(∞) is the smallest possible β under which (1 − β)wO + βwGâ satisfies γ-EL. Since h is increasing, the smallest possible β gives better accuracy.

Proof [Theorem 7] By the triangle inequality, the following holds:

sup_{fw∈F} ||L0(w) − L1(w)| − |L̂0(w) − L̂1(w)|| ≤ sup_{fw∈F} |L0(w) − L̂0(w)| + sup_{fw∈F} |L1(w) − L̂1(w)|.
(23)

Therefore, with probability at least 1 − 2δ, we have:

sup_{fw∈F} ||L0(w) − L1(w)| − |L̂0(w) − L̂1(w)|| ≤ B(δ, n0, F) + B(δ, n1, F). (24)

As a result, with probability 1 − 2δ, the following holds:

{w | fw ∈ F, |L0(w) − L1(w)| ≤ γ} ⊆ {w | fw ∈ F, |L̂0(w) − L̂1(w)| ≤ γ̂}. (25)

Now consider the following:

L(ŵ) − L(w∗) = L(ŵ) − L̂(ŵ) + L̂(ŵ) − L̂(w∗) + L̂(w∗) − L(w∗). (26)

By (25), L̂(ŵ) − L̂(w∗) ≤ 0 with probability 1 − 2δ. Thus, with probability at least 1 − 2δ, we have:

L(ŵ) − L(w∗) ≤ L(ŵ) − L̂(ŵ) + L̂(w∗) − L(w∗). (27)

Therefore, under Assumption 3, we can conclude that, with probability at least 1 − 6δ,

L(ŵ) − L(w∗) ≤ 2B(δ, n, F).

In addition, by (24), with probability at least 1 − 2δ, we have:

|L0(ŵ) − L1(ŵ)| ≤ B(δ, n0, F) + B(δ, n1, F) + |L̂0(ŵ) − L̂1(ŵ)| ≤ γ̂ + B(δ, n0, F) + B(δ, n1, F) = γ + 2B(δ, n0, F) + 2B(δ, n1, F).
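Algorithm 3 is simple enough to sketch end-to-end: two unconstrained solves followed by a binary search over the mixing weight β. The toy convex losses and equal group weights below are our own assumptions, reused from the sketch above; this illustrates the algorithm, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

# Reuses the toy losses L0, L1 defined in the earlier sketch.
def algorithm3(L0, L1, p0=0.5, gamma=0.1, eps=1e-6, dim=2):
    """Algorithm 3: sub-optimal gamma-EL predictor as the convex combination
    (1 - beta) * wO + beta * wG_ahat, with beta found by binary search."""
    L = lambda w: p0 * L0(w) + (1.0 - p0) * L1(w)
    wO = minimize(L, np.zeros(dim)).x             # unconstrained optimum (7)
    a_hat = int(np.argmax([L0(wO), L1(wO)]))      # disadvantaged group
    La, Lb = (L1, L0) if a_hat == 1 else (L0, L1)
    wGa = minimize(La, np.zeros(dim)).x           # optimum for group a_hat (6)
    mix = lambda b: (1.0 - b) * wO + b * wGa
    g_gamma = lambda b: La(mix(b)) - Lb(mix(b)) - gamma
    if g_gamma(0.0) <= 0:
        return wO                                 # wO is already gamma-EL fair
    lo, hi = 0.0, 1.0
    while hi - lo > eps:                          # g_gamma is strictly decreasing
        mid = 0.5 * (lo + hi)
        if g_gamma(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return mix(hi)

w = algorithm3(L0, L1)
print(w)
```

Because g_gamma is strictly decreasing under the paper's assumptions, the bisection converges to the smallest β satisfying γ-EL, which by Theorem 5 also gives the smallest overall loss among the fair interpolants.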
1. What is the focus of the paper regarding minimizing convex losses constrained by bounded loss on each group or bounded difference of losses over two groups? 2. What are the strengths of the proposed approach, particularly in its theoretical analysis, such as Theorems 1 and 2, and the binary search-based thresholding method? 3. What are the weaknesses of the paper, especially regarding its assumptions, such as the feasibility condition in Theorem 2 and Assumption 2, which may not always hold in practice? 4. How does the paper's approach compare to recent works that solve non-convex constrained formulations resulting from fair objectives, such as implicit rate-constrained optimization, decomposition approaches using monotonicity of parameters, and separable convex optimization with nested lower and upper constraints? 5. Are there any concerns or questions about the paper's relevance to policy, considering the cases where the losses could be unbounded or where demographic groups are at a disadvantage?
Summary Of The Paper Review
Summary Of The Paper
The authors consider minimization of convex losses constrained by either a bounded loss on each group, or a bounded difference of losses over two groups. The second formulation is non-convex, whereas the first formulation is convex. When the losses are strictly convex on both demographic groups, so that their optima are distinct (I think this is the condition they need, but they use a more restrictive condition in the paper), they can find the "EL" fair predictor by solving a sequence of convex constrained optimizations, by exploiting a monotonicity property. They next give a more computationally efficient approximate algorithm for finding the EL fair predictor.

Review
Strengths: Theorems 1 and 2 are interesting. Bounded-loss constrained minimization does not increase the maximum loss per group "too much" for classification problems. This can be a nice point to make to policy makers. The authors give a binary-search-based thresholding method for optimizing bounded group loss constrained problems, which is also interesting, and might potentially have connections to existing optimization algorithms that use monotonicity of bounds to solve iterative subproblems.

Weaknesses:
Assumptions: If the condition of feasibility of (4) in Theorem 2 is not true, then the bounded increase in loss may not hold any more. For example, it is not true for facility location between two groups, where one can ask to minimize the distance of a facility to each of two groups (e.g., see "Too Many Fairness Metrics: Is There a Solution?" by Gupta et al.). Placing a facility at infinity can make the "absolute diff of group distances" zero, while there may be no facility that minimizes the total distances to both groups of populations. What happens in the case of classification losses? I suspect that the losses could then be unbounded. Which case might arise more frequently in practice? (if we want to think about relevance to policy).

Assumption 2 implies that when a group experiences its lowest possible loss, it should not be the disadvantaged group. -- What is the rationale for this assumption? This might be a bit misleading -- as by disadvantaged the authors simply mean the non-optimized group, but since this is a "fairness" paper, it can also mean demographic groups which are at a disadvantage (minorities). Is Assumption 2 even needed, since the convex functions are assumed "strictly" convex?

Related work - there is a lot of recent work on solving non-convex constrained formulations that result from "fair" objectives. To give a few examples:

Nonconvex optimization for regression with fairness constraints. http://proceedings.mlr.press/v80/komiyama18a/komiyama18a.pdf

Implicit Rate-Constrained Optimization of Non-decomposable Objectives. http://proceedings.mlr.press/v139/kumar21b/kumar21b.pdf

Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals. https://jmlr.csail.mit.edu/papers/volume20/18-616/18-616.pdf

Decomposition approach using monotonicity of parameters: e.g., Separable Convex Optimization with Nested Lower and Upper Constraints. https://pubsonline.informs.org/doi/10.1287/ijoo.2018.0004

How does the current work compare to these - in theory and experiments?
ICLR
Title
Non-convex Optimization for Learning a Fair Predictor under Equalized Loss Fairness Constraint

Abstract
Supervised learning models have been increasingly used in various domains such as lending, college admission, natural language processing, face recognition, etc. These models may inherit pre-existing biases from training datasets and exhibit discrimination against protected social groups. Various fairness notions have been introduced to address fairness issues. In general, finding a fair predictor leads to a constrained optimization problem, and depending on the fairness notion, it may be non-convex. In this work, we focus on Equalized Loss (EL), a fairness notion that requires the prediction error/loss to be equalized across different demographic groups. Imposing this constraint on the learning process leads to a non-convex optimization problem even if the loss function is convex. We introduce algorithms that can leverage off-the-shelf convex programming tools and efficiently find the global optimum of this non-convex problem. In particular, we first propose the ELminimizer algorithm, which finds the optimal EL fair predictor by reducing the non-convex optimization problem to a sequence of convex constrained optimizations. We then propose a simple algorithm that is computationally more efficient compared to ELminimizer and finds a sub-optimal EL fair predictor using unconstrained convex programming tools. Experiments on real-world data show the effectiveness of our algorithms.

1 INTRODUCTION

As machine learning (ML) algorithms are increasingly being used in applications such as education, lending, recruitment, healthcare, criminal justice, etc., there is a growing concern that the algorithms may exhibit discrimination against protected population groups. For example, speech recognition products such as Google Home and Amazon Alexa were shown to have accent bias (Harwell, 2018).
The COMPAS recidivism prediction tool, used by courts in the US in parole decisions, has been shown to have a substantially higher false positive rate for African Americans compared to the general population (Dressel & Farid, 2018). Amazon had been using automated software since 2014 to assess applicants' resumes, which was found to be biased against women (Dastin, 2018).

Various fairness notions have been proposed in the literature to measure and remedy the biases in ML systems; they can be roughly classified into two classes: 1) individual fairness focuses on equity at the individual level and requires similar individuals to be treated similarly (Dwork et al., 2012; Biega et al., 2018; Jung et al., 2019; Gupta & Kamble, 2019); 2) group fairness requires certain statistical measures to be (approximately) equalized across different groups distinguished by some sensitive attributes. Their suitability for use is often application dependent, and many of them are incompatible with each other (Zhang et al., 2019; Hardt et al., 2016; Conitzer et al., 2019; Zhang et al., 2020; Khalili et al., 2020). Numerous approaches have been developed to satisfy a given definition of fairness, and they generally fall into three categories: pre-processing, by modifying the original dataset, such as removing certain features and reweighing, e.g., (Kamiran & Calders, 2012; Celis et al., 2020); in-processing, by modifying the algorithms, such as imposing fairness constraints or changing objective functions, e.g., (Zhang et al., 2018; Agarwal et al., 2018; 2019; Reimers et al., 2021; Calmon et al., 2017); post-processing, by adjusting the output of the algorithms based on sensitive attributes, e.g., (Hardt et al., 2016).

In this paper, we focus on group fairness and aim to mitigate unfairness issues in supervised learning using in-processing approaches. The problem can be cast as a constrained optimization problem where a fair predictor is found by minimizing the prediction error (i.e., loss) subject to a certain group fairness constraint. In Section 2.1, we present the definitions of a number of commonly used group fairness notions, namely statistical parity (Dwork et al., 2012), equal opportunity (Hardt et al., 2016), equalized loss (Zhang et al., 2019), and bounded group loss (Agarwal et al., 2019). Here we are particularly interested in equalized loss, which requires the expected loss to be equalized across different groups.

Constrained optimization problems for finding a fair predictor have been studied in the literature. In general, imposing a fairness criterion on the optimization problem may lead to a non-convex optimization problem. Existing works have proposed various approaches to solving such non-convex optimizations in different settings. For example, Komiyama et al. (2018) studied non-convex optimization for regression problems under the coefficient-of-determination constraint. Agarwal et al. (2019) proposed an approach to finding a fair regression model under bounded group loss and statistical parity fairness constraints. Agarwal et al. (2018) studied classification problems and aimed at finding fair classifiers under various fairness notions, including statistical parity and equal opportunity. In particular, they considered the zero-one loss as the objective function and trained a randomized fair classifier over a finite hypothesis space; this problem was reduced to finding the saddle point of a linear Lagrangian function in (Agarwal et al., 2018). Zhang et al.
(2018) proposed an adversarial debiasing technique to find a fair classifier under equalized odds, equal opportunity, and statistical parity. However, there is no guarantee that this technique finds the globally optimal solution.

The main differences between the present work and existing in-processing approaches are as follows: 1) we consider a non-convex problem for finding a fair predictor satisfying the Equalized Loss fairness notion, which, to the best of our knowledge, has not been studied in the literature; 2) we propose algorithms for efficiently finding the global optimal solution to this non-convex problem; 3) our algorithms are easy to implement and are applicable to both regression and classification problems; 4) unlike (Agarwal et al., 2018), our algorithms are not limited to a finite hypothesis space.

Non-convex optimization problems have also been studied in other contexts, such as learning overparametrized models. For example, deep neural networks are typically trained by solving unconstrained, non-convex problems, and methods such as gradient descent may not be suitable as they are likely to find saddle points rather than optima. To address this issue, recent works have proposed approaches that incorporate higher-order derivatives (Celis et al., 2020; Anandkumar & Ge, 2016) or noisy gradients (Ge et al., 2015). However, these methods only find a local minimum (not a global minimum) and are not applicable to our problem with a non-convex constraint. In this work, we develop novel algorithms that efficiently find the fair (sub-)optimal solutions under the Equalized Loss fairness constraint. Note that while our approach and algorithms are presented in the context of fair machine learning, they are applicable to any problem that can be formulated as a constrained optimization problem of the form min_w L0(w) + αL1(w) s.t. |L0(w) − L1(w)| < γ, where α is a constant.

Our main contributions and findings are as follows.

1. We study the relationship between the Equalized Loss (EL) and Bounded Group Loss (BGL) fairness notions. We show that given the existence of feasible solutions satisfying (approximate) BGL fairness, imposing an (approximate) EL fairness constraint never increases the losses of both groups simultaneously (Theorems 1 and 2 in Section 2.1). These results help policy makers better understand these two fairness notions.

2. We develop an algorithm (ELminimizer) to solve a non-convex constrained optimization problem that finds the optimal (approximate) EL fair solution. We show that such a non-convex optimization can be reduced to a sequence of convex constrained optimizations, and we analyze the convergence property of the algorithm (Theorems 3 and 4, Section 3).

3. We develop a simple algorithm for finding a sub-optimal (approximate) EL fair solution. We show that a sub-optimal solution is a linear combination of the optimal solutions to two unconstrained optimizations, and it can be found efficiently without solving constrained optimizations (Theorem 5, Section 4).

4. We conduct a sample complexity analysis and provide a guarantee on generalization performance (Theorem 7, Section 5).

5. We validate the theoretical results by conducting experiments on real-world data (Section 6).

2 PROBLEM FORMULATION

Consider a supervised learning problem where the training dataset consists of triples (X, A, Y) from two social groups.
Random variable X ∈ X ⊂ R^{d_x} is the feature vector (in the form of a column vector), A ∈ {0, 1} is the sensitive attribute (e.g., race, gender) indicating group membership, and Y ∈ Y ⊂ R is the label. The feature vector X may or may not include the sensitive attribute A. Label Y can be either discrete or continuous depending on the given problem: if Y is discrete (resp. continuous), then the problem is a classification (resp. regression) problem. Let F be a set of predictors fw : X → R parameterized by a weight vector w ∈ R^{d_w}.1 Consider a loss function l : Y × X → R, where l(Y, fw(X)) measures the error of fw in predicting label Y. Denote the expected loss with respect to the joint probability distribution of (X, Y) by L(w) := E{l(Y, fw(X))}. Then, La(w) := E{l(Y, fw(X)) | A = a} denotes the expected loss of the group with attribute A = a.

A predictor that minimizes the total expected loss, i.e., argmin_w L(w), can be biased against certain groups. To mitigate the risk of unfairness, various fairness notions have been proposed in the literature. Some of the most commonly used notions of group fairness are as follows: 1) Statistical Parity (SP) (Dwork et al., 2012) requires that the predictor and the sensitive attribute be independent, i.e., fw(X) ⊥ A; 2) Equal Opportunity (EqOpt) (Hardt et al., 2016) requires that, conditional on Y = 1, the prediction and the sensitive attribute are independent, i.e., fw(X) ⊥ A | Y = 1; 3) Equalized Odds (EO) (Hardt et al., 2016) requires conditional independence between the prediction and the sensitive attribute given Y, i.e., fw(X) ⊥ A | Y; 4) Equalized Loss (EL) (Zhang et al., 2019; Berk et al., 2021) requires that the losses experienced by different groups be equalized, i.e., L0(w) = L1(w); 5) Bounded Group Loss (BGL) (Agarwal et al., 2019) requires that the loss experienced by each group be bounded.

With fairness considerations, the goal is to find a weight vector w that minimizes the total expected loss in predicting Y given X, subject to a certain fairness condition, i.e., min_w L(w) s.t. fairness constraint. This is a typical formulation in the fair machine learning literature, and the above method of finding a fair predictor belongs to the in-processing approaches. Because such a constrained optimization can be non-convex, finding the optimal solution efficiently can be challenging. In this work, we develop novel algorithms that solve such an optimization problem under the EL fairness constraint.

2.1 EQUALIZED LOSS (EL) AND BOUNDED GROUP LOSS (BGL)

As mentioned in Section 2, various fairness notions have been introduced in the literature. Among them, Statistical Parity (SP), Equal Opportunity (EqOpt), Equalized Odds (EO), and Bounded Group Loss (BGL) have been studied extensively, and both in-processing and post-processing approaches have been developed to satisfy these constraints (Dwork et al., 2012; Agarwal et al., 2018; Hardt et al., 2016; Zafar et al., 2019; Fitzsimons et al., 2019). Note that different fairness notions may conflict with each other, and which one to adopt is application and context dependent. In this work, we are interested in the Equalized Loss (EL) fairness notion (Zhang et al., 2019; Berk et al., 2021), which requires the prediction error to be the same across different groups,2 and the Bounded Group Loss (BGL) fairness notion (Agarwal et al., 2019), which requires the prediction error of every group to be bounded. We consider a relaxed version of EL fairness defined as follows.
Definition 1 (γ-EL) A predictor fw satisfies γ-EL if the expected losses experienced by the different demographic groups satisfy
−γ ≤ L0(w) − L1(w) ≤ γ. (1)

The parameter γ controls the degree of fairness; a smaller γ implies stronger fairness. When γ = 0, exact EL fairness is attained. We say a group is disadvantaged if it experiences the larger loss. Similarly, the Bounded Group Loss (BGL) fairness notion is formally defined as follows.

Definition 2 (γ-BGL) A predictor fw satisfies γ-BGL if the expected loss of each demographic group is bounded by γ, i.e.,
La(w) ≤ γ, ∀a ∈ {0, 1}. (2)

1 Predictive models such as logistic regression, linear regression, deep learning models, etc., are parameterized by a weight vector.
2 EL has also been referred to as Overall Accuracy Equality in (Berk et al., 2021; Agarwal et al., 2019).

2.2 RELATIONS BETWEEN γ-EL AND γ-BGL
In this section, we formally study the relations between the γ-EL and γ-BGL fairness notions. Under the γ-EL fairness constraint, finding a fair predictor is equivalent to solving the following constrained optimization problem:
min_w L(w) s.t. |L0(w) − L1(w)| ≤ γ. (3)
Let w∗ denote the solution to (3), so that fw∗ is the optimal γ-EL fair predictor. Theorem 1 below shows that, given the existence of a feasible point satisfying γ-BGL fairness, it is impossible for both groups to experience a loss larger than γ under the optimal γ-EL fair predictor.

Theorem 1 Consider the following optimization for finding the optimal γ-BGL fair predictor,
min_w L(w) s.t. La(w) ≤ γ, ∀a ∈ {0, 1}. (4)
If L0(w∗) > γ and L1(w∗) > γ, then optimization problem (4) does not have a feasible point.

Proof 1 We prove by contradiction. Assume w̃ is a feasible point of optimization (4). Note that w̃ is a feasible point of optimization problem (3) as well. Since both L0(w∗) and L1(w∗) are larger than γ, we have
E{l(Y, fw∗)} = Pr{A = 0}L0(w∗) + Pr{A = 1}L1(w∗) > γ,
E{l(Y, fw̃)} = Pr{A = 0}L0(w̃) + Pr{A = 1}L1(w̃) ≤ γ.
Therefore, w∗ cannot be the solution to (3). This contradiction proves that optimization problem (4) cannot have a feasible point.

Theorem 1 implies that if the γ-EL notion leads to a loss larger than γ for every demographic group, then there is no feasible predictor under γ-BGL.3 The next theorem further shows that the optimal predictor under γ-EL must satisfy 2γ-BGL.

Theorem 2 Assume optimization problem (4) has at least one feasible point. Then, we have
min{L0(w∗), L1(w∗)} ≤ γ and max{L0(w∗), L1(w∗)} ≤ 2γ.

Proof 2 Let w̃ be a feasible point of optimization problem (4); then w̃ is also a feasible point of (3). If min{L0(w∗), L1(w∗)} > γ, then L(w∗) > γ ≥ L(w̃) must hold. This is a contradiction because it implies that w∗ is not an optimal solution to (3). Therefore, min{L0(w∗), L1(w∗)} ≤ γ. Similarly, we can prove max{L0(w∗), L1(w∗)} ≤ 2γ by contradiction. Assume max{L0(w∗), L1(w∗)} > 2γ. Then, max{L0(w∗), L1(w∗)} − min{L0(w∗), L1(w∗)} > γ, which shows that w∗ is not a feasible point of (3). This is a contradiction. Therefore, max{L0(w∗), L1(w∗)} ≤ 2γ.

Theorems 1 and 2 investigate the relations between the EL and BGL fairness notions. Since the optimal γ-EL solution satisfies 2γ-BGL, and γ-EL additionally requires approximate equality across the groups, we focus on the γ-EL fairness notion in the rest of the paper. Because optimization problem (3) is a non-convex optimization, finding the optimal γ-EL fair solution efficiently can be challenging.
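As a concrete illustration of the quantities in Definitions 1 and 2, the following minimal Python sketch estimates the group losses from samples and checks the γ-EL and γ-BGL conditions. It replaces the expectations with empirical averages; the predictor, loss, and synthetic data are illustrative placeholders, not the experimental setup of Section 6.

```python
import numpy as np

def group_losses(loss_fn, predict, X, A, Y):
    """Empirical estimates of L0(w) and L1(w) from samples (X, A, Y)."""
    per_sample = loss_fn(Y, predict(X))
    return per_sample[A == 0].mean(), per_sample[A == 1].mean()

def is_gamma_el(L0, L1, gamma):
    return abs(L0 - L1) <= gamma        # Definition 1 (gamma-EL)

def is_gamma_bgl(L0, L1, gamma):
    return max(L0, L1) <= gamma         # Definition 2 (gamma-BGL)

# Illustration with a linear predictor and squared loss on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
A = rng.integers(0, 2, size=1000)
Y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * A   # group-dependent labels
w = np.array([1.0, -2.0, 0.5])
L0_hat, L1_hat = group_losses(lambda y, p: (y - p) ** 2,
                              lambda X: X @ w, X, A, Y)
print(L0_hat, L1_hat, is_gamma_el(L0_hat, L1_hat, gamma=0.05))
```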
In the next sections, we propose a number of algorithms that are easy to implement and can solve optimization (3) efficiently.

3 OPTIMAL FAIR MODEL UNDER EL FAIRNESS
In this section, we consider optimization problem (3) under the EL fairness constraint. Note that this optimization problem is non-convex, and finding the global optimal solution is difficult. However, we propose an algorithm that finds the solution to the non-convex optimization (3) by solving a sequence of convex optimization problems. Before presenting the algorithm, we introduce two assumptions.

Assumption 1 L0(w), L1(w), and L(w) are strictly convex functions in w.

3 Theorem 1 is related to (Agarwal et al., 2019). In particular, they considered γ-BGL fairness and mentioned that the equalized loss fairness notion may increase the loss of both groups.

Algorithm 1: Function ELminimizer
1  ELminimizer(wG0, wG1, ε, γ):
2    λ^(0)_start = L0(wG0)
3    λ^(0)_end = L0(wG1)
4    Define L̃1(w) = L1(w) + γ
5    i = 0
6    while λ^(i)_end − λ^(i)_start > ε do
7      λ^(i)_mid = (λ^(i)_end + λ^(i)_start)/2;
8      Solve the following convex optimization problem,
         w∗_i = argmin_w L̃1(w) s.t. L0(w) ≤ λ^(i)_mid   (5)
9      λ^(i) = L̃1(w∗_i);
10     if λ^(i) ≥ λ^(i)_mid then
11       λ^(i+1)_start = λ^(i)_mid; λ^(i+1)_end = λ^(i)_end;
12     else
13       λ^(i+1)_end = λ^(i)_mid; λ^(i+1)_start = λ^(i)_start;
14     end
15     i = i + 1;
16   end
17   Return w∗_i

Example 1 Consider a linear classifier fw(X) = wᵀX with squared loss l(Y, fw(X)) = (wᵀX − Y)². In this example, E{l(Y, fw(X))} = wᵀE{XXᵀ}w − 2E{Y Xᵀ}w + E{Y²} is strictly convex in w if the covariance matrix E{XXᵀ} is positive definite. Similarly, La(w) is strictly convex if E{XXᵀ | A = a} is positive definite.

Let wGa be the weight vector minimizing the loss associated with group A = a. That is,
wGa = argmin_w La(w). (6)
Since optimization problem (6) is an unconstrained convex optimization problem, wGa can be found efficiently by the first-order condition or by gradient descent. We make the following assumption.

Assumption 2 We assume that the following holds,
L0(wG0) ≤ L1(wG0) and L1(wG1) ≤ L0(wG1).

Algorithm 2: Solving Optimization Problem (3)
Input: wG0, wG1, ε, γ
1  wγ = ELminimizer(wG0, wG1, ε, γ);
2  w−γ = ELminimizer(wG0, wG1, ε, −γ);
3  if L(wγ) ≤ L(w−γ) then
4    w∗ = wγ;
5  else
6    w∗ = w−γ;
7  end
Output: w∗

Assumption 2 implies that when a group experiences its lowest possible loss, it should not be the disadvantaged group. Under Assumption 2, given wG0 and wG1, Algorithm 1 with γ = 0 (i.e., function ELminimizer(wG0, wG1, ε, 0)) finds the optimal 0-EL fair solution, where the parameter ε > 0 specifies the stopping criterion; as ε → 0, the output approaches the optimal solution. Intuitively, Algorithm 1 solves the non-convex optimization (3) by solving a sequence of convex constrained optimization problems. If γ > 0, Algorithm 2 finds the optimal predictor under γ-EL using the function ELminimizer. The convergence of Algorithm 1 to the optimal 0-EL fair solution, and the convergence of Algorithm 2 to the optimal γ-EL fair solution, are established in the following theorems.

Theorem 3 Consider the sequences {λ^(i)_mid | i = 1, 2, . . .} and {w∗_i | i = 1, 2, . . .} generated by Algorithm 1 with γ = 0, i.e., ELminimizer(wG0, wG1, ε → 0, 0). Under Assumptions 1 and 2, we have
lim_{i→∞} w∗_i = w∗ and lim_{i→∞} λ^(i)_mid = L(w∗) = E{l(Y, fw∗(X))},
where fw∗ is the optimal 0-EL fair predictor.

Similarly, we can prove convergence under approximate EL fairness when γ ≠ 0.

Theorem 4 Assume that L0(wG0) − L1(wG0) < −γ and L0(wG1) − L1(wG1) > γ. Then, as ε → 0, the output of Algorithm 2 converges to the optimal γ-EL fair solution w∗.
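To make the procedure concrete, here is a minimal Python sketch of ELminimizer, under the assumption that the group losses are available as differentiable Python callables; scipy's SLSQP solver stands in for the generic constrained convex solver used for subproblem (5). This is an illustrative sketch, not the reference implementation of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def el_minimizer(L0, L1, w_g0, w_g1, eps, gamma):
    """Sketch of Algorithm 1: bisection over the level of L0, solving the
    convex subproblem (5) at each midpoint with an off-the-shelf solver."""
    L1_tilde = lambda w: L1(w) + gamma
    lam_start, lam_end = L0(w_g0), L0(w_g1)
    w_star = np.asarray(w_g0, dtype=float)
    while lam_end - lam_start > eps:
        lam_mid = 0.5 * (lam_start + lam_end)
        res = minimize(L1_tilde, w_star, method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": lambda w: lam_mid - L0(w)}])
        w_star = res.x
        if L1_tilde(w_star) >= lam_mid:
            lam_start = lam_mid     # optimum of (5) lies above the level
        else:
            lam_end = lam_mid
    return w_star
```

Algorithm 2 then amounts to two calls, el_minimizer(..., gamma) and el_minimizer(..., -gamma), keeping whichever output has the smaller overall loss L.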
Complexity Analysis: The while loop in Algorithm 1 is executed O(log(1/ε)) times. Therefore, Algorithm 1 needs to solve a constrained convex optimization problem O(log(1/ε)) times. Note that constrained convex optimization problems can be solved efficiently via sub-gradient methods (Nedić & Ozdaglar, 2009), barrier methods (Wright, 2001), stochastic gradient descent with one projection (Mahdavi et al., 2012), etc. For instance, Nedić & Ozdaglar (2009) introduce a sub-gradient method that finds the saddle point of the Lagrangian function corresponding to (5) and converges at the rate of O(1/k) (where k is the number of iterations). Therefore, if ε is the maximum error tolerance for (5), the total time complexity of Algorithm 2 is O((1/ε) log(1/ε)).

4 SUB-OPTIMAL FAIR MODEL UNDER γ-EL
In Section 3, we showed that the non-convex optimization problem (3) can be reduced to a sequence of convex constrained optimizations (5), and based on this we proposed an algorithm (Algorithm 2) that finds the optimal γ-EL fair predictor. However, the proposed algorithm still requires solving a convex constrained optimization in each iteration. In this section, we propose another algorithm which finds a sub-optimal solution to optimization (3) without solving a constrained optimization in each iteration. The algorithm consists of two phases in sequence: (1) finding two weight vectors by solving two unconstrained convex optimization problems; (2) generating a new weight vector satisfying γ-EL fairness from the two weight vectors found in the first phase. Because of convexity, the two unconstrained convex optimization problems in the first phase can be solved efficiently.

Phase 1: Unconstrained optimization. In this phase, we remove the EL fairness constraint and first solve the following unconstrained optimization problem,
wO = argmin_w L(w). (7)
Because L(w) is strictly convex in w, the above optimization problem can be solved efficiently using gradient descent. Predictor fwO is the optimal predictor without the fairness constraint, and L(wO) is the smallest overall expected loss that is attainable. Let â = argmax_{a∈{0,1}} La(wO), i.e., group â is the group that is disadvantaged under predictor fwO. Then, for the disadvantaged group â, we find wGâ by solving the unconstrained optimization problem (6).

Phase 2: Binary search to find the fair predictor. For β ∈ [0, 1], we define the following,
g(β) = Lâ((1 − β)wO + βwGâ) − L1−â((1 − β)wO + βwGâ);
h(β) = L((1 − β)wO + βwGâ),
where g(β) can be interpreted as the loss disparity between the two demographic groups under predictor f(1−β)wO+βwGâ, and h(β) is the corresponding overall expected loss. Some properties of g(·) and h(·) are summarized in the following theorem.

Theorem 5 Under Assumptions 1 and 2, the following hold:
1. There exists β0 ∈ [0, 1] such that g(β0) = 0.
2. h(β) is strictly increasing in β ∈ [0, 1]; g(β) is strictly decreasing in β ∈ [0, 1].

Theorem 5 implies that, in the dw-dimensional space, if we start from wO and move toward wGâ along a straight line, the overall loss increases and the disparity between the two groups decreases until we reach (1 − β0)wO + β0wGâ, at which point 0-EL fairness is satisfied. Note that β0 is the unique root of g. Since g(β) is a strictly decreasing function, β0 can be found using binary search. For approximate γ-EL fairness, there are multiple values of β such that (1 − β)wO + βwGâ satisfies γ-EL. Since h(β) is strictly increasing in β, among all β satisfying γ-EL fairness we choose the smallest one.
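Below is a minimal Python sketch of this two-phase procedure, under the assumption that L, L0, and L1 are available as Python callables; scipy's unconstrained minimizer stands in for the gradient descent of Phase 1, and Phase 2 is the binary search on β that Algorithm 3 below formalizes.

```python
import numpy as np
from scipy.optimize import minimize

def suboptimal_el(L, L0, L1, w_init, eps, gamma):
    # Phase 1: two unconstrained convex solves.
    w_o = minimize(L, w_init).x                    # overall optimum, problem (7)
    a_hat = int(L1(w_o) > L0(w_o))                 # disadvantaged group under w_o
    L_hat, L_other = ((L0, L1) if a_hat == 0 else (L1, L0))
    w_ga = minimize(L_hat, w_init).x               # group optimum, problem (6)

    # Phase 2: binary search on beta; g_gamma is strictly decreasing (Theorem 5).
    def g_gamma(b):
        w = (1 - b) * w_o + b * w_ga
        return L_hat(w) - L_other(w) - gamma

    if g_gamma(0.0) <= 0:
        return w_o                                 # w_o already satisfies gamma-EL
    lo, hi = 0.0, 1.0
    mid = 0.5
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if g_gamma(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return (1 - mid) * w_o + mid * w_ga
```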
The method for finding a sub-optimal solution to optimization (3) is formalized in Algorithm 3.

Algorithm 3: Sub-optimal solution to optimization problem (3)
1  Input: wGâ, wO, ε, γ
2  Initialization: gγ(β) = g(β) − γ, i = 0, β^(0)_start = 0, β^(0)_end = 1
3  if gγ(0) ≤ 0 then
4    w = wO, and go to line 16;
5  end
6  while β^(i)_end − β^(i)_start > ε do
7    β^(i)_mid = (β^(i)_start + β^(i)_end)/2;
8    if gγ(β^(i)_mid) ≥ 0 then
9      β^(i+1)_start = β^(i)_mid, β^(i+1)_end = β^(i)_end;
10   end
11   else
12     β^(i+1)_start = β^(i)_start, β^(i+1)_end = β^(i)_mid;
13   end
14 end
15 w = (1 − β^(i)_mid)wO + β^(i)_mid wGâ;
16 Output: w

Note that the while loop in Algorithm 3 is repeated O(log(1/ε)) times. Since the time complexity of the operations in each loop is O(1), the total time complexity of Algorithm 3 is O(log(1/ε)). We can formally prove that the output returned by Algorithm 3 satisfies the γ-EL fairness constraint.

Theorem 6 Assume that Assumption 1 holds. If gγ(0) ≤ 0, then wO satisfies γ-EL fairness; if gγ(0) > 0, then lim_{i→∞} β^(i)_mid = β^(∞)_mid exists, and (1 − β^(∞)_mid)wO + β^(∞)_mid wGâ satisfies the γ-EL fairness constraint.

It is worth mentioning that, since h(β) is increasing, we are interested in finding the smallest possible β such that (1 − β)wO + βwGâ satisfies γ-EL. Here, β^(∞)_mid is the smallest such β.

5 GENERALIZATION PERFORMANCE
So far we have proposed algorithms for solving optimization (3). In practice, the joint probability distribution of (X, A, Y) is often unknown, and the expected loss needs to be estimated using the empirical loss. Specifically, given n samples (X_i, A_i, Y_i), i = 1, . . . , n, and a predictor fw, the empirical losses of the entire population and of each group are defined as follows,
L̂(w) = (1/n) Σ_{i=1}^{n} l(Y_i, fw(X_i)); L̂a(w) = (1/na) Σ_{i: A_i = a} l(Y_i, fw(X_i)), (8)
where na = |{i | A_i = a}|. Because the γ-EL fairness constraint is defined in terms of expected loss, the optimization problem for finding an optimal γ-EL fair predictor using empirical losses is as follows,
ŵ = argmin_w L̂(w) s.t. |L̂0(w) − L̂1(w)| ≤ γ̂. (9)
Note that γ̂ ≠ γ, and one goal in this section is to find the relation between γ̂ and γ. We aim to investigate how to determine γ̂ so that, with high probability, the predictor found by solving problem (9) satisfies γ-EL fairness, and meanwhile ŵ is a good estimate of w∗. To present our result, we make the following assumption.

Assumption 3 With probability 1 − δ, we have
sup_{fw∈F} |L(w) − L̂(w)| ≤ B(δ, n, F),
where B(δ, n, F) is a bound that goes to zero as n goes to infinity.

Note that if the class F is learnable with respect to loss function l, then there exists such a bound B(δ, n, F) that goes to zero as n goes to infinity (Shalev-Shwartz & Ben-David, 2014).4

4 As an example, if F is a compact subset of linear predictors in a Reproducing Kernel Hilbert Space (RKHS) and the loss l(y, f(x)) is Lipschitz in f(x) (its second argument), then Assumption 3 can be satisfied (Bartlett & Mendelson, 2002). The vast majority of linear predictors, such as support vector machines and logistic regression, can be defined in an RKHS.

Theorem 7 Let F be a set of learnable functions, and let fŵ and fw∗ be the solutions to (9) and (3), respectively, with γ̂ = γ + Σ_{a∈{0,1}} B(δ, na, F). Then, with probability at least 1 − 6δ, the following hold,
L(ŵ) − L(w∗) ≤ 2B(δ, n, F) and |L0(ŵ) − L1(ŵ)| ≤ γ + 2B(δ, n0, F) + 2B(δ, n1, F).

Theorem 7 shows that as n0 and n1 go to infinity, γ̂ → γ, and both the empirical loss and the expected loss satisfy γ-EL. In addition, as n goes to infinity, the expected loss at ŵ goes to the minimum possible expected loss. Therefore, solving (9) using the empirical loss is equivalent to solving (3) if the number of data points from each group is sufficiently large.
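As a small numeric illustration of Theorem 7's choice γ̂ = γ + Σa B(δ, na, F), the sketch below plugs in a Hoeffding-style O(√(log(1/δ)/n)) rate as a stand-in for B; this ignores the complexity of the hypothesis class F and is purely illustrative. The group sizes are taken from the Adult experiment in Section 6.2.

```python
import numpy as np

def bound(delta, n):
    # Illustrative stand-in for B(delta, n, F): a Hoeffding-style rate for
    # losses in [0, 1]. A genuine uniform bound must also depend on the
    # complexity of the hypothesis class F.
    return np.sqrt(np.log(2.0 / delta) / (2.0 * n))

def gamma_hat(gamma, delta, n0, n1):
    # gamma_hat = gamma + sum_a B(delta, n_a, F), as in Theorem 7.
    return gamma + bound(delta, n0) + bound(delta, n1)

# Group sizes from Section 6.2: 41,961 samples, 4,585 of which are Black.
print(gamma_hat(gamma=0.05, delta=0.05, n0=41961 - 4585, n1=4585))
```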
6 EXPERIMENTS
6.1 EXPERIMENT 1: QUADRATIC FUNCTIONS
First, we solve optimization problem (3) for the following quadratic functions,
L0(w) = (w1 + 5)^2 + (w2 + 2)^2 + (w3 + 1)^2 + 4 w1·w3,
L1(w) = (w1 − 9)^2 + (w2 − 9)^2 + (w3 − 9)^2 + w1·w2 + w2·w3 + w1·w3 + 1,
L(w) = L0(w) + L1(w).
By the first-order condition, we obtain wG0, wG1, and wO as follows,
wG0 = [1, −2, −3]^T, wG1 = [4.5, 4.5, 4.5]^T, wO = [24.53, 3.0, 26.53]^T.
We use Algorithm 1 to find the optimal solution to (3) and run Algorithm 3 to find a sub-optimal solution. In particular, we adopt the penalty method (Ben-Tal & Zibulevsky, 1997) to solve the constrained convex optimization (5), i.e., by solving the following unconstrained optimization,
min_w L1(w) + t · max{0, L0(w) − λ^(i)_mid}^2, (10)
where t is the penalty parameter. We solve optimization problem (10) using gradient descent with learning rate 0.001 and 10,000 iterations. We set the penalty parameter to t = 0.5 and increase t by 0.1 after every 250 iterations. Note that optimization (5) is convex, and the penalty method for a constrained convex optimization converges to the optimal solution (Ben-Tal & Zibulevsky, 1997). We compare our algorithms with a baseline: the solution to optimization problem (3) found using the penalty method, i.e., by solving the following unconstrained optimization,
min_w L0(w) + L1(w) + t · [max{0, L0(w) − L1(w) − γ}^2 + max{0, L1(w) − L0(w) − γ}^2]. (11)
When solving optimization problem (11), we use learning rate 0.001. We set the penalty parameter to t = 0.5 and increase it by 0.1 every 250 iterations.

Figure 1a illustrates the overall loss L(w) at the (sub-)optimal points obtained from Algorithms 2 and 3 and the baseline; the x-axis represents the fairness parameter γ. Since Algorithm 2 converges to the optimal solution, it achieves the smallest loss. Figure 1b illustrates the distance of the optimal point w∗ from the sub-optimal solutions obtained by Algorithm 3 and the baseline penalty method. It shows that when γ is sufficiently large (a less strict fairness constraint), the sub-optimal solution generated by Algorithm 3 is closer to the optimal solution than the solution found using the baseline method.
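The following is a minimal sketch of the penalty-method baseline (11), using the hyperparameters described above (learning rate 0.001, t starting at 0.5 and growing by 0.1 every 250 iterations); the central-difference gradient helper is an illustrative stand-in for whichever gradient computation one prefers.

```python
import numpy as np

def numerical_grad(f, w, h=1e-6):
    # Central-difference gradient; a stand-in for analytic gradients.
    g = np.zeros_like(w)
    for j in range(w.size):
        e = np.zeros_like(w)
        e[j] = h
        g[j] = (f(w + e) - f(w - e)) / (2 * h)
    return g

def penalty_baseline(L0, L1, w0, gamma, lr=1e-3, iters=10000):
    # Gradient descent on the unconstrained surrogate (11), with a
    # penalty parameter t that grows over time.
    w, t = np.asarray(w0, dtype=float), 0.5
    def objective(w, t):
        gap = L0(w) - L1(w)
        return (L0(w) + L1(w)
                + t * (max(0.0, gap - gamma) ** 2
                       + max(0.0, -gap - gamma) ** 2))
    for k in range(iters):
        if k > 0 and k % 250 == 0:
            t += 0.1
        w = w - lr * numerical_grad(lambda v: objective(v, t), w)
    return w
```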
6.2 EXPERIMENT 2: LOGISTIC REGRESSION AND THE ADULT INCOME DATASET
The Adult income dataset is a public dataset containing the information of 48,842 individuals (Kohavi, 1996). Each data point includes 14 features, including age, education, race, etc. Taking race (White or Black) as the sensitive attribute, we denote the White demographic group by A = 0 and the Black group by A = 1. We first pre-process the dataset by removing the data points with a missing value or with a race other than Black or White, obtaining 41,961 data points. Among these, 4,585 belong to the Black demographic group. For each data point, we convert all the categorical features to one-hot vectors, resulting in dx = 110 dimensional feature vectors. We then normalize the feature vectors so that they have zero mean and unit variance. Our goal is to find a logistic regression model satisfying γ-EL to predict whether the income of an individual is above $50K or not.

We use Algorithm 2 and Algorithm 3 with ε = 0.01 to find the optimal logistic regression model under EL. We use the penalty method described in equation (11) as the baseline. As in Experiment 1, we set the learning rate to 0.001 for solving (10) and (11). The penalty parameter t is set to 0.5 and increases by 0.1 every 250 iterations. Figure 1c illustrates the loss of the logistic regression models trained by Algorithm 2, Algorithm 3, and the baseline. It shows that Algorithm 2 outperforms the baseline; this is because the baseline only finds a sub-optimal solution while Algorithm 2 finds the global optimal solution. As mentioned in Section 4, Algorithm 3 finds a sub-optimal solution that satisfies γ-EL, and its performance can vary from case to case. Even though Algorithm 3 has good performance in Experiment 1, it does not outperform the baseline in Experiment 2. Figure 1d illustrates the distances from the optimal point w∗ to the sub-optimal solutions obtained by Algorithm 3 and the baseline penalty method. It shows that the distance from w∗ to the solution obtained under Algorithm 3 is slightly larger than that from w∗ to the solution obtained under the baseline.

7 CONCLUSION
In this work, we studied the problem of fair supervised learning under the Equalized Loss (EL) fairness notion, which requires the prediction error/loss to be the same across different demographic groups. With the EL constraint imposed, the learning problem can be formulated as a non-convex optimization problem. We introduced a number of algorithms that find the global optimal solution to this non-convex optimization problem. In particular, we showed that the optimal solution to such a non-convex problem can be found by solving a sequence of convex constrained optimizations. We also introduced a simple algorithm for finding a sub-optimal solution to the non-convex problem without solving constrained convex optimization problems. In addition to the theoretical guarantees, we demonstrated the performance of the proposed algorithms through numerical experiments.

8 REPRODUCIBILITY STATEMENT
Regarding the theoretical results: this paper includes seven theorems. The proofs of Theorems 1 and 2 are provided in the main text. Due to the page limit, the proofs of the other theorems are provided in the appendix. Regarding the numerical examples: the first experiment does not use any dataset; we study the performance of our proposed method on quadratic objective functions. The values of the hyperparameters (including the learning rate and the penalty parameter) are explicitly stated in Section 6. In the second numerical example, we used the Adult income dataset, a well-known public dataset in our community. We explained the data pre-processing procedure in Section 6.2 in detail.

9 ETHICS STATEMENT
In this work, we proposed algorithms to find fair predictors under the EL fairness notion. We want to emphasize that selecting the right fairness notion depends on the application, and the authors do not make any suggestions to policy/law makers about choosing or avoiding this fairness notion.

APPENDIX: PROOFS
In order to prove Theorem 3, we first introduce two lemmas.

Lemma 1 Under Assumption 2, there exists w̄ ∈ R^dw such that L0(w̄) = L1(w̄) = L(w̄) and λ^(1)_start ≤ L(w̄) ≤ λ^(1)_end.

Proof. Let h0(β) = L0((1 − β)wG0 + βwG1), h1(β) = L1((1 − β)wG0 + βwG1), and h(β) = h0(β) − h1(β) for β ∈ [0, 1]. Note that ∇w La(wGa) = 0 because wGa is the minimizer of La(w). Moreover, ∇²w La(w) is positive semi-definite because La(·)
is a strictly convex function. First, we show that L0((1 − β)wG0 + βwG1) is an increasing function of β, and L1((1 − β)wG0 + βwG1) is a decreasing function of β. Note that h′0(0) = (wG1 − wG0)ᵀ ∇w L0(wG0) = 0, and h″0(β) = (wG1 − wG0)ᵀ ∇²w L0((1 − β)wG0 + βwG1)(wG1 − wG0) ≥ 0 by convexity of L0. Since h′0(0) = 0 and h′0 is non-decreasing, this implies that h′0(β) ≥ 0 for all β ∈ [0, 1]. Similarly, we can show that h′1(β) ≤ 0 for all β ∈ [0, 1]. Note that under Assumption 2, h(0) < 0 and h(1) > 0. Therefore, by the intermediate value theorem, there exists β̄ ∈ (0, 1) such that h(β̄) = 0. Define w̄ = (1 − β̄)wG0 + β̄wG1. We have,
h(β̄) = 0 ⟹ L0(w̄) = L1(w̄) = L(w̄) (12)
wG0 is the minimizer of L0 ⟹ L(w̄) = L0(w̄) ≥ λ^(1)_start (13)
h′0(β) ≥ 0 ∀β ∈ [0, 1] ⟹ h0(1) ≥ h0(β̄) ⟹ λ^(1)_end ≥ L0(w̄) = L(w̄) (14)

Lemma 2 L0(w∗_i) = λ^(i)_mid, where w∗_i is the solution to (5).

Proof. We proceed by contradiction. Assume that L0(w∗_i) < λ^(i)_mid, so that w∗_i is an interior point of the feasible set of (5). Since (5) is a convex optimization problem, an interior point cannot be optimal unless ∇w L̃1(w∗_i) = 0. However, ∇w L̃1(w) = ∇w L1(w) vanishes only at wG1 (by strict convexity), and wG1 is not in the feasible set of (5), so ∇w L̃1(w∗_i) ≠ 0. This is a contradiction.

Proof [Theorem 3] Let I_i = [λ^(i)_start, λ^(i)_end] be the sequence of intervals generated by the algorithm. It is easy to see that I_1 ⊇ I_2 ⊇ · · · and λ^(i)_end − λ^(i)_start → 0 as i → ∞. Therefore, by the nested interval theorem, ∩_{i=1}^{∞} I_i consists of exactly one real number λ∗, and both λ^(i)_start and λ^(i)_end converge to λ∗. Because λ^(i)_mid = (λ^(i)_start + λ^(i)_end)/2, λ^(i)_mid also converges to λ∗.

Now, we show that L(w∗) ∈ I_i for all i. Note that L(w∗) = L0(w∗) ≥ λ^(1)_start because wG0 is the minimizer of L0. Moreover, λ^(1)_end ≥ L(w∗), for otherwise L(w̄) < L(w∗) (where w̄ is defined in Lemma 1) and w∗ would not be the optimal solution under 0-EL. Therefore, L(w∗) ∈ I_1. Now we proceed by induction. Suppose L(w∗) ∈ I_i; we show that L(w∗) ∈ I_{i+1} as well. We consider two cases.
• L(w∗) ≤ λ^(i)_mid. In this case, w∗ is a feasible point of (5), and λ^(i) ≤ L(w∗) ≤ λ^(i)_mid. Therefore, L(w∗) ∈ I_{i+1}.
• L(w∗) > λ^(i)_mid. In this case, we proceed by contradiction to show that λ^(i) ≥ λ^(i)_mid. Assume that λ^(i) < λ^(i)_mid. Define g(β) = g0(β) − g1(β), where g_a(β) = La((1 − β)wG0 + βw∗_i). Note that λ^(i) = g1(1). By Lemma 2, g0(1) = λ^(i)_mid. Therefore, g(1) = λ^(i)_mid − λ^(i) > 0. Moreover, under Assumption 2, g(0) < 0. Therefore, by the intermediate value theorem, there exists β̄ ∈ (0, 1) such that g(β̄) = 0. Similar to the proof of Lemma 1, we can show that g0(β) is an increasing function for all β ∈ [0, 1]. As a result, g0(β̄) < g0(1) = λ^(i)_mid. Define w̄ = (1 − β̄)wG0 + β̄w∗_i. We have,
g0(β̄) = L0(w̄) = L1(w̄) = L(w̄) < λ^(i)_mid (15)
L(w̄) < λ^(i)_mid < L(w∗) (16)
The last two equations imply that w∗ is not an optimal fair solution under the 0-EL fairness constraint. This is a contradiction. Therefore, if L(w∗) > λ^(i)_mid, then λ^(i) ≥ λ^(i)_mid, and as a result L(w∗) ∈ I_{i+1}.

By the two cases above and the nested interval theorem, we conclude that
L(w∗) ∈ ∩_{i=1}^{∞} I_i and lim_{i→∞} λ^(i)_mid = L(w∗).

For the second part of the theorem, consider the following,
w∗_∞ = argmin_w L1(w) s.t. L0(w) ≤ λ^(∞)_mid = L(w∗), with lim_{i→∞} w∗_i = w∗_∞.
To show that w∗_∞ is equal to w∗, we proceed by contradiction. Suppose w∗_∞ ≠ w∗. As a result, L1(w∗_∞) < L(w∗). Define η(β) = η0(β) − η1(β), where η_a(β) = La((1 − β)wG0 + βw∗_∞). Note that L1(w∗_∞) = η1(1). By Lemma 2, the constraint in (5) is binding and η0(1) = L(w∗). Therefore, η(1) = L(w∗) − L1(w∗_∞) > 0. Moreover, under Assumption 2, η(0) < 0. Therefore, by the intermediate value theorem, there exists β̄ ∈ (0, 1) such that η(β̄) = 0.
Similar to the proof of Lemma 1, we can show that η0(β) is an increasing function for all β ∈ [0, 1]. As a result, η0(β̄) < η0(1) = L(w∗). Define w̄ = (1 − β̄)wG0 + β̄w∗_∞. We have,
η0(β̄) = L0(w̄) = L1(w̄) = L(w̄) < L(w∗) (17)
The last equation implies that w∗ is not an optimal fair solution under the 0-EL fairness constraint. This is a contradiction. As a result, w∗_∞ = w∗.

Proof [Theorem 4] Let w∗ be the optimal weight vector under γ-EL.
Step 1. We show that one of the following holds,
L0(w∗) − L1(w∗) = γ (18)
L0(w∗) − L1(w∗) = −γ (19)
We proceed by contradiction. Assume −γ < L0(w∗) − L1(w∗) < γ. This implies that w∗ is an interior point of the feasible set of optimization problem (3). Since w∗ ≠ wO, we have ∇L(w∗) ≠ 0. As a result, the objective function of (3) can be improved at w∗ by moving in the direction of −∇L(w∗). This is a contradiction. Therefore, |L0(w∗) − L1(w∗)| = γ.

Step 2. The output wγ = ELminimizer(wG0, wG1, ε, γ) is the solution to the following optimization problem,
min_w Pr{A = 0}L0(w) + Pr{A = 1}L1(w) s.t. L0(w) − L1(w) = γ. (20)
To show this claim, notice that the solution to optimization problem (20) is the same as that of
min_w Pr{A = 0}L0(w) + Pr{A = 1}L̃1(w) s.t. L0(w) − L̃1(w) = 0, (21)
where L̃1(w) = L1(w) + γ. Since L0(wG0) − L̃1(wG0) < 0 and L0(wG1) − L̃1(wG1) > 0, by Theorem 3 we know that wγ = ELminimizer(wG0, wG1, ε, γ) finds the solution to (21). Lastly, because |L0(w∗) − L1(w∗)| = γ, we have,
w∗ = wγ if L(wγ) ≤ L(w−γ), and w∗ = w−γ otherwise. (22)
Thus, Algorithm 2 finds the solution to (3).

Proof [Theorem 5]
1. Under Assumption 2, g(1) < 0. Moreover, g(0) ≥ 0. Therefore, by the intermediate value theorem, there exists β0 ∈ [0, 1] such that g(β0) = 0.
2. Since wO is the minimizer of L(w), h′(0) = 0. Moreover, since L(w) is strictly convex, h″(β) > 0. As a result, h′(β) > 0 for β > 0, i.e., h is strictly increasing.
3. Since wGâ is the minimizer of Lâ(w) and Lâ(w) is strictly convex, Lâ((1 − β)wO + βwGâ) is a strictly decreasing function of β. Note that since
h(β) = Pr{A = â}Lâ((1 − β)wO + βwGâ) + Pr{A = 1 − â}L1−â((1 − β)wO + βwGâ)
is strictly increasing and Lâ((1 − β)wO + βwGâ) is strictly decreasing, we conclude that L1−â((1 − β)wO + βwGâ) is strictly increasing. As a result, g must be strictly decreasing.

Proof [Theorem 6] First, we show that if gγ(0) ≤ 0, then wO satisfies γ-EL.
gγ(0) ≤ 0 ⟹ g(0) ≤ γ ⟹ Lâ(wO) − L1−â(wO) ≤ γ
Moreover, Lâ(wO) − L1−â(wO) ≥ 0 because â = argmax_a La(wO). Therefore, γ-EL is satisfied.
Second, assume that gγ(0) > 0. Under Assumption 2, gγ(1) = Lâ(wGâ) − L1−â(wGâ) − γ < 0. Therefore, by the intermediate value theorem, there exists β0 such that gγ(β0) = 0. Moreover, gγ is a strictly decreasing function. Therefore, the binary search proposed in Algorithm 3 converges to the root of gγ(β). As a result, (1 − β^(∞)_mid)wO + β^(∞)_mid wGâ satisfies γ-EL. Note that since g(β) is decreasing, β^(∞)_mid is the smallest possible β under which (1 − β)wO + βwGâ satisfies γ-EL. Since h is increasing, the smallest possible β yields the smallest overall loss.

Proof [Theorem 7] By the triangle inequality, the following holds,
sup_{fw∈F} ||L0(w) − L1(w)| − |L̂0(w) − L̂1(w)|| ≤ sup_{fw∈F} |L0(w) − L̂0(w)| + sup_{fw∈F} |L1(w) − L̂1(w)|. (23)
Therefore, with probability at least 1 − 2δ, we have,
sup_{fw∈F} ||L0(w) − L1(w)| − |L̂0(w) − L̂1(w)|| ≤ B(δ, n0, F) + B(δ, n1, F). (24)
As a result, with probability at least 1 − 2δ, the following holds,
{w | fw ∈ F, |L0(w) − L1(w)| ≤ γ} ⊆ {w | fw ∈ F, |L̂0(w) − L̂1(w)| ≤ γ̂}. (25)
Now consider the following,
L(ŵ) − L(w∗) = L(ŵ) − L̂(ŵ) + L̂(ŵ) − L̂(w∗) + L̂(w∗) − L(w∗). (26)
By (25), L̂(ŵ) − L̂(w∗) ≤ 0 with probability 1 − 2δ. Thus, with probability at least 1 − 2δ, we have,
L(ŵ) − L(w∗) ≤ L(ŵ) − L̂(ŵ) + L̂(w∗) − L(w∗). (27)
Therefore, under Assumption 3, we conclude that with probability at least 1 − 6δ,
L(ŵ) − L(w∗) ≤ 2B(δ, n, F).
In addition, by (24), with probability at least 1 − 2δ, we have,
|L0(ŵ) − L1(ŵ)| ≤ B(δ, n0, F) + B(δ, n1, F) + |L̂0(ŵ) − L̂1(ŵ)| ≤ γ̂ + B(δ, n0, F) + B(δ, n1, F) = γ + 2B(δ, n0, F) + 2B(δ, n1, F),
which completes the proof.
1. What is the main contribution of the paper regarding fair prediction and Equalized Loss?
2. What are the strengths and weaknesses of the proposed approaches for solving the problem of finding the globally optimal predictor that satisfies EL?
3. Do you have any concerns about the assumptions made in the paper, particularly in Assumption 1?
4. How does the reviewer assess the performance of Algorithm 3 compared to Algorithm 2 and the baseline?
5. What are the writing quality issues mentioned by the reviewer, and how do they affect the readability and understanding of the paper?
6. What is the purpose and choice of the quadratic functions used in the first experiment, and how were they generated or chosen?
7. How realistic and practical is Assumption 2 in real-world scenarios, and what are the implications if it is not satisfied?
Summary Of The Paper
The authors study fair prediction subject to Equalized Loss (EL), and they introduce a variety of approaches for exactly and approximately solving the problem of finding the globally optimal predictor that satisfies EL. First, they show how to solve a sequence of convex constrained optimization problems in order to solve the larger non-convex problem. Next, they show how to approximately solve this problem more efficiently by using unconstrained convex optimization. Lastly, they evaluate both of their approaches on two datasets.

Review
I found this paper to be interesting and well-motivated (finding the best classifier subject to fairness constraints is a real problem), but the results were not as compelling as I hoped. For instance, most of Section 3 seems to stem from Assumption 1, wherein the loss functions L0, L1, and L are all assumed to be strictly convex. What happens if they are non-convex? I would have appreciated some analysis of Algorithm 3's performance: how much worse than Algorithm 2 can it be in theory? The results in the second experiment suggest that it can be quite a bit worse (even worse than the baseline). There were also issues with writing quality, which is under the bar for ICLR. There were many missing articles (a/an/the) throughout the paper, and some of the exposition was very difficult to intuitively understand. Also, just a minor note, but expectation and probability are generally encased in [] or (), not {}, which is a convention I haven't seen before. Perhaps add a bit more motivation for the quadratic functions used in the first experiment (Section 6.1). How were they generated / chosen? Also, just wondering: how likely is Assumption 2 to be satisfied in practice? Are there settings in which even the best loss for a certain group still means they're disadvantaged?
(2018) proposed an adversarial debasing technique to find a fair classifier under equalized odd, equal opportunity, and statistical parity. However, there is no guarantee that this technique finds the global optimal solution. The main difference between the present work and the existing in-processing approaches are as follows: 1) we consider a non-convex problem for finding a fair predictor satisfying Equalized Loss fairness notion, which has not been studied in the literature to the best of our knowledge. 2) We propose algorithms for finding the global optimal solution to this non-convex problem efficiently. 3) Our algorithms are easy to implement and are applicable to both regression and classification problems. 4) Unlike (Agarwal et al., 2018), our algorithms are not limited to finite hypothesis space. Non-convex optimization problems have also been studied in other contexts such as learning overparametrized models. For example, deep neural networks are typically trained by solving unconstrained, non-convex problems, and methods such as gradient descent may not be suitable as they are likely to find saddle points but not optimums. To address this issue, approaches have been proposed in recent works by incorporating the higher order derivatives (Celis et al., 2020; Anandkumar & Ge, 2016) or noisy gradients (Ge et al., 2015). However, these methods only find a local minimum (not a global minimum) and are not applicable to our problem with a non-convex constraint. In this work, we develop novel algorithms that find the fair (sub-)optimal solutions under Equalized Loss fairness constraint efficiently. Note that while our approach and algorithms are presented in the context of fair machine learning, they are applicable to any problem that can be formulated as a constrained optimization problem in the form of minw L0(w)+αL1(w) s.t. |L0(w)−L1(w)| < γ, where α is a constant Our main contributions and findings are as follows. 1. We study the relationship between Equalized Loss (EL) and Bounded Group Loss (BGL) fairness notions. We show that given the existence of feasible solutions satisfying (approximate) BGL fairness, imposing (approximate) EL fairness constraint never increase losses of both groups simultaneously (Theorems 1 and 2 in Section 2.1). These results help policy makers to have a better understanding of these two fairness notions. 2. We develop an algorithm (ELminimizer) to solve a non-convex constrained optimization problem that finds the optimal (approximate) EL fair solution. We show that such non-convex optimization can be reduced to a sequence of convex constrained optimizations and the convergence property of the algorithm is analyzed (Theorems 3 and 4, Section 3). 3. We develop a simple algorithm for finding a sub-optimal (approximate) EL fair solution. We show that a sub-optimal solution is a linear combination of optimal solutions to two unconstrained optimizations and it can be found efficiently without solving constrained optimizations (Theorem 5, Section 4). 4. We conduct sample complexity analysis and provide the guarantee on generalization performance (Theorem 7, Section 5). 5. We validate the theoretical results by conducting experiments on real-world data (Section 6). 2 PROBLEM FORMULATION Consider a supervised learning problem where the training dataset consists of triples (X,A, Y ) from two social groups. 
Random variableX ∈ X ⊂ Rdx is the feature vector (in form of a column vector), A ∈ {0, 1} is the sensitive attribute (e.g., race, gender) indicating the group membership, and Y ∈ Y ⊂ R is the label. The feature vector X may or may not include sensitive attribute A. Label Y can be either discrete or continuous depending on the given problem: if Y is discrete (resp. continuous), then the problem is a classification (resp. regression) problem. Let F be a set of predictors fw : X → R parameterized by weight vector w ∈ Rdw .1 Consider loss function l : Y×X → Rwhere l(Y, fw(X )) measures the error of fw in predicting label Y . Denote the expected loss with respect to the joint probability distribution of (X,Y ) by L(w) := E{l(Y, fw(X ))}. Then, La(w) := E{l(Y, fw(X ))|A = a} denotes the expected loss of the group with attribute A = a. A predictor that minimizes the total expected loss, i.e., argminw L(w), can be biased against certain groups. To mitigate the risk of unfairness, various fairness notions have been proposed in the literature. Some of the most commonly used notions of group fairness are as follows: 1) Statistical Parity (SP) (Dwork et al., 2012) implies that the predictor and the sensitive attribute should be independent, i.e., fw(X ) ⊥ A; 2) Equal Opportunity (EqOpt) (Hardt et al., 2016) requires that conditional on Y = 1, prediction and sensitive attribute are independent, i.e., fw(X ) ⊥ A|Y = 1; 3) Equalized Odds (EO) (Hardt et al., 2016) requires the conditional independence between prediction and sensitive attribute given Y , i.e., fw(X ) ⊥ A|Y ; 4) Equalized Loss (EL) (Zhang et al., 2019; Berk et al., 2021) requires that the losses experienced by different groups are equalized, i.e., L0(w) = L1(w); 5) Bounded Group Loss (BGL) (Agarwal et al., 2019) requires that the loss experienced by each group is bounded. With fairness consideration, the goal is to find weight vectorw that minimizes total expected loss in predicting Y givenX , subject to certain fairness condition, i.e., minw L(w) s.t. fairness constraint. This is a typical formulation in fair machine learning literature, and above method of finding a fair predictor belongs to in-processing approaches. Because such constrained optimization can be nonconvex, finding the optimal solution efficiently can be challenging. In this work, we develop novel algorithms that solves such an optimization problem udder EL fairness constraint. 2.1 EQUALIZED LOSS (EL) AND BOUNDED GROUP LOSS (BGL) As mentioned in Section 2, various fairness notions have been introduced in the literature. Among them, Statistical Parity (SP), Equal Opportunity (EqOpt), Equalized Odds (EO), and Bounded Group Loss (BGL) have been studied extensively in the literature, and both in-processing and postprocessing approaches have been developed to satisfy these constraints (Dwork et al., 2012; Agarwal et al., 2018; Hardt et al., 2016; Zafar et al., 2019; Fitzsimons et al., 2019). Note that different fairness notions may be conflict with each other and which one to adopt is application and context dependent. In this work, we are interested in Equalized Loss (EL) fairness notion (Zhang et al., 2019; Berk et al., 2021) which implies that the prediction error should be the same across different groups,2 and Group Bounded Loss (BGL) fairness notion (Agarwal et al., 2019) which requires the prediction error of every group to be bounded. We consider a relaxed version of EL fairness defined as follows. 
Definition 1 (γ-EL) A predictor f satisfies γ-EL if the expected losses experienced by different demographic groups satisfy the following, − γ ≤ L0(w)− L1(w) ≤ γ. (1) Parameter γ controls the degree of fairness; the smaller γ implies the stronger fairness. When γ = 0, the exact EL fairness is attained. We say a group is disadvantaged if it experiences a larger loss. Similarly, Group Bounded Loss (BGL) fairness notion is formally defined as follows. Definition 2 (γ-BGL) A predictor f satisfies γ-BGL if the expected loss of each demographic group is bounded by γ, i.e., La(w) ≤ γ, ∀a ∈ {0, 1}. (2) 1Predictive models such as logistic regression, linear regression, deep learning models, etc., are parameter- ized by a weight vector. 2EL has also been referred to as Overall Accuracy Equality in (Berk et al., 2021; Agarwal et al., 2019). 2.2 RELATIONS BETWEEN γ-EL AND γ-BGL In this section, we formally study the relations between γ-EL and γ-BGL fairness notions. Under γ-EL fairness constraint, finding a fair predictor is equivalent to solving the following constrained optimization problem: minw L(w) s.t. |L0(w)− L1(w)| ≤ γ. (3) Letw∗ be denoted as the solution to (3) and fw∗ is the optimal γ-EL fair predictor. Theorem 1 below shows that given the existence of a feasible point satisfying γ-BGL fairness, it’s impossible for both groups experiencing loss larger than γ from the optimal γ-EL fair predictor. Theorem 1 Consider the following optimization for finding the optimal γ-BGL fair predictor, minw L(w) s.t. La(w) ≤ γ, ∀a ∈ {0, 1}. (4) If L0(w∗) > γ and L1(w∗) > γ, then optimization problem (4) does not have a feasible point. Proof 1 We prove by contradiction. Assume w̃ is a feasible point of optimization (4). Note that w̃ is a feasible point for optimization problem (3) as well. Since both L0(w∗) and L1(w∗) are larger than γ, we have, E{l(Y, fw∗)} = Pr{A = 0}L0(w∗) + Pr{A = 1}L1(w∗) > γ, E{l(Y, fw̃)} = Pr{A = 0}L0(w̃) + Pr{A = 1}L1(w̃) ≤ γ. Therefore,w∗ can not be the solution to (3). This contradiction proves that the optimization problem (4) cannot have a feasible point. Theorem 1 implies that if γ-EL notion leads to an increase of the loss of every demographic group, then there is no optimal predictor under γ-BGL.3 The next theorem further shows that for any predictor satisfying γ-EL, it must satisfy 2γ-BGL. Theorem 2 Assume optimization problem (4) has at least one feasible point. Then, we have, min{L0(w∗), L1(w∗)} ≤ γ and max{L0(w∗), L1(w∗)} ≤ 2γ. Proof 2 Let w̃ be a feasible point of optimization problem (4), then w̃ is also a feasible point to (3). If min{L0(w∗), L1(w∗)} > γ, then L(w∗) > γ ≥ L(w̃) must hold. This is a contradiction because it implies thatw∗ is not an optimal solution to (3). Therefore, min{L0(w∗), L1(w∗)} ≤ γ. Similarly, we can prove max{L0(w∗), L1(w∗)} ≤ 2γ by contradiction. Assume max{L0(w∗), L1(w∗)} > 2γ. Then, max{L0(w∗), L1(w∗)} − min{L0(w∗), L1(w∗)} > γ which shows that w∗ is not a feasible point for (3). This is a contradiction. Therefore, max{L0(w∗), L1(w∗)} ≤ 2γ. Theorems 1 and 2 investigated the relations between EL and BGL fairness notions. Since γ-EL implies 2γ-BGL and it additionally requires the approximate equality across different groups, we will focus on γ-EL fairness notion in the rest of the paper. Because optimization problem (3) is a non-convex optimization, finding the optimal fair γ-EL solution efficiently can be challenging. 
In the next sections, we propose a number of algorithms that are easy to implement and can solve the optimization (3) efficiently. 3 OPTIMAL FAIR MODEL UNDER EL FAIRNESS In this section, we consider the optimization problem (3) under the EL fairness constraint. Note that this optimization problem is non-convex and finding the global optimal solution is difficult. However, we propose an algorithm which is able to find the solution to non-convex optimization (3) by solving a sequence of convex optimization problems. Before presenting the algorithm, we need to introduce two assumptions. Assumption 1 L0(w), L1(w), and L(w) are strictly convex functions inw. 3Theorem 1 is related to (Agarwal et al., 2019). In particular, they considered γ-BGL fairness and mentioned that the equalized loss fairness notion may increase the loss of both groups. Algorithm 1: Function ELminimizer 1 ELminimizer(wG0 ,wG1 , , γ): 2 λ0start = L0(wG0) 3 λ0end = L0(wG1) 4 Define L̃1(w) = L1(w) + γ 5 i = 0 6 while λ(i)end − λ (i) start > do 7 λ (i) mid = (λ (i) end + λ (i) start)/2; 8 Solve the following convex optimization problem, w∗i = argmin w L̃1(w) s.t. L0(w) ≤ λ(i)mid (5) 9 λ(i) = L̃1(w ∗ i ); 10 if λ(i) ≥ λ(i)mid then 11 λ (i+1) start = λ (i) mid; λ (i+1) end = λ (i) end; 12 end 13 else 14 λ (i+1) end = λ (i) mid; λ (i+1) start = λ (i) start; 15 end 16 i = i+ 1; 17 end 18 Returnw∗i Example 1 Consider a linear classifier fw(X ) = wTX with squared loss l(Y, fw(X )) = (wTX − Y )2. In this example, E{l(Y, fw(X ))} = wTE{XXT }w − 2E{YXT }w + E{Y 2} is strictly convex in w if covariance matrix E{XXT } is positive definite. Similarly, La(w) is strictly convex if E{XXT |A = a} is positive definite. LetwGa be the weight vector minimizing the loss associated with group A = a. That is, wGa = argmin w La(w). (6) Since optimization problem (6) is an unconstrained convex optimization problem,wGa can be found efficiently by the first order condition or the gradient descent. We make the following assumption. Assumption 2 We assume that the following holds, L0(wG0) ≤ L1(wG0) and L1(wG1) ≤ L0(wG1). Algorithm 2: Solving Optimization Problem (3) Input: wG0 ,wG1 , ,γ 1 wγ = ELminimizer(wG0 ,wG1 , , γ); 2 w−γ = ELminimizer(wG0 ,wG1 , ,−γ); 3 if L(wγ) ≤ L(w−γ) then 4 w∗ = wγ ; 5 end 6 else 7 w∗ = w−γ ; 8 end Output: w∗ Assumption 2 implies that when a group experiences its lowest possible loss, it should not be the disadvantaged group. Under Assumption 2, given wG0 and wG1 , Algorithm 1 with γ = 0 (i.e., function ELminimizer(wG0 ,wG1 , , 0)) finds the optimal 0-EL fair solution, where parameter > 0 specifies the stopping criterion; as → 0, the output approaches to the optimal solution. Intuitively, Algorithm 1 solves non-convex optimization (3) by solving a sequence of convex and constrained optimization problems. If γ > 0, Algorithm 2 finds the optimal predictor under γ-EL using function ELminimizer. The convergence of Algorithm 1 for finding the optimal 0-EL fair solution, and convergence of Algorithm 2 for finding the optimal γ-EL fair solution are proved in the following theorems. Theorem 3 Consider sequences {λ(i)mid|i = 1, 2, . . .} and {w∗i |i = 1, 2, . . .} generated by Algorithm 1 when γ = 0, i.e., ELminimizer(wG0 ,wG1 , → 0, 0). Under Assumptions 1 and 2, we have, lim i→∞ w∗i = w ∗ and lim i→∞ λ (i) mid = E{L(Y, fw∗(X))} where fw∗ is the optimal 0-EL fair predictor. Similarly, we can prove the convergence for the approximate EL fairness when γ 6= 0. Theorem 4 Assume that L0(wG0) − L1(wG0) < −γ and L0(wG1) − L1(wG1) > γ. 
Then, as → 0, the output of Algorithm 2 goes to the optimal γ-EL fair solutionw∗. Complexity Analysis: The While loop in Algorithm 1 is executed for O(log(1/ )) times. Therefore, Algorithm 1 needs to solve a constrained convex optimization problem for O(log(1/ )) times. Note that constrained convex optimization problems can be efficiently solved via sub-gradient methods (Nedić & Ozdaglar, 2009), brier methods (Wright, 2001), stochastic gradient descent with one projection (Mahdavi et al., 2012), etc. For instance, Nedić & Ozdaglar (2009) introduces a subgradient method that finds the saddle point of the Lagrangian function corresponding to (5) and it converges at the rate ofO(1/k) (k is the number of iterations). Therefore, if is the maximum error tolerance for (5), the total time complexity of Algorithm 2 is O(1/ log(1/ )). 4 SUB-OPTIMAL FAIR MODEL UNDER γ-EL In Section 3, we have shown that non-convex optimization problem (3) can be reduced to a sequence of convex constrained optimizations (5), and based on this we proposed an algorithm (Algorithm 2) that finds the optimal γ-EL fair predictor. However, the proposed algorithm still requires solving a convex constrained optimization in each iteration. In this section, we propose another algorithm which finds a sub-optimal solution to optimization (3) without solving constrained optimization in each iteration. The algorithm consists of two phases in sequence: (1) finding two weight vectors by solving two unconstrained convex optimization problems; (2) generating a new weight vector satisfying γ-EL fairness with the two weight vectors found in the first phase. Because of the convexity, two unconstrained convex optimization problems in the first phase can be solved efficiently. Phase 1: Unconstrained optimization. In this phase, we remove EL fairness constraint and first solve the following uncontrained optimization problem, wO = argmin w L(w) (7) Because L(w) is strictly convex inw, the above optimization problem can be solved efficiently using the gradient descent method. Predictor fwO is the optimal predictor without fairness constraint, and L(wO) is the smallest overall expected loss that is attainable. Let â = argmaxa∈{0,1} La(wO), i.e., group â is the group that is disadvantaged under predictor fwO . Then, for the disadvantaged group â, we findwGâ by solving unconstrained optimization problem (6). Phase 2: Binary search to find the fair predictor. For β ∈ [0, 1], we define the followings, g(β) = Lâ ( (1− β)wO + βwGâ ) − L1−â ( (1− β)wO + βwGâ ) ; h(β) = L ( (1− β)wO + βwGâ ) , where function g(β) can be interpreted as loss disparity between two demographic group under predictor f(1−β)wO+βwGâ , and h(β) is the corresponding overall expected loss. Some properties of functions g(.) and h(.) are summarized in the following theorem. Theorem 5 Under Assumptions 1 and 2, the followings hold, 1. There exists β0 ∈ [0, 1] such that g(β0) = 0. 2. h(β) is strictly increasing in β ∈ [0, 1]; g(β) is strictly decreasing in β ∈ [0, 1]. Theorem 5 implies that in a dw dimensional space, if we start fromwO and move towardwGâ along a straight line, the overall loss increases and the disparity between two groups decreases until we reach (1 − β0)wO + β0wGâ , at which 0-EL fairness is satisfied. Note that β0 is the unique root of g. Since g(β) is a strictly decreasing function, β0 can be found using binary search. For the approximate γ-EL fairness, there are multiple values of β such that (1− β)wO + βwGâ satisfies γEL. 
Since h(β) is strictly increasing in β, among all β that satisfies γ-EL fairness, we would choose the smallest one. The method for finding a sub-optimal solution to optimization (3) is described in Algorithm 3. Algorithm 3: Sub-optimal solution to optimization problem (3) 1 Input: wGâ ,wO, , γ 2 Initialization: gγ(β) = g(β)− γ, i = 0, β(0)start = 0, β (0) end = 1 3 if gγ(0) ≤ 0 then 4 w = wO, and go to line 16; 5 end 6 while β(i)end − β (i) start > do 7 β (i) mid = (β (i) start + β (i) end)/2; 8 if gγ(β (i) mid) ≥ 0 then 9 β (i+1) start = β (i) mid, β (i+1) end = β (i) end; 10 end 11 else 12 β (i+1) start = β (i) start, β (i+1) end = β (i) mid; 13 end 14 end 15 w = (1− β(i)mid)wO + β (i) midwGâ ; 16 Output: w Note that while loop in Algorithm 3 is repeated for O(log(1/ )) times. Since the time complexity of operations in each loop isO(1), the total time complexity of Algorithm 3 isO(log(1/ )). We can formally prove that the output returned by Algorithm 3 satisfies γ-EL fairness constraint. Theorem 6 Assume that Assumption 1 holds. If gγ(0) ≤ 0, then wO satisfies the γ-EL fairness; if gγ(0) > 0, then limi→∞ β (i) mid = β (∞) mid exists, and (1 − β (∞) mid)wO + β (∞) midwGâ satisfies the γ-EL fairness constraint. It is worth mentioning, since h(β) is incrasing, we are intrested in finding the smallest possible β that (1 − β)wO + βwGâ satisfies γ-EL. Here, β (∞) mid is the smallest possible β under which (1 − β)wO + βwGâ satisfies γ-EL. 5 GENERALIZATION PERFORMANCE So far we proposed algorithms for solving optimization (3). In practice, the joint probability distribution of (X,A, Y ) is often unknown and the expected loss needs to be estimated using the empirical loss. Specifically, given n samples (X i, Ai, Yi), i = 1, . . . , n and predictor fw , the empirical losses of entire population and each group are defined as follows, L̂(w) = 1 n n∑ i=1 l(Yi, fw(X i)); L̂a(w) = 1 na ∑ i:Ai=a l(Yi, fw(X i)), (8) where na = |{i|Ai = a}|. Because γ-EL fairness constraint is defined in terms of expected loss, the optimization problem of finding an optimal γ-EL fair predictor using empirical losses is as follows, ŵ = argmin w L̂(w) s.t. |L̂0(w)− L̂1(w)| ≤ γ̂. (9) Note that γ̂ 6= γ and one goal in this section is to find relation between γ̂ and γ. We aim to investigate how to determine γ̂ so that with high probability the predictor found by solving problem (9) satisfies γ-EL fairness, and meanwhile ŵ is a good estimate of w∗. To present our result, we make the following assumption. Assumption 3 With probability 1− δ, we have the following, sup fw∈F |L(w)− L̂(w)| ≤ B(δ, n,F), where B(δ, n,F) is a bound that goes to zero as n goes to infinity. Note that if the class F is learnable with respect to loss function l, then there exists such a bound B(δ, n,F) that goes to zero as n goes to infinity (Shalev-Shwartz & Ben-David, 2014).4 Theorem 7 Let F be a set of learnable functions, and let fŵ and fw∗ be the solution to (9) and (3) respectively with γ̂ = γ + ∑ a∈{0,1}B(δ, na,F). Then, with probability at least 1 − 6δ the followings hold, L(ŵ)− L(w∗) ≤ 2B(δ, n,F) and |L0(ŵ)− L1(ŵ)| ≤ γ + 2B(δ, n0,F) + 2B(δ, n1,F). Theorem 7 shows that as n0, n1 go to infinity, γ̂ → γ, and both empirical loss and expected loss satisfy γ-EL. In addition, as n goes to infinity, the expected loss at ŵ goes to the minimum possible expected loss. Therefore, solving (9) using empirical loss is equivalent to solving (3) if the number of data points from each group is sufficiently large. 
6 EXPERIMENTS

6.1 EXPERIMENT 1: QUADRATIC FUNCTIONS

First, we solve optimization problem (3) given the following quadratic functions,

L_0(w) = (w_1 + 5)² + (w_2 + 2)² + (w_3 + 1)² + 4 w_1 w_3,
L_1(w) = (w_1 − 9)² + (w_2 − 9)² + (w_3 − 9)² + w_1 w_2 + w_2 w_3 + w_1 w_3 + 1,
L(w) = L_0(w) + L_1(w).

By the first-order condition, we obtain w_{G_0}, w_{G_1}, w_O as follows,

w_{G_0} = [1, −2, −3]^T,  w_{G_1} = [4.5, 4.5, 4.5]^T,  w_O = [24.53, 3.0, 26.53]^T.

We use Algorithm 1 to find the optimal solution to (3) and run Algorithm 3 to find a sub-optimal solution. In particular, we adopt the penalty method (Ben-Tal & Zibulevsky, 1997) to solve the constrained convex optimization (5), i.e., by solving the following unconstrained optimization,

min_w L_1(w) + t · max{0, L_0(w) − λ_mid^{(i)}}²,   (10)

where t is the penalty parameter. We solve optimization problem (10) using gradient descent with learning rate 0.001 and 10000 iterations. We set the penalty parameter t = 0.5 and increase t by 0.1 after every 250 iterations. Note that optimization (5) is convex and the penalty method for a constrained convex optimization converges to the optimal solution (Ben-Tal & Zibulevsky, 1997). We compare our algorithms with a baseline: the solution to optimization problem (3) found using the penalty method, i.e., by solving the following unconstrained optimization,

min_w L_0(w) + L_1(w) + t · [max{0, L_0(w) − L_1(w) − γ}² + max{0, L_1(w) − L_0(w) − γ}²].   (11)

When solving optimization problem (11), we use learning rate 0.001. We set the penalty parameter t = 0.5 and increase it by 0.1 every 250 iterations.

Figure 1a illustrates the overall loss L(w) at the (sub-)optimal points obtained from Algorithms 2 and 3 and the baseline; the x-axis represents the fairness parameter γ. Since Algorithm 2 converges to the optimal solution, it achieves the smallest loss. Figure 1b illustrates the distance of the optimal point w^* from the sub-optimal solutions obtained by Algorithm 3 and the baseline penalty method. It shows that when γ is sufficiently large (a less strict fairness constraint), the sub-optimal solution generated by Algorithm 3 is closer to the optimal solution than the solution found using the baseline method.

⁴As an example, if F is a compact subset of linear predictors in a Reproducing Kernel Hilbert Space (RKHS) and loss l(y, f(x)) is Lipschitz in f(x) (the second argument), then Assumption 3 is satisfied (Bartlett & Mendelson, 2002). The vast majority of linear predictors, such as support vector machines and logistic regression, can be defined in an RKHS.

6.2 EXPERIMENT 2: LOGISTIC REGRESSION AND THE ADULT INCOME DATASET

The Adult income dataset is a public dataset containing the information of 48,842 individuals (Kohavi, 1996). Each data point includes 14 features, including age, education, race, etc. Considering race (White or Black) as the sensitive attribute, we denote the White demographic group by A = 0 and the Black group by A = 1. We first pre-process the dataset by removing the data points with a missing value or with a race other than Black or White, obtaining 41,961 data points. Among these, 4,585 belong to the Black demographic group. For each data point, we convert all categorical features to one-hot vectors, resulting in d_x = 110-dimensional features. We then normalize the feature vectors so that they have zero mean and unit variance. Our goal is to find a logistic regression model satisfying γ-EL that predicts whether the income of an individual is above $50K.
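Both experiments use the penalty-method baseline (11). The following is a minimal sketch of that baseline with a numerical gradient; the hyperparameters mirror the settings above, while the demo losses at the bottom are hypothetical convex toy functions (not the paper's exact functions or implementation):

```python
import numpy as np

def num_grad(f, w, h=1e-6):
    """Central-difference gradient so the sketch needs no autodiff library."""
    g = np.zeros_like(w)
    for j in range(w.size):
        e = np.zeros_like(w)
        e[j] = h
        g[j] = (f(w + e) - f(w - e)) / (2 * h)
    return g

def penalty_baseline(L0, L1, gamma, w0, lr=1e-3, iters=10_000):
    """Gradient descent on the penalized objective (11); the penalty weight t
    starts at 0.5 and grows by 0.1 every 250 iterations, matching the paper."""
    w, t = np.array(w0, dtype=float), 0.5
    for i in range(iters):
        obj = lambda v: (L0(v) + L1(v)
                         + t * max(0.0, L0(v) - L1(v) - gamma) ** 2
                         + t * max(0.0, L1(v) - L0(v) - gamma) ** 2)
        w = w - lr * num_grad(obj, w)
        if (i + 1) % 250 == 0:
            t += 0.1
    return w

# Demo on hypothetical strictly convex toy losses.
a, b = np.array([1.0, -2.0, -3.0]), np.array([4.5, 4.5, 4.5])
L0 = lambda w: np.sum((w - a) ** 2)
L1 = lambda w: np.sum((w - b) ** 2)
print(penalty_baseline(L0, L1, gamma=1.0, w0=np.zeros(3)))
```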
We use Algorithm 2 and Algorithm 3 with ε = 0.01 to find the optimal logistic regression model under EL. We use the penalty method described in equation (11) as the baseline. Similar to Experiment 1, we set the learning rate to 0.001 for solving (10) and (11). The penalty parameter t is set to 0.5 and increases by 0.1 every 250 iterations. Figure 1c illustrates the loss of the logistic regression models trained by Algorithm 2, Algorithm 3, and the baseline. It shows that Algorithm 2 outperforms the baseline; this is because the baseline only finds a sub-optimal solution while Algorithm 2 finds the global optimal solution. As mentioned in Section 4, Algorithm 3 finds a sub-optimal solution that satisfies γ-EL, and its performance can vary from case to case. Even though Algorithm 3 performs well in Experiment 1, it does not outperform the baseline in Experiment 2. Figure 1d illustrates the distances from the optimal point w^* to the sub-optimal solutions obtained by Algorithm 3 and the baseline penalty method. It shows that the distance from w^* to the solution obtained by Algorithm 3 is slightly larger than the distance from w^* to the solution obtained by the baseline.

7 CONCLUSION

In this work, we studied the problem of fair supervised learning under the Equalized Loss (EL) fairness notion, which requires the prediction error/loss to be the same across different demographic groups. Under the EL constraint, the learning problem can be formulated as a non-convex optimization problem. We introduced algorithms that find the global optimal solution to this non-convex problem. In particular, we showed that the optimal solution can be found by solving a sequence of convex constrained optimizations. We also introduced a simple algorithm for finding a sub-optimal solution without solving constrained convex optimizations. In addition to the theoretical guarantees, we demonstrated the performance of the proposed algorithms through numerical experiments.

8 REPRODUCIBILITY STATEMENT

Regarding the theoretical results: this paper includes seven theorems. The proofs of Theorems 1 and 2 are provided in the main text; due to the page limit, the proofs of the remaining theorems are provided in the appendix. Regarding the numerical examples: the first experiment does not use any dataset; we study the performance of our proposed method on quadratic objective functions. The values of the hyperparameters (including the learning rate and penalty parameter) are explicitly stated in Section 6. The second numerical example uses the Adult income dataset, a well-known public dataset in our community; the data pre-processing procedure is explained in detail in Section 6.2.

9 ETHICS STATEMENT

In this work, we proposed algorithms to find fair predictors under the EL fairness notion. We want to emphasize that selecting the right fairness notion depends on the application, and the authors do not make any suggestions to policy/law makers about choosing or avoiding this fairness notion.

APPENDIX: PROOFS

In order to prove Theorem 3, we first introduce two lemmas.

Lemma 1 Under Assumption 2, there exists w ∈ R^{d_w} such that L_0(w) = L_1(w) = L(w) and λ_start^{(1)} ≤ L(w) ≤ λ_end^{(1)}.

Proof. Let h_0(β) = L_0((1 − β)w_{G_0} + βw_{G_1}), h_1(β) = L_1((1 − β)w_{G_0} + βw_{G_1}), and h(β) = h_0(β) − h_1(β) for β ∈ [0, 1]. Note that ∇_w L_a(w_{G_a}) = 0 because w_{G_a} is the minimizer of L_a(w). Moreover, ∇²_w L_a(w) is positive semi-definite because L_a(·)
is a strictly convex function. First, we show that L_0((1 − β)w_{G_0} + βw_{G_1}) is increasing in β and L_1((1 − β)w_{G_0} + βw_{G_1}) is decreasing in β. Note that h_0'(0) = (w_{G_1} − w_{G_0})^T ∇_w L_0(w_{G_0}) = 0, and h_0''(0) = (w_{G_1} − w_{G_0})^T ∇²_w L_0(w_{G_0}) (w_{G_1} − w_{G_0}) ≥ 0. This implies that h_0'(β) ≥ 0 for all β ∈ [0, 1]. Similarly, we can show that h_1'(β) ≤ 0 for all β ∈ [0, 1]. Under Assumption 2, h(0) < 0 and h(1) > 0. Therefore, by the intermediate value theorem, there exists β ∈ (0, 1) such that h(β) = 0. Define w = (1 − β)w_{G_0} + βw_{G_1}. We have,

h(β) = 0 ⟹ L_0(w) = L_1(w) = L(w)   (12)
w_{G_0} is the minimizer of L_0 ⟹ L(w) = L_0(w) ≥ λ_start^{(1)}   (13)
h_0'(β) ≥ 0 ∀β ∈ [0, 1] ⟹ h_0(1) ≥ h_0(β) ⟹ λ_end^{(1)} ≥ L_0(w) = L(w)   (14)

Lemma 2 L_0(w_i^*) = λ_mid^{(i)}, where w_i^* is the solution to (5).

Proof. We proceed by contradiction. Assume that L_0(w_i^*) < λ_mid^{(i)}. Since w_{G_1} is not in the feasible set of (5), ∇_w L_1(w_i^*) ≠ 0. This is a contradiction: w_i^* would be an interior point of the feasible set of a convex optimization problem, and an interior point cannot be optimal unless ∇_w L_1(w_i^*) = 0.

Proof [Theorem 3] Let I_i = [λ_start^{(i)}, λ_end^{(i)}] be the sequence of intervals generated by Algorithm 1. It is easy to see that I_1 ⊇ I_2 ⊇ ··· and λ_end^{(i)} − λ_start^{(i)} → 0 as i → ∞. Therefore, by the Nested Interval Theorem, ∩_{i=1}^∞ I_i consists of exactly one real number λ^*, and both λ_start^{(i)} and λ_end^{(i)} converge to λ^*. Because λ_mid^{(i)} = (λ_start^{(i)} + λ_end^{(i)})/2, λ_mid^{(i)} also converges to λ^*.

Now we show that L(w^*) ∈ I_i for all i. Note that L(w^*) = L_0(w^*) ≥ λ_start^{(1)} because w_{G_0} is the minimizer of L_0. Moreover, λ_end^{(1)} ≥ L(w^*); otherwise L(w) < L(w^*) (w is defined in Lemma 1) and w^* would not be an optimal solution under 0-EL. Therefore, L(w^*) ∈ I_1. Now we proceed by induction. Suppose L(w^*) ∈ I_i; we show that L(w^*) ∈ I_{i+1} as well. We consider two cases.

• L(w^*) ≤ λ_mid^{(i)}. In this case w^* is a feasible point for (5), and λ^{(i)} ≤ L(w^*) ≤ λ_mid^{(i)}. Therefore, L(w^*) ∈ I_{i+1}.

• L(w^*) > λ_mid^{(i)}. In this case, we proceed by contradiction to show that λ^{(i)} ≥ λ_mid^{(i)}. Assume that λ^{(i)} < λ_mid^{(i)}. Define g(β) = g_0(β) − g_1(β), where g_j(β) = L_j((1 − β)w_{G_0} + βw_i^*). Note that λ^{(i)} = g_1(1). By Lemma 2, g_0(1) = λ_mid^{(i)}. Therefore, g(1) = λ_mid^{(i)} − λ^{(i)} > 0. Moreover, under Assumption 2, g(0) < 0. Therefore, by the intermediate value theorem, there exists β ∈ (0, 1) such that g(β) = 0. Similar to the proof of Lemma 1, we can show that g_0(β) is an increasing function for all β ∈ [0, 1]. As a result, g_0(β) < g_0(1) = λ_mid^{(i)}. Define w = (1 − β)w_{G_0} + βw_i^*. We have,

g_0(β) = L_0(w) = L_1(w) = L(w) < λ_mid^{(i)}   (15)
λ_mid^{(i)} < L(w^*)   (16)

The last two equations imply that w satisfies 0-EL with a strictly smaller loss than w^*, so w^* is not an optimal fair solution under the 0-EL fairness constraint. This is a contradiction. Therefore, if L(w^*) > λ_mid^{(i)}, then λ^{(i)} ≥ λ_mid^{(i)}. As a result, L(w^*) ∈ I_{i+1}.

By the two cases above and the nested interval theorem, we conclude that

L(w^*) ∈ ∩_{i=1}^∞ I_i,  lim_{i→∞} λ_mid^{(i)} = L(w^*).

For the second part of the theorem, consider the following,

w_∞^* = argmin_w L_1(w)  s.t.  L_0(w) ≤ λ_mid^{(∞)} = L(w^*),  lim_{i→∞} w_i^* = w_∞^*.

To show that w_∞^* is equal to w^*, we proceed by contradiction. Suppose w_∞^* ≠ w^*. As a result, L_1(w_∞^*) < L(w^*). Define η(β) = η_0(β) − η_1(β), where η_j(β) = L_j((1 − β)w_{G_0} + βw_∞^*). Note that L_1(w_∞^*) = η_1(1). By Lemma 2, the constraint in (5) is binding and η_0(1) = L(w^*). Therefore, η(1) = L(w^*) − L_1(w_∞^*) > 0. Moreover, under Assumption 2, η(0) < 0. Therefore, by the intermediate value theorem, there exists β ∈ (0, 1) such that η(β) = 0.
Similar to the proof of Lemma 1, we can show that η_0(β) is an increasing function for all β ∈ [0, 1]. As a result, η_0(β) < η_0(1) = L(w^*). Define w = (1 − β)w_{G_0} + βw_∞^*. We have,

η_0(β) = L_0(w) = L_1(w) = L(w) < L(w^*)   (17)

The last equation implies that w^* is not an optimal fair solution under the 0-EL fairness constraint. This is a contradiction. As a result, w_∞^* = w^*.

Proof [Theorem 4] Let w^* be the optimal weight vector under γ-EL.

Step 1. We show that one of the following holds,

L_0(w^*) − L_1(w^*) = γ   (18)
L_0(w^*) − L_1(w^*) = −γ   (19)

We proceed by contradiction. Assume −γ < L_0(w^*) − L_1(w^*) < γ. This implies that w^* is an interior point of the feasible set of optimization problem (3). Since w^* ≠ w_O, we have ∇L(w^*) ≠ 0. As a result, the objective function of (3) can be improved at w^* by moving in the direction of −∇L(w^*). This is a contradiction. Therefore, |L_0(w^*) − L_1(w^*)| = γ.

Step 2. The point w_γ = ELminimizer(w_{G_0}, w_{G_1}, ε, γ) is the solution to the following optimization problem,

min_w Pr{A = 0} L_0(w) + Pr{A = 1} L_1(w),  s.t.  L_0(w) − L_1(w) = γ.   (20)

To show this claim, notice that the solution to optimization problem (20) is the same as the solution to the following,

min_w Pr{A = 0} L_0(w) + Pr{A = 1} L̃_1(w),  s.t.  L_0(w) − L̃_1(w) = 0,   (21)

where L̃_1(w) = L_1(w) + γ. Since L_0(w_{G_0}) − L̃_1(w_{G_0}) < 0 and L_0(w_{G_1}) − L̃_1(w_{G_1}) > 0, by Theorem 3 we know that w_γ = ELminimizer(w_{G_0}, w_{G_1}, ε, γ) finds the solution to (21). Lastly, because |L_0(w^*) − L_1(w^*)| = γ, we have,

w^* = { w_γ if L(w_γ) ≤ L(w_{−γ});  w_{−γ} otherwise.   (22)

Thus, Algorithm 2 finds the solution to (3).

Proof [Theorem 5]

1. Under Assumption 2, g(1) < 0. Moreover, g(0) ≥ 0. Therefore, by the intermediate value theorem, there exists β_0 ∈ [0, 1] such that g(β_0) = 0.

2. Since w_O is the minimizer of L(w), h'(0) = 0. Moreover, since L(w) is strictly convex, h''(0) > 0. As a result, h'(β) > 0 for β > 0.

3. Since w_{G_â} is the minimizer of L_â(w) and L_â(w) is strictly convex, L_â((1 − β)w_O + βw_{G_â}) is a strictly decreasing function. Since h(β) = Pr{A = â} L_â((1 − β)w_O + βw_{G_â}) + Pr{A = 1 − â} L_{1−â}((1 − β)w_O + βw_{G_â}) is strictly increasing while L_â((1 − β)w_O + βw_{G_â}) is strictly decreasing, we conclude that L_{1−â}((1 − β)w_O + βw_{G_â}) is strictly increasing. As a result, g must be strictly decreasing.

Proof [Theorem 6] First, we show that if g_γ(0) ≤ 0, then w_O satisfies γ-EL:

g_γ(0) ≤ 0 ⟹ g(0) − γ ≤ 0 ⟹ L_â(w_O) − L_{1−â}(w_O) ≤ γ.

Moreover, L_â(w_O) − L_{1−â}(w_O) ≥ 0 because â = argmax_a L_a(w_O). Therefore, γ-EL is satisfied. Second, assume that g_γ(0) > 0. Under Assumption 1, g_γ(1) = L_â(w_{G_â}) − L_{1−â}(w_{G_â}) − γ < 0. Therefore, by the intermediate value theorem, there exists β_0 such that g_γ(β_0) = 0. Moreover, g_γ is a strictly decreasing function. Therefore, the binary search proposed in Algorithm 3 converges to the root of g_γ(β). As a result, (1 − β_mid^{(∞)}) w_O + β_mid^{(∞)} w_{G_â} satisfies γ-EL. Note that since g(β) is decreasing, β_mid^{(∞)} is the smallest possible β under which (1 − β)w_O + βw_{G_â} satisfies γ-EL. Since h is increasing, the smallest possible β yields the lowest overall loss.

Proof [Theorem 7] By the triangle inequality, the following holds,

sup_{f_w∈F} | |L_0(w) − L_1(w)| − |L̂_0(w) − L̂_1(w)| | ≤ sup_{f_w∈F} |L_0(w) − L̂_0(w)| + sup_{f_w∈F} |L_1(w) − L̂_1(w)|.   (23)
Therefore, with probability at least 1 − 2δ we have,

sup_{f_w∈F} | |L_0(w) − L_1(w)| − |L̂_0(w) − L̂_1(w)| | ≤ B(δ, n_0, F) + B(δ, n_1, F).   (24)

As a result, with probability 1 − 2δ the following holds,

{w | f_w ∈ F, |L_0(w) − L_1(w)| ≤ γ} ⊆ {w | f_w ∈ F, |L̂_0(w) − L̂_1(w)| ≤ γ̂}.   (25)

Now consider the following decomposition,

L(ŵ) − L(w^*) = L(ŵ) − L̂(ŵ) + L̂(ŵ) − L̂(w^*) + L̂(w^*) − L(w^*).   (26)

By (25), L̂(ŵ) − L̂(w^*) ≤ 0 with probability 1 − 2δ. Thus, with probability at least 1 − 2δ, we have,

L(ŵ) − L(w^*) ≤ L(ŵ) − L̂(ŵ) + L̂(w^*) − L(w^*).   (27)

Therefore, under Assumption 3, we conclude that with probability at least 1 − 6δ,

L(ŵ) − L(w^*) ≤ 2B(δ, n, F).

In addition, by (24), with probability at least 1 − 2δ, we have,

|L_0(ŵ) − L_1(ŵ)| ≤ B(δ, n_0, F) + B(δ, n_1, F) + |L̂_0(ŵ) − L̂_1(ŵ)| ≤ γ̂ + B(δ, n_0, F) + B(δ, n_1, F) = γ + 2B(δ, n_0, F) + 2B(δ, n_1, F).
1. What is the main contribution of the paper regarding supervised learning models and fairness constraints? 2. What are the strengths and weaknesses of the proposed algorithms for solving non-convex problems with fairness constraints? 3. How does the paper compare to a closely related work in the literature, such as arXiv:1802.08626? 4. What are the limitations of the experimental section, and how could it be improved with more comprehensive comparisons and dataset varieties? 5. Are there any concerns or questions regarding the proof of Lemma 1 and the use of subgradients when L may not be smooth?
Summary Of The Paper Review
Summary Of The Paper
This paper studies supervised learning models with fairness constraints, specifically the equalized loss fairness constraint. When a traditional (convex) loss minimization problem is cast with additional fairness constraints, the corresponding problem is non-convex. The authors provide algorithms to efficiently solve this problem up to global optimality and demonstrate the performance of their algorithms on real-world data.

Review
There is a well-known and closely related work in the literature, (arXiv:1802.08626). I am surprised that this paper was not cited. (arXiv:1802.08626) studies empirical risk minimization under fairness constraints, as in this paper, and also proposes to use equalized group loss; see Definition 1 in (arXiv:1802.08626). (Their equalized loss definition is also conditioned on y = 1, as in equal opportunity, but I think their method can also be applied to equalized group loss as defined in this paper.) The similarities and differences between the two papers should have been stated clearly. How is Theorem 7 different from Theorem 1 in (arXiv:1802.08626)? Both theorems seem to use the same assumption. There are obvious similarities between the two papers. I think the paper is very well written, other than the fact that the related literature part should be improved. I am not convinced of the novelty of the paper given the existing work in (arXiv:1802.08626). In my view, the contribution is limited to the introduction of the algorithms for finding the fair predictor, which is interesting but not enough to meet the bar for acceptance. The fact that the paper (arXiv:1802.08626) is not mentioned at all is worrying, given that even the notation of Assumption 1 and Theorem 7 is the same as in (arXiv:1802.08626). The experiment section is far from complete: only one real-world dataset is considered, while several real-world datasets are publicly available, and there is no comparison with the state of the art. The proof of Lemma 1 uses the gradient of L_a(w), but it could be the case that L is not smooth, and thus L_a may not be smooth either. Then, instead of gradients, one should work with subgradients, and I am not sure the proof of Lemma 2 goes through in that case. Minor comment: in the appendix, the "s.t."s should be put in math mode.
ICLR
Title Non-convex Optimization for Learning a Fair Predictor under Equalized Loss Fairness Constraint

Abstract Supervised learning models have been increasingly used in various domains such as lending, college admission, natural language processing, face recognition, etc. These models may inherit pre-existing biases from training datasets and exhibit discrimination against protected social groups. Various fairness notions have been introduced to address fairness issues. In general, finding a fair predictor leads to a constrained optimization problem, and depending on the fairness notion, it may be non-convex. In this work, we focus on Equalized Loss (EL), a fairness notion that requires the prediction error/loss to be equalized across different demographic groups. Imposing this constraint to the learning process leads to a non-convex optimization problem even if the loss function is convex. We introduce algorithms that can leverage off-the-shelf convex programming tools and efficiently find the global optimum of this non-convex problem. In particular, we first propose the ELminimizer algorithm, which finds the optimal EL fair predictor by reducing the non-convex optimization problem to a sequence of convex constrained optimizations. We then propose a simple algorithm that is computationally more efficient compared to ELminimizer and finds a sub-optimal EL fair predictor using unconstrained convex programming tools. Experiments on real-world data show the effectiveness of our algorithms.

1 INTRODUCTION

As machine learning (ML) algorithms are increasingly being used in applications such as education, lending, recruitment, healthcare, criminal justice, etc., there is a growing concern that the algorithms may exhibit discrimination against protected population groups. For example, speech recognition products such as Google Home and Amazon Alexa were shown to have accent bias (Harwell, 2018).
The COMPAS recidivism prediction tool, used by courts in the US in parole decisions, has been shown to have a substantially higher false positive rate for African Americans compared to the general population (Dressel & Farid, 2018). Amazon had been using automated software since 2014 to assess applicants' resumes, which was found to be biased against women (Dastin, 2018). Various fairness notions have been proposed in the literature to measure and remedy the biases in ML systems; they can be roughly classified into two classes: 1) individual fairness focuses on equity at the individual level and requires that similar individuals be treated similarly (Dwork et al., 2012; Biega et al., 2018; Jung et al., 2019; Gupta & Kamble, 2019); 2) group fairness requires certain statistical measures to be (approximately) equalized across different groups distinguished by some sensitive attributes. Their suitability is often application dependent, and many of them are incompatible with each other (Zhang et al., 2019; Hardt et al., 2016; Conitzer et al., 2019; Zhang et al., 2020; Khalili et al., 2020).

Extensive approaches have been developed to satisfy a given definition of fairness, and they generally fall under three categories: pre-processing, by modifying the original dataset, such as removing certain features and reweighing, e.g., (Kamiran & Calders, 2012; Celis et al., 2020); in-processing, by modifying the algorithms, such as imposing fairness constraints or changing objective functions, e.g., (Zhang et al., 2018; Agarwal et al., 2018; 2019; Reimers et al., 2021; Calmon et al., 2017); post-processing, by adjusting the output of the algorithms based on sensitive attributes, e.g., (Hardt et al., 2016).

In this paper, we focus on group fairness and aim to mitigate unfairness issues in supervised learning using in-processing approaches. The problem can be cast as a constrained optimization problem where a fair predictor can be found by minimizing the prediction error (i.e., loss) subject to a certain group fairness constraint. In Section 2.1, we present definitions of commonly used group fairness notions, namely statistical parity (Dwork et al., 2012), equal opportunity (Hardt et al., 2016), equalized loss (Zhang et al., 2019), and bounded group loss (Agarwal et al., 2019). Here we are particularly interested in equalized loss, which requires the expected loss to be equalized across different groups.

Constrained optimization problems for finding a fair predictor have been studied in the literature. In general, imposing a fairness criterion on the optimization problem may lead to a non-convex optimization problem. Existing works have proposed various approaches to solving such a non-convex optimization in different settings. For example, Komiyama et al. (2018) studied the non-convex optimization for regression problems under the coefficient of determination constraint. Agarwal et al. (2019) proposed an approach to finding a fair regression model under bounded group loss and statistical parity fairness constraints. Agarwal et al. (2018) studied classification problems and aimed at finding fair classifiers under various fairness notions including statistical parity and equal opportunity. In particular, they considered the zero-one loss as the objective function and trained a randomized fair classifier over a finite hypothesis space; this problem was reduced to finding the saddle point of a linear Lagrangian function in (Agarwal et al., 2018). Zhang et al.
(2018) proposed an adversarial debiasing technique to find a fair classifier under equalized odds, equal opportunity, and statistical parity. However, there is no guarantee that this technique finds the global optimal solution. The main differences between the present work and existing in-processing approaches are as follows: 1) we consider a non-convex problem for finding a fair predictor satisfying the Equalized Loss fairness notion, which, to the best of our knowledge, has not been studied in the literature; 2) we propose algorithms for efficiently finding the global optimal solution to this non-convex problem; 3) our algorithms are easy to implement and are applicable to both regression and classification problems; 4) unlike (Agarwal et al., 2018), our algorithms are not limited to a finite hypothesis space.

Non-convex optimization problems have also been studied in other contexts, such as learning over-parametrized models. For example, deep neural networks are typically trained by solving unconstrained, non-convex problems, and methods such as gradient descent may not be suitable as they are likely to find saddle points rather than optima. To address this issue, recent works incorporate higher-order derivatives (Celis et al., 2020; Anandkumar & Ge, 2016) or noisy gradients (Ge et al., 2015). However, these methods only find a local minimum (not a global minimum) and are not applicable to our problem with a non-convex constraint. In this work, we develop novel algorithms that efficiently find the fair (sub-)optimal solutions under the Equalized Loss fairness constraint. Note that while our approach and algorithms are presented in the context of fair machine learning, they are applicable to any problem that can be formulated as a constrained optimization problem of the form min_w L_0(w) + αL_1(w) s.t. |L_0(w) − L_1(w)| < γ, where α is a constant.

Our main contributions and findings are as follows.
1. We study the relationship between the Equalized Loss (EL) and Bounded Group Loss (BGL) fairness notions. We show that, given the existence of feasible solutions satisfying (approximate) BGL fairness, imposing the (approximate) EL fairness constraint never increases the losses of both groups simultaneously (Theorems 1 and 2, Section 2.2). These results help policy makers better understand these two fairness notions.
2. We develop an algorithm (ELminimizer) that solves a non-convex constrained optimization problem to find the optimal (approximate) EL fair solution. We show that this non-convex optimization can be reduced to a sequence of convex constrained optimizations, and we analyze the convergence of the algorithm (Theorems 3 and 4, Section 3).
3. We develop a simple algorithm for finding a sub-optimal (approximate) EL fair solution. We show that a sub-optimal solution is a linear combination of the optimal solutions to two unconstrained optimizations and can be found efficiently without solving constrained optimizations (Theorem 5, Section 4).
4. We conduct a sample complexity analysis and provide guarantees on generalization performance (Theorem 7, Section 5).
5. We validate the theoretical results through experiments on real-world data (Section 6).

2 PROBLEM FORMULATION

Consider a supervised learning problem where the training dataset consists of triples (X, A, Y) from two social groups.
The random variable X ∈ 𝒳 ⊂ R^{d_x} is the feature vector (in the form of a column vector), A ∈ {0, 1} is the sensitive attribute (e.g., race, gender) indicating group membership, and Y ∈ 𝒴 ⊂ R is the label. The feature vector X may or may not include the sensitive attribute A. The label Y can be either discrete or continuous depending on the given problem: if Y is discrete (resp. continuous), then the problem is a classification (resp. regression) problem. Let F be a set of predictors f_w : 𝒳 → R parameterized by weight vector w ∈ R^{d_w}.¹ Consider a loss function l : 𝒴 × 𝒳 → R, where l(Y, f_w(X)) measures the error of f_w in predicting label Y. Denote the expected loss with respect to the joint probability distribution of (X, Y) by L(w) := E{l(Y, f_w(X))}. Then L_a(w) := E{l(Y, f_w(X)) | A = a} denotes the expected loss of the group with attribute A = a.

A predictor that minimizes the total expected loss, i.e., argmin_w L(w), can be biased against certain groups. To mitigate the risk of unfairness, various fairness notions have been proposed in the literature. Some of the most commonly used notions of group fairness are as follows: 1) Statistical Parity (SP) (Dwork et al., 2012) implies that the predictor and the sensitive attribute should be independent, i.e., f_w(X) ⊥ A; 2) Equal Opportunity (EqOpt) (Hardt et al., 2016) requires that, conditional on Y = 1, the prediction and the sensitive attribute are independent, i.e., f_w(X) ⊥ A | Y = 1; 3) Equalized Odds (EO) (Hardt et al., 2016) requires conditional independence between the prediction and the sensitive attribute given Y, i.e., f_w(X) ⊥ A | Y; 4) Equalized Loss (EL) (Zhang et al., 2019; Berk et al., 2021) requires that the losses experienced by different groups be equalized, i.e., L_0(w) = L_1(w); 5) Bounded Group Loss (BGL) (Agarwal et al., 2019) requires that the loss experienced by each group be bounded.

With fairness considerations, the goal is to find a weight vector w that minimizes the total expected loss in predicting Y given X, subject to a fairness condition, i.e., min_w L(w) s.t. a fairness constraint. This is a typical formulation in the fair machine learning literature, and the above method of finding a fair predictor belongs to the in-processing approaches. Because such a constrained optimization can be non-convex, finding the optimal solution efficiently can be challenging. In this work, we develop novel algorithms that solve this optimization problem under the EL fairness constraint.

2.1 EQUALIZED LOSS (EL) AND BOUNDED GROUP LOSS (BGL)

As mentioned in Section 2, various fairness notions have been introduced in the literature. Among them, Statistical Parity (SP), Equal Opportunity (EqOpt), Equalized Odds (EO), and Bounded Group Loss (BGL) have been studied extensively, and both in-processing and post-processing approaches have been developed to satisfy these constraints (Dwork et al., 2012; Agarwal et al., 2018; Hardt et al., 2016; Zafar et al., 2019; Fitzsimons et al., 2019). Note that different fairness notions may conflict with each other, and which one to adopt is application and context dependent. In this work, we are interested in the Equalized Loss (EL) fairness notion (Zhang et al., 2019; Berk et al., 2021), which implies that the prediction error should be the same across different groups,² and the Bounded Group Loss (BGL) fairness notion (Agarwal et al., 2019), which requires the prediction error of every group to be bounded. We consider a relaxed version of EL fairness, defined as follows.
Definition 1 (γ-EL) A predictor f satisfies γ-EL if the expected losses experienced by the two demographic groups satisfy

−γ ≤ L_0(w) − L_1(w) ≤ γ.   (1)

Parameter γ controls the degree of fairness; a smaller γ implies stronger fairness. When γ = 0, exact EL fairness is attained. We say a group is disadvantaged if it experiences the larger loss. Similarly, Bounded Group Loss (BGL) fairness is formally defined as follows.

Definition 2 (γ-BGL) A predictor f satisfies γ-BGL if the expected loss of each demographic group is bounded by γ, i.e.,

L_a(w) ≤ γ, ∀a ∈ {0, 1}.   (2)

¹Predictive models such as logistic regression, linear regression, deep learning models, etc., are parameterized by a weight vector.
²EL has also been referred to as Overall Accuracy Equality in (Berk et al., 2021; Agarwal et al., 2019).

2.2 RELATIONS BETWEEN γ-EL AND γ-BGL

In this section, we formally study the relations between the γ-EL and γ-BGL fairness notions. Under the γ-EL fairness constraint, finding a fair predictor is equivalent to solving the following constrained optimization problem:

min_w L(w)  s.t.  |L_0(w) − L_1(w)| ≤ γ.   (3)

Let w^* denote the solution to (3), so that f_{w^*} is the optimal γ-EL fair predictor. Theorem 1 below shows that, given the existence of a feasible point satisfying γ-BGL fairness, it is impossible for both groups to experience a loss larger than γ under the optimal γ-EL fair predictor.

Theorem 1 Consider the following optimization for finding the optimal γ-BGL fair predictor,

min_w L(w)  s.t.  L_a(w) ≤ γ, ∀a ∈ {0, 1}.   (4)

If L_0(w^*) > γ and L_1(w^*) > γ, then optimization problem (4) does not have a feasible point.

Proof 1 We prove by contradiction. Assume w̃ is a feasible point of optimization (4). Note that w̃ is then also a feasible point of optimization problem (3). Since both L_0(w^*) and L_1(w^*) are larger than γ, we have,

E{l(Y, f_{w^*})} = Pr{A = 0} L_0(w^*) + Pr{A = 1} L_1(w^*) > γ,
E{l(Y, f_{w̃})} = Pr{A = 0} L_0(w̃) + Pr{A = 1} L_1(w̃) ≤ γ.

Therefore, w^* cannot be the solution to (3). This contradiction proves that optimization problem (4) cannot have a feasible point.

Theorem 1 implies that if the γ-EL notion leads to an increase in the loss of every demographic group, then there is no feasible predictor under γ-BGL.³ The next theorem further shows that the optimal γ-EL fair predictor must satisfy 2γ-BGL.

Theorem 2 Assume optimization problem (4) has at least one feasible point. Then,

min{L_0(w^*), L_1(w^*)} ≤ γ  and  max{L_0(w^*), L_1(w^*)} ≤ 2γ.

Proof 2 Let w̃ be a feasible point of optimization problem (4); then w̃ is also a feasible point of (3). If min{L_0(w^*), L_1(w^*)} > γ, then L(w^*) > γ ≥ L(w̃) must hold. This is a contradiction because it implies that w^* is not an optimal solution to (3). Therefore, min{L_0(w^*), L_1(w^*)} ≤ γ. Similarly, we prove max{L_0(w^*), L_1(w^*)} ≤ 2γ by contradiction. Assume max{L_0(w^*), L_1(w^*)} > 2γ. Then max{L_0(w^*), L_1(w^*)} − min{L_0(w^*), L_1(w^*)} > γ, which shows that w^* is not a feasible point of (3). This is a contradiction. Therefore, max{L_0(w^*), L_1(w^*)} ≤ 2γ.

Theorems 1 and 2 investigate the relations between the EL and BGL fairness notions. Since γ-EL implies 2γ-BGL and additionally requires approximate equality across groups, we focus on the γ-EL fairness notion in the rest of the paper. Because optimization problem (3) is non-convex, finding the optimal γ-EL fair solution efficiently can be challenging.

³Theorem 1 is related to (Agarwal et al., 2019). In particular, they considered γ-BGL fairness and mentioned that the equalized loss fairness notion may increase the loss of both groups.
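To make Definitions 1 and 2 concrete before turning to the algorithms, here is a small Python sketch that checks the two notions for given group losses; the numeric values are hypothetical:

```python
def satisfies_gamma_el(loss_0, loss_1, gamma):
    """Definition 1: the two group losses differ by at most gamma."""
    return abs(loss_0 - loss_1) <= gamma

def satisfies_gamma_bgl(loss_0, loss_1, gamma):
    """Definition 2: every group's loss is bounded by gamma."""
    return max(loss_0, loss_1) <= gamma

# Hypothetical group losses for some fixed predictor.
l0, l1 = 0.8, 1.5
print(satisfies_gamma_el(l0, l1, gamma=0.7))    # True: |0.8 - 1.5| <= 0.7
print(satisfies_gamma_bgl(l0, l1, gamma=1.0))   # False: max(0.8, 1.5) > 1.0
```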
In the next sections, we propose a number of algorithms that are easy to implement and can solve optimization (3) efficiently.

3 OPTIMAL FAIR MODEL UNDER EL FAIRNESS

In this section, we consider optimization problem (3) under the EL fairness constraint. This optimization problem is non-convex, and finding the global optimal solution is difficult. However, we propose an algorithm which finds the solution to the non-convex optimization (3) by solving a sequence of convex optimization problems. Before presenting the algorithm, we introduce two assumptions.

Assumption 1 L_0(w), L_1(w), and L(w) are strictly convex functions in w.

Algorithm 1: Function ELminimizer
1 ELminimizer(w_{G_0}, w_{G_1}, ε, γ):
2 λ_start^{(0)} = L_0(w_{G_0})
3 λ_end^{(0)} = L_0(w_{G_1})
4 Define L̃_1(w) = L_1(w) + γ
5 i = 0
6 while λ_end^{(i)} − λ_start^{(i)} > ε do
7   λ_mid^{(i)} = (λ_end^{(i)} + λ_start^{(i)})/2;
8   Solve the following convex optimization problem,
      w_i^* = argmin_w L̃_1(w)  s.t.  L_0(w) ≤ λ_mid^{(i)}   (5)
9   λ^{(i)} = L̃_1(w_i^*);
10  if λ^{(i)} ≥ λ_mid^{(i)} then
11    λ_start^{(i+1)} = λ_mid^{(i)}; λ_end^{(i+1)} = λ_end^{(i)};
12  end
13  else
14    λ_end^{(i+1)} = λ_mid^{(i)}; λ_start^{(i+1)} = λ_start^{(i)};
15  end
16  i = i + 1;
17 end
18 Return w_i^*

Example 1 Consider a linear classifier f_w(X) = w^T X with squared loss l(Y, f_w(X)) = (w^T X − Y)². In this example, E{l(Y, f_w(X))} = w^T E{X X^T} w − 2E{Y X^T} w + E{Y²} is strictly convex in w if the covariance matrix E{X X^T} is positive definite. Similarly, L_a(w) is strictly convex if E{X X^T | A = a} is positive definite.

Let w_{G_a} be the weight vector minimizing the loss of group A = a. That is,

w_{G_a} = argmin_w L_a(w).   (6)

Since (6) is an unconstrained convex optimization problem, w_{G_a} can be found efficiently via the first-order condition or gradient descent. We make the following assumption.

Assumption 2 We assume that the following holds,

L_0(w_{G_0}) ≤ L_1(w_{G_0})  and  L_1(w_{G_1}) ≤ L_0(w_{G_1}).

Algorithm 2: Solving Optimization Problem (3)
Input: w_{G_0}, w_{G_1}, ε, γ
1 w_γ = ELminimizer(w_{G_0}, w_{G_1}, ε, γ);
2 w_{−γ} = ELminimizer(w_{G_0}, w_{G_1}, ε, −γ);
3 if L(w_γ) ≤ L(w_{−γ}) then
4   w^* = w_γ;
5 end
6 else
7   w^* = w_{−γ};
8 end
Output: w^*

Assumption 2 implies that when a group experiences its lowest possible loss, it should not be the disadvantaged group. Under Assumption 2, given w_{G_0} and w_{G_1}, Algorithm 1 with γ = 0 (i.e., ELminimizer(w_{G_0}, w_{G_1}, ε, 0)) finds the optimal 0-EL fair solution, where parameter ε > 0 specifies the stopping criterion; as ε → 0, the output approaches the optimal solution. Intuitively, Algorithm 1 solves the non-convex optimization (3) by solving a sequence of convex constrained optimization problems. For γ > 0, Algorithm 2 finds the optimal predictor under γ-EL using the function ELminimizer. The convergence of Algorithm 1 to the optimal 0-EL fair solution, and of Algorithm 2 to the optimal γ-EL fair solution, is established in the following theorems.

Theorem 3 Consider the sequences {λ_mid^{(i)} | i = 1, 2, . . .} and {w_i^* | i = 1, 2, . . .} generated by Algorithm 1 with γ = 0, i.e., ELminimizer(w_{G_0}, w_{G_1}, ε → 0, 0). Under Assumptions 1 and 2, we have,

lim_{i→∞} w_i^* = w^*  and  lim_{i→∞} λ_mid^{(i)} = E{l(Y, f_{w^*}(X))},

where f_{w^*} is the optimal 0-EL fair predictor.

Similarly, we can prove convergence for approximate EL fairness when γ ≠ 0.

Theorem 4 Assume that L_0(w_{G_0}) − L_1(w_{G_0}) < −γ and L_0(w_{G_1}) − L_1(w_{G_1}) > γ.
Then, as ε → 0, the output of Algorithm 2 converges to the optimal γ-EL fair solution w^*.
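As a rough illustration of Algorithms 1 and 2, here is a compact Python sketch. SciPy's SLSQP stands in for the inner constrained convex solver of sub-problem (5), the loss callables are hypothetical, and L is taken as L_0 + L_1 (equal group weights), so this is a sketch under assumptions rather than the paper's reference implementation:

```python
import numpy as np
from scipy.optimize import minimize

def el_minimizer(L0, L1, w_G0, w_G1, eps, gamma):
    """Sketch of Algorithm 1: bisection over lambda, with SLSQP
    solving the constrained convex sub-problem (5) at each step."""
    lam_lo, lam_hi = L0(w_G0), L0(w_G1)
    L1_tilde = lambda w: L1(w) + gamma
    w_star = np.asarray(w_G0, dtype=float)
    while lam_hi - lam_lo > eps:
        lam_mid = 0.5 * (lam_lo + lam_hi)
        res = minimize(
            L1_tilde, w_star, method="SLSQP",
            constraints=[{"type": "ineq",
                          "fun": lambda w, lm=lam_mid: lm - L0(w)}],
        )
        w_star = res.x
        if L1_tilde(w_star) >= lam_mid:
            lam_lo = lam_mid      # lambda was too small
        else:
            lam_hi = lam_mid      # lambda was too large
    return w_star

def solve_gamma_el(L0, L1, w_G0, w_G1, eps, gamma):
    """Sketch of Algorithm 2: keep the better of the +gamma / -gamma runs."""
    L = lambda w: L0(w) + L1(w)   # assumes equal group weights
    w_p = el_minimizer(L0, L1, w_G0, w_G1, eps, gamma)
    w_m = el_minimizer(L0, L1, w_G0, w_G1, eps, -gamma)
    return w_p if L(w_p) <= L(w_m) else w_m
```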
Since h(β) is strictly increasing in β, among all β that satisfies γ-EL fairness, we would choose the smallest one. The method for finding a sub-optimal solution to optimization (3) is described in Algorithm 3. Algorithm 3: Sub-optimal solution to optimization problem (3) 1 Input: wGâ ,wO, , γ 2 Initialization: gγ(β) = g(β)− γ, i = 0, β(0)start = 0, β (0) end = 1 3 if gγ(0) ≤ 0 then 4 w = wO, and go to line 16; 5 end 6 while β(i)end − β (i) start > do 7 β (i) mid = (β (i) start + β (i) end)/2; 8 if gγ(β (i) mid) ≥ 0 then 9 β (i+1) start = β (i) mid, β (i+1) end = β (i) end; 10 end 11 else 12 β (i+1) start = β (i) start, β (i+1) end = β (i) mid; 13 end 14 end 15 w = (1− β(i)mid)wO + β (i) midwGâ ; 16 Output: w Note that while loop in Algorithm 3 is repeated for O(log(1/ )) times. Since the time complexity of operations in each loop isO(1), the total time complexity of Algorithm 3 isO(log(1/ )). We can formally prove that the output returned by Algorithm 3 satisfies γ-EL fairness constraint. Theorem 6 Assume that Assumption 1 holds. If gγ(0) ≤ 0, then wO satisfies the γ-EL fairness; if gγ(0) > 0, then limi→∞ β (i) mid = β (∞) mid exists, and (1 − β (∞) mid)wO + β (∞) midwGâ satisfies the γ-EL fairness constraint. It is worth mentioning, since h(β) is incrasing, we are intrested in finding the smallest possible β that (1 − β)wO + βwGâ satisfies γ-EL. Here, β (∞) mid is the smallest possible β under which (1 − β)wO + βwGâ satisfies γ-EL. 5 GENERALIZATION PERFORMANCE So far we proposed algorithms for solving optimization (3). In practice, the joint probability distribution of (X,A, Y ) is often unknown and the expected loss needs to be estimated using the empirical loss. Specifically, given n samples (X i, Ai, Yi), i = 1, . . . , n and predictor fw , the empirical losses of entire population and each group are defined as follows, L̂(w) = 1 n n∑ i=1 l(Yi, fw(X i)); L̂a(w) = 1 na ∑ i:Ai=a l(Yi, fw(X i)), (8) where na = |{i|Ai = a}|. Because γ-EL fairness constraint is defined in terms of expected loss, the optimization problem of finding an optimal γ-EL fair predictor using empirical losses is as follows, ŵ = argmin w L̂(w) s.t. |L̂0(w)− L̂1(w)| ≤ γ̂. (9) Note that γ̂ 6= γ and one goal in this section is to find relation between γ̂ and γ. We aim to investigate how to determine γ̂ so that with high probability the predictor found by solving problem (9) satisfies γ-EL fairness, and meanwhile ŵ is a good estimate of w∗. To present our result, we make the following assumption. Assumption 3 With probability 1− δ, we have the following, sup fw∈F |L(w)− L̂(w)| ≤ B(δ, n,F), where B(δ, n,F) is a bound that goes to zero as n goes to infinity. Note that if the class F is learnable with respect to loss function l, then there exists such a bound B(δ, n,F) that goes to zero as n goes to infinity (Shalev-Shwartz & Ben-David, 2014).4 Theorem 7 Let F be a set of learnable functions, and let fŵ and fw∗ be the solution to (9) and (3) respectively with γ̂ = γ + ∑ a∈{0,1}B(δ, na,F). Then, with probability at least 1 − 6δ the followings hold, L(ŵ)− L(w∗) ≤ 2B(δ, n,F) and |L0(ŵ)− L1(ŵ)| ≤ γ + 2B(δ, n0,F) + 2B(δ, n1,F). Theorem 7 shows that as n0, n1 go to infinity, γ̂ → γ, and both empirical loss and expected loss satisfy γ-EL. In addition, as n goes to infinity, the expected loss at ŵ goes to the minimum possible expected loss. Therefore, solving (9) using empirical loss is equivalent to solving (3) if the number of data points from each group is sufficiently large. 
6 EXPERIMENTS 6.1 EXPERIMENT 1: QUADRATIC FUNCTIONS First, we solve optimization problem (3) given the following quadratic functions, L0(w) = (w1 + 5) 2 + (w2 + 2) 2 + (w3 + 1) 2 + 4w1 · w3, L1(w) = (w1 − 9)2 + (w2 − 9)2 + (w3 − 9)2 + w1 · w2 + w2 · w3 + w1 · w3 + 1, L(w) = L0(w) + L1(w). By the first order condition, we obtainwG0 ,wG1 ,wO as follows, wG0 = [1,−2,−3]T , wG1 = [4.5, 4.5, 4.5]T , wO = [24.53, 3.0, 26.53]T We use Algorithm 1 to find the optimal solution to (3) and run Algorithm 3 to find a sub-optimal solution. In particular, we adopt the penalty method (Ben-Tal & Zibulevsky, 1997) to solve constrained convex optimization (5), i.e., by solving the following unconstrained optimization, min w L1(w) + t ·max{0, (L0(w)− λ(i)mid)} 2, (10) where t is the penalty parameter. We solve the optimization problem (10) using gradient descent with learning rate 0.001 and 10000 iterations. We set penalty parameter t = 0.5 and increase t by 0.1 after every 250 iterations. Note that optimization (5) is convex and the penalty method for a constrained convex optimization converges to the optimal solution (Ben-Tal & Zibulevsky, 1997). We compare the our algorithms with a baseline: the solution to optimization problem (3) found using the penalty method, i.e., by solving the following unconstrained optimization, min w L0(w) +L1(w) + t · [ max{0, (L0(w)− L1(w)− γ)}2 +max{0, (L1(w)− L0(w)− γ)}2 ] . (11) When solving the optimization problem (11), we use learning rate 0.001. We set penalty parameter t = 0.5 and increase it by 0.1 every 250 iterations. Figure 1a illustrates the overall loss L(w) at the (sub-) optimal points obtained from Algorithms 2 and 3 and the baseline. x-axis represents fairness parameter γ. Since Algorithm 2 converges to the optimal solution, it achieves the smallest loss. Figure 1b illustrates the distance of the optimal point w∗ from the sub-optimal solutions obtained by Algorithm 3 and the baseline penalty method. It shows that when γ is sufficiently large (less strict fairness constraint), a sub-optimal solution generated by Algorithm 3 is closer to the optimal solution than the solution found using the baseline method. 4As an example, if F is a compact subset of linear predictors in Reproducing Kernel Hilbert Space (RKHS) and loss l(y, f(x)) is Lipschitz in f(x) (second argument), then Assumption 3 can be satisfied (Bartlett & Mendelson, 2002). Vast majority of linear predictors such as support vector machine and logistic regression can be defined in RKHS. 6.2 EXPERIMENT 2: LOGISTIC REGRESSION AND THE ADULT INCOME DATASET The adult income dataset is a public dataset containing the information of 48,842 individuals (Kohavi, 1996). Each data point includes 14 features including age, education, race, etc. Consider race (White or Black) as the sensitive attribute, we denote White demographic group byA = 0 and Black group by A = 1. We first pre-process the dataset by removing the data points with a missing value or with the race other than Black and White and obtain 41,961 data points. Among these data points, 4585 belong to Black demographic group. For each data point, we convert all the categorical features to one-hot vectors and result in dx = 110 dimensional features. We then normalize the feature vectors such that they have zero mean value and unit variance. Our goal is to find a logistic regression model satisfying γ-EL to predict whether the income of an individual is above $50K or not. 
We use Algorithm 2 and Algorithm 3 with = 0.01 to find the optimal logistic regression model under EL. We use the penalty method described in equation (11) as the baseline. Similar to Experiment 1, we set learning rate as 0.001 for solving (10) and (11). Penalty parameter t is set to be 0.5 and increases by 0.1 every 250 iterations. Figure 1c illustrates the loss of logistic regression model trained by Algorithm 2, Algorithm 3, and the baseline. It shows that Algorithm 2 outperforms the baseline; this is because that the baseline only finds a sub-optimal solution while Algorithm 2 finds the global optimal solution. As mentioned in Section 4, Algorithm 3 finds a sub-optimal solution that satisfies γ-EL, and its performance can vary from case to case. Even though Algorithm 3 has a good performance in Experiment 1, it does not outperform the baseline in Experiment 2. Figure 1d illustrates the distances from the optimal point w∗ to the sub-optimal solutions obtained by Algorithm 3 and the baseline penalty method. It shows that the distance fromw∗ to the solution obtained under Algorithm 3 is slightly larger than that fromw∗ to the solution obtained under the baseline. 7 CONCLUSION In this work, we studied the problem of fair supervised learning under the Equalized Loss (EL) fairness notion which requires the prediction error/loss to be the same across different demographic groups. By imposing EL constraint, the learning problem can be formulated as a non-convex optimization problem. We introduce a number of algorithms that find the global optimal solution to this non-convex optimization problem. In particular, we showed that the optimal solution to such a non-convex problem can be found by solving a sequence of convex constrained optimizations. We also introduced a simple algorithm for finding a sub-optimal solution to the non-convex problem without solving constrained convex optimization problems. In addition to the theoretical guarantees, we demonstrated the performance of the proposed algorithm through numerical experiments. 8 REPRODUCIBILITY STATEMENT Regarding the theoretical results: This paper includes six Theorems. The proof of Theorem 1 and Theorem 2 have been provided in the main text. Due to the page limit, the proofs of the other theorems have been provided in the appendix. Regarding the numerical examples: the first experiment does not use any dataset, and we study the performance of our proposed method on quadratic objective functions. The values for hyperparameters (including learning and penalty parameter) have been explicitly mentioned in section 6. In the second numerical example, we used the adult income dataset which is a well-known public dataset in our community. We explained the data pre-processing procedure in Section 6.2 in details. 9 ETHICS STATEMENT In this work, we proposed algorithms to find fair predictors under the EL fairness notion. We want to emphasize that selecting a right fairness notion depends on the application and the authors do not make any suggestions to policy/law makers about choosing or avoiding this fairness notion. APPENDIX PROOFS In order to prove Theorem 3, we first introduce two lemmas. Lemma 1 Under assumption 2, there exists w ∈ Rdw such that L0(w) = L1(w) = L(w) and λ (1) start ≤ L(w) ≤ λ (1) end. Proof. Let h0(β) = L0((1 − β)wG0 + βwG1) and h1(β) = L1((1 − β)wG0 + βwG1), and h(β) = h0(β) − h1(β), β ∈ [0, 1]. Note that ∇wLa(wGa) = 0 because wGa is the minimizer of La(w). Moreover, ∇2wLa(w) is positive semi-definit because La(.) 
is a strictly convex function. First, we show that L0((1− β)wG0 + βwG1) is an increasing function in β, and L1((1− β)wG0 + βwG1) is a decreasing function in β. Note that h ′ 0(0) = (wG1 − wG0)T∇wL0(wG0) = 0, and h′′0(0) = (wG1 −wG0)T∇2wL0(wG0)(wG1 −wG0) ≥ 0. This implies that h′0(β) ≥ 0,∀β ∈ [0, 1]. Similarly, we can show that h′1(β) ≤ 0,∀β ∈ [0, 1]. Note that under Assumption (2), h(0) < 0 and h(1) > 0. Therefore, by the intermediate value theorem, the exists β ∈ (0, 1) such that h(β) = 0. Definew = (1− β)wG0 + βwG1 . We have, h(β) = 0 =⇒ L0(w) = L1(w) = L(w) (12) wG0 is minimizer of L0 =⇒ L(w) = L0(w) ≥ λ (1) start (13) h′0(β) ≥ 0,∀β ∈ [0, 1] =⇒ h0(1) ≥ h0(β) =⇒ λ (1) end ≥ L0(w) = L(w) (14) Lemma 2 L0(w∗i ) = λ (i) mid, wherew ∗ i is the solution to (5). Proof. We proceed by contradiction. Assume that L0(w∗i ) < λ (i) mid. SincewG1 is not in the feasible set of (5),∇wL1(w∗i ) 6= 0. This is a contradiction becausew∗i is an interior point of the feasible set of a convex optimization and cannot be optimal if∇wL1(w∗i ) is equal to zero. Proof [Theorem 3] Let Ii = [λ (i) start, λ (i) end] be a sequence of intervals. It is easy to see that I1 ⊇ I2 ⊇ · · · and λ (i) end−λ (i) start → 0 as i→∞. Therefore, by the Nested Interval Theorem, ∩∞i=1Ii consists of exactly one real number λ∗, and both λ(i)start and λ (i) end converge to λ ∗. Because λ(i)mid = λ (i) start+λ (i) start 2 , λ (i) mid also converges to λ∗. Now, we show that L(w∗) ∈ Ii for all i. Note that L(w∗) = L0(w∗) ≥ λ(1)start because wG0 is the minimizer of L0. Moreover, λ (1) end ≥ L(w∗) otherwise L(w) < L(w∗) (w is defined in Lemma 1) andw∗ is not optimal solution under 0-EL. Therefore, L(w∗) ∈ I1. Now we proceed by induction. Suppose L(w∗) ∈ Ii. We show that L(w∗) ∈ Ii+1 as well. We consider two cases. • L(w∗) ≤ λ(i)mid. In this case w∗ is a feasible point for (5), and λ(i) ≤ L(w∗) ≤ λ (i) mid. Therefore, L(w∗) ∈ Ii+1. • L(w∗) < λ(i)mid. In this case, we proceed by contradiction to show that λ (i) ≥ λ(i)mid. Assume that λ(i) < λ(i)mid. Define g(β) = g0(β)−g1(β), where gi(β) = Li((1−β)wG0 + βw∗i ). Note that λ (i) = g1(1) By Lemma 2, g0(1) = λ (i) mid. Therefore, g(1) = λ (i) mid − λ(i) > 0. Moreover, under Assumption 2, g(0) < 0. Therefore, by the intermediate value theorem, there exists β ∈ (0, 1) such that g(β) = 0. Similar to the proof of Lemma 1, we can show that g0(β) in an increasing function for all β ∈ [0, 1]. As a result g0(β) < g0(1) = λ (i) mid. Definew = (1− β)wG0 + βw∗i . We have, g0(β) = L0(w) = L1(w) = L(w) < λ (i) mid (15) L(w∗) < λ (i) mid (16) The last two equations imply that w∗ is not an optimal fair solution under 0-EL fairness constraint. This is a contradiction. Therefore, if L(w∗) > λ(i)mid, then λ (i) ≥ λ(i)mid. As a result, L(w∗) ∈ Ii+1 By two above cases and the nested interval theorem, we conclude that, L(w∗) ∈ ∩∞i=1Ii, lim i→∞ λ (i) mid = L(w ∗) For the second part of the theorem, consider the following, w∗∞ = argmin w L1(w)s.t., L0(w) ≤ λ∞mid = L(w∗) lim i→∞ w∗i = w ∗ ∞ In order to show that w∗∞ is equal to w ∗, we proceed by contradiction. Suppose w∗∞ 6= w∗. As a result, L1(w∗∞) < L(w ∗). Define η(β) = η0(β)− η1(β), where ηi(β) = Li((1− β)wG0 + βw∗∞). Note that L1(w∗∞) = η1(1). By Lemma 2, the condition in (5) is binding and η0(1) = L(w ∗). Therefore, η(1) = L(w∗) − L1(w∗∞) > 0. Moreover, under Assumption 2, η(0) < 0. Therefore, by the intermediate value theorem, there exists β ∈ (0, 1) such that η(β) = 0. 
Similar to the proof of Lemma 1, we can show that η_0(β) is an increasing function for all β ∈ [0, 1]. As a result, η_0(β̄) < η_0(1) = L(w^*). Define w̄ = (1−β̄) w_{G_0} + β̄ w_∞^*. We have:

η_0(β̄) = L_0(w̄) = L_1(w̄) = L(w̄) < L(w^*) (17)

The last equation implies that w^* is not an optimal fair solution under the 0-EL fairness constraint. This is a contradiction. As a result, w_∞^* = w^*.

Proof [Theorem 4]. Let w^* be the optimal weight vector under γ-EL.

Step 1. We show that one of the following holds:

L_0(w^*) − L_1(w^*) = γ (18)

L_0(w^*) − L_1(w^*) = −γ (19)

Proof by contradiction. Assume −γ < L_0(w^*) − L_1(w^*) < γ. This implies that w^* is an interior point of the feasible set of optimization problem (3). Since w^* ≠ w_O^*, we have ∇L(w^*) ≠ 0. As a result, the objective function of (3) can be improved at w^* by moving in the direction of −∇L(w^*). This is a contradiction. Therefore, |L_0(w^*) − L_1(w^*)| = γ.

Step 2. w_γ = ELminimizer(w_{G_0}, w_{G_1}, γ) is the solution to the following optimization problem:

min_w Pr{A = 0} L_0(w) + Pr{A = 1} L_1(w), s.t. L_0(w) − L_1(w) = γ (20)

To show this claim, notice that the solution to optimization problem (20) is the same as that of the following:

min_w Pr{A = 0} L_0(w) + Pr{A = 1} L̃_1(w), s.t. L_0(w) − L̃_1(w) = 0, (21)

where L̃_1(w) = L_1(w) + γ. Since L_0(w_{G_0}) − L̃_1(w_{G_0}) < 0 and L_0(w_{G_1}) − L̃_1(w_{G_1}) > 0, by Theorem 3 we know that w_γ = ELminimizer(w_{G_0}, w_{G_1}, γ) finds the solution to (21). Lastly, because |L_0(w^*) − L_1(w^*)| = γ, we have:

w^* = w_γ if L(w_γ) ≤ L(w_{−γ}), and w^* = w_{−γ} otherwise. (22)

Thus, Algorithm 2 finds the solution to (3).

Proof [Theorem 5].

1. Under Assumption 2, g(1) < 0. Moreover, g(0) ≥ 0. Therefore, by the intermediate value theorem, there exists β_0 ∈ [0, 1] such that g(β_0) = 0.

2. Since w_O is the minimizer of L(w), h'(0) = 0. Moreover, since L(w) is strictly convex, h''(0) > 0. As a result, h'(β) > 0 for β > 0.

3. Since w_{G_â} is the minimizer of L_â(w) and L_â(w) is strictly convex, L_â((1−β) w_O + β w_{G_â}) is a strictly decreasing function. Since h(β) = Pr{A = â} L_â((1−β) w_O + β w_{G_â}) + Pr{A = 1−â} L_{1−â}((1−β) w_O + β w_{G_â}) is strictly increasing and L_â((1−β) w_O + β w_{G_â}) is strictly decreasing, we conclude that L_{1−â}((1−β) w_O + β w_{G_â}) is strictly increasing. As a result, g must be strictly decreasing.

Proof [Theorem 6]. First, we show that if g_γ(0) ≤ 0, then w_O satisfies γ-EL:

g_γ(0) ≤ 0 ⟹ g(0) − γ ≤ 0 ⟹ L_â(w_O) − L_{1−â}(w_O) ≤ γ.

Moreover, L_â(w_O) − L_{1−â}(w_O) ≥ 0 because â = argmax_a L_a(w_O). Therefore, γ-EL is satisfied.

Second, assume that g_γ(0) > 0. Under Assumption 1, g_γ(1) = L_â(w_{G_â}) − L_{1−â}(w_{G_â}) − γ < 0. Therefore, by the intermediate value theorem, there exists β_0 such that g_γ(β_0) = 0. Moreover, g_γ is a strictly decreasing function. Therefore, the binary search proposed in Algorithm 3 converges to the root of g_γ(β). As a result, (1 − β_mid^(∞)) w_O + β_mid^(∞) w_{G_â} satisfies γ-EL. Note that since g(β) is decreasing, β_mid^(∞) is the smallest possible β under which (1−β) w_O + β w_{G_â} satisfies γ-EL, and since h is increasing, the smallest possible β yields the best accuracy.

Proof [Theorem 7]. By the triangle inequality, the following holds:

sup_{f_w ∈ F} | |L_0(w) − L_1(w)| − |L̂_0(w) − L̂_1(w)| | ≤ sup_{f_w ∈ F} |L_0(w) − L̂_0(w)| + sup_{f_w ∈ F} |L_1(w) − L̂_1(w)|. (23)

Therefore, with probability at least 1 − 2δ, we have:

sup_{f_w ∈ F} | |L_0(w) − L_1(w)| − |L̂_0(w) − L̂_1(w)| | ≤ B(δ, n_0, F) + B(δ, n_1, F) (24)

As a result, the following holds with probability at least 1 − 2δ:

{w | f_w ∈ F, |L_0(w) − L_1(w)| ≤ γ} ⊆ {w | f_w ∈ F, |L̂_0(w) − L̂_1(w)| ≤ γ̂} (25)

Now consider the following decomposition:

L(ŵ) − L(w^*) = [L(ŵ) − L̂(ŵ)] + [L̂(ŵ) − L̂(w^*)] + [L̂(w^*) − L(w^*)] (26)

By (25), L̂(ŵ) − L̂(w^*) ≤ 0 with probability at least 1 − 2δ. Thus, with probability at least 1 − 2δ, we have:

L(ŵ) − L(w^*) ≤ [L(ŵ) − L̂(ŵ)] + [L̂(w^*) − L(w^*)]. (27)

Therefore, under Assumption 3, we can conclude that with probability at least 1 − 6δ,

L(ŵ) − L(w^*) ≤ 2B(δ, n, F).

In addition, by (24), with probability at least 1 − 2δ, we have:

|L_0(ŵ) − L_1(ŵ)| ≤ B(δ, n_0, F) + B(δ, n_1, F) + |L̂_0(ŵ) − L̂_1(ŵ)| ≤ γ̂ + B(δ, n_0, F) + B(δ, n_1, F) = γ + 2B(δ, n_0, F) + 2B(δ, n_1, F).
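To make the binary search analyzed in Theorem 6 (Algorithm 3) concrete, here is a minimal Python sketch of the bisection over β. The callable g_gamma and the model names w_O and w_G_hat stand in for the quantities in the proof; this is an illustrative sketch, not the authors' implementation.

```python
# Minimal sketch of the bisection analyzed in Theorem 6 (Algorithm 3).
# Assumptions: g_gamma is strictly decreasing on [0, 1] with
# g_gamma(0) > 0 and g_gamma(1) < 0, as in the proof.
def bisect_beta(g_gamma, tol=1e-8, max_iter=200):
    lo, hi = 0.0, 1.0
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        if g_gamma(mid) > 0:
            lo = mid  # g is still positive at mid, so the root lies to the right
        else:
            hi = mid  # g is non-positive at mid, so the root lies at or left of mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Toy usage: a strictly decreasing g_gamma with root at beta = 0.7.
beta_mid = bisect_beta(lambda beta: 0.7 - beta)
# The fair model would then be (1 - beta_mid) * w_O + beta_mid * w_G_hat.
```

Because g_γ is strictly decreasing, the bracket [lo, hi] always contains the root, so the returned β_mid converges to the smallest β satisfying γ-EL, matching the argument in the proof.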
1. What is the focus of the paper regarding fair supervised learning?
2. What are the strengths of the proposed algorithms, particularly their ability to find global (sub-)optimal solutions?
3. What are the weaknesses of the paper regarding its theoretical analysis and lack of technical comparisons with other works?
4. How does the reviewer assess the novelty and significance of the paper's contribution to fairness research?
Summary Of The Paper Review
Summary Of The Paper

This paper studies the problem of fair supervised learning under the Equalized Loss (EL) fairness notion, which is formulated as a non-convex constrained optimization problem. The authors introduce two algorithms that find the global (sub-)optimal solution by solving a sequence of convex (constrained) optimizations. Empirically, the algorithms perform well.

Review

Strengths:
- This paper considers a meaningful fairness notion, EL, which is an important topic in fairness.
- This paper provides a complete analysis of how to handle the non-convex constrained fair learning problem. The algorithms have provable guarantees.

Weaknesses:
- The theoretical proofs are standard and do not seem difficult. There is no discussion of the technical contributions.
- The intuition behind the proposed algorithms seems to be dividing the constraint space into pieces, i.e., solving (5) for multiple \lambda_mid values and then selecting an optimal one from the satisfiable solutions. A similar idea appears in a prior paper [L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi: Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees. FAT 2019: 319-328]. Could the authors provide a technical comparison?

I have read the comment of Reviewer toHL. I agree that the missing reference (arXiv:1802.08626) is very important. The authors should provide a detailed comparison with this work.
ICLR
Title Universal Vision-Language Dense Retrieval: Learning A Unified Representation Space for Multi-Modal Retrieval

Abstract This paper presents Universal Vision-Language Dense Retrieval (UniVL-DR), which builds a unified model for multi-modal retrieval. UniVL-DR encodes queries and multi-modality resources in an embedding space for searching candidates from different modalities. To learn a unified embedding space for multi-modal retrieval, UniVL-DR proposes two techniques: 1) a universal embedding optimization strategy, which contrastively optimizes the embedding space using modality-balanced hard negatives; 2) an image verbalization method, which bridges the modality gap between images and texts in the raw data space. UniVL-DR achieves the state-of-the-art on the multi-modal open-domain question answering benchmark, WebQA, and outperforms all retrieval models on the two subtasks, text-text retrieval and text-image retrieval. This demonstrates that universal multi-modal search can feasibly replace the divide-and-conquer pipeline with a unified model while also benefiting single/cross modality tasks. All source codes of this work are available at https://github.com/OpenMatch/UniVL-DR.

1 INTRODUCTION

Although search engines primarily focus on textual data (Singhal et al., 2001), multi-media is necessary to satisfy user needs during retrieval. A user query can be answered by information in various formats, such as a text document or a picture. The growth of multi-media content has been one of the most notable trends on the internet (Mei et al., 2014), and various studies have shown that users prefer more vivid multi-media content in search results (Datta et al., 2008).

Current multi-media search systems often employ a divide-and-conquer approach. As shown in Figure 1(a), they first conduct search in individual modalities, including text, image, video, etc. (Bajaj et al., 2016; Grubinger et al., 2008; Kwiatkowski et al., 2019; Awad et al., 2021), and then fuse the retrieval results from various verticals together, e.g., by building another ranking layer on top of these single/cross modality retrievers (Escalante et al., 2008; Grubinger et al., 2008). Relevance modeling and retrieval result fusion are usually entwined to achieve more accurate multi-modal retrieval results. However, due to the modality gap, they can only be modeled as a pipeline in divide-and-conquer systems, making it challenging to fuse retrieval results from different modalities.

In this paper, we explore the potential of universal multi-modal retrieval to build an end-to-end model and retrieve multi-modality documents for user queries. Illustrated in Figure 1(b), universal multi-modal retrieval maps queries and multi-modality resources to one universal embedding space and retrieves multi-modality candidates via KNN search. As a result, relevance modeling, cross-modality matching, and retrieval result fusion are done by one model. More specifically, we propose a Universal Vision-Language Dense Retrieval (UniVL-DR) model to obtain the representations of queries, texts, and images and learn a tailored vision-language embedding space for multi-modal retrieval. UniVL-DR optimizes the vision-language embedding space using hard negatives (Xiong et al., 2021a) and balances the modalities of these negatives to alleviate the modality preference of multi-modal retrievers.
Furthermore, UniVL-DR introduces an image verbalization method, which regards language as a kind of mentalese (Cavanagh, 2021) and mitigates the modality gap between images and texts. Our image verbalization method first aligns the semantics of image captions and figure pixels (Huang et al., 2021a) and then paraphrases the image facts. It helps to bridge the language and vision understanding modules of UniVL-DR via natural language.

To build a multi-modal retrieval benchmark, we leverage a multi-modal question answering (QA) benchmark, WebQA (Chang et al., 2022), and convert it to a standard open-domain setting: retrieving multi-modality candidates from text and image collections for a user query. Divide-and-conquer is an intuitive way to build a multi-modal retrieval system, and we pre-route queries to the oracle modality to show the upper-bound performance of such a system. Compared with the divide-and-conquer system, UniVL-DR addresses the retrieval result fusion challenge, achieves state-of-the-art multi-modal retrieval performance, and brings more than 5% improvement in single/cross modality retrieval.

Our experiments show that UniVL-DR learns an effective embedding space for multi-modal retrieval by separating texts and images into different areas and guiding queries to return candidates from the corresponding modalities. Our further analyses show that UniVL-DR can alleviate overfitting to single-modality signals by balancing hard negatives during training and can bridge the modality gap between vision and language by verbalizing images. All experimental results show that learning one universal representation space is starting to benefit single-modality tasks: pretraining representation models on multi-modality data and using our techniques can learn additional signals from multiple modalities, overcome the modality boundary, and provide convincing gains in single/multi-modality tasks.

2 RELATED WORK

Document retrieval is a typical single modality retrieval task, which aims to return related documents for user queries and can be tackled with dense retrievers (Xiong et al., 2021b; Lewis et al., 2020; Zhan et al., 2021; Li et al., 2021b; Yu et al., 2021). Dense retrievers encode queries and documents with pretrained language models (Devlin et al., 2019) and map them into an embedding space to conduct an efficient search. The query and document encoders are usually contrastively trained with in-batch negatives, BM25-retrieved negatives, and hard negatives (Karpukhin et al., 2020; Xiong et al., 2021a).

Recently, much work has focused on multi-modal retrieval tasks, which retrieve texts and images to satisfy the multi-modality information needs of users (Hannan et al., 2020; Singh et al., 2021; Talmor et al., 2021; Chang et al., 2022). WebQA (Chang et al., 2022), an open-domain multi-modal question answering benchmark, was built to encourage follow-up work to represent multi-modal knowledge in a unified space and answer user queries with information from the appropriate modalities. It is a more realistic setting, which avoids synthesizing queries with templates (Talmor et al., 2021) and downplays the role of modality disambiguation (Hannan et al., 2020) in multi-modality modeling. To search information from large-scale multi-modality sources, WebQA (Chang et al., 2022) employs a divide-and-conquer pipeline to search text and image candidates with BM25 and CLIP (Radford et al., 2021) and then fuses these retrieval results using a vision-language model.
However, single-modality retrievers, such as BM25 and CLIP, usually show distinct retrieval effectiveness (Chang et al., 2022), leading to modality discrimination when fusing retrieval results from different modalities.

When building a unified multi-modal retriever, vision-language pretraining (VLP) is crucial to learn universal representations for texts and images, and it has shown success on many vision-language benchmarks (Uppal et al., 2022; Han et al., 2020; Khan et al., 2021; Du et al., 2022). Most VLP approaches encode texts and images and pretrain encoders with two tasks: masked token prediction and text-image matching (Zhang et al., 2021). These VLP methods teach vision-language models to learn the semantic alignments between texts and images, and encode images with the regional features of detected objects (Chen et al., 2019; Lu et al., 2019; Tan and Bansal, 2019; Su et al., 2020; Li et al., 2019; 2021a; Cho et al., 2021; Hu et al., 2020; Gan et al., 2020) or with whole image features (Xu et al., 2021; Kim et al., 2021; Huang et al., 2021b; Wang et al., 2021).

3 MULTI-MODAL RETRIEVAL TASK

As shown in Figure 3, we compare different retrieval tasks and distinguish multi-modal retrieval from the other two tasks, single modality retrieval and cross modality retrieval.

Single Modality Retrieval. Single modality retrieval focuses on conducting relevance search in one modality space, which includes text-text retrieval and image-image retrieval. Text-text retrieval (Bajaj et al., 2016) aims to search relevant candidates from the text collection T = {T_1, ..., T_n} to answer a query q. Image-image retrieval (Yoon et al., 2021) focuses on returning similar images from the image collection I = {I_1, ..., I_m} for a given image I_j.

Cross Modality Retrieval. Cross modality retrieval, e.g. MSCOCO (Chen et al., 2015) and Flickr30K (Young et al., 2014), contains two subtasks: text-image retrieval and image-text retrieval. Given an image caption T_i or an image I_j, these tasks require retrieval models to conduct cross-modality matching between images and captions, aiming to search candidates from the images I = {I_1, ..., I_m} or the image captions T = {T_1, ..., T_n}, respectively. Such cross-modality interactions are built to align semantics between captions and images, which is distinct from search relevance.

Multi-Modal Retrieval. Given a query q, the multi-modal retrieval task (Chang et al., 2022) helps users uncover information from multi-modality sources D = {T_1, ..., T_n, I_1, ..., I_m}. Different from single/cross modality retrieval, multi-modal retrieval aims at returning relevant candidates from the multi-modality documents D. The retrieval results may consist of texts, images, or a mixture of them, according to the user query q. Different from existing text- and sketch-based image retrieval (Sangkloy et al., 2022; Dutta and Akata, 2020; Dey et al., 2018; Mai et al., 2017), multi-modal retrieval focuses more on relevance modeling between queries and documents, single/cross modality matching, and modality routing, making this task more challenging. Moreover, we can pre-route queries to a single modality and convert the multi-modal retrieval task into two subtasks, text-text retrieval and text-image retrieval, which are single and cross modality retrieval tasks.

4 UNIVSEARCH BY LEARNING A UNIFIED EMBEDDING SPACE

This section describes our Universal Vision-Language Dense Retrieval (UniVL-DR).
As shown in Figure 3, given a query q and multi-modality documents D = {d_Text^1, ..., d_Text^n, d_Image^1, ..., d_Image^m}, it directly encodes the query q, text document d_Text^i, and image document d_Image^j in one embedding space, which conducts relevance modeling, modality routing, and result fusion in that space (Sec. 4.1).

Texts and images usually have different understanding mechanisms, making it difficult to tackle multi-modality tasks. Nevertheless, language and vision can commonly be translated into a type of mentalese to better communicate between different modules in our brains (Cavanagh, 2021); thus a unified representation method has the ability to break the boundary between modalities and benefit vision-language learning. To build a unified multi-modal retrieval system, UniVL-DR learns a universal embedding space by contrastively optimizing vision-language representations using hard negatives with balanced-modality sampling (Sec. 4.2) and by bridging the modality gap via verbalizing the picture to paraphrase pixel semantics in the raw text space (Sec. 4.3).

4.1 MULTI-MODALITY DENSE RETRIEVAL

UniVL-DR obtains representations of queries, image documents, and text documents with two encoders: TextEncoder and ImgEncoder. Specifically, the image document d_Image^j consists of a picture I_j and an image caption C_j, thus we utilize ImgEncoder and TextEncoder to encode I_j and C_j.

Query Encoding. UniVL-DR directly encodes the query q to get its representation q⃗:

q⃗ = TextEncoder(q). (1)

Text Document Encoding. To represent text documents, UniVL-DR also leverages the TextEncoder to encode the i-th text document d_Text^i as d⃗_Text^i:

d⃗_Text^i = TextEncoder(d_Text^i). (2)

Image Document Encoding. Different from text documents, image documents can be represented by picture features and image captions, and the textual captions can help better understand the semantics of image documents (Baldrati et al., 2022). Thus, UniVL-DR encodes the picture I_j and the image caption C_j and then sums these embeddings to get the representation d⃗_Image^j of the j-th image document:

d⃗_Image^j = ImgEncoder(I_j) + TextEncoder(C_j). (3)

The representations d⃗_Image^j and d⃗_Text^i of image documents and text documents use the same TextEncoder to encode their textual information, which bridges different modalities in the text space and helps to build a universal embedding space for multi-modality retrieval.

Multi-modality Document Retrieval. The cosine similarity score f(q, d) of query q and document candidate d ∈ D can be calculated to estimate the relevance between q and d:

f(q, d) = cos(q⃗, d⃗), (4)

where q⃗ and d⃗ are the representations of q and d. Efficient similarity calculation between queries and the multi-modality documents can be provided by FAISS (Johnson et al., 2019).

4.2 UNIVERSAL REPRESENTATION LEARNING

UniVL-DR employs a vision-language model, CLIP (Radford et al., 2021), to learn universal representations for queries and multi-modality documents, which is knowledgeable about cross-modality retrieval. UniVL-DR optimizes the universal embedding space by training with modality-balanced hard negatives, which avoids overfitting to single-modality signals during multi-modal co-training.
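Before turning to the training objective, the search side of Sec. 4.1 (Eqs. 1-4) can be summarized in a short sketch. This is a minimal illustration assuming pre-computed, L2-normalized embeddings; the function names are ours, not the released code:

```python
import numpy as np
import faiss  # assumed available; any exact KNN library would work

def build_index(doc_embs: np.ndarray) -> faiss.IndexFlatIP:
    """Index L2-normalized document embeddings; inner product equals cosine (Eq. 4)."""
    index = faiss.IndexFlatIP(doc_embs.shape[1])
    index.add(doc_embs.astype(np.float32))
    return index

def search(index, q_emb: np.ndarray, top_k: int = 100):
    """Return scores and ids of the top-k candidates for one query embedding."""
    scores, ids = index.search(q_emb.astype(np.float32)[None, :], top_k)
    return scores[0], ids[0]

# Toy usage over a mixed text/image collection. Eqs. 1-3 would produce the
# embeddings: query = TextEncoder(q); text doc = TextEncoder(d);
# image doc = ImgEncoder(I) + TextEncoder(C), all L2-normalized here.
doc_embs = np.random.randn(1000, 512).astype(np.float32)
doc_embs /= np.linalg.norm(doc_embs, axis=1, keepdims=True)
index = build_index(doc_embs)
scores, ids = search(index, doc_embs[0], top_k=5)
```

Because text and image documents live in the same index, the returned top-k list can mix modalities freely, which is exactly the behavior the unified embedding space is designed to support.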
Given the query q and its relevant candidate d+ ∈ D, the embedding space can be optimized by sampling hard negatives D− and minimizing the following contrastive training loss L:

L = −log [ e^{f(q, d+)/τ} / ( e^{f(q, d+)/τ} + Σ_{d− ∈ D−} e^{f(q, d−)/τ} ) ]
  = −L_Align + log( e^{f(q, d+)/τ} + L_Image + L_Text ), (5)

where L_Align = f(q, d+)/τ, L_Image = Σ_{i=1}^{k1} e^{f(q, d_Image^{i−})/τ}, L_Text = Σ_{j=1}^{k2} e^{f(q, d_Text^{j−})/τ}, and τ is the temperature used to scale the similarity scores. During training, we in fact maximize L_Align and minimize L_Image and L_Text, which makes queries closer to related documents and farther from unrelated documents. If k1 ≠ k2, a smaller loss L_Image + L_Text can be achieved simply by pushing queries far away from the image collection or the text collection. Such behavior can win a lower loss L but overfits the ranking features from single/cross modality matching, leading to modality discrimination during retrieval. Our modality-balanced negative training strategy keeps k1 = k2 = k to better train the modality selection ability of retrievers.

4.3 IMAGE VERBALIZATION FOR EXPANSION

UniVL-DR provides another way to bridge the modality gap between texts and images by verbalizing picture pixel features, including image caption and query generation methods. Following Li et al. (2020), we can represent a picture I_j using detected objects O = {O_1, ..., O_l}. For each image object O_i, we can get its pixel feature O⃗_i and its predicted class Ô_i. UniVL-DR then uses a vision-language model, such as VinVL (Zhang et al., 2021), to verbalize image documents. Specifically, we generate potentially matched captions or related queries as the image verbalization results V(I_j), according to the picture I_j or the image document d_Image^j = {I_j, C_j}.

We can first feed the predicted classes {Ô_1; ...; Ô_l} and regional features {O⃗_1; ...; O⃗_l} of detected objects into image verbalization models. We then train the model to generate the image caption C_j:

X_j^c = [CLS]; C_j; [SEP]; Ô_1; ...; Ô_l; [SEP]; O⃗_1; ...; O⃗_l; (6)

or replace the detected object classes {Ô_1; ...; Ô_l} in the input sequence X_j^c with the image caption C_j to generate a related query q for the image document d_Image^j:

X_j^q = [CLS]; q; [SEP]; C_j; [SEP]; O⃗_1; ...; O⃗_l, (7)

where ; is the concatenation operation, and [CLS] and [SEP] are special tokens. During training or inference, we utilize Masked Language Modeling (MLM) (Devlin et al., 2019) to mask and predict some or all of the tokens of the image caption C_j and query q in the inputs X_j^c and X_j^q, aiming to train image verbalization models or to generate verbalized captions and queries.

Finally, we enhance the representations of image documents by expanding the raw caption C_j with the image verbalization results V(I_j) to form the enhanced text representation C_j^*:

C_j^* = C_j; [SEP]; V(I_j), (8)

where the enhanced text representation C_j^* replaces the raw caption C_j in Eq. 3 when encoding the image document d_Image^j.

5 EXPERIMENTAL METHODOLOGY

This section describes the dataset, baselines, the vision-language models used in our experiments, and implementation details.

Dataset. A multi-hop and multi-modal open domain question answering dataset, WebQA (Chang et al., 2022), is used in our experiments. We process the WebQA dataset in an open domain retrieval setting and show the details in Appendix A.1.

Evaluation Metrics. We use NDCG@K, MRR@K, Recall@20, and Recall@100 as the evaluation metrics, where K can be 10 or 20. We regard MRR@10 as our main evaluation metric (Bajaj et al., 2016).
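To make the training objective of Sec. 4.2 concrete before moving on, here is a minimal PyTorch-style sketch of the modality-balanced contrastive loss in Eq. 5; tensor names and shapes are illustrative, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def modality_balanced_loss(q, d_pos, d_img_negs, d_txt_negs, tau=0.01):
    """Contrastive loss of Eq. 5 with k image and k text hard negatives.

    q: (dim,) query embedding; d_pos: (dim,) relevant document embedding;
    d_img_negs, d_txt_negs: (k, dim) modality-balanced hard negatives.
    All embeddings are assumed L2-normalized, so dot products are cosines.
    """
    cands = torch.cat([d_pos.unsqueeze(0), d_img_negs, d_txt_negs], dim=0)
    logits = cands @ q / tau  # (1 + 2k,) similarity scores f(q, d) / tau
    return -F.log_softmax(logits, dim=0)[0]  # -log softmax of the positive

# Toy usage with k = 1, as in the paper's implementation details.
dim, k = 8, 1
unit = lambda x: F.normalize(x, dim=-1)
loss = modality_balanced_loss(unit(torch.randn(dim)), unit(torch.randn(dim)),
                              unit(torch.randn(k, dim)), unit(torch.randn(k, dim)))
```

Keeping the same number of image and text negatives in the denominator is what prevents the degenerate solution of pushing queries away from one whole modality.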
Vision-Language Models. In our experiments, we employ two state-of-the-art vision-language models, VinVL (Zhang et al., 2021) and CLIP (Radford et al., 2021), to implement the different retrieval models. VinVL (Zhang et al., 2021) inherits the Oscar (Li et al., 2020) architecture, which extracts object tags and region features to represent images and learns cross-modal representations by aligning semantics between images and texts. Different from VinVL, CLIP (Radford et al., 2021) utilizes a dual encoder to project images and texts into the same semantic space for computing their similarity scores and is trained on a large-scale dataset, WebImageText, that contains 400 million image-text pairs. It has shown strong effectiveness in cross-modality retrieval.

Baselines. Our baselines contain several models in the settings of single modality retrieval, divide-and-conquer, and universal multi-modal retrieval.

Single modality retrieval. In this setting, we represent image documents with captions and employ the text retrievers BM25 and DPR (Karpukhin et al., 2020) as baselines. DPR is trained with NQ (Kwiatkowski et al., 2019), which is similar to the textual source of WebQA. We then continuously train DPR with in-batch and hard negatives to implement the NQ-DPR and NQ-ANCE models.

Divide-and-conquer. We first employ three widely used retrievers, BM25, VinVL-DPR, and CLIP-DPR, to conduct text-text retrieval and text-image retrieval. The multi-modality retrieval results are then fused according to their uni-modal rank reciprocals or oracle modality routing. The latter shows the upper bound of the retrieval performance of our divide-and-conquer models.

Multi-modal retrieval. In our experiments, we also build two multi-modal retrieval baselines: VinVL-DPR and CLIP-DPR. VinVL-DPR and CLIP-DPR represent image documents with caption and picture features. They then optimize the VLP models VinVL (Zhang et al., 2021) and CLIP (Radford et al., 2021) with in-batch negatives to learn universal representations for multi-modal retrieval.

Implementation Details. During training UniVL-DR, we employ the text and image encoders from CLIP, truncate the text to a max length of 77 tokens1, and set the batch size to 64, the learning rate to 5e-6, the max training epoch to 20, and the temperature hyperparameter τ = 0.01. In our experiments, we retrieve the top 100 documents using CLIP-DPR and sample two hard negatives of different modalities (k = 1) from these candidates. All models are tuned with the AdamW optimizer, evaluated every 500 steps, with the early stop step set to 5. More experimental details are shown in Appendix A.2.

6 EVALUATION RESULTS

In this section, we study the performance of UniVL-DR, its advantages in multi-modal retrieval, the effectiveness of our modality-balanced hard negative training strategies, and how our image verbalization methods bridge the modality gap between texts and images.

6.1 OVERALL PERFORMANCE

The multi-modal retrieval performance of different models is shown in Table 1. Our UniVL-DR outperforms all baselines with more than 7% improvement on ranking evaluation, recalls more than 6% more relevant multi-modality documents, and even outperforms the divide-and-conquer model guided by oracle modality routing. Such significant improvements illustrate the effectiveness of UniVL-DR in building a multi-modal retrieval system.
1 https://github.com/openai/CLIP

[Table 1: multi-modal retrieval results; only the column headers were recovered — Setting, Model, MRR@10, NDCG@10, MRR@20, NDCG@20, Rec@20, Rec@100.]

Similar to UniVL-DR, BM25 learns universal textual representations for image/text documents and shows strong ranking effectiveness. To build a divide-and-conquer system, we use BM25 and CLIP-DPR to implement text-text and text-image retrievers and then fuse the results from the different retrievers. With the help of oracle modality routing, the divide-and-conquer system shows better ranking results and recalls more relevant documents than BM25. Nevertheless, this system performs distinctly worse when using the uni-modal rank reciprocals to route queries, showing the challenge of fusing retrieval results in divide-and-conquer systems. CLIP-DPR and UniVL-DR deal with this problem by learning universal representations for queries and multi-modality documents, which unifies multi-modality relevance modeling and retrieval result fusion. Thanks to our multi-modality training strategies, UniVL-DR achieves more than 10% improvement on multi-modal retrieval over CLIP-DPR. The following experiments further explore how UniVL-DR learns universal representations for multi-modal retrieval and bridges the gap between images and texts.

6.2 ABLATION STUDIES

Ablation studies are conducted to examine model performance on multi-modal retrieval. We also evaluate the effectiveness of UniVL-DR on both text-text and text-image retrieval tasks, aiming to show the influence of multi-modal learning on these single/cross modality retrieval tasks.

As shown in Table 2, we evaluate the retrieval effectiveness of different vision-language models, VinVL-DPR and CLIP-DPR. They are trained with in-batch negatives on text-text/image and multi-modal retrieval tasks. In the single/cross modality setting, we fine-tune vision-language models with a group of queries that only contain related documents in the text modality or the image modality. Our multi-modality training setting uses all queries to train these vision-language models and equally samples in-batch negatives from the documents of different modalities.

For both CLIP-DPR and VinVL-DPR, image captions are usually more effective for representing image documents than figure features, which demonstrates the difficulty of understanding figure semantics from figure pixels alone. Thus, UniVL-DR tries to verbalize the figure features by extracting the objects that appear in the figure and describing the figure facts among the detected objects (Zhang et al., 2021). The image verbalization results paraphrase picture pixel facts in natural language and help to enhance the textual representations of images by appending the verbalization results to the image captions. As a result, UniVL-DR uses such an enhanced text representation for image documents and then employs the same module to encode the text information of image documents and text documents. This helps to build universal representations for multi-modality documents by breaking the modality boundary and fully using additional training signals from different modalities, making UniVL-DR achieve the best multi-modal retrieval performance among all baseline models.

UniVL-DR also shows its advantages by outperforming all baseline models on both text-text and text-image retrieval tasks, demonstrating that multi-modality modeling indeed benefits single/cross modality retrieval. In the multi-modal retrieval setting, CLIP-DPR is converted from a text-text retriever to a multi-modal retriever after adding figure features.
CLIP-DPR achieves better performance on the text-image retrieval task than CLIP-DPR w/o figure features, which illustrates that image features provide additional signals that help multi-modality models distinguish related image documents. On the contrary, the multi-modal retrieval performance of CLIP-DPR decreases, showing that CLIP-DPR fails to fuse retrieval results from different modalities. UniVL-DR uses a modality-balanced hard negative training strategy to learn universal representations for queries and documents, which deals with the challenge of fusing retrieval results, helps to achieve more gains on the multi-modal retrieval task, and enhances the modality disambiguation ability.

6.3 EFFECTIVENESS OF BALANCED HARD NEGATIVE SAMPLING

In this experiment, we study the training strategies UniVL-DR uses to learn universal multi-modality representations and show the effectiveness of different negative sampling methods.

As shown in Table 3, we start from the multi-modal retriever CLIP-DPR, continuously fine-tune it with different hard negative sampling methods, and report the performance on different retrieval tasks. Our experimental results show that the in-batch trained models prefer to return text documents rather than image documents as the ranking results, even though image-answerable queries make up the larger portion (about 51.6%) of the training data. This illustrates that training multi-modality retrievers with modality-unbalanced negatives usually leads to undesired modality bias during retrieval. We then continuously train CLIP-DPR with hard negatives sampled from the top-retrieved multi-modality results of CLIP-DPR and significantly improve its retrieval performance in all testing scenarios. Our modality-balanced hard negative sampling strategy achieves the best retrieval performance among all negative sampling methods, showing its important role in building a universal multi-modal retrieval model. Compared with ANCE (Random), our modality-balanced sampling strategy mitigates the modality variance during contrastive training and provides more useful signals to train the modality disambiguation ability of universal multi-modal retrievers.

Finally, we visualize the embedding spaces of different retrieval models in Figure 4. After training with modality-balanced hard negatives, UniVL-DR learns a more uniform and effective embedding space for multi-modal retrieval. In this embedding space, text and image documents are assigned to different areas, and queries are routed to different areas to return documents from the corresponding modalities. As shown in Figure 4(b) and Figure 4(c), when the retrieval models are only trained with hard negatives of text or image documents, the query embeddings are concentrated and respectively assigned closer to the areas of image or text documents. This demonstrates that a multi-modality retrieval model usually overfits the training signals of the in-batch majority modality to win a lower contrastive loss during training. UniVL-DR alleviates this problem by balancing the modalities of hard negatives in contrastive training.

6.4 BRIDGING CROSS-MODALITY MATCHING WITH IMAGE VERBALIZATION

UniVL-DR uses image verbalization methods to generate matched captions or related queries to bridge the modality gap between texts and images. In this experiment, we show the effectiveness of different image verbalization strategies on text-text, text-image, and multi-modal retrieval tasks.
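For reference, the caption expansion that these verbalization strategies feed into (Eq. 8) is a plain concatenation in the raw text space. Below is a minimal Python sketch with a hypothetical example drawn from the paper's own case study; this is illustrative, not the authors' code:

```python
def expand_caption(caption: str, verbalization: str, sep: str = " [SEP] ") -> str:
    """Eq. 8: expand the raw caption C_j with the verbalization result V(I_j)."""
    return caption + sep + verbalization

# Hypothetical example: a short manual caption enhanced with a verbalized query.
enhanced = expand_caption("Centennial Olympic Park",
                          "is there greenery at centennial olympic park?")
```

The expanded string replaces the raw caption inside the image-document encoder (Eq. 3), so the extra text clues influence the image embedding directly.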
As shown in Table 4, our image verbalization methods demonstrate their ability to enhance the text representations of image documents by achieving better text-image retrieval results. These image verbalization methods aim to generate informative text clues that help retrievers distinguish query-related image documents in the text space. The text-text and multi-modal retrieval performance is also improved with the help of verbalized captions or queries, showing the effectiveness of our image verbalization methods in bridging the modality gap between images and texts and benefiting single modality tasks with additional training signals from different modalities.

Compared with verbalized captions, our query verbalization method aligns the necessary semantics in captions, e.g. mentioned entities, with image objects and verbalizes the figure pixels with the help of caption semantics. Enhancing image representations using verbalized queries usually achieves better retrieval effectiveness than using verbalized captions. This showcases that our query verbalization method can provide more meaningful text clues for relevance modeling and multi-modality learning. Moreover, some additional experiments are provided to study the effectiveness of different image verbalization methods. We first show the relationship between the effectiveness of verbalized queries and manual caption lengths in Appendix A.4 and then conduct some case studies in Appendix A.5 to explore the characteristics of different image verbalization methods.

7 CONCLUSION

This paper proposes UniVL-DR, which models single/cross modality matching and retrieval result fusion in one universal embedding space. UniVL-DR proposes an effective multi-modality training strategy to learn universal representations for queries and documents, which breaks the modality boundary between vision and language and helps to achieve state-of-the-art multi-modal retrieval performance. Our experiments show that UniVL-DR can bridge the modality gap with image verbalization technologies and avoid overfitting the training signals of one modality by optimizing retrievers with modality-balanced hard negatives.

ACKNOWLEDGMENTS

This work is supported by Beijing Academy of Artificial Intelligence (BAAI), the Natural Science Foundation of China under Grant No. 62206042, No. U1811261 and No. 62006129, the Fundamental Research Funds for the Central Universities under Grant No. N2216013, China Postdoctoral Science Foundation under Grant No. 2022M710022, and National Science and Technology Major Project (J2019-IV-0002-0069).

A APPENDIX

A.1 DATA STATISTICS

A multi-hop and multi-modal open domain question answering dataset, WebQA (Chang et al., 2022), is used in our experiments. The dataset contains images and passages that are crawled from the general Web and Wikipedia. In our experiments, we randomly sample 5,000 queries from the original training set of WebQA as the development set for evaluation. All data statistics are shown in Table 5. To build an open-domain benchmark, we collect 389,750 images and 787,697 texts as multi-modal retrieval sources. The image collection contains all images collected by the WebQA dataset, while the text collection contains all relevant passages of all 41,732 queries, which are Wikipedia snippets selected by matching noun chunks in the queries (Chang et al., 2022).

A.2 ADDITIONAL EXPERIMENT DETAILS

This subsection describes additional implementation details.
In our experiments, we employ two pretrained vision-language models, VinVL (Zhang et al., 2021) and CLIP (Radford et al., 2021), and the pretrained language model BERT (Devlin et al., 2019) to implement different retrieval models.

VinVL-DPR. For the VinVL variant models, we first detect the image objects and extract the corresponding region features following VinVL.2 We then concatenate image captions and image region features as inputs to feed into the VinVL models and get the image representations. We initialize VinVL with the checkpoint trained on the MSCOCO image retrieval task and continuously train the model on the WebQA dataset with in-batch negatives. During training, we set the batch size to 32, the learning rate to 2e-5, the accumulation step to 1, and the max training epoch to 30. We truncate the queries, image captions, text documents, and image region features with max lengths of 70, 70, 200, and 50, respectively.

CLIP-DPR. For training CLIP-DPR, we start from the ViT-B/32 version of CLIP and continuously train CLIP on the WebQA dataset with in-batch negatives. We truncate texts with a max length of 77 and set the accumulation step to 1, the batch size to 64, the learning rate to 5e-6, the max training epoch to 20, and the temperature hyperparameter τ = 0.01. The cosine annealing strategy is used to schedule the learning rate during training.

BERT-DPR. We initialize our retriever with the bert-base-uncased checkpoint, which is provided by Huggingface Transformers.3 During training, we set the batch size to 32, the learning rate to 5e-5, the accumulation step to 1, and the max training epoch to 30. We truncate the queries, text documents, and image captions with max lengths of 70, 200, and 70, respectively.

NQ-DPR/NQ-ANCE. NQ-DPR and NQ-ANCE start from the NQ-trained DPR model (Karpukhin et al., 2020), which uses a dual encoder architecture to encode queries and documents. All experimental settings are kept the same as BERT-DPR. Besides, NQ-ANCE is tuned with hard negatives sampled from the top 100 retrieved candidates of NQ-DPR (Xiong et al., 2021a).

A.3 EXPERIMENTAL DETAILS OF IMAGE VERBALIZATION

The image verbalization models are used to generate potentially matched captions or related questions for an image. Our experiments start from the image caption generation model, which is trained on the MSCOCO image caption task (Zhang et al., 2021), to generate related captions or queries that verbalize images.

2 https://github.com/microsoft/scene_graph_benchmark
3 https://github.com/huggingface/transformers

We can first directly generate image-related captions as the image verbalization results using the image caption model provided by VinVL (Zhang et al., 2021). As shown in Eq. 6, we first detect the objects in the images and then feed the predicted classes and region features of the detected objects to the VinVL model. In our experiments, we fix the parameters of the VinVL-based image caption model and generate a caption for each image. When generating image-related queries, as shown in Eq. 7, we concatenate the image-related query, image caption, and image regional features as the input. We continuously train the VinVL-based image caption model by randomly masking the tokens in queries and optimizing the vision-language model to fill in the masked positions.
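As a concrete illustration of the masking step just described, here is a minimal PyTorch sketch assuming a BERT-style tokenizer; the 0.15 mask probability follows Appendix A.3, while the token ids and [MASK] id are hypothetical:

```python
import torch

def mask_query_tokens(token_ids: torch.Tensor, mask_token_id: int, p: float = 0.15):
    """Randomly mask query tokens for MLM-style generator training.

    token_ids: (L,) LongTensor of query token ids. Returns the corrupted
    inputs and the labels (-100 at unmasked positions, the usual ignore index).
    """
    inputs = token_ids.clone()
    labels = torch.full_like(token_ids, -100)
    mask = torch.rand(token_ids.shape) < p
    labels[mask] = token_ids[mask]
    inputs[mask] = mask_token_id
    return inputs, labels

# Toy usage: mask a 10-token query with a hypothetical [MASK] id of 103.
inputs, labels = mask_query_tokens(torch.randint(1000, 2000, (10,)), mask_token_id=103)
```

At inference time the same mechanism can mask all query positions so that the model fills in an entire verbalized query for the image document.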
Different from image caption models, our query generation method tries to align the semantics of image captions and image pixel features instead of mapping the predicted classes and regional image features of detected objects, which can help vision-language models better understand image semantics (Huang et al., 2021a). During training and inference, we set the maximum number of generated tokens to 20 and the beam size to 5. We truncate the queries, image captions, and image region features with max lengths of 40, 30, and 50, respectively. The mask probability is set to 0.15. More experimental details can be found in Zhang et al. (2021).

A.4 IMAGE VERBALIZATION PERFORMANCE WITH DIFFERENT CAPTION LENGTHS

In this subsection, we evaluate the multi-modal retrieval performance of UniVL-DR with different verbalized queries. Specifically, we evaluate the effectiveness of image-verbalized queries in the multi-modal retrieval task. These image-verbalized queries are generated from image documents whose manual captions have different lengths. We group the testing examples into three categories according to the manual caption lengths of the image documents and calculate the average MRR@10 score for each group. The short, medium, and long caption-length groups account for 42.33%, 36.84%, and 20.83% of the examples, respectively.

As shown in Figure 5, the experimental results show that our query generation method mainly helps to improve retrieval effectiveness for queries in the short and medium length groups, illustrating that the generated queries can provide crucial textual clues for image representations with shorter captions. These expanded text clues help retrieval models better understand image semantics, represent images more effectively via enhanced textual information, and conduct cross-modality matching more easily. Moreover, the queries in the medium caption length group achieve the best performance, because image captions of medium length can cover more of the necessary text clues for generating informative verbalization results.

A.5 CASE STUDIES ON DIFFERENT IMAGE VERBALIZATION METHODS

This experiment shows some image verbalization cases in Table 6. We randomly sample queries that can be answered by image documents and show the manual captions, verbalized captions, and verbalized queries of the image documents. Overall, these cases can be categorized into two groups according to the lengths of the manual captions. The first three cases are longer and more informative in describing the image facts among the mentioned objects, and they can be directly used in text-image relevance modeling. On the contrary, the manual captions in the last three cases consist only of the most representative entities that appear in the images, making it difficult to distinguish the related images from these manual captions alone.

UniVL-DR employs two image verbalization methods to enhance the textual semantics of images. Generating image captions is the most intuitive way to paraphrase images with some pre-defined classes of image objects. Nevertheless, these object classes are too general and may be uninformative for matching, because specific entities are critical for retrieving related documents in a question-answering system (Sciavolino et al., 2021). Different from these verbalized captions, the verbalized queries are usually more informative and meaningful, and they specify the image objects by copying entity names from the manual captions, such as the names of persons, places, and buildings.
These entities can be directly matched with the given queries, which benefits cross-modality matching and helps to mitigate the modality gap between images and texts.

A.6 MULTI-MODAL RETRIEVAL WITH DIFFERENT IMAGE REPRESENTATION COMBINATION METHODS

In this subsection, we conduct experiments to show the effectiveness of different methods for combining the representations of image captions and image features.

[Tables 7 and 8: only the column headers were recovered — Model, MRR@10, MRR@20, NDCG@10, NDCG@20; and Model, MRR@1, NDCG@5, NDCG@10, NDCG@20.]

As shown in Table 7, we evaluate the effectiveness of different combination methods using CLIP-DPR. We concatenate, take the outer product of, and sum the representations of image captions and image features to construct three models: CLIP-DPR (Concatenation), CLIP-DPR (Outer Product), and CLIP-DPR (Sum). CLIP-DPR (Sum) shows its effectiveness by achieving the best performance among all baselines. The sum operation is a commonly used semantic combination method, which is also used in BERT (Devlin et al., 2019) to combine token embeddings and position embeddings. On the contrary, the concatenation operation regards the representations of image captions and image features as subvectors and separates them into subspaces, making it hard to learn the semantics of image documents. The outer product operation produces orthogonal representations when combining representations, which is not a typical combination method.

A.7 ADDITIONAL EVALUATIONS ON MULTI-MODAL RETRIEVAL

In our experiments, we follow the previous widely used retrieval benchmarks MS MARCO (Bajaj et al., 2016) and BEIR (Thakur et al., 2021) and use NDCG@10/20 and MRR@10/20 to show the retrieval effectiveness of different retrieval models. MRR scores and NDCG scores are calculated by the MS MARCO official scripts4 and TREC's evaluation tool5.

4 https://github.com/microsoft/MSMARCO-Passage-Ranking/blob/master/ms_marco_eval.py
5 https://github.com/cvangysel/pytrec_eval

As shown in Table 8, we also conduct evaluations to show the retrieval performance of higher-ranked candidates using MRR@1 and NDCG@5. UniVL-DR again shows strong effectiveness by outperforming BM25 and CLIP-DPR with more than 6% improvements. Notably, UniVL-DR even shows better retrieval effectiveness than the BM25 & CLIP-DPR (Oracle Modality) model, which operates in an idealized setting. This supports our claim that multi-modality modeling can also benefit single/cross-modality tasks.

A.8 EXAMPLES OF HARD NEGATIVES

In this subsection, we randomly sample two queries and show some hard negatives in Figure 6, which are top-ranked documents retrieved by our CLIP-DPR model. In the first case, when we ask "Is there greenery at Centennial Olympic Park?", the CLIP-DPR model provides some image and text documents, which are regarded as hard negatives for continuously training dense retrievers. The negative images are about buildings, lawns, and trees, but these objects are not located at Centennial Olympic Park. Evidently, these negative images are on-topic with "greenery" but are not related to the given query. Training dense retrievers with these hard negatives can better teach retrievers to distinguish the subtle differences among these confusing images.

For both cases, the hard negatives from different modalities showcase some necessary semantics, which are needed by the retrieval model to find relevant information.
For example, text documents in Case 1 can provide background knowledge of the Olympic Games and Centennial Olympic Park, and image documents in Case 2 supply the "doves" semantics from the visual modality. These informative documents from different modalities can provide sufficient clues to guide dense retrievers toward learning the necessary semantics during contrastive training.
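As a rough illustration of how such modality-balanced hard negatives can be drawn from the top-ranked CLIP-DPR candidates (k per modality, matching the implementation details), here is a minimal Python sketch; the data structures are illustrative, not the authors' code:

```python
import random

def sample_balanced_negatives(ranked_docs, positive_ids, k=1):
    """Sample k image and k text hard negatives from top-ranked candidates.

    ranked_docs: list of (doc_id, modality) pairs, e.g. the top-100 CLIP-DPR
    results for a query; positive_ids: set of relevant doc ids for that query.
    """
    img_pool = [d for d, m in ranked_docs if m == "image" and d not in positive_ids]
    txt_pool = [d for d, m in ranked_docs if m == "text" and d not in positive_ids]
    return random.sample(img_pool, k), random.sample(txt_pool, k)

# Toy usage with a hypothetical ranked list.
ranked = [("img_3", "image"), ("txt_9", "text"), ("img_7", "image"), ("txt_2", "text")]
img_negs, txt_negs = sample_balanced_negatives(ranked, positive_ids={"txt_9"}, k=1)
```

Because the pools are filtered per modality before sampling, every training instance contributes the same number of image and text negatives to the contrastive loss.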
1. What is the main contribution of the paper regarding dense retrieval?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of modality gap bridging and hard negative utilization?
3. Do you have any questions or concerns about the image verbalization method, its choice, and its potential impact on performance?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the proposed method, such as using different proposal generation or caption generation models?
6. Can the authors provide more visualization examples or explanations to help understand the concept of hard negatives and their significance in the approach?
7. Why did the authors choose NDCG@10 as their evaluation metric instead of other common metrics like MRR@10 or Rec@20?
8. What might be the reason behind the observed difference in performance between BM25 & CLIP-DPR Rec@20 and UniVL-DR + BM25, and what implications does it have for future improvements?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

This paper proposes a universal vision-language dense retriever with two techniques: using modality-balanced hard negatives for optimization and bridging the modality gap with an image verbalization method. Experiments are conducted on the open-domain dataset built from WebQA, with comparisons to existing models for single-modality, divide-and-conquer, and multi-modal retrieval, along with ablation studies.

Strengths And Weaknesses

Strengths:
- Reasonable motivation and a sensible combination approach.

Weaknesses:
- The proposed image verbalization method simply adopts an existing approach to generate matched captions or queries from pictures. The performance of the combined method may be bounded by the underlying proposal generation and caption generation models. Is there a strong reason the authors chose VinVL (Zhang 2021) and MLM (Devlin 2019)? Or could arbitrary methods achieve this?
- In addition, what is the definition of hard negatives? Some visualization examples may help. What would happen if the proposed method used non-hard negatives compared to other methods? Do the good results come from the hard negatives?
- Can the authors explain why they chose NDCG@10, instead of @1 or @5, given that the top retrieved results are usually the most important to queries?
- Despite good results on MRR@10 and @20, why is BM25 & CLIP-DPR much better on Rec@20 in Table 2? Does that imply a UniVL-DR + BM25 method could provide even better results?

Clarity, Quality, Novelty And Reproducibility

This paper is well-written. It is clear and easy to follow, but the novelty may be limited.
ICLR
Title Universal Vision-Language Dense Retrieval: Learning A Unified Representation Space for Multi-Modal Retrieval Abstract This paper presents Universal Vision-Language Dense Retrieval (UniVL-DR), which builds a unified model for multi-modal retrieval. UniVL-DR encodes queries and multi-modality resources in an embedding space for searching candidates from different modalities. To learn a unified embedding space for multi-modal retrieval, UniVL-DR proposes two techniques: 1) Universal embedding optimization strategy, which contrastively optimizes the embedding space using the modality-balanced hard negatives; 2) Image verbalization method, which bridges the modality gap between images and texts in the raw data space. UniVL-DR achieves the state-ofthe-art on the multi-modal open-domain question answering benchmark, WebQA, and outperforms all retrieval models on the two subtasks, text-text retrieval and text-image retrieval. It demonstrates that universal multi-modal search is feasible to replace the divide-and-conquer pipeline with a united model and also benefits single/cross modality tasks. All source codes of this work are available at https: //github.com/OpenMatch/UniVL-DR. 1 INTRODUCTION Although search engines primarily focus on textual data (Singhal et al., 2001), multi-media is necessary to satisfy user needs during retrieval. A user query can be answered by the information in variant formats, such as a text document, or a picture. The growth of multi-media content has been one of the most notable trends on the internet (Mei et al., 2014), and various studies have proved that users prefer more vivid multi-media content in search results (Datta et al., 2008). Current multi-media search systems often employ a divide-and-conquer approach. As shown in Figure 1(a), they first conduct search in individual modalities, including text, image, video, etc. (Bajaj et al., 2016; Grubinger et al., 2008; Kwiatkowski et al., 2019; Awad et al., 2021), and then fuse the retrieval results from various verticals together, e.g., building another ranking layer on top of these single/cross modality retrievers (Escalante et al., 2008; Grubinger et al., 2008). Both relevance modeling and retrieval result fusion are usually entwined to achieve more accurate multi-modal retrieval results. However, due to the modality gap, they can be only pipeline-modeled in divide-andconquer, making it challenging to fuse retrieval results from different modalities. In this paper, we explore the potential of universal multi-modal retrieval to build an end-to-end model and retrieve multi-modality documents for user queries. Illustrated in Figure 1(b), universal multi-modal retrieval maps queries and multi-modality resources to one universal embedding space and retrieves multi-modality candidates via KNN search. As a result, the relevance modeling, cross-modality matching, and retrieval result fusion are done by one model. More specifically, we propose a Universal Vision-Language Dense Retrieval (UniVL-DR) model to get the representations of queries, texts, and images and learn a tailored vision-language embedding space for multi-modal retrieval. UniVL-DR optimizes the vision-language embedding space using hard negatives (Xiong et al., 2021a) and balances the modalities of these negatives to alleviate the modality preference of multi-modal retrievers. 
Furthermore, UniVL-DR introduces an image verbalization method, which regards language as a kind of mentalese (Cavanagh, 2021) and mitigates the modality gap between images and texts. Our image verbalization method first aligns the semantics of image captions and figure pixels (Huang et al., 2021a), and then paraphrases the image facts. It helps to bridge language and vision understanding modules of UniVL-DR via natural language. To build a multi-modal retrieval benchmark, we leverage a multi-modal question answering (QA) benchmark WebQA (Chang et al., 2022) and convert it to a standard open-domain setting: retrieving multi-modality candidates from text and image collections for a user query. Divide-and-conquer is an intuitive way to build a multi-modal retrieval system and we pre-route queries to oracle modality to show the upper bound performance of such a system. Compared with the divide-and-conquer system, UniVL-DR addresses the retrieval result fusion challenge, achieves state-of-the-art multi-modal retrieval performance, and brings more than 5% improvement in single/cross modality retrieval. Our experiments show that UniVL-DR learns an effective embedding space for multi-modal retrieval by separating texts and images into different areas and guiding queries to return candidates from corresponding modalities. Our further analyses show that UniVL-DR can alleviate overfit singlemodality signals by balancing hard negatives during training and bridging the modality gap between vision and language by verbalizing images. All experimental results show that learning one universal representation space is starting to benefit single-modality tasks—pretraining representation models on multi-modality and using our techniques can learn additional signals from multi-modalities, overcome the modality boundary, and provide convincing gains in single/multi-modality tasks. 2 RELATED WORK Document retrieval is a typical single modality retrieval task, which aims to return related documents for user queries and can be tackled with dense retrievers (Xiong et al., 2021b; Lewis et al., 2020; Zhan et al., 2021; Li et al., 2021b; Yu et al., 2021). Dense retrievers encode queries and documents with pretrained language models (Devlin et al., 2019) and map them in an embedding space to conduct an efficient search. The query and document encoders are usually contrastively trained with in-batch negatives, BM25 retrieved negatives, and hard negatives (Karpukhin et al., 2020; Xiong et al., 2021a). Recently, lots of work has focused on multi-modal retrieval tasks, which retrieve texts and images to satisfy the multi-modality information needs of users (Hannan et al., 2020; Singh et al., 2021; Talmor et al., 2021; Chang et al., 2022). WebQA (Chang et al., 2022), an open-domain multi-modal question answering benchmark, is built to encourage the following work to represent multi-modal knowledge in a unified space and answer user queries with the information from attribute modalities. It is a more realistic setting, which avoids synthesizing queries with templates (Talmor et al., 2021) and downplays the role of modality disambiguation (Hannan et al., 2020) in the multi-modality modeling. To search information from large-scale multi-modality sources, WebQA (Chang et al., 2022) employs a divide-and-conquer pipeline to search text and image candidates with BM25 and CLIP (Radford et al., 2021) and then fuse these retrieval results using a vision-language model. 
However, single- modality retrievers, such as BM25 and CLIP, usually show distinct retrieval effectiveness (Chang et al., 2022), leading to modality discrimination during fusing retrieval results from different modalities. When building a unified multi-modal retriever, vision-language pretraining (VLP) is crucial to learn universal representations for texts and images, which has also shown success on lots of visionlanguage benchmarks (Uppal et al., 2022; Han et al., 2020; Khan et al., 2021; Du et al., 2022). Most VLP approaches encode texts and images and pretrain encoders with two tasks: masked token prediction and text-image matching (Zhang et al., 2021). These VLP methods teach vision-language models to learn the semantic alignments between texts and images, as well as encode images with the regional features of detected objects (Chen et al., 2019; Lu et al., 2019; Tan and Bansal, 2019; Su et al., 2020; Li et al., 2019; 2021a; Cho et al., 2021; Hu et al., 2020; Gan et al., 2020) or the whole image features (Xu et al., 2021; Kim et al., 2021; Huang et al., 2021b; Wang et al., 2021). 3 MULTI-MODAL RETRIEVAL TASK As shown in Figure 3, we compare different retrieval tasks and tell apart the differences between multi-modal retrieval and other two tasks, single modality retrieval and cross modality retrieval. Single Modality Retrieval. Single modality retrieval focuses on conducting relevance searching in one modality space, which includes text-text retrieval and image-image retrieval. Text-text retrieval (Bajaj et al., 2016) aims to search relevant candidates from the text collection T = {T1, ..., Tn} to answer a query q. And image-image retrieval (Yoon et al., 2021) focuses more on returning similar images from the image collection I = {I1, ..., Im} for the given image Ij . Cross Modality Retrieval. The cross modality retrieval, e.g. MSCOCO (Chen et al., 2015) and Flickr30K (Young et al., 2014), contains two subtasks: text-image retrieval and image-text retrieval. Given an image caption Ti or an image Ij , these tasks require retrieval models to conduct crossmodality matching between images and captions, aiming to search candidates from images I = {I1, ..., Im} or image captions T = {T1, ..., Tn}, respectively. Such cross-modality interactions are built to align semantics between captions and images, which is distinct from the search relevance. Multi-Modal Retrieval. Given a query q, the multi-modal retrieval task (Chang et al., 2022) helps users uncover the information from multi-modality sources D = {T1, ..., Tn, I1, ..., Im}. Different from single/cross modality retrieval, multi-modal retrieval aims at returning relevant candidates from the multi-modality documents D. The retrieval results may consist of texts, images, or a mixture of them according to user query q. Different from existing text and sketch base image retrieval (Sangkloy et al., 2022; Dutta and Akata, 2020; Dey et al., 2018; Mai et al., 2017), the multimodal retrieval focuses more on relevance modeling between queries and documents, single/cross modality matching, and modality routing, making this task more challenging. Moreover, we can pre-route queries to a single modality and convert the multi-modal retrieval to two subtasks, text-text retrieval and text-image retrieval, which are single and cross modality retrieval tasks. 4 UNIVSEARCH BY LEARNING A UNIFIED EMBEDDING SPACE This section describes our Universal Vision-Language Dense Retrieval (UniVL-DR). 
4 UNIVSEARCH BY LEARNING A UNIFIED EMBEDDING SPACE

This section describes our Universal Vision-Language Dense Retrieval (UniVL-DR) model. As shown in Figure 3, given a query $q$ and multi-modality documents $D = \{d_{\text{Text}}^1, ..., d_{\text{Text}}^n, d_{\text{Image}}^1, ..., d_{\text{Image}}^m\}$, it directly encodes the query $q$, text documents $d_{\text{Text}}^i$, and image documents $d_{\text{Image}}^j$ in one embedding space, and conducts relevance modeling, modality routing, and result fusion in that space (Sec. 4.1).

Texts and images usually require different understanding mechanisms, which makes multi-modality tasks difficult to tackle. Nevertheless, language and vision can both be translated into a kind of mentalese that lets different modules in our brains communicate (Cavanagh, 2021), so a unified representation method has the potential to break the boundary between modalities and benefit vision-language learning. To build a unified multi-modal retrieval system, UniVL-DR learns a universal embedding space by contrastively optimizing vision-language representations using hard negatives with modality-balanced sampling (Sec. 4.2) and by bridging the modality gap via image verbalization, which paraphrases pixel semantics in the raw text space (Sec. 4.3).

4.1 MULTI-MODALITY DENSE RETRIEVAL

UniVL-DR obtains representations of queries, image documents, and text documents with two encoders: TextEncoder and ImgEncoder. Specifically, an image document $d_{\text{Image}}^j$ consists of a picture $I_j$ and an image caption $C_j$; we therefore use ImgEncoder to encode $I_j$ and TextEncoder to encode $C_j$.

Query Encoding. UniVL-DR directly encodes the query $q$ to get its representation $\vec{q}$:
$$\vec{q} = \text{TextEncoder}(q). \quad (1)$$

Text Document Encoding. To represent text documents, UniVL-DR also leverages TextEncoder to encode the $i$-th text document $d_{\text{Text}}^i$ as $\vec{d}_{\text{Text}}^i$:
$$\vec{d}_{\text{Text}}^i = \text{TextEncoder}(d_{\text{Text}}^i). \quad (2)$$

Image Document Encoding. Different from text documents, image documents can be represented by both picture features and image captions, and the textual captions help to better understand the semantics of image documents (Baldrati et al., 2022). Thus, UniVL-DR encodes the picture $I_j$ and the image caption $C_j$ and sums these embeddings to get the representation $\vec{d}_{\text{Image}}^j$ of the $j$-th image document:
$$\vec{d}_{\text{Image}}^j = \text{ImgEncoder}(I_j) + \text{TextEncoder}(C_j). \quad (3)$$

The representations $\vec{d}_{\text{Image}}^j$ and $\vec{d}_{\text{Text}}^i$ of image and text documents use the same TextEncoder to encode their textual information, which bridges the different modalities in the text space and helps to build a universal embedding space for multi-modality retrieval.

Multi-modality Document Retrieval. The cosine similarity score $f(q, d)$ of query $q$ and a document candidate $d \in D$ estimates the relevance between $q$ and $d$:
$$f(q, d) = \cos(\vec{q}, \vec{d}\,), \quad (4)$$
where $\vec{q}$ and $\vec{d}$ are the representations of $q$ and $d$. Efficient similarity search between queries and the multi-modality documents can be provided by FAISS (Johnson et al., 2019).
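As a concrete reference for Eqs. 1-4, the following is a minimal PyTorch sketch of the dual-encoder scoring. The `text_encoder` and `img_encoder` arguments are stand-ins for CLIP's text and image towers, and the function names are ours, not from the paper.

```python
import torch.nn.functional as F

def encode_query(text_encoder, q):
    """Eq. 1: the query is encoded by the text tower."""
    return text_encoder(q)

def encode_text_doc(text_encoder, d_text):
    """Eq. 2: text documents share the query's text tower."""
    return text_encoder(d_text)

def encode_image_doc(img_encoder, text_encoder, picture, caption):
    """Eq. 3: sum of the picture embedding and the caption embedding.
    Reusing the text tower for captions ties the two modalities together."""
    return img_encoder(picture) + text_encoder(caption)

def relevance(q_vec, d_vec):
    """Eq. 4: cosine similarity between query and candidate vectors."""
    return F.cosine_similarity(q_vec, d_vec, dim=-1)
```

In practice, the document vectors would be precomputed and L2-normalized so that an inner-product FAISS index realizes the cosine similarity of Eq. 4 at scale.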
4.2 UNIVERSAL REPRESENTATION LEARNING

UniVL-DR employs a vision-language model, CLIP (Radford et al., 2021), which is knowledgeable about cross-modality retrieval, to learn universal representations for queries and multi-modality documents. UniVL-DR optimizes the universal embedding space by training with modality-balanced hard negatives, which avoids overfitting to single-modality signals during multi-modal co-training.

Given a query $q$ and its relevant candidate $d^+ \in D$, the embedding space can be optimized by sampling hard negatives $D^-$ and minimizing the following contrastive training loss $\mathcal{L}$:
$$\mathcal{L} = -\log \frac{e^{f(q,d^+)/\tau}}{e^{f(q,d^+)/\tau} + \sum_{d^- \in D^-} e^{f(q,d^-)/\tau}} = -\underbrace{f(q,d^+)/\tau}_{\mathcal{L}_{\text{Align}}} + \log \Big( e^{f(q,d^+)/\tau} + \underbrace{\sum_{i=1}^{k_1} e^{f(q,\,d_{\text{Image}}^{i-})/\tau}}_{\mathcal{L}_{\text{Image}}} + \underbrace{\sum_{j=1}^{k_2} e^{f(q,\,d_{\text{Text}}^{j-})/\tau}}_{\mathcal{L}_{\text{Text}}} \Big), \quad (5)$$
where $\tau$ is the temperature that scales the similarity scores, and $k_1$ and $k_2$ are the numbers of sampled image and text negatives. During training, we in fact maximize $\mathcal{L}_{\text{Align}}$ and minimize $\mathcal{L}_{\text{Image}}$ and $\mathcal{L}_{\text{Text}}$, which pulls queries closer to related documents and pushes them away from unrelated ones. If $k_1 > k_2$ or $k_2 > k_1$, the model can achieve a smaller $\mathcal{L}_{\text{Image}} + \mathcal{L}_{\text{Text}}$ simply by pushing queries far away from the image collection or the text collection as a whole. Such behavior wins a lower loss $\mathcal{L}$ but overfits the ranking features of single/cross modality matching, leading to modality discrimination during retrieval. Our modality-balanced negative training strategy therefore keeps $k_1 = k_2 = k$ to better train the modality selection ability of retrievers.
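A minimal PyTorch sketch of Eq. 5 with balanced negatives follows; the tensor shapes and the function name are our own assumptions, and the similarity scores are the $f(q, d)$ values of Eq. 4.

```python
import torch
import torch.nn.functional as F

def balanced_contrastive_loss(sim_pos, sim_img_negs, sim_txt_negs, tau=0.01):
    """Eq. 5 with modality-balanced negatives (k1 == k2 == k).

    sim_pos:      [B]    similarities f(q, d+) for a batch of queries
    sim_img_negs: [B, k] similarities to k hard image negatives
    sim_txt_negs: [B, k] similarities to k hard text negatives
    """
    assert sim_img_negs.shape == sim_txt_negs.shape  # enforce k1 == k2
    logits = torch.cat(
        [sim_pos.unsqueeze(1), sim_img_negs, sim_txt_negs], dim=1
    ) / tau                                          # [B, 1 + 2k]
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    # cross-entropy with label 0 is -log softmax at the positive, i.e. Eq. 5
    return F.cross_entropy(logits, labels)
```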
4.3 IMAGE VERBALIZATION FOR EXPANSION

UniVL-DR provides another way to bridge the modality gap between texts and images: verbalizing picture pixel features, with both image caption and query generation methods. Following Li et al. (2020), we represent a picture $I_j$ using detected objects $O = \{O_1, ..., O_l\}$. For each image object $O_i$, we obtain its pixel feature $\vec{O}_i$ and its predicted class $\hat{O}_i$. UniVL-DR then uses a vision-language model, such as VinVL (Zhang et al., 2021), to verbalize image documents. Specifically, we generate potentially matched captions or related queries as the image verbalization result $V(I_j)$, according to the picture $I_j$ or the image document $d_{\text{Image}}^j = \{I_j, C_j\}$.

We first feed the predicted classes $\{\hat{O}_1; ...; \hat{O}_l\}$ and regional features $\{\vec{O}_1; ...; \vec{O}_l\}$ of the detected objects into the image verbalization model and train it to generate the image caption $C_j$:
$$X_j^c = [\text{CLS}]; C_j; [\text{SEP}]; \hat{O}_1; ...; \hat{O}_l; [\text{SEP}]; \vec{O}_1; ...; \vec{O}_l; \quad (6)$$
or we replace the detected object classes $\{\hat{O}_1; ...; \hat{O}_l\}$ in the input sequence $X_j^c$ with the image caption $C_j$ to generate a related query $q$ for the image document $d_{\text{Image}}^j$:
$$X_j^q = [\text{CLS}]; q; [\text{SEP}]; C_j; [\text{SEP}]; \vec{O}_1; ...; \vec{O}_l, \quad (7)$$
where $;$ denotes concatenation, and $[\text{CLS}]$ and $[\text{SEP}]$ are special tokens. During training and inference, we use Masked Language Modeling (MLM) (Devlin et al., 2019) to mask and predict some or all of the tokens of the image caption $C_j$ and the query $q$ in the inputs $X_j^c$ and $X_j^q$, which trains the image verbalization models and generates the verbalized captions and queries, respectively.

Finally, we enhance the representations of image documents by expanding the raw caption $C_j$ with the image verbalization result $V(I_j)$ into an enriched text representation $C_j^*$:
$$C_j^* = C_j; [\text{SEP}]; V(I_j), \quad (8)$$
where $C_j^*$ replaces the raw caption $C_j$ in Eq. 3 when encoding the image document $d_{\text{Image}}^j$.

5 EXPERIMENTAL METHODOLOGY

This section describes the dataset, baselines, the vision-language models used in our experiments, and implementation details.

Dataset. A multi-hop and multi-modal open-domain question answering dataset, WebQA (Chang et al., 2022), is used in our experiments. We process the WebQA dataset in an open-domain retrieval setting and give the details in Appendix A.1.

Evaluation Metrics. We use NDCG@K, MRR@K, Recall@20, and Recall@100 as evaluation metrics, with K set to 10 and 20, and we regard MRR@10 as the main metric (Bajaj et al., 2016).

Vision-Language Models. In our experiments, we employ two state-of-the-art vision-language models, VinVL (Zhang et al., 2021) and CLIP (Radford et al., 2021), to implement different retrieval models. VinVL (Zhang et al., 2021) inherits the Oscar (Li et al., 2020) architecture, extracts object tags and region features to represent images, and learns cross-modal representations by aligning semantics between images and texts. Different from VinVL, CLIP (Radford et al., 2021) uses a dual encoder to project images and texts into the same semantic space for computing their similarity scores; it is trained on WebImageText, a large-scale dataset of 400 million image-text pairs, and has shown strong effectiveness in cross-modality retrieval.

Baselines. Our baselines cover three settings: single-modality retrieval, divide-and-conquer, and universal multi-modal retrieval.

Single modality retrieval. In this setting, we represent image documents with their captions and employ the text retrievers BM25 and DPR (Karpukhin et al., 2020) as baselines. DPR is trained on NQ (Kwiatkowski et al., 2019), which is similar to the textual source of WebQA. We then continuously train DPR with in-batch and hard negatives to implement the NQ-DPR and NQ-ANCE models.

Divide-and-conquer. We first employ three widely used retrievers, BM25, VinVL-DPR, and CLIP-DPR, to conduct text-text retrieval and text-image retrieval. The multi-modality retrieval results are then fused according to their uni-modal rank reciprocals or by oracle modality routing; the latter shows the upper bound of the retrieval performance of our divide-and-conquer models.

Multi-modal retrieval. We also build two multi-modal retrieval baselines, VinVL-DPR and CLIP-DPR, which represent image documents with caption and picture features. They optimize the VLP models VinVL (Zhang et al., 2021) and CLIP (Radford et al., 2021) with in-batch negatives to learn universal representations for multi-modal retrieval.

Implementation Details. When training UniVL-DR, we employ the text and image encoders from CLIP (https://github.com/openai/CLIP), truncate text to the maximum length of 77 tokens, and set the batch size to 64, the learning rate to 5e-6, the maximum number of training epochs to 20, and the temperature hyperparameter $\tau = 0.01$. We retrieve the Top 100 documents using CLIP-DPR and sample two hard negatives of different modalities from these candidates (k = 1, i.e., one image and one text negative). All models are tuned with the AdamW optimizer, evaluated every 500 steps, and early-stopped with a patience of 5 evaluations. More experimental details are given in Appendix A.2.
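The hard-negative mining step above can be sketched as follows. Assume `index` is a FAISS inner-product index over L2-normalized document embeddings from the warm-up CLIP-DPR model; `doc_ids`, `modality`, and `positives` are our own bookkeeping structures, not names from the paper.

```python
import faiss
import numpy as np

def mine_balanced_negatives(index, query_vec, doc_ids, modality, positives,
                            k=1, depth=100):
    """Sample k image and k text hard negatives from the Top-`depth`
    candidates of a warm-up retriever (k = 1 in the paper's setting)."""
    q = np.ascontiguousarray(
        np.asarray(query_vec, dtype="float32").reshape(1, -1))
    faiss.normalize_L2(q)                  # cosine realized as inner product
    _, ids = index.search(q, depth)        # Top-100 candidates
    img_negs, txt_negs = [], []
    for i in ids[0]:
        if doc_ids[i] in positives:        # skip relevant documents
            continue
        bucket = img_negs if modality[i] == "image" else txt_negs
        if len(bucket) < k:
            bucket.append(doc_ids[i])
        if len(img_negs) == k and len(txt_negs) == k:
            break
    return img_negs, txt_negs
```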
6 EVALUATION RESULTS

In this section, we study the performance of UniVL-DR, its advantages in multi-modal retrieval, the effectiveness of our modality-balanced hard negative training strategy, and how our image verbalization methods bridge the modality gap between texts and images.

6.1 OVERALL PERFORMANCE

The multi-modal retrieval performance of different models is shown in Table 1. (Table 1 reports MRR@10, NDCG@10, MRR@20, NDCG@20, Rec@20, and Rec@100 for each setting and model.) Our UniVL-DR outperforms all baselines with more than 7% improvement on the ranking metrics, improves the recall of relevant multi-modality documents by more than 6%, and even outperforms the divide-and-conquer model guided by oracle modality routing. Such significant improvements illustrate the effectiveness of UniVL-DR in building a multi-modal retrieval system.

Similar to UniVL-DR, BM25 represents image and text documents in one universal textual space and shows strong ranking effectiveness. To build a divide-and-conquer system, we use BM25 and CLIP-DPR as the text-text and text-image retrievers and then fuse the results of the two retrievers. With the help of oracle modality routing, the divide-and-conquer system produces better ranking results and recalls more relevant documents than BM25. Nevertheless, its performance drops distinctly when uni-modal rank reciprocals are used to route queries, showing how challenging retrieval result fusion is for divide-and-conquer. CLIP-DPR and UniVL-DR deal with this problem by learning universal representations for queries and multi-modality documents, which unifies multi-modality relevance modeling and retrieval result fusion. Thanks to our multi-modality training strategies, UniVL-DR achieves more than 10% improvement on multi-modal retrieval over CLIP-DPR. The following experiments further explore how UniVL-DR learns universal representations for multi-modal retrieval and bridges the gap between images and texts.
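For reference, one simple instantiation of fusing uni-modal runs by rank reciprocals is sketched below; the paper does not spell out its exact fusion formula, so treat this as an illustrative assumption.

```python
def fuse_by_rank_reciprocal(text_run, image_run, topk=100):
    """Merge two uni-modal ranked lists (best first) by rank reciprocals,
    e.g. a BM25 text-text run and a CLIP-DPR text-image run."""
    scores = {}
    for run in (text_run, image_run):
        for rank, doc_id in enumerate(run, start=1):
            scores[doc_id] = max(scores.get(doc_id, 0.0), 1.0 / rank)
    return sorted(scores, key=scores.get, reverse=True)[:topk]
```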
6.2 ABLATION STUDIES

Ablation studies examine model performance on multi-modal retrieval. We also evaluate UniVL-DR on the text-text and text-image retrieval tasks to show the influence of multi-modal learning on these single/cross modality retrieval tasks.

As shown in Table 2, we evaluate the retrieval effectiveness of two vision-language models, VinVL-DPR and CLIP-DPR, trained with in-batch negatives on the text-text/image and multi-modal retrieval tasks. In the single/cross modality setting, we fine-tune the vision-language models with the group of queries whose related documents are only in the text modality or only in the image modality. Our multi-modality training setting uses all queries to train these vision-language models and samples in-batch negatives equally from the documents of the different modalities.

For both CLIP-DPR and VinVL-DPR, image captions are usually more effective than figure features for representing image documents, which demonstrates the difficulty of understanding figure semantics from figure pixels alone. UniVL-DR therefore verbalizes the figure features by extracting the objects that appear in the figure and describing the figure facts among the detected objects (Zhang et al., 2021). The image verbalization results paraphrase picture pixel facts in natural language and enhance the textual representations of images when expanded into the image captions. As a result, UniVL-DR uses this enhanced text representation for image documents and employs the same module to encode the text information of image documents and text documents. This builds universal representations for multi-modality documents by breaking the modality boundary and fully using the additional training signals from different modalities, and it makes UniVL-DR achieve the best multi-modal retrieval performance among all baseline models. UniVL-DR also outperforms all baseline models on both the text-text and text-image retrieval tasks, demonstrating that multi-modality modeling indeed benefits single/cross modality retrieval.

In the multi-modal retrieval setting, CLIP-DPR is converted from a text-text retriever into a multi-modal retriever by adding figure features. CLIP-DPR achieves better performance on the text-image retrieval task than CLIP-DPR w/o figure feature, which illustrates that image features provide additional signals that help multi-modality models distinguish related image documents. On the contrary, the multi-modal retrieval performance of CLIP-DPR decreases, showing that CLIP-DPR fails to fuse retrieval results from different modalities. UniVL-DR uses a modality-balanced hard negative training strategy to learn universal representations for queries and documents, which deals with the retrieval result fusion challenge, brings larger gains on the multi-modal retrieval task, and enhances the modality disambiguation ability.

6.3 EFFECTIVENESS OF BALANCED HARD NEGATIVE SAMPLING

In this experiment, we study the training strategies UniVL-DR uses to learn universal multi-modality representations and show the effectiveness of different negative sampling methods.

As shown in Table 3, we start from the multi-modal retriever CLIP-DPR, continuously fine-tune it with different hard negative sampling methods, and report performance on the different retrieval tasks. The in-batch trained models prefer to return text documents rather than image documents as ranking results, even though image-answerable queries make up the larger portion (about 51.6%) of the training data. This illustrates that training multi-modality retrievers with modality-unbalanced negatives usually leads to an undesired modality bias during retrieval. Continuously training CLIP-DPR with hard negatives sampled from its own top-retrieved multi-modality results significantly improves retrieval performance in all testing scenarios. Our modality-balanced hard negative sampling strategy achieves the best retrieval performance among all negative sampling methods, showing its important role in building a universal multi-modal retrieval model. Compared with ANCE (Random), our modality-balanced sampling strategy mitigates the modality variance during contrastive training and provides more useful signals for training the modality disambiguation ability of universal multi-modal retrievers.

Finally, we visualize the embedding spaces of different retrieval models in Figure 4. After training with modality-balanced hard negatives, UniVL-DR learns a more uniform and effective embedding space for multi-modal retrieval: text and image documents occupy different areas of the space, and queries are routed to the areas that return documents of the corresponding modalities. As shown in Figure 4(b) and Figure 4(c), when the retrieval models are trained only with hard negatives of text or image documents, the query embeddings are concentrated and placed closer to the areas of image or text documents, respectively. This demonstrates that a multi-modality retrieval model usually overfits the training signals of the in-batch majority modality to win a lower contrastive loss. UniVL-DR alleviates this problem by balancing the modalities of hard negatives in contrastive training.
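The paper does not state how Figure 4 is produced, so the following is only a plausible sketch of such a visualization, assuming a t-SNE projection of the learned embeddings; the function and argument names are ours.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embedding_space(q_emb, txt_emb, img_emb, out_path="space.png"):
    """Project query / text-doc / image-doc embeddings to 2D and color
    them by group to inspect modality separation (cf. Figure 4)."""
    X = np.concatenate([q_emb, txt_emb, img_emb], axis=0)
    Z = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)
    start = 0
    for emb, label in [(q_emb, "query"), (txt_emb, "text doc"),
                       (img_emb, "image doc")]:
        n = len(emb)
        plt.scatter(Z[start:start + n, 0], Z[start:start + n, 1],
                    s=4, label=label)
        start += n
    plt.legend()
    plt.savefig(out_path, dpi=150)
```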
6.4 BRIDGING CROSS-MODALITY MATCHING WITH IMAGE VERBALIZATION

UniVL-DR uses image verbalization to generate matched captions or related queries that bridge the modality gap between texts and images. In this experiment, we show the effectiveness of different image verbalization strategies on the text-text, text-image, and multi-modal retrieval tasks.

As shown in Table 4, our image verbalization methods demonstrate their ability to enhance the text representations of image documents by achieving better text-image retrieval results. These methods aim to generate informative text clues that help retrievers distinguish query-related image documents in the text space. Text-text and multi-modal retrieval performance also improves with the help of verbalized captions or queries, showing that our image verbalization methods bridge the modality gap between images and texts and benefit the single-modality tasks through additional training signals from the other modality.

Compared with verbalized captions, our query verbalization method aligns the necessary semantics in captions, e.g. the mentioned entities, with image objects, and verbalizes the figure pixels with the help of caption semantics. Enhancing image representations with verbalized queries usually achieves better retrieval effectiveness than using verbalized captions, which showcases that our query verbalization method provides more meaningful text clues for relevance modeling and multi-modality learning.

We also provide additional experiments on the different image verbalization methods: Appendix A.4 relates the effectiveness of verbalized queries to manual caption lengths, and Appendix A.5 presents case studies on the characteristics of the different image verbalization methods.

7 CONCLUSION

This paper proposes UniVL-DR, which models single/cross modality matching and retrieval result fusion in one universal embedding space. UniVL-DR introduces an effective multi-modality training strategy to learn universal representations for queries and documents, which breaks the modality boundary between vision and language and helps to achieve state-of-the-art multi-modal retrieval performance. Our experiments show that UniVL-DR can bridge the modality gap with image verbalization techniques and avoid overfitting the training signals of one modality by optimizing retrievers with modality-balanced hard negatives.

ACKNOWLEDGMENTS

This work is supported by Beijing Academy of Artificial Intelligence (BAAI), the Natural Science Foundation of China under Grant No. 62206042, No. U1811261 and No. 62006129, the Fundamental Research Funds for the Central Universities under Grant No. N2216013, China Postdoctoral Science Foundation under Grant No. 2022M710022, and National Science and Technology Major Project (J2019-IV-0002-0069).

A APPENDIX

A.1 DATA STATISTICS

A multi-hop and multi-modal open-domain question answering dataset, WebQA (Chang et al., 2022), is used in our experiments. The dataset contains images and passages crawled from the general Web and Wikipedia. We randomly sample 5,000 queries from the original training set of WebQA as the development set for evaluation. All data statistics are shown in Table 5. To build an open-domain benchmark, we collect 389,750 images and 787,697 texts as the multi-modal retrieval sources. The image collection contains all images collected by the WebQA dataset, while the text collection contains all relevant passages of the 41,732 queries, which are Wikipedia snippets selected by matching noun chunks in the queries (Chang et al., 2022).
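As a small illustration of the split above, the sketch below holds out 5,000 training queries for development; the paper does not report a random seed, so the one used here is purely our assumption.

```python
import random

def make_dev_split(train_queries, n_dev=5000, seed=13):
    """Hold out n_dev queries from WebQA's original training set for
    evaluation; the seed is an illustrative choice of ours."""
    rng = random.Random(seed)
    dev_idx = set(rng.sample(range(len(train_queries)), n_dev))
    train = [q for i, q in enumerate(train_queries) if i not in dev_idx]
    dev = [train_queries[i] for i in sorted(dev_idx)]
    return train, dev
```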
A.2 ADDITIONAL EXPERIMENT DETAILS

This subsection describes additional implementation details. In our experiments, we employ two pretrained vision-language models, VinVL (Zhang et al., 2021) and CLIP (Radford et al., 2021), and the pretrained language model BERT (Devlin et al., 2019) to implement different retrieval models.

VinVL-DPR. For the VinVL variants, we first detect the image objects and extract the corresponding region features following VinVL (https://github.com/microsoft/scene_graph_benchmark). We then concatenate image captions and image region features as inputs to the VinVL models to get the image representations. We initialize VinVL with the checkpoint trained on the MSCOCO image retrieval task and continuously train the model on the WebQA dataset with in-batch negatives. During training, we set the batch size to 32, the learning rate to 2e-5, the gradient accumulation step to 1, and the maximum number of training epochs to 30. We truncate the queries, image captions, text documents, and image region features to maximum lengths of 70, 70, 200, and 50, respectively.

CLIP-DPR. For training CLIP-DPR, we start from the ViT-B/32 version of CLIP and continuously train it on the WebQA dataset with in-batch negatives. We truncate texts to the maximum length of 77 and set the gradient accumulation step to 1, the batch size to 64, the learning rate to 5e-6, the maximum number of training epochs to 20, and the temperature hyperparameter $\tau = 0.01$. The cosine annealing strategy is used to schedule the learning rate during training.

BERT-DPR. We initialize our retriever with the bert-base-uncased checkpoint provided by Hugging Face Transformers (https://github.com/huggingface/transformers). During training, we set the batch size to 32, the learning rate to 5e-5, the gradient accumulation step to 1, and the maximum number of training epochs to 30. We truncate the queries, text documents, and image captions to maximum lengths of 70, 200, and 70, respectively.

NQ-DPR/NQ-ANCE. NQ-DPR and NQ-ANCE start from the NQ-trained DPR model (Karpukhin et al., 2020), which uses a dual-encoder architecture to encode queries and documents. All experimental settings are kept the same as for BERT-DPR. NQ-ANCE is additionally tuned with hard negatives sampled from the Top 100 retrieved candidates of NQ-DPR (Xiong et al., 2021a).

A.3 EXPERIMENTAL DETAILS OF IMAGE VERBALIZATION

The image verbalization models are used to generate potentially matched captions or related queries for an image. Our experiments start from the image caption generation model trained on the MSCOCO image caption task (Zhang et al., 2021) and use it to generate the related captions or queries that verbalize images.

We can directly generate image-related captions as image verbalization results using the image caption model provided by VinVL (Zhang et al., 2021). As shown in Eq. 6, we first detect the objects in the images and then feed the predicted classes and region features of the detected objects to the VinVL model. In our experiments, we fix the parameters of the VinVL-based image caption model and generate a caption for each image. For generating image-related queries, as shown in Eq. 7, we concatenate the image-related query, the image caption, and the image regional features as the input. We continuously train the VinVL-based image caption model by randomly masking the tokens in queries and optimizing the vision-language model to fill in the masked positions.
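The random masking step can be sketched as follows; this is our own minimal illustration, with the 0.15 probability taken from the mask rate reported later in this subsection.

```python
import random

MASK = "[MASK]"

def mask_query_tokens(query_tokens, p=0.15, rng=random):
    """Randomly mask query tokens for MLM-style verbalization training."""
    masked, targets = [], []
    for tok in query_tokens:
        if rng.random() < p:
            masked.append(MASK)
            targets.append(tok)   # the model is trained to recover these
        else:
            masked.append(tok)
            targets.append(None)  # no loss on unmasked positions
    return masked, targets
```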
Different from image caption models, our query generation method aligns the semantics of image captions with image pixel features, rather than mapping the predicted classes to the regional image features of detected objects, which helps vision-language models better understand image semantics (Huang et al., 2021a). During training and inference, we generate up to 20 tokens and use a beam size of 5. We truncate the queries, image captions, and image region features to maximum lengths of 40, 30, and 50, respectively. The mask probability is set to 0.15. More experimental details can be found in Zhang et al. (2021).

A.4 IMAGE VERBALIZATION PERFORMANCE WITH DIFFERENT CAPTION LENGTHS

In this subsection, we evaluate the multi-modal retrieval performance of UniVL-DR with different verbalized queries. Specifically, we evaluate the effectiveness of image-verbalized queries generated from image documents with different manual caption lengths. We group the testing examples into three categories according to the manual caption lengths of the image documents and calculate the average MRR@10 score for each group; the short, medium, and long caption length groups account for 42.33%, 36.84%, and 20.83% of the examples, respectively.

As shown in Figure 5, our query generation method mainly improves retrieval effectiveness on the short and medium caption length groups, illustrating that the generated queries provide crucial textual clues for image representations with shorter captions. These expanded text clues help retrieval models better understand image semantics, represent images more effectively via the enhanced textual information, and conduct cross-modality matching more easily. The medium caption length group achieves the best performance, because image captions of medium length cover more of the text clues necessary for generating informative verbalization results.
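A small sketch of this bucketed evaluation follows; the length cutoffs are illustrative assumptions, since the paper does not give the exact boundaries of the three groups.

```python
from collections import defaultdict

def mrr_by_caption_length(examples, bounds=(10, 20)):
    """Average MRR@10 per caption-length bucket (short/medium/long).
    `examples` are (caption_len, mrr_at_10) pairs; `bounds` are our own
    illustrative cutoffs in tokens."""
    buckets = defaultdict(list)
    for length, mrr in examples:
        if length < bounds[0]:
            buckets["short"].append(mrr)
        elif length < bounds[1]:
            buckets["medium"].append(mrr)
        else:
            buckets["long"].append(mrr)
    return {name: sum(v) / len(v) for name, v in buckets.items()}
```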
A.5 CASE STUDIES ON DIFFERENT IMAGE VERBALIZATION METHODS

This experiment shows some image verbalization cases in Table 6. We randomly sample queries that can be answered by image documents and show the manual captions, verbalized captions, and verbalized queries of the image documents. Overall, these cases fall into two groups according to the lengths of their manual captions. The manual captions of the first three cases are longer and more informative in describing the image facts among the mentioned objects, and can be used directly in text-image relevance modeling. On the contrary, the manual captions of the last three cases consist only of the most representative entities that appear in the images, making it difficult to distinguish the related images from these manual captions alone.

UniVL-DR employs two image verbalization methods to enhance the textual semantics of images. Generating image captions is the most intuitive way to paraphrase images using the pre-defined classes of image objects. Nevertheless, these object classes are too general and may be uninformative for matching, because specific entities are critical for retrieving related documents in a question-answering system (Sciavolino et al., 2021). Different from verbalized captions, the verbalized queries are usually more informative and meaningful, and they specify the image objects by copying entity names from the manual captions, such as the names of persons, places, and buildings. These entities can be directly matched with the given queries, which benefits cross-modality matching and helps to mitigate the modality gap between images and texts.

A.6 MULTI-MODAL RETRIEVAL WITH DIFFERENT IMAGE REPRESENTATION COMBINATION METHODS

In this subsection, we conduct experiments on the effectiveness of different methods for combining the representations of image captions and image features. (Table 7 reports MRR@10/20 and NDCG@10/20 for each model.)

As shown in Table 7, we evaluate different combination methods with CLIP-DPR. We concatenate, take the outer product of, or sum the representations of image captions and image features, yielding three models: CLIP-DPR (Concatenation), CLIP-DPR (Outer Product), and CLIP-DPR (Sum). CLIP-DPR (Sum) shows its effectiveness by achieving the best performance among these baselines. Summation is a commonly used semantic combination method, also used in BERT (Devlin et al., 2019) to combine token embeddings and position embeddings. In contrast, the concatenation operation treats the representations of image captions and image features as subvectors and separates them into different subspaces, making it harder to learn the semantics of image documents. The outer product operation produces orthogonal cross terms when combining the representations, which is not a typical combination method.
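The three combination operators can be sketched as below; the dimensions and function name are ours, and only the summed variant corresponds to the encoder actually used by UniVL-DR (Eq. 3).

```python
import torch

def combine(img_vec: torch.Tensor, cap_vec: torch.Tensor, how: str) -> torch.Tensor:
    """The three combination operators compared in A.6.

    img_vec, cap_vec: [d] embeddings of picture features and caption.
    """
    if how == "sum":        # used by UniVL-DR / CLIP-DPR (Sum), stays [d]
        return img_vec + cap_vec
    if how == "concat":     # splits modalities into subspaces, [2d]
        return torch.cat([img_vec, cap_vec], dim=-1)
    if how == "outer":      # orthogonal cross terms, flattened to [d*d]
        return torch.outer(img_vec, cap_vec).reshape(-1)
    raise ValueError(how)
```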
A.7 ADDITIONAL EVALUATIONS ON MULTI-MODAL RETRIEVAL

In our experiments, we follow the widely used retrieval benchmarks MS MARCO (Bajaj et al., 2016) and BEIR (Thakur et al., 2021) and use NDCG@10/20 and MRR@10/20 to show the retrieval effectiveness of different retrieval models. MRR scores and NDCG scores are calculated by the official MS MARCO script (https://github.com/microsoft/MSMARCO-Passage-Ranking/blob/master/ms_marco_eval.py) and TREC's evaluation tool (https://github.com/cvangysel/pytrec_eval).

As shown in Table 8, we also evaluate the retrieval performance of higher-ranked candidates using MRR@1 and NDCG@5 (Table 8 additionally reports NDCG@10/20). UniVL-DR again shows strong effectiveness, outperforming BM25 and CLIP-DPR by more than 6%. Notably, UniVL-DR even shows better retrieval effectiveness than the BM25 & CLIP-DPR (Oracle Modality) model, which operates in an idealized setting. This supports our claim that multi-modality modeling can also benefit single/cross-modality tasks.

A.8 EXAMPLES OF HARD NEGATIVES

In this subsection, we randomly sample two queries and show some hard negatives in Figure 6, which are top-ranked documents from our CLIP-DPR model. In the first case, when we ask "Is there greenery at Centennial Olympic Park?", CLIP-DPR returns some image and text documents that are regarded as hard negatives for continuously training dense retrievers. The negative images are about buildings, lawns, and trees, but these objects are not located at Centennial Olympic Park. Evidently, these negative images are on-topic with "greenery" but are not related to the given query. Training dense retrievers with such hard negatives better teaches retrievers to distinguish the subtle differences among these confusing images.

In both cases, the hard negatives from different modalities showcase some of the semantics the retrieval model needs in order to find relevant information. For example, the text documents in Case 1 provide background knowledge of the Olympic Games and Centennial Olympic Park, and the image documents in Case 2 supply the "doves" semantics from the visual modality. These informative documents from different modalities provide sufficient clues to guide dense retrievers to learn the necessary semantics during contrastive training.
1. What is the main contribution of the paper in terms of multi-modal retrieval?
2. What are the strengths and weaknesses of the proposed model, particularly regarding its novelty and combination of modalities?
3. Do you have any concerns or suggestions regarding the representation learning and image verbalization method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any recent works or advancements in text, sketch, and image retrieval that the paper could consider discussing?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

This paper presents a universal vision-language dense retrieval model, which builds a unified model for multi-modal retrieval. The proposed model encodes queries and multi-modal resources in an embedding space for searching candidates from different modalities. To learn a unified embedding space for multi-modal retrieval, this work has come out with (1) a universal embedding optimization strategy, which contrastively optimizes the embedding space using modality-balanced hard negatives; (2) an image verbalization method, which bridges the modality gap between images and texts in the raw data space.

Strengths And Weaknesses

Strengths
(1) The paper addresses a very interesting task of visual-language research.
(2) The paper is reasonably well presented and written in good English.
(3) Experimental results are quite encouraging.

Weaknesses
(1) The novelty is not clear. I was expecting it to be mentioned or specified in the introduction of the paper for a better understanding of the contribution of the paper. Furthermore, I am aware of the following existing papers which also consider multi-modal embeddings for different tasks, such as text [1], sketch [4], and both [3,5] based image retrieval, text and image topic modelling [2], etc.
(2) It is not very clear why a simple summation of the representations of the image caption and the image feature works well for combining two very different types of modalities. I wonder if any other combination (concatenation, outer product, etc.) has been ablated or could be interesting to try.
(3) Currently a lot of progress has been made on text-, sketch-, and both-based image retrieval, which should also be included and discussed within the literature on cross-modal retrieval.
(4) It is not clear why the [CLS] and [SEP] tokens are special and different.
(5) I think it is worth defining the universal representation learning and image verbalization procedures.
(6) In the experimental results tables, citations of the baselines and SOTA methods should be given for readability.

[1] Mai et al., Spatial-Semantic Image Search by Visual Feature Synthesis, CVPR, 2017.
[2] Gomez et al., Self-supervised learning of visual features through embedding images into text topic spaces, CVPR, 2017.
[3] Dey et al., Learning Cross-Modal Deep Embeddings for Multi-Object Image Retrieval using Text and Sketch, ICPR, 2018.
[4] Dutta and Akata, Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-based Image Retrieval, IJCV, 2020.
[5] Sangkloy et al., A Sketch Is Worth a Thousand Words: Image Retrieval with Text and Sketch, ECCV, 2022.

Clarity, Quality, Novelty And Reproducibility

The paper is reasonably clear. It is well written in good English and the reported results are encouraging. The authors have also provided the code in the supplementary material, which I expect to be useful for its reproducibility. However, I haven't found much originality in the core work, as I commented in the weaknesses section.
Furthermore, UniVL-DR introduces an image verbalization method, which regards language as a kind of mentalese (Cavanagh, 2021) and mitigates the modality gap between images and texts. Our image verbalization method first aligns the semantics of image captions and figure pixels (Huang et al., 2021a), and then paraphrases the image facts. It helps to bridge language and vision understanding modules of UniVL-DR via natural language. To build a multi-modal retrieval benchmark, we leverage a multi-modal question answering (QA) benchmark WebQA (Chang et al., 2022) and convert it to a standard open-domain setting: retrieving multi-modality candidates from text and image collections for a user query. Divide-and-conquer is an intuitive way to build a multi-modal retrieval system and we pre-route queries to oracle modality to show the upper bound performance of such a system. Compared with the divide-and-conquer system, UniVL-DR addresses the retrieval result fusion challenge, achieves state-of-the-art multi-modal retrieval performance, and brings more than 5% improvement in single/cross modality retrieval. Our experiments show that UniVL-DR learns an effective embedding space for multi-modal retrieval by separating texts and images into different areas and guiding queries to return candidates from corresponding modalities. Our further analyses show that UniVL-DR can alleviate overfit singlemodality signals by balancing hard negatives during training and bridging the modality gap between vision and language by verbalizing images. All experimental results show that learning one universal representation space is starting to benefit single-modality tasks—pretraining representation models on multi-modality and using our techniques can learn additional signals from multi-modalities, overcome the modality boundary, and provide convincing gains in single/multi-modality tasks. 2 RELATED WORK Document retrieval is a typical single modality retrieval task, which aims to return related documents for user queries and can be tackled with dense retrievers (Xiong et al., 2021b; Lewis et al., 2020; Zhan et al., 2021; Li et al., 2021b; Yu et al., 2021). Dense retrievers encode queries and documents with pretrained language models (Devlin et al., 2019) and map them in an embedding space to conduct an efficient search. The query and document encoders are usually contrastively trained with in-batch negatives, BM25 retrieved negatives, and hard negatives (Karpukhin et al., 2020; Xiong et al., 2021a). Recently, lots of work has focused on multi-modal retrieval tasks, which retrieve texts and images to satisfy the multi-modality information needs of users (Hannan et al., 2020; Singh et al., 2021; Talmor et al., 2021; Chang et al., 2022). WebQA (Chang et al., 2022), an open-domain multi-modal question answering benchmark, is built to encourage the following work to represent multi-modal knowledge in a unified space and answer user queries with the information from attribute modalities. It is a more realistic setting, which avoids synthesizing queries with templates (Talmor et al., 2021) and downplays the role of modality disambiguation (Hannan et al., 2020) in the multi-modality modeling. To search information from large-scale multi-modality sources, WebQA (Chang et al., 2022) employs a divide-and-conquer pipeline to search text and image candidates with BM25 and CLIP (Radford et al., 2021) and then fuse these retrieval results using a vision-language model. 
However, single- modality retrievers, such as BM25 and CLIP, usually show distinct retrieval effectiveness (Chang et al., 2022), leading to modality discrimination during fusing retrieval results from different modalities. When building a unified multi-modal retriever, vision-language pretraining (VLP) is crucial to learn universal representations for texts and images, which has also shown success on lots of visionlanguage benchmarks (Uppal et al., 2022; Han et al., 2020; Khan et al., 2021; Du et al., 2022). Most VLP approaches encode texts and images and pretrain encoders with two tasks: masked token prediction and text-image matching (Zhang et al., 2021). These VLP methods teach vision-language models to learn the semantic alignments between texts and images, as well as encode images with the regional features of detected objects (Chen et al., 2019; Lu et al., 2019; Tan and Bansal, 2019; Su et al., 2020; Li et al., 2019; 2021a; Cho et al., 2021; Hu et al., 2020; Gan et al., 2020) or the whole image features (Xu et al., 2021; Kim et al., 2021; Huang et al., 2021b; Wang et al., 2021). 3 MULTI-MODAL RETRIEVAL TASK As shown in Figure 3, we compare different retrieval tasks and tell apart the differences between multi-modal retrieval and other two tasks, single modality retrieval and cross modality retrieval. Single Modality Retrieval. Single modality retrieval focuses on conducting relevance searching in one modality space, which includes text-text retrieval and image-image retrieval. Text-text retrieval (Bajaj et al., 2016) aims to search relevant candidates from the text collection T = {T1, ..., Tn} to answer a query q. And image-image retrieval (Yoon et al., 2021) focuses more on returning similar images from the image collection I = {I1, ..., Im} for the given image Ij . Cross Modality Retrieval. The cross modality retrieval, e.g. MSCOCO (Chen et al., 2015) and Flickr30K (Young et al., 2014), contains two subtasks: text-image retrieval and image-text retrieval. Given an image caption Ti or an image Ij , these tasks require retrieval models to conduct crossmodality matching between images and captions, aiming to search candidates from images I = {I1, ..., Im} or image captions T = {T1, ..., Tn}, respectively. Such cross-modality interactions are built to align semantics between captions and images, which is distinct from the search relevance. Multi-Modal Retrieval. Given a query q, the multi-modal retrieval task (Chang et al., 2022) helps users uncover the information from multi-modality sources D = {T1, ..., Tn, I1, ..., Im}. Different from single/cross modality retrieval, multi-modal retrieval aims at returning relevant candidates from the multi-modality documents D. The retrieval results may consist of texts, images, or a mixture of them according to user query q. Different from existing text and sketch base image retrieval (Sangkloy et al., 2022; Dutta and Akata, 2020; Dey et al., 2018; Mai et al., 2017), the multimodal retrieval focuses more on relevance modeling between queries and documents, single/cross modality matching, and modality routing, making this task more challenging. Moreover, we can pre-route queries to a single modality and convert the multi-modal retrieval to two subtasks, text-text retrieval and text-image retrieval, which are single and cross modality retrieval tasks. 4 UNIVSEARCH BY LEARNING A UNIFIED EMBEDDING SPACE This section describes our Universal Vision-Language Dense Retrieval (UniVL-DR). 
As shown in Figure 3, given a query q and multi-modality documents D = {d 1Text, ..., dnText, d 1Image, ..., dmImage}, it directly encodes query q, text document d iText and image document d j Image in one embedding space, which conducts relevance modeling, modality routing, and result fusion in such a space (Sec. 4.1). Texts and images usually have different understanding mechanisms, making it difficult to tackle multi-modality tasks. Nevertheless, language and vision can be commonly translated as a type of mentalese to better communicate between different modules in our brains (Cavanagh, 2021), thus a unified representation method has the ability to break the boundary of different modalities and benefit vision-language learning. To build a unified multi-modal retrieval system, UniVL-DR learns a universal embedding space by contrastively optimizing vision-language representations using hard negatives with balanced-modality sampling (Sec. 4.2) and bridging the modality gap via verbalizing the picture to paraphrase pixel semantics in the raw text space (Sec. 4.3). 4.1 MULTI-MODALITY DENSE RETRIEVAL UniVL-DR gets representations of queries, image documents and text documents with two encoders: TextEnocder and ImgEncoder. Specifically, the image document d jImage consists of a picture Ij and an image caption Cj , thus we utilize ImgEncoder and TextEnocder to encode Ij and Cj . Query Encoding. UniVL-DR directly encodes the query q to get its representation q⃗: q⃗ = TextEnocder(q). (1) Text Document Encoding. To represent text documents, UniVL-DR also leverages the TextEnocder to encode the i-th text document d iText as d⃗ i Text: d⃗ iText = TextEnocder(d i Text). (2) Image Document Encoding. Different from text documents, image documents can be represented by picture features and image captions and the textual captions can help better understand the semantics of image documents (Baldrati et al., 2022). Thus, UniVL-DR encodes picture Ij and image caption Cj and then sums these embeddings to get the representation d⃗ j Image of j-th image document: d⃗ jImage = ImgEnocder(Ij) + TextEnocder(Cj). (3) The representations d⃗ jImage and d⃗ i Text of image document and text document use the same TextEnocder to encode their textual information, which bridges different modalities in the text space and helps to build a universal embedding space for multi-modality retrieval. Multi-modality Document Retrieval. The cosine similarity score f(q, d) of query q and document candidate d ∈ D can be calculated to estimate the relevance between q and d: f(q, d) = cos(q⃗, d⃗ ), (4) where q⃗ and d⃗ are the representations of q and d. The efficient similarity calculation between queries and the multi-modality documents can be provided by FAISS (Johnson et al., 2019). 4.2 UNIVERSAL REPRESENTATION LEARNING UniVL-DR employs a vision-language model, CLIP (Radford et al., 2021), to learn universal representations for queries and multi-modality documents, which is knowledgeable about crossmodality retrieval. UniVL-DR optimizes the universal embedding space through training with modality-balanced hard negatives, which avoids overfitting to the signals of single-modality during multi-modal co-training. 
Given the query q and its relevant candidate d+ ∈ D, the embedding space can be optimized by sampling hard negatives D− and minimizing the following contrastive training loss L: L = − log e f(q,d+)/τ ef(q,d+)/τ + ∑ d−∈D− e f(q,d−)/τ = − f(q, d+)/τ︸ ︷︷ ︸ LAlign + log(ef(q,d +)/τ + k1∑ i=1 ef(q,d i− Image)/τ ︸ ︷︷ ︸ LImage + k2∑ j=1 ef(q,d j− Text)/τ ︸ ︷︷ ︸ LText ), (5) where τ is the temperature to scale the similarity score. During training, we in fact maximize LAlign and minimize LImage and LText, which make queries closer to related documents and away from unrelated documents. If k1 > k2 or k2 > k1, we can achieve a smaller loss LImage + LText by simply making queries far away from the image collection or the text collection. Such a behavior can win a lower loss L but overfits the ranking features from single/cross modality matching, leading to a modality discrimination during retrieval. Our modality-balanced negative training strategy keeps k1 = k2 = k to better train the modality selection ability of retrievers. 4.3 IMAGE VERBALIZATION FOR EXPANSION UniVL-DR provides another way to bridge the modality gap between texts and images by verbalizing picture pixel features, including image caption and query generation methods. Following Li et al. (2020), we can represent a picture Ij using detected objects O = {O1, ..., Ol}. For each image object Oi, we can get its pixel feature O⃗i and the predicted class Ôi. Then UniVL-DR uses a vision-language model, such as VinVL (Zhang et al., 2021), to verbalize image documents. Specifically, we generate potentially matched captions or related queries as the image verbalization results V (Ij), according to the picture Ij or the image document d j Image = {Ij , Cj}. We can first feed the predicted classes {Ô1; ...; Ôl} and regional features {O⃗1; ...; O⃗l} of detected objects into image verbalization models. Then we train the model to generate image caption Cj : Xcj = [CLS];Cj ; [SEP]; Ô1; ...; Ôl; [SEP]; O⃗1; ...; O⃗l; (6) or replace the detected object classes {O⃗1; ...; O⃗l} in the input sequence Xcj with the image caption Cj to generate related query q of the image document d j Image: Xqj = [CLS]; q; [SEP];Cj ; [SEP]; O⃗1; ...; O⃗l, (7) where ; is the concatenation operation, and [CLS] and [SEP] are special tokens. During training or inference, we utilize Masked Language Modeling (MLM) (Devlin et al., 2019) to mask and predict some or all of the tokens of image caption Cj and query q in the inputs Xcj and X q j , aiming to train image verbalization models or generate verbalized captions and queries. Finally, we enhanced the representations of image documents by expending their text representations C∗j by expanding the raw caption Cj with image verbalization results V (Ij): C∗j = Cj ; [SEP];V (Ij), (8) where the enhanced text representation C∗j is used to replace the raw caption Cj in E.q. 3 during encoding the image document d jImage. 5 EXPERIMENTAL METHODOLOGY This section describes the dataset, baselines, some vision language models used in our experiments, and implementation details. Dataset. A multi-hop and multi-modal open domain question answering dataset WebQA (Chang et al., 2022) is used in our experiments. We process the WebQA dataset in an open domain retrieval setting and show the details in Appendix A.1. Evaluation Metrics. We use NDCG@K, MRR@K, Recall@20, and Recall@100 as the evaluation metrics. K can be 10 and 20. And we regard MRR@10 as our main evaluation (Bajaj et al., 2016). Vision-Language Models. 
In our experiments, we employ two state-of-the-art vision-language models, VinVL (Zhang et al., 2021) and CLIP (Radford et al., 2021) to implement different retrieval models in our experiments. VinVL (Zhang et al., 2021) inherits Oscar (Li et al., 2020) architecture, which extracts object tags and region features to represent images, and learns cross-modal representations by aligning semantics between images and texts. Different from VinVL, CLIP (Radford et al., 2021) utilizes a dual encoder to project images and texts in the same semantic space for computing their similarity scores and is trained on a large-scale dataset WebImageText that contains 400 million image-text pairs. It has shown strong effectiveness in cross-modality retrieval. Baselines. Our baselines contain several models in the settings of single modality retrieval, divideand-conquer, and universal multi-modal retrieval. Single modality retrieval. In this setting, we represent image documents with captions and employ text retrievers, BM25 and DPR (Karpukhin et al., 2020) as baselines. DPR is trained with NQ (Kwiatkowski et al., 2019), which is similar to the textual source of WebQA. Then we continuously train DPR with in-batch and hard negatives to implement NQ-DPR and NQ-ANCE models. Divide-and-conquer. We first employ three widely used retrievers, BM25, VinVL-DPR, and CLIPDPR, to conduct text-text retrieval and text-image retrieval. Then the multi-modality retrieval results are fused according to their uni-modal rank reciprocals or oracle modality routing. The latter one shows the upper bound of the retrieval performance of our divide-and-conquer models. Multi-modal retrieval. In our experiments, we also build two multi-modal retrieval baselines: VinVLDPR and CLIP-DPR. VinVL-DPR and CLIP-DPR represent image documents with caption and picture features. And then they optimize VLP models, VinVL (Zhang et al., 2021) and CLIP (Radford et al., 2021), with in-batch negatives to learn universal representations for multi-modal retrieval. Implementation Details. During training UniVL-DR, we employ the text and image encoders from CLIP, truncate the text with the max length of 771 and set the batch size to 64, learning rate=5e− 6, max training epoch to 20, and the temperature hyperparameter τ = 0.01. In our experiments, we retrieve Top 100 documents using CLIP-DPR and sample two hard negatives of different modalities (k = 1) from these candidates. All models are tuned with AdamW optimizer, are evaluated per 500 steps, and set early stop step as 5. More experimental details are shown in Appendix A.2. 6 EVALUATION RESULTS In this section, we study the performance of UniVL-DR, its advantages in multi-modal retrieval, the effectiveness of our modality-balanced hard negative training strategies, and how our image verbalization methods bridge the modality gap between texts and images. 6.1 OVERALL PERFORMANCE The multi-modal retrieval performance of different models is shown in Table 1. Our UniVL-DR outperforms all baselines with more than 7% improvement on ranking evaluation, recalls more than 6% relevant multi-modality documents, and even outperforms the divide-andconquer model guided by oracle modality routing. Such significant improvements illustrate the effectiveness of UniVL-DR in building a multi-modal retrieval system. 
1https://github.com/openai/CLIP Setting Model MRR@10 NDCG@10 MRR@20 NDCG@20 Rec@20 Rec@100 Similar to UniVL-DR, BM25 learns universal textual representations for image/text documents and shows strong ranking effectiveness. To build a divide-and-conquer system, we use BM25 and CLIP-DPR to implement text-text and text-image retrievers and then fuse the results from different retrievers. With the help of oracle modality routing, the divide-and-conquer system shows better ranking results and recalls more relevant documents than BM25. Nevertheless, this system shows a distinct performance when using the uni-modal rank reciprocals to route queries, showing the challenge of fusing retrieval results in divide-and-conquer. CLIP-DPR and UniVL-DR can deal with this problem by learning universal representations for queries and multi-modality documents, which unifies the multi-modality relevance modeling and retrieval result fusion. Thanks to our multi-modality training strategies, UniVL-DR achieves more than 10% improvement on multi-modal retrieval than CLIP-DPR. The following experiments further explore how UniVL-DR learns universal representations for multi-modal retrieval and bridges the gap between images and texts. 6.2 ABLATION STUDIES The ablation studies are conducted to study model performance on multi-modal retrieval. And we also evaluate the effectiveness of UniVL-DR on both text-text and text-image retrieval tasks, which aims at showing the influence of multi-modal learning on these single/cross modality retrieval tasks. As shown in Table 2, we evaluate the retrieval effectiveness of different vision-language models, VinVL-DPR and CLIP-DPR. They are trained with in-batch negatives on text-text/image and multimodal retrieval tasks. In the single/cross modality setting, we fine-tune vision-language models with a group of queries that only contain related documents in text modality or image modality. Our multi-modality training setting uses all queries to train these vision-language models and equally samples in-batch negatives from the documents of different modalities. For both CLIP-DPR and VinVL-DPR, image captions are usually more effective to represent image documents than figure features, which demonstrates the difficulty in understanding figure semantics with only figure pixels. Thus, UniVL-DR tries to verbalize the figure features by extracting the objects that appear in the figure and describing the figure facts among detected objects (Zhang et al., 2021). The image verbalization results paraphrase picture pixel facts in natural language and help to enhance the textual representations of images by expanding image verbalization results to image captions. As a result, UniVL-DR uses such an enhanced text representation for image documents and then employs the same module to encode text information of image documents and text documents. It helps to build universal representations for multi-modality documents by breaking the modality boundary and fully using additional training signals from different modalities, making UniVL-DR achieve the best retrieval performance on multi-modal retrieval among all baseline models. UniVL-DR also shows its advantages by outperforming all baseline models on both text-text and text-image retrieval tasks, demonstrating that multi-modality modeling indeed benefits single/cross modality retrieval. In the multi-modal retrieval setting, CLIP-DPR is converted from a text-text retriever to a multi-modal retriever after adding figure features. 
6.2 ABLATION STUDIES
The ablation studies examine model performance on multi-modal retrieval. We also evaluate the effectiveness of UniVL-DR on both text-text and text-image retrieval tasks, to show the influence of multi-modal learning on these single/cross modality retrieval tasks.
As shown in Table 2, we evaluate the retrieval effectiveness of different vision-language models, VinVL-DPR and CLIP-DPR. They are trained with in-batch negatives on text-text/image and multi-modal retrieval tasks. In the single/cross modality setting, we fine-tune the vision-language models with a group of queries that only contain related documents of text modality or image modality. Our multi-modality training setting uses all queries to train these vision-language models and equally samples in-batch negatives from the documents of different modalities. For both CLIP-DPR and VinVL-DPR, image captions are usually more effective at representing image documents than figure features, which demonstrates the difficulty of understanding figure semantics from figure pixels alone. Thus, UniVL-DR verbalizes the figure features by extracting the objects that appear in the figure and describing the figure facts among the detected objects (Zhang et al., 2021). The image verbalization results paraphrase picture pixel facts in natural language and help to enhance the textual representations of images by appending the verbalization results to the image captions. As a result, UniVL-DR uses this enhanced text representation for image documents and employs the same module to encode the text information of image documents and text documents. This helps to build universal representations for multi-modality documents by breaking the modality boundary and fully using additional training signals from different modalities, making UniVL-DR achieve the best multi-modal retrieval performance among all baseline models. UniVL-DR also shows its advantages by outperforming all baseline models on both text-text and text-image retrieval tasks, demonstrating that multi-modality modeling indeed benefits single/cross modality retrieval. In the multi-modal retrieval setting, CLIP-DPR is converted from a text-text retriever to a multi-modal retriever after adding figure features.
CLIP-DPR achieves better performance on the text-image retrieval task than CLIP-DPR w/o figure features, which illustrates that image features provide additional signals that help multi-modality models distinguish related image documents. On the contrary, the multi-modal retrieval performance of CLIP-DPR decreases, showing that CLIP-DPR fails to fuse retrieval results from different modalities. UniVL-DR uses a modality-balanced hard negative training strategy to learn universal representations for queries and documents, which deals with the challenge of fusing retrieval results, helps to achieve more gains on the multi-modal retrieval task, and enhances the modality disambiguation ability.
6.3 EFFECTIVENESS OF BALANCED HARD NEGATIVE SAMPLING
In this experiment, we study the training strategies of UniVL-DR used in learning universal multi-modality representations and show the effectiveness of different negative sampling methods. As shown in Table 3, we start from the multi-modal retriever CLIP-DPR, continue fine-tuning it with different hard negative sampling methods, and show its performance on different retrieval tasks. Our experimental results show that the in-batch trained models prefer to return text documents rather than image documents as ranking results, even though image-answerable queries make up the larger portion (about 51.6%) of the training data. This illustrates that training multi-modality retrievers with modality-unbalanced negatives usually leads to an undesired modality bias during retrieval. We then continue training CLIP-DPR with hard negatives sampled from its top-retrieved multi-modality results, which significantly improves its retrieval performance in all testing scenarios. Our modality-balanced hard negative sampling strategy achieves the best retrieval performance among all negative sampling methods, showing its important role in building a universal multi-modal retrieval model. Compared with ANCE (Random), our modality-balanced sampling strategy mitigates the modality variance during contrastive training and provides more useful signals to train the modality disambiguation ability of universal multi-modal retrievers.
Finally, we visualize the embedding space of different retrieval models in Figure 4. After training with modality-balanced hard negatives, UniVL-DR learns a more uniform and effective embedding space for multi-modal retrieval. In this embedding space, text and image documents are assigned to different areas, and queries are routed to the appropriate areas for returning documents of the corresponding modalities. As shown in Figure 4(b) and Figure 4(c), when the retrieval models are only trained with hard negatives of text or image documents, the query embeddings are concentrated and assigned closer to the areas of image and text documents, respectively. This demonstrates that multi-modality retrieval models usually overfit the training signals of the in-batch majority modality to achieve a lower contrastive loss during training. UniVL-DR alleviates this problem by balancing the modalities of hard negatives in contrastive training.
6.4 BRIDGING CROSS-MODALITY MATCHING WITH IMAGE VERBALIZATION
UniVL-DR uses image verbalization methods to generate matched captions or related queries to bridge the modality gap between texts and images. In this experiment, we show the effectiveness of different image verbalization strategies on text-text, text-image, and multi-modal retrieval tasks.
As shown in Table 4, our image verbalization methods demonstrate their ability to enhance the text representations of image documents by achieving better text-image retrieval results. These image verbalization methods aim to generate informative text clues that help retrievers distinguish query-related image documents in the text space. The text-text and multi-modal retrieval performance is also improved with the help of verbalized captions or queries, showing the effectiveness of our image verbalization methods in bridging the modality gap between images and texts and benefiting the single modality tasks with additional training signals from different modalities. Compared with verbalized captions, our query verbalization method aligns the necessary semantics in captions, e.g. mentioned entities, with image objects and verbalizes the figure pixels with the help of caption semantics. Enhancing image representations using verbalized queries usually achieves better retrieval effectiveness than using verbalized captions. This showcases that our query verbalization method can provide more meaningful text clues for relevance modeling and multi-modality learning. Moreover, some additional experiments are provided to study the effectiveness of different image verbalization methods. We first show the relationship between the effectiveness of verbalized queries and manual caption lengths in Appendix A.4 and then conduct some case studies in Appendix A.5 to explore the characteristics of different image verbalization methods.
7 CONCLUSION
This paper proposes UniVL-DR, which models single/cross modality matching and retrieval result fusion in one universal embedding space. UniVL-DR proposes an effective multi-modality training strategy to learn universal representations for queries and documents, which breaks the modality boundary between vision and language and helps to achieve state-of-the-art multi-modal retrieval performance. Our experiments show that UniVL-DR can bridge the modality gap with image verbalization technologies and avoid overfitting the training signals of one modality by optimizing retrievers with modality-balanced hard negatives.
ACKNOWLEDGMENTS
This work is supported by Beijing Academy of Artificial Intelligence (BAAI), the Natural Science Foundation of China under Grant No. 62206042, No. U1811261 and No. 62006129, the Fundamental Research Funds for the Central Universities under Grant No. N2216013, China Postdoctoral Science Foundation under Grant No. 2022M710022, and National Science and Technology Major Project (J2019-IV-0002-0069).
A APPENDIX
A.1 DATA STATISTICS
The multi-hop and multi-modal open domain question answering dataset WebQA (Chang et al., 2022) is used in our experiments. The dataset contains images and passages that are crawled from the general Web and Wikipedia. In our experiments, we randomly sample 5,000 queries from the original training set of WebQA as the development set for evaluation. All data statistics are shown in Table 5. To build an open-domain benchmark, we collect 389,750 images and 787,697 texts as the multi-modal retrieval sources. The image collection contains all images collected by the WebQA dataset, while the text collection contains all relevant passages for all 41,732 queries, which are Wikipedia snippets selected by matching noun chunks in the queries (Chang et al., 2022).
A.2 ADDITIONAL EXPERIMENT DETAILS
This subsection describes additional implementation details.
In our experiments, we employ two pretrained vision-language models, VinVL (Zhang et al., 2021) and CLIP (Radford et al., 2021), and the pretrained language model BERT (Devlin et al., 2019) to implement the different retrieval models.
VinVL-DPR. For the VinVL variant models, we first detect the image objects and extract the corresponding region features following VinVL (https://github.com/microsoft/scene_graph_benchmark). We then concatenate image captions and image region features as inputs to feed into the VinVL models and obtain the image representations. We initialize VinVL with the checkpoint trained on the MSCOCO image retrieval task and continue training the model on the WebQA dataset with in-batch negatives. During training, we set the batch size to 32, the learning rate to 2e-5, the gradient accumulation step to 1, and the max training epochs to 30. We truncate the queries, image captions, text documents, and image region features to max lengths of 70, 70, 200, and 50.
CLIP-DPR. For training CLIP-DPR, we start from the ViT-B/32 version of CLIP and continue training CLIP on the WebQA dataset with in-batch negatives. We truncate texts to the max length of 77 and set the gradient accumulation step to 1, the batch size to 64, the learning rate to 5e-6, the max training epochs to 20, and the temperature hyperparameter τ = 0.01. The cosine annealing strategy is used to schedule the learning rate during training.
BERT-DPR. We initialize our retriever with the bert-base-uncased checkpoint, which is provided by Hugging Face Transformers (https://github.com/huggingface/transformers). During training, we set the batch size to 32, the learning rate to 5e-5, the gradient accumulation step to 1, and the max training epochs to 30. We truncate the queries, text documents, and image captions to max lengths of 70, 200, and 70.
NQ-DPR/NQ-ANCE. NQ-DPR and NQ-ANCE start from the NQ-trained DPR model (Karpukhin et al., 2020), which uses a dual encoder architecture to encode queries and documents. All experimental settings are kept the same as BERT-DPR. NQ-ANCE is tuned with hard negatives sampled from the top 100 retrieved candidates of NQ-DPR (Xiong et al., 2021a).
A.3 EXPERIMENTAL DETAILS OF IMAGE VERBALIZATION
The image verbalization models are used to generate potentially matched captions or related questions for an image. Our experiments start from the image caption generation model trained on the MSCOCO image caption task (Zhang et al., 2021) and generate related captions or queries to verbalize images. We can directly generate image-related captions as the image verbalization results using the image caption model provided by VinVL (Zhang et al., 2021). As shown in Eq. 6, we first detect the objects in the images and then feed the predicted classes and region features of the detected objects to the VinVL model. In our experiments, we fix the parameters of the VinVL-based image caption model and generate a caption for each image. During generating image-related queries, as shown in Eq. 7, we concatenate the image-related query, image caption, and image regional features as the input. We continue training the VinVL-based image caption model by randomly masking the tokens in queries and optimizing the vision-language model to fill in the masked positions.
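A minimal sketch of the masked-query training objective just described is given below. The [MASK] token id and the model interface are hypothetical placeholders, since they depend on the VinVL tokenizer and implementation.

```python
import torch
import torch.nn.functional as F

MASK_ID = 103     # hypothetical [MASK] token id; tokenizer-specific
MASK_PROB = 0.15  # mask probability used in our experiments

def mask_query_tokens(query_ids):
    """Randomly mask query tokens; returns corrupted inputs and labels."""
    mask = torch.rand(query_ids.shape) < MASK_PROB
    labels = torch.where(mask, query_ids, torch.full_like(query_ids, -100))
    inputs = torch.where(mask, torch.full_like(query_ids, MASK_ID), query_ids)
    return inputs, labels

# One training step (model interface is an assumption):
#   logits = model(inputs, caption_ids, region_features)  # (B, L, vocab)
#   loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
#                          labels.view(-1), ignore_index=-100)
```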
Different from image caption models, our query generation method tries to align the semantics in image captions and image pixel features instead of mapping the predicted classes and regional image features of detected objects, which can help vision-language models better understand the image semantics (Huang et al., 2021a). During training and inference, we limit generation to 20 tokens and set the beam size to 5. We truncate the queries, image captions, and image region features to max lengths of 40, 30, and 50, respectively. The mask probability is set to 0.15. More experimental details can be found in Zhang et al. (2021).
A.4 IMAGE VERBALIZATION PERFORMANCE WITH DIFFERENT CAPTION LENGTHS
In this subsection, we evaluate the multi-modal retrieval performance of UniVL-DR with different verbalized queries. Specifically, we evaluate the effectiveness of image-verbalized queries on the multi-modal retrieval task. These image-verbalized queries are generated from image documents whose manual captions have different lengths. We group the testing examples into three categories according to the manual caption lengths of the image documents and calculate the average MRR@10 score for each group. The short, medium, and long caption length groups account for 42.33%, 36.84%, and 20.83% of the examples, respectively. As shown in Figure 5, the experimental results show that our query generation method mainly helps to improve retrieval effectiveness on the queries of short and medium caption length, illustrating that these generated queries can provide crucial textual clues for image representations with shorter captions. These expanded text clues help retrieval models better understand image semantics, represent images more effectively via enhanced textual information, and conduct cross-modality matching more easily. Moreover, the queries in the medium caption length group achieve the best performance, because image captions of medium length can cover more of the necessary text clues for generating informative verbalization results.
A.5 CASE STUDIES ON DIFFERENT IMAGE VERBALIZATION METHODS
This experiment shows some image verbalization cases in Table 6. We randomly sample queries that can be answered by image documents and show the manual captions, verbalized captions, and verbalized queries of the image documents. Overall, these cases can be categorized into two groups according to the lengths of the manual captions. The first three cases are longer and more informative in describing the image facts among the mentioned objects, and can be directly used in text-image relevance modeling. On the contrary, the manual captions in the last three cases consist only of the most representative entities that appear in the images, making it difficult to distinguish the related images according to these manual captions alone. UniVL-DR employs two image verbalization methods to enhance the textual semantics of images. Generating image captions is the most intuitive way to paraphrase images with some pre-defined classes of image objects. Nevertheless, these object classes are too general and may be uninformative for matching, because specific entities are critical to retrieving related documents in a question-answering system (Sciavolino et al., 2021). Different from these verbalized captions, the verbalized queries are usually more informative and meaningful, and specify the image objects by copying entity names from the manual captions, such as the names of persons, places, and buildings.
These entities can be directly matched with the given queries, which benefits cross-modality matching and helps to mitigate the modality gap between images and texts.
A.6 MULTI-MODAL RETRIEVAL WITH DIFFERENT IMAGE REPRESENTATION COMBINATION METHODS
In this subsection, we conduct experiments to show the effectiveness of different methods for combining the representations of image captions and image features. As shown in Table 7, we evaluate the effectiveness of different combination methods using CLIP-DPR. We concatenate, take the outer product of, and sum the representations of image captions and image features to construct three models: CLIP-DPR (Concatenation), CLIP-DPR (Outer Product), and CLIP-DPR (Sum). CLIP-DPR (Sum) shows its effectiveness by achieving the best performance among these variants. The sum operation is a commonly used semantic combination method, which is also used in BERT (Devlin et al., 2019) to combine token embeddings and position embeddings. On the contrary, the concatenation operation regards the representations of image captions and image features as subvectors and separates them into subspaces, making it hard to learn the semantics of image documents. The outer product operation produces orthogonal interactions between the representations, which is not a typical combination method.
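The three combination methods compared in Table 7 can be sketched as follows; the function name is ours, and the flattening of the outer product into a vector is an assumption about how such a representation would be scored.

```python
import torch

def combine_image_document(cap_emb, img_emb, method="sum"):
    """Combine caption and image-feature representations (a sketch).

    cap_emb, img_emb: (B, dim) embeddings from the text and image encoders.
    """
    if method == "sum":                 # best-performing variant in Table 7
        return cap_emb + img_emb
    if method == "concatenation":       # doubles the embedding dimension
        return torch.cat([cap_emb, img_emb], dim=-1)
    if method == "outer_product":       # (B, dim * dim) after flattening
        return torch.einsum("bi,bj->bij", cap_emb, img_emb).flatten(1)
    raise ValueError(method)
```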
A.7 ADDITIONAL EVALUATIONS ON MULTI-MODAL RETRIEVAL
In our experiments, we follow the previous widely used retrieval benchmarks MS MARCO (Bajaj et al., 2016) and BEIR (Thakur et al., 2021) and use NDCG@10/20 and MRR@10/20 to show the retrieval effectiveness of different retrieval models. MRR scores and NDCG scores are calculated with the MS MARCO official script (https://github.com/microsoft/MSMARCO-Passage-Ranking/blob/master/ms_marco_eval.py) and TREC's evaluation tool (https://github.com/cvangysel/pytrec_eval). As shown in Table 8, we also conduct evaluations that show the retrieval performance of higher-ranked candidates using MRR@1 and NDCG@5. UniVL-DR again shows strong effectiveness, outperforming BM25 and CLIP-DPR by more than 6%. Notably, UniVL-DR even shows better retrieval effectiveness than the BM25 & CLIP-DPR (Oracle Modality) model, which operates in an idealized setting. This supports our claim that multi-modality modeling can also benefit single/cross-modality tasks.
A.8 EXAMPLES OF HARD NEGATIVES
In this subsection, we randomly sample two queries and show some hard negatives in Figure 6, which are top-ranked documents from our CLIP-DPR model. In the first case, when we ask "Is there greenery at Centennial Olympic Park?", the CLIP-DPR model provides some image and text documents that are regarded as hard negatives for continuing to train dense retrievers. The negative images are about buildings, lawns, and trees, but these objects are not located at Centennial Olympic Park. Evidently, these negative images are on-topic with "greenery" but are not related to the given query. Training dense retrievers with these hard negatives can better teach retrievers to distinguish the subtle differences among these confusing images. For both cases, the hard negatives from different modalities showcase some necessary semantics that the retrieval model needs in order to find relevant information. For example, the text documents in Case 1 can provide background knowledge of the Olympic Games and Centennial Olympic Park; the image documents in Case 2 supply the "doves" semantics from the visual modality. These informative documents from different modalities provide sufficient clues to guide dense retrievers to learn the necessary semantics during contrastive training.
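As a concrete illustration of the modality-balanced strategy, the sketch below samples k hard negatives per modality from a query's top-retrieved candidates (the dictionary format and function name are assumptions; the paper samples k = 1 per modality from the top 100 CLIP-DPR results).

```python
import random

def sample_modality_balanced_negatives(candidates, positive_ids, k=1):
    """Sample k hard negatives of each modality for one query (a sketch).

    candidates: ranked list of dicts {"id": ..., "modality": "text"/"image"},
    e.g. the top-100 documents retrieved by CLIP-DPR.
    positive_ids: set of relevant document ids to exclude.
    """
    pools = {"text": [], "image": []}
    for doc in candidates:
        if doc["id"] not in positive_ids:
            pools[doc["modality"]].append(doc)
    # Drawing the same number of negatives from each modality keeps the
    # contrastive training signal balanced across modalities.
    return (random.sample(pools["text"], min(k, len(pools["text"]))) +
            random.sample(pools["image"], min(k, len(pools["image"]))))
```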
1. What is the focus and contribution of the paper on multi-modal retrieval?
2. What are the strengths and weaknesses of the proposed unified model?
3. Do you have any concerns or suggestions regarding the assumptions made in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any relevant works that the author should consider comparing to enhance the comprehensiveness of the paper?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a unified model for multi-modal retrieval. The proposed method consists of two techniques: a universal embedding optimization strategy for contrastively optimizing the embedding space, and an image verbalization method for bridging the modality gap.
Strengths And Weaknesses
The main strengths of this paper are as follows:
- This paper proposes a unified model for multi-modal retrieval, which demonstrates that universal multi-modal search with a unified model is feasible compared with the divide-and-conquer pipeline, and also benefits cross-modality tasks.
- The proposed method bridges the modality gap by optimizing the vision-language embedding space using hard negatives and aligning the semantics of image captions and figure pixels, achieving state-of-the-art performance on the WebQA dataset.
The main weaknesses of this paper are as follows:
- There is still a gap between the proposed method and the SOTA methods on some evaluation tasks of the WebQA dataset. The authors should provide more corresponding analyses.
- An important assumption in this paper is that the universal embedding optimization strategy enables optimization of the universal embedding space with modality-balanced hard negatives. The authors should provide a mathematical explanation to make this assumption more convincing.
- Relevant works such as [a] and [b] should be compared to make this paper more comprehensive.
[a] Effective Conditioned and Composed Image Retrieval Combining CLIP-Based Features. CVPR 2022.
[b] Visual–Textual Hybrid Sequence Matching for Joint Reasoning. TCVB 2021.
Clarity, Quality, Novelty And Reproducibility
The structure of this paper is complete, but more details of the image verbalization method need to be provided, and the rationality of the universal embedding optimization strategy needs to be more fully verified.
ICLR
Title
L-SR1 Adaptive Regularization by Cubics for Deep Learning
Abstract
Stochastic gradient descent and other first-order variants, such as Adam and AdaGrad, are commonly used in the field of deep learning due to their computational efficiency and low-storage memory requirements. However, these methods do not exploit curvature information. Consequently, iterates can converge to saddle points and poor local minima. To avoid this, directions of negative curvature can be utilized, which requires computing the second-derivative matrix. In Deep Neural Networks (DNNs), the number of variables (n) can be of the order of tens of millions, making the Hessian impractical to store (O(n^2)) and to invert (O(n^3)). Alternatively, quasi-Newton methods compute Hessian approximations that do not have the same computational requirements. Quasi-Newton methods re-use previously computed iterates and gradients to compute a low-rank structured update. The most widely used quasi-Newton update is L-BFGS, which guarantees a positive semi-definite Hessian approximation, making it suitable in a line search setting. However, the loss functions in DNNs are non-convex, so the Hessian is potentially non-positive definite. In this paper, we propose using a Limited-Memory Symmetric Rank-1 quasi-Newton approach, which allows for indefinite Hessian approximations, enabling directions of negative curvature to be exploited. Furthermore, we use a modified Adaptive Regularization using Cubics approach, which generates a sequence of cubic subproblems that have closed-form solutions. We investigate the performance of our proposed method on autoencoders and feed-forward neural network models and compare our approach to state-of-the-art first-order adaptive stochastic methods as well as L-BFGS.
1 INTRODUCTION
Most deep learning problems involve minimization of the empirical risk of estimation
$$\min_{\Theta} f(x; \Theta), \qquad (1)$$
where Θ ∈ R^n is the set of weights and f is some scalar-valued loss function. To solve (1), various optimization approaches have been implemented, which we describe below. Throughout this paper, we write f(Θ) and f(x; Θ) interchangeably.
Gradient and adaptive gradient methods are widely used for training deep neural networks (DNNs) for their computational efficiency. The most common approach is Stochastic Gradient Descent (SGD), which, despite its simplicity, performs well over a wide range of applications. However, in a sparse training data setting, SGD performs poorly due to limited training speed (Luo et al. (2019)). To address this problem, adaptive methods such as AdaGrad (Duchi et al. (2011)), AdaDelta (Zeiler (2012)), RMSProp (Hinton et al. (2012)) and Adam (Kingma & Ba (2014)) have been proposed. These methods take the root mean square of the past gradients to influence the current step. Among all of these adaptive methods, Adam is arguably the most widely used in a deep learning setting due to its rapid training speed.
Newton's method has the potential to exploit curvature information from the second-order derivative (Hessian) matrix (see e.g., Gould et al. (2000)). Generally, the iterates are defined by
$$\Theta_{k+1} = \Theta_k - \alpha_k \nabla^2 f(\Theta_k)^{-1} \nabla f(\Theta_k),$$
where α_k > 0 is a step length defined by a line-search criterion (Nocedal & Wright (2006)). In a DNN setting, the number of parameters (n) of the network can be of the order of millions. Thus storing the Hessian, which takes O(n^2) memory, becomes impractical. In addition, the inversion of the Hessian matrix, which takes O(n^3) operations, is also impractical.
Even though Newton's method achieves convergence in fewer steps, the method becomes computationally intractable on large-scale DNNs. Quasi-Newton methods are alternatives to Newton's method. They compute Hessian approximations, B_{k+1}, that satisfy the secant condition y_k = B_{k+1} s_k, where s_k = Θ_{k+1} − Θ_k and y_k = ∇f(Θ_{k+1}) − ∇f(Θ_k). The most commonly used quasi-Newton method, including in the realm of deep learning, is the limited-memory BFGS update, or L-BFGS (see e.g., Liu & Nocedal (1989)), where the Hessian approximation is given by
$$B_{k+1} = B_k + \frac{y_k y_k^\top}{y_k^\top s_k} - \frac{B_k s_k s_k^\top B_k}{s_k^\top B_k s_k}. \qquad (2)$$
The generic L-BFGS quasi-Newton update scheme is described in Algorithm 1, and numerous variants of L-BFGS exist (see Goldfarb et al. (2020); Moritz et al. (2016); Gower et al. (2016)). One advantage of using an L-BFGS update is that the Hessian approximation can be guaranteed to be positive definite, which is highly suitable in line-search settings because the update s_k is guaranteed to be a descent direction, meaning there is some step length along this direction that results in a decrease in the objective function (see Nocedal & Wright (2006), Algorithm 6.1). However, because the L-BFGS update is positive definite, it does not readily detect directions of negative curvature for avoiding saddle points. In contrast, the Symmetric Rank-1 (SR1) quasi-Newton update is not guaranteed to be positive definite and can result in ascent directions for line-search methods. However, in trust-region settings, where indefinite Hessian approximations are an advantage because they capture directions of negative curvature, the limited-memory SR1 (L-SR1) has been shown to outperform L-BFGS in DNNs for classification (see Erway et al. (2020)). We discuss this in more detail in Section 2, but in the context of Adaptive Regularization using Cubics.
Algorithm 1 L-BFGS Quasi-Newton Method with Line Search
Require: Initial weights Θ_0, batch size d, learning rate α, dataset D, loss function f(Θ).
for k = 0, 1, 2, . . . do
    Sample mini-batch of size d: D_k ⊆ D
    Perform the forward-backward pass over the current mini-batch
    Compute the limited-memory approximation B_k using (2)
    Compute step s_k = αB_k^{-1}∇_Θ f(Θ_k), where α is the line-search step length
end for
2 L-SR1 ADAPTIVE REGULARIZATION USING CUBICS METHOD
We begin by discussing the L-SR1 update and the adaptive regularization using cubics method for large-scale optimization. Unlike the BFGS update (2), which is a rank-two update, the SR1 update is a rank-one update, given by
$$B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^\top}{s_k^\top (y_k - B_k s_k)} \qquad (3)$$
(see Khalfan et al. (1993)). As previously mentioned, B_{k+1} in (3) is not guaranteed to be definite. However, it can be shown that the SR1 matrices can converge to the true Hessian (see Conn et al. (1991) for details). We note that the pair (s_k, y_k) is accepted only when |s_k^⊤(y_k − B_k s_k)| > ε‖y_k − B_k s_k‖_2^2 for some constant ε > 0 (see Nocedal & Wright (2006), Sec. 6.2, for details). The SR1 update can be defined recursively as
$$B_{k+1} = B_0 + \sum_{j=0}^{k} \frac{(y_j - B_j s_j)(y_j - B_j s_j)^\top}{s_j^\top (y_j - B_j s_j)}. \qquad (4)$$
In limited-memory SR1 (L-SR1) settings, only the last m ≪ n pairs of (s_j, y_j) are stored and used.
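Before turning to the compact representation below, a small dense-matrix sketch of the SR1 update (3) with the acceptance safeguard may be helpful; it is written for illustration with an explicit n × n matrix, which a limited-memory implementation would of course avoid.

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """One SR1 update of equation (3) with the acceptance test (a sketch).

    B: current (n, n) Hessian approximation; s: parameter step;
    y: gradient difference. The pair (s, y) is skipped when the
    denominator is too small, which keeps the update bounded (cf. Lemma 1).
    """
    r = y - B @ s                 # residual y_k - B_k s_k
    denom = s @ r                 # s_k^T (y_k - B_k s_k)
    if abs(denom) <= eps * np.linalg.norm(r) ** 2:
        return B                  # reject the pair; keep B unchanged
    return B + np.outer(r, r) / denom
```

In the limited-memory variant, one instead stores only the last m pairs (s_j, y_j) and works with the compact representation derived next.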
If S_{k+1} = [s_0 s_1 · · · s_k] and Y_{k+1} = [y_0 y_1 · · · y_k], then B_{k+1} admits a compact representation of the form
$$B_{k+1} = B_0 + \Psi_{k+1} M_{k+1} \Psi_{k+1}^\top, \qquad (5)$$
where
$$\Psi_{k+1} = Y_{k+1} - B_0 S_{k+1} \quad \text{and} \quad M_{k+1} = \left(D_{k+1} + L_{k+1} + L_{k+1}^\top - S_{k+1}^\top B_0 S_{k+1}\right)^{-1}, \qquad (6)$$
where L_{k+1} is the strictly lower triangular part, U_{k+1} is the strictly upper triangular part, and D_{k+1} is the diagonal part of S_{k+1}^⊤ Y_{k+1} = L_{k+1} + D_{k+1} + U_{k+1} (see Byrd et al. (1994) for further details). Because of the compact representation of B_{k+1}, its partial eigendecomposition can be computed (see Burdakov et al. (2017)). In particular, if we compute the QR decomposition of Ψ_{k+1}, then we can write B_{k+1} = B_0 + U_∥ Λ̂_{k+1} U_∥^⊤, where U_∥ ∈ R^{n×(k+1)} has orthonormal columns and Λ̂_{k+1} ∈ R^{(k+1)×(k+1)} is a diagonal matrix. If B_0 = δ_k I (see e.g., Lemma 2.4 in Erway et al. (2020)), where 0 < δ_k < δ_max is some scalar and I is the identity matrix, then we obtain the eigendecomposition B_{k+1} = U_{k+1} Λ_{k+1} U_{k+1}^⊤, where U_{k+1} = [U_∥ U_⊥], with U_⊥ ∈ R^{n×(n−(k+1))} and U_{k+1}^⊤ U_{k+1} = I. Here, (Λ_{k+1})_i = δ_k + λ̂_i for i ≤ k+1, where λ̂_i is the i-th diagonal entry of Λ̂_{k+1}, and (Λ_{k+1})_i = δ_k for i > k+1.
Since the SR1 Hessian approximation can be indefinite, some safeguard must be implemented to ensure that the resulting search direction s_k is a descent direction. One such safeguard is to use a "regularization" term. The Adaptive Regularization using Cubics (ARCs) method (Griewank (1981); Cartis et al. (2011)) can be viewed as an alternative to line-search and trust-region methods. At each iteration, an approximate global minimizer of a local (cubic) model,
$$\min_{s \in \mathbb{R}^n} m_k(s) \equiv g_k^\top s + \frac{1}{2} s^\top B_k s + \frac{\mu_k}{3} (\Phi_k(s))^3, \qquad (7)$$
is determined, where g_k = ∇f(Θ_k), µ_k > 0 is a regularization parameter, and Φ_k is a function (norm) that regularizes s. Typically, the Euclidean norm is used. In this work, we propose an alternative "shape-changing" norm that allows us to solve each subproblem (7) exactly. This shape-changing norm was proposed in Burdakov et al. (2017), and it is based on the partial eigendecomposition of B_k. Specifically, if B_k = U_k Λ_k U_k^⊤ is the eigendecomposition of B_k, then we can define the norm
$$\|s\|_{U_k} \overset{\text{def}}{=} \|U_k^\top s\|_3.$$
Applying a change of basis with s̄ = U_k^⊤ s and ḡ_k = U_k^⊤ g_k, we can redefine the cubic subproblem as
$$\min_{\bar{s} \in \mathbb{R}^n} \bar{m}_k(\bar{s}) = \bar{g}_k^\top \bar{s} + \frac{1}{2} \bar{s}^\top \Lambda_k \bar{s} + \frac{\mu_k}{3} \|\bar{s}\|_3^3. \qquad (8)$$
With this change of basis, we can find a closed-form solution of (8) easily. The proposed Adaptive Regularization using Cubics with L-SR1 (ARCs-LSR1) algorithm is given in Algorithm 2.
2.1 CONTRIBUTIONS
The main contributions of this paper are as follows:
1. L-SR1 quasi-Newton methods: The most commonly used quasi-Newton approach is the L-BFGS method. In this work, we use the L-SR1 update to better model potentially indefinite Hessians of the non-convex loss function.
2. Adaptive Regularization using Cubics (ARCs): Given that the quasi-Newton approximation is allowed to be indefinite, we use an Adaptive Regularization using Cubics approach to safeguard each search direction.
3. Shape-changing regularizer: We use a shape-changing norm to define the cubic regularization term, which allows us to compute the closed-form solution of the cubic subproblem (7).
4. Computational complexity: Let m be the number of previous iterates and gradients stored in memory. The proposed L-SR1 ARC approach is comparable to L-BFGS in terms of storage and compute complexity (see Table 1).
Algorithm 2 Limited-Memory Symmetric Rank-1 Adaptive Regularization using Cubics
1: Given: Θ_0, γ_2 ≥ γ_1, 1 > η_2 ≥ η_1 > 0, and σ_0 > 0
2: for k = 0, 1, 2, . . . do
3:   Obtain S_k = [s_0 · · · s_k], Y_k = [y_0 · · · y_k]
4:   Solve the generalized eigenproblem S_k^⊤ Y_k u = λ̂ S_k^⊤ S_k u and let δ_k = min{λ̂_i}
5:   Compute Ψ_k = Y_k − δ_k S_k
6:   Perform the QR decomposition Ψ_k = QR
7:   Compute the eigendecomposition RMR^⊤ = PΛP^⊤
8:   Assign U_∥ = QP and U_∥^⊤ = P^⊤Q^⊤
9:   Define C_∥ = diag(c_1, . . . , c_m), where $c_i = \frac{2}{\lambda_i + \sqrt{\lambda_i^2 + 4\mu |\bar{g}_i|}}$ and ḡ_∥ = U_∥^⊤ g
10:  Compute $\alpha^* = \frac{2}{\delta_k + \sqrt{\delta_k^2 + 4\mu \|g_\perp\|}}$, where g_⊥ = g − U_∥ ḡ_∥
11:  Compute the step s^* = −α^* g + U_∥ (α^* I_m − C_∥) U_∥^⊤ g
12:  Compute m(s^*) and ρ_k = (f(Θ_k) − f(Θ_k + s^*)) / m(s^*)
13:  Set
$$\Theta_{k+1} = \begin{cases} \Theta_k + s^*, & \text{if } \rho_k \ge \eta_1, \\ \Theta_k, & \text{otherwise}, \end{cases} \qquad \mu_{k+1} = \begin{cases} 0.5\mu_k, & \text{if } \rho_k > \eta_2, \\ 0.5\mu_k(1 + \gamma_1), & \text{if } \eta_1 \le \rho_k \le \eta_2, \\ 0.5\mu_k(\gamma_1 + \gamma_2), & \text{otherwise} \end{cases}$$
14: end for
2.2 IMPLEMENTATION
Because full gradient computation is very expensive, we implement a stochastic version of the proposed ARCs-LSR1 method. In particular, we use the batch gradient approximation
$$\tilde{g}_k \equiv \frac{1}{|\mathcal{B}_k|} \sum_{i \in \mathcal{B}_k} \nabla f_i(\Theta_k).$$
In defining the SR1 matrix, we use the quasi-Newton pairs (s_k, ỹ_k), where ỹ_k = g̃_{k+1} − g̃_k (see e.g., Erway et al. (2020)).
3 CONVERGENCE ANALYSIS
In this section, we prove convergence properties of the proposed method (ARCs-LSR1 in Algorithm 2). The following theoretical guarantees follow the ideas from Cartis et al. (2011) and Benson & Shanno (2018). First, we make the following mild assumptions:
A1. The loss function f(Θ) is continuously differentiable, i.e., f ∈ C^1(R^n).
A2. The loss function f(Θ) is bounded below.
Next, we prove that the matrix B_k in (4) is bounded.
Lemma 1 The SR1 matrix B_{k+1} in (4) satisfies ‖B_{k+1}‖_F ≤ κ_B for all k ≥ 1 for some κ_B > 0.
Proof: Using the limited-memory SR1 update with memory parameter m in (4), we have
$$\|B_{k+1}\|_F \le \|B_0\|_F + \sum_{j=k-m+1}^{k} \frac{\|(y_j - B_j s_j)(y_j - B_j s_j)^\top\|_F}{|s_j^\top (y_j - B_j s_j)|}.$$
Using a property of the Frobenius norm, namely, for real matrices A, ‖A‖_F^2 = trace(AA^⊤), we have that ‖(y_j − B_j s_j)(y_j − B_j s_j)^⊤‖_F = ‖y_j − B_j s_j‖_2^2. Since the pair (s_j, y_j) is accepted only when |s_j^⊤(y_j − B_j s_j)| > ε‖y_j − B_j s_j‖_2^2 for some constant ε > 0, and B_0 = δ_k I for some 0 < δ_k < δ_max, we have
$$\|B_{k+1}\|_F \le \delta_{\max} + \frac{m}{\varepsilon} \equiv \kappa_B.$$
Given the bound on ‖B_{k+1}‖_F, we obtain the following result, which is similar to Theorem 2.5 in Cartis et al. (2011).
Theorem 1 Under Assumptions A1 and A2, if Lemma 1 holds, then $\liminf_{k \to \infty} \|g_k\| = 0$.
Finally, we consider the following assumption, which can be satisfied when the gradient g(Θ) is Lipschitz continuous in Θ.
A3. If {Θ_{t_i}} and {Θ_{l_i}} are subsequences of {Θ_k}, then ‖g_{t_i} − g_{l_i}‖ → 0 whenever ‖Θ_{t_i} − Θ_{l_i}‖ → 0 as i → ∞.
If we further make Assumption A3, we have the following stronger result (based on Corollary 2.6 in Cartis et al. (2011)):
Corollary 1 Under Assumptions A1, A2, and A3, if Lemma 1 holds, then $\lim_{k \to \infty} \|g_k\| = 0$.
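To make steps 9–11 of Algorithm 2 concrete, the following sketch computes the closed-form minimizer of the transformed subproblem (8), assuming U_∥, the eigenvalues λ_i = δ_k + λ̂_i, and δ_k have already been obtained from the compact representation.

```python
import numpy as np

def arc_lsr1_step(g, U_par, lam, delta, mu):
    """Closed-form step for the shape-changing cubic subproblem (a sketch).

    g: gradient (n,); U_par: (n, m) orthonormal basis of the low-rank part;
    lam: (m,) eigenvalues delta + lambda_hat_i; delta: eigenvalue of B on
    the orthogonal complement; mu: cubic regularization parameter.
    """
    g_par = U_par.T @ g                        # transformed gradient
    g_perp = g - U_par @ g_par                 # component orthogonal to U_par
    c = 2.0 / (lam + np.sqrt(lam**2 + 4.0 * mu * np.abs(g_par)))
    alpha = 2.0 / (delta + np.sqrt(delta**2
                                   + 4.0 * mu * np.linalg.norm(g_perp)))
    # s* = -alpha * g + U (alpha I - C) U^T g, as in step 11 of Algorithm 2
    return -alpha * g + U_par @ ((alpha - c) * g_par)
```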
4 EXPERIMENTS
To empirically compare the efficiency of the method against popular optimization methods like SGD, AdaGrad, Adam, RMSProp and L-BFGS, we focus on two broad deep learning problems: image classification and image reconstruction. We choose these tasks due to their broad importance and the availability of reproducible model architectures. We run each experiment 5 times with a random initialization in each run. The number of parameters, convolutional layers and fully connected layers are listed in Table 3.
Dataset: We measure the classification performance of each optimization method on 4 datasets: MNIST (LeCun et al. (2010)), FashionMNIST (Xiao et al. (2017)), IRIS (Dua & Graff (2017)) and CIFAR10 (Krizhevsky et al.). We provide a comprehensive view of the experiments in Table 2.
Hyperparameter tuning: We empirically fine-tune the hyperparameters and select the best for each update scheme. A comprehensive list of the learning rates for the gradient and adaptive gradient based algorithms is given in Table 4 in the Appendix. The additional parameters are defined as follows:
• Adam: We apply a perturbation of 1.0 × 10^-6. β_1 and β_2 are chosen to be 0.9 and 0.999, respectively.
• AdaGrad: The initial accumulator value is set to 0. The perturbation is set to 1.0 × 10^-10.
• SGD: We use a momentum of 0.9.
• RMSProp: We set α = 0.99. The perturbation is set to 1.0 × 10^-8.
• L-BFGS: Table 4 in Appendix A presents the initial learning rate for the stochastic step in L-BFGS. We set the default learning rate to 1.0. We choose a history size m of 10 and a maximum of 10 iterations. The tolerance on function value/parameter change is set to 1.0 × 10^-9, and the first-order optimality tolerance for termination is set to 1.0 × 10^-9.
• ARCs-LSR1: We choose the same parameters as L-BFGS.
Network architecture: For each problem, we define the model architecture in Table 3 in the appendix. We describe the forward and backward pass of a DNN in Algorithm 3 in the appendix.
Testbed and software: All experiments were conducted using the open-source software PyTorch (Paszke et al. (2019)), SciPy (Virtanen et al. (2020)) and NumPy (Harris et al. (2020)). We use an Intel Core i7-8700 CPU with a clock rate of 3.20 GHz and an NVIDIA RTX 2080 Ti graphics card.
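The hyperparameter settings listed above map directly onto the PyTorch optimizer constructors; the helper below is a sketch of that mapping, with the learning rate left as a placeholder for the tuned values in Table 4.

```python
import torch

def make_optimizer(name, params, lr=1e-3):
    """Build the baseline optimizers with the settings from Section 4.

    lr is a placeholder; the tuned per-task values are given in Table 4.
    """
    if name == "adam":
        return torch.optim.Adam(params, lr=lr, betas=(0.9, 0.999), eps=1e-6)
    if name == "adagrad":
        return torch.optim.Adagrad(params, lr=lr,
                                   initial_accumulator_value=0.0, eps=1e-10)
    if name == "sgd":
        return torch.optim.SGD(params, lr=lr, momentum=0.9)
    if name == "rmsprop":
        return torch.optim.RMSprop(params, lr=lr, alpha=0.99, eps=1e-8)
    if name == "lbfgs":
        return torch.optim.LBFGS(params, lr=1.0, history_size=10,
                                 max_iter=10, tolerance_grad=1e-9,
                                 tolerance_change=1e-9)
    raise ValueError(f"unknown optimizer: {name}")
```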
5 RESULTS
We divide this section into two categories: classification and image reconstruction. We present both the training and the testing results for all methods.
5.1 CLASSIFICATION RESULTS
For each classification problem, we define the network architecture and the corresponding hyperparameters (other than the learning rate) for each optimization scheme.
IRIS: Since this dataset is relatively small, we use a small network for our deep-learning model. The model is described in Table 3. We set the history size for the proposed approach and L-BFGS to 10 and the number of iterations to 10. Figure 1 shows the comparative performance of all the methods. Note that our proposed method (ARCs-LSR1) achieves the highest classification accuracy in the fewest number of epochs.
MNIST: We trained the network for 20 epochs with a batch size of 256 images. We keep the same history size and number of iterations as for the IRIS dataset for L-BFGS and the proposed ARCs-LSR1 approach. For training, it can be seen in Figure 2 that nearly all methods achieve optimal training accuracy. However, closely inspecting the testing curve, we notice that the proposed approach achieves higher accuracy than all the existing methods.
FMNIST: We train the network for 20 epochs with a batch size of 256 images. We keep the history size the same as in the IRIS and MNIST experiments for the proposed approach and L-BFGS. On this task, the proposed ARCs-LSR1 approach is comparable to L-BFGS but outperforms the adaptive methods (see Figure 3).
CIFAR10: We use the same parameters presented in Table 4 for the adaptive methods. For ARCs-LSR1 and L-BFGS, we use a history size of 100 with a maximum of 100 iterations and a batch size of 1024. Figure 4(a) shows the training loss (cross-entropy loss). Figure 4(b) shows the testing accuracy, i.e., the number of samples correctly predicted in the testing set. To demonstrate the efficacy of the proposed method on larger networks, additional experiments on the ResNet50 architecture can be found in the appendix (Figure 8).
5.2 IMAGE RECONSTRUCTION RESULTS
The image reconstruction problem involves feeding a feedforward convolutional autoencoder model (with randomly initialized weights) a batch of the dataset. It follows the same deep learning convention as described in Algorithm 3 in Appendix A. The loss function is defined between the reconstructed image and the original image.
MNIST: An image x ∈ R^n is fed to the network, compressed into a latent space z ∈ R^l, where l ≪ n, and reconstructed back to its original image size x̄ ∈ R^n. We compute the mean-squared error loss between the reconstruction and the true image. The weights are initialized randomly. Each experiment was conducted 5 times, and we use a batch size of 256 images with 50 epochs. The results for image reconstruction can be seen in Figure 5. One can notice that the initial descent provided by the proposed approach yields a significant decrease in the objective function. For a closer look, we provide the details of the results during the initial epoch (Figure 9(a)) and the final epoch (Figure 9(b)). We notice that the ARCs-LSR1 method has already minimized the objective efficiently in the first half of the first epoch. This is empirical evidence that the method converges to the minimizer in fewer steps than the adaptive methods. In Figure 5(b), we notice that all the adaptive methods eventually converge to the same point. For training results on the F-MNIST dataset, see Section B in the appendix.
5.3 TIME COMPLEXITY ANALYSIS
Having seen that the proposed approach performs competitively against all existing methods, we now analyze the time requirements of each method. We time the most computationally demanding task here: CIFAR10 classification. We chose a maximum of 100 iterations with a history size of 100 for L-BFGS and ARCs-LSR1, with a batch size of 1024 images. Figure 7 plots the time required by each of the methods to reach non-overtrained minima with a batch size of 1024 images. As can be seen, the proposed approach is able to reach the desired minima in much less time than the rest of the algorithms. L-BFGS finds it hard to converge due to a very noisy loss function and a small batch size, causing the algorithm to break down. Ozyildirim & Kiran (2020) argue that a large batch size is required for quasi-Newton methods to perform well. However, the ARCs-LSR1 method performs well with a small batch size as well.
6 CONCLUSION
In this paper, we proposed a novel quasi-Newton approach in a modified adaptive regularized cubics setting. We showed, empirically and theoretically, that an L-SR1 quasi-Newton approximation in an ARCs setting performs better than or comparably to most state-of-the-art optimization schemes. Even though the approach has yielded strong results, we still need to test the method's efficacy when the network size and dataset size are large and when the availability of data is sparse.
1. What are the strengths and weaknesses of the proposed quasi-Newton method for neural network optimization?
2. What are the errors or missing parts in the paper that need to be corrected?
3. How does the proposed algorithm compare to other adaptive optimization algorithms, such as Adam, AMSGrad, AdaBound, and NosAdam?
4. What is the step size in the definition of Newton and quasi-Newton methods?
5. Are there any theoretical analyses of the convergence behavior of the proposed algorithm, especially in the online convex setting?
6. What are some limitations of the experimental results presented in the paper?
7. Why were the experiments on CIFAR10 terminated at the 15th epoch, while other algorithms ran for up to 4k epochs?
8. Can the authors provide a clearer explanation of why they chose to use the framework by Reddi et al. (2018) and how it relates to their proposed algorithm?
Summary Of The Paper
Review
Summary Of The Paper
This paper proposes a quasi-Newton method for neural network optimization. The authors have conducted some experiments to verify the effectiveness of their algorithm.
Review
Although I agree that introducing quasi-Newton methods is an interesting direction, this paper seems to be underdeveloped and needs quite a lot of work to be ready for publishing. First of all, there are some errors/missing parts in the paper that seriously need to be corrected.
In Algorithm 1, line 4, what is \mathcal{G}? Is it an operator? I believe the authors are trying to use the framework by Reddi et al., 2018, but the notation is not clear enough.
The authors claim that the counterexample by Reddi et al., 2018 can be extended to all adaptive optimization algorithms, but that is not true. The counterexample is designed particularly for Adam because its second moment (the denominator) can decrease as optimization progresses, and the telescoping summation in the proof of Adam is thus wrong. I encourage the authors to read the paper of Reddi et al., 2018 more carefully to get a better idea of why Adam is poorly designed. Other proposed adaptive algorithms, such as AMSGrad, AdaBound (Luo et al., 2019), and NosAdam (Huang et al., 2019), are well designed and would not have similar issues.
Where is the step size in the definition of the Newton and quasi-Newton methods?
I am expecting a theoretical analysis of the convergence behavior of the proposed algorithm, perhaps even in the online convex setting, to show the advantage of the proposed algorithm. However, I do not see such theorems or propositions.
Some of the experimental results are not convincing enough. First, I would encourage the authors to run experiments on some larger datasets, such as ImageNet, or on some different tasks apart from image tasks, such as language modeling on PennTreeBank. The current tasks are only image classification/generation, and only on small datasets such as MNIST and CIFAR10, which seems very limited. Moreover, I don't understand why the experiments on CIFAR10 are terminated at the 15th epoch, while the other algorithms run for as many as 4k epochs. From my experience, 100-150 epochs are pretty sufficient for any algorithm and any NN model to train on CIFAR10. I would like the authors to explain why.
Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond, ICLR 2018.
Liangchen Luo, Yuanhao Xiong, Yan Liu, and Xu Sun. Adaptive gradient methods with dynamic bound of learning rate, ICLR 2019.
Haiwen Huang, Chang Wang, Bin Dong. Nostalgic Adam: Weighting more of the past gradients when designing the adaptive learning rate, IJCAI 2019.
ICLR
Title L-SR1 Adaptive Regularization by Cubics for Deep Learning Abstract Stochastic gradient descent and other first-order variants, such as Adam and AdaGrad, are commonly used in the field of deep learning due to their computational efficiency and low-storage memory requirements. However, these methods do not exploit curvature information. Consequently, iterates can converge to saddle points and poor local minima. To avoid this, directions of negative curvature can be utilized, which requires computing the second-derivative matrix. In Deep Neural Networks (DNNs), the number of variables (n) can be of the order of tens of millions, making the Hessian impractical to store (O(n)) and to invert (O(n)). Alternatively, quasi-Newton methods compute Hessian approximations that do not have the same computational requirements. Quasi-Newton methods re-use previously computed iterates and gradients to compute a low-rank structured update. The most widely used quasi-Newton update is the L-BFGS, which guarantees a positive semi-definite Hessian approximation, making it suitable in a line search setting. However, the loss function in DNNs are non-convex, where the Hessian is potentially non-positive definite. In this paper, we propose using a Limited-Memory Symmetric Rank-1 quasi-Newton approach which allows for indefinite Hessian approximations, enabling directions of negative curvature to be exploited. Furthermore, we use a modified Adaptive Regularized Cubics approach, which generates a sequence of cubic subproblems that have closed-form solutions. We investigate the performance of our proposed method on autoencoders and feed-forward neural network models and compare our approach to state-of-the-art first-order adaptive stochastic methods as well as L-BFGS. 1 INTRODUCTION Most deep learning problems involve minimization of the empirical risk of estimation min Θ f(x; Θ), (1) where Θ ∈ Rn is the set of weights and f is some scalar-valued loss function. To solve (1), various optimization approaches have been implemented, which we describe below. Throughout this paper, we write f(Θ) and f(x; Θ) interchangeably. Gradient and adaptive gradient methods are widely used for training deep neural networks (DNN) for their computational efficiency. The most common approach is Stochastic Gradient Descent (SGD) which, despite its simplicity, performs well over a wide range of applications. However, in a sparse training data setting, SGD performs poorly due to limited training speed (Luo et al. (2019)). To address this problem, adaptive methods such as AdaGrad (Duchi et al. (2011)), AdaDelta (Zeiler (2012)), RMSProp (Hinton et al. (2012)) and Adam (Kingma & Ba (2014)) have been proposed. These methods take the root mean square of the past gradients to influence the current step. Amongst all of these adaptive methods, Adam is arguably the most widely used in a deep learning setting due to it rapid training speed. Newton’s method has the potential to exploit curvature information from the second-order derivative (Hessian) matrix (see e.g., Gould et al. (2000)). Generally, the iterates are defined by Θk+1 = Θk − αk∇2f(Θk)−1∇f(Θk), where αk > 0 is a steplength defined by a linesearch criterion (Nocedal & Wright (2006)). In a DNN setting, we know that the number of parameters (n) of the network can be of the order of millions. Thus storing the Hessian which takes O(n2) memory, becomes impractical. In addition, the inversion of the Hessian matrix, which takes O(n3) operations, is also impractical. 
Even though Newton’s method achieves convergence in fewer steps, the method becomes computationally intractable to use on large-scale DNNs. Quasi-Newton methods are alternatives to Newton methods. They compute Hessian approximations, Bk+1, that satisfy the secant condition given by yk = Bk+1sk, where sk = Θk+1 −Θk and yk = ∇f(Θk+1) − ∇f(Θk). The most commonly used quasi-Newton method, including in the realm of deep learning, is the limited-memory BFGS update, or L-BFGS (see e.g., Liu & Nocedal (1989)), where the Hessian approximation is given by Bk+1 = Bk + yky > k y>k sk − BkskskB > k s>k Bksk . (2) The generic L-BFGS quasi-Newton update scheme is described in Algorithm 1, and numerous variants of L-BFGS exist (see Goldfarb et al. (2020); Moritz et al. (2016); Gower et al. (2016)). One advantage of using an L-BFGS update is that the Hessian approximation can be guaranteed to be definite, which is highly suitable in line-search settings because the update sk is guaranteed to be a descent direction, meaning there is some step length along this direction that results in a decrease in the objective function (see Nocedal & Wright (2006), Algorithm 6.1). However, because the L-BFGS update is positive definite, it does not readily detect directions of negative curvature for avoiding saddle points. In contrast, the Symmetric-Rank One (SR1) quasi-Newton update is not guarateed to be positive definite and can result in ascent directions for line-search methods. However, in trust-region settings where indefinite Hessian approximations are an advantage because they capture directions of negative curvature, the limited-memory SR1 (L-SR1) has been shown to outperform L-BFGS in DNNs for classification (see Erway et al. (2020)). We discuss this in more detail in Section 2 but in the context of Adaptive Regularization using Cubics. Algorithm 1 L-BFGS Quasi-Newton Method with Line Search Require: Initial weights Θ0, batch size d, learning rate α, dataset D, loss function f(Θ). for k = 0, 1, 2, . . . do Sample mini-batch of size d : Dk ⊆ D Perform the forward backward pass over the current mini-batch Compute the limited memory approximation Bk using (2) Compute step sk = αB−1k ∇Θf(Θk), where α is the line-search step length end for 2 L-SR1 ADAPTIVE REGULARIZATION USING CUBICS METHOD We begin by discussing the L-SR1 update and the adaptive regularizion using cubics methods for large-scale optimization. Unlike the BFGS update (2), which is a rank-two update, the SR1 update is a rank-one update, which is given by Bk+1 = Bk + 1 s>k (yk −Bksk) (yk −Bksk)(yk −Bksk)> (3) (see Khalfan et al. (1993)). As previously mentioned, Bk+1 in (3) is not guaranteed to be definite. However, it can be shown that the SR1 matrices can converge to the true Hessian (see Conn et al. (1991) for details). We note that the pair (sk,yk) is accepted only when |s>k (yk − Bksk)| > ε‖yk −Bksk‖22, for some constant ε > 0 (see Nocedal & Wright (2006), Sec. 6.2, for details). The SR1 update can be defined recursively as Bk+1 = B0 + k∑ j=0 1 s>j (yj −Bjsj) (yj −Bjsj)(yj −Bjsj)>. (4) In limited-memory SR1 (L-SR1) settings, only the last m n pairs of (sj ,yj) are stored and used. 
If Sk+1 = [ s0 s1 · · · sk ] and Yk+1 = [ y0 y1 · · · yk ], then Bk+1 admits a compact representation of the form Bk+1 = B0 + [ Ψk+1 ][ Mk+1 ] [ Ψ>k+1 ] , (5) where Ψk+1 = Yk+1−B0Sk+1 and Mk+1 = (Dk+1+Lk+1+L>k+1−S>k+1B0Sk+1)−1, (6) where Lk+1 is the strictly lower triangular part, Uk+1 is the strictly upper triangular part, and Dk+1 is the diagonal part of S>k+1Yk+1 = Lk+1 + Dk+1 + Uk+1 (see Byrd et al. (1994) for further details). Because of the compact representation of Bk+1, its partial eigendecomposition can be computed (see Burdakov et al. (2017)). In particular, if we compute the QR decomposition of Ψk+1, then we can write Bk+1 = B0 = U‖Λ̂k+1U>‖ , where U‖ ∈ R n×(k+1) has orthonormal columns and Λ̂ ∈ R(k+1)×(k+1) is a diagonal matrix. If B0 = δkI (see e.g., Lemma 2.4 in Erway et al. (2020)), where 0 < δk < δmax is some scalar and I is the identity matrix, then we obtain the eigendecomposition Bk+1 = Uk+1Λk+1U>k+1, where Uk+1 = [U‖ U⊥], with U⊥ ∈ Rn×(n−(k+1)) and U>k+1Uk+1 = I. Here, (Λk+1)i = δk + λ̂i for i ≤ k+ 1, where λ̂i is the ith diagonal in Λ̂k+1, and (Λ)i = δk for i > k + 1. Since the SR1 Hessian approximation can be indefinite, some safeguard must be implemented to ensure that the resulting search direction sk is a descent direction. One such safeguard is to use a “regularization” term. The Adaptive Regularization using Cubics (ARCs) method (Griewank (1981); Cartis et al. (2011)) can be viewed as an alternative to line-search and trust-region methods. At each iteration, an approximate global minimizer of a local (cubic) model, min s∈Rn mk(s) ≡ g>k s + 1 2 s>Bks + µk 3 (Φk(s)) 3, (7) is determined, where gk = ∇f(Θk), µk > 0 is a regularization parameter, and Φk is a function (norm) that regularizes s. Typically, the Euclidean norm is used. In this work, we propose an alternative “shape-changing” norm that allows us to solve each subproblem (7) exactly. This shape-changing norm was proposed in Burdakov et al. (2017), and it is based on the partial eigendecomposition of Bk. Specifically, if Bk = UkΛkU>k is the eigendecomposition of Bk, then we can define the norm ‖s‖Uk def = ‖U>k s‖3. Applying a change of basis with s̄ = U>k s and ḡk = U>k gk, we can redefine the cubic subproblem as min s̄∈Rn m̄k(s) = ḡ > k s̄ + 1 2 s̄>Λks̄ + µk 3 ‖s̄‖33 . (8) With this change of basis, we can find a closed-form solution of (8) easily. The proposed Adaptive Regularization using Cubics with L-SR1 (ARCSLSR1) algorithm is given in Algorithm 2. 2.1 CONTRIBUTIONS The main contributions of this paper are as follows: 1. L-SR1 quasi-Newton methods: The most commonly used quasi-Newton approach is the L-BFGS method. In this work, we use the LSR1 update to better model potentially indefinite Hessians of the non-convex loss function. 2. Adaptive Regularization using Cubics (ARCs): Given that the quasi-Newton approximation is allowed to be indefinite, we use an Adaptive Regularized using Cubics approach to safeguard each search direction. 3. Shape-changing regularizer: We use a shape-changing norm to define the cubic regularization term, which allows us to compute the closed form solution to the cubic subproblem (7). 4. Computational complexity: Let m be the number of previous iterates and gradients stored in memory. The proposed LSR1 ARC approach is comparable to L-BFGS in terms of storage and compute complexity (see Table 1). Algorithm 2 Limited-Memory Symmetric Rank-1 Adaptive Regularization using Cubics 1: Given: Θ0, γ2 ≥ γ1, 1 > η2 ≥ η1 > 0, and σ0 > 0 2: for k = 0, 1, 2 . . . 
do 3: Obtain Sk = [ s0 · · · sk ], Yk = [ y0 · · · yk ] 4: Solve the generalized eigenproblem S>k Yku = Λ̂S > k Sku and let δk = min{λ̂i} 5: Compute Ψk = Yk − δkSk 6: Perform QR decomposition of Ψ = QR 7: Compute the eigendecomposition RMR> = PΛP> 8: Assign U‖ = QP and U>‖ = P >Q> 9: Define C‖ = diag(c1, . . . , cm), where ci = 2 λi+ √ λ2i +4µ|ḡi| and ḡ‖ = U>‖ g 10: Compute α∗ = 2 δk+ √ δ2k+4µ‖g⊥‖ where g⊥ = g −U‖ḡ‖ 11: Compute step s∗ = −α∗g + U‖(α∗Im −C‖)U>‖ 12: Compute m(s∗) and ρk = (f(Θk)− f(Θk+1))/m(s∗) 13: Set Θk+1 = { Θk + sk, if ρk ≥ η1, Θk, otherwise and µk+1 = 0.5µk if ρk > η2, 0.5µk(1 + γ1) if η1 ≤ ρk ≤ η2, 0.5µk(γ1 + γ2) otherwise 14: end for 2.2 IMPLEMENTATION Because full gradient computation is very expensive to perform, we impement a stochastic version of the proposed ARCs-LSR1 method. In particular, we use the batch gradient approximation g̃k ≡ 1 |Bk| ∑ i∈Bk ∇fi(Θk). In defining the SR1 matrix, we use the quasi-Newton pairs (sk, ỹk), where ỹk = g̃k+1 − g̃k (see e.g., Erway et al. (2020)). 3 CONVERGENCE ANALYSIS In this section, we prove convergence properties of the proposed method (ARCs-LSR1 in Algorithm 2). The following theoretical guarantees follow the ideas from Cartis et al. (2011) and Benson & Shanno (2018). First, we make the following mild assumptions: A1. The loss function f(Θ) is continuously differentiable, i.e., f ∈ C1(Rn). A2. The loss function f(Θ) is bounded below. Next, we prove that the matrix Bk in (4) is bounded. Lemma 1 The SR1 matrix Bk+1 in (4) satsifies ‖Bk+1‖F ≤ κB for all k ≥ 1 for some κB > 0. Proof: Using the limited-memory SR1 update with memory parameter m in (4), we have ‖Bk+1‖F ≤ ‖B0‖F + k∑ j=k−m+1 ‖(yj −Bjsj)(yj −Bjsj)>‖F |s>j (yj −Bjsj)| . Using a property of the Frobenius norm, namely, for real matrices A, ‖A‖2F = trace(AA >), we have that ‖(yj − Bjsj)(yj − Bjsj)>‖F = ‖yj − Bjsj‖22. Since the pair (sj ,yj) is accepted only when |s>j (yj −Bjsj)| > ε‖yj −Bjsj‖22, for some constant ε > 0, and B0 = δkI for some 0 < δk < δmax, we have ‖Bk+1‖F ≤ δmax + m ε ≡ κB . Given the bound on ‖Bk+1‖F , we obtain the following result, which is similar to Theorem 2.5 in Cartis et al. (2011). Theorem 1 Under Assumptions A1 and A2, if Lemma 1 holds, then lim inf k→∞ ‖gk‖ = 0. Finally, we consider the following assumption, which can be satisfied when the gradient, g(Θ), is Lipschitz continuous on Θ. A3. If {Θti} and {Θli} are subsequences of {Θk}, then ‖gti−gli‖ → 0 whenever ‖Θti−Θli‖ → 0 as i→∞. If we further make Assumption A3, we have the following stronger result (which is based on Corollary 2.6 in Cartis et al. (2011)): Corollary 1 Under Assumptions A1, A2, and A3, if Lemma 1 holds, then lim k→∞ ‖gk‖ = 0. 4 EXPERIMENTS To empirically compare the efficiency of the method against popular optimization methods like SGD, ADAGRAD, ADAM, RMSProp and L-BFGS, we focus on two broad deep learning problems: image classification and image reconstruction. We choose these tasks due to their broad importance and availability of reproducible model architectures. We run each experiments on an average of 5 times with a random initialization in each experiment. The number of parameters, convolutional layers and fully connected layers are mentioned in Table 3. Dataset: We measure the classification performance of each optimization method on 4 image datasets: MNIST (LeCun et al. (2010)), FashionMNIST (Xiao et al. (2017)), IRIS (Dua & Graff (2017)) and CIFAR10 (Krizhevsky et al.). 
A comprehensive view of the experiments is provided in Table 2.

Hyperparameter tuning: We empirically tune the hyperparameters and select the best setting for each update scheme. A comprehensive list of the learning rates for the gradient and adaptive-gradient algorithms is given in Table 4 in the Appendix. The additional parameters are defined as follows:
• ADAM: We apply a perturbation of $1.0 \times 10^{-6}$. $\beta_1$ and $\beta_2$ are set to 0.9 and 0.999, respectively.
• ADAGRAD: The initial accumulator value is set to 0. The perturbation is set to $1.0 \times 10^{-10}$.
• SGD: We use a momentum of 0.9.
• RMSPROP: We set $\alpha = 0.99$. The perturbation is set to $1.0 \times 10^{-8}$.
• L-BFGS: Table 4 in Appendix A presents the initial learning rate for the stochastic step in L-BFGS. We set the default learning rate to 1.0. We choose a history size $m$ of 10 and a maximum of 10 iterations. The tolerance on the function value/parameter change is set to $1.0 \times 10^{-9}$, and the first-order optimality tolerance for termination is $1.0 \times 10^{-9}$.
• ARCs-LSR1: We choose the same parameters as for L-BFGS.

Network architecture: For each problem, the model architecture is given in Table 3 in the appendix. The forward and backward pass of a DNN is described in Algorithm 3 in the appendix.

Testbed and software: All experiments were conducted using the open-source packages PyTorch (Paszke et al. (2019)), SciPy (Virtanen et al. (2020)), and NumPy (Harris et al. (2020)), on an Intel Core i7-8700 CPU with a clock rate of 3.20 GHz and an NVIDIA RTX 2080 Ti graphics card.

5 RESULTS

We divide the results into two categories: classification and image reconstruction. We present both the training and the testing results for all methods.

5.1 CLASSIFICATION RESULTS

For each classification problem, we define the network architecture and the corresponding hyperparameters (other than the learning rate) for each optimization scheme.

IRIS: Since this dataset is relatively small, we use a small network for our deep-learning model, described in Table 3. We set the history size for the proposed approach and L-BFGS to 10 and the number of iterations to 10. Figure 1 shows the comparative performance of all the methods. Note that our proposed method (ARCs-LSR1) achieves the highest classification accuracy in the fewest epochs.

MNIST: We trained the network for 20 epochs with a batch size of 256 images. We keep the same history size and number of iterations as for the IRIS dataset for L-BFGS and the proposed ARCs-LSR1 approach. For training, Figure 2 shows that nearly all methods achieve optimal training accuracy. However, closely inspecting the testing curve, we notice that the proposed approach achieves higher accuracy than all the existing methods.

FMNIST: We train the network for 20 epochs with a batch size of 256 images. We keep the history size the same as in the IRIS and MNIST experiments for the proposed approach and L-BFGS. On this dataset, the proposed ARCs-LSR1 approach is comparable to L-BFGS but outperforms the adaptive methods (see Figure 3).

CIFAR10: We use the same parameters presented in Table 4 for the adaptive methods. For ARCs-LSR1 and L-BFGS, we use a history size of 100 with a maximum of 100 iterations and a batch size of 1024. Figure 4(a) shows the training loss (cross-entropy), and Figure 4(b) shows the testing accuracy, i.e., the number of samples correctly predicted in the testing set.

To demonstrate the efficacy of the proposed method on larger networks, additional experiments on the ResNet50 architecture can be found in the appendix (Figure 8).

5.2 IMAGE RECONSTRUCTION RESULTS

The image reconstruction problem involves feeding a batch of the dataset to a feedforward convolutional autoencoder model with randomly initialized weights. It follows the same deep learning conventions as Algorithm 3 in Appendix A. The loss function is defined between the reconstructed image and the original image.
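In a minimal PyTorch form, the reconstruction setup just described might look as follows; the layer sizes are illustrative stand-ins, not the architecture from Table 3 of the paper.

```python
import torch
from torch import nn

n, l = 28 * 28, 32                        # input and latent dimensions, l << n
encoder = nn.Sequential(nn.Flatten(), nn.Linear(n, l), nn.ReLU())
decoder = nn.Sequential(nn.Linear(l, n), nn.Sigmoid())

x = torch.rand(256, 1, 28, 28)            # one batch of 256 MNIST-sized images
x_bar = decoder(encoder(x)).view_as(x)    # reconstruction of the input
loss = nn.functional.mse_loss(x_bar, x)   # mean-squared reconstruction error
```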
MNIST: An image $x \in \mathbb{R}^n$ is fed to the network, compressed into a latent space $z \in \mathbb{R}^l$, where $l \ll n$, and reconstructed back to its original size as $\bar{x} \in \mathbb{R}^n$. We compute the mean-squared error loss between the reconstruction and the true image. The weights are initialized randomly. Each experiment was conducted 5 times with a batch size of 256 images and 50 epochs. The image reconstruction results are shown in Figure 5. Note that the initial descent provided by the proposed approach yields a significant decrease in the objective function. For a closer look, we provide the results during the initial epoch (Figure 9(a)) and the final epoch (Figure 9(b)). The ARCs-LSR1 method has already made most of its progress in the first half of the first epoch, which is empirical evidence that the method converges to the minimizer in fewer steps than the adaptive methods. In Figure 5(b), we notice that all the adaptive methods eventually converge to the same point. For training results on the F-MNIST dataset, see Section B in the appendix.

5.3 TIME COMPLEXITY ANALYSIS

Having established that the proposed approach performs competitively against the existing methods, we now analyze the time requirements of each method. We clock the most computationally demanding task considered here, CIFAR10 classification, with a maximum of 100 iterations and a history size of 100 for L-BFGS and ARCs-LSR1 and a batch size of 1024 images. Figure 7 plots the time required by each method to reach non-overtrained minima. As can be seen, the proposed approach reaches the desired minima in much less time than the other algorithms. L-BFGS struggles to converge due to a very noisy loss function and the small batch size, which causes the algorithm to break down. Ozyildirim & Kiran (2020) argue that a large batch size is required for quasi-Newton methods to perform well; the ARCs-LSR1 method, however, performs well even with a small batch size.

6 CONCLUSION

In this paper, we proposed a novel quasi-Newton approach in a modified adaptive-regularization-by-cubics setting. We showed, both empirically and theoretically, that an L-SR1 quasi-Newton approximation in an ARCs setting performs better than, or comparably to, most state-of-the-art optimization schemes. Although the approach has yielded strong results, we still need to test the method's efficacy when the network and dataset sizes are large and when the availability of data is sparse.
1. What are the main contributions and novel aspects introduced by the paper regarding the limited memory quasi-Newton method? 2. What are the strengths and weaknesses of the proposed algorithm compared to prior works, particularly L-BFGS? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are some critical details that the reviewer believes are missing from the paper, and how would they enhance the algorithm's interest and potential impact? 5. How does the use of stochastic gradient estimates affect the algorithm's performance, and what are some aspects of the implementation that could benefit from further explanation or improvement?
Summary Of The Paper Review
Summary Of The Paper The paper describes a limited memory quasi-Newton method based on SR1 updating using a variant on the adaptive regularized cubic (ARC) approach to globalization. The algorithm is applied to training deep neural networks for image classification and autoencoding, and compared to L-BFGS and various SGD variants. The authors claim the main contributions to be the different optimizer ingredients (L-SR1, ARC safeguarding, and the particular shape-changing regularizer) as well as computational complexity similar to L-BFGS. Review The ingredients in the algorithm are not new, though the shape-changing regularizer used in this paper was originally described for a trust-region method rather than ARC (and the author of the shape-changing regularizer paper focused on L-BFGS as his main algorithm, while at the same time mentioning L-SR1 and other quasi-Newton methods as options). However, the particular combination is new as far as I know, and the empirical results seem quite promising, though all involve relatively small networks. To the extent that the combination is new, I would have liked to see a little more rationale for the details of the regularizer adjustment and step acceptance criteria. These seem to be adapted from the paper of Burdakov to the ARC setting, but there is no discussion of convergence results. If the algorithm is the main contribution of the paper, a statement of these results would be nice to have. It would also be nice to have the (very short) appendix A folded into the main paper text so that the paper could be implemented by a reader without referring to the supplementary material. The text is missing some critical details that I think make the algorithm much more interesting (and make me interested in seeing a more developed paper in the future!). First, it seems that the L-SR1 and L-BFGS algorithms are both being used with stochastic gradient estimates in lieu of actual gradients. This is not entirely clear in the earlier examples where the authors refer to mini-batch sizes (those could have just been for the SGD variants), but in the final paragraph of Section 4, just before the conclusion, the authors refer to comparing L-SR1 and L-BFGS with different batch sizes. Though the authors mention prior work arguing that L-BFGS works poorly with stochastic gradient estimates (at least without very large batch sizes), there is no hint as to why stochastic gradients should work so much better with L-SR1. This seems like a real mystery: is it the L-SR1 Hessian estimate that works better, or the use of ARC rather than line search, or something else entirely? There are also several aspects of the algorithm that are under-explained in the presence of stochastic estimation. For example, the step acceptance and regularization adaption are based on an improvement ratio that superficially requires evaluating the full empirical loss at two points; are these replaced by means over a mini-batch in the implementation? If so, is the same mini-batch used to evaluate both terms? And similarly with the gradient treatment: is the gradient difference in the L-SR1 algorithm based on stochastic gradient estimates using the same mini-batch, or independent draws? Also, in the conclusion, the authors mention that they would like to "explore a stochastic version of computing the Hessian approximation", and this confused me in the context of the apparently-stochastic gradients feeding into the L-SR1 Hessian approximation. 
Second, the authors never specify the activation function they are using in their experiments. But if one were to use a ReLU activation (which seems likely), the true empirical risk becomes non-smooth. There has been interesting work done on using BFGS for non-smooth functions, and it seems to work surprisingly well in that setting; however, to my knowledge, the theory for why is still lacking. I don't know of any work on SR1 (or L-SR1) for non-smooth objectives. The paper might still be ready for publication despite missing theoretical details if there were a strong enough empirical justification. However, while the experiments are interesting, they all involve relatively small networks (only one over 100K parameters) for relatively small data sets, though this might be based on the hardware used (one node with a GPU). I also have several more minor comments: The citation of Luo et al on the first page should be parenthesized. The quoted theorem 1 is for a convex optimization problem; the start of the next sentence says "Newton's method avoid (sic) these saddle points," but convexity means that saddle points cannot be the problem. "Thus storing the Hessian O(n^2) becomes impractical" should be something like "Thus storing the Hessian, which takes O(n^2) memory, becomes impractical." Similarly "the inversion of the Hessian matrix can make it an additional computational burden O(n^3)" should be something like "the inversion of the Hessian matrix takes O(n^3), which is also impractical" (or just "the inversion of the Hessian matrix takes O(n^3)" -- the reader will likely get the point). In Algorithm 3 line 3, the matrix Y_k should have columns y_0 through y_k (vs s_0 through s_k). Also, I was confused by the update for mu_{k+1} -- it seems like it is being assigned an interval rather than a specific value? In the results, please do say more about the architecture (activation functions in particular). This could go into supplementary details, but the experiments could not be reproduced with what is given here. On page 7, "Figure 4(b) tepresents" -> "Figure 4(b) represents"
ICLR
1. What is the focus of the paper, and what are the proposed approaches? 2. What are the strengths of the proposed methods, particularly in their application to autoencoders and feed-forward neural networks? 3. What are the weaknesses of the paper, especially regarding the lack of theoretical support for its claims? 4. Do you have any questions about the novelty of the contributions mentioned in the paper? 5. Can you clarify the guarantee of definite Hessian approximation in L-BFGS updates? 6. What is the meaning of "online convex optimization"? 7. Are there any typos or unclear parts in the paper that need attention?
Summary Of The Paper Review
Summary Of The Paper The paper proposes to use a Symmetric Rank-1 (SR1) quasi-Newton approach to approximate the Hessian and to use an Adaptive Regularized Cubics (ARC) with an adaptive norm framework. To assess the performance of the new method, numerical experiments using autoencoders and feed-forward neural network models are supplied. Review My major concerns with this paper are: -Except for thm1 from the Reddi et al. paper, I don't see any theoretical result in either the paper or the appendix. Am I missing something, or is no theory given to support the claims in the paper? For instance, after the Thm1 statement, you claim that Newton's method avoids saddle points...but I did not see any theoretical results supporting this... -The novelty of the contributions stated at page 4: the methods SR1 to approximate the Hessian & ARC exist already in the literature. You claim at the end of page 2 "... an L-BFGS update is that the Hessian approximation can be guaranteed to be definite, which is highly suitable in line-search settings because the update s_k is guaranteed to be a descent direction..." why? give a reference or show it? Line 4 in Algo 1: what is the operator G? what is f_t? Thm1: what do you mean by online convex optimization exactly? Several typos in the paper. ...
ICLR
1. What is the main contribution of the paper regarding the application of the Limited-Memory Symmetric Rank-1 (L-SR1) algorithm in deep learning? 2. What are the strengths and weaknesses of the proposed method compared to other algorithms such as SGD, Adam, and L-BFGS? 3. How does the reviewer assess the novelty and significance of the paper's contributions? 4. What are the concerns regarding the exposition and explanation of the proposed algorithm? 5. What are the limitations of the experimental results and how do they relate to the practical relevance of the method? 6. Are there any factual errors or misunderstandings in the paper regarding Newton's method and saddle points? 7. How could the structure and organization of the paper be improved? 8. What are the typos, grammatical errors, and inconsistencies in the paper? 9. How does the reviewer evaluate the overall quality and impact of the paper?
Summary Of The Paper Review
Summary Of The Paper This paper investigates the application of a certain Quasi-Newton algorithm, the Limited-Memory Symmetric Rank-1 (L-SR1) algorithm, in deep learning problems. The benefit of this technique over similar, more widely investigated methods that use a positive definite approximation of the Hessian, such as stochastic L-BFGS, is the fact that the L-SR1 approximation is not guaranteed to be definite, and thus has the potential to be a more accurate approximation of the true Hessian. Since in this case line-search methods may return ascent directions, the authors propose a specific form of Adaptive Regularization using Cubics (ARC) as an alternative. Numerical simulations are provided comparing the performance of the proposed algorithm to SGD, adaptive methods such as Adam, and a naive L-BFGS implementation. Review Pros: The paper highlights the potential of L-SR1 in the highly non-convex deep neural network setting, where neglecting negative curvature information might seriously degrade the accuracy of the Hessian approximation. The idea of using the specific type of shape-changing norm that varies in every iteration in order to simplify the local cubic sub-problem is interesting. Cons: The novelty of this paper is very limited. The only novel contribution compared to Cartis et al.* and Erway et al.* seems to be the particular form of shape-changing regularization. This has the potential to be a valuable contribution, but the exposition of the technique is very confusing. The authors point out that the closed-form solution can be found easily, but provide no further explanation, simply presenting the full algorithm as is. The proposed algorithm, which is the main contribution of this paper, is not discussed in much detail, and several steps are left unexplained. The significance of the results is questionable. Since the paper does not provide any theoretical insights into the introduced algorithm (e.g. convergence guarantees), it has to be thoroughly verified through relevant experiments. The proposed algorithm is not compared to its closest competitor, Erway et al., and the L-BFGS algorithm used in the comparison is a naive implementation of mini-batch L-BFGS that is known to be unstable on larger models/datasets. There are several improved versions of L-BFGS tailored to the stochastic setting (Moritz et al., Gower et al. and more). Furthermore, the experiments are performed on very small and simple models and datasets, casting doubt on the practical relevance of the method. Even in this limited setting, the gains over other techniques are not too significant. The authors posit that Newton's method avoids saddle points by exploiting curvature information. To the best of my knowledge, the opposite is true and Newton's method is prone to converging to saddle points. The motivation that SR1 avoids saddle points by using negative curvature information is similarly unsupported. It seems to me that the benefit of SR1 over BFGS would lie in the more accurate Hessian representation and not in the avoidance of saddles. The structure and organization of the paper is confusing and has unnecessary parts. For instance, I don't see why the specific forms of adaptive methods are discussed in such detail (including a table) when they are not used later in the paper. Including the forward and backward pass of a DNN as an algorithm is also unnecessary at a conference like ICLR. Detailed descriptions of the well-known datasets and the very detailed discussion of minor hyperparameters could be moved to the appendix.
The paper has many typos, grammatical errors, and unfinished sentences. The experimental results are confusing and inconsistent. In some plots, we don't know exactly what quantity is plotted (Fig. 3). In some plots, epochs are not integers, which is confusing; reporting iterations would be clearer in this case. Figure 7 shows 'computational cost', but it is not disclosed how the authors calculated this quantity. In the same plot, CIFAR-10 training on a small model takes 80k seconds (approximately 22 hours), which seems highly unrealistic. In 'Conclusions', the authors also mention that the proposed algorithm's performance is fairly independent of batch size, which is not supported at all by the experiments. Overall, the experiments have brought me more questions than answers.

*Citations are equivalent to those in the paper.
ICLR
Title Laplacian Eigenspaces, Horocycles and Neuron Models on Hyperbolic Spaces

Abstract We use the hyperbolic Poisson kernel to construct the horocycle neuron model on hyperbolic spaces, which is a spectral generalization of the classical neuron model. We prove a universal approximation theorem for horocycle neurons. As a corollary, we obtain a state-of-the-art result on the expressivity of $f^1_{a,p}$, which is used in hyperbolic multiple linear regression. Our experiments obtain state-of-the-art results on the Poincaré-embedding subtree classification task and on the classification accuracy of two-dimensional visualizations of images.

1 INTRODUCTION

Conventional deep network techniques attempt to use architectures based on compositions of simple functions to learn representations of Euclidean data (LeCun et al., 2015). They have achieved remarkable successes in a wide range of applications (Hinton et al., 2012; He et al., 2016). Geometric deep learning, a niche field that has caught the attention of many authors, attempts to generalize conventional learning techniques to non-Euclidean spaces (Bronstein et al., 2017; Monti et al., 2017). There has been growing interest in using hyperbolic spaces in machine learning tasks because they are well-suited for tree-like data representation (Ontrup & Ritter, 2005; Alanis-Lobato et al., 2016; Nickel & Kiela, 2017; Chamberlain et al., 2018; Nickel & Kiela, 2018; Sala et al., 2018; Ganea et al., 2018b; Tifrea et al., 2019; Chami et al., 2019; Liu et al., 2019; Balazevic et al., 2019; Yu & Sa, 2019; Gulcehre et al., 2019; Law et al., 2019). Many authors have introduced hyperbolic analogs of classical learning tools (Ganea et al., 2018a; Cho et al., 2019; Nagano et al., 2019; Grattarola et al., 2019; Mathieu et al., 2019; Ovinnikov, 2020; Khrulkov et al., 2020; Shimizu et al., 2020).

Spectral methods are successful in machine learning, from nonlinear dimensionality reduction (Belkin & Partha, 2002) to clustering (Shi & Malik, 2000; Ng et al., 2002) to hashing (Weiss et al., 2009) to graph CNNs (Bruna et al., 2014) to spherical CNNs (Cohen et al., 2018) and to inference networks (Pfau et al., 2019). Spectral methods have been applied to learning tasks on spheres (Cohen et al., 2018) and graphs (Bruna et al., 2014), but not yet on hyperbolic spaces. This paper studies a spectral generalization of the FC (affine) layer on hyperbolic spaces.

Before presenting the spectral generalization of the affine layer, we introduce some notation. Let $(\cdot,\cdot)_E$ be the Euclidean inner product, $|\cdot|$ the Euclidean norm, and $\rho$ an activation function. The Poincaré ball model of the hyperbolic space $\mathbb{H}^n$ ($n\ge2$) is the manifold $\{x\in\mathbb{R}^n : |x|<1\}$ equipped with the Riemannian metric $ds^2_{\mathbb{H}^n}=\sum_{i=1}^n 4(1-|x|^2)^{-2}dx_i^2$. The boundary of $\mathbb{H}^n$ under its canonical embedding in $\mathbb{R}^n$ is the unit sphere $S^{n-1}$. The classical neuron $y=\rho((x,w)_E+b)$ has input $x\in\mathbb{R}^n$ and output $y\in\mathbb{R}$, with trainable parameters $w\in\mathbb{R}^n$, $b\in\mathbb{R}$. An affine layer $\mathbb{R}^n\to\mathbb{R}^m$ is a concatenation of $m$ neurons. An alternative representation of the neuron $x\mapsto\rho((x,w)_E+b)$ is given by¹

$$x\in\mathbb{R}^n \mapsto \rho(\lambda(x,\omega)_E+b),\qquad \omega\in S^{n-1},\ \lambda,b\in\mathbb{R}. \tag{1}$$

¹ If $w\neq(0,\dots,0)$, one can take $\omega=w/|w|$ and $\lambda=|w|$; else, one can take $\lambda=0$ and any $\omega\in S^{n-1}$.

This neuron is constant over any hyperplane that is perpendicular to a fixed direction $\omega$. In $\mathbb{H}^n$, a horocycle is an $(n-1)$-dimensional sphere (with one point deleted) that is tangential to $S^{n-1}$. Horocycles are the hyperbolic counterparts of hyperplanes (Bonola, 2012). Horocyclic waves $\langle x,\omega\rangle_H := \frac{1}{2}\log\frac{1-|x|^2}{|x-\omega|^2}$ are constant over any horocycle that is tangential to $S^{n-1}$ at $\omega$. Therefore,

$$x\in\mathbb{H}^n \mapsto \rho(\lambda\langle x,\omega\rangle_H+b),\qquad \omega\in S^{n-1},\ \lambda,b\in\mathbb{R} \tag{2}$$

generalizes the classical neuron model (1), and a concatenation of finitely many neurons (2) generalizes the FC (affine) layer. We call (2) a horocycle neuron. Figure 1 (middle) is an example on $\mathbb{H}^2$.

The neuron models in (1, 2) are related to spectral theory because $(\cdot,\omega)_E$ (respectively $\langle\cdot,\omega\rangle_H$) are building blocks of the Euclidean (respectively hyperbolic) Laplacian eigenspaces. Moreover, many $L^2$ spaces have a basis given by Laplacian eigenfunctions (Einsiedler & Ward, 2017). On one side, all Euclidean (respectively hyperbolic) eigenfunctions are some kind of "superposition" of $(\cdot,\omega)_E$ (respectively $\langle\cdot,\omega\rangle_H$). On the other side, neural networks based on (1) (respectively (2)) represent functions that are another kind of "superposition" of $(\cdot,\omega)_E$ (respectively $\langle\cdot,\omega\rangle_H$). This heuristically explains why the universal approximation property is likely to hold for networks constructed by (1) and (2). Using the Hahn–Banach theorem, an injectivity theorem of Helgason, and an integral formula, we prove that finite sums of horocycle neurons (2) are universal approximators (Theorem 2).

Let $p\in\mathbb{H}^n$, let $T_p(\mathbb{H}^n)$ be the tangent space of $\mathbb{H}^n$ at $p$, let $a\in T_p(\mathbb{H}^n)$, and let $\oplus$ be the Möbius addition (Ungar, 2008). We remind the reader that the functions

$$f^1_{a,p}(x) = \frac{2|a|}{1-|p|^2}\sinh^{-1}\!\left(\frac{2(-p\oplus x,\,a)_E}{(1-|-p\oplus x|^2)\,|a|}\right) \tag{3}$$

are building blocks of many hyperbolic learning tools (Ganea et al., 2018a; Mathieu et al., 2019; Shimizu et al., 2020). Figure 1 illustrates examples of the different neuron models (1, 2, 3) on $\mathbb{H}^2$. In Lemma 1, we present a close relationship between (2) and (3). Using this relationship and Theorem 2, we obtain a novel result on the expressivity of $f^1_{a,p}$ (Corollary 1).

This article contributes to hyperbolic learning. We are the first to apply spectral methods, built on horocycles, to hyperbolic deep learning. We prove results on the expressivity of horocycle neurons (2) and of $f^1_{a,p}$ (3). With horocycle neurons, we obtain state-of-the-art results on the Poincaré-embedding subtree classification task and on the classification accuracy of 2-D visualizations of images in the experiments.

2 RELATED WORK

Universal approximation There is a vast literature on universal approximation (Cybenko, 1989; Hornik et al., 1989; Funahashi, 1989; Leshno et al., 1993). Cybenko (1989)'s existential approach uses the Hahn–Banach theorem and the Fourier transform of Radon measures. To prove Theorem 2, we also use the Hahn–Banach theorem, together with an integral formula (7) and an injectivity theorem of Helgason (Theorem 1). Generalizing integral formulas and injectivity theorems is easier than generalizing the Fourier transform of Radon measures on most non-Euclidean spaces. Carroll & Dickinson (1989) use the inverse Radon transform to prove universal approximation theorems. This method relates to ours, as injectivity theorems are akin to inverse Radon transforms. However, using the injectivity theorem is an existential approach, while using the inverse Radon transform is a constructive one.

Spectral methods The spectral methods in Bronstein et al. (2017); Bruna et al. (2014); Cohen et al. (2018) use a basis of $L^2(X)$ given by eigenfunctions, where $X$ is a finite graph or the sphere. Because $L^2(\mathbb{H}^n)$ has no basis of eigenfunctions, our approach differs from theirs.

Hyperbolic deep learning One part of hyperbolic learning concerns embedding data into hyperbolic space (Nickel & Kiela, 2017; Sala et al., 2018).
Another part concerns learning architectures with hyperbolic data as the input (Ganea et al. (2018a); Cho et al. (2019)). Ganea et al. (2018a) propose two ways to generalize the affine layer on hyperbolic spaces: one replaces the linear and bias parts of an affine map with (25, 26) of their paper; the other uses a concatenation of $f^1_{a,p}$ in their hyperbolic multiple linear regression (MLR). The latter is more relevant to ours. A level set of $f^1_{a,p}$ is a hypercycle that has constant distance to a chosen geodesic hypersurface, while a level set of a horocycle neuron is a horocycle that has constant "spectral" distance to an ideal point at infinity. Based on functions similar to $f^1_{a,p}$, Mathieu et al. (2019) and Shimizu et al. (2020) build the gyroplane layer and the Poincaré FC layer. Ganea et al. (2018a); Cho et al. (2019) take geodesics as decision hyperplanes, while we (initially) take horocycles. We shall also construct a horocycle multiple linear regression (MLR), whose decision hypersurfaces are geodesics. The geodesic decision hyperplanes of Ganea et al. (2018a); Cho et al. (2019) and the geodesic decision hypersurfaces here arise from different methods. Khrulkov et al. (2020) investigate hyperbolic image embedding, where the prototypes (or models) of each class are center-based. We study a different kind, and we shall call our prototypes end-based.

3 HYPERBOLIC SPACES

This section reviews facts from hyperbolic geometry that are used in the proof of Theorem 2. For the reader who is not interested in the proof, (4) is enough for the implementation.

Hyperbolic metric We use the Poincaré model. The hyperbolic space $\mathbb{H}^n$ ($n\ge2$) is the manifold $\{x\in\mathbb{R}^n:|x|<1\}$ equipped with the Riemannian metric $ds^2=\sum_{i=1}^n 4(1-|x|^2)^{-2}dx_i^2$. Let $o$ be the origin of $\mathbb{H}^n$. The distance function $d_{\mathbb{H}^n}$ satisfies $d_{\mathbb{H}^n}(o,x)=2\,\mathrm{arctanh}\,|x|$.

Geodesics, horocycles and corresponding points Geodesics in $\mathbb{H}^n$ are precisely the circular arcs that are orthogonal to $S^{n-1}$. Horocycles in $\mathbb{H}^n$ are precisely the $(n-1)$-dimensional spheres that are tangential to $S^{n-1}$ (Helgason, 1970). Horocycles are hyperbolic analogs of hyperplanes. Figure 2 illustrates geodesics and horocycles on $\mathbb{H}^2$.

Hyperbolic Poisson kernel The Poisson kernel for $\mathbb{H}^n$ is $P(x,\omega)=\left(\frac{1-|x|^2}{|x-\omega|^2}\right)^{n-1}$, where $x\in\mathbb{H}^n$, $\omega\in S^{n-1}$ (Helgason (1970)[p.108]). The function $\langle\cdot,\omega\rangle_H$ defined by

$$\langle x,\omega\rangle_H = \frac{1}{2(n-1)}\log P(x,\omega) = \frac{1}{2}\log\frac{1-|x|^2}{|x-\omega|^2} \tag{4}$$

is constant over any horocycle that is tangential to $S^{n-1}$ at $\omega$ (Figure 1 (middle), (6)).

Riemannian volume The Riemannian volume induced by the metric $ds^2$ on $\mathbb{H}^n$ is

$$d\mathrm{Vol} = 2^n(1-|x|^2)^{-n}dx_1\cdots dx_n. \tag{5}$$

Horocycles Let $\Xi$ be the set of horocycles of $\mathbb{H}^n$, and let $\Xi_\omega$ be the set of all horocycles that are tangential to $S^{n-1}$ at $\omega$. Given $\lambda\in\mathbb{R}$, we let $\xi_{\lambda,\omega}$ be the unique horocycle that connects $\omega$ and $\tanh(\lambda/2)\cdot\omega$. We have $\Xi_\omega=\cup_{\lambda\in\mathbb{R}}\{\xi_{\lambda,\omega}\}$ and $\Xi=\cup_{\omega\in S^{n-1}}\Xi_\omega$. The length of any geodesic (ending at $\omega$) line segment cut by $\xi_{\lambda_1,\omega}$ and $\xi_{\lambda_2,\omega}$ equals $|\lambda_1-\lambda_2|$ (A.2). Therefore $|\lambda_1-\lambda_2|$ is a natural distance function on $\Xi_\omega$, and the map $\lambda\mapsto\xi_{\lambda,\omega}$ is an isometry between $\mathbb{R}$ and $\Xi_\omega$. This isometry is closely related to $\langle\cdot,\omega\rangle_H$ (A.3): for any $x\in\xi_{\lambda,\omega}$,

$$\langle x,\omega\rangle_H = \lambda/2. \tag{6}$$

The annoying $/2$ in (6) is a tradeoff for the metric here being different from that in Helgason (2000).

Integral formula For fixed $\omega\in S^{n-1}$, $\mathbb{H}^n=\cup_{\lambda\in\mathbb{R}}\xi_{\lambda,\omega}$. Let $d\mathrm{Vol}_{\xi_{\lambda,\omega}}$ be the measure induced by $ds^2$ on $\xi_{\lambda,\omega}$. Let $L$ be a family of geodesics that end at $\omega$, let $\delta>0$, and let $U=L\cap(\cup_{\lambda\le\alpha\le\lambda+\delta}\,\xi_{\alpha,\omega})$. For $l\in L$, $d_H(l\cap\xi_{\lambda,\omega},\,l\cap\xi_{\lambda+\delta,\omega})=\delta$ (A.2), hence $d\mathrm{Vol}(U)=\delta\cdot d\mathrm{Vol}_{\xi_{\lambda,\omega}}(U\cap\xi_{\lambda,\omega})$ and therefore

$$\int_{\mathbb{H}^n} f(x)\,d\mathrm{Vol}(x) = \int_{\mathbb{R}}\left(\int_{\xi_{\lambda,\omega}} f(z)\,d\mathrm{Vol}_{\xi_{\lambda,\omega}}(z)\right)d\lambda. \tag{7}$$

The above proof (for $\mathbb{H}^n$) is essentially the same as that in (Helgason, 2000)[p.37] (for $\mathbb{H}^2$). To further convince the reader that (7) holds for all $n$, we give another simple proof in A.4.

Injectivity theorem With respect to the canonical measure on $\Xi$, Helgason (1970)[p.13] proved

Theorem 1 (Helgason). If $f\in L^1(\mathbb{H}^n)$ and $\int_\xi f(z)\,d\mathrm{Vol}_\xi(z)=0$ for a.e. $\xi\in\Xi$, then $f=0$ a.e..

Theorem 1 states that if the integral of $f\in L^1(\mathbb{H}^n)$ over almost every horocycle is zero, then $f$ itself is zero almost everywhere. This theorem and the integral formula (7) are essential for the proof of Theorem 2.

4 LEARNING ARCHITECTURES AND EIGENFUNCTIONS OF THE LAPLACIAN

In this section, we discuss a heuristic connection between the representation properties of eigenfunctions and classical neurons, and then we define some horocycle-related learning tools.

4.1 EIGENSPACES AND NEURON MODELS

On a Riemannian manifold $X$, the Laplace–Beltrami operator $L_X$ is the divergence of the gradient, and it has a well-known representation property (Einsiedler & Ward, 2017): if $X$ is a compact Riemannian manifold or a bounded domain in $\mathbb{R}^n$, then $L^2(X)$ has a basis given by eigenfunctions. This statement is false if $X$ is $\mathbb{R}^n$ or $\mathbb{H}^n$ (Hislop, 1994).

Eigenspaces of the Laplacian on $\mathbb{R}^n$ and $\mathbb{H}^n$ Our work is motivated by the theory of eigenspaces, in which Euclidean (respectively hyperbolic) eigenfunctions are obtained from $(x,\omega)_E$ (respectively $\langle x,\omega\rangle_H$) by some kind of superposition. For example, all smooth eigenfunctions of $L_{\mathbb{R}^n}$ are precisely the functions (M. Hashizume & Okamoto, 1972)[p.543]

$$f(x) = \int_{S^{n-1}} e^{\lambda(x,\omega)_E}\,dT(\omega), \tag{8}$$

and the eigenfunctions of $L_{\mathbb{H}^n}$ are precisely the functions (Helgason, 1970)[Theorem 1.7, p.139]

$$f(x) = \int_{S^{n-1}} e^{\lambda\langle x,\omega\rangle_H}\,dT(\omega), \tag{9}$$

where $T$ in (8) and (9) are certain technical linear forms on suitable functional spaces on $S^{n-1}$.

Neuron models By (8) and (1), Euclidean eigenfunctions (respectively classical neurons) are superpositions of $(\cdot,\omega)_E$ and $\exp$ (respectively $\rho$), with homogeneity and additivity. By (9) and (2), hyperbolic eigenfunctions (respectively horocycle neurons) are superpositions of $\langle\cdot,\omega\rangle_H$ and $\exp$ (respectively $\rho$). The representation property of eigenfunctions on compact manifolds and bounded domains suggests that the universal approximation property is likely to hold for networks constructed by $(\cdot,\omega)_E$ or $\langle\cdot,\omega\rangle_H$. However, this heuristic is not a proof (A.5).

4.2 HOROCYCLE BASED LEARNING ARCHITECTURES

Horocycle neuron In the implementation of the horocycle neuron (2), we take $\frac{1}{2}\log\left(\frac{1-|x|^2}{|x-\omega|^2+\epsilon}+\epsilon\right)$ for $\langle x,\omega\rangle_H$, where $\epsilon$ is a small constant that ensures numerical stability. For updating $\omega$, we use the sphere optimization algorithm (Absil et al., 2008; Bonnabel, 2013) (A.6).

Horocycle feature and horocycle decision hypersurface Given a non-origin point $x\in\mathbb{H}^n$, for $y\in\mathbb{H}^n$ we define $h_x(y)=\langle y,x/|x|\rangle_H$ and call it the horocycle feature attached to $x$. This feature is useful in the Poincaré-embedding subtree classification task (see the experiment and Figure 3 [left]). The horocycle is the hyperbolic analog of the Euclidean hyperplane, and therefore it is a possible choice of decision hypersurface, which may arise from a level set of a horocycle feature.
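To make the horocycle neuron of Section 4.2 concrete, here is a minimal NumPy sketch of the stabilized forward pass described above. The function names and the toy inputs are ours, chosen for illustration; this is not the authors' released implementation.

```python
import numpy as np

def horocycle_inner(x, omega, eps=1e-6):
    """Stabilized horocyclic wave <x, omega>_H = 0.5 * log((1-|x|^2) / |x-omega|^2),
    computed with the small constant eps of Section 4.2 for numerical stability."""
    num = 1.0 - np.sum(x * x, axis=-1)
    den = np.sum((x - omega) ** 2, axis=-1) + eps
    return 0.5 * np.log(num / den + eps)

def horocycle_neuron(x, omega, lam, b, rho=np.tanh):
    """Horocycle neuron (2): rho(lambda * <x, omega>_H + b)."""
    return rho(lam * horocycle_inner(x, omega) + b)

# Toy usage: a point in the Poincare disk H^2 and an ideal point on S^1.
x = np.array([0.3, -0.2])       # |x| < 1, so x lies in H^2
omega = np.array([1.0, 0.0])    # unit vector on the boundary S^1
print(horocycle_neuron(x, omega, lam=2.0, b=0.1))
```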
End-based clustering and end prototype Natural clustering is a topic in representation learning (Bengio et al., 2013), and common prototype-based clusters are center-based (Tan et al., 2005). We propose a type of clustering that embeds high-dimensional data in $\mathbb{H}^n$ and places prototypes on $S^{n-1}$. Figure 3 [right] is an example for $n=2$. For $\omega\in S^{n-1}$ and any $b\in\mathbb{R}$, the function $x\in\mathbb{H}^n\mapsto -\log\left(\frac{1-|x|^2}{|x-\omega|^2}\right)+b$ measures the relative distance of points of $\mathbb{H}^n$ from $\omega$ in Gromov's bordification theory (Bridson & Haefliger (2009)[II.8], A.18). Accordingly, we define $\mathrm{Dist}:\mathbb{H}^n\times S^{n-1}\times\mathbb{R}\to\mathbb{R}$ by

$$\mathrm{Dist}(x,\omega,b) = -\log\left(\frac{1-|x|^2}{|x-\omega|^2}\right)+b = -2\langle x,\omega\rangle_H+b. \tag{10}$$

It is a relative distance function, which is why $\mathrm{Dist}$ may assume negative values and why there is a bias term $b$ in (10). Consider classes $\mathrm{Cls}=\{C_1,C_2,\dots,C_M\}$ and labeled training examples $\{(X^1,Y^1),\dots,(X^N,Y^N)\}$, where the $X^i\in\mathbb{R}^D$ are $D$-dimensional input features and $Y^i\in\{1,2,\dots,M\}$. Each example $X^i$ belongs to the class $C_{Y^i}$. In light of (10), our goal is to find a neural network $NN_\theta:\mathbb{R}^D\to\mathbb{H}^n$ parameterized by $\theta$, prototypes $\omega_1,\dots,\omega_M\in S^{n-1}$, and real numbers $b_1,\dots,b_M\in\mathbb{R}$ such that

$$\frac{\#\left\{1\le i\le N : Y^i = \arg\min_{1\le j\le M}\mathrm{Dist}(NN_\theta(X^i),\omega_j,b_j)\right\}}{N} \tag{11}$$

is maximized. We call $\{NN_\theta(X^j) : 1\le j\le N\}$ the end-based clustering and the $\omega_i$ end prototypes (in hyperbolic geometry, an end is an equivalence class of the parallel lines in Figure 2 [left]). In experiments, we take $NN_\theta=\mathrm{Exp}\circ NN'_\theta$, where $NN'_\theta:\mathbb{R}^D\to\mathbb{R}^n$ is a standard neural network parameterized by $\theta$ and $\mathrm{Exp}:\mathbb{R}^n\to\mathbb{H}^n$ is the exponential map of the hyperbolic space.

Horocycle layer, horocycle multiple linear regression (MLR) and geodesic decision hypersurfaces We call a concatenation of neurons (2) a horocycle layer, and we now describe a prototypical learning framework for end-based clusterings. Using the same notions as in the previous paragraph, the classification task has $M$ classes, and $NN_\theta=\mathrm{Exp}\circ NN'_\theta:\mathbb{R}^D\to\mathbb{H}^n$ is a deep network. For prototypes $\omega_1,\dots,\omega_M\in S^{n-1}$, real numbers $b_1,\dots,b_M\in\mathbb{R}$, and an example $X$, the feedforward pass for prediction is

$$x = NN_\theta(X), \quad\text{(Feature descriptor)}$$
$$SC_j(X) = -\mathrm{Dist}(x,\omega_j,b_j), \quad\text{(Scores; Similarity)}$$
$$X \in C_{\arg\max_{1\le j\le M} SC_j(X)}. \quad\text{(Classifier)}$$

The goal is to maximize the accuracy (11), so we need a loss function for backpropagation. Following the convention of prototypical networks (Snell et al., 2017; Yang et al., 2018), we choose an increasing function $\rho$ (in our experiments, $\rho(x)=x$ or $\rho=\tanh$)² and let the distribution over classes for an input $X$ (with label $Y$) be

$$p_\theta(Y=C_j\,|\,X) \propto e^{-\rho(\mathrm{Dist}(NN_\theta(X),\,\omega_j,\,b_j))} = e^{-\rho(-SC_j(X))}.$$

² One often takes $\rho(x)=x^2$ in metric learning, which is improper here because $\mathrm{Dist}$ can be negative.

Therefore, given a batch of training examples, the loss function is

$$\mathcal{L} = -\frac{\sum_{(X^j,Y^j)\in\mathrm{Batch}}\log p_\theta(Y=C_{Y^j}\,|\,X^j)}{\#\mathrm{Batch}}. \tag{12}$$

The training proceeds by minimizing $\mathcal{L}$, and we call this framework a horocycle MLR. The set of parameters of the framework is $\{\theta\}\cup\{\omega_1,\dots,\omega_M\}\cup\{b_1,\dots,b_M\}$. It is worth mentioning that the decision boundaries of the horocycle MLR are geodesics, which follows from

$$SC_i(X)=SC_j(X) \iff \log\left(\frac{1-|x|^2}{|x-\omega_i|^2}\right)-b_i = \log\left(\frac{1-|x|^2}{|x-\omega_j|^2}\right)-b_j \iff \frac{|x-\omega_i|}{|x-\omega_j|} = e^{\frac{b_j-b_i}{2}}$$

and the theorem of Apollonian circles (A.7).
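The following is a minimal sketch of the horocycle MLR forward pass and loss, i.e. of (10)–(12), assuming the feature $x=NN_\theta(X)$ already lies in the Poincaré ball. Variable names are illustrative placeholders, not names from the authors' code.

```python
import numpy as np

def horocycle_mlr_scores(x, omegas, biases, eps=1e-6):
    """Scores SC_j(X) = -Dist(x, omega_j, b_j), eq. (10), for one feature x in H^n
    against M end prototypes omega_j on S^{n-1} with biases b_j."""
    num = 1.0 - np.sum(x * x)
    dists = [-np.log(num / (np.sum((x - w) ** 2) + eps) + eps) + b
             for w, b in zip(omegas, biases)]        # Dist(x, omega_j, b_j)
    return -np.array(dists)

def horocycle_mlr_loss(x, label, omegas, biases, rho=lambda t: t):
    """Cross-entropy loss (12) with p(Y=C_j|X) proportional to exp(-rho(Dist_j))."""
    dists = -horocycle_mlr_scores(x, omegas, biases)  # Dist_j = -SC_j
    logits = -rho(dists)                              # unnormalized log-probabilities
    logits = logits - logits.max()                    # stabilize the softmax
    logp = logits - np.log(np.exp(logits).sum())
    return -logp[label]
```

Prediction then takes the argmax of the scores, matching the (Classifier) step above.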
Poisson neuron and Poisson multiple linear regression (MLR) Although $\langle x,\omega\rangle_H$ (4) is well-motivated by the theory of eigenspaces (9) and fits naturally into metric learning (see (10) and also Corollary 1), it is only defined on $\mathbb{H}^n$. Some readers might not be convinced that the neuron has to be defined on hyperbolic spaces. Therefore, we remove the $\log$ in (4) and define the Poisson neuron model by

$$P^\rho_{w,\lambda,b}(x) = \rho\left(\lambda\,\frac{|w|^2-|x|^2}{|x-w|^2}+b\right)\qquad\text{for } w\in\mathbb{R}^n,\ \lambda,b\in\mathbb{R},$$

which is well-defined on $\mathbb{R}^n\setminus\{w\}$. Notice that if $|x|<|w|$ then $\frac{|w|^2-|x|^2}{|x-w|^2}=e^{2\langle x/|w|,\,w/|w|\rangle_H}$. In A.8, Figure 7 illustrates an example of a Poisson neuron on $\mathbb{R}^2$. In the implementation, we take $\frac{|w|^2-|x|^2}{|x-w|^2+\epsilon}$ for $\frac{|w|^2-|x|^2}{|x-w|^2}$, where $\epsilon$ is a small constant for numerical stability. We call a concatenation of Poisson neurons a Poisson layer, and we use it with a deep neural network $NN_\theta:\mathbb{R}^D\to\mathbb{R}^n$ to construct the Poisson MLR, which is similar to the horocycle MLR. Let $w_1,\dots,w_M\in\mathbb{R}^n$ and $b_1,\dots,b_M\in\mathbb{R}$; the feedforward pass for prediction of our framework is

$$x = NN_\theta(X),\qquad SC_j(X) = \mathrm{BatchNorm}(P^\rho_{w_j,-1,b_j}(x)),\qquad X\in C_{\arg\max_{1\le j\le M} SC_j(X)}. \tag{13}$$

We let $p_\theta(Y=C_j\,|\,X)\propto e^{SC_j(X)}$ and take (12) as the loss. This framework is called a Poisson MLR. We use the usual optimization algorithms to update the parameters of the Poisson neuron. BatchNorm (Ioffe & Szegedy, 2015) seems crucial for (13) in the experiment. Figure 4 illustrates that the high-confidence prediction regions (deep red areas) of the Poisson MLR are compact sets, in contrast to classical classifiers (Hein et al. (2019)[Theorem 3.1]). We shall use this figure to explain an experiment in Section 6.4.

5 REPRESENTATIONAL POWER

In this section, $\rho$ is a continuous sigmoidal function (Cybenko, 1989), ReLU (Nair & Hinton, 2010), ELU (Clevert et al., 2016), or Softplus (Dugas et al., 2001). We remind the reader that $\rho$ is sigmoidal if $\lim_{t\to\infty}\rho(t)=1$ and $\lim_{t\to-\infty}\rho(t)=0$. The following theorem justifies the representational power of horocycle neurons.

Theorem 2. Let $K$ be a compact set in $\mathbb{H}^n$, and let $1\le p<\infty$. Then finite sums of the form

$$F(x) = \sum_{i=1}^N \alpha_i\,\rho(\lambda_i\langle x,\omega_i\rangle_H+b_i),\qquad \omega_i\in S^{n-1},\ \alpha_i,\lambda_i,b_i\in\mathbb{R} \tag{14}$$

are dense in $L^p(K,\mu)$, where $\mu$ is either $d\mathrm{Vol}$ (5) or the induced Euclidean volume.

We provide a sketch of the proof here and go through the details in A.9. It suffices to prove the theorem for a sigmoidal function $\rho$ and $\mu=d\mathrm{Vol}$, as the other cases follow from this one. Assume that these finite sums are not dense in $L^p(K,d\mathrm{Vol})$. By the Hahn–Banach theorem, there exists some nonzero $h\in L^q(K,d\mathrm{Vol})$, where $q=p/(p-1)$ if $p>1$ and $q=\infty$ if $p=1$, such that $\int_K F(x)h(x)\,d\mathrm{Vol}(x)=0$ for all finite sums of the form (14). Extend $h$ to a function $H$ defined on $\mathbb{H}^n$ by setting $H(x)=h(x)$ if $x\in K$ and $H(x)=0$ if $x\in\mathbb{H}^n\setminus K$. Using the property of sigmoidal functions, the bounded convergence theorem, and the integral formula (7), we prove that the integral of $H$ over almost every horocycle is zero. By the injectivity Theorem 1, $H$ is almost everywhere zero, which contradicts our assumption and completes the proof.

In A.10, we prove the same result for Poisson neurons. In A.11, we prove the following lemma, which demonstrates a close relationship between horocycle neurons and the widely used $f^1_{a,p}$ (3).

Lemma 1. Let $K$ be a compact set in $\mathbb{H}^n$, $\omega\in S^{n-1}$, and $\epsilon>0$. There are $c,d\in\mathbb{R}$, $p\in\mathbb{H}^n$, and $a\in T_p(\mathbb{H}^n)$ such that the function $D(x)=c\,f^1_{a,p}(x)+d-\langle x,\omega\rangle_H$ satisfies $\|D\|_{L^p(K,d\mathrm{Vol})}<\epsilon$.

This lemma suggests that $\langle\cdot,\omega\rangle_H$ is a boundary point of some "compactification" of the space of the $f^1_{a,p}$. The above lemma together with Theorem 2 implies

Corollary 1. Let $K$ be a compact set in $\mathbb{H}^n$ and $1\le p<\infty$. Finite sums of the form

$$F(x) = \sum_{i=1}^N \alpha_i\,\rho(c_i f^1_{a_i,p_i}(x)+d_i),\qquad p_i\in\mathbb{H}^n,\ a_i\in T_{p_i}(\mathbb{H}^n),\ \alpha_i,c_i,d_i\in\mathbb{R},$$

are dense in $L^p(K,\mu)$, where $\mu=d\mathrm{Vol}$ or $\mu$ is the induced Euclidean volume.
This result provides novel insights into the hyperbolic neural network (Ganea et al., 2018a), the gyroplane layer (Mathieu et al., 2019), and the Poincaré FC layer (Shimizu et al., 2020). Although level sets of $f^1_{a,p}$ are hypercycles, our proof of Lemma 1 relies on the theory of horocycles. It would be interesting to have more natural approaches to treat the expressivity of $f^1_{a,p}$.

6 EXPERIMENTS

In this section, we first experiment with MNIST. Next, we apply a horocycle feature to the Poincaré-embedding subtree classification task. After that, we construct 2-D clusterings of image datasets using the horocycle MLR. Finally, we provide evidence for further possible applications of the Poisson MLR. We use the frameworks or individual functions of TensorFlow, Keras, and scikit-learn (Abadi et al., 2015; Chollet et al., 2015; Pedregosa et al., 2011).

6.1 MNIST

The MNIST (LeCun et al., 1998) task is popular for testing hyperbolic learning tools (Ontrup & Ritter, 2005; Nagano et al., 2019; Mathieu et al., 2019; Grattarola et al., 2019; Ovinnikov, 2020; Khrulkov et al., 2020). We train two different classifiers. A.12, A.14, and the code contain the details.

The first one is a single horocycle layer followed by a softmax classifier. The average test error rate after 600 epochs is 1.96%; Theorem 2 provides the rationale for this experiment (A.13). The second one is a Poisson MLR. It is the best hyperbolic-geometry-related MNIST classifier (Table 1). In this table, Ontrup & Ritter (2005) use a hyperbolic SOM, Grattarola et al. (2019) use an adversarial autoencoder, and Khrulkov et al. (2020) use the hyperbolic MLR. That our experiments perform well on MNIST suggests that horocycle and Poisson neurons are computationally efficient and coordinate easily with classical learning tools (such as the convolutional layer and the softmax).

6.2 POINCARÉ EMBEDDING SUBTREE CLASSIFICATION

Given a Poincaré embedding (Nickel & Kiela, 2017) $PE:\{\text{WordNet noun}\}\to\mathbb{H}^D$ of 82114 nouns and given a node $x\in\{\text{WordNet noun}\}$, the task is to classify all other nodes as being part of the subtree rooted at $x$ or not (Ganea et al., 2018a). Our model is a logistic regression whose only predictor is the horocycle feature $p\in\{\text{WordNet noun}\}\mapsto h_{PE(x)}(PE(p)/s)$ ($s$ is a hyperparameter in $[1,1.5]$), and whose dependent variable is whether $p$ is in the subtree rooted at $x$. The decision hypersurface of this model is a horocycle, as illustrated in Figure 3 (left).

In the experiment, we pre-train three different Poincaré embeddings³ in each of $\mathbb{H}^2,\mathbb{H}^3,\mathbb{H}^5,\mathbb{H}^{10}$. For each $x\in\{\text{animal, group, location, mammal, worker}\}$ and $D\in\{2,3,5,10\}$, we randomly select one of the three pre-trained Poincaré embeddings $PE:\{\text{WordNet noun}\}\to\mathbb{H}^D$ and then test the model. Table 2 reports the F1 classification scores and two standard deviations over 100 trials for each $\{x,D\}$. The choice of Poincaré embedding accounts for most of the variance in performance.

³ https://github.com/dalab/hyperbolic_cones

Our model differs from existing ones. First, we take the horocycle as the decision hypersurface, while others take geodesics. Second, we train a logistic regression on top of the horocycle feature attached to $PE(x)$, which is efficiently calculated, while others train the hyperbolic MLR with different parametrizations. As for the number of parameters, we have three (independent of $D$), Ganea et al. (2018a) have $2D$, and Shimizu et al. (2020) have $D+1$. The small number of parameters explains why our model is prominent in low dimensions.
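As a sketch of how this logistic regression can be set up, the following uses scikit-learn (which the experiments already rely on). The names `emb`, `root`, `labels`, and the helper functions are illustrative placeholders, not identifiers from the released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def horocycle_feature(y, x, eps=1e-6):
    """h_x(y) = <y, x/|x|>_H, the horocycle feature attached to a non-origin x in H^n."""
    omega = x / np.linalg.norm(x)
    num = 1.0 - np.sum(y * y, axis=-1)
    den = np.sum((y - omega) ** 2, axis=-1) + eps
    return 0.5 * np.log(num / den + eps)

def fit_subtree_classifier(emb, root, labels, s=1.2):
    """emb: (N, D) Poincare embeddings of the nodes; root: embedding PE(x) of the
    subtree root; labels: 0/1 subtree membership; s: the scale hyperparameter in [1, 1.5]."""
    feats = horocycle_feature(emb / s, root).reshape(-1, 1)  # the single predictor
    return LogisticRegression().fit(feats, labels)
```

In this reading, tuning reduces to a one-dimensional search over $s$, consistent with the model having only three parameters in total.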
6.3 END-BASED CLUSTERING FOR 2D DIMENSION REDUCTION

In this experiment, we use the horocycle MLR (Section 4.2) to construct end-based clusterings $NN_\theta:\mathbb{R}^D\to\mathbb{H}^2$ for MNIST, Fashion-MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky, 2012). We take $NN_\theta=\mathrm{Exp}\circ NN'_\theta$, where $\mathrm{Exp}$ is the exponential map of $\mathbb{H}^2$ and $NN'_\theta:\mathbb{R}^D\to\mathbb{R}^2$ is a network with four convolutional blocks for MNIST/Fashion-MNIST or a ResNet-32 structure for CIFAR-10. A.16 and the code contain the details.

Figure 5 illustrates the end-based clusterings for MNIST, Fashion-MNIST, and CIFAR-10, with performance reported in the caption. Our accuracy for Fashion-MNIST is 8% higher than all numbers presented in McInnes et al. (2020). Moreover, Table 3 compares the numbers of Yang et al. (2018); Ghosh & Kirby (2020) and ours for MNIST; our methods are similar. We all use convolutional networks as the feature descriptor and prototype-based functions as the loss. However, Yang et al. (2018); Ghosh & Kirby (2020) use a center-based prototype loss, while we use the end-based loss (12). Yang et al. (2018)[Figure 1] point out that a traditional CNN is good at linearly separating feature representations, but the learned features have large intra-class variations. The horocycle MLR achieves inter-class separability in the same way (angle accounts for label difference) a traditional CNN does. At the same time, it also obtains intra-class compactness (Figure 5).

6.4 POISSON MLR

Using a Poisson MLR whose feature descriptor is a ResNet-32 structure, we obtain a classifier with a test error rate of 6.46% on CIFAR-10. It is on par with other methods with similar network structures (Yang et al., 2018). Moreover, we apply the Poisson MLR to a flower classification task (Tensorflow), which is a typical example of overfitting. Replacing the MLR part of the Keras model (Tensorflow) with a Poisson MLR, the new Poisson model shows better generalization performance (Figure 6). A.17 and the code contain the details. This subsection provides evidence for further applications of horocycles.

7 CONCLUSION

Based on the spectral theory of hyperbolic spaces, we introduce several horocycle-related learning tools. They find applications in hyperbolic neural networks, the Poincaré-embedding subtree classification task, and the visualization and classification of image datasets. We give an existential proof of a universal approximation theorem for shallow networks constructed from horocycle neurons or $f^1_{a,p}$. Hopefully, it will trigger further research on expressivity problems, such as constructive approaches, quantitative results, and the benefit of depth (Mhaskar & Poggio, 2016), for horocycle neurons, $f^1_{a,p}$, and similar functions on more general manifolds.

A APPENDIX

A.1 NOTATIONS AND SYMBOLS

Default notations:
- $\mathbb{R}$: the set of real numbers.
- $\mathbb{R}^n$: $n$-dimensional Euclidean space; $x\in\mathbb{R}^n$, $x=(x_1,\dots,x_n)$.
- $(\cdot,\cdot)_E$: Euclidean inner product; $(x,y)_E=\sum_{i=1}^n x_iy_i$.
- $\langle\cdot,\cdot\rangle_H$: hyperbolic analogue of $(\cdot,\cdot)_E$; for $x\in\mathbb{H}^n$, $\omega\in S^{n-1}$, $\langle x,\omega\rangle_H=\frac12\log\frac{1-|x|^2}{|x-\omega|^2}$.
- $|\cdot|$: Euclidean norm; $|x|=\sqrt{(x,x)_E}$.
- $\mathbb{H}^n$: $n$-dimensional hyperbolic space; as a set, $\mathbb{H}^n=\{x\in\mathbb{R}^n:|x|<1\}$.
- $T_p(X)$: tangent space of $X$ at $p$.
- $T(X)$: tangent bundle of $X$; $T(X)=\cup_{p\in X}T_p(X)$.
- $ds^2_{\mathbb{H}^n}$: the canonical metric on $\mathbb{H}^n$ with curvature $-1$; $ds^2_{\mathbb{H}^n}=\sum_{i=1}^n 4(1-|x|^2)^{-2}dx_i^2$.
- $d\mathrm{Vol}$: Riemannian volume on $\mathbb{H}^n$; $d\mathrm{Vol}=2^n(1-|x|^2)^{-n}dx_1\cdots dx_n$.
- $L^p(K,d\mathrm{Vol})$: $L^p$ space; $L^p(K,d\mathrm{Vol})=\{f \mid \int_K|f|^p\,d\mathrm{Vol}<\infty\}$.
- $\|\cdot\|_{L^p(K,d\mathrm{Vol})}$: $L^p$ norm; for $f$ measurable on $K$, $\|f\|_{L^p(K,d\mathrm{Vol})}=\left(\int_K|f|^p\,d\mathrm{Vol}\right)^{1/p}$.
- $S^{n-1}$: the $(n-1)$-dimensional sphere; as a set, $S^{n-1}=\{x\in\mathbb{R}^n:|x|=1\}$.
- $P(\cdot,\cdot)$: hyperbolic Poisson kernel; $P(x,\omega)=\left(\frac{1-|x|^2}{|x-\omega|^2}\right)^{n-1}$ for $x\in\mathbb{H}^n$, $\omega\in S^{n-1}$.
- $f^1_{a,p}$: model in the hyperbolic MLR; $f^1_{a,p}(x)=\frac{2|a|}{1-|p|^2}\sinh^{-1}\left(\frac{2(-p\oplus x,a)_E}{(1-|-p\oplus x|^2)|a|}\right)$.
- $d_{\mathbb{H}^n}$: the hyperbolic distance function.
- $\Xi$: the space of horocycles.
- $\Xi_\omega$: the set of horocycles that are tangential to $S^{n-1}$ at $\omega$.
- $L_X$: Laplace–Beltrami operator on $X$.
- $h_x$: horocycle feature function; $h_x(y)=\langle y,x/|x|\rangle_H$.
- $\xi_{\lambda,\omega}$: the unique horocycle connecting $\omega$ and $\tanh(\lambda/2)\cdot\omega$.
- MLR: multiple linear regression.
- $\dim$: dimension.
- $I_K$: the indicator function of $K$.
- $\mathrm{Dist}$: relative distance function; $\mathrm{Dist}(x,\omega,b)=-2\langle x,\omega\rangle_H+b$.
- $\mathrm{Cls}$: set of classes; $\mathrm{Cls}=\{C_1,C_2,\dots,C_M\}$.
- $NN_\theta$, $NN'_\theta$: networks parameterized by $\theta$.
- $\mathrm{Exp}$: exponential map of the hyperbolic space.
- $(X^1,Y^1)$: labeled sample.
- $SC_j$: score function.
- $p_\theta(Y=C_j|X)$: prediction probability.
- $\mathcal{L}$: loss function.
- $P^\rho_{w,\lambda,b}$: Poisson neuron; $P^\rho_{w,\lambda,b}(x)=\rho\left(\lambda\frac{|w|^2-|x|^2}{|x-w|^2}+b\right)$.
- $PE$: Poincaré embedding.

Conventional symbols (in most cases):
- $n,m,i$: integers.
- $x,y,w$: points in $\mathbb{R}^n$ or $\mathbb{H}^n$, or real numbers.
- $o$: the origin of $\mathbb{R}^n$ or $\mathbb{H}^n$.
- $b,c,d,\alpha,\delta$: real numbers.
- $\lambda$: a real or complex number.
- $t$: a real number, representing the timestamp in optimization.
- $\omega$: a point in $S^{n-1}$.
- $\rho$: an activation function.
- $f,g$: functions.
- $K$: a compact set.
- $X$: a manifold.
- $p$: a point in $\mathbb{H}^n$ or on a manifold.
- $a$: an element of $T_p(\mathbb{H}^n)$.
- $\xi$: a horocycle.
- $\mu$: a measure.
- $L$: a family of geodesic lines.
- $l$: a geodesic line.
- $U$: a set in $\mathbb{H}^n$.
- $F,h,H$: functions.
- $M$: number of classes.
- $D$: dimension.

A.2 PROOF OF THE ISOMETRY

Given $\omega\in S^{n-1}$ and $\lambda\in\mathbb{R}$, we let $\xi_{\lambda,\omega}$ be the unique horocycle that connects $\omega$ and $\tanh(\lambda/2)\cdot\omega$. The length of any geodesic (ending at $\omega$) line segment cut by $\xi_{\lambda_1,\omega}$ and $\xi_{\lambda_2,\omega}$ equals $|\lambda_1-\lambda_2|$. This fact is obvious in the half-space model. There is a Riemannian isometry $F:\{z\in\mathbb{R}^n:|z|<1\}\to\{(x_1,\dots,x_n):x_1>0\}$ (the latter with the metric $ds^2=\frac{dx_1^2+\cdots+dx_n^2}{x_1^2}$) such that $F(\omega)=\infty$ and $F(o)=(1,0,\dots,0)$. Using $d_{\mathbb{H}^n}(o,\tanh(\lambda_i/2)\omega)=|\lambda_i|$, $d_{\{(x_1,\dots,x_n):x_1>0\}}((1,0,\dots,0),(e^{\pm\lambda_i},0,\dots,0))=|\lambda_i|$, $F(\omega)=\infty$ and $F(o)=(1,0,\dots,0)$, we have $F(\tanh(\lambda_i/2)\omega)=(e^{\lambda_i},0,\dots,0)$. Therefore, $F$ maps $\xi_{\lambda_i,\omega}$ to $\{(x_1,x_2,\dots,x_n):x_1=e^{\lambda_i}\}$. Any geodesic (ending at $\omega$) line segment cut by $\xi_{\lambda_1,\omega}$ and $\xi_{\lambda_2,\omega}$ is mapped by $F$ to $\{(t,\alpha_2,\dots,\alpha_n):(t-e^{\lambda_1})(t-e^{\lambda_2})<0\}$ for some fixed $\alpha_j$. It is easy to check that the length of this segment with respect to $\frac{dx_1^2+\cdots+dx_n^2}{x_1^2}$ (as the $\alpha_i$ are constants, the metric reduces to $dx_1^2/x_1^2$ on this segment) is $|\lambda_1-\lambda_2|$.

A.3 PROOF OF (6)

Because $x$ is on $\xi_{\lambda,\omega}$, which is a sphere with center $\frac{1+\tanh\lambda/2}{2}\omega$ and radius $\frac{1-\tanh\lambda/2}{2}$, we have $\left|x-\frac{1+\tanh\lambda/2}{2}\omega\right|^2=\left|\frac{1-\tanh\lambda/2}{2}\right|^2$, which leads to $|x|^2-(1+\tanh\lambda/2)(x,\omega)_E+\tanh\lambda/2\,|\omega|^2=0$, and then $\frac{1+\tanh\lambda/2}{2}|x-\omega|^2=\frac{1-\tanh\lambda/2}{2}(|\omega|^2-|x|^2)$, and finally $\langle x,\omega\rangle_H=\frac12\log\frac{|\omega|^2-|x|^2}{|x-\omega|^2}=\frac12\log\frac{1+\tanh\lambda/2}{1-\tanh\lambda/2}=\lambda/2$.

A.4 ANOTHER PROOF OF THE INTEGRAL FORMULA (7)

We use $\mathbb{H}^n$ for the upper half-space model $\{(x_1,\dots,x_n):x_1>0\}$ with the Riemannian volume $\frac{dx_1\cdots dx_n}{x_1^n}$. Let $\omega=(\infty,0,\dots,0)$ and let $o$ be $(1,0,\dots,0)$ as in A.2; then $\xi_{\lambda,\omega}=\{(x_1,x_2,\dots,x_n):x_1=e^\lambda\}$. The induced Riemannian metric on $\xi_{\lambda,\omega}$ (respectively volume $d\mathrm{Vol}_{\xi_{\lambda,\omega}}$) is $\frac{dx_2^2+\cdots+dx_n^2}{e^{2\lambda}}$ (respectively $\frac{dx_2\cdots dx_n}{e^{(n-1)\lambda}}$). For any integrable function $f$ on $\mathbb{H}^n$, using the change of variable $x_1=e^\lambda$,

$$\int_{\mathbb{H}^n} f(x_1,\dots,x_n)\,\frac{dx_1\cdots dx_n}{x_1^n} = \int_\lambda\int_{(x_2,\dots,x_n)\in\mathbb{R}^{n-1}} f(e^\lambda,x_2,\dots,x_n)\,\frac{dx_2\cdots dx_n}{e^{n\lambda}}\,e^\lambda d\lambda = \int_\lambda\int_{(x_2,\dots,x_n)\in\mathbb{R}^{n-1}} f(e^\lambda,x_2,\dots,x_n)\,\frac{dx_2\cdots dx_n}{e^{(n-1)\lambda}}\,d\lambda = \int_\lambda\int_{\xi_{\lambda,\omega}} f(z)\,d\mathrm{Vol}_{\xi_{\lambda,\omega}}(z)\,d\lambda.$$

This identity is equivalent to the integral formula $\int_{\mathbb{H}^n} f(x)\,d\mathrm{Vol}(x)=\int_{\mathbb{R}}\left(\int_{\xi_{\lambda,\omega}} f(z)\,d\mathrm{Vol}_{\xi_{\lambda,\omega}}(z)\right)d\lambda$ presented in (7), by the Riemannian isometry in A.2.

A.5 THE HEURISTIC IS NOT A PROOF

The spectral theory does not directly lead to universal approximation theorems, for the following reasons: 1) the superpositions in (1, 2) and in (8, 9) are different (similarly, although another kind of superposition in Hilbert's 13th problem (Hilbert, 1935; Arnold, 2009) was a driving force for universal approximation theorems (Nielsen, 1987), the former is hardly relevant for networks (Girosi & Poggio, 1989)); 2) the desired representation properties of hyperbolic eigenfunctions are unknown, partially because $\mathbb{H}^n$ is non-compact; 3) results in spectral theory favor Hilbert spaces, while universal approximation theorems embrace more than the $L^2$ space.

A.6 OPTIMIZATION

The parameter update for the horocycle unit (2) involves optimization on the sphere (for $\omega$) and on the hyperbolic space (for $x$). We use a standard algorithm of sphere optimization (Absil et al., 2008) to update $\omega$, and in the supplement we present an optimization approach based on geodesic polar coordinates to update $x$. In the implementation of a horocycle layer, the forward propagation is trivial, while the backpropagation involves optimization on the sphere and the hyperbolic space. In the following, $\eta$ is the learning rate, $\alpha_t$ is the value of $\alpha$ ($\alpha$ may be $\eta,s,z,\omega,\dots$) at the $t$-th step, $T_pX$ is the tangent space at $p$, $\nabla$ is the gradient, and $\nabla_H$ is the hyperbolic gradient. It suffices to consider the layer $s=\langle z,\omega\rangle$.

Optimization on the sphere The parameter update of $\omega$ in $s=\langle z,\omega\rangle$ involves optimization on the sphere. The projection of $\frac{\partial L_\theta}{\partial s}\nabla s(\omega_t)=\frac{\partial L_\theta}{\partial s}\frac{z_t-\omega_t}{|z_t-\omega_t|^2}\in T_{\omega_t}\mathbb{R}^n$ onto $T_{\omega_t}S^{n-1}$ is given by (Absil et al., 2008)[p.48]

$$v_t = \frac{\partial L_\theta}{\partial s}\frac{z_t-\omega_t}{|z_t-\omega_t|^2} - \frac{\partial L_\theta}{\partial s}\left(\frac{z_t-\omega_t}{|z_t-\omega_t|^2},\,\omega_t\right)\omega_t = \frac{\partial L_\theta}{\partial s}\frac{z_t-(z_t,\omega_t)\omega_t}{|z_t-\omega_t|^2}.$$

Two well-known update rules for $\omega_t$ (Absil et al., 2008)[p.76] are

$$\omega_{t+1}=\cos(\eta_t|v_t|)\,\omega_t-\sin(\eta_t|v_t|)\,|v_t|^{-1}v_t;\qquad \omega_{t+1}=(\omega_t-\eta_t v_t)/|\omega_t-\eta_t v_t|.$$

A.7 A PROOF OF THE APOLLONIUS THEOREM

Theorem 3 (Apollonius). Given distinct $\omega_1,\omega_2\in S^{n-1}$ and a positive number $\lambda$, the locus $\{x:|x-\omega_1|=\lambda|x-\omega_2|\}$ is a sphere orthogonal to $S^{n-1}$.

Proof. If $\lambda=1$, the statement is trivial, so we now assume $\lambda\neq1$. Squaring $|x-\omega_1|=\lambda|x-\omega_2|$ and using $|\omega_1|=|\omega_2|=1$, we obtain

$$\left|x-\frac{\omega_1-\lambda^2\omega_2}{1-\lambda^2}\right|^2 = \frac{|\omega_1-\lambda^2\omega_2|^2}{(1-\lambda^2)^2}-1.$$

The locus is a sphere with center $O=\frac{\omega_1-\lambda^2\omega_2}{1-\lambda^2}$ and radius $R=\sqrt{\frac{|\omega_1-\lambda^2\omega_2|^2}{(1-\lambda^2)^2}-1}$. The theorem of Apollonius (in all dimensions) claims that this sphere is orthogonal to $S^{n-1}$. To prove this, it suffices to prove $|oO|^2=1+R^2$ (recall $o$ is the origin of $\mathbb{H}^n$), which follows from

$$\left|\frac{\omega_1-\lambda^2\omega_2}{1-\lambda^2}\right|^2 = \left(\sqrt{\frac{|\omega_1-\lambda^2\omega_2|^2}{(1-\lambda^2)^2}-1}\right)^2+1.$$

A.8 INVERSION

On $\mathbb{R}^n\cup\{\infty\}$, given the sphere $\{x:|x-w_0|=r\}$, the corresponding inversion is given by

$$Iv(x) = w_0+\frac{r^2(x-w_0)}{|x-w_0|^2}.$$

For $x\in\mathbb{R}^n\cup\{\infty\}$, $Iv(x)$ is called the inverse of $x$ with respect to $\{x:|x-w_0|=r\}$.

A.9 PROOF OF THEOREM 2

Theorem 2. Let $K$ be a compact set in $\mathbb{H}^n$, and let $1\le p<\infty$. Then finite sums of the form

$$F(x)=\sum_{i=1}^N\alpha_i\rho(\lambda_i\langle x,\omega_i\rangle_H+b_i),\qquad \omega_i\in S^{n-1},\ \alpha_i,\lambda_i,b_i\in\mathbb{R}$$

are dense in $L^p(K,\mu)$, where $\mu$ is either $d\mathrm{Vol}$ (5) or the induced Euclidean volume.

Proof. We first treat the case where $\rho$ is sigmoidal and $\mu=d\mathrm{Vol}$. Assume that these finite sums are not dense in $L^p(K,d\mathrm{Vol})$.
By the Hahn–Banach theorem, there exists some nonzero $h\in L^q(K,d\mathrm{Vol})$, where $q=p/(p-1)$ if $p>1$ and $q=\infty$ if $p=1$, such that $\int_K F(x)h(x)\,d\mathrm{Vol}(x)=0$ for all finite sums of the form (14). As $K$ is a compact set, by Hölder's inequality, $\int_K|h(x)|\,d\mathrm{Vol}\le\left(\int_K d\mathrm{Vol}\right)^{1/p}\|h\|_{L^q(K,d\mathrm{Vol})}$, which leads to $h\in L^1(K,d\mathrm{Vol})$. Extend $h$ to a function $H$ defined on $\mathbb{H}^n$ by setting $H(x)=h(x)$ if $x\in K$ and $H(x)=0$ if $x\in\mathbb{H}^n\setminus K$. Then $H\in L^1(\mathbb{H}^n,d\mathrm{Vol})\cap L^q(\mathbb{H}^n,d\mathrm{Vol})$ and

$$\int_{\mathbb{H}^n}F(x)H(x)\,d\mathrm{Vol}(x)=0 \tag{15}$$

for all finite sums of the form (14). For any $\omega\in S^{n-1}$ and $\lambda,b\in\mathbb{R}$, we set $F_{\omega,\lambda,b}(x)=\rho(\lambda(\langle x,\omega\rangle_H-b))$. These functions are uniformly bounded, as $|F_{\omega,\lambda,b}(x)|\le1$. Moreover,

$$\lim_{\lambda\to\infty}F_{\omega,\lambda,b}(x)=\begin{cases}1 & \text{if }\langle x,\omega\rangle_H>b,\\ 0 & \text{if }\langle x,\omega\rangle_H<b.\end{cases} \tag{16}$$

According to (15), for all $\omega,\lambda,b$, we have $\int_{\mathbb{H}^n}F_{\omega,\lambda,b}(x)H(x)\,d\mathrm{Vol}(x)=0$. The functions $\{F_{\omega,\lambda,b}\}_{\lambda\in\mathbb{R}}$ converge pointwise as $\lambda\to\infty$, and the integrands are uniformly dominated by $|H|\in L^1(\mathbb{H}^n,d\mathrm{Vol})$. By the bounded convergence theorem, for all $\omega\in S^{n-1}$ and $b\in\mathbb{R}$, we have

$$\int_{\{x:\langle x,\omega\rangle_H>b\}}H(x)\,d\mathrm{Vol}(x)=0. \tag{17}$$

By the integral formula (7) (with the notation defined there), (6) and (17), for all $b\in\mathbb{R}$,

$$\int_{2b}^\infty\left(\int_{\xi_{t,\omega}}H(z)\,d\mathrm{Vol}_{\xi_{t,\omega}}(z)\right)dt=0. \tag{18}$$

Taking the derivative of $\int_{2b}^\infty\left(\int_{\xi_{t,\omega}}H(z)\,d\mathrm{Vol}_{\xi_{t,\omega}}(z)\right)dt$ with respect to $b$, we deduce from (18) that $\int_{\xi_{2b,\omega}}H(z)\,d\mathrm{Vol}_{\xi_{2b,\omega}}(z)=0$ for a.e. $b\in\mathbb{R}$. In other words, the integral of $H$ over a.e. $\xi\in\Xi_\omega$ is zero. This fact is valid for all $\omega\in S^{n-1}$. Therefore, the integral of $H$ over a.e. $\xi\in\Xi$ is zero. By the injectivity Theorem 1, $H=0$ a.e., which contradicts our assumption. Therefore, finite sums of the form (14) are dense in $L^p(K,d\mathrm{Vol})$.

The case where $\rho$ is ReLU, ELU or Softplus and $\mu=d\mathrm{Vol}$ follows from the above case and the fact that $x\mapsto\rho(x+1)-\rho(x)$ is sigmoidal. The case where $\mu$ is the Euclidean volume follows from the previous cases and the fact that the Euclidean volume on compact $K$ is bounded from above by $\lambda\,d\mathrm{Vol}$ for some constant $\lambda$.

A.10 UNIVERSAL APPROXIMATION THEOREM FOR POISSON NEURONS

In this section, $\rho$ is a continuous sigmoidal function (Cybenko, 1989), ReLU (Nair & Hinton, 2010), ELU (Clevert et al., 2016), or Softplus (Dugas et al., 2001). We also recall the Poisson neuron:

$$P^\rho_{w,\lambda,b}(x)=\rho\left(\lambda\frac{|w|^2-|x|^2}{|x-w|^2}+b\right),\qquad w\in\mathbb{R}^n,\ \lambda,b\in\mathbb{R}.$$

Theorem 4. Let $K$ be a compact set in $\mathbb{H}^n$, and let $1\le p<\infty$. Then finite sums of the form

$$F(x)=\sum_{i=1}^N\alpha_iP^\rho_{\omega_i,\lambda_i,b_i}(x),\qquad \omega_i\in S^{n-1},\ \alpha_i,\lambda_i,b_i\in\mathbb{R} \tag{19}$$

are dense in $L^p(K,\mu)$, where $\mu$ is either $d\mathrm{Vol}$ (5) or the induced Euclidean volume.

Proof. We first treat the case where $\rho$ is sigmoidal and $\mu=d\mathrm{Vol}$. Assume that these finite sums are not dense in $L^p(K,d\mathrm{Vol})$. By the Hahn–Banach theorem, there exists some nonzero $h\in L^q(K,d\mathrm{Vol})$, where $q=p/(p-1)$ if $p>1$ and $q=\infty$ if $p=1$, such that $\int_K F(x)h(x)\,d\mathrm{Vol}(x)=0$ for all finite sums of the form (19). As $K$ is a compact set, by Hölder's inequality, $\int_K|h(x)|\,d\mathrm{Vol}\le\left(\int_K d\mathrm{Vol}\right)^{1/p}\|h\|_{L^q(K,d\mathrm{Vol})}$, which leads to $h\in L^1(K,d\mathrm{Vol})$. Extend $h$ to a function $H$ defined on $\mathbb{H}^n$ by setting $H(x)=h(x)$ if $x\in K$ and $H(x)=0$ if $x\in\mathbb{H}^n\setminus K$. Then $H\in L^1(\mathbb{H}^n,d\mathrm{Vol})\cap L^q(\mathbb{H}^n,d\mathrm{Vol})$ and

$$\int_{\mathbb{H}^n}F(x)H(x)\,d\mathrm{Vol}(x)=0 \tag{20}$$

for all finite sums of the form (19). For any $\omega\in S^{n-1}$, $\lambda\in\mathbb{R}$, and $b>0$, we set

$$F_{\omega,\lambda,b}(x)=P^\rho_{\omega,\lambda,-\lambda b}(x)=\rho\left(\lambda\left(\frac{1-|x|^2}{|x-\omega|^2}-b\right)\right).$$

These functions are uniformly bounded, as $|F_{\omega,\lambda,b}(x)|\le1$. Moreover,

$$\lim_{\lambda\to\infty}F_{\omega,\lambda,b}(x)=\begin{cases}1 & \text{if }\frac{1-|x|^2}{|x-\omega|^2}>b,\\ 0 & \text{if }\frac{1-|x|^2}{|x-\omega|^2}<b.\end{cases} \tag{21}$$

According to (20), for all $\omega,\lambda,b$, we have $\int_{\mathbb{H}^n}F_{\omega,\lambda,b}(x)H(x)\,d\mathrm{Vol}(x)=0$. The functions $\{F_{\omega,\lambda,b}\}_{\lambda\in\mathbb{R}}$ converge pointwise as $\lambda\to\infty$, and the integrands are uniformly dominated by $|H|\in L^1(\mathbb{H}^n,d\mathrm{Vol})$.
By the bounded convergence theorem, for all $\omega\in S^{n-1}$ and $b>0$, we have

$$\int_{\{x:\langle x,\omega\rangle_H>(\log b)/2\}}H(x)\,d\mathrm{Vol}(x)=\int_{\left\{x:\frac{1-|x|^2}{|x-\omega|^2}>b\right\}}H(x)\,d\mathrm{Vol}(x)=0. \tag{22}$$

By the integral formula (7) (with the notation defined there), (6) and (22), for all $b>0$,

$$\int_{\log b}^\infty\left(\int_{\xi_{t,\omega}}H(z)\,d\mathrm{Vol}_{\xi_{t,\omega}}(z)\right)dt=0. \tag{23}$$

Taking the derivative of $\int_{\log b}^\infty\left(\int_{\xi_{t,\omega}}H(z)\,d\mathrm{Vol}_{\xi_{t,\omega}}(z)\right)dt$ with respect to $b$, we deduce from (23) that $\int_{\xi_{\log b,\omega}}H(z)\,d\mathrm{Vol}_{\xi_{\log b,\omega}}(z)=0$ for a.e. $b>0$. In other words, the integral of $H$ over a.e. $\xi\in\Xi_\omega$ is zero. This fact is valid for all $\omega\in S^{n-1}$. Therefore, the integral of $H$ over a.e. $\xi\in\Xi$ is zero. By the injectivity Theorem 1, $H=0$ a.e., which contradicts our assumption. Therefore, finite sums of the form (19) are dense in $L^p(K,d\mathrm{Vol})$.

The case where $\rho$ is ReLU, ELU or Softplus and $\mu=d\mathrm{Vol}$ follows from the above case and the fact that $x\mapsto\rho(x+1)-\rho(x)$ is sigmoidal. The case where $\mu$ is the Euclidean volume follows from the previous cases and the fact that the Euclidean volume on compact $K$ is bounded from above by $\lambda\,d\mathrm{Vol}$ for some constant $\lambda$.

We refer the reader to the differences between (16) and (21), (17) and (22), and (18) and (23); apart from these, the proofs are essentially the same. The key points are the integral formula (7), the injectivity Theorem 1, and the fact that level sets of horocycle/Poisson neurons are horocycles. Moreover, as a corollary of Theorem 4, we have

Corollary 2. Let $K$ be a compact set in $\mathbb{R}^n$, and let $1\le p<\infty$. Then finite sums of the form

$$F(x)=\sum_{i=1}^N\alpha_iP^\rho_{w_i,\lambda_i,b_i}(x),\qquad w_i\in\mathbb{R}^n,\ \alpha_i,\lambda_i,b_i\in\mathbb{R}$$

are dense in $L^p(K,\mu)$, where $\mu$ is the Euclidean volume.

Proof. Because $K$ is compact, there exists a positive number $R$ such that $K\subset\{x\in\mathbb{R}^n:|x|<R\}$. By the above theorem, finite sums of the form $F(x)=\sum_{i=1}^N\alpha_iP^\rho_{w_i,\lambda_i,b_i}(x)$ with $w_i\in S^{n-1}$ and $\alpha_i,\lambda_i,b_i\in\mathbb{R}$ are dense in $L^p(K/R,\mu)$. The corollary then follows from $P^\rho_{w,\lambda,b}(x)=P^\rho_{w/R,\lambda,b}(x/R)$.

A.11 PROOF OF LEMMA 1

Recall

$$f^1_{a,p}(x)=\frac{2|a|}{1-|p|^2}\sinh^{-1}\left(\frac{2(-p\oplus x,a)_E}{(1-|-p\oplus x|^2)|a|}\right). \tag{24}$$

The proof of Lemma 1 follows from the following direct computation.

Proof. Let $t\in(0,1)$. Take $p_t=t\omega$ and $a_t=-\omega$; then we have

$$-p_t\oplus x = \frac{-t(1-2t(\omega,x)_E+|x|^2)\omega+(1-t^2)x}{1-2t(\omega,x)_E+t^2|x|^2}.$$

Let $F_t(x)=\frac{2(-p_t\oplus x,\,a_t)_E}{(1-|-p_t\oplus x|^2)|a_t|}$. Then

$$F_t(x) = \frac{2\,\frac{t(1-2t(\omega,x)_E+|x|^2)-(1-t^2)(x,\omega)_E}{1-2t(\omega,x)_E+t^2|x|^2}}{1-\frac{|-t(1-2t(\omega,x)_E+|x|^2)\omega+(1-t^2)x|^2}{(1-2t(\omega,x)_E+t^2|x|^2)^2}} = \frac{2t(1-2t(\omega,x)_E+t^2|x|^2)(1-2t(\omega,x)_E+|x|^2)-2(1-t^2)(1-2t(\omega,x)_E+t^2|x|^2)(x,\omega)_E}{(1-2t(\omega,x)_E+t^2|x|^2)^2-|-t(1-2t(\omega,x)_E+|x|^2)\omega+(1-t^2)x|^2} = A_t(x)/B_t(x),$$

where $A_t$ and $B_t$ are defined as the corresponding numerator and denominator. We have

$$A_t(x)|_{t=1}=2|x-\omega|^4,\qquad B_t(x)|_{t=1}=0,\qquad \partial B_t(x)/\partial t|_{t=1}=2|x-\omega|^2(|x|^2-1).$$

Let $G_t(x)=\sinh^{-1}(F_t(x))+\log\frac{1-t}{1+t}$. Then

$$G_t(x)=\log\left(\frac{A_t(x)}{B_t(x)}+\sqrt{1+\frac{A_t^2(x)}{B_t^2(x)}}\right)+\log\frac{1-t}{1+t} = \log\left(\frac{(1-t)A_t}{(1+t)B_t}+\sqrt{\frac{(1-t)^2}{(1+t)^2}+\frac{(1-t)^2A_t^2(x)}{(1+t)^2B_t^2(x)}}\right).$$

By L'Hôpital's rule,

$$\lim_{t<1,\,t\to1}\frac{(1-t)A_t(x)}{(1+t)B_t(x)} = \frac{-A_t(x)+(1-t)A_t'(x)}{B_t(x)+(1+t)B_t'(x)}\Big|_{t=1} = \frac{|x-\omega|^2}{2-2|x|^2}.$$

Therefore,

$$\lim_{t<1,\,t\to1}G_t(x)=\log\left(\frac{|x-\omega|^2}{1-|x|^2}\right).$$

For $t<1$, we take $p_t=t\omega$, $a_t=-\omega$, $c_t=\frac{t^2-1}{4}$, $d_t=\frac12\log\frac{1+t}{1-t}$; then for all $x\in K$,

$$\lim_{t<1,\,t\to1}c_tf^1_{a_t,p_t}(x)+d_t = \lim_{t<1,\,t\to1}-\frac12G_t(x) = \frac12\log\left(\frac{1-|x|^2}{|x-\omega|^2}\right) = \langle x,\omega\rangle_H.$$

If there exist $c_1,c_2$ such that $|c_tf^1_{a_t,p_t}(x)+d_t|\,(=|G_t(x)|/2)\le c_2$ for all $t\in(c_1,1)$ and $x\in K$, then by the dominated convergence theorem there exists $t$ such that $\|c_tf^1_{a_t,p_t}+d_t-\langle\cdot,\omega\rangle_H\|_{L^p(K,\mu)}<\epsilon$, which proves the lemma.
Note that

$$\frac{(1-t)A_t(x)}{(1+t)B_t(x)} = \frac{2|x-\omega|^4(1-t)+\sum_{j=1}^4U_j(x,\omega)(1-t)^{j+1}}{-2|x-\omega|^2(|x|^2-1)(1-t)(1+t)+\sum_{l=2}^4L_l(x,\omega)(1-t)^l(1+t)} = \frac{2|x-\omega|^4+\sum_{j=1}^4U_j(x,\omega)(1-t)^j}{2|x-\omega|^2(1-|x|^2)(1+t)+\sum_{l=2}^4L_l(x,\omega)(1-t)^{l-1}(1+t)},$$

where the $U_j$ and $L_l$ are continuous functions defined on $K\times\{\omega\}$. There exist positive numbers $c_3,c_4$ and $c_1\in(0,1)$ such that for all $x\in K$ and $t\in(c_1,1)$,

$$c_3\le2|x-\omega|^4\le c_4,\qquad c_3\le2|x-\omega|^2(1-|x|^2)(1+t)\le c_4,$$
$$\frac{c_3}{2}\ge\Big|\sum_{j=1}^4U_j(x,\omega)(1-t)^j\Big|,\qquad \frac{c_3}{2}\ge\Big|\sum_{l=2}^4L_l(x,\omega)(1-t)^{l-1}(1+t)\Big|.$$

Therefore, for $x\in K$ and $t\in(c_1,1)$, we have

$$\frac{c_3}{2c_4+c_3}\le\frac{(1-t)A_t(x)}{(1+t)B_t(x)}\le\frac{2c_4+c_3}{c_3}.$$

This implies that for $t\in(c_1,1)$, $G_t|_K$, and therefore $|c_tf^1_{a_t,p_t}+d_t|\,|_K$, are uniformly bounded, which finishes the proof of the lemma.

A.12 THE FIRST MNIST CLASSIFIER IN 6.1

At the preprocessing stage, we compute the projection of the $28\times28$ input pattern onto the 40 principal components and then scale the result so that the scaled 40-dimensional PCA features lie within the unit ball. Our network is:

1. Input layer: scaled 40-dimensional PCA features;
2. First layer: 40 inputs/1000 outputs horocycle layer (tanh activation);
3. Last layer: 1000 inputs/10 outputs affine layer;
4. Loss: cross-entropy loss.

We take learning rate = 1, learning rate decay = 0.999, and batch size = 128, and run the training three times. The average test error rate after 600 epochs is 1.96%. The PCA follows LeCun et al. (1998)(C.3), where 40-dimensional PCA is used for the quadratic network. The quadratic network has a structure similar to ours, because our neurons are constructed as a quotient of quadratic functions followed by a log.

A.13 HOROCYCLE LAYER FOLLOWED BY MLR CAN APPROXIMATE THE CLASSIFICATION FUNCTION

Suppose the MNIST classification function $\mathcal{M}$ is defined on $\cup_{j=0}^9K_j\subset\mathbb{H}^{40}$, where the $K_j$ are relatively compact and $\mathcal{M}|_{K_j}=j$. By Theorem 2, for $0\le j\le9$, there exist $F_j(x)=\sum_{i=1}^{N_j}\alpha_{j,i}\rho(\lambda_{j,i}\langle x,\omega_{j,i}\rangle_H+b_{j,i})$ such that $F_j$ approximates $I_{K_j}$, where $I$ is the indicator function. Therefore, a network whose first (horocycle) layer is given by $\rho(\lambda_{j,i}\langle x,\omega_{j,i}\rangle_H+b_{j,i})$ ($0\le j\le9$, $1\le i\le N_j$), followed by a classical MLR with parameters given by the $\alpha_{j,i}$ ($0\le j\le9$, $1\le i\le N_j$) (with $\arg\max$ for prediction), approximates $\mathcal{M}$.

A.14 THE SECOND MNIST CLASSIFIER IN 6.1

At the preprocessing stage, we perform data augmentation by shifting each image one step toward each of its 4 corners, so that our training set has 300000 examples. Our network is:

1. Input layer: (28, 28, 1);
2. First block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
3. Second block: 64-filter 3×3 convolution, ReLU, BatchNorm;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
5. Fourth block: 128-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
6. Fifth block: FC 1000, ReLU, BatchNorm;
7. Last block: 1000 inputs/10 outputs Poisson layer, sigmoid, BatchNorm;
8. Loss: cross-entropy loss.

In optimization, we use Adam (Kingma & Ba, 2015). The batch size is 128 in the first 5 epochs and 1024 in the next 15 epochs. After 5 epochs, we set the $\omega_i$ in the Poisson layer to be non-trainable. We train our network five times; the average test error rate after 20 epochs is 0.35%. The $\epsilon$ in $\frac{|w|^2-|x|^2}{|x-w|^2+\epsilon}$ is an important hyperparameter for numerical stability. We trained this MNIST model with $\epsilon\in\{10^{-1},10^{-2},10^{-4},10^{-6},10^{-8},10^{-10},10^{-20}\}$; all runs show robust performance.
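A minimal Keras sketch of such a Poisson layer (before the sigmoid and BatchNorm of the last block) is given below; it is our illustrative reading of A.14 and the Poisson-neuron definition, not the authors' released implementation.

```python
import tensorflow as tf

class PoissonLayer(tf.keras.layers.Layer):
    """Sketch of a Poisson layer: unit j computes
    lam_j * (|w_j|^2 - |x|^2) / (|x - w_j|^2 + eps) + b_j."""
    def __init__(self, units, eps=1e-6, **kwargs):
        super().__init__(**kwargs)
        self.units, self.eps = units, eps

    def build(self, input_shape):
        d = int(input_shape[-1])
        self.w = self.add_weight(name="w", shape=(self.units, d),
                                 initializer="glorot_uniform")
        self.lam = self.add_weight(name="lam", shape=(self.units,),
                                   initializer="ones")
        self.b = self.add_weight(name="b", shape=(self.units,),
                                 initializer="zeros")

    def call(self, x):
        sq_w = tf.reduce_sum(self.w ** 2, axis=-1)             # (units,)
        sq_x = tf.reduce_sum(x ** 2, axis=-1, keepdims=True)   # (batch, 1)
        dist = tf.reduce_sum((x[:, None, :] - self.w[None, :, :]) ** 2,
                             axis=-1)                          # (batch, units)
        return self.lam * (sq_w - sq_x) / (dist + self.eps) + self.b

# Usage sketch, following the last block of A.14:
# outputs = tf.keras.layers.BatchNormalization()(tf.sigmoid(PoissonLayer(10)(features)))
```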
A.15 EXPERIMENT OF THE POINCARÉ TREE CLASSIFICATION TASK

Given a Poincaré embedding (Nickel & Kiela, 2017) $PE:\{\text{WordNet noun}\}\to\mathbb{H}^D$ of the 82114 WordNet noun nodes and given a node $x$, the task is to classify all other nodes as being part of the subtree rooted at $x$ or not (Ganea et al., 2018a). Our model is a logistic regression, where the horocycle feature $p\in\{\text{WordNet noun}\}\mapsto h_{PE(x)}(PE(p)/s)$ ($s$ is a hyperparameter in $[1,1.5]$) is the only predictor, and the dependent variable is whether $p$ is in the subtree rooted at $x$. Let $P$ be the set of all nodes in the Poincaré embedding, and let $p$ range over $P$.

1. Input: $h_{PE(x)}(PE(p)/s)$ ($s$ is a hyperparameter);
2. Only layer: 1 input/1 output affine layer (two parameters: one for the input, one for the bias);
3. Loss: logistic (with target 1 if $p$ is in the tree rooted at $x$, and 0 otherwise).

In each training run, $x$ is one of {animal, group, location, mammal, worker}, the dimension is one of {2, 3, 5, 10}, and the Poincaré embeddings are produced by the animation_train.py of Ganea et al. (2018b)⁴ (with tree=wordnet_full, model=poincare, dim=dim, and seed randomly chosen in {7, 8, 9}). All nodes in the subtree rooted at $x$ are divided into training nodes (80%) and test nodes (20%). The same splitting procedure applies to the remaining nodes. We choose the $s$ with the best training F1 and then record the corresponding test F1. For each $x$ and dimension, we run the training 100 times. The average test F1 classification scores are recorded in Table 2.

⁴ https://github.com/dalab/hyperbolic_cones

The horocycle feature performs well here because it is compatible with the Poincaré embedding algorithm. Let $x$ be a node that is not at the origin. The Poincaré embedding algorithm appears to pull all nodes of the subtree rooted at $x$ towards the direction of $\frac{x}{|x|}$; therefore $y\mapsto\langle y,\frac{x}{|x|}\rangle_H$ is a suitable feature for this task.

A.16 END-BASED CLUSTERING IN $\mathbb{H}^2$

For MNIST, at the preprocessing stage we perform data augmentation by shifting each image one step toward each of its 4 corners, so that our training set has 300000 examples. Our network for the $\mathbb{H}^2$ embedding of the MNIST dataset is:

1. Input layer: (28, 28, 1);
2. First block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
3. Second block: 64-filter 3×3 convolution, ReLU, BatchNorm;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
5. Fourth block: 128-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
6. Fifth block: FC 1000, ReLU, BatchNorm;
7. Sixth block: FC 2, ReLU, BatchNorm, Exp;
8. Last block: 2 inputs/10 outputs horocycle layer, sigmoid;
9. Loss: cross-entropy loss,

where Exp is the exponential map $T_o\mathbb{H}^2(=\mathbb{R}^2)\to\mathbb{H}^2$. We apply the data augmentation as in A.14. In optimization, the learning rate is 0.1, the learning rate decay is 0.99, the batch size is 128, and the number of epochs is 50. Our network, data augmentation, and optimization for the $\mathbb{H}^2$ embedding of the Fashion-MNIST dataset are exactly the same as those for MNIST.

For MNIST and Fashion-MNIST we use sphere optimization. We would like to remark that there are interesting new features in sphere optimization. Because $S^1$ is compact, for any continuous function $f$ there exists $x=\arg\max_{S^1}f$. The derivative of $f$ at $x$ vanishes, so the usual optimization algorithms for finding the minimum will fail in the general case. In our experiments, we solve this problem by adding the following tricks:
1. Observation: if the examples of class $C_\alpha$ are all close to some $\omega\in S^1$, and the end prototype $\omega_\alpha$ for class $C_\alpha$ is around $-\omega$, then $\omega_\alpha$ is a maximum point of the loss function and therefore cannot be improved through normal SGD. We solve this problem by adopting an idea (a supervised variation) of k-means clustering. In each of the early epochs, optimization consists of two parts: in the first part, normal SGD applies; in the second part, we move the end prototypes ($\omega_i$) to the average direction of their class (using the training data).

2. Observation: if the classes $C_\alpha$ and $C_\beta$ are both close to some $\omega\in S^1$, and the end prototypes $\omega_\alpha,\omega_\beta$ are also both around $\omega$, then all points in classes $C_\alpha$ and $C_\beta$, together with the end prototypes $\omega_\alpha,\omega_\beta$, will be pulled towards $\omega$ by SGD, and in the end the network cannot distinguish class $C_\alpha$ from class $C_\beta$. We solve this problem by adding a loss term when two prototypes are close.

With these small tricks, our 2D end-based clustering algorithm is very stable for MNIST and Fashion-MNIST. We ran it on MNIST 10 times, and all runs reach a test accuracy around 99% within 20 epochs. Suppose the classification task has $M$ classes and the prototype of the $i$-th class is $\omega_i$. The additional loss function for the second observation is as follows:

$$i = \mathrm{RandomChoice}(\{1,\dots,M\}),\qquad j = \mathrm{RandomChoice}(\{1,\dots,M\}\setminus\{i\}),\qquad d = (\omega_i,\omega_j)_E,$$
$$\mathcal{L}_{\mathrm{Observation2}} = \mathrm{arctanh}(10\times\mathrm{ReLU}(d-0.9-\epsilon)),$$

where $\epsilon$ is a small constant for numerical stability.

For CIFAR-10, our network for the $\mathbb{H}^2$ embedding of the CIFAR-10 dataset is:

1. Input layer: (32, 32, 3);
2. First block: ResNet-32 / 128 outputs;
3. Second block: FC 2, ReLU, BatchNorm, Exp;
4. Last block: 2 inputs/10 outputs horocycle layer;
5. Loss: cross-entropy loss.

In the data augmentation, we apply horizontal/vertical shifts and horizontal flips. We use Adam. The batch size is 32 in the first 100 epochs and 1024 in the next 50 epochs. The weights of the horocycle layer are fixed at the beginning of the training and are non-trainable, which follows an idea of Mettes et al. (2019).

A.17 POISSON MLR

For CIFAR-10, we use a ResNet-32 structure as the feature descriptor, and we apply horizontal/vertical shifts and horizontal flips. Our network is:

1. Input layer: (32, 32, 3);
2. First block: ResNet-32 / 128 outputs;
3. Second block: FC 128, ReLU, BatchNorm;
4. Last block: 128 inputs/10 outputs Poisson layer, BatchNorm;
5. Loss: cross-entropy loss.

We use Adam. The batch size is 32 in the first 80 epochs and 1024 in the next 20 epochs. The test accuracy is greater than 93.5%.

For the flower classification task (Tensorflow), the dataset of 3670 photos of flowers contains 5 classes: daisy, dandelion, roses, sunflowers, and tulips. The Keras model is:

1. Input layer: (180, 180, 3);
2. First block: 16-filter 3×3 convolution, ReLU, 2×2 max-pooling;
3. Second block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling;
5. Fourth block: FC 128, ReLU;
6. Last block: 128 inputs/10 outputs FC layer;
7. Loss: cross-entropy loss.

Our Poisson model is:

1. Input layer: (180, 180, 3);
2. First block: 16-filter 3×3 convolution, ReLU, 2×2 max-pooling;
3. Second block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling;
5. Fourth block: FC 128, ReLU;
6. Last block: BatchNorm, 128 inputs/10 outputs Poisson layer, sigmoid, BatchNorm;
7. Loss: cross-entropy loss.

We use 2936 photos for training and
1. What is the main contribution of the paper regarding linear models on hyperbolic spaces?
2. What are the strengths and weaknesses of the proposed approach, particularly in its connection to geometric tools and synthetic experiments?
3. How does the reviewer assess the novelty and motivation of the work, especially in real-world applications?
4. What are the questions raised by the reviewer regarding the paper's content, such as the choice of layers, dimension, and inner product?
5. Do you have any suggestions for improving the paper, such as providing better motivation or increasing the expressiveness of the output?
Review
Review This paper develops an MLR based on hyperbolic geometry. The idea is based on the well-known concepts of horocycles and horospheres, which are known to be the hyperbolic counterparts of lines and planes in Euclidean geometry (see Coxeter). The authors then show universal approximation, which more or less follows from the Euclidean counterpart. In fact, we can probably conjecture that this universal approximation holds for any manifold with constant sectional curvature.

Strength: To the best of my knowledge, this is the first paper to deal with linear models on hyperbolic spaces by borrowing geometric tools like horocycles.

Major weakness: The ideas are borrowed from well-known geometric tools; this by itself is not a weakness, but the theorems closely follow their Euclidean counterparts, which essentially reduces the "novelty" of the paper. Moreover, the experiments are "synthetic"; there is no motivation to use such a construction in a real experiment. It would be good to see the authors discuss in which real cases we need such a hyperbolic MLR.

The work should be better motivated; for example, what is the motivation for using the horocycle layer and the Poisson neuron layer?

In section 6.2, the 2D output after 4 convolutional layers seems not very expressive; why not increase the dimension? Also, what is the motivation for mapping it to H^2?

In Theorem 2, eq. 9, why is the inner product Euclidean instead of hyperbolic?

The universal approximation theorem in Theorem 2 almost follows from the Euclidean counterpart, e.g., see https://cbmm.mit.edu/sites/default/files/publications/CBMM-Memo-054.pdf

What is the additional consequence of Corollary 1, other than showing that we can approximate any function, similar to Theorem 2?

The statement in section 6.3 claiming "it is the best hyperbolic geometry related MNIST classifier" does not carry much weight; e.g., what is the motivation for using MNIST images for an MLR based on hyperbolic geometry?

There is not much point to section 6.4. In most practical cases, a 1-dimensional reduction is not meaningful, as it cannot carry much information.

Section 6.5 seems very rushed, including Fig. 8 and the flowers experiment. This section seems more like a placeholder.
ICLR
Title Laplacian Eigenspaces, Horocycles and Neuron Models on Hyperbolic Spaces

Abstract We use the hyperbolic Poisson kernel to construct the horocycle neuron model on hyperbolic spaces, which is a spectral generalization of the classical neuron model. We prove a universal approximation theorem for horocycle neurons. As a corollary, we obtain a state-of-the-art result on the expressivity of $f^1_{a,p}$, which is used in the hyperbolic multiple linear regression. Our experiments achieve state-of-the-art results on the Poincaré-embedding subtree classification task and on the classification accuracy of two-dimensional visualizations of images.

1 INTRODUCTION

Conventional deep network techniques use architectures based on compositions of simple functions to learn representations of Euclidean data (LeCun et al., 2015). They have achieved remarkable success in a wide range of applications (Hinton et al., 2012; He et al., 2016). Geometric deep learning, a niche field that has caught the attention of many authors, attempts to generalize conventional learning techniques to non-Euclidean spaces (Bronstein et al., 2017; Monti et al., 2017). There has been growing interest in using hyperbolic spaces in machine learning tasks because they are well suited for representing tree-like data (Ontrup & Ritter, 2005; Alanis-Lobato et al., 2016; Nickel & Kiela, 2017; Chamberlain et al., 2018; Nickel & Kiela, 2018; Sala et al., 2018; Ganea et al., 2018b; Tifrea et al., 2019; Chami et al., 2019; Liu et al., 2019; Balazevic et al., 2019; Yu & Sa, 2019; Gulcehre et al., 2019; Law et al., 2019). Many authors have introduced hyperbolic analogs of classical learning tools (Ganea et al., 2018a; Cho et al., 2019; Nagano et al., 2019; Grattarola et al., 2019; Mathieu et al., 2019; Ovinnikov, 2020; Khrulkov et al., 2020; Shimizu et al., 2020).

Spectral methods are successful in machine learning, from nonlinear dimensionality reduction (Belkin & Partha, 2002) to clustering (Shi & Malik, 2000; Ng et al., 2002), hashing (Weiss et al., 2009), graph CNNs (Bruna et al., 2014), spherical CNNs (Cohen et al., 2018), and inference networks (Pfau et al., 2019). Spectral methods have been applied to learning tasks on spheres (Cohen et al., 2018) and graphs (Bruna et al., 2014), but not yet on hyperbolic spaces. This paper studies a spectral generalization of the FC (affine) layer on hyperbolic spaces.

Before presenting this spectral generalization, we introduce some notation. Let $(\cdot,\cdot)_E$ be the Euclidean inner product, $|\cdot|$ the Euclidean norm, and $\rho$ an activation function. The Poincaré ball model of the hyperbolic space $H^n$ ($n\ge 2$) is the manifold $\{x\in\mathbb{R}^n : |x|<1\}$ equipped with the Riemannian metric $ds^2_{H^n} = \sum_{i=1}^{n} 4(1-|x|^2)^{-2}\,dx_i^2$. The boundary of $H^n$ under its canonical embedding in $\mathbb{R}^n$ is the unit sphere $S^{n-1}$. The classical neuron $y=\rho((x,w)_E+b)$ has input $x\in\mathbb{R}^n$ and output $y\in\mathbb{R}$, with trainable parameters $w\in\mathbb{R}^n$, $b\in\mathbb{R}$. An affine layer $\mathbb{R}^n \to \mathbb{R}^m$ is a concatenation of $m$ neurons. An alternative representation of the neuron $x\mapsto\rho((x,w)_E+b)$ is given by¹

$$x\in\mathbb{R}^n \mapsto \rho(\lambda(x,\omega)_E+b), \quad \omega\in S^{n-1},\ \lambda, b\in\mathbb{R}. \tag{1}$$

This neuron is constant over any hyperplane perpendicular to a fixed direction $\omega$. In $H^n$, a horocycle is an $(n-1)$-dimensional sphere (with one point deleted) that is tangential to $S^{n-1}$. Horocycles are the hyperbolic counterparts of hyperplanes (Bonola, 2012). Horocyclic waves $\langle x, \omega\rangle_H := \frac{1}{2}\log\frac{1-|x|^2}{|x-\omega|^2}$ are constant over any horocycle tangential to $S^{n-1}$ at $\omega$. Therefore,

$$x\in H^n \mapsto \rho(\lambda\langle x, \omega\rangle_H+b), \quad \omega\in S^{n-1},\ \lambda, b\in\mathbb{R} \tag{2}$$

generalizes the classical neuron model (1), and a concatenation of finitely many neurons (2) generalizes the FC (affine) layer. We call (2) a horocycle neuron. Figure 1 (middle) is an example on $H^2$.

¹If $w \neq (0,\dots,0)$, one can take $\omega = w/|w|$ and $\lambda = |w|$; otherwise, one can take $\lambda = 0$ and any $\omega \in S^{n-1}$.

The neuron models (1, 2) are related to spectral theory because $(\cdot, \omega)_E$ (respectively $\langle\cdot, \omega\rangle_H$) are building blocks of the Euclidean (respectively hyperbolic) Laplacian eigenspaces. Moreover, many $L^2$ spaces have a basis given by Laplacian eigenfunctions (Einsiedler & Ward, 2017). On one side, all Euclidean (respectively hyperbolic) eigenfunctions are some kind of "superposition" of $(\cdot, \omega)_E$ (respectively $\langle\cdot, \omega\rangle_H$). On the other side, neural networks based on (1) (respectively (2)) represent functions that are another kind of "superposition" of $(\cdot, \omega)_E$ (respectively $\langle\cdot, \omega\rangle_H$). This heuristically explains why the universal approximation property is likely to hold for networks constructed from (1) and (2). Using the Hahn-Banach theorem, an injectivity theorem of Helgason, and an integral formula, we prove that finite sums of horocycle neurons (2) are universal approximators (Theorem 2).

Let $p \in H^n$, let $T_p(H^n)$ be the tangent space of $H^n$ at $p$, let $a \in T_p(H^n)$, and let $\oplus$ be the Möbius addition (Ungar, 2008). We remind the reader that the functions

$$f^1_{a,p}(x) = \frac{2|a|}{1-|p|^2}\,\sinh^{-1}\!\left(\frac{2(-p\oplus x,\, a)_E}{(1-|-p\oplus x|^2)\,|a|}\right) \tag{3}$$

are building blocks of many hyperbolic learning tools (Ganea et al., 2018a; Mathieu et al., 2019; Shimizu et al., 2020). Figure 1 illustrates examples of the different neuron models (1, 2, 3) on $H^2$. In Lemma 1, we present a close relationship between (2) and (3). Using this relationship and Theorem 2, we obtain a novel result on the expressivity of $f^1_{a,p}$ (Corollary 1).

This article contributes to hyperbolic learning. We are the first to apply spectral methods, such as horocycles, to hyperbolic deep learning. We prove results on the expressivity of horocycle neurons (2) and of $f^1_{a,p}$ (3). With horocycle neurons, we obtain state-of-the-art results on the Poincaré-embedding subtree classification task and on the classification accuracy of 2-D visualizations of images in the experiments.

2 RELATED WORK

Universal approximation There is a vast literature on universal approximation (Cybenko, 1989; Hornik et al., 1989; Funahashi, 1989; Leshno et al., 1993). Cybenko (1989)'s existential approach uses the Hahn-Banach theorem and the Fourier transform of Radon measures. To prove Theorem 2, we also use the Hahn-Banach theorem, together with an integral formula (7) and an injectivity theorem of Helgason (Theorem 1). Generalizing integral formulas and injectivity theorems is easier than generalizing the Fourier transform of Radon measures on most non-Euclidean spaces. Carroll & Dickinson (1989) use the inverse Radon transform to prove universal approximation theorems. This method relates to ours, as injectivity theorems are akin to inverse Radon transforms. However, using the injectivity theorem is an existential approach, while using the inverse Radon transform is a constructive one.

Spectral methods The spectral methods in Bronstein et al. (2017); Bruna et al. (2014); Cohen et al. (2018) use a basis of $L^2(X)$ given by eigenfunctions, where $X$ is a finite graph or the sphere. Because $L^2(H^n)$ has no basis of eigenfunctions, our approach is different from theirs.

Hyperbolic deep learning One part of hyperbolic learning concerns embedding data into hyperbolic space (Nickel & Kiela, 2017; Sala et al., 2018).
Another part concerns learning architectures that take hyperbolic data as input (Ganea et al. (2018a); Cho et al. (2019)). Ganea et al. (2018a) propose two ways to generalize the affine layer on hyperbolic spaces: one by replacing the linear and bias parts of an affine map with (25, 26) of their paper; another by using a concatenation of $f^1_{a,p}$ in their hyperbolic multiple linear regression (MLR). The latter seems more relevant to ours. A level set of $f^1_{a,p}$ is a hypercycle, which keeps the same distance to a chosen geodesic hypersurface, while a level set of a horocycle neuron is a horocycle, which keeps the same "spectral" distance to an ideal point at infinity. Based on functions similar to $f^1_{a,p}$, Mathieu et al. (2019) and Shimizu et al. (2020) build the gyroplane layer and the Poincaré FC layer. Ganea et al. (2018a); Cho et al. (2019) take geodesics as decision hyperplanes, while we (initially) take horocycles. We shall construct the horocycle multiple linear regression (MLR), whose decision hypersurfaces are geodesics. The geodesic decision hyperplanes of Ganea et al. (2018a); Cho et al. (2019) and the geodesic decision hypersurfaces here arise from different methods. Khrulkov et al. (2020) investigate hyperbolic image embedding, where the prototypes (or models) of each class are center-based. We study a different kind, and we shall call our prototypes end-based.

3 HYPERBOLIC SPACES

This section reviews facts from hyperbolic geometry that are used in the proof of Theorem 2. For the reader who is not interested in the proof, (4) suffices for the implementation.

Hyperbolic metric We use the Poincaré model. The hyperbolic space $H^n$ ($n\ge2$) is the manifold $\{x\in\mathbb{R}^n : |x|<1\}$ equipped with the Riemannian metric $ds^2 = \sum_{i=1}^n 4(1-|x|^2)^{-2}\,dx_i^2$. Let $o$ be the origin of $H^n$. The distance function $d_{H^n}$ satisfies $d_{H^n}(o, x)=2\,\mathrm{arctanh}\,|x|$.

Geodesics, horocycles and corresponding points Geodesics in $H^n$ are precisely the circular arcs that are orthogonal to $S^{n-1}$. Horocycles in $H^n$ are precisely the $(n-1)$-dimensional spheres that are tangential to $S^{n-1}$ (Helgason, 1970). Horocycles are the hyperbolic analogs of hyperplanes. Figure 2 illustrates geodesics and horocycles on $H^2$.

Hyperbolic Poisson kernel The Poisson kernel for $H^n$ is $P(x, \omega)= \left(\frac{1-|x|^2}{|x-\omega|^2}\right)^{n-1}$, where $x\in H^n$, $\omega\in S^{n-1}$ (Helgason (1970)[p.108]). The function $\langle\cdot, \omega\rangle_H$ defined by

$$\langle x, \omega\rangle_H = \frac{1}{2(n-1)}\log P(x, \omega) = \frac{1}{2}\log\frac{1-|x|^2}{|x-\omega|^2} \tag{4}$$

is constant over any horocycle tangential to $S^{n-1}$ at $\omega$ (Figure 1 (middle), (6)).

Riemannian volume The Riemannian volume induced by the metric $ds^2$ on $H^n$ is

$$dVol = 2^n(1-|x|^2)^{-n}\,dx_1\cdots dx_n. \tag{5}$$

Horocycles Let $\Xi$ be the set of horocycles of $H^n$, and let $\Xi_\omega$ be the set of all horocycles tangential to $S^{n-1}$ at $\omega$. Given $\lambda\in\mathbb{R}$, let $\xi_{\lambda,\omega}$ be the unique horocycle that connects $\omega$ and $\tanh(\lambda/2)\cdot\omega$. We have $\Xi_\omega = \cup_{\lambda\in\mathbb{R}}\{\xi_{\lambda,\omega}\}$ and $\Xi = \cup_{\omega\in S^{n-1}}\Xi_\omega$. The length of any geodesic (ending at $\omega$) line segment cut by $\xi_{\lambda_1,\omega}$ and $\xi_{\lambda_2,\omega}$ equals $|\lambda_1 - \lambda_2|$ (A.2). Therefore $|\lambda_1 - \lambda_2|$ is a natural distance function on $\Xi_\omega$, and the map $\lambda\mapsto\xi_{\lambda,\omega}$ is an isometry between $\mathbb{R}$ and $\Xi_\omega$. This isometry is closely related to $\langle\cdot, \omega\rangle_H$ (A.3): for any $x \in \xi_{\lambda,\omega}$,

$$\langle x, \omega\rangle_H = \lambda/2. \tag{6}$$

The /2 in (6) is a tradeoff arising from the fact that the metric here differs from that of Helgason (2000).

Integral formula For fixed $\omega \in S^{n-1}$, $H^n=\cup_{\lambda\in\mathbb{R}}\xi_{\lambda,\omega}$. Let $dVol_{\xi_{\lambda,\omega}}$ be the measure induced by $ds^2$ on $\xi_{\lambda,\omega}$. Let $L$ be a family of geodesics ending at $\omega$, let $\delta > 0$, and let $U=L \cap (\cup_{\lambda\le\alpha\le\lambda+\delta}\,\xi_{\alpha,\omega})$. For $l \in L$, $d_H(l \cap \xi_{\lambda,\omega},\, l \cap \xi_{\lambda+\delta,\omega})=\delta$ (A.2), hence $dVol(U) = \delta \cdot dVol_{\xi_{\lambda,\omega}}(U \cap \xi_{\lambda,\omega})$ and therefore

$$\int_{H^n} f(x)\,dVol(x) = \int_{\mathbb{R}}\left(\int_{\xi_{\lambda,\omega}} f(z)\,dVol_{\xi_{\lambda,\omega}}(z)\right)d\lambda. \tag{7}$$

The above argument (for $H^n$) is essentially the same as that in (Helgason, 2000)[p.37] (for $H^2$). To further convince the reader that (7) holds for all $n$, we give another simple proof in A.4.

Injectivity theorem With respect to the canonical measure on $\Xi$, Helgason (1970)[p.13] proved

Theorem 1 (Helgason). If $f \in L^1(H^n)$ and $\int_\xi f(z)\,dVol_\xi(z) = 0$ for a.e. $\xi \in \Xi$, then $f = 0$ a.e.

Theorem 1 says that if the integral of $f \in L^1(H^n)$ over almost every horocycle is zero, then $f$ is zero almost everywhere. This theorem and the integral formula (7) are essential for the proof of Theorem 2.

4 LEARNING ARCHITECTURES AND EIGENFUNCTIONS OF THE LAPLACIAN

In this section, we discuss a heuristic connection between the representation properties of eigenfunctions and classical neurons, and then we define some horocycle-related learning tools.

4.1 EIGENSPACES AND NEURON MODELS

On a Riemannian manifold $X$, the Laplace-Beltrami operator $L_X$ is the divergence of the gradient, and it has a well-known representation property (Einsiedler & Ward, 2017): if $X$ is a compact Riemannian manifold or a bounded domain in $\mathbb{R}^n$, then $L^2(X)$ has a basis given by eigenfunctions. This statement is false if $X$ is $\mathbb{R}^n$ or $H^n$ (Hislop, 1994).

Eigenspaces on $\mathbb{R}^n$ and $H^n$ Our work is motivated by the theory of eigenspaces, in which Euclidean (respectively hyperbolic) eigenfunctions are obtained from $(x, \omega)_E$ (respectively $\langle x, \omega\rangle_H$) by some kind of superposition. For example, all smooth eigenfunctions of $L_{\mathbb{R}^n}$ are precisely the functions (M. Hashizume & Okamoto, 1972)[p.543]

$$f(x) = \int_{S^{n-1}} e^{\lambda(x,\omega)_E}\,dT(\omega), \tag{8}$$

and the eigenfunctions of $L_{H^n}$ are precisely the functions (Helgason, 1970)[Theorem 1.7, p.139]

$$f(x) = \int_{S^{n-1}} e^{\lambda\langle x,\omega\rangle_H}\,dT(\omega), \tag{9}$$

where $T$ in (8) and (9) are certain technical linear forms on suitable functional spaces on $S^{n-1}$.

Neuron models By (8) and (1), Euclidean eigenfunctions (respectively classical neurons) are superpositions of $(\cdot, \omega)_E$ and $\exp$ (respectively $\rho$), with homogeneity and additivity. By (9) and (2), hyperbolic eigenfunctions (respectively horocycle neurons) are superpositions of $\langle\cdot, \omega\rangle_H$ and $\exp$ (respectively $\rho$). The representation property of eigenfunctions on compact manifolds and bounded domains suggests that the universal approximation property is likely to hold for networks constructed from $(\cdot, \omega)_E$ or $\langle\cdot, \omega\rangle_H$. However, this heuristic is not a proof (A.5).

4.2 HOROCYCLE BASED LEARNING ARCHITECTURES

Horocycle neuron In the implementation of the horocycle neuron (2), we take $\frac{1}{2}\log\left(\frac{1-|x|^2}{|x-\omega|^2+\epsilon} + \epsilon\right)$ for $\langle x, \omega\rangle_H$, where $\epsilon$ is a small constant that ensures numerical stability. For updating $\omega$, we use a sphere optimization algorithm (Absil et al., 2008; Bonnabel, 2013) (A.6).

Horocycle feature and horocycle decision hypersurface Given a non-origin point $x \in H^n$, for $y \in H^n$ we define $h_x(y) = \langle y, x/|x|\rangle_H$ and call it the horocycle feature attached to $x$. This feature is useful in the Poincaré embedding subtree classification task (see the experiment and Figure 3 [left]). The horocycle is the hyperbolic analog of the Euclidean hyperplane, and therefore it is a natural choice of decision hypersurface, which may arise as a level set of a horocycle feature.
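A minimal NumPy sketch of the stabilized horocycle neuron described above may help; the function names and the value of $\epsilon$ are illustrative (the paper's experiments use TensorFlow/Keras), and the horocycle feature $h_x$ is the same wave with $\omega = x/|x|$:

```python
import numpy as np

def horocycle_wave(x, omega, eps=1e-6):
    """Stabilized <x, omega>_H = (1/2) log((1 - |x|^2) / (|x - omega|^2 + eps) + eps)
    for x in the Poincare ball H^n and omega on the boundary sphere S^{n-1}."""
    num = 1.0 - np.sum(x * x, axis=-1)
    den = np.sum((x - omega) ** 2, axis=-1) + eps
    return 0.5 * np.log(num / den + eps)

def horocycle_neuron(x, omega, lam, b, rho=np.tanh):
    """Horocycle neuron (2): rho(lambda * <x, omega>_H + b)."""
    return rho(lam * horocycle_wave(x, omega) + b)
```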
End-based clustering and end prototype Natural clustering is a topic in representation learning (Bengio et al., 2013), and the common prototype-based clusters are center-based (Tan et al., 2005). We propose a type of clustering that embeds high-dimensional data in $H^n$ and places the prototypes in $S^{n-1}$. Figure 3 [right] is an example for $n = 2$. For $\omega \in S^{n-1}$ and any $b \in \mathbb{R}$, the function $x \in H^n \mapsto -\log\left(\frac{1-|x|^2}{|x-\omega|^2}\right) + b$ measures the relative distance of $H^n$ from $\omega$ in Gromov's bordification theory (Bridson & Haefliger (2009)[II.8], A.18). Accordingly, we define $Dist : H^n \times S^{n-1} \times \mathbb{R} \to \mathbb{R}$ by

$$Dist(x, \omega, b) = -\log\left(\frac{1-|x|^2}{|x-\omega|^2}\right) + b = -2\langle x, \omega\rangle_H + b. \tag{10}$$

It is a relative distance function, which is why $Dist$ may assume negative values and why there is a bias term $b$ in (10). Consider classes $Cls = \{C_1, C_2, \dots, C_M\}$ and labeled training examples $\{(X^1, Y^1), \dots, (X^N, Y^N)\}$, where the $X^i \in \mathbb{R}^D$ are $D$-dimensional input features and $Y^i \in \{1, 2, \dots, M\}$. Each example $X^i$ belongs to the class $C_{Y^i}$. In light of (10), our goal is to find a neural network $NN_\theta : \mathbb{R}^D \to H^n$ parameterized by $\theta$, prototypes $\omega_1, \dots, \omega_M \in S^{n-1}$, and real numbers $b_1, \dots, b_M \in \mathbb{R}$ such that

$$\frac{\#\left\{1\le i\le N : Y^i = \arg\min_{1\le j\le M} Dist(NN_\theta(X^i), \omega_j, b_j)\right\}}{N} \tag{11}$$

is maximized. We call $\{NN_\theta(X^j) : 1 \le j \le N\}$ the end-based clustering and the $\omega_i$ end prototypes (in hyperbolic geometry, an end is an equivalence class of the parallel lines in Figure 2 [left]). In experiments, we take $NN_\theta = \mathrm{Exp} \circ NN'_\theta$, where $NN'_\theta : \mathbb{R}^D \to \mathbb{R}^n$ is a standard neural network parameterized by $\theta$ and $\mathrm{Exp} : \mathbb{R}^n \to H^n$ is the exponential map of the hyperbolic space.

Horocycle layer, horocycle multiple linear regression (MLR) and geodesic decision hypersurfaces We call a concatenation of neurons (2) a horocycle layer, and we now carefully describe a prototypical learning framework for end-based clusterings. Using the same notation as in the previous paragraph, the classification task has $M$ classes, and $NN_\theta = \mathrm{Exp} \circ NN'_\theta : \mathbb{R}^D \to H^n$ is a deep network. For prototypes $\omega_1, \dots, \omega_M \in S^{n-1}$, real numbers $b_1, \dots, b_M \in \mathbb{R}$, and any example $X$, the feedforward pass for prediction is

$$x = NN_\theta(X), \quad \text{(Feature descriptor)}$$
$$SC_j(X) = -Dist(x, \omega_j, b_j), \quad \text{(Scores; similarity)}$$
$$X \in C_{\arg\max_{1\le j\le M} SC_j(X)}. \quad \text{(Classifier)}$$

The goal is to maximize the accuracy (11), for which we need a loss function for backpropagation. Following the convention of prototypical networks (Snell et al., 2017; Yang et al., 2018), we choose an increasing function $\rho$ (in our experiments, $\rho(x) = x$ or $\rho = \tanh$²) and let the distribution over classes for an input $X$ (with label $Y$) be $p_\theta(Y = C_j|X) \propto e^{-\rho(Dist(NN_\theta(X),\,\omega_j,\,b_j))} = e^{-\rho(-SC_j(X))}$.

²One often takes $\rho(x) = x^2$ in metric learning, which is improper here because $Dist(x)$ can be negative.

Therefore, given a batch of training examples, the loss function is

$$L = -\frac{\sum_{(X^j, Y^j)\in\mathrm{Batch}}\log p_\theta(Y = C_{Y^j}|X^j)}{\#\mathrm{Batch}}. \tag{12}$$

The training proceeds by minimizing $L$, and we call this framework a horocycle MLR. The set of parameters of the framework is $\{\theta\} \cup \{\omega_1, \dots, \omega_M\} \cup \{b_1, \dots, b_M\}$. It is worth mentioning that the decision boundaries of the horocycle MLR are geodesics, which follows from

$$SC_i(X)=SC_j(X) \iff \log\left(\frac{1-|x|^2}{|x-\omega_i|^2}\right) - b_i = \log\left(\frac{1-|x|^2}{|x-\omega_j|^2}\right) - b_j \iff \frac{|x-\omega_i|}{|x-\omega_j|} = e^{\frac{b_j-b_i}{2}}$$

and the theorem of Apollonian circles (A.7).
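A minimal NumPy sketch of the relative distance (10) and the loss (12) follows; the names, the $\epsilon$ value, and the use of NumPy (rather than the paper's TensorFlow/Keras code) are illustrative:

```python
import numpy as np

def dist(x, omega, b, eps=1e-6):
    """Relative distance (10): Dist(x, omega, b) = -2 * <x, omega>_H + b."""
    wave = 0.5 * np.log((1.0 - np.sum(x * x, axis=-1)) /
                        (np.sum((x - omega) ** 2, axis=-1) + eps) + eps)
    return -2.0 * wave + b

def horocycle_mlr_loss(x, y, prototypes, biases, rho=lambda t: t):
    """Cross-entropy loss (12), with p(Y = C_j | X) proportional to
    exp(-rho(Dist(x, omega_j, b_j))) and scores SC_j(X) = -Dist(x, omega_j, b_j)."""
    logits = np.stack([-rho(dist(x, w, b)) for w, b in zip(prototypes, biases)], axis=-1)
    logits = logits - logits.max(axis=-1, keepdims=True)  # stabilize the softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()
```

Predicting $\arg\max_j SC_j(X)$ with these scores realizes the classifier above; the geodesic decision boundaries arise from the score ties.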
Poisson neuron and Poisson multiple linear regression (MLR) Although $\langle x, \omega\rangle_H$ (4) is well motivated by the theory of eigenspaces (9) and fits naturally into metric learning (see (10), or also Corollary 1), it is only defined on $H^n$. Some readers might not be convinced that the neuron has to be defined on hyperbolic spaces. Therefore, we remove the log in (4) and define the Poisson neuron model by

$$P^\rho_{w,\lambda,b}(x) = \rho\left(\lambda\,\frac{|w|^2-|x|^2}{|x-w|^2} + b\right), \quad w \in \mathbb{R}^n,\ \lambda, b \in \mathbb{R},$$

which is well defined on $\mathbb{R}^n\setminus\{w\}$. Notice that if $|x| < |w|$ then $\frac{|w|^2-|x|^2}{|x-w|^2} = e^{2\langle x/|w|,\, w/|w|\rangle_H}$. In A.8, Figure 7 illustrates an example of a Poisson neuron on $\mathbb{R}^2$. In the implementation, we take $\frac{|w|^2-|x|^2}{|x-w|^2+\epsilon}$ for $\frac{|w|^2-|x|^2}{|x-w|^2}$, where $\epsilon$ is a small constant for numerical stability. We call a concatenation of Poisson neurons a Poisson layer, and we use it with a deep neural network $NN_\theta : \mathbb{R}^D \to \mathbb{R}^n$ to construct the Poisson MLR, which is similar to the horocycle MLR. Let $w_1, \dots, w_M \in \mathbb{R}^n$ and $b_1, \dots, b_M \in \mathbb{R}$; the feedforward pass for prediction of our framework is

$$x = NN_\theta(X), \quad SC_j(X) = \mathrm{BatchNorm}(P^\rho_{w_j,-1,b_j}(x)), \quad X \in C_{\arg\max_{1\le j\le M} SC_j(X)}. \tag{13}$$

We let $p_\theta(Y = C_j|X) \propto e^{SC_j(X)}$ and take (12) as the loss. This framework is called a Poisson MLR. We use the usual optimization algorithms to update the parameters of the Poisson neurons. BatchNorm (Ioffe & Szegedy, 2015) appears crucial for (13) in the experiments. Figure 4 illustrates that the high-confidence prediction regions (deep red areas) of the Poisson MLR are compact sets, in contrast to classical classifiers (Hein et al. (2019)[Theorem 3.1]). We shall use this figure to explain an experiment in Section 6.4.
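A minimal NumPy sketch of the Poisson neuron and the pre-BatchNorm scores of (13); the names and the $\epsilon$ value are illustrative, and BatchNorm is left to the surrounding training framework:

```python
import numpy as np

def poisson_neuron(x, w, lam, b, rho=lambda t: t, eps=1e-6):
    """Poisson neuron: rho(lam * (|w|^2 - |x|^2) / (|x - w|^2 + eps) + b) on R^n."""
    num = np.sum(w * w, axis=-1) - np.sum(x * x, axis=-1)
    den = np.sum((x - w) ** 2, axis=-1) + eps
    return rho(lam * num / den + b)

def poisson_mlr_scores(x, weights, biases):
    """Pre-BatchNorm scores of (13): SC_j(X) = P^rho_{w_j, -1, b_j}(x)."""
    return np.stack([poisson_neuron(x, w, -1.0, b)
                     for w, b in zip(weights, biases)], axis=-1)
```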
5 REPRESENTATIONAL POWER

In this section, $\rho$ is a continuous sigmoidal function (Cybenko, 1989), ReLU (Nair & Hinton, 2010), ELU (Clevert et al., 2016), or Softplus (Dugas et al., 2001). We remind the reader that $\rho$ is sigmoidal if $\lim_{t\to\infty}\rho(t) = 1$ and $\lim_{t\to-\infty}\rho(t) = 0$. The following theorem justifies the representational power of horocycle neurons.

Theorem 2. Let $K$ be a compact set in $H^n$, and let $1\le p<\infty$. Then finite sums of the form

$$F(x) = \sum_{i=1}^N \alpha_i\rho(\lambda_i\langle x, \omega_i\rangle_H+b_i), \quad \omega_i\in S^{n-1},\ \alpha_i, \lambda_i, b_i\in\mathbb{R} \tag{14}$$

are dense in $L^p(K,\mu)$, where $\mu$ is either $dVol$ (5) or the induced Euclidean volume.

We provide a sketch of the proof here and go through the details in A.9. It suffices to prove the theorem for a sigmoidal function $\rho$ and $\mu = dVol$, as the other cases follow from this one. Assume that these finite sums are not dense in $L^p(K, dVol)$. By the Hahn-Banach theorem, there exists some nonzero $h\in L^q(K, dVol)$, where $q=p/(p-1)$ if $p>1$ and $q=\infty$ if $p=1$, such that $\int_K F(x)h(x)\,dVol(x) = 0$ for all finite sums of the form (14). Extend $h$ to a function $H$ defined on $H^n$ by assigning $H(x)=h(x)$ if $x\in K$ and $H(x)=0$ if $x\in H^n\setminus K$. Using the property of sigmoidal functions, the bounded convergence theorem, and the integral formula (7), we prove that the integral of $H$ over almost every horocycle is zero. By the injectivity Theorem 1, $H$ is almost everywhere zero, which contradicts our assumption and completes the proof. In A.10, we prove the same result for Poisson neurons. In A.11, we prove the following lemma, which demonstrates a close relationship between horocycle neurons and the widely used $f^1_{a,p}$ (3).

Lemma 1. Let $K$ be a compact set in $H^n$, $\omega \in S^{n-1}$, and $\epsilon > 0$. There are $c, d \in \mathbb{R}$, $p \in H^n$, and $a \in T_p(H^n)$ such that the function $D(x) = cf^1_{a,p}(x) + d - \langle x, \omega\rangle_H$ satisfies $\|D\|_{L^p(K,dVol)} < \epsilon$.

This lemma suggests that $\langle\cdot, \omega\rangle_H$ is a boundary point of some "compactification" of the space of the $f^1_{a,p}$. The above lemma together with Theorem 2 implies

Corollary 1. Let $K$ be a compact set in $H^n$ and $1\le p<\infty$. Finite sums of the form

$$F(x) = \sum_{i=1}^N \alpha_i\rho(c_i f^1_{a_i,p_i}(x) + d_i), \quad p_i \in H^n,\ a_i \in T_{p_i}(H^n),\ \alpha_i, c_i, d_i \in \mathbb{R},$$

are dense in $L^p(K,\mu)$, where $\mu = dVol$ or $\mu$ is the induced Euclidean volume.

This result provides novel insights into the hyperbolic neural network (Ganea et al., 2018a), the gyroplane layer (Mathieu et al., 2019), and the Poincaré FC layer (Shimizu et al., 2020). Although the level sets of $f^1_{a,p}$ are hypercycles, our proof of Lemma 1 relies on the theory of horocycles. It would be interesting to have more natural approaches to the expressivity of $f^1_{a,p}$.

6 EXPERIMENTS

In this section, we first experiment with MNIST. Next, we apply a horocycle feature to the Poincaré embedding subtree classification task. After that, we construct 2-D clusterings of image datasets using the horocycle MLR. Finally, we provide evidence for further possible applications of the Poisson MLR. We use the frameworks or functions of TensorFlow, Keras, and scikit-learn (Abadi et al., 2015; Chollet et al., 2015; Pedregosa et al., 2011).

6.1 MNIST

The MNIST (LeCun et al., 1998) task is popular for testing hyperbolic learning tools (Ontrup & Ritter, 2005; Nagano et al., 2019; Mathieu et al., 2019; Grattarola et al., 2019; Ovinnikov, 2020; Khrulkov et al., 2020). We train two different classifiers; A.12, A.14, and the code contain the details. The first one is a single horocycle layer followed by a softmax classifier. Its average test error rate after 600 epochs is 1.96%, and Theorem 2 provides the rationale for this experiment (A.13). The second one is a Poisson MLR. It is the best hyperbolic-geometry-related MNIST classifier (Table 1). In this table, Ontrup & Ritter (2005) use a hyperbolic SOM, Grattarola et al. (2019) use an adversarial autoencoder, and Khrulkov et al. (2020) use the hyperbolic MLR. That our experiments perform well on MNIST suggests that horocycle and Poisson neurons are computationally efficient and coordinate easily with classical learning tools (such as the convolutional layer and the softmax).

6.2 POINCARÉ EMBEDDING SUBTREE CLASSIFICATION

Given a Poincaré embedding (Nickel & Kiela, 2017) $PE : \{\text{WordNet noun}\} \to H^D$ of 82,114 nouns and a node $x \in \{\text{WordNet noun}\}$, the task is to classify all other nodes as being part of the subtree rooted at $x$ or not (Ganea et al., 2018a). Our model is a logistic regression in which the horocycle feature $p \in \{\text{WordNet noun}\} \mapsto h_{PE(x)}(PE(p)/s)$ ($s$ is a hyperparameter in $[1, 1.5]$) is the only predictor, and the dependent variable is whether $p$ is in the subtree rooted at $x$. The decision hypersurface of this model is a horocycle, as illustrated in Figure 3 (left). In the experiment, we pre-train three different Poincaré embeddings³ in each of $H^2, H^3, H^5, H^{10}$. For each $x \in \{\text{animal, group, location, mammal, worker}\}$ and $D \in \{2, 3, 5, 10\}$, we randomly select one of the three pre-trained Poincaré embeddings $PE : \{\text{WordNet noun}\} \to H^D$ and then test the model. Table 2 reports the F1 classification scores and two standard deviations over 100 trials for each $\{x, D\}$. The different Poincaré embeddings account for most of the variance in performance. Our model differs from the existing ones. First, we take the horocycle as the decision hypersurface, while others take geodesics. Second, we train a logistic regression on top of the horocycle feature attached to $PE(x)$, which is efficiently calculated, while others train the hyperbolic MLR with different parametrizations. In the number of parameters, we have three (independent of $D$), Ganea et al. (2018a) have $2D$, and Shimizu et al. (2020) have $D + 1$. The number of parameters explains why our model is prominent in low dimensions.

³https://github.com/dalab/hyperbolic_cones
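A minimal scikit-learn sketch of this subtree classifier; the function names, the default $s$, and the $\epsilon$ value are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def horocycle_feature(y, x, eps=1e-6):
    """h_x(y) = <y, x/|x|>_H for points y in the Poincare ball and a non-origin anchor x."""
    omega = x / np.linalg.norm(x)
    return 0.5 * np.log((1.0 - np.sum(y * y, axis=-1)) /
                        (np.sum((y - omega) ** 2, axis=-1) + eps) + eps)

def fit_subtree_classifier(embeddings, root, labels, s=1.2):
    """Logistic regression on the single predictor h_{PE(x)}(PE(p)/s), where
    `embeddings` is an (N, D) array of Poincare embeddings, `root` is PE(x),
    and `labels` marks subtree membership; s in [1, 1.5] is the paper's hyperparameter."""
    feats = horocycle_feature(embeddings / s, root)[:, None]
    return LogisticRegression().fit(feats, labels)
```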
6.3 END-BASED CLUSTERING FOR 2D DIMENSION REDUCTION

In this experiment, we use the horocycle MLR (Section 4.2) to construct end-based clusterings $NN_\theta : \mathbb{R}^D \to H^2$ for MNIST, Fashion-MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky, 2012). We take $NN_\theta = \mathrm{Exp} \circ NN'_\theta$, where $\mathrm{Exp}$ is the exponential map of $H^2$ and $NN'_\theta : \mathbb{R}^D \to \mathbb{R}^2$ is a network with four convolutional blocks for MNIST/Fashion-MNIST or a ResNet-32 structure for CIFAR-10. A.16 and the code contain the details.

Figure 5 illustrates the end-based clusterings for MNIST, Fashion-MNIST, and CIFAR-10, with performance reported in the caption. Our accuracy for Fashion-MNIST is 8% higher than all numbers presented in McInnes et al. (2020). Moreover, Table 3 compares our MNIST numbers with those of Yang et al. (2018) and Ghosh & Kirby (2020), whose methods are similar to ours: we all use convolutional networks as the feature descriptor and prototype-based functions as the loss. However, Yang et al. (2018) and Ghosh & Kirby (2020) use a center-based prototype loss, while we use the end-based loss (12). Yang et al. (2018)[Figure 1] point out that a traditional CNN is good at linearly separating feature representations, but the learned features have large intra-class variations. The horocycle MLR achieves inter-class separability in the same way a traditional CNN does (angle accounts for label difference), while also obtaining intra-class compactness (Figure 5).

6.4 POISSON MLR

Using a Poisson MLR whose feature descriptor is a ResNet-32 structure, we obtain a classifier with a test error rate of 6.46% on CIFAR-10. It is on par with other methods with similar network structures (Yang et al., 2018). Moreover, we apply the Poisson MLR to the flowers classification task (TensorFlow), a typical example of overfitting. Replacing the MLR part of the Keras model (TensorFlow) with a Poisson MLR, the new Poisson model shows better generalization performance (Figure 6). A.17 and the code contain the details. This subsection provides evidence for further applications of horocycles.

7 CONCLUSION

Based on the spectral theory of hyperbolic spaces, we introduce several horocycle-related learning tools. They find applications in hyperbolic neural networks, the Poincaré embedding subtree classification task, and the visualization and classification of image datasets. We give an existential proof of a universal approximation theorem for shallow networks constructed from horocycle neurons or $f^1_{a,p}$. We hope this will trigger further research on expressivity problems, such as constructive approaches, quantitative results, and the benefit of depth (Mhaskar & Poggio, 2016), for horocycle neurons, $f^1_{a,p}$, and similar functions on more general manifolds.

A APPENDIX

A.1 NOTATIONS AND SYMBOLS

Default notations:
- $\mathbb{R}$: the set of real numbers.
- $\mathbb{R}^n$: $n$-dimensional Euclidean space; $x \in \mathbb{R}^n$, $x = (x_1, \dots, x_n)$.
- $(\cdot, \cdot)_E$: Euclidean inner product; $(x, y)_E = \sum_{i=1}^n x_i y_i$.
- $\langle\cdot, \cdot\rangle_H$: hyperbolic analogue of $(\cdot, \cdot)_E$; for $x \in H^n$, $\omega \in S^{n-1}$, $\langle x, \omega\rangle_H = \frac{1}{2}\log\frac{1-|x|^2}{|x-\omega|^2}$.
- $|\cdot|$: Euclidean norm; $|x| = \sqrt{(x,x)_E}$.
- $H^n$: $n$-dimensional hyperbolic space; as a set, $H^n = \{x \in \mathbb{R}^n : |x| < 1\}$.
- $T_p(X)$: tangent space of $X$ at $p$; $T(X) = \cup_{p\in X} T_p(X)$.
- $ds^2_{H^n}$: the canonical metric on $H^n$ with curvature $-1$; $ds^2_{H^n} = \sum_{i=1}^n 4(1-|x|^2)^{-2}\,dx_i^2$.
- $dVol$: Riemannian volume on $H^n$; $dVol = 2^n(1-|x|^2)^{-n}\,dx_1\cdots dx_n$.
- $L^p(K, dVol)$: $\left\{f \mid \int_K |f|^p\,dVol < \infty\right\}$; for measurable $f$ on $K$, $\|f\|_{L^p(K,dVol)} = \left(\int_K |f|^p\,dVol\right)^{1/p}$.
- $S^{n-1}$: $(n-1)$-dimensional sphere; as a set, $S^{n-1} = \{x \in \mathbb{R}^n : |x| = 1\}$.
- $P(\cdot, \cdot)$: hyperbolic Poisson kernel; $P(x, \omega) = \left(\frac{1-|x|^2}{|x-\omega|^2}\right)^{n-1}$ for $x \in H^n$, $\omega \in S^{n-1}$.
- $f^1_{a,p}$: model in the hyperbolic MLR; $f^1_{a,p}(x) = \frac{2|a|}{1-|p|^2}\sinh^{-1}\left(\frac{2(-p\oplus x,\,a)_E}{(1-|-p\oplus x|^2)\,|a|}\right)$.
- $d_{H^n}$: the hyperbolic distance function.
- $\Xi$: the space of horocycles; $\Xi_\omega$: the set of horocycles tangential to $S^{n-1}$ at $\omega$.
- $L_X$: Laplace-Beltrami operator on $X$.
- $h_x$: the horocycle feature function, $h_x(y) = \langle y, x/|x|\rangle_H$.
- $\xi_{\lambda,\omega}$: the unique horocycle connecting $\omega$ and $\tanh(\lambda/2)\cdot\omega$.
- MLR: multiple linear regression; dim: dimension; $I_K$: the indicator function of $K$.
- $Dist$: relative distance function, $Dist(x, \omega, b) = -2\langle x, \omega\rangle_H + b$.
- $Cls$: set of classes, $Cls = \{C_1, C_2, \dots, C_M\}$.
- $NN_\theta$, $NN'_\theta$: networks parameterized by $\theta$; $\mathrm{Exp}$: exponential map of the hyperbolic space.
- $(X^1, Y^1)$: a labeled sample; $SC_j$: score function; $p_\theta(Y = C_j|X)$: prediction probability; $L$: loss function.
- $P^\rho_{w,\lambda,b}$: Poisson neuron, $P^\rho_{w,\lambda,b}(x) = \rho\left(\lambda\frac{|w|^2-|x|^2}{|x-w|^2} + b\right)$.
- $PE$: Poincaré embedding.

Conventional symbols: $n, m, i$ are integers; $x, y, w$ are points in $\mathbb{R}^n$ or $H^n$, or real numbers; $o$ is the origin of $\mathbb{R}^n$ or $H^n$; $b, c, d, \alpha, \delta$ are real numbers; $\lambda$ is a real or complex number; $t$ is a real number (the timestamp in optimization); $\omega$ is a point in $S^{n-1}$; $\rho$ is an activation function; $f, g, F, h, H$ are functions; $K$ is a compact set; $X$ is a manifold; $p$ is a point in $H^n$ or on a manifold; $a$ is an element of $T_p(H^n)$; $\xi$ is a horocycle; $\mu$ is a measure; $L$ is a family of geodesic lines; $l$ is a geodesic line; $U$ is a set in $H^n$; $M$ is the number of classes; $D$ is a dimension.

A.2 PROOF OF THE ISOMETRY

Given $\omega\in S^{n-1}$ and $\lambda\in\mathbb{R}$, let $\xi_{\lambda,\omega}$ be the unique horocycle that connects $\omega$ and $\tanh(\lambda/2)\cdot\omega$. The length of any geodesic (ending at $\omega$) line segment cut by $\xi_{\lambda_1,\omega}$ and $\xi_{\lambda_2,\omega}$ equals $|\lambda_1 - \lambda_2|$. This fact is obvious in the half-space model. There is a Riemannian isometry $F : \{z \in \mathbb{R}^n : |z| < 1\} \to \{(x_1, \dots, x_n) : x_1 > 0\}$ (the latter with the metric $ds^2 = \frac{dx_1^2+\cdots+dx_n^2}{x_1^2}$) such that $F(\omega) = \infty$ and $F(o) = (1, 0, \dots, 0)$. Using $d_{H^n}(o, \tanh(\lambda_i/2)\omega) = |\lambda_i|$, $d_{\{(x_1,\dots,x_n):x_1>0\}}((1, 0, \dots, 0), (e^{\pm\lambda_i}, 0, \dots, 0)) = |\lambda_i|$, $F(\omega) = \infty$, and $F(o) = (1, 0, \dots, 0)$, we have $F(\tanh(\lambda_i/2)\omega) = (e^{\lambda_i}, 0, \dots, 0)$. Therefore, $F$ maps $\xi_{\lambda_i,\omega}$ to $\{(x_1, x_2, \dots, x_n) : x_1 = e^{\lambda_i}\}$. Any geodesic (ending at $\omega$) line segment cut by $\xi_{\lambda_1,\omega}$ and $\xi_{\lambda_2,\omega}$ is mapped by $F$ to $\{(t, \alpha_2, \dots, \alpha_n) : (t - e^{\lambda_1})(t - e^{\lambda_2}) < 0\}$ for some fixed $\alpha_j$. It is easy to check that the length of this segment with respect to $\frac{dx_1^2+\cdots+dx_n^2}{x_1^2}$ (as the $\alpha_i$ are constants, the metric reduces to $dx_1^2/x_1^2$ on this segment) is $|\lambda_1 - \lambda_2|$.

A.3 PROOF OF (6)

Because $x$ lies on $\xi_{\lambda,\omega}$, which is a sphere with center $\frac{1+\tanh(\lambda/2)}{2}\omega$ and radius $\frac{1-\tanh(\lambda/2)}{2}$, we have $\left|x - \frac{1+\tanh(\lambda/2)}{2}\omega\right|^2 = \left(\frac{1-\tanh(\lambda/2)}{2}\right)^2$, which leads to $|x|^2 - (1+\tanh(\lambda/2))(x, \omega)_E + \tanh(\lambda/2)|\omega|^2 = 0$, then to $\frac{1+\tanh(\lambda/2)}{2}|x - \omega|^2 = \frac{1-\tanh(\lambda/2)}{2}(|\omega|^2 - |x|^2)$, and finally to $\langle x, \omega\rangle_H = \frac{1}{2}\log\frac{|\omega|^2-|x|^2}{|x-\omega|^2} = \frac{1}{2}\log\frac{1+\tanh(\lambda/2)}{1-\tanh(\lambda/2)} = \lambda/2$.

A.4 ANOTHER PROOF OF THE INTEGRAL FORMULA (7)

We use $H^n$ for the upper half-space model $\{(x_1, \dots, x_n) : x_1 > 0\}$ with the Riemannian volume $\frac{dx_1\cdots dx_n}{x_1^n}$. Let $\omega = (\infty, 0, \dots, 0)$ and let $o$ be $(1, 0, \dots, 0)$ as in A.2; then $\xi_{\lambda,\omega} = \{(x_1, x_2, \dots, x_n) : x_1 = e^\lambda\}$. The induced Riemannian metric on $\xi_{\lambda,\omega}$ (respectively the volume $dVol_{\xi_{\lambda,\omega}}$) is $\frac{dx_2^2+\cdots+dx_n^2}{e^{2\lambda}}$ (respectively $\frac{dx_2\cdots dx_n}{e^{(n-1)\lambda}}$). For any integrable function $f$ on $H^n$, using the change of variable $x_1 = e^\lambda$,

$$\int_{H^n} f(x_1, \dots, x_n)\,\frac{dx_1\cdots dx_n}{x_1^n} = \int_\lambda\int_{(x_2,\dots,x_n)\in\mathbb{R}^{n-1}} f(e^\lambda, x_2, \dots, x_n)\,\frac{dx_2\cdots dx_n}{e^{n\lambda}}\,e^\lambda\,d\lambda = \int_\lambda\int_{(x_2,\dots,x_n)\in\mathbb{R}^{n-1}} f(e^\lambda, x_2, \dots, x_n)\,\frac{dx_2\cdots dx_n}{e^{(n-1)\lambda}}\,d\lambda = \int_\lambda\int_{\xi_{\lambda,\omega}} f(z)\,dVol_{\xi_{\lambda,\omega}}(z)\,d\lambda.$$

This identity is equivalent to the integral formula $\int_{H^n} f(x)\,dVol(x) = \int_{\mathbb{R}}\left(\int_{\xi_{\lambda,\omega}} f(z)\,dVol_{\xi_{\lambda,\omega}}(z)\right)d\lambda$ presented in (7), by the Riemannian isometry in A.2.

A.5 THE HEURISTIC IS NOT A PROOF

The spectral theory does not directly lead to universal approximation theorems, for the following reasons: (1) the superpositions in (1, 2) and in (8, 9) are different (similarly, although another kind of superposition in Hilbert's 13th problem (Hilbert, 1935; Arnold, 2009) was a driving force behind universal approximation theorems (Nielsen, 1987), the former is hardly relevant for networks (Girosi & Poggio, 1989)); (2) the desired representation properties of hyperbolic eigenfunctions are unknown, partially because $H^n$ is non-compact; (3) results in spectral theory favor Hilbert spaces, while universal approximation theorems cover more than the $L^2$ spaces.

A.6 OPTIMIZATION

The parameter update for the horocycle neuron (2) involves optimization on the sphere (for $\omega$) and on the hyperbolic space (for $x$). We use a standard algorithm of sphere optimization (Absil et al., 2008) to update $\omega$, and in the supplement we present an optimization approach based on geodesic polar coordinates to update $x$. In the implementation of a horocycle layer, the forward propagation is straightforward, while the backpropagation involves optimization on the sphere and on the hyperbolic space. In the following, $\eta$ is the learning rate, $\alpha_t$ is the value of $\alpha$ ($\alpha$ may be $\eta, s, z, \omega, \dots$) at the $t$-th step, $T_pX$ is the tangent space at $p$, $\nabla$ is the gradient, and $\nabla_H$ is the hyperbolic gradient. It suffices to consider the layer $s=\langle z, \omega\rangle$.

Optimization on the sphere The update of $\omega$ in $s=\langle z, \omega\rangle$ involves optimization on the sphere. The projection of $\frac{\partial L_\theta}{\partial s}\nabla s(\omega_t) = \frac{\partial L_\theta}{\partial s}\frac{z_t-\omega_t}{|z_t-\omega_t|^2} \in T_{\omega_t}\mathbb{R}^n$ onto $T_{\omega_t}S^{n-1}$ is given by (Absil et al., 2008)[p.48]

$$v_t = \frac{\partial L_\theta}{\partial s}\left[\frac{z_t - \omega_t}{|z_t - \omega_t|^2} - \left(\frac{z_t - \omega_t}{|z_t - \omega_t|^2},\, \omega_t\right)_E\omega_t\right] = \frac{\partial L_\theta}{\partial s}\,\frac{z_t - (z_t, \omega_t)_E\,\omega_t}{|z_t - \omega_t|^2}.$$

Two well-known update rules for $\omega_t$ (Absil et al., 2008)[p.76] are:

$$\omega_{t+1} = \cos(\eta_t|v_t|)\,\omega_t - \sin(\eta_t|v_t|)\,|v_t|^{-1}v_t; \qquad \omega_{t+1} = \frac{\omega_t - \eta_t v_t}{|\omega_t - \eta_t v_t|}.$$
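A minimal NumPy sketch of one such update step, using the second (normalization) retraction; the function name is illustrative:

```python
import numpy as np

def sphere_update(omega, z, dL_ds, lr):
    """One Riemannian SGD step for the boundary point omega in s = <z, omega>_H (A.6):
    project the ambient gradient onto the tangent space of S^{n-1}, take a step,
    then retract back to the sphere by normalization."""
    v = dL_ds * (z - np.dot(z, omega) * omega) / np.sum((z - omega) ** 2)
    omega_new = omega - lr * v
    return omega_new / np.linalg.norm(omega_new)
```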
A.7 A PROOF OF THE APOLLONIUS THEOREM

Theorem 3 (Apollonius). Given distinct $\omega_1, \omega_2 \in S^{n-1}$ and a positive number $\lambda$, the locus $\{x : |x - \omega_1| = \lambda|x - \omega_2|\}$ is a sphere orthogonal to $S^{n-1}$.

Proof. If $\lambda$ is one, the claim is trivial. We now assume $\lambda \neq 1$. From $|x - \omega_1| = \lambda|x - \omega_2|$ we obtain

$$\left|x - \frac{\omega_1 - \lambda^2\omega_2}{1 - \lambda^2}\right|^2 = \frac{|\omega_1 - \lambda^2\omega_2|^2}{(1 - \lambda^2)^2} - 1.$$

The locus is thus a sphere with center $O = \frac{\omega_1 - \lambda^2\omega_2}{1 - \lambda^2}$ and radius $R = \sqrt{\frac{|\omega_1 - \lambda^2\omega_2|^2}{(1 - \lambda^2)^2} - 1}$. The theorem of Apollonius (in all dimensions) claims that this sphere is orthogonal to $S^{n-1}$. To prove this, it suffices to prove $|oO|^2 = 1 + R^2$ (recall that $o$ is the origin of $H^n$), which follows immediately from $R^2 = |O|^2 - 1$.

A.8 INVERSION

On $\mathbb{R}^n \cup \{\infty\}$, given the sphere $\{x : |x - w_0| = r\}$, the corresponding inversion is given by

$$Iv(x) = w_0 + \frac{r^2(x - w_0)}{|x - w_0|^2}.$$

For $x \in \mathbb{R}^n \cup \{\infty\}$, $Iv(x)$ is called the inverse of $x$ with respect to $\{x : |x - w_0| = r\}$.

A.9 PROOF OF THEOREM 2

Theorem 2. Let $K$ be a compact set in $H^n$, and let $1\le p<\infty$. Then finite sums of the form

$$F(x) = \sum_{i=1}^N \alpha_i\rho(\lambda_i\langle x, \omega_i\rangle_H+b_i), \quad \omega_i\in S^{n-1},\ \alpha_i, \lambda_i, b_i\in\mathbb{R}$$

are dense in $L^p(K,\mu)$, where $\mu$ is either $dVol$ (5) or the induced Euclidean volume.

Proof. We first treat the case where $\rho$ is sigmoidal and $\mu = dVol$. Assume that these finite sums are not dense in $L^p(K, dVol)$. By the Hahn-Banach theorem, there exists some nonzero $h\in L^q(K, dVol)$, where $q=p/(p-1)$ if $p>1$ and $q=\infty$ if $p=1$, such that $\int_K F(x)h(x)\,dVol(x) = 0$ for all finite sums of the form (14). As $K$ is a compact set, Hölder's inequality gives $\int_K |h(x)|\,dVol \le \left(\int_K dVol\right)^{1/p}\|h\|_{L^q(K,dVol)}$, which yields $h\in L^1(K, dVol)$. Extend $h$ to a function $H$ defined on $H^n$ by assigning $H(x)=h(x)$ if $x\in K$ and $H(x)=0$ if $x\in H^n\setminus K$. Then $H\in L^1(H^n, dVol)\cap L^q(H^n, dVol)$ and

$$\int_{H^n} F(x)H(x)\,dVol(x) = 0 \tag{15}$$

for all finite sums of the form (14). For any $\omega\in S^{n-1}$ and $\lambda, b\in\mathbb{R}$, set $F_{\omega,\lambda,b}(x) = \rho(\lambda(\langle x, \omega\rangle_H-b))$. These functions are uniformly bounded, as $|F_{\omega,\lambda,b}(x)|\le1$. Moreover,

$$\lim_{\lambda\to\infty} F_{\omega,\lambda,b}(x) = \begin{cases} 1 & \text{if } \langle x, \omega\rangle_H>b, \\ 0 & \text{if } \langle x, \omega\rangle_H<b. \end{cases} \tag{16}$$

According to (15), for all $\omega, \lambda, b$ we have $\int_{H^n} F_{\omega,\lambda,b}(x)H(x)\,dVol(x) = 0$. The functions $\{F_{\omega,\lambda,b}\}_{\lambda\in\mathbb{R}}$ converge pointwise as $\lambda\to\infty$, and the integrands are dominated by $|H|\in L^1(H^n, dVol)$. By the bounded convergence theorem, for all $\omega\in S^{n-1}$ and $b\in\mathbb{R}$,

$$\int_{\{x:\langle x,\omega\rangle_H>b\}} H(x)\,dVol(x) = 0. \tag{17}$$

By the integral formula (7) (with the notation defined there), (6), and (17), for all $b\in\mathbb{R}$,

$$\int_{2b}^\infty\left(\int_{\xi_{t,\omega}} H(z)\,dVol_{\xi_{t,\omega}}(z)\right)dt = 0. \tag{18}$$

Taking the derivative of $\int_{2b}^\infty\left(\int_{\xi_{t,\omega}} H(z)\,dVol_{\xi_{t,\omega}}(z)\right)dt$ with respect to $b$, we deduce from (18) that $\int_{\xi_{2b,\omega}} H(z)\,dVol_{\xi_{2b,\omega}}(z) = 0$ for a.e. $b\in\mathbb{R}$. In other words, the integral of $H$ over a.e. $\xi \in \Xi_\omega$ is zero. This holds for all $\omega\in S^{n-1}$; therefore, the integral of $H$ over a.e. $\xi \in \Xi$ is zero. By the injectivity Theorem 1, $H = 0$ a.e., which contradicts our assumption. Therefore, finite sums of the form (14) are dense in $L^p(K, dVol)$.

The case where $\rho$ is ReLU, ELU, or Softplus and $\mu = dVol$ follows from the above case and the fact that $x \mapsto \rho(x+1) - \rho(x)$ is sigmoidal. The case where $\mu$ is the Euclidean volume follows from the previous cases and the fact that the Euclidean volume on the compact set $K$ is bounded from above by $\lambda\,dVol$ for some constant $\lambda$.

A.10 UNIVERSAL APPROXIMATION THEOREM FOR POISSON NEURONS

In this section, $\rho$ is a continuous sigmoidal function (Cybenko, 1989), ReLU (Nair & Hinton, 2010), ELU (Clevert et al., 2016), or Softplus (Dugas et al., 2001). We recall the Poisson neuron:

$$P^\rho_{w,\lambda,b}(x) = \rho\left(\lambda\,\frac{|w|^2 - |x|^2}{|x - w|^2} + b\right), \quad w \in \mathbb{R}^n,\ \lambda, b \in \mathbb{R}.$$

Theorem 4. Let $K$ be a compact set in $H^n$, and let $1\le p<\infty$. Then finite sums of the form

$$F(x) = \sum_{i=1}^N \alpha_i P^\rho_{\omega_i,\lambda_i,b_i}(x), \quad \omega_i\in S^{n-1},\ \alpha_i, \lambda_i, b_i\in\mathbb{R} \tag{19}$$

are dense in $L^p(K,\mu)$, where $\mu$ is either $dVol$ (5) or the induced Euclidean volume.

Proof. We first treat the case where $\rho$ is sigmoidal and $\mu = dVol$. Assume that these finite sums are not dense in $L^p(K, dVol)$. By the Hahn-Banach theorem, there exists some nonzero $h\in L^q(K, dVol)$, where $q=p/(p-1)$ if $p>1$ and $q=\infty$ if $p=1$, such that $\int_K F(x)h(x)\,dVol(x) = 0$ for all finite sums of the form (19). As $K$ is a compact set, Hölder's inequality gives $\int_K |h(x)|\,dVol \le \left(\int_K dVol\right)^{1/p}\|h\|_{L^q(K,dVol)}$, which yields $h\in L^1(K, dVol)$. Extend $h$ to a function $H$ defined on $H^n$ by assigning $H(x)=h(x)$ if $x\in K$ and $H(x)=0$ if $x\in H^n\setminus K$. Then $H\in L^1(H^n, dVol)\cap L^q(H^n, dVol)$ and

$$\int_{H^n} F(x)H(x)\,dVol(x) = 0 \tag{20}$$

for all finite sums of the form (19). For any $\omega\in S^{n-1}$, $\lambda \in \mathbb{R}$, and $b > 0$, set

$$F_{\omega,\lambda,b}(x) = P^\rho_{\omega,\lambda,-\lambda b}(x) = \rho\left(\lambda\left(\frac{1 - |x|^2}{|x - \omega|^2} - b\right)\right).$$

These functions are uniformly bounded, as $|F_{\omega,\lambda,b}(x)|\le1$. Moreover,

$$\lim_{\lambda\to\infty} F_{\omega,\lambda,b}(x) = \begin{cases} 1 & \text{if } \frac{1-|x|^2}{|x-\omega|^2}>b, \\ 0 & \text{if } \frac{1-|x|^2}{|x-\omega|^2}<b. \end{cases} \tag{21}$$

According to (20), for all $\omega, \lambda, b$ we have $\int_{H^n} F_{\omega,\lambda,b}(x)H(x)\,dVol(x) = 0$. The functions $\{F_{\omega,\lambda,b}\}_{\lambda\in\mathbb{R}}$ converge pointwise as $\lambda\to\infty$, and the integrands are dominated by $|H|\in L^1(H^n, dVol)$.
By the bounded convergence theorem, for all $\omega\in S^{n-1}$ and $b>0$,

$$\int_{\{x:\langle x,\omega\rangle_H>(\log b)/2\}} H(x)\,dVol(x) = \int_{\left\{x:\frac{1-|x|^2}{|x-\omega|^2}>b\right\}} H(x)\,dVol(x) = 0. \tag{22}$$

By the integral formula (7) (with the notation defined there), (6), and (22), for all $b>0$,

$$\int_{\log b}^\infty\left(\int_{\xi_{t,\omega}} H(z)\,dVol_{\xi_{t,\omega}}(z)\right)dt = 0. \tag{23}$$

Taking the derivative of $\int_{\log b}^\infty\left(\int_{\xi_{t,\omega}} H(z)\,dVol_{\xi_{t,\omega}}(z)\right)dt$ with respect to $b$, we deduce from (23) that $\int_{\xi_{\log b,\omega}} H(z)\,dVol_{\xi_{\log b,\omega}}(z) = 0$ for a.e. $b>0$. In other words, the integral of $H$ over a.e. $\xi \in \Xi_\omega$ is zero. This holds for all $\omega\in S^{n-1}$; therefore, the integral of $H$ over a.e. $\xi \in \Xi$ is zero. By the injectivity Theorem 1, $H = 0$ a.e., which contradicts our assumption. Therefore, finite sums of the form (19) are dense in $L^p(K, dVol)$.

The case where $\rho$ is ReLU, ELU, or Softplus and $\mu = dVol$ follows from the above case and the fact that $x \mapsto \rho(x+1) - \rho(x)$ is sigmoidal. The case where $\mu$ is the Euclidean volume follows from the previous cases and the fact that the Euclidean volume on the compact set $K$ is bounded from above by $\lambda\,dVol$ for some constant $\lambda$.

We refer the reader to the differences between (16) and (21), between (17) and (22), and between (18) and (23); the proofs are basically the same. The key points are the integral formula (7), the injectivity Theorem 1, and the fact that the level sets of horocycle/Poisson neurons are horocycles. Moreover, as a corollary of Theorem 4, we have

Corollary 2. Let $K$ be a compact set in $\mathbb{R}^n$, and let $1\le p<\infty$. Then finite sums of the form

$$F(x) = \sum_{i=1}^N \alpha_i P^\rho_{w_i,\lambda_i,b_i}(x), \quad w_i\in\mathbb{R}^n,\ \alpha_i, \lambda_i, b_i\in\mathbb{R}$$

are dense in $L^p(K,\mu)$, where $\mu$ is the Euclidean volume.

Proof. Because $K$ is compact, there exists a positive number $R$ such that $K \subset \{x \in \mathbb{R}^n : |x| < R\}$. By the above theorem, finite sums of the form $F(x) = \sum_{i=1}^N \alpha_i P^\rho_{w_i,\lambda_i,b_i}(x)$ with $w_i\in S^{n-1}$ are dense in $L^p(K/R, \mu)$. The corollary then follows from $P^\rho_{w,\lambda,b}(x) = P^\rho_{w/R,\lambda,b}(x/R)$.

A.11 PROOF OF LEMMA 1

Recall

$$f^1_{a,p}(x) = \frac{2|a|}{1-|p|^2}\,\sinh^{-1}\!\left(\frac{2(-p\oplus x,\, a)_E}{(1-|-p\oplus x|^2)\,|a|}\right). \tag{24}$$

The proof of Lemma 1 follows from the following direct computation.

Proof. Let $t \in (0, 1)$. Take $p_t = t\omega$ and $a_t = -\omega$; then we have

$$-p_t \oplus x = \frac{-t(1-2t(\omega,x)_E + |x|^2)\,\omega + (1-t^2)\,x}{1 - 2t(\omega,x)_E + t^2|x|^2}.$$

Let $F_t(x) = \frac{2(-p_t\oplus x,\, a_t)_E}{(1-|-p_t\oplus x|^2)\,|a_t|}$; then

$$F_t(x) = \frac{2\,\frac{t(1-2t(\omega,x)_E+|x|^2) - (1-t^2)(x,\omega)_E}{1-2t(\omega,x)_E+t^2|x|^2}}{1 - \frac{|-t(1-2t(\omega,x)_E+|x|^2)\,\omega + (1-t^2)\,x|^2}{(1-2t(\omega,x)_E+t^2|x|^2)^2}} = \frac{2t(1-2t(\omega,x)_E+t^2|x|^2)(1-2t(\omega,x)_E+|x|^2) - 2(1-t^2)(1-2t(\omega,x)_E+t^2|x|^2)(x,\omega)_E}{(1-2t(\omega,x)_E+t^2|x|^2)^2 - |-t(1-2t(\omega,x)_E+|x|^2)\,\omega + (1-t^2)\,x|^2} = A_t(x)/B_t(x),$$

where $A_t, B_t$ are defined as the corresponding numerator and denominator. We have

$$A_t(x)|_{t=1} = 2|x-\omega|^4, \quad B_t(x)|_{t=1} = 0, \quad \partial B_t(x)/\partial t|_{t=1} = 2|x-\omega|^2(|x|^2 - 1).$$

Let $G_t(x) = \sinh^{-1}(F_t(x)) + \log\frac{1-t}{1+t}$; then

$$G_t(x) = \log\left(\frac{A_t(x)}{B_t(x)} + \sqrt{1 + \frac{A_t^2(x)}{B_t^2(x)}}\right) + \log\frac{1-t}{1+t} = \log\left(\frac{(1-t)A_t}{(1+t)B_t} + \sqrt{\frac{(1-t)^2}{(1+t)^2} + \frac{(1-t)^2A_t^2(x)}{(1+t)^2B_t^2(x)}}\right).$$

By L'Hôpital's rule,

$$\lim_{t<1,\,t\to1}\frac{(1-t)A_t(x)}{(1+t)B_t(x)} = \left.\frac{-A_t(x) + (1-t)A_t'(x)}{B_t(x) + (1+t)B_t'(x)}\right|_{t=1} = \frac{|x-\omega|^2}{2 - 2|x|^2}.$$

Therefore,

$$\lim_{t<1,\,t\to1} G_t(x) = \log\left(\frac{|x-\omega|^2}{1 - |x|^2}\right).$$

For $t < 1$, take $p_t = t\omega$, $a_t = -\omega$, $c_t = \frac{t^2-1}{4}$, $d_t = \frac{1}{2}\log\frac{1+t}{1-t}$; then for all $x \in K$,

$$\lim_{t<1,\,t\to1} c_t f^1_{a_t,p_t}(x) + d_t = \lim_{t<1,\,t\to1} -\frac{1}{2}G_t(x) = \frac{1}{2}\log\left(\frac{1-|x|^2}{|x-\omega|^2}\right) = \langle x, \omega\rangle_H.$$

If there exist $c_1, c_2$ such that $|c_t f^1_{a_t,p_t}(x) + d_t|\ (= |G_t(x)|/2) \le c_2$ for all $t \in (c_1, 1)$ and $x \in K$, then by the dominated convergence theorem there exists $t$ such that $\|c_t f^1_{a_t,p_t} + d_t - \langle\cdot, \omega\rangle_H\|_{L^p(K,\mu)} < \epsilon$, which proves the lemma.
Note that

$$\frac{(1-t)A_t(x)}{(1+t)B_t(x)} = \frac{2|x-\omega|^4(1-t) + \sum_{j=1}^4 U_j(x,\omega)(1-t)^{j+1}}{-2|x-\omega|^2(|x|^2-1)(1-t)(1+t) + \sum_{l=2}^4 L_l(x,\omega)(1-t)^l(1+t)} = \frac{2|x-\omega|^4 + \sum_{j=1}^4 U_j(x,\omega)(1-t)^j}{2|x-\omega|^2(1-|x|^2)(1+t) + \sum_{l=2}^4 L_l(x,\omega)(1-t)^{l-1}(1+t)},$$

where the $U_j$ and $L_l$ are continuous functions defined on $K \times \{\omega\}$. There exist positive numbers $c_3, c_4$ and $c_1 \in (0, 1)$ such that for all $x \in K$ and $t \in (c_1, 1)$,

$$c_3 \le 2|x-\omega|^4 \le c_4, \quad c_3 \le 2|x-\omega|^2(1-|x|^2)(1+t) \le c_4, \quad \frac{c_3}{2} \ge \left|\sum_{j=1}^4 U_j(x,\omega)(1-t)^j\right|, \quad \frac{c_3}{2} \ge \left|\sum_{l=2}^4 L_l(x,\omega)(1-t)^{l-1}(1+t)\right|.$$

Therefore, for $x \in K$ and $t \in (c_1, 1)$, we have

$$\frac{c_3}{2c_4 + c_3} \le \frac{(1-t)A_t(x)}{(1+t)B_t(x)} \le \frac{2c_4 + c_3}{c_3}.$$

This implies that for $t \in (c_1, 1)$, $G_t|_K$ and therefore $|c_t f^1_{a_t,p_t} + d_t|\,|_K$ are uniformly bounded, which finishes the proof of the lemma.

A.12 THE FIRST MNIST CLASSIFIER IN 6.1

At the preprocessing stage, we compute the projection of the 28×28 input pattern onto the first 40 principal components and then scale them so that the scaled 40-dimensional PCA features lie within the unit ball. Our network is:
1. Input layer: scaled 40-dimensional PCA features;
2. First layer: 40-input/1000-output horocycle layer (tanh activation);
3. Last layer: 1000-input/10-output affine layer;
4. Loss: cross-entropy loss.
We take learning rate = 1, learning rate decay = 0.999, and batch size = 128, and run the experiment three times. The average test error rate after 600 epochs is 1.96%. The PCA follows LeCun et al. (1998)(C.3), where 40 PCA components are used for the quadratic network. The quadratic network has a structure similar to ours, because our neurons are constructed from quotients of quadratic functions followed by a log.

A.13 A HOROCYCLE LAYER FOLLOWED BY AN MLR CAN APPROXIMATE THE CLASSIFICATION FUNCTION

Suppose the MNIST classification function $\mathcal{M}$ is defined on $\cup_{j=0}^9 K_j \subset H^{40}$, where the $K_j$ are relatively compact and $\mathcal{M}|_{K_j} = j$. By Theorem 2, for $0\le j\le9$ there exist $F_j(x) = \sum_{i=1}^{N_j}\alpha_{j,i}\rho(\lambda_{j,i}\langle x, \omega_{j,i}\rangle_H+b_{j,i})$ such that $F_j$ approximates $I_{K_j}$, where $I$ is the indicator function. Therefore, a network whose first (horocycle) layer is given by $\rho(\lambda_{j,i}\langle x, \omega_{j,i}\rangle_H+b_{j,i})$ ($0\le j\le9$, $1\le i\le N_j$), followed by a classical MLR with parameters $\alpha_{j,i}$ ($0\le j\le9$, $1\le i\le N_j$) (with arg max for prediction), approximates $\mathcal{M}$.

A.14 THE SECOND MNIST CLASSIFIER IN 6.1

At the preprocessing stage, we augment the data by shifting each image one step toward each of its four corners, so that our training set has 300,000 examples. Our network is:
1. Input layer: (28, 28, 1);
2. First block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
3. Second block: 64-filter 3×3 convolution, ReLU, BatchNorm;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
5. Fourth block: 128-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
6. Fifth block: FC 1000, ReLU, BatchNorm;
7. Last block: 1000-input/10-output Poisson layer, sigmoid, BatchNorm;
8. Loss: cross-entropy loss.
For optimization, we use Adam (Kingma & Ba, 2015). The batch size is 128 for the first 5 epochs and 1024 for the next 15 epochs. After 5 epochs, we set the $\omega_i$ in the Poisson layer to be non-trainable. We train our network five times; the average test error rate after 20 epochs is 0.35%. The $\epsilon$ in $\frac{|w|^2-|x|^2}{|x-w|^2+\epsilon}$ is an important hyperparameter for numerical stability. We train this MNIST model with $\epsilon \in \{10^{-1}, 10^{-2}, 10^{-4}, 10^{-6}, 10^{-8}, 10^{-10}, 10^{-20}\}$; all runs show robust performance.
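A Keras sketch of a Poisson layer and a model of this shape follows; it is a sketch under stated assumptions, not the authors' code: the layer fixes $\lambda = -1$ as in (13), and the initializers, the $\epsilon$, and the freezing schedule for the $\omega_i$ are illustrative:

```python
import tensorflow as tf

class PoissonLayer(tf.keras.layers.Layer):
    """A concatenation of Poisson neurons with lambda fixed to -1 (cf. (13))."""
    def __init__(self, units, eps=1e-6, **kwargs):
        super().__init__(**kwargs)
        self.units, self.eps = units, eps

    def build(self, input_shape):
        n = int(input_shape[-1])
        self.w = self.add_weight(name="w", shape=(self.units, n),
                                 initializer="random_normal")
        self.b = self.add_weight(name="b", shape=(self.units,), initializer="zeros")

    def call(self, x):
        # (|w_j|^2 - |x|^2) / (|x - w_j|^2 + eps), one value per Poisson neuron
        num = tf.reduce_sum(self.w ** 2, axis=-1) - tf.reduce_sum(x ** 2, axis=-1, keepdims=True)
        den = tf.reduce_sum((x[:, None, :] - self.w) ** 2, axis=-1) + self.eps
        return -num / den + self.b  # lambda = -1, plus bias

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPool2D(), tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"), tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(), tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(), tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1000, activation="relu"), tf.keras.layers.BatchNormalization(),
    PoissonLayer(10), tf.keras.layers.Activation("sigmoid"),
    tf.keras.layers.BatchNormalization(),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```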
A.15 EXPERIMENT OF THE POINCARÉ TREE CLASSIFICATION TASK

Given a Poincaré embedding (Nickel & Kiela, 2017) $PE : \{\text{WordNet noun}\} \to H^D$ of the 82,114 WordNet noun nodes and a node $x$, the task is to classify all other nodes as being part of the subtree rooted at $x$ or not (Ganea et al., 2018a). Our model is a logistic regression, where the horocycle feature $p \in \{\text{WordNet noun}\} \mapsto h_{PE(x)}(PE(p)/s)$ ($s$ is a hyperparameter in $[1, 1.5]$) is the only predictor, and the dependent variable is whether $p$ is in the subtree rooted at $x$. Let $P$ be the set of all nodes in the Poincaré embedding, and let $p$ range over $P$.
1. Input: $h_{PE(x)}(PE(p)/s)$ ($s$ is a hyperparameter);
2. Only layer: 1-input/1-output affine layer (two parameters: one for the input, one for the bias);
3. Loss: logistic (with target 1 if $p$ is in the tree rooted at $x$, and 0 otherwise).
In each training run, $x$ is one of {animal, group, location, mammal, worker}, dim is one of {2, 3, 5, 10}, and the Poincaré embeddings come from the animation_train.py of Ganea et al. (2018b)⁴ (with tree=wordnet_full, model=poincare, dim=dim, and seed randomly chosen in {7, 8, 9}). All nodes in the subtree rooted at $x$ are divided into training nodes (80%) and test nodes (20%); the same splitting procedure applies to the remaining nodes. We choose the $s$ with the best training F1 and then record the corresponding test F1. For each $x$ and dim, we run the training 100 times. The average test F1 classification scores are recorded in Table 2. The horocycle feature performs well here because it is compatible with the Poincaré embedding algorithm. Let $x$ be a node that is not at the origin. The Poincaré embedding algorithm appears to pull all nodes of the subtree rooted at $x$ toward the direction $x/|x|$; therefore $y \mapsto \langle y, x/|x|\rangle_H$ is a suitable feature for this task.

⁴https://github.com/dalab/hyperbolic_cones

A.16 END-BASED CLUSTERING IN H²

For MNIST, at the preprocessing stage we augment the data by shifting each image one step toward each of its four corners, so that our training set has 300,000 examples. Our network for the $H^2$ embedding of the MNIST dataset is:
1. Input layer: (28, 28, 1);
2. First block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
3. Second block: 64-filter 3×3 convolution, ReLU, BatchNorm;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
5. Fourth block: 128-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
6. Fifth block: FC 1000, ReLU, BatchNorm;
7. Sixth block: FC 2, ReLU, BatchNorm, Exp;
8. Last block: 2-input/10-output horocycle layer, sigmoid;
9. Loss: cross-entropy loss,
where Exp is the exponential map $T_o H^2 (= \mathbb{R}^2) \to H^2$. We apply the data augmentation as in A.14. In optimization, the learning rate is 0.1, the learning rate decay is 0.99, the batch size is 128, and the number of epochs is 50. Our network, data augmentation, and optimization for the $H^2$ embedding of the Fashion-MNIST dataset are exactly the same as those for MNIST.

For MNIST and Fashion-MNIST we use sphere optimization. We would like to remark that sphere optimization has interesting new features. Because $S^1$ is compact, any continuous function $f$ attains a maximum at some $x = \arg\max_{S^1} f$. The derivative of $f$ at $x$ vanishes, so the usual optimization algorithm for finding the minimum will fail in the general case. In our experiments, we solve this problem with the following tricks:
1. Observation: if the examples of class $C_\alpha$ are all close to some $\omega \in S^1$, and the end prototype $\omega_\alpha$ of class $C_\alpha$ is around $-\omega$, then $\omega_\alpha$ is a maximum point of the loss function and cannot be improved by normal SGD. We solve this problem by adopting an idea (a supervised variant) of k-means clustering: in each of the early epochs, optimization consists of two parts. In the first part, normal SGD applies; in the second part, we move the end prototypes $\omega_i$ to the average direction of their classes (using the training data).
2. Observation: if classes $C_\alpha$ and $C_\beta$ are both close to some $\omega \in S^1$, and the end prototypes $\omega_\alpha, \omega_\beta$ are also both around $\omega$, then all points of classes $C_\alpha$ and $C_\beta$ and the end prototypes $\omega_\alpha, \omega_\beta$ are pulled toward $\omega$ by SGD, and the network finally cannot distinguish $C_\alpha$ from $C_\beta$. We solve this problem by adding a loss when two prototypes are close.
With these small tricks, our 2D end-based clustering algorithm is very stable for MNIST and Fashion-MNIST. We ran it on MNIST 10 times, and every run reached a test accuracy around 99% within 20 epochs. Suppose the classification task has $M$ classes and the prototype of the $i$-th class is $\omega_i$. The additional loss function for the second observation is

$$i = \mathrm{RandomChoice}(\{1, \dots, M\}), \quad j = \mathrm{RandomChoice}(\{1, \dots, M\}\setminus\{i\}), \quad d = (\omega_i, \omega_j)_E, \quad L_{\mathrm{Observation2}} = \mathrm{arctanh}(10 \times \mathrm{ReLU}(d - 0.9 - \epsilon)),$$

where $\epsilon$ is a small constant for numerical stability.

For CIFAR-10, our network for the $H^2$ embedding of the CIFAR-10 dataset is:
1. Input layer: (32, 32, 3);
2. First block: ResNet-32 / 128 outputs;
3. Second block: FC 2, ReLU, BatchNorm, Exp;
4. Last block: 2-input/10-output horocycle layer;
5. Loss: cross-entropy loss.
In the data augmentation, we apply horizontal/vertical shifts and horizontal flips. We use Adam. The batch size is 32 for the first 100 epochs and 1024 for the next 50 epochs. The weights of the horocycle layer are fixed at the beginning of the training and are non-trainable, which follows an idea of Mettes et al. (2019).

A.17 POISSON MLR

For CIFAR-10, we use a ResNet-32 structure as the feature descriptor, and we apply horizontal/vertical shifts and horizontal flips. Our network is:
1. Input layer: (32, 32, 3);
2. First block: ResNet-32 / 128 outputs;
3. Second block: FC 128, ReLU, BatchNorm;
4. Last block: 128-input/10-output Poisson layer, BatchNorm;
5. Loss: cross-entropy loss.
We use Adam. The batch size is 32 for the first 80 epochs and 1024 for the next 20 epochs. The test accuracy is greater than 93.5%.

For the flowers classification task (TensorFlow): the dataset of 3,670 photos of flowers contains 5 classes: daisy, dandelion, roses, sunflowers, and tulips. The Keras model is:
1. Input layer: (180, 180, 3);
2. First block: 16-filter 3×3 convolution, ReLU, 2×2 max-pooling;
3. Second block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling;
5. Fourth block: FC 128, ReLU;
6. Last block: 128-input/5-output FC layer;
7. Loss: cross-entropy loss.
Our Poisson model is identical except for the last block, which is: BatchNorm, 128-input/5-output Poisson layer, sigmoid, BatchNorm. We use 2936 photos for training and
1. What is the main contribution of the paper regarding neural networks and hyperbolic space? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of theoretical results and experimental performance? 3. Do you have any concerns or questions regarding the writing style, exposition, and presentation of the paper? 4. How does the reviewer assess the significance and impact of the paper in the field of representation learning? 5. Are there any specific aspects of the paper that need improvement or further discussion, such as the restructuring of the paper, motivation of results, and explanation of experimental settings?
Review
Review Summary: This paper proposes new neural models for hyperbolic space which, unlike previous hyperbolic NN works, rely on the notion of the horocycle in the Poincaré disk. This novel framework has connections to spectral learning in hyperbolic space. Representation theorems à la Cybenko for layers constructed from these neurons are presented. Finally, various experiments on clustering and classifying datasets using these neurons to generate hyperbolic embeddings are presented. With the caveat that this paper is outside my main area of expertise, I must say that I have mixed feelings about it. On the one hand, I want to like it: the topic is quite interesting and timely, the theoretical connections are intriguing, the representation results seem quite remarkable, and the experiments seem to suggest (modulo some questions I have, see below) that this is a promising approach. On the other hand, the writing, dry exposition, utter lack of discussion or intuition for most results, and the confusing setup of the experiments make it hard to produce a confident assessment. In addition, these drawbacks probably imply that the paper might be accessible only to a small niche of the community and might have very limited impact. For the reasons above, I'm leaning towards rejection, but I think this could be a very solid paper if (i) the results hold, (ii) the writing and exposition are improved, and (iii) the results are better discussed and motivated. Strengths: An interesting problem in a flourishing but not-yet-too-crowded corner of the representation learning literature. Seemingly very strong theoretical results (representation theorems for neural nets in hyperbolic space). Seemingly very convincing experimental results, outperforming alternative methods by wide margins. Weaknesses: The paper needs thorough rewriting. There are various typos, confusing grammatical choices, and, overall, confusing writing. Beyond grammar, the paper needs to be written with an ICLR audience in mind, most of which might not be experts in hyperbolic geometry, so more hand-holding is needed. The paper needs restructuring. Too much space is devoted to listing prior results without further explanation or discussion (e.g., Theorem 1: what are the importance and implications of this result?). In turn, the contribution of this paper, mostly contained in Section 4.2, could benefit from more detailed discussion and motivation. In particular, I find sentences like "Suppose this Poisson neuron is non-trainable ..." very confusing; I have no idea what this sentence is trying to convey. The results in Section 5 need more discussion. Theorem 2 at least is reminiscent of other representation theorems in the NN literature, but what is the reader supposed to take away from Lemma 1 and Corollary 1? Instead of providing a full proof of Theorem 2, I would suggest deferring it to the appendix and using the additional space to discuss the importance of all these results. The experiments seem quite impressive, but then again, I'm not sure whether I can gauge their soundness with confidence. Many details about the experimental setting are either missing or not well explained. For example: Are the G/S/H models in Table 1 all directly comparable? Do they have a similar number of parameters? Similar training? The reported advantage of H over G/S seems to be mostly prominent in low dimensions of the Poincaré ball; I would like to see a discussion of why this is the case.
Given how much variance the results in Table 1 seem to have, standard deviations or error bars should be reported along with the means. It is not clear what exactly is meant by test error on a clustering task in Section 6.2; how are the train/test samples used? Many experimental/design choices are not well justified, e.g., why is the input layer scaled down with PCA in 6.3? Other issues: The notion of end prototypes seems quite interesting, but I feel it could be better explained and elaborated on, e.g., at the end of Section 4. In Theorem 2, it isn't clear where $K$ comes into play in the definition of $F$ (given that the density argument is on $L^p(K, \mu)$, I suppose $F$ is only defined for $x \in K$?).
We propose a type of clustering that embeds high-dimensional data in Hn and places prototypes in Sn−1. Figure 3[right] is an example for n = 2. For ω ∈ Sn−1 and any b ∈ R, the function x ∈ Hn 7→ − log ( 1−|x|2 |x−ω|2 ) + b measures the relative distance of Hn from ω in Gromov’s bordification theory (Bridson & Haefliger (2009)[II.8], A.18). Moreover, we define Dist : Hn ×Sn−1 ×R→ R by Dist(x, ω, b) = − log ( 1− |x|2 |x− ω|2 ) + b = −2〈x, ω〉H + b. (10) It is a relative distance function, and this is why Dist may assume negative values and why there is a bias term b in (10). Consider classes Cls = {C1, C2, . . . , CM} and labeled training examples {(X1, Y 1), . . . , (XN , Y N )}, where Xi ∈ RD are D-dimensional input features and Y i ∈ {1, 2, . . . ,M}. Each example Xi belongs to the class CY i . In light of (10), our goal is to find a neural network NNθ : RD → Hn that is parameterized by θ, prototypes ω1, . . . , ωM ∈ Sn−1, and real numbers b1, . . . , bM ∈ R such that # { 1≤i≤N : Y i = arg min 1≤j≤M ( Dist(NNθ(X i), ωj , bj) )} N (11) is maximized. We call {NNθ(Xj) : 1 ≤ j ≤ N} the end-based clustering and ωi end prototypes (in hyperbolic geometry, the end is an equivalence class of parallel lines in Figure 2[left]). In experiments, we take NNθ = Exp ◦ NN′θ, where NN ′ θ : R D → Rn is a standard neural network parameterized by θ and Exp : Rn → Hn is the exponential map of the hyperbolic space. Horocycle layer, horocycle multiple linear regression (MLR) and geodesic decision hypersurfaces We call a concatenation of (2) a horocycle layer, and we shall carefully describe a prototypical learning framework for end-based clusterings. Using the same notions as in the previous paragraph, the classification task has M classes, and NNθ = Exp ◦NN′θ : RD → Hn is a deep network. For prototypes ω1, . . . , ωM ∈ Sn−1, real numbers b1, . . . , bM ∈ R, and any exampleX , our feedforward for prediction will be x = NNθ(X), (Feature descriptor) SCj(X) = −Dist(x, ωj , bj), (Scores; Similarity) X ∈ Cargmax 1≤j≤M (SCj(X)). (Classifier) The goal is to maximize the accuracy (11), and then we need a loss function for the backpropagation. Following the convention of prototypical networks (Snell et al., 2017; Yang et al., 2018), we choose an increasing function ρ (in our experiments, ρ(x) = x or ρ = tanh. 2) and let the distribution over classes for an input X (with label Y ) be pθ(Y = Cj |X) ∝ e−ρ(Dist(NNθ(X),mj ,bj)) = e−ρ(−SCj(X)). 2One often takes ρ(x) = x2 in metric learning, which is improper here because Dist(x) could be negative. Therefore, given a batch of training examples, the loss function is L = − ∑ (Xj ,Y j)∈Batch log pθ(Y = CY j |Xj) #Batch . (12) The training proceeds by minimizing L, and we call this framework a horocycle MLR. The set of parameters of the framework is {θ} ∪ {ω1, . . . , ωM} ∪ {b1, . . . , bM}. It is worth mentioning that decision boundaries of the horocycle MLR are geodesics, which follows from SCi(X)=SCj(X)⇐⇒ log ( 1−|x|2 |x−ωi|2 ) −bi = log ( 1−|x|2 |x−ωj |2 ) −bj ⇐⇒ |x−ωi| |x−ωj | = e bj−bi 2 and the theorem of Apollonian circles (A.7). Poisson neuron and Poisson multiple linear regression (MLR) Although 〈x, ω〉H (4) is wellmotivated by the theory of eigenspaces (9) and fits naturally into metric learning (see 10 or also Corollary 1), it is only defined on Hn. Some readers might not be convinced that the neuron has to be defined on hyperbolic spaces. 
Therefore, we try to remove the log in (4) and define the Poisson neuron model by Pρw,λ,b(x) = ρ ( λ |w| 2−|x|2 |x−w|2 + b ) for w ∈ Rn, λ, b ∈ R, which is well-defined on Rn\{w}. Notice that if |x| < |w| then |w| 2−|x|2 |x−w|2 = e 2〈x/|w|,w/|w|〉H . In A.8, Figure 7 illustrates an example of a Poisson neuron on R2. In the implementation, we take |w| 2−|x|2 |x−w|2+ for |w|2−|x|2 |x−w|2 , where is a small constant for numerical stability. We call a concatenation of Poisson neurons a Poisson layer, and we use it with a deep neural network NNθ : RD → Rn to construct the Poisson MLR, which is similar to the horocycle MLR. Let w1, . . . , wM ∈ Rn and b1, . . . , bM ∈ R, the feedforward for prediction of our framework is x = NNθ(X), SCj(X) = BatchNorm(P ρ wj ,−1,bj (x)), X ∈ Cargmax 1≤j≤M (SCj(X)). (13) We let the pθ(Y = Cj |X) ∝ eSCj(X) and take (12) as the loss. This framework is called a Poisson MLR. We use the usual optimization algorithms to update parameters in the Poisson neuron. The BatchNorm(Ioffe & Szegedy, 2015) seems crucial for (13) in the experiment. Figure 4 illustrates that high-confidence prediction regions (deep red areas) of the Poisson MLR are compact sets, in contrast to classical classifiers Hein et al. (2019)[Theorem 3.1]. We shall use this figure to explain an experiment in Section 6.4. 5 REPRESENTATIONAL POWER In this section, ρ is a continuous sigmoidal function (Cybenko, 1989), ReLU(Nair & Hinton, 2010), ELU(Clevert et al., 2016), or Softplus(Dugas et al., 2001). We remind the reader that ρ is sigmoidal if lim t→∞ ρ(t) = 1 and lim t→−∞ ρ(t) = 0. The following theorem justifies the representational power of horocycle neurons. Theorem 2. Let K be a compact set in Hn, and 1≤p<∞. Then finite sums of the form F (x) = N∑ i=1 αiρ(λi〈x, ωi〉H+bi), ωi∈Sn−1, αi, λi, bi∈R (14) are dense in Lp(K,µ), where µ is either dVol (5) or the induced Euclidean volume. We provide a sketch of the proof here and go through the details in A.9. It suffices to prove the theorem for a sigmoidal function ρ and µ = dVol , as other cases follow from this one. Assume that these finite sums are not dense in Lp(K, dVol). By the Hahn-Banach theorem, there exists some nonzero h∈Lq(K, dVol), where q=p/(p− 1) if p>1 and q=∞ if p=1, such that ∫ K F (x)h(x)dVol(x) = 0 for all finite sums of the form (14). Extend h to be a function H that is defined on Hn by assigning H(x)=h(x) if x∈K and H(x)=0 if x∈Hn\K. Using the property of sigmoidal functions, the bounded convergence theorem, and the integral formula (7), we prove that the integration of H on almost every horocycle is zero. By the injectivity Theorem 1, H is almost everywhere zero, which contradicts our assumption and completes the proof. In A.10, we shall prove the same result for Poisson neurons. In A.11, we prove the following lemma, which demonstrates a close relationship between horocycle neurons and the widely used f1a,p (3). Lemma 1. Let K be a compact set in Hn, ω ∈ Sn−1, and > 0. There are c, d ∈ R, p ∈ Hn, and a ∈ Tp(Hn) such that the function D(x) = cf1a,p(x) + d− 〈x, ω〉H satisfies ||D||Lp(K,dVol) < . This lemma suggests that 〈·, ω〉H is a boundary point of some “compactification” of the space of f1a,p. The above lemma together with Theorem 2 implies Corollary 1. Let K be a compact set in Hn and 1≤p<∞. Finite sums of the form F (x) = N∑ i=1 αiρ(cif 1 ai,pi(x) + di), pi ∈ H n, ai ∈ Tpi(H n), αi, ci, di ∈ R, are dense in Lp(K,µ), where µ = dVol or µ is the induced Euclidean volume. 
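To make the approximating family of Theorem 2 concrete alongside the implementation notes of Section 4.2, the following is a minimal NumPy sketch of a finite sum of horocycle neurons with the ε-stabilized kernel. The function names (`horocycle_kernel`, `horocycle_sum`), the choice ρ = tanh, and the example parameters are our own illustration under these assumptions, not the paper's released code.

```python
import numpy as np

def horocycle_kernel(x, omega, eps=1e-6):
    """<x, omega>_H = 0.5 * log((1 - |x|^2) / |x - omega|^2), with the
    epsilon-stabilization of Section 4.2. x: (..., n) points in the
    Poincare ball; omega: (n,) unit vector on S^{n-1}."""
    num = 1.0 - np.sum(x * x, axis=-1)
    den = np.sum((x - omega) ** 2, axis=-1) + eps
    return 0.5 * np.log(num / den + eps)

def horocycle_sum(x, alphas, lambdas, omegas, biases, rho=np.tanh):
    """F(x) = sum_i alpha_i * rho(lambda_i * <x, omega_i>_H + b_i),
    the finite sums that Theorem 2 shows to be dense in L^p(K)."""
    feats = np.stack([horocycle_kernel(x, w) for w in omegas], axis=-1)
    return np.sum(alphas * rho(lambdas * feats + biases), axis=-1)

# Example: N = 3 horocycle neurons on H^2.
rng = np.random.default_rng(0)
omegas = rng.normal(size=(3, 2))
omegas /= np.linalg.norm(omegas, axis=1, keepdims=True)  # project onto S^1
x = np.array([[0.3, -0.4], [0.1, 0.2]])                  # points inside the ball
print(horocycle_sum(x, np.ones(3), np.ones(3), omegas, np.zeros(3)))
```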
This result provides novel insights into the hyperbolic neural network (Ganea et al., 2018a), gyroplane layer (Mathieu et al., 2019), and Poincaré FC layer (Shimizu et al., 2020). Although level sets of f1a,p are hypercycles, our proof of Lemma 1 relies on the theory of horocycles. It would be interesting to have more natural approaches to treat the expressivity of f1a,p. 6 EXPERIMENTS In this section, we first play with the MNIST toy. Next, we apply a horocycle feature to the Poincaré embedding subtree classification task. After that, we construct 2-D clusterings of image datasets by using the horocycle MLR. Finally, we provide evidence for further possible applications of the Poisson MLR. We use the framework or some functions of Tensorflow, Keras, and scikit-learn (Abadi et al., 2015; Chollet et al., 2015; Pedregosa et al., 2011). 6.1 MNIST The MNIST (LeCun et al., 1998) task is popular for testing hyperbolic learning tools (Ontrup & Ritter, 2005; Nagano et al., 2019; Mathieu et al., 2019; Grattarola et al., 2019; Ovinnikov, 2020; Khrulkov et al., 2020). We train two different classifiers. A.12, A.14, and code contain details. The first one is a single horocycle layer followed by the softmax classifier. The average test error rate after 600 epochs is 1.96%, and Theorem 2 provides the rationale for this experiment (A.13). The second one is a Poisson MLR. It is the best hyperbolic geometry related MNIST classifier (Table 1). In this table, Ontrup & Ritter (2005) uses the hyperbolic SOM, Grattarola et al. (2019) uses the adversarial autoencoder, and Khrulkov et al. (2020) uses the hyperbolic MLR. Our experiment performs well on MNIST suggests that horocycle and Poisson neurons are computationally efficient and easily coordinate with classical learning tools (such as the convolutional layer and the softmax). 6.2 POINCARÉ EMBEDDING SUBTREE CLASSIFICATION Given a Poincaré embedding (Nickel & Kiela, 2017) PE : {WordNet noun} → HD of 82114 nouns and given a node x ∈ {WordNet noun}, the task is to classify all other nodes as being part of the subtree rooted at x (Ganea et al., 2018a). Our model is logistic regression, where the horocycle feature p ∈ {WordNet noun} 7→ hPE(x)(PE(p)/s) (s is a hyperparameter lying in [1, 1.5]) is the only predictor, and the dependent variable is whether p is in the subtree rooted at x. The decision hypersurface of this model is a horocycle, as illustrated in Figure 3 (left). In the experiment, we pre-train three different Poincaré embeddings3 in each of H2,H3,H5,H10. For each x ∈ {animal, group, location, mammal, worker} and D ∈ {2, 3, 5, 10}, we randomly select one of three pre-trained Poincaré embedding PE : {WordNet noun} → HD and then test the model. Table 2 reports the F1 classification scores and two standard deviations of 100 trials for each {x,D}. Different Poincaré embeddings account for the most variance of the performance. Our model is different from the existing ones. Firstly, we take the horocycle as the decision hypersurface, while others take the geodesic. Secondly, we train a logistic regression on top of the horocycle feature attached to PE(x), which is efficiently calculated, while others train the hyperbolic MLR with different parametrizations. On the number of parameters, we have three (independent of D), Ganea et al. (2018a) has 2D, and Shimizu et al. (2020) has D + 1. The number of parameters explains why our model is prominent in low dimensions. 
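As a sketch of the Section 6.2 model: a logistic regression whose single predictor is the horocycle feature h_{PE(x)}(PE(p)/s). The array names (`emb`, `root`, `labels`) and the default value of s are placeholders for illustration; the paper tunes s in [1, 1.5] against pre-trained Poincaré embeddings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def horocycle_feature(y, x, eps=1e-6):
    """h_x(y) = <y, x/|x|>_H, the horocycle feature attached to x (Section 4.2)."""
    omega = x / np.linalg.norm(x)
    num = 1.0 - np.sum(y * y, axis=-1)
    den = np.sum((y - omega) ** 2, axis=-1) + eps
    return 0.5 * np.log(num / den + eps)

def fit_subtree_classifier(emb, root, labels, s=1.2):
    """Logistic regression with the single predictor h_{PE(x)}(PE(p)/s).
    emb: (N, D) Poincare embeddings of all nouns; root: (D,) embedding
    of the subtree root x; labels: (N,) subtree membership in {0, 1}."""
    feats = horocycle_feature(emb / s, root).reshape(-1, 1)
    return LogisticRegression().fit(feats, labels)
```

Scaling by s keeps the scaled embeddings inside the unit ball (since s ≥ 1), so the feature stays well-defined.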
6.3 END-BASED CLUSTERING FOR 2D DIMENSION REDUCTION In this experiment, we use the horocycle MLR (Section 4.2) to construct end-based clusterings NNθ : R D → H2 for MNIST, Fashion-MNIST(Xiao et al., 2017), and CIFAR-10(Krizhevsky, 2012). We take NNθ = Exp ◦ NN′θ, where Exp is the exponential map of H2 and NN ′ θ : R D → R2 is a network with four convolutional blocks for MNIST/Fashion-MNIST or a ResNet-32 structure for CIFAR-10. A.16 and code contain details. 3https://github.com/dalab/hyperbolic_cones Figure 5 illustrates end-based clusterings for MNIST, Fashion-MNIST, and CIFAR-10, with performance reported in the caption. Our accuracy for Fashion-MNIST is 8% higher than all numbers presented in McInnes et al. (2020). Moreover, Table 3 compares the numbers of Yang et al. (2018); Ghosh & Kirby (2020), and ours for MNIST, and our methods are similar. We all use convolutional networks as the (Feature descriptor) and prototype-based functions as the loss. However, Yang et al. (2018); Ghosh & Kirby (2020) use the center-based prototype loss, while we use the end-based (12). Yang et al. (2018)[Figure 1] points out that the traditional CNN is good at linearly separating feature representations, but the learned features are of large intra-class variations. The horocycle MLR leads to the inter-class separability in the same way (angle accounts for label difference) a traditional CNN does. At the same time, it also obtains intra-class compactness (Figure 5). 6.4 POISSON MLR Using a Poisson MLR whose feature descriptor is a ResNet-32 structure, we obtain a classifier with a test error rate of 6.46% on CIFAR-10. It is on par with other methods with similar network structures (Yang et al., 2018). Moreover, we apply Poisson MLR to the classification task of flowers (Tensorflow), which is a typical example of overfitting. Replacing the MLR part of the Keras model (Tensorflow) with a Poisson MLR, the new Poisson model shows better generalization performance (Figure 6). A.17 and code contain the details. This subsection provides evidence for further applications of horocycles. 7 CONCLUSION Based on the spectral theory of hyperbolic spaces, we introduce several horocycle-related learning tools. They find applications in the hyperbolic neural networks, the Poincaré embedding subtree classification task, and the visualization and classification of image datasets. We give an existential proof of a universal approximation theorem for shallow networks constructed by horocycle neurons or f1a,p. Hopefully, it will trigger further research on the expressivity problems, such as constructive approaches, quantitative results, and benefit of depth (Mhaskar & Poggio, 2016), on horocycle neurons, f1a,p, and similar functions on more general manifolds. A APPENDIX A.1 NOTATIONS AND SYMBOLS Default Notations Notation Description Related formula R The set of real numbers Rn n dimensional Euclidean space x ∈ Rn, x = (x1, . . . , xn) (·, ·)E Euclidean inner product x ∈ Rn, y ∈ Rn, (x, y)E = ∑n i=1 xiyi 〈·, ·〉H Hyperbolic analogue of (·, ·)E x ∈ Hn, y ∈ Sn−1, 〈x, ω〉H = 12 log 1−|x|2 |x−ω|2 | · | Euclidean norm x ∈ Rn, |x| = √ (x, x)E Hn n dimensional hyperbolic space as a set, Hn = {x ∈ Rn : |x| < 1} Tp(X) Tangent space of X at p T (X) Tangent space of X T (X) = ∪p∈XTp(X) ds2Hn The canonical metric on Hn with curva- ture -1 ds2Hn= ∑n i=1 4(1−|x| 2)−2dx2i dVol Riemannian volume on Hn dVol = 2n(1− |x|2)−ndx1 . . . 
dxn Lp(K, dVol) Lp space Lp(K, dVol) = { f | ∫ K |f |pdVol <∞ } || · ||Lp(K,dVol) Lp norm f measurable on K, ||f ||Lp(K,dVol) = (∫ K |f |pdVol ) 1 p Sn−1 n− 1 dimensional sphere as a set, Sn−1 = {x ∈ Rn : |x| = 1} P (·, ·) Hyperbolic Poisson kernel x ∈ Hn, ω ∈ Sn−1, P (x, ω) = ( 1−|x|2 |x−ω|2 )n−1 f1a,p Model in the hyperbolic MLR f1a,p(x) = 2|a| 1−|p|2 sinh −1 ( 2(−p⊕x,a)E (1−|−p⊕x|2)|a| ) dHn The hyperbolic distance function Ξ The space of horocycles Ξω The set of horocycles that are tangential to Sn−1 at ω LX Laplace-Beltrami operator on X hx The horocycle feature function hx(y) = 〈y, x/|x|〉H ξλ,ω The unique horocycle connecting ω and tanhλ/2 · ω. MLR Multiple linear regression dim dimension IK the indicator function of K Dist Relative distance function Dist(x, ω, b) = −2〈x, ω〉H + b Cls Set of classes Cls = {C1, C2, . . . , CM} NNθ A network parameterized by θ NN′θ A network parameterized by θ Exp Exponential map of the hyperbolic space (X1, Y 1) Labeled sample SCj Score function pθ(Y = Cj |X) Prediction probability L Loss function Pρw,λ,b Poisson neuron P ρ w,λ,b(x) = ρ ( λ |w| 2−|x|2 |x−w|2 + b ) PE Poincaré embedding Conventional symbols Symbol In most cases it refers n,m, i integers x, y, w points in Rn or Hn, or real numbers o the origin of Rn or Hn b, c, d, α, δ real numbers λ real or complex number t real number, represent the timestamp in optimization ω point in Sn−1 ρ an activation function f, g functions K a compact set X a manifold p a point in Hn or on a manifold a an element in Tp(Hn) ξ a horocycle µ a measure L a family of geodesics lines l a geodesics line U a set in Hn F, h,H functions M number of classes D dimension A.2 PROOF OF THE ISOMETRY Given ω∈Sn−1 and λ∈R, we let ξλ,ω the unique horocycle that connects ω and tanh (λ/2) · ω. The length of any geodesic (that ends at ω) line segment cut by ξλ1,ω and ξλ2,ω equals |λ1 − λ2|. This fact is obvious in the half-space model. There is a Riemannian isometry F : {z ∈ Rn : |z| < 1} → {(x1, · · · , xn) : x1 > 0} (the latter is with the metric ds2 = dx 2 1+···+dx 2 n x21 ) such that F (ω) = ∞ and F (o) = (1, 0, . . . , 0). Using dHn(o, tanh(λi/2)ω) = |λi|, d{(x1,··· ,xn):x1>0}((1, 0, . . . , 0), (e±λi , 0, . . . , 0)) = |λi|, F (ω) =∞ and F (o) = (1, 0, . . . , 0), we have F (tanh(λi/2)ω) = (eλi , 0, . . . , 0). Therefore, F maps ξλi,ω to {(x1, x2, . . . , xn) : x1 = eλi}. Any geodesic (that ends at ω) line segment cut by ξλ1,ω and ξλ2,ω is mapped by F to {(t, α2, . . . , αn) : (t− eλ1)(t− eλ2) < 0} for some fixed αj . It is easy to check the length of this segment with respect to dx 2 1+···+dx 2 n x21 (as αi are constants, the metric reduces to dx21/x 2 1 on this segment) is |λ1 − λ2|. A.3 PROOF OF (6) Because x is on ξλ which is a sphere with center 1+tanhλ/2 2 ω and radius 1−tanhλ/2 2 , we have∣∣∣x− 1+tanhλ/22 ω∣∣∣2 = ∣∣∣ 1−tanhλ/22 ∣∣∣2, which leads to |x|2−(1+tanhλ/2)(x, ω)E+tanhλ/2|ω|2 = 0, and then 1+tanhλ/22 |x− ω| 2 = 1−tanhλ/22 (|ω 2| − |x|2), and finally 〈x, ω〉H = 12 log |ω|2−|x|2 |x−ω|2 = 1 2 log 1+tanhλ/2 1−tanhλ/2 = λ/2. A.4 ANOTHER PROOF OF THE INTEGRAL FORMULA (7) We use Hn for the upper half space model {(x1, · · · , xn) : x1 > 0} with the Riemannian volume dx1···dxnxn1 . Let ω = (∞, 0, . . . , 0) and o be (1, 0, . . . , 0) as in (A.2), then ξλ,ω = {(x1, x2, . . . , xn) : x1 = eλ}. The induced Riemannian metric on ξλ,ω (respectively volume dVolξλ,ω ) is dx22+···+dx 2 n e2λ (respectively dx2···dxn e(n−1)λ ). For any integral function f on Hn, using change of variable x1 = eλ∫ Hn f(x1, . . . 
, xn) dx1 · · · dxn xn1 = ∫ λ ∫ (x2,...,xn)∈Rn−1 f(eλ, x2, . . . , xn) dx2 · · · dxn enλ eλdλ = ∫ λ ∫ (x2,...,xn)∈Rn−1 f(eλ, x2, . . . , xn) dx2 · · · dxn e(n−1)λ dλ = ∫ λ ∫ ξλ,ω f(z)dVolξλ,ω (z)dλ. The above identity is equivalent to the integral formula ∫ Hn f(x)dVol(x) =∫ R (∫ ξλ,ω f(z)dVolξλ,ω (z) ) dλ. presented in (7), according to the Riemannian isometry in (A.2). A.5 THE HEURISTIC IS NOT A PROOF The spectral theory does not directly lead to universal approximation theorems because of the following: 1, superpositions in (1, 2) and (8, 9) are different (similarly, although another kind of superposition in Hilbert’s 13th problem (Hilbert, 1935; Arnold, 2009) was a driving force for universal approximation theorems (Nielsen, 1987), the former is hardly relevant for networks (Girosi & Poggio, 1989)); 2, desired representation properties of hyperbolic eigenfunctions are unknown, partially because Hn is non-compact; 3, results in spectral theory favor Hilbert spaces, while universal approximation theorems embrace more than L2 space. A.6 OPTIMIZATION The parameters update for the horocycle unit (2) involves the optimization problem on the sphere (for ω) and the hyperbolic space (for x). We use a standard algorithm of sphere optimization (Absil et al., 2008) to update ω, and in the supplement we present an optimization approach based on the geodesic polar-coordinates to update x. In the implementation of a horocycle layer, the forward propagation is trivial, while the backpropagation involves optimization on the sphere and hyperbolic space. In the following, η is the learning rate, αt is the value of α (α may be η, s, z, ω, . . .) at the t-th step, TpX is the tangent fiber at p, ∇ is the gradient, and∇H is the hyperbolic gradient. It suffices to consider the layer s=〈z, ω〉. Optimization on the sphere The parameter update of ω in s=〈z, ω〉 involves the optimization on the sphere. The projection of ∂Lθ∂s ∇s(ωt) = ∂Lθ ∂s zt−ωt |zt−ωt|2 ∈ TωtR n onto TωtS n−1 is given by Absil et al. (2008)[p.48] vt = ∂Lθ ∂s zt − ωt |zt − ωt|2 − ∂Lθ ∂s ( zt − ωt |zt − ωt|2 , ωt ) ωt = ∂Lθ ∂s zt − (zt, ωt)ωt |zt − ωt|2 . Two well-known update algorithms of wt Absil et al. (2008)[p.76] are: ωt+1 = cos (ηt|vt|)ωt − sin (ηt|vt|)|vt|−1vt; ωt+1 = (ωt − ηtvt)/|ωt − ηtvt|. A.7 A PROOF OF APOLLONIUS THEOREM Theorem 3 (Apollonius). Given distinct ω1, ω2 ∈ Sn−1 and a positive number λ, the locus {x : |x− ω1| = λ|x− ω2|} is a sphere orthogonal to Sn−1. Proof. If λ is one then it is trivial. We assume now λ is not one. By |x− ω1| = λ|x− ω2|, we can have ∣∣∣∣x− ω1 − λω21− λ ∣∣∣∣2 = |ω1 − λω2|2|1− λ|2 − 1. The locus is a sphere with center O = ω1−λω21−λ and radius R = √ |ω1−λω2|2 |1−λ|2 − 1. The theorem of Apollonius (in all dimension) claims that this sphere is orthogonal to Sn−1. To prove this, it suffices to prove |oO|2 = 1 +R2 (recall o is the origin of Hn), which follows from∣∣∣∣ω1 − λω21− λ ∣∣∣∣2 = √ |ω1 − λω2|2 |1− λ|2 − 1 2 + 1. A.8 INVERSION On Rn ∪ {∞}, given the sphere {x : |x− w0| = r}, the corresponding inversion is given by Iv(x) = w0 + r2(x− w0) |x− w0|2 . For x ∈ Rn ∪ {∞}, Iv(x) is called the inverse of x with respect to {x : |x− w0| = r}. A.9 PROOF OF THEOREM 2 Theorem 2 Let K be a compact set in Hn, and 1≤p<∞. Then finite sums of the form F (x) = N∑ i=1 αiρ(λi〈x, ωi〉H+bi), ωi∈Sn−1, αi, λi, bi∈R are dense in Lp(K,µ), where µ is either dVol (5) or the induced Euclidean volume. Proof. We first treat the case ρ is sigmoidal and µ = dVol . Assume that these finite sums are not dense in Lp(K, dVol). 
By the Hahn-Banach theorem, there exists some nonzero h∈Lq(K, dVol), where q=p/(p − 1) if p>1 and q=∞ if p=1, such that ∫ K F (x)h(x)dVol(x) = 0 for all fi- nite sums of the form (14). As K is a compact set, by Hölder’s inequality, ∫ K |h(x)| dVol ≤ ( ∫ K dVol)1/p||h||Lq(K,dVol), which leads to h∈L1(K, dVol). Extend h to be a function H that is defined on Hn by assigning H(x)=h(x) if x∈K and H(x)=0 if x∈Hn\K. Then H∈L1(Hn, dVol)∩Lq(Hn, dVol) and∫ Hn F (x)H(x)dVol(x) = 0 (15) for all finite sums of the form (14). For any ω∈Sn−1 and λ, b∈R, we set Fω,λ,b(x) = ρ(λ(〈x, ω〉H−b)). These functions are uniformly bounded, as |Fω,λ,b(x)|≤1. Moreover, lim λ→∞ Fω,λ,b(x) = { 1 if 〈x, ω〉H>b, 0 if 〈x, ω〉H<b. (16) According to (15), for all ω, λ, b, we have ∫ Hn Fω,λ,b(x)H(x)dVol(x) = 0. Functions {Fω,λ,b}λ∈R converge pointwise as λ→∞, and they are uniformly bounded by |H|∈L1(Hn, dVol). By the bounded convergence theorem, for all ω∈Sn−1, b∈R, we have∫ {x:〈x,ω〉H>b} H(x)dVol(x) = 0. (17) By the integral formula (7) (with notations defined there), (6) and (17), for all b∈R,∫ ∞ 2b (∫ ξt,ω H(z)dVolξt,ω (z) ) dt = 0. (18) Taking the derivative of ∫∞ 2b (∫ ξt,ω H(z)dVolξt,ω (z) ) dt with respect to b, we deduce from (18) that∫ ξ2b,ω H(z)dVolξ2b,ω (z) = 0 for a.e. b∈R. In other words, the integration of H on a.e. ξ ∈ Ξω is zero. This fact is valid for all ω∈Sn−1. Therefore, the integration of H on a.e. ξ ∈ Ξ is zero. By the injectivity Theorem 1, H = 0 a.e., which contradicts our assumption. Therefore, finite sums of the form (14) are dense in Lp(K, dVol). The case ρ is ReLU, ELU or Softplus and µ = dVol follows from the above case and the fact that x 7→ ρ(x+ 1)− ρ(x) is sigmoidal. The case µ is the Euclidean volume follows from previous cases and the fact that the Euclidean volume on compact K is bounded from above by λdVol for some constant λ. A.10 UNIVERSAL APPROXIMATION THEOREM FOR POISSON NEURONS. In this section, ρ is a continuous sigmoidal function (Cybenko, 1989), ReLU(Nair & Hinton, 2010), ELU(Clevert et al., 2016), or Softplus(Dugas et al., 2001). We also recall the Poisson neuron: Pρw,λ,b(x) = ρ ( λ |w|2 − |x|2 |x− w|2 + b ) , w ∈ Rn, λ, b ∈ R. Theorem 4. Let K be a compact set in Hn, and 1≤p<∞. Then finite sums of the form F (x) = N∑ i=1 αiP ρ ωi,λi,bi (x), ωi∈Sn−1, αi, λi, bi∈R (19) are dense in Lp(K,µ), where µ is either dVol (5) or the induced Euclidean volume. Proof. We first treat the case ρ is sigmoidal and µ = dVol . Assume that these finite sums are not dense in Lp(K, dVol). By the Hahn-Banach theorem, there exists some nonzero h∈Lq(K, dVol), where q=p/(p − 1) if p>1 and q=∞ if p=1, such that ∫ K F (x)h(x)dVol(x) = 0 for all fi- nite sums of the form (19). As K is a compact set, by Hölder’s inequality, ∫ K |h(x)| dVol ≤ ( ∫ K dVol)1/p||h||Lq(K,dVol), which leads to h∈L1(K, dVol). Extend h to be a function H that is defined on Hn by assigning H(x)=h(x) if x∈K and H(x)=0 if x∈Hn\K. Then H∈L1(Hn, dVol)∩Lq(Hn, dVol) and∫ Hn F (x)H(x)dVol(x) = 0 (20) for all finite sums of the form (19). For any ω∈Sn−1, λ ∈ R, and b > 0, we set Fω,λ,b(x) = P ρ ω,λ,−λb(x) = ρ ( λ ( 1− |x|2 |x− ω|2 − b )) . These functions are uniformly bounded, as |Fω,λ,b(x)|≤1. Moreover, lim λ→∞ Fω,λ,b(x) = 1 if 1−|x|2 |x−ω|2>b, 0 if 1−|x| 2 |x−ω|2<b. (21) According to (20), for all ω, λ, b, we have ∫ Hn Fω,λ,b(x)H(x)dVol(x) = 0. Functions {Fω,λ,b}λ∈R converge pointwise as λ→∞, and they are uniformly bounded by |H|∈L1(Hn, dVol). 
By the bounded convergence theorem, for all ω∈Sn−1, b∈R, we have∫ {x:〈x,ω〉H>(log b)/2} H(x)dVol(x) = ∫ { x: 1−|x|2 |x−ω|2 >b }H(x)dVol(x) = 0. (22) By the integral formula (7) (with notations defined there), (6) and (22), for all b∈R,∫ ∞ log b (∫ ξt,ω H(z)dVolξt,ω (z) ) dt = 0. (23) Taking the derivative of ∫∞ log b (∫ ξt,ω H(z)dVolξt,ω (z) ) dt with respect to b, we deduce from (23) that∫ ξlog b,ω H(z)dVolξlog b,ω (z) = 0 for a.e. b>0. In other words, the integration of H on a.e. ξ ∈ Ξω is zero. This fact is valid for all ω∈Sn−1. Therefore, the integration of H on a.e. ξ ∈ Ξ is zero. By the injectivity Theorem 1, H = 0 a.e., which contradicts our assumption. Therefore, finite sums of the form (19) are dense in Lp(K, dVol). The case ρ is ReLU, ELU or Softplus and µ = dVol follows from the above case and the fact that x 7→ ρ(x+ 1)− ρ(x) is sigmoidal. The case µ is the Euclidean volume follows from previous cases and the fact that the Euclidean volume on compact K is bounded from above by λdVol for some constant λ. We refere the reader to the difference of (16) and (21), (17) and (22), and (18) and (23). However, basically the proofs are the same. The points are the integral formula (7), the injectivity Theorem 1 and the fact that level sets of horocycle/Poisson neurons are horocycles. Moreover, as a corollary of Theorem 4, we have Corollary 2. Let K be a compact set in Rn, and 1≤p<∞. Then finite sums of the form F (x) = N∑ i=1 αiP ρ wi,λi,bi (x), wi∈Rn, αi, λi, bi∈R are dense in Lp(K,µ), where µ is the Euclidean volume. Proof. Because K is compact, there exists a positive number R such that K ⊂ {x ∈ Rn : |x| < R}. By the above theorem, finite sums of the form F (x) = N∑ i=1 αiP ρ wi,λi,bi (x), wi∈Sn−1, αi, λi, bi∈R are dense in Lp(K/R, µ). Then the corollary follows from Pρw,λ,b(x) = P ρ w/R,λ,b(x/R). A.11 PROOF OF THE LEMMA 1 Recall f1a,p(x) = 2|a| 1− |p|2 sinh−1 ( 2(−p⊕ x, a)E (1− | − p⊕ x|2)|a| ) . (24) The proof of Lemma 1 follows from the following direct computation. Proof. Let t ∈ (0, 1). Take pt = tω and at = −ω, then we have −pt ⊕ x = −t(1− 2t(ω, x)E + |x|2)ω + (1− t2)x 1− 2t(ω, x)E + t2|x|2 . Let Ft(x) = 2(−pt⊕x,at)E (1−|−pt⊕x|2)|at| , then Ft(x) = 2(−pt ⊕ x, at)E (1− | − pt ⊕ x|2)|at| = 2 t(1−2t(ω,x)E+|x| 2)−(1−t2)(x,ω)E 1−2t(ω,x)E+t2|x|2 1− |−t(1−2t(ω,x)E+|x| 2)ω+(1−t2)x|2 (1−2t(ω,x)E+t2|x|2)2 = 2t(1− 2t(ω, x)E + t2|x|2)(1− 2t(ω, x)E + |x|2)− 2(1− t2)(1− 2t(ω, x)E + t2|x|2)(x, ω)E (1− 2t(ω, x)E + t2|x|2)2 − | − t(1− 2t(ω, x)E + |x|2)ω + (1− t2)x|2 = At(x)/Bt(x), where At, Bt are defined as the corresponding numerator and denominator. We have At(x)|t=1 = 2|x− ω|4 Bt(x)|t=1 = 0 ∂Bt(x)/∂t|t=1 = 2|x− ω|2(|x|2 − 1). Let Gt(x) = sinh−1(Ft(x)) + log 1−t1+t , then Gt(x) = log ( At(x) Bt(x) + √ 1 + A2t (x) B2t (x) ) + log 1− t 1 + t = log ( (1− t)At (1 + t)Bt + √ (1− t)2 (1 + t)2 + (1− t)2A2t (x) (1 + t)2B2t (x) ) . By L’Hôpital’s rule, lim t<1,t→1 (1− t)At(x) (1 + t)Bt(x) = −At(x) + (1− t)A′t(x) Bt(x) + (1 + t)B′t(x) ∣∣∣ t=1 = |x− ω|2 2− 2|x|2 . Therefore, lim t<1,t→1 Gt(x) = log ( |x− ω|2 1− |x|2 ) . For t < 1, we take pt = tω, at = −ω, ct = t 2−1 4 , dt = 1 2 log 1+t 1−t , then for all x ∈ K, lim t<1,t→1 ctf 1 at,pt(x) + dt = limt<1,t→1 −1 2 Gt(x) = 1 2 log ( 1− |x|2 |x− ω|2 ) = 〈x, ω〉H . If there exists c1, c2 such that |ctf1at, pt(x) + dt|(= |Gt(x)|/2) ≤ c2 for all t ∈ (c1, 1), x ∈ K, then by the dominated convergence theorem, there exists t such that ||ctf1at,pt(x) + dt − 〈x, ω〉H ||Lp(K,m) < , which proves the lemma. 
Note that
$$\frac{(1-t)A_t(x)}{(1+t)B_t(x)} = \frac{2|x-\omega|^4(1-t) + \sum_{j=1}^{4} U_j(x,\omega)(1-t)^{j+1}}{-2|x-\omega|^2(|x|^2-1)(1-t)(1+t) + \sum_{l=2}^{4} L_l(x,\omega)(1-t)^l(1+t)} = \frac{2|x-\omega|^4 + \sum_{j=1}^{4} U_j(x,\omega)(1-t)^j}{2|x-\omega|^2(1-|x|^2)(1+t) + \sum_{l=2}^{4} L_l(x,\omega)(1-t)^{l-1}(1+t)},$$
where $U_j$ and $L_l$ are continuous functions defined on $K \times \{\omega\}$. There exist positive numbers $c_3, c_4$ and $c_1 \in (0,1)$ such that for all $x \in K$ and $t \in (c_1, 1)$,
$$c_3 \leq 2|x-\omega|^4 \leq c_4, \quad c_3 \leq 2|x-\omega|^2(1-|x|^2)(1+t) \leq c_4,$$
$$\frac{c_3}{2} \geq \Big|\sum_{j=1}^{4} U_j(x,\omega)(1-t)^j\Big|, \quad \frac{c_3}{2} \geq \Big|\sum_{l=2}^{4} L_l(x,\omega)(1-t)^{l-1}(1+t)\Big|.$$
Therefore, for $x \in K$ and $t \in (c_1, 1)$, we have
$$\frac{c_3}{2c_4 + c_3} \leq \frac{(1-t)A_t(x)}{(1+t)B_t(x)} \leq \frac{2c_4 + c_3}{c_3}.$$
This implies that for $t \in (c_1, 1)$, $G_t|_K$ and therefore $|c_t f^1_{a_t,p_t} + d_t|\,|_K$ are uniformly bounded, which finishes the proof of the lemma.

A.12 THE FIRST MNIST CLASSIFIER IN 6.1

At the preprocessing stage, we compute the projection of the 28×28 input pattern on the 40 principal components and then scale them so that the scaled 40-dimensional PCA features lie within the unit ball. Our network is:
1. Input layer: scaled 40-dimensional PCA features;
2. First layer: 40 inputs/1000 outputs horocycle layer (tanh activation);
3. Last layer: 1000 inputs/10 outputs affine layer;
4. Loss: cross entropy loss.
We take learning rate = 1, learning rate decay = 0.999, and batch size = 128, and run it three times. The average test error rate after 600 epochs is 1.96%. The PCA follows LeCun et al. (1998)(C.3), where 40 PCA components are used for the quadratic network. The quadratic network has a structure similar to ours, because our neurons are constructed by a quotient of quadratic functions followed by log.

A.13 HOROCYCLE LAYER FOLLOWED BY MLR CAN APPROXIMATE THE CLASSIFICATION FUNCTION

Suppose the MNIST classification function $\mathcal{M}$ is defined on $\cup_{j=0}^{9} K_j \subset H^{40}$, where the $K_j$ are relatively compact and $\mathcal{M}|_{K_j} = j$. By Theorem 2, for $0 \leq j \leq 9$, there exist $F_j(x) = \sum_{i=1}^{N_j} \alpha_{j,i}\,\rho(\lambda_{j,i}\langle x, \omega_{j,i}\rangle_H + b_{j,i})$ such that $F_j$ approximates $I_{K_j}$, where $I$ is the indicator function. Therefore, a network whose first (horocycle) layer is given by $\rho(\lambda_{j,i}\langle x, \omega_{j,i}\rangle_H + b_{j,i})$ ($0 \leq j \leq 9$, $1 \leq i \leq N_j$), followed by a classical MLR with parameters $\alpha_{j,i}$ ($0 \leq j \leq 9$, $1 \leq i \leq N_j$) (with arg max for prediction), approximates $\mathcal{M}$.

A.14 THE SECOND MNIST CLASSIFIER IN 6.1

At the preprocessing stage, we augment the data by shifting each image one step toward each of its 4 corners, so that our training set has 300000 examples. Our network is:
1. Input layer: (28, 28, 1);
2. First block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
3. Second block: 64-filter 3×3 convolution, ReLU, BatchNorm;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
5. Fourth block: 128-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
6. Fifth block: FC 1000, ReLU, BatchNorm;
7. Last block: 1000 input/10 output Poisson layer, sigmoid, BatchNorm;
8. Loss: cross entropy loss.
In optimization, we take Adam (Kingma & Ba, 2015). The batch size is 128 in the first 5 epochs, and 1024 in the next 15 epochs. After 5 epochs, we set the $\omega_i$ in the Poisson layer to be non-trainable. We train our network five times; the average test error rate after 20 epochs is 0.35%. The $\epsilon$ in $\frac{|w|^2-|x|^2}{|x-w|^2+\epsilon}$ is an important hyperparameter for numerical stability. We train this MNIST model with $\epsilon \in \{10^{-1}, 10^{-2}, 10^{-4}, 10^{-6}, 10^{-8}, 10^{-10}, 10^{-20}\}$. All runs show robust performance.
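For completeness, here is a minimal tf.keras sketch of a Poisson layer of the kind used above: each output unit computes λ_j(|w_j|² − |x|²)/(|x − w_j|² + ε) + b_j, leaving the sigmoid and BatchNorm of A.14 to be stacked on top. The parameter names and initializers are our choices for illustration, not the paper's code.

```python
import tensorflow as tf

class PoissonLayer(tf.keras.layers.Layer):
    """units copies of P_{w,lam,b}(x) = lam * (|w|^2 - |x|^2) / (|x - w|^2 + eps) + b."""

    def __init__(self, units, eps=1e-6, **kwargs):
        super().__init__(**kwargs)
        self.units, self.eps = units, eps

    def build(self, input_shape):
        n = int(input_shape[-1])
        self.w = self.add_weight(name="w", shape=(self.units, n),
                                 initializer="glorot_uniform")
        self.lam = self.add_weight(name="lam", shape=(self.units,),
                                   initializer="ones")
        self.b = self.add_weight(name="b", shape=(self.units,),
                                 initializer="zeros")

    def call(self, x):
        # |w_j|^2 - |x|^2 and |x - w_j|^2 for every unit j, batched over x.
        sq_w = tf.reduce_sum(self.w ** 2, axis=-1)            # (units,)
        sq_x = tf.reduce_sum(x ** 2, axis=-1, keepdims=True)  # (batch, 1)
        diff = tf.reduce_sum((x[:, None, :] - self.w[None]) ** 2, axis=-1)
        return self.lam * (sq_w - sq_x) / (diff + self.eps) + self.b

# Usage as the last block of the A.14 classifier:
# ..., PoissonLayer(10), tf.keras.layers.Activation("sigmoid"),
# tf.keras.layers.BatchNormalization()
```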
A.15 EXPERIMENT OF POINCARE TREE CLASSIFICATION TASK Given a Poincaré embedding (Nickel & Kiela, 2017) PE : {WordNet noun} → HD of the 82114 WordNet noun nodes and given a node x, the task is to classify all other nodes as being part of the subtree rooted at x (Ganea et al., 2018a). Our model is a logistic regression, where the horocycle feature p ∈ {WordNet noun} 7→ hPE(x)(PE(p)/s) (s is a hyperparameter lying in [1, 1.5]) is the only predictor, and the dependent variable is whether p is in the subtree rooted at x. Let P be the set of all nodes in the Poincare embedding, and let p range from P . 1. Input: hPE(x)(PE(p)/s) (s is a hyperparameter.) 2. Only layer: 1 input/1 output affine layer. (two parameters: one for input, one for bias.) 3. Loss: Logistic. (with respect to 1 if p in the tree rooted at x; 0 else.) In each training, x is one of {animal, group, location, mammal, worker}, dim is one of {2,3,5,10}, and Poincaré embeddings are from the animation_train.py of Ganea et al. (2018b) 4 (with tree=wordnet_full, model=poincare, dim=dim, seed randomly ∈ {7, 8, 9}). All nodes in the subtree rooted at x are divided into training nodes (80%) and test nodes (20%). The same splitting procedure applies for the rest nodes. We choose s that has the best training F1, and then record the corresponding test F1. For each x and dim, we do the training 100 times. The average test F1 classification scores are recorded in Table 2. The horocycle feature performs well here because it is compatible with the Poincaré embedding algorithm. Let x be a node that is not at the origin. It seems that the Poincaré embedding algorithm tends to pull all nodes that are from the subtree rooted at x towards the direction of x|x| , therefore y → 〈 y, x|x| 〉 H is a suitable feature for this task. A.16 END-BASED CLUSTERING IN H2 For MNIST, at the preprocessing stage, we do data augmentation by letting each image 1 step toward each of its 4 corners, so that our traning set has 300000 examples. Our network for H2 embedding of MNIST dataset is 1. Input layer: (28,28, 1); 2. First block: 32-filters 3× 3 convolution, ReLU, 2× 2 max-pooling, BatchNorm; 3. Second block: 64-filters 3× 3 convolution, ReLU, BatchNorm; 4. Thrid block: 64-filters 3× 3 convolution,ReLU,2× 2 max-pooling, BatchNorm; 5. Fourth block: 128-filters 3× 3 convolution, ReLU, 2× 2 max-pooling, BatchNorm; 6. Fifth block: FC 1000, ReLU, BatchNorm; 7. Sixth block: FC 2, ReLU, BatchNorm, Exp; 8. Last block: 2 input/10 output horocycle layer, sigmoid; 4https://github.com/dalab/hyperbolic_cones 9. Loss: cross entroy loss, where Exp is the exponential map ToH2(= R2)→ H2. We apply the data augmentation as in A.14. In optimization, learning rate is 0.1, learning rate decay is 0.99, batch size is 128, epochs is 50. Our network, data augmentation and optimization for H2 embedding of Fashion-MNIST dataset is completely the same as that for MNIST. For MNIST and Fashion-MNIST we use sphere optimization. We would like to remark that there are interesting new features in sphere optimization. Because the S1 is compact, for any continuous function f , there exists x = argmaxS1f . The derivative of f at x vanish, so the usual optimization algorithm to find the minimum will fail in the general case. In our experiments, we solve this problem by adding the following tricks: 1. 
Observation: if the class Cα are all close to ω ∈ S1, and the end prototype ωα for the class Cα is around −ω, then ωα is a maximum point of the loss function and therefore can not be improved through normal SGD. We solve this problem by adopting an idea(supervised variation) of k-means clustering. In each early epochs, optimization consists of two parts. In the first part, the normal SGD applies. In the second part, we move end prototypes (ωi) to the average direction of the class (using training data). 2. Observation: if the class Cα and class Cβ are all close to ω ∈ S1, and the end prototype ωα, ωβ are also both around ω, then all points in class Cα and class Cβ , end prototypes ωα, ωβ will all be pulling to ω by the SGD, and finally the network can not distinguish class Cα and class Cβ . We solve this problem by adding a loss if two prototypes are close. With these small tricks, our 2D end-based clustering algorithm is very stable for MNIST and FashionMNIST. We run it on MNIST 10 times, and they all get a test acc around 99% within 20 epochs. Suppose the classification task has M classes and the prototype of the i-th class is ωi. We write down the additional loss function for the second observation as follows i = RandomChoice({1, . . . ,M}) j = RandomChoice({1, . . . ,M} \ {i}) d = (ωi, ωj)E LObservation2 = arctanh(10× ReLU(d− 0.9− )), where is a small constant for numerical stability. For CIFAR-10, our network for H2 embedding of CIFAR-10 dataset is 1. Input layer: (32,32, 3); 2. First block: ResNet-32/128 output; 3. Second block: FC 2, ReLU, BatchNorm, Exp; 4. Last block: 2 input/10 output horocycle layer; 5. Loss: cross entroy loss. In the data augmentation, we apply horizontal/vertical shifts and horizontal flip. We use Adam. The batch size is 32 in the first 100 epochs, or 1024 in the next 50 epochs. The weights of the horocycle layer are fixed at the beginning of the training and are non-trainable, which follows an idea of Mettes et al. (2019). A.17 POISSON MLR For CIFAR-10, we use a ResNet-32 structure as the feature descriptor, and we apply horizontal/vertical shifts and horizontal flip. In our network, 1. Input layer: (32,32, 3); 2. First block: ResNet-32/128 output; 3. Second block: FC 128, ReLU, BatchNorm; 4. Last block: 128 input/10 output Poisson layer, BatchNorm; 5. Loss: cross entroy loss. We use Adam. The batch size is 32 in the first 80 epochs, or 1024 in the next 20 epochs. Test acc greater than 93.5%. For the classification task of flowers (Tensorflow), The dataset of 3670 photos of flowers contains 5 classes: daisy, dandelion, roses, sunflowers and tulips. The keras model is 1. Input layer: (180,180, 3); 2. First block: 16-filters 3× 3 convolution, ReLU, 2× 2 max-pooling; 3. Second block: 32-filters 3× 3 convolution, ReLU, 2× 2 max-pooling; 4. Thrid block: 64-filters 3× 3 convolution,ReLU,2× 2 max-pooling; 5. Fourth block: FC 128, ReLU; 6. Last block: 128 input/10 output FC layer; 7. Loss: cross entroy loss. Our Poisson model is 1. Input layer: (180,180, 3); 2. First block: 16-filters 3× 3 convolution, ReLU, 2× 2 max-pooling; 3. Second block: 32-filters 3× 3 convolution, ReLU, 2× 2 max-pooling; 4. Thrid block: 64-filters 3× 3 convolution,ReLU,2× 2 max-pooling; 5. Fourth block: FC 128, ReLU; 6. Last block: BatchNorm, 128 input/10 output Poisson layer, sigmoid, BatchNorm; 7. Loss: cross entroy loss. We use 2936 photos for training and
1. What is the main contribution of the paper in terms of theoretical and practical strengths?
2. How does the proposed method differ from previous works related to hyperbolic neural networks?
3. What are the concerns and questions raised by the reviewer regarding the connection between equation (2) and Poisson layer, empirical performance, numerical stability, and other minor issues?
4. What are the pros and cons of the paper according to the reviewer's assessment?
5. Are there any additional comments or suggestions provided by the reviewer for the authors' revision?
Review
This paper introduced a new hyperbolic neuron based on horocycles (hyperbolic counterparts of hyperplanes). The authors proved through theoretical arguments that these neurons on H^n are as useful as traditional neurons on R^n, and demonstrated that they can significantly improve learning on hyperbolic embeddings of tree datasets and on MNIST/CIFAR datasets.

Quality: This contribution has both theoretical and practical strengths. Theoretically, the authors proved that the proposed hyperbolic neurons are universal approximators (Theorem 2). Practically, they introduced a new kind of hyperbolic neuron, with its differences from the existing literature clearly demonstrated through formulations and density plots. It shows superior performance improvements in several examples.

Clarity: The language is well polished. The formulations and statements are clear and consistent. The presentation has high clarity, with good intuitions conveyed through illustrations.

Originality: The proposed method is most closely related to hyperbolic neural networks constructed using Möbius arithmetic operators. Their difference is demonstrated both intuitively and empirically through experiments. The relationship with previous works is clearly stated in Section 2. The references are proper, with page numbers mentioned.

Significance: This paper establishes a new connection between hyperbolic geometry and deep learning. Therefore it should be interesting to a large audience in those areas.

My main concerns and questions are as follows:
1. Most importantly, the introduction and the theorem are based on equation (2), while the experiments are based on the Poisson layer introduced in Section 4.2. I see some inconsistency here: clearly they are different functions. Please fill this gap in the rebuttal and the next version.
2. Are there any explanations or technical arguments for the good empirical performance?
3. Clearly, the hyperparameter epsilon is important for maintaining numerical stability. There should be some demonstration in the main text or the supplementary material of the robustness to instability (e.g., by setting epsilon = 0).

Finally, I summarize the pros and cons as follows.
Pros: new hyperbolic deep learning tools; a proof of representation power on H^n; strong empirical results.
Cons: missing connection between equation (2) and the Poisson layer.

Overall, based on the above assessment measures, I recommend strong acceptance. Here are more comments for the authors' revision:
1. Abstract: "MLR" is the abbreviation of?
2. Introduction: introduce the notation T_p(H).
3. After Theorem 1 there should be some remarks to explain the statement. Same for Lemma/Corollary 1.
4. Theorem 2 is referred to before its statement.
5. The volume element "dm" is a bit hard to read.
6. Where is the notation h_x(y) used?
7. Figure 8: the x-axis and y-axis are not clear.

After rebuttal: Thank you for the revision and the clarifications. It is now clear that this work actually proposes two different neurons: the horocycle neuron defined on H^n and the Poisson neuron defined on R^n (with one point removed). After the revision, both are proved to satisfy the universal approximation property. They share similar level sets (although the density of the level sets differs). It would be interesting to see their relationship established through formal arguments and a more careful empirical comparison. This work requires background knowledge in hyperbolic geometry and may not be easy to read at first; that could explain the criticism regarding clarity. Overall, I believe this paper developed important tools along the line of hyperbolic deep learning, and I still recommend strong acceptance.
Title Laplacian Eigenspaces, Horocycles and Neuron Models on Hyperbolic Spaces Abstract We use hyperbolic Poisson kernel to construct the horocycle neuron model on hyperbolic spaces, which is a spectral generalization of the classical neuron model. We prove a universal approximation theorem for horocycle neurons. As a corollary, we obtain a state-of-the-art result on the expressivity of f a,p, which is used in the hyperbolic multiple linear regression. Our experiments get state-of-the-art results on the Poincare-embedding subtree classification task and the classification accuracy of the two-dimensional visualization of images. 1 INTRODUCTION Conventional deep network techniques attempt to use architecture based on compositions of simple functions to learn representations of Euclidean data (LeCun et al., 2015). They have achieved remarkable successes in a wide range of applications (Hinton et al., 2012; He et al., 2016). Geometric deep learning, a niche field that has caught the attention of many authors, attempts to generalize conventional learning techniques to non-Euclidean spaces (Bronstein et al., 2017; Monti et al., 2017). There has been growing interest in using hyperbolic spaces in machine learning tasks because they are well-suited for tree-like data representation (Ontrup & Ritter, 2005; Alanis-Lobato et al., 2016; Nickel & Kiela, 2017; Chamberlain et al., 2018; Nickel & Kiela, 2018; Sala et al., 2018; Ganea et al., 2018b; Tifrea et al., 2019; Chami et al., 2019; Liu et al., 2019; Balazevic et al., 2019; Yu & Sa, 2019; Gulcehre et al., 2019; Law et al., 2019). Many authors have introduced hyperbolic analogs of classical learning tools (Ganea et al., 2018a; Cho et al., 2019; Nagano et al., 2019; Grattarola et al., 2019; Mathieu et al., 2019; Ovinnikov, 2020; Khrulkov et al., 2020; Shimizu et al., 2020). Spectral methods are successful in machine learning, from nonlinear dimensionality reduction (Belkin & Partha, 2002) to clustering (Shi & Malik, 2000; Ng et al., 2002) to hashing (Weiss et al., 2009) to graph CNNs (Bruna et al., 2014) to spherical CNNs (Cohen et al., 2018) and to inference networks (Pfau et al., 2019). Spectral methods have been applied to learning tasks on spheres (Cohen et al., 2018) and graphs (Bruna et al., 2014), but not yet on hyperbolic spaces. This paper studies a spectral generalization of the FC (affine) layer on hyperbolic spaces. Before presenting the spectral generalization of the affine layer, we introduce some notations. Let (·, ·)E be the inner product, | · | the Euclidean norm, and ρ an activation function. The Poincaré ball model of the hyperbolic space Hn(n≥2) is a manifold {x∈Rn : |x|<1} equipped with a Riemannian metric ds2Hn= ∑n i=1 4(1−|x| 2)−2dx2i . The boundary of Hn under its canonical embedding in Rn is the unit sphere Sn−1. The classical neuron y=ρ((x,w)E+b) is of input x∈Rn, output y∈R, with trainable parameters w∈Rn, b∈R. An affine layer Rn → Rm is a concatenation of m neurons. An alternative representation of the neuron x 7→ρ((x,w)E+b) is given by 1 x∈Rn 7→ ρ(λ(x, ω)E+b), ω∈Sn−1, λ, b∈R. (1) This neuron is constant over any hyperplane that is perpendicular to a fixed direction ω. In Hn, a horocycle is a n−1 dimensional sphere (one point deleted) that is tangential to Sn−1. Horocycles are hyperbolic counterparts of hyperplanes (Bonola, 2012). Horocyclic waves 〈x, ω〉H := 12 log 1−|x|2 |x−ω|2 are constant over any horocycle that is tangential to Sn−1 at ω. Therefore, x∈Hn 7→ ρ(λ〈x, ω〉H+b), ω∈Sn−1, λ, b∈R (2) 1if w 6= (0, . . . 
, 0), one can take ω = w/|w|, λ = |w|; else, one can take λ = 0 and any ω ∈ Sn−1. generalizes the classical neuron model (1), and a concatenation of finitely many (2) generalizes the FC (affine) layer. We call (2) a horocycle neuron. Figure 1 (middle) is an example on H2. The neuron models in (1, 2) are related to spectral theory because (·, ω)E (respectively 〈·, ω〉H ) are building blocks of the Euclidean (respectively hyperbolic) Laplacian eigenspace. Moreover, many L2 spaces have a basis given by Laplacian eigenfunctions (Einsiedler & Ward, 2017). On one side, all Euclidean (respectively hyperbolic) eigenfunctions are some kind of “superposition” of (·, ω)E (respectively 〈·, ω〉H ). On the other side, neural networks based on (1) (respectively (2)) represent functions that are another kind of “superposition” of (·, ω)E (respectively 〈·, ω〉H ). They heuristically explain why the universal approximation property is likely to hold for networks constructed by (1) and (2). By using the Hahn Banach theorem, an injectivity theorem of Helgason, and integral formula, we prove that finite sums of horocycle neurons (2) are universal approximators (Theorem 2). Let p ∈ Hn, Tp(Hn) be the tangent space of Hn at p, a ∈ Tp(Hn), ⊕ be the Möbius addition (Ungar, 2008). We remind the reader that the following functions f1a,p(x) = 2|a| 1− |p|2 sinh −1 ( 2(−p⊕ x, a)E (1− | − p⊕ x|2)|a| ) (3) are building blocks of many hyperbolic learning tools (Ganea et al., 2018a; Mathieu et al., 2019; Shimizu et al., 2020). Figure 1 illustrates examples of different neuron models (1, 2, 3) on H2. In Lemma 1, we shall present a close relationship between (2) and (3). Using this relationship and Theorem 2, we obtain a novel result on the expressivity of f1a,p (Corollary 1). This article contributes to hyperbolic learning. We first apply spectral methods, such as the horocycle, to hyperbolic deep learning. We prove results on the expressivity of horocycle neurons (2) and f1a,p (3). With horocycle neurons, we obtain state-of-the-art results on the Poincaré-embedding subtree classification task and the classification accuracy of the 2-D visualization of images in in the experiment. 2 RELATED WORK Universal approximation There is a vast literature on universal approximation (Cybenko, 1989; Hornik et al., 1989; Funahashi, 1989; Leshno et al., 1993). Cybenko (1989)’s existential approach uses the Hahn Banach theorem and Fourier transform of Radon measures. To prove Theorem 2, we also use the Hahn Banach theorem, and additionally an integral formula (7) and an injectivity Theorem 1 of Helgason. Generalizing integral formulas and injectivity theorems is easier than generalizing Fourier transform of Radon measures on most non-Euclidean spaces. (Carroll & Dickinson, 1989) uses the inverse Radon transform to prove universal approximation theorems. This method relates to ours, as injectivity theorems are akin to inverse Radon transforms. However, using the injectivity theorem is an existential approach while using the inverse Radon transform is a constructive one. Spectral methods Spectral methods in Bronstein et al. (2017); Bruna et al. (2014); Cohen et al. (2018) use a basis of L2(X) given by eigenfunctions, whereX is a finite graph or the sphere. Because L2(Hn) has no eigenfunctions as a basis, our approach is different from theirs. Hyperbolic deep learning One part of hyperbolic learning concerns embedding data into the hyperbolic space (Nickel & Kiela, 2017; Sala et al., 2018). 
Another part concerns learning architectures with hyperbolic data as the input (Ganea et al. (2018a); Cho et al. (2019)). Ganea et al. (2018a) proposes two ways to generalize the affine layer on hyperbolic spaces: one by replacing the linear and bias part of an affine map with (25, 26) of their paper; another one by using a concatenation of f1a,p in their hyperbolic multiple linear regression (MLR). The latter seems more relevant to ours. A level set of f1a,p is a hypercycle that has the same distance to a chosen geodesic hypersurface, while a level set of a horocycle neuron is a horocycle that has the same “spectral” distance to an ideal point at infinity. Based on functions similar to f1a,p, Mathieu et al. (2019); Shimizu et al. (2020) build the gyroplane layer and Poincaré FC layer. Ganea et al. (2018a); Cho et al. (2019) take geodesics as decision hyperplanes, while we (initially) take horocycles. We shall construct the horocycle multiple linear regression (MLR), where decision hypersurfaces are geodesics. Geodesics decision hyperplanes (Ganea et al., 2018a; Cho et al., 2019) and geodesic decision hypersurfaces here arise from different methods. Khrulkov et al. (2020) investigates hyperbolic image embedding, where prototypes (or models) of each class are center-based. We study a different one, and we shall call our prototypes end-based. 3 HYPERBOLIC SPACES This section reviews facts from hyperbolic geometry that are used in the proof of Theorem 2. For the reader who is not interested in the proof, (4) is enough for the implementation. Hyperbolic metric We use the Poincaré model. The hyperbolic space Hn(n≥2) is the manifold {x∈Rn : |x|<1} equipped with a Riemannian metric ds2 = ∑n i=1 4(1−|x|2)−2dx2i . Let o be the origin of Hn. The distance function dHn satisfies dHn(o, x)=2 arctanh |x|. Geodesics, horocycles and corresponding points Geodesics in Hn are precisely circular arcs that are orthogonal to Sn−1. Horocycles in Hn are precisely (n−1)-dimensional spheres that are tangential to Sn−1 (Helgason, 1970). Horocycles are hyperbolic analogs of hyperplanes. Figure 2 illustrates geodesics and horocycles on H2. Hyperbolic Poisson kernel The Poisson kernel for Hn is P (x, ω)= ( 1−|x|2 |x−ω|2 )n−1 , where x∈Hn, ω∈Sn−1 (Helgason (1970)[p.108]). The function 〈·, ω〉H defined by 〈x, ω〉H = 1 2(n− 1) logP (·, ω) = 1 2 log 1− |x|2 |x− ω|2 (4) is constant over any horocycle that is tangential to Sn−1 at ω (Figure 1 (middle), (6)). Riemannian volume The Riemannian volume induced by the metric ds2 on Hn is dVol = 2n(1− |x|2)−ndx1 . . . dxn. (5) Horocycles Let Ξ be the set of horocycles of Hn, and let Ξω be the set of all horocycles that are tangential to Sn−1 at ω. Given λ∈R, we let ξλ,ω be the unique horocycle that connects ω and tanh (λ/2) · ω. We have Ξω = ∪λ∈R{ξλ,ω} and Ξ = ∪ω∈Sn−1Ξω. The length of any geodesic (that ends at ω) line segment cut by ξλ1,ω and ξλ2,ω equals |λ1 − λ2| (A.2). Therefore |λ1 − λ2| is a natural distance function defined on Ξω, and the map λ→ ξλ,ω is an isometry between R and Ξω. This isometry is closely related to 〈·, ω〉H (A.3): for any x ∈ ξλ,ω , 〈x, ω〉H = λ/2. (6) The annoying /2 in (6) is a tradeoff that the metric here is different from that in Helgason (2000). Integral formula For fixed ω ∈ Sn−1, Hn=∪λ∈Rξλ,ω. Let dVolξλ,ω be the measure induced by ds2 on ξλ,ω . Let L be a family of geodesics that end at ω, δ > 0, and U=L ∩ (∪λ≤α≤λ+δξα,ω). 
For l ∈ L, d_{H^n}(l ∩ ξ_{λ,ω}, l ∩ ξ_{λ+δ,ω}) = δ (A.2); hence dVol(U) ≈ δ · dVol_{ξ_{λ,ω}}(U ∩ ξ_{λ,ω}) to first order in δ, and therefore
$$\int_{H^n} f(x)\, d\mathrm{Vol}(x) = \int_{\mathbb{R}} \left( \int_{\xi_{\lambda,\omega}} f(z)\, d\mathrm{Vol}_{\xi_{\lambda,\omega}}(z) \right) d\lambda. \qquad (7)$$
The above proof (for H^n) is essentially the same as that in Helgason (2000)[p.37] (for H^2). To further convince the reader that (7) holds for all n, we give another simple proof in A.4.
Injectivity theorem With respect to the canonical measure on Ξ, Helgason (1970)[p.13] proved
Theorem 1 (Helgason). If f ∈ L^1(H^n) and ∫_ξ f(z) dVol_ξ(z) = 0 for a.e. ξ ∈ Ξ, then f = 0 a.e.
Theorem 1 states that if the integral of f ∈ L^1(H^n) over almost every horocycle is zero, then f itself is zero almost everywhere. This theorem and the integral formula (7) are essential for the proof of Theorem 2.
4 LEARNING ARCHITECTURES AND EIGENFUNCTIONS OF THE LAPLACIAN
In this section, we discuss a heuristic connection between the representation properties of eigenfunctions and classical neurons, and then we define several horocycle-related learning tools.
4.1 EIGENSPACES AND NEURON MODELS
On a Riemannian manifold X, the Laplace-Beltrami operator L_X is the divergence of the gradient, and it has a well-known representation property (Einsiedler & Ward, 2017): if X is a compact Riemannian manifold or a bounded domain in R^n, then L^2(X) has a basis given by eigenfunctions. This statement is false if X is R^n or H^n (Hislop, 1994).
Eigenspaces of the Laplacian on R^n and H^n Our work is motivated by the theory of eigenspaces, in which Euclidean (respectively hyperbolic) eigenfunctions are obtained from (x, ω)_E (respectively ⟨x, ω⟩_H) by some kind of superposition. For example, the smooth eigenfunctions of L_{R^n} are precisely the functions (M. Hashizume & Okamoto, 1972)[p.543]
$$f(x) = \int_{S^{n-1}} e^{\lambda (x,\omega)_E}\, dT(\omega), \qquad (8)$$
and the eigenfunctions of L_{H^n} are precisely the functions (Helgason, 1970)[Theorem 1.7, p.139]
$$f(x) = \int_{S^{n-1}} e^{\lambda \langle x,\omega\rangle_H}\, dT(\omega), \qquad (9)$$
where T in (8) and (9) ranges over certain technical linear forms on suitable functional spaces on S^{n-1}.
Neuron models By (8) and (1), Euclidean eigenfunctions (respectively classical neurons) are superpositions of (·, ω)_E and exp (respectively ρ), with homogeneity and additivity. By (9) and (2), hyperbolic eigenfunctions (respectively horocycle neurons) are superpositions of ⟨·, ω⟩_H and exp (respectively ρ). The representation property of eigenfunctions on compact manifolds and bounded domains suggests that the universal approximation property is likely to hold for networks constructed from (·, ω)_E or ⟨·, ω⟩_H. However, this heuristic is not a proof (A.5).
4.2 HOROCYCLE BASED LEARNING ARCHITECTURES
Horocycle neuron In the implementation of the horocycle neuron (2), we compute ⟨x, ω⟩_H as ½ log((1−|x|²)/(|x−ω|²+ε) + ε), where ε is a small constant that ensures numerical stability. For updating ω, we use a sphere optimization algorithm (Absil et al., 2008; Bonnabel, 2013) (A.6).
Horocycle feature and horocycle decision hypersurface Given a non-origin point x ∈ H^n, for y ∈ H^n we define h_x(y) = ⟨y, x/|x|⟩_H and call it the horocycle feature attached to x. This feature is useful in the Poincaré-embedding subtree classification task (see the experiment and Figure 3[left]). The horocycle is the hyperbolic analog of the Euclidean hyperplane, and therefore it is a natural choice of decision hypersurface, which may arise as a level set of a horocycle feature.
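To make the preceding description concrete, here is a minimal sketch of a horocycle layer (a concatenation of horocycle neurons) in TensorFlow/Keras. It is illustrative only: the class name, initializations, and default activation are our own assumptions, not the authors' released code, and for simplicity it renormalizes ω onto the sphere in the forward pass instead of using the Riemannian updates of A.6.

```python
import tensorflow as tf

class HorocycleLayer(tf.keras.layers.Layer):
    """Concatenation of horocycle neurons rho(lambda_i * <x, omega_i>_H + b_i), Eq. (2).

    Sketch only: names and initializations are illustrative. The prototypes
    omega_i must stay on the unit sphere S^{n-1}; here we simply re-normalize
    them in call(), whereas the paper uses sphere optimization (A.6)."""

    def __init__(self, units, eps=1e-6, activation=tf.tanh):
        super().__init__()
        self.units, self.eps, self.activation = units, eps, activation

    def build(self, input_shape):
        n = int(input_shape[-1])
        self.omega = self.add_weight(shape=(self.units, n), initializer="glorot_uniform")
        self.lam = self.add_weight(shape=(self.units,), initializer="ones")
        self.b = self.add_weight(shape=(self.units,), initializer="zeros")

    def call(self, x):
        # x: (batch, n) points in the Poincare ball, |x| < 1.
        omega = self.omega / tf.norm(self.omega, axis=-1, keepdims=True)   # project to S^{n-1}
        sq = tf.reduce_sum(x * x, axis=-1, keepdims=True)                  # |x|^2, shape (batch, 1)
        d2 = tf.reduce_sum((x[:, None, :] - omega[None]) ** 2, axis=-1)    # |x - omega_i|^2
        hor = 0.5 * tf.math.log((1.0 - sq) / (d2 + self.eps) + self.eps)   # <x, omega_i>_H, Eq. (4)
        return self.activation(self.lam * hor + self.b)
```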
End-based clustering and end prototype Natural clustering is a topic in representation learning (Bengio et al., 2013), and the common prototype-based clusters are center-based (Tan et al., 2005). We propose a type of clustering that embeds high-dimensional data in H^n and places prototypes on S^{n-1}. Figure 3[right] is an example for n = 2. For ω ∈ S^{n-1} and any b ∈ R, the function x ∈ H^n ↦ −log((1−|x|²)/|x−ω|²) + b measures the relative distance of points of H^n from ω in Gromov's bordification theory (Bridson & Haefliger (2009)[II.8], A.18). Accordingly, we define Dist : H^n × S^{n-1} × R → R by
$$\mathrm{Dist}(x, \omega, b) = -\log\left(\frac{1-|x|^2}{|x-\omega|^2}\right) + b = -2\langle x, \omega\rangle_H + b. \qquad (10)$$
It is a relative distance function; this is why Dist may assume negative values and why there is a bias term b in (10).
Consider classes Cls = {C_1, C_2, ..., C_M} and labeled training examples {(X^1, Y^1), ..., (X^N, Y^N)}, where X^i ∈ R^D are D-dimensional input features and Y^i ∈ {1, 2, ..., M}. Each example X^i belongs to the class C_{Y^i}. In light of (10), our goal is to find a neural network NN_θ : R^D → H^n parameterized by θ, prototypes ω_1, ..., ω_M ∈ S^{n-1}, and real numbers b_1, ..., b_M ∈ R such that the accuracy
$$\frac{\#\left\{1\le i\le N : Y^i = \arg\min_{1\le j\le M} \mathrm{Dist}\big(\mathrm{NN}_\theta(X^i),\, \omega_j,\, b_j\big)\right\}}{N} \qquad (11)$$
is maximized. We call {NN_θ(X^j) : 1 ≤ j ≤ N} the end-based clustering and the ω_i end prototypes (in hyperbolic geometry, an end is an equivalence class of parallel lines; see Figure 2[left]). In experiments, we take NN_θ = Exp ∘ NN'_θ, where NN'_θ : R^D → R^n is a standard neural network parameterized by θ and Exp : R^n → H^n is the exponential map of the hyperbolic space.
Horocycle layer, horocycle multiple linear regression (MLR) and geodesic decision hypersurfaces We call a concatenation of (2) a horocycle layer, and we now carefully describe a prototypical learning framework for end-based clusterings. Using the same notation as in the previous paragraph, the classification task has M classes, and NN_θ = Exp ∘ NN'_θ : R^D → H^n is a deep network. For prototypes ω_1, ..., ω_M ∈ S^{n-1}, real numbers b_1, ..., b_M ∈ R, and any example X, the feedforward pass for prediction is
x = NN_θ(X), (Feature descriptor)
SC_j(X) = −Dist(x, ω_j, b_j), (Scores; similarity)
X ∈ C_{argmax_{1≤j≤M} SC_j(X)}. (Classifier)
The goal is to maximize the accuracy (11), so we need a loss function for backpropagation. Following the convention of prototypical networks (Snell et al., 2017; Yang et al., 2018), we choose an increasing function ρ (in our experiments, ρ(x) = x or ρ = tanh; one often takes ρ(x) = x² in metric learning, which is improper here because Dist can be negative) and let the distribution over classes for an input X (with label Y) be
$$p_\theta(Y = C_j \mid X) \propto e^{-\rho(\mathrm{Dist}(\mathrm{NN}_\theta(X),\, \omega_j,\, b_j))} = e^{-\rho(-SC_j(X))}.$$
Therefore, given a batch of training examples, the loss function is
$$L = -\frac{\sum_{(X^j, Y^j)\in \mathrm{Batch}} \log p_\theta(Y = C_{Y^j} \mid X^j)}{\#\mathrm{Batch}}. \qquad (12)$$
Training proceeds by minimizing L, and we call this framework a horocycle MLR. The set of parameters of the framework is {θ} ∪ {ω_1, ..., ω_M} ∪ {b_1, ..., b_M}. It is worth mentioning that the decision boundaries of the horocycle MLR are geodesics, which follows from
$$SC_i(X) = SC_j(X) \iff \log\frac{1-|x|^2}{|x-\omega_i|^2} - b_i = \log\frac{1-|x|^2}{|x-\omega_j|^2} - b_j \iff \frac{|x-\omega_i|}{|x-\omega_j|} = e^{\frac{b_j-b_i}{2}}$$
and the theorem of Apollonian circles (A.7).
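As a concrete illustration of the scores and loss above, the sketch below computes SC_j(X) = −Dist(x, ω_j, b_j) and the batch loss (12) in the special case ρ(x) = x, where p_θ reduces to a softmax over scores. Function and variable names are our own; this is not the authors' code.

```python
import tensorflow as tf

def horocycle_mlr_loss(x, y, omegas, biases, eps=1e-6):
    """x: (batch, n) embeddings NN_theta(X) in the Poincare ball; y: (batch,) int labels.
    omegas: (M, n) unit end prototypes; biases: (M,). Returns loss (12) for rho(t) = t."""
    sq = tf.reduce_sum(x * x, axis=-1, keepdims=True)                   # |x|^2
    d2 = tf.reduce_sum((x[:, None, :] - omegas[None]) ** 2, axis=-1)    # |x - omega_j|^2
    dist = -tf.math.log((1.0 - sq) / (d2 + eps) + eps) + biases         # Dist(x, omega_j, b_j)
    scores = -dist                                                      # SC_j(X); argmax = argmin Dist
    # With rho(t) = t, p_theta(Y = C_j | X) is softmax(scores), so (12) is a mean cross entropy.
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores))
```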
Poisson neuron and Poisson multiple linear regression (MLR) Although ⟨x, ω⟩_H (4) is well-motivated by the theory of eigenspaces (9) and fits naturally into metric learning (see (10) and Corollary 1), it is only defined on H^n. Some readers might not be convinced that the neuron has to be defined on hyperbolic spaces. Therefore, we remove the log in (4) and define the Poisson neuron model by
$$P^\rho_{w,\lambda,b}(x) = \rho\!\left(\lambda\,\frac{|w|^2-|x|^2}{|x-w|^2} + b\right)$$
for w ∈ R^n and λ, b ∈ R, which is well-defined on R^n \ {w}. Notice that if |x| < |w|, then (|w|²−|x|²)/|x−w|² = e^{2⟨x/|w|, w/|w|⟩_H}. In A.8, Figure 7 illustrates an example of a Poisson neuron on R². In the implementation, we compute (|w|²−|x|²)/|x−w|² as (|w|²−|x|²)/(|x−w|²+ε), where ε is a small constant for numerical stability.
We call a concatenation of Poisson neurons a Poisson layer, and we use it on top of a deep neural network NN_θ : R^D → R^n to construct the Poisson MLR, which is similar to the horocycle MLR. Let w_1, ..., w_M ∈ R^n and b_1, ..., b_M ∈ R; the feedforward pass for prediction of this framework is
$$x = \mathrm{NN}_\theta(X), \qquad SC_j(X) = \mathrm{BatchNorm}\big(P^\rho_{w_j,-1,b_j}(x)\big), \qquad X \in C_{\arg\max_{1\le j\le M} SC_j(X)}. \qquad (13)$$
We let p_θ(Y = C_j | X) ∝ e^{SC_j(X)} and take (12) as the loss. This framework is called a Poisson MLR. We use standard optimization algorithms to update the parameters of the Poisson neurons. The BatchNorm (Ioffe & Szegedy, 2015) seems crucial for (13) in the experiments. Figure 4 illustrates that the high-confidence prediction regions (deep red areas) of the Poisson MLR are compact sets, in contrast to classical classifiers (Hein et al. (2019)[Theorem 3.1]). We shall use this figure to explain an experiment in Section 6.4.
5 REPRESENTATIONAL POWER
In this section, ρ is a continuous sigmoidal function (Cybenko, 1989), ReLU (Nair & Hinton, 2010), ELU (Clevert et al., 2016), or Softplus (Dugas et al., 2001). We remind the reader that ρ is sigmoidal if lim_{t→∞} ρ(t) = 1 and lim_{t→−∞} ρ(t) = 0. The following theorem justifies the representational power of horocycle neurons.
Theorem 2. Let K be a compact set in H^n, and let 1 ≤ p < ∞. Then finite sums of the form
$$F(x) = \sum_{i=1}^N \alpha_i\, \rho(\lambda_i \langle x, \omega_i\rangle_H + b_i), \qquad \omega_i \in S^{n-1},\ \alpha_i, \lambda_i, b_i \in \mathbb{R} \qquad (14)$$
are dense in L^p(K, µ), where µ is either dVol (5) or the induced Euclidean volume.
We provide a sketch of the proof here and go through the details in A.9. It suffices to prove the theorem for a sigmoidal function ρ and µ = dVol, as the other cases follow from this one. Assume that these finite sums are not dense in L^p(K, dVol). By the Hahn-Banach theorem, there exists some nonzero h ∈ L^q(K, dVol), where q = p/(p−1) if p > 1 and q = ∞ if p = 1, such that ∫_K F(x)h(x) dVol(x) = 0 for all finite sums of the form (14). Extend h to a function H defined on H^n by setting H(x) = h(x) if x ∈ K and H(x) = 0 if x ∈ H^n \ K. Using the limit property of sigmoidal functions, the bounded convergence theorem, and the integral formula (7), we prove that the integral of H over almost every horocycle is zero. By the injectivity Theorem 1, H is almost everywhere zero, which contradicts our assumption and completes the proof.
In A.10, we prove the same result for Poisson neurons. In A.11, we prove the following lemma, which demonstrates a close relationship between horocycle neurons and the widely used f^1_{a,p} (3).
Lemma 1. Let K be a compact set in H^n, ω ∈ S^{n-1}, and ε > 0. There are c, d ∈ R, p ∈ H^n, and a ∈ T_p(H^n) such that the function D(x) = c f^1_{a,p}(x) + d − ⟨x, ω⟩_H satisfies ||D||_{L^p(K, dVol)} < ε.
This lemma suggests that ⟨·, ω⟩_H is a boundary point of some "compactification" of the space of the f^1_{a,p}. The above lemma together with Theorem 2 implies
Corollary 1. Let K be a compact set in H^n and 1 ≤ p < ∞. Finite sums of the form
$$F(x) = \sum_{i=1}^N \alpha_i\, \rho\big(c_i f^1_{a_i,p_i}(x) + d_i\big), \qquad p_i \in H^n,\ a_i \in T_{p_i}(H^n),\ \alpha_i, c_i, d_i \in \mathbb{R},$$
are dense in L^p(K, µ), where µ = dVol or µ is the induced Euclidean volume.
This result provides novel insights into the hyperbolic neural network (Ganea et al., 2018a), the gyroplane layer (Mathieu et al., 2019), and the Poincaré FC layer (Shimizu et al., 2020). Although level sets of f^1_{a,p} are hypercycles, our proof of Lemma 1 relies on the theory of horocycles. It would be interesting to have more natural approaches to the expressivity of f^1_{a,p}.
6 EXPERIMENTS
In this section, we first experiment with MNIST. Next, we apply a horocycle feature to the Poincaré-embedding subtree classification task. After that, we construct 2-D clusterings of image datasets by using the horocycle MLR. Finally, we provide evidence for further possible applications of the Poisson MLR. We use the frameworks and functions of TensorFlow, Keras, and scikit-learn (Abadi et al., 2015; Chollet et al., 2015; Pedregosa et al., 2011).
6.1 MNIST
The MNIST (LeCun et al., 1998) task is popular for testing hyperbolic learning tools (Ontrup & Ritter, 2005; Nagano et al., 2019; Mathieu et al., 2019; Grattarola et al., 2019; Ovinnikov, 2020; Khrulkov et al., 2020). We train two different classifiers; A.12, A.14, and the code contain details.
The first one is a single horocycle layer followed by a softmax classifier. The average test error rate after 600 epochs is 1.96%, and Theorem 2 provides the rationale for this experiment (A.13). The second one is a Poisson MLR. It is the best hyperbolic-geometry-related MNIST classifier that we are aware of (Table 1). In this table, Ontrup & Ritter (2005) uses the hyperbolic SOM, Grattarola et al. (2019) uses an adversarial autoencoder, and Khrulkov et al. (2020) uses the hyperbolic MLR. That our models perform well on MNIST suggests that horocycle and Poisson neurons are computationally efficient and coordinate easily with classical learning tools (such as the convolutional layer and the softmax).
6.2 POINCARÉ EMBEDDING SUBTREE CLASSIFICATION
Given a Poincaré embedding (Nickel & Kiela, 2017) PE : {WordNet noun} → H^D of 82114 nouns and given a node x ∈ {WordNet noun}, the task is to classify all other nodes as being part of the subtree rooted at x or not (Ganea et al., 2018a). Our model is a logistic regression in which the horocycle feature p ∈ {WordNet noun} ↦ h_{PE(x)}(PE(p)/s) (s is a hyperparameter lying in [1, 1.5]) is the only predictor, and the dependent variable is whether p is in the subtree rooted at x. The decision hypersurface of this model is a horocycle, as illustrated in Figure 3 (left).
In the experiment, we pre-train three different Poincaré embeddings (using https://github.com/dalab/hyperbolic_cones) in each of H^2, H^3, H^5, H^10. For each x ∈ {animal, group, location, mammal, worker} and D ∈ {2, 3, 5, 10}, we randomly select one of the three pre-trained Poincaré embeddings PE : {WordNet noun} → H^D and then test the model. Table 2 reports the F1 classification scores and two standard deviations over 100 trials for each {x, D}. The choice of Poincaré embedding accounts for most of the variance in performance.
Our model differs from existing ones. Firstly, we take the horocycle as the decision hypersurface, while others take the geodesic. Secondly, we train a logistic regression on top of the horocycle feature attached to PE(x), which is efficiently calculated, while others train the hyperbolic MLR with different parametrizations. In terms of the number of parameters, we have three (independent of D), Ganea et al. (2018a) has 2D, and Shimizu et al. (2020) has D + 1. The small number of parameters explains why our model is prominent in low dimensions.
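To make the model concrete, the sketch below computes the single horocycle-feature predictor and fits the logistic regression. PE, root, nodes, and labels are hypothetical placeholders for a pre-trained embedding and the task data; the exact pipeline is described in A.15 and the released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def horocycle_feature(p, x, s=1.2, eps=1e-9):
    """h_{PE(x)}(PE(p)/s) = <PE(p)/s, x/|x|>_H for a candidate node p and subtree root x."""
    z, omega = p / s, x / np.linalg.norm(x)
    return 0.5 * np.log((1.0 - z @ z) / (np.linalg.norm(z - omega) ** 2 + eps) + eps)

# Hypothetical usage: PE[i] is the Poincare embedding of node i, `root` is the chosen
# node x, and labels[i] indicates membership in the subtree rooted at x.
# feats = np.array([[horocycle_feature(PE[i], PE[root], s=1.2)] for i in nodes])
# clf = LogisticRegression().fit(feats, labels)  # weight + bias; s is the third parameter
```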
6.3 END-BASED CLUSTERING FOR 2D DIMENSION REDUCTION
In this experiment, we use the horocycle MLR (Section 4.2) to construct end-based clusterings NN_θ : R^D → H^2 for MNIST, Fashion-MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky, 2012). We take NN_θ = Exp ∘ NN'_θ, where Exp is the exponential map of H^2 and NN'_θ : R^D → R^2 is a network with four convolutional blocks for MNIST/Fashion-MNIST or a ResNet-32 structure for CIFAR-10. A.16 and the code contain details.
Figure 5 illustrates the end-based clusterings for MNIST, Fashion-MNIST, and CIFAR-10, with performance reported in the caption. Our accuracy for Fashion-MNIST is 8% higher than all numbers presented in McInnes et al. (2020). Moreover, Table 3 compares the numbers of Yang et al. (2018); Ghosh & Kirby (2020) with ours for MNIST; the methods are similar. We all use convolutional networks as the feature descriptor and prototype-based functions as the loss. However, Yang et al. (2018); Ghosh & Kirby (2020) use a center-based prototype loss, while we use the end-based loss (12). Yang et al. (2018)[Figure 1] points out that a traditional CNN is good at linearly separating feature representations, but the learned features show large intra-class variations. The horocycle MLR achieves inter-class separability in the same way a traditional CNN does (angle accounts for label difference), and at the same time it also obtains intra-class compactness (Figure 5).
6.4 POISSON MLR
Using a Poisson MLR whose feature descriptor is a ResNet-32 structure, we obtain a classifier with a test error rate of 6.46% on CIFAR-10. It is on par with other methods with similar network structures (Yang et al., 2018). Moreover, we apply the Poisson MLR to a flower classification task (Tensorflow), which is a typical example of overfitting. Replacing the MLR part of the Keras model (Tensorflow) with a Poisson MLR, the new Poisson model shows better generalization performance (Figure 6). A.17 and the code contain the details. This subsection provides evidence for further applications of horocycles.
7 CONCLUSION
Based on the spectral theory of hyperbolic spaces, we introduce several horocycle-related learning tools. They find applications in hyperbolic neural networks, the Poincaré-embedding subtree classification task, and the visualization and classification of image datasets. We give an existential proof of a universal approximation theorem for shallow networks constructed from horocycle neurons or f^1_{a,p}. We hope this will trigger further research on expressivity problems, such as constructive approaches, quantitative results, and the benefit of depth (Mhaskar & Poggio, 2016), for horocycle neurons, f^1_{a,p}, and similar functions on more general manifolds.
A APPENDIX
A.1 NOTATIONS AND SYMBOLS
Default notations (notation — description — related formula):
R — the set of real numbers
R^n — n-dimensional Euclidean space — x ∈ R^n, x = (x_1, ..., x_n)
(·,·)_E — Euclidean inner product — for x, y ∈ R^n, (x, y)_E = Σ_{i=1}^n x_i y_i
⟨·,·⟩_H — hyperbolic analogue of (·,·)_E — for x ∈ H^n, ω ∈ S^{n-1}, ⟨x, ω⟩_H = ½ log((1−|x|²)/|x−ω|²)
|·| — Euclidean norm — for x ∈ R^n, |x| = √((x, x)_E)
H^n — n-dimensional hyperbolic space — as a set, H^n = {x ∈ R^n : |x| < 1}
T_p(X) — tangent space of X at p
T(X) — tangent bundle of X — T(X) = ∪_{p∈X} T_p(X)
ds²_{H^n} — the canonical metric on H^n with curvature −1 — ds²_{H^n} = Σ_{i=1}^n 4(1−|x|²)^{−2} dx_i²
dVol — Riemannian volume on H^n — dVol = 2^n(1−|x|²)^{−n} dx_1 ⋯ dx_n
L^p(K, dVol) — L^p space — L^p(K, dVol) = {f : ∫_K |f|^p dVol < ∞}
||·||_{L^p(K,dVol)} — L^p norm — for f measurable on K, ||f||_{L^p(K,dVol)} = (∫_K |f|^p dVol)^{1/p}
S^{n-1} — the (n−1)-dimensional sphere — as a set, S^{n-1} = {x ∈ R^n : |x| = 1}
P(·,·) — hyperbolic Poisson kernel — for x ∈ H^n, ω ∈ S^{n-1}, P(x, ω) = ((1−|x|²)/|x−ω|²)^{n−1}
f^1_{a,p} — model in the hyperbolic MLR — f^1_{a,p}(x) = (2|a|/(1−|p|²)) sinh^{−1}(2(−p⊕x, a)_E / ((1−|−p⊕x|²)|a|))
d_{H^n} — the hyperbolic distance function
Ξ — the space of horocycles
Ξ_ω — the set of horocycles tangential to S^{n-1} at ω
L_X — Laplace-Beltrami operator on X
h_x — the horocycle feature function — h_x(y) = ⟨y, x/|x|⟩_H
ξ_{λ,ω} — the unique horocycle connecting ω and tanh(λ/2)·ω
MLR — multiple linear regression
dim — dimension
I_K — the indicator function of K
Dist — relative distance function — Dist(x, ω, b) = −2⟨x, ω⟩_H + b
Cls — set of classes — Cls = {C_1, C_2, ..., C_M}
NN_θ — a network parameterized by θ
NN'_θ — a network parameterized by θ
Exp — exponential map of the hyperbolic space
(X^1, Y^1) — labeled sample
SC_j — score function
p_θ(Y = C_j | X) — prediction probability
L — loss function
P^ρ_{w,λ,b} — Poisson neuron — P^ρ_{w,λ,b}(x) = ρ(λ(|w|²−|x|²)/|x−w|² + b)
PE — Poincaré embedding
Conventional symbols (symbol — in most cases it refers to):
n, m, i — integers; x, y, w — points in R^n or H^n, or real numbers; o — the origin of R^n or H^n; b, c, d, α, δ — real numbers; λ — a real or complex number; t — a real number, the timestep in optimization; ω — a point in S^{n-1}; ρ — an activation function; f, g — functions; K — a compact set; X — a manifold; p — a point in H^n or on a manifold; a — an element of T_p(H^n); ξ — a horocycle; µ — a measure; L — a family of geodesic lines; l — a geodesic line; U — a set in H^n; F, h, H — functions; M — number of classes; D — dimension.
A.2 PROOF OF THE ISOMETRY
Given ω ∈ S^{n-1} and λ ∈ R, let ξ_{λ,ω} be the unique horocycle that connects ω and tanh(λ/2)·ω. The length of any geodesic (ending at ω) line segment cut by ξ_{λ_1,ω} and ξ_{λ_2,ω} equals |λ_1 − λ_2|. This fact is obvious in the half-space model. There is a Riemannian isometry F : {z ∈ R^n : |z| < 1} → {(x_1, ..., x_n) : x_1 > 0} (the latter equipped with the metric ds² = (dx_1² + ⋯ + dx_n²)/x_1²) such that F(ω) = ∞ and F(o) = (1, 0, ..., 0). Using d_{H^n}(o, tanh(λ_i/2)ω) = |λ_i|, d_{\{x_1>0\}}((1, 0, ..., 0), (e^{±λ_i}, 0, ..., 0)) = |λ_i|, F(ω) = ∞, and F(o) = (1, 0, ..., 0), we have F(tanh(λ_i/2)ω) = (e^{λ_i}, 0, ..., 0). Therefore, F maps ξ_{λ_i,ω} to {(x_1, x_2, ..., x_n) : x_1 = e^{λ_i}}. Any geodesic (ending at ω) line segment cut by ξ_{λ_1,ω} and ξ_{λ_2,ω} is mapped by F to {(t, α_2, ..., α_n) : (t − e^{λ_1})(t − e^{λ_2}) < 0} for some fixed α_j. It is easy to check that the length of this segment with respect to (dx_1² + ⋯ + dx_n²)/x_1² is |λ_1 − λ_2| (as the α_i are constants, the metric reduces to dx_1²/x_1² on this segment).
A.3 PROOF OF (6)
Because x lies on ξ_{λ,ω}, which is a sphere with center ((1 + tanh(λ/2))/2)ω and radius (1 − tanh(λ/2))/2, we have
$$\left|x - \frac{1+\tanh(\lambda/2)}{2}\,\omega\right|^2 = \left(\frac{1-\tanh(\lambda/2)}{2}\right)^2,$$
which leads to |x|² − (1 + tanh(λ/2))(x, ω)_E + tanh(λ/2)|ω|² = 0, then to ((1 + tanh(λ/2))/2)|x − ω|² = ((1 − tanh(λ/2))/2)(|ω|² − |x|²), and finally to
$$\langle x, \omega\rangle_H = \frac{1}{2}\log\frac{|\omega|^2-|x|^2}{|x-\omega|^2} = \frac{1}{2}\log\frac{1+\tanh(\lambda/2)}{1-\tanh(\lambda/2)} = \lambda/2.$$
A.4 ANOTHER PROOF OF THE INTEGRAL FORMULA (7)
Here we write H^n for the upper half-space model {(x_1, ..., x_n) : x_1 > 0} with the Riemannian volume dx_1⋯dx_n/x_1^n. Let ω = (∞, 0, ..., 0) and let o be (1, 0, ..., 0) as in A.2; then ξ_{λ,ω} = {(x_1, x_2, ..., x_n) : x_1 = e^λ}. The induced Riemannian metric on ξ_{λ,ω} (respectively the volume dVol_{ξ_{λ,ω}}) is (dx_2² + ⋯ + dx_n²)/e^{2λ} (respectively dx_2⋯dx_n/e^{(n−1)λ}). For any integrable function f on H^n, the change of variable x_1 = e^λ gives
$$\int_{H^n} f(x_1, \ldots, x_n)\, \frac{dx_1\cdots dx_n}{x_1^n} = \int_{\lambda} \int_{\mathbb{R}^{n-1}} f(e^{\lambda}, x_2, \ldots, x_n)\, \frac{dx_2\cdots dx_n}{e^{n\lambda}}\, e^{\lambda}\, d\lambda = \int_{\lambda} \int_{\mathbb{R}^{n-1}} f(e^{\lambda}, x_2, \ldots, x_n)\, \frac{dx_2\cdots dx_n}{e^{(n-1)\lambda}}\, d\lambda = \int_{\lambda} \int_{\xi_{\lambda,\omega}} f(z)\, d\mathrm{Vol}_{\xi_{\lambda,\omega}}(z)\, d\lambda.$$
The above identity is equivalent to the integral formula ∫_{H^n} f(x) dVol(x) = ∫_R (∫_{ξ_{λ,ω}} f(z) dVol_{ξ_{λ,ω}}(z)) dλ presented in (7), by the Riemannian isometry of A.2.
A.5 THE HEURISTIC IS NOT A PROOF
The spectral theory does not directly lead to universal approximation theorems, for the following reasons: 1) the superpositions in (1, 2) and in (8, 9) are different (similarly, although another kind of superposition in Hilbert's 13th problem (Hilbert, 1935; Arnold, 2009) was a driving force behind universal approximation theorems (Nielsen, 1987), the former is hardly relevant for networks (Girosi & Poggio, 1989)); 2) the desired representation properties of hyperbolic eigenfunctions are unknown, partially because H^n is non-compact; 3) results in spectral theory favor Hilbert spaces, while universal approximation theorems embrace more than the L² space.
A.6 OPTIMIZATION
The parameter update for the horocycle unit (2) involves optimization on the sphere (for ω) and on the hyperbolic space (for x). We use a standard algorithm of sphere optimization (Absil et al., 2008) to update ω, and in the supplement we present an optimization approach based on geodesic polar coordinates to update x. In the implementation of a horocycle layer, the forward propagation is trivial, while the backpropagation involves optimization on the sphere and on the hyperbolic space. In the following, η is the learning rate, α_t is the value of α (α may be η, s, z, ω, ...) at the t-th step, T_p X is the tangent space at p, ∇ is the gradient, and ∇_H is the hyperbolic gradient. It suffices to consider the layer s = ⟨z, ω⟩_H.
Optimization on the sphere The parameter update of ω in s = ⟨z, ω⟩_H involves optimization on the sphere. The projection of
$$\frac{\partial L_\theta}{\partial s}\,\nabla s(\omega_t) = \frac{\partial L_\theta}{\partial s}\,\frac{z_t - \omega_t}{|z_t - \omega_t|^2} \in T_{\omega_t}\mathbb{R}^n$$
onto T_{ω_t} S^{n-1} is given by (Absil et al., 2008)[p.48]
$$v_t = \frac{\partial L_\theta}{\partial s}\,\frac{z_t - \omega_t}{|z_t - \omega_t|^2} - \frac{\partial L_\theta}{\partial s}\left(\frac{z_t - \omega_t}{|z_t - \omega_t|^2},\ \omega_t\right)_E \omega_t = \frac{\partial L_\theta}{\partial s}\,\frac{z_t - (z_t, \omega_t)_E\,\omega_t}{|z_t - \omega_t|^2}.$$
Two well-known update algorithms for ω_t (Absil et al., 2008)[p.76] are:
$$\omega_{t+1} = \cos(\eta_t |v_t|)\,\omega_t - \sin(\eta_t |v_t|)\,|v_t|^{-1} v_t; \qquad \omega_{t+1} = \frac{\omega_t - \eta_t v_t}{|\omega_t - \eta_t v_t|}.$$
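As an illustration of the second update rule, here is a minimal numpy sketch of one Riemannian SGD step for ω; the function name and signature are our own.

```python
import numpy as np

def sphere_sgd_step(omega, z, dL_ds, eta):
    """One step of the A.6 update for omega in s = <z, omega>_H.

    Projects the Euclidean gradient onto the tangent space T_omega S^{n-1}
    (the formula for v_t above) and retracts back to the sphere by normalization."""
    v = dL_ds * (z - np.dot(z, omega) * omega) / np.linalg.norm(z - omega) ** 2
    omega_next = omega - eta * v
    return omega_next / np.linalg.norm(omega_next)
```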
A.7 A PROOF OF APOLLONIUS THEOREM
Theorem 3 (Apollonius). Given distinct ω_1, ω_2 ∈ S^{n-1} and a positive number λ, the locus {x : |x − ω_1| = λ|x − ω_2|} is a sphere orthogonal to S^{n-1}.
Proof. If λ is one, the claim is trivial, so assume λ is not one. From |x − ω_1| = λ|x − ω_2|, i.e., |x − ω_1|² = λ²|x − ω_2|², we obtain
$$\left|x - \frac{\omega_1 - \lambda^2\omega_2}{1-\lambda^2}\right|^2 = \frac{|\omega_1 - \lambda^2\omega_2|^2}{|1-\lambda^2|^2} - 1.$$
The locus is thus a sphere with center O = (ω_1 − λ²ω_2)/(1 − λ²) and radius R = √(|ω_1 − λ²ω_2|²/|1 − λ²|² − 1). The theorem of Apollonius (in all dimensions) states that this sphere is orthogonal to S^{n-1}. To prove this, it suffices to prove |oO|² = 1 + R² (recall o is the origin of H^n), which holds immediately by the definition of R above.
A.8 INVERSION
On R^n ∪ {∞}, given the sphere {x : |x − w_0| = r}, the corresponding inversion is given by
$$\mathrm{Iv}(x) = w_0 + \frac{r^2 (x - w_0)}{|x - w_0|^2}.$$
For x ∈ R^n ∪ {∞}, Iv(x) is called the inverse of x with respect to {x : |x − w_0| = r}.
A.9 PROOF OF THEOREM 2
Theorem 2. Let K be a compact set in H^n, and let 1 ≤ p < ∞. Then finite sums of the form
$$F(x) = \sum_{i=1}^N \alpha_i\,\rho(\lambda_i \langle x, \omega_i\rangle_H + b_i), \qquad \omega_i \in S^{n-1},\ \alpha_i, \lambda_i, b_i \in \mathbb{R}$$
are dense in L^p(K, µ), where µ is either dVol (5) or the induced Euclidean volume.
Proof. We first treat the case where ρ is sigmoidal and µ = dVol. Assume that these finite sums are not dense in L^p(K, dVol). By the Hahn-Banach theorem, there exists some nonzero h ∈ L^q(K, dVol), where q = p/(p−1) if p > 1 and q = ∞ if p = 1, such that ∫_K F(x)h(x) dVol(x) = 0 for all finite sums of the form (14). As K is a compact set, Hölder's inequality gives ∫_K |h(x)| dVol ≤ (∫_K dVol)^{1/p} ||h||_{L^q(K,dVol)}, so h ∈ L^1(K, dVol). Extend h to a function H defined on H^n by setting H(x) = h(x) if x ∈ K and H(x) = 0 if x ∈ H^n \ K. Then H ∈ L^1(H^n, dVol) ∩ L^q(H^n, dVol) and
$$\int_{H^n} F(x) H(x)\, d\mathrm{Vol}(x) = 0 \qquad (15)$$
for all finite sums of the form (14). For any ω ∈ S^{n-1} and λ, b ∈ R, set F_{ω,λ,b}(x) = ρ(λ(⟨x, ω⟩_H − b)). These functions are uniformly bounded, as |F_{ω,λ,b}(x)| ≤ 1. Moreover,
$$\lim_{\lambda\to\infty} F_{\omega,\lambda,b}(x) = \begin{cases} 1 & \text{if } \langle x, \omega\rangle_H > b, \\ 0 & \text{if } \langle x, \omega\rangle_H < b. \end{cases} \qquad (16)$$
According to (15), for all ω, λ, b we have ∫_{H^n} F_{ω,λ,b}(x) H(x) dVol(x) = 0. The functions {F_{ω,λ,b}}_{λ∈R} converge pointwise as λ → ∞, and the integrands are dominated by |H| ∈ L^1(H^n, dVol). By the bounded convergence theorem, for all ω ∈ S^{n-1} and b ∈ R, we have
$$\int_{\{x : \langle x, \omega\rangle_H > b\}} H(x)\, d\mathrm{Vol}(x) = 0. \qquad (17)$$
By the integral formula (7) (with the notation defined there), (6), and (17), for all b ∈ R,
$$\int_{2b}^{\infty} \left(\int_{\xi_{t,\omega}} H(z)\, d\mathrm{Vol}_{\xi_{t,\omega}}(z)\right) dt = 0. \qquad (18)$$
Taking the derivative of the left-hand side of (18) with respect to b, we deduce that ∫_{ξ_{2b,ω}} H(z) dVol_{ξ_{2b,ω}}(z) = 0 for a.e. b ∈ R. In other words, the integral of H over a.e. ξ ∈ Ξ_ω is zero. This holds for all ω ∈ S^{n-1}; therefore the integral of H over a.e. ξ ∈ Ξ is zero. By the injectivity Theorem 1, H = 0 a.e., which contradicts our assumption. Therefore, finite sums of the form (14) are dense in L^p(K, dVol).
The case where ρ is ReLU, ELU, or Softplus and µ = dVol follows from the above case and the fact that x ↦ ρ(x + 1) − ρ(x) is sigmoidal. The case where µ is the Euclidean volume follows from the previous cases and the fact that the Euclidean volume on the compact set K is bounded from above by λ dVol for some constant λ.
A.10 UNIVERSAL APPROXIMATION THEOREM FOR POISSON NEURONS
In this section, ρ is a continuous sigmoidal function (Cybenko, 1989), ReLU (Nair & Hinton, 2010), ELU (Clevert et al., 2016), or Softplus (Dugas et al., 2001). We recall the Poisson neuron:
$$P^\rho_{w,\lambda,b}(x) = \rho\!\left(\lambda\,\frac{|w|^2 - |x|^2}{|x - w|^2} + b\right), \qquad w \in \mathbb{R}^n,\ \lambda, b \in \mathbb{R}.$$
Theorem 4. Let K be a compact set in H^n, and let 1 ≤ p < ∞. Then finite sums of the form
$$F(x) = \sum_{i=1}^N \alpha_i\, P^\rho_{\omega_i,\lambda_i,b_i}(x), \qquad \omega_i \in S^{n-1},\ \alpha_i, \lambda_i, b_i \in \mathbb{R} \qquad (19)$$
are dense in L^p(K, µ), where µ is either dVol (5) or the induced Euclidean volume.
Proof. We first treat the case where ρ is sigmoidal and µ = dVol. Assume that these finite sums are not dense in L^p(K, dVol). By the Hahn-Banach theorem, there exists some nonzero h ∈ L^q(K, dVol), where q = p/(p−1) if p > 1 and q = ∞ if p = 1, such that ∫_K F(x)h(x) dVol(x) = 0 for all finite sums of the form (19). As K is a compact set, Hölder's inequality gives ∫_K |h(x)| dVol ≤ (∫_K dVol)^{1/p} ||h||_{L^q(K,dVol)}, so h ∈ L^1(K, dVol). Extend h to a function H defined on H^n by setting H(x) = h(x) if x ∈ K and H(x) = 0 if x ∈ H^n \ K. Then H ∈ L^1(H^n, dVol) ∩ L^q(H^n, dVol) and
$$\int_{H^n} F(x) H(x)\, d\mathrm{Vol}(x) = 0 \qquad (20)$$
for all finite sums of the form (19). For any ω ∈ S^{n-1}, λ ∈ R, and b > 0, set
$$F_{\omega,\lambda,b}(x) = P^\rho_{\omega,\lambda,-\lambda b}(x) = \rho\!\left(\lambda\left(\frac{1-|x|^2}{|x-\omega|^2} - b\right)\right).$$
These functions are uniformly bounded, as |F_{ω,λ,b}(x)| ≤ 1. Moreover,
$$\lim_{\lambda\to\infty} F_{\omega,\lambda,b}(x) = \begin{cases} 1 & \text{if } \frac{1-|x|^2}{|x-\omega|^2} > b, \\ 0 & \text{if } \frac{1-|x|^2}{|x-\omega|^2} < b. \end{cases} \qquad (21)$$
According to (20), for all ω, λ, b we have ∫_{H^n} F_{ω,λ,b}(x) H(x) dVol(x) = 0. The functions {F_{ω,λ,b}}_{λ∈R} converge pointwise as λ → ∞, and the integrands are dominated by |H| ∈ L^1(H^n, dVol).
By the bounded convergence theorem, for all ω ∈ S^{n-1} and b > 0, we have
$$\int_{\{x : \langle x, \omega\rangle_H > (\log b)/2\}} H(x)\, d\mathrm{Vol}(x) = \int_{\left\{x :\ \frac{1-|x|^2}{|x-\omega|^2} > b\right\}} H(x)\, d\mathrm{Vol}(x) = 0. \qquad (22)$$
By the integral formula (7) (with the notation defined there), (6), and (22), for all b > 0,
$$\int_{\log b}^{\infty} \left(\int_{\xi_{t,\omega}} H(z)\, d\mathrm{Vol}_{\xi_{t,\omega}}(z)\right) dt = 0. \qquad (23)$$
Taking the derivative of the left-hand side of (23) with respect to b, we deduce that ∫_{ξ_{log b,ω}} H(z) dVol_{ξ_{log b,ω}}(z) = 0 for a.e. b > 0. In other words, the integral of H over a.e. ξ ∈ Ξ_ω is zero. This holds for all ω ∈ S^{n-1}; therefore the integral of H over a.e. ξ ∈ Ξ is zero. By the injectivity Theorem 1, H = 0 a.e., which contradicts our assumption. Therefore, finite sums of the form (19) are dense in L^p(K, dVol).
The case where ρ is ReLU, ELU, or Softplus and µ = dVol follows from the above case and the fact that x ↦ ρ(x + 1) − ρ(x) is sigmoidal. The case where µ is the Euclidean volume follows from the previous cases and the fact that the Euclidean volume on the compact set K is bounded from above by λ dVol for some constant λ.
We refer the reader to the differences between (16) and (21), (17) and (22), and (18) and (23); otherwise the proofs are essentially the same. The key points are the integral formula (7), the injectivity Theorem 1, and the fact that level sets of horocycle/Poisson neurons are horocycles. Moreover, as a corollary of Theorem 4, we have
Corollary 2. Let K be a compact set in R^n, and let 1 ≤ p < ∞. Then finite sums of the form
$$F(x) = \sum_{i=1}^N \alpha_i\, P^\rho_{w_i,\lambda_i,b_i}(x), \qquad w_i \in \mathbb{R}^n,\ \alpha_i, \lambda_i, b_i \in \mathbb{R}$$
are dense in L^p(K, µ), where µ is the Euclidean volume.
Proof. Because K is compact, there exists a positive number R such that K ⊂ {x ∈ R^n : |x| < R}. By the above theorem, finite sums of the form
$$F(x) = \sum_{i=1}^N \alpha_i\, P^\rho_{w_i,\lambda_i,b_i}(x), \qquad w_i \in S^{n-1},\ \alpha_i, \lambda_i, b_i \in \mathbb{R}$$
are dense in L^p(K/R, µ). The corollary then follows from P^ρ_{w,λ,b}(x) = P^ρ_{w/R,λ,b}(x/R).
A.11 PROOF OF LEMMA 1
Recall
$$f^1_{a,p}(x) = \frac{2|a|}{1-|p|^2}\,\sinh^{-1}\!\left(\frac{2(-p\oplus x,\, a)_E}{(1-|-p\oplus x|^2)\,|a|}\right). \qquad (24)$$
The proof of Lemma 1 follows from the following direct computation.
Proof. Let t ∈ (0, 1). Take p_t = tω and a_t = −ω; then
$$-p_t \oplus x = \frac{-t\big(1 - 2t(\omega,x)_E + |x|^2\big)\,\omega + (1-t^2)\,x}{1 - 2t(\omega,x)_E + t^2|x|^2}.$$
Let F_t(x) = 2(−p_t ⊕ x, a_t)_E / ((1 − |−p_t ⊕ x|²)|a_t|). Then
$$F_t(x) = \frac{2\,\dfrac{t(1-2t(\omega,x)_E+|x|^2) - (1-t^2)(x,\omega)_E}{1-2t(\omega,x)_E+t^2|x|^2}}{1 - \dfrac{\left|-t(1-2t(\omega,x)_E+|x|^2)\,\omega + (1-t^2)\,x\right|^2}{\left(1-2t(\omega,x)_E+t^2|x|^2\right)^2}} = \frac{2t(1-2t(\omega,x)_E+t^2|x|^2)(1-2t(\omega,x)_E+|x|^2) - 2(1-t^2)(1-2t(\omega,x)_E+t^2|x|^2)(x,\omega)_E}{(1-2t(\omega,x)_E+t^2|x|^2)^2 - \left|-t(1-2t(\omega,x)_E+|x|^2)\,\omega + (1-t^2)\,x\right|^2} = \frac{A_t(x)}{B_t(x)},$$
where A_t and B_t denote the numerator and denominator above. We have
$$A_t(x)\big|_{t=1} = 2|x-\omega|^4, \qquad B_t(x)\big|_{t=1} = 0, \qquad \partial B_t(x)/\partial t\big|_{t=1} = 2|x-\omega|^2(|x|^2-1).$$
Let G_t(x) = sinh^{−1}(F_t(x)) + log((1−t)/(1+t)); then
$$G_t(x) = \log\left(\frac{A_t(x)}{B_t(x)} + \sqrt{1 + \frac{A_t^2(x)}{B_t^2(x)}}\right) + \log\frac{1-t}{1+t} = \log\left(\frac{(1-t)A_t(x)}{(1+t)B_t(x)} + \sqrt{\frac{(1-t)^2}{(1+t)^2} + \frac{(1-t)^2 A_t^2(x)}{(1+t)^2 B_t^2(x)}}\right).$$
By L'Hôpital's rule,
$$\lim_{t<1,\, t\to 1} \frac{(1-t)A_t(x)}{(1+t)B_t(x)} = \left.\frac{-A_t(x) + (1-t)A_t'(x)}{B_t(x) + (1+t)B_t'(x)}\right|_{t=1} = \frac{|x-\omega|^2}{2 - 2|x|^2}.$$
Therefore,
$$\lim_{t<1,\, t\to 1} G_t(x) = \log\left(\frac{|x-\omega|^2}{1-|x|^2}\right).$$
For t < 1, take p_t = tω, a_t = −ω, c_t = (t²−1)/4, and d_t = ½ log((1+t)/(1−t)); then for all x ∈ K,
$$\lim_{t<1,\, t\to 1}\, c_t f^1_{a_t,p_t}(x) + d_t = \lim_{t<1,\, t\to 1} -\frac{1}{2}\,G_t(x) = \frac{1}{2}\log\left(\frac{1-|x|^2}{|x-\omega|^2}\right) = \langle x, \omega\rangle_H.$$
If there exist c_1, c_2 such that |c_t f^1_{a_t,p_t}(x) + d_t| (= |G_t(x)|/2) ≤ c_2 for all t ∈ (c_1, 1) and x ∈ K, then by the dominated convergence theorem there exists t such that ||c_t f^1_{a_t,p_t} + d_t − ⟨·, ω⟩_H||_{L^p(K, dVol)} < ε, which proves the lemma.
Note that
$$\frac{(1-t)A_t(x)}{(1+t)B_t(x)} = \frac{2|x-\omega|^4(1-t) + \sum_{j=1}^4 U_j(x,\omega)(1-t)^{j+1}}{-2|x-\omega|^2(|x|^2-1)(1-t)(1+t) + \sum_{l=2}^4 L_l(x,\omega)(1-t)^l(1+t)} = \frac{2|x-\omega|^4 + \sum_{j=1}^4 U_j(x,\omega)(1-t)^j}{2|x-\omega|^2(1-|x|^2)(1+t) + \sum_{l=2}^4 L_l(x,\omega)(1-t)^{l-1}(1+t)},$$
where the U_j and L_l are continuous functions defined on K × {ω}. There exist positive numbers c_3, c_4 and c_1 ∈ (0, 1) such that for all x ∈ K and t ∈ (c_1, 1),
$$c_3 \le 2|x-\omega|^4 \le c_4, \qquad c_3 \le 2|x-\omega|^2(1-|x|^2)(1+t) \le c_4,$$
$$\frac{c_3}{2} \ge \Big|\sum_{j=1}^4 U_j(x,\omega)(1-t)^j\Big|, \qquad \frac{c_3}{2} \ge \Big|\sum_{l=2}^4 L_l(x,\omega)(1-t)^{l-1}(1+t)\Big|.$$
Therefore, for x ∈ K and t ∈ (c_1, 1), we have
$$\frac{c_3}{2c_4 + c_3} \le \frac{(1-t)A_t(x)}{(1+t)B_t(x)} \le \frac{2c_4 + c_3}{c_3}.$$
This implies that for t ∈ (c_1, 1), G_t|_K and therefore |c_t f^1_{a_t,p_t} + d_t| on K are uniformly bounded, which finishes the proof of the lemma.
A.12 THE FIRST MNIST CLASSIFIER OF SECTION 6.1
At the preprocessing stage, we compute the projection of the 28 × 28 input pattern onto the 40 principal components and then scale them so that the scaled 40-dimensional PCA features lie within the unit ball. Our network is:
1. Input layer: scaled 40-dimensional PCA features;
2. First layer: 40-input/1000-output horocycle layer (tanh activation);
3. Last layer: 1000-input/10-output affine layer;
4. Loss: cross-entropy loss.
We take learning rate = 1, learning rate decay = 0.999, and batch size = 128, and run the experiment three times. The average test error rate after 600 epochs is 1.96%. The PCA follows LeCun et al. (1998)(C.3), where a 40-dimensional PCA is used for the quadratic network. The quadratic network has a structure similar to ours, because our neurons are constructed from quotients of quadratic functions followed by a log.
A.13 A HOROCYCLE LAYER FOLLOWED BY AN MLR CAN APPROXIMATE THE CLASSIFICATION FUNCTION
Suppose the MNIST classification function M is defined on ∪_{j=0}^{9} K_j ⊂ H^{40}, where the K_j are relatively compact and M|_{K_j} = j. By Theorem 2, for 0 ≤ j ≤ 9 there exist F_j(x) = Σ_{i=1}^{N_j} α_{j,i} ρ(λ_{j,i}⟨x, ω_{j,i}⟩_H + b_{j,i}) such that F_j approximates the indicator function I_{K_j}. Therefore, a network whose first (horocycle) layer is given by ρ(λ_{j,i}⟨x, ω_{j,i}⟩_H + b_{j,i}) (0 ≤ j ≤ 9, 1 ≤ i ≤ N_j), followed by a classical MLR with parameters given by the α_{j,i} (0 ≤ j ≤ 9, 1 ≤ i ≤ N_j) (with argmax for prediction), approximates M.
A.14 THE SECOND MNIST CLASSIFIER OF SECTION 6.1
At the preprocessing stage, we augment the data by shifting each image one step toward each of its 4 corners, so that our training set has 300000 examples. Our network is:
1. Input layer: (28, 28, 1);
2. First block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
3. Second block: 64-filter 3×3 convolution, ReLU, BatchNorm;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
5. Fourth block: 128-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
6. Fifth block: FC 1000, ReLU, BatchNorm;
7. Last block: 1000-input/10-output Poisson layer, sigmoid, BatchNorm;
8. Loss: cross-entropy loss.
For optimization we use Adam (Kingma & Ba, 2015). The batch size is 128 in the first 5 epochs and 1024 in the next 15 epochs. After 5 epochs, we set the prototypes w_i in the Poisson layer to be non-trainable. We train the network five times; the average test error rate after 20 epochs is 0.35%.
The ε in (|w|²−|x|²)/(|x−w|²+ε) is an important hyperparameter for numerical stability. We trained this MNIST model with ε ∈ {10^{-1}, 10^{-2}, 10^{-4}, 10^{-6}, 10^{-8}, 10^{-10}, 10^{-20}}; all runs show robust performance.
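For reference, here is a minimal Keras sketch of the Poisson layer used as the last block above. The class and variable names are our own; in the A.14 network this pre-activation is followed by a sigmoid and BatchNorm, and the paper freezes the prototypes after 5 epochs (e.g., by excluding w from the variables passed to the optimizer).

```python
import tensorflow as tf

class PoissonLayer(tf.keras.layers.Layer):
    """Concatenation of Poisson neurons: lambda_i * (|w_i|^2 - |x|^2) / (|x - w_i|^2 + eps) + b_i.

    Sketch only; the activation (sigmoid) and BatchNorm of A.14 are applied outside."""

    def __init__(self, units, eps=1e-6):
        super().__init__()
        self.units, self.eps = units, eps

    def build(self, input_shape):
        n = int(input_shape[-1])
        self.w = self.add_weight(shape=(self.units, n), initializer="glorot_uniform")
        self.lam = self.add_weight(shape=(self.units,), initializer="ones")
        self.b = self.add_weight(shape=(self.units,), initializer="zeros")

    def call(self, x):
        x2 = tf.reduce_sum(x * x, axis=-1, keepdims=True)                 # |x|^2, (batch, 1)
        w2 = tf.reduce_sum(self.w * self.w, axis=-1)                      # |w_i|^2, (units,)
        d2 = tf.reduce_sum((x[:, None, :] - self.w[None]) ** 2, axis=-1)  # |x - w_i|^2
        return self.lam * (w2[None, :] - x2) / (d2 + self.eps) + self.b
```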
A.15 EXPERIMENT OF THE POINCARÉ SUBTREE CLASSIFICATION TASK
Given a Poincaré embedding (Nickel & Kiela, 2017) PE : {WordNet noun} → H^D of the 82114 WordNet noun nodes and given a node x, the task is to classify all other nodes as being part of the subtree rooted at x or not (Ganea et al., 2018a). Our model is a logistic regression in which the horocycle feature p ∈ {WordNet noun} ↦ h_{PE(x)}(PE(p)/s) (s is a hyperparameter lying in [1, 1.5]) is the only predictor, and the dependent variable is whether p is in the subtree rooted at x. Let P be the set of all nodes in the Poincaré embedding, and let p range over P.
1. Input: h_{PE(x)}(PE(p)/s) (s is a hyperparameter);
2. Only layer: 1-input/1-output affine layer (two parameters: one weight, one bias);
3. Loss: logistic (with respect to 1 if p is in the tree rooted at x, and 0 otherwise).
In each training run, x is one of {animal, group, location, mammal, worker}, dim is one of {2, 3, 5, 10}, and the Poincaré embeddings are produced by the animation_train.py of Ganea et al. (2018b) (https://github.com/dalab/hyperbolic_cones, with tree=wordnet_full, model=poincare, dim=dim, and seed chosen randomly from {7, 8, 9}). All nodes in the subtree rooted at x are divided into training nodes (80%) and test nodes (20%); the same splitting procedure applies to the remaining nodes. We choose the s with the best training F1 and record the corresponding test F1. For each x and dim, we repeat the training 100 times. The average test F1 classification scores are recorded in Table 2.
The horocycle feature performs well here because it is compatible with the Poincaré embedding algorithm. Let x be a node that is not at the origin. The Poincaré embedding algorithm appears to pull all nodes of the subtree rooted at x toward the direction of x/|x|; therefore y ↦ ⟨y, x/|x|⟩_H is a suitable feature for this task.
A.16 END-BASED CLUSTERING IN H²
For MNIST, at the preprocessing stage we augment the data by shifting each image one step toward each of its 4 corners, so that our training set has 300000 examples. Our network for the H² embedding of the MNIST dataset is:
1. Input layer: (28, 28, 1);
2. First block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
3. Second block: 64-filter 3×3 convolution, ReLU, BatchNorm;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
5. Fourth block: 128-filter 3×3 convolution, ReLU, 2×2 max-pooling, BatchNorm;
6. Fifth block: FC 1000, ReLU, BatchNorm;
7. Sixth block: FC 2, ReLU, BatchNorm, Exp;
8. Last block: 2-input/10-output horocycle layer, sigmoid;
9. Loss: cross-entropy loss,
where Exp is the exponential map T_o H² (= R²) → H². We apply the data augmentation as in A.14. For optimization, the learning rate is 0.1, the learning rate decay is 0.99, the batch size is 128, and the number of epochs is 50. Our network, data augmentation, and optimization for the H² embedding of the Fashion-MNIST dataset are exactly the same as for MNIST.
For MNIST and Fashion-MNIST we use sphere optimization. We remark that sphere optimization exhibits interesting new features. Because S¹ is compact, for any continuous function f there exists x = argmax_{S¹} f. The derivative of f at x vanishes, so the usual optimization algorithm for finding the minimum can fail in the general case. In our experiments, we address this problem with the following tricks:
1. Observation: if the examples of class C_α are all close to some ω ∈ S¹ while the end prototype ω_α of class C_α is around −ω, then ω_α is a maximum point of the loss function and therefore cannot be improved by plain SGD. We address this by adopting an idea from (supervised) k-means clustering. In each of the early epochs, optimization consists of two parts: in the first part, normal SGD applies; in the second part, we move each end prototype ω_i toward the average direction of its class (using the training data).
2. Observation: if the examples of classes C_α and C_β are all close to some ω ∈ S¹, and the end prototypes ω_α, ω_β are also both around ω, then all points of classes C_α and C_β, as well as the prototypes ω_α, ω_β, are pulled toward ω by SGD, and eventually the network cannot distinguish C_α from C_β. We address this by adding a loss when two prototypes are close.
With these small tricks, our 2D end-based clustering algorithm is very stable for MNIST and Fashion-MNIST. We ran it on MNIST 10 times, and every run reached a test accuracy of around 99% within 20 epochs.
Suppose the classification task has M classes and the prototype of the i-th class is ω_i. The additional loss for the second observation is:
$$i = \mathrm{RandomChoice}(\{1, \ldots, M\}), \quad j = \mathrm{RandomChoice}(\{1, \ldots, M\} \setminus \{i\}), \quad d = (\omega_i, \omega_j)_E,$$
$$L_{\mathrm{Observation2}} = \mathrm{arctanh}\big(10 \times \mathrm{ReLU}(d - 0.9 - \varepsilon)\big),$$
where ε is a small constant for numerical stability.
For CIFAR-10, our network for the H² embedding is:
1. Input layer: (32, 32, 3);
2. First block: ResNet-32, 128 outputs;
3. Second block: FC 2, ReLU, BatchNorm, Exp;
4. Last block: 2-input/10-output horocycle layer;
5. Loss: cross-entropy loss.
For data augmentation, we apply horizontal/vertical shifts and horizontal flips. We use Adam. The batch size is 32 in the first 100 epochs and 1024 in the next 50 epochs. The weights of the horocycle layer are fixed at the beginning of training and are non-trainable, following an idea of Mettes et al. (2019).
A.17 POISSON MLR
For CIFAR-10, we use a ResNet-32 structure as the feature descriptor, and we apply horizontal/vertical shifts and horizontal flips. Our network is:
1. Input layer: (32, 32, 3);
2. First block: ResNet-32, 128 outputs;
3. Second block: FC 128, ReLU, BatchNorm;
4. Last block: 128-input/10-output Poisson layer, BatchNorm;
5. Loss: cross-entropy loss.
We use Adam. The batch size is 32 in the first 80 epochs and 1024 in the next 20 epochs. The test accuracy is greater than 93.5%.
For the flower classification task (Tensorflow), the dataset of 3670 photos of flowers contains 5 classes: daisy, dandelion, roses, sunflowers, and tulips. The Keras model is:
1. Input layer: (180, 180, 3);
2. First block: 16-filter 3×3 convolution, ReLU, 2×2 max-pooling;
3. Second block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling;
5. Fourth block: FC 128, ReLU;
6. Last block: 128-input/10-output FC layer;
7. Loss: cross-entropy loss.
Our Poisson model is:
1. Input layer: (180, 180, 3);
2. First block: 16-filter 3×3 convolution, ReLU, 2×2 max-pooling;
3. Second block: 32-filter 3×3 convolution, ReLU, 2×2 max-pooling;
4. Third block: 64-filter 3×3 convolution, ReLU, 2×2 max-pooling;
5. Fourth block: FC 128, ReLU;
6. Last block: BatchNorm, 128-input/10-output Poisson layer, sigmoid, BatchNorm;
7. Loss: cross-entropy loss.
We use 2936 photos for training and
1. What is the reviewer's opinion on the contribution and significance of the proposed horocycle neuron and Poisson MLR models?
2. Do the theoretical and empirical evaluations support the claim of the paper?
3. Are there any concerns regarding the novelty and superiority of the proposed methods compared to existing works?
4. Is the presentation of the paper clear and well-organized?
5. Are there any suggestions for improving the clarity and reproducibility of the experiments?
Review
Review summary
The proposed models are theoretically sound, as they are expressive enough to satisfy a universality theorem. Also, the proposed methods are empirically superior to existing methods. However, I think there is much room for improvement in the presentation of the paper. Besides, it is unclear to me what the research question is and how the proposed methods solve the problem. I would recommend reconsidering the organization of the paper.
Summary of the paper
This paper proposed a horocycle neuron that acts on the hyperbolic space. It uses the hyperbolic Poisson kernel in place of the standard Euclidean norm on the Euclidean space. This paper proposed an architecture called horocycle MLR, which uses a horocycle neuron as a building block, and a Poisson neural MLR. They showed the universality of a model with a single hidden layer of horocycle neurons or f^1_{a,p}, which has been used in the existing literature. They applied the horocycle feature to a subtree classification task of Poincaré embedding, a horocycle MLR to a clustering task of 2D embedding, and horocycle and Poisson MLRs to classification tasks on image datasets.
Claim
If I understand correctly, this paper claims that the horocycle and Poisson neurons are theoretically sound and empirically effective. However, it is not clear to me what research question this paper addressed and how the theoretical and empirical properties of the proposed methods answer the question. It is true that they discussed the heuristic connection between the universal approximation property and the integral representation of the form (8) of a horocycle neuron. However, I think it is not a research question but supporting evidence that the universal approximation property is likely to hold.
Soundness of the claims
Can theory support the claim? The authors proved the universal approximation theorem for horocycle neurons and f^1_{a,p}. Although it is not a constructive proof, due to the Hahn-Banach theorem's nature, as the paper pointed out, it gives an affirmative answer for the theoretical justification and is a good first step to studying the expressive power. If I do not miss any information, the Poisson neuron model (Section 4.2, Paragraph 4) is introduced without its motivation or justification. In addition, this paper does not provide the theoretical superiority of the model. For example, Theorem 1 and Theorem 2 do not apply to the Poisson neuron model. I want to know if there are theoretical justifications for the Poisson model.
Can empirical evaluation support the claim?
Section 6.1: I confirm that the horocycle model's overall performance is better than Ganea et al. (2018a) and Shimizu et al. (2020). Especially, the proposed method significantly outperforms them when the embedding dimension is two, or the subtree is "worker.n.01".
Sections 6.2--6.4: I confirm that the proposed method's error rate is smaller than the existing methods'.
Section 6.5: I could not understand the motivation for the experiments on the CIFAR-10 and Fashion-MNIST datasets in this section. Figure 8 claimed that Poisson MLR shows good generalization on the Flowers dataset. However, this paper does not provide such a comparison on the CIFAR-10 and Fashion-MNIST datasets. Also, the performance on these datasets is not as good as the SOTA models (I referenced [1] for CIFAR-10 and [2] for Fashion-MNIST). Therefore, I think these results do not support the empirical superiority of Poisson MLR.
[1] https://paperswithcode.com/sota/image-classification-on-cifar-10
[2] https://paperswithcode.com/sota/image-classification-on-fashion-mnist
Significance and novelty
Novelty: To the best of our knowledge, this is the first study that proves the universal approximation theorem for a single-hidden-layer model on a hyperbolic space.
Relation to previous work: Although this paper mentioned Ganea et al. (2018a) and Shimizu et al. (2020), with which this paper compares the proposed method in the experiment, it did not compare the methodological differences (especially novelty and superiority) of the proposed method from the two. The same is true of the baseline methods in Table 2 and the methods of Ontrup & Ritter (2005) and Grattarola et al. (2019) in Table 3. I would recommend making it clear what the drawbacks of the existing models are.
Correctness
Is the theory correct? Yes. So far as I checked the proofs, Theorem 1 and Corollary 1 (universality of horocycle neurons and the function f^1_{a,p}) are correct.
Is the experimental evaluation correct? Yes, I did not find any methodologically incorrect point in the experimental procedures. In Table 1, ideally, we should compare the three methods with the same train/test partitions because the class label is highly imbalanced (e.g., 1115/82114 is positive in the case of worker.n.01); I am wondering if the performance variance caused by the randomness of the data partition could be high.
Reproducibility of the experiments
Yes. It explains experimental settings in detail in the appendix. Also, it has runnable code with trained parameters.
Clarity
I would say that there is much room for improvement in the clarity of the paper. First, it took me some time to understand how sections are related and how paragraphs in a section are related. I think adding discourse markers and organizing sentences so that readers can do paragraph reading could make the paper more understandable. Take the introduction section as an example: I feel there is a large gap between the third and fourth paragraphs. In addition, I could not understand that the fourth paragraph intends to explain the horocycle neuron until I reached the end of the paragraph. Also, although the function f^1_{a,p} is introduced in the fifth paragraph, the introduction does not mention it in the remaining part and goes back to the explanation of horocycle neurons. Another problem is that the tables and figures are not prepared appropriately. For example, Table 3 is inserted within a paragraph. Also, captions and legends of figures are tiny and hard to read.
Additional feedback
Abstract: The acronym MLR is used without saying what it stands for. So, I recommend writing the meaning of MLR without abbreviation.
Section 1, Paragraph 3: "This paper study" → "studies".
Section 1, Second bullet: Although this sentence mentioned the Poisson neuron and the horocycle MLR, they were not mentioned before this sentence. Similarly, the term "end-based" is used in the introduction but is explained in Section 4.2 for the first time. I would recommend writing their explanation before they are used.
Section 2, Paragraph 3 (Hyperbolic deep learning): I could not see what this paper intended to mean by the term "prototype" at first reading. This wording may need some definition.
Section 4.1, Paragraph 3 (Neuron models): What does the following sentence mean?: "We accept the representation properties of eigenfunctions on compact manifolds."
Section 4.2, Paragraph 5 (End-based clusters, end prototypes): It was hard for me to understand the relationship between sentences in the paragraph. For example, it is not clear at first sight how RBF is related to the discussion of clustering algorithms. I would recommend reconsidering the organization of the paragraph.
Section 5, (9): ⟨x, ω_i⟩ → ⟨x, ω_i⟩_H.
Section 6.1, Table 1: Could you explain what H^2, H^3, etc. mean?
Section 6.1: "The task (Ganea et al., 2018a) is to classify all other nodes as [...]." → It is not clear what "other" nodes means solely from the main text. I understood it after I read the first sentence of Section A.11.
Appendix A.15: This paper says that it adds a loss to distinguish the prototypes of 4 and 9. However, looking at the code, it seems the algorithm randomly selects two classes and adds the loss from the prototypes of these classes. I want to confirm whether my understanding is correct, and I recommend explaining the procedure if it is correct.
ICLR
Title Strategic Classification with Graph Neural Networks Abstract Strategic classification studies learning in settings where users can modify their features to obtain favorable predictions. Most current works focus on simple classifiers that trigger independent user responses. Here we examine the implications of learning with more elaborate models that break the independence assumption. Motivated by the idea that applications of strategic classification are often social in nature, we focus on graph neural networks, which make use of social relations between users to improve predictions. Using a graph for learning introduces inter-user dependencies in prediction; our key point is that strategic users can exploit these to promote their own goals. As we show through analysis and simulation, this can work either against the system or for it. Based on this, we propose a differentiable framework for strategically-robust learning of graph-based classifiers. Experiments on several real networked datasets demonstrate the utility of our approach. 1 INTRODUCTION Machine learning is increasingly being used to inform decisions about humans. But when users of a system stand to gain from certain predictive outcomes, they may be prone to "game" the system by strategically modifying their features (at some cost). The literature on strategic classification (Brückner & Scheffer, 2011; Hardt et al., 2016) studies learning in this setting, with emphasis on how to learn classifiers that are robust to strategic user behavior. The idea that users may respond to a decision rule applies broadly and across many domains, from hiring, admissions, and scholarships to loan approval, insurance, welfare benefits, and medical eligibility (McCrary, 2008; Almond et al., 2010; Camacho & Conover, 2011; Lee & Lemieux, 2010). This, along with its clean formulation as a learning problem, has made strategic classification the target of much recent interest (Sundaram et al., 2021; Zhang & Conitzer, 2021; Levanon & Rosenfeld, 2021; Ghalme et al., 2021; Jagadeesan et al., 2021; Zrnic et al., 2021; Estornell et al., 2021; Lechner & Urner, 2021; Harris et al., 2021; Levanon & Rosenfeld, 2022; Liu et al., 2022; Ahmadi et al., 2022; Barsotti et al., 2022a). But despite these advances, most works in strategic classification continue to follow the original problem formulation in assuming independence across users' responses. From a technical perspective, this assumption greatly simplifies the learning task, as it allows the classifier to consider each user's response in isolation: user behavior is modeled via a response mapping ∆_h(x) determining how users modify their features x in response to the classifier h, and learning aims to find an h for which y ≈ h(∆_h(x)). Intuitively, a user will modify her features if this "moves" her across the decision boundary, as long as this is worthwhile (i.e., gains from prediction exceed modification costs). Knowing ∆_h allows the system to anticipate user responses and learn an h that is robust. For a wide range of settings, learning under independent user responses has been shown to be theoretically possible (Hardt et al., 2016; Zhang & Conitzer, 2021; Sundaram et al., 2021) and practically feasible (Levanon & Rosenfeld, 2021; 2022). Unfortunately, once this assumption of independence is removed, these results no longer hold.
One reason is that current approaches can safely assume independence because the decision rules they consider induce independence: when predictions inform decisions for each user independently, users have no incentive to account for the behavior of others. This limits the scope of predictive models to include only simple functions of single inputs.
In this paper, we aim to extend the literature on strategic classification to support richer learning paradigms that enable inter-dependent user responses, with particular focus on the domain of Graph Neural Networks (GNNs) (Monti et al., 2017; Wang et al., 2019; Bronstein et al., 2017; Hamilton et al., 2017). Generally, user responses can become dependent through the classifier if predictions for one user rely also on information regarding other users, i.e., if h(x_i) is also a function of other x_j. In this way, the effects of a user modifying her features via x_j ↦ ∆_h(x_j) can propagate to other users and affect their decisions (since h(x_i) now relies on ∆_h(x_j) rather than x_j). For GNNs, this is expressed through their reliance on the graph. GNNs take as input a weighted graph whose nodes correspond to featurized examples, and whose edges indicate relations that are believed to be useful for prediction (e.g., if j→i indicates that y_i = y_j is likely). In our case, nodes represent users, and edges represent social links. The conventional approach is to first embed nodes in a way that depends on their neighbors' features, φ_i = φ(x_i; x_{nei(i)}), and then perform classification (typically linear) in embedded space, ŷ_i = sign(w^⊤φ_i). Notice ŷ_i depends on x_i, but also on all other x_j ∈ x_{nei(i)}; hence, in deciding how to respond, user i must also account for the strategic responses of her neighbors j ∈ nei(i). We aim to establish the effects of such dependencies on learning.
As a concrete example, consider Lenddo (http://lenddoefl.com; see also http://www.wired.com/2014/05/lenddo-facebook/), a company that provides credit scoring services to lending institutions. Lenddo specializes in consumer-focused microlending for emerging economies, where many applicants lack credible financial records. To circumvent the need to rely on historical records, Lenddo uses applicants' social connections, which are easier to obtain, as a factor in their scoring system (for a discussion on ethics, see the final section; for similar initiatives, see https://en.wikipedia.org/wiki/Lenddo). As an algorithmic approach for this task, GNNs are an adequate choice (Gao et al., 2021). Once loan decisions become dependent on social relations, the incentives for acting strategically change (Wei et al., 2016). To see how, consider that a user who lies far to the negative side of the decision boundary (and so independently cannot cross) may benefit from the graph if her neighbors "pull" her embedding towards the decision boundary, close enough for her to cross. Conversely, the graph can also suppress strategic behavior, since neighbors can "hold back" nodes and prevent them from crossing. Whether this is helpful to the system or not depends on the true label of the node. This presents a tradeoff: In general, graphs are useful if they are informative of labels in a way that complements features; the many success stories of GNNs suggest that this is often the case (Zhou et al., 2020). But even if this holds sans strategic behavior, once strategic behavior is introduced, graphs inadvertently create dependencies through user representations, which strategic users can exploit. Graphs therefore hold the potential to benefit the system, but also its users. Here we study the natural question: who does the graph help more?
Through analysis and experimentation, we show that learning in a way that neglects to account for strategic behavior not only jeopardizes performance, but becomes worse as reliance on the graph increases. In this sense, the graph becomes a vulnerability which users can utilize for their needs, turning it from an asset to the system into a potential threat. As a solution, we propose a practical approach to learning GNNs in strategic environments. We show that for a key neural architecture (SGC; Wu et al. (2019)) and certain cost functions, graph-dependent user responses can be expressed as a "projection-like" operator. This operator admits a simple and differentiable closed form; with additional smoothing, this allows us to implement responses as a neural layer, and learn robust predictors h using gradient methods. Experiments on synthetic and real data (with simulated responses) demonstrate that our approach not only effectively accounts for strategic behavior, but in some cases, can harness the efforts of self-interested users to promote the system's goals. Our code is publicly available at: http://github.com/StrategicGNNs/Code.
1.1 RELATED WORK
Strategic classification. Since its introduction in Hardt et al. (2016) (and based on earlier formulations in Brückner & Scheffer (2009); Brückner et al. (2012); Großhans et al. (2013)), the literature on strategic classification has been growing at a rapid pace. Various aspects of learning have been studied, including: generalization behavior (Zhang & Conitzer, 2021; Sundaram et al., 2021; Ghalme et al., 2021), algorithmic hardness (Hardt et al., 2016), practical optimization methods (Levanon & Rosenfeld, 2021; 2022), and societal implications (Milli et al., 2019; Hu et al., 2019; Chen et al., 2020; Levanon & Rosenfeld, 2021). Some efforts have been made to extend beyond the conventional user models, e.g., by adding noise (Jagadeesan et al., 2021), relying on partial information (Ghalme et al., 2021; Bechavod et al., 2022), or considering broader user interests (Levanon & Rosenfeld, 2022); but these, as do the vast majority of other works, focus on linear classifiers and independent user responses (the only exception we know of is Liu et al. (2022), who study strategic ranking but do not consider learning). We study richer predictive model classes that lead to correlated user behavior.
Graph Neural Networks (GNNs). The use of graphs in learning has a long and rich history, and remains a highly active area of research (Wu et al., 2020). Here we cover a small subset of relevant work. The key idea underlying most methods is to iteratively propagate and aggregate information from neighboring nodes. Modern approaches implement variations of this idea as differentiable neural architectures (Gori et al., 2005; Scarselli et al., 2008; Kipf & Welling, 2017; Gilmer et al., 2017). This allows expressing more elaborate forms of propagation (Li et al., 2018; Alon & Yahav, 2021) and aggregation (Wu et al., 2019; Xu et al., 2019; Li et al., 2016), including attention-based mechanisms (Veličković et al., 2018; Brody et al., 2022). Nonetheless, a key result by Wu et al. (2019) shows that, both theoretically and empirically, linear GNNs are also quite expressive.
Robustness of GNNs. As with most other fields in deep learning, GNNs have been the target of recent inquiry as to their sensitivity to adversarial attacks.
Common attacks include perturbing nodes, either in sets (Zügner et al., 2018; Zang et al., 2021) or individually (Finkelshtein et al., 2020). Attacks can be applied before training (Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Li et al., 2021; Zhang & Zitnik, 2020) or at test-time (Szegedy et al., 2014; Goodfellow et al., 2015); our work corresponds to the latter. While there are connections between adversarial and strategic behavior (Sundaram et al., 2021), the key difference is that strategic behavior is not a zero-sum game; in some cases, incentives can even align (Levanon & Rosenfeld, 2022). Thus, system-user relations become more nuanced, and provide a degree of freedom in learning that does not exist in adversarial settings.

2 LEARNING SETUP

Our setting includes n users, represented as nodes in a directed graph G = (V, E) with non-negative edge weights W = {w_ij}_(i,j)∈E, w_ij ≥ 0. Each user i is also described by a feature vector x_i ∈ R^ℓ and a binary label y_i ∈ {±1}. We use x_{-i} = {x_j}_{j≠i} to denote the set of features of all nodes other than i. Using the graph, our goal is to learn a classifier h that correctly predicts user labels. The challenge in our strategic setting is that inputs at test-time can be strategically modified by users, in response to h and in a way that depends on the graph and on other users (we describe this shortly). Denoting by x_i^h the (possibly modified) strategic response of i to h, our learning objective is:

    argmin_{h∈H} Σ_i L(y_i, ŷ_i),    ŷ_i = h(x_i^h; x_{-i}^h)    (1)

where H is the model class and L is a loss function (e.g., the log-loss). Note that both the predictions ŷ_i and the modified features x_i^h can depend on G and on x_{-i}^h (possibly indirectly through h). We focus on the inductive graph learning setting, in which training is done on G, but testing is done on a different graph, G′ (often G, G′ are two disjoint components of a larger graph). Our goal is therefore to learn a classifier that generalizes to other graphs in a way that is robust to strategic user behavior.

Graph-based learning. We consider linear graph-based classifiers, i.e., linear classifiers that operate on linear, graph-dependent node embeddings, defined as:

    h_{θ,b}(x_i; x_{-i}) = sign(θ^⊤ ϕ(x_i; x_{-i}) + b),    ϕ(x_i; x_{-i}) = w̃_ii x_i + Σ_{j≠i} w̃_ji x_j    (2)

where ϕ_i = ϕ(x_i; x_{-i}) is node i's embedding,4 θ ∈ R^ℓ and b ∈ R are learned parameters, and w̃_ij ≥ 0 are pairwise weights that depend on G and W. We refer to users j with w̃_ji ≠ 0 as the embedding neighbors of i. A simple choice of weights is w̃_ji = w_ji for (j, i) ∈ E (and 0 otherwise), but different methods propose different ways to construct w̃; here we adopt the weight scheme of Wu et al. (2019). We assume the weights w̃ are predetermined, and aim to learn θ and b in Eq. (1).

3 The only exception we know of is Liu et al. (2022), who study strategic ranking, but do not consider learning.
4 Note that embeddings preserve the dimension of the original features.

Our focus on linear GNNs stems from several factors. From the perspective of strategic classification, linear decision rules ensure that strategic responses are computationally tractable (see Eq. (4)). This is conventionally required, and most works remain in the linear regime. From the perspective of GNNs, linear architectures have been shown to match state-of-the-art performance on multiple tasks (Wu et al., 2019), suggesting that they sufficiently manifest the fundamental role of graphs.
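To make the setup concrete, the following is a minimal sketch of the classifier in Eq. (2). This is our own illustration, not the authors' code: it assumes a feature matrix X of shape (n, ℓ) and a weight matrix W_tilde whose entry [j, i] holds w̃_ji (the influence of node j on node i's embedding).

```python
import numpy as np

def embed(X, W_tilde):
    # phi_i = w_ii * x_i + sum_{j != i} w_ji * x_j, i.e., row i of W_tilde^T X
    return W_tilde.T @ X

def predict(X, W_tilde, theta, b):
    phi = embed(X, W_tilde)           # (n, l) node embeddings
    return np.sign(phi @ theta + b)   # (n,) predicted labels in {-1, +1}
```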
Thus, linear GNNs serve as a minimal necessary step for bridging standard strategic classification and graph-based learning, in a way that captures the fundamental structure of the learning task in both domains. Nonetheless, as we show in Sec. 4, even for linear GNNs, user responses can cause learning to be highly non-linear.

Strategic inputs. For the strategic aspects of our setting, we build on the popular formulation of Hardt et al. (2016). Users seek to be classified positively (i.e., have ŷ_i = 1), and to achieve this, are willing to modify their features (at some cost). Once the system has learned and published h, a test-time user i can modify her features x_i ↦ x′_i in response to h. Modification costs are defined by a cost function c(x, x′) (known to all); here we focus mainly on 2-norm costs c(x, x′) = ∥x − x′∥_2 (Levanon & Rosenfeld, 2022; Chen et al., 2020), but also discuss other costs (Brückner et al., 2012; Levanon & Rosenfeld, 2021; Bechavod et al., 2022). User i modifies her features (or "moves") if this improves her prediction (i.e., if h(x_i) = −1 but h(x′_i) = 1) and is cost-effective (i.e., prediction gains exceed modification costs); for linear classifiers, this means crossing the decision boundary. Note that since y ∈ {±1}, gains are at most h(x′) − h(x) = 2. Users therefore do not move to any x′ whose cost c(x, x′) exceeds a 'budget' of 2, and the maximal moving distance is d = 2.

Distribution shift. One interpretation of strategic classification is that user responses cause distribution shift, since in aggregate, p(x′) ≠ p(x). Crucially, how the distribution changes depends on h, which implies that the system has some control over the test distribution p(x′), indirectly through how users respond; this is a special case of model-induced distribution shift (Miller et al., 2021; Maheshwari et al., 2022). The unique aspect of our setting is that user responses are linked through their mutual dependence on the graph. We next describe our model of user responses in detail.

3 STRATEGIC USER BEHAVIOR: MODEL AND ANALYSIS

Eq. (2) states that h classifies i according to her embedding ϕ_i, which in turn is a weighted sum of her features and those of her neighbors. To gain intuition as to the effects of the graph on user behavior, it will be convenient to assume the weights w̃ are normalized,5 so that we can write:

    ϕ_i = ϕ(x_i; x_{-i}) = (1 − α_i) x_i + α_i x̄_i    for some α_i ∈ [0, 1]    (3)

I.e., ϕ_i can be viewed as an interpolation between x_i and some point x̄_i ∈ R^ℓ representing all other nodes, where the precise point along the line depends on a parameter α_i that represents the influence of the graph (in a graph-free setting, α_i = 0). This reveals the dual effect a graph has on users: On the one hand, the graph limits the ability of user i to influence her own embedding, since any effort invested in modifying x_i affects ϕ_i by a factor of at most 1 − α_i. But the flip side is that an α_i-portion of ϕ_i is fully determined by other users (as expressed in x̄_i); if they move, i's embedding also 'moves' for free. A user's 'effective' movement radius is r_i = d(1 − α_i). Fig. 1 (F) shows this for varying α_i.

5 This is indeed the case in several common approaches.

3.1 STRATEGIC RESPONSES

Given that h relies on the graph for predictions, how should a user modify her features x_i to obtain ŷ_i = 1?
In vanilla strategic classification (where h operates on each x_i independently), users are modeled as rational agents that respond to the classifier by maximizing their utility, i.e., play x′_i = argmax_{x′} h(x′) − c(x_i, x′), which is a best-response that results in immediate equilibrium (users have no incentive to move, and the system has no incentive to change h).6 In our graph-based setting, however, the dependence of ŷ_i on all other users via h(x_i; x_{-i}) makes this notion of best-response ill-defined, since the optimal x′_i can depend on others' strategic responses, x′_{-i}, which are unknown to user i at the time of decision (and may very well rely on x′_i itself). As a feasible alternative, here we generalize the standard model by assuming that users play myopic best-response over a sequence of multiple update rounds. As we will see, this has direct connections to key ideas underlying graph neural networks.

Denote the features of node i at round t by x_i^(t), and set x_i^(0) = x_i. A myopic best response means that at round t, each user i chooses x_i^(t) to maximize her utility at time t according to the state of the game at time t − 1, i.e., assuming all other users play {x_j^(t−1)}_{j≠i}, with costs accumulating over rounds. This defines a myopic response mapping:

    ∆_h(x_i; x_{-i}, κ) ≜ argmax_{x′∈R^ℓ} h(x′; x_{-i}) − c(x_i, x′) − κ    (4)

where at round t updates are made (concurrently) via x_i^(t+1) = ∆_h(x_i^(t); x_{-i}^(t), κ_i^(t)), with accumulating costs κ_i^(t) = κ_i^(t−1) + c(x_i^(t−1), x_i^(t)) and κ_i^(0) = 0. Predictions for round t are ŷ_i^(t) = h(x_i^(t); x_{-i}^(t)). Eq. (4) naturally extends the standard best-response mapping (which is recovered when α_i = 0 for all i, and converges after one round). By adding a temporal dimension, the actions of users propagate over the graph and in time to affect others. Nonetheless, even within a single round, graph-induced dependencies can result in non-trivial behavior; some examples for ℓ = 1 are given in Fig. 1 (A-D).

3.2 ANALYSIS

We now give several results demonstrating basic properties of our response model and the consequent dynamics, which shed light on how the graph differentially affects the system and its users.

Convergence. Although users are free to move at will, movement adheres to a certain useful pattern.

Proposition 1. For any h, if users move via Eq. (4), then for all i ∈ [n], x_i^(t) ≠ x_i^(t−1) at most once.

Proof. User i will move only when: (i) she is currently classified negatively, h(x_i; x_{-i}) = −1, and (ii) there is some x′ for which utility can improve, i.e., h(x′; x_{-i}) − c(x_i, x′) > −1, which in our case occurs if h(x′; x_{-i}) = 1 and c(x_i, x′) < 2 (since h maps to [−1, 1]).7 Eq. (4) ensures that the modified x′_i will be such that ϕ(x′_i; x_{-i}) lies exactly on the decision boundary of h; hence, x′_i must be closer to the decision boundary (in Euclidean distance) than x_i. This means that any future moves of an (incoming) neighbor j can only push i further away from the decision boundary; hence, the prediction for i remains positive, and she has no future incentive to move again.8 Hence, all users move at most once.

The proof reveals a certain monotonicity principle: users always (weakly) benefit from any strategic movement of others. Convergence follows as an immediate result.

Corollary 1. Myopic best-response dynamics converge for any h (and after at most n rounds).

We will henceforth use x_i^h to denote the features of user i at convergence (w.r.t. h), reached at some round denoted T_max.

6 Note that 'rational' here implies users are assumed to know h. As in most works in the field, we also make this assumption; for the practically-inclined reader, note that (i) in some cases, there is reason to believe it may approximately hold (e.g., http://openschufa.de), and (ii) relaxing this assumption (and others) is an ongoing community effort (Ghalme et al., 2021; Jagadeesan et al., 2021; Bechavod et al., 2022; Barsotti et al., 2022b).
7 In line with Hardt et al. (2016), we assume that if the value is zero then the user does not move.
8 Users moving only once ensures that cumulative costs are never larger than the final gain.
Hitchhiking. When i moves, the embeddings of (outgoing) neighbors j who currently have ŷ_j = −1 also move closer to the decision boundary; thus, users who were initially too far to cross may be able to do so at later rounds. In this sense, the dependencies across users introduced by the graph-dependent embeddings align user incentives, and promote an implicit form of cooperation. Interestingly, users can also obtain positive predictions without moving at all. We refer to such users as 'hitchhikers'.

Proposition 2. There exist cases where ŷ_i^(t) = −1 and i doesn't move, but ŷ_i^(t+1) = 1.

A simple example can be found in Figure 1 (E). Hitchhiking demonstrates how relying on the graph for classification can promote strategic behavior, even under a single response round.

Cascading behavior. Hitchhiking shows how the movement of one user can flip the label of another, but the effects of this process are constrained to a single round. When considering multiple rounds, a single node can trigger a 'domino effect' of moves that spans the entire sequence.

Proposition 3. For any n, there exists a graph where a single move triggers n additional rounds.

Proposition 4. For any n and k ≤ n, there exists a graph where n − k users move at round k.

Proofs are constructive and modular, and rely on graphs that are predictively useful (Appendix A.2). Note also that graph diameter is not a mediating factor (Appendix E.3). Both results show that, through monotonicity, users also (weakly) benefit from additional rounds. This has concrete implications.

Corollary 2. In the worst case, the number of rounds until convergence is Ω(n).

Corollary 3. In the worst case, Ω(n) users move after Ω(n) rounds.

Thus, to exactly account for user behavior, the system must correctly anticipate the strategic responses of users many rounds into the future, since a bulk of predictions may flip in the last round. Fortunately, these results also suggest that in some cases, blocking one node from crossing can prevent a cascade of flips; thus, it may be worthwhile to 'sacrifice' certain predictions for collateral gains. This presents an interesting tradeoff in learning, encoded in the learning objective we present next, and which we motivate with our final result on the potential impact of strategic behavior:

Proposition 5. The gap in accuracy between (i) the optimal non-strategic classifier on non-strategic data, and (ii) the optimal strategic classifier on strategic data, can be as large as 30% (see Apx. E.1).

4 LEARNING AND OPTIMIZATION

We are now ready to describe our learning approach. Our learning objective can be restated as:

    ĥ = argmin_{h∈H} Σ_i L(y_i, h(x_i^h; x_{-i}^h))    (5)

for H = {h_{θ,b}} as in Eq. (2). The difficulty in optimizing Eq. (5) is that the x^h depend on h through the iterative process, which relies on ∆_h. At test time, x^h can be computed exactly by simulating the dynamics.
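As an illustration of this exact simulation, the following sketch (our own, not from the paper's repository) runs the myopic dynamics of Sec. 3.1 under 2-norm costs, using the closed-form boundary projection that Sec. 4 derives in Eq. (7). It assumes w̃_ii > 0 for all i.

```python
import numpy as np

def simulate_responses(X, W_tilde, theta, b, budget=2.0, max_rounds=None):
    """Myopic best-response dynamics until convergence (Cor. 1: <= n rounds)."""
    X = X.copy()
    n = X.shape[0]
    moved = np.zeros(n, dtype=bool)          # each user moves at most once (Prop. 1)
    for _ in range(max_rounds or n):
        phi = W_tilde.T @ X                  # embeddings from the round-start state
        scores = phi @ theta + b
        changed = False
        for i in np.where((scores < 0) & ~moved)[0]:
            # point that places phi_i exactly on the boundary (Eq. (7))
            x_new = X[i] - (scores[i] / (theta @ theta * W_tilde[i, i])) * theta
            if np.linalg.norm(x_new - X[i]) <= budget:   # cost-effective move?
                X[i], moved[i], changed = x_new, True, True
        if not changed:                      # no user moved: converged
            break
    return X
```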
However, at train time, we would like to allow gradients of θ, b to propagate through x^h. For this, we propose an efficient differentiable proxy of x^h, implemented as a stack of layers, each corresponding to one response round. The number of layers is a hyperparameter, T.

Single round. We begin by examining a single iteration of the dynamics, i.e., T = 1. Note that since a user moves only if the cost is at most 2, Eq. (4) can be rewritten as:

    ∆_h(x_i; x_{-i}) = x′_i  if h(x_i; x_{-i}) = −1 and c(x_i, x′_i) ≤ 2;  x_i  otherwise    (6)

where x′_i = proj_h(x_i; x_{-i}) is the point to which x_i must move in order for ϕ(x_i; x_{-i}) to be projected onto h. This projection-like operator (on x_i) can be shown to have a closed-form solution:

    proj_h(x_i; x_{-i}) = x_i − ((θ^⊤ ϕ(x_i; x_{-i}) + b) / (∥θ∥_2^2 w̃_ii)) θ    (7)

See Appendix B.1 for a derivation using KKT conditions. Eq. (7) is differentiable in θ and b; to make the entire response mapping differentiable, we replace the 'hard if' in Eq. (6) with a 'soft if', which we now describe. First, to account only for negatively-classified points, we ensure that only points in the negative halfspace are projected via a 'positive-only' projection:

    proj_h^+(x_i; x_{-i}) = x_i − min{0, (θ^⊤ ϕ(x_i; x_{-i}) + b) / (∥θ∥_2^2 w̃_ii)} θ    (8)

Then, we replace the c ≤ 2 constraint with a smoothed sigmoid that interpolates between x_i and the projection, as a function of the cost of the projection, thresholded at 2. This gives our differentiable approximation of the response mapping:

    ∆̃(x_i; x_{-i}, κ) = x_i + (x′_i − x_i) σ_τ(2 − c(x_i, x′_i) − κ),    where x′_i = proj_h^+(x_i; x_{-i})    (9)

where σ is a sigmoid, τ is a temperature hyperparameter (τ → 0 recovers Eq. (6)), and for T = 1, κ = 0. In practice we add a small additive tolerance term for numerical stability (see Appendix B.3).

Multiple rounds. Next, we consider the computation of (approximate) modified features after T > 1 rounds, denoted x̃^(T), in a differentiable manner. Our approach is to apply ∆̃ iteratively as:

    x̃_i^(t+1) = ∆̃(x̃_i^(t); x̃_{-i}^(t), κ_i^(t)),    x̃_i^(0) = x_i    (10)

Considering ∆̃ as a layer in a neural network, approximating T rounds can be done by stacking. In Eq. (10), κ_i^(t) is set to accumulate the costs of approximate responses, κ_i^(t) = κ_i^(t−1) + c(x̃_i^(t−1), x̃_i^(t)). One observation is that for 2-norm costs, κ_i^(t) = c(x̃_i^(0), x̃_i^(t)) (by the triangle inequality; since all points move along a line, equality holds). We can therefore simplify Eq. (9) and replace c(x_i^(t−1), x′_i) − κ_i^(t−1) with c(x_i^(0), x′_i). For other costs, this gives a lower bound (see Appendix B.1).
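A minimal PyTorch sketch of one such response layer (Eqs. (8)-(9)) might look as follows; this is our own illustration under stated assumptions, not the released implementation. Variable names are ours, and κ = 0 corresponds to the single-round case; stacking T copies (with the accumulated-cost simplification above) approximates Eq. (10).

```python
import torch

def soft_response_layer(X, W_tilde, theta, b, tau=0.05, budget=2.0):
    """One differentiable response round: 'positive-only' projection (Eq. (8))
    followed by a smoothed cost-feasibility gate (Eq. (9)), with kappa = 0."""
    phi = W_tilde.t() @ X                         # (n, l) embeddings
    scores = phi @ theta + b                      # (n,) signed scores
    w_self = torch.diagonal(W_tilde)              # w_ii, (n,)
    # Eq. (8): only points with negative score take a (nonzero) projection step
    step = torch.clamp(scores, max=0.0) / (theta.pow(2).sum() * w_self)
    X_proj = X - step.unsqueeze(1) * theta        # proj+_h(x_i; x_-i)
    # Eq. (9): soft gate on whether the projection is within the cost budget
    cost = torch.norm(X_proj - X, dim=1)
    gate = torch.sigmoid((budget - cost) / tau)   # sigma_tau(2 - c(x, x'))
    return X + gate.unsqueeze(1) * (X_proj - X)
```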
5 EXPERIMENTS

5.1 SYNTHETIC DATA

We begin our empirical evaluation by demonstrating different aspects of learning in our setting using a simple but illustrative synthetic example. Additional results and insights on movement trends, the effects of movement on accuracy, and the importance of looking ahead, can be found in Appendix D.1. For our experimental setup, we set ℓ = 1 and sample features x_i ∈ R for each class from a corresponding Gaussian N(y, 1) (classes are balanced). For each node, we uniformly sample 5 neighbors from the same class and 3 from the other, and use uniform weights (a code sketch of this setup appears at the end of this subsection). This creates a task where both features and the graph are informative about labels, but only partially, and in a complementary manner (i.e., noise is uncorrelated; for i with y_i = 1, if x_i < 0, it is still more likely that most neighbors have x_j > 0, and vice versa).

As it is a-priori unclear how to optimally combine these sources, we study the effects of relying on the graph to various degrees by varying a global α, i.e., setting w̃_ii = 1 − α and w̃_ij = α/deg_i for all i and all j ≠ i. We examine both strategic and non-strategic settings, the latter serving as a benchmark. Since ℓ = 1, H = {h_b} is simply the class of thresholds; hence we can scan all thresholds b and report learning outcomes for all models h_b ∈ H. For non-strategic data, the optimal h* has b* ≈ 0; for strategic data, the optimal h* can be found using line search. Testing is done on disjoint but similarly sampled held-out features and graph.

The effects of strategic behavior. Figure 2 (left) presents the accuracy of the learned ĥ for varying α and in different settings. In a non-strategic setting (dashed gray), increasing α helps, but if reliance on the graph becomes exaggerated, performance deteriorates (α ≈ 0.7 is optimal). Allowing users to respond strategically reverses this result: for α = 0 (i.e., no graph), responses lower accuracy by ≈ 0.26 points; but as α increases, the gap grows, becoming more pronounced as test-time response rounds progress (blue lines). Interestingly, performance under strategic behavior is worst around the previously-optimal α ≈ 0.75. This shows how learning in a strategic environment while neglecting to account for strategic behavior can be detrimental. By accounting for user behavior, our approach (orange line) not only recovers performance, but slightly improves upon the non-strategic setting (this can occur when positive points are properly incentivized; see Appendix D.1).

Sensitivity analysis. Figure 2 (right) plots the accuracy of all threshold models h_b for increasing values of α. For each α, performance exhibits a 'bell-curve' shape, with its peak at the optimal h*. As α increases, bell-curves change in two ways. First, their centers shift, decreasing from positive values towards zero (which is optimal for non-strategic data); since using the graph limits users' effective radius of movement, the optimal decision boundary can be less 'stringent'. Second, and interestingly, bell-curves become narrower. We interpret this as a measure of tolerance: the wider the curve, the lower the loss in accuracy when the learned ĥ is close to (but does not equal) h*. The figure shows, for a subset of α values, 'tolerance bands': intervals around b* that include thresholds b for which the accuracy of h_b is at least 90%, 95%, and 97.5% of the optimum (horizontal lines). Results indicate that larger values of α provide less tolerance. If variation in ĥ can be attributed to the number of examples, this can be interpreted as hinting that larger α may entail larger sample complexity.

Number of layers (T). Figure 2 (right) also shows, for each bell-curve, the accuracy achieved by learned models ĥ of increasing depths, T = 1, ..., 4 (colored dots). For α = 0 (no graph), there are no inter-user dependencies, and dynamics converge after one round. Hence, T = 1 suffices and is optimal, and additional layers are redundant. However, as α increases, more users move in later rounds, and learning with an insufficiently large T results in deteriorated performance. This becomes especially distinct for large α: e.g., for α = 0.9, performance drops by ∼ 11% when using T = 1 instead of the optimal T = 4. Interestingly, lower T always results in lower, more 'lenient' thresholds; as a result, performance deteriorates, and more quickly for larger, more sensitive α. Thus, the relations between α and T suggest that greater reliance on the graph requires more depth.
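For reference, the sketch below generates data in the spirit of the setup described above; function and variable names, and the seeding convention, are our own assumptions rather than the authors' code.

```python
import numpy as np

def make_synthetic(n, alpha, seed=0):
    """1-D features x_i ~ N(y_i, 1); 5 same-class and 3 cross-class neighbors
    per node; weights w_ii = 1 - alpha and w_ji = alpha / deg_i (deg_i = 8)."""
    rng = np.random.default_rng(seed)
    y = rng.choice([-1, 1], size=n)                       # balanced classes
    x = rng.normal(loc=y.astype(float), scale=1.0)        # features
    W = np.zeros((n, n))
    for i in range(n):
        same = np.where(y == y[i])[0]
        same = rng.choice(same[same != i], size=5, replace=False)
        diff = rng.choice(np.where(y != y[i])[0], size=3, replace=False)
        W[np.concatenate([same, diff]), i] = alpha / 8.0  # neighbors j -> i
        W[i, i] = 1.0 - alpha
    return x[:, None], y, W
```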
5.2 EXPERIMENTS ON REAL DATA

Data. We use three benchmark datasets used extensively in the GNN literature: Cora, CiteSeer, and PubMed (Sen et al., 2008; Kipf & Welling, 2017), and adapt them to our setting. We use the standard (transductive) train-test split of Sen et al. (2008); the data is made inductive by removing all test-set nodes that can be influenced by train-set nodes (Hamilton et al., 2017). All three datasets describe citation networks, with papers as nodes and citations as edges. Although these are directed relations by nature, the available data include only undirected edges; hence, we direct edges towards lower-degree nodes, so that movement of higher-degree nodes is more influential. As our setup requires binary labels, we follow standard practice and merge classes, aiming for balanced binary classes that sustain strategic movement. Appendix C includes further details; see Appendix D.2 for additional results on strategic improvement, extending neighborhood size, and node centrality and influence.

Methods. We compare our robust learning approach to a naïve approach that does not account for strategic behavior (i.e., falsely assumes that users do not move). As a benchmark we report the performance of the naïve model on non-strategic data (for which it is appropriate). All methods are based on the SGC architecture (Wu et al., 2019), as it is expressive enough to effectively utilize the graph, but simple enough to permit rational user responses (Eq. (4); see also the notes in Sec. 1.1). We use the standard weights W̃ = D^{−1/2} A D^{−1/2}, where A is the adjacency matrix and D is the diagonal degree matrix.

Optimization and setup. We train using Adam and set hyperparameters according to Wu et al. (2019) (learning rate = 0.2, weight decay = 1.3 · 10^{−5}). Training is stopped after 20 epochs (this usually suffices for convergence). Hyperparameters were determined based only on the train set: τ = 0.05, chosen to be the smallest value which retained stable training, and T = 3, as training typically saturates then (we also explore varying depths). We use β-scaled 2-norm costs, c_β(x, x′) = β∥x − x′∥_2 with β ∈ R_+, which induce a maximal moving distance of d_β = 2/β. We observed that values around d = 0.5 permit almost arbitrary movement; we therefore experiment in the range d ∈ [0, 0.5], but focus primarily on the mid-point d = 0.25 (note that d = 0 implies no movement). Means and standard errors are reported over five random initializations. Appendix C includes further details.
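The weight computation itself is standard; the following is a sketch (our own, assuming no isolated nodes, and omitting the self-loop convention some SGC variants add):

```python
import numpy as np

def sgc_weights(A, K=1):
    """Normalized weights D^{-1/2} A^K D^{-1/2} (K = 1 in the main results)."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ np.linalg.matrix_power(A, K) @ d_inv_sqrt
```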
Results. Table 1 presents detailed results for d = 0.25 and T = 3. As can be seen, the naïve approach is highly vulnerable to strategic behavior. In contrast, by anticipating how users collectively respond, our robust approach is able to recover most of the drop in accuracy (i.e., from 'benchmark' to 'naïve'; Cora: 35%, CiteSeer: 16%, PubMed: 72%). Note this is achieved with a T much smaller than necessary for the response dynamics to converge (T_max: Cora = 7, CiteSeer = 7, PubMed = 11).

Fig. 3 (top) shows results for varying max distances d ∈ [0, 0.5], fixing T = 3 (note d = 0 entails no movement). For Cora and CiteSeer, larger max distances (the result of lower modification costs) hurt performance; nonetheless, our robust approach maintains a fairly stable recovery rate over all values of d. For PubMed, our approach retains ≈ 92% of the optimum, showing resilience to reduced costs. Interestingly, for CiteSeer, in the range d ∈ [0.05, 0.15], our approach improves over the baseline, suggesting it utilizes strategic movements for improved accuracy (as in Sec. 5.1).

Fig. 3 (bottom) shows results for varying depths T ∈ {0, ..., 10}. For all datasets, results improve as T increases, but saturate quickly at T ≈ 3; this suggests a form of robustness of our approach to overshooting in the choice of T (which, due to smoothing, can cause larger deviations from the true dynamics). Using T = 1 recovers between 65%–91% (across datasets) of the optimal accuracy. This shows that while considering only one round of user responses (in which there are no dependencies) is helpful, it is much more effective to consider multiple, dependent rounds, even if only a few.

6 DISCUSSION

In this paper we study strategic classification under graph neural networks. Relying on a graph for prediction introduces dependencies in user responses, which can result in complex correlated behavior. The incentives of the system and its users are not aligned, but also not discordant; our proposed learning approach utilizes this degree of freedom to learn strategically-robust classifiers. Strategic classification assumes rational user behavior; this necessitates classifiers that are simple enough to permit tractable best-responses. A natural future direction is to consider more elaborate predictive architectures coupled with appropriate boundedly-rational user models, in hopes of shedding further light on questions regarding the benefits and risks of transparency and model explainability.

ETHICS AND SOCIETAL IMPLICATIONS

In our current era, machine learning is routinely used to make predictions about humans. These, in turn, are often used to inform, or even determine, consequential decisions. That humans can (and do) respond to decision rules is a factual reality, and is a topic of continual interest in fields such as economics (e.g., Nielsen et al., 2010) and policy-making (e.g., Camacho & Conover, 2013); the novelty of strategic classification is that it studies decision rules that are a product of learned predictive models. Strategic classification not only acknowledges this reality, but also proposes tools for learning in ways that account for it. But in modeling and anticipating how users respond, and by adjusting learning to accommodate their effects, learning also serves to 'steer' its population of users, perhaps inadvertently, towards certain outcomes (Hardt et al., 2022). GNNs are no exception to this reality.

In the domain of graph-based learning, the role of predictive models is expressed in how they associate social connections with decision outcomes for individuals. Clearly, the choice of whether to make use of social data for decisions can be highly sensitive, and doing so necessitates much forethought and care. But the question of whether to use social data to enhance prediction is not binary in nature, i.e., there is no simple 'right' or 'wrong'. Consider our example of the credit scoring company, Lenddo. On the one hand, Lenddo has been criticized for potentially discriminating against applicants based on whom they choose to socialize with (or, rather, who chooses to socialize with them).
But on the other hand, Lenddo, which focuses primarily on developing countries, has been acclaimed for providing financial assistance to a large community of deserving applicants who, due to the conservative norms of typical credit scoring rules, would otherwise be denied a consequential loan.9 Such considerations apply broadly. In other focal domains of strategic classification, such as loans, university admissions, and job hiring, the use of social data for informing decisions can be highly controversial, on both ethical and legal grounds. Regulation is necessary, but as in similar areas, it often lags far behind the technology itself. This highlights the need for transparency and accountability in how, when, and to what purpose social data is used (Ghalme et al., 2021; Jagadeesan et al., 2021; Bechavod et al., 2022; Barsotti et al., 2022b).

ACKNOWLEDGEMENTS

This research was supported by the Israel Science Foundation (grant No. 278/22).

A ANALYSIS

A.1 HITCHHIKING

Here we provide a concrete example of hitchhiking, following Fig. 1 (E). The example includes three nodes, i, j, k, positioned at x_k = −3, x_i = −2.1, x_j = −0.5, and connected via edges k→j and j→i. Edge weights are w̃_ji = 0.6 and w̃_ii = 0.4; w̃_kj = 1/3 and w̃_jj = 2/3; and w̃_kk = 1. The example considers a threshold classifier h_b with b = 0, and unit-scale costs (i.e., β = 1) inducing a maximal moving distance of d = 2. We show that i cannot invest effort to cross and obtain ŷ_i = 1; but once j moves (to obtain ŷ_j = 1), this results in i also being classified positively (without moving).

Initially (at round t = 0), node embeddings are:

    ϕ_k = −3,    ϕ_i = −1.14,    ϕ_j = −4/3

and all points are classified negatively, ŷ_k = ŷ_i = ŷ_j = −1. Notice that i cannot cross the decision boundary even if she moves the maximal cost-feasible distance of d = 2:

    ϕ(x_i^(0) + 2; x_{-i}^(0)) = w̃_ii (x_i^(0) + 2) + w̃_ji x_j^(0) = 0.4(−2.1 + 2) + 0.6(−0.5) = −0.34 < 0

Hence, i doesn't move, so x_i^(1) = x_i^(0). Similarly, k cannot cross, so x_k^(1) = x_k^(0). However, j can cross by moving to 1.5 (at cost 2) in order to get ŷ_j = 1:

    x_j^(1) = 1.5 = −0.5 + 2 = x_j^(0) + 2
    ⇒ ϕ(x_j^(1); x_{-j}^(1)) = w̃_jj x_j^(1) + w̃_kj x_k^(0) = (2/3)(1.5) + (1/3)(−3) = 0  ⇒ ŷ_j^(1) = 1

After j moves, i is classified positively (and so does not need to move):

    ϕ(x_i^(1); x_{-i}^(1)) = w̃_ii x_i^(1) + w̃_ji x_j^(1) = 0.4(−2.1) + 0.6(1.5) = 0.06 > 0  ⇒ ŷ_i^(2) = 1
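The arithmetic above can be checked directly; here is a small script reproducing the example's numbers (variable names are ours):

```python
# Numerical check of the hitchhiking example in A.1 (nodes k, j, i).
w = {('k', 'j'): 1/3, ('j', 'j'): 2/3, ('j', 'i'): 0.6, ('i', 'i'): 0.4}
x = {'k': -3.0, 'j': -0.5, 'i': -2.1}

phi_i = w[('i', 'i')] * x['i'] + w[('j', 'i')] * x['j']            # -1.14 < 0
phi_i_best = w[('i', 'i')] * (x['i'] + 2) + w[('j', 'i')] * x['j'] # -0.34: i cannot cross
x['j'] += 2.0                                                      # j moves to 1.5 (cost 2)
phi_j = w[('j', 'j')] * x['j'] + w[('k', 'j')] * x['k']            # 0.0: j crosses
phi_i_after = w[('i', 'i')] * x['i'] + w[('j', 'i')] * x['j']      # 0.06 > 0: i hitchhikes
print(phi_i, phi_i_best, phi_j, phi_i_after)
```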
A.2 CASCADING BEHAVIOR

We give a constructive example (for any n) which will be used to prove Propositions 3 and 4. The construction is modular, meaning that we build a small 'cyclic' structure of size 3, such that for any given n, we simply replicate this structure roughly n/3 times, and include two additional 'start' and 'finish' nodes. Our example assumes a threshold classifier h_b with b = 0, and scaled costs c_β with β = 2/3, inducing a maximal moving distance of d_β = 2/β = 3.

Fix n. We construct a graph of size n + 2 as follows. Nodes are indexed 0, ..., n + 1. The graph has bi-directional edges between each pair of consecutive nodes, namely (i, i + 1) and (i + 1, i) for all i = 0, ..., n, except for the last node, which has only an outgoing edge (n + 1, n), but no incoming edge. We set uniform normalized edge weights, i.e., w_ij = 1/3 and w_ii = 1/3 for all 1 ≤ i, j ≤ n, and w_{0,0} = w_{0,1} = 1/2 and w_{n+1,n+1} = w_{n+1,n} = 1/2. The initial features of each node are defined as:

    x_0 = −1;    for i = 1, ..., n + 1:  x_i = 2 if i mod 3 = 1, and x_i = −4 otherwise    (11)

Figure 4 (A) illustrates this for n = 3. Note that while the graph creates a 'chain' structure, the positioning of node features is cyclic (starting from i = 1): 2, −4, −4, 2, −4, −4, 2, ... etc. We begin with a lemma showing that in our construction, each node i = 1, ..., n moves precisely at round t = i.

Lemma 1. At every round 1 ≤ t ≤ n: (1) node i = t moves, with x_i^(t) = 5 if i mod 3 = 1, and x_i^(t) = −1 otherwise; (2) all nodes j > t do not move, i.e., x_j^(t) = x_j^(t−1).

Note that (1) (together with Prop. 1) implies that for any round t, all nodes i < t (which have already moved at the earlier round t′ = i) do not move again. Additionally, (2) implies that all j > t remain in their initial position, i.e., x_j^(t) = x_j^(0). Finally, notice that the starting node x_0 has ϕ_0 = 0.5, meaning that ŷ_0^(0) = 1, and so it does not move at any round.

Proof. We begin with the case n = 3.

• Round 1: Node i = 1 can cross by moving the maximal distance of 3:

    w̃_{1,1}(x_1^(0) + 3) + w̃_{0,1} x_0^(0) + w̃_{2,1} x_2^(0) = (1/3)(2 + 3) + (1/3)(−1) + (1/3)(−4) = 0    (12)

However, nodes 2 and 3 cannot cross even if they move the maximal feasible distance:

    w̃_{2,2}(x_2^(0) + 3) + w̃_{1,2} x_1^(0) + w̃_{3,2} x_3^(0) = (1/3)(−4 + 3) + (1/3)(2) + (1/3)(−4) = −1 < 0    (13)

    w̃_{3,3}(x_3^(0) + 3) + w̃_{2,3} x_2^(0) + w̃_{4,3} x_4^(0) = (1/3)(−4 + 3) + (1/3)(−4) + (1/3)(2) = −1 < 0    (14)

• Round 2: Node i = 2 can cross by moving the maximal distance of 3:

    w̃_{2,2}(x_2^(1) + 3) + w̃_{1,2} x_1^(1) + w̃_{3,2} x_3^(1) = (1/3)(−4 + 3) + (1/3)(5) + (1/3)(−4) = 0    (15)

However, node 3 cannot cross even if it moves the maximal feasible distance:

    w̃_{3,3}(x_3^(1) + 3) + w̃_{2,3} x_2^(1) + w̃_{4,3} x_4^(1) = (1/3)(−4 + 3) + (1/3)(−4) + (1/3)(2) = −1 < 0    (16)

• Round 3: Node i = 3 can cross by moving the maximal distance of 3:

    w̃_{3,3}(x_3^(2) + 3) + w̃_{2,3} x_2^(2) + w̃_{4,3} x_4^(2) = (1/3)(−4 + 3) + (1/3)(−1) + (1/3)(2) = 0    (17)

Fig. 4 (A) illustrates this procedure for n = 3. Next, consider n > 3. Due to the cyclical nature of the feature positioning and the chain structure of our graph, we can consider what happens when we sequentially add nodes to the graph. By induction, we can show:

• n mod 3 = 1: Consider round t = n. Node n has x_n^(t−1) = 2, and two neighbors: n − 1, who after moving at the previous round has x_{n−1}^(t−1) = −1; and n + 1, who has a fixed x_{n+1}^(t−1) = −4. Thus, it is in the same configuration as node i = 1, and so its movement follows Eq. (12).

• n mod 3 = 2: Consider round t = n. Node n has x_n^(t−1) = −4, and two neighbors: n − 1, who after moving at the previous round has x_{n−1}^(t−1) = 5; and n + 1, who has a fixed x_{n+1}^(t−1) = −4. Thus, it is in the same configuration as node i = 2, and so its movement follows Eq. (15).

• n mod 3 = 0: Consider round t = n. Node n has x_n^(t−1) = −4, and two neighbors: n − 1, who after moving at the previous round has x_{n−1}^(t−1) = −1; and n + 1, who has a fixed x_{n+1}^(t−1) = 2. Thus, it is in the same configuration as node i = 3, and so its movement follows Eq. (17).

Fig. 4 (B) illustrates this idea for n > 3. We now proceed to prove the propositions.

Proposition 3: The proposition follows immediately from Lemma 1; the only detail that remains to be shown is that node n + 1 does not move at all. To see this, note that since it has no incoming edges, its embedding depends only on its own features, x_{n+1}. If (n + 1) mod 3 = 1, we have x_{n+1} = 2, and so ŷ_{n+1} = 1 without movement. Otherwise, x_{n+1} = −4, meaning that it is too far to cross.

Proposition 4: Fix n and k ≤ n.
Consider the same construction presented above for a graph of size k + 2. Then, add n − k identical nodes: for each k < j ≤ n, add an edge k→j, and set x_j = −x_k − 6. We claim that all such nodes move exactly at round k. Consider some node k < j ≤ n. Since x_k moves only at round k (following Lemma 1), j does not move in any of the first t ≤ k rounds:

    w̃_jj(x_j^(0) + 3) + w̃_kj x_k^(0) = (1/2)(−x_k^(0) − 6 + 3) + (1/2) x_k^(0) = (1/2)(−x_k^(0) − 3) + (1/2) x_k^(0) = −1.5 < 0    (18)

At the end of round t = k, node k has a value of x_k^(0) + 3. This enables j to cross by moving the maximal distance of 3:

    w̃_jj(x_j^(k) + 3) + w̃_kj x_k^(k) = (1/2)(−x_k^(0) − 6 + 3) + (1/2)(x_k^(0) + 3) = (1/2)(−x_k^(0) − 3) + (1/2)(x_k^(0) + 3) = 0    (19)

As this applies to all such j, we get that n − k nodes move at round k, which concludes our proof.

Note that the graph is such that, for b = 0, without strategic behavior the graph is useful for prediction (it increases accuracy from 66% to 100%), so that a learner that is unaware of (or does not account for) strategic behavior is incentivized to utilize the graph. However, once strategic behavior is introduced, naïvely using the graph causes performance to drop to 0%.

B OPTIMIZATION

B.1 PROJECTION

We prove the claim for 2-norm-squared costs. Correctness holds for 2-norm costs as well, since the argmin is the same (squaring is monotone over non-negatives). Computing x_i's best response requires solving the following problem:

    min_{x′} c(x′_i, x_i)    s.t.    θ^⊤ ϕ(x′_i; x_{-i}) + b = 0

    min_{x′} ∥x′_i − x_i∥_2^2    s.t.    θ^⊤ ϕ(x′_i; x_{-i}) + b = 0

To solve for x′, we apply the Lagrange method. Define the Lagrangian as:

    L(x′_i, λ) = ∥x′_i − x_i∥_2^2 + λ[θ^⊤ ϕ(x′_i; x_{-i}) + b]

Next, to find the minimum of L, differentiate with respect to x′_i and compare to 0:

    2(x′_i − x_i) + λ θ w̃_ii = 0
    x′_i = x_i − (λ w̃_ii / 2) θ

Plugging x′_i into the constraint gives:

    θ^⊤[w̃_ii(x_i − (λ w̃_ii / 2) θ) + Σ_{j≠i} w̃_ij x_j] + b = 0
    θ^⊤[ϕ(x_i; x_{-i}) − (λ w̃_ii^2 / 2) θ] + b = 0
    θ^⊤ ϕ(x_i; x_{-i}) + b = (λ w̃_ii^2 / 2) ∥θ∥_2^2
    λ = 2 (θ^⊤ ϕ(x_i; x_{-i}) + b) / (∥θ∥_2^2 w̃_ii^2)

Finally, plugging λ into the expression for x′_i obtains:

    x′_i = x_i − ((θ^⊤ ϕ(x_i; x_{-i}) + b) / (∥θ∥_2^2 w̃_ii)) θ

B.2 GENERALIZED COSTS

Here we provide a formula for computing projections in closed form for generalized quadratic costs:

    c(x, x′) = (1/2)(x′ − x)^⊤ A (x′ − x)

for positive-definite A. As before, the same formula holds for generalized 2-norm costs (since the argmin is the same). Begin with:

    min_{x′} c(x′_i, x_i)    s.t.    θ^⊤ ϕ(x′_i; x_{-i}) + b = 0

    min_{x′} (1/2)(x′_i − x_i)^⊤ A (x′_i − x_i)    s.t.    θ^⊤ ϕ(x′_i; x_{-i}) + b = 0

As before, apply the Lagrange method:

    L(x′_i, λ) = (1/2)(x′_i − x_i)^⊤ A (x′_i − x_i) + λ[θ^⊤ ϕ(x′_i; x_{-i}) + b]

Differentiating w.r.t. x′_i:

    (1/2)[A^⊤(x′_i − x_i) + A(x′_i − x_i)] + λ θ w̃_ii = 0
    (A^⊤ + A) x′_i = (A^⊤ + A) x_i − 2λ θ w̃_ii

Since the matrix (A^⊤ + A) is PD, we can invert to get:

    x′_i = x_i − 2λ (A^⊤ + A)^{−1} θ w̃_ii

Plugging x′_i into the constraint:

    θ^⊤[w̃_ii(x_i − 2λ (A^⊤ + A)^{−1} θ w̃_ii) + Σ_{j≠i} w̃_ij x_j] + b = 0
    θ^⊤[ϕ(x_i; x_{-i}) − 2λ (A^⊤ + A)^{−1} w̃_ii^2 θ] + b = 0
    θ^⊤ ϕ(x_i; x_{-i}) + b = 2λ θ^⊤ (A^⊤ + A)^{−1} θ w̃_ii^2

Since (A^⊤ + A)^{−1} is also PD, we get θ^⊤(A^⊤ + A)^{−1}θ > 0, and hence:

    λ = (θ^⊤ ϕ(x_i; x_{-i}) + b) / (2 θ^⊤ (A^⊤ + A)^{−1} θ w̃_ii^2)

Finally, plugging in λ:

    x′_i = x_i − ((θ^⊤ ϕ(x_i; x_{-i}) + b) / (θ^⊤ (A^⊤ + A)^{−1} θ w̃_ii)) (A^⊤ + A)^{−1} θ

Setting A = I recovers Eq. (7).
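As an illustration, here is a sketch of this generalized closed-form projection; the function and argument names are our own, and the caller is assumed to supply the embedding ϕ_i:

```python
import numpy as np

def project_generalized(x_i, phi_i, theta, b, w_ii, A):
    """Boundary projection under quadratic costs 0.5 (x'-x)^T A (x'-x)
    (Appendix B.2). Setting A = I recovers Eq. (7)."""
    M_inv = np.linalg.inv(A.T + A)                 # (A^T + A)^{-1}, PD
    direction = M_inv @ theta                      # direction of the step
    step = (theta @ phi_i + b) / (theta @ direction * w_ii)
    return x_i - step * direction
```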
B.3 IMPROVING NUMERICAL STABILITY BY ADDING A TOLERANCE TERM

Theoretically, strategic responses move points precisely onto the decision boundary. For numerical stability in classifying (e.g., at test time), we add a small tolerance term, tol, which ensures that points are projected to lie strictly within the positive halfspace. Tolerance is added as follows:

    min_{x′} c(x′_i, x_i)    s.t.    θ^⊤ ϕ(x′_i; x_{-i}) + b ≥ tol    (20)

This necessitates the following adjustment to Eq. (7):

    proj_h(x_i; x_{-i}) = x_i − ((θ^⊤ ϕ(x_i; x_{-i}) + b − tol) / (∥θ∥_2^2 w̃_ii)) θ    (21)

However, blindly applying the above to Eq. (8) via:

    proj_h^+(x_i; x_{-i}) = x_i − min{0, (θ^⊤ ϕ(x_i; x_{-i}) + b − tol) / (∥θ∥_2^2 w̃_ii)} θ    (22)

is erroneous, since any user whose score is lower than tol will move, although in principle she shouldn't. To correct for this, we adjust Eq. (8) by adding a mask that ensures that only points in the negative halfspace are projected:

    proj_h(x_i; x_{-i}) = x_i − 1{θ^⊤ ϕ(x_i; x_{-i}) + b < 0} · ((θ^⊤ ϕ(x_i; x_{-i}) + b − tol) / (∥θ∥_2^2 w̃_ii)) θ    (23)

C ADDITIONAL EXPERIMENTAL DETAILS

Data. We experiment with three citation network datasets: Cora, CiteSeer, and PubMed (Sen et al., 2008). Table 2 provides summary statistics of the datasets, as well as experimental details.

Splits. All three datasets include a standard train-validation-test split, which we adopt for our use.10 For our purposes, we make no distinction between 'train' and 'validation', and use both sets for training. To ensure the data is appropriate for the inductive setting, we remove from the test set all nodes which can be influenced by train-set nodes; this ranges from 6%–43% of the test set, depending on the dataset (and possibly the setting; see Sec. D.2.1). In Table 2, the number of train samples is denoted n_train, and the number of inductive test samples is denoted n*_test (all original transductive test sets include 1,000 samples).

Binarization. To make the data binary (original labels are multiclass), we enumerated over possible partitions of classes into 'negative' and 'positive', and chose the most balanced partition. Experimenting with other, similarly-balanced partitions resulted in similar performance (albeit at times less distinct strategic movement). The exception was PubMed (having only three classes), for which the most balanced partition was neither 'balanced' nor stable, and so here we opted for the more stable alternative. Reported partitions and corresponding negative-positive ratios (for train and for test) are given in Table 2.

Strategic responses. At test time, strategic user responses are computed by simulating the response dynamics in Sec. 3.1 until convergence.

10 Note that nodes in these sets do not necessarily account for all nodes in the graph.

D ADDITIONAL EXPERIMENTAL RESULTS

D.1 EXPERIMENTS ON SYNTHETIC DATA

In this section we explore in further depth the relation between user movement and classification performance, using our synthetic setup from Sec. 5.1 (all examples discussed herein use α = 0.7). From a predictive point of view, graphs are generally helpful if same-class nodes are well-connected. This is indeed the case in our construction (as can be seen from the performance of the benchmark method for non-extreme α > 0 values). From a strategic perspective, however, connectivity increases cooperation, since neighboring nodes can positively influence each other over time. In our construction, cooperation occurs mostly within classes, i.e., negative points that move encourage other negative points to move, and similarly for positive points.

Movement trends. Fig. 5 (left) shows how different threshold classifiers h_b induce different degrees of movement.
The plot shows the relative number of points (in percentage points) whose predictions changed as a result of strategic behavior, per class (red: y = −1, green: y = 1) and over time: after one round (T = 1, dashed lines), and at convergence (T = ∞, solid lines). There is a general trend: when b is small, mostly negative points move, but as b increases, positive points move instead. The interesting point to observe is the gap between the first round (T = 1) and the final round (T = ∞). For negative points, movement at T = 1 peaks at b_1 ≈ −0.25, but triggers relatively few consequent moves. In contrast, the peak for T = ∞ occurs at a larger b_∞ ≈ 0.15. For this threshold, though fewer points move in the first round, these trigger significantly more additional moves at later rounds, a result of the connectivity structure within the negative cluster of nodes (blue arrows). A similar effect takes place for positive nodes.

The importance of looking ahead. Fig. 5 (center) plots, for a range of thresholds b, the accuracy of h_b at convergence (T = ∞; orange line), and after one round (T = 1; gray line). The role of the latter is to illustrate the outcomes as 'perceived' by a myopic predictive model that considers only one round (e.g., includes only one response layer ∆̃); the differences between the two lines demonstrate the gap between perception (based on which training chooses a classifier ĥ) and reality (in which the classifier ĥ is evaluated). As can be seen, the myopic approach leads to an under-estimation of the optimal b*; at b_1 ≈ 0.5, performance for T = 1 is optimal, but is severely worse under the true T = ∞, for which optimal performance is at b_∞ ≈ 1.15. The figure also gives insight as to why this happens. For both b_1 and b_∞, the figure shows (in bars) the relative number of points from each class who obtain ŷ = 1 as a result of strategic moves. Bars are stacked, showing the relative number of points that moved per round T (darker = earlier rounds; lightest = convergence). As can be seen, at b_1, the myopic model believes that many positive points, but only few negative points, will cross. However, in reality, at convergence, the number of positive points that crossed is only slightly higher than that of negative points. Hence, the reason for the (erroneous) optimism of the myopic model is that it did not correctly account for the magnitude of the correlated moves of negative points, which is expressed over time. In contrast, note that at b_∞, barely any negative points cross.

How movement affects accuracy. An important observation about the relation between movement and accuracy is that for any classifier h, any negative point that moves hurts accuracy (since y = −1 but its prediction becomes ŷ = 1), whereas any positive point that moves helps accuracy (since y = 1 and its prediction is now ŷ = 1). Fig. 5 (right) shows how these movements combine to affect accuracy. The figure compares accuracy before strategic behavior (T = 0; dashed line) to accuracy after one response round (T = 1; solid line, top plot) and at convergence (T = ∞; solid line, lower plot). As can be seen, for any b, the difference between pre-strategic and post-strategic accuracy amounts to exactly the degradation due to negative points (red arrows) plus the improvement due to positive points (green arrows). Note, however, the difference between T = 1 and T = ∞, as they relate to the benchmark model (T = 0, i.e., no strategic behavior). For T = 1 (top), across the range of b, positive and negative moves roughly balance out.
As a result, the curves for T = 0 and T = 1 are very similar, and share similar peaks in terms of accuracy (both have ≈ 0.89). One interpretation of this is that if points were permitted to move for only one round, the optimal classifier could completely recover the benchmark accuracy by ensuring that the number of positive points that move exceeds the number of negative points that move. However, for T = ∞ (bottom), there is a skew in favor of positive points (green arrows). The result is that for the optimal b, additional rounds allow positive points to move in a way that obtains slightly higher accuracy (0.91) compared to the benchmark (0.89). This is one possible mechanism underlying our results on synthetic data in Sec. 5.1, and later our results on real data in Sec. 5.2.

D.2 EXPERIMENTS ON REAL DATA

D.2.1 EXTENDING NEIGHBORHOOD SIZE

One hyperparameter of SGC is the number of 'propagation' layers, K, which effectively determines the graph distance at which nodes can influence others (i.e., the 'neighborhood radius'). Given K, the embedding weights are defined as W̃ = D^{−1/2} A^K D^{−1/2}, where A is the adjacency matrix and D is the diagonal degree matrix. For K = 0, the graph is unused, which results in a standard linear classifier over node features. Our results in the main body of the paper use K = 1. Fig. 6 shows results for increasing K (we set T = 3, d = 0.25 as in our main results). Results are mixed: for PubMed, higher K seems to lead to a smaller drop in accuracy for the naïve approach and less recovery for ours; for Cora and CiteSeer, results are unstable. Note however that this is likely a product of our inductive setup: varying K also changes the effective test set (to preserve inductiveness, larger K often necessitates removing more nodes), so test sets vary across conditions and decrease in size, making it difficult to directly compare results across different K.

D.2.2 STRATEGIC IMPROVEMENT

Our main results in Sec. 5.2 show that for CiteSeer, our strategically-aware approach outperforms the non-strategic benchmark (similarly to our synthetic experiments). Here we show that these results are robust. Fig. 7 provides higher-resolution results on CiteSeer for max distances d ∈ [0, 0.22] in hops of 0.01. All other aspects of the setup match the original experiment. As can be seen, our approach slightly but consistently improves upon the benchmark until d ≈ 0.17.

D.3 NODE CENTRALITY AND INFLUENCE

In this experiment we set out to explore the role played by central nodes in the graph in propagating the influence of strategic behavior. Since the embedding of a node i is partly determined by its in-neighbors, broadly we would expect nodes with high out-degree to be highly influential: as 'anchors' that prevent others from moving if they themselves do not, and as 'carriers' which either push neighbors over the boundary, or at least promote them closer to it, if they do move.

Experimental setup. To study the role of such nodes, we perform the following experiment. First, we order the nodes by decreasing out-degree, so that the potentially more influential nodes appear first in the ranking. Then, for each q ∈ {0, 10, 20, ..., 100}, we disconnect the nodes in the qth percentile, i.e., remove all edges emanating from the top-q% ranked nodes. For each such condition, we examine learning and its outcomes, and compare performance to a control condition in which nodes are ordered randomly.
The difference in performance and other learning outcomes provides us with a measure of the importance of high-degree nodes. A code sketch of the disconnection procedure appears at the end of this subsection.

Results. Figure 8 shows results for all methods (naïve, robust (ours), and the non-strategic benchmark) across all three datasets (Cora, CiteSeer, and PubMed). In all conditions, we vary on the x-axis the portion of nodes that remain connected, where at zero (left end) we get an empty graph, and at 100 (right end) we get the original graph. Note that the y-axis varies in scale across plots.

First, consider Cora. In terms of accuracy (upper plots), results show that for the benchmark (evaluated in a non-strategic environment, in which users do not move), the general trend is that more edges help improve performance. However, the gain in performance is much more pronounced for high-degree nodes. Interestingly, for the naïve method (which operates on strategically modified data, but does not anticipate updates), the trend is reversed: users utilize edges in a way that is detrimental to performance, and more so when high-degree nodes remain, making them a vulnerability. Our robust approach is also sensitive to the addition of edges, but to a much lesser degree (the drop in performance is minor); moreover, which nodes are disconnected appears to make little difference, which we take to mean that our approach can counter the dominant role of central nodes. The lower plots, which describe the portion of users that move and the portion of users that cross, provide some explanation as to how this is achieved. For the naïve approach, nearly half of all users move, and all users that move also cross. This occurs faster (in terms of the portion of nodes removed) in the degree condition. In our robust approach, the number of nodes that move is halved, and of those, not all cross, which demonstrates how our learning objective, which anticipates strategic behavior, can act to prevent it.

For CiteSeer and PubMed, the general trend in accuracy is reversed for the non-strategic benchmark, while for the naïve approach it begins with a gap, which closes at some point (sooner in CiteSeer). Despite these differences, the qualitative behavior of our robust classifier is similar to Cora, achieving fairly stable accuracy (with a mild negative slope) in both conditions and for both datasets. As in Cora, movement and crossing behavior is similar, in that for the naïve approach a considerable portion of users move and cross (with a gap between conditions in PubMed), and in that our robust approach greatly reduces the number of users that move, and even more so the number of users that cross.
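The disconnection procedure is straightforward; a sketch (our names; it assumes A[i, j] = 1 iff there is an edge i→j):

```python
import numpy as np

def disconnect_top_q(A, q, by_degree=True, seed=0):
    """Remove all outgoing edges of the top-q% nodes, ranked by out-degree
    (or randomly, as the control condition of Sec. D.3)."""
    n = A.shape[0]
    if by_degree:
        order = np.argsort(-A.sum(axis=1))            # decreasing out-degree
    else:
        order = np.random.default_rng(seed).permutation(n)
    A = A.copy()
    A[order[: int(np.ceil(q / 100 * n))], :] = 0      # zero the top-q% rows
    return A
```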
E ADDITIONAL ANALYTIC RESULTS

E.1 PERFORMANCE GAPS: ROBUST LEARNING VS. THE NON-STRATEGIC BENCHMARK

When learning a robust classifier on strategic data, we may intuitively hope that its performance approaches that of the optimal classifier on non-strategic data, which we can think of as a target upper bound. A natural question to ask is: can we always reach this upper bound and close the performance gap? Here we answer this question in the negative, by showing a simple example where the optimal classifier on non-strategic data achieves perfect accuracy, but the optimal classifier on strategic data achieves only 0.66 accuracy. We then show that introducing a mild change (slightly shifting the features of one node) closes the performance gap entirely. This, combined with our result from Sec. D.2.2 showing that the gap can also be negative, highlights how the gap in performance greatly depends on the structure of the input (i.e., the graph, features, and labels), in a way that can be highly sensitive even to minor changes.

E.1.1 LARGE GAP

Our first example, in which the best obtainable gap is 0.33, is shown in Figure 9 (Left). The example has three nodes described by one-dimensional features x_1 = 1, x_2 = −1, x_3 = −1, with labels y_1 = 1, y_2 = −1, y_3 = −1. We use the standard cost function, so the maximal distance to move is d_β = 2, and use uniform edge weights. Since x ∈ R, classifiers are simply threshold functions on the real line, defined by a threshold parameter b ∈ R.

First, we demonstrate that for non-strategic data, there exists a 'benchmark' classifier that achieves perfect accuracy. Let b = 0. Node embeddings are:

    ϕ_1 = (x_1 + x_2)/2 = (1 − 1)/2 = 0 = b
    ϕ_2 = (x_1 + x_2 + x_3)/3 = (1 − 1 − 1)/3 = −1/3 < 0 = b
    ϕ_3 = (x_2 + x_3)/2 = (−1 − 1)/2 = −1 < 0 = b

This gives predictions ŷ_1 = +1 = y_1, ŷ_2 = −1 = y_2, ŷ_3 = −1 = y_3, which are all correct.

Next, we prove that there is no robust classifier capable of achieving perfect accuracy on strategic data. Suppose by contradiction that such a classifier exists, with threshold b. For x_2 to move in the first round, the following conditions must hold:

    (x_1 + x′_2 + x_3)/3 = b,    i.e., 1 + x′_2 − 1 = 3b,    so x′_2 = 3b

Since x_2 moves a distance of at most 2, it moves in the first round only if −1/3 < b ≤ 1/3. However, if it does move, it ends up getting an incorrect prediction. In addition, in this case x_3 can also get a positive prediction (either in the first round or in the next round, depending on whether b > 0), in which case the accuracy is 1/3. Thus, we get that 1/3 < b (note that for b < −1/3, x_2 is classified as positive, which is wrong).

Next, we look at the behavior of x_1 in the first round. The conditions for movement are:

    (x′_1 + x_2)/2 = b,    i.e., x′_1 − 1 = 2b,    so x′_1 = 2b + 1

Here x_1 gets a negative classification if b > 0. If b > 1, then x_1 does not move, since the required distance is larger than 2. Thus, x_1 does move (beyond the classifier) and gets the correct prediction only if 0 < b ≤ 1. However, considering now the second round of movement for x_2 (which only occurs if b > 1/3, since for 0 < b ≤ 1/3, x_2 moves at the first round), we get the conditions:

    (x′_1 + x′_2 + x_3)/3 = b,    i.e., 2b + 1 + x′_2 − 1 = 3b,    so x′_2 = b

This means that for all 0 < b ≤ 1, x_2 moves and gets an incorrect prediction. Hence, for any b, an incorrect prediction occurs either for x_1 or for x_2. Consequently, there is no b that achieves an accuracy of 1, which contradicts the assumption. The optimal accuracy of any robust classifier for this example is 2/3, which is achieved by any b > 1. In this case, none of the nodes move, and x_1 is classified negatively, whereas its true label is positive.

E.1.2 NO GAP

We now give a nearly identical example, shown in Figure 9 (Right), in which the gap becomes zero. We use the same example as before, but set x_1 = 1.2 (instead of x_1 = 1). We begin by showing that there exists a classifier which achieves perfect accuracy on non-strategic data. Let b = 0; the embeddings are now:

    ϕ_1 = (x_1 + x_2)/2 = (1.2 − 1)/2 = 0.1 > 0 = b
    ϕ_2 = (x_1 + x_2 + x_3)/3 = (1.2 − 1 − 1)/3 = −4/15 < 0 = b
    ϕ_3 = (x_2 + x_3)/2 = (−1 − 1)/2 = −1 < 0 = b

(note that since all nodes are connected in the graph, changing x_1 requires us to recompute all embeddings). Predictions now become ŷ_1 = +1 = y_1, ŷ_2 = −1 = y_2, ŷ_3 = −1 = y_3, which are all correct.

Next, we show that there also exists a classifier which achieves perfect accuracy on strategic data. Let b = 1.1.
In the first round, x_1 moves, and flips its predicted label:

    ϕ_1 = (x′_1 + x_2)/2 = ((x_1 + 2) + x_2)/2 = (3.2 − 1)/2 = 1.1 = b

Here, even if the other nodes move to the fullest extent, they do not have sufficient influence to revert this prediction:

    ϕ_2 = (x_1 + x′_2 + x_3)/3 = (1.2 + (−1 + 2) + (−1))/3 = 0.4 < 1.1 = b
    ϕ_3 = (x_2 + x′_3)/2 = (−1 + (−1 + 2))/2 = 0 < 1.1 = b

Thus, we get ŷ_1 = +1 = y_1, ŷ_2 = −1 = y_2, ŷ_3 = −1 = y_3, which are also all correct.

E.2 STRATEGIC BEHAVIOR OF CLIQUES

Here we give a result that considers graph structure. In particular, we consider cliques, and show that for uniform weights, either all nodes move together, or none do.

Proposition 6. Consider n nodes which are all fully connected, i.e., form a clique, and assume uniform edge weights. Then for any dimension d, any assignment of features x_1, ..., x_n ∈ R^d, and any classifier h, either (i) all n nodes move in the first round, or (ii) none of the nodes move at all.

Proof. Consider the case in which at least one node i moves in the first round. Denote by z the change in x_i made in order to cross the classifier, i.e., z = x_i^(1) − x_i^(0). Note that in order for x_i to move, the following conditions must be satisfied:

    ∥z∥_2 ≤ 2,    θ^⊤ ϕ(x_i^(1); x_{-i}^(0)) + b = 0

We now show that every other node t, if it moves a distance of
1. What is the focus of the paper regarding strategic classification on graphs? 2. What are the strengths and weaknesses of the proposed approach, particularly in its simplicity and realism? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or suggestions regarding the consideration of graph characteristics and user behavior in the analysis?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors formulate the problem of strategic classification on graphs, in which users may modify their features at some cost (strategic classification), and the prediction for a user relies on the smoothed embedding of the user's neighbors. Users are assumed to make myopic decisions based on other users' previous behavior. The fact that one user's actions can impact predictions for other users introduces interesting dynamics in the collective behavior of users. The authors identify interesting properties of these dynamics with both mathematical analysis and numerical experiments.
Strengths And Weaknesses
The problem formulation is elegant. It is different enough from existing strategic classification formulations to introduce interesting dynamics across users. It is realistic enough to motivate real-world scenarios like credit scoring for microlending. It is simple enough to allow interesting mathematical analysis, and possible to adapt to more complex problems. The authors made a good judgment balancing these considerations. Both the mathematical analysis and the empirical analysis provide good insights into how GNNs introduce dynamics into the collective behavior of users.
On the other hand, neither the mathematical analysis nor the empirical analysis actually takes the properties of the graph into consideration. This is unfortunate, because the characteristics of the graph would surely play a crucial role in determining the dynamics. For example, if the graph is divided into components, no information can propagate across components. Short-radius graphs would propagate changes faster than long-radius graphs. A central node (say, in terms of PageRank or other centrality metrics) would play a bigger role in impacting the dynamics than peripheral nodes would. The fact that users only make myopic decisions also makes the problem less interesting. Users do not really take advantage of opportunities to "collude" with their neighbors. Given that it is reasonable to assume users would communicate their plans within their social network, the fact that this work stops at the myopic model is disappointing.
Clarity, Quality, Novelty And Reproducibility
The main ideas of the paper are easy to follow without much background on strategic classification. The authors do a great job providing high-level insights on both the mathematical results and the empirical results. The analysis seems relatively simple because users are assumed to make myopic decisions, but the convergence result still provides some insight. As mentioned above, connections to graph properties could have made these analyses much more interesting. The formulation seems to be novel, departing significantly from the usual assumptions in the strategic classification literature. The formulation seems applicable to many extensions to practical problems. The experiments in this paper seem to be reproducible, as the authors share code and the experiments use public data only.
ICLR
Title Strategic Classification with Graph Neural Networks
Abstract Strategic classification studies learning in settings where users can modify their features to obtain favorable predictions. Most current works focus on simple classifiers that trigger independent user responses. Here we examine the implications of learning with more elaborate models that break the independence assumption. Motivated by the idea that applications of strategic classification are often social in nature, we focus on graph neural networks, which make use of social relations between users to improve predictions. Using a graph for learning introduces inter-user dependencies in prediction; our key point is that strategic users can exploit these to promote their own goals. As we show through analysis and simulation, this can work either against the system—or for it. Based on this, we propose a differentiable framework for strategically-robust learning of graph-based classifiers. Experiments on several real networked datasets demonstrate the utility of our approach.
1 INTRODUCTION
Machine learning is increasingly being used to inform decisions about humans. But when users of a system stand to gain from certain predictive outcomes, they may be prone to "game" the system by strategically modifying their features (at some cost). The literature on strategic classification (Brückner & Scheffer, 2011; Hardt et al., 2016) studies learning in this setting, with emphasis on how to learn classifiers that are robust to strategic user behavior. The idea that users may respond to a decision rule applies broadly and across many domains, from hiring, admissions, and scholarships to loan approval, insurance, welfare benefits, and medical eligibility (McCrary, 2008; Almond et al., 2010; Camacho & Conover, 2011; Lee & Lemieux, 2010). This, along with its clean formulation as a learning problem, has made strategic classification the target of much recent interest (Sundaram et al., 2021; Zhang & Conitzer, 2021; Levanon & Rosenfeld, 2021; Ghalme et al., 2021; Jagadeesan et al., 2021; Zrnic et al., 2021; Estornell et al., 2021; Lechner & Urner, 2021; Harris et al., 2021; Levanon & Rosenfeld, 2022; Liu et al., 2022; Ahmadi et al., 2022; Barsotti et al., 2022a). But despite these advances, most works in strategic classification still follow the original problem formulation in assuming independence across user responses.
From a technical perspective, this assumption greatly simplifies the learning task, as it allows the classifier to consider each user's response in isolation: user behavior is modeled via a response mapping ∆h(x) determining how users modify their features x in response to the classifier h, and learning aims to find an h for which y ≈ h(∆h(x)). Intuitively, a user will modify her features if this 'moves' her across the decision boundary, as long as this is worthwhile (i.e., gains from prediction exceed modification costs). Knowing ∆h allows the system to anticipate user responses and learn an h that is robust. For a wide range of settings, learning under independent user responses has been shown to be theoretically possible (Hardt et al., 2016; Zhang & Conitzer, 2021; Sundaram et al., 2021) and practically feasible (Levanon & Rosenfeld, 2021; 2022). Unfortunately, once this assumption of independence is removed—results no longer hold.
One reason is that current approaches can safely assume independence because the decision rules they consider induce independence: when predictions inform decisions for each user independently, users have no incentive to account for the behavior of others. This limits the scope of predictive models to include only simple functions of single inputs.
*Equal contribution, alphabetical order
In this paper, we aim to extend the literature on strategic classification to support richer learning paradigms that enable inter-dependent user responses, with particular focus on the domain of Graph Neural Networks (GNNs) (Monti et al., 2017; Wang et al., 2019; Bronstein et al., 2017; Hamilton et al., 2017). Generally, user responses can become dependent through the classifier if predictions for one user rely also on information regarding other users, i.e., if h(xi) is also a function of other xj. In this way, the effects of a user modifying her features via xj ↦ ∆h(xj) can propagate to other users and affect their decisions (since h(xi) now relies on ∆h(xj) rather than xj). For GNNs, this is expressed through their reliance on the graph. GNNs take as input a weighted graph whose nodes correspond to featurized examples, and whose edges indicate relations that are believed to be useful for prediction (e.g., if j→i indicates that yi = yj is likely). In our case, nodes represent users, and edges represent social links. The conventional approach is to first embed nodes in a way that depends on their neighbors' features, ϕi = ϕ(xi; xnei(i)), and then perform classification (typically linear) in embedded space, ŷi = sign(w⊤ϕi). Notice that ŷi depends on xi, but also on all other xj ∈ xnei(i); hence, in deciding how to respond, user i must also account for the strategic responses of her neighbors j ∈ nei(i). We aim to establish the effects of such dependencies on learning.
As a concrete example, consider Lenddo1, a company that provides credit scoring services to lending institutions. Lenddo specializes in consumer-focused microlending for emerging economies, where many applicants lack credible financial records. To circumvent the need to rely on historical records, Lenddo uses applicants' social connections, which are easier to obtain, as a factor in their scoring system.2 As an algorithmic approach for this task, GNNs are an adequate choice (Gao et al., 2021). Once loan decisions become dependent on social relations, the incentives for acting strategically change (Wei et al., 2016). To see how, consider that a user who lies far to the negative side of the decision boundary (and so independently cannot cross) may benefit from the graph if her neighbors "pull" her embedding towards the decision boundary and close enough for her to cross. Conversely, the graph can also suppress strategic behavior, since neighbors can "hold back" nodes and prevent them from crossing. Whether this is helpful to the system or not depends on the true label of the node.
This presents a tradeoff: In general, graphs are useful if they are informative of labels in a way that complements features; the many success stories of GNNs suggest that this is often the case (Zhou et al., 2020). But even if this holds sans strategic behavior—once introduced, graphs inadvertently create dependencies through user representations, which strategic users can exploit. Graphs therefore hold the potential to benefit the system, but also its users. Here we study the natural question: who does the graph help more?
1http://lenddoefl.com; see also http://www.wired.com/2014/05/lenddo-facebook/.
2For a discussion on ethics, see final section. For similar initiatives, see https://en.wikipedia.org/wiki/Lenddo.
Through analysis and experimentation, we show that learning in a way that neglects to account for strategic behavior not only jeopardizes performance, but becomes worse as reliance on the graph increases. In this sense, the graph becomes a vulnerability which users can utilize for their needs, turning it from an asset to the system—to a potential threat. As a solution, we propose a practical approach to learning GNNs in strategic environments. We show that for a key neural architecture (SGC; Wu et al. (2019)) and certain cost functions, graph-dependent user responses can be expressed as a 'projection-like' operator. This operator admits a simple and differentiable closed form; with additional smoothing, this allows us to implement responses as a neural layer, and learn robust predictors h using gradient methods. Experiments on synthetic and real data (with simulated responses) demonstrate that our approach not only effectively accounts for strategic behavior, but in some cases can harness the efforts of self-interested users to promote the system's goals. Our code is publicly available at: http://github.com/StrategicGNNs/Code.
1.1 RELATED WORK
Strategic classification. Since its introduction in Hardt et al. (2016) (and based on earlier formulations in Brückner & Scheffer (2009); Brückner et al. (2012); Großhans et al. (2013)), the literature on strategic classification has been growing at a rapid pace. Various aspects of learning have been studied, including: generalization behavior (Zhang & Conitzer, 2021; Sundaram et al., 2021; Ghalme et al., 2021), algorithmic hardness (Hardt et al., 2016), practical optimization methods (Levanon & Rosenfeld, 2021; 2022), and societal implications (Milli et al., 2019; Hu et al., 2019; Chen et al., 2020; Levanon & Rosenfeld, 2021). Some efforts have been made to extend beyond the conventional user models, e.g., by adding noise (Jagadeesan et al., 2021), relying on partial information (Ghalme et al., 2021; Bechavod et al., 2022), or considering broader user interests (Levanon & Rosenfeld, 2022); but these, like the vast majority of other works, focus on linear classifiers and independent user responses.3 We study richer predictive model classes that lead to correlated user behavior.
Graph Neural Networks (GNNs). The use of graphs in learning has a long and rich history, and remains a highly active area of research (Wu et al., 2020). Here we cover a small subset of relevant work. The key idea underlying most methods is to iteratively propagate and aggregate information from neighboring nodes. Modern approaches implement variations of this idea as differentiable neural architectures (Gori et al., 2005; Scarselli et al., 2008; Kipf & Welling, 2017; Gilmer et al., 2017). This allows expressing more elaborate forms of propagation (Li et al., 2018; Alon & Yahav, 2021) and aggregation (Wu et al., 2019; Xu et al., 2019; Li et al., 2016), including attention-based mechanisms (Veličković et al., 2018; Brody et al., 2022). Nonetheless, a key result by Wu et al. (2019) shows that, both theoretically and empirically, linear GNNs are also quite expressive.
Robustness of GNNs. As in most other fields of deep learning, GNNs have been the target of recent inquiry as to their sensitivity to adversarial attacks.
Common attacks include perturbing nodes, either in sets (Zügner et al., 2018; Zang et al., 2021) or individually (Finkelshtein et al., 2020). Attacks can be applied before training (Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Li et al., 2021; Zhang & Zitnik, 2020) or at test time (Szegedy et al., 2014; Goodfellow et al., 2015); our work corresponds to the latter. While there are connections between adversarial and strategic behavior (Sundaram et al., 2021), the key difference is that strategic behavior is not a zero-sum game; in some cases, incentives can even align (Levanon & Rosenfeld, 2022). Thus, system-user relations become more nuanced, and provide a degree of freedom in learning that does not exist in adversarial settings.
2 LEARNING SETUP
Our setting includes n users, represented as nodes in a directed graph G = (V, E) with non-negative edge weights W = {wij}(i,j)∈E, wij ≥ 0. Each user i is also described by a feature vector xi ∈ Rℓ and a binary label yi ∈ {±1}. We use x−i = {xj}j≠i to denote the set of features of all nodes other than i. Using the graph, our goal is to learn a classifier h that correctly predicts user labels. The challenge in our strategic setting is that inputs at test time can be strategically modified by users, in response to h and in a way that depends on the graph and on other users (we describe this shortly). Denoting by xi^h the (possibly modified) strategic response of i to h, our learning objective is:
argmin_{h∈H} Σ_i L(yi, ŷi), where ŷi = h(xi^h; x−i^h) (1)
where H is the model class and L is a loss function (e.g., log-loss). Note that both the predictions ŷi and the modified features xi^h can depend on G and on x−i^h (possibly indirectly through h). We focus on the inductive graph learning setting, in which training is done on G, but testing is done on a different graph, G′ (often G, G′ are two disjoint components of a larger graph). Our goal is therefore to learn a classifier that generalizes to other graphs in a way that is robust to strategic user behavior.
Graph-based learning. We consider linear graph-based classifiers—these are linear classifiers that operate on linear, graph-dependent node embeddings, defined as:
h_{θ,b}(xi; x−i) = sign(θ⊤ϕ(xi; x−i) + b), ϕ(xi; x−i) = w̃ii xi + Σ_{j≠i} w̃ji xj (2)
where ϕi = ϕ(xi; x−i) is node i's embedding,4 θ ∈ Rℓ and b ∈ R are learned parameters, and w̃ij ≥ 0 are pairwise weights that depend on G and W. We refer to users j with w̃ji ≠ 0 as the embedding neighbors of i. A simple choice of weights is w̃ji = wji for (j, i) ∈ E (and 0 otherwise), but different methods propose different ways to construct w̃; here we adopt the weight scheme of Wu et al. (2019). We assume the weights w̃ are predetermined, and aim to learn θ and b in Eq. (1).
Our focus on linear GNNs stems from several factors. From the perspective of strategic classification, linear decision rules ensure that strategic responses are computationally tractable (see Eq. (4)). This is conventionally required, and most works remain in the linear regime. From the perspective of GNNs, linear architectures have been shown to match state-of-the-art performance on multiple tasks (Wu et al., 2019), implying that they sufficiently manifest the fundamental role of graphs. Thus, linear GNNs serve as a minimal necessary step for bridging standard strategic classification and graph-based learning, in a way that captures the fundamental structure of the learning task in both domains. Nonetheless, as we show in Sec. 4, even for linear GNNs—user responses can cause learning to be highly non-linear.
3The only exception we know of is Liu et al. (2022) who study strategic ranking, but do not consider learning.
4Note that embeddings preserve the dimension of the original features.
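To make Eq. (2) concrete, here is a minimal sketch, assuming NumPy and the symmetric normalization of Wu et al. (2019); whether self-loops are pre-added to A is our assumption, and all names are illustrative:

import numpy as np

def embeddings(X, A):
    """X: (n, l) features; A: (n, n) adjacency, assumed to include self-loops."""
    deg = A.sum(axis=1)
    W = A / np.sqrt(np.outer(deg, deg))   # W[j, i] = tilde w_ji
    return W.T @ X                        # phi_i = tilde w_ii x_i + sum_{j!=i} tilde w_ji x_j

def predict(X, A, theta, b):
    return np.where(embeddings(X, A) @ theta + b >= 0, 1, -1)

# toy usage: 4 users with 2 features each on a small chain-like graph
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = np.random.randn(4, 2)
theta, b = np.array([1.0, -0.5]), 0.1
print(predict(X, A, theta, b))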
Strategic inputs. For the strategic aspects of our setting, we build on the popular formulation of Hardt et al. (2016). Users seek to be classified positively (i.e., have ŷi = 1), and to achieve this are willing to modify their features (at some cost). Once the system has learned and published h, a test-time user i can modify her features xi ↦ x′i in response to h. Modification costs are defined by a cost function c(x, x′) (known to all); here we focus mainly on 2-norm costs c(x, x′) = ∥x − x′∥2 (Levanon & Rosenfeld, 2022; Chen et al., 2020), but also discuss other costs (Brückner et al., 2012; Levanon & Rosenfeld, 2021; Bechavod et al., 2022). User i modifies her features (or "moves") if this improves her prediction (i.e., if h(xi) = −1 but h(x′i) = 1) and is cost-effective (i.e., prediction gains exceed modification costs); for linear classifiers, this means crossing the decision boundary. Note that since y ∈ {±1}, gains are at most h(x′) − h(x) = 2. Users therefore do not move to any x′ whose cost c(x, x′) exceeds a 'budget' of 2, and the maximal moving distance is d = 2.
Distribution shift. One interpretation of strategic classification is that user responses cause distribution shift, since in aggregate, p(x′) ≠ p(x). Crucially, how the distribution changes depends on h, which implies that the system has some control over the test distribution p(x′), indirectly through how users respond—a special case of model-induced distribution shift (Miller et al., 2021; Maheshwari et al., 2022). The unique aspect of our setting is that user responses are linked through their mutual dependence on the graph. We next describe our model of user responses in detail.
3 STRATEGIC USER BEHAVIOR: MODEL AND ANALYSIS
Eq. (2) states that h classifies i according to her embedding ϕi, which in turn is a weighted sum of her features and those of her neighbors. To gain intuition as to the effects of the graph on user behavior, it will be convenient to assume the weights w̃ are normalized,5 so that we can write:
ϕi = ϕ(xi; x−i) = (1 − αi)xi + αi x̄i for some αi ∈ [0, 1] (3)
I.e., ϕi can be viewed as an interpolation between xi and some point x̄i ∈ Rℓ representing all other nodes, where the precise point along the line depends on a parameter αi that represents the influence of the graph (in a graph-free setting, αi = 0). This reveals the dual effect a graph has on users: On the one hand, the graph limits the ability of user i to influence her own embedding, since any effort invested in modifying xi affects ϕi by at most 1 − αi. But the flip side is that an αi-portion of ϕi is fully determined by other users (as expressed in x̄i); if they move, i's embedding also 'moves' for free. A user's 'effective' movement radius is ri = d(1 − αi). Fig. 1 (F) shows this for varying αi.
5This is indeed the case in several common approaches.
3.1 STRATEGIC RESPONSES
Given that h relies on the graph for predictions—how should a user modify her features xi to obtain ŷi = 1?
In vanilla strategic classification (where h operates on each xi independently), users are modeled as rational agents that respond to the classifier by maximizing their utility, i.e., play x′i = argmax_{x′} h(x′) − c(xi, x′), which is a best response that results in immediate equilibrium (users have no incentive to move, and the system has no incentive to change h).6 In our graph-based setting, however, the dependence of ŷi on all other users via h(xi; x−i) makes this notion of best response ill-defined, since the optimal x′i can depend on others' strategic responses, x′−i, which are unknown to user i at the time of decision (and may very well rely on x′i itself). As a feasible alternative, here we generalize the standard model by assuming that users play myopic best response over a sequence of multiple update rounds. As we will see, this has direct connections to key ideas underlying graph neural networks.
Denote the features of node i at round t by xi^(t), and set xi^(0) = xi. A myopic best response means that at round t, each user i chooses xi^(t) to maximize her utility at time t according to the state of the game at time t − 1, i.e., assuming all other users play {xj^(t−1)}_{j≠i}, with costs accumulating over rounds. This defines a myopic response mapping:
∆h(xi; x−i, κ) ≜ argmax_{x′∈Rℓ} h(x′; x−i) − c(xi, x′) − κ (4)
where at round t updates are made (concurrently) via xi^(t+1) = ∆h(xi^(t); x−i^(t), κi^(t)), with accumulating costs κi^(t) = κi^(t−1) + c(xi^(t−1), xi^(t)) and κi^(0) = 0. Predictions for round t are ŷi^(t) = h(xi^(t); x−i^(t)). Eq. (4) naturally extends the standard best-response mapping (which is recovered when αi = 0 ∀i, and converges after one round). By adding a temporal dimension, the actions of users propagate over the graph and in time to affect others. Nonetheless, even within a single round, graph-induced dependencies can result in non-trivial behavior; some examples for ℓ = 1 are given in Fig. 1 (A-D).
3.2 ANALYSIS
We now give several results demonstrating basic properties of our response model and the consequent dynamics, which shed light on how the graph differentially affects the system and its users.
Convergence. Although users are free to move at will, movement adheres to a certain useful pattern.
Proposition 1. For any h, if users move via Eq. (4), then for each i ∈ [n], xi^(t) ≠ xi^(t−1) holds for at most one round t.
Proof. User i will move only when: (i) she is currently classified negatively, h(xi; x−i) = −1, and (ii) there is some x′ for which utility can improve, i.e., h(x′; x−i) − c(xi, x′) > −1, which in our case occurs if h(x′; x−i) = 1 and c(xi, x′) < 2 (since h maps to [−1, 1]).7 Eq. (4) ensures that the modified x′i will be such that ϕ(x′i; x−i) lies exactly on the decision boundary of h; hence, x′i must be closer to the decision boundary (in Euclidean distance) than xi. This means that any future moves of an (incoming) neighbor j can only push i further away from the decision boundary; hence, the prediction for i remains positive, and she has no future incentive to move again.8 Hence, all users move at most once.
The proof reveals a certain monotonicity principle: users always (weakly) benefit from any strategic movement of others. Convergence follows as an immediate result.
Corollary 1. Myopic best-response dynamics converge for any h (and after at most n rounds).
We will henceforth use xi^h to denote the features of user i at convergence (w.r.t. h), and denote the number of rounds until convergence by Tmax.
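Simulating these dynamics is straightforward. Below is a minimal NumPy sketch of the myopic rounds for a linear graph classifier with 2-norm costs; it uses the closed-form projection of Eq. (7) below, and all names are ours:

import numpy as np

def simulate_responses(X, W, theta, b, budget=2.0, max_rounds=50):
    """X: (n, l) features; W[j, i] = tilde w_ji; returns features at convergence."""
    X = X.copy().astype(float)
    n = X.shape[0]
    spent = np.zeros(n)
    for _ in range(max_rounds):
        scores = (W.T @ X) @ theta + b              # scores on current embeddings
        X_next = X.copy()
        moved = False
        for i in range(n):
            if scores[i] >= 0:                      # already positive: stay put
                continue
            # projection-like response (Eq. 7): land exactly on the boundary
            step = scores[i] / (np.dot(theta, theta) * W[i, i])
            x_new = X[i] - step * theta
            cost = np.linalg.norm(x_new - X[i])
            if spent[i] + cost <= budget:           # move only if cost-feasible
                X_next[i] = x_new
                spent[i] += cost
                moved = True
        X = X_next                                  # concurrent updates
        if not moved:                               # convergence (Corollary 1)
            break
    return X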
Hitchhiking. When i moves, the embeddings of (outgoing) neighbors j who currently have ŷj = −1 also move closer to the decision boundary; thus, users who were initially too far to cross may be able to do so at later rounds. In this sense, the dependencies across users introduced by the graph-dependent embeddings align user incentives, and promote an implicit form of cooperation. Interestingly, users can also obtain positive predictions without moving. We refer to such users as 'hitchhikers'.
6Note 'rational' here implies users are assumed to know h. As most works in the field, we also make this assumption; for the practically-inclined reader, note that (i) in some cases, there is reason to believe it may approximately hold (e.g., http://openschufa.de), and (ii) relaxing this assumption (and others) is an ongoing community effort (Ghalme et al., 2021; Jagadeesan et al., 2021; Bechavod et al., 2022; Barsotti et al., 2022b).
7In line with Hardt et al. (2016), we assume that if the value is zero then the user does not move.
8Users moving only once ensures that cumulative costs are never larger than the final gain.
Proposition 2. There exist cases where ŷi^(t) = −1 and i doesn't move, but ŷi^(t+1) = 1.
A simple example can be found in Figure 1 (E). Hitchhiking demonstrates how relying on the graph for classification can promote strategic behavior—even under a single response round.
Cascading behavior. Hitchhiking shows how the movement of one user can flip the label of another, but the effects of this process are constrained to a single round. When considering multiple rounds, a single node can trigger a 'domino effect' of moves that spans the entire sequence.
Proposition 3. For any n, there exists a graph where a single move triggers n additional rounds.
Proposition 4. For any n and k ≤ n, there exists a graph where n − k users move at round k.
Proofs are constructive and modular, and rely on graphs that are predictively useful (Appendix A.2). Note also that graph diameter is not a mediating factor (Appendix E.3). Both results show that, through monotonicity, users also (weakly) benefit from additional rounds. This has concrete implications.
Corollary 2. In the worst case, the number of rounds until convergence is Ω(n).
Corollary 3. In the worst case, Ω(n) users move after Ω(n) rounds.
Thus, to exactly account for user behavior, the system must correctly anticipate the strategic responses of users many rounds into the future, since a bulk of predictions may flip in the last round. Fortunately, these results also suggest that in some cases, blocking one node from crossing can prevent a cascade of flips; thus, it may be worthwhile to 'sacrifice' certain predictions for collateral gains. This presents an interesting tradeoff in learning, encoded in the learning objective we present next, and which we motivate with our final result on the potential impact of strategic behavior:
Proposition 5. The gap in accuracy between (i) the optimal non-strategic classifier on non-strategic data, and (ii) the optimal strategic classifier on strategic data, can be as large as 30% (see Apx. E.1).
4 LEARNING AND OPTIMIZATION
We are now ready to describe our learning approach. Our learning objective can be restated as:
ĥ = argmin_{h∈H} Σ_i L(yi, h(xi^h; x−i^h)) (5)
for H = {h_{θ,b}} as in Eq. (2). The difficulty in optimizing Eq. (5) is that the xi^h depend on h through the iterative process, which relies on ∆h. At test time, the xi^h can be computed exactly by simulating the dynamics.
However, at train time we would like to allow gradients of θ, b to propagate through xi^h. For this, we propose an efficient differentiable proxy of xi^h, implemented as a stack of layers, each corresponding to one response round. The number of layers is a hyperparameter, T.
Single round. We begin by examining a single iteration of the dynamics, i.e., T = 1. Note that since a user moves only if the cost is at most 2, Eq. (4) can be rewritten as:
∆h(xi; x−i) = x′i if h(xi; x−i) = −1 and c(xi, x′i) ≤ 2, and ∆h(xi; x−i) = xi otherwise (6)
where x′i = proj_h(xi; x−i) is the point to which xi must move in order for ϕ(xi; x−i) to be projected onto h. This projection-like operator (on xi) can be shown to have a closed-form solution:
proj_h(xi; x−i) = xi − ((θ⊤ϕ(xi; x−i) + b)/(∥θ∥2² w̃ii)) θ (7)
See Appendix B.1 for a derivation using KKT conditions. Eq. (7) is differentiable in θ and b; to make the entire response mapping differentiable, we replace the 'hard if' in Eq. (6) with a 'soft if', which we now describe. First, to account only for negatively-classified points, we ensure that only points in the negative halfspace are projected via a 'positive-only' projection:
proj⁺_h(xi; x−i) = xi − min{0, (θ⊤ϕ(xi; x−i) + b)/(∥θ∥2² w̃ii)} θ (8)
Then, we replace the c ≤ 2 constraint with a smoothed sigmoid that interpolates between xi and the projection, as a function of the cost of the projection and thresholded at 2. This gives our differentiable approximation of the response mapping:
∆̃(xi; x−i, κ) = xi + (x′i − xi) στ(2 − c(xi, x′i) − κ), where x′i = proj⁺_h(xi; x−i) (9)
where σ is a sigmoid, τ is a temperature hyperparameter (τ → 0 recovers Eq. (6)), and for T = 1, κ = 0. In practice we add a small additive tolerance term for numerical stability (see Appendix B.3).
Multiple rounds. Next, we consider the computation of (approximate) modified features after T > 1 rounds, denoted x̃^(T), in a differentiable manner. Our approach is to apply ∆̃ iteratively as:
x̃i^(t+1) = ∆̃(x̃i^(t); x̃−i^(t), κi^(t)), x̃i^(0) = xi (10)
Considering ∆̃ as a layer in a neural network, approximating T rounds can be done by stacking. In Eq. (10), κi^(t) is set to accumulate the costs of approximate responses, κi^(t) = κi^(t−1) + c(x̃i^(t−1), x̃i^(t)). One observation is that for 2-norm costs, κi^(t) = c(x̃i^(0), x̃i^(t)) (by the triangle inequality; since all points move along a line, equality holds). We can therefore simplify Eq. (9) and replace c(xi^(t−1), x′i) + κi^(t−1) with c(xi^(0), x′i). For other costs, this gives a lower bound (see Appendix B.1).
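A minimal PyTorch-style sketch of the resulting layer stack (Eqs. (8)-(10)) might look as follows; this is our own illustrative rendering under 2-norm costs, not the authors' released implementation, and it omits the tolerance term of Appendix B.3:

import torch

def response_layers(X, W, theta, b, tau=0.05, budget=2.0, T=3):
    """X: (n, l); W: (n, n) with W[j, i] = tilde w_ji; returns approximate x^h."""
    X0 = X
    for _ in range(T):
        phi = W.t() @ X                              # embeddings, (n, l)
        scores = phi @ theta + b                     # (n,)
        w_self = torch.diagonal(W)                   # tilde w_ii
        # 'positive-only' projection (Eq. 8): only negative points are moved
        step = torch.clamp(scores, max=0.0) / (theta.dot(theta) * w_self)
        X_proj = X - step.unsqueeze(1) * theta       # candidate x'_i
        # soft 'if cost <= budget' (Eq. 9); under 2-norm costs the accumulated
        # cost equals the distance from the original features (see above)
        cost = torch.norm(X_proj - X0, dim=1)
        gate = torch.sigmoid((budget - cost) / tau)
        X = X + (X_proj - X) * gate.unsqueeze(1)
    return X

Because every operation here is differentiable, the stack can be placed between the features and the loss of Eq. (5) and trained end-to-end with gradient methods.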
5 EXPERIMENTS
5.1 SYNTHETIC DATA
We begin our empirical evaluation by demonstrating different aspects of learning in our setting using a simple but illustrative synthetic example. Additional results and insights on movement trends, the effects of movement on accuracy, and the importance of looking ahead can be found in Appendix D.1.
For our experimental setup, we set ℓ = 1 and sample features xi ∈ R for each class from a corresponding Gaussian N(y, 1) (classes are balanced). For each node, we uniformly sample 5 neighbors from the same class and 3 from the other, and use uniform weights. This creates a task where both the features and the graph are informative about labels, but only partially, and in a complementary manner (i.e., noise is uncorrelated; for i with yi = 1, if xi < 0, it is still more likely that most neighbors have xj > 0, and vice versa). As it is a priori unclear how to optimally combine these sources, we study the effects of relying on the graph to various degrees by varying a global α, i.e., setting w̃ii = (1 − α) and w̃ij = α/degi for all i and all j ≠ i. We examine both strategic and non-strategic settings, the latter serving as a benchmark. Since ℓ = 1, H = {hb} is simply the class of thresholds; hence, we can scan all thresholds b and report learning outcomes for all models hb ∈ H. For non-strategic data, the optimal h∗ has b∗ ≈ 0; for strategic data, the optimal h∗ can be found using line search. Testing is done on disjoint but similarly sampled held-out features and graph.
The effects of strategic behavior. Figure 2 (left) presents the accuracy of the learned ĥ for varying α and in different settings. In the non-strategic setting (dashed gray), increasing α helps, but if reliance on the graph becomes exaggerated, performance deteriorates (α ≈ 0.7 is optimal). Allowing users to respond strategically reverses this result: for α = 0 (i.e., no graph), responses lower accuracy by ≈ 0.26 points; but as α is increased, the gap grows, becoming more pronounced as test-time response rounds progress (blue lines). Interestingly, performance under strategic behavior is worst around the previously-optimal α ≈ 0.75. This shows how learning in a strategic environment—but neglecting to account for strategic behavior—can be detrimental. By accounting for user behavior, our approach (orange line) not only recovers performance, but slightly improves upon the non-strategic setting (this can occur when positive points are properly incentivized; see Appendix D.1).
Sensitivity analysis. Figure 2 (right) plots the accuracy of all threshold models hb for increasing values of α. For each α, performance exhibits a 'bell-curve' shape, with its peak at the optimal h∗. As α increases, bell-curves change in two ways. First, their centers shift, decreasing from positive values towards zero (which is optimal for non-strategic data); since using the graph limits users' effective radius of movement, the optimal decision boundary can be less 'stringent'. Second, and interestingly, bell-curves become narrower. We interpret this as a measure of tolerance: the wider the curve, the lower the loss in accuracy when the learned ĥ is close to (but does not equal) h∗. The figure shows, for a subset of α-s, 'tolerance bands': intervals around b∗ that include thresholds b for which the accuracy of hb is at least 90%, 95%, and 97.5% of the optimum (horizontal lines). Results indicate that larger α-s provide less tolerance. If variation in ĥ can be attributed to the number of examples, this can be interpreted as hinting that larger α-s may entail larger sample complexity.
Number of layers (T). Figure 2 (right) also shows for each bell-curve the accuracy achieved by learned models ĥ of increasing depths, T = 1, . . . , 4 (colored dots). For α = 0 (no graph), there are no inter-user dependencies, and the dynamics converge after one round. Hence, T = 1 suffices and is optimal, and additional layers are redundant. However, as α increases, more users move in later rounds, and learning with an insufficiently large T results in deteriorated performance. This becomes especially distinct for large α: e.g., for α = 0.9, performance drops by ∼ 11% when using T = 1 instead of the optimal T = 4. Interestingly, lower T always results in lower, more 'lenient' thresholds; as a result, performance deteriorates, and more quickly for larger, more sensitive α. Thus, the relation between α and T suggests that greater reliance on the graph requires more depth.
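For reference, a sketch of the data-generation step of this setup, assuming NumPy (sampling details follow the description at the start of this section; variable names are ours):

import numpy as np

rng = np.random.default_rng(0)
n = 200
y = np.repeat([1, -1], n // 2)
x = rng.normal(loc=y.astype(float), scale=1.0)   # x_i ~ N(y_i, 1)

def sample_neighbors(i):
    same = np.flatnonzero((y == y[i]) & (np.arange(n) != i))
    other = np.flatnonzero(y != y[i])
    return np.concatenate([rng.choice(same, 5, replace=False),
                           rng.choice(other, 3, replace=False)])

alpha = 0.7
W = np.zeros((n, n))
for i in range(n):
    nbr = sample_neighbors(i)
    W[nbr, i] = alpha / len(nbr)   # tilde w_ji = alpha / deg_i for j != i
    W[i, i] = 1.0 - alpha          # tilde w_ii = 1 - alpha

phi = W.T @ x                       # 1-D embeddings
# scan thresholds b on non-strategic data; the strategic case plugs the
# dynamics simulator of Sec. 3.1 in before computing accuracy
accs = [((np.where(phi >= b, 1, -1) == y).mean(), b) for b in np.linspace(-2, 2, 81)]
print(max(accs))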
5.2 EXPERIMENTS ON REAL DATA
Data. We use three benchmark datasets used extensively in the GNN literature: Cora, CiteSeer, and PubMed (Sen et al., 2008; Kipf & Welling, 2017), and adapt them to our setting. We use the standard (transductive) train-test split of Sen et al. (2008); the data is made inductive by removing all test-set nodes that can be influenced by train-set nodes (Hamilton et al., 2017). All three datasets describe citation networks, with papers as nodes and citations as edges. Although these are directed relations by nature, the available data include only undirected edges; hence, we direct edges towards lower-degree nodes, so that the movement of higher-degree nodes is more influential. As our setup requires binary labels, we follow standard practice and merge classes, aiming for balanced binary classes that sustain strategic movement. Appendix C includes further details; see Appendix D.2 for additional results on strategic improvement, extending neighborhood size, and node centrality and influence.
Methods. We compare our robust learning approach to a naïve approach that does not account for strategic behavior (i.e., falsely assumes that users do not move). As a benchmark, we report the performance of the naïve model on non-strategic data (for which it is appropriate). All methods are based on the SGC architecture (Wu et al., 2019), as it is expressive enough to effectively utilize the graph, but simple enough to permit rational user responses (Eq. (4); see also the notes in Sec. 1.1). We use the standard weights W̃ = D^(−1/2) A D^(−1/2), where A is the adjacency matrix and D is the diagonal degree matrix.
Optimization and setup. We train using Adam and set hyperparameters according to Wu et al. (2019) (learning rate 0.2, weight decay 1.3 × 10⁻⁵). Training is stopped after 20 epochs (this usually suffices for convergence). Hyperparameters were determined based only on the train set: τ = 0.05, chosen as the smallest value which retained stable training, and T = 3, as training typically saturates then (we also explore varying depths). We use β-scaled 2-norm costs, cβ(x, x′) = β∥x − x′∥2, β ∈ R+, which induce a maximal moving distance of dβ = 2/β. We observed that values around d = 0.5 permit almost arbitrary movement; we therefore experiment in the range d ∈ [0, 0.5], but focus primarily on the mid-point d = 0.25 (note that d = 0 implies no movement). Mean and standard errors are reported over five random initializations. Appendix C includes further details.
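For concreteness, a small sketch of the weight and cost choices above, assuming NumPy; adding self-loops before normalization follows the original SGC and is our assumption here:

import numpy as np

def sgc_weights(A):
    A = A + np.eye(A.shape[0])             # add self-loops, as in SGC
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt     # tilde W = D^(-1/2) A D^(-1/2)

def cost(x, x_new, beta):
    # beta-scaled 2-norm cost; with budget 2, max distance is d_beta = 2 / beta
    return beta * np.linalg.norm(x_new - x)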
Results. Table 1 presents detailed results for d = 0.25 and T = 3. As can be seen, the naïve approach is highly vulnerable to strategic behavior. In contrast, by anticipating how users collectively respond, our robust approach is able to recover most of the drop in accuracy (i.e., from 'benchmark' to 'naïve'; Cora: 35%, CiteSeer: 16%, PubMed: 72%). Note this is achieved with a T much smaller than necessary for the response dynamics to converge (Tmax: Cora = 7, CiteSeer = 7, PubMed = 11).
Fig. 3 (top) shows results for varying max distances d ∈ [0, 0.5], fixing T = 3. For Cora and CiteSeer, larger max distances—the result of lower modification costs—hurt performance; nonetheless, our robust approach maintains a fairly stable recovery rate over all values of d. For PubMed, our approach retains ≈ 92% of the optimum, showing resilience to reduced costs. Interestingly, for CiteSeer, in the range d ∈ [0.05, 0.15], our approach improves over the baseline, suggesting it utilizes strategic movements for improved accuracy (as in Sec. 5.1).
Fig. 3 (bottom) shows results for varying depths T ∈ {0, . . . , 10}. For all datasets, results improve as T increases, but saturate quickly at T ≈ 3; this suggests a form of robustness of our approach to overshooting in the choice of T (which, due to smoothing, can cause larger deviations from the true dynamics). Using T = 1 recovers between 65%–91% (across datasets) of the optimal accuracy. This shows that while considering only one round of user responses (in which there are no dependencies) is helpful, it is much more effective to consider multiple, dependent rounds—even if only a few.
6 DISCUSSION
In this paper we study strategic classification under graph neural networks. Relying on a graph for prediction introduces dependencies in user responses, which can result in complex correlated behavior. The incentives of the system and its users are not aligned, but also not discordant; our proposed learning approach utilizes this degree of freedom to learn strategically-robust classifiers. Strategic classification assumes rational user behavior; this necessitates classifiers that are simple enough to permit tractable best responses. A natural future direction is to consider more elaborate predictive architectures coupled with appropriate boundedly-rational user models, in hopes of shedding further light on questions regarding the benefits and risks of transparency and model explainability.
ETHICS AND SOCIETAL IMPLICATIONS
In our current era, machine learning is routinely used to make predictions about humans. These, in turn, are often used to inform—or even determine—consequential decisions. That humans can (and do) respond to decision rules is a factual reality, and is a topic of continual interest in fields such as economics (e.g., Nielsen et al., 2010) and policy-making (e.g., Camacho & Conover, 2013); the novelty of strategic classification is that it studies decision rules that are a product of learned predictive models. Strategic classification not only acknowledges this reality, but also proposes tools for learning in ways that account for it. But in modeling and anticipating how users respond, and by adjusting learning to accommodate their effects—learning also serves to 'steer' its population of users, perhaps inadvertently, towards certain outcomes (Hardt et al., 2022). GNNs are no exception to this reality.
In the domain of graph-based learning, the role of predictive models is expressed in how they associate social connections with decision outcomes for individuals. Clearly, the choice of whether to make use of social data for decisions can be highly sensitive, and doing so necessitates much forethought and care. But the question of whether to use social data to enhance prediction is not binary in nature, i.e., there is no simple 'right' or 'wrong'. Consider our example of the credit scoring company, Lenddo. On the one hand, Lenddo has been criticized on the grounds that it may discriminate against applicants based on whom they choose to socialize with (or, rather, who chooses to socialize with them).
But on the other hand, Lenddo, which focuses primarily on developing countries, has been acclaimed for providing financial assistance to a large community of deserving applicants who, due to conservative norms in typical credit scoring rules, would otherwise be denied a consequential loan.9 Such considerations apply broadly. In other focal domains of strategic classification, such as loans, university admissions, and job hiring, the use of social data for informing decisions can be highly controversial, on both ethical and legal grounds. Regulation is necessary, but as in similar areas, it often lags far behind the technology itself. This highlights the need for transparency and accountability in how, when, and to what purpose social data is used (Ghalme et al., 2021; Jagadeesan et al., 2021; Bechavod et al., 2022; Barsotti et al., 2022b).
ACKNOWLEDGEMENTS
This research was supported by the Israel Science Foundation (grant No. 278/22).
A ANALYSIS
A.1 HITCHHIKING
Here we provide a concrete example of hitchhiking, following Fig. 1 (E). The example includes three nodes, i, j, k, positioned at xk = −3, xi = −2.1, xj = −0.5, and connected via edges k→j and j→i. Edge weights are w̃ji = 0.6 and w̃ii = 0.4; w̃kj = 1/3 and w̃jj = 2/3; and w̃kk = 1. The example considers a threshold classifier hb with b = 0, and unit-scale costs (i.e., β = 1) inducing a maximal moving distance of d = 2. We show that i cannot invest effort to cross and obtain ŷi = 1; but once j moves (to obtain ŷj = 1), this results in i also being classified positively (without moving).
Initially (at round t = 0), node embeddings are ϕk = −3, ϕi = −1.14, ϕj = −4/3, and all points are classified negatively: ŷk = ŷi = ŷj = −1. Notice that i cannot cross the decision boundary even if she moves the maximal cost-feasible distance of d = 2:
ϕ(xi^(0) + 2; x−i^(0)) = w̃ii(xi^(0) + 2) + w̃ji xj^(0) = 0.4(−2.1 + 2) + 0.6(−1/2) = −0.34 < 0
Hence, i doesn't move, so xi^(1) = xi^(0). Similarly, k cannot cross, so xk^(1) = xk^(0). However, j can cross by moving to 1.5 (at cost 2) in order to get ŷj = 1:
xj^(1) = 1.5 = −1/2 + 2 = xj^(0) + 2 ⇒ ϕ(xj^(1); x−j^(1)) = w̃jj xj^(1) + w̃kj xk^(0) = (2/3)(3/2) + (1/3)(−3) = 0 ⇒ ŷj^(1) = 1
After j moves, i is classified positively (and so does not need to move):
ϕ(xi^(1); x−i^(1)) = w̃ii xi^(1) + w̃ji xj^(1) = 0.4(−2.1) + 0.6(3/2) = 0.06 > 0 ⇒ ŷi^(2) = 1
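The arithmetic above is easy to check numerically; a quick plain-Python sketch (names are ours):

x_k, x_i, x_j = -3.0, -2.1, -0.5
phi_i = lambda xi, xj: 0.4 * xi + 0.6 * xj      # i's embedding weights
phi_j = lambda xj, xk: (2/3) * xj + (1/3) * xk  # j's embedding weights

print(phi_i(x_i + 2, x_j))   # -0.34: i cannot cross even at maximal effort
x_j += 2                     # j moves to 1.5, landing exactly on the boundary
print(phi_j(x_j, x_k))       # 0.0: j crosses
print(phi_i(x_i, x_j))       # 0.06 > 0: i is now positive without moving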
A.2 CASCADING BEHAVIOR
We give a constructive example (for any n) which will be used to prove Propositions 3 and 4. The construction is modular, meaning that we build a small 'cyclic' structure of size 3, such that for any given n, we simply replicate this structure roughly n/3 times, and include two additional 'start' and 'finish' nodes. Our example assumes a threshold classifier hb with b = 0, and scaled costs cβ with β = 2/3, inducing a maximal moving distance of dβ = 2/β = 3.
Fix n. We construct a graph of size n + 2 as follows. Nodes are indexed 0, . . . , n + 1. The graph has bi-directional edges between each pair of consecutive nodes, namely (i, i + 1) and (i + 1, i) for all i = 0, . . . , n, except for the last node, which has only an outgoing edge (n + 1, n), but no incoming edge. We set uniform normalized embedding weights: each node 1 ≤ i ≤ n places weight 1/3 on itself and on each of its two embedding neighbors; node 0 places weight 1/2 on itself and on node 1; and node n + 1, which has no incoming edges, places weight 1 on itself. The initial features of each node are defined as:
x0 = −1, and for i = 1, . . . , n + 1: xi = 2 if i mod 3 = 1, and xi = −4 otherwise (11)
Figure 4 (A) illustrates this for n = 3. Note that while the graph creates a 'chain' structure, the positioning of node features is cyclic (starting from i = 1): 2, −4, −4, 2, −4, −4, 2, . . . etc. We begin with a lemma showing that in our construction, each node i = 1, . . . , n moves precisely at round t = i.
Lemma 1. At every round 1 ≤ t ≤ n: (1) node i = t moves, with xt^(t) = 5 if t mod 3 = 1, and xt^(t) = −1 otherwise; (2) all nodes j > t do not move, i.e., xj^(t) = xj^(t−1).
Note that (1) (together with Prop. 1) implies that for any round t, all nodes i < t (which have already moved at the earlier round t′ = i) do not move again. Additionally, (2) implies that all j > t remain in their initial position, i.e., xj^(t) = xj^(0). Finally, notice that the starting node x0 has ϕ0 = 0.5, meaning that ŷ0^(0) = 1, and so it does not move at any round.
Proof. We begin with the case n = 3.
• Round 1: Node i = 1 can cross by moving the maximal distance of 3:
w̃1,1(x1^(0) + 3) + w̃0,1 x0^(0) + w̃2,1 x2^(0) = (1/3)(2 + 3) + (1/3)(−1) + (1/3)(−4) = 0 (12)
However, nodes 2 and 3 cannot cross even if they move the maximal feasible distance:
w̃2,2(x2^(0) + 3) + w̃1,2 x1^(0) + w̃3,2 x3^(0) = (1/3)(−4 + 3) + (1/3)(2) + (1/3)(−4) = −1 < 0 (13)
w̃3,3(x3^(0) + 3) + w̃2,3 x2^(0) + w̃4,3 x4^(0) = (1/3)(−4 + 3) + (1/3)(−4) + (1/3)(2) = −1 < 0 (14)
• Round 2: Node i = 2 can cross by moving the maximal distance of 3:
w̃2,2(x2^(1) + 3) + w̃1,2 x1^(1) + w̃3,2 x3^(1) = (1/3)(−4 + 3) + (1/3)(5) + (1/3)(−4) = 0 (15)
However, node 3 cannot cross even if it moves the maximal feasible distance:
w̃3,3(x3^(1) + 3) + w̃2,3 x2^(1) + w̃4,3 x4^(1) = (1/3)(−4 + 3) + (1/3)(−4) + (1/3)(2) = −1 < 0 (16)
• Round 3: Node i = 3 can cross by moving the maximal distance of 3:
w̃3,3(x3^(2) + 3) + w̃2,3 x2^(2) + w̃4,3 x4^(2) = (1/3)(−4 + 3) + (1/3)(−1) + (1/3)(2) = 0 (17)
Fig. 4 (A) illustrates this procedure for n = 3. Next, consider n > 3. Due to the cyclical nature of the feature positioning and the chain structure of our graph, we can consider what happens when we sequentially add nodes to the graph. By induction, we can show that:
• n mod 3 = 1: Consider round t = n. Node n has xn^(t−1) = 2, and two neighbors: n − 1, who after moving at the previous round has x_{n−1}^(t−1) = −1; and n + 1, with fixed x_{n+1}^(t−1) = −4. Thus, it is in the same configuration as node i = 1, and so its movement follows Eq. (12).
• n mod 3 = 2: Consider round t = n. Node n has xn^(t−1) = −4, and two neighbors: n − 1, who after moving at the previous round has x_{n−1}^(t−1) = 5; and n + 1, with fixed x_{n+1}^(t−1) = −4. Thus, it is in the same configuration as node i = 2, and so its movement follows Eq. (15).
• n mod 3 = 0: Consider round t = n. Node n has xn^(t−1) = −4, and two neighbors: n − 1, who after moving at the previous round has x_{n−1}^(t−1) = −1; and n + 1, with fixed x_{n+1}^(t−1) = 2. Thus, it is in the same configuration as node i = 3, and so its movement follows Eq. (17).
Fig. 4 (B) illustrates this idea for n > 3. We now proceed to prove the propositions.
Proposition 3: The proposition follows immediately from Lemma 1; the only detail that remains to be shown is that node n + 1 does not move at all. To see this, note that since it has no incoming edges, its embedding depends only on its own features, x_{n+1}. If (n + 1) mod 3 = 1, we have x_{n+1} = 2, and so ŷ_{n+1} = 1 without movement. Otherwise, x_{n+1} = −4, meaning that it is too far to cross.
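The construction can also be checked by simulation. A small sketch, assuming NumPy and the dynamics of Sec. 3.1 with moving distance d = 3 (names are ours):

import numpy as np

def build_chain(n):
    x = np.empty(n + 2)
    x[0] = -1.0
    for i in range(1, n + 2):
        x[i] = 2.0 if i % 3 == 1 else -4.0
    return x

def run(n, d=3.0):
    x = build_chain(n)
    move_round = {}
    for t in range(1, n + 1):
        x_new = x.copy()
        for i in range(n + 1):                  # node n+1 never moves
            nbrs = [i] + ([i - 1] if i > 0 else []) + ([i + 1] if i <= n else [])
            w = 1.0 / len(nbrs)
            phi = w * sum(x[j] for j in nbrs)
            if phi >= 0 or i in move_round:
                continue
            need = -phi / w                     # move landing phi on the boundary
            if need <= d:
                x_new[i] += need
                move_round[i] = t
        x = x_new                               # concurrent updates
    return move_round

print(run(6))   # {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6}, as Lemma 1 predicts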
Proposition 4: Fix n and k ≤ n. Consider the same construction presented above for a graph of size k + 2. Then, add n − k identical nodes: for each k < j ≤ n, add an edge k→j, and set xj = −xk − 6 (with uniform weights, so that j places weight 1/2 on itself and on k). We claim that all such nodes will move exactly at round k. Consider some node k < j ≤ n. Since xk moves only at round k (following Lemma 1), j does not move in any of the first t ≤ k rounds:
w̃j,j(xj^(0) + 3) + w̃k,j xk^(0) = (1/2)(−xk^(0) − 6 + 3) + (1/2) xk^(0) = −3/2 < 0 (18)
At the end of round t = k, node k has a value of xk^(0) + 3. This enables j to cross by moving the maximal distance of 3:
w̃j,j(xj^(k) + 3) + w̃k,j xk^(k) = (1/2)(−xk^(0) − 6 + 3) + (1/2)(xk^(0) + 3) = 0 (19)
As this applies to all such j, we get that n − k nodes move at round k, which concludes our proof.
Note the graph is such that, for b = 0, without strategic behavior the graph is useful for prediction (it increases accuracy from 66% to 100%), so a learner that is unaware of (or does not account for) strategic behavior is incentivized to utilize the graph. However, once strategic behavior is introduced, naïvely using the graph causes performance to drop to 0%.
B OPTIMIZATION
B.1 PROJECTION
We prove for 2-norm-squared costs. Correctness holds for 2-norm costs as well, since the argmin is the same (squaring is monotone on the positives). Calculating xi's best response requires solving the following problem:
min_{x′} c(x′i, xi) s.t. θ⊤ϕ(x′i; x−i) + b = 0
min_{x′} ∥x′i − xi∥2² s.t. θ⊤ϕ(x′i; x−i) + b = 0
To solve for x′, we apply the Lagrange method. Define the Lagrangian as follows:
L(x′i, λ) = ∥x′i − xi∥2² + λ[θ⊤ϕ(x′i; x−i) + b]
Next, to find the minimum of L, differentiate with respect to x′i and compare to 0:
2(x′i − xi) + λθw̃ii = 0
x′i = xi − (λw̃ii/2) θ
Plugging x′i into the constraint gives:
θ⊤[w̃ii(xi − (λw̃ii/2)θ) + Σ_{j≠i} w̃ji xj] + b = 0
θ⊤[ϕ(xi; x−i) − (λw̃ii²/2)θ] + b = 0
θ⊤ϕ(xi; x−i) + b = (λw̃ii²/2)∥θ∥2²
λ = 2(θ⊤ϕ(xi; x−i) + b)/(∥θ∥2² w̃ii²)
Finally, plugging λ into the expression for x′i obtains:
x′i = xi − ((θ⊤ϕ(xi; x−i) + b)/(∥θ∥2² w̃ii)) θ
B.2 GENERALIZED COSTS
Here we provide a formula for computing projections in closed form for generalized quadratic costs c(x, x′) = (1/2)(x′ − x)⊤A(x′ − x) for positive-definite A. As before, the same formula holds for generalized 2-norm costs (since the argmin is the same). Begin with:
min_{x′} (1/2)(x′i − xi)⊤A(x′i − xi) s.t. θ⊤ϕ(x′i; x−i) + b = 0
As before, apply the Lagrangian method:
L(x′i, λ) = (1/2)(x′i − xi)⊤A(x′i − xi) + λ[θ⊤ϕ(x′i; x−i) + b]
Differentiating w.r.t. x′i:
(1/2)[A⊤(x′i − xi) + A(x′i − xi)] + λθw̃ii = 0
(A⊤ + A)x′i = (A⊤ + A)xi − 2λθw̃ii
Since the matrix (A⊤ + A) is PD, we can invert to get:
x′i = xi − 2λ(A⊤ + A)⁻¹θw̃ii
Plugging x′i into the constraint:
θ⊤[w̃ii(xi − 2λ(A⊤ + A)⁻¹θw̃ii) + Σ_{j≠i} w̃ji xj] + b = 0
θ⊤[ϕ(xi; x−i) − 2λ(A⊤ + A)⁻¹w̃ii² θ] + b = 0
θ⊤ϕ(xi; x−i) + b = 2λθ⊤(A⊤ + A)⁻¹θ w̃ii²
Since (A⊤ + A)⁻¹ is also PD, we have θ⊤(A⊤ + A)⁻¹θ > 0, and hence:
λ = (θ⊤ϕ(xi; x−i) + b)/(2θ⊤(A⊤ + A)⁻¹θ w̃ii²)
Finally, plugging in λ:
x′i = xi − ((θ⊤ϕ(xi; x−i) + b)/(θ⊤(A⊤ + A)⁻¹θ w̃ii)) (A⊤ + A)⁻¹θ
Setting A = I recovers Eq. (7).
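A quick numeric sanity check of this closed form, assuming NumPy (the specific matrices below are arbitrary test values): the projected embedding should land exactly on the decision boundary.

import numpy as np

rng = np.random.default_rng(1)
l = 4
M = rng.standard_normal((l, l))
A = M @ M.T + l * np.eye(l)                 # a positive-definite cost matrix
theta, b = rng.standard_normal(l), 0.3
w_ii, rest = 0.4, rng.standard_normal(l)    # rest = sum_{j!=i} w_ji x_j
x = rng.standard_normal(l)

S_inv = np.linalg.inv(A.T + A)
phi = w_ii * x + rest
x_proj = x - ((theta @ phi + b) / (theta @ S_inv @ theta * w_ii)) * (S_inv @ theta)

phi_new = w_ii * x_proj + rest
print(np.isclose(theta @ phi_new + b, 0.0))  # True: constraint satisfied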
B.3 IMPROVING NUMERICAL STABILITY BY ADDING A TOLERANCE TERM
Theoretically, strategic responses move points precisely onto the decision boundary. For numerical stability in classifying (e.g., at test time), we add a small tolerance term, tol, that ensures that points are projected to lie strictly within the positive halfspace. Tolerance is added as follows:
min_{x′} c(x′i, xi) s.t. θ⊤ϕ(x′i; x−i) + b ≥ tol (20)
This necessitates the following adjustment to Eq. (7):
proj_h(xi; x−i) = xi − ((θ⊤ϕ(xi; x−i) + b − tol)/(∥θ∥2² w̃ii)) θ (21)
However, blindly applying the above to Eq. (8) via:
proj⁺_h(xi; x−i) = xi − min{0, (θ⊤ϕ(xi; x−i) + b − tol)/(∥θ∥2² w̃ii)} θ (22)
is erroneous, since any user whose score is lower than tol will move—although in principle she shouldn't. To correct for this, we adjust Eq. (8) by adding a mask that ensures that only points in the negative halfspace are projected:
proj⁺_h(xi; x−i) = xi − 1{θ⊤ϕ(xi; x−i) + b < 0} · ((θ⊤ϕ(xi; x−i) + b − tol)/(∥θ∥2² w̃ii)) θ (23)
C ADDITIONAL EXPERIMENTAL DETAILS
Data. We experiment with three citation network datasets: Cora, CiteSeer, and PubMed (Sen et al., 2008). Table 2 provides summary statistics of the datasets, as well as experimental details.
Splits. All three datasets include a standard train-validation-test split, which we adopt for our use.10 For our purposes, we make no distinction between 'train' and 'validation', and use both sets for training purposes. To ensure the data is appropriate for the inductive setting, we remove from the test set all nodes which can be influenced by train-set nodes—this ranges from 6%–43% of the test set, depending on the dataset (and possibly the setting; see Sec. D.2.1). In Table 2, the number of train samples is denoted ntrain, and the number of inductive test samples is denoted n∗test (all original transductive test sets include 1,000 samples).
Binarization. To make the data binary (original labels are multiclass), we enumerated over possible partitions of classes into 'negative' and 'positive', and chose the most balanced partition. Experimenting with other but similarly-balanced partitions resulted in similar performance (albeit at times less distinct strategic movement). The exception to this was PubMed (having only three classes), for which the most balanced partition was neither 'balanced' nor stable, and so here we opted for the more stable alternative. Reported partitions and corresponding negative-positive ratios (for train and for test) are given in Table 2.
Strategic responses. At test time, strategic user responses are computed by simulating the response dynamics in Sec. 3.1 until convergence.
10Note that nodes in these sets do not necessarily account for all nodes in the graph.
D ADDITIONAL EXPERIMENTAL RESULTS
D.1 EXPERIMENTS ON SYNTHETIC DATA
In this section we explore in further depth the relation between user movement and classification performance, using our synthetic setup from Sec. 5.1 (all examples discussed herein use α = 0.7). From a predictive point of view, graphs are generally helpful if same-class nodes are well-connected. This is indeed the case in our construction (as can be seen from the performance of the benchmark method with non-extreme α > 0 values). From a strategic perspective, however, connectivity increases cooperation, since neighboring nodes can positively influence each other over time. In our construction, cooperation occurs mostly within classes, i.e., negative points that move encourage other negative points to move, and similarly for positive points.
Movement trends. Fig. 5 (left) shows how different threshold classifiers hb induce different degrees of movement.
The plot shows the relative number of points (in percentage points) whose predictions changed as a result of strategic behavior, per class (red: y = −1, green: y = 1) and over time: after one round (T = 1, dashed lines), and at convergence (T = ∞, solid lines). As can be seen, there is a general trend: when b is small, mostly negative points move, but as b increases, positive points move instead. The interesting point to observe is the gap between the first round (T = 1) and the final round (T = ∞). For negative points, movement at T = 1 peaks at b1 ≈ −0.25, but triggers relatively few subsequent moves. In contrast, the peak for T = ∞ occurs at a larger b∞ ≈ 0.15. For this threshold, though fewer points move in the first round, these trigger significantly more additional moves at later rounds—a result of the connectivity structure within the negative cluster of nodes (blue arrows). A similar effect takes place for positive nodes.
The importance of looking ahead. Fig. 5 (center) plots, for a range of thresholds b, the accuracy of hb at convergence (T = ∞; orange line) and after one round (T = 1; gray line). The role of the latter is to illustrate the outcomes as 'perceived' by a myopic predictive model that considers only one round (e.g., includes only one response layer ∆̃); the differences between the two lines demonstrate the gap between perception (based on which training chooses a classifier ĥ) and reality (in which the classifier ĥ is evaluated). As can be seen, the myopic approach leads to an under-estimation of the optimal b∗; at b1 ≈ 0.5, performance for T = 1 is optimal, but is severely worse under the true T = ∞, for which optimal performance is at b∞ ≈ 1.15. The figure also gives insight as to why this happens. For both b1 and b∞, the figure shows (in bars) the relative number of points from each class who obtain ŷ = 1 as a result of strategic moves. Bars are stacked, showing the relative number of points that moved per round T (darker = earlier rounds; lightest = convergence). As can be seen, at b1, the myopic model believes that many positive points, but only few negative points, will cross. However, in reality, at convergence, the number of positive points that crossed is only slightly higher than that of negative points. Hence, the reason for the (erroneous) optimism of the myopic model is that it did not correctly account for the magnitude of correlated moves of negative points, which is expressed over time. In contrast, note that at b∞, barely any negative points cross.
How movement affects accuracy. An important observation about the relation between movement and accuracy is that for any classifier h, any negative point that moves hurts accuracy (since y = −1 but the prediction becomes ŷ = 1), whereas any positive point that moves helps accuracy (since y = 1 and the prediction is now ŷ = 1). Fig. 5 (right) shows how these movements combine to affect accuracy. The figure compares accuracy before strategic behavior (T = 0; dashed line) to after one response round (T = 1; solid line, top plot) and to convergence (T = ∞; solid line, lower plot). As can be seen, for any b, the difference between pre-strategic and post-strategic accuracy amounts exactly to the degradation due to negative points (red arrows) plus the improvement due to positive points (green arrows). Note, however, the difference between T = 1 and T = ∞ as they relate to the benchmark model (T = 0, i.e., no strategic behavior). For T = 1 (top), across the range of b, positive and negative moves roughly balance out.
As a result, the curves for T = 0 and T = 1 are very similar, and share similar peaks in terms of accuracy (both have ≈ 0.89). One interpretation of this is that if points were permitted to move for only one round, the optimal classifier could completely recover the benchmark accuracy by ensuring that the number of positive points that move exceeds the number of negative points. However, for T = ∞ (bottom), there is a skew in favor of positive points (green arrows). The result of this is that for the optimal b, additional rounds allow positive points to move in a way that obtains slightly higher accuracy (0.91) compared to the benchmark (0.89). This is one possible mechanism underlying our results on synthetic data in Sec. 5.1, and later our results on real data in Sec. 5.2.

D.2 EXPERIMENTS ON REAL DATA

D.2.1 EXTENDING NEIGHBORHOOD SIZE

One hyperparameter of SGC is the number of 'propagation' layers, K, which effectively determines the graph distance at which nodes can influence others (i.e., the 'neighborhood radius'). Given K, the embedding weights are defined as $\tilde{W} = D^{-\frac{1}{2}} A^K D^{-\frac{1}{2}}$, where A is the adjacency matrix and D is the diagonal degree matrix. For K = 0, the graph is unused, which results in a standard linear classifier over node features. Our results in the main body of the paper use K = 1. Fig. 6 shows results for increasing K (we set T = 3, d = 0.25 as in our main results). Results are mixed: for PubMed, higher K seems to lead to a smaller drop in accuracy for the naïve approach and less recovery for our approach; for Cora and CiteSeer, results are unstable. Note, however, that this may well be a product of our inductive setup: since varying K also changes the effective test set (to preserve inductiveness, larger K often necessitates removing more nodes), test sets vary across conditions and decrease in size, making it difficult to directly compare results across different K.

D.2.2 STRATEGIC IMPROVEMENT

Our main results in Sec. 5.2 show that for CiteSeer, our strategically-aware approach outperforms the non-strategic benchmark (similarly to our synthetic experiments). Here we show that these results are robust. Fig. 7 provides higher-resolution results on CiteSeer for max distances d ∈ [0, 0.22] in hops of 0.01. All other aspects of the setup match the original experiment. As can be seen, our approach slightly but consistently improves upon the benchmark until d ≈ 0.17.

D.3 NODE CENTRALITY AND INFLUENCE

In this experiment we set out to explore the role played by central nodes in the graph in propagating the influence of strategic behavior. Since the embedding of a node i is partly determined by its in-neighbors, we would broadly expect nodes with high out-degree to be highly influential: as 'anchors' that prevent others from moving if they themselves do not, and as 'carriers' which either push neighbors over the boundary—or at least promote them closer to it—if they do move.

Experimental setup. To study the role of such nodes, we perform the following experiment (a sketch of the disconnection step is given below). First, we order the nodes by decreasing out-degree, so that the potentially more influential nodes appear first in the ranking. Then, for each q ∈ {0, 10, 20, ..., 100}, we disconnect nodes in the qth percentile, i.e., remove all edges emanating from the top-q% ranked nodes. For each such condition, we examine learning and its outcomes, and compare performance to a control condition in which nodes are ordered randomly.
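A minimal NumPy sketch of this disconnection procedure (function and variable names are ours, not from the released code):

```python
import numpy as np

def disconnect_top_q(A, q, rng=None):
    """Remove all outgoing edges of the top-q% ranked nodes.

    A:   (n, n) adjacency matrix with A[i, j] = 1 for an edge i -> j.
    q:   percentile in [0, 100]; q = 0 keeps the original graph,
         q = 100 yields an empty graph.
    rng: if given, nodes are ranked randomly (the control condition);
         otherwise they are ranked by decreasing out-degree.
    """
    n = A.shape[0]
    order = (rng.permutation(n) if rng is not None
             else np.argsort(-A.sum(axis=1)))  # decreasing out-degree
    k = int(np.ceil(n * q / 100))
    A = A.copy()
    A[order[:k], :] = 0  # drop all edges emanating from the top-k ranked nodes
    return A
```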
The difference in performance and other learning outcomes provides us with a measure of the importance of high-degree nodes.

Results. Figure 8 shows results for all methods (naïve, robust (ours), and the non-strategic benchmark) and across all three datasets (Cora, CiteSeer, and PubMed). In all conditions, we vary on the x-axis the portion of nodes that remain connected, where at zero (left end) we get an empty graph, and at 100 (right end) we get the original graph. Note that the y-axis varies in scale across plots. First, consider Cora. In terms of accuracy (upper plots), results show that for the benchmark (evaluated in a non-strategic environment, in which users do not move), the general trend is that more edges help improve performance. However, the gain in performance is much more pronounced for high-degree nodes. Interestingly, for the naïve method (which operates on strategically modified data, but does not anticipate updates), the trend is reversed: users utilize edges in a way that is detrimental to performance—and more so when high-degree nodes remain, making them a vulnerability. Our robust approach is also sensitive to the addition of edges, but to a much lesser degree (the drop in performance is minor); moreover, which nodes are disconnected appears to make little difference, which we take to mean that our approach can counter the dominant role of central nodes. The lower plots, which describe the portion of users that move and the portion of users that cross, provide some explanation as to how this is achieved. For the naïve approach, nearly half of all users move—and all users that move also cross. This occurs faster (in terms of the portion of nodes removed) in the degree condition. For our robust approach, note that the number of nodes that move is halved, and of those, not all cross, which demonstrates how our learning objective, which anticipates strategic behavior, can act to prevent it. For CiteSeer and PubMed, the general trend in accuracy is reversed for the non-strategic benchmark, and for the naïve approach it begins with a gap, which closes at some point (sooner in CiteSeer). Despite these differences, the qualitative behavior of our robust classifier is similar to Cora, achieving fairly stable accuracy (with a mild negative slope) in both conditions and for both datasets. As in Cora, movement and crossing behavior is similar, in that for the naïve approach a considerable portion of users move and cross (with a gap between conditions in PubMed), and in that our robust approach greatly reduces the number of users that move, and even more so the number of users that cross.

E ADDITIONAL ANALYTIC RESULTS

E.1 PERFORMANCE GAPS: ROBUST LEARNING VS. THE NON-STRATEGIC BENCHMARK

When learning a robust classifier on strategic data, intuitively we may hope that its performance approaches that of the optimal classifier on non-strategic data, which we may think of as a target upper bound. A natural question to then ask is: can we always reach this upper bound and close the performance gap? Here we answer this question in the negative, by showing a simple example where the optimal classifier on non-strategic data achieves perfect accuracy, but the optimal classifier on strategic data achieves only 2/3 accuracy. We then show that by introducing a mild change—namely, slightly shifting the features of one node—the performance gap closes entirely. This, combined with our result from the previous Sec.
D.2.2, showing that the gap can also be negative, highlights how the performance gap depends greatly on the structure of the input (i.e., the graph, features, and labels), and in a way which can be highly sensitive even to minor changes.

E.1.1 LARGE GAP

Our first example, in which the best obtainable gap is 1/3, is shown in Figure 9 (Left). The example has three nodes described by one-dimensional features x1 = 1, x2 = −1, x3 = −1, and with labels y1 = 1, y2 = −1, y3 = −1. We use the standard cost function, so that the maximal distance to move is dβ = 2, and use uniform edge weights. Since x ∈ R, classifiers are simply threshold functions on the real line, defined by a threshold parameter b ∈ R.

First, we demonstrate that for non-strategic data, there exists a 'benchmark' classifier that achieves perfect accuracy. Let b = 0. Node embeddings are:

$$\phi_1 = \frac{x_1 + x_2}{2} = \frac{1 - 1}{2} = 0 = b$$
$$\phi_2 = \frac{x_1 + x_2 + x_3}{3} = \frac{1 - 1 - 1}{3} = -\frac{1}{3} < 0 = b$$
$$\phi_3 = \frac{x_2 + x_3}{2} = \frac{-1 - 1}{2} = -1 < 0 = b$$

This gives predictions ŷ1 = +1 = y1, ŷ2 = −1 = y2, ŷ3 = −1 = y3, which are all correct.

Next, we prove that there is no robust classifier capable of achieving perfect accuracy on strategic data. Suppose by contradiction that such a classifier exists, denoted b. For x2 to move in the first round, the following conditions must hold:

$$\frac{x_1 + x'_2 + x_3}{3} = b, \qquad 1 + x'_2 - 1 = 3b, \qquad x'_2 = 3b$$

Since x2 moves at most distance 2, it moves in the first round only if −1/3 < b ≤ 1/3. However, if it does move, it ends up getting an incorrect prediction. In addition, in this case x3 can also get a positive prediction (either in the first round or in the next round, depending on whether b > 0), in which case the accuracy is 1/3. Thus, we get that 1/3 < b (note that for b ≤ −1/3, x2 is classified as positive, which is wrong). Next, we look at the behavior of x1 in the first round. The conditions for movement are:

$$\frac{x'_1 + x_2}{2} = b, \qquad x'_1 - 1 = 2b, \qquad x'_1 = 2b + 1$$

Here x1 gets a negative classification if b > 0. If b > 1, then x1 does not move, since the required distance is larger than 2. Thus, x1 does move (beyond the classifier) and gets the correct prediction only if 0 < b ≤ 1. However, considering now the second round of movement for x2 (which only occurs if b > 1/3, since for 0 < b ≤ 1/3, x2 moves in the first round), we get the conditions:

$$\frac{x'_1 + x'_2 + x_3}{3} = b, \qquad 2b + 1 + x'_2 - 1 = 3b, \qquad x'_2 = b$$

This means that for all 0 < b ≤ 1, x2 moves and gets an incorrect prediction. Hence, for every choice of b, an incorrect prediction occurs either for x1 or for x2. Consequently, there is no b that achieves an accuracy of 1, which contradicts the assumption. The optimal accuracy of any robust classifier for this example is 2/3, which is achieved by any b > 1. In this case, none of the nodes move, and x1 is classified negatively, whereas its true label is positive.

E.1.2 NO GAP

We now give a nearly identical example, shown in Figure 9 (Right), in which the gap becomes zero. We use the same example as before, but set x1 = 1.2 (instead of x1 = 1). We begin by showing that there still exists a classifier which achieves perfect accuracy on non-strategic data. Let b = 0; embeddings are now:

$$\phi_1 = \frac{x_1 + x_2}{2} = \frac{1.2 - 1}{2} = 0.1 > 0 = b$$
$$\phi_2 = \frac{x_1 + x_2 + x_3}{3} = \frac{1.2 - 1 - 1}{3} = -\frac{4}{15} < 0 = b$$
$$\phi_3 = \frac{x_2 + x_3}{2} = \frac{-1 - 1}{2} = -1 < 0 = b$$

(note that since all nodes are connected in the graph, changing x1 requires us to recompute all embeddings). Predictions now become ŷ1 = +1 = y1, ŷ2 = −1 = y2, ŷ3 = −1 = y3, which are all correct. Next, we show that there also exists a classifier which achieves perfect accuracy on strategic data. Let b = 1.1.
In the first round, x1 moves, and flips its predicted label:

$$\phi_1 = \frac{x'_1 + x_2}{2} = \frac{(x_1 + 2) + x_2}{2} = \frac{3.2 - 1}{2} = 1.1 = b$$

Here, even if the other nodes move to the fullest extent, they do not have sufficient influence to revert this prediction:

$$\phi_2 = \frac{x_1 + x'_2 + x_3}{3} = \frac{1.2 + (-1 + 2) - 1}{3} = 0.4 < 1.1 = b$$
$$\phi_3 = \frac{x_2 + x'_3}{2} = \frac{-1 + (-1 + 2)}{2} = 0 < 1.1 = b$$

Thus, we get ŷ1 = +1 = y1, ŷ2 = −1 = y2, ŷ3 = −1 = y3, which are also all correct.

E.2 STRATEGIC BEHAVIOR OF CLIQUES

Here we give a result that considers graph structure. In particular, we consider cliques, and show that for uniform weights, either all nodes move together—or none do.

Proposition 6. Consider n nodes which are all fully connected, i.e., form a clique, and assume uniform edge weights. Then for any dimension ℓ, any assignment of features x1, . . . , xn ∈ Rℓ, and any classifier h, either (i) all n nodes move in the first round, or (ii) none of the nodes move at all.

Proof. Consider the case in which at least one node i moves in the first round. Denote by z the change in xi made in order to cross the classifier, i.e., $z = x^{(1)}_i - x^{(0)}_i$. Note that in order for xi to move, the following conditions must be satisfied:

$$\|z\|_2 \leq 2, \qquad \theta^\top \phi(x^{(1)}_i; x^{(0)}_{-i}) + b = 0$$

We now show that every other node t, if it moves a distance of
1. What is the focus and contribution of the paper regarding strategic classification on graphs?
2. What are the strengths of the proposed approach, particularly in terms of analyzing strategic behavior and proposing a differentiable framework?
3. What are the weaknesses of the paper, especially regarding the limitations of the linear classifier and the simplification of the graph structure?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper considers the strategic classification problem on graphs. Specifically, the authors consider a linear graph-based classifier, which classifies each node in the graph based on the node's embedding with a linear function. The embedding of each node depends on the features of that node and also on the features of its neighborhood. In the strategic classification setting, each user (node) can modify the features of their node to change the classification result. The strategic classification problem is to learn a linear graph-based classifier that minimizes the misclassification loss under strategic behavior. This paper shows that the inter-user dependency on the graph might be exploited by strategic users and cause a higher loss. The authors also propose a differentiable framework for strategically-robust learning of graph-based classifiers. Their theoretical results and method are supported by experimental results.

Strengths And Weaknesses
Strengths:
- This paper proposes a novel strategic classification setting on graphs.
- This paper analyzes the strategic behavior of users in interaction with others.
- They propose a differentiable framework to learn a robust classifier in the strategic classification on graphs setting.

Weaknesses:
- The properties in Section 3.2 and the learning algorithm seem to depend heavily on the classifier being linear. I understand that linear classifiers are widely used in practice and are a more accessible first step toward this problem.
- At the beginning of Section 3 and in the experiments, you normalize the weights such that the node embedding depends on a single parameter α. This seems to simplify the effects of the graph structure and make the problem easier.

Clarity, Quality, Novelty And Reproducibility
This paper is well-written and easy to follow. It proposes a novel strategic classification on graphs setting and a new algorithm to learn a robust classifier. I think the results are original and reproducible.
ICLR
Title
Strategic Classification with Graph Neural Networks

Abstract
Strategic classification studies learning in settings where users can modify their features to obtain favorable predictions. Most current works focus on simple classifiers that trigger independent user responses. Here we examine the implications of learning with more elaborate models that break the independence assumption. Motivated by the idea that applications of strategic classification are often social in nature, we focus on graph neural networks, which make use of social relations between users to improve predictions. Using a graph for learning introduces inter-user dependencies in prediction; our key point is that strategic users can exploit these to promote their own goals. As we show through analysis and simulation, this can work either against the system—or for it. Based on this, we propose a differentiable framework for strategically-robust learning of graph-based classifiers. Experiments on several real networked datasets demonstrate the utility of our approach.

1 INTRODUCTION

Machine learning is increasingly being used to inform decisions about humans. But when users of a system stand to gain from certain predictive outcomes, they may be prone to "game" the system by strategically modifying their features (at some cost). The literature on strategic classification (Brückner & Scheffer, 2011; Hardt et al., 2016) studies learning in this setting, with emphasis on how to learn classifiers that are robust to strategic user behavior. The idea that users may respond to a decision rule applies broadly and across many domains, from hiring, admissions, and scholarships to loan approval, insurance, welfare benefits, and medical eligibility (McCrary, 2008; Almond et al., 2010; Camacho & Conover, 2011; Lee & Lemieux, 2010). This, along with its clean formulation as a learning problem, has made strategic classification the target of much recent interest (Sundaram et al., 2021; Zhang & Conitzer, 2021; Levanon & Rosenfeld, 2021; Ghalme et al., 2021; Jagadeesan et al., 2021; Zrnic et al., 2021; Estornell et al., 2021; Lechner & Urner, 2021; Harris et al., 2021; Levanon & Rosenfeld, 2022; Liu et al., 2022; Ahmadi et al., 2022; Barsotti et al., 2022a). But despite these advances, most works in strategic classification continue to follow the original problem formulation in assuming independence across user responses. From a technical perspective, this assumption greatly simplifies the learning task, as it allows the classifier to consider each user's response in isolation: user behavior is modeled via a response mapping ∆h(x) determining how users modify their features x in response to the classifier h, and learning aims to find an h for which y ≈ h(∆h(x)). Intuitively, a user will modify her features if this 'moves' her across the decision boundary, as long as this is worthwhile (i.e., gains from prediction exceed modification costs). Knowing ∆h allows the system to anticipate user responses and learn an h that is robust. For a wide range of settings, learning under independent user responses has been shown to be theoretically possible (Hardt et al., 2016; Zhang & Conitzer, 2021; Sundaram et al., 2021) and practically feasible (Levanon & Rosenfeld, 2021; 2022). Unfortunately, once this assumption of independence is removed—results no longer hold.
One reason is that current approaches can safely assume independence because the decision rules they consider induce independence: when predictions inform decisions for each user independently, users have no incentive to account for the behavior of others. This limits the scope of predictive models to include only simple functions of single inputs.

* Equal contribution, alphabetical order

In this paper, we aim to extend the literature on strategic classification to support richer learning paradigms that enable inter-dependent user responses, with particular focus on the domain of Graph Neural Networks (GNNs) (Monti et al., 2017; Wang et al., 2019; Bronstein et al., 2017; Hamilton et al., 2017). Generally, user responses can become dependent through the classifier if predictions for one user rely also on information regarding other users, i.e., if h(xi) is also a function of other xj. In this way, the effects of a user modifying her features via xj ↦ ∆h(xj) can propagate to other users and affect their decisions (since h(xi) now relies on ∆h(xj) rather than xj). For GNNs, this is expressed through their reliance on the graph. GNNs take as input a weighted graph whose nodes correspond to featurized examples, and whose edges indicate relations that are believed to be useful for prediction (e.g., if j→i indicates that yi = yj is likely). In our case, nodes represent users, and edges represent social links. The conventional approach is to first embed nodes in a way that depends on their neighbors' features, ϕi = ϕ(xi; xnei(i)), and then perform classification (typically linear) in the embedded space, ŷi = sign(w⊤ϕi). Notice that ŷi depends on xi, but also on all other xj ∈ xnei(i); hence, in deciding how to respond, user i must also account for the strategic responses of her neighbors j ∈ nei(i). We aim to establish the effects of such dependencies on learning.

As a concrete example, consider Lenddo1, a company that provides credit scoring services to lending institutions. Lenddo specializes in consumer-focused microlending for emerging economies, where many applicants lack credible financial records. To circumvent the need to rely on historical records, Lenddo uses applicants' social connections, which are easier to obtain, as a factor in its scoring system.2 As an algorithmic approach for this task, GNNs are an adequate choice (Gao et al., 2021). Once loan decisions become dependent on social relations, the incentives for acting strategically change (Wei et al., 2016). To see how, consider that a user who lies far on the negative side of the decision boundary (and so cannot cross on her own) may benefit from the graph if her neighbors "pull" her embedding towards the decision boundary and close enough for her to cross. Conversely, the graph can also suppress strategic behavior, since neighbors can "hold back" nodes and prevent them from crossing. Whether this is helpful to the system or not depends on the true label of the node. This presents a tradeoff: In general, graphs are useful if they are informative of labels in a way that complements features; the many success stories of GNNs suggest that this is often the case (Zhou et al., 2020). But even if this holds sans strategic behavior—once introduced, graphs inadvertently create dependencies through user representations, which strategic users can exploit. Graphs therefore hold the potential to benefit the system, but also its users. Here we study the natural question: who does the graph help more?
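To make the "pull" effect concrete, here is a minimal numeric sketch with one-dimensional features (the numbers follow the hitchhiking example worked out in Appendix A.1):

```python
# User i has one embedding neighbor j: phi_i = 0.4 * x_i + 0.6 * x_j.
# Prediction is positive iff phi_i >= 0, and a user can move her feature by at most 2.
x_i, x_j = -2.1, -0.5

print(0.4 * x_i + 0.6 * x_j)        # -1.14: i starts out classified negatively
print(0.4 * (x_i + 2) + 0.6 * x_j)  # -0.34: i cannot cross even with a maximal move

x_j += 2.0                          # j responds strategically and moves to 1.5
print(0.4 * x_i + 0.6 * x_j)        #  0.06: i is now classified positively, for free
```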
Through analysis and experimentation, we show that learning in a way that neglects to account for strategic behavior not only jeopardizes performance, but becomes worse as reliance on the graph increases. In this sense, the graph becomes a vulnerability which users can utilize for their own needs, turning it from an asset to the system—into a potential threat. As a solution, we propose a practical approach to learning GNNs in strategic environments. We show that for a key neural architecture (SGC; Wu et al. (2019)) and certain cost functions, graph-dependent user responses can be expressed as a 'projection-like' operator. This operator admits a simple and differentiable closed form; with additional smoothing, this allows us to implement responses as a neural layer, and learn robust predictors h using gradient methods. Experiments on synthetic and real data (with simulated responses) demonstrate that our approach not only effectively accounts for strategic behavior, but in some cases can harness the efforts of self-interested users to promote the system's goals. Our code is publicly available at: http://github.com/StrategicGNNs/Code.

1.1 RELATED WORK

Strategic classification. Since its introduction in Hardt et al. (2016) (based on earlier formulations in Brückner & Scheffer (2009); Brückner et al. (2012); Großhans et al. (2013)), the literature on strategic classification has been growing at a rapid pace. Various aspects of learning have been studied, including: generalization behavior (Zhang & Conitzer, 2021; Sundaram et al., 2021; Ghalme et al., 2021), algorithmic hardness (Hardt et al., 2016), practical optimization methods (Levanon & Rosenfeld, 2021; 2022), and societal implications (Milli et al., 2019; Hu et al., 2019; Chen et al., 2020; Levanon & Rosenfeld, 2021). Some efforts have been made to extend beyond the conventional user models, e.g., by adding noise (Jagadeesan et al., 2021), relying on partial information (Ghalme et al., 2021; Bechavod et al., 2022), or considering broader user interests (Levanon & Rosenfeld, 2022); but these, as do the vast majority of other works, focus on linear classifiers and independent user responses.3 We study richer predictive model classes that lead to correlated user behavior.

1 http://lenddoefl.com; see also http://www.wired.com/2014/05/lenddo-facebook/.
2 For a discussion on ethics, see the final section. For similar initiatives, see https://en.wikipedia.org/wiki/Lenddo.
Common attacks include perturbing nodes, either in sets (Zügner et al., 2018; Zang et al., 2021) or individually (Finkelshtein et al., 2020). Attacks can be applied before training (Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Li et al., 2021; Zhang & Zitnik, 2020) or at test-time (Szegedy et al., 2014; Goodfellow et al., 2015); our work corresponds to the latter. While there are connections between adversarial and strategic behavior (Sundaram et al., 2021), the key difference is that strategic behavior is not a zero-sum game; in some cases, incentives can even align (Levanon & Rosenfeld, 2022). Thus, system-user relations become more nuanced, and provide a degree of freedom in learning that does not exist in adversarial settings. 2 LEARNING SETUP Our setting includes n users, represented as nodes in a directed graph G = (V,E) with non-negative edge weights W = {wij}(i,j)∈E , wij ≥ 0. Each user i is also described by a feature vector xi ∈ Rℓ and a binary label yi ∈ {±1}. We use x−i = {xj}j ̸=i to denote the set of features of all nodes other than i. Using the graph, our goal is to learn a classifier h that correctly predicts user labels. The challenge in our strategic setting is that inputs at test-time can be strategically modified by users, in response to h and in a way that depends on the graph and on other users (we describe this shortly). Denoting by xhi the (possibly modified) strategic response of i to h, our learning objective is: argmin h∈H ∑ i L(yi, ŷi), ŷi = h(x h i ;x h −i) (1) where H is the model class and L is a loss function (i.e., log-loss). Note that both predictions ŷi and modified features xhi can depend on G and on on x h −i (possibly indirectly through h). We focus on the inductive graph learning setting, in which training is done on G, but testing is done on a different graph, G′ (often G,G′ are two disjoint components of a larger graph). Our goal is therefore to learn a classifier that generalizes to other graphs in a way that is robust to strategic user behavior. Graph-based learning. We consider linear graph-based classifiers—these are linear classifiers that operate on linear, graph-dependent node embeddings, defined as: hθ,b(xi;x−i) = sign(θ ⊤ϕ(xi;x−i) + b), ϕ(xi;x−i) = w̃iixi + ∑ j ̸=i w̃jixj (2) where ϕi = ϕ(xi;x−i) is node i’s embedding,4 θ ∈ Rℓ and b ∈ R are learned parameters, and w̃ij ≥ 0 are pairwise weights that depend on G and W . We refer to users j with w̃ji ̸= 0 as the embedding neighbors of i. A simple choice of weights is w̃ji = wji for (j, i) ∈ E (and 0 otherwise), but different methods propose different ways to construct w̃; here we adopt the weight scheme of Wu et al. (2019). We assume the weights w̃ are predetermined, and aim to learn θ and b in Eq. (1). Our focus on linear GNNs stems from several factors. From the perspective of strategic classification, linear decision rules ensure that strategic responses are computationally tractable (see Eq. (4)). This is conventionally required, and most works remain in the linear regime. From the perspective of GNNs, 3The only exception we know of is Liu et al. (2022) who study strategic ranking, but do not consider learning. 4Note that embedding preserve the dimension of the original features. linear architectures have been shown to match state-of-the-art performance on multiple tasks (Wu et al., 2019), implying sufficiently manifest the fundamental role of graphs. 
Thus, linear GNNs serve as a minimal necessary step for bridging standard strategic classification and graph-based learning, in a way that captures the fundamental structure of the learning task in both domains. Nonetheless, as we show in Sec. 4, even for linear GNNs—user responses can cause learning to be highly non-linear.

Strategic inputs. For the strategic aspects of our setting, we build on the popular formulation of Hardt et al. (2016). Users seek to be classified positively (i.e., have ŷi = 1), and to achieve this, are willing to modify their features (at some cost). Once the system has learned and published h, a test-time user i can modify her features xi ↦ x′i in response to h. Modification costs are defined by a cost function c(x, x′) (known to all); here we focus mainly on 2-norm costs c(x, x′) = ∥x − x′∥2 (Levanon & Rosenfeld, 2022; Chen et al., 2020), but also discuss other costs (Brückner et al., 2012; Levanon & Rosenfeld, 2021; Bechavod et al., 2022). User i modifies her features (or "moves") if this improves her prediction (i.e., if h(xi) = −1 but h(x′i) = 1) and is cost-effective (i.e., prediction gains exceed modification costs); for linear classifiers, this means crossing the decision boundary. Note that since y ∈ {±1}, gains are at most h(x′) − h(x) = 2. Users therefore do not move to any x′ whose cost c(x, x′) exceeds a 'budget' of 2, and the maximal moving distance is d = 2.

Distribution shift. One interpretation of strategic classification is that user responses cause distribution shift, since in aggregate, p(x′) ≠ p(x). Crucially, how the distribution changes depends on h, which implies that the system has some control over the test distribution p(x′), indirectly through how users respond—a special case of model-induced distribution shift (Miller et al., 2021; Maheshwari et al., 2022). The unique aspect of our setting is that user responses are linked through their mutual dependence on the graph. We next describe our model of user responses in detail.

3 STRATEGIC USER BEHAVIOR: MODEL AND ANALYSIS

Eq. (2) states that h classifies i according to her embedding ϕi, which in turn is a weighted sum of her features and those of her neighbors. To gain intuition as to the effects of the graph on user behavior, it will be convenient to assume that the weights w̃ are normalized,5 so that we can write:

$$\phi_i = \phi(x_i; x_{-i}) = (1 - \alpha_i)\, x_i + \alpha_i \bar{x}_i \quad \text{for some } \alpha_i \in [0, 1] \tag{3}$$

I.e., ϕi can be viewed as an interpolation between xi and some point x̄i ∈ Rℓ representing all other nodes, where the precise point along the line depends on a parameter αi that represents the influence of the graph (in a graph-free setting, αi = 0). This reveals the dual effect a graph has on users: On the one hand, the graph limits the ability of user i to influence her own embedding, since any effort invested in modifying xi affects ϕi by a factor of at most 1 − αi. But the flip side of this is that an αi-portion of ϕi is fully determined by other users (as expressed in x̄i); if they move, i's embedding also 'moves' for free. A user's 'effective' movement radius is ri = d(1 − αi). Fig. 1 (F) shows this for varying αi.

5 This is indeed the case in several common approaches.

3.1 STRATEGIC RESPONSES

Given that h relies on the graph for predictions—how should a user modify her features xi to obtain ŷi = 1?
In vanilla strategic classification (where h operates on each xi independently), users are modeled as rational agents that respond to the classifier by maximizing their utility, i.e., play x′i = argmaxx′ h(x′) − c(xi, x′), which is a best response that results in immediate equilibrium (users have no incentive to move, and the system has no incentive to change h).6 In our graph-based setting, however, the dependence of ŷi on all other users via h(xi; x−i) makes this notion of best response ill-defined, since the optimal x′i can depend on others' strategic responses, x′−i, which are unknown to user i at the time of decision (and may very well rely on x′i itself). As a feasible alternative, here we generalize the standard model by assuming that users play myopic best responses over a sequence of multiple update rounds. As we will see, this has direct connections to key ideas underlying graph neural networks. Denote the features of node i at round t by x(t)i, and set x(0)i = xi. A myopic best response means that at round t, each user i chooses x(t)i to maximize her utility at time t according to the state of the game at time t − 1, i.e., assuming all other users play {x(t−1)j}j≠i, with costs accumulating over rounds. This defines a myopic response mapping:

$$\Delta_h(x_i; x_{-i}, \kappa) \triangleq \operatorname*{argmax}_{x' \in \mathbb{R}^\ell} \; h(x'; x_{-i}) - c(x_i, x') - \kappa \tag{4}$$

where at round t updates are made (concurrently) via $x^{(t+1)}_i = \Delta_h(x^{(t)}_i; x^{(t)}_{-i}, \kappa^{(t)}_i)$ with accumulating costs $\kappa^{(t)}_i = \kappa^{(t-1)}_i + c(x^{(t-1)}_i, x^{(t)}_i)$, $\kappa^{(0)}_i = 0$. Predictions for round t are $\hat{y}^{(t)}_i = h(x^{(t)}_i; x^{(t)}_{-i})$. Eq. (4) naturally extends the standard best-response mapping (which is recovered when αi = 0 for all i, and converges after one round). By adding a temporal dimension, the actions of users propagate over the graph and in time to affect others. Nonetheless, even within a single round, graph-induced dependencies can result in non-trivial behavior; some examples for ℓ = 1 are given in Fig. 1 (A-D).

3.2 ANALYSIS

We now give several results demonstrating basic properties of our response model and the consequent dynamics, which shed light on how the graph differentially affects the system and its users.

Convergence. Although users are free to move at will, movement adheres to a certain useful pattern.

Proposition 1. For any h, if users move via Eq. (4), then for every i ∈ [n], $x^{(t)}_i \neq x^{(t-1)}_i$ holds for at most one round t.

Proof. User i will move only when: (i) she is currently classified negatively, h(xi; x−i) = −1, and (ii) there is some x′ for which utility can improve, i.e., h(x′; x−i) − c(xi, x′) > −1, which in our case occurs if h(x′; x−i) = 1 and c(xi, x′) < 2 (since h maps to {−1, 1}).7 Eq. (4) ensures that the modified x′i will be such that ϕ(x′i; x−i) lies exactly on the decision boundary of h; hence, x′i must be closer to the decision boundary (in Euclidean distance) than xi. This means that any future moves of an (incoming) neighbor j can only push i further away from the decision boundary; hence, the prediction for i remains positive, and she has no future incentive to move again.8 Hence, all users move at most once.

The proof reveals a certain monotonicity principle: users always (weakly) benefit from any strategic movement of others. Convergence follows as an immediate result.

Corollary 1. Myopic best-response dynamics converge for any h (and after at most n rounds).

We will henceforth use xhi to denote the features of user i at convergence (w.r.t. h), reached at a round we denote Tmax.

6 Note that 'rational' here implies users are assumed to know h. As with most works in the field, we also make this assumption; for the practically-inclined reader, note that (i) in some cases there is reason to believe it may approximately hold (e.g., http://openschufa.de), and (ii) relaxing this assumption (and others) is an ongoing community effort (Ghalme et al., 2021; Jagadeesan et al., 2021; Bechavod et al., 2022; Barsotti et al., 2022b).
7 In line with Hardt et al. (2016), we assume that if the value is zero then the user does not move.
8 Users moving only once ensures that cumulative costs are never larger than the final gain.

Hitchhiking.
When i moves, the embeddings of her (outgoing) neighbors j who currently have ŷj = −1 also move closer to the decision boundary; thus, users who were initially too far to cross may be able to do so at later rounds. In this sense, the dependencies across users introduced by the graph-dependent embeddings align user incentives, and promote an implicit form of cooperation. Interestingly, users can also obtain positive predictions without moving. We refer to such users as 'hitchhikers'.

Proposition 2. There exist cases where $\hat{y}^{(t)}_i = -1$ and i doesn't move, but $\hat{y}^{(t+1)}_i = 1$.

A simple example can be found in Figure 1 (E). Hitchhiking demonstrates how relying on the graph for classification can promote strategic behavior—even within a single response round.

Cascading behavior. Hitchhiking shows how the movement of one user can flip the label of another, but the effects of this process are constrained to a single round. When considering multiple rounds, a single node can trigger a 'domino effect' of moves that spans the entire sequence.

Proposition 3. For any n, there exists a graph where a single move triggers n additional rounds.

Proposition 4. For any n and k ≤ n, there exists a graph where n − k users move at round k.

The proofs are constructive and modular, and rely on graphs that are predictively useful (Appendix A.2). Note also that graph diameter is not a mediating factor (Appendix E.3). Both results show that, through monotonicity, users also (weakly) benefit from additional rounds. This has concrete implications.

Corollary 2. In the worst case, the number of rounds until convergence is Ω(n).

Corollary 3. In the worst case, Ω(n) users move after Ω(n) rounds.

Thus, to exactly account for user behavior, the system must correctly anticipate the strategic responses of users many rounds into the future, since a bulk of predictions may flip in the last round. Fortunately, these results also suggest that in some cases, blocking one node from crossing can prevent a cascade of flips; thus, it may be worthwhile to 'sacrifice' certain predictions for collateral gains. This presents an interesting tradeoff in learning, encoded in the learning objective we present next, and which we motivate with our final result on the potential impact of strategic behavior:

Proposition 5. The gap in accuracy between (i) the optimal non-strategic classifier on non-strategic data, and (ii) the optimal strategic classifier on strategic data, can be as large as 30% (see Apx. E.1).

4 LEARNING AND OPTIMIZATION

We are now ready to describe our learning approach. Our learning objective can be restated as:

$$\hat{h} = \operatorname*{argmin}_{h \in H} \sum_i L\big(y_i, h(x^h_i; x^h_{-i})\big) \tag{5}$$

for H = {hθ,b} as in Eq. (2). The difficulty in optimizing Eq. (5) is that xh depends on h through the iterative process, which relies on ∆h. At test time, xh can be computed exactly by simulating the dynamics.
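As an illustration, here is a minimal NumPy sketch of this exact test-time simulation, assuming the linear model of Eq. (2), 2-norm costs, and the closed-form projection of Eq. (7) (names and structure are ours, not the released code):

```python
import numpy as np

def simulate_responses(X, Wt, theta, b, d=2.0):
    """Simulate the myopic best-response dynamics of Sec. 3.1 to convergence.

    X:        (n, l) user features.
    Wt:       (n, n) embedding weights, Wt[i, j] = weight of x_j in phi_i
              (so phi = Wt @ X).
    theta, b: classifier parameters; h = sign(theta' phi_i + b).
    d:        maximal moving distance (2 for unit-scale 2-norm costs).
    """
    X = X.astype(float).copy()
    moved = np.zeros(len(X), dtype=bool)    # by Prop. 1, each user moves at most once
    while True:
        scores = (Wt @ X) @ theta + b       # theta' phi_i + b for every user
        # cost of the projection-like move of Eq. (7): ||x'_i - x_i||_2
        costs = -scores / (np.linalg.norm(theta) * np.diag(Wt))
        movers = (~moved) & (scores < 0) & (costs <= d)
        if not movers.any():                # converged (after at most n rounds; Cor. 1)
            return X
        # concurrent round: every mover is projected onto the decision boundary
        steps = scores[movers] / (np.linalg.norm(theta) ** 2 * np.diag(Wt)[movers])
        X[movers] -= np.outer(steps, theta)
        moved[movers] = True
```

At convergence, predictions are read off as the sign of `(Wt @ X) @ theta + b`.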
However, at train time we would like gradients of θ, b to propagate through xh. For this, we propose an efficient differentiable proxy of xh, implemented as a stack of layers, each corresponding to one response round. The number of layers is a hyperparameter, T.

Single round. We begin by examining a single iteration of the dynamics, i.e., T = 1. Note that since a user moves only if the cost is at most 2, Eq. (4) can be rewritten as:

$$\Delta_h(x_i; x_{-i}) = \begin{cases} x'_i & \text{if } h(x_i; x_{-i}) = -1 \text{ and } c(x_i, x'_i) \leq 2 \\ x_i & \text{o.w.} \end{cases} \tag{6}$$

where x′i = projh(xi; x−i) is the point to which xi must move in order for ϕ(xi; x−i) to be projected onto h. This projection-like operator (on xi) can be shown to have a closed-form solution:

$$\mathrm{proj}_h(x_i; x_{-i}) = x_i - \frac{\theta^\top \phi(x_i; x_{-i}) + b}{\|\theta\|_2^2\, \tilde{w}_{ii}}\, \theta \tag{7}$$

See Appendix B.1 for a derivation using the KKT conditions. Eq. (7) is differentiable in θ and b; to make the entire response mapping differentiable, we replace the 'hard if' in Eq. (6) with a 'soft if', which we now describe. First, to account only for negatively-classified points, we ensure that only points in the negative halfspace are projected via a 'positive-only' projection:

$$\mathrm{proj}^+_h(x_i; x_{-i}) = x_i - \min\left\{0,\; \frac{\theta^\top \phi(x_i; x_{-i}) + b}{\|\theta\|_2^2\, \tilde{w}_{ii}}\right\} \theta \tag{8}$$

Then, we replace the c ≤ 2 constraint with a smoothed sigmoid that interpolates between xi and the projection, as a function of the cost of the projection, thresholded at 2. This gives our differentiable approximation of the response mapping:

$$\tilde{\Delta}(x_i; x_{-i}, \kappa) = x_i + (x'_i - x_i)\, \sigma_\tau\big(2 - c(x_i, x'_i) - \kappa\big), \qquad x'_i = \mathrm{proj}^+_h(x_i; x_{-i}) \tag{9}$$

where σ is a sigmoid and τ is a temperature hyperparameter (τ → 0 recovers Eq. (6)), and for T = 1, κ = 0. In practice we add a small additive tolerance term for numerical stability (see Appendix B.3).

Multiple rounds. Next, we consider the computation of (approximate) modified features after T > 1 rounds, denoted x̃(T), in a differentiable manner. Our approach is to apply ∆̃ iteratively as:

$$\tilde{x}^{(t+1)}_i = \tilde{\Delta}(\tilde{x}^{(t)}_i; \tilde{x}^{(t)}_{-i}, \kappa^{(t)}_i), \qquad \tilde{x}^{(0)}_i = x_i \tag{10}$$

Considering ∆̃ as a layer in a neural network, approximating T rounds can be done by stacking. In Eq. (10), κ(t)i is set to accumulate the costs of the approximate responses, $\kappa^{(t)}_i = \kappa^{(t-1)}_i + c(\tilde{x}^{(t-1)}_i, \tilde{x}^{(t)}_i)$. One observation is that for 2-norm costs, $\kappa^{(t)}_i = c(\tilde{x}^{(0)}_i, \tilde{x}^{(t)}_i)$ (by the triangle inequality; since all points move along a line, equality holds). We can therefore simplify Eq. (9) and replace $c(\tilde{x}^{(t-1)}_i, x'_i) + \kappa^{(t-1)}_i$ with $c(\tilde{x}^{(0)}_i, x'_i)$. For other costs, this gives a lower bound (see Appendix B.1).

5 EXPERIMENTS

5.1 SYNTHETIC DATA

We begin our empirical evaluation by demonstrating different aspects of learning in our setting using a simple but illustrative synthetic example. Additional results and insights on movement trends, the effects of movement on accuracy, and the importance of looking ahead can be found in Appendix D.1. For our experimental setup, we set ℓ = 1 and sample features xi ∈ R for each class from a corresponding Gaussian N(y, 1) (classes are balanced). For each node, we uniformly sample 5 neighbors from the same class and 3 from the other, and use uniform weights. This creates a task where both the features and the graph are informative about labels, but only partially, and in a complementary manner (i.e., the noise is uncorrelated; for i with yi = 1, if xi < 0, it is still more likely that most neighbors have xj > 0, and vice versa).
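A short sketch of this synthetic generator (ours, assuming the sampling scheme just described):

```python
import numpy as np

def make_synthetic(n=1000, seed=0):
    """Sec. 5.1 synthetic task: balanced labels, x_i ~ N(y_i, 1), and for each
    node 5 same-class and 3 other-class neighbors sampled uniformly."""
    rng = np.random.default_rng(seed)
    y = np.repeat([-1, 1], n // 2)
    x = rng.normal(loc=y, scale=1.0)                 # 1-D features centered at the label
    idx = {c: np.flatnonzero(y == c) for c in (-1, 1)}
    neighbors = [np.concatenate([
        rng.choice(np.setdiff1d(idx[y[i]], [i]), size=5, replace=False),
        rng.choice(idx[-y[i]], size=3, replace=False)])
        for i in range(n)]
    return x, y, neighbors
```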
As it is a priori unclear how to optimally combine these sources, we study the effects of relying on the graph to various degrees by varying a global α, i.e., setting w̃ii = (1 − α) and w̃ji = α/degi for all i and all j ≠ i. We examine both strategic and non-strategic settings, the latter serving as a benchmark. Since ℓ = 1, H = {hb} is simply the class of thresholds; hence, we can scan all thresholds b and report learning outcomes for all models hb ∈ H. For non-strategic data, the optimal h∗ has b∗ ≈ 0; for strategic data, the optimal h∗ can be found using line search. Testing is done on disjoint but similarly sampled held-out features and graph.

The effects of strategic behavior. Figure 2 (left) presents the accuracy of the learned ĥ for varying α and in different settings. In the non-strategic setting (dashed gray), increasing α helps, but if reliance on the graph becomes exaggerated, performance deteriorates (α ≈ 0.7 is optimal). Allowing users to respond strategically reverses this result: for α = 0 (i.e., no graph), responses lower accuracy by ≈ 0.26 points; but as α is increased, the gap grows, becoming more pronounced as test-time response rounds progress (blue lines). Interestingly, performance under strategic behavior is worst around the previously-optimal α ≈ 0.75. This shows how learning in a strategic environment—but neglecting to account for strategic behavior—can be detrimental. By accounting for user behavior, our approach (orange line) not only recovers performance, but slightly improves upon the non-strategic setting (this can occur when positive points are properly incentivized; see Appendix D.1).

Sensitivity analysis. Figure 2 (right) plots the accuracy of all threshold models hb for increasing values of α. For each α, performance exhibits a 'bell-curve' shape, with its peak at the optimal h∗. As α increases, the bell-curves change in two ways. First, their centers shift, decreasing from positive values towards zero (which is optimal for non-strategic data); since using the graph limits users' effective radius of movement, the optimal decision boundary can be less 'stringent'. Second, and interestingly, the bell-curves become narrower. We interpret this as a measure of tolerance: the wider the curve, the lower the loss in accuracy when the learned ĥ is close to (but does not equal) h∗. The figure shows, for a subset of α-s, 'tolerance bands': intervals around b∗ that include thresholds b for which the accuracy of hb is at least 90%, 95%, and 97.5% of the optimum (horizontal lines). Results indicate that larger α-s provide less tolerance. If variation in ĥ can be attributed to the number of examples, this can be interpreted as hinting that larger α-s may entail larger sample complexity.

Number of layers (T). Figure 2 (right) also shows, for each bell-curve, the accuracy achieved by learned models ĥ of increasing depths, T = 1, . . . , 4 (colored dots). For α = 0 (no graph), there are no inter-user dependencies, and the dynamics converge after one round. Hence, T = 1 suffices and is optimal, and additional layers are redundant. However, as α increases, more users move in later rounds, and learning with an insufficiently large T results in deteriorated performance. This becomes especially distinct for large α: e.g., for α = 0.9, performance drops by ∼ 11% when using T = 1 instead of the optimal T = 4. Interestingly, lower T always results in lower, more 'lenient' thresholds; as a result, performance deteriorates, and more quickly for larger, more sensitive α.
Thus, the relation between α and T suggests that greater reliance on the graph requires more depth.

5.2 EXPERIMENTS ON REAL DATA

Data. We use three benchmark datasets used extensively in the GNN literature: Cora, CiteSeer, and PubMed (Sen et al., 2008; Kipf & Welling, 2017), and adapt them to our setting. We use the standard (transductive) train-test split of Sen et al. (2008); the data is made inductive by removing all test-set nodes that can be influenced by train-set nodes (Hamilton et al., 2017). All three datasets describe citation networks, with papers as nodes and citations as edges. Although these are directed relations by nature, the available data include only undirected edges; hence, we direct edges towards lower-degree nodes, so that the movement of higher-degree nodes is more influential. As our setup requires binary labels, we follow standard practice and merge classes, aiming for balanced binary classes that sustain strategic movement. Appendix C includes further details; see Appendix D.2 for additional results on strategic improvement, extending neighborhood size, and node centrality and influence.

Methods. We compare our robust learning approach to a naïve approach that does not account for strategic behavior (i.e., falsely assumes that users do not move). As a benchmark, we report the performance of the naïve model on non-strategic data (for which it is appropriate). All methods are based on the SGC architecture (Wu et al., 2019), as it is expressive enough to effectively utilize the graph, but simple enough to permit rational user responses (Eq. (4); see also the notes in Sec. 1.1). We use the standard weights $\tilde{W} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$, where A is the adjacency matrix and D is the diagonal degree matrix.

Optimization and setup. We train using Adam and set hyperparameters according to Wu et al. (2019) (learning rate = 0.2, weight decay = $1.3 \times 10^{-5}$). Training is stopped after 20 epochs (this usually suffices for convergence). Hyperparameters were determined based only on the train set: τ = 0.05, chosen as the smallest value which retained stable training, and T = 3, as training typically saturates by then (we also explore varying depths). We use β-scaled 2-norm costs, cβ(x, x′) = β∥x − x′∥2 with β ∈ R+, which induce a maximal moving distance of dβ = 2/β. We observed that values around d = 0.5 permit almost arbitrary movement; we therefore experiment in the range d ∈ [0, 0.5], but focus primarily on the mid-point d = 0.25 (note that d = 0 implies no movement). Means and standard errors are reported over five random initializations. Appendix C includes further details.

Results. Table 1 presents detailed results for d = 0.25 and T = 3. As can be seen, the naïve approach is highly vulnerable to strategic behavior. In contrast, by anticipating how users collectively respond, our robust approach is able to recover much of the drop in accuracy (i.e., from 'benchmark' to 'naïve'; Cora: 35%, CiteSeer: 16%, PubMed: 72%). Note that this is achieved with a T much smaller than necessary for the response dynamics to converge (Tmax: Cora = 7, CiteSeer = 7, PubMed = 11). Fig. 3 (top) shows results for varying max distances d ∈ [0, 0.5], fixing T = 3. For Cora and CiteSeer, larger max distances—the result of lower modification costs—hurt performance; nonetheless, our robust approach maintains a fairly stable recovery rate over all values of d. For PubMed, our approach retains ≈ 92% of the optimum, showing resilience to reduced costs.
Interestingly, for CiteSeer, in the range d ∈ [0.05, 0.15], our approach improves over the baseline, suggesting that it utilizes strategic movements for improved accuracy (as in Sec. 5.1). Fig. 3 (bottom) shows results for varying depths T ∈ {0, . . . , 10}. For all datasets, results improve as T increases, but saturate quickly at T ≈ 3; this suggests a form of robustness of our approach to overshooting in the choice of T (which, due to smoothing, can cause larger deviations from the true dynamics). Using T = 1 recovers between 65%−91% (across datasets) of the optimal accuracy. This shows that while considering only one round of user responses (in which there are no dependencies) is helpful, it is much more effective to consider multiple, dependent rounds—even if only a few.

6 DISCUSSION

In this paper we study strategic classification under graph neural networks. Relying on a graph for prediction introduces dependencies in user responses, which can result in complex correlated behavior. The incentives of the system and its users are not aligned, but also not discordant; our proposed learning approach utilizes this degree of freedom to learn strategically-robust classifiers. Strategic classification assumes rational user behavior; this necessitates classifiers that are simple enough to permit tractable best responses. A natural future direction is to consider more elaborate predictive architectures coupled with appropriate boundedly-rational user models, in hopes of shedding further light on questions regarding the benefits and risks of transparency and model explainability.

ETHICS AND SOCIETAL IMPLICATIONS

In our current era, machine learning is routinely used to make predictions about humans. These, in turn, are often used to inform—or even determine—consequential decisions. That humans can (and do) respond to decision rules is a factual reality, and a topic of continual interest in fields such as economics (e.g., Nielsen et al., 2010) and policy-making (e.g., Camacho & Conover, 2013); the novelty of strategic classification is that it studies decision rules that are the product of learned predictive models. Strategic classification not only acknowledges this reality, but also proposes tools for learning in ways that account for it. But in modeling and anticipating how users respond, and by adjusting learning to accommodate their effects—learning also serves to 'steer' its population of users, perhaps inadvertently, towards certain outcomes (Hardt et al., 2022). GNNs are no exception to this reality. In the domain of graph-based learning, the role of predictive models is expressed in how they associate social connections with decision outcomes for individuals. Clearly, the choice of whether to make use of social data for decisions can be highly sensitive, and doing so necessitates much forethought and care. But the question of whether to use social data to enhance prediction is not binary in nature, i.e., there is no simple 'right' or 'wrong'. Consider our example of the credit scoring company, Lenddo. On the one hand, Lenddo has been criticized on the grounds that it may be discriminating against applicants based on whom they choose to socialize with (or, rather, who chooses to socialize with them).
But on the other hand, Lenddo, which focuses primarily on developing countries, has been acclaimed for providing financial assistance to a large community of deserving applicants who, due to the conservative norms typical of credit scoring rules, would otherwise be denied a consequential loan.9 Such considerations apply broadly. In other focal domains of strategic classification, such as loans, university admissions, and job hiring, the use of social data for informing decisions can be highly controversial, on both ethical and legal grounds. Regulation is necessary, but as in similar areas, it often lags far behind the technology itself. This highlights the need for transparency and accountability in how, when, and to what purpose social data is used (Ghalme et al., 2021; Jagadeesan et al., 2021; Bechavod et al., 2022; Barsotti et al., 2022b).

ACKNOWLEDGEMENTS

This research was supported by the Israel Science Foundation (grant No. 278/22).

A ANALYSIS

A.1 HITCHHIKING

Here we provide a concrete example of hitchhiking, following Fig. 1 (E). The example includes three nodes, i, j, k, positioned at xk = −3, xi = −2.1, xj = −0.5, and connected via the edges k→j and j→i. Edge weights are w̃ji = 0.6 and w̃ii = 0.4; w̃kj = 1/3 and w̃jj = 2/3; and w̃kk = 1. The example considers a threshold classifier hb with b = 0, and unit-scale costs (i.e., β = 1) inducing a maximal moving distance of d = 2. We show that i cannot invest effort to cross and obtain ŷi = 1; but once j moves (to obtain ŷj = 1), this results in i also being classified positively (without moving). Initially (at round t = 0), the node embeddings are:

$$\phi_k = -3, \qquad \phi_i = -1.14, \qquad \phi_j = -\tfrac{4}{3}$$

and all points are classified negatively: ŷk = ŷi = ŷj = −1. Notice that i cannot cross the decision boundary even if she moves the maximal cost-feasible distance of d = 2:

$$\phi(x^{(0)}_i + 2;\, x^{(0)}_{-i}) = \tilde{w}_{ii}(x^{(0)}_i + 2) + \tilde{w}_{ji}\, x^{(0)}_j = 0.4(-2.1 + 2) + 0.6(-0.5) = -0.34 < 0$$

Hence, i doesn't move, so $x^{(1)}_i = x^{(0)}_i$. Similarly, k cannot cross, so $x^{(1)}_k = x^{(0)}_k$. However, j can cross by moving to 1.5 (at cost 2) in order to get ŷj = 1:

$$x^{(1)}_j = 1.5 = -0.5 + 2 = x^{(0)}_j + 2 \;\Rightarrow\; \phi(x^{(1)}_j;\, x^{(1)}_{-j}) = \tilde{w}_{jj}\, x^{(1)}_j + \tilde{w}_{kj}\, x^{(0)}_k = \tfrac{2}{3}\, x^{(1)}_j + \tfrac{1}{3}(-3) = 0 \;\Rightarrow\; \hat{y}^{(1)}_j = 1$$

After j moves, i is classified positively (and so does not need to move):

$$\phi(x^{(1)}_i;\, x^{(1)}_{-i}) = \tilde{w}_{ii}\, x^{(1)}_i + \tilde{w}_{ji}\, x^{(1)}_j = 0.4(-2.1) + 0.6 \cdot 1.5 = 0.06 > 0 \;\Rightarrow\; \hat{y}^{(2)}_i = 1$$

A.2 CASCADING BEHAVIOR

We give a constructive example (for any n) which will be used to prove Propositions 3 and 4. The construction is modular: we build a small 'cyclic' structure of size 3, such that for any given n, we simply replicate this structure roughly n/3 times, and include two additional 'start' and 'finish' nodes. Our example assumes a threshold classifier hb with b = 0, and scaled costs cβ with β = 2/3, inducing a maximum moving distance of dβ = 2/β = 3.

Fix n. We construct a graph of size n + 2 as follows. Nodes are indexed 0, . . . , n + 1. The graph has bi-directional edges between each pair of consecutive nodes, namely (i, i + 1) and (i + 1, i) for all i = 0, . . . , n, except for the last node, which has only an outgoing edge (n + 1, n) but no incoming edge. We set uniform normalized edge weights, i.e., w̃i−1,i = w̃i,i = w̃i+1,i = 1/3 for all 1 ≤ i ≤ n, with w̃0,0 = w̃1,0 = 1/2 and w̃n+1,n+1 = 1. The initial features of each node are defined as:

$$x_0 = -1, \qquad x_i = \begin{cases} 2 & \text{if } i \bmod 3 = 1 \\ -4 & \text{o.w.} \end{cases} \quad \forall i = 1, \ldots, n+1 \tag{11}$$

Figure 4 (A) illustrates this for n = 3.
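A short simulation of this construction (a sketch, ours; we encode the embedding weights as a matrix M with M[i, j] the weight of x_j in ϕ_i, and set b = 0):

```python
import numpy as np

def cascade_rounds(n):
    """Simulate the chain construction above (max moving distance 3) and
    return, for nodes 1..n, the round at which each moves."""
    x = np.array([-1.0] + [2.0 if i % 3 == 1 else -4.0 for i in range(1, n + 2)])
    M = np.zeros((n + 2, n + 2))
    M[0, [0, 1]] = 1 / 2
    for i in range(1, n + 1):
        M[i, [i - 1, i, i + 1]] = 1 / 3
    M[n + 1, n + 1] = 1.0                  # the 'finish' node has no incoming edges
    move_round, t = np.zeros(n + 2, dtype=int), 0
    while True:
        t += 1
        phi = M @ x
        need = -phi / np.diag(M)           # distance putting phi_i on the boundary
        movers = (move_round == 0) & (phi < 0) & (need <= 3)
        if not movers.any():
            return move_round[1:n + 1]
        x[movers] += need[movers]          # one concurrent round of moves
        move_round[movers] = t

print(cascade_rounds(6))                   # [1 2 3 4 5 6]: node i moves at round i
```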
Note that while the graph creates a 'chain' structure, the positioning of node features is cyclic (starting from i = 1): 2, −4, −4, 2, −4, −4, 2, . . . etc. We begin with a lemma showing that in our construction, each node i = 1, . . . , n moves precisely at round t = i.

Lemma 1. At every round 1 ≤ t ≤ n: (1) node i = t moves, with $x^{(t)}_i = 5$ if i mod 3 = 1, and $x^{(t)}_i = -1$ otherwise; (2) all nodes j > t do not move, i.e., $x^{(t)}_j = x^{(t-1)}_j$.

Note that (1) (together with Prop. 1) implies that at any round t, all nodes i < t (which have already moved at the earlier round t′ = i) do not move again. Additionally, (2) implies that all j > t remain in their initial position, i.e., $x^{(t)}_j = x^{(0)}_j$. Finally, notice that the starting node x0 has ϕ0 = 0.5, meaning that $\hat{y}^{(0)}_0 = 1$, and so it does not move at any round.

Proof. We begin with the case n = 3.

• Round 1: Node i = 1 can cross by moving the maximal distance of 3:

$$\tilde{w}_{1,1}(x^{(0)}_1 + 3) + \tilde{w}_{0,1}\, x^{(0)}_0 + \tilde{w}_{2,1}\, x^{(0)}_2 = \tfrac{1}{3}(2 + 3) + \tfrac{1}{3}(-1) + \tfrac{1}{3}(-4) = 0 \tag{12}$$

However, nodes 2 and 3 cannot cross even if they move the maximal feasible distance:

$$\tilde{w}_{2,2}(x^{(0)}_2 + 3) + \tilde{w}_{1,2}\, x^{(0)}_1 + \tilde{w}_{3,2}\, x^{(0)}_3 = \tfrac{1}{3}(-4 + 3) + \tfrac{1}{3}(2) + \tfrac{1}{3}(-4) = -1 < 0 \tag{13}$$
$$\tilde{w}_{3,3}(x^{(0)}_3 + 3) + \tilde{w}_{2,3}\, x^{(0)}_2 + \tilde{w}_{4,3}\, x^{(0)}_4 = \tfrac{1}{3}(-4 + 3) + \tfrac{1}{3}(-4) + \tfrac{1}{3}(2) = -1 < 0 \tag{14}$$

• Round 2: Node i = 2 can cross by moving the maximal distance of 3:

$$\tilde{w}_{2,2}(x^{(1)}_2 + 3) + \tilde{w}_{1,2}\, x^{(1)}_1 + \tilde{w}_{3,2}\, x^{(1)}_3 = \tfrac{1}{3}(-4 + 3) + \tfrac{1}{3}(5) + \tfrac{1}{3}(-4) = 0 \tag{15}$$

However, node 3 cannot cross even if it moves the maximal feasible distance:

$$\tilde{w}_{3,3}(x^{(1)}_3 + 3) + \tilde{w}_{2,3}\, x^{(1)}_2 + \tilde{w}_{4,3}\, x^{(1)}_4 = \tfrac{1}{3}(-4 + 3) + \tfrac{1}{3}(-4) + \tfrac{1}{3}(2) = -1 < 0 \tag{16}$$

• Round 3: Node i = 3 can cross by moving the maximal distance of 3:

$$\tilde{w}_{3,3}(x^{(2)}_3 + 3) + \tilde{w}_{2,3}\, x^{(2)}_2 + \tilde{w}_{4,3}\, x^{(2)}_4 = \tfrac{1}{3}(-4 + 3) + \tfrac{1}{3}(-1) + \tfrac{1}{3}(2) = 0 \tag{17}$$

Fig. 4 (A) illustrates this procedure for n = 3. Next, consider n > 3. Due to the cyclical nature of the feature positioning and the chain structure of our graph, we can consider what happens as we sequentially add nodes to the graph. By induction, we can show that:

• n mod 3 = 1: Consider round t = n. Node n has $x^{(t-1)}_n = 2$, and two neighbors: n − 1, who after moving at the previous round has $x^{(t-1)}_{n-1} = -1$; and n + 1, who has a fixed $x^{(t-1)}_{n+1} = -4$. Thus, it is in the same configuration as node i = 1, and so its movement follows Eq. (12).

• n mod 3 = 2: Consider round t = n. Node n has $x^{(t-1)}_n = -4$, and two neighbors: n − 1, who after moving at the previous round has $x^{(t-1)}_{n-1} = 5$; and n + 1, who has a fixed $x^{(t-1)}_{n+1} = -4$. Thus, it is in the same configuration as node i = 2, and so its movement follows Eq. (15).

• n mod 3 = 0: Consider round t = n. Node n has $x^{(t-1)}_n = -4$, and two neighbors: n − 1, who after moving at the previous round has $x^{(t-1)}_{n-1} = -1$; and n + 1, who has a fixed $x^{(t-1)}_{n+1} = 2$. Thus, it is in the same configuration as node i = 3, and so its movement follows Eq. (17).

Fig. 4 (B) illustrates this idea for n > 3. We now proceed to prove the propositions.

Proposition 3: The proposition follows immediately from Lemma 1; the only detail that remains to be shown is that node n + 1 does not move at all. To see this, note that since it has no incoming edges, its embedding depends only on its own features, xn+1. If (n + 1) mod 3 = 1, we have xn+1 = 2, and so ŷn+1 = 1 without movement. Otherwise, xn+1 = −4, meaning that it is too far to cross.

Proposition 4: Fix n and k ≤ n.
Consider the same construction presented above for a graph of size k + 2. Then, add n − k identical nodes: for each k < j ≤ n, add an edge k→j, and set xj = −xk − 6 (so that, with normalized weights, w̃jj = w̃kj = 1/2). We claim that all such nodes will move exactly at round k. Consider some node k < j ≤ n. Since xk moves only at round k (following Lemma 1), j does not move in any of the first t ≤ k rounds:
$$\tilde{w}_{j,j}(x^{(0)}_j + 3) + \tilde{w}_{k,j} x^{(0)}_k = \tfrac{1}{2}(-x^{(0)}_k - 6 + 3) + \tfrac{1}{2} x^{(0)}_k = \tfrac{1}{2}(-x^{(0)}_k - 3) + \tfrac{1}{2} x^{(0)}_k = -1.5 < 0 \tag{18}$$
At the end of round t = k, node k has a value of $x^{(0)}_k + 3$. This enables j to cross by moving the maximal distance of 3:
$$\tilde{w}_{j,j}(x^{(k)}_j + 3) + \tilde{w}_{k,j} x^{(k)}_k = \tfrac{1}{2}(-x^{(0)}_k - 6 + 3) + \tfrac{1}{2}(x^{(0)}_k + 3) = \tfrac{1}{2}(-x^{(0)}_k - 3) + \tfrac{1}{2}(x^{(0)}_k + 3) = 0 \tag{19}$$
As this applies to all such j, we get that n − k nodes move at round k, which concludes our proof.

Note that the graph is such that, for b = 0, without strategic behavior the graph is useful for prediction (it increases accuracy from 66% to 100%), so that a learner that is unaware of (or does not account for) strategic behavior is incentivized to utilize the graph. However, once strategic behavior is introduced, naïvely using the graph causes performance to drop to 0%.

B OPTIMIZATION

B.1 PROJECTION

We prove the result for squared 2-norm costs. Correctness holds for 2-norm costs as well, since the argmin is the same (squaring is monotone over the positives). Calculating xi's best response requires solving the following problem:
$$\min_{x'} c(x'_i, x_i) \quad \text{s.t.} \quad \theta^\top \phi(x'_i; x_{-i}) + b = 0$$
$$\min_{x'} \|x'_i - x_i\|_2^2 \quad \text{s.t.} \quad \theta^\top \phi(x'_i; x_{-i}) + b = 0$$
To solve for x′, we apply the Lagrange method. Define the Lagrangian as follows:
$$L(x'_i, \lambda) = \|x'_i - x_i\|_2^2 + \lambda\big[\theta^\top \phi(x'_i; x_{-i}) + b\big]$$
Next, to find the minimum of L, differentiate with respect to $x'_i$ and set to 0:
$$2(x'_i - x_i) + \lambda \tilde{w}_{ii} \theta = 0 \quad\Rightarrow\quad x'_i = x_i - \frac{\lambda \tilde{w}_{ii}}{2}\theta$$
Plugging $x'_i$ into the constraint gives:
$$\theta^\top\Big[\tilde{w}_{ii}\Big(x_i - \frac{\lambda \tilde{w}_{ii}}{2}\theta\Big) + \sum_{j \ne i} \tilde{w}_{ji} x_j\Big] + b = 0$$
$$\theta^\top\Big[\phi(x_i; x_{-i}) - \frac{\lambda \tilde{w}_{ii}^2}{2}\theta\Big] + b = 0$$
$$\theta^\top \phi(x_i; x_{-i}) + b = \frac{\lambda \tilde{w}_{ii}^2}{2}\|\theta\|_2^2 \quad\Rightarrow\quad \lambda = \frac{2\big(\theta^\top \phi(x_i; x_{-i}) + b\big)}{\|\theta\|_2^2\, \tilde{w}_{ii}^2}$$
Finally, plugging λ into the expression for $x'_i$ obtains:
$$x'_i = x_i - \frac{\theta^\top \phi(x_i; x_{-i}) + b}{\|\theta\|_2^2\, \tilde{w}_{ii}}\theta$$

B.2 GENERALIZED COSTS

Here we provide a formula for computing projections in closed form for generalized quadratic costs:
$$c(x, x') = \tfrac{1}{2}(x' - x)^\top A (x' - x)$$
for positive-definite A. As before, the same formula holds for generalized 2-norm costs (since the argmin is the same). Begin with:
$$\min_{x'} c(x'_i, x_i) \quad \text{s.t.} \quad \theta^\top \phi(x'_i; x_{-i}) + b = 0$$
$$\min_{x'} \tfrac{1}{2}(x'_i - x_i)^\top A (x'_i - x_i) \quad \text{s.t.} \quad \theta^\top \phi(x'_i; x_{-i}) + b = 0$$
As before, apply the Lagrangian method:
$$\tfrac{1}{2}(x'_i - x_i)^\top A (x'_i - x_i) + \lambda\big[\theta^\top \phi(x'_i; x_{-i}) + b\big]$$
Differentiating w.r.t. $x'_i$:
$$\tfrac{1}{2}\big[A^\top(x'_i - x_i) + A(x'_i - x_i)\big] + \lambda \tilde{w}_{ii}\theta = 0 \quad\Rightarrow\quad (A^\top + A)\, x'_i = (A^\top + A)\, x_i - 2\lambda \tilde{w}_{ii}\theta$$
Since the matrix $(A^\top + A)$ is PD, we can invert to get:
$$x'_i = x_i - 2\lambda (A^\top + A)^{-1} \tilde{w}_{ii}\theta$$
Plugging $x'_i$ into the constraint:
$$\theta^\top\Big[\tilde{w}_{ii}\big(x_i - 2\lambda (A^\top + A)^{-1}\tilde{w}_{ii}\theta\big) + \sum_{j \ne i} \tilde{w}_{ji} x_j\Big] + b = 0$$
$$\theta^\top\big[\phi(x_i; x_{-i}) - 2\lambda (A^\top + A)^{-1}\tilde{w}_{ii}^2\theta\big] + b = 0$$
$$\theta^\top \phi(x_i; x_{-i}) + b = 2\lambda\, \theta^\top (A^\top + A)^{-1}\theta\, \tilde{w}_{ii}^2$$
Since $(A^\top + A)^{-1}$ is also PD, we get $\theta^\top (A^\top + A)^{-1}\theta > 0$, and hence:
$$\lambda = \frac{\theta^\top \phi(x_i; x_{-i}) + b}{2\,\theta^\top (A^\top + A)^{-1}\theta\, \tilde{w}_{ii}^2}$$
Finally, plugging in λ:
$$x'_i = x_i - \frac{\theta^\top \phi(x_i; x_{-i}) + b}{\theta^\top (A^\top + A)^{-1}\theta\, \tilde{w}_{ii}^2}\, (A^\top + A)^{-1}\theta\, \tilde{w}_{ii} = x_i - \frac{\theta^\top \phi(x_i; x_{-i}) + b}{\theta^\top (A^\top + A)^{-1}\theta\, \tilde{w}_{ii}}\, (A^\top + A)^{-1}\theta$$
Setting A = I recovers Eq. (7).
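Both closed forms are straightforward to verify numerically. The following is a minimal sketch (ours, not the paper's code; all names are illustrative) that implements Eq. (7) and the generalized projection, and checks that each projected embedding lands on the decision boundary:

```python
# Illustrative sketch (not the paper's code): verify the closed-form
# projections of B.1/B.2 -- the projected embedding lies on the boundary.
import numpy as np

rng = np.random.default_rng(0)
l = 4
theta, b, w_ii = rng.normal(size=l), 0.3, 0.4
rest = rng.normal(size=l)                    # fixed term: sum_j w~_ji x_j
x = rng.normal(size=l)

def phi(x_i):                                # embedding as in Eq. (2)
    return w_ii * x_i + rest

def proj(x_i):                               # Eq. (7): (squared) 2-norm cost
    score = theta @ phi(x_i) + b
    return x_i - score / (theta @ theta * w_ii) * theta

def proj_general(x_i, A):                    # B.2: quadratic cost with PD A
    M = np.linalg.inv(A.T + A)
    score = theta @ phi(x_i) + b
    return x_i - score / (theta @ M @ theta * w_ii) * (M @ theta)

A = np.diag(rng.uniform(0.5, 2.0, size=l))   # an arbitrary PD cost matrix
for x_new in (proj(x), proj_general(x, A), proj_general(x, np.eye(l))):
    print(theta @ phi(x_new) + b)            # all ~0: on the boundary;
                                             # A = I coincides with Eq. (7)
```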
B.3 IMPROVING NUMERICAL STABILITY BY ADDING A TOLERANCE TERM

Theoretically, strategic responses move points precisely onto the decision boundary. For numerical stability in classifying (e.g., at test time), we add a small tolerance term, tol, that ensures that points are projected to lie strictly within the positive halfspace. Tolerance is added as follows:
$$\min_{x'} c(x'_i, x_i) \quad \text{s.t.} \quad \theta^\top \phi(x'_i; x_{-i}) + b \ge \text{tol} \tag{20}$$
This necessitates the following adjustment to Eq. (7):
$$\mathrm{proj}_h(x_i; x_{-i}) = x_i - \frac{\theta^\top \phi(x_i; x_{-i}) + b - \text{tol}}{\|\theta\|_2^2\, \tilde{w}_{ii}}\theta \tag{21}$$
However, blindly applying the above to Eq. (8) via:
$$\mathrm{proj}^+_h(x_i; x_{-i}) = x_i - \min\left\{0,\ \frac{\theta^\top \phi(x_i; x_{-i}) + b - \text{tol}}{\|\theta\|_2^2\, \tilde{w}_{ii}}\right\}\theta \tag{22}$$
is erroneous, since any user whose score is lower than tol will move—although in principle she shouldn't. To correct for this, we adjust Eq. (8) by adding a mask that ensures that only points in the negative halfspace are projected:
$$\mathrm{proj}^+_h(x_i; x_{-i}) = x_i - \mathbb{1}\{\theta^\top \phi(x_i; x_{-i}) + b < 0\} \cdot \left(\frac{\theta^\top \phi(x_i; x_{-i}) + b - \text{tol}}{\|\theta\|_2^2\, \tilde{w}_{ii}}\,\theta\right) \tag{23}$$
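Eq. (23) translates directly into code; a minimal sketch (ours, not the paper's code; the function name and the tolerance value are hypothetical):

```python
# Illustrative sketch (not the paper's code): the masked, tolerance-adjusted
# projection of Eq. (23); a user in [0, tol) correctly stays in place.
import numpy as np

def proj_tol(x_i, theta, b, rest, w_ii, tol=1e-3):
    score = theta @ (w_ii * x_i + rest) + b
    if score >= 0:                        # mask: positive points never move
        return x_i
    # lands at score == tol, strictly inside the positive halfspace
    return x_i - (score - tol) / (theta @ theta * w_ii) * theta
```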
C ADDITIONAL EXPERIMENTAL DETAILS

Data. We experiment with three citation network datasets: Cora, CiteSeer, and PubMed (Sen et al., 2008). Table 2 provides summary statistics of the datasets, as well as experimental details.

Splits. All three datasets include a standard train-validation-test split, which we adopt for our use.10 For our purposes, we make no distinction between 'train' and 'validation', and use both sets for training. To ensure the data is appropriate for the inductive setting, we remove from the test set all nodes which can be influenced by train-set nodes—this ranges from 6%-43% of the test set, depending on the dataset (and possibly the setting; see Sec. D.2.1). In Table 2, the number of train samples is denoted ntrain, and the number of inductive test samples is denoted n∗test (all original transductive test sets include 1,000 samples).

Binarization. To make the data binary (the original labels are multiclass), we enumerated over possible partitions of classes into 'negative' and 'positive', and chose the most balanced partition. Experimenting with other, similarly-balanced partitions resulted in similar performance (albeit at times less distinct strategic movement). The exception to this was PubMed (having only three classes), for which the most balanced partition was neither 'balanced' nor stable, and so here we opted for the more stable alternative. The reported partitions and corresponding negative-positive ratios (for train and for test) are given in Table 2.

Strategic responses. At test time, strategic user responses are computed by simulating the response dynamics in Sec. 3.1 until convergence.

10Note that nodes in these sets do not necessarily account for all nodes in the graph.

D ADDITIONAL EXPERIMENTAL RESULTS

D.1 EXPERIMENTS ON SYNTHETIC DATA

In this section we explore in further depth the relation between user movement and classification performance, using our synthetic setup from Sec. 5.1 (all examples discussed herein use α = 0.7). From a predictive point of view, graphs are generally helpful if same-class nodes are well-connected. This is indeed the case in our construction (as can be seen from the performance of the benchmark method with non-extreme values α > 0). From a strategic perspective, however, connectivity increases cooperation, since neighboring nodes can positively influence each other over time. In our construction, cooperation occurs mostly within classes, i.e., negative points that move encourage other negative points to move, and similarly for positive points.

Movement trends. Fig. 5 (left) shows how different threshold classifiers hb induce different degrees of movement. The plot shows the relative number of points (in percentage points) whose predictions changed as a result of strategic behavior, per class (red: y = −1, green: y = 1) and over time: after one round (T = 1, dashed lines), and at convergence (T = ∞, solid lines). As can be seen, there is a general trend: when b is small, mostly negative points move, but as b increases, positive points move instead. The interesting point to observe is the gap between the first round (T = 1) and the final round (T = ∞). For negative points, movement at T = 1 peaks at b1 ≈ −0.25, but triggers relatively few consequent moves. In contrast, the peak for T = ∞ occurs at a larger b∞ ≈ 0.15. For this threshold, though fewer points move in the first round, these trigger significantly more additional moves at later rounds—a result of the connectivity structure within the negative cluster of nodes (blue arrows). A similar effect takes place for positive nodes.

The importance of looking ahead. Fig. 5 (center) plots, for a range of thresholds b, the accuracy of hb at convergence (T = ∞; orange line) and after one round (T = 1; gray line). The role of the latter is to illustrate the outcomes as 'perceived' by a myopic predictive model that considers only one round (e.g., includes only one response layer ∆̃); the differences between the two lines demonstrate the gap between perception (based on which training chooses a classifier ĥ) and reality (in which the classifier ĥ is evaluated). As can be seen, the myopic approach leads to an under-estimation of the optimal b∗; at b1 ≈ 0.5, performance for T = 1 is optimal, but is severely worse under the true T = ∞, for which optimal performance is at b∞ ≈ 1.15. The figure also gives insight as to why this happens. For both b1 and b∞, the figure shows (in bars) the relative number of points from each class who obtain ŷ = 1 as a result of strategic moves. Bars are stacked, showing the relative number of points that moved per round T (darker = earlier rounds; lightest = convergence). As can be seen, at b1, the myopic model believes that many positive points, but only few negative points, will cross. However, in reality, at convergence, the number of positive points that crossed is only slightly higher than that of negative points. Hence, the reason for the (erroneous) optimism of the myopic model is that it did not correctly account for the magnitude of correlated moves of negative points, which is expressed over time. In contrast, note that at b∞, barely any negative points cross.

How movement affects accuracy. An important observation about the relation between movement and accuracy is that for any classifier h, any negative point that moves hurts accuracy (since y = −1 but the prediction becomes ŷ = 1), whereas any positive point that moves helps accuracy (since y = 1 and the prediction is now ŷ = 1). Fig. 5 (right) shows how these movements combine to affect accuracy. The figure compares accuracy before strategic behavior (T = 0; dashed line) to after one response round (T = 1; solid line, top plot) and to convergence (T = ∞; solid line, lower plot). As can be seen, for any b, the difference between pre-strategic and post-strategic accuracy amounts to exactly the degradation due to negative points (red arrows) plus the improvement due to positive points (green arrows). Note, however, the difference between T = 1 and T = ∞, as they relate to the benchmark model (T = 0, i.e., no strategic behavior). For T = 1 (top), across the range of b, positive and negative moves roughly balance out.
As a result, the curves for T = 0 and T = 1 are very similar, and share similar peaks in terms of accuracy (both have ≈ 0.89). One interpretation of this is that if points were permitted to move for only one round, the optimal classifier could completely recover the benchmark accuracy by ensuring that the number of positive points that move overcomes the number of negative points that move. However, for T = ∞ (bottom), there is a skew in favor of positive points (green arrows). The result of this is that for the optimal b, additional rounds allow positive points to move in a way that obtains slightly higher accuracy (0.91) compared to the benchmark (0.89). This is one possible mechanism underlying our results on synthetic data in Sec. 5.1, and later our results on real data in Sec. 5.2.

D.2 EXPERIMENTS ON REAL DATA

D.2.1 EXTENDING NEIGHBORHOOD SIZE

One hyperparameter of SGC is the number of 'propagation' layers, K, which effectively determines the graph distance at which nodes can influence others (i.e., the 'neighborhood radius'). Given K, the embedding weights are defined as
$$\tilde{W} = D^{-\frac{1}{2}} A^K D^{-\frac{1}{2}}$$
where A is the adjacency matrix and D is the diagonal degree matrix. For K = 0, the graph is unused, which results in a standard linear classifier over node features. Our results in the main body of the paper use K = 1. Fig. 6 shows results for increasing K (we set T = 3, d = 0.25 as in our main results). Results are mixed: for PubMed, higher K seems to lead to less of a drop in accuracy for the naïve approach and less recovery for our approach; for Cora and CiteSeer, results are unstable. Note, however, that this is likely a product of our inductive setup: since varying K also changes the effective test set (to preserve inductiveness, larger K often necessitates removing more nodes), test sets vary across conditions and decrease in size, making it difficult to directly compare results across different K.

D.2.2 STRATEGIC IMPROVEMENT

Our main results in Sec. 5.2 show that for CiteSeer, our strategically-aware approach outperforms the non-strategic benchmark (similarly to our synthetic experiments). Here we show that these results are robust. Fig. 7 provides higher-resolution results on CiteSeer for max distances d ∈ [0, 0.22] in increments of 0.01. All other aspects of the setup match the original experiment. As can be seen, our approach slightly but consistently improves upon the benchmark until d ≈ 0.17.

D.3 NODE CENTRALITY AND INFLUENCE

In this experiment we set out to explore the role played by central nodes in the graph in propagating the influence of strategic behavior. Since the embedding of a node i is partly determined by its in-neighbors, broadly we would expect nodes with high out-degree to be highly influential: as 'anchors' that prevent others from moving if they themselves do not, and as 'carriers' which either push neighbors over the boundary—or at least move them closer to it—if they do move.

Experimental setup. To study the role of such nodes, we perform the following experiment (sketched in code below). First, we order the nodes by decreasing out-degree, so that the potentially more influential nodes appear first in the ranking. Then, for each q ∈ {0, 10, 20, ..., 100}, we disconnect nodes in the qth percentile, i.e., remove all edges emanating from the top-q% ranked nodes. For each such condition, we examine learning and its outcomes, and compare performance to a control condition in which nodes are ordered randomly. The difference in performance and other learning outcomes provides us with a measure of the importance of high-degree nodes.
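A minimal sketch of the disconnection procedure (ours, not the paper's implementation; names are illustrative):

```python
# Illustrative sketch (not the paper's code): remove all outgoing edges
# of the top-q% of nodes, ranked by out-degree (or randomly, as control).
import numpy as np

def disconnect_top_q(A, q, rng=None):
    """A: adjacency matrix with A[i, j] = 1 for an edge i -> j."""
    order = np.argsort(-A.sum(axis=1))       # decreasing out-degree
    if rng is not None:                      # control condition: random order
        order = rng.permutation(len(A))
    k = int(np.ceil(q / 100 * len(A)))
    A = A.copy()
    A[order[:k], :] = 0                      # drop their outgoing edges
    return A
```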
Results. Figure 8 shows results for all methods (naïve, robust (ours), and the non-strategic benchmark) across all three datasets (Cora, CiteSeer, and PubMed). In all conditions, we vary on the x-axis the portion of nodes that remain connected, where at zero (left end) we get an empty graph, and at 100 (right end) we get the original graph. Note that the y-axis varies in scale across plots. First, consider Cora. In terms of accuracy (upper plots), results show that for the benchmark (evaluated in a non-strategic environment, in which users do not move), the general trend is that more edges help improve performance. However, the gain in performance is much more pronounced for high-degree nodes. Interestingly, for the naïve method (which operates on strategically modified data, but does not anticipate updates), the trend is reversed: users utilize edges in a way that is detrimental to performance—and more so when high-degree nodes remain, making them a vulnerability. Our robust approach is also sensitive to the addition of edges, but to a much lesser degree (the drop in performance is minor); moreover, which nodes are disconnected appears to make little difference, which we take to mean that our approach can counter the dominant role of central nodes. The lower plots, which describe the portion of users that move and the portion of users that cross, provide some explanation as to how this is achieved. For the naïve approach, nearly half of all users move—and all users that move also cross. This occurs faster (in terms of the portion of nodes removed) in the degree condition. In our robust approach, note that the number of nodes that move is halved, and of those, not all cross, which demonstrates how our learning objective, which anticipates strategic behavior, can act to prevent it. For CiteSeer and PubMed, the general trend in accuracy is reversed for the non-strategic benchmark, and for the naïve approach, begins with a gap, which closes at some point (sooner in CiteSeer). Despite these differences, the qualitative behavior of our robust classifier is similar to Cora, achieving fairly stable accuracy (with a mild negative slope) in both conditions and for both datasets. As in Cora, movement and crossing behavior is similar, in that for the naïve approach a considerable portion of users move and cross (with a gap between conditions in PubMed), and in that our robust approach greatly reduces the number of users that move, and even more so the number of users that cross.

E ADDITIONAL ANALYTIC RESULTS

E.1 PERFORMANCE GAPS: ROBUST LEARNING VS. THE NON-STRATEGIC BENCHMARK

When learning a robust classifier on strategic data, intuitively we may hope that its performance approaches that of the optimal classifier on non-strategic data, which we may think of as a target upper bound. A natural question to then ask is: can we always reach this upper bound and close the performance gap? Here we answer this question in the negative, by showing a simple example where the optimal classifier on non-strategic data achieves perfect accuracy, but the optimal classifier on strategic data achieves only 0.66 accuracy. We then show that by introducing a mild change—namely, slightly shifting the features of one node—the performance gap closes entirely. This, combined with our result from Sec.
D.2.2 (showing that the gap can also be negative), highlights how the performance gap greatly depends on the structure of the input (i.e., the graph, features, and labels), in a way which can be highly sensitive even to minor changes.

E.1.1 LARGE GAP

Our first example, in which the best obtainable gap is 0.33, is shown in Figure 9 (left). The example has three nodes described by one-dimensional features x1 = 1, x2 = −1, x3 = −1 and labels y1 = 1, y2 = −1, y3 = −1. We use the standard cost function, so that the maximal distance to move is dβ = 2, and use uniform edge weights. Since x ∈ R, classifiers are simply threshold functions on the real line, defined by a threshold parameter b ∈ R. First, we demonstrate that for non-strategic data, there exists a 'benchmark' classifier that achieves perfect accuracy. Let b = 0. Node embeddings are:
$$\phi_1 = \frac{x_1 + x_2}{2} = \frac{1 - 1}{2} = 0 = b$$
$$\phi_2 = \frac{x_1 + x_2 + x_3}{3} = \frac{1 - 1 - 1}{3} = -\frac{1}{3} < 0 = b$$
$$\phi_3 = \frac{x_2 + x_3}{2} = \frac{-1 - 1}{2} = -1 < 0 = b$$
This gives predictions ŷ1 = +1 = y1, ŷ2 = −1 = y2, ŷ3 = −1 = y3, which are all correct. Next, we prove that there is no robust classifier capable of achieving perfect accuracy on strategic data. Suppose by contradiction that such a classifier exists, denoted b. For x2 to move in the first round, the following conditions must hold:
$$\frac{x_1 + x'_2 + x_3}{3} = b, \qquad 1 + x'_2 - 1 = 3b, \qquad x'_2 = 3b$$
Since x2 moves at most distance 2, it moves in the first round only if $-\frac{1}{3} < b \le \frac{1}{3}$. However, if it does move, it ends up getting an incorrect prediction. In addition, in this case x3 can also obtain a positive prediction (either in the first round or in the next, depending on whether b > 0), in which case the accuracy is $\frac{1}{3}$. Thus, we get that $b > \frac{1}{3}$ (note that for $b < -\frac{1}{3}$, x2 is classified positively, which is wrong). Next, we look at the behavior of x1 in the first round. The conditions for movement are:
$$\frac{x'_1 + x_2}{2} = b, \qquad x'_1 - 1 = 2b, \qquad x'_1 = 2b + 1$$
Here x1 gets a negative classification if b > 0. If b > 1, then x1 does not move, since the distance required is larger than 2. Thus, x1 does move (beyond the classifier) and gets the correct prediction only if 0 < b ≤ 1. However, considering now the second round of movement for x2 (which only occurs if $b > \frac{1}{3}$, since for $0 < b \le \frac{1}{3}$, x2 moves in the first round), we get the conditions:
$$\frac{x'_1 + x'_2 + x_3}{3} = b, \qquad 2b + 1 + x'_2 - 1 = 3b, \qquad x'_2 = b$$
This means that for all 0 < b ≤ 1, x2 moves and gets an incorrect prediction. Hence, for every b, an incorrect prediction occurs either for x1 or for x2. Consequently, there is no b that achieves an accuracy of 1, which contradicts the assumption. The optimal accuracy of any robust classifier for this example is $\frac{2}{3}$, which is achieved by any b > 1. In this case, none of the nodes move, and x1 is classified negatively, whereas its true label is positive.
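The case analysis above can also be corroborated by brute force. The following minimal sketch (ours, not the paper's code) simulates the response dynamics for this three-node example over a grid of thresholds and confirms that no b exceeds 2/3 accuracy:

```python
# Illustrative sketch (not the paper's code): scan thresholds b for the
# E.1.1 example; no robust classifier exceeds 2/3 accuracy on strategic data.
import numpy as np

y = np.array([1, -1, -1])
W = np.array([[1/2, 1/2, 0],          # path graph 1-2-3, uniform weights
              [1/3, 1/3, 1/3],
              [0, 1/2, 1/2]])
d = 2.0                                # maximal moving distance

def accuracy(b):
    x = np.array([1.0, -1.0, -1.0])
    for _ in range(10):                # enough rounds to converge
        phi = W @ x
        need = (b - phi) / np.diag(W)  # distance for phi_i to reach b
        movers = (phi < b) & (need <= d)
        if not movers.any():
            break
        x = x + np.where(movers, need, 0.0)
    pred = np.where(W @ x >= b, 1, -1)
    return (pred == y).mean()

print(max(accuracy(b) for b in np.linspace(-2, 2, 801)))   # 2/3
```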
E.1.2 NO GAP

We now give a nearly identical example, shown in Figure 9 (right), in which the gap becomes zero. We use the same example as before, but set x1 = 1.2 (instead of x1 = 1). We begin by showing that there still exists a classifier which achieves perfect accuracy on non-strategic data. Let b = 0; embeddings are now:
$$\phi_1 = \frac{x_1 + x_2}{2} = \frac{1.2 - 1}{2} = 0.1 > 0 = b$$
$$\phi_2 = \frac{x_1 + x_2 + x_3}{3} = \frac{1.2 - 1 - 1}{3} = -\frac{4}{15} < 0 = b$$
$$\phi_3 = \frac{x_2 + x_3}{2} = \frac{-1 - 1}{2} = -1 < 0 = b$$
(note that since all nodes are connected in the graph, changing x1 requires us to recompute all embeddings). Predictions now become ŷ1 = +1 = y1, ŷ2 = −1 = y2, ŷ3 = −1 = y3, which are all correct. Next, we show that there also exists a classifier which achieves perfect accuracy on strategic data. Let b = 1.1. In the first round, x1 moves, and flips its predicted label:
$$\phi_1 = \frac{x'_1 + x_2}{2} = \frac{x_1 + 2 + x_2}{2} = \frac{3.2 - 1}{2} = 1.1 = b$$
Here, even if the other nodes move to the fullest extent, they do not have sufficient influence to revert this prediction:
$$\phi_2 = \frac{x_1 + x'_2 + x_3}{3} = \frac{1.2 + (-1 + 2) - 1}{3} = 0.4 < 1.1 = b$$
$$\phi_3 = \frac{x_2 + x'_3}{2} = \frac{-1 + (-1 + 2)}{2} = 0 < 1.1 = b$$
Thus, we get ŷ1 = +1 = y1, ŷ2 = −1 = y2, ŷ3 = −1 = y3, which are also all correct.

E.2 STRATEGIC BEHAVIOR OF CLIQUES

Here we give a result that considers graph structure. In particular, we consider cliques, and show that for uniform weights, either all nodes move together—or none do.

Proposition 6. Consider n nodes which are all fully connected, i.e., form a clique, and assume uniform edge weights. Then for any dimension ℓ, any assignment of features x1, . . . , xn ∈ Rℓ, and for any classifier h, either (i) all n nodes move in the first round, or (ii) none of the nodes move at all.

Proof. Consider the case in which at least one node i moves in the first round. Denote by z the change in xi made in order to cross the classifier, i.e., $z = x^{(1)}_i - x^{(0)}_i$. Note that in order for xi to move, the following conditions must be satisfied:
$$\|z\|_2 \le 2, \qquad \theta^\top \phi(x^{(1)}_i; x^{(0)}_{-i}) + b = 0$$
We now show that every other node t, if it moves a distance of
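As a complementary sanity check, the claim of Proposition 6 is easy to corroborate numerically. A minimal sketch (ours, not the paper's code), assuming uniform clique weights (including the self-weight, so every embedding equals the feature mean) and 2-norm costs with budget 2:

```python
# Illustrative sketch (not the paper's code): with uniform clique weights,
# either every node moves in the first round, or none does.
import numpy as np

rng = np.random.default_rng(1)
for trial in range(1000):
    n, l = rng.integers(2, 6), rng.integers(1, 4)
    x = rng.normal(size=(n, l))
    theta, b = rng.normal(size=l), rng.normal()
    W = np.full((n, n), 1 / n)                # clique, uniform weights
    score = (W @ x) @ theta + b               # identical for all nodes
    # minimal 2-norm distance for node i to place its embedding on h:
    need = -score / (np.linalg.norm(theta) * W[0, 0])
    moves = (score < 0) & (need <= 2)
    assert moves.all() or not moves.any()     # all-or-nothing
print("Proposition 6 held in all trials")
```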
1. What is the focus of the paper regarding strategic classification and graph neural networks?
2. What are the strengths of the proposed approach, particularly in its empirical evaluation?
3. Do you have any concerns or questions regarding the reported results and their implications?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper studies strategic classification under graph neural networks. In the model of strategic user behavior, it is assumed that a user plays myopic best-response over a sequence of multiple update rounds. The myopic response mapping means that a user may move their reported features in the direction that best improves their classification outcome from the graph neural network model. Several results are then stated to demonstrate the basic properties of the myopic response model and the consequent dynamics. Then, the learning objective based on the myopic best response is optimized over multiple rounds. This optimization computes the approximately modified features after T rounds. Empirical evaluation is provided to demonstrate different aspects of learning in the above setting. Several experiments are conducted on popular settings in the GNN literature; this includes the Cora, CiteSeer, and PubMed datasets, trained with a simplified GCN architecture. The results show that by anticipating the strategic behavior of users in the GNN, the optimized graph neural network performs robustly against the naive baseline, which trains a GNN without accounting for any strategic behaviors. Strengths And Weaknesses This paper is generally well-written; the methods and the experiments are both clearly described. The experiments also provide clear evidence about the proposed method in the presence of strategic behaviors of users. One question with the reported results (e.g., in Table 1) is to what extent the gap between robust and non-strategic performance is necessary. Whereas for CiteSeer there is only a 2% gap between the two settings, for Cora and PubMed there is about a 10% gap. To what extent can we close the performance gap between the robust performance and the non-strategic performance? Another question is to what extent these results hold under architectural modifications—e.g., suppose one changes the diffusion matrix in the GNN; would the same result still apply? Clarity, Quality, Novelty And Reproducibility There is a GitHub repo associated with the submission, so it is believable that the reported results can be reproduced by following the documentation there. Strategic classification is related to the study of game theory; in particular, the proposed formulation of this work is very much inspired by an ITCS paper by Hardt et al. (2016). This paper follows that formulation and applies concepts such as myopic response in graph neural networks. From this perspective, this paper seems like a natural extension of Hardt et al. (2016) to the context of GNNs.
ICLR
Title Strategic Classification with Graph Neural Networks Abstract Strategic classification studies learning in settings where users can modify their features to obtain favorable predictions. Most current works focus on simple classifiers that trigger independent user responses. Here we examine the implications of learning with more elaborate models that break the independence assumption. Motivated by the idea that applications of strategic classification are often social in nature, we focus on graph neural networks, which make use of social relations between users to improve predictions. Using a graph for learning introduces inter-user dependencies in prediction; our key point is that strategic users can exploit these to promote their own goals. As we show through analysis and simulation, this can work either against the system—or for it. Based on this, we propose a differentiable framework for strategically-robust learning of graph-based classifiers. Experiments on several real networked datasets demonstrate the utility of our approach. 1 INTRODUCTION Machine learning is increasingly being used to inform decisions about humans. But when users of a system stand to gain from certain predictive outcomes, they may be prone to "game" the system by strategically modifying their features (at some cost). The literature on strategic classification (Brückner & Scheffer, 2011; Hardt et al., 2016) studies learning in this setting, with emphasis on how to learn classifiers that are robust to strategic user behavior. The idea that users may respond to a decision rule applies broadly and across many domains, from hiring, admissions, and scholarships to loan approval, insurance, welfare benefits, and medical eligibility (McCrary, 2008; Almond et al., 2010; Camacho & Conover, 2011; Lee & Lemieux, 2010). This, along with its clean formulation as a learning problem, has made strategic classification the target of much recent interest (Sundaram et al., 2021; Zhang & Conitzer, 2021; Levanon & Rosenfeld, 2021; Ghalme et al., 2021; Jagadeesan et al., 2021; Zrnic et al., 2021; Estornell et al., 2021; Lechner & Urner, 2021; Harris et al., 2021; Levanon & Rosenfeld, 2022; Liu et al., 2022; Ahmadi et al., 2022; Barsotti et al., 2022a). But despite these advances, most works in strategic classification still follow the original problem formulation in assuming independence across user responses. From a technical perspective, this assumption greatly simplifies the learning task, as it allows the classifier to consider each user's response in isolation: user behavior is modeled via a response mapping ∆h(x) determining how users modify their features x in response to the classifier h, and learning aims to find an h for which y ≈ h(∆h(x)). Intuitively, a user will modify her features if this 'moves' her across the decision boundary, as long as this is worthwhile (i.e., gains from prediction exceed modification costs). Knowing ∆h allows the system to anticipate user responses and learn an h that is robust. For a wide range of settings, learning under independent user responses has been shown to be theoretically possible (Hardt et al., 2016; Zhang & Conitzer, 2021; Sundaram et al., 2021) and practically feasible (Levanon & Rosenfeld, 2021; 2022). Unfortunately, once this assumption of independence is removed—the results no longer hold.
One reason is that current approaches can safely assume independence because the decision rules they consider induce independence: when predictions inform decisions for each user independently, users have no incentive to account for the behavior of others. This limits the scope of predictive models to include only simple functions of single inputs. In this paper, we aim to extend the literature on strategic classification to support richer learning paradigms that enable inter-dependent user responses, with particular focus on the domain of Graph Neural Networks (GNNs) (Monti et al., 2017; Wang et al., 2019; Bronstein et al., 2017; Hamilton et al., 2017). Generally, user responses can become dependent through the classifier if predictions for one user rely also on information regarding other users, i.e., if h(xi) is also a function of other xj. In this way, the effects of a user modifying her features via xj 7→ ∆h(xj) can propagate to other users and affect their decisions (since h(xi) now relies on ∆h(xj) rather than xj). For GNNs, this is expressed through their reliance on the graph. GNNs take as input a weighted graph whose nodes correspond to featurized examples, and whose edges indicate relations that are believed to be useful for prediction (e.g., if j→i indicates that yi = yj is likely). In our case, nodes represent users, and edges represent social links. The conventional approach is to first embed nodes in a way that depends on their neighbors' features, ϕi = ϕ(xi; xnei(i)), and then perform classification (typically linear) in embedded space, ŷi = sign(w⊤ϕi). Notice that ŷi depends on xi, but also on all other xj ∈ xnei(i); hence, in deciding how to respond, user i must also account for the strategic responses of her neighbors j ∈ nei(i). We aim to establish the effects of such dependencies on learning. As a concrete example, consider Lenddo,1 a company that provides credit scoring services to lending institutions. Lenddo specializes in consumer-focused microlending for emerging economies, where many applicants lack credible financial records. To circumvent the need to rely on historical records, Lenddo uses applicants' social connections, which are easier to obtain, as a factor in their scoring system.2 As an algorithmic approach for this task, GNNs are an adequate choice (Gao et al., 2021). Once loan decisions become dependent on social relations, the incentives for acting strategically change (Wei et al., 2016). To see how, consider that a user who lies far on the negative side of the decision boundary (and so independently cannot cross) may benefit from the graph if her neighbors "pull" her embedding towards the decision boundary, close enough for her to cross. Conversely, the graph can also suppress strategic behavior, since neighbors can "hold back" nodes and prevent them from crossing. Whether this is helpful to the system or not depends on the true label of the node. This presents a tradeoff: In general, graphs are useful if they are informative of labels in a way that complements features; the many success stories of GNNs suggest that this is often the case (Zhou et al., 2020). But even if this holds sans strategic behavior—once introduced, graphs inadvertently create dependencies through user representations, which strategic users can exploit. Graphs therefore hold the potential to benefit the system, but also its users. Here we study the natural question: who does the graph help more?
Through analysis and experimentation, we show that learning in a way that neglects to account for strategic behavior not only jeopardizes performance, but becomes worse as reliance on the graph increases. In this sense, the graph becomes a vulnerability which users can utilize for their needs, turning it from an asset to the system—into a potential threat. As a solution, we propose a practical approach to learning GNNs in strategic environments. We show that for a key neural architecture (SGC; Wu et al. (2019)) and certain cost functions, graph-dependent user responses can be expressed as a 'projection-like' operator. This operator admits a simple and differentiable closed form; with additional smoothing, this allows us to implement responses as a neural layer, and to learn robust predictors h using gradient methods. Experiments on synthetic and real data (with simulated responses) demonstrate that our approach not only effectively accounts for strategic behavior, but in some cases can harness the efforts of self-interested users to promote the system's goals. Our code is publicly available at: http://github.com/StrategicGNNs/Code.

1.1 RELATED WORK

Strategic classification. Since its introduction in Hardt et al. (2016) (and based on earlier formulations in Brückner & Scheffer (2009); Brückner et al. (2012); Großhans et al. (2013)), the literature on strategic classification has been growing at a rapid pace. Various aspects of learning have been studied, including: generalization behavior (Zhang & Conitzer, 2021; Sundaram et al., 2021; Ghalme et al., 2021), algorithmic hardness (Hardt et al., 2016), practical optimization methods (Levanon & Rosenfeld, 2021; 2022), and societal implications (Milli et al., 2019; Hu et al., 2019; Chen et al., 2020; Levanon & Rosenfeld, 2021). Some efforts have been made to extend beyond the conventional user models, e.g., by adding noise (Jagadeesan et al., 2021), relying on partial information (Ghalme et al., 2021; Bechavod et al., 2022), or considering broader user interests (Levanon & Rosenfeld, 2022); but these, as do the vast majority of other works, focus on linear classifiers and independent user responses.3 We study richer predictive model classes that lead to correlated user behavior.

1http://lenddoefl.com; see also http://www.wired.com/2014/05/lenddo-facebook/.
2For a discussion on ethics, see the final section. For similar initiatives, see https://en.wikipedia.org/wiki/Lenddo.

Graph Neural Networks (GNNs). The use of graphs in learning has a long and rich history, and remains a highly active area of research (Wu et al., 2020). Here we cover a small subset of relevant work. The key idea underlying most methods is to iteratively propagate and aggregate information from neighboring nodes. Modern approaches implement variations of this idea as differentiable neural architectures (Gori et al., 2005; Scarselli et al., 2008; Kipf & Welling, 2017; Gilmer et al., 2017). This allows expressing more elaborate forms of propagation (Li et al., 2018; Alon & Yahav, 2021) and aggregation (Wu et al., 2019; Xu et al., 2019; Li et al., 2016), including attention-based mechanisms (Veličković et al., 2018; Brody et al., 2022). Nonetheless, a key result by Wu et al. (2019) shows that, both theoretically and empirically, linear GNNs are also quite expressive.

Robustness of GNNs. As with most other fields in deep learning, GNNs have been the target of recent inquiry as to their sensitivity to adversarial attacks.
Common attacks include perturbing nodes, either in sets (Zügner et al., 2018; Zang et al., 2021) or individually (Finkelshtein et al., 2020). Attacks can be applied before training (Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Li et al., 2021; Zhang & Zitnik, 2020) or at test time (Szegedy et al., 2014; Goodfellow et al., 2015); our work corresponds to the latter. While there are connections between adversarial and strategic behavior (Sundaram et al., 2021), the key difference is that strategic behavior is not a zero-sum game; in some cases, incentives can even align (Levanon & Rosenfeld, 2022). Thus, system-user relations become more nuanced, and provide a degree of freedom in learning that does not exist in adversarial settings.

2 LEARNING SETUP

Our setting includes n users, represented as nodes in a directed graph G = (V, E) with non-negative edge weights W = {wij}(i,j)∈E, wij ≥ 0. Each user i is also described by a feature vector xi ∈ Rℓ and a binary label yi ∈ {±1}. We use x−i = {xj}j≠i to denote the set of features of all nodes other than i. Using the graph, our goal is to learn a classifier h that correctly predicts user labels. The challenge in our strategic setting is that inputs at test time can be strategically modified by users, in response to h and in a way that depends on the graph and on other users (we describe this shortly). Denoting by $x^h_i$ the (possibly modified) strategic response of i to h, our learning objective is:
$$\underset{h \in H}{\arg\min} \sum_i L(y_i, \hat{y}_i), \qquad \hat{y}_i = h(x^h_i; x^h_{-i}) \tag{1}$$
where H is the model class and L is a loss function (e.g., log-loss). Note that both the predictions ŷi and the modified features $x^h_i$ can depend on G and on $x^h_{-i}$ (possibly indirectly through h). We focus on the inductive graph learning setting, in which training is done on G, but testing is done on a different graph, G′ (often G, G′ are two disjoint components of a larger graph). Our goal is therefore to learn a classifier that generalizes to other graphs in a way that is robust to strategic user behavior.

Graph-based learning. We consider linear graph-based classifiers—these are linear classifiers that operate on linear, graph-dependent node embeddings, defined as:
$$h_{\theta,b}(x_i; x_{-i}) = \mathrm{sign}\big(\theta^\top \phi(x_i; x_{-i}) + b\big), \qquad \phi(x_i; x_{-i}) = \tilde{w}_{ii} x_i + \sum_{j \ne i} \tilde{w}_{ji} x_j \tag{2}$$
where ϕi = ϕ(xi; x−i) is node i's embedding,4 θ ∈ Rℓ and b ∈ R are learned parameters, and w̃ij ≥ 0 are pairwise weights that depend on G and W. We refer to users j with w̃ji ≠ 0 as the embedding neighbors of i. A simple choice of weights is w̃ji = wji for (j, i) ∈ E (and 0 otherwise), but different methods propose different ways to construct w̃; here we adopt the weight scheme of Wu et al. (2019). We assume the weights w̃ are predetermined, and aim to learn θ and b in Eq. (1).

3The only exception we know of is Liu et al. (2022), who study strategic ranking, but do not consider learning.
4Note that embeddings preserve the dimension of the original features.
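In code, prediction under Eq. (2) amounts to a single propagation step followed by a linear readout. The following is a minimal sketch (ours, not the released code), using a dense weight matrix for clarity:

```python
# Illustrative sketch (not the paper's code): the linear graph-based
# classifier of Eq. (2).
import numpy as np

def predict(X, W_tilde, theta, b):
    """X: (n, l) user features; W_tilde[i, j] = w~_ji, the weight of
    user j's features in user i's embedding."""
    Phi = W_tilde @ X                  # phi_i = w~_ii x_i + sum_j w~_ji x_j
    return np.sign(Phi @ theta + b)    # a single linear readout per user
```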
Our focus on linear GNNs stems from several factors. From the perspective of strategic classification, linear decision rules ensure that strategic responses are computationally tractable (see Eq. (4)). This is conventionally required, and most works remain in the linear regime. From the perspective of GNNs, linear architectures have been shown to match state-of-the-art performance on multiple tasks (Wu et al., 2019), implying that they sufficiently manifest the fundamental role of graphs. Thus, linear GNNs serve as a minimal necessary step for bridging standard strategic classification and graph-based learning, in a way that captures the fundamental structure of the learning task in both domains. Nonetheless, as we show in Sec. 4, even for linear GNNs—user responses can cause learning to be highly non-linear.

Strategic inputs. For the strategic aspects of our setting, we build on the popular formulation of Hardt et al. (2016). Users seek to be classified positively (i.e., have ŷi = 1), and to achieve this, are willing to modify their features (at some cost). Once the system has learned and published h, a test-time user i can modify her features xi 7→ x′i in response to h. Modification costs are defined by a cost function c(x, x′) (known to all); here we focus mainly on 2-norm costs c(x, x′) = ∥x − x′∥2 (Levanon & Rosenfeld, 2022; Chen et al., 2020), but also discuss other costs (Brückner et al., 2012; Levanon & Rosenfeld, 2021; Bechavod et al., 2022). User i modifies her features (or "moves") if this improves her prediction (i.e., if h(xi) = −1 but h(x′i) = 1) and is cost-effective (i.e., prediction gains exceed modification costs); for linear classifiers, this means crossing the decision boundary. Note that since y ∈ {±1}, gains are at most h(x′) − h(x) = 2. Users therefore do not move to any x′ whose cost c(x, x′) exceeds a "budget" of 2, and the maximal moving distance is d = 2.

Distribution shift. One interpretation of strategic classification is that user responses cause distribution shift, since in aggregate, p(x′) ≠ p(x). Crucially, how the distribution changes depends on h, which implies that the system has some control over the test distribution p(x′), indirectly through how users respond—a special case of model-induced distribution shift (Miller et al., 2021; Maheshwari et al., 2022). The unique aspect of our setting is that user responses are linked through their mutual dependence on the graph. We next describe our model of user responses in detail.

3 STRATEGIC USER BEHAVIOR: MODEL AND ANALYSIS

Eq. (2) states that h classifies i according to her embedding ϕi, which in turn is a weighted sum of her features and those of her neighbors. To gain intuition as to the effects of the graph on user behavior, it will be convenient to assume the weights w̃ are normalized,5 so that we can write:
$$\phi_i = \phi(x_i; x_{-i}) = (1 - \alpha_i)\, x_i + \alpha_i \bar{x}_i \quad \text{for some } \alpha_i \in [0, 1] \tag{3}$$
I.e., ϕi can be viewed as an interpolation between xi and some point x̄i ∈ Rℓ representing all other nodes, where the precise point along the line depends on a parameter αi that represents the influence of the graph (in a graph-free setting, αi = 0). This reveals the dual effect a graph has on users: On the one hand, the graph limits the ability of user i to influence her own embedding, since any effort invested in modifying xi affects ϕi by at most 1 − αi. But the flip side of this is that an αi-portion of ϕi is fully determined by other users (as expressed in x̄i); if they move, i's embedding also 'moves' for free. A user's 'effective' movement radius is ri = d(1 − αi). Fig. 1 (F) shows this for varying αi.

5This is indeed the case in several common approaches.

3.1 STRATEGIC RESPONSES

Given that h relies on the graph for predictions—how should a user modify her features xi to obtain ŷi = 1?
In vanilla strategic classification (where h operates on each xi independently), users are modeled as rational agents that respond to the classifier by maximizing their utility, i.e., play
$$x'_i = \underset{x'}{\arg\max}\; h(x') - c(x_i, x')$$
which is a best-response that results in immediate equilibrium (users have no incentive to move, and the system has no incentive to change h).6 In our graph-based setting, however, the dependence of ŷi on all other users via h(xi; x−i) makes this notion of best-response ill-defined, since the optimal x′i can depend on others' strategic responses, x′−i, which are unknown to user i at the time of decision (and may very well rely on x′i itself). As a feasible alternative, here we generalize the standard model by assuming that users play a myopic best-response over a sequence of multiple update rounds. As we will see, this has direct connections to key ideas underlying graph neural networks. Denote the features of node i at round t by $x^{(t)}_i$, and set $x^{(0)}_i = x_i$. A myopic best response means that at round t, each user i chooses $x^{(t)}_i$ to maximize her utility at time t according to the state of the game at time t − 1, i.e., assuming all other users play $\{x^{(t-1)}_j\}_{j \ne i}$, with costs accumulating over rounds. This defines a myopic response mapping:
$$\Delta_h(x_i; x_{-i}, \kappa) \triangleq \underset{x' \in \mathbb{R}^\ell}{\arg\max}\; h(x'; x_{-i}) - c(x_i, x') - \kappa \tag{4}$$
where at round t updates are made (concurrently) via $x^{(t+1)}_i = \Delta_h(x^{(t)}_i; x^{(t)}_{-i}, \kappa^{(t)}_i)$ with accumulating costs $\kappa^{(t)}_i = \kappa^{(t-1)}_i + c(x^{(t-1)}_i, x^{(t)}_i)$, $\kappa^{(0)}_i = 0$. Predictions for round t are $\hat{y}^{(t)}_i = h(x^{(t)}_i; x^{(t)}_{-i})$. Eq. (4) naturally extends the standard best-response mapping (which is recovered when αi = 0 for all i, and converges after one round). By adding a temporal dimension, the actions of users propagate over the graph and in time to affect others. Nonetheless, even within a single round, graph-induced dependencies can result in non-trivial behavior; some examples for ℓ = 1 are given in Fig. 1 (A-D).

3.2 ANALYSIS

We now give several results demonstrating basic properties of our response model and the consequent dynamics, which shed light on how the graph differentially affects the system and its users.

Convergence. Although users are free to move at will, movement adheres to a certain useful pattern.

Proposition 1. For any h, if users move via Eq. (4), then for all i ∈ [n], $x^{(t)}_i \ne x^{(t-1)}_i$ at most once.

Proof. User i will move only when: (i) she is currently classified negatively, h(xi; x−i) = −1, and (ii) there is some x′ for which utility can improve, i.e., h(x′; x−i) − c(xi, x′) > −1, which in our case occurs if h(x′; x−i) = 1 and c(xi, x′) < 2 (since h maps to [−1, 1]).7 Eq. (4) ensures that the modified x′i will be such that ϕ(x′i; x−i) lies exactly on the decision boundary of h; hence, x′i must be closer to the decision boundary (in Euclidean distance) than xi. This means that any future moves of an (incoming) neighbor j can only push i further away from the decision boundary; hence, the prediction for i remains positive, and she has no future incentive to move again.8 Hence, all users move at most once.

The proof reveals a certain monotonicity principle: users always (weakly) benefit from any strategic movement of others. Convergence follows as an immediate result.

Corollary 1. Myopic best-response dynamics converge for any h (and after at most n rounds).

We will henceforth use $x^h_i$ to denote the features of user i at convergence (w.r.t. h); the round at which convergence is reached is denoted Tmax.

Hitchhiking.
When i moves, the embeddings of (outgoing) neighbors j who currently have ŷj = −1 also move closer to the decision boundary; thus, users who were initially too far to cross may be able to do so at later rounds. In this sense, the dependencies across users introduced by the graph-dependent embeddings align user incentives, and promote an implicit form of cooperation. Interestingly, users can also obtain positive predictions without moving. We refer to such users as 'hitchhikers'.

Proposition 2. There exist cases where $\hat{y}^{(t)}_i = -1$ and i doesn't move, but $\hat{y}^{(t+1)}_i = 1$.

A simple example can be found in Figure 1 (E). Hitchhiking demonstrates how relying on the graph for classification can promote strategic behavior—even under a single response round.

6Note that 'rational' here implies users are assumed to know h. As in most works in the field, we also make this assumption; for the practically-inclined reader, note that (i) in some cases, there is reason to believe it may approximately hold (e.g., http://openschufa.de), and (ii) relaxing this assumption (and others) is an ongoing community effort (Ghalme et al., 2021; Jagadeesan et al., 2021; Bechavod et al., 2022; Barsotti et al., 2022b).
7In line with Hardt et al. (2016), we assume that if the value is zero then the user does not move.
8Users moving only once ensures that cumulative costs are never larger than the final gain.

Cascading behavior. Hitchhiking shows how the movement of one user can flip the label of another, but the effects of this process are constrained to a single round. When considering multiple rounds, a single node can trigger a 'domino effect' of moves that spans the entire sequence.

Proposition 3. For any n, there exists a graph where a single move triggers n additional rounds.

Proposition 4. For any n and k ≤ n, there exists a graph where n − k users move at round k.

Proofs are constructive and modular, and rely on graphs that are predictively useful (Appendix A.2). Note also that graph diameter is not a mediating factor (Appendix E.3). Both results show that, through monotonicity, users also (weakly) benefit from additional rounds. This has concrete implications.

Corollary 2. In the worst case, the number of rounds until convergence is Ω(n).

Corollary 3. In the worst case, Ω(n) users move after Ω(n) rounds.

Thus, to exactly account for user behavior, the system must correctly anticipate the strategic responses of users many rounds into the future, since a bulk of predictions may flip in the last round. Fortunately, these results also suggest that in some cases, blocking one node from crossing can prevent a cascade of flips; thus, it may be worthwhile to 'sacrifice' certain predictions for collateral gains. This presents an interesting tradeoff in learning, encoded in the learning objective we present next, and which we motivate with our final result on the potential impact of strategic behavior:

Proposition 5. The gap in accuracy between (i) the optimal non-strategic classifier on non-strategic data, and (ii) the optimal strategic classifier on strategic data, can be as large as 30% (see Apx. E.1).

4 LEARNING AND OPTIMIZATION

We are now ready to describe our learning approach. Our learning objective can be restated as:
$$\hat{h} = \underset{h \in H}{\arg\min} \sum_i L\big(y_i,\, h(x^h_i; x^h_{-i})\big) \tag{5}$$
for H = {hθ,b} as in Eq. (2). The difficulty in optimizing Eq. (5) is that the $x^h$ depend on h through the iterative process, which relies on ∆h. At test time, $x^h$ can be computed exactly by simulating the dynamics.
However, at train time, we would like to allow gradients of θ and b to propagate through $x^h$. For this, we propose an efficient differentiable proxy of $x^h$, implemented as a stack of layers, each corresponding to one response round. The number of layers is a hyperparameter, T.

Single round. We begin by examining a single iteration of the dynamics, i.e., T = 1. Note that since a user moves only if the cost is at most 2, Eq. (4) can be rewritten as:
$$\Delta_h(x_i; x_{-i}) = \begin{cases} x'_i & \text{if } h(x_i; x_{-i}) = -1 \text{ and } c(x_i, x'_i) \le 2 \\ x_i & \text{o.w.} \end{cases} \tag{6}$$
where $x'_i = \mathrm{proj}_h(x_i; x_{-i})$ is the point to which xi must move in order for ϕ(xi; x−i) to be projected onto h. This projection-like operator (on xi) can be shown to have a closed-form solution:
$$\mathrm{proj}_h(x_i; x_{-i}) = x_i - \frac{\theta^\top \phi(x_i; x_{-i}) + b}{\|\theta\|_2^2\, \tilde{w}_{ii}}\theta \tag{7}$$
See Appendix B.1 for a derivation using the KKT conditions. Eq. (7) is differentiable in θ and b; to make the entire response mapping differentiable, we replace the 'hard if' in Eq. (6) with a 'soft if', which we now describe. First, to account only for negatively-classified points, we ensure that only points in the negative halfspace are projected, via a 'positive-only' projection:
$$\mathrm{proj}^+_h(x_i; x_{-i}) = x_i - \min\left\{0,\ \frac{\theta^\top \phi(x_i; x_{-i}) + b}{\|\theta\|_2^2\, \tilde{w}_{ii}}\right\}\theta \tag{8}$$
Then, we replace the c ≤ 2 constraint with a smoothed sigmoid that interpolates between xi and the projection, as a function of the cost of the projection, thresholded at 2. This gives our differentiable approximation of the response mapping:
$$\tilde{\Delta}(x_i; x_{-i}, \kappa) = x_i + (x'_i - x_i)\,\sigma_\tau\big(2 - c(x_i, x'_i) - \kappa\big), \qquad x'_i = \mathrm{proj}^+_h(x_i; x_{-i}) \tag{9}$$
where σ is a sigmoid and τ is a temperature hyperparameter (τ → 0 recovers Eq. (6)); for T = 1, κ = 0. In practice we add a small additive tolerance term for numerical stability (see Appendix B.3).

Multiple rounds. Next, we consider the computation of (approximate) modified features after T > 1 rounds, denoted $\tilde{x}^{(T)}$, in a differentiable manner. Our approach is to apply ∆̃ iteratively as:
$$\tilde{x}^{(t+1)}_i = \tilde{\Delta}(\tilde{x}^{(t)}_i;\, \tilde{x}^{(t)}_{-i},\, \kappa^{(t)}_i), \qquad \tilde{x}^{(0)}_i = x_i \tag{10}$$
Considering ∆̃ as a layer in a neural network, approximating T rounds can be done by stacking. In Eq. (10), $\kappa^{(t)}_i$ is set to accumulate the costs of approximate responses, $\kappa^{(t)}_i = \kappa^{(t-1)}_i + c(\tilde{x}^{(t-1)}_i, \tilde{x}^{(t)}_i)$. One observation is that for 2-norm costs, $\kappa^{(t)}_i = c(\tilde{x}^{(0)}_i, \tilde{x}^{(t)}_i)$ (by the triangle inequality; since all points move along a line, equality holds). We can therefore simplify Eq. (9) and replace $c(x^{(t-1)}_i, x'_i) + \kappa^{(t-1)}_i$ in the argument of $\sigma_\tau$ with $c(x^{(0)}_i, x'_i)$. For other costs, this gives a lower bound (see Appendix B.1).
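To illustrate Eqs. (8)-(10), the smoothed response mapping can be written as a differentiable layer in a few lines of autodiff code. The following PyTorch-style sketch is ours (assumed, and likely simplified relative to the released implementation), using the 2-norm simplification for κ:

```python
# Illustrative sketch (not the paper's code): the smoothed response layer.
import torch

def response_layer(X0, Xt, W_tilde, theta, b, tau=0.05, budget=2.0):
    """One smoothed response round. X0: pre-dynamics features (n, l);
    Xt: features after the previous round; W_tilde: (n, n) embedding weights."""
    w_ii = torch.diagonal(W_tilde)                        # self-weights
    score = (W_tilde @ Xt) @ theta + b                    # theta^T phi_i + b
    step = torch.clamp(score / (theta.norm()**2 * w_ii), max=0.0)
    X_proj = Xt - step[:, None] * theta                   # Eq. (8): negative side only
    cost = (X_proj - X0).norm(dim=1)                      # 2-norm: c + kappa collapses
    gate = torch.sigmoid((budget - cost) / tau)           # soft version of c <= 2
    return Xt + (X_proj - Xt) * gate[:, None]             # Eq. (9)

# Stacking T such layers approximates T response rounds (Eq. (10)):
# X = X0
# for _ in range(T):
#     X = response_layer(X0, X, W_tilde, theta, b)
```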
5 EXPERIMENTS

5.1 SYNTHETIC DATA

We begin our empirical evaluation by demonstrating different aspects of learning in our setting using a simple but illustrative synthetic example. Additional results and insights on movement trends, the effects of movement on accuracy, and the importance of looking ahead can be found in Appendix D.1. For our experimental setup, we set ℓ = 1 and sample features xi ∈ R for each class from a corresponding Gaussian N(y, 1) (classes are balanced). For each node, we uniformly sample 5 neighbors from the same class and 3 from the other, and use uniform weights. This creates a task where both features and the graph are informative about labels, but only partially, and in a complementary manner (i.e., noise is uncorrelated; for i with yi = 1, if xi < 0, it is still more likely that most neighbors have xj > 0, and vice versa). As it is a priori unclear how to optimally combine these sources, we study the effects of relying on the graph to various degrees by varying a global α, i.e., setting w̃ii = 1 − α and w̃ij = α/degi for all i and all j ≠ i. We examine both strategic and non-strategic settings, the latter serving as a benchmark. Since ℓ = 1, H = {hb} is simply the class of thresholds; hence we can scan all thresholds b and report learning outcomes for all models hb ∈ H. For non-strategic data, the optimal h∗ has b∗ ≈ 0; for strategic data, the optimal h∗ can be found using line search. Testing is done on disjoint but similarly sampled held-out features and graph.

The effects of strategic behavior. Figure 2 (left) presents the accuracy of the learned ĥ for varying α and in different settings. In a non-strategic setting (dashed gray), increasing α helps, but if reliance on the graph becomes exaggerated, performance deteriorates (α ≈ 0.7 is optimal). Allowing users to respond strategically reverses this result: for α = 0 (i.e., no graph), responses lower accuracy by ≈ 0.26 points; but as α is increased, the gap grows, becoming more pronounced as test-time response rounds progress (blue lines). Interestingly, performance under strategic behavior is worst around the previously-optimal α ≈ 0.75. This shows how learning in a strategic environment—but neglecting to account for strategic behavior—can be detrimental. By accounting for user behavior, our approach (orange line) not only recovers performance, but slightly improves upon the non-strategic setting (this can occur when positive points are properly incentivized; see Appendix D.1).

Sensitivity analysis. Figure 2 (right) plots the accuracy of all threshold models hb for increasing values of α. For each α, performance exhibits a 'bell-curve' shape, with its peak at the optimal h∗. As α increases, bell curves change in two ways. First, their centers shift, decreasing from positive values towards zero (which is optimal for non-strategic data); since using the graph limits users' effective radius of movement, the optimal decision boundary can be less 'stringent'. Second, and interestingly, bell curves become narrower. We interpret this as a measure of tolerance: the wider the curve, the lower the loss in accuracy when the learned ĥ is close to (but does not equal) h∗. The figure shows, for a subset of α-s, 'tolerance bands': intervals around b∗ that include thresholds b for which the accuracy of hb is at least 90%, 95%, and 97.5% of the optimum (horizontal lines). Results indicate that larger α-s provide less tolerance. If variation in ĥ can be attributed to the number of examples, this can be interpreted as hinting that larger α-s may entail larger sample complexity.

Number of layers (T). Figure 2 (right) also shows, for each bell curve, the accuracy achieved by learned models ĥ of increasing depths, T = 1, . . . , 4 (colored dots). For α = 0 (no graph), there are no inter-user dependencies, and dynamics converge after one round. Hence, T = 1 suffices and is optimal, and additional layers are redundant. However, as α increases, more users move in later rounds, and learning with an insufficiently large T results in deteriorated performance. This becomes especially distinct for large α: e.g., for α = 0.9, performance drops by ∼ 11% when using T = 1 instead of the optimal T = 4. Interestingly, lower T always results in lower, more 'lenient' thresholds; as a result, performance deteriorates, and more quickly for larger, more sensitive α.
Thus, the relations between α and T suggest that greater reliance on the graph requires more depth.

5.2 EXPERIMENTS ON REAL DATA

Data. We use three benchmark datasets used extensively in the GNN literature: Cora, CiteSeer, and PubMed (Sen et al., 2008; Kipf & Welling, 2017), and adapt them to our setting. We use the standard (transductive) train-test split of Sen et al. (2008); the data is made inductive by removing all test-set nodes that can be influenced by train-set nodes (Hamilton et al., 2017). All three datasets describe citation networks, with papers as nodes and citations as edges. Although these are directed relations by nature, the available data include only undirected edges; hence, we direct edges towards lower-degree nodes, so that the movement of higher-degree nodes is more influential. As our setup requires binary labels, we follow standard practice and merge classes, aiming for balanced binary classes that sustain strategic movement. Appendix C includes further details. See Appendix D.2 for additional results on strategic improvement, extending neighborhood size, and node centrality and influence.

Methods. We compare our robust learning approach to a naïve approach that does not account for strategic behavior (i.e., falsely assumes that users do not move). As a benchmark, we report the performance of the naïve model on non-strategic data (for which it is appropriate). All methods are based on the SGC architecture (Wu et al., 2019), as it is expressive enough to effectively utilize the graph, but simple enough to permit rational user responses (Eq. (4); see also the notes in Sec. 1.1). We use the standard weights W̃ = D−1/2AD−1/2, where A is the adjacency matrix and D is the diagonal degree matrix.

Optimization and setup. We train using Adam and set hyperparameters according to Wu et al. (2019) (learning rate = 0.2, weight decay = 1.3·10−5). Training is stopped after 20 epochs (this usually suffices for convergence). Hyperparameters were determined based only on the train set: τ = 0.05, chosen to be the smallest value which retained stable training, and T = 3, as training typically saturates then (we also explore varying depths). We use β-scaled 2-norm costs, cβ(x, x′) = β∥x − x′∥2, β ∈ R+, which induce a maximal moving distance of dβ = 2/β. We observed that values around d = 0.5 permit almost arbitrary movement; we therefore experiment in the range d ∈ [0, 0.5], but focus primarily on the mid-point d = 0.25 (note d = 0 implies no movement). Means and standard errors are reported over five random initializations. Appendix C includes further details.

Results. Table 1 presents detailed results for d = 0.25 and T = 3. As can be seen, the naïve approach is highly vulnerable to strategic behavior. In contrast, by anticipating how users collectively respond, our robust approach is able to recover most of the drop in accuracy (i.e., from 'benchmark' to 'naïve'; Cora: 35%, CiteSeer: 16%, PubMed: 72%). Note this is achieved with a T much smaller than necessary for response dynamics to converge (Tmax: Cora=7, CiteSeer=7, PubMed=11). Fig. 3 (top) shows results for varying max distances d ∈ [0, 0.5], fixing T = 3 (note d = 0 entails no movement). For Cora and CiteSeer, larger max distances—the result of lower modification costs—hurt performance; nonetheless, our robust approach maintains a fairly stable recovery rate over all values of d. For PubMed, our approach retains ≈ 92% of the optimum, showing resilience to reduced costs.
Interestingly, for CiteSeer, in the range d ∈ [0.05, 0.15], our approach improves over the baseline, suggesting it utilizes strategic movements for improved accuracy (as in Sec. 5.1). Fig. 3 (bottom) shows results for varying depths T ∈ {0, . . . , 10}. For all datasets, results improve as T increases, but saturate quickly at T ≈ 3; this suggests a form of robustness of our approach to overshooting in choosing T (which, due to smoothing, can cause larger deviations from the true dynamics). Using T = 1 recovers between 65% and 91% (across datasets) of the optimal accuracy. This shows that while considering only one round of user responses (in which there are no dependencies) is helpful, it is much more effective to consider multiple, dependent rounds, even if only a few.

6 DISCUSSION

In this paper we study strategic classification under graph neural networks. Relying on a graph for prediction introduces dependencies in user responses, which can result in complex correlated behavior. The incentives of the system and its users are not aligned, but also not discordant; our proposed learning approach utilizes this degree of freedom to learn strategically-robust classifiers. Strategic classification assumes rational user behavior; this necessitates classifiers that are simple enough to permit tractable best-responses. A natural future direction is to consider more elaborate predictive architectures coupled with appropriate boundedly-rational user models, in hopes of shedding further light on questions regarding the benefits and risks of transparency and model explainability.

ETHICS AND SOCIETAL IMPLICATIONS

In our current era, machine learning is routinely used to make predictions about humans. These, in turn, are often used to inform, or even determine, consequential decisions. That humans can (and do) respond to decision rules is a factual reality, and is a topic of continual interest in fields such as economics (e.g., Nielsen et al., 2010) and policy-making (e.g., Camacho & Conover, 2013); the novelty of strategic classification is that it studies decision rules that are a product of learned predictive models. Strategic classification not only acknowledges this reality, but also proposes tools for learning in ways that account for it. But in modeling and anticipating how users respond, and by adjusting learning to accommodate their effects, learning also serves to 'steer' its population of users, perhaps inadvertently, towards certain outcomes (Hardt et al., 2022). GNNs are no exception to this reality. In the domain of graph-based learning, the role of predictive models is expressed in how they associate social connections with decision outcomes for individuals. Clearly, the choice of whether to make use of social data for decisions can be highly sensitive, and doing so necessitates much forethought and care. But the question of whether to use social data to enhance prediction is not binary in nature, i.e., there is no simple 'right' or 'wrong'. Consider our example of the credit scoring company, Lenddo. On the one hand, Lenddo has been criticized for potentially discriminating against applicants based on whom they choose to socialize with (or, rather, who chooses to socialize with them).
But on the other hand, Lenddo, which focuses primarily on developing countries, has been acclaimed for providing financial assistance to a large community of deserving applicants who, due to conservative norms in typical credit scoring rules, would otherwise be denied a consequential loan. Such considerations apply broadly. In other focal domains in strategic classification, such as loans, university admissions, and job hiring, the use of social data for informing decisions can be highly controversial, on both ethical and legal grounds. Regulation is necessary, but as in similar areas, often lags far behind the technology itself. This highlights the need for transparency and accountability in how, when, and to what purpose social data is used (Ghalme et al., 2021; Jagadeesan et al., 2021; Bechavod et al., 2022; Barsotti et al., 2022b).

ACKNOWLEDGEMENTS

This research was supported by the Israel Science Foundation (grant No. 278/22).

A ANALYSIS

A.1 HITCHHIKING

Here we provide a concrete example of hitchhiking, following Fig. 1 (E). The example includes three nodes, i, j, k, positioned at x_k = −3, x_i = −2.1, x_j = −0.5, and connected via edges k→j and j→i. Edge weights are w̃_ji = 0.6 and w̃_ii = 0.4; w̃_kj = 1/3 and w̃_jj = 2/3; and w̃_kk = 1. The example considers a threshold classifier h_b with b = 0, and unit-scale costs (i.e., β = 1) inducing a maximal moving distance of d = 2. We show that i cannot invest effort to cross and obtain ŷ_i = 1; but once j moves (to obtain ŷ_j = 1), this results in i also being classified positively (without moving).

Initially (at round t = 0), node embeddings are:

ϕ_k = −3,  ϕ_i = −1.14,  ϕ_j = −4/3

and all points are classified negatively, ŷ_k = ŷ_i = ŷ_j = −1. Notice that i cannot cross the decision boundary even if she moves the maximal cost-feasible distance of d = 2:

ϕ(x_i^(0) + 2; x_{−i}^(0)) = w̃_ii(x_i^(0) + 2) + w̃_ji x_j^(0) = 0.4(−2.1 + 2) + 0.6(−1/2) = −0.34 < 0

Hence, i does not move, so x_i^(1) = x_i^(0). Similarly, k cannot cross, so x_k^(1) = x_k^(0). However, j can cross by moving to 1.5 (at cost 2) in order to get ŷ_j = 1:

x_j^(1) = 1.5 = −1/2 + 2 = x_j^(0) + 2  ⇒  ϕ(x_j^(1); x_{−j}^(1)) = w̃_jj x_j^(1) + w̃_kj x_k^(0) = (2/3) x_j^(1) + (1/3)(−3) = 0  ⇒  ŷ_j^(1) = 1

After j moves, i is classified positively (and so does not need to move):

ϕ(x_i^(1); x_{−i}^(1)) = w̃_ii x_i^(1) + w̃_ji x_j^(1) = 0.4(−2.1) + 0.6(3/2) = 0.06 > 0  ⇒  ŷ_i^(2) = 1

A.2 CASCADING BEHAVIOR

We give a constructive example (for any n) which will be used to prove Propositions 3 and 4. The construction is modular, meaning that we build a small 'cyclic' structure of size 3, such that for any given n, we simply replicate this structure roughly n/3 times, and include two additional 'start' and 'finish' nodes. Our example assumes a threshold classifier h_b with b = 0, and scale costs c_β with β = 1.5 inducing a maximum moving distance of d_β = 3.

Fix n. We construct a graph of size n + 2 as follows. Nodes are indexed 0, . . . , n + 1. The graph has bi-directional edges between each pair of consecutive nodes, namely (i, i + 1) and (i + 1, i) for all i = 0, . . . , n, except for the last node, which has only an outgoing edge (n + 1, n), but no incoming edge. We set uniform normalized edge weights, i.e., w_ij = 1/3 and w_ii = 1/3 for all 1 ≤ i, j ≤ n, and w_{0,0} = w_{0,1} = 1/2 and w_{n+1,n+1} = w_{n+1,n} = 1/2. The initial features of each node are defined as:

x_0 = −1,   x_i = 2 if i mod 3 = 1, and x_i = −4 otherwise,   ∀i = 1, . . . , n + 1   (11)

Figure 4 (A) illustrates this for n = 3.
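As a quick sanity check on this construction, here is a minimal simulation sketch of the response dynamics for n = 3 (nodes 0–4). It assumes the movement rule described earlier: a negatively-classified node moves only if it can reach the boundary within the maximal distance, and then moves just far enough to reach it; following the argument in Proposition 3 below, the last node's embedding is taken to depend only on its own features. The code and variable names are ours.

```python
import numpy as np

# A.2 construction with n = 3: a chain of nodes 0..4; node 4 has no in-edges.
n = 3
x = np.array([-1.0, 2.0, -4.0, -4.0, 2.0])     # Eq. (11): x0=-1, then 2, -4, -4, 2
W = np.zeros((n + 2, n + 2))                    # W[i, j]: weight of x_j in phi_i
W[0, 0] = W[0, 1] = 0.5
for i in range(1, n + 1):                       # uniform 1/3 on self and both neighbors
    W[i, i - 1] = W[i, i] = W[i, i + 1] = 1.0 / 3
W[n + 1, n + 1] = 1.0                           # last node depends only on itself
d_max = 3.0                                     # maximal cost-feasible move

for t in range(1, n + 1):
    x_prev = x.copy()                           # responses use round t-1 positions
    moved = []
    for i in range(n + 2):
        phi_i = W[i] @ x_prev
        if phi_i < 0 and phi_i + W[i, i] * d_max >= 0:
            x[i] = x_prev[i] - phi_i / W[i, i]  # minimal move that reaches the boundary
            moved.append(i)
    print(f"round {t}: moved {moved}, x = {x}")
# Expected, per Lemma 1 below: node t moves exactly at round t (to 5 or -1).
```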
Note that while the graph creates a 'chain' structure, the positioning of node features is cyclic (starting from i = 1): 2, −4, −4, 2, −4, −4, 2, . . . etc. We begin with a lemma showing that in our construction, each node i = 1, . . . , n moves precisely at round t = i.

Lemma 1. At every round 1 ≤ t ≤ n: (1) node i = t moves, with x_i^(t) = 5 if i mod 3 = 1, and x_i^(t) = −1 otherwise; (2) all nodes j > t do not move, i.e., x_j^(t) = x_j^(t−1).

Note that (1) (together with Prop. 1) implies that for any round t, all nodes i < t (which have already moved at the earlier round t′ = i) do not move again. Additionally, (2) implies that all j > t remain in their initial position, i.e., x_j^(t) = x_j^(0). Finally, notice that the starting node x_0 has ϕ_0 = 0.5, meaning that ŷ_0^(0) = 1, and so it does not move at any round.

Proof. We begin with the case n = 3.

• Round 1: Node i = 1 can cross by moving the maximal distance of 3:

w̃_{1,1}(x_1^(0) + 3) + w̃_{0,1} x_0^(0) + w̃_{2,1} x_2^(0) = (1/3)(2 + 3) + (1/3)(−1) + (1/3)(−4) = 0   (12)

However, nodes 2 and 3 cannot cross even if they move the maximal feasible distance:

w̃_{2,2}(x_2^(0) + 3) + w̃_{1,2} x_1^(0) + w̃_{3,2} x_3^(0) = (1/3)(−4 + 3) + (1/3)(2) + (1/3)(−4) = −1 < 0   (13)

w̃_{3,3}(x_3^(0) + 3) + w̃_{2,3} x_2^(0) + w̃_{4,3} x_4^(0) = (1/3)(−4 + 3) + (1/3)(−4) + (1/3)(2) = −1 < 0   (14)

• Round 2: Node i = 2 can cross by moving the maximal distance of 3:

w̃_{2,2}(x_2^(1) + 3) + w̃_{1,2} x_1^(1) + w̃_{3,2} x_3^(1) = (1/3)(−4 + 3) + (1/3)(5) + (1/3)(−4) = 0   (15)

However, node 3 cannot cross even if it moves the maximal feasible distance:

w̃_{3,3}(x_3^(1) + 3) + w̃_{2,3} x_2^(1) + w̃_{4,3} x_4^(1) = (1/3)(−4 + 3) + (1/3)(−4) + (1/3)(2) = −1 < 0   (16)

• Round 3: Node i = 3 can cross by moving the maximal distance of 3:

w̃_{3,3}(x_3^(2) + 3) + w̃_{2,3} x_2^(2) + w̃_{4,3} x_4^(2) = (1/3)(−4 + 3) + (1/3)(−1) + (1/3)(2) = 0   (17)

Fig. 4 (A) illustrates this procedure for n = 3. Next, consider n > 3. Due to the cyclical nature of feature positioning and the chain structure of our graph, we can consider what happens when we sequentially add nodes to the graph. By induction, we can show that:

• n mod 3 = 1: Consider round t = n. Node n has x_n^(t−1) = 2, and two neighbors: n − 1, who after moving at the previous round has x_{n−1}^(t−1) = −1; and n + 1, who has a fixed x_{n+1}^(t−1) = −4. Thus, it is in the same configuration as node i = 1, and so its movement follows Eq. (12).

• n mod 3 = 2: Consider round t = n. Node n has x_n^(t−1) = −4, and two neighbors: n − 1, who after moving at the previous round has x_{n−1}^(t−1) = 5; and n + 1, who has a fixed x_{n+1}^(t−1) = −4. Thus, it is in the same configuration as node i = 2, and so its movement follows Eq. (15).

• n mod 3 = 0: Consider round t = n. Node n has x_n^(t−1) = −4, and two neighbors: n − 1, who after moving at the previous round has x_{n−1}^(t−1) = −1; and n + 1, who has a fixed x_{n+1}^(t−1) = 2. Thus, it is in the same configuration as node i = 3, and so its movement follows Eq. (17).

Fig. 4 (B) illustrates this idea for n > 3.

We now proceed to prove the propositions.

Proposition 3: The proposition follows immediately from Lemma 1; the only detail that remains to be shown is that node n + 1 does not move at all. To see this, note that since it does not have any incoming edges, its embedding depends only on its own features, x_{n+1}. If (n + 1) mod 3 = 1, we have x_{n+1} = 2, and so ŷ_{n+1} = 1 without movement. Otherwise, x_{n+1} = −4, meaning that it is too far to cross.

Proposition 4: Fix n and k ≤ n.
Consider the same construction presented above for a graph of size k + 2. Then, add n − k identical nodes: for each k < j ≤ n, add an edge k→j, and set x_j = −x_k − 6 (so that the derivations below hold for any x_k). We claim that all such nodes will move exactly at round k. Consider some node k < j ≤ n. Since x_k moves only at round k (following Lemma 1), j does not move in any of the first t ≤ k rounds:

w̃_{j,j}(x_j^(0) + 3) + w̃_{k,j} x_k^(0) = (1/2)(−x_k^(0) − 6 + 3) + (1/2)(x_k^(0)) = (1/2)(−x_k^(0) − 3) + (1/2)(x_k^(0)) = −1.5 < 0   (18)

At the end of round t = k, node k has a value of x_k^(0) + 3. This enables j to cross by moving the maximal distance of 3:

w̃_{j,j}(x_j^(k) + 3) + w̃_{k,j} x_k^(k) = (1/2)(−x_k^(0) − 6 + 3) + (1/2)(x_k^(k)) = (1/2)(−x_k^(0) − 3) + (1/2)(x_k^(0) + 3) = 0   (19)

As this applies to all such j, we get that n − k nodes move at round k, which concludes our proof.

Note the graph is such that, for b = 0, without strategic behavior the graph is useful for prediction (it increases accuracy from 66% to 100%), so that a learner that is unaware of (or does not account for) strategic behavior is incentivized to utilize the graph. However, once strategic behavior is introduced, naïvely using the graph causes performance to drop to 0%.

B OPTIMIZATION

B.1 PROJECTION

We prove for 2-norm-squared costs. Correctness holds for 2-norm costs since the argmin is the same (squaring is monotone over non-negatives). Calculation of x_i's best response requires solving:

min_{x′} c(x′_i, x_i)   s.t.   θ⊤ϕ(x′_i; x_{−i}) + b = 0

min_{x′} ∥x′_i − x_i∥²₂   s.t.   θ⊤ϕ(x′_i; x_{−i}) + b = 0

To solve for x′, we apply the Lagrange method. Define the Lagrangian as follows:

L(x′_i, λ) = ∥x′_i − x_i∥²₂ + λ[θ⊤ϕ(x′_i; x_{−i}) + b]

Next, to find the minimum of L, differentiate with respect to x′_i and set to 0:

2(x′_i − x_i) + λ w̃_ii θ = 0
x′_i = x_i − (λ w̃_ii / 2) θ

Plugging x′_i into the constraint gives:

θ⊤[w̃_ii(x_i − (λ w̃_ii / 2) θ) + Σ_{j≠i} w̃_ij x_j] + b = 0
θ⊤[ϕ(x_i; x_{−i}) − (λ w̃²_ii / 2) θ] + b = 0
θ⊤ϕ(x_i; x_{−i}) + b = (λ w̃²_ii / 2) ∥θ∥²₂
λ = 2(θ⊤ϕ(x_i; x_{−i}) + b) / (∥θ∥²₂ w̃²_ii)

Finally, plugging λ into the expression for x′_i obtains:

x′_i = x_i − [(θ⊤ϕ(x_i; x_{−i}) + b) / (∥θ∥²₂ w̃_ii)] θ

B.2 GENERALIZED COSTS

Here we provide a formula for computing projections in closed form for generalized quadratic costs:

c(x, x′) = (1/2)(x′ − x)⊤ A (x′ − x)

for positive-definite A. As before, the same formula holds for generalized 2-norm costs (since the argmin is the same). Begin with:

min_{x′} c(x′_i, x_i)   s.t.   θ⊤ϕ(x′_i; x_{−i}) + b = 0
min_{x′} (1/2)(x′_i − x_i)⊤ A (x′_i − x_i)   s.t.   θ⊤ϕ(x′_i; x_{−i}) + b = 0

As before, apply the Lagrangian method:

(1/2)(x′_i − x_i)⊤ A (x′_i − x_i) + λ[θ⊤ϕ(x′_i; x_{−i}) + b]

Differentiating w.r.t. x′_i:

(1/2)[A⊤(x′_i − x_i) + A(x′_i − x_i)] + λ w̃_ii θ = 0
(A⊤ + A) x′_i = (A⊤ + A) x_i − 2λ w̃_ii θ

Since the matrix (A⊤ + A) is PD, we can invert to get:

x′_i = x_i − 2λ (A⊤ + A)^{−1} θ w̃_ii

Plugging x′_i into the constraint:

θ⊤[w̃_ii(x_i − 2λ(A⊤ + A)^{−1} θ w̃_ii) + Σ_{j≠i} w̃_ij x_j] + b = 0
θ⊤[ϕ(x_i; x_{−i}) − 2λ (A⊤ + A)^{−1} w̃²_ii θ] + b = 0
θ⊤ϕ(x_i; x_{−i}) + b = 2λ θ⊤(A⊤ + A)^{−1} θ w̃²_ii

Since (A⊤ + A)^{−1} is also PD, we get θ⊤(A⊤ + A)^{−1}θ > 0, and hence:

λ = (θ⊤ϕ(x_i; x_{−i}) + b) / (2 θ⊤(A⊤ + A)^{−1}θ w̃²_ii)

Finally, plugging in λ:

x′_i = x_i − [(θ⊤ϕ(x_i; x_{−i}) + b) / (θ⊤(A⊤ + A)^{−1}θ w̃_ii)] (A⊤ + A)^{−1} θ

Setting A = I recovers Eq. (7).

B.3 IMPROVING NUMERICAL STABILITY BY ADDING A TOLERANCE TERM

Theoretically, strategic responses move points precisely onto the decision boundary. For numerical stability in classifying (e.g., at test time), we add a small tolerance term, tol, that ensures that points are projected to lie strictly within the positive halfspace.
Tolerance is added as follows:

min_{x′} c(x′_i, x_i)   s.t.   θ⊤ϕ(x′_i; x_{−i}) + b ≥ tol   (20)

This necessitates the following adjustment to Eq. (7):

proj_h(x_i; x_{−i}) = x_i − [(θ⊤ϕ(x_i; x_{−i}) + b − tol) / (∥θ∥²₂ w̃_ii)] θ   (21)

However, blindly applying the above to Eq. (8) via:

proj⁺_h(x_i; x_{−i}) = x_i − min{0, (θ⊤ϕ(x_i; x_{−i}) + b − tol) / (∥θ∥²₂ w̃_ii)} θ   (22)

is erroneous, since any user whose score is lower than tol will move, although in principle she should not. To correct for this, we adjust Eq. (8) by adding a mask that ensures that only points in the negative halfspace are projected:

proj_h(x_i; x_{−i}) = x_i − 1{θ⊤ϕ(x_i; x_{−i}) + b < 0} · [(θ⊤ϕ(x_i; x_{−i}) + b − tol) / (∥θ∥²₂ w̃_ii)] θ   (23)

C ADDITIONAL EXPERIMENTAL DETAILS

Data. We experiment with three citation network datasets: Cora, CiteSeer, and PubMed (Sen et al., 2008). Table 2 provides summary statistics of the datasets, as well as experimental details.

Splits. All three datasets include a standard train-validation-test split, which we adopt for our use.10 For our purposes, we make no distinction between 'train' and 'validation', and use both sets for training purposes. To ensure the data is appropriate for the inductive setting, we remove from the test set all nodes which can be influenced by train-set nodes; this ranges from 6%–43% of the test set, depending on the dataset (and possibly the setting; see Sec. D.2.1). In Table 2, the number of train samples is denoted n_train, and the number of inductive test samples is denoted n*_test (all original transductive test sets include 1,000 samples).

Binarization. To make the data binary (original labels are multiclass), we enumerated over possible partitions of classes into 'negative' and 'positive', and chose the most balanced partition. Experimenting with other but similarly-balanced partitions resulted in similar performance (albeit at times less distinct strategic movement). The exception to this was PubMed (having only three classes), for which the most balanced partition was neither 'balanced' nor stable, and so here we opted for the more stable alternative. Reported partitions and corresponding negative-positive ratios (for train and for test) are given in Table 2.

Strategic responses. At test time, strategic user responses are computed by simulating the response dynamics in Sec. 3.1 until convergence.

10Note that nodes in these sets do not necessarily account for all nodes in the graph.

D ADDITIONAL EXPERIMENTAL RESULTS

D.1 EXPERIMENTS ON SYNTHETIC DATA

In this section we explore further in depth the relation between user movement and classification performance, using our synthetic setup in Sec. 5.1 (all examples discussed herein use α = 0.7). From a predictive point of view, graphs are generally helpful if same-class nodes are well-connected. This is indeed the case in our construction (as can be seen from the performance of the benchmark method with non-extreme α > 0 values). From a strategic perspective, however, connectivity increases cooperation, since neighboring nodes can positively influence each other over time. In our construction, cooperation occurs mostly within classes, i.e., negative points that move encourage other negative points to move, and similarly for positive points.

Movement trends. Fig. 5 (left) shows how different threshold classifiers h_b induce different degrees of movement.
The plot shows the relative number of points (in percentage points) whose predictions changed as a result of strategic behavior, per class (red: y = −1, green: y = 1) and over time: after one round (T = 1, dashed lines), and at convergence (T = ∞, solid lines). As can be seen, there is a general trend: when b is small, mostly negative points move, but as b increases, positive points move instead. The interesting point to observe is the gap between the first round (T = 1) and final round (T = ∞). For negative points, movement at T = 1 peaks at b_1 ≈ −0.25, but triggers relatively few consequent moves. In contrast, the peak for T = ∞ occurs at a larger b_∞ ≈ 0.15. For this threshold, though fewer points move in the first round, these trigger significantly more additional moves at later rounds, a result of the connectivity structure within the negative cluster of nodes (blue arrows). A similar effect takes place for positive nodes.

The importance of looking ahead. Fig. 5 (center) plots, for a range of thresholds b, the accuracy of h_b at convergence (T = ∞; orange line), and after one round (T = 1; gray line). The role of the latter is to illustrate the outcomes as 'perceived' by a myopic predictive model that considers only one round (e.g., includes only one response layer ∆̃); the differences between the two lines demonstrate the gap between perception (based on which training chooses a classifier ĥ) and reality (in which the classifier ĥ is evaluated). As can be seen, the myopic approach leads to an under-estimation of the optimal b∗; at b_1 ≈ 0.5, performance for T = 1 is optimal, but is severely worse under the true T = ∞, for which optimal performance is at b_∞ ≈ 1.15. The figure also gives insight as to why this happens. For both b_1 and b_∞, the figure shows (in bars) the relative number of points from each class who obtain ŷ = 1 as a result of strategic moves. Bars are stacked, showing the relative number of points that moved per round T (darker = earlier rounds; lightest = convergence). As can be seen, at b_1, the myopic model believes that many positive points, but only a few negative points, will cross. However, in reality, at convergence, the number of positive points that crossed is only slightly higher than that of negative points. Hence, the reason for the (erroneous) optimism of the myopic model is that it did not correctly account for the magnitude of correlated moves of negative points, which is expressed over time. In contrast, note that at b_∞, barely any negative points cross.

How movement affects accuracy. An important observation about the relation between movement and accuracy is that for any classifier h, any negative point that moves hurts accuracy (since y = −1 but predictions become ŷ = 1), whereas any positive point that moves helps accuracy (since y = 1 and predictions are now ŷ = 1). Fig. 5 (right) shows how these movements combine to affect accuracy. The figure compares accuracy before strategic behavior (T = 0; dashed line) to after one response round (T = 1; solid line, top plot) and to convergence (T = ∞; solid line, lower plot). As can be seen, for any b, the difference between pre-strategic and post-strategic accuracy amounts to exactly the degradation due to negative points (red arrows) plus the improvement due to positive points (green arrows). Note, however, the difference between T = 1 and T = ∞, as they relate to the benchmark model (T = 0, i.e., no strategic behavior). For T = 1 (top), across the range of b, positive and negative moves roughly balance out.
A result of this is that the curves for T = 0 and T = 1 are very similar, and share similar peaks in terms of accuracy (both have ≈ 0.89). One interpretation of this is that if points were permitted to move only one round, the optimal classifier could completely recover the benchmark accuracy by ensuring that the number of positive points that move exceeds the number of negative points that move. However, for T = ∞ (bottom), there is a skew in favor of positive points (green arrows). The result of this is that, for the optimal b, additional rounds allow positive points to move in a way that obtains slightly higher accuracy (0.91) compared to the benchmark (0.89). This is one possible mechanism underlying our results on synthetic data in Sec. 5.1, and later for our results on real data in Sec. 5.2.

D.2 EXPERIMENTS ON REAL DATA

D.2.1 EXTENDING NEIGHBORHOOD SIZE

One hyperparameter of SGC is the number of 'propagation' layers, K, which effectively determines the graph distance at which nodes can influence others (i.e., the 'neighborhood radius'). Given K, the embedding weights are defined as W̃ = D^{−1/2} A^K D^{−1/2}, where A is the adjacency matrix and D is the diagonal degree matrix. For K = 0, the graph is unused, which results in a standard linear classifier over node features. Our results in the main body of the paper use K = 1. Fig. 6 shows results for increasing K (we set T = 3 and d = 0.25, as in our main results). Results are mixed: for PubMed, higher K seems to lead to less drop in accuracy for naïve and less recovery for our approach; for Cora and CiteSeer, results are unstable. Note, however, that this may likely be a product of our inductive setup: since varying K also changes the effective test set (to preserve inductiveness, larger K often necessitates removing more nodes), test sets vary across conditions and decrease in size, making it difficult to directly compare results across different K.

D.2.2 STRATEGIC IMPROVEMENT

Our main results in Sec. 5.2 show that for CiteSeer, our strategically-aware approach outperforms the non-strategic benchmark (similarly to our synthetic experiments). Here we show that these results are robust. Fig. 7 provides higher-resolution results on CiteSeer for max distances d ∈ [0, 0.22] in hops of 0.01. All other aspects of the setup match the original experiment. As can be seen, our approach slightly but consistently improves upon the benchmark until d ≈ 0.17.

D.3 NODE CENTRALITY AND INFLUENCE

In this experiment we set out to explore the role played by central nodes in the graph in propagating the influence of strategic behavior. Since the embedding of a node i is partly determined by its in-neighbors, broadly we would expect nodes with high out-degree to be highly influential: as 'anchors' that prevent others from moving if they themselves do not, and as 'carriers' which either push neighbors over the boundary, or at least promote them closer to it, if they do move.

Experimental setup. To study the role of such nodes, we perform the following experiment. First, we order the nodes by decreasing out-degree, so that the potentially more influential nodes appear first in the ranking. Then, for each q ∈ {0, 10, 20, ..., 100}, we disconnect nodes in the q-th percentile, i.e., remove all edges emanating from the top-q% ranked nodes. For each such condition, we examine learning and its outcomes, and compare performance to a control condition in which nodes are ordered randomly.
The difference in performance and other learning outcomes provides us with a measure of the importance of high-degree nodes.

Results. Figure 8 shows results for all methods (naïve, robust (ours), and the non-strategic benchmark) across all three datasets (Cora, CiteSeer, and PubMed). In all conditions, we vary on the x-axis the portion of nodes that remain connected, where at zero (left end) we get an empty graph, and at 100 (right end) we get the original graph. Note that the y-axis varies in scale across plots. First, consider Cora. In terms of accuracy (upper plots), results show that for the benchmark (evaluated in a non-strategic environment, in which users do not move), the general trend is that more edges help improve performance. However, the gain in performance is much more pronounced for high-degree nodes. Interestingly, for the naïve method (which operates on strategically modified data, but does not anticipate updates), the trend is reversed: users utilize edges in a way that is detrimental to performance, and more so when high-degree nodes remain, making them a vulnerability. Our robust approach is also sensitive to the addition of edges, but to a much lesser degree (the drop in performance is minor); moreover, which nodes are disconnected appears to make little difference, which we take to mean that our approach can counter the dominant role of central nodes. The lower plots, which describe the portion of users that move and the portion of users that cross, provide some explanation as to how this is achieved. For the naïve approach, nearly half of all users move, and all users that move also cross. This occurs faster (in terms of the portion of nodes removed) in the degree condition. In our robust approach, note that the number of nodes that move is halved, and of those, not all cross, which demonstrates how our learning objective, which anticipates strategic behavior, can act to prevent it. For CiteSeer and PubMed, the general trend in accuracy is reversed for the non-strategic benchmark, and for the naïve approach it begins with a gap, which closes at some point (sooner in CiteSeer). Despite these differences, the qualitative behavior of our robust classifier is similar to Cora, achieving fairly stable accuracy (with a mild negative slope) in both conditions and for both datasets. As in Cora, movement and crossing behavior is similar, in that for the naïve approach a considerable portion of users move and cross (with a gap between conditions in PubMed), and in that our robust approach greatly reduces the number of users that move, and even more so the number of users that cross.

E ADDITIONAL ANALYTIC RESULTS

E.1 PERFORMANCE GAPS: ROBUST LEARNING VS. THE NON-STRATEGIC BENCHMARK

When learning a robust classifier on strategic data, intuitively we may hope that its performance approaches that of the optimal classifier on non-strategic data, which we may think of as a target upper bound. A natural question to then ask is: can we always reach this upper bound and close the performance gap? Here we answer this question in the negative, by showing a simple example where the optimal classifier on non-strategic data achieves perfect accuracy, but the optimal classifier on strategic data achieves only 0.66 accuracy. We then show that by introducing a mild change, namely slightly shifting the features of one node, the performance gap closes entirely. This, combined with our result from the previous Sec.
D.2.2 (showing that the gap can also be negative), highlights how the gap in performance greatly depends on the structure of the input (i.e., the graph, features, and labels), in a way which can be highly sensitive even to minor changes.

E.1.1 LARGE GAP

Our first example, in which the best obtainable gap is 0.33, is shown in Figure 9 (Left). The example has three nodes described by one-dimensional features x_1 = 1, x_2 = −1, x_3 = −1 and with labels y_1 = 1, y_2 = −1, y_3 = −1. We use the standard cost function, so the maximal distance to move is d_β = 2, and use uniform edge weights. Since x ∈ ℝ, classifiers are simply threshold functions on the real line, defined by a threshold parameter b ∈ ℝ.

First, we demonstrate that for non-strategic data, there exists a 'benchmark' classifier that achieves perfect accuracy. Let b = 0. Node embeddings are:

ϕ_1 = (x_1 + x_2)/2 = (1 − 1)/2 = 0 = b
ϕ_2 = (x_1 + x_2 + x_3)/3 = (1 − 1 − 1)/3 = −1/3 < 0 = b
ϕ_3 = (x_2 + x_3)/2 = (−1 − 1)/2 = −1 < 0 = b

This gives predictions ŷ_1 = +1 = y_1, ŷ_2 = −1 = y_2, ŷ_3 = −1 = y_3, which are all correct.

Next, we prove that there is no robust classifier capable of achieving perfect accuracy on strategic data. Suppose by contradiction that such a classifier exists, denoted b. For x_2 to move in the first round, the following conditions must hold:

(x_1 + x′_2 + x_3)/3 = b,   1 + x′_2 − 1 = 3b,   x′_2 = 3b

Since x_2 moves at most distance 2, it moves in the first round only if −1/3 < b ≤ 1/3. However, if it does move, it ends up getting an incorrect prediction. In addition, in this case x_3 can also get a positive prediction (either in the first round or in the next round, depending on whether b > 0), in which case the accuracy is 1/3. Thus, we get that 1/3 < b (note that for b ≤ −1/3, x_2 is classified as positive, which is wrong). Next, we look at the behavior of x_1 in the first round. The conditions for movement are:

(x′_1 + x_2)/2 = b,   x′_1 − 1 = 2b,   x′_1 = 2b + 1

Here x_1 gets a negative classification if b > 0. If b > 1, then x_1 does not move, since the distance required is larger than 2. Thus, x_1 does move (onto the classifier) and gets the correct prediction only if 0 < b ≤ 1. However, considering now the second round of movement for x_2 (which only occurs if b > 1/3, since for 0 < b ≤ 1/3, x_2 moves in the first round), we get the conditions:

(x′_1 + x′_2 + x_3)/3 = b,   2b + 1 + x′_2 − 1 = 3b,   x′_2 = b

This means that for all 0 < b ≤ 1, x_2 moves and gets an incorrect prediction; hence, for every choice of b, either x_1 or x_2 is misclassified. Consequently, there is no b that achieves an accuracy of 1, which contradicts the assumption. The optimal accuracy of any robust classifier for this example is 2/3, which is achieved by any b > 1. In this case, none of the nodes move, and x_1 is classified negatively, whereas its true label is positive.

E.1.2 NO GAP

We now give a nearly identical example, shown in Figure 9 (Right), in which the gap becomes zero. We use the same example as before, but set x_1 = 1.2 (instead of x_1 = 1). We begin by showing that there does exist a classifier which achieves perfect accuracy on non-strategic data. Let b = 0; embeddings are now:

ϕ_1 = (x_1 + x_2)/2 = (1.2 − 1)/2 = 0.1 > 0 = b
ϕ_2 = (x_1 + x_2 + x_3)/3 = (1.2 − 1 − 1)/3 = −4/15 < 0 = b
ϕ_3 = (x_2 + x_3)/2 = (−1 − 1)/2 = −1 < 0 = b

(note that since all nodes are connected in the graph, changing x_1 requires us to recompute all embeddings). Predictions now become ŷ_1 = +1 = y_1, ŷ_2 = −1 = y_2, ŷ_3 = −1 = y_3, which are all correct. Next, we show that there also exists a classifier which achieves perfect accuracy on strategic data. Let b = 1.1.
In the first round, x_1 moves, and flips its predicted label:

ϕ_1 = (x′_1 + x_2)/2 = ((x_1 + 2) + x_2)/2 = (3.2 − 1)/2 = 1.1 = b

Here, even if the other nodes move to the fullest extent, they do not have sufficient influence to revert this prediction:

ϕ_2 = (x_1 + x′_2 + x_3)/3 = (1.2 + (−1 + 2) + (−1))/3 = 0.4 < 1.1 = b
ϕ_3 = (x_2 + x′_3)/2 = (−1 + (−1 + 2))/2 = 0 < 1.1 = b

Thus, we get ŷ_1 = +1 = y_1, ŷ_2 = −1 = y_2, ŷ_3 = −1 = y_3, which are also all correct.

E.2 STRATEGIC BEHAVIOR OF CLIQUES

Here we give a result that considers graph structure. In particular, we consider cliques, and show that for uniform weights, either all nodes move together, or none do.

Proposition 6. Consider n nodes which are all fully connected, i.e., form a clique, and assume uniform edge weights. Then for any dimension d, any assignment of features x_1, . . . , x_n ∈ ℝ^d, and for any classifier h, either (i) all n nodes move in the first round, or (ii) none of the nodes move at all.

Proof. Consider the case in which at least one node i moves in the first round. Denote by z the change in x_i made in order to cross the classifier, i.e., z = x_i^(1) − x_i^(0). Note that in order for x_i to move, the following conditions must be satisfied:

∥z∥₂ ≤ 2,   θ⊤ϕ(x_i^(1), x_{−i}^(0)) + b = 0

We now show that every other node t, if it moves a distance of
1. What is the focus and contribution of the paper on strategic classification?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its assumptions and limitations?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the problem definition, analysis framework, and empirical evaluation?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors consider the problem of strategic classification on graphs under a simplified linear graph model. The authors consider an iterative strategy for nodes changing features. An approximate learning algorithm is proposed to learn a robust classifier. The authors experiment on both a synthetic 1-d case and three real-world graph datasets.

Strengths And Weaknesses
Strengths
- The authors study an interesting question: generalizing the i.i.d. assumption in strategic classification to graph neural networks.

Weaknesses
- Several assumptions in the problem definition are too simplistic for practical usage. On the model side: (1) it assumes nodes can directly change their embeddings instead of their features, which is mostly not the case; (2) the assumption of linearity and of only considering direct neighbors severely limits the expressiveness of the GNN. On the strategy side, although assuming access to h is already a strong assumption, access to the features of all neighboring nodes is far too strong to be practical, especially in multi-step updates. Also, the assumption that all nodes have the same incentive to move towards the same class is limiting.
- The multi-step analysis framework is not appropriate for the problem. It is much more natural to consider game-theoretic concepts like equilibrium. Several results from the analysis are straightforward. It would be very interesting to see results such as convergence to equilibrium. Also, the assumption of setting the gain to 2 if the prediction is flipped is arbitrary.
- The empirical evaluation uses different assumptions compared to the preceding model and analysis. Section 2 allows nodes to change features arbitrarily as long as the cost is smaller than the gain. However, Section 5.2 assumes a max distance, which induces quite different behavior.

Clarity, Quality, Novelty And Reproducibility
Overall the paper is well written and easy to follow. It studies a new problem; however, the approach is only marginally novel.
ICLR
Title
Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients

Abstract
Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans tend to be more sensitive to lower-frequency (larger-scale) patterns, we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel. We apply our regularization onto several popular training methods, demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness. Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients.

1 INTRODUCTION

In recent years, deep learning models have demonstrated remarkable capabilities for predictive modeling in computer vision, leading some to liken their abilities on perception tasks to those of humans (e.g., Weyand et al., 2016). However, under closer inspection, the limits of such claims to the narrow scope of i.i.d. data become clear. For example, when faced with adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015) or even in non-adversarial, domain-agnostic cross-domain evaluations (Wang et al., 2019a;b; Carlucci et al., 2019), performance collapses, dispelling claims of human-like perceptive capabilities and calling into doubt more ambitious applications of this technology in the wild.

A long line of recent research has investigated the robustness of neural networks, including investigations of the high-dimensional nature of models (Fawzi et al., 2018), enlarging the gaps between decision boundaries (Zhang et al., 2019a), training the models with augmented examples through attack methods (Madry et al., 2018), and even guaranteeing the robustness of models within given radii of perturbation (Wong & Kolter, 2018; Cohen et al., 2019). Compared to earlier methods, these recent works enjoy stronger robustness, both as assessed via theoretical guarantees and empirically via quantitative performance against strong attacks. However, despite the success of these techniques, vulnerabilities to new varieties of attacks are frequently discovered (Zhang et al., 2019b).

In this paper, we aim to lessen the dependency of neural networks on high-frequency patterns in images, regularizing CNNs to focus on the low-frequency components. Therefore, the main argument of this paper is that by regularizing the CNN to be most sensitive to the low-frequency components of an image, we can improve the robustness of models. Interestingly, this also appears to lead to more perceptually-aligned gradients. Further, as Wang et al.
(2019c) explicitly defined the low (or high)-frequency components as images reconstructed from the low (or high)-end of the image frequency domain (as is frequently discussed in the neuroscience literature addressing human recognition of shapes (Bar, 2004) or faces (Awasthi et al., 2011)), we continue with this definition and demonstrate that a smooth kernel can filter out the high-frequency components and improve the models' robustness. We test our ideas and show the empirical improvement over popular adversarially robust methods with standard evaluations, and we further use model interpretation methods to understand how the models make decisions, demonstrating that the regularization helps the model to generate more perceptually-aligned gradients.

2 RELATED WORK

Adversarial examples are samples with small perturbations applied that are imperceptible to humans but can nevertheless induce misclassification in machine learning models (Szegedy et al., 2013). The discovery of adversarial examples spurred a torrent of research, much of it consisting of an arms race between those inventing new attack methods and others offering defenses to make classifiers robust to these sorts of attacks. We refer to survey papers such as (Akhtar & Mian, 2018; Chakraborty et al., 2018) and only list a few of the most relevant works on applying regularizations to networks to improve adversarial robustness, such as regularizations constraining the Lipschitz constant of the network (Cisse et al., 2017) (Lipschitz smoothness), regularizing the scale of gradients (Ross & Doshi-Velez, 2018; Jakubovitz & Giryes, 2018) (smooth gradients), regularizing the curvature of the loss surface (Moosavi-Dezfooli et al., 2019) (smooth loss curvature), and promoting the smoothness of the model distribution (Miyato et al., 2015). These regularizations also use the concept of "smoothness," but in a sense different from ours (small differences among adjacent weights).

Recently, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) has become one of the most popular defense methods, based on the simple idea of augmenting the training data with samples generated through attack methods (i.e., threat models). While adversarial training excels across many evaluations, recent evidence exposes new limitations (Zhang et al., 2019b), suggesting that adversarial robustness remains a challenge.

Key differences: In this paper, we present a new technique penalizing differences among adjacent components of convolutional kernels. Moreover, we expand upon the recent literature demonstrating connections between adversarial robustness and perceptually-aligned gradients.

3 SMOOTH KERNEL REGULARIZATION

Intuition. High-frequency components of images are those reconstructed from the high end of the image frequency domain through the inverse Fourier transform. This definition also accords with neuroscience findings demonstrating that humans tend to rely on the low-frequency components of images to recognize shapes (Bar, 2004) and faces (Awasthi et al., 2011). Therefore, we argue that the smooth kernel regularization is effective because it helps to produce models less sensitive to high-frequency patterns in images.
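To make this definition concrete, here is a minimal numpy sketch of splitting an image into its low- and high-frequency components via a radial mask in the Fourier domain; the function name, the radius r, and the use of a circular mask are illustrative choices of ours, not details taken from the paper.

```python
import numpy as np

def frequency_split(img: np.ndarray, r: float):
    """Split a grayscale image into low/high-frequency components.
    Low (high) components are reconstructed from frequencies inside
    (outside) a centered disk of radius r, via inverse Fourier transform."""
    f = np.fft.fftshift(np.fft.fft2(img))            # center the zero frequency
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= r
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)).real
    return low, high

img = np.random.rand(32, 32)                          # stand-in for a real image
low, high = frequency_split(img, r=4.0)
assert np.allclose(low + high, img, atol=1e-8)        # the two parts sum back to img
```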
We define a smooth kernel as a convolutional kernel whose weight at each position does not differ much from those of its neighbors, i.e., (w_{i,j} − w_{h,k})² is a small number for every (h, k) ∈ N(i, j), where w denotes the convolutional kernel weights, i, j denote the indices of the convolutional kernel w, and N(i, j) denotes the set of spatial neighbors of (i, j). We note two points that support our intuition.

1. The frequency domain of a smooth kernel has only negligible high-frequency components. This argument can be shown with Theorem 1 in (Platonov, 2005). Roughly, the idea is to view the weight matrix w as a function that maps the index of a weight to the weight, w(i, j) → w_{i,j}; then a smooth kernel can be seen as a Lipschitz function with constant α. As pointed out by Platonov (2005), Titchmarsh (1948) showed that when 0 < α < 1, in the frequency domain, the sum of all the high-frequency components with a radius greater than r will converge to a small number, suggesting that the high-frequency components (when r is large) are negligible.

2. A kernel with negligible high-frequency components will weigh the high-frequency components of input images accordingly. This argument can be shown through the Convolution Theorem (Bracewell, 1986), which states that w ⊛ x = F⁻¹(F(w) ⊙ F(x)), where F(·) stands for the Fourier transform, ⊛ stands for the convolution operation, and ⊙ stands for point-wise multiplication. As the theorem states, the convolution operation on images is equivalent to element-wise multiplication in the image frequency domain. Therefore, roughly, if w has negligible high-frequency components in the frequency domain, it will weigh the high-frequency components of x accordingly with negligible weights. Naturally, this argument only pertains to a single convolution, and we rely on our intuition that repeated applications of these smooth kernels across multiple convolution layers in a nonlinear deep network will have some cumulative benefit.

Formally, we calculate our regularization term R₀(w) as follows:

R₀(w) = Σ_{i,j} Σ_{(h,k)∈N(i,j)} (w_{i,j} − w_{h,k})²

We also aim to improve this regularization by trying a few additional heuristics:

• First, we notice that directly appending R₀(w) will sometimes lead to models that achieve a small value of R₀(w) by directly scaling down every coefficient of w proportionally, without changing the fluctuation pattern of the weights. To fix this problem, we directly subtract the scale of w (i.e., Σ_{i,j} w²_{i,j}) from R₀(w).

• Another heuristic to fix this same problem is to directly divide R₀(w) by the scale of w. Empirically, we do not observe significant differences between these two heuristics. We settle on the first heuristic because of the difficulty of calculating the gradient when a matrix is in the denominator.

• Finally, we empirically observe that the regularization above plays a significant role during the early stages of training, but may damage the overall performance later, when the regularization pulls towards smoothness too much. To mitigate this problem, we use an exponential function to strengthen the effects of the regularization when the value is big and to weaken it when the value is small.

Overall, our final regularization is:

R(w) = exp( Σ_{i,j} Σ_{(h,k)∈N(i,j)} (w_{i,j} − w_{h,k})² − Σ_{i,j} w²_{i,j} )

In practice, the convolutional kernel is usually a 4-dimensional tensor, while our method only encourages smoothness over the two spatial dimensions corresponding to the 2D images. Thus, we only regularize through these two dimensions, broadcasting the operation through the channels.
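As a concrete illustration, here is a minimal PyTorch sketch of R(w) for a 4-D kernel tensor; it counts each neighboring pair once (right and down differences over the two spatial dimensions), anticipating the implementation note that follows. The function name is ours, and restricting neighbors to the 4-neighborhood is an assumption, not a detail stated in the paper.

```python
import torch

def smooth_kernel_reg(w: torch.Tensor) -> torch.Tensor:
    """R(w) = exp(sum of squared differences between spatially adjacent
    weights - sum of squared weights), for a kernel w of shape
    (out_channels, in_channels, kH, kW). Differences are taken only over
    the two spatial dimensions, broadcast across channels; each neighboring
    pair is counted once."""
    dh = (w[..., 1:, :] - w[..., :-1, :]).pow(2).sum()  # vertical neighbor pairs
    dw = (w[..., :, 1:] - w[..., :, :-1]).pow(2).sum()  # horizontal neighbor pairs
    return torch.exp(dh + dw - w.pow(2).sum())

# Usage sketch: append lam * R(w) for each convolutional layer to the loss.
conv = torch.nn.Conv2d(3, 8, kernel_size=5)
penalty = smooth_kernel_reg(conv.weight)
```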
Because a repeated calculation of each kernel component's distance from its neighbors would double-count some pairs, our implementation instead enumerates over all pairs of neighbors, counting each squared difference only once towards the total penalty.

We can directly append the regularization λR(w) to most loss functions, where λ is a tuning hyperparameter. In the following experiments, we append λR(w) to the vanilla loss function (cross-entropy loss), the Trades loss (Zhang et al., 2019a), the adversarial training loss (Madry et al., 2018), and a variation of the logit pairing loss (Kannan et al., 2018), as introduced in the following paragraphs.

Adversarial training works by fitting the model using adversarial examples generated on the fly at train time by the threat model. The Trades loss fits the model with clean examples while regularizing the softmax of augmented adversarial examples to be close to that produced for corresponding clean examples; a natural alternative is to fit the model with augmented adversarial examples while regularizing the softmax of clean examples to be close to that of the corresponding adversarial examples, which is related to logit pairing. However, to make the comparison consistent, we use a variation of logit pairing, penalizing the KL divergence of the softmax (rather than the ℓ2 distance over logits), following the Trades loss, which also uses KL divergence over the softmax as the distance metric. To be specific, with standard notation such as ⟨X, Y⟩ denoting a data set and ⟨x, y⟩ denoting a sample, the logit pairing loss is formalized as:

min E_{⟨x,y⟩∼⟨X,Y⟩} [ l(f(x′; θ); y) + γ k(f_l(x′; θ), f_l(x; θ)) ]   where   x′ = argmax_{d(x′,x) ≤ ε} l(f(x′; θ); y)

where d(·, ·) and k(·, ·) are distance functions, f_l(·; ·) denotes the model f(·; ·) but outputs the softmax instead of a prediction, l(·, ·) is a cost function, γ is a tuning hyperparameter, and ε is the upper bound of the perturbation. In our following experiments, we take d(·, ·) to be the ℓ∞ norm, following the popular adversarial training setup, and k(·, ·) to be the KL divergence, following the standard Trades loss. Intuitively, our usage of KL divergence in the logit pairing loss is argued to be advantageous because Pinsker's inequality suggests that the KL divergence upper-bounds the squared total variation (TV) distance (e.g., Csiszar & Körner, 2011); the usage of KL divergence can thus be seen as a regularization that limits the hypothesis space to parameters that yield a small TV distance over perturbations of samples, which is linked to the robustness of an estimator, a topic that has been studied by the statistics community for decades (e.g., see (Diakonikolas et al., 2019) and references within).

4 EXPERIMENTS

To empirically validate our methods, we first consider a simple synthetic experiment to demonstrate the effectiveness of our proposed solutions. Then, with standard data sets such as MNIST (LeCun, 1998), FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky & Hinton, 2009) and Restricted ImageNet (Tsipras et al., 2019), we evaluate our methods with well-established criteria, such as ℓ∞-bounded accuracy. We also leverage saliency-based visualization methods to understand how the model understands each class. Most experiments are conducted with a simple convolutional neural network with two convolution layers and two fully connected layers, while the CIFAR10 experiment is conducted with ResNet18 and the Restricted ImageNet experiment is conducted with ResNet50 (more details of the models are in the Appendix).
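For illustration, here is a minimal PyTorch sketch of this variation of logit pairing, assuming an adversarial example x_adv has already been produced by an ℓ∞-bounded PGD threat model; the function name and the KL direction (KL(clean ∥ adv), one of the two possible conventions, matching the one typically used in TRADES-style code) are our assumptions.

```python
import torch
import torch.nn.functional as F

def logit_pairing_loss(model, x, x_adv, y, gamma):
    """Cross-entropy on the adversarial example, plus gamma times a KL term
    pulling the clean softmax and the adversarial softmax together."""
    logits_adv = model(x_adv)      # x_adv: l_inf-bounded PGD example for x
    logits_clean = model(x)
    ce = F.cross_entropy(logits_adv, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1),
                  reduction="batchmean")
    return ce + gamma * kl
```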
As we mentioned previously, we apply the new regularization to four different losses: the vanilla loss (denoted V), the Trades loss (denoted T) (Zhang et al., 2019a), adversarial training (denoted A) (Madry et al., 2018), and our variation of logit pairing (denoted L). T, A, and L all adopt ℓ∞-norm-bounded PGD as the threat model. We use VR, TR, AR, LR to denote the methods after our regularization is plugged in. We evaluate our methods against a wide range of adversarial attack methods, including FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018), C&W (Carlini & Wagner, 2017), DeepFool (both ℓ2 and ℓ∞) (Moosavi-Dezfooli et al., 2016), ADef, a method that iteratively applies small deformations to the image (Alaifari et al., 2019), and Salt&Pepper, a black-box method that adds noise to the image. For these attack methods, we use the default parameters in Foolbox (Rauber et al., 2017), and our experiments suggest that these default parameters are effective enough in most cases. For every data set, we first tune the ℓ∞-norm perturbation bound of the adversarial training method and then use the same setting for the Trades loss and the variation of logit pairing. We tune γ within {0.1, 1.0, 10.0} and tune λ within {0.01, 0.1, 1.0, 10.0, 100.0}.

4.1 SYNTHETIC EXPERIMENTS FOR SANITY CHECKING

We first use a basic data set of four shapes1 to test whether our proposed method helps regularize the model to behave as we desire. Each image in this data set has a white background and a black foreground depicting one of the four shapes: circle, star, square, and triangle. Our goal is to train a convolutional neural network to classify the images into one of these four shapes. We compare the models trained with the four basic losses V, T, A, L and these models with our regularization, denoted as VR, TR, AR, and LR, with λ = 100.0. To further test our idea, we also test the regularization with the hyperparameter set to a negative value, λ = −100.0, to inspect the consequences when we regularize the model towards high-frequency kernels. The resulting models are denoted VH, TH, AH, LH, respectively, according to the basic losses.

We report our inspections in Figure 1: Figure 1(a) visualizes the convolutional kernels (due to space limitations, we only visualize the first four convolutional kernels); Figure 1(b) visualizes the corresponding frequency domain in absolute values; Figure 1(c) visualizes the internal representation after an image depicting a star is passed through the kernels. Figure 1(a) shows that our regularization guides the model towards a smooth kernel, across all the basic losses. Also, if we apply our regularization with a negative parameter, the weights of the resulting kernel tend to fluctuate more dramatically. Figure 1(b) validates our argument that a smooth kernel only has negligible high-frequency components. As we can see, the frequency domain corresponding to the kernels when our regularization is applied shows significant differences between low-frequency components (center of the visualization) and high-frequency components (periphery of the visualization).

1https://www.kaggle.com/smeschke/four-shapes

Figure 1(c) further validates our intuition, showing that in comparison to internal representations summarized by kernels from the basic losses, those influenced by our regularization are more sensitive to the low-frequency signal (e.g.,
the shape of the input), and the internal representations under our regularization with a negative parameter tend to focus more on the high-frequency signals.

Further, we check the mechanism of our models by inspecting how adversarial examples deceive them. Figure 2 shows the four on-average most deceptive adversarial examples (those the models predict incorrectly with the highest confidence) generated by FGSM. Notation follows the same convention as the previous case, and O denotes the original image. While many images have to be perturbed by a human-perceivable amount to deceive the model, we can notice that the adversarial examples for models with our regularization (?R) tend to behave in a way that can be understood by a human. The most convincing examples are in the first row for A and AR, where we can clearly see that the adversarial examples alter the decisions from star to circle. Other adversarial examples for ?R models also introduce large areas that could be interpreted as the shape. In contrast, adversarial examples for other models tend to introduce scattered patches, which will probably not be considered the shape by most people. Also, if we apply our regularization with a negative parameter (?H), the patches tend to behave in an even more shattered manner.

4.2 STANDARD NUMERICAL EVALUATION

In Table 1, we report the prediction accuracy over the generated adversarial examples across the attack methods. For MNIST and FashionMNIST, we do not limit the ε of adversarial examples. In principle, when there is no constraint, one should always be able to find an adversarial example for any sample; in practice, however, many search attempts fail when the attack methods are set with the default hyperparameters in Foolbox. We consider these failures of searches (under default parameters) also a measure of the robustness of models. For CIFAR10 and Restricted ImageNet, we set ε to be 0.1 and 0.05, respectively (the maximum pixel value is 1.0). Overall, across most of the settings, our regularization helps achieve numerically the best adversarially robust models. Impressively, for MNIST and FashionMNIST, for some attack methods (e.g., both versions of DeepFool), our regularization can improve the robustness significantly even when only applied to the vanilla training loss, suggesting the importance of the smoothness regularization. Also, for these two datasets, the improvements of our regularization over the non-regularized counterparts are mostly significant. For CIFAR10 and Restricted ImageNet, the performance gains are less significant but still observable. In the Appendix, we report the accuracy-ε curves over ℓ0, ℓ2, and ℓ∞ distances, for more thorough comparisons. In general, the performances evaluated by the curves are consistent with the results in Table 1.

Figure 3: Sample-independent interpretation of models trained over MNIST (rows: Digits 0–9; columns: I, V, VR, T, TR, A, AR, L, LR, where I stands for the input).

Figure 4: Sample-independent interpretation of models for FashionMNIST (rows: T-shirt, Coat, Pullover, Dress, Trouser, Shirt, Sandal, Bag, Sneaker, Boot; columns as in Figure 3).

4.3 INSPECTING MODEL'S PERCEPTION OF CLASS

We also leverage one of the most classic model-interpretation methods, activation maximization (Erhan et al., 2009), to further demonstrate the strength of our regularization.
Concretely, we follow Simonyan et al. (2013) and Engstrom et al. (2019) and maximize the logit of a certain class so that the most representative features of that class are exaggerated in the input image. Specifically, starting with an input image of Gaussian noise, we apply projected gradient descent for 10,000 iterations with learning rate 0.001 to update the input image. Notice that the interpretation is sample-independent.

Figure 3 depicts what the models consider to be the digits. While V and T can barely be interpreted by a human, when our regularization is plugged in, the patterns become observable, with impressive examples such as Digits 0, 2, and 3. A can also deliver interpretable decisions (e.g., Digits 3 and 5), and our regularization significantly helps in other cases, such as Digits 0, 1, 2, 4, and 8. Figure 4 shows a similar story for the FashionMNIST dataset: while A might have the cleanest interpretation for the "sneaker" case, our regularization (especially AR) arguably has the best interpretation in all other cases, with good examples such as "Trouser," "Dress," and "Boot." Interestingly, AR is the only method that interprets "Bag" with a strap, and the average image of all training "Bag" samples in FashionMNIST is a bag with a strap.

Figure 5 shows the visualization of models trained on CIFAR10 (rows: plane, mobile, bird, cat, deer, dog, frog, horse, ship, truck; columns: I, V, VR, T, TR, A, AR, L, LR). While A seems to have the best interpretation in the "horse" case, AR and LR have equal or better interpretations in comparison with A in the other cases. Impressively, only AR and LR understand "bird," and only AR understands "deer." Figure 6 shows the visualization for Restricted ImageNet (results of simpler models are not shown because they cannot be interpreted). AR is the only method that can describe the outline of the "bird" and "crab" classes, while the models retain more or less similar interpretation power for the other labels. Other results, such as visualizations of targeted attacks through saliency-based methods and selective visualizations of adversarial examples generated along with the experiments, are shown in the Appendix. Overall, the empirical evidence supports our intuition in Section 3: the regularization helps push the model to focus on the low-frequency components of the image and thus leads to more perceptually-aligned gradients.

5 CONCLUSION

Inspired by neuroscience literature emphasizing the connection between low-frequency components and shape recognition (Bar, 2004; Awasthi et al., 2011), we proposed a smooth kernel regularization that forces the CNN to learn smooth convolutional kernels (kernels with small differences among adjacent weights) during training. As the relation between smoothness and low frequency can be argued intuitively and is supported by known theoretical results (Titchmarsh, 1948; Bracewell, 1986; Platonov, 2005), our regularization should help the model to depend more on the low-frequency components of images. To verify the effectiveness of the regularization, we plugged the idea into multiple training losses, including the vanilla loss, the Trades loss (Zhang et al., 2019a), the adversarial training loss (Madry et al., 2018), as well as a variation of the logit pairing loss (Kannan et al., 2018). With seven different attack methods, we demonstrate the empirical strength of our regularization with standard numerical evaluations.
Further, we also leverage standard model-interpretation methods to explain the decisions of the models, showing that our technique, like those demonstrated by Santurkar et al. (2019), tends to result in more perceptually-aligned gradients.

A MODEL AND HYPERPARAMETER CHOICES

For the MNIST and FashionMNIST data sets, the model is a simple architecture with two convolutional layers and two fully connected layers. The ℓ∞ perturbation bound of PGD is set to 0.3/1.0 for MNIST and 0.1/1.0 for FashionMNIST. For CIFAR10, the model is a ResNet18, and the ℓ∞ perturbation bound of PGD is set to 0.03/1.0 (roughly 8/255). For Restricted ImageNet, the model is a ResNet50, and the ℓ∞ perturbation bound of PGD is set to 0.005/1.0; during preprocessing, the pixel values of the images are divided by the standard deviation (0.2575), as is the perturbation bound. Also, for Restricted ImageNet, we start from either the standard ImageNet-pretrained ResNet50 (for the V and T losses) or the adversarially trained ResNet50 on Restricted ImageNet (Santurkar et al., 2019) (for the A and L losses). With our hardware settings (NVIDIA 1080Ti), we cannot effectively train the Trades loss over a ResNet50.

B ACCURACY-EPSILON CURVES

The accuracy-epsilon curves for the ℓ0, ℓ2, and ℓ∞ bounds are shown in Figure 7, Figure 8, and Figure 9.

C TARGETED ATTACK

We also take advantage of the gradient to perform targeted attacks, as shown in the following figures. The titles of the columns describe the original classes, and the titles of the rows describe the target classes.

D SELECTIVE ADVERSARIAL EXAMPLES

We visualize the generated adversarial examples to help us evaluate the models. We visualize the on-average most deceptive examples (those with the highest prediction confidence on the wrong class), plotting one example for each class of the data. For MNIST and FashionMNIST, we focus on the visualization of adversarial examples generated by the ADef attack because this attack is more visually aligned with how humans perceive images.
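As referenced in Section 4.3, the following is a minimal PyTorch sketch of the activation-maximization procedure (10,000 gradient steps on a class logit with learning rate 0.001, starting from Gaussian noise). The projection onto the valid pixel range is our guess at the "projected" step, which the paper does not spell out, and the function name is ours.

```python
import torch

def activation_maximization(model, target_class, shape, steps=10000, lr=0.001):
    """Exaggerate the most representative features of a class by
    gradient ascent on its logit, starting from Gaussian noise."""
    x = torch.randn(1, *shape, requires_grad=True)   # Gaussian-noise input
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logit = model(x)[0, target_class]
        (-logit).backward()                          # ascend the target logit
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                       # project to valid pixels (assumed)
    return x.detach()
```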
1. What is the main contribution of the paper regarding adversarial attacks?
2. How effective is the proposed regularization scheme in improving model robustness against adversarial attacks?
3. Are there any concerns or limitations regarding the empirical evaluation of the proposed defense?
4. How does the proposed method compare to other approaches in terms of improving model robustness?
5. Is there any evidence that the proposed regularization scheme can effectively remove high-frequency components from the input data?
Review
Review

Paper summary: This paper argues that reducing the reliance of neural networks on high-frequency components of images could help robustness against adversarial examples. To attain this goal, the authors propose a new regularization scheme that encourages convolutional kernels to be smoother. The authors augment standard loss functions with the proposed regularization scheme and study the effect on adversarial robustness, as well as on the perceptual alignment of model gradients.

Comments: I will first discuss some high-level concerns I have with the paper, followed by more specific comments.

Motivation and general idea: The idea of improving model robustness by eliminating high-frequency components in the data is not new. This has in fact been the motivation behind several (later shown to be unsuccessful) defenses such as JPEG compression. In recent work, Yin et al. explore this phenomenon in depth, and importantly discuss how adversarial examples are by no means entirely a high-frequency phenomenon. Specifically, they show that while adversarial examples for natural models do tend to be biased towards higher-frequency components, this is not true for robust models. They argue that it is in general always possible to find adversarial examples in the lower end of the frequency spectrum (for instance, transfer attacks from robust models). Thus, based on evidence from prior work, it is unlikely that a defense based entirely on removing high-frequency components from the input data can be successful.

Empirical evaluation: There are significant issues with the empirical evaluation of the proposed defense. In particular, in Table 1 (also Figures 7-9 in the Appendix):

1. The numbers suggest that FGSM is more successful as an attack than PGD. The latter is a multi-step version of the former, and hence should be strictly better (more successful in lowering model accuracy). This clearly highlights that the PGD attack is not being used correctly/not run with enough steps, and thus the numbers are not an accurate reflection of model robustness.

2. At a higher level, the numbers that the authors highlight in the table are the best performance over attacks. This is not the correct way to evaluate robustness, which must always be reported as the *worst-case performance* of the model and hence the lowest accuracy over attacks. If one takes this into consideration, it is clear that the proposed regularization does not really improve robustness.

3. Furthermore, the authors state they use default parameters from Foolbox to evaluate their models. The issue with this is evident in the MNIST/FashionMNIST results, where even with arbitrary eps, strong attacks like PGD are not able to break the model. This does not mean that the model is robust; it just means that the default hyperparameters from Foolbox cannot break the model. The authors need to re-run the attacks with different step sizes/step counts for the various attacks. The goal in evaluating model robustness should not be finding one attack (or one set of hyperparameters) which is not able to fool the model, but to show that *no attack* (for the given perturbation set) can lower model accuracy.

4. The eps values used for the CIFAR-10 and ImageNet experiments are extremely large, and not standard in the literature. In fact, based on my experience, it is probably not possible to be (too) robust to eps as large as 0.1 on CIFAR, as this is large enough to visibly change the class even for a human (cf. Tsipras et al.).
Thus, I think the robustness evaluation in Table 1 is incorrect and violates some basic sanity checks (such as PGD being stronger than FGSM), and hence does not accurately reflect the model’s true performance.

Other comments:

i. It is unclear why the sensitivity of the proposed regularization scheme to the scale of w can be fixed by subtracting the norm of w (the exp loss will just be raised to the power alpha^2 if you scale the weights by alpha). Dividing by the norm would be the more correct way to fix the scale invariance, and the authors should include results with this regularization, at least in the Appendix.

ii. In Figure 1, the kernels and activations that are visualized are after the first convolutional layer. The authors should perform a similar visualization after the second layer (or later layers in general) because it is somewhat obvious that you will get these properties at the first layer with the proposed regularization scheme (if the lambda is correctly tuned). The part that is unclear is how effective the regularization is over repeated applications of convolution coupled with non-linearities such as ReLU/pooling.

iii. While the visualizations in Figures 2-6 are interesting, it seems that in many of the cases in which +R looks semantically meaningful, it also looks meaningful with the base loss itself. It thus is unclear whether the perceptual alignment is coming from the base loss or from the added regularization. Moreover, the perceptual alignment of models should improve when their dependence on “human-meaningless” features reduces. Since high-frequency components are likely “human-meaningless” features, it is plausible that the proposed regularization scheme makes activation maximization/gradients more perceptually aligned. However, I think this is somewhat orthogonal to (and is not enough to tell us anything about) the robustness of the model itself, which is affected by its reliance on many kinds of “non-robust” features (Ilyas et al., 2019), of which high-frequency components might just be one example.

iv. In recent work by Wang et al., the authors conduct an interesting experiment to see how dependent the prediction of a given classifier is on high-frequency components in the data (vs. low-frequency components) (cf. Figure 1 in their paper). It would be interesting to see this experiment replicated with the proposed regularization scheme.

Overall, I think there are significant issues with the paper, especially in the empirical evaluation section. The authors need to re-evaluate their models with the strongest possible form of the attack and demonstrate an improvement in worst-case performance to actually establish the merits of the proposed regularization scheme. Thus, I recommend rejection.

References:
Yin, Dong, et al. "A Fourier perspective on model robustness in computer vision." arXiv preprint arXiv:1906.08988 (2019).
Tsipras, Dimitris, et al. "Robustness may be at odds with accuracy." arXiv preprint arXiv:1805.12152 (2018).
Ilyas, Andrew, et al. "Adversarial examples are not bugs, they are features." arXiv preprint arXiv:1905.02175 (2019).
Wang, Haohan, et al. "High frequency component helps explain the generalization of convolutional neural networks." arXiv preprint arXiv:1905.13545 (2019).
ICLR
Title
Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients

Abstract
Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans tend to be more sensitive to lower-frequency (larger-scale) patterns, we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel. We apply our regularization onto several popular training methods, demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness. Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients.

1 INTRODUCTION

In recent years, deep learning models have demonstrated remarkable capabilities for predictive modeling in computer vision, leading some to liken their abilities on perception tasks to those of humans (e.g., Weyand et al., 2016). However, under closer inspection, the limits of such claims to the narrow scope of i.i.d. data become clear. For example, when faced with adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015) or even in non-adversarial, domain-agnostic cross-domain evaluations (Wang et al., 2019a;b; Carlucci et al., 2019), performance collapses, dispelling claims of human-like perceptive capabilities and calling into doubt more ambitious applications of this technology in the wild.

A long line of recent research has investigated the robustness of neural networks, including investigations of the high-dimensional nature of models (Fawzi et al., 2018), enlarging the gaps between decision boundaries (Zhang et al., 2019a), training the models with examples augmented through attack methods (Madry et al., 2018), and even guaranteeing the robustness of models within given radii of perturbation (Wong & Kolter, 2018; Cohen et al., 2019). Compared to earlier methods, these recent works enjoy stronger robustness, both as assessed via theoretical guarantees and empirically via quantitative performance against strong attacks. However, despite the success of these techniques, vulnerabilities to new varieties of attacks are frequently discovered (Zhang et al., 2019b).

In this paper, we aim to lessen the dependency of neural networks on high-frequency patterns in images, regularizing CNNs to focus on the low-frequency components. The main argument of this paper is that by regularizing the CNN to be most sensitive to the low-frequency components of an image, we can improve the robustness of models. Interestingly, this also appears to lead to more perceptually-aligned gradients. Further, as Wang et al.
(2019c) explicitly defined the low (or high)-frequency components as images reconstructed from the low (or high) end of the image frequency domain (as is frequently discussed in the neuroscience literature addressing human recognition of shapes (Bar, 2004) or faces (Awasthi et al., 2011)), we continue with this definition and demonstrate that a smooth kernel can filter out the high-frequency components and improve the models’ robustness. We test our ideas and show the empirical improvement over popular adversarially robust methods with standard evaluations, and we further use model-interpretation methods to understand how the models make decisions, demonstrating that the regularization helps the model generate more perceptually-aligned gradients.

2 RELATED WORK

Adversarial examples are samples with small perturbations applied that are imperceptible to humans but can nevertheless induce misclassification in machine learning models (Szegedy et al., 2013). The discovery of adversarial examples spurred a torrent of research, much of it consisting of an arms race between those inventing new attack methods and others offering defenses to make classifiers robust to these sorts of attacks. We refer to survey papers such as (Akhtar & Mian, 2018; Chakraborty et al., 2018) and only list a few of the most relevant works on applying regularizations to networks to improve adversarial robustness, such as regularizations constraining the Lipschitz constant of the network (Cisse et al., 2017) (Lipschitz smoothness), regularizing the scale of gradients (Ross & Doshi-Velez, 2018; Jakubovitz & Giryes, 2018) (smooth gradients), regularizing the curvature of the loss surface (Moosavi-Dezfooli et al., 2019) (smooth loss curvature), and promoting the smoothness of the model distribution (Miyato et al., 2015). These regularizations also use the concept of “smoothness,” but in a sense different from ours (small differences among adjacent weights). Recently, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) has become one of the most popular defense methods, based on the simple idea of augmenting the training data with samples generated through attack methods (i.e., threat models). While adversarial training excels across many evaluations, recent evidence exposes its limitations (Zhang et al., 2019b), suggesting that adversarial robustness remains a challenge.

Key differences: In this paper, we present a new technique penalizing differences among the adjacent components of convolutional kernels. Moreover, we expand upon the recent literature demonstrating connections between adversarial robustness and perceptually-aligned gradients.

3 SMOOTH KERNEL REGULARIZATION

Intuition. High-frequency components of images are those reconstructed from the high end of the image frequency domain through the inverse Fourier transform. This definition is also consistent with findings by neuroscientists, who demonstrated that humans tend to rely on the low-frequency components of images to recognize shapes (Bar, 2004) and faces (Awasthi et al., 2011). Therefore, we argue that the smooth kernel regularization is effective because it helps to produce models less sensitive to high-frequency patterns in images.
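Since the paper leans on this frequency-domain definition throughout, a minimal NumPy sketch of the decomposition may help. The radial mask and the function name are our own illustration of the definition attributed to Wang et al. (2019c), not code from the paper.

```python
import numpy as np

def frequency_split(img, r):
    """Split a grayscale image into low- and high-frequency components:
    reconstruct from the centered Fourier spectrum inside (outside)
    a circle of radius r."""
    f = np.fft.fftshift(np.fft.fft2(img))          # centered spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= r                               # low-frequency disk
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~mask)).real
    return low, high
```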
We define a smooth kernel as a convolutional kernel whose weight at each position does not differ much from those of its neighbors, i.e., (w_{i,j} - w_{h,k})^2 is small for every h,k \in N(i,j), where w denotes the convolutional kernel weight, i, j denote the indices of the convolutional kernel w, and N(i, j) denotes the set of spatial neighbors of i, j. We note two points that support our intuition.

1. The frequency domain of a smooth kernel has only negligible high-frequency components. This argument can be shown with Theorem 1 in (Platonov, 2005). Roughly, the idea is to view the weight matrix w as a function that maps the index of a weight to the weight, w(i, j) → w_{i,j}; a smooth kernel can then be seen as a Lipschitz function with constant α. As pointed out by Platonov (2005), Titchmarsh (1948) showed that when 0 < α < 1, in the frequency domain, the sum of all the high-frequency components with a radius greater than r will converge to a small number, suggesting that the high-frequency components (when r is large) are negligible.

2. A kernel with negligible high-frequency components will weigh the high-frequency components of input images accordingly. This argument can be shown through the Convolution Theorem (Bracewell, 1986), which states that w \ast x = \mathcal{F}^{-1}(\mathcal{F}(w) \odot \mathcal{F}(x)), where \mathcal{F}(\cdot) stands for the Fourier transform, \ast stands for the convolution operation, and \odot stands for point-wise multiplication. As the theorem states, the convolution operation on images is equivalent to element-wise multiplication in the image frequency domain. Therefore, roughly, if w has negligible high-frequency components in the frequency domain, it will weigh the high-frequency components of x accordingly with negligible weights. Naturally, this argument only pertains to a single convolution, and we rely on our intuition that repeated applications of these smooth kernels across multiple convolution layers in a nonlinear deep network will have some cumulative benefit.

Formally, we calculate our regularization term R_0(w) as follows:

R_0(w) = \sum_{i,j} \sum_{h,k \in N(i,j)} (w_{i,j} - w_{h,k})^2.

We also aim to improve this regularization with a few additional heuristics:

• First, we notice that directly appending R_0(w) will sometimes lead to models that achieve a small value of R_0(w) by scaling down every coefficient of w proportionally, without changing the fluctuation pattern of the weights. To fix this problem, we directly subtract the scale of w (i.e., \sum_{i,j} w_{i,j}^2) from R_0(w).

• Another heuristic to fix this same problem is to directly divide R_0(w) by the scale of w. Empirically, we do not observe significant differences between these two heuristics. We settle on the first heuristic because of the difficulty of calculating the gradient when a matrix is the denominator.

• Finally, we empirically observe that the regularization above plays a significant role during the early stage of training, but may damage the overall performance later when the regularization pulls too much towards smoothness. To mitigate this problem, we use an exponential function to strengthen the effects of the regularization when the value is big and to weaken them when the value is small.

Overall, our final regularization is:

R(w) = \exp\Big( \sum_{i,j} \sum_{h,k \in N(i,j)} (w_{i,j} - w_{h,k})^2 - \sum_{i,j} w_{i,j}^2 \Big).

In practice, the convolutional kernel is usually a 4-dimensional tensor, while our method only encourages smoothness over the two spatial dimensions corresponding to the 2D images. Thus, we only regularize through these two dimensions, broadcasting the operation through the channels.
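For concreteness, a minimal PyTorch sketch of R(w) follows. It assumes a 4-neighborhood for N(i, j) (the paper does not pin down whether diagonals are included), counts each horizontal and vertical pair once (matching the implementation note that follows), and broadcasts over the channel dimensions; the function name is ours.

```python
import torch

def smooth_kernel_reg(w):
    """R(w) = exp( sum of squared differences between spatially adjacent
    kernel weights - squared Frobenius norm of w ).
    w: conv weight of shape (out_channels, in_channels, kH, kW)."""
    dh = (w[..., 1:, :] - w[..., :-1, :]).pow(2).sum()  # vertical neighbor pairs
    dw = (w[..., :, 1:] - w[..., :, :-1]).pow(2).sum()  # horizontal neighbor pairs
    return torch.exp(dh + dw - w.pow(2).sum())

# usage: loss = base_loss + lam * sum(smooth_kernel_reg(m.weight)
#                                     for m in model.modules()
#                                     if isinstance(m, torch.nn.Conv2d))
```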
Because a repeated calculation of each kernel component’s distance to its neighbors would double-count some pairs, our implementation instead enumerates over all pairs of neighbors, counting each squared difference only once towards the total penalty.

We can directly append the regularization λR(w) to most loss functions, where λ is a tuning hyperparameter. In the following experiments, we append λR(w) to the vanilla loss function (cross-entropy loss), the Trades loss (Zhang et al., 2019a), the adversarial training loss (Madry et al., 2018), and a variation of the logit pairing loss (Kannan et al., 2018), as introduced in the following paragraphs. Adversarial training works by fitting the model using adversarial examples generated on the fly at train time by the threat model. The Trades loss fits the model with clean examples while regularizing the softmax of augmented adversarial examples to be close to that produced for the corresponding clean examples; a natural alternative is to fit the model with augmented adversarial examples while regularizing the softmax of clean examples to be close to that of the corresponding adversarial examples, which is related to logit pairing. However, to make the comparison consistent, we use a variation of logit pairing, penalizing the KL divergence of the softmax (rather than the ℓ2 distance over logits), following the Trades loss, which also uses the KL divergence over the softmax as the distance metric. To be specific, with the standard notations 〈X, Y〉 denoting a data set and 〈x, y〉 denoting a sample, the logit pairing loss is formalized as (a code sketch of this objective is given after the experimental setup below):

\min_\theta \; \mathbb{E}_{\langle x,y \rangle \sim \langle X,Y \rangle} \big[\, l(f(x'; \theta); y) + \gamma \, k(f_l(x'; \theta), f_l(x; \theta)) \,\big], \quad \text{where } x' = \operatorname*{argmax}_{d(x',x) \le \epsilon} \; l(f(x'; \theta); y),

where d(·, ·) and k(·, ·) are distance functions, f_l(·; ·) denotes the model f(·; ·) but outputs the softmax instead of a prediction, l(·, ·) is a cost function, γ is a tuning hyperparameter, and ε is the upper bound of the perturbation. In our following experiments, we take d(·, ·) to be the ℓ∞ norm, following the popular adversarial training set-up, and k(·, ·) to be the KL divergence, following the standard Trades loss. Intuitively, our usage of the KL divergence in the logit pairing loss is argued to be advantageous because Pinsker’s inequality suggests that the KL divergence upper-bounds the total variation (TV) distance (e.g., Csiszar & Körner, 2011); the usage of the KL divergence can thus be seen as a regularization that limits the hypothesis space to parameters that yield a small TV distance over perturbations of samples, which is linked to the robustness of an estimator, a topic that has been studied by the statistics community for decades (e.g., see (Diakonikolas et al., 2019) and references within).

4 EXPERIMENTS

To empirically validate our methods, we first consider a simple synthetic experiment to demonstrate the effectiveness of our proposed solutions. Then, with standard data sets such as MNIST (LeCun, 1998), FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky & Hinton, 2009), and Restricted ImageNet (Tsipras et al., 2019), we evaluate our methods with well-established criteria, such as ℓ∞-bounded accuracy. We also leverage saliency-based visualization methods to understand how the model understands each class. Most experiments are conducted with a simple convolutional neural network with two convolution layers and two fully connected layers, while the CIFAR10 experiment is conducted with a ResNet18 and the Restricted ImageNet experiment with a ResNet50 (more details of the models are in the Appendix).
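As promised above, here is a sketch of the logit-pairing variant with the stated choices: ℓ∞-bounded PGD for the inner maximization and KL divergence over the softmax for k(·,·). The PGD step size and step count are illustrative defaults rather than values from the paper, and the direction of the KL term is our reading of k(f_l(x'; θ), f_l(x; θ)).

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, alpha=0.01, steps=40):
    """Inner maximization: l_inf-bounded PGD on the cross-entropy loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel values
    return x_adv.detach()

def logit_pairing_loss(model, x, y, eps, gamma):
    """Fit adversarial examples; pull the clean softmax towards the adversarial one."""
    x_adv = pgd_linf(model, x, y, eps)
    logits_adv, logits_clean = model(x_adv), model(x)
    ce = F.cross_entropy(logits_adv, y)
    # KL(softmax(adv) || softmax(clean)); kl_div takes log-probs first, probs second
    kl = F.kl_div(F.log_softmax(logits_clean, dim=1),
                  F.softmax(logits_adv, dim=1), reduction="batchmean")
    return ce + gamma * kl
```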
As we mentioned previously, we apply the new regularization to four different losses: the vanilla loss (denoted V), the Trades loss (denoted T) (Zhang et al., 2019a), adversarial training (denoted A) (Madry et al., 2018), and our variation of logit pairing (denoted L). T, A, and L all adopt ℓ∞-norm-bounded PGD as the threat model. We use VR, TR, AR, and LR to denote the methods after our regularization is plugged in. We evaluate our methods against a wide range of adversarial attack methods, including FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018), C&W (Carlini & Wagner, 2017), DeepFool (both ℓ2 and ℓ∞) (Moosavi-Dezfooli et al., 2016), ADef, a method that iteratively applies small deformations to the image (Alaifari et al., 2019), and Salt&Pepper, a black-box method that adds noise to the image. For these attack methods, we use the default parameters in Foolbox (Rauber et al., 2017), and our experiments suggest that these default parameters are effective enough in most cases. For every data set, we first tune the ℓ∞-norm perturbation bound for the adversarial training method and then use the same setting for the Trades loss and the variation of logit pairing. We tune γ within {0.1, 1.0, 10.0} and tune λ within {0.01, 0.1, 1.0, 10.0, 100.0}.

4.1 SYNTHETIC EXPERIMENTS FOR SANITY CHECKING

We first use a basic data set of four shapes (https://www.kaggle.com/smeschke/four-shapes) to test whether our proposed method helps regularize the model to behave as we desire. Each image in this data set has a white background and a black foreground depicting one of the four shapes: circle, star, square, and triangle. Our goal is to train a convolutional neural network to classify the images into one of these four shapes. We compare the models trained with the four basic losses V, T, A, and L against these models with our regularization, denoted VR, TR, AR, and LR, with λ = 100.0. To further test our idea, we also run the regularization with the hyperparameter set to a negative value, λ = −100.0, to inspect the consequences when we regularize the model towards high-frequency kernels. The resulting models are denoted VH, TH, AH, and LH, respectively, according to their basic losses.

We report our inspections in Figure 1: Figure 1(a) visualizes the convolution kernels (due to space limitations, we only visualize the first four convolutional kernels); Figure 1(b) visualizes the corresponding frequency domain in absolute values; Figure 1(c) visualizes the internal representation after an image depicting a star is passed through the kernels. Figure 1(a) shows that our regularization guides the model towards a smooth kernel, across all the basic losses. Also, if we apply our regularization with a negative parameter, the weights of the resulting kernel tend to fluctuate more dramatically. Figure 1(b) validates our argument that a smooth kernel has only negligible high-frequency components. As we can see, the frequency domain corresponding to the kernels when our regularization is applied shows significant differences between the low-frequency components (center of the visualization) and the high-frequency components (periphery of the visualization). Figure 1(c) further validates our intuition, showing that in comparison to the internal representations produced by kernels from the basic losses, those influenced by our regularization are more sensitive to the low-frequency signal (e.g.,
the shape of the input), and the internal representation under our regularization with a negative parameter tends to focus more on the high-frequency signals.

Further, we check the mechanism of our model by inspecting how adversarial examples deceive the models. Figure 2 shows the four on-average most deceptive adversarial examples (those the models misclassify with the highest confidence) generated by FGSM. Notations follow the same convention as the previous case, and O denotes the original image. While many images have to be perturbed by a humanly perceivable amount to deceive the model, we can notice that the adversarial examples for models with our regularization (?R) tend to behave in a way that can be understood by a human. The most convincing examples are in the first row for A and AR, where we can clearly see that the adversarial examples alter the decisions from star to circle. Other adversarial examples for ?R models also introduce large areas that can be interpreted as the shape. In contrast, adversarial examples for the other models tend to introduce scattered patches, which most people would probably not consider to be the shape. Also, if we apply our regularization with a negative parameter (?H), the patches behave in an even more shattered manner.

4.2 STANDARD NUMERICAL EVALUATION

In Table 1, we report the prediction accuracy over the generated adversarial examples across the attack methods. For MNIST and FashionMNIST, we do not limit the ε of the adversarial examples. In principle, when there is no ε constraint, one should always be able to find an adversarial example for any sample; in practice, however, many search attempts fail when the attack methods are run with the default hyperparameters in Foolbox. We consider these failed searches (under default parameters) also a measure of the robustness of the models; a code sketch of this evaluation loop is given at the end of the Appendix. For CIFAR10 and Restricted ImageNet, we set ε to 0.1 and 0.05, respectively (the maximum pixel value is 1.0). Overall, across most of the settings, our regularization helps achieve numerically the best adversarially robust models. Impressively, for MNIST and FashionMNIST, for some attack methods (e.g., both versions of DeepFool), our regularization can improve the robustness significantly even when applied only to the vanilla training loss, suggesting the importance of the smooth regularization. Also, for these two datasets, the improvements of our regularization are mostly significant over the non-regularized counterparts. For CIFAR10 and Restricted ImageNet, the performance gains are less significant but still observable. In the Appendix, we report the accuracy-ε curves over ℓ0, ℓ2, and ℓ∞ distances for more thorough comparisons. In general, the performances evaluated by the curves are consistent with the results in Table 1.

Figure 3: Sample-independent interpretation of models trained over MNIST. I stands for the input.

Figure 4: Sample-independent interpretation of models for FashionMNIST. I stands for the input.

4.3 INSPECTING MODEL'S PERCEPTION OF CLASS

We also leverage one of the most classic model-interpretation methods, activation maximization (Erhan et al., 2009), to further demonstrate the strength of our regularization.
Concretely, we follow (Simonyan et al., 2013; Engstrom et al., 2019) and maximize the logit of a certain class so that the most representative features of that class are exaggerated in the input image. Specifically, starting from an input image of Gaussian noise, we apply projected gradient descent for 10,000 iterations with a learning rate of 0.001 to update the input image. Notice that the interpretation is sample-independent.

Figure 3 depicts what the models consider to be the digits. While V and T can barely be interpreted by a human, when our regularization is plugged in, the patterns become observable, with impressive examples such as Digits 0, 2, and 3. A can also deliver interpretable decisions (e.g., Digits 3 and 5), and our regularization significantly helps in other cases, such as Digits 0, 1, 2, 4, and 8. Figure 4 shows a similar story for the FashionMNIST dataset: while A might have the cleanest interpretation for the “Sneaker” case, our regularization (especially AR) probably has the best interpretation in all other cases, with good examples such as “Trouser,” “Dress,” and “Boot.” Interestingly, AR is the only method that interprets “Bag” with a strap, and the average image of all training “Bag” samples in FashionMNIST is a bag with a strap.

Figure 5 shows the visualization of models trained on CIFAR10. While A seems to have the best interpretation in the “horse” case, AR and LR have equal or better interpretations than A in the other cases. Impressively, only AR and LR understand “bird,” and only AR understands “deer.”

Figure 5: Sample-independent interpretation of models trained on CIFAR10 (rows: plane, mobile, bird, cat, deer, dog, frog, horse, ship, truck; columns: I, V, VR, T, TR, A, AR, L, LR).

Figure 6 shows the visualization for Restricted ImageNet (results of simpler models are not shown because they cannot be interpreted). AR is the only method that can describe the outline of the “bird” and “crab” classes, while the models seem to retain more or less similar interpretation power for the other labels. Other results, such as visualizations of targeted attacks through saliency-based methods and a selective visualization of adversarial examples generated along with the experiments, are shown in the Appendix. Overall, the empirical evidence supports our intuition in Section 3: the regularization helps push the model to focus on the low-frequency components of the image and thus leads to more perceptually-aligned gradients.

5 CONCLUSION

Inspired by neuroscience literature emphasizing the connection between low-frequency components and shape recognition (Bar, 2004; Awasthi et al., 2011), we proposed a smooth kernel regularization that forces the CNN to learn smooth convolutional kernels (kernels with small differences among adjacent weights) during training. As the relation between smoothness and low frequency can be argued intuitively and is supported by known theorems (Titchmarsh, 1948; Bracewell, 1986; Platonov, 2005), our regularization should help the model depend more on the low-frequency components of images. To verify the effectiveness of the regularization, we plug the idea into multiple training losses, including the vanilla loss, the Trades loss (Zhang et al., 2019a), the adversarial training loss (Madry et al., 2018), as well as a variation of the Logit Pairing loss (Kannan et al., 2018). With seven different attack methods, we demonstrate the empirical strength of our regularization with standard numerical evaluations.
Further, we also leverage standard model-interpretation methods to explain the decisions of the models, showing that our technique, like those demonstrated by Santurkar et al. (2019), tends to result in more perceptually-aligned gradients.

A MODEL AND HYPERPARAMETER CHOICES

For the MNIST and FashionMNIST data sets, the model is a simple architecture with two convolutional layers and two fully connected layers. The ℓ∞ perturbation bound of PGD is set to 0.3/1.0 for MNIST and 0.1/1.0 for FashionMNIST. For CIFAR10, the model is a ResNet18, and the ℓ∞ perturbation bound of PGD is set to 0.03/1.0 (roughly 8/255). For Restricted ImageNet, the model is a ResNet50, and the ℓ∞ perturbation bound of PGD is set to 0.005/1.0; during preprocessing, the pixel values of the images are divided by the standard deviation (0.2575), as is the perturbation bound. Also, for Restricted ImageNet, we start from either the standard ImageNet-pretrained ResNet50 (for the V and T losses) or the adversarially trained ResNet50 on Restricted ImageNet (Santurkar et al., 2019) (for the A and L losses). With our hardware settings (NVIDIA 1080Ti), we cannot effectively train the Trades loss over a ResNet50.

B ACCURACY-EPSILON CURVES

The accuracy-epsilon curves for the ℓ0, ℓ2, and ℓ∞ bounds are shown in Figure 7, Figure 8, and Figure 9.

C TARGETED ATTACK

We also take advantage of the gradient to perform targeted attacks, as shown in the following figures. The titles of the columns describe the original classes, and the titles of the rows describe the target classes.

D SELECTIVE ADVERSARIAL EXAMPLES

We visualize the generated adversarial examples to help us evaluate the models. We visualize the on-average most deceptive examples (those with the highest prediction confidence on the wrong class), plotting one example for each class of the data. For MNIST and FashionMNIST, we focus on the visualization of adversarial examples generated by the ADef attack because this attack is more visually aligned with how humans perceive images.
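As referenced in Section 4.2, the following is a sketch of the per-attack evaluation loop. It assumes the modern Foolbox 3 API rather than whatever version the paper used, so treat the exact calls as illustrative; the function name is ours.

```python
import foolbox as fb
import torch

def robust_accuracy(model, images, labels, eps):
    """Accuracy under a fixed l_inf budget for two Foolbox attacks."""
    model.eval()
    fmodel = fb.PyTorchModel(model, bounds=(0.0, 1.0))
    results = {}
    for name, attack in {"FGSM": fb.attacks.FGSM(),
                         "PGD": fb.attacks.LinfPGD()}.items():
        # success[i] is True when the attack found an adversarial example
        _, _, success = attack(fmodel, images, labels, epsilons=eps)
        results[name] = 1.0 - success.float().mean().item()
    return results
```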
1. What is the focus of the paper regarding convolutional kernels?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to improve adversarial robustness and human alignment?
3. Do you have any concerns about the evaluation methods used in the paper, especially when it comes to assessing adversarial robustness?
4. How effective is the proposed regularizer in improving the robustness of various models, and how does it impact human alignment?
5. Are there any limitations or potential issues with the experimental setup or results that the authors should address?
Review
Review

The authors propose a method for learning smoother convolutional kernels with the goal of improving robustness and human alignment. Specifically, they propose a regularizer penalizing large changes between consecutive pixels of the kernel, with the intuition of penalizing the use of high-frequency input components. They evaluate the impact of their method on the adversarial robustness of various models and on class visualization methods.

At a high level, the proposed idea is interesting. Reducing the reliance of classifiers on high-frequency patterns is a plausible way of improving human alignment. However, the experimental evidence presented is either unreliable or not sufficient to demonstrate the merit of the approach.

My biggest concern is the evaluation of adversarial robustness. The authors perform a number of off-the-shelf attacks using the foolbox library without accounting for fundamental differences between these attacks or basic principles of adversarial evaluation. Specifically, when evaluating the robust accuracy of a model, one needs to specify a concrete threat model (e.g., perturbations of L2-norm at most X), perform various attacks respecting this threat model, and report the number of examples correctly classified against _all_ attacks (see https://arxiv.org/abs/1902.06705 for additional discussion and justification). Instead, the authors consider attacks with unbounded perturbation (for which robust accuracy does not make sense) and attacks using different perturbation constraints (ADef), while reporting improvements on a per-attack basis. Hence, most of the columns of Table 1 are unreliable and cannot be taken into account when evaluating the models' robustness.

Even ignoring these issues, taking into account the only column of Table 1 that is indicative, the PGD attack, we do not see sufficient evidence for the merit of the method. With the exception of a single outlier for MNIST TR (which I cannot interpret), the proposed regularizer only marginally affects the robustness of a classifier. In particular, no models become robust by adding the regularizer, and the impact is limited for the case of already robust models.

Furthermore, the evidence in favor of human alignment is relatively weak. While the method does have some effect for the case of simple models (MNIST and Fashion-MNIST), for the case of complex models (for which visualization is actually a challenging problem) the improvement is virtually non-existent. For Figures 5 and 6, the columns with R have essentially the same visual quality as the original columns. The only exceptions are the L and A columns for CIFAR10 bird and deer, for which the original visualization fails. However, I find this confusing. There exist several works by now performing visualization using adversarially robust models, and I have never encountered such a failure before. I wonder if there is some issue with the training of these particular models.

Overall, while the high-level idea of the paper is interesting, the experimental evidence presented is either weak or unreliable. I will thus recommend rejection.
ICLR
Title
Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients

Abstract
Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans tend to be more sensitive to lower-frequency (larger-scale) patterns, we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel. We apply our regularization onto several popular training methods, demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness. Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients.

1 INTRODUCTION

In recent years, deep learning models have demonstrated remarkable capabilities for predictive modeling in computer vision, leading some to liken their abilities on perception tasks to those of humans (e.g., Weyand et al., 2016). However, under closer inspection, the limits of such claims to the narrow scope of i.i.d. data become clear. For example, when faced with adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015) or even in non-adversarial, domain-agnostic cross-domain evaluations (Wang et al., 2019a;b; Carlucci et al., 2019), performance collapses, dispelling claims of human-like perceptive capabilities and calling into doubt more ambitious applications of this technology in the wild.

A long line of recent research has investigated the robustness of neural networks, including investigations of the high-dimensional nature of models (Fawzi et al., 2018), enlarging the gaps between decision boundaries (Zhang et al., 2019a), training the models with examples augmented through attack methods (Madry et al., 2018), and even guaranteeing the robustness of models within given radii of perturbation (Wong & Kolter, 2018; Cohen et al., 2019). Compared to earlier methods, these recent works enjoy stronger robustness, both as assessed via theoretical guarantees and empirically via quantitative performance against strong attacks. However, despite the success of these techniques, vulnerabilities to new varieties of attacks are frequently discovered (Zhang et al., 2019b).

In this paper, we aim to lessen the dependency of neural networks on high-frequency patterns in images, regularizing CNNs to focus on the low-frequency components. The main argument of this paper is that by regularizing the CNN to be most sensitive to the low-frequency components of an image, we can improve the robustness of models. Interestingly, this also appears to lead to more perceptually-aligned gradients. Further, as Wang et al.
(2019c) explicitly defined the low (or high)-frequency components as images reconstructed from the low (or high) end of the image frequency domain (as is frequently discussed in the neuroscience literature addressing human recognition of shapes (Bar, 2004) or faces (Awasthi et al., 2011)), we continue with this definition and demonstrate that a smooth kernel can filter out the high-frequency components and improve the models’ robustness. We test our ideas and show the empirical improvement over popular adversarially robust methods with standard evaluations, and we further use model-interpretation methods to understand how the models make decisions, demonstrating that the regularization helps the model generate more perceptually-aligned gradients.

2 RELATED WORK

Adversarial examples are samples with small perturbations applied that are imperceptible to humans but can nevertheless induce misclassification in machine learning models (Szegedy et al., 2013). The discovery of adversarial examples spurred a torrent of research, much of it consisting of an arms race between those inventing new attack methods and others offering defenses to make classifiers robust to these sorts of attacks. We refer to survey papers such as (Akhtar & Mian, 2018; Chakraborty et al., 2018) and only list a few of the most relevant works on applying regularizations to networks to improve adversarial robustness, such as regularizations constraining the Lipschitz constant of the network (Cisse et al., 2017) (Lipschitz smoothness), regularizing the scale of gradients (Ross & Doshi-Velez, 2018; Jakubovitz & Giryes, 2018) (smooth gradients), regularizing the curvature of the loss surface (Moosavi-Dezfooli et al., 2019) (smooth loss curvature), and promoting the smoothness of the model distribution (Miyato et al., 2015). These regularizations also use the concept of “smoothness,” but in a sense different from ours (small differences among adjacent weights). Recently, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) has become one of the most popular defense methods, based on the simple idea of augmenting the training data with samples generated through attack methods (i.e., threat models). While adversarial training excels across many evaluations, recent evidence exposes its limitations (Zhang et al., 2019b), suggesting that adversarial robustness remains a challenge.

Key differences: In this paper, we present a new technique penalizing differences among the adjacent components of convolutional kernels. Moreover, we expand upon the recent literature demonstrating connections between adversarial robustness and perceptually-aligned gradients.

3 SMOOTH KERNEL REGULARIZATION

Intuition. High-frequency components of images are those reconstructed from the high end of the image frequency domain through the inverse Fourier transform. This definition is also consistent with findings by neuroscientists, who demonstrated that humans tend to rely on the low-frequency components of images to recognize shapes (Bar, 2004) and faces (Awasthi et al., 2011). Therefore, we argue that the smooth kernel regularization is effective because it helps to produce models less sensitive to high-frequency patterns in images.
We define a smooth kernel as a convolutional kernel whose weight at each position does not differ much from those of its neighbors, i.e., (w_{i,j} - w_{h,k})^2 is small for every h,k \in N(i,j), where w denotes the convolutional kernel weight, i, j denote the indices of the convolutional kernel w, and N(i, j) denotes the set of spatial neighbors of i, j. We note two points that support our intuition.

1. The frequency domain of a smooth kernel has only negligible high-frequency components. This argument can be shown with Theorem 1 in (Platonov, 2005). Roughly, the idea is to view the weight matrix w as a function that maps the index of a weight to the weight, w(i, j) → w_{i,j}; a smooth kernel can then be seen as a Lipschitz function with constant α. As pointed out by Platonov (2005), Titchmarsh (1948) showed that when 0 < α < 1, in the frequency domain, the sum of all the high-frequency components with a radius greater than r will converge to a small number, suggesting that the high-frequency components (when r is large) are negligible.

2. A kernel with negligible high-frequency components will weigh the high-frequency components of input images accordingly. This argument can be shown through the Convolution Theorem (Bracewell, 1986), which states that w \ast x = \mathcal{F}^{-1}(\mathcal{F}(w) \odot \mathcal{F}(x)), where \mathcal{F}(\cdot) stands for the Fourier transform, \ast stands for the convolution operation, and \odot stands for point-wise multiplication. As the theorem states, the convolution operation on images is equivalent to element-wise multiplication in the image frequency domain. Therefore, roughly, if w has negligible high-frequency components in the frequency domain, it will weigh the high-frequency components of x accordingly with negligible weights. Naturally, this argument only pertains to a single convolution, and we rely on our intuition that repeated applications of these smooth kernels across multiple convolution layers in a nonlinear deep network will have some cumulative benefit.

Formally, we calculate our regularization term R_0(w) as follows:

R_0(w) = \sum_{i,j} \sum_{h,k \in N(i,j)} (w_{i,j} - w_{h,k})^2.

We also aim to improve this regularization with a few additional heuristics:

• First, we notice that directly appending R_0(w) will sometimes lead to models that achieve a small value of R_0(w) by scaling down every coefficient of w proportionally, without changing the fluctuation pattern of the weights. To fix this problem, we directly subtract the scale of w (i.e., \sum_{i,j} w_{i,j}^2) from R_0(w).

• Another heuristic to fix this same problem is to directly divide R_0(w) by the scale of w. Empirically, we do not observe significant differences between these two heuristics. We settle on the first heuristic because of the difficulty of calculating the gradient when a matrix is the denominator.

• Finally, we empirically observe that the regularization above plays a significant role during the early stage of training, but may damage the overall performance later when the regularization pulls too much towards smoothness. To mitigate this problem, we use an exponential function to strengthen the effects of the regularization when the value is big and to weaken them when the value is small.

Overall, our final regularization is:

R(w) = \exp\Big( \sum_{i,j} \sum_{h,k \in N(i,j)} (w_{i,j} - w_{h,k})^2 - \sum_{i,j} w_{i,j}^2 \Big).

In practice, the convolutional kernel is usually a 4-dimensional tensor, while our method only encourages smoothness over the two spatial dimensions corresponding to the 2D images. Thus, we only regularize through these two dimensions, broadcasting the operation through the channels.
Because a repeated calculation of each kernel component’s distance to its neighbors would double-count some pairs, our implementation instead enumerates over all pairs of neighbors, counting each squared difference only once towards the total penalty.

We can directly append the regularization λR(w) to most loss functions, where λ is a tuning hyperparameter. In the following experiments, we append λR(w) to the vanilla loss function (cross-entropy loss), the Trades loss (Zhang et al., 2019a), the adversarial training loss (Madry et al., 2018), and a variation of the logit pairing loss (Kannan et al., 2018), as introduced in the following paragraphs. Adversarial training works by fitting the model using adversarial examples generated on the fly at train time by the threat model. The Trades loss fits the model with clean examples while regularizing the softmax of augmented adversarial examples to be close to that produced for the corresponding clean examples; a natural alternative is to fit the model with augmented adversarial examples while regularizing the softmax of clean examples to be close to that of the corresponding adversarial examples, which is related to logit pairing. However, to make the comparison consistent, we use a variation of logit pairing, penalizing the KL divergence of the softmax (rather than the ℓ2 distance over logits), following the Trades loss, which also uses the KL divergence over the softmax as the distance metric. To be specific, with the standard notations 〈X, Y〉 denoting a data set and 〈x, y〉 denoting a sample, the logit pairing loss is formalized as:

\min_\theta \; \mathbb{E}_{\langle x,y \rangle \sim \langle X,Y \rangle} \big[\, l(f(x'; \theta); y) + \gamma \, k(f_l(x'; \theta), f_l(x; \theta)) \,\big], \quad \text{where } x' = \operatorname*{argmax}_{d(x',x) \le \epsilon} \; l(f(x'; \theta); y),

where d(·, ·) and k(·, ·) are distance functions, f_l(·; ·) denotes the model f(·; ·) but outputs the softmax instead of a prediction, l(·, ·) is a cost function, γ is a tuning hyperparameter, and ε is the upper bound of the perturbation. In our following experiments, we take d(·, ·) to be the ℓ∞ norm, following the popular adversarial training set-up, and k(·, ·) to be the KL divergence, following the standard Trades loss. Intuitively, our usage of the KL divergence in the logit pairing loss is argued to be advantageous because Pinsker’s inequality suggests that the KL divergence upper-bounds the total variation (TV) distance (e.g., Csiszar & Körner, 2011); the usage of the KL divergence can thus be seen as a regularization that limits the hypothesis space to parameters that yield a small TV distance over perturbations of samples, which is linked to the robustness of an estimator, a topic that has been studied by the statistics community for decades (e.g., see (Diakonikolas et al., 2019) and references within).

4 EXPERIMENTS

To empirically validate our methods, we first consider a simple synthetic experiment to demonstrate the effectiveness of our proposed solutions. Then, with standard data sets such as MNIST (LeCun, 1998), FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky & Hinton, 2009), and Restricted ImageNet (Tsipras et al., 2019), we evaluate our methods with well-established criteria, such as ℓ∞-bounded accuracy. We also leverage saliency-based visualization methods to understand how the model understands each class. Most experiments are conducted with a simple convolutional neural network with two convolution layers and two fully connected layers, while the CIFAR10 experiment is conducted with a ResNet18 and the Restricted ImageNet experiment with a ResNet50 (more details of the models are in the Appendix).
As we mentioned previously, we apply the new regularization to four different losses: the vanilla loss (denoted V), the Trades loss (denoted T) (Zhang et al., 2019a), adversarial training (denoted A) (Madry et al., 2018), and our variation of logit pairing (denoted L). T, A, and L all adopt ℓ∞-norm-bounded PGD as the threat model. We use VR, TR, AR, and LR to denote the methods after our regularization is plugged in. We evaluate our methods against a wide range of adversarial attack methods, including FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018), C&W (Carlini & Wagner, 2017), DeepFool (both ℓ2 and ℓ∞) (Moosavi-Dezfooli et al., 2016), ADef, a method that iteratively applies small deformations to the image (Alaifari et al., 2019), and Salt&Pepper, a black-box method that adds noise to the image. For these attack methods, we use the default parameters in Foolbox (Rauber et al., 2017), and our experiments suggest that these default parameters are effective enough in most cases. For every data set, we first tune the ℓ∞-norm perturbation bound for the adversarial training method and then use the same setting for the Trades loss and the variation of logit pairing. We tune γ within {0.1, 1.0, 10.0} and tune λ within {0.01, 0.1, 1.0, 10.0, 100.0}.

4.1 SYNTHETIC EXPERIMENTS FOR SANITY CHECKING

We first use a basic data set of four shapes (https://www.kaggle.com/smeschke/four-shapes) to test whether our proposed method helps regularize the model to behave as we desire. Each image in this data set has a white background and a black foreground depicting one of the four shapes: circle, star, square, and triangle. Our goal is to train a convolutional neural network to classify the images into one of these four shapes. We compare the models trained with the four basic losses V, T, A, and L against these models with our regularization, denoted VR, TR, AR, and LR, with λ = 100.0. To further test our idea, we also run the regularization with the hyperparameter set to a negative value, λ = −100.0, to inspect the consequences when we regularize the model towards high-frequency kernels. The resulting models are denoted VH, TH, AH, and LH, respectively, according to their basic losses.

We report our inspections in Figure 1: Figure 1(a) visualizes the convolution kernels (due to space limitations, we only visualize the first four convolutional kernels); Figure 1(b) visualizes the corresponding frequency domain in absolute values; Figure 1(c) visualizes the internal representation after an image depicting a star is passed through the kernels. Figure 1(a) shows that our regularization guides the model towards a smooth kernel, across all the basic losses. Also, if we apply our regularization with a negative parameter, the weights of the resulting kernel tend to fluctuate more dramatically. Figure 1(b) validates our argument that a smooth kernel has only negligible high-frequency components. As we can see, the frequency domain corresponding to the kernels when our regularization is applied shows significant differences between the low-frequency components (center of the visualization) and the high-frequency components (periphery of the visualization). Figure 1(c) further validates our intuition, showing that in comparison to the internal representations produced by kernels from the basic losses, those influenced by our regularization are more sensitive to the low-frequency signal (e.g.,
the shape of the input), and the internal representation under our regularization with a negative parameter tends to focus more on the high-frequency signals.

Further, we check the mechanism of our model by inspecting how adversarial examples deceive the models. Figure 2 shows the four on-average most deceptive adversarial examples (those the models misclassify with the highest confidence) generated by FGSM. Notations follow the same convention as the previous case, and O denotes the original image. While many images have to be perturbed by a humanly perceivable amount to deceive the model, we can notice that the adversarial examples for models with our regularization (?R) tend to behave in a way that can be understood by a human. The most convincing examples are in the first row for A and AR, where we can clearly see that the adversarial examples alter the decisions from star to circle. Other adversarial examples for ?R models also introduce large areas that can be interpreted as the shape. In contrast, adversarial examples for the other models tend to introduce scattered patches, which most people would probably not consider to be the shape. Also, if we apply our regularization with a negative parameter (?H), the patches behave in an even more shattered manner.

4.2 STANDARD NUMERICAL EVALUATION

In Table 1, we report the prediction accuracy over the generated adversarial examples across the attack methods. For MNIST and FashionMNIST, we do not limit the ε of the adversarial examples. In principle, when there is no ε constraint, one should always be able to find an adversarial example for any sample; in practice, however, many search attempts fail when the attack methods are run with the default hyperparameters in Foolbox. We consider these failed searches (under default parameters) also a measure of the robustness of the models. For CIFAR10 and Restricted ImageNet, we set ε to 0.1 and 0.05, respectively (the maximum pixel value is 1.0). Overall, across most of the settings, our regularization helps achieve numerically the best adversarially robust models. Impressively, for MNIST and FashionMNIST, for some attack methods (e.g., both versions of DeepFool), our regularization can improve the robustness significantly even when applied only to the vanilla training loss, suggesting the importance of the smooth regularization. Also, for these two datasets, the improvements of our regularization are mostly significant over the non-regularized counterparts. For CIFAR10 and Restricted ImageNet, the performance gains are less significant but still observable. In the Appendix, we report the accuracy-ε curves over ℓ0, ℓ2, and ℓ∞ distances for more thorough comparisons. In general, the performances evaluated by the curves are consistent with the results in Table 1.

Figure 3: Sample-independent interpretation of models trained over MNIST. I stands for the input.

Figure 4: Sample-independent interpretation of models for FashionMNIST. I stands for the input.

4.3 INSPECTING MODEL'S PERCEPTION OF CLASS

We also leverage one of the most classic model-interpretation methods, activation maximization (Erhan et al., 2009), to further demonstrate the strength of our regularization.
Concretely, we follow Simonyan et al. (2013) and Engstrom et al. (2019) and maximize the logit of a certain class so that the most representative features of that class are exaggerated in the input image. Specifically, starting from an input image of Gaussian noise, we apply projected gradient descent for 10,000 iterations with learning rate 0.001 to update the input image. Notice that the interpretation is sample-independent. Figure 3 depicts what the models consider to be the digits. While V and T can barely be interpreted by a human, when our regularization is plugged in, the patterns become observable, with impressive examples such as Digits 0, 2, and 3. A can also deliver interpretable decisions (e.g., Digits 3 and 5), and our regularization helps significantly in other cases, such as Digits 0, 1, 2, 4, and 8. Figure 4 shows a similar story for the FashionMNIST dataset: while A might have the cleanest interpretation for the "sneaker" case, our regularization (especially AR) probably has the best interpretation in all other cases, with good examples such as "Trouser," "Dress," and "Boot." Interestingly, AR is the only method that interprets "Bag" with a strap, and the average image of all training "Bag" samples in FashionMNIST is a bag with a strap. Figure 5 shows the visualization of models trained on CIFAR10. While A seems to have the best interpretation in the "horse" case, AR and LR have equal or better interpretations in comparison with A in the other cases. Impressively, only AR and LR understand "bird," and only AR understands "deer".

Figure 5: Visualization of models trained on CIFAR10. (Rows: plane, mobile, bird, cat, deer, dog, frog, horse, ship, truck; columns: I, V, VR, T, TR, A, AR, L, LR.)

Figure 6 shows the visualization for Restricted ImageNet (results of simpler models are not shown because they cannot be interpreted). AR is the only method that can describe the outline of the "bird" and "crab" classes, while the models retain more or less similar interpretation power for the other labels. Other results, such as visualizations of targeted attacks through saliency-based methods and selected adversarial examples generated along with the experiments, are shown in the Appendix. Overall, the empirical evidence supports our intuition in Section 3: the regularization pushes the model to focus on the low-frequency components of the image and thus leads to more perceptually-aligned gradients.

5 CONCLUSION

Inspired by neuroscience literature emphasizing the connection between low-frequency components and shape recognition (Bar, 2004; Awasthi et al., 2011), we proposed a smooth kernel regularization that forces the CNN to learn smooth convolutional kernels (kernels with small differences among adjacent weights) during training. As the relation between smoothness and low frequency can be argued intuitively and is supported by known theoretical results (Titchmarsh, 1948; Bracewell, 1986; Platonov, 2005), our regularization should help the model depend more on the low-frequency components of images. To verify the effectiveness of the regularization, we plug the idea into multiple training losses, including the vanilla loss, the Trades loss (Zhang et al., 2019a), the adversarial training loss (Madry et al., 2018), as well as a variation of the Logit Pairing loss (Kannan et al., 2018). With seven different attack methods, we demonstrate the empirical strength of our regularization with standard numerical evaluations.
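To make the recapped regularizer concrete, the following is a minimal PyTorch sketch of the adjacent-weight smoothness idea. It covers only the adjacent-difference component of the penalty (the paper's full R(w) also contains a kernel-norm term whose exact form is not restated in this section), and the function and variable names are our own; λ = 100.0 matches the synthetic experiments.

```python
import torch
import torch.nn as nn

def smoothness_penalty(conv_weight: torch.Tensor) -> torch.Tensor:
    """Sum of squared differences between horizontally and vertically
    adjacent weights of a conv kernel of shape (out_ch, in_ch, k, k)."""
    dh = conv_weight[..., :, 1:] - conv_weight[..., :, :-1]  # horizontal neighbors
    dv = conv_weight[..., 1:, :] - conv_weight[..., :-1, :]  # vertical neighbors
    return (dh ** 2).sum() + (dv ** 2).sum()

# Usage sketch: add the penalty, scaled by lambda, to any base training loss.
model = nn.Sequential(nn.Conv2d(1, 16, 5), nn.ReLU(), nn.Flatten(), nn.LazyLinear(4))
lam = 100.0

def total_loss(base_loss: torch.Tensor) -> torch.Tensor:
    reg = sum(smoothness_penalty(m.weight) for m in model.modules()
              if isinstance(m, nn.Conv2d))
    return base_loss + lam * reg
```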
Further, we also leverage standard model interpretation methods to explain the models' decisions, showing that our technique, like the ones demonstrated by Santurkar et al. (2019), tends to result in more perceptually-aligned gradients.

A MODEL AND HYPERPARAMETER CHOICES

For the MNIST and FashionMNIST data sets, the model is a simple architecture with two convolutional layers and two fully connected layers. The ℓ∞ perturbation bound of PGD is set to 0.3/1.0 for MNIST and 0.1/1.0 for FashionMNIST. For CIFAR10, the model is a ResNet18, and the ℓ∞ perturbation bound of PGD is set to 0.03/1.0 (roughly 8/255). For Restricted ImageNet, the model is a ResNet50, and the ℓ∞ perturbation bound of PGD is set to 0.005/1.0; during preprocessing, the pixel values of the images are divided by the standard deviation (0.2575), and so is the perturbation bound. Also, for Restricted ImageNet, we start from either the standard ImageNet-pretrained ResNet50 (for the V and T losses) or the ResNet50 adversarially trained on Restricted ImageNet (Santurkar et al., 2019) (for the A and L losses). With our hardware settings (NVIDIA 1080Ti), we cannot effectively train the Trades loss over ResNet50.

B ACCURACY-EPSILON CURVES

The accuracy-ϵ curves for the ℓ0, ℓ2, and ℓ∞ bounds are shown in Figure 7, Figure 8, and Figure 9.

C TARGETED ATTACK

We also take advantage of the gradient to perform targeted attacks, as shown in the following figures. The titles of the columns describe the original classes, and the titles of the rows describe the target classes.

D SELECTIVE ADVERSARIAL EXAMPLES

We visualize the generated adversarial examples to help us evaluate the models. We visualize the on-average most deceptive examples (those with the highest prediction confidence on the wrong class), plotting one example for each class of the data. For MNIST and FashionMNIST, we focus on the adversarial examples generated by the ADef attack because this attack is more visually aligned with how humans perceive the images.
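As a concrete companion to the activation-maximization procedure of Section 4.3, here is a minimal PyTorch sketch. The Gaussian-noise initialization, the 10,000 iterations, and the learning rate 0.001 follow the description in the text; the projection onto the [0, 1] pixel range and the function name are our assumptions.

```python
import torch

def activation_maximization(model, target_class: int, shape=(1, 1, 28, 28),
                            steps=10_000, lr=1e-3):
    """Sample-independent interpretation: start from Gaussian noise and run
    projected gradient ascent on the logit of `target_class`."""
    model.eval()
    x = torch.randn(shape, requires_grad=True)   # Gaussian-noise input image
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logit = model(x)[0, target_class]
        (-logit).backward()                      # ascend the class logit
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                   # project to valid pixel range (assumed)
    return x.detach()
```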
1. How can the readability of the paper be improved, specifically regarding equation numbering? 2. What is the reviewer's concern regarding the usage of smooth kernel regularization, R(w), and logit pairing equation? 3. Why does the reviewer find the notation <X, Y> misleading, and what suggestion do they have for improving it? 4. What is the reviewer's question regarding the justification of using KL? 5. Does the reviewer consider the contribution of designing R(w) and the logit pair loss function trivial? If so, why? 6. How can the authors ensure that the solutions are inside the closed ball in practice, and how can they prove the robustness of their approach? 7. What is the purpose of the loss function l(.,.) supposed to be, according to the reviewer? 8. Why does the reviewer think there is a negative sign in the second part of R(w)? 9. What does the reviewer mean by "immature and inconclusive" regarding the synthetic experiment, and how can the choices of hyperparameters used be justified? 10. What is the reviewer's concern regarding Fig. 4, and how can the usefulness of the results be demonstrated? 11. Is Section 4.3 considered redundant by the reviewer, and if so, how can the paper be improved by combining or reorganizing the sections? 12. What is the reviewer's issue with the comparison between AR and A in Fig. 5, and how can the authors better justify their argument? 13. Does the reviewer believe that the regularization used in the paper is not useful, and if so, what alternative approaches would they suggest?
Review
1. Please number the equations for better readability.
2. The usage of the smooth kernel regularization, R(w), and the logit pairing equation seems disjoint; what is the natural flow here?
3. The notation <X, Y> is misleading; do not use inner-product notation to denote a tuple!
4. I do not see any justification for using KL.
5. The contribution in terms of designing R(w) and the logit pair loss function is trivial.
6. Regarding the argmax to get x': how do you practically ensure the solutions are inside the closed ball? The authors mention robustness without any justification. Why is it related to the TV norm? The usage of Pinsker's inequality to get an upper bound is meaningful, but that doesn't prove the robustness. Please explain; also, why not state this as a theorem? I suggest proving the robustness in a concrete manner, maybe using influence functions.
7. What is the loss function l(.,.) supposed to be?
8. In R(w), the second part is supposed to minimize the norm of the convolution kernel, I presume; then why is there a negative sign!
9. The title of Section 4.1 is pretty strange; you don't have to say "… for sanity checking".
10. The synthetic experiment is very immature and inconclusive. What can we get from Table 1? Also, please justify the choices of hyperparameters used.
11. Again, looking at Fig. 4, I really can't see the usefulness.
12. Section 4.3 is meaningless and seems redundant. Why not have a single story rather than so much branching? The experiments are not convincing at all.
13. In Fig. 5, the authors argued AR is better than A; I don't see why, e.g., for horse A looks much better than AR, same for insect.
14. The regularization seems not that useful; to me, this work tries to justify a regularization which, by the choice of experiments, is not well grounded.
ICLR
Title Independent SE(3)-Equivariant Models for End-to-End Rigid Protein Docking

Abstract

Protein complex formation is a central problem in biology, being involved in most of the cell's processes, and essential for applications, e.g. drug design or protein engineering. We tackle rigid body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the individual unbound structures, assuming no conformational change within the proteins happens during binding. We design a novel pairwise-independent SE(3)-equivariant graph matching network to predict the rotation and translation to place one of the proteins at the right docked position relative to the second protein. We mathematically guarantee a basic principle: the predicted complex is always identical regardless of the initial locations and orientations of the two structures. Our model, named EQUIDOCK, approximates the binding pockets and predicts the docking poses using keypoint matching and alignment, achieved through optimal transport and a differentiable Kabsch algorithm. Empirically, we achieve significant running time improvements and often outperform existing docking software despite not relying on heavy candidate sampling, structure refinement, or templates.¹

†Correspondence to: Octavian Ganea ([email protected]) and Yatao Bian ([email protected]). ∗Equal contribution. §Work done during an internship at Tencent AI Lab.
¹Our code is publicly available: https://github.com/octavian-ganea/equidock_public.

1 INTRODUCTION

Figure 1: Different views of the 3D structure of a protein complex (PDB ID: 3F1S). a. Surface and b. cartoon view of protein Z and its inhibitor (protein Z-dependent inhibitor).

In a recent breakthrough, ALPHAFOLD 2 (Jumper et al., 2021; Senior et al., 2020) provides a solution to a grand challenge in biology: inferring a protein's three-dimensional structure from its amino acid sequence (Baek et al., 2021), following the dogma sequence determines structure. Besides their complex three-dimensional nature, proteins dynamically alter their function and structure in response to cellular signals, changes in the environment, or upon molecular docking. In particular, protein interactions are involved in various biological processes including signal transduction, protein synthesis, and DNA replication and repair. Molecular docking is key to understanding protein interactions' mechanisms and effects, and, subsequently, to developing therapeutic interventions. We here address the problem of rigid body protein-protein docking, which refers to computationally predicting the 3D structure of a protein-protein complex given the 3D structures of the two proteins in the unbound state. Rigid body means no deformations occur within any protein during binding, which is a realistic assumption in many biological settings. Popular docking software (Chen et al., 2003; Venkatraman et al., 2009; De Vries et al., 2010; Torchala et al., 2013; Schindler et al., 2017; Sunny and Jayaraj, 2021) is typically computationally expensive, taking between minutes and hours to solve a single example pair, while not being guaranteed to find accurate complex structures. These methods largely follow three steps: i.) randomly sample a large number (e.g., millions) of candidate initial complex structures; ii.) employ a scoring function to rank the candidates; iii.) adjust and refine the top complex structures based on an energy model (e.g., force field).
We here take a first step towards tackling these issues by using deep learning models for direct prediction of protein complex structures.

Contributions. We design EQUIDOCK, a fast, end-to-end method for rigid body docking that directly predicts the SE(3) transformation to place one of the proteins (ligand) at the right location and orientation with respect to the second protein (receptor). Our method is based on the principle that the exact same complex structure should be predicted irrespective of the initial 3D placements and roles of both constituents (see Fig. 2). We achieve this desideratum by incorporating the inductive biases of pairwise SE(3)-equivariance and commutativity, and deriving novel theoretical results for necessary and sufficient model constraints (see Section 3). Next, we create EQUIDOCK to satisfy these properties by design, as a combination of: i) a novel type of pairwise-independent SE(3)-equivariant graph matching network, ii) an attention-based keypoint selection algorithm that discovers representative points and aligns them with the binding pocket residues using optimal transport, and iii) a differentiable superimposition model to recover the optimal global rigid transformation. Unlike prior work, our method does not use heavy candidate sampling or ranking, templates, task-specific geometric or chemical hand-crafted features, or pre-computed meshes. This enables us to achieve plausible structures with a speed-up of 80-500x compared to popular docking software, offering a promising competitive alternative to current solutions for this problem.

2 RELATED WORK

Geometric Deep Learning. Graph Neural Networks (GNNs) are becoming the de facto choice for learning with graph data (Bruna et al., 2013; Defferrard et al., 2016; Kipf and Welling, 2016; Gilmer et al., 2017; Xu et al., 2018; Li et al., 2019). Motivated by symmetries naturally occurring in different data types, architectures are tailored to explicitly incorporate such properties (Cohen and Welling, 2016a;b; Thomas et al., 2018; Fuchs et al., 2020; Finzi et al., 2020; Eismann et al., 2020; Satorras et al., 2021). GNNs have been validated in a variety of tasks such as particle system dynamics or conformation-based energy estimation (Weiler and Cesa, 2019; Rezende et al., 2019).

Euclidean Neural Networks (E(3)-NNs). However, plain GNNs and other deep learning methods do not understand data naturally lying in the 3D Euclidean space. For example, how should the output deterministically change with the input, e.g. when it is rotated? The recent Euclidean neural networks address this problem, being designed from geometric first principles. They make use of SE(3)-equivariant and invariant neural layers, thus avoiding expensive data augmentation strategies. Such constrained models ease optimization and have shown important improvements in biology and chemistry, e.g. for molecular structures (Fuchs et al., 2020; Hutchinson et al., 2020; Wu et al., 2021; Jumper et al., 2021; Ganea et al., 2021) and different types of 3D point clouds (Thomas et al., 2018). Different from prior work, we here derive constraints for pairs of 3D objects via pairwise independent SE(3)-equivariances, and design a principled approach for modeling rigid body docking.

Protein Folding.
Deep neural networks have been used to predict inter-residue contacts, distances, and/or orientations (Adhikari and Cheng, 2018; Yang et al., 2020; Senior et al., 2020; Ju et al., 2021), which are subsequently transformed into additional constraints or differentiable energy terms for protein structure optimization. ALPHAFOLD 2 (Jumper et al., 2021) and ROSETTAFOLD (Baek et al., 2021) are state-of-the-art approaches that directly predict protein structures from co-evolution information embedded in homologous sequences, using geometric deep learning and E(3)-NNs.

Protein-Protein Docking and Interaction. Experimentally determining structures of protein complexes is often expensive and time-consuming, placing a premium on computational methods (Vakser, 2014). Protein docking methods (Chen et al., 2003; Venkatraman et al., 2009; De Vries et al., 2010; Biesiada et al., 2011; Torchala et al., 2013; Schindler et al., 2017; Weng et al., 2019; Sunny and Jayaraj, 2021; Christoffer et al., 2021; Yan et al., 2020) typically run several steps: first, they sample thousands or millions of complex candidates; second, they use a scoring function for ranking (Moal et al., 2013; Basu and Wallner, 2016; Launay et al., 2020; Eismann et al., 2020); finally, top-ranked candidates undergo a structure refinement process using energy or geometric models (Verburgt and Kihara, 2021). Relevant to protein-protein interaction (PPI) is the task of protein interface prediction, where GNNs have shown promise (Fout et al., 2017; Townshend et al., 2019; Liu et al., 2020; Xie and Xu, 2021; Dai and Bailey-Kellogg, 2021). Recently, ALPHAFOLD 2 and ROSETTAFOLD have been utilized as subroutines to improve PPIs from different aspects (Humphreys et al., 2021; Pei et al., 2021; Jovine), e.g., combining the physics-based docking method CLUSPRO (Kozakov et al., 2017; Ghani et al., 2021), or using extended multiple-sequence alignments to predict the structure of heterodimeric protein complexes from sequence information (Bryant et al., 2021). Concurrently to our work, Evans et al. (2021) extend ALPHAFOLD 2 to multiple chains during both training and inference.

Drug-Target Interaction (DTI). DTI aims to compute drug-target binding poses and affinity, playing an essential role in understanding drugs' mechanism of action. Prior methods (Wallach et al., 2015; Li et al., 2021) predict binding affinity from protein-ligand co-crystal structures, but such data is expensive to obtain experimentally. These models are typically based on heavy candidate sampling and ranking (Trott and Olson, 2010; Koes et al., 2013; McNutt et al., 2021; Bao et al., 2021), being tailored for small drug-like ligands and often assuming a known binding pocket. Thus, they are not immediately applicable to our use case. In contrast, our rigid docking approach is generic and could be extended to accelerate DTI research as part of future work.

3 MATHEMATICAL CONSTRAINTS FOR RIGID BODY DOCKING

We start by introducing the rigid body docking problem and derive the geometric constraints for enforcing the same output complex prediction regardless of the initial unbound positions or roles (Fig. 2).

Rigid Protein-Protein Docking – Problem Setup. We are given as input a pair of proteins forming a complex. They are (arbitrarily) denoted as the ligand and receptor, consisting of n and m residues, respectively.
These proteins are represented in their bound (docked) state as 3D point clouds X1* ∈ R^{3×n}, X2* ∈ R^{3×m}, where each residue's location is given by the coordinates of its corresponding α-carbon atom. In the unbound state, the docked ligand is randomly rotated and translated in space, resulting in a modified point cloud X1 ∈ R^{3×n}. For simplicity and w.l.o.g., the receptor remains in its bound location, X2 = X2*. The task is to predict a rotation R ∈ SO(3) and a translation t ∈ R^3 such that R X1 + t = X1*, using as input the proteins and their unbound positions X1 and X2. Here, R = R(X1|X2) and t = t(X1|X2) are functions of the two proteins, where we omit residue identity or other protein information in this notation, for brevity. Note that we assume rigid backbones and side-chains for both proteins. We therefore do not tackle the more challenging problem of flexible docking, but our approach offers an important step towards it.

We desire that the predicted complex structure is independent of the initial locations and orientations of the two proteins, as well as of their roles (see Fig. 2). Formally, we wish to guarantee that:

$$(R(Z_1|Z_2)Z_1 + t(Z_1|Z_2)) \oplus Z_2 \equiv (R(X_1|X_2)X_1 + t(X_1|X_2)) \oplus X_2, \quad \text{(SE(3)-invariance)}$$
$$(R(X_1|X_2)X_1 + t(X_1|X_2)) \oplus X_2 \equiv X_1 \oplus (R(X_2|X_1)X_2 + t(X_2|X_1)), \quad \text{(commutativity)}$$
$$\forall Q_1, Q_2 \in SO(3),\ \forall g_1, g_2 \in \mathbb{R}^3,\ \forall X_1 \in \mathbb{R}^{3\times n},\ X_2 \in \mathbb{R}^{3\times m}, \text{ and } Z_l = Q_l X_l + g_l,\ l \in \{1, 2\}, \quad (1)$$

for any rotations Q1, Q2 and translations g1, g2, where ⊕ is concatenation along columns, and ≡ denotes identity after superimposition, i.e. zero Root-Mean-Square Deviation (RMSD) between the two 3D point sets after applying the Kabsch algorithm (Kabsch, 1976). An immediate question arises: how do the constraints in Eq. (1) translate into constraints for R(·|·) and t(·|·)? The rotation R and translation t change in a systematic way when we apply SE(3) transformations or swap the proteins' roles. These properties restrict our class of functions, as derived below.

SE(3)-equivariance Constraints. If we apply any distinct SE(3) transformations on the unbound ligand X1 and receptor X2, i.e. we dock Q1X1 + g1 onto Q2X2 + g2, then the rotation matrix R(Q1X1 + g1|Q2X2 + g2) and translation vector t(Q1X1 + g1|Q2X2 + g2) can be derived from the original R(X1|X2) and t(X1|X2), assuming that we always do rotations first. In this case, R(Q1X1 + g1|Q2X2 + g2) can be decomposed into three rotations: i.) apply Q1^T to undo the rotation Q1 applied on X1; ii.) apply R(X1|X2); iii.) apply Q2 to rotate the docked ligand together with the receptor. This gives R(Q1X1 + g1|Q2X2 + g2) = Q2 R(X1|X2) Q1^T, which in turn constrains the translation vector. We provide a formal statement and prove it in Appendix B.1:

Proposition 1. For any Q1, Q2 ∈ SO(3), g1, g2 ∈ R^3, SE(3)-invariance of the predicted docked complex defined by Eq. (1) is guaranteed iff

$$R(Q_1X_1 + g_1 \,|\, Q_2X_2 + g_2) = Q_2 R(X_1|X_2) Q_1^\top$$
$$t(Q_1X_1 + g_1 \,|\, Q_2X_2 + g_2) = Q_2 t(X_1|X_2) - Q_2 R(X_1|X_2) Q_1^\top g_1 + g_2. \quad (2)$$

As a direct consequence of this proposition, we have the following statement.

Proposition 2. Any model satisfying Proposition 1 guarantees invariance of the predicted complex w.r.t. any SE(3) transformation on X1, and equivariance w.r.t. any SE(3) transformation on X2:

$$R(Z_1|X_2)Z_1 + t(Z_1|X_2) = R(X_1|X_2)X_1 + t(X_1|X_2), \quad \text{where } Z_1 = Q_1X_1 + g_1,$$
$$R(X_1|Z_2)X_1 + t(X_1|Z_2) = Q_2\left[R(X_1|X_2)X_1 + t(X_1|X_2)\right] + g_2, \quad \text{where } Z_2 = Q_2X_2 + g_2,$$
$$\forall Q_1, Q_2 \in SO(3),\ \forall g_1, g_2 \in \mathbb{R}^3,\ \forall X_1 \in \mathbb{R}^{3\times n},\ \forall X_2 \in \mathbb{R}^{3\times m}.$$

Commutativity. Instead of docking X1 with respect to X2, we can also dock X2 with respect to X1.
In this case, we require the final complex structures to be identical after superimposition, i.e., zero RMSD. This property is named commutativity, and it is guaranteed as follows (proof in Appendix B.2).

Proposition 3. Commutativity as defined by Eq. (1) is guaranteed iff

$$R(X_2|X_1) = R^\top(X_1|X_2); \qquad t(X_2|X_1) = -R^\top(X_1|X_2)\, t(X_1|X_2). \quad (3)$$

Point Permutation Invariance. We also enforce residue permutation invariance. Formally, both R(X1|X2) and t(X1|X2) should not depend on the order of the columns of X1 and, respectively, of X2.

4 EQUIDOCK MODEL

Protein Representation. A protein is a sequence of amino acid residues that folds into a 3D structure. Each residue has a general structure with a side-chain specifying its type, allowing us to define a local frame and derive SE(3)-invariant features for any pair of residues (see Appendix A). We represent a protein as a graph G = (V, E), similar to Fout et al. (2017); Townshend et al. (2019); Liu et al. (2020). Each node i ∈ V represents one residue and has 3D coordinates x_i ∈ R^3 corresponding to the α-carbon atom's location. Edges are given by a k-nearest-neighbor (k-NN) graph using the Euclidean distance of the original 3D node coordinates.

Overview of Our Approach. Our model is depicted in Fig. 3. We first build k-NN protein graphs G1 = (V1, E1) and G2 = (V2, E2). We then design SE(3)-invariant node features F1 ∈ R^{d×n}, F2 ∈ R^{d×m} and edge features {f_{j→i} : ∀(i, j) ∈ E1 ∪ E2} (see Appendix A). Next, we apply several layers consisting of functions Φ that jointly transform node coordinates and features. Crucially, we guarantee, by design, pairwise independent SE(3)-equivariance for coordinate embeddings and invariance for feature embeddings. This double constraint is formally defined as:

$$\text{Given } Z_1, H_1, Z_2, H_2 = \Phi(X_1, F_1, X_2, F_2), \text{ we have}$$
$$Q_1Z_1 + g_1,\ H_1,\ Q_2Z_2 + g_2,\ H_2 = \Phi(Q_1X_1 + g_1, F_1, Q_2X_2 + g_2, F_2),$$
$$\forall Q_1, Q_2 \in SO(3),\ \forall g_1, g_2 \in \mathbb{R}^3. \quad (4)$$

We implement Φ as a novel type of message-passing neural network (MPNN). We then use the output node coordinate and feature embeddings to compute R(X1|X2) and t(X1|X2). These functions depend on pairwise interactions between the two proteins, modeled as cross-messages, but also incorporate the 3D structure in a pairwise-independent SE(3)-equivariant way to satisfy Eq. (1), Proposition 1, and Proposition 3. We discover keypoints from each protein based on a neural attention mechanism and softly guide them to represent the respective binding pocket locations via an optimal-transport-based auxiliary loss. Finally, we obtain the SE(3) transformation by superimposing the two keypoint sets via a differentiable version of the Kabsch algorithm. An additional soft constraint discourages point cloud intersections. We now detail each of these model components.

Independent E(3)-Equivariant Graph Matching Networks (IEGMNs). Our architecture for Φ satisfying Eq. (4) is called Independent E(3)-Equivariant Graph Matching Network (IEGMN) – see Fig. 3. It extends both Graph Matching Networks (GMN) (Li et al., 2019) and E(3)-Equivariant Graph Neural Networks (E(3)-GNN) (Satorras et al., 2021). IEGMNs perform node coordinate and feature embedding updates for an input pair of graphs G1 = (V1, E1), G2 = (V2, E2), and use inter- and intra-node messages, as well as E(3)-equivariant coordinate updates.
The l-th layer of an IEGMN transforms node feature embeddings {h_i^(l)}_{i∈V1∪V2} and node coordinate embeddings {x_i^(l)}_{i∈V1∪V2} as

$$m_{j\to i} = \varphi^e\big(h_i^{(l)}, h_j^{(l)}, \exp(-\|x_i^{(l)} - x_j^{(l)}\|^2/\sigma), f_{j\to i}\big), \quad \forall e_{j\to i} \in E_1 \cup E_2 \quad (5)$$
$$\mu_{j\to i} = a_{j\to i}\, W h_j^{(l)}, \quad \forall i \in V_1, j \in V_2 \text{ or } i \in V_2, j \in V_1 \quad (6)$$
$$m_i = \frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)} m_{j\to i}, \quad \forall i \in V_1 \cup V_2 \quad (7)$$
$$\mu_i = \sum_{j\in V_2} \mu_{j\to i},\ \forall i \in V_1, \quad \text{and} \quad \mu_i = \sum_{j\in V_1} \mu_{j\to i},\ \forall i \in V_2 \quad (8)$$
$$x_i^{(l+1)} = \eta\, x_i^{(0)} + (1-\eta)\, x_i^{(l)} + \sum_{j\in\mathcal{N}(i)} (x_i^{(l)} - x_j^{(l)})\, \varphi^x(m_{j\to i}), \quad \forall i \in V_1 \cup V_2 \quad (9)$$
$$h_i^{(l+1)} = (1-\beta)\cdot h_i^{(l)} + \beta\cdot \varphi^h(h_i^{(l)}, m_i, \mu_i, f_i), \quad \forall i \in V_1 \cup V_2, \quad (10)$$

where N(i) are the neighbors of node i; φ^x is a real-valued (scalar) parametric function; W is a learnable matrix; φ^h and φ^e are parametric functions (MLPs) outputting a vector in R^d; f_{j→i} and f_i are the original edge and node features (extracted SE(3)-invariantly from the residues); and a_{j→i} is an attention-based coefficient with trainable shallow neural networks ψ^q and ψ^k:

$$a_{j\to i} = \frac{\exp\big(\langle \psi^q(h_i^{(l)}), \psi^k(h_j^{(l)})\rangle\big)}{\sum_{j'} \exp\big(\langle \psi^q(h_i^{(l)}), \psi^k(h_{j'}^{(l)})\rangle\big)}. \quad (11)$$

Note that all parameters of W, φ^x, φ^h, φ^e, ψ^q, ψ^k can be shared across or differ between IEGMN layers. The output of several IEGMN layers is then denoted as:

$$Z_1 \in \mathbb{R}^{3\times n},\ H_1 \in \mathbb{R}^{d\times n},\ Z_2 \in \mathbb{R}^{3\times m},\ H_2 \in \mathbb{R}^{d\times m} = \text{IEGMN}(X_1, F_1, X_2, F_2). \quad (12)$$

It is then straightforward to prove the following (see Appendix B.3):

Proposition 4. IEGMNs satisfy the pairwise independent SE(3)-equivariance property in Eq. (4).

Keypoints for Differentiable Protein Superimposition. Next, we use multi-head attention to obtain K points for each protein, Y1, Y2 ∈ R^{3×K}, which we name keypoints. We train them to become representative points for the binding pocket of the respective protein pair (softly enforced by an additional loss described later). If this held perfectly, then the superimposition of Y1 and Y2 would give the corresponding ground truth superimposition of X1 and X2. Our model is:

$$y_{1k} := \sum_{i=1}^{n} \alpha_i^k z_{1i}; \qquad y_{2k} := \sum_{j=1}^{m} \beta_j^k z_{2j},$$

where z_{1i} denotes the i-th column of the matrix Z1, and α_i^k = softmax_i( (1/√d) h_{1i}^T W'_k μ(φ(H2)) ) are attention scores (similarly defined for β_j^k), with W'_k ∈ R^{d×d} a parametric matrix (different for each attention head), φ a linear layer followed by a LeakyReLU non-linearity, and μ(·) the mean vector.

Differentiable Kabsch Model. We design the rotation and translation that dock protein 1 onto protein 2 to be the same transformation used to superimpose Y1 and Y2 (see Fig. 3). For this, we compute a differentiable version of the Kabsch algorithm (Kabsch, 1976) as follows. Let A = Y2 Y1^T ∈ R^{3×3}, computed using zero-mean keypoints. The singular value decomposition (SVD) is A = U2 S U1^T, where U2, U1 ∈ O(3). Finally, we define the differentiable functions

$$R(X_1|X_2;\theta) = U_2 \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & d \end{pmatrix} U_1^\top, \quad \text{where } d = \operatorname{sign}(\det(U_2 U_1^\top)),$$
$$t(X_1|X_2;\theta) = \mu(Y_2) - R(X_1|X_2;\theta)\,\mu(Y_1), \quad (13)$$

where μ(·) is the mean vector of a point cloud. It is straightforward to prove that this model satisfies all the equivariance properties in Eqs. (1) to (3). From a practical perspective, the gradient and backpropagation through the SVD operation were analyzed by Ionescu et al. (2015) and Papadopoulo and Lourakis (2000), and are implemented in automatic differentiation frameworks such as PyTorch.

MSE Loss. During training, we randomly decide which protein is the receptor (say protein 2), keep it in the docked position (i.e., X2 = X2*), predict the SE(3) transformation using Eq. (13), and use it to compute the final position of the ligand as X̃1 = R(X1|X2)X1 + t(X1|X2). The mean squared error (MSE) loss is then

$$\mathcal{L}_{MSE} = \frac{1}{n}\sum_{i=1}^{n} \|x_i^* - \tilde{x}_i\|^2.$$
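As a concrete companion to Eq. (13), the following is a minimal PyTorch sketch of the differentiable Kabsch superimposition of the two keypoint sets; the (3, K) tensor layout follows the paper's convention, while the function name is ours.

```python
import torch

def differentiable_kabsch(Y1: torch.Tensor, Y2: torch.Tensor):
    """Rotation R and translation t superimposing keypoints Y1 onto Y2.
    Y1, Y2: (3, K) tensors with corresponding columns. Differentiable
    through torch.linalg.svd, as in Eq. (13)."""
    mu1 = Y1.mean(dim=1, keepdim=True)        # (3, 1) keypoint means
    mu2 = Y2.mean(dim=1, keepdim=True)
    A = (Y2 - mu2) @ (Y1 - mu1).T             # (3, 3) cross-covariance
    U2, S, U1t = torch.linalg.svd(A)          # A = U2 diag(S) U1^T
    d = torch.sign(torch.det(U2 @ U1t))       # fix reflections -> proper rotation
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = U2 @ D @ U1t
    t = mu2 - R @ mu1
    return R, t

# The predicted ligand placement is then R @ X1 + t; the same superimposition,
# applied to whole structures, underlies the complex RMSD metric of Section 5.
```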
Optimal Transport and Binding Pocket Keypoint Alignment. As stated before, we desire that Y1 and Y2 are representative points for the binding pocket location of the respective protein pair. However, this needs to be encouraged explicitly, which we achieve using an additional loss. We first define the binding pocket point sets, drawing inspiration from previous PPI work (Section 2). Given the residues' α-carbon locations in the bound (docked) structures, X1* and X2*, we select all pairs of residues at less than τ Euclidean distance (τ = 8Å in our experiments). We assume these are all interacting residues. Denote these pairs as {(x*_{1s}, x*_{2s}), s ∈ 1, ..., S}, where S is variable across data pairs. We compute the midpoints of these segments, denoted as P1*, P2* ∈ R^{3×S}, where p*_{1s} = p*_{2s} = 0.5 · (x*_{1s} + x*_{2s}). We view P1* and P2* as binding pocket points. In the unbound state, these sets are randomly moved in space together with the respective protein residue coordinates X1 and X2. We denote them as P1, P2 ∈ R^{3×S}. For clarity, if X1 = QX1* + g, then P1 = QP1* + g. We desire that Y1 is a representative set for the 3D set P1 (and, similarly, Y2 for P2). However, while at training time we know that every point p_{1s} corresponds to the point p_{2s} (and, similarly, y_{1k} aligns with y_{2k}, by assumption), we unfortunately do not know the actual alignment between the points in Y_l and P_l, for each l ∈ {1, 2}. This can be recovered using an additional optimal transport loss:

$$\mathcal{L}_{OT} = \min_{T \in \mathcal{U}(S,K)} \langle T, C\rangle, \quad \text{where } C_{s,k} = \|y_{1k} - p_{1s}\|^2 + \|y_{2k} - p_{2s}\|^2, \quad (14)$$

where U(S, K) is the set of S × K transport plans with uniform marginals. The optimal transport plan is computed using the Earth Mover's Distance and the POT library (Flamary et al., 2021); it is kept fixed during back-propagation and optimization, so only the cost matrix is trained. Note that our approach assumes that y_{1k} corresponds to y_{2k}, for every k ∈ {1, ..., K}. Intuitively, each attention head k will identify a specific geometric/chemical local surface feature of protein 1 via y_{1k}, and match the complementary feature of protein 2 via y_{2k}.

Avoiding Point Cloud Intersection. In practice, our model does not enforce a useful inductive bias, namely that proteins forming complexes never "intersect" each other. To address this issue, we first state a notion of the "interior" of a protein point cloud. Following previous work (Sverrisson et al., 2021; Venkatraman et al., 2009), we define the surface of a protein point cloud X ∈ R^{3×n} as {x ∈ R^3 : G(x) = γ}, where

$$G(x) = -\sigma \ln\Big(\sum_{i=1}^{n} \exp(-\|x - x_i\|^2/\sigma)\Big).$$

The parameters σ and γ are chosen such that there exist no "holes" inside a protein (we found γ = 10, σ = 25 to work well; see Appendix E). As a consequence, the interior of the protein is given by {x ∈ R^3 : G(x) < γ}. Then, the condition for a non-intersecting ligand and receptor can be written as G1(x_{2j}) > γ, ∀j ∈ 1, ..., m and G2(x_{1i}) > γ, ∀i ∈ 1, ..., n. As a loss function, this becomes

$$\mathcal{L}_{NI} = \frac{1}{n}\sum_{i=1}^{n} \max(0, \gamma - G_2(x_{1i})) + \frac{1}{m}\sum_{j=1}^{m} \max(0, \gamma - G_1(x_{2j})). \quad (15)$$

Surface Aware Node Features. Surface contact modeling is important for protein docking. We here design a novel surface feature type that differentiates residues closer to the surface of the protein from those in the interior. Similar to Sverrisson et al. (2021), we prioritize efficiency and avoid pre-computing meshes, but we show that our new feature is a good proxy for residue depth (i.e. distance to the protein surface).
Intuitively, residues in the core of the protein are locally surrounded in all directions by other residues. This is not true for residues on the surface; e.g., their neighbors lie in a half-space if the surface is locally flat. Building on this intuition, for each node (residue) i in the k-NN protein graph, we compute the norm of the weighted average of its neighbor forces, which can be interpreted as the normalized gradient of the surface function G(x). This SE(3)-invariant feature is

$$\rho_i(x_i;\lambda) = \frac{\big\|\sum_{i'\in\mathcal{N}_i} w_{i,i',\lambda}\,(x_i - x_{i'})\big\|}{\sum_{i'\in\mathcal{N}_i} w_{i,i',\lambda}\,\|x_i - x_{i'}\|}, \quad \text{where } w_{i,i',\lambda} = \frac{\exp(-\|x_i - x_{i'}\|^2/\lambda)}{\sum_{j\in\mathcal{N}_i}\exp(-\|x_i - x_j\|^2/\lambda)}. \quad (16)$$

Intuitively, as depicted in Fig. 8, residues in the interior of the protein have values close to 0, since they are surrounded by vectors from all directions that cancel out, while residues near the surface have neighbors only in a narrower cone, with an aperture depending on the local curvature of the surface. We show in Appendix C that this feature correlates well with more expensive residue depth estimation methods, e.g. based on MSMS, thus offering a computationally appealing alternative. We also compute an estimate of this feature for large dense point clouds based on the local surface angle.

5 EXPERIMENTS

Datasets. We leverage the following datasets: Docking Benchmark 5.5 (DB5.5) (Vreven et al., 2015) and the Database of Interacting Protein Structures (DIPS) (Townshend et al., 2019). DB5.5 is a gold standard dataset in terms of data quality, but contains only 253 structures. DIPS is a larger protein complex structures dataset mined from the Protein Data Bank (Berman et al., 2000) and tailored for rigid body docking. Dataset information is given in Appendix D. We filter DIPS to only keep proteins with at most 10K atoms. Datasets are then randomly partitioned into train/val/test splits of sizes 203/25/25 (DB5.5) and 39,937/974/965 (DIPS). For DIPS, the split is based on protein family, to separate similar proteins. For the final evaluation in Table 1, we use the full DB5.5 test set and randomly sample 100 pairs from different protein families from the DIPS test set.

Baselines. We compare our EQUIDOCK method with popular state-of-the-art docking software²: CLUSPRO (PIPER) (Desta et al., 2020; Kozakov et al., 2017), ATTRACT (Schindler et al., 2017; de Vries et al., 2015), PATCHDOCK (Mashiach et al., 2010; Schneidman-Duhovny et al., 2005), and HDOCK (Yan et al., 2020; 2017b;a; Huang and Zou, 2014; 2008). These baselines provide user-friendly local packages suitable for automatic experiments, or web servers for manual submissions.

Evaluation Metrics. To measure prediction quality, we report the Complex Root Mean Square Deviation (C-RMSD) and the Interface Root Mean Square Deviation (I-RMSD), defined below. Given the ground truth and predicted complex structures, Z* ∈ R^{3×(n+m)} and Z ∈ R^{3×(n+m)}, we first superimpose them using the Kabsch algorithm (Kabsch, 1976), and then compute

$$\text{C-RMSD} = \sqrt{\tfrac{1}{n+m}\,\|Z^* - Z\|_F^2}.$$

We compute I-RMSD similarly, but using only the coordinates of the interface residues with distance less than 8Å to the other protein's residues. For a fair comparison among baselines, we use only the α-carbon coordinates to compute both metrics.

Training Details. We train our models on the DIPS training set first, using Adam (Kingma and Ba, 2014) with learning rate 2e-4 and early stopping with a patience of 30 epochs.
We update the best validation model only when it achieves a score of less than 98% of the previous best validation score, where the score is the median Ligand RMSD on the full DIPS validation set. The best DIPS-validated model is then tested on the DIPS test set. For DB5.5, we fine-tune the DIPS pre-trained model on the DB5.5 training set using learning rate 1e-4 and early stopping with a patience of 150 epochs. The best DB5.5-validated model is finally tested on the DB5.5 test set. During training, we randomly assign the roles of ligand and receptor. Also, during both training and testing, we randomly rotate and translate the ligand in space (even though our model is invariant to this operation) for all baselines.

²ClusPro: https://cluspro.bu.edu/, Attract: www.attract.ph.tum.de/services/ATTRACT/ATTRACT.vdi.gz, PatchDock: https://bioinfo3d.cs.tau.ac.il/PatchDock/, HDOCK: http://huanglab.phys.hust.edu.cn/software/HDOCK/

Complex Prediction Results. Results are shown in Table 1, Fig. 4, and Appendix E. We note that our method is competitive and often outperforms the baselines. However, we do not use heavy candidate sampling and re-ranking, we do not rely on task-specific hand-crafted features, and we currently do not perform structure fine-tuning, aiming to predict the SE(3) ligand transformation in a single direct shot. Moreover, we note that some of the baselines might have used part of our test set in validating their models, for example to learn surface templates; thus, their reported scores might be optimistic. Notably, HDOCK's scoring function was validated on DB4, which overlaps with DB5.5. A more appropriate comparison would require us to re-build these baselines without information from our test sets, a task that is currently not possible without open-source implementations.

Computational Efficiency. We show inference times in Fig. 5 and Table 4. Note that EQUIDOCK is between 80 and 500 times faster than the baselines. This is especially important for intensive screening applications that aim to scan over vast search spaces, e.g. for drug discovery. In addition, it is also relevant for de novo design of binding proteins (e.g. antibodies (Jin et al., 2021)) or for use cases where protein docking models are just a component of significantly larger end-to-end architectures targeting more involved biological scenarios, e.g., representing a drug's mechanism of action or modeling cellular processes with a single model as opposed to a multi-pipeline architecture.

Visualization. We show in Fig. 6 a successful example of a test DIPS protein pair for which our model significantly outperforms all baselines.

6 CONCLUSION

We have presented an extremely fast, end-to-end rigid protein docking approach that does not rely on candidate sampling, templates, task-specific features, or pre-computed meshes. Our method smartly incorporates useful rigid protein docking priors, including commutativity and pairwise independent SE(3)-equivariances, thus avoiding the computational burden of data augmentation. We look forward to incorporating more domain knowledge into EQUIDOCK, extending it to flexible docking and docking molecular dynamics, and adapting it to other related tasks such as drug binding prediction. In the long term, we envision that fast and accurate deep learning models would allow us to tackle more complex and involved biological scenarios, for example to model the mechanism of action of various drugs or to design de novo binding proteins and drugs for specific targets (e.g. for antibody generation).
Last, we hope that our architecture can inspire the design of models for other types of biological 3D interactions.

Limitations. First, our presented model does not incorporate protein flexibility, which is necessary for various protein families, e.g., antibodies. Unfortunately, both the DB5 and DIPS datasets are biased towards rigid body docking. Second, we only prevent steric clashes using a soft constraint (Eq. (15)), which has limitations (see Table 6). Future extensions would hard-constrain the model to prevent such artifacts.

ACKNOWLEDGEMENTS

The authors thank Hannes Stärk, Gabriele Corso, Patrick Walters, Tian Xie, Xiang Fu, Jacob Stern, Jason Yim, Lewis Martin, Jeremy Wohlwend, Jiaxiang Wu, Wei Liu, and Ding Xue for insightful and helpful discussions. OEG is funded by the Machine Learning for Pharmaceutical Discovery and Synthesis (MLPDS) consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging (DOMANE) threats program, and the DARPA Accelerated Molecular Discovery program. This publication was created as part of NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation. RB and TJ also acknowledge support from the NSF Expeditions grant (award 1918839): Collaborative Research: Understanding the World Through Code.

A Representing Proteins as Graphs
B Proofs of the Main Propositions
B.1 Proof of Proposition 1
B.2 Proof of Proposition 3
B.3 Proof of Proposition 4
C Surface Features
D Datasets
E More Experimental Details and Results

A REPRESENTING PROTEINS AS GRAPHS

A protein is composed of amino acid residues. The structure of an amino acid residue is shown in Fig. 7. Generally, an amino acid residue contains an amino group (-NH-), an α-carbon atom, and a carboxyl group (-CO-), along with a side chain (R) connected to the α-carbon atom. The side chain (R) is specific to each type of amino acid residue. We work at the residue level (our approach can be extended to the atom level as well). A protein is represented by a set of nodes where each node is an amino acid residue in the protein. Each node i has a 3D coordinate x_i ∈ R^3, which is the 3D coordinate of the α-carbon atom of the residue. The neighborhood of a node is the set of its k nearest nodes (k = 10 in our experiments), where the distance is the Euclidean distance between 3D coordinates. The node feature is a one-hot indicator of the type of amino acid residue, which is passed through an embedding layer.

Local Coordinate System. Similar to Ingraham et al. (2019) and Jumper et al. (2021), we introduce a local coordinate system for each residue, which denotes the orientation of the residue. As shown in Fig. 7, for a residue i, we denote the unit vector pointing from the α-carbon atom to the nitrogen atom as u_i. We denote the unit vector pointing from the α-carbon atom to the carbon atom of the carboxyl group (-CO-) as t_i. Together, u_i and t_i define a plane whose normal is n_i = (u_i × t_i)/‖u_i × t_i‖. Finally, we define v_i = n_i × u_i. Then n_i, u_i, and v_i together form the basis of residue i's local coordinate system and encode the orientation of residue i.
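Before defining the edge features, here is a minimal NumPy sketch of the local frame construction above, together with the relative-position edge feature p_{j→i} introduced in the next paragraph; the function names are ours.

```python
import numpy as np

def residue_local_frame(ca: np.ndarray, n: np.ndarray, c: np.ndarray):
    """Local frame (n_i, u_i, v_i) of a residue from the 3D positions of its
    alpha-carbon (ca), amide nitrogen (n), and carboxyl carbon (c) atoms."""
    u = (n - ca) / np.linalg.norm(n - ca)   # Calpha -> N direction
    t = (c - ca) / np.linalg.norm(c - ca)   # Calpha -> C(=O) direction
    n_vec = np.cross(u, t)
    n_vec /= np.linalg.norm(n_vec)          # normal of the (u, t) plane
    v = np.cross(n_vec, u)
    return n_vec, u, v                      # orthonormal basis of the frame

def relative_position_feature(frame_i, x_i: np.ndarray, x_j: np.ndarray):
    """SE(3)-invariant relative position p_{j->i}: coordinates of x_j - x_i
    expressed in residue i's local frame."""
    basis = np.stack(frame_i)               # rows: n_i, u_i, v_i
    return basis @ (x_j - x_i)
```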
Then we introduce the edge features of an edge j → i ∈ E. These features describe the relative position of j with respect to i, the relative orientation of j with respect to i, and the distance between j and i.

Relative Position Edge Features. First, we introduce the edge features p_{j→i}, which describe the relative position of j with respect to i:

$$p_{j\to i} = \begin{bmatrix} n_i^\top \\ u_i^\top \\ v_i^\top \end{bmatrix} (x_j - x_i).$$

Relative Orientation Edge Features. As mentioned above, each residue has an orientation that carries information. Here we introduce the edge features q_{j→i}, k_{j→i}, and t_{j→i}, which describe the relative orientation of j with respect to i:

$$q_{j\to i} = \begin{bmatrix} n_i^\top \\ u_i^\top \\ v_i^\top \end{bmatrix} n_j, \qquad k_{j\to i} = \begin{bmatrix} n_i^\top \\ u_i^\top \\ v_i^\top \end{bmatrix} u_j, \qquad t_{j\to i} = \begin{bmatrix} n_i^\top \\ u_i^\top \\ v_i^\top \end{bmatrix} v_j.$$

Distance-Based Edge Features. Distance also carries information. Here we use radial basis functions of the distance as edge features:

$$f_{j\to i, r} = e^{-\|x_j - x_i\|^2 / (2\sigma_r^2)}, \quad r = 1, 2, ..., R,$$

where R and the scale parameters {σ_r}_{1≤r≤R} are hyperparameters. In our experiments, the set of scale parameters is {1.5^x | x = 0, 1, 2, ..., 14}, so for each edge there are 15 distance-based edge features.

Surface Aware Node Features. We additionally compute 5 surface-aware node features defined in Eq. (16), using λ ∈ {1., 2., 5., 10., 30.}.

B PROOFS OF THE MAIN PROPOSITIONS

B.1 PROOF OF PROPOSITION 1

Proof. Denote the predicted ligand position by R(X1|X2)X1 + t(X1|X2) = X̃1. Assume first that SE(3)-invariance of the predicted docked complex defined by Eq. (1) is satisfied. Then the transformation to dock Q1X1 + g1 with respect to Q2X2 + g2 is the same as the transformation that changes Q1X1 + g1 into Q2X̃1 + g2. We use the notation R^T(X1|X2) = (R(X1|X2))^T, and below abbreviate R = R(X1|X2) and t = t(X1|X2). Then, we have the following derivation steps:

$$R X_1 + t = \tilde{X}_1$$
$$X_1 + R^\top t = R^\top \tilde{X}_1$$
$$X_1 + R^\top t = R^\top Q_2^\top (Q_2\tilde{X}_1 + g_2 - g_2)$$
$$X_1 + R^\top t = R^\top Q_2^\top (Q_2\tilde{X}_1 + g_2) - R^\top Q_2^\top g_2$$
$$X_1 + R^\top t + R^\top Q_2^\top g_2 = R^\top Q_2^\top (Q_2\tilde{X}_1 + g_2)$$
$$Q_1^\top(Q_1X_1 + g_1 - g_1) + R^\top(t + Q_2^\top g_2) = R^\top Q_2^\top (Q_2\tilde{X}_1 + g_2)$$
$$Q_1^\top(Q_1X_1 + g_1) - Q_1^\top g_1 + R^\top(t + Q_2^\top g_2) = R^\top Q_2^\top (Q_2\tilde{X}_1 + g_2)$$
$$R Q_1^\top(Q_1X_1 + g_1) - R Q_1^\top g_1 + t + Q_2^\top g_2 = Q_2^\top (Q_2\tilde{X}_1 + g_2)$$
$$Q_2 R Q_1^\top(Q_1X_1 + g_1) - Q_2 R Q_1^\top g_1 + Q_2 t + g_2 = Q_2\tilde{X}_1 + g_2.$$

From the last equation above, one reads off the transformation of Q1X1 + g1 into Q2X̃1 + g2, which is, by the definition of the functions R and t, the same as the transformation to dock Q1X1 + g1 with respect to Q2X2 + g2. This transformation is

$$R(Q_1X_1 + g_1 \,|\, Q_2X_2 + g_2) = Q_2 R(X_1|X_2) Q_1^\top$$
$$t(Q_1X_1 + g_1 \,|\, Q_2X_2 + g_2) = Q_2 t(X_1|X_2) - Q_2 R(X_1|X_2) Q_1^\top g_1 + g_2,$$

which concludes the proof. Conversely, assuming the constraints in Eq. (2) hold, we derive that Q1X1 + g1 is transformed into Q2X̃1 + g2, and it is then trivial to check that this satisfies SE(3)-invariance of the predicted docked complex defined by Eq. (1).

B.2 PROOF OF PROPOSITION 3

Proof. We use the notation R^T(X1|X2) := (R(X1|X2))^T. As in Appendix B.1, we denote R(X1|X2)X1 + t(X1|X2) = X̃1. Then the transformation to dock X2 with respect to X1 is the same as the transformation that changes X̃1 back into X1, which is obtained from

$$R(X_1|X_2)\, X_1 + t(X_1|X_2) = \tilde{X}_1$$
$$X_1 + R^\top(X_1|X_2)\, t(X_1|X_2) = R^\top(X_1|X_2)\, \tilde{X}_1$$
$$X_1 = R^\top(X_1|X_2)\, \tilde{X}_1 - R^\top(X_1|X_2)\, t(X_1|X_2).$$

From the last equation above, we read off the transformation that changes X̃1 back into X1, which is the same as the transformation to dock X2 with respect to X1.

B.3 PROOF OF PROPOSITION 4

Proof. Let X1^(l+1), H1^(l+1), X2^(l+1), H2^(l+1) = IEGMN(X1^(l), H1^(l), X2^(l), H2^(l)) be the output of an IEGMN layer.
Then, for any matrices Q1, Q2 ∈ SO(3) and any translation vectors g1, g2 ∈ R^3, we want to prove that IEGMNs satisfy the pairwise independent SE(3)-equivariance property:

$$Q_1X_1^{(l+1)} + g_1,\ H_1^{(l+1)},\ Q_2X_2^{(l+1)} + g_2,\ H_2^{(l+1)} = \text{IEGMN}(Q_1X_1^{(l)} + g_1,\ H_1^{(l)},\ Q_2X_2^{(l)} + g_2,\ H_2^{(l)}),$$

where each column of X1^(l) ∈ R^{3×n}, H1^(l) ∈ R^{d×n}, X2^(l) ∈ R^{3×m}, and H2^(l) ∈ R^{d×m} represents an individual node's coordinate embedding or feature embedding. We first note that the equations of our proposed IEGMN layer that compute the messages m_{j→i}, μ_{j→i}, m_i, and μ_i are SE(3)-invariant. Indeed, they depend on the initial features, which are SE(3)-invariant by design, the current latent node embeddings {h_i^(l)}_{i∈V1∪V2}, as well as the Euclidean distances between the current node coordinates {x_i^(l)}_{i∈V1∪V2}. Thus, we also derive that the equation computing the new latent node embeddings h_i^(l+1) is SE(3)-invariant. Last, the equation that updates the coordinates x_i^(l+1) is SE(3)-equivariant with respect to the 3D coordinates of nodes from the same graph as i, but SE(3)-invariant with respect to the 3D coordinates of nodes from the other graph, since it only uses invariant transformations of the latter.

C SURFACE FEATURES

Visualization. We further discuss our new surface features introduced in Eq. (16). We first visualize their design intuition in Fig. 8. A synthetic experiment is shown in Fig. 9.

Correlation with MSMS features. Next, we analyze how accurate these features are compared to established residue depth estimation methods, e.g. based on the MSMS software (Sanner et al., 1996). We plot the Spearman rank-order correlation of the two methods in Fig. 10. We observe a concentrated distribution with a mean of 0.68 and a median of 0.70, suggesting a strong correlation with the MSMS depth estimation.

Closed form expression. Finally, we prove that for points close to the protein surface and surrounded by (infinitely) many equally-distanced and equally-spaced points, one can derive a closed-form expression of the surface features defined in Eq. (16). See Fig. 11. We work in 2 dimensions, but the extension to 3 dimensions is straightforward. Assume that the local surface at point x_i has angle α. Further, assume that x_i is surrounded by N equally-distanced and equally-spaced points x_{i'}. Then, all w_{i,i',λ} are identical, and the summation vector in the numerator of Eq. (16) has non-zero components only along the direction that bisects the surface angle, as the other components cancel out. Then, in the limit N → ∞, we derive the closed-form expression:

$$\rho_i(x_i;\lambda) = \frac{1}{N}\Big\|\sum_{i'\in\mathcal{N}_i} \frac{x_i - x_{i'}}{\|x_i - x_{i'}\|}\Big\| = \frac{2}{N}\sum_{j=0}^{N/2}\cos\Big(\frac{j\alpha}{N}\Big) \approx_{N\to\infty} \frac{2}{\alpha}\int_0^{\alpha/2}\cos(\theta)\,d\theta = \frac{2\sin(\alpha/2)}{\alpha}. \quad (17)$$

D DATASETS

The overview of the datasets is given in Table 2. DB5.5 is obtained from https://zlab.umassmed.edu/benchmark/, while DIPS is downloaded from https://github.com/drorlab/DIPS. While DIPS contains only the bound structures, and is thus currently only suitable for rigid docking, DB5.5 also includes unbound protein structures, which, however, mostly show rigid structures (see Fig. 12).

E MORE EXPERIMENTAL DETAILS AND RESULTS

Baseline Failures. On the test sets, ATTRACT fails for '1N2C' in DB5.5, and for 'oi_4oip.pdb1_8', 'oi_4oip.pdb1_3', and 'p7_4p7s.pdb1_2' in DIPS. For such failure cases, we use the unbound input structure as the prediction for metrics calculation.
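As a concrete companion to the surface feature of Eq. (16) analyzed in Appendix C above, here is a minimal NumPy sketch; the function name is ours, and in the paper the feature is computed over k-NN neighborhoods for λ ∈ {1, 2, 5, 10, 30}.

```python
import numpy as np

def surface_feature(x_i: np.ndarray, neighbors: np.ndarray, lam: float) -> float:
    """SE(3)-invariant surface-awareness feature of Eq. (16): norm of the
    softmax-weighted mean neighbor offset, divided by the weighted mean offset
    norm. Close to 0 deep inside the protein, larger near the surface.
    x_i: (3,), neighbors: (k, 3)."""
    diffs = x_i[None, :] - neighbors                 # (k, 3) vectors x_i - x_i'
    sq_dists = (diffs ** 2).sum(axis=1)
    w = np.exp(-sq_dists / lam)
    w /= w.sum()                                     # softmax weights w_{i,i',lam}
    numerator = np.linalg.norm((w[:, None] * diffs).sum(axis=0))
    denominator = (w * np.linalg.norm(diffs, axis=1)).sum()
    return numerator / denominator
```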
Hyperparameters. We perform a hyperparameter search over the choices listed in Table 3 and select the best hyperparameters for DB5.5 and DIPS, respectively, based on their corresponding validation sets.

Detailed Running Times. In addition to the main text, we show in Table 4 detailed running times of all methods. The hardware specifications are as follows: ATTRACT was run on a 6-core Intel Core i7 2.2 GHz CPU; HDOCK was run on a single Intel Xeon Gold 6230 2.1 GHz CPU; EQUIDOCK was run on a single Intel Core i9-9880H 2.3 GHz CPU. CLUSPRO and PATCHDOCK were run manually using their respective web servers.

Plots for DB5.5. We show the corresponding plots for the DB5.5 results in Fig. 13.

Ablation Studies. To highlight the contributions of the different model components, we provide ablation studies in Table 5. One can note that, as expected, removing the pocket loss results in worse interface RMSD scores compared to removing other components.

Analysis of the Intersection Loss. We further analyze the intersection loss introduced in Eq. (15) with parameters γ = 10 and σ = 25 (chosen on the DB5 validation set). We show in Table 6 that this loss achieves almost perfect values for the ground-truth structures, and that it is important for softly constraining the predicted proteins to be non-intersecting.
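To make the analyzed intersection loss concrete, here is a minimal PyTorch sketch of the surface function G(x) and Eq. (15), with the reported γ = 10 and σ = 25; it uses an (n, 3) point layout (the paper's matrices are 3×n), and the function names are ours.

```python
import torch

def surface_level(x: torch.Tensor, cloud: torch.Tensor, sigma: float = 25.0):
    """G(x) = -sigma * log(sum_i exp(-||x - x_i||^2 / sigma)) for query points
    x: (q, 3) against a protein point cloud: (n, 3). Points with G(x) < gamma
    lie in the protein interior."""
    sq = torch.cdist(x, cloud) ** 2                  # (q, n) squared distances
    return -sigma * torch.logsumexp(-sq / sigma, dim=1)

def intersection_loss(X1: torch.Tensor, X2: torch.Tensor,
                      gamma: float = 10.0, sigma: float = 25.0):
    """Eq. (15): penalize ligand residues inside the receptor and vice versa."""
    pen_1_in_2 = torch.clamp(gamma - surface_level(X1, X2, sigma), min=0).mean()
    pen_2_in_1 = torch.clamp(gamma - surface_level(X2, X1, sigma), min=0).mean()
    return pen_1_in_2 + pen_2_in_1
```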
1. What is the focus and contribution of the paper on protein docking? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and efficiency? 3. What are the weaknesses of the paper, especially regarding its performance comparisons and potential limitations? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the proposed method, such as its applicability to other scenarios or potential biases in the dataset?
Summary Of The Paper
Review
Summary Of The Paper

The paper proposes an SE(3)-equivariant graph matching network for end-to-end rigid protein docking. The authors propose a novel optimal transport loss to approximate the binding pocket and a differentiable Kabsch algorithm to predict the docking pose. They achieve significant running-time improvements over existing protein docking software with competitive results, and do not rely on heavy sampling, structure refinement, or templates.

Review

Strength

The proposed method is quite novel. Built upon Graph Matching Networks [1] and E(3)-Equivariant Graph Neural Networks [2], they propose an SE(3)-equivariant graph matching network. The keypoint selection with optimal transport and the rotation prediction with the Kabsch algorithm are also interesting to me. The proposed method achieves a promising speedup of 80-500x. The proposed method seems general and can be applied to other rigid docking scenarios, i.e., predicting a rotation and a translation. The algorithm can potentially be adapted for protein-ligand docking, which is also an important task in computational chemistry. The paper is well written, and the mathematical formulation of SE(3)-equivariant rigid protein docking is very nice. The authors also promise they will release the code and datasets after the reviewing process.

Weakness

Although the proposed method achieves a significant speedup compared to previous docking software, it does not always outperform the baselines, as acknowledged by the authors. It is also unclear how structure refinement or templates can be combined with the current method to further improve the performance; these are used in many other protein structure prediction algorithms, e.g., AlphaFold2. The DIPS dataset does not contain protein structures in their unbound form. It is unclear how the authors tackle this bias in the dataset. The authors include Surface Aware Node Features in their model. I agree that surface contact modeling is very important for protein docking. It would therefore be interesting to see an ablation study over these features.

Reference

[1] Graph matching networks for learning the similarity of graph structured objects
[2] E(n)-equivariant graph neural networks.
ICLR
Title Independent SE(3)-Equivariant Models for End-to-End Rigid Protein Docking Abstract Protein complex formation is a central problem in biology, being involved in most of the cell’s processes, and essential for applications, e.g. drug design or protein engineering. We tackle rigid body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the individual unbound structures, assuming no conformational change within the proteins happens during binding. We design a novel pairwise-independent SE(3)-equivariant graph matching network to predict the rotation and translation to place one of the proteins at the right docked position relative to the second protein. We mathematically guarantee a basic principle: the predicted complex is always identical regardless of the initial locations and orientations of the two structures. Our model, named EQUIDOCK, approximates the binding pockets and predicts the docking poses using keypoint matching and alignment, achieved through optimal transport and a differentiable Kabsch algorithm. Empirically, we achieve significant running time improvements and often outperform existing docking software despite not relying on heavy candidate sampling, structure refinement, or templates. 1 N/A 1 INTRODUCTION protein Z protein Z protein Z-dependent inhibitor protein Z-dependent inhibitor a. b. PDB ID: 3F1S Figure 1: Different views of the 3D structure of a protein complex. a. Surface and b. cartoon view of protein Z and its inhibitor. In a recent breakthrough, ALPHAFOLD 2 (Jumper et al., 2021; Senior et al., 2020) provides a solution to a grand challenge in biology—inferring a protein’s three-dimensional structure from its amino acid sequence (Baek et al., 2021), following the dogma sequence determines structure. Besides their complex three-dimensional nature, proteins dynamically alter their function and structure in response to cellular signals, changes in the environment, or upon molecular docking. In par- ticular, protein interactions are involved in various biological processes including signal transduction, protein synthesis, DNA replication and repair. Molecular docking is key to understanding protein interactions’ mechanisms and effects, and, subsequently, to developing therapeutic interventions. We here address the problem of rigid body protein-protein docking which refers to computationally predicting the 3D structure of a protein-protein complex given the 3D structures of the two proteins in unbound state. Rigid body means no deformations occur within any protein during binding, which is a realistic assumption in many biological settings. Popular docking software (Chen et al., 2003; Venkatraman et al., 2009; De Vries et al., 2010; Torchala et al., 2013; Schindler et al., 2017; Sunny and Jayaraj, 2021) are typically computationally expensive, †Correspondence to: Octavian Ganea ([email protected]) and Yatao Bian ([email protected]). ∗Equal contribution. §Work done during an internship at Tencent AI Lab. 1Our code is publicly available: https://github.com/octavian-ganea/equidock_public. taking between minutes and hours to solve a single example pair, while not being guaranteed to find accurate complex structures. These methods largely follow the steps: i.) randomly sample a large number (e.g., millions) of candidate initial complex structures, ii.) employ a scoring function to rank the candidates, iii.) adjust and refine the top complex structures based on an energy model (e.g., force field). 
We here take a first step towards tackling these issues by using deep learning models for the direct prediction of protein complex structures.
Contributions. We design EQUIDOCK, a fast, end-to-end method for rigid body docking that directly predicts the SE(3) transformation to place one of the proteins (the ligand) at the right location and orientation with respect to the second protein (the receptor). Our method is based on the principle that the exact same complex structure should be predicted irrespective of the initial 3D placements and roles of both constituents (see Fig. 2). We achieve this desideratum by incorporating the inductive biases of pairwise SE(3)-equivariance and commutativity, and by deriving novel theoretical results for necessary and sufficient model constraints (see Section 3). Next, we create EQUIDOCK to satisfy these properties by design, as a combination of: i) a novel type of pairwise independent SE(3)-equivariant graph matching network, ii) an attention-based keypoint selection algorithm that discovers representative points and aligns them with the binding pocket residues using optimal transport, and iii) a differentiable superimposition model to recover the optimal global rigid transformation. Unlike prior work, our method does not use heavy candidate sampling or ranking, templates, task-specific geometric or chemical hand-crafted features, or pre-computed meshes. This enables us to achieve plausible structures with a speed-up of 80-500x compared to popular docking software, offering a promising competitive alternative to current solutions for this problem.
2 RELATED WORK
Geometric Deep Learning. Graph Neural Networks (GNNs) are becoming the de facto choice for learning with graph data (Bruna et al., 2013; Defferrard et al., 2016; Kipf and Welling, 2016; Gilmer et al., 2017; Xu et al., 2018; Li et al., 2019). Motivated by symmetries naturally occurring in different data types, architectures are tailored to explicitly incorporate such properties (Cohen and Welling, 2016a;b; Thomas et al., 2018; Fuchs et al., 2020; Finzi et al., 2020; Eismann et al., 2020; Satorras et al., 2021). GNNs have been validated in a variety of tasks, such as particle system dynamics or conformation-based energy estimation (Weiler and Cesa, 2019; Rezende et al., 2019).
Euclidean Neural Networks (E(3)-NNs). However, plain GNNs and other deep learning methods do not understand data naturally lying in 3D Euclidean space: for example, how should the output deterministically change when the input is rotated? The recent Euclidean neural networks address this problem, being designed from geometric first principles. They make use of SE(3)-equivariant and invariant neural layers, thus avoiding expensive data augmentation strategies. Such constrained models ease optimization and have shown important improvements in biology and chemistry, e.g. for molecular structures (Fuchs et al., 2020; Hutchinson et al., 2020; Wu et al., 2021; Jumper et al., 2021; Ganea et al., 2021) and different types of 3D point clouds (Thomas et al., 2018). Different from prior work, we here derive constraints for pairs of 3D objects via pairwise independent SE(3)-equivariances, and design a principled approach to modeling rigid body docking.
Protein Folding.
Deep neural networks have been used to predict inter-residue contacts, distances and/or orientations (Adhikari and Cheng, 2018; Yang et al., 2020; Senior et al., 2020; Ju et al., 2021), which are subsequently transformed into additional constraints or differentiable energy terms for protein structure optimization. ALPHAFOLD 2 (Jumper et al., 2021) and ROSETTAFOLD (Baek et al., 2021) are state-of-the-art approaches that directly predict protein structures from co-evolution information embedded in homologous sequences, using geometric deep learning and E(3)-NNs.
Protein-Protein Docking and Interaction. Experimentally determining structures of protein complexes is often expensive and time-consuming, placing a premium on computational methods (Vakser, 2014). Protein docking methods (Chen et al., 2003; Venkatraman et al., 2009; De Vries et al., 2010; Biesiada et al., 2011; Torchala et al., 2013; Schindler et al., 2017; Weng et al., 2019; Sunny and Jayaraj, 2021; Christoffer et al., 2021; Yan et al., 2020) typically run several steps: first, they sample thousands or millions of complex candidates; second, they use a scoring function for ranking (Moal et al., 2013; Basu and Wallner, 2016; Launay et al., 2020; Eismann et al., 2020); finally, top-ranked candidates undergo a structure refinement process using energy or geometric models (Verburgt and Kihara, 2021). Relevant to protein-protein interaction (PPI) is the task of protein interface prediction, where GNNs have shown promise (Fout et al., 2017; Townshend et al., 2019; Liu et al., 2020; Xie and Xu, 2021; Dai and Bailey-Kellogg, 2021). Recently, ALPHAFOLD 2 and ROSETTAFOLD have been utilized as subroutines to improve PPI prediction from different aspects (Humphreys et al., 2021; Pei et al., 2021; Jovine), e.g., in combination with the physics-based docking method CLUSPRO (Kozakov et al., 2017; Ghani et al., 2021), or using extended multiple-sequence alignments to predict the structure of heterodimeric protein complexes from sequence information (Bryant et al., 2021). Concurrently to our work, Evans et al. (2021) extend ALPHAFOLD 2 to multiple chains during both training and inference.
Drug-Target Interaction (DTI). DTI aims to compute drug-target binding poses and affinities, playing an essential role in understanding drugs' mechanisms of action. Prior methods (Wallach et al., 2015; Li et al., 2021) predict binding affinity from protein-ligand co-crystal structures, but such data are expensive to obtain experimentally. These models are typically based on heavy candidate sampling and ranking (Trott and Olson, 2010; Koes et al., 2013; McNutt et al., 2021; Bao et al., 2021), are tailored for small drug-like ligands, and often assume a known binding pocket. Thus, they are not immediately applicable to our use case. In contrast, our rigid docking approach is generic and could be extended to accelerate DTI research in future work.
3 MATHEMATICAL CONSTRAINTS FOR RIGID BODY DOCKING
We start by introducing the rigid body docking problem and derive the geometric constraints that enforce the same output complex prediction regardless of the initial unbound positions or roles (Fig. 2).
Rigid Protein-Protein Docking: Problem Setup. We are given as input a pair of proteins forming a complex. They are (arbitrarily) denoted as the ligand and the receptor, consisting of n and m residues, respectively.
These proteins are represented in their bound (docked) state as 3D point clouds X_1^* ∈ R^{3×n}, X_2^* ∈ R^{3×m}, where each residue's location is given by the coordinates of its corresponding α-carbon atom. In the unbound state, the docked ligand is randomly rotated and translated in space, resulting in a modified point cloud X_1 ∈ R^{3×n}. For simplicity and w.l.o.g., the receptor remains in its bound location, X_2 = X_2^*. The task is to predict a rotation R ∈ SO(3) and a translation t ∈ R^3 such that R X_1 + t = X_1^*, using as input the proteins and their unbound positions X_1 and X_2. Here, R = R(X_1|X_2) and t = t(X_1|X_2) are functions of the two proteins, where we omit residue identity and other protein information from this notation for brevity. Note that we assume rigid backbones and side-chains for both proteins. We therefore do not tackle the more challenging problem of flexible docking, but our approach offers an important step towards it.
We desire that the predicted complex structure is independent of the initial locations and orientations of the two proteins, as well as of their roles (see Fig. 2). Formally, we wish to guarantee that

(R(Z_1|Z_2) Z_1 + t(Z_1|Z_2)) ⊕ Z_2 ≡ (R(X_1|X_2) X_1 + t(X_1|X_2)) ⊕ X_2,  (SE(3)-invariance)
(R(X_1|X_2) X_1 + t(X_1|X_2)) ⊕ X_2 ≡ X_1 ⊕ (R(X_2|X_1) X_2 + t(X_2|X_1)),  (commutativity)
∀ Q_1, Q_2 ∈ SO(3), ∀ g_1, g_2 ∈ R^3, ∀ X_1 ∈ R^{3×n}, X_2 ∈ R^{3×m}, with Z_l = Q_l X_l + g_l, l ∈ {1, 2},  (1)

for any rotations Q_1, Q_2 and translations g_1, g_2, where ⊕ is concatenation along columns, and ≡ denotes identity after superimposition, i.e. zero Root-Mean-Square Deviation (RMSD) between the two 3D point sets after applying the Kabsch algorithm (Kabsch, 1976). An immediate question arises: how do the constraints in Eq. (1) translate into constraints on R(·|·) and t(·|·)? The rotation R and translation t change in a systematic way when we apply SE(3) transformations or swap the proteins' roles. These properties restrict our class of functions, as derived below.
SE(3)-equivariance Constraints. If we apply any distinct SE(3) transformations to the unbound ligand X_1 and receptor X_2, i.e. we dock Q_1 X_1 + g_1 onto Q_2 X_2 + g_2, then the rotation matrix R(Q_1 X_1 + g_1 | Q_2 X_2 + g_2) and the translation vector t(Q_1 X_1 + g_1 | Q_2 X_2 + g_2) can be derived from the original R(X_1|X_2) and t(X_1|X_2), assuming that we always perform rotations first. In this case, R(Q_1 X_1 + g_1 | Q_2 X_2 + g_2) can be decomposed into three rotations: i) apply Q_1^T to undo the rotation Q_1 applied to X_1, ii) apply R(X_1|X_2), iii) apply Q_2 to rotate the docked ligand together with the receptor. This gives R(Q_1 X_1 + g_1 | Q_2 X_2 + g_2) = Q_2 R(X_1|X_2) Q_1^T, which in turn constrains the translation vector. We provide a formal statement and prove it in Appendix B.1:
Proposition 1. For any Q_1, Q_2 ∈ SO(3), g_1, g_2 ∈ R^3, SE(3)-invariance of the predicted docked complex defined by Eq. (1) is guaranteed iff

R(Q_1 X_1 + g_1 | Q_2 X_2 + g_2) = Q_2 R(X_1|X_2) Q_1^T
t(Q_1 X_1 + g_1 | Q_2 X_2 + g_2) = Q_2 t(X_1|X_2) − Q_2 R(X_1|X_2) Q_1^T g_1 + g_2.  (2)

As a direct consequence of this proposition, we have the following statement.
Proposition 2. Any model satisfying Proposition 1 guarantees invariance of the predicted complex w.r.t. any SE(3) transformation of X_1, and equivariance w.r.t. any SE(3) transformation of X_2:

R(Z_1|X_2) Z_1 + t(Z_1|X_2) = R(X_1|X_2) X_1 + t(X_1|X_2), where Z_1 = Q_1 X_1 + g_1
R(X_1|Z_2) X_1 + t(X_1|Z_2) = Q_2 [R(X_1|X_2) X_1 + t(X_1|X_2)] + g_2, where Z_2 = Q_2 X_2 + g_2
∀ Q_1, Q_2 ∈ SO(3), ∀ g_1, g_2 ∈ R^3, ∀ X_1 ∈ R^{3×n}, ∀ X_2 ∈ R^{3×m}.
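As a quick sanity check, Proposition 1 can be verified numerically. The following is a minimal sketch (our illustration, not the authors' code) with an arbitrary "oracle" pair (R, t): updating it via Eq. (2) must move the docked ligand rigidly together with the receptor.

```python
import numpy as np

def random_rotation(rng):
    # QR of a Gaussian matrix yields an orthogonal matrix; fix det = +1 for SO(3).
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(3, 7)), rng.normal(size=(3, 9))
R, t = random_rotation(rng), rng.normal(size=(3, 1))   # arbitrary "oracle" (R, t)

Q1, Q2 = random_rotation(rng), random_rotation(rng)
g1, g2 = rng.normal(size=(3, 1)), rng.normal(size=(3, 1))

# Eq. (2): required change of (R, t) under independent SE(3) moves of the inputs.
R_new = Q2 @ R @ Q1.T
t_new = Q2 @ t - Q2 @ R @ Q1.T @ g1 + g2

docked = R @ X1 + t                                 # ligand docked onto X2
docked_moved = R_new @ (Q1 @ X1 + g1) + t_new       # ligand docked onto Q2 X2 + g2
assert np.allclose(docked_moved, Q2 @ docked + g2)  # complex moved rigidly by (Q2, g2)
```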
Commutativity. Instead of docking X_1 with respect to X_2, we can also dock X_2 with respect to X_1. In this case, we require the final complex structures to be identical after superimposition, i.e., to have zero RMSD. This property is named commutativity, and it is satisfied as follows (proof in Appendix B.2).
Proposition 3. Commutativity as defined by Eq. (1) is guaranteed iff

R(X_2|X_1) = R^T(X_1|X_2);  t(X_2|X_1) = −R^T(X_1|X_2) t(X_1|X_2).  (3)

Point Permutation Invariance. We also enforce residue permutation invariance. Formally, both R(X_1|X_2) and t(X_1|X_2) should not depend on the order of columns of X_1 and, respectively, of X_2.
4 EQUIDOCK MODEL
Protein Representation. A protein is a sequence of amino acid residues that folds into a 3D structure. Each residue has a general structure with a side-chain specifying its type, allowing us to define a local frame and derive SE(3)-invariant features for any pair of residues (see Appendix A). We represent a protein as a graph G = (V, E), similar to Fout et al. (2017); Townshend et al. (2019); Liu et al. (2020). Each node i ∈ V represents one residue and has 3D coordinates x_i ∈ R^3 corresponding to the α-carbon atom's location. Edges are given by a k-nearest-neighbor (k-NN) graph using the Euclidean distance of the original 3D node coordinates; a minimal sketch of this construction is given below.
Overview of Our Approach. Our model is depicted in Fig. 3. We first build the k-NN protein graphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2). We then design SE(3)-invariant node features F_1 ∈ R^{d×n}, F_2 ∈ R^{d×m} and edge features {f_{j→i} : ∀ (i, j) ∈ E_1 ∪ E_2} (see Appendix A). Next, we apply several layers of functions Φ that jointly transform node coordinates and features. Crucially, we guarantee, by design, pairwise independent SE(3)-equivariance for the coordinate embeddings and invariance for the feature embeddings. This double constraint is formally defined as follows: given Z_1, H_1, Z_2, H_2 = Φ(X_1, F_1, X_2, F_2), we have

Q_1 Z_1 + g_1, H_1, Q_2 Z_2 + g_2, H_2 = Φ(Q_1 X_1 + g_1, F_1, Q_2 X_2 + g_2, F_2), ∀ Q_1, Q_2 ∈ SO(3), ∀ g_1, g_2 ∈ R^3.  (4)

We implement Φ as a novel type of message-passing neural network (MPNN). We then use the output node coordinate and feature embeddings to compute R(X_1|X_2) and t(X_1|X_2). These functions depend on pairwise interactions between the two proteins, modeled as cross-messages, but also incorporate the 3D structure in a pairwise-independent SE(3)-equivariant way so as to satisfy Eq. (1), Proposition 1 and Proposition 3. We discover keypoints from each protein based on a neural attention mechanism and softly guide them to represent the respective binding pocket locations via an optimal-transport-based auxiliary loss. Finally, we obtain the SE(3) transformation by superimposing the two keypoint sets via a differentiable version of the Kabsch algorithm. An additional soft constraint discourages point cloud intersections. We now detail each of these model components.
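To make the first step of the pipeline concrete, here is a minimal numpy sketch (our illustration, not the released EQUIDOCK code) of the residue-level k-NN graph; k = 10 is the value used in the paper's experiments.

```python
import numpy as np

def knn_graph(coords: np.ndarray, k: int = 10):
    """coords: (n, 3) alpha-carbon positions; returns directed edges (j, i), j a neighbor of i."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)            # exclude self-loops
    nbrs = np.argsort(dists, axis=1)[:, :k]    # indices of the k nearest residues per node
    return [(j, i) for i in range(len(coords)) for j in nbrs[i]]

coords = np.random.default_rng(0).normal(size=(50, 3))
assert len(knn_graph(coords)) == 50 * 10       # k directed edges per residue
```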
Independent E(3)-Equivariant Graph Matching Networks (IEGMNs). Our architecture for Φ satisfying Eq. (4) is called the Independent E(3)-Equivariant Graph Matching Network (IEGMN); see Fig. 3. It extends both Graph Matching Networks (GMN) (Li et al., 2019) and E(3)-Equivariant Graph Neural Networks (E(3)-GNN) (Satorras et al., 2021). IEGMNs perform node coordinate and feature embedding updates for an input pair of graphs G_1 = (V_1, E_1), G_2 = (V_2, E_2), and use inter- and intra-node messages as well as E(3)-equivariant coordinate updates. The l-th layer of an IEGMN transforms the node feature embeddings {h_i^(l)}_{i∈V_1∪V_2} and node coordinate embeddings {x_i^(l)}_{i∈V_1∪V_2} as

m_{j→i} = φ^e(h_i^(l), h_j^(l), exp(−‖x_i^(l) − x_j^(l)‖^2 / σ), f_{j→i}), ∀ e_{j→i} ∈ E_1 ∪ E_2  (5)
μ_{j→i} = a_{j→i} W h_j^(l), ∀ i ∈ V_1, j ∈ V_2 or i ∈ V_2, j ∈ V_1  (6)
m_i = (1/|N(i)|) Σ_{j∈N(i)} m_{j→i}, ∀ i ∈ V_1 ∪ V_2  (7)
μ_i = Σ_{j∈V_2} μ_{j→i}, ∀ i ∈ V_1, and μ_i = Σ_{j∈V_1} μ_{j→i}, ∀ i ∈ V_2  (8)
x_i^(l+1) = η x_i^(0) + (1−η) x_i^(l) + Σ_{j∈N(i)} (x_i^(l) − x_j^(l)) φ^x(m_{j→i}), ∀ i ∈ V_1 ∪ V_2  (9)
h_i^(l+1) = (1−β) · h_i^(l) + β · φ^h(h_i^(l), m_i, μ_i, f_i), ∀ i ∈ V_1 ∪ V_2,  (10)

where N(i) is the set of neighbors of node i; φ^x is a real-valued (scalar) parametric function; W is a learnable matrix; φ^h, φ^e are parametric functions (MLPs) outputting vectors in R^d; and f_{j→i} and f_i are the original edge and node features (extracted SE(3)-invariantly from the residues). The coefficient a_{j→i} is attention-based, with trainable shallow neural networks ψ^q and ψ^k:

a_{j→i} = exp(⟨ψ^q(h_i^(l)), ψ^k(h_j^(l))⟩) / Σ_{j'} exp(⟨ψ^q(h_i^(l)), ψ^k(h_{j'}^(l))⟩).  (11)

Note that the parameters of W, φ^x, φ^h, φ^e, ψ^q, ψ^k can be shared across or differ between IEGMN layers. The output of several IEGMN layers is denoted as

Z_1 ∈ R^{3×n}, H_1 ∈ R^{d×n}, Z_2 ∈ R^{3×m}, H_2 ∈ R^{d×m} = IEGMN(X_1, F_1, X_2, F_2).  (12)

It is then straightforward to prove the following (see Appendix B.3):
Proposition 4. IEGMNs satisfy the pairwise independent SE(3)-equivariance property in Eq. (4).
Keypoints for Differentiable Protein Superimposition. Next, we use multi-head attention to obtain K points for each protein, Y_1, Y_2 ∈ R^{3×K}, which we name keypoints. We train them to become representative points for the binding pocket of the respective protein pair (softly enforced by an additional loss described later). If this held perfectly, then the superimposition of Y_1 and Y_2 would give the corresponding ground truth superimposition of X_1 and X_2. Our model is

y_{1k} := Σ_{i=1}^n α_i^k z_{1i};  y_{2k} := Σ_{j=1}^m β_j^k z_{2j},

where z_{1i} denotes the i-th column of the matrix Z_1, and α_i^k = softmax_i((1/√d) h_{1i}^T W'_k μ(φ(H_2))) are attention scores (similarly defined for β_j^k), with W'_k ∈ R^{d×d} a parametric matrix (different for each attention head), φ a linear layer followed by a LeakyReLU non-linearity, and μ(·) the mean vector.
Differentiable Kabsch Model. We design the rotation and translation that dock protein 1 onto protein 2 to be the same transformation used to superimpose Y_1 and Y_2 (see Fig. 3). For this, we compute a differentiable version of the Kabsch algorithm (Kabsch, 1976) as follows. Let A = Y_2 Y_1^T ∈ R^{3×3}, computed using zero-mean keypoints. Its singular value decomposition (SVD) is A = U_2 S U_1^T, where U_2, U_1 ∈ O(3). Finally, we define the differentiable functions

R(X_1|X_2; θ) = U_2 diag(1, 1, d) U_1^T, where d = sign(det(U_2 U_1^T)),
t(X_1|X_2; θ) = μ(Y_2) − R(X_1|X_2; θ) μ(Y_1),  (13)

where μ(·) is the mean vector of a point cloud. It is straightforward to prove that this model satisfies all the equivariance properties in Eqs. (1) to (3). From a practical perspective, the gradient and backpropagation through the SVD operation were analyzed by Ionescu et al. (2015) and Papadopoulo and Lourakis (2000), and are implemented in automatic differentiation frameworks such as PyTorch.
MSE Loss. During training, we randomly decide which protein is the receptor (say protein 2), keep it in the docked position (i.e., X_2 = X_2^*), predict the SE(3) transformation using Eq. (13), and use it to compute the final position of the ligand as X̃_1 = R(X_1|X_2) X_1 + t(X_1|X_2). The mean squared error (MSE) loss is then L_MSE = (1/n) Σ_{i=1}^n ‖x_i^* − x̃_i‖^2.
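The differentiable Kabsch step of Eq. (13) is compact enough to sketch directly. Below is a minimal PyTorch version (our illustration, not the released code), together with a sanity check that it recovers a known rigid transform; gradients flow through torch's SVD as noted above.

```python
import torch

def differentiable_kabsch(Y1: torch.Tensor, Y2: torch.Tensor):
    """Y1, Y2: (3, K) keypoints. Returns (R, t) such that R @ Y1 + t ≈ Y2."""
    mu1 = Y1.mean(dim=1, keepdim=True)
    mu2 = Y2.mean(dim=1, keepdim=True)
    A = (Y2 - mu2) @ (Y1 - mu1).T            # Eq. (13): A from zero-mean keypoints
    U2, _, U1t = torch.linalg.svd(A)         # A = U2 diag(S) U1^T
    d = torch.sign(torch.det(U2 @ U1t))      # correct a possible reflection -> R in SO(3)
    D = torch.diag(torch.stack([d.new_ones(()), d.new_ones(()), d]))
    R = U2 @ D @ U1t
    t = mu2 - R @ mu1
    return R, t

# Sanity check: recover a known rigid transform of random keypoints.
Y1 = torch.randn(3, 5)
Q, _ = torch.linalg.qr(torch.randn(3, 3))
Q = Q if torch.det(Q) > 0 else -Q            # det(-Q) = -det(Q) for 3x3 matrices
Y2 = Q @ Y1 + torch.randn(3, 1)
R, t = differentiable_kabsch(Y1, Y2)
assert torch.allclose(R @ Y1 + t, Y2, atol=1e-4)
```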
Optimal Transport and Binding Pocket Keypoint Alignment. As stated before, we desire that Y_1 and Y_2 are representative points for the binding pocket location of the respective protein pair. However, this needs to be encouraged explicitly, which we achieve using an additional loss. We first define the binding pocket point sets, drawing inspiration from previous PPI work (Section 2). Given the residues' α-carbon locations in the bound (docked) structures, X_1^* and X_2^*, we select all pairs of residues at less than τ Euclidean distance (τ = 8Å in our experiments). We assume these are all interacting residues. Denote these pairs as {(x_{1s}^*, x_{2s}^*), s ∈ 1, . . . , S}, where S varies across data pairs. We compute the midpoints of these segments, denoted P_1^*, P_2^* ∈ R^{3×S}, where p_{1s}^* = p_{2s}^* = 0.5 · (x_{1s}^* + x_{2s}^*). We view P_1^* and P_2^* as binding pocket points. In the unbound state, these sets are randomly moved in space together with the respective protein residue coordinates X_1 and X_2. We denote them as P_1, P_2 ∈ R^{3×S}. For clarity, if X_1 = Q X_1^* + g, then P_1 = Q P_1^* + g. We desire that Y_1 is a representative set for the 3D set P_1 (and, similarly, Y_2 for P_2). However, while at training time we know that every point p_{1s} corresponds to the point p_{2s} (and, similarly, y_{1k} aligns with y_{2k}, by assumption), we unfortunately do not know the actual alignment between the points in Y_l and P_l, for each l ∈ {1, 2}. This can be recovered using an additional optimal transport loss:

L_OT = min_{T ∈ U(S,K)} ⟨T, C⟩, where C_{s,k} = ‖y_{1k} − p_{1s}‖^2 + ‖y_{2k} − p_{2s}‖^2,  (14)

where U(S,K) is the set of S × K transport plans with uniform marginals. The optimal transport plan is computed as an Earth Mover's Distance using the POT library (Flamary et al., 2021), and is kept fixed during back-propagation and optimization, so that only the cost matrix is trained. Note that our approach assumes that y_{1k} corresponds to y_{2k}, for every k ∈ {1, . . . , K}. Intuitively, each attention head k will identify a specific geometric/chemical local surface feature of protein 1 by y_{1k}, and match its complementary feature of protein 2 by y_{2k}.
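A minimal numpy sketch of Eq. (14), using the POT library referenced above (our illustration; shapes and uniform marginals follow the text, and the transport plan is detached as in the paper):

```python
import numpy as np
import ot  # the POT library (pip install pot)

rng = np.random.default_rng(0)
K, S = 8, 12
Y1, Y2 = rng.normal(size=(3, K)), rng.normal(size=(3, K))   # keypoints
P1, P2 = rng.normal(size=(3, S)), rng.normal(size=(3, S))   # pocket midpoints

# Cost C[s, k] = ||y_1k - p_1s||^2 + ||y_2k - p_2s||^2
C = (((Y1[:, None, :] - P1[:, :, None]) ** 2).sum(0)
     + ((Y2[:, None, :] - P2[:, :, None]) ** 2).sum(0))      # shape (S, K)

a = np.full(S, 1.0 / S)   # uniform marginal over pocket points
b = np.full(K, 1.0 / K)   # uniform marginal over keypoints
T = ot.emd(a, b, C)       # exact plan; kept fixed (no gradient) during training
loss_ot = float((T * C).sum())
```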
Avoiding Point Cloud Intersection. In practice, our model does not enforce a useful inductive bias, namely that proteins forming complexes never "intersect" each other. To address this issue, we first state a notion of the "interior" of a protein point cloud. Following previous work (Sverrisson et al., 2021; Venkatraman et al., 2009), we define the surface of a protein point cloud X ∈ R^{3×n} as {x ∈ R^3 : G(x) = γ}, where G(x) = −σ ln(Σ_{i=1}^n exp(−‖x − x_i‖^2 / σ)). The parameters σ and γ are chosen such that there exist no "holes" inside a protein (we found γ = 10, σ = 25 to work well; see Appendix E). As a consequence, the interior of the protein is given by {x ∈ R^3 : G(x) < γ}. The condition for a non-intersecting ligand and receptor can then be written as G_1(x_{2j}) > γ, ∀ j ∈ 1, . . . , m and G_2(x_{1i}) > γ, ∀ i ∈ 1, . . . , n. As a loss function, this becomes

L_NI = (1/n) Σ_{i=1}^n max(0, γ − G_2(x_{1i})) + (1/m) Σ_{j=1}^m max(0, γ − G_1(x_{2j})).  (15)

Surface Aware Node Features. Surface contact modeling is important for protein docking. We here design a novel type of surface feature that differentiates residues closer to the surface of the protein from those in the interior. Similar to Sverrisson et al. (2021), we prioritize efficiency and avoid pre-computing meshes, but show that our new feature is a good proxy for a residue's depth (i.e. its distance to the protein surface). Intuitively, residues in the core of the protein are locally surrounded in all directions by other residues. This is not true for residues on the surface, e.g., their neighbors lie in a half-space if the surface is locally flat. Building on this intuition, for each node (residue) i in the k-NN protein graph, we compute the norm of the weighted average of its neighbor forces, which can be interpreted as the normalized gradient of the surface function G(x). This SE(3)-invariant feature is

ρ_i(x_i; λ) = ‖Σ_{i'∈N_i} w_{i,i',λ} (x_i − x_{i'})‖ / Σ_{i'∈N_i} w_{i,i',λ} ‖x_i − x_{i'}‖, where w_{i,i',λ} = exp(−‖x_i − x_{i'}‖^2 / λ) / Σ_{j∈N_i} exp(−‖x_i − x_j‖^2 / λ).  (16)

Intuitively, as depicted in Fig. 8, residues in the interior of the protein have values close to 0, since they are surrounded by vectors from all directions that cancel out, while residues near the surface have neighbors only in a narrower cone, with an aperture depending on the local curvature of the surface. We show in Appendix C that this feature correlates well with more expensive residue depth estimation methods, e.g. those based on MSMS, thus offering a computationally appealing alternative. We also compute an estimate of this feature for large dense point clouds based on the local surface angle.
5 EXPERIMENTS
Datasets. We leverage the following datasets: Docking Benchmark 5.5 (DB5.5) (Vreven et al., 2015) and the Database of Interacting Protein Structures (DIPS) (Townshend et al., 2019). DB5.5 is a gold-standard dataset in terms of data quality, but contains only 253 structures. DIPS is a larger protein complex structure dataset mined from the Protein Data Bank (Berman et al., 2000) and tailored for rigid body docking. Dataset information is given in Appendix D. We filter DIPS to keep only proteins with at most 10K atoms. The datasets are then randomly partitioned into train/val/test splits of sizes 203/25/25 (DB5.5) and 39,937/974/965 (DIPS). For DIPS, the split is based on protein family, in order to separate similar proteins. For the final evaluation in Table 1, we use the full DB5.5 test set, and randomly sample 100 pairs from different protein families from the DIPS test set.
Baselines. We compare our EQUIDOCK method with popular state-of-the-art docking software²: CLUSPRO (PIPER) (Desta et al., 2020; Kozakov et al., 2017), ATTRACT (Schindler et al., 2017; de Vries et al., 2015), PATCHDOCK (Mashiach et al., 2010; Schneidman-Duhovny et al., 2005), and HDOCK (Yan et al., 2020; 2017b;a; Huang and Zou, 2014; 2008). These baselines provide user-friendly local packages suitable for automated experiments, or web servers for manual submissions.
Evaluation Metrics. To measure prediction quality, we report the Complex Root Mean Square Deviation (CRMSD) and the Interface Root Mean Square Deviation (IRMSD), defined below. Given the ground truth and predicted complex structures, Z^* ∈ R^{3×(n+m)} and Z ∈ R^{3×(n+m)}, we first superimpose them using the Kabsch algorithm (Kabsch, 1976), and then compute C-RMSD = √((1/(n+m)) ‖Z^* − Z‖_F^2). We compute I-RMSD similarly, but using only the coordinates of the interface residues at distance less than 8Å from the other protein's residues. For a fair comparison among baselines, we use only the α-carbon coordinates to compute both metrics.
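A minimal sketch of the CRMSD computation described above (our illustration), reusing a numpy Kabsch superimposition:

```python
import numpy as np

def kabsch_superimpose(Z_ref: np.ndarray, Z: np.ndarray) -> np.ndarray:
    """Rigidly align Z (3, N) onto Z_ref (3, N) and return the moved copy."""
    mu_ref, mu = Z_ref.mean(1, keepdims=True), Z.mean(1, keepdims=True)
    U, _, Vt = np.linalg.svd((Z_ref - mu_ref) @ (Z - mu).T)
    d = np.sign(np.linalg.det(U @ Vt))             # keep a proper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return R @ (Z - mu) + mu_ref

def crmsd(Z_star: np.ndarray, Z: np.ndarray) -> float:
    Z_aligned = kabsch_superimpose(Z_star, Z)      # superimpose before measuring
    return float(np.sqrt(((Z_star - Z_aligned) ** 2).sum() / Z.shape[1]))
```

I-RMSD would follow the same recipe, restricted to the columns of interface residues (those within 8Å of the other protein).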
Training Details. We train our models on the training part of DIPS first, using Adam (Kingma and Ba, 2014) with learning rate 2e-4 and early stopping with a patience of 30 epochs. We update the best validation model only when it achieves a score below 98% of the previous best validation score, where the score is the median ligand RMSD on the full DIPS validation set. The best DIPS-validated model is then tested on the DIPS test set. For DB5.5, we fine-tune the DIPS pre-trained model on the DB5.5 training set using learning rate 1e-4 and early stopping with a patience of 150 epochs. The best DB5.5-validated model is finally tested on the DB5.5 test set. During training, we randomly assign the roles of ligand and receptor. Also, during both training and testing, we randomly rotate and translate the ligand in space (even though our model is invariant to this operation) for all baselines.
² ClusPro: https://cluspro.bu.edu/, Attract: www.attract.ph.tum.de/services/ATTRACT/ATTRACT.vdi.gz, PatchDock: https://bioinfo3d.cs.tau.ac.il/PatchDock/, HDOCK: http://huanglab.phys.hust.edu.cn/software/HDOCK/
Complex Prediction Results. Results are shown in Table 1, Fig. 4 and Appendix E. We note that our method is competitive and often outperforms the baselines, even though we do not use heavy candidate sampling and re-ranking, do not rely on task-specific hand-crafted features, and currently do not perform structure fine-tuning, aiming instead to predict the SE(3) ligand transformation in a single direct shot. Moreover, we note that some of the baselines might have used part of our test set to validate their models, for example to learn surface templates; thus, their reported scores might be optimistic. Notably, the HDOCK scoring function was validated on DB4, which overlaps with DB5.5. A more appropriate comparison would require us to re-build these baselines without information from our test sets, a task that is currently not possible without open-source implementations.
Computational Efficiency. We show inference times in Fig. 5 and Table 4. Note that EQUIDOCK is between 80 and 500 times faster than the baselines. This is especially important for intensive screening applications that aim to scan over vast search spaces, e.g. for drug discovery. In addition, it is also relevant for the de novo design of binding proteins (e.g. antibodies (Jin et al., 2021)), or for use cases in which protein docking models are just one component of significantly larger end-to-end architectures targeting more involved biological scenarios, e.g., representing a drug's mechanism of action or modeling cellular processes with a single model as opposed to a multi-pipeline architecture.
Visualization. We show in Fig. 6 a successful example of a test DIPS protein pair for which our model significantly outperforms all baselines.
6 CONCLUSION
We have presented an extremely fast, end-to-end rigid protein docking approach that does not rely on candidate sampling, templates, task-specific features or pre-computed meshes. Our method smartly incorporates useful rigid protein docking priors, including commutativity and pairwise independent SE(3)-equivariance, thus avoiding the computational burden of data augmentation. We look forward to incorporating more domain knowledge into EQUIDOCK, extending it to flexible docking and docking molecular dynamics, and adapting it to other related tasks such as drug binding prediction. In the long term, we envision that fast and accurate deep learning models will allow us to tackle more complex and involved biological scenarios, for example to model the mechanisms of action of various drugs, or to design de novo binding proteins and drugs for specific targets (e.g. for antibody generation).
Last, we hope that our architecture can inspire the design of models for other types of biological 3D interactions.
Limitations. First, our presented model does not incorporate protein flexibility, which is necessary for various protein families, e.g., antibodies. Unfortunately, both the DB5 and DIPS datasets are biased towards rigid body docking. Second, we only prevent steric clashes using a soft constraint (Eq. (15)), which has limitations (see Table 6). Future extensions would hard-constrain the model to prevent such artifacts.
ACKNOWLEDGEMENTS
The authors thank Hannes Stärk, Gabriele Corso, Patrick Walters, Tian Xie, Xiang Fu, Jacob Stern, Jason Yim, Lewis Martin, Jeremy Wohlwend, Jiaxiang Wu, Wei Liu, and Ding Xue for insightful and helpful discussions. OEG is funded by the Machine Learning for Pharmaceutical Discovery and Synthesis (MLPDS) consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging (DOMANE) threats program, and the DARPA Accelerated Molecular Discovery program. This publication was created as part of NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation. RB and TJ also acknowledge support from the NSF Expeditions grant (award 1918839): Collaborative Research: Understanding the World Through Code.
A REPRESENTING PROTEINS AS GRAPHS
A protein is comprised of amino acid residues. The structure of an amino acid residue is shown in Fig. 7. Generally, an amino acid residue contains an amino group (-NH-), an α-carbon atom and a carboxyl group (-CO-), along with a side chain (R) connected to the α-carbon atom. The side chain (R) is specific to each type of amino acid residue. We work at the residue level (our approach can be extended to the atom level as well). A protein is represented by a set of nodes, where each node is an amino acid residue in the protein. Each node i has a 3D coordinate x_i ∈ R^3, which is the 3D coordinate of the α-carbon atom of the residue. The neighborhood of a node is the set of its k nearest nodes (k = 10 in our experiments), where the distance is the Euclidean distance between 3D coordinates. The node feature is a one-hot encoding of the type of the amino acid residue, which is passed through an embedding layer.
Local Coordinate System. Similar to Ingraham et al. (2019) and Jumper et al. (2021), we introduce a local coordinate system for each residue, which denotes the orientation of the residue. Based on this, we can further design SE(3)-invariant edge features. As shown in Fig. 7, for a residue i, we denote the unit vector pointing from the α-carbon atom to the nitrogen atom as u_i. We denote the unit vector pointing from the α-carbon atom to the carbon atom of the carboxyl group (-CO-) as t_i. Together, u_i and t_i define a plane, whose normal is n_i = (u_i × t_i) / ‖u_i × t_i‖. Finally, we define v_i = n_i × u_i. Then n_i, u_i and v_i together form the basis of residue i's local coordinate system; together they encode the orientation of residue i.
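A minimal numpy sketch of this local frame (our illustration; `ca`, `n`, `c` stand for the assumed α-carbon, nitrogen, and carboxyl-carbon coordinates of one residue):

```python
import numpy as np

def local_frame(ca: np.ndarray, n: np.ndarray, c: np.ndarray):
    """Returns the orthonormal basis (n_i, u_i, v_i) encoding a residue's orientation."""
    u = (n - ca) / np.linalg.norm(n - ca)      # unit vector: alpha-carbon -> nitrogen
    t = (c - ca) / np.linalg.norm(c - ca)      # unit vector: alpha-carbon -> carboxyl carbon
    nrm = np.cross(u, t)
    nrm = nrm / np.linalg.norm(nrm)            # normal of the (u, t) plane
    v = np.cross(nrm, u)                       # completes the right-handed basis
    return nrm, u, v
```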
We now introduce the features of an edge j → i ∈ E. These features describe the relative position of j with respect to i, the relative orientation of j with respect to i, and the distance between j and i.
Relative Position Edge Features. First, we introduce the edge features p_{j→i}, which describe the relative position of j with respect to i:

p_{j→i} = [n_i^T; u_i^T; v_i^T] (x_j − x_i).

Relative Orientation Edge Features. As mentioned above, each residue has an orientation, which carries information. We thus introduce the edge features q_{j→i}, k_{j→i} and t_{j→i}, which describe the relative orientation of j with respect to i:

q_{j→i} = [n_i^T; u_i^T; v_i^T] n_j,  k_{j→i} = [n_i^T; u_i^T; v_i^T] u_j,  t_{j→i} = [n_i^T; u_i^T; v_i^T] v_j.

Distance-Based Edge Features. Distance also carries information. We use radial basis functions of the distance as edge features:

f_{j→i,r} = exp(−‖x_j − x_i‖^2 / (2σ_r^2)), r = 1, 2, . . . , R,

where R and the scale parameters {σ_r}_{1≤r≤R} are hyperparameters. In experiments, the set of scale parameters we used is {1.5^x | x = 0, 1, 2, . . . , 14}, so each edge has 15 distance-based edge features.
Surface Aware Node Features. We additionally compute 5 surface aware node features defined in Eq. (16), using λ ∈ {1, 2, 5, 10, 30}.
B PROOFS OF THE MAIN PROPOSITIONS
B.1 PROOF OF PROPOSITION 1.
Proof. Denote the predicted ligand position by R(X_1|X_2) X_1 + t(X_1|X_2) = X̃_1. Assume first that SE(3)-invariance of the predicted docked complex defined by Eq. (1) is satisfied. Then the transformation docking Q_1 X_1 + g_1 with respect to Q_2 X_2 + g_2 is the same as the transformation changing Q_1 X_1 + g_1 into Q_2 X̃_1 + g_2. We use the notation R^T(X_1|X_2) = (R(X_1|X_2))^T. Then, we have the following derivation steps:

R(X_1|X_2) X_1 + t(X_1|X_2) = X̃_1
X_1 + R^T(X_1|X_2) t(X_1|X_2) = R^T(X_1|X_2) X̃_1
X_1 + R^T(X_1|X_2) t(X_1|X_2) = R^T(X_1|X_2) Q_2^T (Q_2 X̃_1 + g_2 − g_2)
X_1 + R^T(X_1|X_2) t(X_1|X_2) = R^T(X_1|X_2) Q_2^T (Q_2 X̃_1 + g_2) − R^T(X_1|X_2) Q_2^T g_2
X_1 + R^T(X_1|X_2) t(X_1|X_2) + R^T(X_1|X_2) Q_2^T g_2 = R^T(X_1|X_2) Q_2^T (Q_2 X̃_1 + g_2)
Q_1^T (Q_1 X_1 + g_1 − g_1) + R^T(X_1|X_2) (t(X_1|X_2) + Q_2^T g_2) = R^T(X_1|X_2) Q_2^T (Q_2 X̃_1 + g_2)
Q_1^T (Q_1 X_1 + g_1) − Q_1^T g_1 + R^T(X_1|X_2) (t(X_1|X_2) + Q_2^T g_2) = R^T(X_1|X_2) Q_2^T (Q_2 X̃_1 + g_2)
R(X_1|X_2) Q_1^T (Q_1 X_1 + g_1) − R(X_1|X_2) Q_1^T g_1 + t(X_1|X_2) + Q_2^T g_2 = Q_2^T (Q_2 X̃_1 + g_2)
Q_2 R(X_1|X_2) Q_1^T (Q_1 X_1 + g_1) − Q_2 R(X_1|X_2) Q_1^T g_1 + Q_2 t(X_1|X_2) + g_2 = Q_2 X̃_1 + g_2.

From the last equation above, one reads off the transformation of Q_1 X_1 + g_1 into Q_2 X̃_1 + g_2, which, by definition of the functions R and t, is the same as the transformation docking Q_1 X_1 + g_1 with respect to Q_2 X_2 + g_2. This transformation is

R(Q_1 X_1 + g_1 | Q_2 X_2 + g_2) = Q_2 R(X_1|X_2) Q_1^T
t(Q_1 X_1 + g_1 | Q_2 X_2 + g_2) = Q_2 t(X_1|X_2) − Q_2 R(X_1|X_2) Q_1^T g_1 + g_2,

which concludes the proof. Conversely, assuming the constraints in Eq. (2) hold, we derive that Q_1 X_1 + g_1 is transformed into Q_2 X̃_1 + g_2, and it is then trivial to check that this satisfies SE(3)-invariance of the predicted docked complex as defined by Eq. (1).
B.2 PROOF OF PROPOSITION 3.
Proof. We use the notation R^T(X_1|X_2) := (R(X_1|X_2))^T. As in Appendix B.1, we denote R(X_1|X_2) X_1 + t(X_1|X_2) = X̃_1. Then the transformation docking X_2 with respect to X_1 is the same as the transformation changing X̃_1 back to X_1, which is obtained as

R(X_1|X_2) X_1 + t(X_1|X_2) = X̃_1
X_1 + R^T(X_1|X_2) t(X_1|X_2) = R^T(X_1|X_2) X̃_1
X_1 = R^T(X_1|X_2) X̃_1 − R^T(X_1|X_2) t(X_1|X_2).

From the last equation above, we read off the transformation changing X̃_1 back to X_1, which is the same as the transformation docking X_2 with respect to X_1.
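As with Proposition 1, Proposition 3 admits a short numerical check (our sketch, not part of the paper): if (R, t) docks X_1 onto X_2, then (R^T, −R^T t) docks X_2 onto X_1, and the two complexes differ only by a global rigid motion, i.e. they have zero RMSD after superimposition.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q        # a rotation in SO(3); det(-Q) = -det(Q) in 3D
t = rng.normal(size=(3, 1))
X1, X2 = rng.normal(size=(3, 6)), rng.normal(size=(3, 8))

complex_12 = np.hstack([R @ X1 + t, X2])             # dock protein 1 onto protein 2
R_swap, t_swap = R.T, -R.T @ t                       # Eq. (3)
complex_21 = np.hstack([X1, R_swap @ X2 + t_swap])   # dock protein 2 onto protein 1

# complex_21 equals complex_12 moved rigidly by (R^T, -R^T t).
assert np.allclose(complex_21, R.T @ complex_12 - R.T @ t)
```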
B.3 PROOF OF PROPOSITION 4.
Proof. Let X_1^(l+1), H_1^(l+1), X_2^(l+1), H_2^(l+1) = IEGMN(X_1^(l), H_1^(l), X_2^(l), H_2^(l)) be the output of an IEGMN layer. Then, for any matrices Q_1, Q_2 ∈ SO(3) and any translation vectors g_1, g_2 ∈ R^3, we want to prove that the IEGMN satisfies the pairwise independent SE(3)-equivariance property:

Q_1 X_1^(l+1) + g_1, H_1^(l+1), Q_2 X_2^(l+1) + g_2, H_2^(l+1) = IEGMN(Q_1 X_1^(l) + g_1, H_1^(l), Q_2 X_2^(l) + g_2, H_2^(l)),

where each column of X_1^(l) ∈ R^{3×n}, H_1^(l) ∈ R^{d×n}, X_2^(l) ∈ R^{3×m} and H_2^(l) ∈ R^{d×m} represents an individual node's coordinate embedding or feature embedding. We first note that the equations of our proposed IEGMN layer that compute the messages m_{j→i}, μ_{j→i}, m_i and μ_i are SE(3)-invariant. Indeed, they depend on the initial features, which are SE(3)-invariant by design, on the current latent node embeddings {h_i^(l)}_{i∈V_1∪V_2}, and on the Euclidean distances between the current node coordinates {x_i^(l)}_{i∈V_1∪V_2}. Thus, the equation that computes the new latent node embeddings h_i^(l+1) is also SE(3)-invariant. Last, the equation that updates the coordinates x_i^(l+1) is SE(3)-equivariant with respect to the 3D coordinates of nodes from the same graph as i, but SE(3)-invariant with respect to the 3D coordinates of nodes from the other graph, since it only uses invariant transformations of the latter.
C SURFACE FEATURES
Visualization. We further discuss the new surface features introduced in Eq. (16). We first visualize their design intuition in Fig. 8. A synthetic experiment is shown in Fig. 9.
Correlation with MSMS features. Next, we analyze how accurate these features are compared to established residue depth estimation methods, e.g. those based on the MSMS software (Sanner et al., 1996). We plot the Spearman rank-order correlation of the two methods in Fig. 10. We observe a concentrated distribution with a mean of 0.68 and a median of 0.70, suggesting a strong correlation with the MSMS depth estimation.
Closed-form expression. Finally, we prove that for points close to the protein surface and surrounded by (infinitely) many equally-distanced and equally-spaced points, one can derive a closed-form expression for the surface features defined in Eq. (16); see Fig. 11. We work in 2 dimensions, but the extension to 3 dimensions is straightforward. Assume that the local surface at point x_i has angle α. Further, assume that x_i is surrounded by N equally-distanced and equally-spaced points x_{i'}. Then all w_{i,i',λ} are identical, and the summation vector in the numerator of Eq. (16) has non-zero components only along the direction that bisects the surface angle, as the other components cancel out. Under the limit N → ∞, we derive the closed-form expression

ρ_i(x_i; λ) = (1/N) ‖Σ_{i'∈N_i} (x_i − x_{i'}) / ‖x_i − x_{i'}‖‖ = (2/N) Σ_{j=0}^{N/2} cos(jα/N) ≈_{N→∞} (2/α) ∫_0^{α/2} cos(θ) dθ = 2 sin(α/2) / α.  (17)
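For completeness, a minimal numpy sketch of the feature in Eq. (16) (our illustration; `nbrs[i]` holds the assumed k-NN indices of residue i, as in the graph construction of Section 4):

```python
import numpy as np

def surface_feature(coords: np.ndarray, nbrs: list, lam: float = 2.0) -> np.ndarray:
    """Eq. (16): rho is close to 0 for buried residues and grows towards the surface."""
    rho = np.empty(len(coords))
    for i, nb in enumerate(nbrs):
        diff = coords[i] - coords[nb]                  # (k, 3) vectors x_i - x_i'
        d = np.linalg.norm(diff, axis=-1)
        w = np.exp(-d ** 2 / lam)
        w = w / w.sum()                                # softmax weights w_{i,i',lam}
        num = np.linalg.norm((w[:, None] * diff).sum(0))   # norm of the weighted average
        rho[i] = num / (w * d).sum()                   # normalize by weighted distances
    return rho
```

Evaluating this for λ ∈ {1, 2, 5, 10, 30}, as in Appendix A, yields the 5 surface aware node features.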
D DATASETS
An overview of the datasets is given in Table 2. DB5.5 is obtained from https://zlab.umassmed.edu/benchmark/, while DIPS is downloaded from https://github.com/drorlab/DIPS. DIPS contains only bound structures and is thus currently suitable only for rigid docking; DB5.5 also includes unbound protein structures, which, however, mostly show rigid behavior (see Fig. 12).
E MORE EXPERIMENTAL DETAILS AND RESULTS
Baseline Failures. On the test sets, ATTRACT fails for '1N2C' in DB5.5, and for 'oi_4oip.pdb1_8', 'oi_4oip.pdb1_3' and 'p7_4p7s.pdb1_2' in DIPS. For such failure cases, we use the unbound input structure as the prediction when computing the metrics.
Hyperparameters. We perform a hyperparameter search over the choices listed in Table 3 and select the best hyperparameters for DB5.5 and DIPS separately, based on their corresponding validation sets.
Detailed Running Times. In addition to the main text, we show detailed running times of all methods in Table 4. Hardware specifications are as follows: ATTRACT was run on a 6-core Intel Core i7 2.2 GHz CPU; HDOCK was run on a single Intel Xeon Gold 6230 2.1 GHz CPU; EQUIDOCK was run on a single Intel Core i9-9880H 2.3 GHz CPU. CLUSPRO and PATCHDOCK were run manually using their respective web servers.
Plots for DB5.5. We show the corresponding plots for the DB5.5 results in Fig. 13.
Ablation Studies. To highlight the contributions of the different model components, we provide ablation studies in Table 5. One can note that, as expected, removing the pocket loss results in worse interface RMSD scores compared to removing other components.
Analysis of the Intersection Loss. We further analyze the intersection loss introduced in Eq. (15), with parameters γ = 10 and σ = 25 (chosen on the DB5 validation set). We show in Table 6 that this loss attains almost perfect values for the ground truth structures, and that it is important for softly constraining predicted proteins to be non-intersecting.
1. What is the focus of the paper regarding protein-protein complex docking?
2. What are the strengths and weaknesses of the proposed method, particularly its core contribution?
3. Do you have any concerns about the method's application, such as steric clashes or the role of buried residues?
4. How could the performance metrics be improved, and how does the method compare to others in terms of inference time and accuracy?
5. Are there any suggestions for further development or refinement of the proposed method?
Summary Of The Paper
In this paper, the authors propose a method for the "rigid body" docking of protein-protein complexes, i.e., complexes in which conformational changes within the protein structures are not allowed. This method, called Independent E(3)-Equivariant Graph Matching Networks (IEGMNs), finds the optimal rotation and translation to place the proteins such that the distances between residues in the binding site are minimized. A core feature of this method is SE(3) invariance, which means the optimal solution is invariant to rotations and translations of the two proteins. This is achieved in an elaborate manner employing several ideas and techniques:
- A k-NN graph is built for each protein using its CA coordinates, and a set of additional features are also extracted
- A message-passing neural network (MPNN) is used for graph matching
- Binding pocket residues or "keypoints" are identified
- A differentiable Kabsch model is used for superimposing the "keypoints"
- Optimal transport is used for the alignment of residues in the binding pocket
This method, as reported by the authors, runs very fast and can identify the protein complex efficiently.
Review
The authors have put a significant amount of time into designing, developing, and testing their proposed method, and it introduces and adapts several interesting ideas. In particular, the core contribution is their proposed transformation, which guarantees pairwise independent SE(3)-equivariance for the two sets of 3D coordinates. My comments are as follows:
- Since no conformational changes are allowed, the residues buried in the hydrophobic core of the proteins play no role in predicting the binding conformation. Calculating solvent-accessible surface area is one of the most well-studied problems in structural genomics. The authors could use one such method, e.g., one based on the Voronoi procedure [PMID: 12376381], to significantly reduce the problem size and render identifying keypoints easier.
- The authors have a loss term to avoid point cloud intersections, but there are no terms preventing steric clashes by enforcing the Van der Waals radii of proximal atoms. Have the authors checked for steric clashes in the binding pocket?
- The analysis of the performance metrics needs more elaboration. For example, the distribution of C/I-RMSDs could be plotted, and/or scatter plots could be added. Moreover, statistical tests could be performed to reliably show whether the proposed method outperforms the other two. The same applies to the inference times, where a box/violin plot might be a better option.
- Judging by the results in Table 1, one might infer that HDOCK performs better than IEGMN. It is true that IEGMN has significantly lower inference times, but in practice, experimentally determining protein structures is time-intensive (on the order of months) and costly, so run times on the order of minutes or even hours do not matter much.
- It is good to see that the authors have used two different RMSD definitions; however, the authors should consider looking at the CAPRI criteria (critical assessment of predicted interactions criteria) [PMID: 12784359] and standardize their evaluation method.
- The authors employ several ideas, and their individual contributions to the final performance are not clear. For example, what would happen if the "surface-aware node features" were not used?
Some minor notes:
- SE(3)-equivariance and SE(3)-invariance are used interchangeably.
- On p. 6, what does "normalized keypoints" mean, zero-mean?
ICLR
Title Independent SE(3)-Equivariant Models for End-to-End Rigid Protein Docking Abstract Protein complex formation is a central problem in biology, being involved in most of the cell’s processes, and essential for applications, e.g. drug design or protein engineering. We tackle rigid body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the individual unbound structures, assuming no conformational change within the proteins happens during binding. We design a novel pairwise-independent SE(3)-equivariant graph matching network to predict the rotation and translation to place one of the proteins at the right docked position relative to the second protein. We mathematically guarantee a basic principle: the predicted complex is always identical regardless of the initial locations and orientations of the two structures. Our model, named EQUIDOCK, approximates the binding pockets and predicts the docking poses using keypoint matching and alignment, achieved through optimal transport and a differentiable Kabsch algorithm. Empirically, we achieve significant running time improvements and often outperform existing docking software despite not relying on heavy candidate sampling, structure refinement, or templates. 1 N/A 1 INTRODUCTION protein Z protein Z protein Z-dependent inhibitor protein Z-dependent inhibitor a. b. PDB ID: 3F1S Figure 1: Different views of the 3D structure of a protein complex. a. Surface and b. cartoon view of protein Z and its inhibitor. In a recent breakthrough, ALPHAFOLD 2 (Jumper et al., 2021; Senior et al., 2020) provides a solution to a grand challenge in biology—inferring a protein’s three-dimensional structure from its amino acid sequence (Baek et al., 2021), following the dogma sequence determines structure. Besides their complex three-dimensional nature, proteins dynamically alter their function and structure in response to cellular signals, changes in the environment, or upon molecular docking. In par- ticular, protein interactions are involved in various biological processes including signal transduction, protein synthesis, DNA replication and repair. Molecular docking is key to understanding protein interactions’ mechanisms and effects, and, subsequently, to developing therapeutic interventions. We here address the problem of rigid body protein-protein docking which refers to computationally predicting the 3D structure of a protein-protein complex given the 3D structures of the two proteins in unbound state. Rigid body means no deformations occur within any protein during binding, which is a realistic assumption in many biological settings. Popular docking software (Chen et al., 2003; Venkatraman et al., 2009; De Vries et al., 2010; Torchala et al., 2013; Schindler et al., 2017; Sunny and Jayaraj, 2021) are typically computationally expensive, †Correspondence to: Octavian Ganea ([email protected]) and Yatao Bian ([email protected]). ∗Equal contribution. §Work done during an internship at Tencent AI Lab. 1Our code is publicly available: https://github.com/octavian-ganea/equidock_public. taking between minutes and hours to solve a single example pair, while not being guaranteed to find accurate complex structures. These methods largely follow the steps: i.) randomly sample a large number (e.g., millions) of candidate initial complex structures, ii.) employ a scoring function to rank the candidates, iii.) adjust and refine the top complex structures based on an energy model (e.g., force field). 
We here take a first step towards tackling these issues by using deep learning models for direct prediction of protein complex structures. Contributions. We design EQUIDOCK, a fast, end-to-end method for rigid body docking that directly predicts the SE(3) transformation to place one of the proteins (ligand) at the right location and orientation with respect to the second protein (receptor). Our method is based on the principle that the exact same complex structure should be predicted irrespectively of the initial 3D placements and roles of both constituents (see Fig. 2). We achieve this desideratum by incorporating the inductive biases of pairwise SE(3)–equivariance and commutativity, and deriving novel theoretical results for necessary and sufficient model constraints (see Section 3). Next, we create EQUIDOCK to satisfy these properties by design, being a combination of: i) a novel type of pairwise independent SE(3)-equivariant graph matching networks, ii) an attention-based keypoint selection algorithm that discovers representative points and aligns them with the binding pocket residues using optimal transport, and iii) a differentiable superimposition model to recover the optimal global rigid transformation. Unlike prior work, our method does not use heavy candidate sampling or ranking, templates, task-specific geometric or chemical hand-crafted features, or pre-computed meshes. This enables us to achieve plausible structures with a speed-up of 80-500x compared to popular docking software, offering a promising competitive alternative to current solutions for this problem. 2 RELATED WORK Geometric Deep Learning. Graph Neural Networks (GNNs) are becoming the de facto choice for learning with graph data (Bruna et al., 2013; Defferrard et al., 2016; Kipf and Welling, 2016; Gilmer et al., 2017; Xu et al., 2018; Li et al., 2019). Motivated by symmetries naturally occurring in different data types, architectures are tailored to explicitly incorporate such properties (Cohen and Welling, 2016a;b; Thomas et al., 2018; Fuchs et al., 2020; Finzi et al., 2020; Eismann et al., 2020; Satorras et al., 2021). GNNs are validated in a variety of tasks such as particle system dynamics or conformation-based energy estimation (Weiler and Cesa, 2019; Rezende et al., 2019). Euclidean Neural Networks (E(3)-NNs). However, plain GNNs and other deep learning methods do not understand data naturally lying in the 3D Euclidean space. For example, how should the output deterministically change with the input, e.g. when it is rotated ? The recent Euclidean neural networks address this problem, being designed from geometric first-principles. They make use of SE(3)- equivariant and invariant neural layers, thus avoiding expensive data augmentation strategies. Such constrained models ease optimization and have shown important improvements in biology or chemistry – e.g. for molecular structures (Fuchs et al., 2020; Hutchinson et al., 2020; Wu et al., 2021; Jumper et al., 2021; Ganea et al., 2021) and different types of 3D point clouds (Thomas et al., 2018). Different from prior work, we here derive constraints for pairs of 3D objects via pairwise independent SE(3)-equivariances, and design a principled approach for modeling rigid body docking. Protein Folding. 
Deep neural networks have been used to predict inter-residue contacts, distance and/or orientations (Adhikari and Cheng, 2018; Yang et al., 2020; Senior et al., 2020; Ju et al., 2021), that are subsequently transformed into additional constraints or differentiable energy terms for protein structure optimization. ALPHAFOLD 2 (Jumper et al., 2021) and Rosetta Fold (Baek et al., 2021) are state-of-the-art approaches, and directly predict protein structures from co-evolution information embedded in homologous sequences, using geometric deep learning and E(3)-NNs. Protein-Protein Docking and Interaction. Experimentally determining structures of protein complexes is often expensive and time-consuming, rendering a premium on computational methods (Vakser, 2014). Protein docking methods (Chen et al., 2003; Venkatraman et al., 2009; De Vries et al., 2010; Biesiada et al., 2011; Torchala et al., 2013; Schindler et al., 2017; Weng et al., 2019; Sunny and Jayaraj, 2021; Christoffer et al., 2021; Yan et al., 2020) typically run several steps: first, they sample thousands or millions of complex candidates; second, they use a scoring function for ranking (Moal et al., 2013; Basu and Wallner, 2016; Launay et al., 2020; Eismann et al., 2020); finally, top-ranked candidates undergo a structure refinement process using energy or geometric models (Verburgt and Kihara, 2021). Relevant to protein-protein interaction (PPI) is the task of protein interface prediction where GNNs have showed promise (Fout et al., 2017; Townshend et al., 2019; Liu et al., 2020; Xie and Xu, 2021; Dai and Bailey-Kellogg, 2021). Recently, ALPHAFOLD 2 and ROSETTAFOLD have been utilized as subroutines to improve PPIs from different aspects (Humphreys et al., 2021; Pei et al., 2021; Jovine), e.g., combining physics-based docking method CLUSPRO (Kozakov et al., 2017; Ghani et al., 2021), or using extended multiple-sequence alignments to predict the structure of heterodimeric protein complexes from the sequence information (Bryant et al., 2021). Concurrently to our work, Evans et al. (2021) extend ALPHAFOLD 2 to multiple chains during both training and inference. Drug-Target Interaction (DTI). DTI aims to compute drug-target binding poses and affinity, playing an essential role in understanding drugs’ mechanism of action. Prior methods (Wallach et al., 2015; Li et al., 2021) predict binding affinity from protein-ligand co-crystal structures, but such data is expensive to obtain experimentally. These models are typically based on heavy candidate sampling and ranking (Trott and Olson, 2010; Koes et al., 2013; McNutt et al., 2021; Bao et al., 2021), being tailored for small drug-like ligands and often assuming known binding pocket. Thus, they are not immediately applicable to our use case. In contrast, our rigid docking approach is generic and could be extended to accelerate DTI research as part of future work. 3 MATHEMATICAL CONSTRAINTS FOR RIGID BODY DOCKING We start by introducing the rigid body docking problem and derive the geometric constraints for enforcing same output complex prediction regardless of the initial unbound positions or roles (Fig. 2). Rigid Protein-Protein Docking – Problem Setup. We are given as input a pair of proteins forming a complex. They are (arbitrarily) denoted as the ligand and receptor, consisting of n and m residues, respectively. 
These proteins are represented in their bound (docked) state as 3D point clouds X∗1 ∈ R3×n,X∗2 ∈ R3×m, where each residue’s location is given by the coordinates of its corresponding α-carbon atom. In the unbound state, the docked ligand is randomly rotated and translated in space, resulting in a modified point cloud X1 ∈ R3×n. For simplicity and w.l.o.g., the receptor remains in its bound location X2 = X∗2. The task is to predict a rotation R ∈ SO(3) and a translation t ∈ R3 such that RX1 + t = X∗1, using as input the proteins and their unbound positions X1 and X2. Here, R = R(X1|X2) and t = t(X1|X2) are functions of the two proteins, where we omit residue identity or other protein information in this notation, for brevity. Note that we assume rigid backbone and side-chains for both proteins. We therefore do not tackle the more challenging problem of flexible docking, but our approach offers an important step towards it. KÉÏ ïËÏÊ PIRtbÉLÉÏ r.tt Ï HÉ ï ï PFRibÉLÉÏITE We desire that the predicted complex structure is independent of the initial locations and orientations of the two proteins, as well as of their roles – see Fig. 2. Formally, we wish to guarantee that: (R(Z1|Z2)Z1 + t(Z1|Z2))⊕ Z2 ≡ (R(X1|X2)X1 + t(X1|X2))⊕X2, (SE(3)-invariance) (R(X1|X2)X1 + t(X1|X2))⊕X2 ≡ X1 ⊕ (R(X2|X1)X2 + t(X2|X1)), (commutativity) ∀Q1,Q2 ∈ SO(3),∀g1,g2 ∈ R3,∀X1 ∈ R3×n,X2 ∈ R3×m, and Zl = QlXl + gl, l ∈ {1, 2}. (1) for any rotations Q1,Q2 and translations g1,g2, where ⊕ is concatenation along columns, and ≡ denotes identity after superimposition, i.e. zero Root-Mean-Square Deviation (RMSD) between the two 3D point sets after applying the Kabsch algorithm (Kabsch, 1976). An immediate question arises: How do the constraints in Eq. (1) translate into constraints for R(·|·) and t(·|·) ? The rotation R and translation t change in a systematic way when we apply SE(3) transformations or swap proteins’ roles. These properties restrict our class of functions as derived below. SE(3)-equivariance Constraints. If we apply any distinct SE(3) transformations on the unbound ligand X1 and receptor X2, i.e. we dock Q1X1 + g1 onto Q2X2 + g2, then the rotation matrix R(Q1X1 + g1|Q2X2 + g2) and translation vector t(Q1X1 + g1|Q2X2 + g2) can be derived from the original R(X1|X2) and t(X1|X2) assuming that we always do rotations first. In this case, R(Q1X1 + g1|Q2X2 + g2) can be decomposed into three rotations: i.) apply Q>1 to undo the rotation Q1 applied on X1, ii.) apply R(X1|X2), iii.) apply Q2 to rotate the docked ligand together with the receptor. This gives R(Q1X1 + g1|Q2X2 + g2) = Q2R(X1|X2)Q>1 , which in turn constraints the translation vector. We provide a formal statement and prove it in Appendix B.1: Proposition 1. For any Q1,Q2 ∈ SO(3),g1,g2 ∈ R3, SE(3)-invariance of the predicted docked complex defined by Eq. (1) is guaranteed iff R(Q1X1 + g1|Q2X2 + g2) = Q2R(X1|X2)Q>1 t(Q1X1 + g1|Q2X2 + g2) = Q2t(X1|X2)−Q2R(X1|X2)Q>1 g1 + g2. (2) As a direct consequence of this proposition, we have the following statement. Proposition 2. Any model satisfying Proposition 1 guarantees invariance of the predicted complex w.r.t. any SE(3) transformation on X1, and equivariance w.r.t. any SE(3) transformation on X2: R(Z1|X2)Z1 + t(Z1|X2) = R(X1|X2)X1 + t(X1|X2), where Z1 = Q1X1 + g1 R(X1|Z2)X1 + t(X1|Z2) = Q2 [R(X1|X2)X1 + t(X1|X2)] + g2, where Z2 = Q2X2 + g2 ∀Q1,Q2 ∈ SO(3),∀g1,g2 ∈ R3,∀X1 ∈ R3×n,∀X2 ∈ R3×m. Commutativity. Instead of docking X1 with respect to X2, we can also dock X2 with respect to X1. 
In this case, we require the final complex structures to be identical after superimposition, i.e., zero RMSD. This property is named commutativity and it is satisfied as follows (proof in Appendix B.2). Proposition 3. Commutativity as defined by Eq. (1) is guaranteed iff R(X2|X1) = R>(X1|X2); t(X2|X1) = −R>(X1|X2)t(X1|X2), (3) Point Permutation Invariance. We also enforce residue permutation invariance. Formally, both R(X1|X2) and t(X1|X2) should not depend on the order or columns of X1 and, resp., of X2. 4 EQUIDOCK MODEL Protein Representation. A protein is a sequence of amino acid residues that folds in a 3D structure. Each residue has a general structure with a side-chain specifying its type, allowing us to define a local frame and derive SE(3)-invariant features for any pair of residues —see Appendix A. We represent a protein as a graph G = (V, E), similar to Fout et al. (2017); Townshend et al. (2019); Liu et al. (2020). Each node i ∈ V represents one residue and has 3D coordinates xi ∈ R3 corresponding to the α-carbon atom’s location. Edges are given by a k-nearest-neighbor (k-NN) graph using Euclidean distance of the original 3D node coordinates. Overview of Our Approach. Our model is depicted in Fig. 3. We first build k-NN protein graphs G1 = (V1, E1) and G2 = (V2, E2). We then design SE(3)-invariant node features F1 ∈ Rd×n,F2 ∈ Rd×m and edge features {fj→i : ∀(i, j) ∈ E1 ∪ E2} (see Appendix A). Next, we apply several layers consisting of functions Φ that jointly transform node coordinates and features. Crucially, we guarantee, by design, pairwise independent SE(3)-equivariance for coordinate embeddings, and invariance for feature embeddings. This double constraint is formally defined: Given Z1,H1,Z2,H2 = Φ(X1,F1,X2,F2) we have Q1Z1 + g1,H1,Q2Z2 + g2,H2 = Φ(Q1X1 + g1,F1,Q2X2 + g2,F2), ∀Q1,Q2 ∈ SO(3),∀g1,g2 ∈ R3. (4) We implement Φ as a novel type of message-passing neural network (MPNN). We then use the output node coordinate and feature embeddings to compute R(X1|X2) and t(X1|X2). These functions depend on pairwise interactions between the two proteins modeled as cross-messages, but also incorporate the 3D structure in a pairwise-independent SE(3)-equivariant way to satisfy Eq. (1), Proposition 1 and Proposition 3. We discover keypoints from each protein based on a neural attention mechanism and softly guide them to represent the respective binding pocket locations via an optimal transport based auxiliary loss. Finally, we obtain the SE(3) transformation by superimposing the two keypoint sets via a differentiable version of the Kabsch algorithm. An additional soft-constraint discourages point cloud intersections. We now detail each of these model components. Independent E(3)-Equivariant Graph Matching Networks (IEGMNs). Our architecture for Φ satisfying Eq. (4) is called Independent E(3)-Equivariant Graph Matching Network (IEGMN) – see Fig. 3. It extends both Graph Matching Networks (GMN) (Li et al., 2019) and E(3)-Equivariant Graph Neural Networks (E(3)-GNN) (Satorras et al., 2021). IEGMNs perform node coordinate and feature embedding updates for an input pair of graphs G1 = (V1, E1), G2 = (V2, E2), and use inter- and intranode messages, as well as E(3)-equivariant coordinate updates. 
The l-th layer of an IEGMN transforms node latent/feature embeddings {h_i^(l)}_{i∈V1∪V2} and node coordinate embeddings {x_i^(l)}_{i∈V1∪V2} as

m_{j→i} = φ^e(h_i^(l), h_j^(l), exp(−‖x_i^(l) − x_j^(l)‖²/σ), f_{j→i}), ∀ e_{j→i} ∈ E1 ∪ E2 (5)
μ_{j→i} = a_{j→i} W h_j^(l), ∀ i ∈ V1, j ∈ V2 or i ∈ V2, j ∈ V1 (6)
m_i = (1/|N(i)|) Σ_{j∈N(i)} m_{j→i}, ∀ i ∈ V1 ∪ V2 (7)
μ_i = Σ_{j∈V2} μ_{j→i}, ∀ i ∈ V1, and μ_i = Σ_{j∈V1} μ_{j→i}, ∀ i ∈ V2 (8)
x_i^(l+1) = η x_i^(0) + (1 − η) x_i^(l) + Σ_{j∈N(i)} (x_i^(l) − x_j^(l)) φ^x(m_{j→i}), ∀ i ∈ V1 ∪ V2 (9)
h_i^(l+1) = (1 − β)·h_i^(l) + β·φ^h(h_i^(l), m_i, μ_i, f_i), ∀ i ∈ V1 ∪ V2, (10)

where N(i) are the neighbors of node i; φ^x is a real-valued (scalar) parametric function; W is a learnable matrix; φ^h and φ^e are parametric functions (MLPs) outputting a vector in R^d; and f_{j→i} and f_i are the original edge and node features (extracted SE(3)-invariantly from the residues). a_{j→i} is an attention-based coefficient with trainable shallow neural networks ψ^q and ψ^k:

a_{j→i} = exp(⟨ψ^q(h_i^(l)), ψ^k(h_j^(l))⟩) / Σ_{j′} exp(⟨ψ^q(h_i^(l)), ψ^k(h_{j′}^(l))⟩). (11)

Note that the parameters W, φ^x, φ^h, φ^e, ψ^q, ψ^k can be shared or different across IEGMN layers. The output of several IEGMN layers is then denoted as:

Z1 ∈ R^{3×n}, H1 ∈ R^{d×n}, Z2 ∈ R^{3×m}, H2 ∈ R^{d×m} = IEGMN(X1, F1, X2, F2). (12)

It is then straightforward to prove the following (see Appendix B.3):

Proposition 4. IEGMNs satisfy the pairwise independent SE(3)-equivariance property in Eq. (4).

Keypoints for Differentiable Protein Superimposition. Next, we use multi-head attention to obtain K points for each protein, Y1, Y2 ∈ R^{3×K}, which we name keypoints. We train them to become representative points for the binding pocket of the respective protein pair (softly enforced by an additional loss described later). If this held perfectly, then the superimposition of Y1 and Y2 would give the corresponding ground-truth superimposition of X1 and X2. Our model is:

y_{1k} := Σ_{i=1}^n α_i^k z_{1i}; y_{2k} := Σ_{j=1}^m β_j^k z_{2j},

where z_{1i} denotes the i-th column of matrix Z1, and α_i^k = softmax_i((1/√d) h_{1i}^T W′_k μ(φ(H2))) are attention scores (similarly defined for β_j^k), with W′_k ∈ R^{d×d} a parametric matrix (different for each attention head), φ a linear layer followed by a LeakyReLU non-linearity, and μ(·) the mean vector.

Differentiable Kabsch Model. We design the rotation and translation that docks protein 1 onto protein 2 to be the same transformation used to superimpose Y1 and Y2 (see Fig. 3). For this, we compute a differentiable version of the Kabsch algorithm (Kabsch, 1976) as follows. Let A = Y2 Y1^T ∈ R^{3×3}, computed using zero-mean keypoints. The singular value decomposition (SVD) is A = U2 S U1^T, where U2, U1 ∈ O(3). Finally, we define the differentiable functions

R(X1|X2; θ) = U2 diag(1, 1, d) U1^T, where d = sign(det(U2 U1^T)),
t(X1|X2; θ) = μ(Y2) − R(X1|X2; θ) μ(Y1), (13)

where μ(·) is the mean vector of a point cloud. It is straightforward to prove that this model satisfies all the equivariance properties in Eqs. (1) to (3). From a practical perspective, the gradient and backpropagation through the SVD operation were analyzed by Ionescu et al. (2015) and Papadopoulo and Lourakis (2000), and are implemented in automatic differentiation frameworks such as PyTorch.

MSE Loss. During training, we randomly decide which protein is the receptor (say protein 2), keep it in the docked position (i.e., X2 = X*2), predict the SE(3) transformation using Eq. (13) and use it to compute the final position of the ligand as X̃1 = R(X1|X2)X1 + t(X1|X2). The mean squared error (MSE) loss is then L_MSE = (1/n) Σ_{i=1}^n ‖x*_i − x̃_i‖².
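A minimal PyTorch sketch of the differentiable Kabsch step in Eq. (13); the function name and tensor shapes are our own, and we assume (as the paper does) that the K keypoint columns of Y1 and Y2 are in correspondence:

```python
import torch

def kabsch_transform(Y1, Y2):
    """Rotation/translation that superimposes keypoints Y1 onto Y2; Y1, Y2: (3, K)."""
    mu1 = Y1.mean(dim=1, keepdim=True)
    mu2 = Y2.mean(dim=1, keepdim=True)
    A = (Y2 - mu2) @ (Y1 - mu1).T                # 3x3 cross-covariance of zero-mean keypoints
    U2, S, U1t = torch.linalg.svd(A)             # A = U2 diag(S) U1t
    d = torch.sign(torch.linalg.det(U2 @ U1t))   # correct a possible reflection
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = U2 @ D @ U1t                             # proper rotation in SO(3)
    t = mu2 - R @ mu1
    return R, t
```

Gradients flow through the SVD, so the keypoint attention upstream can be trained end-to-end against the MSE loss above.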
Optimal Transport and Binding Pocket Keypoint Alignment. As stated before, we desire that Y1 and Y2 are representative points for the binding pocket location of the respective protein pair. However, this has to be encouraged explicitly, which we achieve using an additional loss. We first define the binding pocket point sets, taking inspiration from previous PPI work (Section 2). Given the residues' α-carbon locations of the bound (docked) structures, X*1 and X*2, we select all pairs of residues at less than τ Euclidean distance (τ = 8Å in our experiments). We assume these are all interacting residues. Denote these pairs as {(x*_{1s}, x*_{2s}), s ∈ 1, . . . , S}, where S is variable across data pairs. We compute the midpoints of these segments, denoted as P*1, P*2 ∈ R^{3×S}, where p*_{1s} = p*_{2s} = 0.5 · (x*_{1s} + x*_{2s}). We view P*1 and P*2 as binding pocket points. In the unbound state, these sets are randomly moved in space together with the respective protein residue coordinates X1 and X2. We denote them as P1, P2 ∈ R^{3×S}. For clarity, if X1 = QX*1 + g, then P1 = QP*1 + g.

We desire that Y1 is a representative set for the 3D set P1 (and, similarly, Y2 for P2). However, while at training time we know that every point p_{1s} corresponds to the point p_{2s} (and, similarly, y_{1k} aligns with y_{2k}, by assumption), we unfortunately do not know the actual alignment between points in Y_l and P_l, for every l ∈ {1, 2}. This can be recovered using an additional optimal transport loss:

L_OT = min_{T∈U(S,K)} ⟨T, C⟩, where C_{s,k} = ‖y_{1k} − p_{1s}‖² + ‖y_{2k} − p_{2s}‖², (14)

where U(S, K) is the set of S × K transport plans with uniform marginals. The optimal transport plan is computed as an Earth Mover's Distance using the POT library (Flamary et al., 2021); it is kept fixed during back-propagation and optimization, so that only the cost matrix is trained. Note that our approach assumes that y_{1k} corresponds to y_{2k}, for every k ∈ {1, . . . , K}. Intuitively, each attention head k will identify a specific geometric/chemical local surface feature of protein 1 via y_{1k}, and match its complementary feature of protein 2 via y_{2k}.

Avoiding Point Cloud Intersection. In practice, our model does not enforce a useful inductive bias, namely that proteins forming complexes never "intersect" each other. To address this issue, we first state a notion of the "interior" of a protein point cloud. Following previous work (Sverrisson et al., 2021; Venkatraman et al., 2009), we define the surface of a protein point cloud X ∈ R^{3×n} as {x ∈ R^3 : G(x) = γ}, where G(x) = −σ ln(Σ_{i=1}^n exp(−‖x − x_i‖²/σ)). The parameters σ and γ are chosen such that there exist no "holes" inside a protein (we found γ = 10, σ = 25 to work well, see Appendix E). As a consequence, the interior of the protein is given by {x ∈ R^3 : G(x) < γ}. Then, the condition for a non-intersecting ligand and receptor can be written as G1(x_{2j}) > γ, ∀ j ∈ 1, . . . , m and G2(x_{1i}) > γ, ∀ i ∈ 1, . . . , n. As a loss function, this becomes

L_NI = (1/n) Σ_{i=1}^n max(0, γ − G2(x_{1i})) + (1/m) Σ_{j=1}^m max(0, γ − G1(x_{2j})). (15)
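The paper states that the plan in Eq. (14) is computed as an exact Earth Mover's Distance with the POT library and detached during back-propagation, so gradients flow only through the cost matrix. A sketch under those assumptions (shapes and the function name are ours):

```python
import numpy as np
import ot  # POT: Python Optimal Transport
import torch

def ot_keypoint_loss(Y1, Y2, P1, P2):
    """Eq. (14): align K predicted keypoints with S binding-pocket points.
    Y1, Y2: (3, K) torch tensors; P1, P2: (3, S) torch tensors."""
    C = torch.cdist(P1.T, Y1.T) ** 2 + torch.cdist(P2.T, Y2.T) ** 2   # (S, K) cost matrix
    S, K = C.shape
    a, b = np.full(S, 1.0 / S), np.full(K, 1.0 / K)                   # uniform marginals
    T = ot.emd(a, b, C.detach().cpu().numpy().astype(np.float64))     # exact EMD plan
    T = torch.as_tensor(T, dtype=C.dtype, device=C.device)            # fixed during backprop
    return (T * C).sum()
```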
Surface Aware Node Features. Surface contact modeling is important for protein docking. We design a novel surface feature type that differentiates residues close to the surface of the protein from those in the interior. Similar to Sverrisson et al. (2021), we prioritize efficiency and avoid pre-computing meshes, but show that our new feature is a good proxy for a residue's depth (i.e., its distance to the protein surface). Intuitively, residues in the core of the protein are locally surrounded in all directions by other residues. This is not true for residues on the surface; e.g., neighbors lie in a half-space if the surface is locally flat. Building on this intuition, for each node (residue) i in the k-NN protein graph, we compute the norm of the weighted average of its neighbor forces, which can be interpreted as the normalized gradient of the surface function G(x). This SE(3)-invariant feature is

ρ_i(x_i; λ) = ‖Σ_{i′∈N_i} w_{i,i′,λ} (x_i − x_{i′})‖ / Σ_{i′∈N_i} w_{i,i′,λ} ‖x_i − x_{i′}‖, where w_{i,i′,λ} = exp(−‖x_i − x_{i′}‖²/λ) / Σ_{j∈N_i} exp(−‖x_i − x_j‖²/λ). (16)

Intuitively, as depicted in Fig. 8, residues in the interior of the protein have values close to 0 since they are surrounded by vectors from all directions that cancel out, while residues near the surface have neighbors only in a narrower cone, with aperture depending on the local curvature of the surface. We show in Appendix C that this feature correlates well with more expensive residue-depth estimation methods, e.g., those based on MSMS, thus offering a computationally appealing alternative. We also derive an estimate of this feature for large dense point clouds based on the local surface angle.

5 EXPERIMENTS

Datasets. We leverage the following datasets: Docking Benchmark 5.5 (DB5.5) (Vreven et al., 2015) and the Database of Interacting Protein Structures (DIPS) (Townshend et al., 2019). DB5.5 is a gold-standard dataset in terms of data quality, but contains only 253 structures. DIPS is a larger protein complex structures dataset mined from the Protein Data Bank (Berman et al., 2000) and tailored for rigid body docking. Dataset information is given in Appendix D. We filter DIPS to keep only proteins with at most 10K atoms. Datasets are then randomly partitioned in train/val/test splits of sizes 203/25/25 (DB5.5) and 39,937/974/965 (DIPS). For DIPS, the split is based on protein family, in order to separate similar proteins. For the final evaluation in Table 1, we use the full DB5.5 test set, and randomly sample 100 pairs from different protein families from the DIPS test set.

Baselines. We compare our EQUIDOCK method with popular state-of-the-art docking software: CLUSPRO (PIPER) (Desta et al., 2020; Kozakov et al., 2017), ATTRACT (Schindler et al., 2017; de Vries et al., 2015), PATCHDOCK (Mashiach et al., 2010; Schneidman-Duhovny et al., 2005), and HDOCK (Yan et al., 2020; 2017b;a; Huang and Zou, 2014; 2008). These baselines provide user-friendly local packages suitable for automatic experiments, or web servers for manual submissions (ClusPro: https://cluspro.bu.edu/; Attract: www.attract.ph.tum.de/services/ATTRACT/ATTRACT.vdi.gz; PatchDock: https://bioinfo3d.cs.tau.ac.il/PatchDock/; HDOCK: http://huanglab.phys.hust.edu.cn/software/HDOCK/).

Evaluation Metrics. To measure prediction quality, we report Complex Root Mean Square Deviation (C-RMSD) and Interface Root Mean Square Deviation (I-RMSD), defined below. Given the ground-truth and predicted complex structures, Z* ∈ R^{3×(n+m)} and Z ∈ R^{3×(n+m)}, we first superimpose them using the Kabsch algorithm (Kabsch, 1976), and then compute C-RMSD = √((1/(n+m)) ‖Z* − Z‖²_F). We compute I-RMSD similarly, but using only the coordinates of the interface residues with distance less than 8Å to the other protein's residues. For a fair comparison among baselines, we use only the α-carbon coordinates to compute both metrics.
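For reference, a small sketch of the C-RMSD computation (our own function name; SciPy's align_vectors performs the Kabsch fit):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def complex_rmsd(Z_true, Z_pred):
    """Superimpose the predicted complex on the ground truth, then take the RMSD.
    Z_true, Z_pred: (N, 3) arrays over all n+m alpha-carbons."""
    Zt = Z_true - Z_true.mean(axis=0)
    Zp = Z_pred - Z_pred.mean(axis=0)
    R, _ = Rotation.align_vectors(Zt, Zp)   # Kabsch: best rotation of Zp onto Zt
    Zp = R.apply(Zp)
    return np.sqrt(((Zt - Zp) ** 2).sum() / len(Zt))
```

I-RMSD would use the same routine restricted to the interface residues (those within 8Å of the other protein).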
Training Details. We train our models on the training part of DIPS first, using Adam (Kingma and Ba, 2014) with learning rate 2e-4 and early stopping with a patience of 30 epochs. We update the best validation model only when it achieves a score of less than 98% of the previous best validation score, where the score is the median Ligand RMSD on the full DIPS validation set. The best DIPS-validated model is then tested on the DIPS test set. For DB5.5, we fine-tune the DIPS pre-trained model on the DB5.5 training set using learning rate 1e-4 and early stopping with a patience of 150 epochs. The best DB5.5-validated model is finally tested on the DB5.5 test set. During training, we randomly assign the roles of ligand and receptor. Also, during both training and testing, we randomly rotate and translate the ligand in space (even though our model is invariant to this operation), for all baselines.

Complex Prediction Results. Results are shown in Table 1, Fig. 4 and Appendix E. We note that our method is competitive and often outperforms the baselines. However, we do not use heavy candidate sampling and re-ranking, we do not rely on task-specific hand-crafted features, and we currently do not perform structure fine-tuning, aiming instead to predict the SE(3) ligand transformation in a single direct shot. Moreover, we note that some of the baselines might have used part of our test set to validate their models, for example to learn surface templates; thus, their reported scores might be optimistic. Notably, the HDOCK score function was validated on DB4, which overlaps with DB5.5. A more appropriate comparison would require us to re-build these baselines without information from our test sets, a task that is currently not possible without open-source implementations.

Computational Efficiency. We show inference times in Fig. 5 and Table 4. Note that EQUIDOCK is between 80 and 500 times faster than the baselines. This is especially important for intensive screening applications that aim to scan over vast search spaces, e.g., for drug discovery. In addition, it is also relevant for de novo design of binding proteins (e.g., antibodies (Jin et al., 2021)) or for use cases where protein docking models are just one component of significantly larger end-to-end architectures targeting more involved biological scenarios, e.g., representing a drug's mechanism of action or modeling cellular processes with a single model as opposed to a multi-pipeline architecture.

Visualization. We show in Fig. 6 a successful example of a test DIPS protein pair for which our model significantly outperforms all baselines.

6 CONCLUSION

We have presented an extremely fast, end-to-end rigid protein docking approach that does not rely on candidate sampling, templates, task-specific features or pre-computed meshes. Our method smartly incorporates useful rigid protein docking priors, including commutativity and pairwise-independent SE(3)-equivariance, thus avoiding the computational burden of data augmentation. We look forward to incorporating more domain knowledge into EQUIDOCK and extending it to flexible docking and docking molecular dynamics, as well as adapting it to other related tasks such as drug binding prediction. In the long term, we envision that fast and accurate deep learning models will allow us to tackle more complex and involved biological scenarios, for example to model the mechanism of action of various drugs or to design de novo binding proteins and drugs for specific targets (e.g., for antibody generation).
Last, we hope that our architecture can inspire the design of models for other types of biological 3D interactions.

Limitations. First, our presented model does not incorporate protein flexibility, which is necessary for various protein families, e.g., antibodies. Unfortunately, both the DB5 and DIPS datasets are biased towards rigid body docking. Second, we only prevent steric clashes using a soft constraint (Eq. (15)), which has limitations (see Table 6). Future extensions would hard-constrain the model to prevent such artifacts.

ACKNOWLEDGEMENTS

The authors thank Hannes Stärk, Gabriele Corso, Patrick Walters, Tian Xie, Xiang Fu, Jacob Stern, Jason Yim, Lewis Martin, Jeremy Wohlwend, Jiaxiang Wu, Wei Liu, and Ding Xue for insightful and helpful discussions. OEG is funded by the Machine Learning for Pharmaceutical Discovery and Synthesis (MLPDS) consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging (DOMANE) threats program, and the DARPA Accelerated Molecular Discovery program. This publication was created as part of NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation. RB and TJ also acknowledge support from the NSF Expeditions grant (award 1918839): Collaborative Research: Understanding the World Through Code.

Appendix contents: A Representing Proteins as Graphs; B Proofs of the Main Propositions (B.1 Proof of Proposition 1, B.2 Proof of Proposition 3, B.3 Proof of Proposition 4); C Surface Features; D Datasets; E More Experimental Details and Results.

A REPRESENTING PROTEINS AS GRAPHS

A protein is comprised of amino acid residues. The structure of an amino acid residue is shown in Fig. 7. Generally, an amino acid residue contains an amino group (-NH-), an α-carbon atom and a carboxyl group (-CO-), along with a side chain (R) connected to the α-carbon atom. The side chain (R) is specific to each type of amino acid residue. We work at the residue level (our approach can be extended to the atom level as well). A protein is represented by a set of nodes, where each node is an amino acid residue in the protein. Each node i has a 3D coordinate x_i ∈ R^3, which is the 3D coordinate of the α-carbon atom of the residue. The neighborhood of a node is the set of k (k = 10 in our experiments) nearest nodes, where the distance is the Euclidean distance between 3D coordinates. The node feature is a one-dimensional indicator (one-hot encoding) of the type of amino acid residue; this indicator is passed into an embedding layer.

Local Coordinate System. Similar to Ingraham et al. (2019) and Jumper et al. (2021), we introduce a local coordinate system for each residue, which encodes the orientation of the residue. Based on this, we can further design SE(3)-invariant edge features. As shown in Fig. 7, for a residue i, we denote the unit vector pointing from the α-carbon atom to the nitrogen atom as u_i. We denote the unit vector pointing from the α-carbon atom to the carbon atom of the carboxyl group (-CO-) as t_i. u_i and t_i together define a plane, whose normal is n_i = (u_i × t_i) / ‖u_i × t_i‖. Finally, we define v_i = n_i × u_i. Then n_i, u_i and v_i together form the basis of residue i's local coordinate system; together they encode the orientation of residue i.
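A small NumPy sketch of this frame construction (the function name is ours; inputs are the three backbone atom coordinates of one residue):

```python
import numpy as np

def residue_frame(ca, n_atom, c_atom):
    """Local orthonormal frame (n, u, v) of a residue.
    ca, n_atom, c_atom: (3,) coordinates of the alpha-carbon, backbone N, and carboxyl C."""
    u = n_atom - ca; u /= np.linalg.norm(u)      # CA -> N direction
    t = c_atom - ca; t /= np.linalg.norm(t)      # CA -> C direction
    n = np.cross(u, t); n /= np.linalg.norm(n)   # normal of the (u, t) plane
    v = np.cross(n, u)                           # completes the right-handed basis
    return n, u, v
```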
We now introduce the edge features of an edge j → i ∈ E. These features describe the relative position of j with respect to i, the relative orientation of j with respect to i, and the distance between j and i.

Relative Position Edge Features. First, the edge features p_{j→i} describe the relative position of j with respect to i:
p_{j→i} = [n_i, u_i, v_i]^T (x_j − x_i).

Relative Orientation Edge Features. As mentioned above, each residue has an orientation which carries information. The edge features q_{j→i}, k_{j→i} and t_{j→i} describe the relative orientation of j with respect to i:
q_{j→i} = [n_i, u_i, v_i]^T n_j, k_{j→i} = [n_i, u_i, v_i]^T u_j, t_{j→i} = [n_i, u_i, v_i]^T v_j.

Distance-Based Edge Features. Distance also carries information. We use radial basis functions of the distance as edge features:
f_{j→i,r} = exp(−‖x_j − x_i‖² / (2σ_r²)), r = 1, 2, . . . , R,
where R and the scale parameters {σ_r}_{1≤r≤R} are hyperparameters. In experiments, the set of scale parameters we use is {1.5^x | x = 0, 1, 2, . . . , 14}, so for each edge there are 15 distance-based edge features.

Surface Aware Node Features. We additionally compute 5 surface aware node features defined in Eq. (16), using λ ∈ {1., 2., 5., 10., 30.}.

B PROOFS OF THE MAIN PROPOSITIONS

B.1 PROOF OF PROPOSITION 1.

Proof. Denote the predicted ligand position by R(X1|X2)X1 + t(X1|X2) = X̃1. Assume first that SE(3)-invariance of the predicted docked complex defined by Eq. (1) is satisfied. Then the transformation to dock Q1X1 + g1 with respect to Q2X2 + g2 is the same as the transformation that changes Q1X1 + g1 into Q2X̃1 + g2. We use the notation R^T(X1|X2) = (R(X1|X2))^T. Then we have the following derivation steps:

R(X1|X2)X1 + t(X1|X2) = X̃1
X1 + R^T(X1|X2) t(X1|X2) = R^T(X1|X2) X̃1
X1 + R^T(X1|X2) t(X1|X2) = R^T(X1|X2) Q2^T (Q2X̃1 + g2 − g2)
X1 + R^T(X1|X2) t(X1|X2) = R^T(X1|X2) Q2^T (Q2X̃1 + g2) − R^T(X1|X2) Q2^T g2
X1 + R^T(X1|X2) t(X1|X2) + R^T(X1|X2) Q2^T g2 = R^T(X1|X2) Q2^T (Q2X̃1 + g2)
Q1^T (Q1X1 + g1 − g1) + R^T(X1|X2) (t(X1|X2) + Q2^T g2) = R^T(X1|X2) Q2^T (Q2X̃1 + g2)
Q1^T (Q1X1 + g1) − Q1^T g1 + R^T(X1|X2) (t(X1|X2) + Q2^T g2) = R^T(X1|X2) Q2^T (Q2X̃1 + g2)
R(X1|X2) Q1^T (Q1X1 + g1) − R(X1|X2) Q1^T g1 + t(X1|X2) + Q2^T g2 = Q2^T (Q2X̃1 + g2)
Q2 R(X1|X2) Q1^T (Q1X1 + g1) − Q2 R(X1|X2) Q1^T g1 + Q2 t(X1|X2) + g2 = Q2X̃1 + g2.

From the last equation above, one reads off the transformation of Q1X1 + g1 into Q2X̃1 + g2, which is, by definition of the functions R and t, the same as the transformation to dock Q1X1 + g1 with respect to Q2X2 + g2. This transformation is

R(Q1X1 + g1 | Q2X2 + g2) = Q2 R(X1|X2) Q1^T
t(Q1X1 + g1 | Q2X2 + g2) = Q2 t(X1|X2) − Q2 R(X1|X2) Q1^T g1 + g2,

which concludes the proof. Conversely, assuming the constraints in Eq. (2) hold, we derive that Q1X1 + g1 is transformed into Q2X̃1 + g2, and it is then trivial to check that this satisfies SE(3)-invariance of the predicted docked complex defined by Eq. (1).

B.2 PROOF OF PROPOSITION 3.

Proof. We use the notation R^T(X1|X2) := (R(X1|X2))^T. As in Appendix B.1, we denote R(X1|X2)X1 + t(X1|X2) = X̃1. Then the transformation to dock X2 with respect to X1 is the same as the transformation that changes X̃1 back into X1, which is derived as:

R(X1|X2)X1 + t(X1|X2) = X̃1
X1 + R^T(X1|X2) t(X1|X2) = R^T(X1|X2) X̃1
X1 = R^T(X1|X2) X̃1 − R^T(X1|X2) t(X1|X2).

From the last equation above, we read off the transformation that changes X̃1 back into X1, which is the same as the transformation to dock X2 with respect to X1.

B.3 PROOF OF PROPOSITION 4.

Proof. Let X1^(l+1), H1^(l+1), X2^(l+1), H2^(l+1) = IEGMN(X1^(l), H1^(l), X2^(l), H2^(l)) be the output of an IEGMN layer.
Then, for any matrices Q1, Q2 ∈ SO(3) and any translation vectors g1, g2 ∈ R^3, we want to prove that IEGMNs satisfy the pairwise independent SE(3)-equivariance property:

Q1X1^(l+1) + g1, H1^(l+1), Q2X2^(l+1) + g2, H2^(l+1) = IEGMN(Q1X1^(l) + g1, H1^(l), Q2X2^(l) + g2, H2^(l)),

where each column of X1^(l) ∈ R^{3×n}, H1^(l) ∈ R^{d×n}, X2^(l) ∈ R^{3×m} and H2^(l) ∈ R^{d×m} represents an individual node's coordinate embedding or feature embedding. We first note that the equations of our proposed IEGMN layer that compute the messages m_{j→i}, μ_{j→i}, m_i and μ_i are SE(3)-invariant. Indeed, they depend on the initial features, which are SE(3)-invariant by design, on the current latent node embeddings {h_i^(l)}_{i∈V1∪V2}, and on the Euclidean distances between the current node coordinates {x_i^(l)}_{i∈V1∪V2}. Thus, the equation that computes the new latent node embeddings h_i^(l+1) is also SE(3)-invariant. Last, the equation that updates the coordinates x_i^(l+1) is SE(3)-equivariant with respect to the 3D coordinates of nodes from the same graph as i, but SE(3)-invariant with respect to the 3D coordinates of nodes from the other graph, since it only uses invariant transformations of the latter.

C SURFACE FEATURES

Visualization. We further discuss the new surface features introduced in Eq. (16). We first visualize their design intuition in Fig. 8. A synthetic experiment is shown in Fig. 9.

Correlation with MSMS features. Next, we analyze how accurate these features are compared to established residue depth estimation methods, e.g., those based on the MSMS software (Sanner et al., 1996). We plot the Spearman rank-order correlation of the two methods in Fig. 10. We observe a concentrated distribution with a mean of 0.68 and a median of 0.70, suggesting a strong correlation with the MSMS depth estimation.

Closed form expression. Finally, we prove that for points close to the protein surface and surrounded by (infinitely) many equally-distanced and equally-spaced points, one can derive a closed-form expression for the surface features defined in Eq. (16). See Fig. 11. We work in 2 dimensions, but the extension to 3 dimensions is straightforward. Assume that the local surface at point x_i has angle α. Further, assume that x_i is surrounded by N equally-distanced and equally-spaced points denoted by x_{i′}. Then all w_{i,i′,λ} are identical, and the summation vector in the numerator of Eq. (16) will only have non-zero components along the direction that bisects the surface angle, as the other components cancel out. Then, in the limit N → ∞, we derive the closed-form expression

ρ_i(x_i; λ) = (1/N) ‖Σ_{i′∈N_i} (x_i − x_{i′}) / ‖x_i − x_{i′}‖‖ = (2/N) Σ_{j=0}^{N/2} cos(jα/N) ≈_{N→∞} (2/α) ∫_0^{α/2} cos(θ) dθ = 2 sin(α/2) / α. (17)

(A numerical check of this limit is sketched below, after the baseline failure details.)

D DATASETS

An overview of the datasets is given in Table 2. DB5.5 is obtained from https://zlab.umassmed.edu/benchmark/, while DIPS is downloaded from https://github.com/drorlab/DIPS. While DIPS contains only the bound structures, thus currently being suitable only for rigid docking, DB5.5 also includes unbound protein structures, which, however, mostly show rigid structures (see Fig. 12).

E MORE EXPERIMENTAL DETAILS AND RESULTS

Baseline Failures. On the test sets, ATTRACT fails for '1N2C' in DB5.5, and for 'oi_4oip.pdb1_8', 'oi_4oip.pdb1_3' and 'p7_4p7s.pdb1_2' in DIPS. For such failure cases, we use the unbound input structure as the prediction when computing metrics.
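As referenced in Appendix C above, here is a quick numerical check of the N → ∞ limit in Eq. (17); the toy setup is ours. Unit vectors spread uniformly over a cone of aperture α average to a vector of norm 2 sin(α/2)/α:

```python
import numpy as np

alpha, N = np.pi / 2, 100_000
angles = np.linspace(-alpha / 2, alpha / 2, N)             # neighbor directions around the bisector
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 2-D unit vectors
rho = np.linalg.norm(dirs.mean(axis=0))
print(rho, 2 * np.sin(alpha / 2) / alpha)                  # both ~0.9003 for alpha = pi/2
```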
Hyperparameters. We perform a hyperparameter search over the choices listed in Table 3 and select the best hyperparameters for DB5.5 and DIPS respectively, based on their corresponding validation sets.

Detailed Running Times. In addition to the main text, we show in Table 4 detailed running times of all methods. Hardware specifications are as follows: ATTRACT was run on a 6-core Intel Core i7 2.2 GHz CPU; HDOCK was run on a single Intel Xeon Gold 6230 2.1 GHz CPU; EQUIDOCK was run on a single Intel Core i9-9880H 2.3 GHz CPU. CLUSPRO and PATCHDOCK were run manually using their respective web servers.

Plots for DB5.5. We show the corresponding plots for the DB5.5 results in Fig. 13.

Ablation Studies. To highlight the contributions of different model components, we provide ablation studies in Table 5. One can note that, as expected, removing the pocket loss results in lower interface RMSD scores compared to removing other components.

Analysis of the Intersection Loss. We further analyze the intersection loss introduced in Eq. (15), with parameters γ = 10 and σ = 25 (chosen on the DB5 validation set). We show in Table 6 that this loss takes almost perfect values on the ground-truth structures, making it a suitable soft constraint for discouraging intersecting predicted proteins.
1. What is the main contribution of the paper regarding protein docking?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any questions about the proof of SE(3)-invariance and commutativity?
4. How does the reviewer assess the performance and runtime of the proposed method?
5. What are the limitations of the method regarding downstream tasks and hard constraints?
6. Is there anything unclear regarding the choice of keypoints or the absence of citations in the proofs section?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes an end-to-end deep learning architecture to model the rigid body protein docking problem. By incorporating the inductive biases of SE(3)-invariance of the final docking position and commutativity, the proposed method avoids millions of sampling steps and achieves competitive performance at much faster speed. In addition, the authors discover and align keypoints with an attention-based selection algorithm and use optimal transport to predict the binding pocket location based on those selected points. The main contribution of this paper is the combination of the novel graph matching networks and the keypoint selection algorithm to predict the 3D position of the docked model. Results are shown on the tasks of 1) protein docking complex prediction and 2) runtime.

Review
Pros:
- The whole paper is constructed and written very well. The authors nicely show the proofs of SE(3)-invariance and commutativity for the rigid body docking problem.
- Following the invariance property, the paper introduces a novel graph matching network and avoids the sampling step based on it.
- Using a soft loss function avoids point cloud intersection.

Cons:
- As much as we appreciated the approach presented here, the performance just seems unacceptable compared to other methods. The authors use two baselines which were developed 4 years ago. The reason they didn't use other methods (no local packages) is somewhat problematic too. Still, their performance is much worse than one of the methods, and hypothesizing that the test data was used to train those models does not resolve this issue.
- Their runtime is much faster. However, HDOCK, which is the best-performing model among those three, uses a bearable amount of time and achieves much better performance. The three models are run on different hardware, which makes the runtime comparison problematic.
- Lack of result analysis. It may have been helpful to show that the proposed method can provide comparable results on more downstream tasks. Even though its rigid docking prediction is not good enough, maybe the prediction has other properties which can help from other perspectives.
- We also noted that the method uses soft constraints to discourage point cloud intersections. However, the authors do not show/discuss whether this really prevents intersections from happening, or why they do not use hard constraints directly (stability during training?).

Other Comments:
- It is not clear how the number of keypoints K was chosen in the "Keypoints for Differentiable Protein Superimposition" section.
- We also noticed that the whole section of proofs for the propositions lacks citations, yet the proofs are not claimed as novel at the beginning. Are they part of the contributions or not? How do these parts relate to previous works?

Update: The improvements implemented in the revised manuscript and the clarifications supplied by the authors alleviated many of the above concerns, and we therefore updated the evaluation to reflect this.
ICLR
Title
Where and when to look? Spatial-temporal attention for action recognition in videos

Abstract
Inspired by the observation that humans are able to process videos efficiently by only paying attention where and when it is needed, we propose a novel spatial-temporal attention mechanism for video-based action recognition. For spatial attention, we learn a saliency mask using only convolutional layers to allow the model to focus on the most salient parts of the feature maps. For temporal attention, we employ a convolutional LSTM based attention mechanism to identify the most relevant frames of an input video. Further, we propose a set of regularizers that ensure that our attention mechanism attends to coherent regions in space and time. Our model not only effectively improves video action recognition accuracy, but can also localize discriminative regions both spatially and temporally, despite being trained in a weakly-supervised manner with only classification labels (no spatial bounding box labels and no temporal frame labels). We evaluate our proposed approach on several public video action recognition datasets with ablation studies. Furthermore, we quantitatively and qualitatively evaluate our model's ability to localize discriminative regions spatially and critical frames temporally. Experimental results demonstrate the efficacy of our approach, showing superior or comparable accuracy to state-of-the-art methods with the same input.

1 INTRODUCTION
An important property of human perception is that one does not need to process a whole scene in its entirety at once. Instead, humans focus attention selectively on parts of the visual space to acquire information where and when it is needed, and combine information from different fixations over time to build up an internal representation of the scene (Rensink, 2000), which can then be used for interpretation or decision making. In computer vision and natural language processing, over the last couple of years, attention models have proved similarly important, particularly for tasks where interpretation or explanation requires only a small portion of the image or video. Examples include visual question answering (Lu et al., 2016; Xu & Saenko, 2016; Xiong et al., 2016), activity recognition (Sharma et al., 2015; Girdhar & Ramanan, 2017; Li et al., 2018b), and neural machine translation (Bahdanau et al., 2015). These models have also provided a level of interpretability, by visualizing regions selected or attended over for a particular task or decision. In particular, for video action classification, a proper attention model can help answer the question of where and when the model needs to look at the image evidence to draw a classification decision. It intuitively explains which part the model attends to when making a particular decision, which is very helpful in real applications, e.g., medical AI systems or self-driving cars. In this paper, we propose a novel spatio-temporal attention mechanism that is designed to address these challenges. Our attention mechanism is efficient, due to its space- and time-separability, and yet flexible enough to enable encoding of effective regularizers (or priors). As such, our attention mechanism consists of spatial and temporal components, shown in Fig. 1. The spatial attention component, which attenuates frame-wise CNN image features, consists of a saliency mask, regularized to be discriminative and spatially smooth.
The temporal component consists of a uni-modal soft attention mechanism that aggregates information over the nearby attenuated frame features before passing it into a Convolutional LSTM for class prediction.

Contributions: In summary, the main contributions of this work are: (1) We introduce a simple yet effective spatial-temporal attention for video action recognition, which consists of a saliency mask for spatial attention learned by ConvNets and temporal attention learned by a convolutional LSTM. (2) We introduce three different regularizers, two for the spatial and one for the temporal attention component, to improve the performance and interpretability of our model. (3) We demonstrate the efficacy of our model for video action recognition on three public datasets and explore the importance of our modeling choices through ablation experiments. (4) Finally, we qualitatively and quantitatively show that our spatio-temporal attention is able to localize discriminative regions and important frames, despite being trained in a purely weakly-supervised manner with only classification labels.

2 RELATED WORK

2.1 NETWORK INTERPRETATION
Various methods have been proposed to explain neural networks (Zeiler & Fergus, 2014; Springenberg et al., 2014; Mahendran & Vedaldi, 2016; Zhou et al., 2016; Zhang et al., 2016; Simonyan et al., 2013; Ramprasaath et al., 2016; Ribeiro et al., 2016; 2018; Chang et al., 2018) in various ways, including visualizing the gradients, perturbing the inputs, and bridging relations with other well-studied systems. Visual attention is also one way of explaining which part of the image is responsible for the network's decision (Li et al., 2018a; Jetley et al., 2018). Beyond explanation, Li et al. (2018a) build an end-to-end model that provides supervision directly on these explanations, specifically on the network's attention.

2.2 VISUAL ATTENTION FOR VIDEO ACTION RECOGNITION
For video action recognition, visualizing which part of the frame and which frame of the video sequence the model attends to provides valuable insight into the model's behavior. Sharma et al. (2015) develop an attention-driven LSTM that highlights important spatial locations for action recognition. Girdhar & Ramanan (2017) introduce an attention mechanism based on a derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods. However, these works focus only on the crucial spatial locations of each image, without considering temporal relations among different frames in a video sequence. To alleviate this shortcoming, visual attention has been incorporated into the motion stream (Wang et al., 2016b; Li et al., 2018b; Du et al., 2018). However, the motion stream only employs optical flow frames generated from consecutive frames, and cannot consider long-term temporal relations among different frames in a video sequence. Moreover, the motion stream needs additional optical flow frames as input, which imposes a burden due to the additional optical flow extraction, storage and computation; this is especially severe for large datasets. Torabi & Sigal (2017) propose an attention-based LSTM model to highlight frames in videos, but spatial information is not used for temporal attention. An end-to-end spatial and temporal attention model is proposed in Song et al. (2017) for human action recognition, but additional skeleton data is needed.
3 SPATIAL-TEMPORAL ATTENTION MECHANISM

Our overall model is a Recurrent Neural Network (RNN) that aggregates frame-based convolutional features across the video to make action predictions, as shown in Fig. 1. The convolutional features are attended over both spatially, in each frame, and subsequently temporally. Both attentions are soft, meaning that the effective final representation at time t of the RNN, used to make the prediction, is a spatio-temporally weighted aggregation of convolutional features across the video, along with the past hidden state from t − 1. The core novelty lies in the overall form of our attention mechanism and in the additional terms of the loss function that induce sensible spatial and temporal attention priors.

3.1 CONVOLUTIONAL FRAME FEATURES
We use the last convolutional layer output extracted by ResNet50 or ResNet101 (He et al., 2016), pretrained on the ImageNet (Deng et al., 2009) dataset and fine-tuned on the target dataset, as our frame feature representation. We acknowledge that more accurate feature extractors (for instance, networks with more parameters such as ResNet-152, or higher-performance networks such as DenseNet (Huang et al., 2017) or SENet (Hu et al., 2018)) and optical flow features would likely lead to better overall performance. Our primary purpose in this paper is to prove the efficacy of our spatial-temporal attention mechanism, hence we keep the features relatively simple.

3.2 SPATIAL ATTENTION WITH IMPORTANCE MASK
We apply an importance mask M_i to the i-th image features X_i to obtain attended image features by element-wise multiplication:

X̃_i = X_i ⊙ M_i, (1)

for 1 ≤ i ≤ n. This operation attenuates certain regions of the feature map based on their estimated importance. We simply use three convolutional layers to learn the importance mask (please refer to Appendix B.2 for network architecture details). Fig. 2 illustrates our spatial attention mechanism. However, if left uncontrolled, an arbitrarily structured mask could be learned, leading to possible overfitting. We posit that, in practice, it is often useful to attend to a few important larger regions (e.g., objects, elements of the scene). To induce this behavior, we encourage smoothness of the mask by introducing a total variation loss on the spatial attention, as described in Sec. 3.4.

3.3 TEMPORAL ATTENTION
Inspired by attention for neural machine translation (Bahdanau et al., 2015), we introduce a temporal attention mechanism which generates an energy for each attended frame X̃_i at each time step t:

e_{ti} = Φ(H_{t−1}, X̃_i), (2)

where H_{t−1} represents the ConvLSTM hidden state at time t − 1, which implicitly contains all previous information up to time step t − 1, and X̃_i represents the i-th frame's masked features. Φ = Φ_H(H_{t−1}) + Φ_X(X̃_i), where Φ_H and Φ_X are feed-forward neural networks which are jointly trained with all other components of the proposed system. This temporal attention model directly computes a soft attention weight for each frame at each time t, as shown in Fig. 3. It allows the gradient of the cost function to be backpropagated through, and this gradient is used to train the entire spatial-temporal attention model jointly. The importance weight w_{ti} of each frame is:

w_{ti} = exp(e_{ti}) / Σ_{i=1}^n exp(e_{ti}), (3)

for 1 ≤ i ≤ n, 1 ≤ t ≤ T. This importance weighting mechanism decides which frame of the video to pay attention to.
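A minimal PyTorch sketch of Eqs. (2)-(3); the paper specifies Φ_H and Φ_X only as feed-forward networks, so the layer sizes, names, and the spatial pooling of the feature maps below are our own assumptions:

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    def __init__(self, hidden_ch, feat_ch, width=128):
        super().__init__()
        # Phi_H and Phi_X: small feed-forward nets producing scalar energies.
        self.phi_h = nn.Sequential(nn.Linear(hidden_ch, width), nn.ReLU(), nn.Linear(width, 1))
        self.phi_x = nn.Sequential(nn.Linear(feat_ch, width), nn.ReLU(), nn.Linear(width, 1))

    def forward(self, h_prev, x_frames):
        # h_prev: (hidden_ch,) spatially pooled ConvLSTM state H_{t-1}
        # x_frames: (n, feat_ch) spatially pooled masked frame features
        e = (self.phi_h(h_prev) + self.phi_x(x_frames)).squeeze(-1)  # Eq. (2): energies e_{t,i}
        return torch.softmax(e, dim=0)                               # Eq. (3): weights w_{t,i}
```

At each ConvLSTM step t, these weights form the input Y_t as a weighted average of the masked frame features, as in Eq. (4) below.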
The final feature map Y_t fed to the ConvLSTM is a weighted sum of the features from all frames, used as the ConvLSTM cell input:

Y_t = (1/n) Σ_{i=1}^n w_{ti} X̃_i, (4)

where X̃_i denotes the i-th masked frame of each video, and n represents the total number of frames of each video. For the RNN, instead of using a conventional LSTM (Graves, 2013), we use a Convolutional LSTM (ConvLSTM) (Shi et al., 2015). The drawback of a conventional LSTM is its use of full connections in the input-to-state and state-to-state transitions, in which no spatial information is encoded. In contrast, in a ConvLSTM each input, cell output, hidden state and gate is a 3D tensor whose last two dimensions are spatial, which preserves spatial information and is more suitable for image inputs. We use the following initialization strategy for the ConvLSTM cell state and hidden state for faster convergence:

C_0 = g_c((1/n) Σ_{i=1}^n X̃_i), H_0 = g_h((1/n) Σ_{i=1}^n X̃_i), (5)

where g_c and g_h are two-layer convolutional networks with batch normalization (Ioffe & Szegedy, 2015). We calculate the average hidden state of the ConvLSTM over the time length T,

H̄ = (1/T) Σ_{i=1}^T H_i, (6)

and send it to a fully connected classification layer for the final video action classification.

3.4 LOSS FUNCTION
Given the spatial and temporal nature of our video action recognition task, we would like to learn (1) a sensible attention mask for spatial attention, (2) reasonable importance weighting scores for the different frames, and (3) improved action recognition accuracy, all at the same time. Therefore, our loss function L is:

L = L_CE + λ_TV L_TV + λ_contrast L_contrast + λ_unimodal L_unimodal, (7)

where L_CE is the cross-entropy loss for classification, L_TV represents the total variation regularization (Rudin et al., 1992), L_contrast represents the mask and background contrast regularizer, and L_unimodal represents the unimodality regularizer. λ_TV, λ_contrast and λ_unimodal are the weights of the corresponding regularizers.

The total variation regularization L_TV of the learnable attention mask encourages spatial smoothness of the mask and is defined as:

L_TV = Σ_{i=1}^n ( Σ_{j,k} |M_i^{j+1,k} − M_i^{j,k}| + Σ_{j,k} |M_i^{j,k+1} − M_i^{j,k}| ), (8)

where M_i is the mask for the i-th frame, and M_i^{j,k} is the entry at the (j, k)-th spatial location of the mask. Different from the total variation of the mask using an L2 loss in Dabkowski & Gal (2017), we use an L1 loss instead.

The contrast regularization L_contrast of the learnable attention mask suppresses irrelevant information and highlights important information:

L_contrast = Σ_{i=1}^n ( −(1/2) M_i ⊙ B_i + (1/2) M_i ⊙ (1 − B_i) ), (9)

where B_i = I{M_i > 0.5} represents the binarized mask, and I is the indicator function applied element-wise.

The unimodality regularizer L_unimodal encourages the temporal attention weights to be unimodal, biasing against spurious temporal weights. This stems from our observation that in most cases only one activity is present in the considered frame window, with possibly irrelevant information on either or both sides. We use the log-concave-distribution property to encourage a unimodal pattern of the temporal attention weights:

L_unimodal = Σ_{t=1}^T Σ_{i=2}^{n−1} max{0, w_{t,i−1} w_{t,i+1} − w_{t,i}²}, (10)

where T represents the ConvLSTM time sequence length and n is the number of frames of each video. For more details on log-concave sequences, please refer to Appendix A.
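The three regularizers are simple to implement. A PyTorch sketch under our own shape conventions (masks M of shape (n, H, W), temporal weights W of shape (T, n)):

```python
import torch

def tv_loss(M):
    """Eq. (8): L1 total variation over n spatial masks of shape (n, H, W)."""
    return (M[:, 1:, :] - M[:, :-1, :]).abs().sum() + (M[:, :, 1:] - M[:, :, :-1]).abs().sum()

def contrast_loss(M):
    """Eq. (9): push mask values away from 0.5 toward 0 or 1."""
    B = (M > 0.5).float()                     # binarized mask; no gradient through B
    return (-0.5 * M * B + 0.5 * M * (1 - B)).sum()

def unimodal_loss(W):
    """Eq. (10): log-concavity penalty on temporal weights of shape (T, n)."""
    return torch.clamp(W[:, :-2] * W[:, 2:] - W[:, 1:-1] ** 2, min=0).sum()
```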
4 EXPERIMENTS

In this section, we first conduct experiments to evaluate our proposed method on the video action recognition task on three publicly available datasets. Then we evaluate our spatial attention mechanism on the spatial localization task and our temporal attention mechanism on the temporal localization task, respectively.

4.1 VIDEO ACTION RECOGNITION
We first conduct extensive studies on the widely used HMDB51 and UCF101 datasets. The purpose of these experiments is mainly ablation: to examine the effects of the different sub-components. Then we show that our method can be applied to the challenging large-scale Moments in Time dataset.

Datasets. The HMDB51 dataset (Kuehne et al., 2011) contains 51 distinct action categories, each containing at least 101 clips, for a total of 6,766 video clips extracted from a wide range of sources. These videos include general facial actions, general body movements, body movements with object interaction, and body movements for human interaction. The UCF101 dataset (Soomro et al., 2012) is an action recognition dataset of realistic action videos, collected from YouTube, with 101 action categories. The Moments in Time dataset (Monfort et al., 2018) is a collection of one million short videos with one action label per video and 339 different action classes. As there can be more than one action taking place in a video, action recognition models may predict an action correctly yet be penalized because the ground truth does not include that action. Therefore, the top-5 accuracy measure is believed to be more meaningful for this dataset.

Experimental setup. We use the same parameters for HMDB51 and UCF101: a single Convolutional LSTM layer with hidden-state dimension 512, sequence length T = 25, λ_TV = 10^{-5}, λ_contrast = 10^{-4}, λ_unimodal = 1. For the Moments in Time dataset, we use time sequence length T = 15. For more details on the experimental setup, please refer to Appendix B.1.

Quantitative results. We show the top-1 video action classification accuracy for the HMDB51 and UCF101 datasets in Table 1. Our proposed model outperforms previous attention-based models (Sharma et al., 2015; Li et al., 2018b; Girdhar & Ramanan, 2017) and a conventional ResNet101-ImageNet baseline. The ablation experiments demonstrate that all sub-components of the proposed method contribute to the final performance. The results on the Moments in Time dataset are reported in Table 2. Our method achieves the best accuracy among single-modality methods, and obtains better or comparable results relative to methods which use more than one modality. TRN-Multiscale (Zhou et al., 2018), which uses both RGB and optical flow images, performs better than ours; however, extracting optical flow images for such a large dataset is very time-consuming and requires the same order of magnitude of storage as the RGB images.

Qualitative results. We visualize the spatial attention and temporal attention results in Fig. 4. We can see that the spatial attention correctly focuses on important spatial areas of the image, and the temporal attention shows a unimodal distribution over the entire action, from starting the action to completing it. More results are shown in Appendix C.1.

4.2 WEAKLY SUPERVISED LOCALIZATION
Due to its spatial and temporal attention mechanisms, our model can not only classify the action in a video, but also provide better interpretability of the results, i.e., tell which regions and frames contribute most to the prediction. In other words, our proposed model can also localize the most discriminative regions and frames at the same time.
To verify this, we conduct spatial localization and temporal localization experiments.

4.2.1 SPATIAL ACTION LOCALIZATION
Dataset. UCF101-24 is a subset of 24 of the 101 classes of UCF101 that comes with spatio-temporal localization annotation, released as bounding box annotations of humans with the THUMOS 2013 and THUMOS 2014 challenges (Jiang et al., 2014).

Experimental setup. For training, we only use the classification labels, without spatial bounding box labels. For evaluation, we threshold the produced saliency mask at 0.5, and the tightest bounding box that contains the thresholded saliency map is set as the predicted localization box for each frame. These predicted localization boxes are then compared with the ground truth bounding boxes at different Intersection over Union (IoU) levels.

Qualitative results. We show some qualitative results in Fig. 5. Our spatial attention can attend to important action areas. The ground truth bounding boxes include the entire human performing the action, while our attention can attend to the crucial parts of an action, as in Fig. 5 (d) and (e). Furthermore, our attention mechanism is able to attend to areas with multiple human actions. For instance, in Fig. 5 (f) the ground truth only includes one person bicycling, but our attention covers both people bicycling. More qualitative results, including failure cases, are included in Appendix C.2.

Quantitative results. Table 3 shows the quantitative results for UCF101-24 spatial localization. Our attention mechanism works better than the baseline methods when the IoU threshold is lower, mainly because our model only focuses on important spatial areas rather than the entire human action annotated by bounding boxes. Unlike the baseline methods, which train with ground truth bounding boxes, we only use the action classification label; no ground truth bounding boxes are used.
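The evaluation protocol above is easy to reproduce. A small sketch (function names are ours) that turns a saliency mask into the tightest box and scores it against a ground-truth box:

```python
import numpy as np

def mask_to_box(mask, thr=0.5):
    """Tightest box around the thresholded saliency mask; returns (x1, y1, x2, y2) or None."""
    ys, xs = np.where(mask > thr)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)
```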
4.2.2 TEMPORAL ACTION LOCALIZATION
Dataset. The action detection task of THUMOS14 (Jiang et al., 2014) consists of 20 classes of sports activities, and contains 2,765 trimmed videos for training, and 200 and 213 untrimmed videos for validation and test, respectively. More details on this dataset and its pre-processing are included in Appendix B.1.

Experimental setup. We use the same hyperparameters for THUMOS14 as for HMDB51, UCF101 and UCF101-24. For training, we only use the classification labels, without temporal annotation labels. For evaluation, we threshold the normalized temporal attention importance weight at 0.5. These predicted temporal localization frames are then compared with the ground truth annotation at different IoU thresholds.

Qualitative results. We first visualize some examples of learned attention weights on the test data of THUMOS14 in Fig. 6. We see that our temporal attention module is able to automatically highlight important frames and to avoid irrelevant frames corresponding to background or non-action human poses. More qualitative results are included in Appendix C.4.

Quantitative results. With our spatial-temporal attention mechanism, the video action classification accuracy for the 20 THUMOS14 classes improves from 74.45% to 78.33%: a 3.88% increase. Besides improving the classification accuracy, we show quantitatively in Table 4 that our temporal attention mechanism is able to highlight discriminative frames. Compared with a reinforcement learning based method (Yeung et al., 2016) and a weakly supervised method (Wang et al., 2017), our method achieves the best accuracy across different IoU thresholds.

5 CONCLUSION
In this work, we develop a novel spatial-temporal attention mechanism for the task of video action recognition, and demonstrate its efficacy across three publicly available datasets. We also introduce a set of regularizers that ensure our attention mechanism attends to coherent regions in space and time, further improving the performance and increasing the model's interpretability. Moreover, we qualitatively and quantitatively show that our spatio-temporal attention is able to localize discriminative regions and important frames, despite being trained in a purely weakly-supervised manner with only classification labels.

A LOG-CONCAVE SEQUENCE
In probability and statistics, a unimodal distribution is a probability distribution that has a single peak or mode. If a distribution has more modes, it is called multimodal. The temporal attention weights form a univariate discrete distribution over the frames, indicating the importance of each frame for the task of classification. In the context of activity recognition, it is reasonable to assume that the frames that contain salient information are consecutive, rather than scattered around. Therefore, we would like to design a regularizer that encourages unimodality. To this end, we introduce a mathematical concept called the log-concave sequence and define the regularizer based on it. We first give a formal definition of a unimodal sequence.

Definition 1. A sequence {a_i}_{i=1}^n is unimodal if, for some integer m, a_{i−1} ≤ a_i whenever i ≤ m, and a_i ≥ a_{i+1} whenever i ≥ m. A univariate discrete distribution is unimodal if its probability mass function forms a unimodal sequence.

The log-concave sequence is defined as follows.

Definition 2. A non-negative sequence {a_i}_{i=1}^n is log-concave if a_i² ≥ a_{i−1} a_{i+1}.

This property gets its name from the fact that if {a_i}_{i=1}^n is log-concave, then the sequence {log a_k}_{k=1}^n is concave. The connection between unimodality and log-concavity is given by the following proposition.

Proposition 1. A log-concave sequence is unimodal.

Proof. Rearranging the defining inequality for log-concavity, we see that a_i / a_{i−1} ≥ a_{i+1} / a_i, so the ratio of consecutive terms is decreasing. Until the ratios decrease below 1, the sequence is increasing, and after this point the sequence is decreasing, so it is unimodal.

Given the definition of log-concavity, it is straightforward to design a regularization term that encourages log-concavity:

R = Σ_{i=2}^{n−1} max{0, a_{i−1} a_{i+1} − a_i²}. (11)

By Proposition 1, this regularizer also encourages unimodality.

B MORE DATASETS AND IMPLEMENTATION DETAILS

B.1 MORE DETAILS ON THE DATASETS AND EXPERIMENTAL SETUP
HMDB51 and UCF101. The dataset pre-processing and data augmentation are the same as in the ResNet ImageNet experiments (He et al., 2016). All videos are resized to 224 × 224 resolution and fed into a ResNet-50 pretrained on ImageNet. The last convolutional layer feature map size is 2048 × 7 × 7. The experimental setup for the Moments in Time dataset is the same as for HMDB51 and UCF101, except for the time sequence length and image resolution.

Moments in Time. For the Moments in Time dataset (Monfort et al., 2018), the videos are only 3 seconds long, much shorter than those in HMDB51 and UCF101. We extract RGB frames from the raw videos at 5 fps; therefore, the sequence length is T = 15.
Following the practice in Monfort et al. (2018) of making all videos a uniform resolution, we resize the RGB frames to 340 × 256 pixels. When extracting features, we use the ResNet-50 model pretrained on ImageNet, applied to resized images with a resolution of 256 × 256 pixels. The data augmentation is the same as in the ResNet ImageNet experiments (He et al., 2016). The feature map size of the last convolutional layer is 2048 × 8 × 8.

THUMOS14. The action detection task of THUMOS'14 (Jiang et al., 2014) consists of 20 classes of sports activities, and contains 2,765 trimmed videos for training, and 200 and 213 untrimmed videos for validation and test, respectively. Following standard practice (Yeung et al., 2016; Zhao et al., 2017), we use the validation set for training and evaluate on the testing set. Also following standard practice (Xu et al., 2017), to avoid training ambiguity, we remove videos with multiple labels. We extract RGB frames from the raw videos at 10 fps. The last convolutional layer feature map size is 2048 × 7 × 7.

B.2 SPATIAL ATTENTION NETWORK ARCHITECTURE
The detailed architecture of the spatial attention network described in Section 3.2 is listed in Table 5.

B.3 MORE IMPLEMENTATION DETAILS
All experiments are evaluated on machines with a single Nvidia GeForce GTX 1080Ti GPU. The networks are implemented using the PyTorch library, and our code will be made publicly available with the paper.

C MORE RESULTS

C.1 MORE SPATIAL-TEMPORAL ATTENTION RESULTS
Fig. 7 shows more results on spatial-temporal attention.

C.2 MORE SPATIAL LOCALIZATION RESULTS
Fig. 8 shows more spatial localization results. Fig. 9 shows some failure cases.

C.3 MORE ACTION RECOGNITION RESULTS
Table 6 shows results of our spatial-temporal attention model with different base networks. Our spatial-temporal attention mechanism is an easy plug-in model which can be built on different network architectures and can boost their performance.

C.4 MORE TEMPORAL LOCALIZATION RESULTS
Fig. 10 shows more results on temporal localization with our temporal attention.
1. What is the focus of the paper regarding spatial and temporal attention?
2. What are the strengths of the proposed approach, particularly in its simplicity and novel regularization terms?
3. What are the weaknesses of the paper, especially in its comparisons with other works and neglecting certain features?
4. How does the reviewer assess the significance of the paper's contribution to action recognition tasks?
5. Do you have any questions regarding the choice of regularized masking vs soft attention for spatial attention?
Review
Review
The paper proposes an end-to-end technique that applies both spatial and temporal attention. The spatial attention is done by training a mask filter, while the temporal attention uses a soft-attention mechanism. In addition, the authors propose several regularization terms to directly improve attention. The evaluated datasets are action recognition datasets such as HMDB51, UCF101, Moments in Time, and THUMOS'14. The paper reports SOTA on all three datasets.

Strengths:
- The paper is well written: easy to follow, and it describes the importance of spatial-temporal attention.
- The model is simple, and it proposes novel attention regularization terms.
- The authors evaluate on several tasks, and show good qualitative behavior.

Weaknesses:
- The reported numbers on UCF101 and HMDB51 are confusing/misleading. Even with only RGB, the evaluation misses numbers for models like ActionVLAD with 50% on HMDB51 or Res3D with 88% on UCF101. I'll also add that there are available models nowadays that achieve over 94% accuracy on UCF101, and over 72% on HMDB51. The paper should at least have a better discussion of those years of progress. The misinformation also continues on THUMOS14; for instance, R-C3D beats the proposed model.
- In my opinion the paper should include a flow variant. It is a common setup in action recognition, and a good model should take advantage of these features, especially for spatial-temporal attention, e.g., the VideoLSTM paper by Li.
- In general, spatial attention over each frame is extremely demanding. The original image features are now multiplied by a factor of 49; this is more demanding in terms of memory consumption than the flow features they chose to ignore.
- The authors report on 15-frame setups for those short videos. But it would be interesting to see if the model is still usable on longer videos, for instance on the Charades dataset.
- Can you please explain why you chose a regularized masking instead of soft attention for spatial attention?

To conclude: The goal of spatial-temporal attention is important, and the proposed approach behaves well. Yet the model is an extension of known techniques for image attention, which are not trivial to apply to long videos with many frames. Evaluating only on RGB features is not enough for an action recognition model. Importantly, even when considering only RGB models, the paper still missed many popular stronger baselines.
ICLR
Title
Where and when to look? Spatial-temporal attention for action recognition in videos

Abstract
Inspired by the observation that humans are able to process videos efficiently by only paying attention where and when it is needed, we propose a novel spatial-temporal attention mechanism for video-based action recognition. For spatial attention, we learn a saliency mask using only convolutional layers to allow the model to focus on the most salient parts of the feature maps. For temporal attention, we employ a convolutional LSTM based attention mechanism to identify the most relevant frames from an input video. Further, we propose a set of regularizers that ensure that our attention mechanism attends to coherent regions in space and time. Our model not only effectively improves video action recognition accuracy, but can also localize discriminative regions both spatially and temporally, despite being trained in a weakly-supervised manner with only classification labels (no spatial bounding box labels and no temporal frame labels). We evaluate our proposed approach on several public video action recognition datasets with ablation studies. Furthermore, we quantitatively and qualitatively evaluate our model's ability to localize discriminative regions spatially and critical frames temporally. Experimental results demonstrate the efficacy of our approach, showing superior or comparable accuracy to state-of-the-art methods with the same input.

1 INTRODUCTION
An important property of human perception is that one does not need to process a whole scene in its entirety at once. Instead, humans focus attention selectively on parts of the visual space to acquire information where and when it is needed, and combine information from different fixations over time to build up an internal representation of the scene (Rensink, 2000), which can then be used for interpretation or decision making. In computer vision and natural language processing, attention models have proved similarly important over the last couple of years, particularly for tasks where interpretation or explanation requires only a small portion of the image or video. Examples include visual question answering (Lu et al., 2016; Xu & Saenko, 2016; Xiong et al., 2016), activity recognition (Sharma et al., 2015; Girdhar & Ramanan, 2017; Li et al., 2018b), and neural machine translation (Bahdanau et al., 2015). These models have also provided a level of interpretability, by visualizing regions selected or attended over for a particular task or decision. In particular, for video action classification, a proper attention model can help answer the question of where and when it needs to look at the image evidence to draw a classification decision. It intuitively explains which part the model attends to when making a particular decision, which is very helpful in real applications, e.g., medical AI systems or self-driving cars.

In this paper, we propose a novel spatio-temporal attention mechanism that is designed to address these challenges. Our attention mechanism is efficient, due to its space- and time-separability, and yet flexible enough to enable encoding of effective regularizers (or priors). As such, our attention mechanism consists of the spatial and temporal components shown in Fig. 1. The spatial attention component, which attenuates frame-wise CNN image features, consists of a saliency mask, regularized to be discriminative and spatially smooth.
The temporal component consists of a uni-modal soft attention mechanism that aggregates information over the nearby attenuated frame features before passing it into a Convolutional LSTM for class prediction.

Contributions: In summary, the main contributions of this work are: (1) We introduce a simple yet effective spatial-temporal attention mechanism for video action recognition, which consists of a saliency mask for spatial attention learned by ConvNets and temporal attention learned by a convolutional LSTM. (2) We introduce three different regularizers, two for the spatial and one for the temporal attention component, to improve the performance and interpretability of our model. (3) We demonstrate the efficacy of our model for video action recognition on three public datasets and explore the importance of our modeling choices through ablation experiments. (4) Finally, we qualitatively and quantitatively show that our spatio-temporal attention is able to localize discriminative regions and important frames, despite being trained in a purely weakly-supervised manner with only classification labels.

2 RELATED WORK
2.1 NETWORK INTERPRETATION
Various methods have been proposed to explain neural networks (Zeiler & Fergus, 2014; Springenberg et al., 2014; Mahendran & Vedaldi, 2016; Zhou et al., 2016; Zhang et al., 2016; Simonyan et al., 2013; Ramprasaath et al., 2016; Ribeiro et al., 2016; 2018; Chang et al., 2018) in a variety of ways, including visualizing gradients, perturbing inputs, and bridging relations with other well-studied systems. Visual attention is also one way to explain which part of the image is responsible for the network's decision (Li et al., 2018a; Jetley et al., 2018). Beyond explanation, Li et al. (2018a) build an end-to-end model to provide supervision directly on these explanations, specifically the network's attention.

2.2 VISUAL ATTENTION FOR VIDEO ACTION RECOGNITION
For video action recognition, visualizing which part of the frame and which frame of the video sequence the model attends to provides valuable insight into the model's behavior. Sharma et al. (2015) develop an attention-driven LSTM that highlights important spatial locations for action recognition. Girdhar & Ramanan (2017) introduce an attention mechanism based on a derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods. However, these works only focus on the crucial spatial locations of each image, without considering temporal relations among different frames in a video sequence. To alleviate this shortcoming, visual attention has been incorporated into the motion stream (Wang et al., 2016b; Li et al., 2018b; Du et al., 2018). However, the motion stream only employs optical flow frames generated from consecutive frames and cannot consider long-term temporal relations among different frames in a video sequence. Moreover, the motion stream needs additional optical flow frames as input, which imposes a burden due to additional optical flow extraction, storage, and computation, and is especially severe for large datasets. Torabi & Sigal (2017) propose an attention-based LSTM model to highlight frames in videos, but spatial information is not used for temporal attention. An end-to-end spatial and temporal attention model is proposed in (Song et al., 2017) for human action recognition, but additional skeleton data is needed.
3 SPATIAL-TEMPORAL ATTENTION MECHANISM
Our overall model is a Recurrent Neural Network (RNN) that aggregates frame-based convolutional features across the video to make action predictions, as shown in Fig. 1. The convolutional features are attended over both spatially, in each frame, and subsequently temporally. Both attentions are soft, meaning that the effective final representation at time $t$ of the RNN, used to make the prediction, is a spatio-temporally weighted aggregation of convolutional features across the video, along with the past hidden state from $t-1$. The core novelty is the overall form of our attention mechanism and the additional terms of the loss function that induce sensible spatial and temporal attention priors.

3.1 CONVOLUTIONAL FRAME FEATURES
We use the last convolutional layer output extracted by ResNet-50 or ResNet-101 (He et al., 2016), pretrained on the ImageNet (Deng et al., 2009) dataset and fine-tuned for the target dataset, as our frame feature representation. We acknowledge that more accurate feature extractors (for instance, networks with more parameters such as ResNet-152 or higher-performance networks such as DenseNet (Huang et al., 2017) or SENet (Hu et al., 2018)) and optical flow features would likely lead to better overall performance. Our primary purpose in this paper is to prove the efficacy of our spatial-temporal attention mechanism, hence we keep the features relatively simple.

3.2 SPATIAL ATTENTION WITH IMPORTANCE MASK
We apply an importance mask $M_i$ to the $i$-th image features $X_i$ to obtain attended image features by element-wise multiplication:
$$\tilde{X}_i = X_i \odot M_i, \quad (1)$$
for $1 \le i \le n$. This operation attenuates certain regions of the feature map based on their estimated importance. We simply use three convolutional layers to learn the importance mask (please refer to Appendix B.2 for network architecture details). Fig. 2 illustrates our spatial attention mechanism. However, if left uncontrolled, an arbitrarily structured mask could be learned, leading to possible overfitting. We posit that, in practice, it is often useful to attend to a few important larger regions (e.g., objects, elements of the scene). To induce this behavior, we encourage smoothness of the mask by introducing a total variation loss on the spatial attention, as described in Sec. 3.4.

3.3 TEMPORAL ATTENTION
Inspired by attention for neural machine translation (Bahdanau et al., 2015), we introduce a temporal attention mechanism which generates an energy for each attended frame $\tilde{X}_i$ at each time step $t$:
$$e_{ti} = \Phi(H_{t-1}, \tilde{X}_i), \quad (2)$$
where $H_{t-1}$ represents the ConvLSTM hidden state at time $t-1$, which implicitly contains all previous information up to time step $t-1$, and $\tilde{X}_i$ represents the $i$-th frame's masked features. Here $\Phi = \Phi_H(H_{t-1}) + \Phi_X(\tilde{X}_i)$, where $\Phi_H$ and $\Phi_X$ are feed-forward neural networks which are jointly trained with all other components of the proposed system. This temporal attention model directly computes a soft attention weight for each frame at each time $t$, as shown in Fig. 3. It allows the gradient of the cost function to be backpropagated through, and this gradient can be used to train the entire spatial-temporal attention model jointly. The importance weight $w_{ti}$ for each frame is:
$$w_{ti} = \frac{\exp(e_{ti})}{\sum_{j=1}^{n} \exp(e_{tj})}, \quad (3)$$
for $1 \le i \le n$, $1 \le t \le T$. This importance weighting mechanism decides which frame of the video to pay attention to.
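To make the mechanics of Eqs. (1)-(3) concrete, the following is a minimal PyTorch sketch of the spatial mask and the temporal attention weights. The layer widths, the sigmoid and single-channel form of the mask, and the spatial pooling before the feed-forward scorers are all assumptions for illustration (the paper's exact architecture is in its Appendix B.2 / Table 5); the weighted aggregation of Eq. (4) follows in the text below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialMask(nn.Module):
    """Three conv layers producing an importance mask M_i, applied elementwise (Eq. 1)."""
    def __init__(self, channels=2048, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, x):                        # x: (n, C, H, W) frame features
        mask = torch.sigmoid(self.net(x))        # (n, 1, H, W), broadcast over channels
        return x * mask, mask                    # X~_i = X_i (elementwise) M_i

class TemporalAttention(nn.Module):
    """Energies e_ti = Phi_H(H_{t-1}) + Phi_X(X~_i), softmax over frames (Eqs. 2-3)."""
    def __init__(self, feat_channels=2048, hidden_channels=512):
        super().__init__()
        self.phi_h = nn.Linear(hidden_channels, 1)   # feed-forward nets; widths assumed
        self.phi_x = nn.Linear(feat_channels, 1)

    def forward(self, h_prev, x_masked):
        # Spatially pool so each frame / hidden state becomes a vector (an assumption).
        e = self.phi_h(h_prev.mean(dim=(-2, -1))) + self.phi_x(x_masked.mean(dim=(-2, -1)))
        return F.softmax(e.squeeze(-1), dim=0)       # weights w_ti over the n frames
```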
The final feature map $Y_t$, used as the ConvLSTM cell input, is a weighted sum of the features from all of the frames:
$$Y_t = \frac{1}{n} \sum_{i=1}^{n} w_{ti} \tilde{X}_i, \quad (4)$$
where $\tilde{X}_i$ denotes the $i$-th masked frame of the video and $n$ is the total number of frames per video. For the RNN, instead of a conventional LSTM (Graves, 2013), we use a Convolutional LSTM (ConvLSTM) (Shi et al., 2015). The drawback of a conventional LSTM is its use of full connections in the input-to-state and state-to-state transitions, in which no spatial information is encoded. In contrast, in a ConvLSTM each input, cell output, hidden state, and gate is a 3D tensor whose last two dimensions are spatial, which preserves spatial information and is more suitable for image inputs. We use the following initialization strategy for the ConvLSTM cell state and hidden state for faster convergence:
$$C_0 = g_c\Big(\frac{1}{n} \sum_{i=1}^{n} \tilde{X}_i\Big), \quad H_0 = g_h\Big(\frac{1}{n} \sum_{i=1}^{n} \tilde{X}_i\Big), \quad (5)$$
where $g_c$ and $g_h$ are two-layer convolutional networks with batch normalization (Ioffe & Szegedy, 2015). We calculate the average hidden state of the ConvLSTM over the time length $T$,
$$\bar{H} = \frac{1}{T} \sum_{t=1}^{T} H_t, \quad (6)$$
and send it to a fully connected classification layer for the final video action classification.

3.4 LOSS FUNCTION
Considering the spatial and temporal nature of video action recognition, we would like to (1) learn a sensible attention mask for spatial attention, (2) learn reasonable importance weighting scores for different frames, and (3) improve the action recognition accuracy at the same time. Therefore, our loss function $L$ is:
$$L = L_{CE} + \lambda_{TV} L_{TV} + \lambda_{contrast} L_{contrast} + \lambda_{unimodal} L_{unimodal}, \quad (7)$$
where $L_{CE}$ is the cross-entropy loss for classification, $L_{TV}$ represents the total variation regularization (Rudin et al., 1992), $L_{contrast}$ represents the mask-background contrast regularizer, and $L_{unimodal}$ represents the unimodality regularizer. $\lambda_{TV}$, $\lambda_{contrast}$, and $\lambda_{unimodal}$ are the weights of the corresponding regularizers.

The total variation regularization $L_{TV}$ of the learnable attention mask encourages spatial smoothness of the mask and is defined as:
$$L_{TV} = \sum_{i=1}^{n} \Big( \sum_{j,k} |M_i^{j+1,k} - M_i^{j,k}| + \sum_{j,k} |M_i^{j,k+1} - M_i^{j,k}| \Big), \quad (8)$$
where $M_i$ is the mask for the $i$-th frame, and $M_i^{j,k}$ is the entry at the $(j,k)$-th spatial location of the mask. Different from the total variation of the mask using an L2 loss in Dabkowski & Gal (2017), we use an L1 loss instead.

The contrast regularization $L_{contrast}$ of the learnable attention mask suppresses irrelevant information and highlights important information:
$$L_{contrast} = \sum_{i=1}^{n} \Big( -\frac{1}{2} M_i \odot B_i + \frac{1}{2} M_i \odot (1 - B_i) \Big), \quad (9)$$
where $B_i = \mathbb{I}\{M_i > 0.5\}$ is the binarized mask and $\mathbb{I}$ is the indicator function applied elementwise.

The unimodality regularizer $L_{unimodal}$ encourages the temporal attention weights to be unimodal, biasing against spurious temporal weights. This stems from our observation that in most cases only one activity is present in the considered frame window, with possibly irrelevant information on either or both sides. We use the log-concavity condition to encourage a unimodal pattern of the temporal attention weights:
$$L_{unimodal} = \sum_{t=1}^{T} \sum_{i=2}^{n-1} \max\{0, w_{t,i-1} w_{t,i+1} - w_{t,i}^2\}, \quad (10)$$
where $T$ is the ConvLSTM time sequence length and $n$ is the number of frames per video. For more details on log-concave sequences, please refer to Appendix A.
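A minimal PyTorch sketch of the three regularizers in Eqs. (8)-(10) might look as follows; the reductions (sums rather than means) and the treatment of the non-differentiable binarization in Eq. (9), where the gradient flows through the mask values only, are assumptions not pinned down by the text.

```python
import torch

def tv_loss(masks):
    """Eq. (8): anisotropic L1 total variation; masks: (n, 1, H, W)."""
    dh = (masks[..., 1:, :] - masks[..., :-1, :]).abs().sum()
    dw = (masks[..., :, 1:] - masks[..., :, :-1]).abs().sum()
    return dh + dw

def contrast_loss(masks):
    """Eq. (9): push mask values away from 0.5 toward 0 or 1."""
    binarized = (masks > 0.5).float()   # B_i; gradients flow through masks only
    return (-0.5 * masks * binarized + 0.5 * masks * (1.0 - binarized)).sum()

def unimodal_loss(weights):
    """Eq. (10): penalize violations of log-concavity; weights: (T, n)."""
    violation = weights[:, :-2] * weights[:, 2:] - weights[:, 1:-1] ** 2
    return torch.clamp(violation, min=0.0).sum()

def total_loss(ce, masks, weights, l_tv=1e-5, l_con=1e-4, l_uni=1.0):
    """Eq. (7), with the coefficients reported in the experimental setup."""
    return (ce + l_tv * tv_loss(masks)
               + l_con * contrast_loss(masks)
               + l_uni * unimodal_loss(weights))
```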
4 EXPERIMENTS
In this section, we first conduct experiments to evaluate our proposed method on the video action recognition task on three publicly available datasets. Then we evaluate our spatial attention mechanism on the spatial localization task and our temporal attention mechanism on the temporal localization task.

4.1 VIDEO ACTION RECOGNITION
We first conduct extensive studies on the widely used HMDB51 and UCF101 datasets. The purpose of these experiments is mainly an ablation study to examine the effects of the different sub-components. Then we show that our method can be applied to the challenging large-scale Moments in Time dataset.

Datasets. The HMDB51 dataset (Kuehne et al., 2011) contains 51 distinct action categories, each containing at least 101 clips, for a total of 6,766 video clips extracted from a wide range of sources. These videos include general facial actions, general body movements, body movements with object interaction, and body movements for human interaction. The UCF101 dataset (Soomro et al., 2012) is an action recognition dataset of realistic action videos, collected from YouTube, with 101 action categories. The Moments in Time dataset (Monfort et al., 2018) is a collection of one million short videos with one action label per video and 339 different action classes. As there can be more than one action taking place in a video, action recognition models may predict an action correctly yet be penalized because the ground truth does not include that action. Therefore, the top-5 accuracy measure is believed to be more meaningful for this dataset.

Experimental setup. We use the same parameters for HMDB51 and UCF101: a single Convolutional LSTM layer with hidden-state dimension 512, sequence length $T = 25$, $\lambda_{TV} = 10^{-5}$, $\lambda_{contrast} = 10^{-4}$, $\lambda_{unimodal} = 1$. For the Moments in Time dataset, we use time sequence length $T = 15$. For more details on the experimental setup, please refer to Appendix B.1.

Quantitative results. We show the top-1 video action classification accuracy for the HMDB51 and UCF101 datasets in Table 1. Our proposed model outperforms previous attention-based models (Sharma et al., 2015; Li et al., 2018b; Girdhar & Ramanan, 2017) and a conventional ImageNet-pretrained ResNet-101. The ablation experiments demonstrate that all sub-components of the proposed method contribute to the final performance. The results on the Moments in Time dataset are reported in Table 2. Our method achieves the best accuracy among single-modality methods, and obtains better or comparable results relative to methods that use more than one modality. TRN-Multiscale (Zhou et al., 2018), which uses both RGB and optical flow images, performs better than ours; however, extracting optical flow images for such large datasets is very time-consuming and needs the same order of magnitude of storage as the RGB images.

Qualitative results. We visualize the spatial attention and temporal attention results in Fig. 4. We can see that the spatial attention correctly focuses on important spatial areas of the image, and the temporal attention shows a unimodal distribution over the entire action, from starting the action to completing it. More results are shown in Appendix C.1.

4.2 WEAKLY SUPERVISED LOCALIZATION
Thanks to the spatial and temporal attention mechanisms, our model can not only classify the action in a video, but also provide better interpretability of the results, i.e., tell which regions and frames contribute most to the prediction. In other words, our proposed model can localize the most discriminative regions and frames at the same time.
To verify this, we conduct spatial localization and temporal localization experiments.

4.2.1 SPATIAL ACTION LOCALIZATION
Dataset. UCF101-24 is a subset of 24 of the 101 classes of UCF101 that comes with spatio-temporal localization annotation, released as bounding box annotations of humans with the THUMOS 2013 and THUMOS 2014 challenges (Jiang et al., 2014).

Experimental setup. For training, we only use the classification labels, without spatial bounding box labels. For evaluation, we threshold the produced saliency mask at 0.5, and the tightest bounding box that contains the thresholded saliency map is taken as the predicted localization box for each frame. These predicted localization boxes are then compared with the ground truth bounding boxes at different Intersection over Union (IoU) levels.

Qualitative results. We show some qualitative results in Fig. 5. Our spatial attention attends to important action areas. The ground truth bounding boxes enclose the entire human action, while our attention can attend to the crucial parts of an action, as in Fig. 5 (d) and (e). Furthermore, our attention mechanism is able to attend to areas with multiple human actions. For instance, in Fig. 5 (f) the ground truth only includes one person bicycling, but our attention covers both people bicycling. More qualitative results, including failure cases, are included in Appendix C.2.

Quantitative results. Table 3 shows the quantitative results for UCF101-24 spatial localization. Our attention mechanism works better compared with the baseline methods when the IoU threshold is lower, mainly because our model only focuses on the important spatial areas rather than the entire human action annotated by the bounding boxes. Unlike the baseline methods, which train with ground truth bounding boxes, we only use the action classification labels; no ground truth bounding boxes are used.

4.2.2 TEMPORAL ACTION LOCALIZATION
Dataset. The action detection task of THUMOS14 (Jiang et al., 2014) consists of 20 classes of sports activities, and contains 2765 trimmed videos for training, with 200 and 213 untrimmed videos for validation and testing respectively. More details on this dataset and its pre-processing are included in Appendix B.1.

Experimental setup. We use the same hyperparameters for THUMOS14 as for HMDB51, UCF101, and UCF101-24. For training, we only use the classification labels, without temporal annotation labels. For evaluation, we threshold the normalized temporal attention importance weights at 0.5. These predicted temporal localization frames are then compared with the ground truth annotations at different IoU thresholds.

Qualitative results. We first visualize some examples of learned attention weights on the test data of THUMOS14 in Fig. 6. We see that our temporal attention module is able to automatically highlight important frames and to avoid irrelevant frames corresponding to background or non-action human poses. More qualitative results are included in Appendix C.4.

Quantitative results. With our spatial-temporal attention mechanism, the video action classification accuracy on the 20 THUMOS'14 classes improves from 74.45% to 78.33%: a 3.88% increase. Besides improving classification accuracy, we show quantitatively in Table 4 that our temporal attention mechanism is able to highlight discriminative frames. Compared with the reinforcement learning based method (Yeung et al., 2016) and the weakly supervised method (Wang et al., 2017), our method achieves the best accuracy across different IoU thresholds.
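The weakly supervised evaluation protocol just described (threshold at 0.5, tightest enclosing box, IoU against ground truth) is straightforward to implement. Below is a minimal sketch under the assumption that the saliency mask has already been upsampled to frame resolution; the set-based temporal IoU is likewise an assumption about the exact scoring used.

```python
import numpy as np

def mask_to_box(mask, thresh=0.5):
    """Tightest bounding box (x1, y1, x2, y2) containing mask values above thresh."""
    ys, xs = np.where(mask > thresh)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1 + 1) * max(0, y2 - y1 + 1)
    area_a = (box_a[2] - box_a[0] + 1) * (box_a[3] - box_a[1] + 1)
    area_b = (box_b[2] - box_b[0] + 1) * (box_b[3] - box_b[1] + 1)
    return inter / float(area_a + area_b - inter)

def temporal_iou(pred_frames, gt_frames):
    """1-D IoU: frames with normalized attention > 0.5 form the predicted segment."""
    pred, gt = set(pred_frames), set(gt_frames)
    return len(pred & gt) / float(len(pred | gt))
```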
5 CONCLUSION
In this work, we develop a novel spatial-temporal attention mechanism for the task of video action recognition and demonstrate its efficacy across three publicly available datasets. We also introduce a set of regularizers that ensure our attention mechanism attends to coherent regions in space and time, further improving performance and increasing model interpretability. Moreover, we qualitatively and quantitatively show that our spatio-temporal attention is able to localize discriminative regions and important frames, despite being trained in a purely weakly-supervised manner with only classification labels.

A LOG-CONCAVE SEQUENCE
In probability and statistics, a unimodal distribution is a probability distribution that has a single peak or mode; a distribution with more modes is called multimodal. The temporal attention weights form a univariate discrete distribution over the frames, indicating the importance of the frames for the classification task. In the context of activity recognition, it is reasonable to assume that the frames containing salient information should be consecutive, not scattered around. Therefore, we would like to design a regularizer that encourages unimodality. To this end, we introduce the mathematical concept of a log-concave sequence and define the regularizer based on it. We first give a formal definition of a unimodal sequence.

Definition 1. A sequence $\{a_i\}_{i=1}^{n}$ is unimodal if, for some integer $m$,
$$a_{i-1} \le a_i \ \text{if}\ i \le m, \qquad a_i \ge a_{i+1} \ \text{if}\ i \ge m.$$
A univariate discrete distribution is unimodal if its probability mass function forms a unimodal sequence. The log-concave sequence is defined as follows.

Definition 2. A non-negative sequence $\{a_i\}_{i=1}^{n}$ is log-concave if $a_i^2 \ge a_{i-1} a_{i+1}$.

This property gets its name from the fact that if $\{a_i\}_{i=1}^{n}$ is log-concave, then the sequence $\{\log a_i\}_{i=1}^{n}$ is concave. The connection between unimodality and log-concavity is given by the following proposition.

Proposition 1. A log-concave sequence is unimodal.

Proof. Rearranging the defining inequality for log-concavity, we see that
$$\frac{a_i}{a_{i-1}} \ge \frac{a_{i+1}}{a_i},$$
so the ratio of consecutive terms is decreasing. Until the ratios decrease below 1, the sequence is increasing, and after this point the sequence is decreasing, so it is unimodal.

Given the definition of log-concavity, it is straightforward to design a regularization term that encourages it:
$$R = \sum_{i=2}^{n-1} \max\{0, a_{i-1} a_{i+1} - a_i^2\}. \quad (11)$$
By Proposition 1, this regularizer also encourages unimodality.

B MORE DATASETS AND IMPLEMENTATION DETAILS
B.1 MORE DETAILS ON THE DATASETS AND EXPERIMENTAL SETUP
HMDB51 and UCF101 The dataset pre-processing and data augmentation are the same as in the ResNet ImageNet experiments (He et al., 2016). All videos are resized to 224 × 224 resolution and fed into a ResNet-50 pretrained on ImageNet. The last convolutional layer feature map size is 2048 × 7 × 7. The experimental setup for the Moments in Time dataset is the same as for HMDB51 and UCF101, except for the time sequence length and image resolution.

Moments in Time The videos of the Moments in Time dataset (Monfort et al., 2018) are only 3 seconds long, much shorter than those of HMDB51 and UCF101. We extract RGB frames from the raw videos at 5 fps; therefore, the sequence length is T = 15.
Following the practice in (Monfort et al., 2018) of making all videos a uniform resolution, we resize the RGB frames to 340 × 256 pixels. When extracting features, we use the ResNet-50 model pretrained on ImageNet with images resized to 256 × 256 pixels. The data augmentation is the same as in the ResNet ImageNet experiments (He et al., 2016). The feature map size of the last convolutional layer is 2048 × 8 × 8.

THUMOS14 The action detection task of THUMOS'14 (Jiang et al., 2014) consists of 20 classes of sports activities and contains 2765 trimmed videos for training, with 200 and 213 untrimmed videos for validation and testing respectively. Following standard practice (Yeung et al., 2016; Zhao et al., 2017), we use the validation set for training and evaluate on the test set. Also following standard practice (Xu et al., 2017), to avoid training ambiguity we remove videos with multiple labels. We extract RGB frames from the raw videos at 10 fps. The last convolutional layer feature map size is 2048 × 7 × 7.

B.2 SPATIAL ATTENTION NETWORK ARCHITECTURE
The detailed architecture of the spatial attention network described in Section 3.2 is listed in Table 5.

B.3 MORE IMPLEMENTATION DETAILS
All experiments are run on machines with a single Nvidia GeForce GTX 1080Ti GPU. The networks are implemented using the PyTorch library, and our code will be made publicly available with the paper.

C MORE RESULTS
C.1 MORE SPATIAL-TEMPORAL ATTENTION RESULTS
Fig. 7 shows more results on spatial-temporal attention.

C.2 MORE SPATIAL LOCALIZATION RESULTS
Fig. 8 shows more spatial localization results. Fig. 9 shows some failure cases.

C.3 MORE ACTION RECOGNITION RESULTS
Table 6 shows results of our spatial-temporal attention model with different base networks. Our spatial-temporal attention mechanism is an easy plug-in module that can be built on different network architectures and boost performance.

C.4 MORE TEMPORAL LOCALIZATION RESULTS
Fig. 10 shows more results on temporal localization with our temporal attention.
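As a quick sanity check of the Appendix A regularizer (Eq. 11), the following hypothetical snippet verifies that the penalty vanishes on a log-concave (hence unimodal) attention profile and is positive on a bimodal one; the example sequences are invented for illustration.

```python
import numpy as np

def logconcavity_penalty(a):
    """Eq. (11): sum of max(0, a_{i-1} * a_{i+1} - a_i^2) over interior indices."""
    a = np.asarray(a, dtype=float)
    return np.maximum(0.0, a[:-2] * a[2:] - a[1:-1] ** 2).sum()

unimodal = np.array([0.10, 0.20, 0.40, 0.20, 0.10, 0.05])  # log-concave profile
bimodal  = np.array([0.30, 0.05, 0.10, 0.05, 0.30, 0.20])  # two peaks

print(logconcavity_penalty(unimodal))  # 0.0: no violation, so no penalty
print(logconcavity_penalty(bimodal))   # > 0: the second peak is penalized
```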
1. What is the novelty of the spatio-temporal attention mechanism proposed in the paper compared to previous works?
2. How does the importance mask in Equation 1 work, and what is its purpose?
3. Can you provide more details about the definition and specifics of \phi(H) and \phi(X)?
4. What is L_{contrast} in Equation 9, and how does it regularize the mask?
5. Why is L_{unimodal} necessary, and what does it encourage in the temporal attention weights?
6. How does the proposed method differ from other spatio-temporal attention models, such as the one in Song et al. (2017)?
7. Are there any limitations or potential drawbacks of the proposed approach that could be discussed?
8. Could you provide additional examples or cases where the model's ability to handle weakly supervised action localization could be useful?
Review
# 1. Summary
This paper presents a novel spatio-temporal attention mechanism. The spatial attention is decoupled from the temporal attention and acts on each frame independently, while the temporal attention is applied on top of it in the temporal domain. The main contribution of the paper is the introduction of regularizers that improve the performance and interpretability of the model.

Strengths:
* Quality of the paper, although some points need to be clarified and expanded a bit more (see #2)
* Nice diversity of experiments, datasets and tasks that the method is tested on (see #4)

Weaknesses:
* The paper does not present substantial novelty compared to previous work (see #3)

# 2. Clarity and Motivation
The paper is in general clear and well motivated; however, there are a few points that need to be improved:
* How is the importance mask (Eq. 1) defined? The authors say "we simply use three convolutional layers to learn the importance mask"; however, the convolutional output should somehow be processed to produce the importance map, in order to match the size of X_i. The details of this network are missing, making it hard to reproduce the model.
* The authors introduce \phi(H) and \phi(X), which are feedforward networks, but their definition and specifics are not mentioned in the paper.
* It is not clear how Eq. 9 performs regularization of the mask. Can the authors give an intuition about the definition of L_{contrast}? What does it encourage? In which cases might it be useful?
* Why does L_{unimodal} need to encourage the temporal attention weights to be unimodal? It seems that the assumption holds because of the nature of the dataset, i.e., the video clips contain only a single action with some "background" frames at the beginning and the end. This is not valid in general. Can the authors discuss this, maybe with an example?

# 3. Novelty
The main concern with the proposal in this paper is its novelty. Temporal attention pooling has been explored in other papers; to cite a popular one among others:
* Long, Xiang, et al. "Attention clusters: Purely attention based local feature integration for video classification." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
* Other papers from the YouTube-8M workshops explore the same ideas: https://research.google.com/youtube8m/workshop2017/

Sec. 2.2 should be expanded by including such papers and discussing how the presented temporal attention differs from them. Moreover, spatio-temporal attention has been previously explored. For example, the following paper also decouples the spatial and temporal components as in the proposal:
* Song, Sijie, et al. "An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data." AAAI. Vol. 1. No. 2. 2017.

This is just an example, but there are other papers that model the spatio-temporal extent of videos without attention for action recognition. The authors should expand Sec. 2 by including such relevant literature.

# 4. Experimentation
The experiments are carried out on the video action recognition task on three publicly available datasets, including HMDB51, UCF101 and Moments in Time. The authors show a nice ablation study by removing the main components of the proposed method and show nice improvements with respect to some baselines (Table 1). Although the results are not too close to the state of the art for video action recognition on HMDB51 and UCF101, the authors do show nice accuracy on Moments in Time (Table 2).
Moreover, the authors show that the model can be useful for the more challenging task of weakly supervised action localization (UCF101-24, THUMOS). Specifically, spatial attention is used to localize the action in each frame by thresholding, showing competitive results (Table 3), although some more recent references are missing; see the following paper for example:
* G. Singh, S. Saha, M. Sapienza, P. H. S. Torr and F. Cuzzolin. "Online Real-time Multiple Spatiotemporal Action Localisation and Prediction." ICCV, 2017.

The authors then also test on temporal action localization (Table 4). In general, the paper does not show state-of-the-art results; however, the diversity of experiments, datasets and tasks that are presented makes it pretty solid and interesting.
1. What is the main contribution of the paper on activity recognition? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and relevance to the current state of research in the field? 3. How does the reviewer assess the related works section and the comparison with other methods in the paper? 4. What are the limitations of the unimodality prior and how might alternative priors be used for improvement? 5. What insights can be gained from the ablation study and comparisons with other loss functions? 6. How does the reviewer evaluate the significance of the results and the comparison with state-of-the-art methods in activity recognition?
Review
A method for activity recognition in videos is presented, which uses spatial soft attention combined with temporal soft attention. In a nutshell, a pixelwise mask is output and elementwise combined with feature maps for spatial attention, and temporal attention is a distribution over frames. The method is tested on several datasets. My biggest concern with the paper is novelty, which is rather low. Attention models are one of the most highly impactful discoveries in deep learning, and they have been widely and extensively studied in computer vision, including activity recognition. Spatial and temporal attention mechanisms are now widely used by the community. I am not sure I see the exact novelty of the proposed method; it seems to be very classic: soft attention over feature maps and frames is not new. Using attention distributions for localization has also been shown in the past. This also shows in the related works section, which contains only 3 references for spatial attention and only 2 references for temporal attention, out of a vast body of known work. The unimodality prior (implemented as a log-concave prior) is interesting, but unimodality is a very strong assumption. While it could be argued that spurious attention should be avoided, the case for unimodality is much less clear. For this reason, the prior should be compared with even simpler priors, like total variation over time (similar to what has been done over space). The ablation study in the experimental section shows that the different mechanisms only marginally contribute to the performance of the method: +0.7 points on HMDB51, slightly more on UCF101. Similarly, the different loss functions only very marginally contribute to the performance. The method is only compared to Sharma 2015 on these datasets, which is starting to be dated and is no longer state of the art. Activity recognition has recently benefited greatly from optimized convolutional backbones, like I3D and its variants. The LSTM equations at the end of the page are unnecessary because they are widely known.
ICLR
Title Fast Adaptation in Generative Models with Generative Matching Networks
Abstract Despite recent advances, the remaining bottlenecks in deep generative models are the necessity of extensive training and difficulty generalizing from a small number of training examples. Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far, this idea has been explored only under certain limitations, such as restricting the input data to be a single object or multiple objects representing the same concept. In this work we develop a new class of deep generative model called generative matching networks, inspired by the recently proposed matching networks for one-shot learning in discriminative tasks and by ideas from meta-learning. By conditioning on an additional input dataset, generative matching networks may instantly learn new concepts that were not available during training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that generative matching networks can significantly improve predictive performance on the fly as more additional data is available to the model, and can also adapt the latent space, which is beneficial in the context of feature extraction.
1 INTRODUCTION Deep generative models are currently one of the most promising directions in generative modelling. In this class of models the generative process is defined by a composition of conditional distributions modelled using deep neural networks which form a hierarchy of latent and observed variables. This approach makes it possible to build models with complex, non-linear dependencies between variables and to efficiently learn the variability across training examples. Such models are trained by stochastic gradient methods, which can handle large datasets and a wide variety of model architectures but also present certain limitations. The training process usually consists of small, incremental updates to the networks’ parameters and requires many passes over the training data. Notably, once a model is trained it cannot be adapted to newly available data without complete re-training to avoid catastrophic interference (McCloskey & Cohen, 1989; Ratcliff, 1990). There is also a risk of overfitting for concepts that are not represented by enough training examples, caused by the high capacity of the models. Hence, most deep generative models are not well-suited for rapid learning in the one-shot scenario, which is often encountered in real-world applications where data acquisition is expensive or fast adaptation to new data is required. A potential solution to these problems is explicit learning of adaptation mechanisms complementing the shared generative process. In the probabilistic modelling framework, adaptation may be expressed as conditioning the model on additional input examples serving as an inductive bias. Notable steps in this direction have been made by Rezende et al. (2016), whose model was able to condition on a single object to produce new examples of the concept it represents. Later, Edwards & Storkey (2016) proposed a model that maintained a global latent variable capturing statistics about multiple input objects, which was used to condition the generative distribution.
It allowed the model to learn quickly, but due to the particular architecture used, the model was not well-suited to datasets consisting of several different concepts. In this work we present Generative Matching Networks, a new family of conditional generative models capable of instant adaptation to new concepts that were not available at training time but share the structure of the underlying generative process with the training examples. By conditioning on additional inputs, Generative Matching Networks improve their predictive performance and the quality of generated samples, and also adapt their latent space, which may be useful for unsupervised feature extraction. Importantly, no explicit limitations are imposed on the conditioning data, such as the number of objects or the number of different concepts, which expands the applicability of one-shot generative modelling and distinguishes our work from existing approaches. Our model is inspired by the attentional mechanism implemented in Matching Networks (Vinyals et al., 2016), previously proposed for discriminative tasks, and by recent advances in meta-learning (Santoro et al., 2016). Our approach to adaptation is an extension of these ideas to generative modelling, and it may be re-used in a variety of different models, as it is not restricted to the particular architecture used in this paper. The source code for generative matching networks is available at http://github.com/sbos/gmn. This paper is organized as follows. First, in section 2 we revisit the necessary background on the variational approach to training generative models and mention related work on conditional generative models. Then, in section 3 we describe the proposed generative model, its recognition counterpart and the training protocol. Section 4 contains an experimental evaluation of the proposed model as both a generative model and an unsupervised feature extractor in small-shot learning settings. We conclude with a discussion of the results in section 5.
2 BACKGROUND We consider the problem of learning a probabilistic generative model which can be expressed as a probability distribution p(x|θ) over objects of interest x, parametrized by θ. A major class of generative models also introduces latent variables z that are used to explain or generate an object x, such that p(x|θ) = ∫ p(z|θ)p(x|z,θ)dz, and which are assumed to be non-observable. Currently, the common practice is to restrict the conditional distributions p(z|θ) and p(x|z,θ) to tractable distribution families and use deep neural networks for regressing their parameters. The expressive power of deep non-linear generative models comes at a price, since the marginal distribution p(x|θ) can neither be computed analytically nor directly optimized in a statistically efficient way. Fortunately, intractable maximum likelihood training can be avoided in practice by resorting to adversarial training (Gutmann & Hyvärinen, 2012; Goodfellow et al., 2014) or the variational inference framework (Kingma & Welling, 2013; Rezende et al., 2014), which we consider further.
2.1 TRAINING GENERATIVE MODELS WITH VARIATIONAL INFERENCE Recent developments in variational inference alleviate problems with maximizing the intractable marginal likelihood log p(x|θ) by approximating it with a lower bound (Jordan et al., 1999): $\log p(x|\theta) \ge \mathcal{L}(\theta,\phi) = \mathbb{E}_q\left[\log p(x, z|\theta) - \log q(z|x,\phi)\right] = \log p(x|\theta) - \mathrm{KL}(q \,\|\, p(\cdot|x,\theta)).$ (1)
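For concreteness, a hedged single-sample estimator of the bound in Eq. (1) might look as follows in PyTorch, assuming a diagonal Gaussian recognition model, a standard normal prior and a Bernoulli likelihood over binary pixels; encoder and decoder are placeholder callables, not part of the paper's code.

```python
import torch
import torch.nn.functional as F

def elbo(x, encoder, decoder):
    """Single-sample Monte Carlo estimate of L(theta, phi) from Eq. (1).

    Assumes encoder(x) -> (mu, log_var) parametrizing diagonal Gaussian
    q(z|x, phi), and decoder(z) -> Bernoulli logits over binary pixels.
    """
    mu, log_var = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterization
    # log p(x|z, theta): Bernoulli log-likelihood of the binary pixels
    log_px_z = -F.binary_cross_entropy_with_logits(
        decoder(z), x, reduction="none").flatten(1).sum(1)
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians
    kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).flatten(1).sum(1)
    return (log_px_z - kl).mean()   # maximize w.r.t. theta and phi
```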
Tightness of the bound is controlled by the recognition model q(z|x,φ), which aims to minimize the Kullback-Leibler divergence from the true posterior p(z|x,θ). Similarly to the generative model, the recognition model may also be implemented with the use of deep neural networks or other parameter regression, which is known as amortized inference (Gershman & Goodman, 2014). Amortized inference makes it possible to use a single recognition model for many training examples. Thus, it is convenient to perform training of the generative model p(x|θ) by stochastic gradient optimization of the variational lower bounds (1) corresponding to independent observations $\{x_i\}_{i=1}^N$: $\sum_{i=1}^{N} \log p(x_i|\theta) \ge \sum_{i=1}^{N} \mathbb{E}_q\left[\log p(x_i, z_i|\theta) - \log q(z_i|x_i,\phi)\right] \to \max_{\theta,\phi}.$ The clear advantage of this approach is its scalability. Every stochastic update to the parameters, computed from only a small portion of the training examples, has an immediate effect for the whole dataset. However, while a single parameter update may be relatively fast, a large number of them is required to significantly improve the generative or inferential performance of the model. Hence, gradient training of generative models usually results in an extensive computational process which prevents rapid incremental learning. In the next section we discuss potential solutions to this problem that allow the fast learning ability to be implemented in generative models.
2.2 ADAPTATION IN GENERATIVE MODELS In the probabilistic modelling framework, the natural way of incorporating knowledge about newly available data is conditioning. One may design a model that, being conditioned on the additional input data X = x1, x2, . . . , xT, represents a new generative distribution p(x|X,θ). An implementation of this idea can be found in the model by Rezende et al. (2016). Besides many other attractive novelties such as sophisticated attention and feedback components, the model was able to produce new examples of a concept that was missing at training time but had similarities in the underlying generative process with the other training examples. The model supported explicit conditioning on a single observation x′ representing the new concept to construct a new generative distribution of the form p(x|x′,θ). Explicit conditioning, where adaptation is performed by the model itself and has to be learned, is not the only way to propagate knowledge about new data. Another solution, often encountered in Bayesian models, is to maintain a global latent variable α encoding information about the whole available dataset, such that the individual observations are conditionally independent given its value. The model then has the following form: $p(X|\theta) = \int p(\alpha|\theta) \prod_{t=1}^{T} p(x_t|\alpha,\theta)\, d\alpha.$ (2) The principal existence of such a global variable may be justified by de Finetti’s theorem (Diaconis & Freedman, 1980) under the exchangeability assumption. In the model (2), the conditional generative distribution p(x|X,θ) is then defined implicitly via the posterior over the global variable: $p(x|X,\theta) = \int p(x|\alpha,\theta)\, p(\alpha|X,\theta)\, d\alpha.$ (3) Once there is an efficient inference procedure for the global variable α, fast adaptation of the generative model can be implemented straightforwardly. There are several relevant examples of generative models with global latent variables used for model adaptation and one-shot learning. Salakhutdinov et al. (2013) combined a deep Boltzmann machine (DBM) with a nested Dirichlet process (nDP) in a Hierarchical-Deep (HD) model.
While being a compelling demonstration of important ideas from Bayesian nonparametrics and deep learning, the HD model required an extensive Markov chain Monte Carlo inference procedure used both for training and adaptation. Thus, while the Bayesian learning approach could prevent overfitting, the fast learning ability still presents a challenge for sampling-based inference. Later, Lake et al. (2015) proposed the Bayesian program learning (BPL) approach for building a generative model of handwritten characters. The model was defined as a probabilistic program containing a fine-grained specification of prior knowledge of the task, such as the generation of strokes and their composition into characters mimicking human drawing strategies. The authors used extensive posterior inference as the training procedure and the conditioning mechanism (3) for generating new examples. The model was shown to learn efficiently from a small number of training examples, but similarly to the HD model, the sophisticated and computationally expensive inference procedure makes fast adaptation in BPL generally hard to achieve. The recently proposed neural statistician model (Edwards & Storkey, 2016) is an example of a deep generative model with a global latent variable (2). The model was trained by optimizing a variational lower bound following the approach described in section 2.1, but with an additional recognition model approximating the posterior distribution over the global latent variable. The authors designed the recognition model to be computationally efficient and to require only a single pass over the data, which consisted of extracting special features from the examples, applying a pooling operation (e.g. averaging) to them, and passing the result to another network providing the parameters of the variational approximation. This simple architecture allowed for fast learning and guaranteed invariance to both data permutations and the size of the conditioning dataset. However, experimentally the fast learning ability of the model was evaluated only in the setting where all of the training examples represented the same single concept. We argue that in order to capture more information about the conditioning data, such as the number of different concepts, a more sophisticated aggregation procedure must be employed. Moreover, a fixed parametric description is too restrictive for an accurate representation of datasets of varying size. This motivates us to combine the best of two worlds: nonparametric representation of data and fast inference with neural recognition models. We proceed with a description of the proposed model.
3 GENERATIVE MATCHING NETWORKS Generative matching networks aim to model conditional generative distributions of the form p(x|X,θ). Similarly to other deep generative models, we introduce a local latent variable z. Thus the full joint distribution of our model can be expressed as: p(x, z|X,θ) = p(z|X,θ)p(x|z,X,θ). (4) We also maintain a recognition model approximating the posterior over the latent variable z: q(z|x,X,φ) ≈ p(z|x,X,θ). In order to design a fast adaptation mechanism, we have to make certain assumptions about the relationship between the training data and the new data used to condition the model. Thus we assume the homogeneity of the generative processes for the training and conditioning data, up to some parametrization. One may think of this parametrization as specifying the weights of a neural network defining a generative model.
The generative process is assumed to have an approximately linear dependence on the parameters, such that interpolation between parameters corresponding to different examples of the same concept can serve as good parameters for generating other examples. A similar assumption is used e.g. in the neural statistician model (Edwards & Storkey, 2016). However, even if a single concept can be well embedded into a fixed parameter space, this does not imply that a diverse set of concepts will fit into the same parametrization. Hence we express the dependency on the conditioning data in a different way. Instead of embedding the whole conditioning dataset, we use a special matching procedure that extracts relevant observations from X and interpolates between their descriptions, allowing the model to generate and recognize similar observations.
3.1 BASIC MODEL In the basic model, the prior over latent variables p(z) is independent of the conditioning data X, e.g. a standard normal distribution. In order to generate a new object, a sample from the prior z and the conditioning objects X = x1, x2, . . . , xT are mapped into the matching space Φ, where they are compared using a similarity function sim(·, ·) to form an attention kernel a(z, x). After that, the conditioning objects are interpolated in the prototype space Ψ, weighted according to the attention kernel. The resulting interpolation is then used to parametrize the generative process corresponding to the sampled value of the latent variable. Formally, the matching procedure is described by the following equations: $r = \sum_{t=1}^{T} a(z, x_t)\, \psi_L(x_t), \qquad a(z, x_t) = \frac{\exp(\mathrm{sim}(f_L(z), g_L(x_t)))}{\sum_{t'=1}^{T} \exp(\mathrm{sim}(f_L(z), g_L(x_{t'})))}.$ (5) After the vector r is computed, it is used as an input to a decoder, e.g. a deconvolutional network. Functions fL and gL are used to map latent variables and conditioning objects, correspondingly, into the matching space Φ. Since Φ is supposed to be a feature space that is good for discriminating between objects, gL can be implemented as a feature extractor suitable for the domain of observations, a convolutional network in our case. We found it sufficient to implement the function fL as a simple affine transformation followed by a non-linearity, because the latent variable itself is assumed to be an abstract object description. We also used a simple dot product as the similarity function between these vectors. Function ψL can also be considered a feature extractor, although since the features useful for specifying the generative process are not necessarily good for discrimination, it makes sense to represent ψL and gL differently. However, in our implementation ψL was implemented as a convolutional network sharing most of its parameters with gL to keep the number of trainable parameters small. We have described the basic matching procedure using the example of the conditional likelihood p(x|z,X,θ). Although the procedure (5) is invoked in several parts of the model, each part may operate with its own implementation of the functions; hence the subscript ·L used for the functions f, g and ψ denotes the likelihood part, and below we use ·R to denote the recognition part. The recognition model q(z|X,x) uses the matching procedure (5), with the difference that the conditioning objects are matched not with a value of the latent variable, but rather with an observation x. The feature extractor fR in this case can share most of its parameters with gR, and in our implementation these functions were identical for matching in the recognition model, i.e. gR = fR.
Moreover, since gL is also used to project observations into the space Φ, we further re-use already defined functionality by setting gR = gL. We also shared the prototype functions ψ across all parts of our model, although this is not technically required. After the matching, the interpolated prototype vector r is used to compute the parameters of the approximate posterior, which in our case was a normal distribution with a diagonal covariance matrix, i.e. q(z|X,x,φ) = N(z|µ(r), Σ(r)). A major difference between generative matching networks and the originally proposed discriminative matching networks (Vinyals et al., 2016) is that, since no label information is available to the model, the interpolation in equation (5) is performed not in the label space but rather in the prototype space, which itself is defined by the model and is learnt during training. One can note that the described conditional model is not applicable in a situation where no conditioning objects are available. A possible solution to this problem involves the implicit addition of a pseudo-input to the set of conditioning objects X. A pseudo-input is not an actual observation, but rather just the corresponding outputs of the functions f, g and ψ, which are assumed to be additional trainable parameters. A stochastic computational graph describing the basic model with a pseudo-input can be found in figure 1. Further, by default we assume the presence of a single pseudo-input in the model and denote models without a pseudo-input as conditional.
3.2 EXTENSIONS Although the basic model is capable of instant adaptation to the conditioning dataset X, it admits a number of extensions that can seriously improve its performance. The disadvantage of the basic matching procedure (5) is that the conditioning observations X are embedded into the space Φ independently of each other. Similarly to discriminative matching networks, we address this problem by computing full contextual embeddings (FCE) (Vinyals et al., 2015). In order to obtain a joint embedding of the conditioning data, we allow K attentional passes over X of the form (5), guided by a recurrent controller R which accumulates global knowledge about the conditioning data in its hidden state h. The hidden state is passed to the feature extractors f and g to obtain context-dependent embeddings. We refer to this process as the full matching procedure, which modifies equation (5) as: $r_k = \sum_{t=1}^{T} a(z, x_t)\, \psi(x_t), \qquad a(z, x_t) = \frac{\exp(\mathrm{sim}(f(z, h_k), g(x_t, h_k)))}{\sum_{t'=1}^{T} \exp(\mathrm{sim}(f(z, h_k), g(x_{t'}, h_k)))}, \qquad h_{k+1} = R(h_k, r_k).$ (6) The output of the full matching procedure is thus the interpolated prototype vector from the last iteration, rK, and the last hidden state, hK+1. Besides context-dependent embedding of the conditioning data, the full matching procedure makes it possible to implement a data-dependent prior over latent variables, p(z|X). In this case, no query point such as a latent variable z or an observation x is used to match with the conditioning data, and only the hidden state of the controller h is passed to the functions f and g. The output of the procedure is then used to compute the parameters of the prior, i.e. means and standard deviations in our case. As we discuss in the experiments section, we found these extensions so important that in the following we consider only the model with the full matching procedure described by equation (6) and the data-dependent prior. Please refer to the appendix and the source code for architectural details of our implementation.
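As an illustration, here is a minimal sketch of the matching procedure of Eqs. (5)-(6), assuming precomputed embeddings and dot-product similarity; all names are ours and the controller is left abstract, so this is a reading of the equations rather than the released implementation.

```python
import torch

def matching_step(query, cond_match, cond_proto):
    """One attentional pass of the matching procedure, Eq. (5)/(6).

    query:      (batch, d)  f(z, h_k), the query embedded in matching space Phi
    cond_match: (T, d)      g(x_t, h_k) for each conditioning object
    cond_proto: (T, p)      psi(x_t), prototype-space features
    Returns r:  (batch, p)  attention-weighted prototype interpolation.
    """
    attn = torch.softmax(query @ cond_match.t(), dim=1)  # a(z, x_t), dot-product sim
    return attn @ cond_proto                             # r = sum_t a(z, x_t) psi(x_t)

def full_matching(query_fn, embed_fn, cond_proto, controller, h0, K):
    """K passes guided by a recurrent controller R, as in Eq. (6); assumes K >= 1."""
    h = h0
    for _ in range(K):
        r = matching_step(query_fn(h), embed_fn(h), cond_proto)
        h = controller(h, r)     # h_{k+1} = R(h_k, r_k)
    return r, h                  # (r_K, h_{K+1})
```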
3.3 TRAINING Training of our model consists of maximizing the marginal likelihood of a dataset X, which can be expressed as: $p(X|\theta) = \prod_{t=1}^{T} p(x_t|X_{<t},\theta), \qquad X_{<t} = \{x_s\}_{s=1}^{t-1}.$ (7) Ideally we would like to use all of the available training data as X, but due to computational limitations we instead use a training strategy rooted in curriculum learning (Bengio et al., 2009) and meta-learning (Thrun, 1998; Vilalta & Drissi, 2002; Hochreiter et al., 2001), which was recently applied successfully to one-shot discriminative learning (Santoro et al., 2016). In particular, we define a task-generating distribution pd(X) which in our case samples datasets X of size T from the training examples. Then we train our model to explain all of the sampled datasets well simultaneously: $\mathbb{E}_{p_d(X)}\left[p(X|\theta)\right] \to \max_\theta.$ (8) Obviously, the structure of the task-generating distribution has a large impact on training, and using an arbitrary distribution is unlikely to lead to good results. Hence, we assume that at training time we have access to label information and can distinguish different concepts or classes. We thus constrain pd(X) to generate datasets consisting of examples that represent up to C randomly selected classes, so that even on short datasets the model has a clear incentive to re-use the conditioning data. This may be considered a form of weak supervision, but we want to emphasize that one does not need the label information at test time unless the model is deliberately used for classification, which is also possible. Since the marginal likelihood (7), as well as the conditional marginal likelihoods, is intractable, we instead use the variational lower bound (see section 2.1) as a proxy to p(X|θ) in the objective (8): $\mathcal{L}(X,\theta,\phi) = \sum_{t=1}^{T} \mathbb{E}_{q(z_t|x_t,X_{<t},\phi)}\left[\log p(x_t, z_t|X_{<t},\theta) - \log q(z_t|x_t,X_{<t},\phi)\right].$
4 EXPERIMENTS For our experiments we use the Omniglot dataset (Lake et al., 2015), which consists of 1623 classes of handwritten characters from 50 different alphabets. The first 30 alphabets are devoted to training and the remaining 20 alphabets are left for testing. Importantly, only 20 examples of each class are available, which makes this dataset specifically useful for small-shot learning problems. Unfortunately, the literature is inconsistent in its usage of the dataset, and multiple versions of Omniglot have been used for evaluation, differing by train/test split, resolution, binarization and augmentation; see e.g. (Burda et al., 2015; Rezende et al., 2016; Santoro et al., 2016). We use the canonical split provided by Lake et al. (2015). In order to speed up training we downscaled images to 28×28 resolution, and since the result was fully binary we did not apply any further pre-processing. We also did not augment our data, in contrast to (Santoro et al., 2016; Edwards & Storkey, 2016), to make future comparisons with our results easier. Unless otherwise stated, we train models on datasets of length T = 20 with up to C = 2 different classes, as we did not observe any improvement from training with larger values of C.
4.1 NUMBER OF ATTENTION STEPS Since the full matching procedure (6) described in section 3.2 consists of multiple attention steps, it is interesting to see the effect of their number on the model's performance. We trained several models with a smaller architecture and T = 10, varying the number of attention steps allowed for the shared likelihood-and-recognition controller and the prior controller, respectively.
The models were compared using exponential moving averages of the lower bounds corresponding to different numbers of conditioning examples X<t, obtained during training. Results of the comparison can be found in figure 2. Interestingly, larger numbers of steps lead to better results; however, the lower bounds almost stop improving after the shared controller is allowed 4 steps. This behaviour was not observed with discriminative matching networks, perhaps confirming the difficulty of unsupervised learning. Another important result is that the standard Gaussian prior makes adaptation significantly harder for the model, yet still possible, which justifies the importance of adaptation not just for the likelihood model but also for the prior. One may also see that all models preferred to set higher variances for the prior, resulting in higher entropy compared to the standard normal prior. Clearly, as more examples become available, generative matching networks become more certain about the data and output less dispersed Gaussians. Based on this comparison we decided to proceed with models that have 4 steps for the shared controller and a single step for the prior controller, which is a reasonable compromise between computational cost and performance.
4.2 FAST ADAPTATION AND SMALL-SHOT GENERATION In this section we compare generative matching networks with a set of baselines by expected conditional likelihoods $\mathbb{E}_{p_d(X)}\, p(x_t|X_{<t})$. The conditional likelihoods were estimated using importance sampling with 1000 samples from the recognition model used as a proposal. As mentioned in section 3.1, it is possible to add a pseudo-input to the model to make it applicable to cases where no conditioning data is available. In this comparison, by default we assume that a single pseudo-input was added to the model; otherwise we denote a model with no pseudo-input as conditional. When training and evaluating conditional models we ensure that the first C objects in a dataset belong to different classes, so that they in principle contain enough information to explain the rest of the dataset. We found it hard to properly compute conditional likelihoods for the neural statistician model (3) and hence had to exclude this model from the comparison; please see the appendix for details. Instead, we consider a simple generative matching network, denoted as avg, in which the matching procedure is replaced with prototype averaging, which makes the adaptation mechanism similar to the one used in the neural statistician. We also omitted sequential generative models (Rezende et al., 2016) from the comparison, as they were reported to overfit on the canonical train/test split of Omniglot. Another baseline we use is a standard variational autoencoder which has the same architecture for the generative and recognition models as the full generative matching networks. Table 1 contains the results of the evaluation on the test alphabets from Omniglot. Ctrain and Ctest denote the maximum number of classes in the task-generating distributions pd(·) used for training and evaluation, respectively. As one could expect, larger values of Ctest make adaptation harder, since on average fewer examples of the same class are available to the model. Still, generative matching networks are capable of working in the low-data regime even when the testing setting is harder than the one used for training, i.e. Ctest > Ctrain.
Unsurprisingly, adaptation by averaging over prototype features performed reasonably well for simple datasets constructed from a single class, although significantly worse than the proposed matching procedure. On more difficult datasets with mixed examples of two different classes (Ctest = 2), averaging was ineffective at expressing the dependency on the conditioning data, which justifies our argument on the necessity of nonparametric representations. In order to visually assess the fast adaptation ability of generative matching networks, we also provide conditionally generated samples in figure 3. Interestingly, the conditional version of our model, which does not use a pseudo-input at either training or testing time, generated samples slightly more similar to the conditioning data while sacrificing predictive performance. Therefore, the presence or absence of the pseudo-input should depend on the target application of the model, i.e. density estimation or producing new examples.
5 CONCLUSION In this paper we presented a new class of conditional deep generative models called generative matching networks. These models are capable of fast adaptation to the conditioning dataset by adjusting both the latent space and the predictive density, while making very few assumptions about the data. The nonparametric matching enabling these features can be seen as a generalization of the original matching procedure, since it allows a model to define the label space itself, extending the applicability of matching networks to unsupervised and perhaps semi-supervised settings. We believe that these ideas can evolve further and help to implement more data-efficient models in other domains such as reinforcement learning, where data acquisition is especially hard.
ACKNOWLEDGMENTS We would like to thank Michael Figurnov and Timothy Lillicrap for useful discussions. Dmitry P. Vetrov is supported by RFBR project No.15-31-20596 (mol-a-ved) and by Microsoft: MSU joint research center (RPD 1053945).
APPENDIX A. MODEL ARCHITECTURE
CONDITIONAL GENERATOR The conditional generator network producing parameters for p(x|z,X,θ) takes the concatenation of z and the output of the matching operation [r,h] as input, which is transformed into a 3 × 3 × 32 tensor and then passed through 3 residual blocks of transposed convolutions. Each block has the following form: h = conv1(x), y = f(conv2(h) + h) + pool(scale(x)), where f is a non-linearity, which in our architecture is always a parametric rectified linear function (He et al., 2015). The block is parametrized by the size of the filters used in convolutions conv1 and conv2, a shared number of filters F, and a stride S. • scale is another convolution with 1 × 1 filters and the shared stride S. • In all other convolutions the number of filters is the same and equals F. • conv1 and pool also have stride S. • conv2 preserves the size of the input by padding and has stride 1. The blocks used in our paper have the following parameters (W1 × H1, W2 × H2, F, S): 1. (2 × 2, 2 × 2, 32, 2) 2. (3 × 3, 3 × 3, 16, 2) 3. (4 × 4, 3 × 3, 16, 2) Then log-probabilities for binary pixels were obtained by summing the result of these convolutions along the channel dimension.
FEATURE ENCODER ψ Function ψ has an architecture symmetric to the generator network. The only difference is that the scale operation is replaced by bilinear upscaling. The residual blocks of the feature encoder have the following parameters: 1. (4 × 4, 3 × 3, 16, 2) 2. (3 × 3, 3 × 3, 16, 2) 3. (2 × 2, 2 × 2, 32, 2) The result is a tensor of 3 × 3 × 32 = 288 dimensions.
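A hedged PyTorch reading of the residual block above, in its convolutional (feature encoder) variant; the generator would use transposed convolutions instead. The stated strides of pool and scale leave some ambiguity, so this sketch aligns the skip path with adaptive average pooling; the original implementation may resolve this differently.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """One residual block parametrized by (W1 x H1, W2 x H2, F, S) as above."""

    def __init__(self, in_ch, k1, k2, filters, stride):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, filters, k1, stride=stride)      # stride S
        self.conv2 = nn.Conv2d(filters, filters, k2, padding="same")   # stride 1
        self.scale = nn.Conv2d(in_ch, filters, 1, stride=stride)       # 1x1, stride S
        self.act = nn.PReLU()                                          # parametric ReLU

    def forward(self, x):
        h = self.conv1(x)
        out = self.act(self.conv2(h) + h)              # f(conv2(h) + h)
        skip = self.scale(x)
        # Align spatial sizes of the two branches (our assumption; see lead-in).
        return out + F.adaptive_avg_pool2d(skip, out.shape[-2:])

# Feature encoder psi from the parameters above: 1x28x28 -> 32x3x3 (288 dims).
encoder = nn.Sequential(ResBlock(1, 4, 3, 16, 2),
                        ResBlock(16, 3, 3, 16, 2),
                        ResBlock(16, 2, 2, 32, 2))
print(encoder(torch.randn(1, 1, 28, 28)).shape)        # torch.Size([1, 32, 3, 3])
```

With no padding on conv1, the listed kernels and strides reproduce the stated 3 × 3 × 32 output from a 28 × 28 input.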
FUNCTIONS f AND g Each function f or g used in our model is simply an affine transformation of the feature encoder's output (interpreted as a vector) to a 200-dimensional space, followed by a parametric rectified non-linearity.
APPENDIX B. TRANSFER TO MNIST In this experiment we test the ability of generative matching networks to adapt not just to new concepts, but also to a new domain. Since we trained our models at 28×28 resolution on Omniglot, it should be possible to apply them to the MNIST dataset as well. We used the test part of MNIST, to which we applied a single random binarization. Table 2 contains the estimated predictive likelihoods for different models. The qualitative conclusions from the evaluation on Omniglot remain the same. Although the transfer to a new domain caused a significant drop in performance for all of the models, one may see that generative matching networks still demonstrate the ability to adapt to the conditioning data. At the same time, average matching does not seem to efficiently re-use the conditioning data in such a transfer task, since the relative improvements in expected conditional log-likelihood are rather small. Apparently, the model trained on one-class datasets also learned highly dataset-dependent features, as it actually performed even worse than the model with Ctrain = 2. We also provide conditional samples in figure 4. Both the visual quality of the samples and the test log-likelihoods are significantly worse compared to Omniglot, which can be caused by the visual difference of the MNIST digits from Omniglot characters. The images are bolder and less regular due to binarization. Edwards & Storkey (2016) suggest that the quality of transfer may be improved by augmentation of the training data; however, for the sake of experimental simplicity and reproducibility we refrained from any augmentation.
APPENDIX C. CLASSIFICATION Generative matching networks are useful not only as adaptive density estimators. For example, one may use a pre-trained model for classification in several ways. Given a small number of labeled examples Xc = {xc,1, xc,2, . . . , xc,N} for each class c ∈ {1, 2, . . . , C}, it is possible to use the probability p(x|Xc) as a relative score to assign class c to a new object x. Alternatively, one may use the recognition model q(z|X1, . . . , XC) to extract features describing the new object x and then use a classifier of choice, e.g. the nearest neighbour classifier. We implemented this method using cosine similarity on the mean parameters of the approximate normal posteriors. The results under different numbers of available training examples are provided in table 3. Surprisingly, the simpler model with average matching performed slightly better than the full matching model. Perhaps generative matching networks are very smooth density models and, even when conditioned on a number of same-class examples, still assign enough probability mass to discrepant observations. The same conclusion can be made by assessing the generated samples in figure 3, which may guide further research on the topic.
APPENDIX D. EVALUATION OF THE NEURAL STATISTICIAN MODEL The neural statistician model falls into the category of models with global latent variables, which we describe in section 2.2. The conditional likelihood for these models has the form: p(x|X,θ) = ∫ p(α|X,θ)p(x|α,θ)dα. This quantity is hard to compute since it involves an expectation with respect to the true posterior over the global variable α. Since this distribution is intractable, simple importance sampling cannot be used to estimate the likelihood.
Thus, we tried the following strategies. First, we used self-normalized importance sampling to directly estimate p(x|X,θ) as $\hat{p}(x|X,\theta) = \frac{\sum_{s=1}^{S} w_s\, p(x, z^{(s)}|\alpha^{(s)},\theta)}{\sum_{s=1}^{S} w_s}, \qquad w_s = \frac{p(\alpha^{(s)}, X, Z^{(s)}|\theta)}{q(\alpha^{(s)}|X,\phi)\, q(Z^{(s)}, z^{(s)}|X, x, \alpha^{(s)},\phi)},$ but observed somewhat contradictory results, such as a non-monotonic dependency of the estimate on the size of the conditioning dataset. An effective sample size diagnostic suggested that the recognition model is not well suited as a proposal for this task. Another strategy was to sequentially estimate $p(X_{<t}|\theta)$ and then use the identity $p(x_t|X_{<t},\theta) = \frac{p(x_t, X_{<t}|\theta)}{p(X_{<t}|\theta)},$ which appeared to be as unreliable as the previous strategy.
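For reference, a log-space sketch of the self-normalized importance sampling estimator above, together with the effective sample size diagnostic it mentions; the function handles are generic placeholders, not the authors' code.

```python
import torch

def snis_log_likelihood(log_joint, log_proposal, log_lik):
    """Self-normalized importance sampling estimate of log p(x|X, theta).

    All inputs are length-S tensors evaluated at proposal samples:
      log_joint:    log p(alpha^(s), X, Z^(s) | theta)
      log_proposal: log q(alpha^(s)|X, phi) + log q(Z^(s), z^(s)|X, x, alpha^(s), phi)
      log_lik:      log p(x, z^(s) | alpha^(s), theta)
    Computed in log-space for numerical stability.
    """
    log_w = log_joint - log_proposal                        # log importance weights
    return (torch.logsumexp(log_w + log_lik, dim=0)
            - torch.logsumexp(log_w, dim=0))                # log of the weighted mean

def effective_sample_size(log_w):
    """ESS = (sum_s w_s)^2 / sum_s w_s^2, the diagnostic mentioned above."""
    return (2 * torch.logsumexp(log_w, 0) - torch.logsumexp(2 * log_w, 0)).exp()
```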
1. What is the main contribution of the paper in the field of meta-learning? 2. What are the strengths and weaknesses of the proposed algorithm compared to previous works? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. What are the reviewer's concerns regarding the paper's novelty and comparison with other works? 5. Can you provide more information or explanations regarding the meta-learning framework used in the paper?
Review
This paper presents a meta-learning algorithm which learns to learn generative models from a small set of examples. It’s similar in structure to the matching networks of Vinyals et al. (2016), and is trained in a meta-learning framework where the inputs correspond to datasets. Results are shown on Omniglot in terms of log-likelihoods and in terms of generated samples. The proposed idea seems reasonable, but I’m struggling to understand various aspects of the paper. The exposition is hard to follow, partly because existing methods are described using terminology fairly different from that of the original authors. Most importantly, I can’t tell which aspects are meant to be novel, since there are only a few sentences devoted to matching networks, even though this work builds closely upon them. (I brought this up in my Reviewer Question, and the paper has not been revised to make this clearer.) I’m also confused about the meta-learning setup. One natural formulation for meta-learning of generative models would be that the inputs consist of small datasets X, and the task is to predict the distribution from which X was sampled. But this would imply a uniform weighting of data points, which is different from the proposed method. Based on 3.1, it seems like one additionally has some sort of query q, but it’s not clear what this represents. In terms of experimental validation, there aren’t any comparisons against prior work. This seems necessary, since several other methods have already been proposed which are similar in spirit.
ICLR
Title Fast Adaptation in Generative Models with Generative Matching Networks Abstract Despite recent advances, the remaining bottlenecks in deep generative models are necessity of extensive training and difficulties with generalization from small number of training examples. Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far this idea was explored only under certain limitations such as restricting the input data to be a single object or multiple objects representing the same concept. In this work we develop a new class of deep generative model called generative matching networks which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks and the ideas from meta-learning. By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that generative matching networks can significantly improve predictive performance on the fly as more additional data is available to the model and also adapt the latent space which is beneficial in the context of feature extraction. 1 INTRODUCTION Deep generative models are currently one of the most promising directions in generative modelling. In this class of models the generative process is defined by a composition of conditional distributions modelled using deep neural networks which form a hierarchy of latent and observed variables. This approach allows to build models with complex, non-linear dependencies between variables and efficiently learn the variability across training examples. Such models are trained by stochastic gradient methods which can handle large datasets and a wide variety of model architectures but also present certain limitations. The training process usually consists of small, incremental updates of networks’ parameters and requires many passes over training data. Notably, once a model is trained it cannot be adapted to newly available data without complete re-training to avoid catastrophic interference (McCloskey & Cohen, 1989; Ratcliff, 1990). There is also a risk of overfitting for concepts that are not represented by enough training examples which is caused by high capacity of the models. Hence, most of deep generative models are not well-suited for rapid learning in one-shot scenario which is often encountered in real-world applications where data acquisition is expensive or fast adaptation to new data is required. A potential solution to these problems is explicit learning of adaptation mechanisms complementing the shared generative process. In probabilistic modelling framework, adaptation may be expressed as conditioning the model on additional input examples serving as induction bias. Notable steps in this direction have been made by Rezende et al. (2016) whose model was able to condition on a single object to produce new examples of the concept it represents. Later, Edwards & Storkey (2016) proposed a model that maintained a global latent variable capturing statistics about multiple input objects which was used to condition the generative distribution. 
It allowed to implement the fast learning ability, but due to the particular model architecture used the model was not well-suited to datasets consisting of several different concepts. In this work we present Generative Matching Networks, a new family of conditional generative models capable of instant adaptation to new concepts that were not available at the training time but share the structure of underlying generative process with the training examples. By conditioning on additional inputs, Generative Matching Networks improve their predictive performance, the quality of generated samples and also adapt their latent space which may be useful for unsupervised feature extraction. Importantly, no explicit limitations on the conditioning data are imposed such as number of objects or number of different concepts which expands the applicability of one-shot generative modelling and distinguish our work from existing approaches. Our model is inspired by the attentional mechanism implemented in Matching Networks (Vinyals et al., 2016) previously proposed for discriminative tasks and the recent advances from meta-learning (Santoro et al., 2016). Our approach for adaptation is an extension of these ideas to generative modelling and it may be re-used in a variety of different models being not restricted to the particular architecture used in the paper. The source code for generative matching networks is available at http://github.com/sbos/gmn. This paper is organized as follows. First, in section 2 we revisit the necessary background in variational approach to training generative models and mention the related work in conditional generative models. Then, in section 3 we describe the proposed generative model, it’s recognition counterpart and the training protocol. Section 4 contains experimental evaluation of the proposed model as both generative model and unsupervised feature extractor in small-shot learning settings. We conclude with discussion of the results in section 5. 2 BACKGROUND We consider the problem of learning a probabilistic generative model which can be expressed as a probability distribution p(x|θ) over objects of interests x parametrized by θ. The major class of generative models introduce also latent variables z that are used to explain or generate an object x such that p(x|θ) = ∫ p(z|θ)p(x|z,θ)dz and assumed to be non-observable. Currently, the common practice is to restrict the conditional distributions p(z|θ) and p(x|z,θ) to tractable distribution families and use deep neural networks for regressing their parameters. The expressive power of deep non-linear generative models comes at a price since neither marginal distribution p(x|θ) can be computed analytically nor it can be directly optimized in a statistically efficient way. Fortunately, intractable maximum likelihood training can be avoided in practice by resorting to adversarial training (Gutmann & Hyvärinen, 2012; Goodfellow et al., 2014) or variational inference framework (Kingma & Welling, 2013; Rezende et al., 2014) which we consider further. 2.1 TRAINING GENERATIVE MODELS WITH VARIATIONAL INFERENCE Recent developments in variational inference alleviate problems with maximizing the intractable marginal likelihood log p(x|θ) by approximating it with a lower bound (Jordan et al., 1999): log p(x|θ) ≥ L(θ,φ) = Eq [log p(x, z|θ)− log q(z|x,φ)] = log p(x|θ)− KL(q||p(·|x,θ)). 
(1) Tightness of the bound is controlled by the recognition model q(z|x,φ) which aims to minimize Kullback-Leibler divergence from the true posterior p(z|x,θ). Similarly to the generative model, recognition model may also be implemented with the use of deep neural networks or other parameter regression which is known as amortized inference (Gershman & Goodman, 2014). Amortized inference allows to use a single recognition model for many training examples. Thus, it is convenient to perform training of the generative model p(x|θ) by stochastic gradient optimization of variational lower bounds (1) corresponding to independent observations {xi}Ni=1: N∑ i=1 log p(xi|θ) ≥ N∑ i=1 Eq [log p(xi, zi|θ)− log q(zi|xi,φ)]→ max θ,φ . The clear advantage of this approach is its scalability. Every stochastic update to the parameters computed from only a small portion of training examples has an immediate effect for the whole dataset. However, while a single parameter update may be relatively fast a large number of them is required to significantly improve generative or inferential performance of the model. Hence, gradient training of generative models usually results into an extensive computational process which prevents from rapid incremental learning. In the next section we discuss potential solutions to this problem that allow to implement fast learning ability in generative models. 2.2 ADAPTATION IN GENERATIVE MODELS In probabilistic modelling framework the natural way of incorporating knowledge about newly available data is conditioning. One may design a model that being conditioned on the additional input data X = x1,x2, . . . ,xT represents a new generative distribution p(x|X,θ). An implementation of this idea can be found in the model by Rezende et al. (2016). Besides many other attractive novelties such as using sophisticated attention and feedback components, the model was able to produce new examples of a concept that was missing at the training time but had similarities in the underlying generative process with the other training examples. The model supported an explicit conditioning on a single observation x′ representing the new concept to construct a new generative distribution of the form p(x|x′,θ). The explicit conditioning when adaptation is performed by the model itself and and has to be learned is not the only way to propagate knowledge about new data. Another solution which is often encountered in Bayesian models is to maintain a global latent variable α encoding information about the whole available dataset such that the individual observations are conditionally independent given it’s value. The model then would have the following form: p(X|θ) = ∫ p(α|θ) ∏T t=1 p(xt|α,θ)dα. (2) The principal existence of such a global variable may be justified by the de Finetti’s theorem (Diaconis & Freedman, 1980) under the exchangeability assumption. In the model (2), the conditional generative distribution p(x|X,θ) is then defined implicitly via posterior over the global variable: p(x|X,θ) = ∫ p(x|α,θ)p(α|X,θ)dα. (3) Once there is an efficient inference procedure for the global variable α, fast adaptation of the generative model can be implemented straightforwardly. There are several relevant examples of generative models with global latent variables used for model adaptation and one-shot learning. Salakhutdinov et al. (2013) combined deep Boltzmann machine (DBM) with nested Dirichlet process (nDP) in a Hierarchical-Deep (HD) model. 
While being a compelling demonstration of important ideas from Bayesian nonparametrics and deep learning, the HD model required an extensive Markov chain Monte Carlo inference procedure used both for training and adaptation. Thus, while Bayesian learning approach could prevent overfitting the fast learning ability still presents a challenge for sampling-based inference. Later, Lake et al. (2015) proposed Bayesian program learning (BPL) approach for building a generative model of handwritten characters. The model was defined as a probabilistic program contained fine-grained specification of prior knowledge of the task such as generation of strokes and their composition into characters mimicking human drawing strategies. Authors used an extensive posterior inference as the training procedure and the conditioning mechanism (3) for generating new examples. The model was shown to efficiently learn from small number of training examples, but similarly to the HD model, sophisticated and computationally expensive inference procedure makes fast adaptation in BPL generally hard to achieve. The recently proposed neural statistician model (Edwards & Storkey, 2016) is an example of deep generative model with a global latent variable (2). The model was trained by optimizing a variational lower bound following the approach described in section 2.1 but with an additional recognition model approximating posterior distribution over the global latent variable. Authors designed the recognition model to be computationally efficient and require only a single pass over data which consisted of extracting special features from the examples, applying to them a pooling operation (e.g. averaging) and passing the result to another network providing parameters of the variational approximation. This simple architecture allowed for the fast learning and guaranteed invariance to both data permutations and size of the conditioning dataset. However, experimentally the fast learning ability in the model was evaluated only in the setting where all of the training examples represented the same single concept. We argue that in order to capture more information about the conditioning data such as a number of different concepts a more sophisticated aggregation procedure must be employed. Moreover, a fixed parametric description is too restrictive for an accurate representation of datasets of varying size. This motivates us to combine the best of two worlds: nonparametric representation of data and fast inference with neural recognition models. We proceed with a description of the proposed model. 3 GENERATIVE MATCHING NETWORKS Generative matching networks aim to model conditional generative distributions of the form p(x|X,θ). Similarly to other deep generative models we introduce a local latent variable z. Thus the full joint distribution of our model can be expressed as: p(x, z|X,θ) = p(z|X,θ)p(x|z,X,θ). (4) We also maintain a recognition model approximating the posterior over the latent variable z: q(z|x,X,φ) ≈ p(z|x,X,θ). In order to design a fast adaptation mechanism we have to make certain assumptions about relationships between training data and the new data used to condition the model. Thus we assume the homogeneity of generative processes for training and conditioning data up to some parametrization. One may think of this parametrization as specifying weights of a neural network defining a generative model. 
The generative process is assumed to have an approximately linear dependence on the parameters such that interpolation between parameters corresponding to different examples of the same concept can serve as good parameters for generating other examples. A similar assumption is used e.g. in the neural statistician model (Edwards & Storkey, 2016). However, even if a single concept can be well embedded to a fixed parameter space, this does not imply that a diverse set of concepts will fit into the same parametrization. Hence we express the dependency on the conditioning data in a different way. Instead of embedding the whole conditioning dataset we use a special matching procedure that extracts relevant observations from X and interpolates between their descriptions allowing to generate and recognize similar observations. 3.1 BASIC MODEL In the basic model, the prior over latent variables p(z) is independent from conditioning data X, e.g. a standard normal distribution. In order to generate a new object, a sample from the prior z and conditioning objects X = x1,x2, . . . ,xT are mapped into the matching space Φ where they are compared using a similarity function sim(., .) to form an attention kernel a(z,x). After that, the conditioning objects are interpolated in the prototype space Ψ weighted according to the attention kernel. The resulting interpolation is then used to parametrize the generative process that corresponds to the sampled value of latent variable. Formally, the described matching procedure can be described by the following equations: r = T∑ t=1 a(z,xt)ψL(xt), a(z,xt) = exp(sim(fL(z), gL(xt)))∑T t′=1 exp(sim(fL(z), gL(xt′))) . (5) After the vector r is computed, it is used as an input to a decoder, e.g. a deconvolutional network. Functions fL and gL are used to map latent variables and conditioning objects, correspondingly, into the matching space Φ. Since Φ is supposed to be a feature space that is good for discriminating between objects, gL can be implemented as a feature extractor suitable for the domain of observations, a convolutional network in our case. We found it sufficient to implement the function fL as a simple affine transformation followed by a non-linearity, because the latent variable itself is assumed to be an abstract object description. We also used a simple dot product as a similarity function between these vectors. Function ψL can also be considered as a feature extractor, although since the features useful to specify the generative process are not necessarily good for discrimination, it makes sense to represent ψL and gL differently. However, in our implementation ψL was implemented as a convolutional network sharing most of the parameters with gL to keep the number of trainable parameters small. We have described the basic matching procedure on the example of the conditional likelihood p(x|z,X,θ). Although the procedure (5) is invoked in several parts of the model, each part may operate with it’s own implementation of the functions, hence the subscript ·L used for the functions f , g and ψ is for likelihood part and below we use ·R to denote the recognition part. The recognition model q(z|X,x) uses the matching procedure (5) with the difference that the conditioning objects are being matched not with a value of latent variable, but rather with an observation x. The feature extractor fR in this case can share most of the parameters with gR and in our implementation these functions were identical for matching in the recognition model, i.e. 
Moreover, since gL is also used to project observations into the space Φ, we further re-use the already defined functionality by setting gR = gL. We also shared the prototype functions ψ across all parts of our model, although this is not technically required. After the matching, the interpolated prototype vector r is used to compute the parameters of the approximate posterior, which in our case was a normal distribution with a diagonal covariance matrix, i.e. q(z|X,x,φ) = N(z|µ(r),Σ(r)).

A major difference between generative matching networks and the originally proposed discriminative matching networks (Vinyals et al., 2016) is that, since no label information is available to the model, the interpolation in equation (5) is performed not in the label space but rather in the prototype space, which itself is defined by the model and learned during training.

One can note that the described conditional model is not applicable in a situation where no conditioning objects are available. A possible solution to this problem involves the implicit addition of a pseudo-input to the set of conditioning objects X. A pseudo-input is not an actual observation, but rather just the corresponding outputs of the functions f, g and ψ, which are assumed to be additional trainable parameters. A stochastic computational graph describing the basic model with a pseudo-input can be found in figure 1. Further, by default we assume the presence of a single pseudo-input in the model and denote models without a pseudo-input as conditional.

3.2 EXTENSIONS

Although the basic model is capable of instant adaptation to the conditioning dataset X, it admits a number of extensions that can seriously improve its performance. The disadvantage of the basic matching procedure (5) is that the conditioning observations X are embedded into the space Φ independently of each other. Similarly to discriminative matching networks, we address this problem by computing full contextual embeddings (FCE) (Vinyals et al., 2015). In order to obtain a joint embedding of the conditioning data, we allow K attentional passes over X of the form (5), guided by a recurrent controller R which accumulates global knowledge about the conditioning data in its hidden state h. The hidden state is passed to the feature extractors f and g to obtain context-dependent embeddings. We refer to this process as the full matching procedure, which modifies equation (5) as

r_k = \sum_{t=1}^{T} a(z, x_t)\, \psi(x_t), \qquad a(z, x_t) = \frac{\exp(\mathrm{sim}(f(z, h_k), g(x_t, h_k)))}{\sum_{t'=1}^{T} \exp(\mathrm{sim}(f(z, h_k), g(x_{t'}, h_k)))}, \qquad h_{k+1} = R(h_k, r_k). \quad (6)

The output of the full matching procedure is thus the interpolated prototype vector from the last iteration, rK, and the last hidden state hK+1. Besides context-dependent embedding of the conditioning data, the full matching procedure makes it possible to implement a data-dependent prior over latent variables p(z|X). In this case, no query point such as a latent variable z or an observation x is matched with the conditioning data; only the hidden state of the controller h is passed to the functions f and g. The output of the procedure is then used to compute the parameters of the prior, i.e. means and standard deviations in our case. As we discuss in the experiments section, we found these extensions so important that in the following we consider only the model with full matching described by equation (6) and the data-dependent prior. Please refer to the appendix and the source code for the architectural details of our implementation.
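The full matching procedure (6) adds the recurrent controller; below is a minimal sketch under the same assumptions as the previous snippet, with the controller R passed as a callable (in practice it would be a recurrent cell such as an LSTM update). For the data-dependent prior, the same loop would run without a query, feeding only h to f and g.

```python
import numpy as np

def full_matching(z, X, f, g, psi, R, h0, K):
    """Full matching procedure (6): K attentional passes over X, with the
    controller state h making the embeddings context-dependent."""
    h, r = h0, None
    for _ in range(K):
        query = f(z, h)                                  # context-dependent query
        keys = np.stack([g(x, h) for x in X])            # context-dependent keys
        scores = keys @ query
        scores -= scores.max()                           # numerical stability
        a = np.exp(scores) / np.exp(scores).sum()        # attention kernel
        r = a @ np.stack([psi(x) for x in X])            # r_k
        h = R(h, r)                                      # h_{k+1} = R(h_k, r_k)
    return r, h   # interpolated prototype r_K and final controller state
```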
3.3 TRAINING

Training of our model consists of maximizing the marginal likelihood of a dataset X, which can be expressed as

p(X|\theta) = \prod_{t=1}^{T} p(x_t | X_{<t}, \theta), \qquad X_{<t} = \{x_s\}_{s=1}^{t-1}. \quad (7)

Ideally we would like to use the whole available training data as X, but due to computational limitations we instead use a training strategy rooted in curriculum learning (Bengio et al., 2009) and meta-learning (Thrun, 1998; Vilalta & Drissi, 2002; Hochreiter et al., 2001), which was recently successfully applied to one-shot discriminative learning (Santoro et al., 2016). In particular, we define a task-generating distribution pd(X) which in our case samples datasets X of size T from the training examples. Then we train our model to explain all of the sampled datasets well simultaneously:

\mathbb{E}_{p_d(X)} \left[ p(X|\theta) \right] \to \max_{\theta}. \quad (8)

Obviously, the structure of the task-generating distribution has a large impact on training, and using an arbitrary distribution is unlikely to lead to good results. Hence, we assume that at training time we have access to label information and can distinguish different concepts or classes. We thus constrain pd(X) to generate datasets consisting of examples that represent up to C randomly selected classes, so that even on short datasets the model has a clear incentive to re-use the conditioning data. This may be considered a form of weak supervision, but we want to emphasize that one does not need the label information at test time, unless the model is deliberately used for classification, which is also possible. Since the marginal likelihood (7) as well as the conditional marginal likelihoods are intractable, we instead use the variational lower bound (see section 2.1) as a proxy to p(X|θ) in the objective (8):

\mathcal{L}(X, \theta, \phi) = \sum_{t=1}^{T} \mathbb{E}_{q(z_t | x_t, X_{<t}, \phi)} \left[ \log p(x_t, z_t | X_{<t}, \theta) - \log q(z_t | x_t, X_{<t}, \phi) \right].
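The task-generating distribution p_d(X) is straightforward to state in code. Below is a minimal sketch under the assumption that examples_by_class maps each class label to a list of its examples; it draws a dataset of size T whose examples come from up to C randomly chosen classes, matching the constraint described above.

```python
import numpy as np

def sample_episode(examples_by_class, T=20, C=2, rng=None):
    """Sample a training dataset X ~ p_d(X) of size T covering up to C classes."""
    rng = rng or np.random.RandomState()
    labels = list(examples_by_class)
    # choose up to C distinct classes for this episode
    chosen = rng.choice(len(labels), size=min(C, len(labels)), replace=False)
    X = []
    for _ in range(T):
        c = labels[chosen[rng.randint(len(chosen))]]  # pick one of the chosen classes
        pool = examples_by_class[c]
        X.append(pool[rng.randint(len(pool))])        # pick one of its examples
    return X
```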
4 EXPERIMENTS

For our experiments we use the Omniglot dataset (Lake et al., 2015), which consists of 1623 classes of handwritten characters from 50 different alphabets. The first 30 alphabets are devoted to training and the remaining 20 alphabets are left for testing. Importantly, only 20 examples of each class are available, which makes this dataset specifically useful for small-shot learning problems. Unfortunately, the literature is inconsistent in its usage of the dataset, and multiple versions of Omniglot were used for evaluation which differ by train/test split, resolution, binarization and augmentation; see e.g. (Burda et al., 2015; Rezende et al., 2016; Santoro et al., 2016). We use the canonical split provided by Lake et al. (2015). In order to speed up training we downscaled images to 28×28 resolution, and since the result was fully binary we did not apply any further pre-processing. We also did not augment our data, in contrast to (Santoro et al., 2016; Edwards & Storkey, 2016), to make future comparisons with our results easier. Unless otherwise stated, we train models on datasets of length T = 20 and of up to C = 2 different classes, as we did not observe any improvement from training with larger values of C.

4.1 NUMBER OF ATTENTION STEPS

Since the full context matching procedure (6) described in section 3.2 consists of multiple attention steps, it is interesting to see the effect of their number on the model's performance. We trained several models with a smaller architecture and T = 10, varying the number of attention steps allowed for the shared likelihood and recognition controller and for the prior controller, respectively. The models were compared using exponential moving averages of the lower bounds corresponding to different numbers of conditioning examples X<t obtained during training. Results of the comparison can be found in figure 2. Interestingly, larger numbers of steps lead to better results; however, the lower bounds show almost no improvement once the shared controller is allowed 4 steps. This behaviour was not observed with discriminative matching networks, perhaps confirming the difficulty of unsupervised learning. Another important result is that the standard Gaussian prior makes adaptation significantly harder for the model, yet still possible, which justifies the importance of adaptation not just for the likelihood model but also for the prior. One may also see that all models preferred to set higher variances for the prior, resulting in higher entropy compared to the standard normal prior. Clearly, as more examples become available, generative matching networks become more certain about the data and output less dispersed Gaussians. Based on this comparison we decided to proceed with models that have 4 steps for the shared controller and a single step for the prior controller, which is a reasonable compromise between computational cost and performance.

4.2 FAST ADAPTATION AND SMALL-SHOT GENERATION

In this section we compare generative matching networks with a set of baselines by the expected conditional likelihoods Epd(X) p(xt|X<t). The conditional likelihoods were estimated using importance sampling with 1000 samples from the recognition model used as a proposal. As we mention in section 3.1, it is possible to add a pseudo-input to the model to make it applicable to cases when no conditioning data is available. In this comparison we assume by default that a single pseudo-input was added to the model; otherwise we denote a model with no pseudo-input as conditional. When training and evaluating conditional models, we ensure that the first C objects in a dataset belong to different classes, so that they in principle contain enough information to explain the rest of the dataset. We found it hard to properly compute conditional likelihoods for the neural statistician model (3) and hence had to exclude this model from the comparison; please see the appendix for details. Instead, we consider a simple generative matching network, denoted avg, in which the matching procedure is replaced with prototype averaging, which makes the adaptation mechanism similar to the one used in the neural statistician. We also omitted sequential generative models (Rezende et al., 2016) from the comparison, as they were reported to overfit on the canonical train/test split of Omniglot. Another baseline we use is a standard variational autoencoder which has the same architecture for the generative and recognition models as the full generative matching networks.

Table 1 contains results of the evaluation on the test alphabets from Omniglot. Ctrain and Ctest denote the maximum number of classes in the task-generating distributions pd(·) used for training and evaluation, respectively. As one could expect, larger values of Ctest make adaptation harder, since on average fewer examples of the same class are available to the model. Still, generative matching networks are capable of working in the low-data regime even when the test setting is harder than the one used for training, i.e. Ctest > Ctrain.
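The conditional likelihoods reported above are estimated with simple importance sampling, using the recognition model as the proposal. The following is a minimal sketch; sample_q, log_p_joint and log_q are assumed callables wrapping the (conditioned) recognition and generative networks.

```python
import numpy as np

def is_log_likelihood(x, sample_q, log_p_joint, log_q, S=1000):
    """Estimate log p(x | X_<t) as a log-mean-exp of importance weights
    p(x, z_s | X_<t) / q(z_s | x, X_<t), with z_s drawn from the proposal q."""
    zs = [sample_q(x) for _ in range(S)]
    log_w = np.array([log_p_joint(x, z) - log_q(z, x) for z in zs])
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).mean())    # numerically stable log-mean-exp
```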
Unsurprisingly, adaptation by averaging over prototype features performed reasonably well for simple datasets constructed of a single class, although significantly worse than the proposed matching procedure. On more difficult datasets with mixed examples of two different classes (Ctest = 2), averaging was ineffective for expressing the dependency on the conditioning data, which justifies our argument for the necessity of nonparametric representations. In order to visually assess the fast adaptation ability of generative matching networks, we also provide conditionally generated samples in figure 3. Interestingly, the conditional version of our model, which does not use a pseudo-input at either training or testing time, generated samples slightly more similar to the conditioning data while sacrificing predictive performance. Therefore, the presence or absence of the pseudo-input should depend on the target application of the model, i.e. density estimation or producing new examples.

5 CONCLUSION

In this paper we presented a new class of conditional deep generative models called generative matching networks. These models are capable of fast adaptation to the conditioning dataset by adjusting both the latent space and the predictive density while making very few assumptions about the data. The nonparametric matching enabling these features can be seen as a generalization of the original matching procedure, since it allows a model to define the label space itself, extending the applicability of matching networks to unsupervised and perhaps semi-supervised settings. We believe that these ideas can evolve further and help to implement more data-efficient models in other domains, such as reinforcement learning, where data acquisition is especially hard.

ACKNOWLEDGMENTS

We would like to thank Michael Figurnov and Timothy Lillicrap for useful discussions. Dmitry P. Vetrov is supported by RFBR project No.15-31-20596 (mol-a-ved) and by Microsoft: MSU joint research center (RPD 1053945).

APPENDIX A. MODEL ARCHITECTURE

CONDITIONAL GENERATOR

The conditional generator network producing parameters for p(x|z,X,θ) takes the concatenation of z and the output of the matching operation [r,h] as input, which is transformed to a 3 × 3 × 32 tensor and then passed through 3 residual blocks of transposed convolutions. Each block has the following form: h = conv1(x), y = f(conv2(h) + h) + pool(scale(x)), where f is a non-linearity, which in our architecture is always the parametric rectified linear function (He et al., 2015). Each block is parametrized by the sizes of the filters used in the convolutions conv1 and conv2, the shared number of filters F, and the stride S.

• scale is another convolution with 1 × 1 filters and the shared stride S.
• In all other convolutions the number of filters is the same and equals F.
• conv1 and pool also have stride S.
• conv2 preserves the size of its input by padding and has stride 1.

The blocks used in our paper have the following parameters (W1 × H1, W2 × H2, F, S):
1. (2 × 2, 2 × 2, 32, 2)
2. (3 × 3, 3 × 3, 16, 2)
3. (4 × 4, 3 × 3, 16, 2)

The log-probabilities for binary pixels were then obtained by summing the result of these convolutions along the channel dimension.

FEATURE ENCODER ψ

The function ψ has an architecture which is symmetric to the generator network. The only difference is that the scale operation is replaced by bilinear upscaling. The residual blocks of the feature encoder have the following parameters:
1. (4 × 4, 3 × 3, 16, 2)
2. (3 × 3, 3 × 3, 16, 2)
3. (2 × 2, 2 × 2, 32, 2)

The result is a tensor of 3 × 3 × 32 = 288 dimensions.
FUNCTIONS f AND g

Each function f or g used in our model is simply an affine transformation of the feature encoder's output (interpreted as a vector) to a 200-dimensional space, followed by a parametric rectified non-linearity.

APPENDIX B. TRANSFER TO MNIST

In this experiment we test the ability of generative matching networks to adapt not just to new concepts, but also to a new domain. Since we trained our models at 28×28 resolution on Omniglot, it should be possible to apply them to the MNIST dataset as well. We used the test part of MNIST, to which we applied a single random binarization. Table 2 contains the estimated predictive likelihood for different models. The qualitative results from the evaluation on Omniglot remain the same. Although the transfer to a new domain caused a significant drop in performance for all of the models, one may see that generative matching networks still demonstrate the ability to adapt to the conditioning data. At the same time, average matching does not seem to efficiently re-use the conditioning data in this transfer task, since the relative improvements in expected conditional log-likelihood are rather small. Apparently, the model trained on one-class datasets also learned highly dataset-dependent features, as it actually performed even worse than the model with Ctrain = 2. We also provide conditional samples in figure 4. Both the visual quality of the samples and the test log-likelihoods are significantly worse compared to Omniglot, which can be caused by the visual difference of the MNIST digits from Omniglot characters: the images are bolder and less regular due to binarization. Edwards & Storkey (2016) suggest that the quality of transfer may be improved by augmentation of the training data; however, for the sake of experimental simplicity and reproducibility we refrained from any augmentation.

APPENDIX C. CLASSIFICATION

Generative matching networks are useful not only as adaptive density estimators. For example, one may use a pre-trained model for classification in several ways. Given a small number of labeled examples Xc = {xc,1, xc,2, . . . , xc,N} for each class c ∈ {1, 2, . . . , C}, it is possible to use the probability p(x|Xc) as a relative score to assign class c to a new object x. Alternatively, one may use the recognition model q(z|X1, . . . ,XC) to extract features describing the new object x and then use a classifier of choice, e.g. the nearest neighbour classifier. We implemented this method using cosine similarity on the mean parameters of the approximate Normal posteriors. The results for different numbers of available training examples are provided in table 3. Surprisingly, the simpler model with average matching performed slightly better than the full matching model. Perhaps generative matching networks are very smooth density models, and even when conditioned on a number of same-class examples they still assign enough probability mass to discrepant observations. The same conclusion can be drawn by assessing the generated samples in figure 3, which may guide further research on the topic.
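The first classification strategy above can be written in a few lines. A minimal sketch, assuming labeled_sets maps each class to its conditioning set X_c and log_p_cond(x, X) returns an estimate of log p(x | X), e.g. the importance-sampling estimator sketched earlier:

```python
def classify_by_likelihood(x, labeled_sets, log_p_cond):
    """Assign x to the class c whose conditioning set X_c gives the
    highest conditional likelihood score log p(x | X_c)."""
    return max(labeled_sets, key=lambda c: log_p_cond(x, labeled_sets[c]))
```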
APPENDIX D. EVALUATION OF THE NEURAL STATISTICIAN MODEL

The neural statistician model falls into the category of models with global latent variables which we describe in section 2.2. The conditional likelihood for these models has the form p(x|X,θ) = ∫ p(α|X,θ) p(x|α,θ) dα. This quantity is hard to compute since it is an expectation with respect to the true posterior over the global variable α. Since this distribution is intractable, simple importance sampling cannot be used to estimate the likelihood. Thus, we tried the following strategies. First, we used self-normalizing importance sampling to directly estimate p(x|X,θ) as

\hat{p}(x|X,\theta) = \frac{\sum_{s=1}^{S} w_s \, p(x, z^{(s)} | \alpha^{(s)}, \theta)}{\sum_{s=1}^{S} w_s}, \qquad w_s = \frac{p(\alpha^{(s)}, X, Z^{(s)} | \theta)}{q(\alpha^{(s)} | X, \phi) \, q(Z^{(s)}, z^{(s)} | X, x, \alpha^{(s)}, \phi)},

but we observed somewhat contradictory results, such as a non-monotonic dependency of the estimate on the size of the conditioning dataset. A diagnostic of the effective sample size suggested that the recognition model is not well suited as a proposal for this task. Another strategy was to sequentially estimate p(X_{<t}|\theta) and then use the identity

p(x_t | X_{<t}, \theta) = \frac{p(x_t, X_{<t} | \theta)}{p(X_{<t} | \theta)},

which appeared to be as unreliable as the previous strategy.
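For completeness, here is a minimal sketch of the self-normalized estimator in the log domain, together with the effective-sample-size diagnostic mentioned above. The arrays log_w and log_p hold the per-sample log-weights log w_s and log p(x, z^(s) | α^(s), θ); how these are computed from the model is abstracted away.

```python
import numpy as np

def logsumexp(a):
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def snis_log_estimate(log_w, log_p):
    """Self-normalized importance sampling:
    log p_hat(x|X) = logsumexp(log_w + log_p) - logsumexp(log_w).
    Also returns the effective sample size (sum w)^2 / sum w^2, the
    diagnostic that can reveal a poorly matched proposal."""
    log_w, log_p = np.asarray(log_w), np.asarray(log_p)
    log_est = logsumexp(log_w + log_p) - logsumexp(log_w)
    w = np.exp(log_w - log_w.max())   # common rescaling cancels in the ESS
    ess = w.sum() ** 2 / np.sum(w ** 2)
    return log_est, ess
```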
1. What is the main contribution of the paper, and how does it relate to one-shot learning? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its ability to adapt to new input distributions? 3. How does the method compare to other approaches in terms of performance and efficiency? 4. What are some potential limitations or areas for improvement in the proposed approach? 5. How might the method be extended or applied to other domains or tasks?
Review
This paper proposes an interesting idea for rapidly adapting generative models in the low data regime. The idea is to use similar techniques that are used in one-shot learning, specifically ideas from matching networks. To that end, the authors propose the generative matching networks model, which is effectively a variational auto-encoder that can be conditioned on an input dataset. Given a query point, the model matches the query point to points in the conditioning set using an attention model in an embedding space (this is similar to matching networks). The results on the Omniglot dataset show that this method is successfully able to rapidly adapt to new input distributions given few examples. I think that the method is very interesting, however the major issue for me with this paper is a lack of clarity. I outline more details below, but overall I found the paper somewhat difficult to follow. There are a lot of details that I feel are scattered throughout, and I did not get a sense after reading this paper that I would be able to implement the method and replicate the results. My suggestion is to consolidate the major implementation details into a single section, and be explicit about the functional form of the different embedding functions and their variants. I was a bit disappointed to see that weak supervision in the form of labels had to be used. How does the method perform in a completely unsupervised setting? This could be an interesting baseline. There is a lack of definition of the different functions. Some basic insight into the functional forms of f, g, \phi, sim and R would be nice. Otherwise it is very unclear to me what’s going on. Section 3.2: “only state of the recurrent controller was used for matching”, my reading of this section (after several passes) is that the pseudo-input is used in the place of a regular input. Is this correct? Otherwise, this sentence/section needs more clarification. I noticed upon further reading in section 4.2 that there are two versions of the model: one in which a pseudo input is used, and one in which a pseudo input is not used (the conditional version). What is the difference in functional form between these? That is, how do the formulas for the embeddings f and g change between these settings? “since the result was fully contrastive we did not apply any further binarization” what does it mean for a result to be fully contrastive? For clarity, the figures and table refer to the number of shots, but this is never defined. I assume this is T here. This should be made consistent. Figure 2: why is the value of T only 9 in this case? What does it mean for it to be 0? It is stated earlier that T should go up to 20 (I assume #shot corresponds to T). It also looks like the results continue to improve with an increased number of steps, I would like to see the results for 5 and maybe 6 steps as well. Presumably there will come a point where you get diminishing returns. Table 1: is the VAE a fair baseline? You mention that Ctest affects Pd() in the evaluation. The fact that the VAE does not have an associated Ctest implies that the two models are being evaluated with a different metric. Can the authors clarify this? It’s important that the comparison is apples-to-apples. MNIST is much more common than Omniglot for evaluating generative models. Would it be possible to perform similar experiments on this dataset? That way it can be compared with many more models.
Further, why are the negative log-likelihood values monotonically decreasing in the number of shots? That is, is there ever a case where increasing the number of shots can hurt things? What happens at T=30? 40? As a minor grammatical issue, the paper is missing determiners in several sentences. At one point, the model is referred to as “she” instead of “it”. “On figure 3” should be changed to “in figure 3” in the experiments section.
ICLR
Title
Fast Adaptation in Generative Models with Generative Matching Networks

Abstract
Despite recent advances, the remaining bottlenecks in deep generative models are the necessity of extensive training and difficulties with generalization from a small number of training examples. Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far this idea has been explored only under certain limitations, such as restricting the input data to be a single object or multiple objects representing the same concept. In this work we develop a new class of deep generative model called generative matching networks, which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks and by ideas from meta-learning. By conditioning on an additional input dataset, generative matching networks may instantly learn new concepts that were not available during training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that generative matching networks can significantly improve predictive performance on the fly as more additional data is available to the model, and can also adapt the latent space, which is beneficial in the context of feature extraction.

1 INTRODUCTION

Deep generative models are currently one of the most promising directions in generative modelling. In this class of models the generative process is defined by a composition of conditional distributions modelled using deep neural networks, which form a hierarchy of latent and observed variables. This approach makes it possible to build models with complex, non-linear dependencies between variables and to efficiently learn the variability across training examples. Such models are trained by stochastic gradient methods, which can handle large datasets and a wide variety of model architectures but also present certain limitations. The training process usually consists of small, incremental updates of the networks' parameters and requires many passes over the training data. Notably, once a model is trained, it cannot be adapted to newly available data without complete re-training to avoid catastrophic interference (McCloskey & Cohen, 1989; Ratcliff, 1990). There is also a risk of overfitting for concepts that are not represented by enough training examples, which is caused by the high capacity of the models. Hence, most deep generative models are not well suited for rapid learning in the one-shot scenario, which is often encountered in real-world applications where data acquisition is expensive or fast adaptation to new data is required.

A potential solution to these problems is explicit learning of adaptation mechanisms complementing the shared generative process. In the probabilistic modelling framework, adaptation may be expressed as conditioning the model on additional input examples serving as an inductive bias. Notable steps in this direction have been made by Rezende et al. (2016), whose model was able to condition on a single object to produce new examples of the concept it represents. Later, Edwards & Storkey (2016) proposed a model that maintained a global latent variable capturing statistics about multiple input objects, which was used to condition the generative distribution.
It allowed the fast learning ability to be implemented, but due to the particular model architecture used, the model was not well suited to datasets consisting of several different concepts. In this work we present Generative Matching Networks, a new family of conditional generative models capable of instant adaptation to new concepts that were not available at training time but share the structure of the underlying generative process with the training examples. By conditioning on additional inputs, Generative Matching Networks improve their predictive performance and the quality of generated samples, and also adapt their latent space, which may be useful for unsupervised feature extraction. Importantly, no explicit limitations on the conditioning data are imposed, such as the number of objects or the number of different concepts, which expands the applicability of one-shot generative modelling and distinguishes our work from existing approaches. Our model is inspired by the attentional mechanism implemented in Matching Networks (Vinyals et al., 2016), previously proposed for discriminative tasks, and by recent advances in meta-learning (Santoro et al., 2016). Our approach to adaptation is an extension of these ideas to generative modelling, and it may be re-used in a variety of different models, not being restricted to the particular architecture used in this paper. The source code for generative matching networks is available at http://github.com/sbos/gmn.

This paper is organized as follows. First, in section 2 we revisit the necessary background in the variational approach to training generative models and mention related work on conditional generative models. Then, in section 3 we describe the proposed generative model, its recognition counterpart and the training protocol. Section 4 contains an experimental evaluation of the proposed model as both a generative model and an unsupervised feature extractor in small-shot learning settings. We conclude with a discussion of the results in section 5.

2 BACKGROUND

We consider the problem of learning a probabilistic generative model which can be expressed as a probability distribution p(x|θ) over objects of interest x, parametrized by θ. A major class of generative models also introduces latent variables z that are used to explain or generate an object x, such that p(x|θ) = ∫ p(z|θ) p(x|z,θ) dz, and that are assumed to be non-observable. Currently, the common practice is to restrict the conditional distributions p(z|θ) and p(x|z,θ) to tractable distribution families and to use deep neural networks for regressing their parameters. The expressive power of deep non-linear generative models comes at a price, since the marginal distribution p(x|θ) can neither be computed analytically nor directly optimized in a statistically efficient way. Fortunately, intractable maximum likelihood training can be avoided in practice by resorting to adversarial training (Gutmann & Hyvärinen, 2012; Goodfellow et al., 2014) or to the variational inference framework (Kingma & Welling, 2013; Rezende et al., 2014), which we consider further.

2.1 TRAINING GENERATIVE MODELS WITH VARIATIONAL INFERENCE

Recent developments in variational inference alleviate problems with maximizing the intractable marginal likelihood log p(x|θ) by approximating it with a lower bound (Jordan et al., 1999):

\log p(x|\theta) \geq \mathcal{L}(\theta, \phi) = \mathbb{E}_q \left[ \log p(x, z|\theta) - \log q(z|x, \phi) \right] = \log p(x|\theta) - \mathrm{KL}(q \,\|\, p(\cdot|x, \theta)). \quad (1)

Tightness of the bound is controlled by the recognition model q(z|x,φ), which aims to minimize the Kullback-Leibler divergence from the true posterior p(z|x,θ). Similarly to the generative model, the recognition model may also be implemented with deep neural networks or another form of parameter regression, which is known as amortized inference (Gershman & Goodman, 2014). Amortized inference makes it possible to use a single recognition model for many training examples. Thus, it is convenient to train the generative model p(x|θ) by stochastic gradient optimization of the variational lower bounds (1) corresponding to independent observations {x_i}_{i=1}^{N}:

\sum_{i=1}^{N} \log p(x_i|\theta) \geq \sum_{i=1}^{N} \mathbb{E}_q \left[ \log p(x_i, z_i|\theta) - \log q(z_i|x_i, \phi) \right] \to \max_{\theta, \phi}.

The clear advantage of this approach is its scalability. Every stochastic update to the parameters, computed from only a small portion of the training examples, has an immediate effect for the whole dataset. However, while a single parameter update may be relatively fast, a large number of them is required to significantly improve the generative or inferential performance of the model. Hence, gradient training of generative models usually results in an extensive computational process which prevents rapid incremental learning. In the next section we discuss potential solutions to this problem that allow fast learning to be implemented in generative models.
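As a concrete reference point, here is a minimal Monte Carlo sketch of the single-example bound (1); sample_q, log_p_joint and log_q are assumed callables wrapping the recognition and generative networks. A practical implementation would additionally use the reparametrization trick so that the estimate is differentiable with respect to the recognition parameters.

```python
import numpy as np

def elbo_estimate(x, sample_q, log_p_joint, log_q, S=64):
    """Monte Carlo estimate of L(theta, phi) = E_q[log p(x, z) - log q(z | x)],
    with the latent variable z sampled from the recognition model q."""
    zs = [sample_q(x) for _ in range(S)]
    return np.mean([log_p_joint(x, z) - log_q(z, x) for z in zs])
```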
2.2 ADAPTATION IN GENERATIVE MODELS

In the probabilistic modelling framework, the natural way of incorporating knowledge about newly available data is conditioning. One may design a model that, being conditioned on the additional input data X = x1, x2, . . . , xT, represents a new generative distribution p(x|X,θ). An implementation of this idea can be found in the model by Rezende et al. (2016). Besides many other attractive novelties, such as sophisticated attention and feedback components, the model was able to produce new examples of a concept that was missing at training time but had similarities in the underlying generative process with the other training examples. The model supported explicit conditioning on a single observation x′ representing the new concept to construct a new generative distribution of the form p(x|x′,θ).

Explicit conditioning, where adaptation is performed by the model itself and has to be learned, is not the only way to propagate knowledge about new data. Another solution, often encountered in Bayesian models, is to maintain a global latent variable α encoding information about the whole available dataset, such that the individual observations are conditionally independent given its value. The model then has the following form:

p(X|\theta) = \int p(\alpha|\theta) \prod_{t=1}^{T} p(x_t|\alpha, \theta) \, d\alpha. \quad (2)

The principal existence of such a global variable may be justified by de Finetti's theorem (Diaconis & Freedman, 1980) under the exchangeability assumption. In the model (2), the conditional generative distribution p(x|X,θ) is then defined implicitly via the posterior over the global variable:

p(x|X,\theta) = \int p(x|\alpha, \theta) \, p(\alpha|X, \theta) \, d\alpha. \quad (3)

Once there is an efficient inference procedure for the global variable α, fast adaptation of the generative model can be implemented straightforwardly. There are several relevant examples of generative models with global latent variables used for model adaptation and one-shot learning. Salakhutdinov et al. (2013) combined a deep Boltzmann machine (DBM) with a nested Dirichlet process (nDP) in a Hierarchical-Deep (HD) model.
While being a compelling demonstration of important ideas from Bayesian nonparametrics and deep learning, the HD model required an extensive Markov chain Monte Carlo inference procedure used both for training and adaptation. Thus, while Bayesian learning approach could prevent overfitting the fast learning ability still presents a challenge for sampling-based inference. Later, Lake et al. (2015) proposed Bayesian program learning (BPL) approach for building a generative model of handwritten characters. The model was defined as a probabilistic program contained fine-grained specification of prior knowledge of the task such as generation of strokes and their composition into characters mimicking human drawing strategies. Authors used an extensive posterior inference as the training procedure and the conditioning mechanism (3) for generating new examples. The model was shown to efficiently learn from small number of training examples, but similarly to the HD model, sophisticated and computationally expensive inference procedure makes fast adaptation in BPL generally hard to achieve. The recently proposed neural statistician model (Edwards & Storkey, 2016) is an example of deep generative model with a global latent variable (2). The model was trained by optimizing a variational lower bound following the approach described in section 2.1 but with an additional recognition model approximating posterior distribution over the global latent variable. Authors designed the recognition model to be computationally efficient and require only a single pass over data which consisted of extracting special features from the examples, applying to them a pooling operation (e.g. averaging) and passing the result to another network providing parameters of the variational approximation. This simple architecture allowed for the fast learning and guaranteed invariance to both data permutations and size of the conditioning dataset. However, experimentally the fast learning ability in the model was evaluated only in the setting where all of the training examples represented the same single concept. We argue that in order to capture more information about the conditioning data such as a number of different concepts a more sophisticated aggregation procedure must be employed. Moreover, a fixed parametric description is too restrictive for an accurate representation of datasets of varying size. This motivates us to combine the best of two worlds: nonparametric representation of data and fast inference with neural recognition models. We proceed with a description of the proposed model. 3 GENERATIVE MATCHING NETWORKS Generative matching networks aim to model conditional generative distributions of the form p(x|X,θ). Similarly to other deep generative models we introduce a local latent variable z. Thus the full joint distribution of our model can be expressed as: p(x, z|X,θ) = p(z|X,θ)p(x|z,X,θ). (4) We also maintain a recognition model approximating the posterior over the latent variable z: q(z|x,X,φ) ≈ p(z|x,X,θ). In order to design a fast adaptation mechanism we have to make certain assumptions about relationships between training data and the new data used to condition the model. Thus we assume the homogeneity of generative processes for training and conditioning data up to some parametrization. One may think of this parametrization as specifying weights of a neural network defining a generative model. 
The generative process is assumed to have an approximately linear dependence on the parameters such that interpolation between parameters corresponding to different examples of the same concept can serve as good parameters for generating other examples. A similar assumption is used e.g. in the neural statistician model (Edwards & Storkey, 2016). However, even if a single concept can be well embedded to a fixed parameter space, this does not imply that a diverse set of concepts will fit into the same parametrization. Hence we express the dependency on the conditioning data in a different way. Instead of embedding the whole conditioning dataset we use a special matching procedure that extracts relevant observations from X and interpolates between their descriptions allowing to generate and recognize similar observations. 3.1 BASIC MODEL In the basic model, the prior over latent variables p(z) is independent from conditioning data X, e.g. a standard normal distribution. In order to generate a new object, a sample from the prior z and conditioning objects X = x1,x2, . . . ,xT are mapped into the matching space Φ where they are compared using a similarity function sim(., .) to form an attention kernel a(z,x). After that, the conditioning objects are interpolated in the prototype space Ψ weighted according to the attention kernel. The resulting interpolation is then used to parametrize the generative process that corresponds to the sampled value of latent variable. Formally, the described matching procedure can be described by the following equations: r = T∑ t=1 a(z,xt)ψL(xt), a(z,xt) = exp(sim(fL(z), gL(xt)))∑T t′=1 exp(sim(fL(z), gL(xt′))) . (5) After the vector r is computed, it is used as an input to a decoder, e.g. a deconvolutional network. Functions fL and gL are used to map latent variables and conditioning objects, correspondingly, into the matching space Φ. Since Φ is supposed to be a feature space that is good for discriminating between objects, gL can be implemented as a feature extractor suitable for the domain of observations, a convolutional network in our case. We found it sufficient to implement the function fL as a simple affine transformation followed by a non-linearity, because the latent variable itself is assumed to be an abstract object description. We also used a simple dot product as a similarity function between these vectors. Function ψL can also be considered as a feature extractor, although since the features useful to specify the generative process are not necessarily good for discrimination, it makes sense to represent ψL and gL differently. However, in our implementation ψL was implemented as a convolutional network sharing most of the parameters with gL to keep the number of trainable parameters small. We have described the basic matching procedure on the example of the conditional likelihood p(x|z,X,θ). Although the procedure (5) is invoked in several parts of the model, each part may operate with it’s own implementation of the functions, hence the subscript ·L used for the functions f , g and ψ is for likelihood part and below we use ·R to denote the recognition part. The recognition model q(z|X,x) uses the matching procedure (5) with the difference that the conditioning objects are being matched not with a value of latent variable, but rather with an observation x. The feature extractor fR in this case can share most of the parameters with gR and in our implementation these functions were identical for matching in the recognition model, i.e. 
gR = fR Moreover, since gL is also used to project observations into the space Φ, we further re-use already defined functionality by setting gR = gL. We also shared prototype functions ψ for all parts of our model although this is not technically required. After the matching, interpolated prototype vector r is used to compute parameters of the approximate posterior which in our case was a normal distribution with diagonal covariance matrix, i.e. q(z|X,x,φ) = N (z|µ(r),Σ(r)). A major difference between the generative matching networks and the originally proposed discriminative matching networks (Vinyals et al., 2016) is that since no label information is available to the model, the interpolation in equation (5) is performed not in the label space but rather in the prototype space which itself is defined by the model and is learnt during the training. One can note that the described conditional model is not applicable in a situation where no conditioning objects are available. A possible solution to this problem involves implicit addition of a pseudo-input to the set of conditioning objects X. A pseudo-input is not an actual observation, but rather just the corresponding outputs of functions f , g and ψ which are assumed to be another trainable parameters. A stochastic computational graph describing the basic model with pseudo-input can be found on figure 1. Further by default we assume the presence of a single pseudo-input in the model and denote models without pseudo-input as conditional. 3.2 EXTENSIONS Although the basic model is capable of instant adaptation to the conditioning dataset X, it admits a number of extensions that can seriously improve it’s performance. The disadvantage of the basic matching procedure (5) is that conditioning observations X are embedded to the space Φ independently from each other. Similarly to discriminative matching networks we address this problem by computing full contextual embeddings (FCE) (Vinyals et al., 2015). In order to obtain a joint embedding of conditioning data we allow K attentional passes over X of the form (5), guided by a recurrent controller R which accumulates global knowledge about the conditioning data in its hidden state h. The hidden state is thus passed to feature extractors f and g to obtain context-dependent embeddings. We refer to this process as the full matching procedure which modifies equation (5) as: rk = T∑ t=1 a(z,xt)ψ(xt), a(z,xt) = exp(sim(f(z,hk), g(xt,hk)))∑T t′=1 exp(sim(f(z,hk), g(xt′ ,hk))) , hk+1 = R(hk, rk). (6) The output of the full matching procedure is thus the interpolated prototype vector from the last iteration rK and the last hidden state of hK+1. Besides context-dependent embedding of the conditioning data, full matching procedure allows to implement the data-dependent prior over latent variables p(z|X). In this case, no query point such as a latent variable z or an observation x is used to match with the conditioning data and only hidden state of the controller h is passed to functions f and g. Output of the procedure is then used to compute parameters of the prior, i.e. means and standard deviations in our case. As we discuss in the experiments section, we found these extensions so important that further we consider only the model with full matching described by equation (6) and data-dependent prior. Please refer to the appendix and the source code for architectural details of our implementation. 
3.3 TRAINING Training of our model consists of maximizing marginal likelihood of a dataset X which can be expressed as: p(X|θ) = T∏ t=1 p(xt|X<t,θ), X<t = {xs}t−1s=1. (7) Ideally we would like to use the whole available training data as X but due to computational limitations we instead use a training strategy rooted in curriculum learning (Bengio et al., 2009) and meta-learning (Thrun, 1998; Vilalta & Drissi, 2002; Hochreiter et al., 2001) which recently was successfully applied for one-shot discriminative learning (Santoro et al., 2016). In particular, we define a task-generating distribution pd(X) which in our case samples datasets X of size T from training examples. Then we train our model to explain well all of the sampled datasets simultaneously: Epd(X) [p(X|θ)]→ maxθ . (8) Obviously, the structure of task-generating distribution has a large impact on training and using an arbitrary distribution will unlikely lead to good results. Hence, we assume that at the training time we have an access to label information and can distinguish different concepts or classes. We thus constrain pd(X) to generate datasets consisting of examples that represent up to C randomly selected classes so that even on short datasets the model has a clear incentive to re-use conditioning data. This may be considered as a form of weak supervision but we want to emphasize that one does not need the label information at test time unless the model is deliberately used for classification which is also possible. Since the marginal likelihood (7) as well as the conditional marginal likelihoods are intractable we instead use variational lower bound (see section 2.1) as a proxy to p(X|θ) in the objective (8): L(X,θ,φ) = ∑T t=1 Eq(zt|xt,X<t,φ) [log p(xt, zt|X<t,θ)− log q(zt|xt,X<t,φ)] . 4 EXPERIMENTS For our experiments we use the Omniglot dataset (Lake et al., 2015) which consists of 1623 classes of handwritten characters from 50 different alphabets. The first 30 alphabets are devoted for training and the remaining 20 alphabets are left for testing. Importantly, only 20 examples of each class are available which makes this dataset specifically useful for small-shot learning problems. Unfortunately, the literature is inconsistent in usage of the dataset and multiple versions of Omniglot were used for evaluation which differ by train/test split, resolution, binarization and augmentation, see e.g. (Burda et al., 2015; Rezende et al., 2016; Santoro et al., 2016). We use the canonical split provided by Lake et al. (2015). In order to speed-up training we downscaled images to 28×28 resolution and since the result was fully binary we did not apply any further pre-processing. We also did not augment our data in contrast to (Santoro et al., 2016; Edwards & Storkey, 2016) to make future comparisons with our results easier. Unless otherwise stated, we train models on datasets of length T = 20 and of up to C = 2 different classes as we did not observe any improvement from training with larger values of C. 4.1 NUMBER OF ATTENTION STEPS Since the full context matching procedure (6) described in section 3.2 consists of multiple attention steps, it is interesting to see the effect of these numbers on model’s performance. We trained several models with smaller architecture and T = 10 varying number of attention steps allowed for the likelihood and recognition shared controller and the prior controller respectively. 
The models were compared using exponential moving averages of lower bounds corresponding to different numbers of conditioning examples X<t obtained during the training. Results of the comparison can be found on figure 2. Interestingly, larger numbers of steps lead to better results, however lower bounds are almost not improving after the shared controller is allowed for 4 steps. This behaviour was not observed with discriminative matching networks perhaps confirming the difficulty of unsupervised learning. Another important result is that the standard Gaussian prior makes adaptation significantly harder for the model yet still possible which justifies the importance of adaptation not just for the likelihood model but also for the prior. One may also see that all models preferred to set higher variances for a prior resulting to higher entropy comparing to standard normal prior. Clearly as more examples are available, generative matching networks become more certain about the data and output less dispersed Gaussians. Based on this comparison we decided to proceed with models that have 4 steps for the shared controller and a single step for the prior controller which is a reasonable compromise between computational cost and performance. 4.2 FAST ADAPTATION AND SMALL-SHOT GENERATION In this section we compare generative matching networks with a set of baselines by expected conditional likelihoods Epd(X)p(xt|X<t). The conditional likelihoods were estimated using importance sampling with 1000 samples from the recognition model used as a proposal. As we mention in section 3.1, it is possible to add a pseudo-input to the model to make it applicable for cases when no conditioning data is available. In this comparison by default we assume that a single pseudo-input was added to the model, otherwise we denote a model with no pseudo-input as conditional. When training and evaluating conditional models we ensure that the first C objects in a dataset belong to different classes so that they in principle contain enough information to explain rest of the dataset. We found it hard to properly compute conditional likelihoods for the neural statistician model (3) and hence had to exclude this model from the comparison, please see appendix for the details. Instead, we consider a simple generative matching network denoted as avg in which the matching procedure is replaced with prototype averaging which makes the adaptation mechanism similar to the one used in neural statistician. We also omitted sequential generative models (Rezende et al., 2016) from the comparison as they were reported to overfit on the canonical train/test split of Omniglot. Another baseline we use is a standard variational autoencoder which has the same architecture for generative and recognition model as the full generative matching networks. Table 1 contains results of the evaluation on the test alphabets from Omniglot. Ctrain and Ctest denote the maximum number of classes in task-generating distributions pd(·) used for training and evaluating respectively. As one could expect, larger values of Ctest make adaptation harder since on average less examples of the same class are available to the model. Still generative matching networks are capable of working in low-data regime even when testing setting is harder than one used for training, i.e. Ctest > Ctrain. 
Unsurprisingly, adaptation by averaging over prototype features performed reasonably well for simple datasets constructed of a single class, although significantly worse than the proposed matching procedure. On more difficult datasets with mixed examples of two different classes (Ctest = 2) averaging was ineffective for expressing dependency on the conditioning data which justifies our argument on the necessity of nonparametric representations. In order to visually assess the fast adaptation ability of generative matching networks we also provide conditionally generated samples in figure 3. Interestingly, the conditional version of our model which does not use a pseudo-input both at training and testing time generated samples slightly more similar to the conditioning data while sacrificing the predictive performance. Therefore, presence or absence of the pseudo-input should depend on target application of the model, i.e. density estimation or producing new examples. 5 CONCLUSION In this paper we presented a new class of conditional deep generative models called generative matching networks. These models are capable of fast adaptation to conditioning dataset by adjusting both the latent space and the predictive density while making very few assumptions on the data. The nonparametric matching enabling these features can be seen as a generalization of the original matching procedure since it allows a model to define the label space itself extending the applicability of matching networks to unsupervised and perhaps semi-supervised settings. We believe that these ideas can evolve further and help to implement more data-efficient models in other domains such as reinforcement learning where data acquisition is especially hard. ACKNOWLEDGMENTS We would like to thank Michael Figurnov and Timothy Lillicrap for useful discussions. Dmitry P. Vetrov is supported by RFBR project No.15-31-20596 (mol-a-ved) and by Microsoft: MSU joint research center (RPD 1053945). APPENDIX A. MODEL ARCHITECTURE CONDITIONAL GENERATOR The conditional generator network producing parameters for p(x|z,X,θ) has concatenation of z and the output of the matching operation [r,h] as input which is transformed to 3 × 3 × 32 tensor and then passed through 3 residual blocks of transposed convolutions. Each block has the following form: h = conv1(x), y = f(conv2(h) + h) + pool(scale(x)), where f is a non-linearity which in our architecture is always parametric rectified linear function (He et al., 2015). The block is parametrized by size of filters used in convolutions conv1 and conv2, shared number of filters F and stride S. • scale is another convolution with 1× 1 filters and the shared stride S. • In all other convolutions number of filters is the same and equals F . • conv1 and pool have also stride S. • conv2 preserve size of the input by padding and has stride 1. Blocks used in our paper have the following parameters (W1 ×H1,W2 ×H2, F, S): 1. (2× 2, 2× 2, 32, 2) 2. (3× 3, 3× 3, 16, 2) 3. (4× 4, 3× 3, 16, 2) Then log-probabilities for binary pixels were obtained by summing the result of these convolutions along the channel dimension. FEATURE ENCODER ψ Function ψ has an architecture which is symmetric from the generator network. The only difference is that the scale scale operation is replaced by bilinear upscaling. The residual blocks for feature encoder has following parameters: 1. (4× 4, 3× 3, 16, 2) 2. (3× 3, 3× 3, 16, 2) 3. (2× 2, 2× 2, 32, 2) The result is a tensor of 3× 3× 32 = 288 dimensions. 
FUNCTIONS f AND g Each function f or g used in our model is simply an affine transformation of feature encoder’s output (interpreted as a vector) to a 200-dimensional space followed by parametric rectified non-linearity. APPENDIX B. TRANSFER TO MNIST In this experiment we test the ability of generative matching networks to adapt not just to new concepts, but also to a new domain. Since we trained our models on 28×28 resolution for Omniglot it should be possible to apply them on MNIST dataset as well. We used the test part of MNIST to which we applied a single random binarization. Table 2 contains estimated predictive likelihood for different models. Qualitative results from the evaluation on Omniglot remain the same. Although transfer to a new domain caused significant drop in performance for all of the models, one may see that generative matching networks still demonstrate the ability to adapt to conditioning data. At the same time, average matching does not seem to efficiently re-use the conditioned data in such transfer task since relative improvements in expected conditional log-likelihood are rather small. Apparently, the model trained on a one-class datasets also learned highly dataset-dependent features as it actually performed even worse than the model with Ctrain = 2. We also provide conditional samples on figure 4. Both visual quality of samples and test loglikelihoods are significantly worse comparing to Omniglot which can be caused by a visual difference of the MNIST digits from Omniglot characters. The images are bolder and less regular due to binarization. Edwards & Storkey (2016) suggest that the quality of transfer may be improved by augmentation of the training data, however for the sake of experimental simplicity and reproducibility we resisted from any augmentation. APPENDIX C. CLASSIFICATION Generative matching networks are useful not only as adaptive density estimators. For example, one may use a pre-trained model for classification in several ways. Given a small number of labeled examples Xc = {xc,1,xc,2, . . .xc,N} for each class c ∈ {1, 2, . . . , C}, it possible to use the probability p(x|Xc) as a relative score to assign class c for a new object x. Alternatively, one may use the recognition model q(z|X1, . . . ,XC) to extract features describing the new object x and then use a classifier of choice, e.g. the nearest neighbour classifier. We implemented this method using cosine similarity on mean parameters of approximate Normal posteriors. The results under different number of training examples available are provided in table 3. Surprisingly, the simpler model with average matching performed slightly better than the full matching model. Perhaps, generative matching networks are very smooth density models and even being conditioned on a number of same-class example still assign enough probability mass to discrepant observations. The same conclusion can be made by assessing the generated samples on figure 3 which may guide further research on the topic. APPENDIX D. EVALUATION OF THE NEURAL STATISTICIAN MODEL The neural statistician model falls into the category of models with global latent variables which we describe in section 2.2. The conditional likelihood for these models has the form: p(x|X,θ) = ∫ p(α|X,θ)p(x|α,θ)dα. This quantity is hard to compute since it consists of an expectation with respect to the true posterior over global variable α. Since this distribution is intractable, simple importance sampling can not be used to estimate the likelihood. 
Thus, we tried the following strategies. First, we used self-normalizing importance sampling to directly estimate p(x|X, θ) as

p̂(x|X, θ) = ( Σ_{s=1}^S w_s p(x, z^{(s)}|α^{(s)}, θ) ) / ( Σ_{s=1}^S w_s ), where w_s = p(α^{(s)}, X, Z^{(s)}|θ) / ( q(α^{(s)}|X, φ) q(Z^{(s)}, z^{(s)}|X, x, α^{(s)}, φ) ),

but observed somewhat contradictory results, such as a non-monotonic dependency of the estimate on the size of the conditioning dataset. The diagnostic of the effective sample size suggested that the recognition model is not well suited as a proposal for this task. Another strategy was to sequentially estimate p(X_{<t}|θ) and then use the equation

p(x_t|X_{<t}, θ) = p(x_t, X_{<t}|θ) / p(X_{<t}|θ),

which appeared to be as unreliable as the previous strategy.
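For reference, the first strategy is an instance of generic self-normalized importance sampling. The following is a minimal NumPy sketch (ours, not the authors' implementation; the model-specific computation of the log-weights w_s is abstracted away), including the effective-sample-size diagnostic mentioned above:

```python
import numpy as np

def self_normalized_is(log_w, values):
    """Self-normalized importance sampling: sum_s w_s v_s / sum_s w_s.

    log_w: unnormalized log importance weights, shape (S,)
    values: integrand evaluated at the S proposal samples, shape (S,)
    Returns the estimate and the effective sample size 1 / sum_s w_s^2,
    a standard diagnostic of proposal quality. A log-sum-exp shift is
    used for numerical stability.
    """
    w = np.exp(log_w - log_w.max())   # stabilized unnormalized weights
    w /= w.sum()                      # self-normalization
    ess = 1.0 / np.sum(w ** 2)        # effective sample size diagnostic
    return np.sum(w * values), ess
```

A low effective sample size relative to S signals exactly the failure mode described above: the recognition model is a poor proposal for this estimator.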
1. What is the focus of the paper in terms of the proposed architecture and training procedure? 2. How does the proposed approach differ from traditional methods in processing exemplars? 3. Can you explain how the resulting "summary" is utilized in generating new samples? 4. How does the model generalize when generating samples from multiple classes? 5. Are there any limitations or areas for improvement regarding the experimental design or comparisons to prior works?
Review
Review The paper explores a VAE architecture and training procedure that allow the model to generate new samples of a concept based on several exemplars shown to it. The proposed architecture processes the set of exemplars with a recurrent neural network and an aggregation procedure similar to the one used in Matching Networks. The resulting "summary" is used to condition a generative model (a VAE) that produces new samples of the same kind as the exemplars shown. The proposed aggregation and conditioning procedures are better suited to sets of exemplars that come from several classes than simple averaging. Perhaps surprisingly, the model generalizes from generation conditioned on samples from 2 classes to generation conditioned on samples from 4 classes. The experiments are conducted on the OMNIGLOT dataset and are quite convincing. An explicit comparison to previous works is lacking, but this is explained in the appendices, and a comparison to architectures similar to previous work is presented.
ICLR
Title Learning from Asymmetrically-corrupted Data in Regression for Sensor Magnitude Abstract This paper addresses a regression problem in which output label values represent the results of sensing the magnitude of a phenomenon. A low value of such labels can either mean that the actual magnitude of the phenomenon has been low or that the sensor has made an incomplete observation. This leads to a bias toward lower values in labels and its resultant learning because labels for incomplete observations are recorded as lower than those for typical observations, even if both have monitored similar phenomena. Moreover, because an incomplete observation does not provide any tags indicating incompleteness, we cannot eliminate or impute them. To address this issue, we propose a learning algorithm that explicitly models the incomplete observations to be corrupted with an asymmetric noise that always has a negative value. We show that our algorithm is unbiased with a regression learned from the uncorrupted data that does not involve incomplete observations. We demonstrate the advantages of our algorithm through numerical experiments.

1 INTRODUCTION This paper addresses a regression problem for predicting the magnitude of a phenomenon when an observed magnitude involves a particular measurement error. The magnitude typically represents how large a phenomenon is or how strong the nature of the phenomenon is. Such examples of predicting the magnitude are found in several application areas, including pressure, vibration, and temperature (Vandal et al., 2017; Shi et al., 2017; Wilby et al., 2004; Tanaka et al., 2019). In medicine and healthcare, the magnitude may represent pulsation, respiration, or body movements (Inan et al., 2009; Nukaya et al., 2010; Lee et al., 2016; Alaziz et al., 2016; 2017; Carlson et al., 2018). More specifically, we learn a regression function to predict the label representing the magnitude of a phenomenon from explanatory variables. The training data consists of pairs of the label and explanatory variables, but note that the label in the data is observed with a sensor and is not necessarily in agreement with the actual magnitude of the phenomenon. We note that we use the term "label" even though we address the regression problem, and it refers to a real-valued label in this paper.
In the example of predicting the magnitude of body movements, the label in the data is measured with an intrusive sensor attached to the chest or the wrist, and the explanatory variables are the values measured with non-intrusive bed sensors (Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992; Tryon, 2013). A regression function for this example would make it possible to replace intrusive sensors with non-intrusive ones, which in turn will reduce the burden on patients. Although the sensors that measure the label generally have high accuracy, they often make incomplete observations, and such incomplete observations are recorded as low values instead of missing values. This leads to the particular challenge where a low value of the label can either mean that the actual magnitude of the phenomenon has been low or that the sensor has made an incomplete observation, and there are no clues that allow us to tell which is the case. We illustrate this challenge in Fig. 1-(a). Such incomplete observations are prevalent in measuring the magnitude of a phenomenon. For example, the phenomenon may be outside the coverage of a sensor, or the sensing system may experience temporal mechanical failures. In the example of body movements, the sensor may be temporarily detached from the chest or wrist. In all cases, the sensor keeps recording low values, while the actual magnitude may be high, and no tag indicating incompleteness can be provided. This incomplete observation is particularly severe for the sensor measuring the label since it is single-source and has narrower data coverage. This stems from the fact that the sensor is usually intrusive or it is costly to produce highly accurate observations for measuring the label. Examples of this can be seen in chest or wrist sensors that focus on the movements of a local body part with high accuracy and often miss movements outside their coverage, such as those of parts located far from where the sensor is attached. At most, a single intrusive sensor can be attached to a patient to avoid burdening them. In contrast, the sensors measuring the explanatory variables are usually multi-source and provide broader data coverage. For example, multiple sensors can be attached to various places of a bed and globally monitor the movements of all body parts on the bed but with lower accuracy. One cannot simply ignore the problem that the observations of labels may be incomplete because the estimated regression functions trained on such data with incomplete observations are severely biased toward lower values regardless of the amount of available training data. This bias comes from the fact that incomplete observations always have lower values than the actual magnitude of a phenomenon, and they occur intensively on label sensors, while explanatory variables are usually observed completely. Moreover, incomplete observations can be much more frequent than expected. Unfortunately, since we cannot identify which observations are incomplete, we cannot eliminate or impute them by using existing methods that require identifying incomplete observations. Such methods include thresholding, missing value detection (Pearson, 2006; Qahtan et al., 2018), imputation (Enders, 2010; Smieja et al., 2018; Ma & Chen, 2019; Sportisse et al., 2020), and semi-supervised regression (Zhou & Li, 2005; Zhu & Goldberg, 2009; Jean et al., 2018; Zhou et al., 2019). 
The issues of incomplete observations also cannot be solved with robust regression (Huber et al., 1964; Narula & Wellington, 1982; Draper & Smith, 1998; Wilcox, 1997), which takes into account the possibility that the observed labels contain outliers. While robust regression is an established approach and state-of-the-art against corrupted labels in regression, it assumes symmetric label corruption. Namely, the noise is assumed not to be biased either positively or negatively. Since incomplete observations induce noise that is severely biased toward lower values, robust regression methods still produce regression functions that are biased toward lower values than the one that would be learned from the data without incomplete observations. In this paper, to mitigate the bias toward lower values, we explicitly assume the existence of the noise from incomplete observations, which always has negative values, in addition to the ordinary symmetric noise. That is, we consider our training data to be asymmetrically-corrupted data. We then formulate a regression problem from our asymmetrically-corrupted data and design a principled learning algorithm for this regression problem. By explicitly modeling the incomplete observation, we derive a learning algorithm that has a rather drastic feature: namely, it ignores the labels that have relatively low values (lower-side labeled data). In other words, our algorithm uses the data whose labels have relatively high values (upper-side labeled data) and the data whose labels are ignored (unlabeled data). Hence, we refer to our algorithm as upper and unlabeled regression (U2 regression). This aligns with the intuition that the labels with low values are unreliable, since those low values may be due to incomplete observations. Our main result is that U2 regression, which learns from the asymmetrically-corrupted data, produces a regression function that is, under some technical assumptions, unbiased and consistent with the one that is produced from the uncorrupted data that does not involve incomplete observations. This counterintuitive result is achieved by considering a specific class of loss functions and deriving their gradient, which only requires upper-side labeled data and unlabeled data in the asymmetrically-corrupted data and can still be shown to be asymptotically equivalent to the expression of the gradient that has access to the uncorrupted data. The main novelty in our approach is thus in the loss function, and we will empirically demonstrate the effectiveness of the proposed class of loss functions over existing common loss functions in dealing with asymmetrically-corrupted data in synthetic and six real-world regression tasks.

Contributions. The main contributions of this paper are summarized as follows.
• We formulate a novel problem of learning a regression function from asymmetrically-corrupted data. This is important for applications where the magnitude of a phenomenon is measured with a sensor that is susceptible to unidentifiable incomplete observations.
• We derive an unbiased and consistent learning algorithm (U2 regression) for this problem from the new class of loss functions.
• Extensive experiments on synthetic and six real-world regression tasks including a real use case for healthcare demonstrate the effectiveness of the proposed method.
2 REGRESSION FROM ASYMMETRICALLY-CORRUPTED DATA Our goal is to derive a learning algorithm with asymmetrically-corrupted data, i.e., labels in the training data are corrupted with negative-valued noise due to incomplete observations, in a manner that is unbiased and consistent with the regression that uses uncorrupted data without involving incomplete observations. We first consider the regression problem that uses the uncorrupted data in Section 2.1 and then formulate learning from the asymmetrically-corrupted data in Section 2.2.

2.1 REGRESSION PROBLEM FROM DATA WITHOUT INCOMPLETE OBSERVATIONS Let x ∈ R^D (D ∈ N) be a D-dimensional explanatory variable and y ∈ R be a real-valued label. We assume that, without incomplete observations, y is observed in accordance with

y = f∗(x) + ε_s, (1)

where f∗ is the oracle regressor and ε_s is the symmetric noise with 0 as the center, such as additive white Gaussian noise (AWGN). We learn a regression function f(x) that computes the value of the estimation of a label, ŷ, for a newly observed x as ŷ = f(x). The optimal regression function, f̂, is given by

f̂ ≡ arg min_{f ∈ F} L(f), (2)

where F is a hypothesis space for f, and L(f) is the expected loss when the regression function f(x) is applied to data (x, y), distributed in accordance with an underlying distribution p(x, y):

L(f) ≡ E_{p(x,y)}[L(f(x), y)], (3)

where E_p[·] denotes the expectation over the distribution p, and L(f(x), y) is the loss function between f(x) and y, e.g., the squared loss, L(f(x), y) = ‖f(x) − y‖². The expectation E_{p(x,y)} can be estimated by computing a sample average for the training data D ≡ {(x_n, y_n)}_{n=1}^N, which is N pairs of explanatory variables and labels.

2.2 REGRESSION PROBLEM FROM ASYMMETRICALLY-CORRUPTED DATA In this paper, we consider a scenario in which we only have access to the asymmetrically-corrupted data D′ ≡ {(x_n, y′_n)}_{n=1}^N, where a label y′ may be corrupted due to incomplete observations. A corrupted label y′ is observed from the uncorrupted y with an asymmetric negative-valued noise, ε_a:

y′ = y + ε_a, (4)

where the asymmetric noise ε_a always has a random negative value, which means y′ ≤ y. Using only D′, we learn a regression function f(x) as the solution for Equation 2 in an unbiased and consistent manner. Although AWGN can be handled even when we use a naive regression method such as least squares, the asymmetric noise ε_a, which always has a negative value, is problematic. Intuitively, the asymmetric noise ε_a makes lower-side labeled data particularly unreliable and inappropriate for learning, while keeping upper-side labeled data reliable, where the upper-side labeled data refers to the data {(x, y)} whose label is above the regression line (i.e., f(x) ≤ y) and the lower-side labeled data refers to the data whose label is below the regression line. The regression line represents the estimation of a regression function. Figure 1-(b) illustrates this as a scatter plot of the value of the label against the value of an explanatory variable. Here, the data with incomplete observations appear only in the lower side of the regression line because ε_a makes observations have lower label values than those of typical observations, where the regression line represents such typical observations. This asymmetry leads to biased learning compared with the learning from the uncorrupted data without incomplete observations.
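For concreteness, the empirical counterpart of Equations 2 and 3 with the squared loss and a linear hypothesis class reduces to ordinary least squares; applied directly to corrupted labels y′, this is exactly the kind of estimator that suffers the bias just described. A minimal NumPy sketch (ours, for illustration):

```python
import numpy as np

def fit_least_squares(X, y):
    """Minimize the sample average of ||f(x) - y||^2 over linear f (Eqs. 2-3).

    A minimal baseline sketch. Fitting it on the corrupted labels y' yields
    the negatively biased regression described above, since the lower-side
    labels drag the regression line down.
    """
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta
```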
To address the asymmetric noise ε_a and its resultant bias, we formalize the assumption on the observation process for the asymmetrically-corrupted data D′ and derive a lemma representing the nature of D′. Then, we propose a learning algorithm based on the lemma in the next section. The observation processes of D and D′ are formally characterized as follows.

Assumption 2.1. Assume ε_s ⊥ f∗(x), E_{p(ε_s)}[ε_s] = 0; ε_a ⊥ f∗(x), ε_a ≤ 0 almost surely (a.s.); 2|ε_s| < |ε_a| a.s. when ε_a < 0; and {(x_n, y_n, y′_n)}_{n=1}^N are i.i.d. observations in accordance with Equation 1 and Equation 4.

This assumption means that D′ has enough information to estimate f, and the asymmetric noise ε_a is significant enough compared to the symmetric noise ε_s, which are necessary assumptions so that the learning problem is solvable, and ε_a should be handled separately from ε_s. From Assumption 2.1, we then have the following lemma.

Lemma 2.2. Let F′ ≡ {f ∈ F : |f(x) − f∗(x)| ≤ |ε_s| a.s.}. When f ∈ F′, the following holds for y ≡ f∗(x) + ε_s and y′ ≡ y + ε_a under Assumption 2.1:

E_{p(x,y′|f(x)≤y′)}[G(x, y′)] = E_{p(x,y|f(x)≤y)}[G(x, y)] (5)

for any function G : R^D × R → R as long as the expectations exist.

Proof. We outline a proof here and provide a complete one in Appendix A.1. We first show that ε_a does not change the distribution for upper-side labeled data (f∗(x) ≤ y′) on the basis of the oracle regression function f∗ before and after adding ε_a, i.e., ε_a = 0 when f∗(x) ≤ y′. With the condition f ∈ F′, we can further prove that ε_a = 0 when f(x) ≤ y′, which is for upper-side labeled data on the basis of f. This establishes p(x, y′|f(x) ≤ y′) = p(x, y|f(x) ≤ y) and implies Lemma 2.2.

The condition parts of these conditional distributions represent the relationships between labels and the estimations of the regression function f, e.g., p(x, y|f(x) ≤ y) is the distribution of x and y when y is higher than what is given by f. The condition f ∈ F′ represents our natural expectation that the regression function f well approximates f∗. Lemma 2.2 shows that ε_a does not change the expectation for our upper-side labeled data (f(x) ≤ y′) before and after adding ε_a, which makes them still reliable for regression. In the next section, we derive an unbiased learning algorithm based on this lemma.

3 U2 REGRESSION We seek to find the minimizer of the objective in Equation 2 from the asymmetrically-corrupted data D′. To this end, we propose a gradient that relies only on the knowledge of the distribution of the corrupted data p(x, y′) but is still equivalent to the gradient of Equation 3, which relies on the knowledge of the distribution of the uncorrupted data p(x, y). Based on Lemma 2.2, we rewrite the gradient based on p(x, y) into the one that only requires p(x, y′).

3.1 GRADIENT FOR LEARNING FROM ASYMMETRICALLY-CORRUPTED DATA Here, we address Equation 2 with gradient descent. At step t + 1 in the gradient descent, the gradient of Equation 3 with respect to the parameters θ of f is represented with a regression function, f_t, which is estimated at step t, as follows:

∇L(f_t) ≡ E_{p(x,y)}[∇L(f_t(x), y)], where ∇L(f_t(x), y) ≡ ∂L(f(x), y)/∂θ |_{f=f_t}. (6)

Note that this holds for any step in the gradient descent. When t = 0, f_0 is the initial value of f, and when t = ∞, we suppose f_∞ = f̂. We can decompose ∇L(f_t) as

∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] + p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[∇L(f_t(x), y)]. (7)
We then assume that, when y < f(x), the gradient of the loss function does not depend on y and only depends on f(x); thus we write ∇L(f(x), y) as g(f(x)) when y < f(x) to emphasize this independence. Formally,

Condition 3.1. Let g(f(x)) be ∇L(f(x), y) for y < f(x). g(f(x)) is a gradient function depending only on f(x) and not on the value of y.

Such common losses are the absolute loss and pinball loss, which are respectively used in least absolute regression and quantile regression and work well on real data (Lee et al., 2016; Yeung et al., 2002; Wang et al., 2005; Srinivas et al., 2020). For example, the gradient of the absolute loss is

∂|f(x) − y| / ∂θ = ∂f(x)/∂θ when y < f(x), (8)

which does not depend on the value of y but only on f(x). We now propose a gradient that does not rely on the knowledge of p(x, y) but instead uses only p(x, y′). Namely,

∇L̃(f_t) ≡ p(f_t(x) ≤ y) E_{p(x,y′|f_t(x)≤y′)}[∇L(f_t(x), y)] + E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y′)}[g(f_t(x))]. (9)

In Section 3.2, we will formally establish the equivalence between the gradient in Equation 9 and that in Equation 6 under our assumptions. Note that in the second and third terms of Equation 9, we apply expectations over p(x) and p(x|f_t(x) ≤ y′) to g(f(x)), even though g(f(x)) is defined to be the gradient ∇L(f(x), y) for y < f(x). This is tractable due to the nature of g(f(x)), which only depends on f(x) and does not depend on the value of y. Since the expectations in Equation 9 only depend on x and y′, they can be estimated by computing a sample average for our asymmetrically-corrupted data D′ as

∇L̂(f_t) = (π_up / n_up) [ Σ_{(x,y) ∈ {X_up, y′_up}} ∇L(f_t(x), y) ] + (1/N) [ Σ_{x ∈ X_un} g(f_t(x)) ] − (π_up / n_up) [ Σ_{x ∈ X_up} g(f_t(x)) ], (10)

where {X_up, y′_up} represents the set of coupled pairs of x and y′ in the upper-side labeled sample set, {x, y′ : f_t(x) ≤ y′}, in D′; X_un is a sample set of x in D′ ignoring labels y′; n_up is the number of samples in the upper-side labeled set; and π_up is π_up ≡ p(f_t(x) ≤ y). Note that π_up depends on the current estimation of the function f_t and the label y with complete observation. Thus, it changes at each step of the gradient descent, and we cannot determine its value in a general way. In this paper, we propose a simple approach of choosing π_up as a single value of the hyperparameter. We optimize it with the grid search based on the validation set, which enables us to flexibly handle data variation. We will show that it works practically in our experiments.

As we will show in Section 3.2, we can use Equation 10 to design an algorithm that gives an unbiased and consistent regression function. By using the gradient in Equation 10, we can optimize Equation 2 and learn the regression function only with upper-side labeled samples and unlabeled samples from D′ independent of lower-side labels. This addresses the issue that our lower-side labeled data is particularly unreliable and leads to overcoming the bias that stems from this unreliable part of the data. We refer to our algorithm as upper and unlabeled regression (U2 regression). See Appendix B for the specific implementation of the algorithm based on stochastic optimization.

The gradient in Equation 10 can be interpreted in an intuitive manner. The first term in Equation 10 has the effect of minimizing the upper-side loss. Recall that the upper-side data are not affected by the asymmetric noise under our assumptions. Thus, U2 regression seeks to learn the regression function f on the basis of this reliable upper-side data.
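Before continuing the term-by-term reading of Equation 10, the update can be made concrete. The following is a minimal NumPy sketch for a linear model f(x) = θ⊤x with the absolute loss, treating π_up as a hyperparameter as proposed above; the function name u2_gradient is ours, and this is an illustration rather than the authors' implementation.

```python
import numpy as np

def u2_gradient(theta, X, y_prime, pi_up):
    """Sample-average gradient of Equation 10 for a linear model.

    With the absolute loss, grad L(f(x), y') = -x on the upper side
    (f(x) <= y', where |f - y'| = y' - f), and g(f(x)) = +x (the gradient
    of f itself) since it must not depend on y (Condition 3.1).
    """
    f = X @ theta
    up = f <= y_prime                    # upper-side labeled set {x, y'}
    n_up = max(int(up.sum()), 1)
    N = len(y_prime)
    upper_term = (pi_up / n_up) * (-X[up]).sum(axis=0)   # first term of Eq. 10
    unlabeled_term = X.sum(axis=0) / N                   # second term
    cancel_term = (pi_up / n_up) * X[up].sum(axis=0)     # third term
    return upper_term + unlabeled_term - cancel_term
```

A plain gradient-descent step is then simply theta -= lr * u2_gradient(theta, X, y_prime, pi_up), repeated until convergence.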
Notice that the first term becomes zero when all of the data points are below f (i.e., y′ ≤ f_t(x) for all (x, y′) ∈ D′), since then {X_up, y′_up} becomes empty. The second term thus has the effect of pushing down f at all of the data points so that some data points are above f. Meanwhile, the third term partially cancels this effect of the second term for the upper-side data to control the balance between the first and the second terms.

3.2 UNBIASEDNESS AND CONSISTENCY OF GRADIENT U2 regression is the learning algorithm based on the gradient, ∇L̂(f_t), in Equation 10 and uses only asymmetrically-corrupted data D′. The use of ∇L̂(f_t) can be justified as follows:

Proposition 3.2. Suppose that Assumption 2.1 holds and the loss function L(f(x), y) satisfies Condition 3.1. Then, the gradient ∇L̃(f_t) in Equation 9 and its empirical approximation ∇L̂(f_t) in Equation 10 are unbiased and consistent with the gradient ∇L(f_t) in Equation 6 a.s.

Proof. We outline a proof here and provide a complete one in Appendix A.2. First, we rewrite Equation 7 into a gradient that only contains the expectation over p(x, y|f_t(x) ≤ y) with Condition 3.1. Then, we apply Lemma 2.2 to the gradient, and it becomes an expression identical to Equation 9.

In other words, U2 regression asymptotically produces the same result as the learning algorithm based on the gradient ∇L(f_t) in Equation 6, which requires the uncorrupted data without incomplete observations, D. The convergence rate of U2 regression is of the order O_p(1/√n_up + 1/√N) in accordance with the central limit theorem (Chung, 1968), where O_p denotes the order in probability.

We further justify our approach of having the specific form of Equation 9 by showing that a straightforward variant that uses D′ as if it did not involve incomplete observations (i.e., p(x, y) ≈ p(x, y′)) can fail for our problem. To this end, we introduce an additional assumption on the observation process:

Assumption 3.3. Assume ε_a ⊥ x.

Then, we have

Lemma 3.4. Let ∇Ľ(f_t) be a variant of the gradient in Equation 7 replacing p(x, y) with p(x, y′), let δ be the difference between the expectations of the gradients on the upper side and the lower side,

δ ≡ | E_{p(x,y|f(x)≤y)}[∇L(f(x), y)] − E_{p(x,y|y<f(x))}[∇L(f(x), y)] |,

let 0 < η < 1 be the probability that 0 ≤ ε_s, and let 0 < ξ < 1 be the probability that ε_a = 0. Then, ∇Ľ(f_t) is not consistent with the gradient in Equation 6 a.s., and the difference (bias) between them at step t + 1 in the gradient descent satisfies

( η(1 − η)(1 − ξ) / ((1 − η) + η(1 − ξ)) ) δ ≤ |∇Ľ(f_t) − ∇L(f_t)|. (11)

Proof. We outline a proof here and provide a complete one in Appendix A.3. We first show that the bias |∇Ľ(f_t) − ∇L(f_t)| can be represented by the difference between the expectation of g(f_t(x)) with the upper-side data and that with the lower-side data, which can be written in terms of δ. The bias also has a coefficient that contains the proportions of the lower-side data and of the original upper-side data mixed into the lower side due to incomplete observations. These values can be written in terms of η and ξ from their definitions.

Lemma 3.4 shows that the bias caused by incomplete observations becomes severe when there is a large difference between the expectations of the gradients on the upper side and the lower side. For example, with η = 0.5 and ξ = 0.5, the coefficient in Equation 11 evaluates to (0.5 · 0.5 · 0.5)/(0.5 + 0.25) = 1/6, so the bias is at least δ/6. δ is usually higher than zero because δ = 0 implies there is no difference between the expectations of the gradients on the upper side and the lower side or both of the expectations are zero.
Furthermore, a larger 1 − ξ = p(ε_a < 0) makes the bias more significant, which agrees with the intuition that as the proportion of incomplete observations increases, the problem becomes more difficult.

4 EXPERIMENTS We now evaluate the proposed method through numerical experiments. We first introduce the baselines to be compared in the experiments. Then, we present the experimental results to show the effectiveness of our unbiased learning.

Baselines. Recall that the novelty of the proposed approach lies in the unbiased gradient in Equation 10, which is derived from the new class of loss functions in Equation 9 with Condition 3.1. An objective of our experiments is thus to validate the effectiveness of this new class of loss functions and the corresponding gradients against common loss functions in the literature. Specifically, we compare the proposed method with MSE (mean squared error), MAE (mean absolute error), and Huber losses (Huber et al., 1964; Narula & Wellington, 1982; Wilcox, 1997). As robust loss functions in regression, MAE and Huber losses are considered the de facto standard and state-of-the-art in many studies and libraries. We use the same model and optimization method with all of the loss functions under consideration, and hence the only difference among the proposed method and the baselines is in the gradients. Since the loss function uniquely determines the baseline, we refer to each baseline method as MSE, MAE, or Huber.

4.1 EXPERIMENTAL PROCEDURE AND RESULTS The experiments are organized into three parts. In Section 4.1.1, we visually demonstrate the effectiveness of the proposed approach in giving unbiased prediction. In Section 4.1.2, we intensively and quantitatively evaluate the predictive error of the proposed method and baselines with five real-world regression tasks. In Section 4.1.3, we demonstrate the practical benefit of our approach in a real healthcare use case, which has motivated this work. See the appendix for the details of the experimental settings.

4.1.1 DEMONSTRATION OF UNBIASED LEARNING Procedure. We start by conducting the experiments with synthetic data to show the effectiveness of our method in obtaining unbiased learning results from asymmetrically-corrupted data with different proportions of incomplete observations, K ∈ {25, 50, 75}%. We use three tasks, LowNoise, HighNoise, and Breathing, the last of which is derived from the Kaggle dataset (Sen, 2016). We compare the proposed method against MSE, which assumes that both upper- and lower-side data are correctly labeled. This comparison shows whether our method can learn from asymmetrically-corrupted data in an unbiased manner, which MSE cannot do. Results. In Fig. 2, we plot the error in prediction (i.e., the predicted value minus the true value) given by the proposed method and MSE for each data point of the three tasks with K = 50%. Note that, for evaluating the unbiasedness, these test sets do not have incomplete observations. Since MSE regards both upper- and lower-side data as correctly labeled, it produces biased results due to the incomplete observations, where the average error (shown by the green dashed line) is negative, which means the estimation has a negative bias. In contrast, the average error by the proposed method (shown by the blue solid line) is approximately zero. This clearly shows that the proposed method obtained unbiased learning results. The figures for the other settings and tables showing quantitative performance are in Appendix E.
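As context for the loss-function comparisons that follow, the baseline gradients differ from the proposed one only in their per-sample form. A minimal NumPy sketch for a linear model (our illustration, not code from the paper; the Huber threshold delta is a free parameter):

```python
import numpy as np

def baseline_gradient(theta, X, y_prime, loss="mse", delta=1.0):
    """Per-sample gradients of the baseline losses, averaged over the data.

    MSE:   d/df (f - y)^2   = 2 (f - y)
    MAE:   d/df |f - y|     = sign(f - y)
    Huber: quadratic near zero, linear in the tails (threshold delta).
    All three treat lower-side labels as correct, unlike Equation 10.
    """
    r = X @ theta - y_prime              # residuals f(x) - y'
    if loss == "mse":
        w = 2.0 * r
    elif loss == "mae":
        w = np.sign(r)
    else:                                # "huber"
        w = np.where(np.abs(r) <= delta, r, delta * np.sign(r))
    return (w[:, None] * X).mean(axis=0)
```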
4.1.2 PERFORMANCE COMPARISON AMONG DIFFERENT LOSS FUNCTIONS Procedure. We next apply the proposed method and baselines to five different real-world healthcare tasks from the UCI Machine Learning Repository (Velloso, 2013; Velloso et al., 2013) to show a more extensive comparison between the proposed method and the baselines (MSE, MAE, and Huber). For the proposed method, we use two implementations of L(f(x), y) for f(x) ≤ y′ in Equation 10: the absolute loss (Proposed-1) and the squared loss (Proposed-2). Here, we report the mean absolute error (MAE), and its standard error, of the predictions ŷ = {ŷ_n}_{n=1}^N against the corresponding true labels y across 5-fold cross-validation, each with a different randomly sampled training-testing split. MAE is the common metric used in the healthcare domain (Lee et al., 2016; Yeung et al., 2002; Wang et al., 2005; Srinivas et al., 2020) and is defined as MAE(y, ŷ) ≡ (1/N) Σ_{n=1}^N |y_n − ŷ_n|. For each fold of the cross-validation, we use a randomly sampled 20% of the training set as a validation set to choose the best hyperparameters for each algorithm, in which the hyperparameters providing the lowest MAE on the validation set are chosen. Results. As seen in Table 1, Proposed-1 and Proposed-2 largely outperformed the baselines. The robust regression methods (MAE and Huber) did not improve in performance against MSE. In particular, Proposed-1 and Proposed-2 respectively reduced the MAE by more than 20% and 30% on average, compared with the baselines.

4.1.3 REAL USE CASE FOR HEALTHCARE Procedure. Finally, we demonstrate the practicality of our approach in a real use case in healthcare. From non-intrusive bed sensors installed under each of the four legs of a bed, we estimate the motion intensity of a subject that could be measured accurately but intrusively with ActiGraph, a gold standard sensor wrapped around the wrist (Tryon, 2013; Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992). If we can mimic the outputs from ActiGraph with outputs from the bed sensors, we can measure the motion with high accuracy and high coverage, while also easing the burden on the subject. We divide the dataset into three pieces and evaluate the results with 3-fold cross-validation. We here use the evaluation metrics that are specifically designed for sleep-wake discrimination (Cole et al., 1992), i.e., the proportion of correct prediction period and the rate of false prediction. Results. Table 2 shows the proportion of correct prediction period and the rate of false prediction, which indicate that the proposed method captured 89 percent of the total time period of the motions that were captured by ActiGraph, and false detection due to factors such as floor vibration was only 1.6 percent. Furthermore, the proposed method captured 15 additional motions that were not captured by ActiGraph. The baseline method MSE was severely underfitted, and most of the weights were zero; thus, we omitted these results. Overall, our findings here demonstrate that ActiGraph can be replaced with bed sensors, and we can also use the bed sensors for the inputs of certain ActiGraph functions, such as sleep-wake discrimination (Cole et al., 1992). See also Appendix G for further details, including the actual estimation results of the motion intensity.

5 DISCUSSION Limitations. In this paper, we do not address symmetric label corruption, such as ordinary outliers, where the coverage and incompleteness are consistent between a label and explanatory variables. Other established approaches can handle such cases.
Only when the corruption is asymmetric does it lead to the technical challenge we address here. In that sense, we can handle the opposite asymmetric corruption, in which labels for some observations may become inconsistently higher than those for typical observations. This can be handled as learning from lower-side labeled data and unlabeled data, i.e., LU regression. Since our derivation of U2 regression is straightforwardly applicable to this LU regression case, we show only its learning algorithm in Appendix C.

Asymmetric Label Corruption in Classification. In the classification problem setting, asymmetric label corruption is addressed with positive-unlabeled (PU) learning, where it is assumed that negative data cannot be obtained, but unlabeled data are available as well as positive data (Denis, 1998; De Comité et al., 1999; Letouzey et al., 2000; Shi et al., 2018; Kato et al., 2019; Sakai & Shimizu, 2019; Li et al., 2019; Zhang et al., 2019; 2020; Chen et al., 2020b;a; Luo et al., 2021; Hu et al., 2021; Li et al., 2021). An unbiased risk estimator has also been proposed (Du Plessis et al., 2014; 2015). However, PU classification cannot be used for a regression problem, where labels are real values and we need to handle order and gradation between labels. This is because its derivation and algorithm are based on the nature that labels must be binary, i.e., only positive or negative. We overcome this limitation with a novel approach based on an unbiased gradient.

Future work. We showed that our approach to estimating hyperparameters based on the grid search with the validation set was effective even for the one that contains the important ratio for upper-side labeled data, p(f_t(x) ≤ y). It also provides the flexibility needed to handle data variation. Most studies on PU learning assume that a hyperparameter corresponding to π_up is given (Hammoudeh & Lowd, 2020; Sonntag et al., 2021; Lin et al., 2022), and some papers have addressed this hyperparameter estimation as their main contribution (Jain et al., 2016; Ramaswamy et al., 2016; Christoffel et al., 2016; Jain et al., 2020; Yao et al., 2021). Developing a method for the hyperparameter estimation to improve performance would be a worthwhile next step of our study. Also, in Assumption 2.1, we assumed ε_s ⊥ f∗(x) and ε_a ⊥ f∗(x), which is a common noise assumption. Addressing the case when the noises are not independent of f∗(x) is another future direction of our work.

Conclusion. We formulated a regression problem from asymmetrically-corrupted data in which training data are corrupted with an asymmetric noise that always has a negative value. This causes labels with relatively low values to be particularly unreliable. To address this problem, we proposed a learning algorithm, U2 regression. Under some technical assumptions, we showed that our algorithm is unbiased and consistent with regression that uses uncorrupted data without incomplete observations. Our analysis is based on the equivalence of the gradient between them. An experimental evaluation demonstrated that the proposed method was significantly better than the methods without the assumption of the asymmetrical label corruption.

A PROOFS

A.1 PROOF OF LEMMA 2.2 Proof. For the proof of Lemma 2.2, we will derive two important lemmas from Assumption 2.1. Then, we will prove Lemma 2.2 by using them. We first show f∗(x) ≤ y′ ⇒ ε_a = 0. When f∗(x) ≤ y′, we have from Equation 1 and Equation 4:

f∗(x) ≤ f∗(x) + ε_s + ε_a, (12)
0 ≤ ε_s + ε_a,
−ε_a ≤ ε_s.
Since ε_a ≤ 0 by Assumption 2.1, we have

|ε_a| ≤ ε_s. (13)

If ε_a < 0, Assumption 2.1 implies |ε_s| < |ε_a|, which contradicts Equation 13. Hence, we must have

ε_a = 0. (14)

Since y = y′ when ε_a = 0, we have

p(x, y′|f∗(x) ≤ y′) = p(x, y′|f∗(x) ≤ y′, ε_a = 0) (15)
= p(x, y|f∗(x) ≤ y, ε_a = 0)
= p(x, y|f∗(x) ≤ y),

which establishes

Lemma A.1. Let p(x, y, y′) be the underlying probability distribution for x, y, and y′. Then,

p(x, y′|f∗(x) ≤ y′) = p(x, y|f∗(x) ≤ y). (16)

The condition parts of these conditional distributions represent the relationships between labels and regression functions, e.g., p(x, y|f∗(x) ≤ y) is the distribution of x and y when y is higher than what is given by the oracle regression function f∗.

Similar to Lemma A.1, we show f(x) ≤ y′ ⇒ ε_a = 0. Let F′ ≡ {f ∈ F : |f(x) − f∗(x)| ≤ |ε_s| a.s.}, which represents our natural expectation that the regression function f well approximates f∗. When f(x) ≤ y′, we have from Equation 1 and Equation 4 with the condition f ∈ F′:

f(x) ≤ f∗(x) + ε_s + ε_a, (17)
f(x) ≤ f(x) + ε_s + ε_a + |ε_s|,
0 ≤ ε_s + ε_a + |ε_s|,
−ε_a ≤ ε_s + |ε_s|.

Since ε_a ≤ 0 by Assumption 2.1, we have

|ε_a| ≤ ε_s + |ε_s|. (18)

If ε_a < 0, Assumption 2.1 implies 2|ε_s| < |ε_a|, which contradicts Equation 18. Hence, we must have

ε_a = 0. (19)

Since y = y′ when ε_a = 0, by replacing f∗ with f for the argument in the derivation of Lemma A.1 in Equation 15, we have

Lemma A.2. Let F′ ≡ {f ∈ F : |f(x) − f∗(x)| ≤ |ε_s|}. When f ∈ F′, the following holds:

p(x, y′|f(x) ≤ y′) = p(x, y|f(x) ≤ y). (20)

Lemma A.1 immediately implies

E_{p(x,y′|f∗(x)≤y′)}[G(x, y′)] = E_{p(x,y|f∗(x)≤y)}[G(x, y)] (21)

for any function G : R^D × R → R as long as the expectations exist. When f ∈ F′, from Lemma A.2, we then have

E_{p(x,y′|f(x)≤y′)}[G(x, y′)] = E_{p(x,y|f(x)≤y)}[G(x, y)]. (22)

A.2 PROOF OF PROPOSITION 3.2 Proof. From the decomposed gradients ∇L(f_t) in Equation 7, we derive the proposed gradient only with the expectations over p(x, y′). From Condition 3.1 for L(f(x), y), ∇L(f(x), y) = g(f(x)) when y < f(x). Thus, Equation 7 can be rewritten as

∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] + p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))], (23)

where y is marginalized out in the expectation in the second term since g(f_t(x)) does not depend on y. Here, Equation 6 and Equation 7 can be rewritten by replacing ∇L(f_t(x), y) with g(f_t(x)), as

E_{p(x,y)}[g(f_t(x))] = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[g(f_t(x))] + p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[g(f_t(x))], (24)

p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[g(f_t(x))] = E_{p(x,y)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[g(f_t(x))]. (25)

Since g(f_t(x)) does not depend on y, we can marginalize out y in Equation 25 as

p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))] = E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y)}[g(f_t(x))]. (26)

From Equation 26, we can express Equation 23 as

∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] + E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y)}[g(f_t(x))]. (27)

Finally, from Lemma 2.2, we can rewrite Equation 27 as:

∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y′|f_t(x)≤y′)}[∇L(f_t(x), y)] + E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y′)}[g(f_t(x))], (28)

which is identical to Equation 9. Thus, the gradient in Equation 9 is unbiased and consistent with the gradient in Equation 6 a.s.

A.3 PROOF OF LEMMA 3.4 Proof. The difference between the decomposed gradients ∇Ľ(f_t) and ∇L(f_t) at step t + 1 in the gradient descent is

|∇Ľ(f_t) − ∇L(f_t)| (29)
= | p(f_t(x) ≤ y) E_{p(x,y′|f_t(x)≤y′)}[∇L(f_t(x), y)] + p(y < f_t(x)) E_{p(x,y′|y′<f_t(x))}[∇L(f_t(x), y)]
− p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] − p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[∇L(f_t(x), y)] |.
From Lemma 2.2 and Condition 3.1,

|∇Ľ(f_t) − ∇L(f_t)| (30)
= | p(y < f_t(x)) E_{p(x,y′|y′<f_t(x))}[∇L(f_t(x), y)] − p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[∇L(f_t(x), y)] |
= | p(y < f_t(x)) E_{p(x|y′<f_t(x))}[g(f_t(x))] − p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))] |.

We decompose E_{p(x|y′<f_t(x))}[g(f_t(x))] again as

|∇Ľ(f_t) − ∇L(f_t)| (31)
= | p(y < f_t(x)) ( p(f_t(x) ≤ y | y′ < f_t(x)) E_{p(x|y′<f_t(x) ∧ f_t(x)≤y)}[g(f_t(x))]
+ p(y < f_t(x) | y′ < f_t(x)) E_{p(x|y′<f_t(x) ∧ y<f_t(x))}[g(f_t(x))] ) − p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))] |.

The condition y′ < f_t(x) ∧ y < f_t(x) is equivalent to the condition y < f_t(x) since y′ ≤ y from Assumption 2.1 and thus p(y′ < f_t(x)|y < f_t(x)) = 1. Then, we have

|∇Ľ(f_t) − ∇L(f_t)| (32)
= | p(y < f_t(x)) ( p(f_t(x) ≤ y | y′ < f_t(x)) E_{p(x|y′<f_t(x) ∧ f_t(x)≤y)}[g(f_t(x))]
+ p(y < f_t(x) | y′ < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))] ) − p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))] |.

Additionally, since p(y < f_t(x)|y′ < f_t(x)) = 1 − p(f_t(x) ≤ y|y′ < f_t(x)),

|∇Ľ(f_t) − ∇L(f_t)| (33)
= | p(y < f_t(x)) p(f_t(x) ≤ y | y′ < f_t(x)) ( E_{p(x|y′<f_t(x) ∧ f_t(x)≤y)}[g(f_t(x))] − E_{p(x|y<f_t(x))}[g(f_t(x))] ) |.

This equation shows that the bias is represented by the difference between the expectation of g(f_t(x)) with the lower-side data and that with the original upper-side data mixed into the lower side due to incomplete observations, together with the corresponding proportions. From Assumption 3.3, since ε_a ⊥ x,

|∇Ľ(f_t) − ∇L(f_t)| (34)
= | p(y < f_t(x)) p(f_t(x) ≤ y | y′ < f_t(x)) ( E_{p(x|f_t(x)≤y)}[g(f_t(x))] − E_{p(x|y<f_t(x))}[g(f_t(x))] ) |.

Since |f − f∗| ≤ |ε_s| a.s., p(f_t(x) ≤ y) = η and p(y < f_t(x)) = 1 − η from their definition,

p(f_t(x) ≤ y | y′ < f_t(x)) = p(f_t(x) ≤ y) p(ε_a < 0) / ( p(y < f_t(x)) + p(f_t(x) ≤ y) p(ε_a < 0) ) = η(1 − ξ) / ((1 − η) + η(1 − ξ)). (35)

Therefore, from the definition of δ,

|∇Ľ(f_t) − ∇L(f_t)| ≥ ( η(1 − η)(1 − ξ) / ((1 − η) + η(1 − ξ)) ) δ. (36)

B IMPLEMENTATION OF LEARNING ALGORITHM BASED ON STOCHASTIC OPTIMIZATION We scale up our U2 regression algorithm by stochastic approximation with M mini-batches and add a regularization term, R(f):

∇L̂^{m}(f_t) = Σ_{(x,y) ∈ {X_up^{m}, y′_up^{m}}} ∇L(f_t(x), y) + ρ [ Σ_{x ∈ X_un^{m}} g(f_t(x)) ] − Σ_{x ∈ X_up^{m}} g(f_t(x)) + λ ∂R(f_t)/∂θ, (37)

where ∇L̂^{m}(f_t) is the gradient for the m-th mini-batch, {X_up^{m}, y′_up^{m}} and X_un^{m} respectively are the upper-side and unlabeled sets in the m-th mini-batch based on the current f_t, λ is a regularization parameter, and the regularization term R(f) is, for example, the L1 or L2 norm of the parameter vector θ of f. We also convert n_up/(π_up N) to a hyperparameter ρ, ignoring constant coefficients instead of directly handling π_up. The hyperparameters ρ and λ are optimized in training based on the grid search with the validation set. The U2 regression algorithm based on stochastic optimization is described in Algorithm 1. We learn the regression function with the gradient in Equation 37 by using any stochastic gradient method. Here, we used Adam with the hyperparameters recommended in Kingma & Ba (2015), and the number of samples in the mini-batches was set to 32. We set the candidates of the hyperparameters, ρ and λ, to {10^{-3}, 10^{-2}, 10^{-1}, 10^{0}}. By using the learned f, we can estimate ŷ = f(x) for new data x.

C ALGORITHM FOR LU REGRESSION We show the algorithm for the lower and unlabeled regression (LU regression), where labels for some observations may become inconsistently higher than those for typical observations. Let L_LU(f(x), y) be a loss function for LU regression and g_LU(f(x)) be the gradient ∇L_LU(f(x), y) when f(x) ≤ y.
Similar to Condition 3.1 for U2 regression, we assume that the class of L_LU(f(x), y) satisfies the condition that g_LU(f(x)) is a gradient function depending only on f(x) and not on the value of y. Then, LU regression is Algorithm 1 (reproduced below), with the following gradient, ∇L̂_LU^{m}(f_t), instead of ∇L̂^{m}(f_t) in Equation 37:

∇L̂_LU^{m}(f_t) = Σ_{(x,y) ∈ {X_lo^{m}, y_lo^{m}}} ∇L_LU(f_t(x), y) + ρ [ Σ_{x ∈ X_un^{m}} g_LU(f_t(x)) ] − Σ_{x ∈ X_lo^{m}} g_LU(f_t(x)) + λ ∂R(f_t)/∂θ, (38)

where {X_lo^{m}, y_lo^{m}} and X_un^{m} respectively are the lower-side and unlabeled sets in the m-th mini-batch based on the current f_t.

Algorithm 1 U2 regression based on a stochastic gradient method.
Input: Training data D′ = {x_n, y′_n}_{n=1}^N; hyperparameters ρ, λ ≥ 0; an external stochastic gradient method A
Output: Model parameters θ for f
1: while no stopping criterion has been met
2:   Shuffle D′ into M mini-batches: {X^{m}, y^{m}}_{m=1}^M
3:   for m = 1 to M
4:     Compute the gradient ∇L̂^{m}(f_t) in Equation 37 with {X^{m}, y^{m}}
5:     Update θ by A with ∇L̂^{m}(f_t)

D COMPUTING INFRASTRUCTURE All of the experiments were carried out with a Python and TensorFlow implementation on workstations having 80 GB of memory, a 4.0 GHz CPU, and an Nvidia Titan X GPU. In this environment, the computational time to produce the results was a few hours.

E DETAILS OF EXPERIMENTS IN SECTION 4.1.1

E.1 SYNTHETIC DATASETS We conducted the experiments on synthetic data to evaluate the feasibility of our method for obtaining unbiased learning results from asymmetrically-corrupted data containing different proportions of incomplete observations. We generated synthetic data on the basis of Assumption 2.1 and Equation 4. We randomly generated N = 1000 training samples, X = {x_n}_{n=1}^N, from the standard Gaussian distribution N(x_n; 0, I), where the number of features in x was D = 10, and I is the identity matrix. Then, using X, we generated the corresponding N sets of true labels y = {y_n}_{n=1}^N from the distribution N(y_n; w⊤x_n, β), where w are coefficients that were also randomly generated from the standard Gaussian distribution N(w; 0, I), β is the noise precision, and ⊤ denotes the transpose. For simulating the situation in which a label has incomplete observations, we created corrupted labels y′ = {y′_n}_{n=1}^N by randomly selecting K percent of the data in y and subtracting the absolute value of white Gaussian noise, with twice the precision of that of y, i.e., 2β, from their values. We repeatedly evaluated the proposed method for each of the following settings. The noise precision was β ∈ {10^{0}, 10^{-1}}, which corresponded to a low-noise setting task (LowNoise) and a high-noise setting task (HighNoise), and the proportion of incomplete training samples was K ∈ {25, 50, 75}%. In the case of K = 75%, only 25 percent of the samples correctly corresponded to labels, and all of the other samples were attached with labels that were lower than the corresponding true values. It is quite difficult to learn regression functions using such data. In these tasks, we used a linear model, θ⊤x, for f(x) and an implementation of Equation 37 with the absolute loss, which satisfies Condition 3.1, for the loss function L and L1-regularization for the regularization term. We set the candidates of the hyperparameters, ρ and λ, to {10^{-3}, 10^{-2}, 10^{-1}, 10^{0}}. We standardized the data by subtracting their mean and dividing by their standard deviation in the training split. We used Adam with the hyperparameters recommended in Kingma & Ba (2015), and the number of samples in the mini-batches was set to 32.
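For reproducibility, the synthetic corruption protocol just described can be sketched as follows (our reading of the protocol; the function name and defaults are illustrative, not from the paper's code):

```python
import numpy as np

def make_synthetic(n=1000, d=10, beta=1.0, corrupt_frac=0.5, seed=0):
    """Synthetic asymmetrically-corrupted data following Appendix E.1.

    x ~ N(0, I); y ~ N(w^T x, beta), with beta read as a precision
    (std = beta**-0.5); a random corrupt_frac of the labels get the
    absolute value of Gaussian noise with precision 2*beta subtracted.
    Returns features, clean labels, and corrupted labels.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    w = rng.normal(size=d)
    y = X @ w + rng.normal(scale=beta ** -0.5, size=n)
    y_corrupt = y.copy()
    idx = rng.choice(n, size=int(corrupt_frac * n), replace=False)
    y_corrupt[idx] -= np.abs(rng.normal(scale=(2 * beta) ** -0.5, size=idx.size))
    return X, y, y_corrupt
```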
We also used a real-world sensor dataset collected from the Kaggle dataset (Sen, 2016) that contains breathing signals (Breathing). The dataset consisted of N = 1,432 samples. We used signals from a chest belt as X = {x_n}_{n=1}^N, and x in each sample had D = 2 features, i.e., the period and height of the expansion/contraction of the chest. We used signals obtained by the Douglas bag (DB) method, which is the gold standard for measuring ventilation, as true labels y = {y_n}_{n=1}^N. For our problem setting, we created corrupted labels y′ = {y′_n}_{n=1}^N through the same procedure for synthetic corruption as that for LowNoise and HighNoise with K ∈ {25, 50, 75}%. In the experiment on Breathing, for its non-linearity, we used θ⊤φ(x, σ) for f(x), where φ is a radial basis function with the training set as its bases, and σ is a hyperparameter representing the kernel width that is also optimized by using the validation set. We set the candidates of the hyperparameter σ to {10^{-3}, 10^{-2}, 10^{-1}, 10^{0}}. The other implementation details were the same as those for LowNoise and HighNoise.

E.2 DETAILED RESULTS Figure 3 shows the error between the estimation results of the proposed method and their true values, and those of MSE, for LowNoise, HighNoise, and Breathing with 25 and 75 percent of incomplete training samples. Table 3 shows the performance on LowNoise, HighNoise, and Breathing for the proposed method and MSE. As shown in Figure 3, the proposed method obtains unbiased learning results in all cases, while MSE produces biased results. From Table 3, we can see that the proposed method outperformed MSE overall. We found that the performance of our method is not significantly affected by the increase in the proportion of incomplete training samples K, even for K = 75%, unlike that of MSE.

E.3 PERFORMANCE OVER DIFFERENT SIZES OF VALIDATION SET To demonstrate the robustness of our validation-set-based approach to estimating the hyperparameter π_up, we show the performance of the proposed method over different sizes of the validation set in Fig. 4. This analysis is conducted on the tasks in Section 4.1.1, LowNoise, HighNoise, and Breathing, with K = 50%. Figure 4 shows that the proposed method does not degrade its performance much, even when we use only 1% of the training set as the validation set. This demonstrates that the proposed approach is robust enough to a small validation set as well as to a high proportion of incomplete validation samples. In Fig. 5, we also show a chart similar to Fig. 2 (the error in prediction) when we used 1% of the training set as the validation set. We can see that even in this case, the proposed method achieved unbiased learning (the average error, shown by the blue solid line, is approximately zero).

F DETAILS OF EXPERIMENTS IN SECTION 4.1.2 We applied the algorithm to five different real-world healthcare tasks recorded in the datasets from the UCI Machine Learning Repository (Velloso, 2013; Velloso et al., 2013), which contain sensor outputs from wearable devices attached to the arm while subjects exercised. From the non-intrusive sensors attached to gym equipment, we estimated the motion intensity of a subject that was measured accurately with an arm sensor, an intrusive sensor wrapped around the arm. If we can mimic outputs from the arm sensor with outputs from the equipment sensor, it could contribute to the subjects' comfort, as they would not need to wear sensors to measure their motion intensity.
We used all of the features from the equipment sensor that took "None" values fewer than ten times as X = {x_n}_{n=1}^N, where each sample had D = 13 features. The corrupted labels y′ = {y′_n}_{n=1}^N were the magnitude of acceleration from the arm sensor, which can accurately sense motion intensity on the arm, but which had insufficient data coverage and incomplete or missing observations for the movements of other body parts. For performance evaluation, we used the magnitude of acceleration for the entire body as true labels y = {y_n}_{n=1}^N. The numbers of samples were N = 11,159, N = 7,593, N = 6,844, N = 6,432, and N = 7,214, respectively, for the tasks Specification, Throwing A, Lifting, Lowering, and Throwing B. Given the complex nature of the tasks, we used a 6-layer multilayer perceptron with ReLU (Nair & Hinton, 2010) (more specifically, D-100-100-100-100-1) as f(x), which demonstrates the usefulness of the proposed method for training deep neural networks. We also used dropout (Srivastava et al., 2014) with a rate of 50% after each fully connected layer. We used two implementations of L(f(x), y) when f(x) ≤ y′ in Equation 37, with the absolute loss (Proposed-1) and the squared loss (Proposed-2). For both implementations, we used the absolute loss, which satisfies Condition 3.1, for the loss function L(f(x), y) when y′ < f(x), and used L1-regularization for the regularization term. The other implementation details were the same as those for LowNoise, HighNoise, and Breathing.

G DETAILS OF EXPERIMENTS IN SECTION 4.1.3 We demonstrate the practicality of our approach in a real use case in healthcare. From non-intrusive bed sensors installed under each of the four legs of a bed, we estimated the motion intensity of a subject that was measured accurately with ActiGraph, a gold standard intrusive sensor wrapped around the wrist (Tryon, 2013; Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992). The sensing results of ActiGraph are used for tasks such as discriminating whether a subject is asleep or awake (Cole et al., 1992). While ActiGraph can accurately sense motion on the forearm, it has insufficient data coverage in other areas and often causes observations of movements on other body parts to be missing. The bed sensors have a broader data coverage since they can sense global motion on all body parts; however, the sensing accuracy is limited due to their non-intrusiveness. If we can mimic the outputs from ActiGraph with outputs from the bed sensors, we can expect to achieve sufficient accuracy and coverage while also easing the burden on the subject. The dataset we used included three pieces of data, Data (i), (ii), and (iii), which were respectively recorded over 20, 18, and 18.5 minutes. Each piece of data consisted of pairs of bed-sensor-data sequences and the corresponding motion intensity sequence obtained by ActiGraph. We used the "magnitude" attribute of ActiGraph as corrupted labels y′ for the motion intensity, whose sampling rate was about one sample per second. For true labels y, we manually measured the motion intensity every minute under the management of a domain expert. For X, we first computed the gravity center of the four sensor outputs that were obtained from the bed sensors under the four legs of a bed. Then, we computed the time derivatives and cross terms of the raw sensor outputs and the gravity center. The sampling rate of the bed sensors was different from that of ActiGraph, at about one sample per five milliseconds.
Thus, X was finally generated as a sliding window of statistics over 1,000-millisecond (1-second) subsequences of the time series of the variables computed above, where 1 second was the same as the sampling interval of ActiGraph (see the sketch following Appendix H below). The statistics were means, standard deviations, and {0.05, 0.25, 0.5, 0.75, 0.95} quantiles. In this task, we used the linear model θ⊤x for f(x) due to its interpretability, which is indispensable in real-world healthcare and medical applications.

G.1 ESTIMATION RESULTS FOR MOTION INTENSITY Figure 6 compares our estimation results for motion intensity with the output of ActiGraph and the true labels.

G.2 IMPORTANT FEATURES FOR ESTIMATING MOTION INTENSITY The important features selected by L1 regularization were the statistics of the gravity center and the cross terms and time derivatives of the raw sensor outputs. The largest weight was assigned to the standard deviation of the gravity center, which represents the amplitude of the gravity center and is thus directly related to the motion of subjects.

H OTHER POSSIBLE USE CASES OF REGRESSION FOR SENSOR MAGNITUDE Examples of predicting the magnitude values of a sensor, which is a field of application of U2 regression, can be found in several areas. Besides the medical and healthcare applications discussed in the main text, another example is estimating the wind speed or rainfall in a specific region from observable macroscopic information (Cheng & Tan, 2008; Abraham & Tan, 2010; Abraham et al., 2013; Vandal et al., 2017), known as statistical downscaling (Wilby et al., 2004). Wind speed and rainfall, which are the labels in these tasks, can be sensed locally in only a limited number of locations and provide incomplete observations and biased labels compared with the macroscopic information, which serves as the explanatory variables.
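As a companion to the feature construction in Appendix G, the windowed statistics can be sketched as follows (a minimal NumPy sketch; the window stride and alignment are our assumptions, since the text specifies only the 1-second window length):

```python
import numpy as np

def window_stats(signal, win=1000):
    """Per-window features over 1,000-sample (1-second) subsequences:
    mean, standard deviation, and {0.05, 0.25, 0.5, 0.75, 0.95} quantiles,
    matching the statistics listed in Appendix G.
    """
    qs = [0.05, 0.25, 0.5, 0.75, 0.95]
    feats = []
    for start in range(0, len(signal) - win + 1, win):  # stride = win (assumed)
        w = signal[start:start + win]
        feats.append([w.mean(), w.std()] + [np.quantile(w, q) for q in qs])
    return np.asarray(feats)
```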
1. What is the focus of the paper regarding decision functions and sensor failures? 2. What are the strengths and weaknesses of the proposed approach, particularly concerning the choice of loss functions and its limitations? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, including specific points of confusion and suggestions for improvement? 4. Are there any other examples where a similar method could be applied, as suggested by the reviewer? 5. How would a model with censorship behave when given the information if the output has been altered or not, as raised by the reviewer? 6. Can the authors provide more advanced experiments, especially on real data, as suggested by the reviewer? 7. How does the reviewer compare the proposed approach with related works such as Median-of-Means (MoM)-based methods for robust regression, as mentioned in the review?
Summary Of The Paper
This paper studies the problem of learning a decision function when the output might be corrupted by the fact that the sensor in charge of collecting it has failed to record it properly, underestimating it. Without a debiasing procedure, the decision function naturally underestimates the magnitude of the event. Under the assumption that the loss function does not depend on the label (when the prediction is greater than the output), the authors propose an unbiased estimator of the gradient. Experiments complement the paper.

Strengths And Weaknesses
Strengths:
- the paper is globally clear and well written
- the problem studied is of interest and the proposed approach is natural

Weaknesses:
- it seems to me that the interest of the approach is completely shortcut by the choice of loss functions, which do not depend on the label. This is a very strong limitation of the approach to me
- in the same vein, could the authors think about other examples where a similar method could be applied? This could make the contribution of greater interest
- the experiments are pretty rudimentary, especially on real data. For instance, how would a model with censorship behave when given the information of whether the output has been altered or not?
- I am also pointing out the literature on Median-of-Means (MoM)-based methods for robust regression, see in particular [1], which do not assume the outliers to be symmetric and could be interesting to benchmark

[1] Robust classification via MOM minimization, Lecué et al. 2020

Clarity, Quality, Novelty And Reproducibility
Globally good, except for the following points:
- the way conditional expectations are presented is confusing to me
- in Eq. (9), (10), shouldn't it be $y'$ instead of $y$?
- Lem 3.4: shouldn't $\eta$ be $1/2$ since the noise is symmetric?
1. What is the focus of the paper regarding weakly supervised regression?
2. What are the strengths of the proposed approach, particularly in terms of its effectiveness and theoretical properties?
3. What are the weaknesses of the paper, especially regarding the estimation of πup and the limited applicability of the method?
4. Do you have any concerns or questions about the implementation of the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This draft studies a special weakly supervised regression problem in which the labels are generated by sensors. A low value displayed by the sensor could therefore either be actual or suggest that the label is missing. The authors formulate this problem as regression from asymmetrically corrupted data and propose a new approach based on an unbiased gradient estimator to solve it. Both synthetic and real-world experiments demonstrate the effectiveness of the proposed approach.

Strengths And Weaknesses
Strengths:
- The work is well motivated by many sensor-based applications and covers a wide range of real-world applications. To address this task, the proposed method appears promising, and the empirical results support its effectiveness.
- The proposed method has nice theoretical properties. Under certain assumptions, the proposed approach produces an unbiased estimator with asymmetrically corrupted data and thus obtains a well-generalized model.

Weaknesses:
- The estimation of πup is heuristic and changes during the training procedure based on the validation set. Therefore, the proposed method and the theoretical results do not match exactly end-to-end. Because the validation set is small, the estimation can be unreliable and unstable due to overfitting.
- This method can only be applied to the absolute loss and the pinball loss, which limits its use.

There is a question regarding the implementation of the proposed method. The proposed method relies on an adaptive estimation of πup and an upper-side labeled sample set based on the validation set. What is the size of the validation set, and how does it affect the performance of the proposed method? How can overfitting to the validation data be avoided?

Clarity, Quality, Novelty And Reproducibility
This draft is well-organized and easy to follow. The problem in this draft is novel and interesting. The proposed method seems promising, but the implementation is not discussed in sufficient detail.
ICLR
Title
Learning from Asymmetrically-corrupted Data in Regression for Sensor Magnitude
Abstract
This paper addresses a regression problem in which output label values represent the results of sensing the magnitude of a phenomenon. A low value of such labels can either mean that the actual magnitude of the phenomenon has been low or that the sensor has made an incomplete observation. This leads to a bias toward lower values in the labels and in the resultant learning, because labels for incomplete observations are recorded as lower than those for typical observations, even if both have monitored similar phenomena. Moreover, because an incomplete observation does not provide any tag indicating incompleteness, we cannot eliminate or impute such observations. To address this issue, we propose a learning algorithm that explicitly models the incomplete observations as corrupted with an asymmetric noise that always has a negative value. We show that our algorithm is unbiased with respect to a regression learned from the uncorrupted data that does not involve incomplete observations. We demonstrate the advantages of our algorithm through numerical experiments.
1 INTRODUCTION
This paper addresses a regression problem for predicting the magnitude of a phenomenon when the observed magnitude involves a particular measurement error. The magnitude typically represents how large a phenomenon is or how strong the nature of the phenomenon is. Examples of predicting such magnitudes are found in several application areas, including pressure, vibration, and temperature (Vandal et al., 2017; Shi et al., 2017; Wilby et al., 2004; Tanaka et al., 2019). In medicine and healthcare, the magnitude may represent pulsation, respiration, or body movements (Inan et al., 2009; Nukaya et al., 2010; Lee et al., 2016; Alaziz et al., 2016; 2017; Carlson et al., 2018). More specifically, we learn a regression function to predict the label representing the magnitude of a phenomenon from explanatory variables. The training data consist of pairs of the label and explanatory variables, but note that the label in the data is observed with a sensor and is not necessarily in agreement with the actual magnitude of the phenomenon. We note that we use the term “label” even though we address a regression problem; it refers to a real-valued label in this paper.
In the example of predicting the magnitude of body movements, the label in the data is measured with an intrusive sensor attached to the chest or the wrist, and the explanatory variables are the values measured with non-intrusive bed sensors (Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992; Tryon, 2013). A regression function for this example would make it possible to replace intrusive sensors with non-intrusive ones, which in turn would reduce the burden on patients. Although the sensors that measure the label generally have high accuracy, they often make incomplete observations, and such incomplete observations are recorded as low values instead of missing values. This leads to the particular challenge that a low value of the label can either mean that the actual magnitude of the phenomenon has been low or that the sensor has made an incomplete observation, and there are no clues that allow us to tell which is the case. We illustrate this challenge in Fig. 1-(a). Such incomplete observations are prevalent in measuring the magnitude of a phenomenon. For example, the phenomenon may be outside the coverage of a sensor, or the sensing system may experience temporary mechanical failures. In the example of body movements, the sensor may be temporarily detached from the chest or wrist. In all cases, the sensor keeps recording low values while the actual magnitude may be high, and no tag indicating incompleteness can be provided. Incomplete observation is particularly severe for the sensor measuring the label, since it is single-source and has narrower data coverage. This stems from the fact that the sensor is usually intrusive, or that producing highly accurate observations for measuring the label is costly. Examples can be seen in chest or wrist sensors that focus on the movements of a local body part with high accuracy and often miss movements outside their coverage, such as those of parts located far from where the sensor is attached. At most a single intrusive sensor can be attached to a patient to avoid burdening them. In contrast, the sensors measuring the explanatory variables are usually multi-source and provide broader data coverage. For example, multiple sensors can be attached to various places on a bed and globally monitor the movements of all body parts on the bed, but with lower accuracy. One cannot simply ignore the problem that the observations of labels may be incomplete, because regression functions trained on such data with incomplete observations are severely biased toward lower values regardless of the amount of available training data. This bias comes from the fact that incomplete observations always have lower values than the actual magnitude of a phenomenon and occur intensively on label sensors, while explanatory variables are usually observed completely. Moreover, incomplete observations can be much more frequent than expected. Unfortunately, since we cannot identify which observations are incomplete, we cannot eliminate or impute them using existing methods that require identifying incomplete observations. Such methods include thresholding, missing-value detection (Pearson, 2006; Qahtan et al., 2018), imputation (Enders, 2010; Smieja et al., 2018; Ma & Chen, 2019; Sportisse et al., 2020), and semi-supervised regression (Zhou & Li, 2005; Zhu & Goldberg, 2009; Jean et al., 2018; Zhou et al., 2019).
The issue of incomplete observations also cannot be solved with robust regression (Huber et al., 1964; Narula & Wellington, 1982; Draper & Smith, 1998; Wilcox, 1997), which takes into account the possibility that the observed labels contain outliers. While robust regression is an established approach and the state-of-the-art against corrupted labels in regression, it assumes symmetric label corruption; that is, the noise is assumed to be biased neither positively nor negatively. Since incomplete observations induce noise that is severely biased toward lower values, robust regression methods still produce regression functions that are biased toward lower values compared with the one that would be learned from data without incomplete observations. In this paper, to mitigate the bias toward lower values, we explicitly assume the existence of noise from incomplete observations, which always has negative values, in addition to the ordinary symmetric noise. That is, we consider our training data to be asymmetrically-corrupted data. We then formulate a regression problem from this asymmetrically-corrupted data and design a principled learning algorithm for it. By explicitly modeling the incomplete observations, we derive a learning algorithm with a rather drastic feature: namely, it ignores the labels that have relatively low values (lower-side labeled data). In other words, our algorithm uses the data whose labels have relatively high values (upper-side labeled data) and the data whose labels are ignored (unlabeled data). Hence, we refer to our algorithm as upper and unlabeled regression (U2 regression). This aligns with the intuition that labels with low values are unreliable, since those low values may be due to incomplete observations. Our main result is that U2 regression, which learns from the asymmetrically-corrupted data, produces a regression function that is, under some technical assumptions, unbiased and consistent with the one produced from uncorrupted data that does not involve incomplete observations. This counterintuitive result is achieved by considering a specific class of loss functions and deriving their gradient, which requires only the upper-side labeled data and unlabeled data in the asymmetrically-corrupted data and can still be shown to be asymptotically equivalent to the expression of the gradient that has access to the uncorrupted data. The main novelty of our approach is thus in the loss function, and we will empirically demonstrate the effectiveness of the proposed class of loss functions over existing common loss functions in dealing with asymmetrically-corrupted data on synthetic and six real-world regression tasks.
Contributions. The main contributions of this paper are summarized as follows.
• We formulate a novel problem of learning a regression function from asymmetrically-corrupted data. This is important for applications where the magnitude of a phenomenon is measured with a sensor that is susceptible to unidentifiable incomplete observations.
• We derive an unbiased and consistent learning algorithm (U2 regression) for this problem from the new class of loss functions.
• Extensive experiments on synthetic and six real-world regression tasks, including a real healthcare use case, demonstrate the effectiveness of the proposed method.
2 REGRESSION FROM ASYMMETRICALLY-CORRUPTED DATA
Our goal is to derive a learning algorithm for asymmetrically-corrupted data, i.e., data whose labels are corrupted with negative-valued noise due to incomplete observations, in a manner that is unbiased and consistent with the regression that uses uncorrupted data without incomplete observations. We first consider the regression problem that uses the uncorrupted data in Section 2.1 and then formulate learning from the asymmetrically-corrupted data in Section 2.2.
2.1 REGRESSION PROBLEM FROM DATA WITHOUT INCOMPLETE OBSERVATIONS
Let x ∈ R^D (D ∈ N) be a D-dimensional explanatory variable and y ∈ R be a real-valued label. We assume that, without incomplete observations, y is observed in accordance with
y = f^*(x) + ε_s,   (1)
where f^* is the oracle regressor and ε_s is symmetric noise centered at 0, such as additive white Gaussian noise (AWGN). We learn a regression function f(x) that computes the estimate ŷ of a label for a newly observed x as ŷ = f(x). The optimal regression function, f̂, is given by
f̂ ≡ argmin_{f ∈ F} L(f),   (2)
where F is a hypothesis space for f, and L(f) is the expected loss when the regression function f(x) is applied to data (x, y) distributed in accordance with an underlying distribution p(x, y):
L(f) ≡ E_{p(x,y)}[L(f(x), y)],   (3)
where E_p[·] denotes the expectation over the distribution p, and L(f(x), y) is the loss function between f(x) and y, e.g., the squared loss L(f(x), y) = ‖f(x) − y‖². The expectation E_{p(x,y)} can be estimated by computing a sample average over the training data D ≡ {(x_n, y_n)}_{n=1}^N, which consists of N pairs of explanatory variables and labels.
2.2 REGRESSION PROBLEM FROM ASYMMETRICALLY-CORRUPTED DATA
In this paper, we consider a scenario in which we only have access to the asymmetrically-corrupted data D′ ≡ {(x_n, y′_n)}_{n=1}^N, where a label y′ may be corrupted due to incomplete observations. A corrupted label y′ is observed from the uncorrupted y with an asymmetric negative-valued noise ε_a:
y′ = y + ε_a,   (4)
where the asymmetric noise ε_a always takes a random non-positive value, which means y′ ≤ y. Using only D′, we learn a regression function f(x) as the solution of Equation 2 in an unbiased and consistent manner. Although AWGN can be handled even with a naive regression method such as least squares, the asymmetric noise ε_a, which always has a negative value, is problematic. Intuitively, ε_a makes the lower-side labeled data particularly unreliable and inappropriate for learning, while keeping the upper-side labeled data reliable, where the upper-side labeled data refers to the data {(x, y)} whose label is above the regression line (i.e., f(x) ≤ y) and the lower-side labeled data refers to the data whose label is below the regression line. The regression line represents the estimate given by a regression function. Figure 1-(b) illustrates this as a scatter plot of the value of the label against the value of an explanatory variable. Here, the data with incomplete observations appear only below the regression line, because ε_a gives observations lower label values than those of typical observations, where the regression line represents such typical observations. This asymmetry leads to biased learning compared with learning from the uncorrupted data without incomplete observations.
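To make the observation process concrete, here is a minimal simulation of Equations 1 and 4; the linear oracle, the noise scales, and the 50% corruption rate are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(1)
N, D = 1000, 10
w = rng.normal(size=D)                     # stands in for the oracle f*
X = rng.normal(size=(N, D))
y = X @ w + rng.normal(scale=0.1, size=N)  # Eq. 1: y = f*(x) + eps_s

# Eq. 4: y' = y + eps_a, where eps_a <= 0 on a subset (incomplete observations)
corrupted = rng.random(N) < 0.5
eps_a = np.where(corrupted, -np.abs(rng.normal(scale=1.0, size=N)), 0.0)
y_prime = y + eps_a                        # labels are biased toward lower values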
To address the asymmetric noise ε_a and its resultant bias, we formalize the assumption on the observation process for the asymmetrically-corrupted data D′ and derive a lemma capturing the nature of D′. We then propose a learning algorithm based on this lemma in the next section. The observation processes of D and D′ are formally characterized as follows.
Assumption 2.1. Assume ε_s ⊥ f^*(x) and E_{p(ε_s)}[ε_s] = 0; ε_a ⊥ f^*(x) and ε_a ≤ 0 almost surely (a.s.); 2|ε_s| < |ε_a| a.s. when ε_a < 0; and {(x_n, y_n, y′_n)}_{n=1}^N are i.i.d. observations in accordance with Equation 1 and Equation 4.
This assumption means that D′ has enough information to estimate f and that the asymmetric noise ε_a is significant compared to the symmetric noise ε_s; these assumptions are necessary for the learning problem to be solvable and for ε_a to require treatment separate from ε_s. From Assumption 2.1, we then have the following lemma.
Lemma 2.2. Let F′ ≡ {f ∈ F : |f(x) − f^*(x)| ≤ |ε_s| a.s.}. When f ∈ F′, the following holds for y ≡ f^*(x) + ε_s and y′ ≡ y + ε_a under Assumption 2.1:
E_{p(x,y′|f(x)≤y′)}[G(x, y′)] = E_{p(x,y|f(x)≤y)}[G(x, y)]   (5)
for any function G : R^D × R → R, as long as the expectations exist.
Proof. We outline a proof here and provide a complete one in Appendix A.1. We first show that ε_a does not change the distribution of the upper-side labeled data (f^*(x) ≤ y′) relative to the oracle regression function f^* before and after adding ε_a, i.e., ε_a = 0 when f^*(x) ≤ y′. With the condition f ∈ F′, we can further prove that ε_a = 0 when f(x) ≤ y′, i.e., for the upper-side labeled data relative to f. This establishes p(x, y′|f(x) ≤ y′) = p(x, y|f(x) ≤ y) and implies Lemma 2.2.
The condition parts of these conditional distributions represent the relationships between labels and the estimates of the regression function f; e.g., p(x, y|f(x) ≤ y) is the distribution of x and y when y is higher than what is given by f. The condition f ∈ F′ represents our natural expectation that the regression function f well approximates f^*. Lemma 2.2 shows that ε_a does not change the expectation over our upper-side labeled data (f(x) ≤ y′) before and after adding ε_a, which keeps such data reliable for regression. In the next section, we derive an unbiased learning algorithm based on this lemma.
3 U2 REGRESSION
We seek the minimizer of the objective in Equation 2 from the asymmetrically-corrupted data D′. To this end, we propose a gradient that relies only on knowledge of the distribution of the corrupted data p(x, y′) but is still equivalent to the gradient of Equation 3, which relies on knowledge of the distribution of the uncorrupted data p(x, y). Based on Lemma 2.2, we rewrite the gradient based on p(x, y) into one that requires only p(x, y′).
3.1 GRADIENT FOR LEARNING FROM ASYMMETRICALLY-CORRUPTED DATA
Here, we address Equation 2 with gradient descent. At step t + 1 of gradient descent, the gradient of Equation 3 with respect to the parameters θ of f is expressed with the regression function f_t estimated at step t as
∇L(f_t) ≡ E_{p(x,y)}[∇L(f_t(x), y)],  where  ∇L(f_t(x), y) ≡ ∂L(f(x), y)/∂θ |_{f=f_t}.   (6)
Note that this holds for any step of gradient descent. When t = 0, f_0 is the initial value of f, and when t = ∞, we suppose f_∞ = f̂. We can decompose ∇L(f_t) as
∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] + p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[∇L(f_t(x), y)].   (7)
We then assume that, when y < f(x), the gradient of the loss function does not depend on y and depends only on f(x); we thus write ∇L(f(x), y) as g(f(x)) when y < f(x) to emphasize this independence. Formally,
Condition 3.1. Let g(f(x)) be ∇L(f(x), y) for y < f(x). g(f(x)) is a gradient function depending only on f(x) and not on the value of y.
Common losses satisfying this condition are the absolute loss and the pinball loss, which are used in least-absolute regression and quantile regression, respectively, and work well on real data (Lee et al., 2016; Yeung et al., 2002; Wang et al., 2005; Srinivas et al., 2020). For example, the gradient of the absolute loss is
∂|f(x) − y|/∂θ = ∂f(x)/∂θ  when y < f(x),   (8)
which does not depend on the value of y but only on f(x).
We now propose a gradient that does not rely on knowledge of p(x, y) but instead uses only p(x, y′). Namely,
∇L̃(f_t) ≡ p(f_t(x) ≤ y) E_{p(x,y′|f_t(x)≤y′)}[∇L(f_t(x), y′)] + E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y′)}[g(f_t(x))].   (9)
In Section 3.2, we will formally establish the equivalence between the gradient in Equation 9 and that in Equation 6 under our assumptions. Note that in the second and third terms of Equation 9, we apply expectations over p(x) and p(x|f_t(x) ≤ y′) to g(f(x)), even though g(f(x)) is defined as the gradient ∇L(f(x), y) for y < f(x). This is tractable due to the nature of g(f(x)), which depends only on f(x) and not on the value of y. Since the expectations in Equation 9 depend only on x and y′, they can be estimated by computing a sample average over our asymmetrically-corrupted data D′ as
∇L̂(f_t) = (π_up / n_up) Σ_{(x,y′)∈{X_up, y′_up}} ∇L(f_t(x), y′) + (1/N) Σ_{x∈X_un} g(f_t(x)) − (π_up / n_up) Σ_{x∈X_up} g(f_t(x)),   (10)
where {X_up, y′_up} is the upper-side labeled sample set, i.e., the set of coupled pairs (x, y′) in D′ with f_t(x) ≤ y′; X_un is the set of all x in D′, ignoring the labels y′; n_up is the number of samples in the upper-side labeled set; and π_up ≡ p(f_t(x) ≤ y). Note that π_up depends on the current estimate f_t and on the label y with complete observation. Thus, it changes at each step of gradient descent, and we cannot determine its value in a general way. In this paper, we propose the simple approach of treating π_up as a single hyperparameter. We optimize it by grid search on the validation set, which lets us flexibly handle data variation, and we will show in our experiments that this works well in practice.
As we will show in Section 3.2, we can use Equation 10 to design an algorithm that yields an unbiased and consistent regression function. By using the gradient in Equation 10, we can optimize Equation 2 and learn the regression function from only the upper-side labeled samples and the unlabeled samples in D′, independently of the lower-side labels. This addresses the issue that our lower-side labeled data is particularly unreliable and thereby overcomes the bias that stems from this unreliable part of the data. We refer to our algorithm as upper and unlabeled regression (U2 regression). See Appendix B for the specific implementation of the algorithm based on stochastic optimization.
The gradient in Equation 10 can be interpreted intuitively. The first term has the effect of minimizing the upper-side loss. Recall that the upper-side data are not affected by the asymmetric noise under our assumptions; U2 regression thus seeks to learn the regression function f on the basis of this reliable upper-side data.
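Before continuing the term-by-term reading of Equation 10, a minimal sketch of the estimator for the absolute loss (whose lower-side gradient g depends only on f(x), per Condition 3.1) may help; the linear parameterization, the fixed π_up, and all names are illustrative assumptions rather than the authors' code.

import numpy as np

def u2_gradient(theta, X, y_corrupt, pi_up):
    # Sample-average gradient of Eq. 10 for f(x) = theta^T x and absolute loss.
    # Upper side (f(x) <= y'): dL/dtheta = -x; lower side: g(f(x)) = +x.
    f = X @ theta
    up = f <= y_corrupt                              # upper-side labeled set
    n_up = max(up.sum(), 1)
    term1 = (pi_up / n_up) * (-X[up]).sum(axis=0)    # upper-side loss term
    term2 = X.sum(axis=0) / len(X)                   # unlabeled term, E_p(x)[g]
    term3 = (pi_up / n_up) * X[up].sum(axis=0)       # cancellation on upper side
    return term1 + term2 - term3

Here pi_up would be chosen by grid search on a validation set, as the paper proposes.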
Notice that the first term becomes zero when all of the data points are below f (i.e., y′ ≤ f_t(x) for all (x, y′) ∈ D′), since {X_up, y′_up} is then empty. The second term thus has the effect of pushing f down at all of the data points so that some data points lie above f. Meanwhile, the third term partially cancels this effect of the second term on the upper-side data, controlling the balance between the first and second terms.
3.2 UNBIASEDNESS AND CONSISTENCY OF GRADIENT
U2 regression is the learning algorithm based on the gradient ∇L̂(f_t) in Equation 10 and uses only the asymmetrically-corrupted data D′. The use of ∇L̂(f_t) can be justified as follows:
Proposition 3.2. Suppose that Assumption 2.1 holds and the loss function L(f(x), y) satisfies Condition 3.1. Then, the gradient ∇L̃(f_t) in Equation 9 and its empirical approximation ∇L̂(f_t) in Equation 10 are unbiased and consistent with the gradient ∇L(f_t) in Equation 6 a.s.
Proof. We outline a proof here and provide a complete one in Appendix A.2. First, we rewrite Equation 7 into a gradient that contains only the expectation over p(x, y|f_t(x) ≤ y), using Condition 3.1. Then, applying Lemma 2.2 to this gradient yields an expression identical to Equation 9.
In other words, U2 regression asymptotically produces the same result as the learning algorithm based on the gradient ∇L(f_t) in Equation 6, which requires the uncorrupted data D without incomplete observations. The convergence rate of U2 regression is of the order O_p(1/√n_up + 1/√N) in accordance with the central limit theorem (Chung, 1968), where O_p denotes the order in probability.
We further justify our approach of using the specific form of Equation 9 by showing that a straightforward variant that uses D′ as if it did not involve incomplete observations (i.e., p(x, y) ≈ p(x, y′)) can fail for our problem. To this end, we introduce an additional assumption on the observation process:
Assumption 3.3. Assume ε_a ⊥ x.
Then, we have
Lemma 3.4. Let ∇Ľ(f_t) be the variant of the gradient in Equation 7 obtained by replacing p(x, y) with p(x, y′); let δ be the difference between the expectations of the gradients on the upper side and the lower side,
δ ≡ |E_{p(x,y|f(x)≤y)}[∇L(f(x), y)] − E_{p(x,y|y<f(x))}[∇L(f(x), y)]|;
let 0 < η < 1 be the probability that 0 ≤ ε_s; and let 0 < ξ < 1 be the probability that ε_a = 0. Then ∇Ľ(f_t) is not consistent with the gradient in Equation 6 a.s., and the difference (bias) between them at step t + 1 of gradient descent satisfies
η(1 − η)(1 − ξ) / ((1 − η) + η(1 − ξ)) · δ ≤ |∇Ľ(f_t) − ∇L(f_t)|.   (11)
Proof. We outline a proof here and provide a complete one in Appendix A.3. We first show that the bias |∇Ľ(f_t) − ∇L(f_t)| can be expressed as the difference between the expectation of g(f_t(x)) over the lower-side data and that over the original upper-side data mixed into the lower side by incomplete observations, which can be written in terms of δ. The bias also carries a coefficient containing the proportions of the lower-side data and of the original upper-side data mixed into the lower side due to incomplete observations; these can be written in terms of η and ξ by their definitions.
Lemma 3.4 shows that the bias caused by incomplete observations becomes severe when there is a large difference between the expectations of the gradients on the upper side and the lower side. δ is usually greater than zero, because δ = 0 would imply either that there is no difference between the expectations of the gradients on the two sides or that both expectations are zero.
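To get a rough sense of the size of the coefficient in Equation 11, a short, purely illustrative computation (the η and ξ values are arbitrary) is:

# Coefficient of delta in Eq. 11, evaluated for eta = 0.5 and a few values of xi.
eta = 0.5
for xi in (0.75, 0.5, 0.25):  # xi = p(eps_a = 0); smaller xi means more corruption
    c = eta * (1 - eta) * (1 - xi) / ((1 - eta) + eta * (1 - xi))
    print(f"1 - xi = {1 - xi:.2f}: coefficient = {c:.3f}")
# Prints 0.100, 0.167, 0.214: the lower bound on the bias grows as 1 - xi grows.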
Furthermore, a larger 1 − ξ = p(ε_a < 0) makes the bias more significant, which agrees with the intuition that the problem becomes more difficult as the proportion of incomplete observations increases.
4 EXPERIMENTS
We now evaluate the proposed method through numerical experiments. We first introduce the baselines to be compared in the experiments. Then, we present the experimental results to show the effectiveness of our unbiased learning.
Baselines. Recall that the novelty of the proposed approach lies in the unbiased gradient in Equation 10, which is derived from the new class of loss functions in Equation 9 under Condition 3.1. One objective of our experiments is thus to validate the effectiveness of this new class of loss functions and the corresponding gradients against common loss functions in the literature. Specifically, we compare the proposed method with the MSE (mean squared error), MAE (mean absolute error), and Huber losses (Huber et al., 1964; Narula & Wellington, 1982; Wilcox, 1997). As robust loss functions for regression, the MAE and Huber losses are considered the de facto standard and state-of-the-art in many studies and libraries. We use the same model and optimization method with all of the loss functions under consideration, so the only difference between the proposed method and the baselines is in the gradients. Since the loss function uniquely determines each baseline, we refer to the baseline methods as MSE, MAE, and Huber.
4.1 EXPERIMENTAL PROCEDURE AND RESULTS
The experiments are organized into three parts. In Section 4.1.1, we visually demonstrate the effectiveness of the proposed approach in giving unbiased predictions. In Section 4.1.2, we quantitatively evaluate the predictive error of the proposed method and the baselines on five real-world regression tasks. In Section 4.1.3, we demonstrate the practical benefit of our approach in a real healthcare use case, which motivated this work. See the appendix for the details of the experimental settings.
4.1.1 DEMONSTRATION OF UNBIASED LEARNING
Procedure. We start by conducting experiments with synthetic data to show the effectiveness of our method in obtaining unbiased learning results from asymmetrically-corrupted data with different proportions of incomplete observations, K = {25, 50, 75}%. We use three tasks: LowNoise, HighNoise, and Breathing, the last collected from a Kaggle dataset (Sen, 2016). We compare the proposed method against MSE, which assumes that both upper- and lower-side data are correctly labeled. This comparison shows whether our method can learn from asymmetrically-corrupted data in an unbiased manner, which MSE cannot do.
Results. In Fig. 2, we plot the error in prediction (i.e., the predicted value minus the true value) given by the proposed method and MSE for each data point of the three tasks with K = 50%. Note that, for evaluating unbiasedness, these test sets do not contain incomplete observations. Since MSE regards both upper- and lower-side data as correctly labeled, it produces biased results due to the incomplete observations: the average error (shown by the green dashed line) is negative, meaning the estimates have a negative bias. In contrast, the average error of the proposed method (shown by the blue solid line) is approximately zero. This clearly shows that the proposed method obtains unbiased learning results. Figures for the other settings and tables with quantitative performance are in Appendix E.
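The unbiasedness check underlying Fig. 2 amounts to inspecting the signed prediction error; a minimal sketch of that evaluation (the function and variable names are hypothetical) is:

import numpy as np

def signed_error_summary(y_true, y_pred):
    # Mean of (prediction - truth); a value near zero indicates unbiased learning.
    err = np.asarray(y_pred) - np.asarray(y_true)
    return err.mean(), err.std(ddof=1) / np.sqrt(len(err))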
4.1.2 PERFORMANCE COMPARISON AMONG DIFFERENT LOSS FUNCTIONS
Procedure. We next apply the proposed method and the baselines to five different real-world healthcare tasks from the UCI Machine Learning Repository (Velloso, 2013; Velloso et al., 2013) for a more extensive comparison between the proposed method and the baselines (MSE, MAE, and Huber). For the proposed method, we use two implementations of L(f(x), y) for f(x) ≤ y′ in Equation 10: the absolute loss (Proposed-1) and the squared loss (Proposed-2). We report the mean absolute error (MAE), and its standard error, of the predictions ŷ = {ŷ_n}_{n=1}^N against the corresponding true labels y across 5-fold cross-validation, each fold with a different randomly sampled training-testing split. MAE is a common metric in the healthcare domain (Lee et al., 2016; Yeung et al., 2002; Wang et al., 2005; Srinivas et al., 2020) and is defined as MAE(y, ŷ) ≡ (1/N) Σ_{n=1}^N |y_n − ŷ_n|. For each fold of the cross-validation, we use a randomly sampled 20% of the training set as a validation set to choose the best hyperparameters for each algorithm, selecting the hyperparameters that give the lowest MAE on the validation set.
Results. As seen in Table 1, Proposed-1 and Proposed-2 largely outperformed the baselines. The robust regression methods (MAE and Huber) did not improve on MSE. In particular, Proposed-1 and Proposed-2 reduced the MAE by more than 20% and 30% on average, respectively, compared with the baselines.
4.1.3 REAL USE CASE FOR HEALTHCARE
Procedure. Finally, we demonstrate the practicality of our approach in a real healthcare use case. From non-intrusive bed sensors installed under each of the four legs of a bed, we estimate the motion intensity of a subject that could be measured accurately but intrusively with ActiGraph, a gold-standard sensor wrapped around the wrist (Tryon, 2013; Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992). If we can mimic the outputs of ActiGraph with outputs from the bed sensors, we can measure motion with high accuracy and high coverage while also easing the burden on the subject. We divide the dataset into three pieces and evaluate the results with 3-fold cross-validation. We use evaluation metrics specifically designed for sleep-wake discrimination (Cole et al., 1992), i.e., the proportion of correctly predicted periods and the rate of false predictions.
Results. Table 2 shows the proportion of correctly predicted periods and the rate of false predictions, which indicate that the proposed method captured 89 percent of the total time period of the motions captured by ActiGraph, while false detections due to factors such as floor vibration accounted for only 1.6 percent. Furthermore, the proposed method captured 15 additional motions that were not captured by ActiGraph. The baseline method MSE was severely underfitted, with most of its weights at zero; we therefore omit its results. Overall, our findings demonstrate that ActiGraph can be replaced with bed sensors, and that the bed sensors can also serve as inputs to certain ActiGraph functions, such as sleep-wake discrimination (Cole et al., 1992). See also Appendix G for further details, including the actual estimation results for motion intensity.
5 DISCUSSION
Limitations. In this paper, we do not address symmetric label corruption, such as ordinary outliers, where the coverage and incompleteness are consistent between a label and the explanatory variables. Other established approaches can handle such cases.
Only when the corruption is asymmetric does it lead to the technical challenge we address here. In that sense, we can also handle the opposite asymmetric corruption, in which labels for some observations may become inconsistently higher than those for typical observations. This can be handled as learning from lower-side labeled data and unlabeled data, i.e., LU regression. Since our derivation of U2 regression applies straightforwardly to this LU regression case, we show only its learning algorithm, in Appendix C.
Asymmetric Label Corruption in Classification. In the classification setting, asymmetric label corruption is addressed with positive-unlabeled (PU) learning, where it is assumed that negative data cannot be obtained but unlabeled data are available in addition to positive data (Denis, 1998; De Comité et al., 1999; Letouzey et al., 2000; Shi et al., 2018; Kato et al., 2019; Sakai & Shimizu, 2019; Li et al., 2019; Zhang et al., 2019; 2020; Chen et al., 2020b;a; Luo et al., 2021; Hu et al., 2021; Li et al., 2021). An unbiased risk estimator has also been proposed (Du Plessis et al., 2014; 2015). However, PU classification cannot be used for a regression problem, where labels are real-valued and we need to handle order and gradation between labels, because its derivation and algorithms rely on labels being binary, i.e., only positive or negative. We overcome this limitation with a novel approach based on an unbiased gradient.
Future work. We showed that our approach of estimating hyperparameters by grid search on the validation set was effective even for the hyperparameter corresponding to the important ratio of upper-side labeled data, p(f_t(x) ≤ y); it also provides the flexibility needed to handle data variation. Most studies on PU learning assume that a hyperparameter corresponding to π_up is given (Hammoudeh & Lowd, 2020; Sonntag et al., 2021; Lin et al., 2022), and some papers have addressed this hyperparameter estimation as their main contribution (Jain et al., 2016; Ramaswamy et al., 2016; Christoffel et al., 2016; Jain et al., 2020; Yao et al., 2021). Developing a hyperparameter-estimation method to further improve performance would be a worthwhile next step for our study. Also, in Assumption 2.1, we assumed ε_s ⊥ f^*(x) and ε_a ⊥ f^*(x), which is a common noise assumption; addressing the case in which the noises are not independent of f^*(x) is another future direction of our work.
Conclusion. We formulated a regression problem from asymmetrically-corrupted data in which the training labels are corrupted with an asymmetric noise that always has a negative value. This causes labels for data with relatively low label values to be particularly unreliable. To address this problem, we proposed a learning algorithm, U2 regression. Under some technical assumptions, we showed that our algorithm is unbiased and consistent with regression on uncorrupted data without incomplete observations. Our analysis is based on the equivalence of the gradients between the two. An experimental evaluation demonstrated that the proposed method performs significantly better than methods that do not assume asymmetric label corruption.
A PROOFS
A.1 PROOF OF LEMMA 2.2
Proof. For the proof of Lemma 2.2, we will derive two important lemmas from Assumption 2.1 and then prove Lemma 2.2 by using them. We first show f^*(x) ≤ y′ ⇒ ε_a = 0. When f^*(x) ≤ y′, we have from Equation 1 and Equation 4:
f^*(x) ≤ f^*(x) + ε_s + ε_a   (12)
0 ≤ ε_s + ε_a
−ε_a ≤ ε_s.
Since ε_a ≤ 0 by Assumption 2.1, we have
|ε_a| ≤ ε_s.   (13)
If ε_a < 0, Assumption 2.1 implies |ε_s| < |ε_a|, which contradicts Equation 13. Hence, we must have
ε_a = 0.   (14)
Since y = y′ when ε_a = 0, we have
p(x, y′|f^*(x) ≤ y′) = p(x, y′|f^*(x) ≤ y′, ε_a = 0) = p(x, y|f^*(x) ≤ y, ε_a = 0) = p(x, y|f^*(x) ≤ y),   (15)
which establishes
Lemma A.1. Let p(x, y, y′) be the underlying probability distribution of x, y, and y′. Then,
p(x, y′|f^*(x) ≤ y′) = p(x, y|f^*(x) ≤ y).   (16)
The condition parts of these conditional distributions represent the relationships between labels and regression functions; e.g., p(x, y|f^*(x) ≤ y) is the distribution of x and y when y is higher than what is given by the oracle regression function f^*.
Similarly to Lemma A.1, we show f(x) ≤ y′ ⇒ ε_a = 0. Let F′ ≡ {f ∈ F : |f(x) − f^*(x)| ≤ |ε_s| a.s.}, which represents our natural expectation that the regression function f well approximates f^*. When f(x) ≤ y′, we have from Equation 1 and Equation 4 with the condition f ∈ F′:
f(x) ≤ f^*(x) + ε_s + ε_a   (17)
f(x) ≤ f(x) + ε_s + ε_a + |ε_s|
0 ≤ ε_s + ε_a + |ε_s|
−ε_a ≤ ε_s + |ε_s|.
Since ε_a ≤ 0 by Assumption 2.1, we have
|ε_a| ≤ ε_s + |ε_s|.   (18)
If ε_a < 0, Assumption 2.1 implies 2|ε_s| < |ε_a|, which contradicts Equation 18. Hence, we must have
ε_a = 0.   (19)
Since y = y′ when ε_a = 0, by replacing f^* with f in the derivation of Lemma A.1 in Equation 15, we have
Lemma A.2. Let F′ ≡ {f ∈ F : |f(x) − f^*(x)| ≤ |ε_s|}. When f ∈ F′, the following holds:
p(x, y′|f(x) ≤ y′) = p(x, y|f(x) ≤ y).   (20)
Lemma A.1 immediately implies
E_{p(x,y′|f^*(x)≤y′)}[G(x, y′)] = E_{p(x,y|f^*(x)≤y)}[G(x, y)]   (21)
for any function G : R^D × R → R, as long as the expectations exist. When f ∈ F′, from Lemma A.2, we then have
E_{p(x,y′|f(x)≤y′)}[G(x, y′)] = E_{p(x,y|f(x)≤y)}[G(x, y)].   (22)
A.2 PROOF OF PROPOSITION 3.2
Proof. From the decomposed gradient ∇L(f_t) in Equation 7, we derive the proposed gradient using only expectations over p(x, y′). From Condition 3.1 for L(f(x), y), ∇L(f(x), y) = g(f(x)) when y < f(x). Thus, Equation 7 can be rewritten as
∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] + p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))],   (23)
where y is marginalized out in the expectation in the second term, since g(f_t(x)) does not depend on y. Here, Equation 6 and Equation 7 can be rewritten by replacing ∇L(f_t(x), y) with g(f_t(x)) as
E_{p(x,y)}[g(f_t(x))] = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[g(f_t(x))] + p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[g(f_t(x))]   (24)
p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[g(f_t(x))] = E_{p(x,y)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[g(f_t(x))].   (25)
Since g(f_t(x)) does not depend on y, we can marginalize out y in Equation 25 as
p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))] = E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y)}[g(f_t(x))].   (26)
From Equation 26, we can express Equation 23 as
∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] + E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y)}[g(f_t(x))].   (27)
Finally, from Lemma 2.2, we can rewrite Equation 27 as
∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y′|f_t(x)≤y′)}[∇L(f_t(x), y′)] + E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y′)}[g(f_t(x))],   (28)
which is identical to Equation 9. Thus, the gradient in Equation 9 is unbiased and consistent with the gradient in Equation 6 a.s.
A.3 PROOF OF LEMMA 3.4
Proof. The difference between the decomposed gradients ∇Ľ(f_t) and ∇L(f_t) at step t + 1 of gradient descent is
|∇Ľ(f_t) − ∇L(f_t)| = |p(f_t(x) ≤ y) E_{p(x,y′|f_t(x)≤y′)}[∇L(f_t(x), y′)] + p(y < f_t(x)) E_{p(x,y′|y′<f_t(x))}[∇L(f_t(x), y′)] − p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] − p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[∇L(f_t(x), y)]|.   (29)
From Lemma 2.2 and Condition 3.1,
|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) E_{p(x,y′|y′<f_t(x))}[∇L(f_t(x), y′)] − p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[∇L(f_t(x), y)]|
= |p(y < f_t(x)) E_{p(x|y′<f_t(x))}[g(f_t(x))] − p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))]|.   (30)
We decompose E_{p(x|y′<f_t(x))}[g(f_t(x))] again as
|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) ( p(f_t(x) ≤ y|y′ < f_t(x)) E_{p(x|y′<f_t(x) ∧ f_t(x)≤y)}[g(f_t(x))] + p(y < f_t(x)|y′ < f_t(x)) E_{p(x|y′<f_t(x) ∧ y<f_t(x))}[g(f_t(x))] ) − p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))]|.   (31)
The condition y′ < f_t(x) ∧ y < f_t(x) is equivalent to the condition y < f_t(x), since y′ ≤ y from Assumption 2.1 and thus p(y′ < f_t(x)|y < f_t(x)) = 1. Then, we have
|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) ( p(f_t(x) ≤ y|y′ < f_t(x)) E_{p(x|y′<f_t(x) ∧ f_t(x)≤y)}[g(f_t(x))] + p(y < f_t(x)|y′ < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))] ) − p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))]|.   (32)
Additionally, since p(y < f_t(x)|y′ < f_t(x)) = 1 − p(f_t(x) ≤ y|y′ < f_t(x)),
|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) p(f_t(x) ≤ y|y′ < f_t(x)) ( E_{p(x|y′<f_t(x) ∧ f_t(x)≤y)}[g(f_t(x))] − E_{p(x|y<f_t(x))}[g(f_t(x))] )|.   (33)
This equation shows that the bias is given by the difference between the expectation of g(f_t(x)) over the lower-side data and that over the original upper-side data mixed into the lower side by incomplete observations, together with the corresponding proportions. From Assumption 3.3, since ε_a ⊥ x,
|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) p(f_t(x) ≤ y|y′ < f_t(x)) ( E_{p(x|f_t(x)≤y)}[g(f_t(x))] − E_{p(x|y<f_t(x))}[g(f_t(x))] )|.   (34)
Since |f − f^*| ≤ |ε_s| a.s., we have p(f_t(x) ≤ y) = η and p(y < f_t(x)) = 1 − η from their definitions, so
p(f_t(x) ≤ y|y′ < f_t(x)) = p(f_t(x) ≤ y) p(ε_a < 0) / ( p(y < f_t(x)) + p(f_t(x) ≤ y) p(ε_a < 0) ) = η(1 − ξ) / ((1 − η) + η(1 − ξ)).   (35)
Therefore, from the definition of δ,
|∇Ľ(f_t) − ∇L(f_t)| ≥ η(1 − η)(1 − ξ) / ((1 − η) + η(1 − ξ)) · δ.   (36)
B IMPLEMENTATION OF LEARNING ALGORITHM BASED ON STOCHASTIC OPTIMIZATION
We scale up our U2 regression algorithm by stochastic approximation with M mini-batches and add a regularization term R(f):
∇L̂^{(m)}(f_t) = Σ_{(x,y′)∈{X_up^{(m)}, y′_up^{(m)}}} ∇L(f_t(x), y′) + ρ Σ_{x∈X_un^{(m)}} g(f_t(x)) − Σ_{x∈X_up^{(m)}} g(f_t(x)) + λ ∂R(f_t)/∂θ,   (37)
where ∇L̂^{(m)}(f_t) is the gradient for the m-th mini-batch; {X_up^{(m)}, y′_up^{(m)}} and X_un^{(m)} are, respectively, the upper-side and unlabeled sets in the m-th mini-batch based on the current f_t; λ is a regularization parameter; and the regularization term R(f) is, for example, the L1 or L2 norm of the parameter vector θ of f. We also convert n_up/(π_up N) into a hyperparameter ρ, ignoring constant coefficients, instead of handling π_up directly. The hyperparameters ρ and λ are optimized during training by grid search on the validation set. The U2 regression algorithm based on stochastic optimization is described in Algorithm 1. We learn the regression function with the gradient in Equation 37 using any stochastic gradient method. Here, we used Adam with the hyperparameters recommended in Kingma & Ba (2015), and the number of samples per mini-batch was set to 32. We set the candidates for the hyperparameters ρ and λ to {10^{-3}, 10^{-2}, 10^{-1}, 10^{0}}. Using the learned f, we can estimate ŷ = f(x) for new data x.
C ALGORITHM FOR LU REGRESSION
We show the algorithm for lower and unlabeled regression (LU regression), in which labels for some observations may become inconsistently higher than those for typical observations. Let L_LU(f(x), y) be a loss function for LU regression and g_LU(f(x)) be the gradient ∇L_LU(f(x), y) when f(x) ≤ y.
Similar to Condition 3.1 for U2 regression, we assume that the class of L_LU(f(x), y) satisfies the condition that g_LU(f(x)) is a gradient function depending only on f(x) and not on the value of y. Then, LU regression is Algorithm 1 with the following gradient, ∇L̂_LU^{(m)}(f_t), in place of ∇L̂^{(m)}(f_t) in Equation 37:
∇L̂_LU^{(m)}(f_t) = Σ_{(x,y′)∈{X_lo^{(m)}, y′_lo^{(m)}}} ∇L_LU(f_t(x), y′) + ρ Σ_{x∈X_un^{(m)}} g_LU(f_t(x)) − Σ_{x∈X_lo^{(m)}} g_LU(f_t(x)) + λ ∂R(f_t)/∂θ,   (38)
where {X_lo^{(m)}, y′_lo^{(m)}} and X_un^{(m)} are, respectively, the lower-side and unlabeled sets in the m-th mini-batch based on the current f_t.
Algorithm 1: U2 regression based on a stochastic gradient method.
Input: Training data D′ = {x_n, y′_n}_{n=1}^N; hyperparameters ρ, λ ≥ 0; an external stochastic gradient method A
Output: Model parameters θ for f
1: while no stopping criterion has been met:
2:   Shuffle D′ into M mini-batches {X^{(m)}, y^{(m)}}_{m=1}^M
3:   for m = 1 to M:
4:     Compute the gradient ∇L̂^{(m)}(f_t) in Equation 37 with {X^{(m)}, y^{(m)}}
5:     Update θ by A with ∇L̂^{(m)}(f_t)
D COMPUTING INFRASTRUCTURE
All of the experiments were carried out with a Python and TensorFlow implementation on workstations with 80 GB of memory, a 4.0 GHz CPU, and an Nvidia Titan X GPU. In this environment, the computational time to produce the results was a few hours.
E DETAILS OF EXPERIMENTS IN SECTION 4.1.1
E.1 SYNTHETIC DATASETS
We conducted experiments on synthetic data to evaluate the feasibility of our method for obtaining unbiased learning results from asymmetrically-corrupted data containing different proportions of incomplete observations. We generated synthetic data on the basis of Assumption 2.1 and Equation 4. We randomly generated N = 1000 training samples, X = {x_n}_{n=1}^N, from the standard Gaussian distribution N(x_n; 0, I), where the number of features in x was D = 10 and I is the identity matrix. Then, using X, we generated the corresponding N true labels y = {y_n}_{n=1}^N from the distribution N(y_n; w^⊤x_n, β), where w are coefficients also randomly generated from the standard Gaussian distribution N(w; 0, I), β is the noise precision, and ⊤ denotes the transpose. To simulate the situation in which labels have incomplete observations, we created corrupted labels y′ = {y′_n}_{n=1}^N by randomly selecting K percent of the data in y and subtracting from their values the absolute value of white Gaussian noise with twice the precision of y, i.e., 2β. We repeatedly evaluated the proposed method for each of the following settings. The noise precision was β = {10^{0}, 10^{-1}}, corresponding to a low-noise task (LowNoise) and a high-noise task (HighNoise), and the proportion of incomplete training samples was K = {25, 50, 75}%. In the case of K = 75%, only 25 percent of the samples were correctly labeled, and all other samples carried labels lower than the corresponding true values; it is quite difficult to learn regression functions from such data. In these tasks, we used a linear model θ^⊤x for f(x) and an implementation of Equation 37 with the absolute loss, which satisfies Condition 3.1, as the loss function L, and L1 regularization as the regularization term. We set the candidates for the hyperparameters ρ and λ to {10^{-3}, 10^{-2}, 10^{-1}, 10^{0}}. We standardized the data by subtracting the mean and dividing by the standard deviation of the training split. We used Adam with the hyperparameters recommended in Kingma & Ba (2015), and the number of samples per mini-batch was set to 32.
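A compact sketch of the Algorithm 1 update with the Equation 37 gradient, under the Appendix E.1 setup (linear model, absolute loss, L1 regularization), follows; plain SGD stands in for Adam, and all names and default values are illustrative assumptions rather than the authors' code.

import numpy as np

def u2_minibatch_grad(theta, Xb, yb, rho, lam):
    # Eq. 37 gradient for the absolute loss and f(x) = theta^T x with L1 penalty.
    f = Xb @ theta
    up = f <= yb                                       # upper-side set in the batch
    grad = (-Xb[up]).sum(axis=0)                       # upper-side loss term
    grad += rho * Xb.sum(axis=0) - Xb[up].sum(axis=0)  # unlabeled minus upper g-terms
    grad += lam * np.sign(theta)                       # subgradient of L1 regularizer
    return grad

def fit_u2(X, y_corrupt, rho=0.1, lam=0.01, lr=1e-3, epochs=50, batch=32, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):                            # stopping criterion: epoch count
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch):          # shuffle into mini-batches
            b = idx[start:start + batch]
            theta -= lr * u2_minibatch_grad(theta, X[b], y_corrupt[b], rho, lam)
    return theta

In practice, rho and lam would be selected by grid search on a held-out validation set, as described in Appendix B.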
We also used a real-world sensor dataset collected from Kaggle (Sen, 2016) that contains breathing signals (Breathing). The dataset consists of N = 1,432 samples. We used signals from a chest belt as X = {x_n}_{n=1}^N, and each x had D = 2 features, i.e., the period and height of the expansion/contraction of the chest. We used signals obtained by the Douglas bag (DB) method, the gold standard for measuring ventilation, as true labels y = {y_n}_{n=1}^N. For our problem setting, we created corrupted labels y′ = {y′_n}_{n=1}^N through the same synthetic corruption procedure as for LowNoise and HighNoise with K = {25, 50, 75}%. In the experiment on Breathing, to capture its non-linearity, we used θ^⊤φ(x, σ) for f(x), where φ is a radial basis function with the training set as its bases, and σ is a hyperparameter representing the kernel width, also optimized on the validation set. We set the candidates for σ to {10^{-3}, 10^{-2}, 10^{-1}, 10^{0}}. The other implementation details were the same as for LowNoise and HighNoise.
E.2 DETAILED RESULTS
Figure 3 shows the error between the estimates of the proposed method and the true values, and likewise for MSE, on LowNoise, HighNoise, and Breathing with 25 and 75 percent incomplete training samples. Table 3 shows the performance of the proposed method and MSE on LowNoise, HighNoise, and Breathing. As shown in Figure 3, the proposed method obtains unbiased learning results in all cases, while MSE produces biased results. From Table 3, we can see that the proposed method outperforms MSE overall. We found that, unlike MSE, the performance of our method is not significantly affected by an increase in the proportion K of incomplete training samples, even for K = 75%.
E.3 PERFORMANCE OVER DIFFERENT SIZES OF VALIDATION SET
To demonstrate the robustness of our validation-set-based approach to estimating the hyperparameter π_up, we show the performance of the proposed method over different sizes of the validation set in Fig. 4. This analysis is conducted on the tasks in Section 4.1.1, LowNoise, HighNoise, and Breathing, with K = 50%. Figure 4 shows that the performance of the proposed method does not degrade much even when we use only 1% of the training set as the validation set. This demonstrates that the proposed approach is robust to small validation sets as well as to a high proportion of incomplete validation samples. In Fig. 5, we also show a chart similar to Fig. 2 (the error in prediction) when using 1% of the training set as the validation set. Even in this case, the proposed method achieved unbiased learning (the average error, shown by the blue solid line, is approximately zero).
F DETAILS OF EXPERIMENTS IN SECTION 4.1.2
We applied the algorithm to five different real-world healthcare tasks recorded in datasets from the UCI Machine Learning Repository (Velloso, 2013; Velloso et al., 2013), which contain sensor outputs from wearable devices attached to the arm while subjects exercised. From non-intrusive sensors attached to gym equipment, we estimated the motion intensity of a subject that was measured accurately with an arm sensor, an intrusive sensor wrapped around the arm. If we can mimic the outputs of the arm sensor with outputs from the equipment sensor, it could contribute to the subjects' comfort, as they would not need to wear sensors to measure their motion intensity.
We used all of the features from the equipment sensor that took “None” values fewer than ten times as X = {x_n}_{n=1}^N, where each sample had D = 13 features. The corrupted labels y′ = {y′_n}_{n=1}^N were the magnitude of acceleration from the arm sensor, which can accurately sense motion intensity on the arm but has insufficient data coverage and incomplete or missing observations for the movements of other body parts. For performance evaluation, we used the magnitude of acceleration of the entire body as true labels y = {y_n}_{n=1}^N. The numbers of samples were N = 11,159, N = 7,593, N = 6,844, N = 6,432, and N = 7,214, respectively, for the tasks Specification, Throwing A, Lifting, Lowering, and Throwing B. Given the complex nature of the tasks, we used a 6-layer multilayer perceptron with ReLU (Nair & Hinton, 2010) (more specifically, D-100-100-100-100-1) as f(x), which also demonstrates the usefulness of the proposed method for training deep neural networks. We used dropout (Srivastava et al., 2014) with a rate of 50% after each fully connected layer. We used two implementations of L(f(x), y) for f(x) ≤ y′ in Equation 37: the absolute loss (Proposed-1) and the squared loss (Proposed-2). For both implementations, we used the absolute loss, which satisfies Condition 3.1, for the loss function L(f(x), y) when y′ < f(x), and we used L1 regularization for the regularization term. The other implementation details were the same as for LowNoise, HighNoise, and Breathing.
G DETAILS OF EXPERIMENTS IN SECTION 4.1.3
We demonstrate the practicality of our approach in a real healthcare use case. From non-intrusive bed sensors installed under each of the four legs of a bed, we estimated the motion intensity of a subject that was measured accurately with ActiGraph, a gold-standard intrusive sensor wrapped around the wrist (Tryon, 2013; Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992). The sensing results of ActiGraph are used for tasks such as discriminating whether a subject is asleep or awake (Cole et al., 1992). While ActiGraph can accurately sense motion on the forearm, it has insufficient data coverage elsewhere and often misses movements of other body parts. The bed sensors have broader data coverage, since they can sense global motion across all body parts, but their sensing accuracy is limited by their non-intrusiveness. If we can mimic the outputs of ActiGraph with outputs from the bed sensors, we can expect to achieve sufficient accuracy and coverage while also easing the burden on the subject. The dataset we used included three pieces of data, Data (i), (ii), and (iii), recorded over 20, 18, and 18.5 minutes, respectively. Each piece of data consists of pairs of bed-sensor-data sequences and the corresponding motion-intensity sequence obtained by ActiGraph. We used the “magnitude” attribute of ActiGraph as the corrupted labels y′ for motion intensity, with a sampling rate of about one sample per second. For the true labels y, we manually measured the motion intensity every minute under the supervision of a domain expert. For X, we first computed the gravity center of the four sensor outputs obtained from the bed sensors under the four legs of the bed. Then, we computed the time derivatives and cross terms of the raw sensor outputs and the gravity center. The sampling rate of the bed sensors was different from that of ActiGraph: about one sample per five milliseconds.
Thus, X was finally generated as a sliding window of statistics over 1,000-millisecond (1-second) subsequences of the time series of the variables computed above, where 1 second matches the sampling interval of ActiGraph. The statistics were means, standard deviations, and the {0.05, 0.25, 0.5, 0.75, 0.95} quantiles. In this task, we used the linear model θ^⊤x for f(x) because of its interpretability, which is indispensable in real-world healthcare and medical applications.
G.1 ESTIMATION RESULTS FOR MOTION INTENSITY
Figure 6 compares our estimation results for motion intensity with the output of ActiGraph and the true labels.
G.2 IMPORTANT FEATURES FOR ESTIMATING MOTION INTENSITY
The important features selected by L1 regularization were the statistics of the gravity center and of the cross terms and time derivatives of the raw sensor outputs. The largest weight was assigned to the standard deviation of the gravity center, which represents the amplitude of the gravity center and is therefore directly related to the motion of subjects.
H OTHER POSSIBLE USE CASES OF REGRESSION FOR SENSOR MAGNITUDE
Examples of predicting the magnitude values of a sensor, which is a field of application for U2 regression, can be found in several areas. Besides the medical and healthcare applications discussed in the main text, another example is estimating the wind speed or rainfall in a specific region from observable macroscopic information (Cheng & Tan, 2008; Abraham & Tan, 2010; Abraham et al., 2013; Vandal et al., 2017), known as statistical downscaling (Wilby et al., 2004). Wind speed and rainfall, which are the labels in these tasks, can be sensed locally in only a limited number of locations and therefore provide incomplete observations and biased labels compared with the macroscopic information that serves as the explanatory variables.
1. What is the focus of the paper regarding regression problems with incomplete observations?
2. What are the strengths and weaknesses of the proposed algorithm, particularly in its assumptions and experiments?
3. Do you have any concerns regarding the approach's ability to handle asymmetrically corrupted datasets?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper considered a regression problem where incomplete observations have labels biased toward lower values. The authors proposed an algorithm that utilizes only high-valued labels and discards the lower-valued labels. They conducted several experiments on both synthetic and real-world datasets to demonstrate their algorithm.
Strengths And Weaknesses
Strength: The idea is explained well. Their work is motivated by real-world use cases, which makes it interesting.
Weakness: In general, I do not really buy this idea for addressing asymmetrically corrupted datasets, and the assumptions made seem problematic. On a high level, completely ignoring the lower-valued labels, whether they are true low values or come from incomplete observations, could potentially lose some information. A couple of questions are in order.
1. As the authors showed in the experiments, Huber regression performed badly. But have you tried using Huber regression as an outlier-detection step? Namely, identify observations whose (x, y) relationship differs significantly from the others, remove those from the sample set, and then apply least-squares regression to the remaining normal data points. The logic is that I do not think we can completely remove lower-valued labels; instead, we can do some pre-processing to identify abnormal data points.
2. The high-value / low-value scheme does not seem to be well supported. Section 2.1 is not needed in my view; it is common knowledge and should be described very briefly. Based on your definition of upper-side labeled data, their noise ε_s ≥ 0. But technically ε_s could also be negative, so you are missing a portion of complete observations by restricting to the upper-side labeled data.
3. Lemma 2.2 also seems problematic. Ideally, when there is no corruption, namely (x, y), we want to use all the data, not just the data with f(x) ≤ y. Eq. (5) doesn't seem to establish the consistent estimator the authors claimed.
4. The authors claimed that for some loss functions, when y < f(x), the gradient of the loss function does not depend on y, e.g., Eq. (8). But notice that even when y > f(x), the gradient of this type of loss also does not depend on y. So it is a property of the loss function itself, not of the upper or lower sides of the labels. When you switch to the squared loss (which was used in Proposed-2 in the experimental section and thus violates your assumption here), the gradient depends on y whether you are in the upper- or lower-side region.
5. To identify the upper or lower sides, you need an iterative process in which you use the f_t from the previous iteration to define the upper or lower region. Does this unnecessarily complicate the problem?
Clarity, Quality, Novelty And Reproducibility
The paper is explained in a relatively clear way, but the quality and novelty are not sufficient for publication at ICLR. There are also problematic assumptions and results, which I have brought up in the section above.
ICLR
Title
Learning from Asymmetrically-corrupted Data in Regression for Sensor Magnitude

Abstract
This paper addresses a regression problem in which output label values represent the results of sensing the magnitude of a phenomenon. A low value of such labels can either mean that the actual magnitude of the phenomenon has been low or that the sensor has made an incomplete observation. This leads to a bias toward lower values in labels and its resultant learning because labels for incomplete observations are recorded as lower than those for typical observations, even if both have monitored similar phenomena. Moreover, because an incomplete observation does not provide any tags indicating incompleteness, we cannot eliminate or impute them. To address this issue, we propose a learning algorithm that explicitly models the incomplete observations to be corrupted with an asymmetric noise that always has a negative value. We show that our algorithm is unbiased with a regression learned from the uncorrupted data that does not involve incomplete observations. We demonstrate the advantages of our algorithm through numerical experiments.

1 INTRODUCTION
This paper addresses a regression problem for predicting the magnitude of a phenomenon when an observed magnitude involves a particular measurement error. The magnitude typically represents how large a phenomenon is or how strong the nature of the phenomenon is. Such examples of predicting the magnitude are found in several application areas, including pressure, vibration, and temperature (Vandal et al., 2017; Shi et al., 2017; Wilby et al., 2004; Tanaka et al., 2019). In medicine and healthcare, the magnitude may represent pulsation, respiration, or body movements (Inan et al., 2009; Nukaya et al., 2010; Lee et al., 2016; Alaziz et al., 2016; 2017; Carlson et al., 2018). More specifically, we learn a regression function to predict the label representing the magnitude of a phenomenon from explanatory variables. The training data consists of pairs of the label and explanatory variables, but note that the label in the data is observed with a sensor and is not necessarily in agreement with the actual magnitude of the phenomenon. We note that we use the term "label" even though we address the regression problem, and it refers to a real-valued label in this paper.
In the example of predicting the magnitude of body movements, the label in the data is measured with an intrusive sensor attached to the chest or the wrist, and the explanatory variables are the values measured with non-intrusive bed sensors (Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992; Tryon, 2013). A regression function for this example would make it possible to replace intrusive sensors with non-intrusive ones, which in turn will reduce the burden on patients. Although the sensors that measure the label generally have high accuracy, they often make incomplete observations, and such incomplete observations are recorded as low values instead of missing values. This leads to the particular challenge where a low value of the label can either mean that the actual magnitude of the phenomenon has been low or that the sensor has made an incomplete observation, and there are no clues that allow us to tell which is the case. We illustrate this challenge in Fig. 1-(a). Such incomplete observations are prevalent in measuring the magnitude of a phenomenon. For example, the phenomenon may be outside the coverage of a sensor, or the sensing system may experience temporal mechanical failures. In the example of body movements, the sensor may be temporarily detached from the chest or wrist. In all cases, the sensor keeps recording low values, while the actual magnitude may be high, and no tag indicating incompleteness can be provided. This incomplete observation is particularly severe for the sensor measuring the label since it is single-source and has narrower data coverage. This stems from the fact that the sensor is usually intrusive or it is costly to produce highly accurate observations for measuring the label. Examples of this can be seen in chest or wrist sensors that focus on the movements of a local body part with high accuracy and often miss movements outside their coverage, such as those of parts located far from where the sensor is attached. At most, a single intrusive sensor can be attached to a patient to avoid burdening them. In contrast, the sensors measuring the explanatory variables are usually multi-source and provide broader data coverage. For example, multiple sensors can be attached to various places of a bed and globally monitor the movements of all body parts on the bed but with lower accuracy. One cannot simply ignore the problem that the observations of labels may be incomplete because the estimated regression functions trained on such data with incomplete observations are severely biased toward lower values regardless of the amount of available training data. This bias comes from the fact that incomplete observations always have lower values than the actual magnitude of a phenomenon, and they occur intensively on label sensors, while explanatory variables are usually observed completely. Moreover, incomplete observations can be much more frequent than expected. Unfortunately, since we cannot identify which observations are incomplete, we cannot eliminate or impute them by using existing methods that require identifying incomplete observations. Such methods include thresholding, missing value detection (Pearson, 2006; Qahtan et al., 2018), imputation (Enders, 2010; Smieja et al., 2018; Ma & Chen, 2019; Sportisse et al., 2020), and semi-supervised regression (Zhou & Li, 2005; Zhu & Goldberg, 2009; Jean et al., 2018; Zhou et al., 2019). 
The issues of incomplete observations also cannot be solved with robust regression (Huber et al., 1964; Narula & Wellington, 1982; Draper & Smith, 1998; Wilcox, 1997), which takes into account the possibility that the observed labels contain outliers. While robust regression is an established approach and state-of-the-art against corrupted labels in regression, it assumes symmetric label corruption; namely, the noise is assumed to not be biased either positively or negatively. Since incomplete observations induce noise that is severely biased toward lower values, robust regression methods still produce regression functions that are biased toward lower values relative to the one that would be learned from data without incomplete observations. In this paper, to mitigate the bias toward lower values, we explicitly assume the existence of noise from incomplete observations, which always has negative values, in addition to the ordinary symmetric noise. That is, we consider our training data to be asymmetrically-corrupted data. We then formulate a regression problem from our asymmetrically-corrupted data and design a principled learning algorithm for this regression problem. By explicitly modeling the incomplete observations, we derive a learning algorithm that has a rather drastic feature: namely, it ignores the labels that have relatively low values (lower-side labeled data). In other words, our algorithm uses the data whose labels have relatively high values (upper-side labeled data) and the data whose labels are ignored (unlabeled data). Hence, we refer to our algorithm as upper and unlabeled regression (U2 regression). This aligns with the intuition that labels with low values are unreliable, since those low values may be due to incomplete observations. Our main result is that U2 regression, which learns from the asymmetrically-corrupted data, produces a regression function that is, under some technical assumptions, unbiased and consistent with the one that is produced from the uncorrupted data that does not involve incomplete observations. This counterintuitive result is achieved by considering a specific class of loss functions and deriving their gradient, which only requires upper-side labeled data and unlabeled data in the asymmetrically-corrupted data and can still be shown to be asymptotically equivalent to the expression of the gradient that has access to the uncorrupted data. The main novelty in our approach is thus in the loss function, and we will empirically demonstrate the effectiveness of the proposed class of loss functions over existing common loss functions in dealing with asymmetrically-corrupted data in synthetic and six real-world regression tasks.

Contributions. The main contributions of this paper are summarized as follows.
• We formulate a novel problem of learning a regression function from asymmetrically-corrupted data. This is important for applications where the magnitude of a phenomenon is measured with a sensor that is susceptible to unidentifiable incomplete observations.
• We derive an unbiased and consistent learning algorithm (U2 regression) for this problem from the new class of loss functions.
• Extensive experiments on synthetic and six real-world regression tasks, including a real use case for healthcare, demonstrate the effectiveness of the proposed method.
2 REGRESSION FROM ASYMMETRICALLY-CORRUPTED DATA
Our goal is to derive a learning algorithm with asymmetrically-corrupted data, i.e., labels in the training data are corrupted with negative-valued noise due to incomplete observations, in a manner that is unbiased and consistent with the regression that uses uncorrupted data without incomplete observations. We first consider the regression problem that uses the uncorrupted data in Section 2.1 and then formulate learning from the asymmetrically-corrupted data in Section 2.2.

2.1 REGRESSION PROBLEM FROM DATA WITHOUT INCOMPLETE OBSERVATIONS
Let x ∈ R^D (D ∈ N) be a D-dimensional explanatory variable and y ∈ R be a real-valued label. We assume that, without incomplete observations, y is observed in accordance with

y = f∗(x) + ε_s,   (1)

where f∗ is the oracle regressor and ε_s is symmetric noise centered at 0, such as additive white Gaussian noise (AWGN). We learn a regression function f(x) that computes the estimate of a label, ŷ, for a newly observed x as ŷ = f(x). The optimal regression function, f̂, is given by

f̂ ≡ argmin_{f∈F} L(f),   (2)

where F is a hypothesis space for f, and L(f) is the expected loss when the regression function f(x) is applied to data (x, y) distributed in accordance with an underlying distribution p(x, y):

L(f) ≡ E_{p(x,y)}[L(f(x), y)],   (3)

where E_p[·] denotes the expectation over the distribution p, and L(f(x), y) is the loss function between f(x) and y, e.g., the squared loss L(f(x), y) = ‖f(x) − y‖². The expectation E_{p(x,y)} can be estimated by computing a sample average over the training data D ≡ {(x_n, y_n)}_{n=1}^{N}, which consists of N pairs of explanatory variables and labels.

2.2 REGRESSION PROBLEM FROM ASYMMETRICALLY-CORRUPTED DATA
In this paper, we consider a scenario in which we only have access to the asymmetrically-corrupted data D′ ≡ {(x_n, y′_n)}_{n=1}^{N}, where a label y′ may be corrupted due to incomplete observations. A corrupted label y′ is observed from the uncorrupted y with an asymmetric negative-valued noise ε_a:

y′ = y + ε_a,   (4)

where the asymmetric noise ε_a always takes a random negative value (or zero), which means y′ ≤ y. Using only D′, we learn a regression function f(x) as the solution to Equation 2 in an unbiased and consistent manner. Although AWGN can be handled even by a naive regression method such as least squares, the asymmetric noise ε_a, which always has a negative value, is problematic. Intuitively, ε_a makes lower-side labeled data particularly unreliable and inappropriate for learning, while keeping upper-side labeled data reliable, where the upper-side labeled data refers to the data {(x, y)} whose labels lie above the regression line (i.e., f(x) ≤ y) and the lower-side labeled data refers to the data whose labels lie below the regression line. The regression line represents the estimate of the regression function. Figure 1-(b) illustrates this as a scatter plot of the value of the label against the value of an explanatory variable. Here, the data with incomplete observations appear only below the regression line, because ε_a makes observations have lower label values than those of typical observations, where the regression line represents such typical observations. This asymmetry leads to biased learning compared with learning from the uncorrupted data without incomplete observations.
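As a concrete illustration of the observation model in Equations 1 and 4, the following minimal sketch simulates asymmetrically-corrupted data; the distributions and constants are hypothetical choices for illustration, not the paper's exact experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 1000, 10

# Oracle linear regressor f*(x) = w^T x (hypothetical ground truth).
w = rng.standard_normal(D)
X = rng.standard_normal((N, D))

eps_s = 0.1 * rng.standard_normal(N)         # symmetric noise, zero mean
y = X @ w + eps_s                            # uncorrupted labels (Eq. 1)

# Asymmetric noise: zero for complete observations, strictly negative
# for incomplete ones (here, a randomly chosen 50% of the samples).
incomplete = rng.random(N) < 0.5
eps_a = np.where(incomplete, -np.abs(rng.standard_normal(N)), 0.0)
y_corrupt = y + eps_a                        # observed labels (Eq. 4)

assert np.all(y_corrupt <= y)                # eps_a <= 0 almost surely
```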
To address the asymmetric noise ε_a and its resultant bias, we formalize the assumption on the observation process for the asymmetrically-corrupted data D′ and derive a lemma representing the nature of D′. Then, we propose a learning algorithm based on the lemma in the next section. The observation processes of D and D′ are formally characterized as follows.

Assumption 2.1. Assume ε_s ⊥ f∗(x) and E_{p(ε_s)}[ε_s] = 0; ε_a ⊥ f∗(x) and ε_a ≤ 0 almost surely (a.s.); 2|ε_s| < |ε_a| a.s. when ε_a < 0; and {(x_n, y_n, y′_n)}_{n=1}^{N} are i.i.d. observations in accordance with Equation 1 and Equation 4.

This assumption means that D′ has enough information to estimate f, and that the asymmetric noise ε_a is significant enough compared to the symmetric noise ε_s; these are necessary conditions for the learning problem to be solvable and for ε_a to be handled separately from ε_s. From Assumption 2.1, we then have the following lemma.

Lemma 2.2. Let F′ ≡ {f ∈ F : |f(x) − f∗(x)| ≤ |ε_s| a.s.}. When f ∈ F′, the following holds for y ≡ f∗(x) + ε_s and y′ ≡ y + ε_a under Assumption 2.1:

E_{p(x,y′|f(x)≤y′)}[G(x, y′)] = E_{p(x,y|f(x)≤y)}[G(x, y)]   (5)

for any function G : R^D × R → R, as long as the expectations exist.

Proof. We outline a proof here and provide a complete one in Appendix A.1. We first show that ε_a does not change the distribution of upper-side labeled data (f∗(x) ≤ y′) defined on the basis of the oracle regression function f∗ before and after adding ε_a, i.e., ε_a = 0 when f∗(x) ≤ y′. With the condition f ∈ F′, we can further prove that ε_a = 0 when f(x) ≤ y′, i.e., for upper-side labeled data defined on the basis of f. This establishes p(x, y′|f(x) ≤ y′) = p(x, y|f(x) ≤ y) and implies Lemma 2.2.

The condition parts of these conditional distributions represent the relationships between labels and the estimates of the regression function f; e.g., p(x, y|f(x) ≤ y) is the distribution of x and y when y is higher than what is given by f. The condition f ∈ F′ represents our natural expectation that the regression function f well approximates f∗. Lemma 2.2 shows that ε_a does not change the expectation over our upper-side labeled data (f(x) ≤ y′) before and after adding ε_a, which keeps such data reliable for regression. In the next section, we derive an unbiased learning algorithm based on this lemma.

3 U2 REGRESSION
We seek to find the minimizer of the objective in Equation 2 from the asymmetrically-corrupted data D′. To this end, we propose a gradient that relies only on knowledge of the distribution of the corrupted data p(x, y′) but is still equivalent to the gradient of Equation 3, which relies on knowledge of the distribution of the uncorrupted data p(x, y). Based on Lemma 2.2, we rewrite the gradient based on p(x, y) into one that only requires p(x, y′).

3.1 GRADIENT FOR LEARNING FROM ASYMMETRICALLY-CORRUPTED DATA
Here, we address Equation 2 with gradient descent. At step t + 1 of gradient descent, the gradient of Equation 3 with respect to the parameters θ of f is represented with the regression function f_t estimated at step t as follows:

∇L(f_t) ≡ E_{p(x,y)}[∇L(f_t(x), y)], where ∇L(f_t(x), y) ≡ ∂L(f(x), y)/∂θ |_{f=f_t}.   (6)

Note that this holds for any step of gradient descent. When t = 0, f_0 is the initial value of f, and when t = ∞, we suppose f_∞ = f̂. We can decompose ∇L(f_t) as

∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] + p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[∇L(f_t(x), y)].   (7)
We then assume that, when y < f(x), the gradient of the loss function does not depend on y and only depends on f(x); we thus write ∇L(f(x), y) as g(f(x)) when y < f(x) to emphasize this independence. Formally,

Condition 3.1. Let g(f(x)) be ∇L(f(x), y) for y < f(x). g(f(x)) is a gradient function depending only on f(x) and not on the value of y.

Common losses satisfying this condition are the absolute loss and the pinball loss, which are respectively used in least-absolute regression and quantile regression and work well on real data (Lee et al., 2016; Yeung et al., 2002; Wang et al., 2005; Srinivas et al., 2020). For example, the gradient of the absolute loss is

∂|f(x) − y|/∂θ = ∂f(x)/∂θ when y < f(x),   (8)

which does not depend on the value of y but only on f(x). We now propose a gradient that does not rely on knowledge of p(x, y) but instead uses only p(x, y′). Namely,

∇L̃(f_t) ≡ p(f_t(x) ≤ y) E_{p(x,y′|f_t(x)≤y′)}[∇L(f_t(x), y)] + E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y′)}[g(f_t(x))].   (9)

In Section 3.2, we formally establish the equivalence between the gradient in Equation 9 and that in Equation 6 under our assumptions. Note that in the second and third terms of Equation 9, we apply expectations over p(x) and p(x|f_t(x) ≤ y′) to g(f(x)), even though g(f(x)) is defined to be the gradient ∇L(f(x), y) for y < f(x). This is tractable due to the nature of g(f(x)), which depends only on f(x) and not on the value of y. Since the expectations in Equation 9 depend only on x and y′, they can be estimated by computing a sample average over our asymmetrically-corrupted data D′ as

∇L̂(f_t) = (π_up/n_up) Σ_{(x,y)∈{X_up, y′_up}} ∇L(f_t(x), y) + (1/N) Σ_{x∈X_un} g(f_t(x)) − (π_up/n_up) Σ_{x∈X_up} g(f_t(x)),   (10)

where {X_up, y′_up} is the set of coupled pairs of x and y′ in the upper-side labeled sample set, {x, y′ : f_t(x) ≤ y′}, in D′; X_un is the set of all x in D′, ignoring the labels y′; n_up is the number of samples in the upper-side labeled set; and π_up ≡ p(f_t(x) ≤ y). Note that π_up depends on the current estimate f_t and on the label y with complete observation. Thus, it changes at each step of gradient descent, and we cannot determine its value in a general way. In this paper, we propose the simple approach of treating π_up as a single hyperparameter. We optimize it with a grid search on the validation set, which enables us to flexibly handle data variation. We will show that this works well in practice in our experiments. As we will show in Section 3.2, we can use Equation 10 to design an algorithm that gives an unbiased and consistent regression function. By using the gradient in Equation 10, we can optimize Equation 2 and learn the regression function using only the upper-side labeled samples and the unlabeled samples from D′, independent of the lower-side labels. This addresses the issue that our lower-side labeled data are particularly unreliable, and it overcomes the bias that stems from this unreliable part of the data. We refer to our algorithm as upper and unlabeled regression (U2 regression); a minimal numerical sketch of the estimator in Equation 10 is given below. See Appendix B for the specific implementation of the algorithm based on stochastic optimization. The gradient in Equation 10 can be interpreted in an intuitive manner. The first term in Equation 10 has the effect of minimizing the upper-side loss. Recall that the upper-side data are not affected by the asymmetric noise under our assumptions. Thus, U2 regression seeks to learn the regression function f on the basis of this reliable upper-side data.
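The following sketch computes the empirical gradient of Equation 10 for a linear model f(x) = θ⊤x with the absolute loss, whose gradient satisfies Condition 3.1; π_up is treated as a hyperparameter as described above. Variable names are ours, and this is an illustrative reimplementation rather than the authors' released code.

```python
import numpy as np

def u2_gradient(theta: np.ndarray, X: np.ndarray, y_corrupt: np.ndarray,
                pi_up: float) -> np.ndarray:
    """Empirical U2 gradient (Eq. 10) for f(x) = theta^T x with the absolute loss.

    For the absolute loss, dL/dtheta = -x when f(x) <= y (upper side), and
    g(f(x)) = +x when y < f(x), so g depends only on f(x) (Condition 3.1).
    Note that for this loss the first and third terms of Eq. 10 coincide.
    """
    preds = X @ theta
    upper = preds <= y_corrupt                            # upper-side labeled set
    n_up = max(int(upper.sum()), 1)
    N = X.shape[0]

    grad_upper = -(pi_up / n_up) * X[upper].sum(axis=0)   # first term: upper-side loss
    grad_unlabeled = (1.0 / N) * X.sum(axis=0)            # second term: g over all x
    grad_corr = -(pi_up / n_up) * X[upper].sum(axis=0)    # third term: -g over upper side
    return grad_upper + grad_unlabeled + grad_corr
```

For this particular loss the estimator reduces to (1/N) Σ x − 2(π_up/n_up) Σ_{upper} x, matching the population identity E[x] − 2 p(f(x) ≤ y) E_{upper}[x].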
Notice that the first term becomes zero when all of the data points lie below f (i.e., y′ ≤ f_t(x) for all (x, y′) ∈ D′), since {X_up, y′_up} is then empty. The second term thus has the effect of pushing f down at all of the data points so that some data points lie above f. Meanwhile, the third term partially cancels this effect of the second term for the upper-side data, controlling the balance between the first and second terms.

3.2 UNBIASEDNESS AND CONSISTENCY OF GRADIENT
U2 regression is the learning algorithm based on the gradient ∇L̂(f_t) in Equation 10 and uses only the asymmetrically-corrupted data D′. The use of ∇L̂(f_t) can be justified as follows:

Proposition 3.2. Suppose that Assumption 2.1 holds and the loss function L(f(x), y) satisfies Condition 3.1. Then, the gradient ∇L̃(f_t) in Equation 9 and its empirical approximation ∇L̂(f_t) in Equation 10 are unbiased and consistent with the gradient ∇L(f_t) in Equation 6 a.s.

Proof. We outline a proof here and provide a complete one in Appendix A.2. First, we rewrite Equation 7 into a gradient that only contains the expectation over p(x, y|f_t(x) ≤ y), using Condition 3.1. Then, we apply Lemma 2.2 to this gradient, and it becomes an expression identical to Equation 9.

In other words, U2 regression asymptotically produces the same result as the learning algorithm based on the gradient ∇L(f_t) in Equation 6, which requires the uncorrupted data D without incomplete observations. The convergence rate of U2 regression is of the order O_p(1/√n_up + 1/√N) in accordance with the central limit theorem (Chung, 1968), where O_p denotes the order in probability. We further justify our approach of having the specific form of Equation 9 by showing that a straightforward variant that uses D′ as if it did not involve incomplete observations (i.e., p(x, y) ≈ p(x, y′)) can fail for our problem. To this end, we introduce an additional assumption on the observation process:

Assumption 3.3. Assume ε_a ⊥ x.

Then, we have

Lemma 3.4. Let ∇Ľ(f_t) be the variant of the gradient in Equation 7 that replaces p(x, y) with p(x, y′), let δ be the difference between the expectations of the gradients on the upper side and the lower side,

δ ≡ |E_{p(x,y|f(x)≤y)}[∇L(f(x), y)] − E_{p(x,y|y<f(x))}[∇L(f(x), y)]|,

let 0 < η < 1 be the probability that 0 ≤ ε_s, and let 0 < ξ < 1 be the probability that ε_a = 0. Then, ∇Ľ(f_t) is not consistent with the gradient in Equation 6 a.s., and the difference (bias) between them at step t + 1 of gradient descent satisfies

[η(1 − η)(1 − ξ) / ((1 − η) + η(1 − ξ))] δ ≤ |∇Ľ(f_t) − ∇L(f_t)|.   (11)

Proof. We outline a proof here and provide a complete one in Appendix A.3. We first show that the bias |∇Ľ(f_t) − ∇L(f_t)| can be represented by the difference between the expectation of g(f_t(x)) over the upper-side data and that over the lower-side data, which can be written in terms of δ. The bias also has a coefficient containing the proportions of the lower-side data and of the original upper-side data mixed into the lower side due to incomplete observations; these values can be written in terms of η and ξ by their definitions.

Lemma 3.4 shows that the bias caused by incomplete observations becomes severe when there is a large difference between the expectations of the gradients on the upper side and the lower side. δ is usually greater than zero, because δ = 0 would imply either that there is no difference between the expectations of the gradients on the two sides or that both expectations are zero.
Furthermore, a larger 1 − ξ = p(ε_a < 0) makes the bias more significant, which agrees with the intuition that the problem becomes more difficult as the proportion of incomplete observations increases.

4 EXPERIMENTS
We now evaluate the proposed method through numerical experiments. We first introduce the baselines to be compared in the experiments. Then, we present the experimental results to show the effectiveness of our unbiased learning.

Baselines. Recall that the novelty of the proposed approach lies in the unbiased gradient in Equation 10, which is derived from the new class of loss functions in Equation 9 with Condition 3.1. An objective of our experiments is thus to validate the effectiveness of this new class of loss functions and the corresponding gradients against common loss functions in the literature. Specifically, we compare the proposed method with MSE (mean squared error), MAE (mean absolute error), and Huber losses (Huber et al., 1964; Narula & Wellington, 1982; Wilcox, 1997). As robust loss functions for regression, MAE and Huber losses are considered the de facto standard and state-of-the-art in many studies and libraries. We use the same model and optimization method with all of the loss functions under consideration, so the only difference between the proposed method and the baselines is in the gradients. Since the loss function uniquely determines the baseline, we refer to each baseline method as MSE, MAE, or Huber.

4.1 EXPERIMENTAL PROCEDURE AND RESULTS
The experiments are organized into three parts. In Section 4.1.1, we visually demonstrate the effectiveness of the proposed approach in giving unbiased predictions. In Section 4.1.2, we intensively and quantitatively evaluate the predictive error of the proposed method and the baselines on five real-world regression tasks. In Section 4.1.3, we demonstrate the practical benefit of our approach in a real healthcare use case, which has motivated this work. See the appendix for the details of the experimental settings.

4.1.1 DEMONSTRATION OF UNBIASED LEARNING
Procedure. We start by conducting experiments on synthetic data to show the effectiveness of our method in obtaining unbiased learning results from asymmetrically-corrupted data with different proportions of incomplete observations, K = {25, 50, 75}%. We use three synthetic tasks, LowNoise, HighNoise, and Breathing, the last collected from a Kaggle dataset (Sen, 2016). We compare the proposed method against MSE, which assumes that both upper- and lower-side data are correctly labeled. This comparison shows whether our method can learn from asymmetrically-corrupted data in an unbiased manner, which MSE cannot do.

Results. In Fig. 2, we plot the error in prediction (i.e., the predicted value minus the true value) given by the proposed method and by MSE for each data point of the three tasks with K = 50%. Note that, for evaluating unbiasedness, these test sets do not contain incomplete observations. Since MSE regards both upper- and lower-side data as correctly labeled, it produces biased results due to the incomplete observations: the average error (shown by the green dashed line) is negative, meaning the estimate has a negative bias. In contrast, the average error of the proposed method (shown by the blue solid line) is approximately zero. This clearly shows that the proposed method obtains unbiased learning results. Figures for the other settings and tables showing quantitative performance are in Appendix E.
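As a sanity check mirroring the Fig. 2 evaluation, one can measure the signed prediction error on an uncorrupted test set; a minimal sketch, assuming a fitted parameter vector theta from the sketches above:

```python
import numpy as np

def bias_check(theta: np.ndarray, X_test: np.ndarray, y_test: np.ndarray) -> float:
    """Average signed error (predicted minus true) on an uncorrupted test set.

    A value near zero indicates unbiased learning; a clearly negative value
    reproduces the downward bias that MSE training exhibits in Fig. 2.
    """
    errors = X_test @ theta - y_test
    return float(errors.mean())
```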
4.1.2 PERFORMANCE COMPARISON AMONG DIFFERENT LOSS FUNCTIONS
Procedure. We next apply the proposed method and the baselines to five different real-world healthcare tasks from the UCI Machine Learning Repository (Velloso, 2013; Velloso et al., 2013) for a more extensive comparison between the proposed method and the baselines (MSE, MAE, and Huber). For the proposed method, we use two implementations of L(f(x), y) for f(x) ≤ y′ in Equation 10: the absolute loss (Proposed-1) and the squared loss (Proposed-2). Here, we report the mean absolute error (MAE), and its standard error, of the predictions ŷ = {ŷ_n}_{n=1}^{N} against the corresponding true labels y across 5-fold cross-validation, each fold with a different randomly sampled training-testing split. MAE is a common metric in the healthcare domain (Lee et al., 2016; Yeung et al., 2002; Wang et al., 2005; Srinivas et al., 2020) and is defined as MAE(y, ŷ) ≡ (1/N) Σ_{n=1}^{N} |y_n − ŷ_n|. For each fold of the cross-validation, we use a randomly sampled 20% of the training set as a validation set to choose the best hyperparameters for each algorithm; the hyperparameters yielding the lowest MAE on the validation set are chosen.

Results. As seen in Table 1, Proposed-1 and Proposed-2 largely outperformed the baselines. The robust regression methods (MAE and Huber) did not improve on MSE. In particular, Proposed-1 and Proposed-2 reduced the MAE by more than 20% and 30% on average, respectively, compared with the baselines.

4.1.3 REAL USE CASE FOR HEALTHCARE
Procedure. Finally, we demonstrate the practicality of our approach in a real healthcare use case. From non-intrusive bed sensors installed under each of the four legs of a bed, we estimate the motion intensity of a subject, which could be measured accurately but intrusively with ActiGraph, a gold-standard sensor wrapped around the wrist (Tryon, 2013; Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992). If we can mimic the outputs of ActiGraph with outputs from the bed sensors, we can measure motion with high accuracy and high coverage while also easing the burden on the subject. We divide the dataset into three pieces and evaluate the results with 3-fold cross-validation. We use evaluation metrics specifically designed for sleep-wake discrimination (Cole et al., 1992), i.e., the proportion of correct prediction period and the rate of false prediction.

Results. Table 2 shows the proportion of correct prediction period and the rate of false prediction, which indicate that the proposed method captured 89 percent of the total time period of the motions that were captured by ActiGraph, while false detections due to factors such as floor vibration amounted to only 1.6 percent. Furthermore, the proposed method captured 15 additional motions that were not captured by ActiGraph. The baseline method MSE was severely underfitted, with most of its weights at zero; we therefore omit those results. Overall, our findings demonstrate that ActiGraph can be replaced with bed sensors, and that the bed sensors can also serve as inputs to certain ActiGraph functions, such as sleep-wake discrimination (Cole et al., 1992). See Appendix G for further details, including the actual estimation results for motion intensity.

5 DISCUSSION
Limitations. In this paper, we do not address symmetric label corruption, such as ordinary outliers, where the coverage and incompleteness are consistent between the label and the explanatory variables. Other established approaches can handle such cases.
Only when the corruption is asymmetric does it lead to the technical challenge we address here. In that sense, we can also handle the opposite asymmetric corruption, in which labels for some observations become inconsistently higher than those for typical observations. This can be handled as learning from lower-side labeled data and unlabeled data, i.e., LU regression. Since our derivation of U2 regression applies straightforwardly to this LU regression case, we show only its learning algorithm, in Appendix C.

Asymmetric Label Corruption in Classification. In the classification setting, asymmetric label corruption is addressed with positive-unlabeled (PU) learning, where it is assumed that negative data cannot be obtained but unlabeled data are available in addition to positive data (Denis, 1998; De Comité et al., 1999; Letouzey et al., 2000; Shi et al., 2018; Kato et al., 2019; Sakai & Shimizu, 2019; Li et al., 2019; Zhang et al., 2019; 2020; Chen et al., 2020b;a; Luo et al., 2021; Hu et al., 2021; Li et al., 2021). An unbiased risk estimator has also been proposed (Du Plessis et al., 2014; 2015). However, PU classification cannot be used for a regression problem, where labels are real values and we need to handle order and gradation between labels, because its derivation and algorithms rely on the labels being binary, i.e., only positive or negative. We overcome this limitation with a novel approach based on an unbiased gradient.

Future work. We showed that our approach of estimating hyperparameters by grid search on the validation set was effective, even for the hyperparameter that contains the important ratio of upper-side labeled data, p(f_t(x) ≤ y); it also provides the flexibility needed to handle data variation. Most studies on PU learning assume that a hyperparameter corresponding to π_up is given (Hammoudeh & Lowd, 2020; Sonntag et al., 2021; Lin et al., 2022), and some papers have addressed the estimation of this hyperparameter as their main contribution (Jain et al., 2016; Ramaswamy et al., 2016; Christoffel et al., 2016; Jain et al., 2020; Yao et al., 2021). Developing a method for this hyperparameter estimation to improve performance would be a worthwhile next step of our study. Also, in Assumption 2.1, we assumed ε_s ⊥ f∗(x) and ε_a ⊥ f∗(x), which is a common noise assumption. Addressing the case where the noise is not independent of f∗(x) is another future direction for our work.

Conclusion. We formulated a regression problem from asymmetrically-corrupted data in which training data are corrupted with an asymmetric noise that always takes a negative value. This causes labels for data with relatively low label values to be particularly unreliable. To address this problem, we proposed a learning algorithm, U2 regression. Under some technical assumptions, we showed that our algorithm is unbiased and consistent with regression that uses uncorrupted data without incomplete observations. Our analysis is based on the equivalence of the gradients between the two. An experimental evaluation demonstrated that the proposed method was significantly better than methods that do not assume asymmetric label corruption.

A PROOFS
A.1 PROOF OF LEMMA 2.2
Proof. For the proof of Lemma 2.2, we first derive two important lemmas from Assumption 2.1 and then prove Lemma 2.2 using them. We first show that f∗(x) ≤ y′ implies ε_a = 0. When f∗(x) ≤ y′, we have from Equation 1 and Equation 4:

f∗(x) ≤ f∗(x) + ε_s + ε_a,   (12)

hence 0 ≤ ε_s + ε_a, i.e., −ε_a ≤ ε_s.
Since ε_a ≤ 0 by Assumption 2.1, we have

|ε_a| ≤ ε_s.   (13)

If ε_a < 0, Assumption 2.1 implies |ε_s| < |ε_a|, which contradicts Equation 13. Hence, we must have

ε_a = 0.   (14)

Since y = y′ when ε_a = 0, we have

p(x, y′|f∗(x) ≤ y′) = p(x, y′|f∗(x) ≤ y′, ε_a = 0) = p(x, y|f∗(x) ≤ y, ε_a = 0) = p(x, y|f∗(x) ≤ y),   (15)

which establishes

Lemma A.1. Let p(x, y, y′) be the underlying probability distribution for x, y, and y′. Then,

p(x, y′|f∗(x) ≤ y′) = p(x, y|f∗(x) ≤ y).   (16)

The condition parts of these conditional distributions represent the relationships between labels and regression functions; e.g., p(x, y|f∗(x) ≤ y) is the distribution of x and y when y is higher than what is given by the oracle regression function f∗. Similarly to Lemma A.1, we show that f(x) ≤ y′ implies ε_a = 0. Let F′ ≡ {f ∈ F : |f(x) − f∗(x)| ≤ |ε_s| a.s.}, which represents our natural expectation that the regression function f well approximates f∗. When f(x) ≤ y′, we have from Equation 1 and Equation 4 with the condition f ∈ F′:

f(x) ≤ f∗(x) + ε_s + ε_a ≤ f(x) + ε_s + ε_a + |ε_s|,   (17)

hence 0 ≤ ε_s + ε_a + |ε_s|, i.e., −ε_a ≤ ε_s + |ε_s|. Since ε_a ≤ 0 by Assumption 2.1, we have

|ε_a| ≤ ε_s + |ε_s|.   (18)

If ε_a < 0, Assumption 2.1 implies 2|ε_s| < |ε_a|, which contradicts Equation 18. Hence, we must have

ε_a = 0.   (19)

Since y = y′ when ε_a = 0, by replacing f∗ with f in the argument used to derive Lemma A.1 in Equation 15, we have

Lemma A.2. Let F′ ≡ {f ∈ F : |f(x) − f∗(x)| ≤ |ε_s|}. When f ∈ F′, the following holds:

p(x, y′|f(x) ≤ y′) = p(x, y|f(x) ≤ y).   (20)

Lemma A.1 immediately implies

E_{p(x,y′|f∗(x)≤y′)}[G(x, y′)] = E_{p(x,y|f∗(x)≤y)}[G(x, y)]   (21)

for any function G : R^D × R → R, as long as the expectations exist. When f ∈ F′, from Lemma A.2 we then have

E_{p(x,y′|f(x)≤y′)}[G(x, y′)] = E_{p(x,y|f(x)≤y)}[G(x, y)].   (22)

A.2 PROOF OF PROPOSITION 3.2
Proof. From the decomposed gradient ∇L(f_t) in Equation 7, we derive the proposed gradient using only expectations over p(x, y′). From Condition 3.1 for L(f(x), y), ∇L(f(x), y) = g(f(x)) when y < f(x). Thus, Equation 7 can be rewritten as

∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] + p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))],   (23)

where y is marginalized out in the expectation in the second term, since g(f_t(x)) does not depend on y. Here, Equation 6 and Equation 7 can be rewritten by replacing ∇L(f_t(x), y) with g(f_t(x)) as

E_{p(x,y)}[g(f_t(x))] = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[g(f_t(x))] + p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[g(f_t(x))],   (24)

p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[g(f_t(x))] = E_{p(x,y)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[g(f_t(x))].   (25)

Since g(f_t(x)) does not depend on y, we can marginalize out y in Equation 25 as

p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))] = E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y)}[g(f_t(x))].   (26)

From Equation 26, we can express Equation 23 as

∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] + E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y)}[g(f_t(x))].   (27)

Finally, from Lemma 2.2, we can rewrite Equation 27 as

∇L(f_t) = p(f_t(x) ≤ y) E_{p(x,y′|f_t(x)≤y′)}[∇L(f_t(x), y)] + E_{p(x)}[g(f_t(x))] − p(f_t(x) ≤ y) E_{p(x|f_t(x)≤y′)}[g(f_t(x))],   (28)

which is identical to Equation 9. Thus, the gradient in Equation 9 is unbiased and consistent with the gradient in Equation 6 a.s.

A.3 PROOF OF LEMMA 3.4
Proof. The difference between the decomposed gradients ∇Ľ(f_t) and ∇L(f_t) at step t + 1 of gradient descent is

|∇Ľ(f_t) − ∇L(f_t)| = |p(f_t(x) ≤ y) E_{p(x,y′|f_t(x)≤y′)}[∇L(f_t(x), y)] + p(y < f_t(x)) E_{p(x,y′|y′<f_t(x))}[∇L(f_t(x), y)] − p(f_t(x) ≤ y) E_{p(x,y|f_t(x)≤y)}[∇L(f_t(x), y)] − p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[∇L(f_t(x), y)]|.   (29)
From Lemma 2.2 and Condition 3.1,

|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) E_{p(x,y′|y′<f_t(x))}[∇L(f_t(x), y)] − p(y < f_t(x)) E_{p(x,y|y<f_t(x))}[∇L(f_t(x), y)]| = |p(y < f_t(x)) E_{p(x|y′<f_t(x))}[g(f_t(x))] − p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))]|.   (30)

We decompose E_{p(x|y′<f_t(x))}[g(f_t(x))] again as

|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) ( p(f_t(x) ≤ y | y′ < f_t(x)) E_{p(x|y′<f_t(x) ∧ f_t(x)≤y)}[g(f_t(x))] + p(y < f_t(x) | y′ < f_t(x)) E_{p(x|y′<f_t(x) ∧ y<f_t(x))}[g(f_t(x))] ) − p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))]|.   (31)

The condition y′ < f_t(x) ∧ y < f_t(x) is equivalent to the condition y < f_t(x), since y′ ≤ y from Assumption 2.1 and thus p(y′ < f_t(x) | y < f_t(x)) = 1. Then, we have

|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) ( p(f_t(x) ≤ y | y′ < f_t(x)) E_{p(x|y′<f_t(x) ∧ f_t(x)≤y)}[g(f_t(x))] + p(y < f_t(x) | y′ < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))] ) − p(y < f_t(x)) E_{p(x|y<f_t(x))}[g(f_t(x))]|.   (32)

Additionally, since p(y < f_t(x) | y′ < f_t(x)) = 1 − p(f_t(x) ≤ y | y′ < f_t(x)),

|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) p(f_t(x) ≤ y | y′ < f_t(x)) ( E_{p(x|y′<f_t(x) ∧ f_t(x)≤y)}[g(f_t(x))] − E_{p(x|y<f_t(x))}[g(f_t(x))] )|.   (33)

This equation shows that the bias is represented by the difference between the expectation of g(f_t(x)) over the lower-side data and that over the original upper-side data mixed into the lower side due to incomplete observations, together with the corresponding proportions. From Assumption 3.3, since ε_a ⊥ x,

|∇Ľ(f_t) − ∇L(f_t)| = |p(y < f_t(x)) p(f_t(x) ≤ y | y′ < f_t(x)) ( E_{p(x|f_t(x)≤y)}[g(f_t(x))] − E_{p(x|y<f_t(x))}[g(f_t(x))] )|.   (34)

Since |f − f∗| ≤ |ε_s| a.s., p(f_t(x) ≤ y) = η and p(y < f_t(x)) = 1 − η by their definitions, and

p(f_t(x) ≤ y | y′ < f_t(x)) = p(f_t(x) ≤ y) p(ε_a < 0) / (p(y < f_t(x)) + p(f_t(x) ≤ y) p(ε_a < 0)) = η(1 − ξ) / ((1 − η) + η(1 − ξ)).   (35)

Therefore, from the definition of δ,

|∇Ľ(f_t) − ∇L(f_t)| ≥ [η(1 − η)(1 − ξ) / ((1 − η) + η(1 − ξ))] δ.   (36)

B IMPLEMENTATION OF LEARNING ALGORITHM BASED ON STOCHASTIC OPTIMIZATION
We scale up our U2 regression algorithm by stochastic approximation with M mini-batches and add a regularization term R(f):

∇L̂^{m}(f_t) = Σ_{(x,y)∈{X^{m}_up, y^{m}_up}} ∇L(f_t(x), y) + ρ Σ_{x∈X^{m}_un} g(f_t(x)) − Σ_{x∈X^{m}_up} g(f_t(x)) + λ ∂R(f_t)/∂θ,   (37)

where ∇L̂^{m}(f_t) is the gradient for the m-th mini-batch; {X^{m}_up, y^{m}_up} and X^{m}_un are, respectively, the upper-side and unlabeled sets in the m-th mini-batch based on the current f_t; λ is a regularization parameter; and the regularization term R(f) is, for example, the L1 or L2 norm of the parameter vector θ of f. We also convert n_up/(π_up N) into a hyperparameter ρ, ignoring constant coefficients, instead of directly handling π_up. The hyperparameters ρ and λ are optimized during training by grid search on the validation set. The U2 regression algorithm based on stochastic optimization is described in Algorithm 1. We learn the regression function with the gradient in Equation 37 using any stochastic gradient method. Here, we used Adam with the hyperparameters recommended in Kingma & Ba (2015), and the number of samples per mini-batch was set to 32. We set the candidates for the hyperparameters ρ and λ to {10⁻³, 10⁻², 10⁻¹, 10⁰}. Using the learned f, we can estimate ŷ = f(x) for new data x.

C ALGORITHM FOR LU REGRESSION
We show the algorithm for lower and unlabeled regression (LU regression), where labels for some observations may become inconsistently higher than those for typical observations. Let L_LU(f(x), y) be a loss function for LU regression and g_LU(f(x)) be the gradient ∇L_LU(f(x), y) when f(x) ≤ y.
Similar to Condition 3.1 for U2 regression, we assume that the class of L_LU(f(x), y) satisfies the condition that g_LU(f(x)) is a gradient function depending only on f(x) and not on the value of y.

Algorithm 1: U2 regression based on a stochastic gradient method.
Input: Training data D′ = {x_n, y′_n}_{n=1}^{N}; hyperparameters ρ, λ ≥ 0; an external stochastic gradient method A
Output: Model parameters θ for f
1: while no stopping criterion has been met
2:   Shuffle D′ into M mini-batches: {X^{m}, y^{m}}_{m=1}^{M}
3:   for m = 1 to M
4:     Compute the gradient ∇L̂^{m}(f_t) in Equation 37 with {X^{m}, y^{m}}
5:     Update θ by A with ∇L̂^{m}(f_t)

Then, LU regression is Algorithm 1 with the following gradient, ∇L̂^{m}_LU(f_t), in place of ∇L̂^{m}(f_t) in Equation 37:

∇L̂^{m}_LU(f_t) = Σ_{(x,y)∈{X^{m}_lo, y^{m}_lo}} ∇L_LU(f_t(x), y) + ρ Σ_{x∈X^{m}_un} g_LU(f_t(x)) − Σ_{x∈X^{m}_lo} g_LU(f_t(x)) + λ ∂R(f_t)/∂θ,   (38)

where {X^{m}_lo, y^{m}_lo} and X^{m}_un are, respectively, the lower-side and unlabeled sets in the m-th mini-batch based on the current f_t.

D COMPUTING INFRASTRUCTURE
All of the experiments were carried out with a Python and TensorFlow implementation on workstations with 80 GB of memory, a 4.0 GHz CPU, and an Nvidia Titan X GPU. In this environment, the computational time to produce the results was a few hours.

E DETAILS OF EXPERIMENTS IN SECTION 4.1.1
E.1 SYNTHETIC DATASETS
We conducted the experiments on synthetic data to evaluate the feasibility of our method for obtaining unbiased learning results from asymmetrically-corrupted data containing different proportions of incomplete observations. We generated synthetic data on the basis of Assumption 2.1 and Equation 4. We randomly generated N = 1000 training samples, X = {x_n}_{n=1}^{N}, from the standard Gaussian distribution N(x_n; 0, I), where the number of features in x was D = 10 and I is the identity matrix. Then, using X, we generated the corresponding N true labels y = {y_n}_{n=1}^{N} from the distribution N(y_n; w⊤x_n, β), where w are coefficients also randomly generated from the standard Gaussian distribution N(w; 0, I), β is the noise precision, and ⊤ denotes the transpose. To simulate the situation in which labels have incomplete observations, we created corrupted labels y′ = {y′_n}_{n=1}^{N} by randomly selecting K percent of the data in y and subtracting from their values the absolute value of white Gaussian noise with twice the precision of y, 2β. We repeatedly evaluated the proposed method for each of the following settings. The noise precision was β = {10⁰, 10⁻¹}, corresponding to a low-noise task (LowNoise) and a high-noise task (HighNoise), and the proportion of incomplete training samples was K = {25, 50, 75}%. In the case of K = 75%, only 25 percent of the samples correctly correspond to their labels, and all other samples are attached to labels lower than the corresponding true values; it is quite difficult to learn regression functions from such data. In these tasks, we used a linear model, θ⊤x, for f(x) and an implementation of Equation 37 with the absolute loss, which satisfies Condition 3.1, as the loss function L, and L1 regularization as the regularization term. We set the candidates for the hyperparameters ρ and λ to {10⁻³, 10⁻², 10⁻¹, 10⁰}. We standardized the data by subtracting the mean and dividing by the standard deviation of the training split. We used Adam with the hyperparameters recommended in Kingma & Ba (2015), and the number of samples per mini-batch was set to 32; an end-to-end sketch of this setup follows below.
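Putting Algorithm 1 and the synthetic setup of Appendix E.1 together, a minimal end-to-end sketch might look as follows; for simplicity it uses plain gradient steps rather than Adam, and the hyperparameter values and corruption constants are illustrative assumptions rather than the paper's tuned settings.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, K = 1000, 10, 0.5           # K: proportion of incomplete observations
rho, lam, lr = 0.1, 1e-3, 1e-2    # rho, lambda from the grid; lr for plain SGD

# Synthetic data following Appendix E.1 (beta = 1, LowNoise-like setting).
w_true = rng.standard_normal(D)
X = rng.standard_normal((N, D))
y = X @ w_true + rng.standard_normal(N)
corrupt = rng.random(N) < K
y_obs = y - corrupt * np.abs(rng.standard_normal(N) / np.sqrt(2.0))  # precision 2*beta

theta = np.zeros(D)
for epoch in range(200):
    for batch in np.array_split(rng.permutation(N), N // 32):  # mini-batches of ~32
        Xb, yb = X[batch], y_obs[batch]
        upper = Xb @ theta <= yb                  # upper-side set under current f_t
        # Eq. 37 with the absolute loss: dL/dtheta = -x on the upper side,
        # g(f(x)) = +x everywhere, plus the L1-regularization subgradient.
        grad = (-Xb[upper].sum(axis=0)            # upper-side loss term
                + rho * Xb.sum(axis=0)            # rho * sum of g over the mini-batch
                - Xb[upper].sum(axis=0)           # minus g over the upper side
                + lam * np.sign(theta))           # L1 regularization
        theta -= lr * grad

print("cosine similarity to w_true:",
      theta @ w_true / (np.linalg.norm(theta) * np.linalg.norm(w_true) + 1e-12))
```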
We also used a real-world sensor dataset collected from Kaggle (Sen, 2016) that contains breathing signals (Breathing). The dataset consisted of N = 1,432 samples. We used signals from a chest belt as X = {x_n}_{n=1}^{N}, where each sample x had D = 2 features, i.e., the period and height of the expansion/contraction of the chest. We used signals obtained by the Douglas bag (DB) method, the gold standard for measuring ventilation, as the true labels y = {y_n}_{n=1}^{N}. For our problem setting, we created corrupted labels y′ = {y′_n}_{n=1}^{N} through the same synthetic-corruption procedure as for LowNoise and HighNoise, with K = {25, 50, 75}%. In the experiment on Breathing, to handle its non-linearity, we used θ⊤φ(x, σ) for f(x), where φ is a radial basis function with the training set as its bases and σ is a hyperparameter representing the kernel width, also optimized using the validation set. We set the candidates for the hyperparameter σ to {10⁻³, 10⁻², 10⁻¹, 10⁰}. The other implementation details were the same as those for LowNoise and HighNoise.

E.2 DETAILED RESULTS
Figure 3 shows the errors between the estimates and the true values for the proposed method and for MSE on LowNoise, HighNoise, and Breathing with 25 and 75 percent incomplete training samples. Table 3 shows the performance of the proposed method and MSE on LowNoise, HighNoise, and Breathing. As shown in Figure 3, the proposed method obtains unbiased learning results in all cases, while MSE produces biased results. From Table 3, we can see that the proposed method outperforms MSE overall. We found that the performance of our method is not significantly affected by the increase in the proportion of incomplete training samples K, even for K = 75%, unlike that of MSE.

E.3 PERFORMANCE OVER DIFFERENT SIZES OF VALIDATION SET
To demonstrate the robustness of our validation-set-based approach to estimating the hyperparameter π_up, we show the performance of the proposed method over different sizes of the validation set in Fig. 4. This analysis is conducted on the tasks in Section 4.1.1 (LowNoise, HighNoise, and Breathing) with K = 50%. Figure 4 shows that the performance of the proposed method does not degrade much even when we use only 1% of the training set as the validation set. This demonstrates that the proposed approach is robust to both a small validation set and a high proportion of incomplete validation samples. In Fig. 5, we also show a chart similar to Fig. 2 (the error in prediction) when using 1% of the training set as the validation set. Even in this case, the proposed method achieves unbiased learning (the average error, shown by the blue solid line, is approximately zero).

F DETAILS OF EXPERIMENTS IN SECTION 4.1.2
We applied the algorithm to five different real-world healthcare tasks recorded in datasets from the UCI Machine Learning Repository (Velloso, 2013; Velloso et al., 2013), which contain sensor outputs from wearable devices attached to the arm while subjects exercised. From the non-intrusive sensors attached to gym equipment, we estimated the motion intensity of a subject, which was measured accurately with an intrusive arm sensor wrapped around the arm. If we can mimic the outputs of the arm sensor with outputs from the equipment sensor, it could contribute to the subjects' comfort, as they would not need to wear sensors to measure their motion intensity.
We used all of the features from the equipment sensor that took "None" values fewer than ten times as X = {x_n}_{n=1}^{N}, where each sample had D = 13 features. The corrupted labels y′ = {y′_n}_{n=1}^{N} were the magnitude of acceleration from the arm sensor, which can accurately sense motion intensity at the arm but has insufficient data coverage and incomplete or missing observations for the movements of other body parts. For performance evaluation, we used the magnitude of acceleration for the entire body as the true labels y = {y_n}_{n=1}^{N}. The numbers of samples were N = 11,159, N = 7,593, N = 6,844, N = 6,432, and N = 7,214 for the tasks Specification, Throwing A, Lifting, Lowering, and Throwing B, respectively. Given the complex nature of the tasks, we used a 6-layer multilayer perceptron with ReLU activations (Nair & Hinton, 2010) (more specifically, D-100-100-100-100-1) as f(x), which also demonstrates the usefulness of the proposed method for training deep neural networks; a minimal sketch of this network is given after Appendix H. We also used dropout (Srivastava et al., 2014) with a rate of 50% after each fully connected layer. We used two implementations of L(f(x), y) for f(x) ≤ y′ in Equation 37: the absolute loss (Proposed-1) and the squared loss (Proposed-2). For both implementations, we used the absolute loss, which satisfies Condition 3.1, as the loss function L(f(x), y) when y′ < f(x), and L1 regularization as the regularization term. The other implementation details were the same as those for LowNoise, HighNoise, and Breathing.

G DETAILS OF EXPERIMENTS IN SECTION 4.1.3
We demonstrate the practicality of our approach in a real healthcare use case. From non-intrusive bed sensors installed under each of the four legs of a bed, we estimated the motion intensity of a subject, which was measured accurately with ActiGraph, a gold-standard intrusive sensor wrapped around the wrist (Tryon, 2013; Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992). The sensing results of ActiGraph are used for tasks such as discriminating whether a subject is asleep or awake (Cole et al., 1992). While ActiGraph can accurately sense motion at the forearm, it has insufficient data coverage elsewhere and often misses movements of other body parts. The bed sensors have broader data coverage, since they can sense global motion across all body parts; however, their sensing accuracy is limited by their non-intrusiveness. If we can mimic the outputs of ActiGraph with outputs from the bed sensors, we can expect to achieve sufficient accuracy and coverage while also easing the burden on the subject. The dataset we used included three pieces of data, Data (i), (ii), and (iii), recorded over 20, 18, and 18.5 minutes, respectively. Each piece of data consisted of pairs of bed-sensor-data sequences and the corresponding motion-intensity sequence obtained by ActiGraph. We used the "magnitude" attribute of ActiGraph as the corrupted labels y′ for motion intensity, whose sampling rate was about one sample per second. For the true labels y, we manually measured the motion intensity every minute under the management of a domain expert. For X, we first computed the gravity center of the four sensor outputs obtained from the bed sensors under the four legs of the bed. Then, we computed the time derivatives and cross terms of the raw sensor outputs and the gravity center. The sampling rate of the bed sensors was different from that of ActiGraph, at about one sample per five milliseconds.
Thus, X was finally generated as a sliding window of statistics over 1,000-millisecond (1-second) subsequences of the time series of the variables computed above, where 1 second matches the sampling interval of ActiGraph. The statistics were means, standard deviations, and the {0.05, 0.25, 0.5, 0.75, 0.95} quantiles. In this task, we used the linear model θ⊤x for f(x) because of its interpretability, which is indispensable in real-world healthcare and medical applications.

G.1 ESTIMATION RESULTS FOR MOTION INTENSITY
Figure 6 compares our estimation results for motion intensity with the output of ActiGraph and the true labels.

G.2 IMPORTANT FEATURES FOR ESTIMATING MOTION INTENSITY
The important features selected by L1 regularization were the statistics of the gravity center and of the cross terms and time derivatives of the raw sensor outputs. The largest weight was assigned to the standard deviation of the gravity center, which represents the amplitude of the gravity center and is therefore directly related to the motion of the subjects.

H OTHER POSSIBLE USE CASES OF REGRESSION FOR SENSOR MAGNITUDE
Examples of predicting the magnitude values of a sensor, a field of application for U2 regression, can be found in several areas. Besides the medical and healthcare applications discussed in the main text, another example is estimating the wind speed or rainfall in a specific region from observable macroscopic information (Cheng & Tan, 2008; Abraham & Tan, 2010; Abraham et al., 2013; Vandal et al., 2017), known as statistical downscaling (Wilby et al., 2004). Wind speed and rainfall, which serve as labels in these tasks, can be sensed locally in only a limited number of locations and thus provide incomplete observations and biased labels compared with the macroscopic information, which serves as the explanatory variables.
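For reference, here is a minimal sketch of the D-100-100-100-100-1 network described in Appendix F, using the TensorFlow/Keras stack mentioned in Appendix D; everything beyond the stated architecture, activation, and dropout rate is our own assumption.

```python
import numpy as np
import tensorflow as tf

D = 13  # number of equipment-sensor features in Appendix F

# Four hidden layers of width 100 with ReLU, 50% dropout after each fully
# connected layer, and a single linear output, i.e., D-100-100-100-100-1.
layers = []
for _ in range(4):
    layers.append(tf.keras.layers.Dense(100, activation="relu"))
    layers.append(tf.keras.layers.Dropout(0.5))
layers.append(tf.keras.layers.Dense(1))
model = tf.keras.Sequential(layers)

# The model is built lazily on the first call:
_ = model(np.zeros((1, D), dtype="float32"))
model.summary()
```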
1. What is the focus of the paper regarding the asymmetrically corrupted regression problem?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works?
4. Do you have any minor comments or suggestions for improving the paper's clarity and reproducibility?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors address an interesting problem: the asymmetrically corrupted regression problem. They motivate the problem using several real-world examples. They propose a solution by modeling the target value as corrupted with asymmetric noise. To learn the regression function, they derive a loss function based on the data model and use gradient descent to find the solution. They use real and synthetic data to demonstrate the proposed approach.

Strengths And Weaknesses
Strengths:
- The paper is well written, and the English level is satisfactory.
- The proposed problem is interesting, with a novel solution.
- They prove the unbiasedness and consistency of the gradient of the proposed loss function.
- Several examples are provided to demonstrate the advantages of the proposed approach.

Weaknesses:
- The authors only compare their method to simple baselines, while several solutions exist to the related problem of regression with asymmetric loss functions.

Minor comments:
- For clarity, please mention what K is in the caption of Figure 2.
- Page 4, results of Figure 2: I assume that the data in the test set are not incomplete, but this is not explained.
- Please expand the caption of Table 2.
- P9: the results related to Table 2 can be viewed as a classification task, so why not compare to baselines that focus on asymmetric classification?
- Why is the word "Rate" capitalized?

Clarity, Quality, Novelty And Reproducibility
The introduction and problem statement are clear, and the paper provides a novel solution with some theoretical analysis. The experimental setting is simple and well explained, so I believe it is reproducible.
ICLR
Title
Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation

Abstract
Diffusion probabilistic models (DPMs) have become a popular approach to conditional generation, due to their promising results and support for cross-modal synthesis. A key desideratum in conditional synthesis is to achieve high correspondence between the conditioning input and generated output. Most existing methods learn such relationships implicitly, by incorporating the prior into the variational lower bound. In this work, we take a different route: we explicitly enhance input-output connections by maximizing their mutual information. To this end, we introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss and design two contrastive diffusion mechanisms to effectively incorporate it into the denoising process, combining diffusion training and contrastive learning for the first time by connecting it with the conventional variational objectives. We demonstrate the efficacy of our approach in evaluations with diverse multimodal conditional synthesis tasks: dance-to-music generation, text-to-image synthesis, as well as class-conditioned image synthesis. On each, we enhance the input-output correspondence and achieve higher or competitive general synthesis quality. Furthermore, the proposed approach improves the convergence of diffusion models, reducing the number of required diffusion steps by more than 35% on two benchmarks, significantly increasing the inference speed.

1 INTRODUCTION
Generative tasks that seek to synthesize data in different modalities, such as audio and images, have attracted much attention. The recently explored diffusion probabilistic models (DPMs) (Sohl-Dickstein et al., 2015b) have served as a powerful generative backbone that achieves promising results in both unconditional and conditional generation (Kong et al., 2020; Mittal et al., 2021; Lee & Han, 2021; Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021; Ho et al., 2022; Hu et al., 2021). Compared to the unconditional case, conditional generation is usually applied in more concrete and practical cross-modality scenarios, e.g., video-based music generation (Di et al., 2021; Zhu et al., 2022a; Gan et al., 2020a) and text-based image generation (Gu et al., 2022; Ramesh et al., 2021; Li et al., 2019; Ruan et al., 2021). Most existing DPM-based conditional synthesis works (Gu et al., 2022; Dhariwal & Nichol, 2021) learn the connection between the conditioning and the generated data implicitly by adding a prior to the variational lower bound (Sohl-Dickstein et al., 2015b). While such approaches still feature high generation fidelity, the correspondence between the conditioning and the synthesized data can sometimes get lost, as illustrated in the right column in Fig. 1. To this end, in this paper we aim to explicitly enhance the input-output faithfulness via their maximized mutual information under the diffusion generative framework for conditional settings. Examples of our synthesized music audio and image results are given in Fig. 1. Contrastive methods (Oord et al., 2018; Bachman et al., 2019; Song & Ermon, 2020a) have been proven to be very powerful for data representation learning. Their high-level idea is to learn the representation z of raw data x based on the assumption that a properly encoded z benefits the ability of a generative model p to reconstruct the raw data given z as a prior. This idea can be achieved via optimization of the density ratio p(x|z)/p(x)
(2018) as an entirety, without explicitly modeling the actual generative model p. While the direct optimization of mutual information via generative models p is a challenging problem to implement and train Song & Ermon (2020b); Belghazi et al. (2018) in the conventional contrastive representation learning field, we show that this can be effectively done within our proposed contrastive diffusion framework. Specifically, we reformulate the optimization problem for the desired conditional generative tasks via DPMs by analogy to the above embedding z and raw data x with our conditioning input and synthesized output. We introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss, and design two contrastive diffusion mechanisms - step-wise parallel diffusion, which invokes multiple parallel diffusion processes during contrastive learning, and sample-wise auxiliary diffusion, which maintains one principal diffusion process - to effectively incorporate the CDCD loss into the denoising process. We demonstrate that with the proposed contrastive diffusion method, we can not only effectively train so as to maximize the desired mutual information by connecting the CDCD loss with the conventional variational objective function, but also directly optimize the generative network p. The optimized CDCD loss further encourages faster convergence of a DPM model with fewer diffusion steps. We additionally present our intra- and inter-negative sampling methods, which provide internally disordered and instance-level negative samples, respectively. To better illustrate the input-output connections, we conduct our main experiments on the novel cross-modal dance-to-music generation task Zhu et al. (2022a), which aims to generate music audio based on silent dance videos. Compared to other tasks such as text-to-image synthesis, dance-to-music generation explicitly evaluates the input-output correspondence in terms of various cross-modal alignment features such as dance-music beats, genre, and general quality. However, various generative settings, frameworks, and applications can also benefit from our contrastive diffusion approach, e.g., joint or separate training of conditioning encoders, continuous or discrete conditioning inputs, and diverse input-output modalities, as detailed in Sec. 4. Overall, we achieve results superior or comparable to the state of the art on three conditional synthesis tasks: dance-to-music (datasets: AIST++ Tsuchida et al. (2019); Li et al. (2021), TikTok Dance-Music Zhu et al. (2022a)), text-to-image (datasets: CUB200 Wah et al. (2011), MSCOCO Lin et al. (2014)), and class-conditioned image synthesis (dataset: ImageNet Russakovsky et al. (2015)). Our experimental findings suggest three key take-aways: (1) Improving the input-output connections via maximized mutual information is indeed beneficial for their correspondence and the general fidelity of the results (see Fig. 1 and supplement). (2) Both our proposed step-wise parallel diffusion with intra-negative samples and sample-wise auxiliary diffusion with inter-negative samples show state-of-the-art scores in our evaluations. The former is more beneficial for capturing the intra-sample correlations, e.g., musical rhythms, while the latter improves the instance-level performance, e.g., music genre and image class.
(3) With maximized mutual information, our conditional contrastive diffusion converges in substantially fewer diffusion steps compared to vanilla DPMs, while maintaining the same or even superior performance (approximately 35% fewer steps for dance-to-music generation and 40% fewer for text-to-image synthesis), thus significantly increasing inference speed.

2 BACKGROUND
Diffusion Probabilistic Models. DPMs Sohl-Dickstein et al. (2015b) are a class of generative models that learn to convert a simple Gaussian distribution into a data distribution. This process consists of a forward diffusion process and a reverse denoising process, each consisting of a sequence of T steps that act as a Markov chain. During forward diffusion, an input data sample x_0 is gradually "corrupted" at each step t by adding Gaussian noise to the output of step t-1. The reverse denoising process seeks to convert the noisy latent variable x_T into the original data sample x_0 by removing the noise added during diffusion. The stationary distribution for the final latent variable x_T is typically assumed to be a normal distribution, p(x_T) = N(x_T | 0, I). An extension of this approach replaces the continuous state with a discrete one Sohl-Dickstein et al. (2015a); Hoogeboom et al. (2021); Austin et al. (2021), in which the latent variables x_{1:T} typically take the form of one-hot vectors with K categories. The diffusion process can then be parameterized using a multinomial categorical transition matrix defined as q(x_t | x_{t-1}) = Cat(x_t; p = x_{t-1} Q_t), where [Q_t]_{ij} = q(x_t = j | x_{t-1} = i). The reverse process p_θ(x_{t-1} | x_t) can also be factorized as conditionally independent over the discrete sequences Austin et al. (2021). In both the continuous and discrete state formulations of DPMs Song & Ermon (2020c); Song et al. (2020b); Kingma et al. (2021); Song et al. (2021); Huang et al. (2021); Vahdat et al. (2021), the denoising process p_θ can be optimized via the KL divergence between q and p_θ in closed form Song et al. (2020a); Nichol & Dhariwal (2021); Ho et al. (2020); Hoogeboom et al. (2021); Austin et al. (2021) through the variational bound on the negative log-likelihood:

$$\mathcal{L}_{vb} = \mathbb{E}_q\Big[\underbrace{D_{KL}(q(x_T|x_0)\,\|\,p(x_T))}_{\mathcal{L}_T} + \sum_{t>1}\underbrace{D_{KL}(q(x_{t-1}|x_t,x_0)\,\|\,p_\theta(x_{t-1}|x_t))}_{\mathcal{L}_{t-1}} \underbrace{-\,\log p_\theta(x_0|x_1)}_{\mathcal{L}_0}\Big]. \tag{1}$$

Existing conditional generation works via DPMs Gu et al. (2022); Dhariwal & Nichol (2021) usually learn the implicit relationship between the conditioning c and the synthesized data x_0 by directly adding c as the prior in (1). DPMs with a discrete state space provide more control over the data corruption and denoising than their continuous counterparts Austin et al. (2021); Gu et al. (2022) through the flexible design of the transition matrix, which benefits practical downstream operations such as editing and interactive synthesis Tseng et al. (2020); Cui et al. (2021); Xu et al. (2021). We hence employ contrastive diffusion using a discrete state space in this work.
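To make the discrete forward process concrete, the following is a minimal sketch of one multinomial corruption step q(x_t | x_{t-1}) = Cat(x_t; p = x_{t-1} Q_t), written for the common uniform-transition design; this is an illustration under our own assumptions (the paper uses the more general transition matrices of Austin et al. (2021), including mask tokens), and all names are ours.

```python
import torch
import torch.nn.functional as F

def uniform_transition_matrix(K: int, beta_t: float) -> torch.Tensor:
    # Q_t keeps a token with probability 1 - beta_t and resamples it
    # uniformly over the K categories with probability beta_t.
    return (1.0 - beta_t) * torch.eye(K) + (beta_t / K) * torch.ones(K, K)

def forward_step(x_tm1: torch.Tensor, Q_t: torch.Tensor) -> torch.Tensor:
    """One step of q(x_t | x_{t-1}) = Cat(x_t; p = x_{t-1} Q_t).

    x_tm1: (batch, seq_len) integer token ids in [0, K).
    """
    K = Q_t.shape[0]
    onehot = F.one_hot(x_tm1, num_classes=K).float()  # (B, L, K)
    probs = onehot @ Q_t                              # row of Q_t selected by x_{t-1}
    return torch.distributions.Categorical(probs=probs).sample()

# Example: corrupt a batch of 32-token sequences from a K = 2048 codebook.
x0 = torch.randint(0, 2048, (4, 32))
xt = forward_step(x0, uniform_transition_matrix(2048, beta_t=0.1))
```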
Contrastive Representation Learning. Contrastive learning uses loss functions designed to make neural networks learn to understand and represent the specific similarities and differences between elements in the training data, without labels explicitly defining such features, using positive and negative pairs of data points, respectively. This approach has been successfully applied in learning representations of high-dimensional data Oord et al. (2018); Bachman et al. (2019); He et al. (2020); Song & Ermon (2020a); Chen et al. (2020); Lin et al. (2021). Many such works seek to maximize the mutual information between the original data x and its learned representation z under the framework of likelihood-free inference Oord et al. (2018); Song & Ermon (2020a); Wu et al. (2021). The above problem can be formulated as maximizing a density ratio p(x|z)/p(x) that preserves the mutual information between the raw data x and the learned representation z. To achieve this, existing contrastive methods Oord et al. (2018); Durkan et al. (2020); He et al. (2020); Zhang et al. (2021) typically adopt a neural network to directly model the ratio as an entirety and avoid explicitly considering the actual generative model p(x|z), which has proven to be a more challenging problem Song & Ermon (2020b); Belghazi et al. (2018). In contrast, we show that by formulating the conventional contrastive representation learning problem under the generative setting, the properties of DPMs enable us to directly optimize the model p in this work, which can be interpreted as the optimal version of the density ratio Oord et al. (2018).

Vector-Quantized Representations for Conditional Generation. Vector quantization is a classical technique in which a high-dimensional space is represented using a discrete number of vectors. More recently, Vector-Quantized (VQ) deep learning models employ this technique to allow for compact and discrete representations of music and image data Oord et al. (2017); Razavi et al. (2019); Esser et al. (2021b); Dhariwal et al. (2020); Chen et al. (2022). Typically, VQ-based models use an encoder-codebook-decoder framework, where the "codebook" contains a fixed number of vectors (entries) to represent the original high-dimensional raw data. The encoder transforms the input x into feature embeddings that are each mapped to the closest corresponding vector in the codebook, while the decoder uses the set of quantized vectors z to reconstruct the input data, producing x′, as illustrated in the upper part of Fig. 2. In this work, we perform the conditional diffusion process on the VQ space (i.e., discrete token sequences) as shown in the bottom part of Fig. 2, which largely reduces the dimensionality of the raw data, thus avoiding expensive raw data decoding and synthesis. As our approach is flexible enough to be employed with various input and output modalities, the exact underlying VQ model we use depends on the target data domain. For music synthesis, we employ a fine-tuned Jukebox Dhariwal et al. (2020) model, while for image generation, we employ VQ-GAN Esser et al. (2021b). See Sec. 4 for further details. We refer to z, the latent quantized representation of x, as z_0 below to distinguish it from the latent representation at prior stages in the denoising process.
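As a sketch of the encoder-codebook mapping described above, the following shows the nearest-neighbor lookup that turns continuous encoder features into the discrete token sequence z_0 on which the diffusion process operates. The function and variable names are ours, and the codebook sizes in the example are only for illustration (Jukebox-like, per Sec. 4.1); the actual VQ models used are Jukebox and VQ-GAN, as stated.

```python
import torch

def quantize(z_e: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map encoder outputs to the ids of their nearest codebook entries.

    z_e:      (batch, seq_len, d) continuous encoder features.
    codebook: (K, d) learned embedding vectors.
    Returns   (batch, seq_len) discrete token ids -- the z_0 that the
    discrete diffusion process corrupts and denoises.
    """
    # L2 distance between every feature vector and every codebook entry.
    dists = torch.cdist(z_e, codebook.unsqueeze(0).expand(z_e.size(0), -1, -1))
    return dists.argmin(dim=-1)

# Example: a 2048-entry codebook with 128-dim tokens.
codebook = torch.randn(2048, 128)
z0 = quantize(torch.randn(4, 32, 128), codebook)  # (4, 32) token ids
```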
3 METHOD
Here we outline our approach to cross-modal and conditional generation using our proposed discrete contrastive diffusion approach, which is depicted in Fig. 2. In Sec. 3.1, we formulate our Conditional Discrete Contrastive Diffusion loss in detail, and demonstrate how it helps to maximize the mutual information between the conditioning and generated discrete data representations. Sec. 3.2 defines two specific mechanisms for applying this loss within a diffusion model training framework, sample-wise and step-wise. In Sec. 3.3, we detail techniques for constructing negative samples designed to improve the overall quality and coherence of the generated sequences.

Given the data pair (c, x), where c is the conditioning information from a given input modality (e.g., videos, text, or a class label), our objective is to generate a data sample x in the target modality (e.g., music audio or images) corresponding to c. In the training stage, we first employ and train a VQ-based model to obtain the discrete representation z_0 of the data x from the target modality. Next, our diffusion process operates on the encoded latent representation z_0 of x. The denoising process recovers the latent representation z_0 given the conditioning c, which can then be decoded to obtain the reconstruction x′. In inference, we generate z_0 based on the conditioning c, and decode the latent VQ representation z_0 back to the raw data domain using the decoder of the pre-trained and fixed VQ model.

3.1 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION LOSS
We seek to enhance the connection between c and the generated data z_0 by maximizing their mutual information, defined as

$$I(z_0; c) = \sum_{z_0} p_\theta(z_0, c)\,\log\frac{p_\theta(z_0|c)}{p_\theta(z_0)}.$$

We introduce a set of negative VQ sequences Z′ = {z^1, z^2, ..., z^N}, encoded from N negative samples X′ = {x^1, x^2, ..., x^N}, and define f(z_0, c) = p_θ(z_0|c)/p_θ(z_0). Our proposed Conditional Discrete Contrastive Diffusion (CDCD) loss is:

$$\mathcal{L}_{CDCD} := -\,\mathbb{E}\left[\log\frac{f(z_0, c)}{f(z_0, c) + \sum_{z^j \in Z'} f(z_0^j, c)}\right]. \tag{2}$$

The proposed CDCD loss is similar to the categorical cross-entropy loss for classifying the positive sample as in Oord et al. (2018), where our conditioning c and generated data z_0 correspond to their learned representation and raw data, and optimization of this loss leads to maximization of I(z_0; c). However, the loss in Oord et al. (2018) models the density ratio f(z_0, c) as an entirety. In our case, we demonstrate that the properties of DPMs Sohl-Dickstein et al. (2015b); Ho et al. (2020); Austin et al. (2021) enable us to directly optimize the actual distribution p_θ within the diffusion process for the desired conditional generation tasks. Specifically, we show the connection between the proposed CDCD loss and the conventional variational loss L_vb (see (1)) in Sec. 3.2, and thus how it contributes to efficient DPM learning. Additionally, we can derive the lower bound for the mutual information as I(z_0; c) ≥ log(N) − L_CDCD (see supplement for details), which indicates that a larger number of negative samples increases the lower bound. These two factors allow for faster convergence of a DPM with fewer diffusion steps.
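The following is a minimal sketch of the CDCD objective in (2), written InfoNCE-style over per-sample log-ratios log f. How those log-ratios are realized from the diffusion model is exactly what Sec. 3.2 specifies via the two contrastive diffusion mechanisms; here they are simply inputs, and all names are our own.

```python
import torch
import torch.nn.functional as F

def cdcd_loss(log_f_pos: torch.Tensor, log_f_neg: torch.Tensor) -> torch.Tensor:
    """CDCD loss of Eq. (2).

    log_f_pos: (batch,)    log f(z_0, c)   for the positive pair.
    log_f_neg: (batch, N)  log f(z_0^j, c) for the N negative sequences.
    """
    logits = torch.cat([log_f_pos.unsqueeze(1), log_f_neg], dim=1)  # (B, 1+N)
    # -log softmax at the positive index: the categorical cross-entropy
    # form over {positive, negatives} noted in the text.
    return -F.log_softmax(logits, dim=1)[:, 0].mean()

# With N = 10 negatives (the setting used in Sec. 4.1), the bound
# I(z0; c) >= log(N) - L_CDCD caps the estimable MI at log(10) ~ 2.30 nats.
loss = cdcd_loss(torch.randn(4), torch.randn(4, 10))
```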
3.2 PARALLEL AND AUXILIARY DIFFUSION PROCESS
The CDCD loss in (2) considers the mutual information between c and z_0 in a general way, without specifying the intermediate diffusion steps. We propose and analyze two contrastive diffusion mechanisms to efficiently incorporate this loss into DPM learning, and demonstrate that we can directly optimize the generative model p_θ in the diffusion process. We present our step-wise parallel diffusion and sample-wise auxiliary diffusion mechanisms, which are distinguished by the specific operations applied to the intermediate negative latent variables z^j_{1:T} for each negative sample x^j. The high-level intuition behind the parallel and auxiliary designs is to emphasize different attributes of the synthesized data for specific applications. In particular, we propose the parallel variant to learn the internal coherence of sequential audio data by emphasizing the gradual change at each time step, while the auxiliary mechanism focuses more on the sample-level connections to the conditioning.

Step-Wise Parallel Diffusion. This mechanism not only focuses on the mutual information between c and z_0, but also takes the intermediate negative latent variables z^j_{1:T} into account by explicitly invoking the complete diffusion process for each negative sample z^j ∈ Z′. As illustrated in Fig. 2 (bottom left), we initiate N + 1 parallel diffusion processes, among which N are invoked by negative samples. For each negative sample x^j ∈ X′, we explicitly compute its negative latent discrete variables z^j_{0:T}. In this case, (2) becomes (see supplement for the detailed derivation):

$$\mathcal{L}_{CDCD\text{-}Step} := \mathbb{E}_Z \log\left[1 + \frac{p_\theta(z_{0:T})}{p_\theta(z_{0:T}|c)}\, N\, \mathbb{E}_{Z'}\!\left[\frac{p_\theta(z^j_{0:T}|c)}{p_\theta(z^j_{0:T})}\right]\right] \equiv \mathcal{L}_{vb}(z, c) - C \sum_{z^j \in Z'} \mathcal{L}_{vb}(z^j, c). \tag{3}$$

The equation above factorizes the proposed CDCD loss under the step-wise parallel diffusion mechanism into two terms, where the first term corresponds to the original variational bound L_vb, and the second term can be interpreted as the negative sum of variational bounds induced by the negative samples and the provided conditioning c. C is a constant, as detailed in our supplement.

Sample-Wise Auxiliary Diffusion. Alternatively, our sample-wise auxiliary diffusion mechanism maintains one principal diffusion process, as in traditional diffusion training, shown in Fig. 2 (bottom right). It contrasts the intermediate positive latent variables z_{1:T} with the negative samples z^j_0 ∈ Z′. In this case, we can write the CDCD loss from (2) as (see supplement for details):

$$\mathcal{L}_{CDCD\text{-}Sample} := \mathbb{E}_q[-\log p_\theta(z_0|z_t, c)] - C \sum_{z^j \in Z'} \mathbb{E}_q[-\log p_\theta(z^j_0|z_t, c)]. \tag{4}$$

As with the step-wise loss, the CDCD-Sample loss includes two terms. The first refers to sampling directly from the positive z_0 at an arbitrary timestep t. The second sums the same auxiliary loss over the negative samples z^j_0. This marginalization operation is based on the Markov chain property, as in previous discrete DPMs Austin et al. (2021); Gu et al. (2022), and imposes direct supervision from the sample data. The first term is similar to the auxiliary denoising objective in Austin et al. (2021); Gu et al. (2022). Both contrastive diffusion mechanisms enable us to effectively incorporate the CDCD loss into our DPM learning process by directly optimizing the actual denoising generative network p_θ.

Final Loss Function. The final loss function for our contrastive diffusion training process is:

$$\mathcal{L} = \mathcal{L}_{vb}(z, c) + \lambda\, \mathcal{L}_{CDCD}, \tag{5}$$

where L_vb is conditioned on c and takes the form L_{t-1} = D_{KL}(q(z_{t-1}|z_t, z_0) || p_θ(z_{t-1}|z_t, c)) as in Gu et al. (2022), with c included as the prior for all intermediate steps. L_CDCD refers to either the step-wise parallel diffusion or the sample-wise auxiliary diffusion loss. Empirically, we can omit the first term in (3), or directly optimize L_CDCD-Step, in which the standard L_vb is already included. The detailed training algorithm is explained in the supplement.

3.3 INTRA- AND INTER-NEGATIVE SAMPLING
Previous contrastive works construct negative samples using techniques such as image augmentation Chen et al. (2020); He et al. (2020) or spatially adjacent image patches Oord et al. (2018). In this work, we categorize our sampling methods into intra- and inter-negative sampling, as in Fig. 3. For intra-sample negative sampling, we construct X′ based on the given original x. This bears resemblance to the patch-based technique in the image domain Oord et al. (2018). For audio data, we first divide the original audio waveform into multiple chunks and randomly shuffle their ordering, as sketched below.
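A minimal sketch of this intra-negative construction, under our own naming and with an assumed chunk count; the actual chunking granularity is not specified at this level in the paper.

```python
import torch

def intra_negative(wave: torch.Tensor, n_chunks: int = 10) -> torch.Tensor:
    """Build one intra-sample negative by shuffling chunk order.

    wave: (num_samples,) raw audio; its length must be divisible by n_chunks.
    """
    chunks = wave.reshape(n_chunks, -1)   # split into equal-length chunks
    perm = torch.randperm(n_chunks)       # random chunk ordering
    return chunks[perm].reshape(-1)       # re-assemble the shuffled audio

# Example: a 2-second clip of 44,100 samples, as in the appendix (Sec. C.1).
neg = intra_negative(torch.randn(44_100), n_chunks=10)
```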
For inter-sample negative sampling, X′ consists of instance-level negative samples x′ that differ from the given data pair (c, x). In practice, we define negative samples x′ to be music sequences with musical genres different from that of x in the music generation task, while x′ denotes images other than x in the image synthesis task (in practice, we choose x′ with class labels different from that of x). Based on our proposed contrastive diffusion modes and negative sampling methods, there are four possible contrastive settings: step-wise parallel diffusion with either intra- or inter-negative sampling (denoted Step-Intra and Step-Inter), or sample-wise auxiliary diffusion with either intra- or inter-negative sampling (denoted Sample-Intra and Sample-Inter). Intuitively, we argue that the Step-Intra and Sample-Inter settings are more reasonable than Step-Inter and Sample-Intra because of the consistency between the diffusion data corruption process and the way the negative samples are constructed. Specifically, the data corruption process in discrete DPMs samples and replaces certain tokens with random or mask tokens at each diffusion step Austin et al. (2021); Gu et al. (2022), which is a chunk-level operation within a given data sequence, similar to the way we construct intra-negative samples by shuffling chunk-level order. In contrast, the sample-wise auxiliary diffusion seeks to provide sample-level supervision, which is consistent with our inter-negative sampling method. In the interest of clarity and concision, we only present the experimental results for the Step-Intra and Sample-Inter settings in Sec. 4 of our main paper. The complete results obtained with the other contrastive settings and a more detailed analysis are included in the supplement.

4 EXPERIMENTS
We conduct experiments on three conditional generation tasks: dance-to-music generation, text-to-image synthesis, and class-conditioned image synthesis. For the dance-to-music task, we seek to generate audio waveforms for complex music from human motion and dance video frames. For the text-to-image task, the objective is to generate images from given textual descriptions. Given our emphasis on input-output faithfulness in cross-modal generation, the main analyses are based on the dance-to-music generation task, since the evaluation protocol from Zhu et al. (2022a) explicitly measures such connections in terms of beats, genre, and general correspondence for the generated music.

4.1 DANCE-TO-MUSIC GENERATION
Dataset. We use the AIST++ Li et al. (2021) dataset and the TikTok Dance-Music dataset Zhu et al. (2022a) for the dance-to-music experiments. AIST++ is a subset of the AIST dataset Tsuchida et al. (2019), which contains 1020 dance videos and 60 songs performed by professional dancers and filmed in clean studio environment settings without occlusions. AIST++ provides human motion data in the form of SMPL Loper et al. (2015) parameters and body keypoints, and includes annotations for different genres and choreography styles. The TikTok Dance-Music dataset includes 445 dance videos collected from the social media platform. The 2D skeleton data extracted with OpenPose Cao et al. (2017); Cao et al. (2019) is used as the motion representation. We adopt the official cross-modality splits, without overlapping music songs, for both datasets.

Implementations. The sampling rate for all audio signals is 22.5 kHz in our experiments. We use 2-second music samples as in Zhu et al. (2022a) for the main experiments.
We fine-tuned the pre-trained Jukebox Dhariwal et al. (2020) for our Music VQ-VAE model. For the motion encoder, we deploy a backbone stacked with convolutional layers and residual blocks. For the visual encoder, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinetics Kay et al. (2017) as the visual conditioning. The motion and visual encoder outputs are concatenated to form the final continuous conditioning input to our contrastive diffusion model. For the contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network p_θ. It includes 19 transformer blocks, with each block consisting of full attention, cross attention and feed forward modules, and a channel size of 1024 for each block. We set the initial weight for the contrastive loss to λ = 5e-5. The number N of intra- and inter-negative samples for each GT music sample is 10. The visual encoder, motion encoder, and contrastive diffusion model are jointly optimized. More implementation details are provided in the supplement.

Evaluations. The evaluation of synthesized music measures both the conditioning-output correspondence and the general synthesis quality using the metrics introduced in Zhu et al. (2022a). Specifically, the metrics include the beats coverage score, the beats hit score, the genre accuracy score, and two subjective evaluation tests with Mean Opinion Scores (MOS) for musical coherence and general quality. Among these metrics, the beats scores emphasize intra-sample properties, since they calculate the second-level audio onset strength within musical chunks Ellis (2007), while the genre accuracy focuses on instance-level musical attributes of music styles. Detailed explanations of the above metrics can be found in Zhu et al. (2022a). We compare against multiple dance-to-music generation works: Foley Gan et al. (2020a), Dance2Music Aggarwal & Parikh (2021), CMT Di et al. (2021), and D2M-GAN Zhu et al. (2022a). The first three models rely on symbolic discrete MIDI musical representations, while the last one also uses a VQ musical representation. The major difference between the symbolic MIDI and discrete VQ musical representations lies in the fact that MIDI is pre-defined for each instrument, while VQ is learning-based. The latter thus enables complex and free music synthesis appropriate for scenarios like dance videos.

Table 1: Quantitative evaluation results for the dance-to-music task on the AIST++ dataset. This table shows the best performance scores we obtain for different contrastive diffusion steps. We report the mean and standard deviations of our contrastive diffusion for three inference tests.

Method            | Beats Coverage ↑ | Beats Hit ↑ | Genre Accuracy ↑ | Coherence MOS ↑ | Quality MOS ↑
GT Music          | 100              | 100         | 88.5             | 4.7             | 4.8
Foley             | 74.1             | 69.4        | 8.1              | 2.9             | -
Dance2Music       | 83.5             | 82.4        | 7.0              | 3.0             | -
CMT               | 85.5             | 83.5        | 11.6             | 3.0             | -
D2M-GAN           | 88.2             | 84.7        | 24.4             | 3.3             | 3.4
Ours Vanilla      | 89.0±1.1         | 83.8±1.5    | 25.3±0.8         | 3.3             | 3.6
Ours Step-Intra   | 93.9±1.2         | 90.7±1.5    | 25.8±0.6         | 3.6             | 3.5
Ours Sample-Inter | 91.8±1.6         | 86.9±1.4    | 27.2±0.5         | 3.6             | 3.6

Table 2: Quantitative evaluation results for the dance-to-music task on the TikTok dataset. We set the default number of diffusion steps to 80.

Method            | Beats Coverage / Hit ↑
D2M-GAN           | 88.4 / 82.3
Ours Vanilla      | 88.7 / 81.4
Ours Step-Intra   | 91.8 / 86.3
Ours Sample-Inter | 90.1 / 85.5

Results and Discussion. The quantitative experimental results are shown in Tab. 1 and Tab. 2.
Our proposed methods achieve better performance than the competing methods, even in the vanilla version without contrastive mechanisms. Furthermore, we find that the Step-Intra setting is more helpful in increasing the beats scores, while the Sample-Inter setting yields more improvement in the genre accuracy scores. We believe this is due to the evaluation methods of the different metrics. The beats scores measure the chunk-level (i.e., the audio onset strength Ellis (2007)) consistency between the GT and synthesized music samples Zhu et al. (2022a), while the genre scores consider the overall musical attributes of each sample sequence at the instance level. This finding is consistent with our assumptions in Sec. 3.3.

Convergence Analysis. We also analyze the impact of the proposed contrastive diffusion on model convergence in terms of diffusion steps. The number of diffusion steps is a significant hyper-parameter for DPMs Sohl-Dickstein et al. (2015b); Nichol & Dhariwal (2021); Austin et al. (2021); Gu et al. (2022); Kingma et al. (2021) that directly influences the inference time and synthesis quality. Previous works have shown that a larger number of diffusion steps usually leads to better model performance, but longer inference times Kingma et al. (2021); Gu et al. (2022). We demonstrate that, with the improved mutual information via the proposed contrastive diffusion method, we can greatly reduce the number of steps needed. As shown in Fig. 4 (left), we observe that the beats scores reach a stable level at approximately 80 steps, ~35% fewer than the vanilla DPM, which converges in ~120 steps. More ablation studies and analysis on this task can be found in the supplement.

4.2 CONDITIONAL IMAGE SYNTHESIS
Dataset. We conduct text-to-image synthesis on the CUB200 Wah et al. (2011) and MSCOCO Lin et al. (2014) datasets. The CUB200 dataset contains images of 200 bird species. Each image has 10 corresponding text descriptions. The MSCOCO dataset contains 82k images for training and 40k images for testing. Each image has 5 text descriptions. We also perform class-conditioned image generation on ImageNet Deng et al. (2009); Russakovsky et al. (2015). Implementation details for both tasks are provided in the supplement.

Evaluations. We adopt two evaluation metrics for text-to-image synthesis: the classic FID score Heusel et al. (2017) as the general measurement of image quality, and the CLIPScore Hessel et al. (2021) to evaluate the correspondence between the given textual caption and the synthesized image. For class-conditioned image synthesis, we use the FID score and a classifier-based accuracy for general and input-output correspondence measurement. We compare against text-to-image generation methods including StackGAN Zhang et al. (2017), StackGAN++ Zhang et al. (2018), SEGAN Tan et al. (2019), AttnGAN Xu et al. (2018), DM-GAN Zhu et al. (2019), DF-GAN Tao et al. (2020), DAE-GAN Ruan et al. (2021), DALLE Ramesh et al. (2021), and VQ-Diffusion Gu et al. (2022). For the experiments on ImageNet, we list result comparisons with ImageBART Esser et al. (2021a), VQ-GAN Esser et al. (2021b), IDDPM Nichol & Dhariwal (2021), and VQ-Diffusion Gu et al. (2022). Notably, VQ-Diffusion Gu et al. (2022) also adopts the discrete diffusion generative backbone, and can be considered the vanilla version without contrastive mechanisms.
Additionally, we provide more comparisons with other methods in terms of dataset, model scale, and training time in the supplement for a more comprehensive and fair understanding of our proposed method.

Results and Discussion. The quantitative results are presented in Tab. 3 and Tab. 4. We observe that our contrastive diffusion achieves state-of-the-art performance for both general synthesis fidelity and input-output correspondence, and that the Sample-Inter contrastive setting is more beneficial than Step-Intra for image synthesis. This empirical finding again validates our assumption regarding the contrastive settings in Sec. 3.3, where the Sample-Inter setting helps more with instance-level synthesis quality. Notably, as shown in Fig. 4 (right), our contrastive diffusion method converges at about 60 diffusion steps, while the vanilla version converges at approximately 100 steps on CUB200 Wah et al. (2011), which greatly increases the inference speed, by 40%.

5 CONCLUSION
While DPMs have demonstrated remarkable potential, improving their training and inference efficiency while maintaining flexible and accurate results for conditional generation is an ongoing challenge, particularly for cross-modal tasks. Our Conditional Discrete Contrastive Diffusion (CDCD) loss addresses this by maximizing the mutual information between the conditioning input and the generated output. Our contrastive diffusion mechanisms and negative sampling methods effectively incorporate this loss into DPM training. Extensive experiments on various cross-modal conditional generation tasks demonstrate the efficacy of our approach in bridging drastically differing domains.

ACKNOWLEDGMENT
This research is partially supported by NSF SCH-2123521 and Snap unrestricted gift funding. This article solely reflects the opinions and conclusions of its authors and not the funding agents.

ETHICS STATEMENT
As in other media generation works, there are possible malicious uses of such media to be addressed by oversight organizations and regulatory agencies. Our primary objective as researchers is always creating more reliable and secure AI and machine learning systems that maximally benefit our society.

A MORE RELATED WORKS
In addition to the fields of Diffusion Probabilistic Models, Contrastive Representation Learning, and VQ Representations for Conditional Generation discussed in the main paper, our work is also closely related to the multi-modal learning and generation fields. The research topic of multimodal learning, which incorporates data from various modalities such as audio, vision, and language, has attracted much attention in recent years Baltrušaitis et al. (2018); Zhu et al. (2022b); Wu et al. (2023). General audio-visual learning works typically seek to investigate correlations arising from the intrinsic synchronization of the two modalities Aytar et al. (2016); Korbar et al. (2018); Owens & Efros (2018); Owens et al. (2016); Arandjelovic & Zisserman (2017), and then utilize them in various downstream audio-visual tasks such as audio-visual action recognition Kazakos et al. (2019); Gao et al. (2020), audio-visual event localization and parsing Tian et al. (2018); Zhu et al. (2021a); Wu et al. (2019); Wu & Yang (2021), and audio-visual captioning Rahman et al. (2019); Wang et al. (2018). Works generating music from visual and/or motion data have also been widely explored in recent years Gan et al. (2020a); Di et al. (2021); Aggarwal & Parikh (2021); Zhu et al. (2022a).
In the vision and language area, text generation from visual inputs has been extensively explored in the image and video captioning tasks Zhu et al. (2020; 2021b); Anderson et al. (2018); You et al. (2016); Wang et al. (2017). At the same time, works on image/video generation from text have also attracted much attention with recently released large-scale models Radford et al. (2021); Li et al. (2019); Ruan et al. (2021); Ramesh et al. (2021).

B DETAILED PROOF AND TRAINING
B.1 LOWER BOUND OF CDCD LOSS
We show that the proposed CDCD loss has a lower bound related to the mutual information and the number of negative samples N. The derivations below are similar to those from Oord et al. (2018):

$$\begin{aligned}
\mathcal{L}_{CDCD} &:= \mathbb{E}_Z\left[-\log\frac{\frac{p_\theta(z_0|c)}{p_\theta(z_0)}}{\frac{p_\theta(z_0|c)}{p_\theta(z_0)} + \sum_{z^j\in Z'}\frac{p_\theta(z_0^j|c)}{p_\theta(z_0^j)}}\right] &&\text{(6a)}\\
&= \mathbb{E}_Z\log\left[1 + \frac{p_\theta(z_0)}{p_\theta(z_0|c)}\sum_{z^j\in Z'}\frac{p_\theta(z_0^j|c)}{p_\theta(z_0^j)}\right] &&\text{(6b)}\\
&\approx \mathbb{E}_Z\log\left[1 + N\,\frac{p_\theta(z_0)}{p_\theta(z_0|c)}\,\mathbb{E}_{Z'}\!\left[\frac{p_\theta(z_0^j|c)}{p_\theta(z_0^j)}\right]\right] &&\text{(6c)}\\
&= \mathbb{E}_Z\log\left[1 + N\,\frac{p_\theta(z_0)}{p_\theta(z_0|c)}\right] &&\text{(6d)}\\
&\geq \mathbb{E}_Z\log\left[N\,\frac{p_\theta(z_0)}{p_\theta(z_0|c)}\right] &&\text{(6e)}\\
&= \log(N) - I(z_0, c). &&\text{(6f)}
\end{aligned}$$

B.2 CONVENTIONAL VARIATIONAL LOSS
The conventional variational loss L_vb is derived as follows Sohl-Dickstein et al. (2015b):

$$\begin{aligned}
\mathcal{L}_{vb}(x) &:= \mathbb{E}_q\left[-\log\frac{p_\theta(x_{0:T})}{q(x_{1:T}|x_0)}\right]\\
&= \mathbb{E}_q\left[-\log p(x_T) - \sum_{t>1}\log\frac{p_\theta(x_{t-1}|x_t)}{q(x_t|x_{t-1})} - \log\frac{p_\theta(x_0|x_1)}{q(x_1|x_0)}\right]\\
&= \mathbb{E}_q\left[-\log p(x_T) - \sum_{t>1}\log\frac{p_\theta(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}\cdot\frac{q(x_{t-1}|x_0)}{q(x_t|x_0)} - \log\frac{p_\theta(x_0|x_1)}{q(x_1|x_0)}\right]\\
&= \mathbb{E}_q\left[-\log\frac{p(x_T)}{q(x_T|x_0)} - \sum_{t>1}\log\frac{p_\theta(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)} - \log p_\theta(x_0|x_1)\right]\\
&= \mathbb{E}_q\left[D_{KL}(q(x_T|x_0)\,\|\,p(x_T)) + \sum_{t>1} D_{KL}(q(x_{t-1}|x_t,x_0)\,\|\,p_\theta(x_{t-1}|x_t)) - \log p_\theta(x_0|x_1)\right]. \quad\text{(7)}
\end{aligned}$$

B.3 L_vb WITH CONDITIONING PRIOR
Following the unconditional conventional variational loss, we then show its conditional variant with the conditioning c as prior, which has also been adopted in Gu et al. (2022):

$$\begin{aligned}
\mathcal{L}_{vb}(x, c) &= \mathcal{L}_0 + \mathcal{L}_1 + \dots + \mathcal{L}_{T-1} + \mathcal{L}_T\\
\mathcal{L}_0 &= -\log p_\theta(x_0|x_1, c)\\
\mathcal{L}_{t-1} &= D_{KL}(q(x_{t-1}|x_t, x_0)\,\|\,p_\theta(x_{t-1}|x_t, c))\\
\mathcal{L}_T &= D_{KL}(q(x_T|x_0)\,\|\,p(x_T)) \quad\text{(8)}
\end{aligned}$$

B.4 STEP-WISE AND SAMPLE-WISE CONTRASTIVE DIFFUSION
Below, we show the full derivation for the step-wise parallel contrastive diffusion loss. Given that the intermediate variables z_{1:T} are also taken into account in this step-wise contrastive diffusion, we slightly modify the initial notation f(z_0, c) = p_θ(z_0|c)/p_θ(z_0) from Eq. (2) in the main paper to f(z, c) = p_θ(z_{0:T}|c)/p_θ(z_{0:T}).

$$\begin{aligned}
\mathcal{L}_{CDCD\text{-}Step} &:= -\mathbb{E}_Z\left[\log\frac{f(z,c)}{f(z,c) + \sum_{z^j\in Z'} f(z^j,c)}\right] &&\text{(9a)}\\
&= \mathbb{E}_Z\log\left[1 + \frac{\sum_{z^j\in Z'} f(z^j,c)}{f(z,c)}\right] &&\text{(9b)}\\
&= \mathbb{E}_Z\log\left[1 + \frac{p_\theta(z_{0:T})}{p_\theta(z_{0:T}|c)}\sum_{z^j\in Z'}\frac{p_\theta(z^j_{0:T}|c)}{p_\theta(z^j_{0:T})}\right] &&\text{(9c)}\\
&\approx \mathbb{E}_Z\log\left[1 + \frac{p_\theta(z_{0:T})}{p_\theta(z_{0:T}|c)}\,N\,\mathbb{E}_{Z'}\frac{p_\theta(z^j_{0:T}|c)}{p_\theta(z^j_{0:T})}\right] &&\text{(same as Eq. (6c))}\ \text{(9d)}\\
&\approx \mathbb{E}_Z\mathbb{E}_q\log\left[\frac{q(z_{1:T}|z_0)}{p_\theta(z_{0:T}|c)}\,N\,\frac{p_\theta(z_{0:T}|c)}{q(z_{1:T}|z_0)}\right] &&\text{(conditional } p_\theta\text{)}\ \text{(9e)}\\
&\approx \mathbb{E}_q\left[-\log\frac{p_\theta(z_{0:T}|c)}{q(z_{1:T}|z_0)}\right] - \log N\;\mathbb{E}_{Z'}\mathbb{E}_q\left[-\log\frac{p_\theta(z_{0:T}|c)}{q(z_{1:T}|z_0)}\right] &&\text{(9f)}\\
&= \mathcal{L}_{vb}(z, c) - C\sum_{z^j\in Z'}\mathcal{L}_{vb}(z^j, c). &&\text{(9g)}
\end{aligned}$$

Algorithm 1: Conditional Discrete Contrastive Diffusion Training. The referenced equations can be found in the main paper.
Input: Initial network parameters θ, contrastive loss weight λ, learning rate η, number of negative samples N, total diffusion steps T, conditioning information c, contrastive mode m ∈ {Step, Sample}.
1:  for each training iteration do
2:      t ∼ Uniform({1, 2, ..., T})
3:      z_t ← sample from q(z_t | z_{t-1})
4:      L_vb ← Σ_{i=1,...,t} L_i                            ▷ Eq. (1)
5:      if m == Step then
6:          for j = 1, ..., N do
7:              z_t^j ← sample from q(z_t^j | z_{t-1}^j, c)  ▷ from negative variables at previous steps
8:          end for
9:          L_CDCD ← −(1/N) Σ_j L_vb^j                      ▷ Eq. (3)
10:     else if m == Sample then
11:         for j = 1, ..., N do
12:             z_t ← sample from q(z_t | z_0^j, c)          ▷ from negative variables at step 0
13:         end for
14:         L_CDCD ← −(1/N) Σ_j L_{z_0}^j                   ▷ Eq. (4)
15:     end if
16:     L ← L_vb + λ L_CDCD                                 ▷ Eq. (5)
17:     θ ← θ − η ∇_θ L
18: end for
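For readers who prefer code, the following is a compact PyTorch-style sketch of one iteration of Algorithm 1 in the Step mode. Here q_sample, variational_bound, and model are placeholders we assume for the forward corruption sampler, the L_vb terms of Eq. (1), and the denoising network p_θ, respectively; none of these is specified at this level in the paper, so this is an illustration rather than the authors' implementation.

```python
import torch

def training_step(model, q_sample, variational_bound, optimizer,
                  z0, c, negatives, lam=5e-5, T=100):
    """One Step-mode iteration of Algorithm 1 (sketch; names assumed)."""
    t = torch.randint(1, T + 1, (1,)).item()        # line 2: t ~ Uniform({1..T})
    zt = q_sample(z0, t)                            # line 3: corrupt positive tokens
    L_vb = variational_bound(model, z0, zt, t, c)   # line 4: Eq. (1) terms

    # lines 5-9: parallel diffusion over the N negative sequences.
    L_neg = torch.stack([
        variational_bound(model, zj, q_sample(zj, t), t, c)
        for zj in negatives
    ])
    L_cdcd = -L_neg.mean()                          # line 9: Eq. (3), up to the constant C

    loss = L_vb + lam * L_cdcd                      # line 16: Eq. (5)
    optimizer.zero_grad(); loss.backward(); optimizer.step()  # line 17
    return loss.item()
```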
In Eq. (9g) above, C stands for a constant equal to log N, which can be further adjusted through the weight we select for the CDCD loss as in Eq. (5). Similarly, for the sample-wise auxiliary contrastive diffusion, the loss can be derived as follows:

$$\begin{aligned}
\mathcal{L}_{CDCD\text{-}Sample} &:= -\mathbb{E}_Z\left[\log\frac{f(z_0,c)}{f(z_0,c) + \sum_{z^j\in Z'} f(z_0^j,c)}\right] &&\text{(10a)}\\
&= \mathbb{E}_Z\log\left[1 + \frac{p_\theta(z_0)}{p_\theta(z_0|c)}\,N\,\mathbb{E}_{Z'}\!\left[\frac{p_\theta(z_0^j|c)}{p_\theta(z_0^j)}\right]\right] &&\text{(10b)}\\
&\approx \mathbb{E}_Z\mathbb{E}_q\log\left[\frac{q(z_{1:T}|z_0)}{p_\theta(z_0|c)}\,N\,\frac{p_\theta(z_0|c)}{q(z_{1:T}|z_0)}\right] &&\text{(10c)}\\
&\approx \mathbb{E}_q\left[-\log\frac{p_\theta(z_0|c)}{q(z_{1:T}|z_0)}\right] - N\,\mathbb{E}_{Z'}\mathbb{E}_q\left[-\log\frac{p_\theta(z_0|c)}{q(z_{1:T}|z_0)}\right] &&\text{(10d)}\\
&= \mathbb{E}_q[-\log p_\theta(z_0|z_t, c)] - C\sum_{z^j\in Z'}\mathbb{E}_q[-\log p_\theta(z_0^j|z_t, c)]. &&\text{(10e)}
\end{aligned}$$

Note that, from a high-level perspective, our contrastive idea covers two different concepts, while conventional contrastive learning usually focuses only on negative samples. In our case, because the unique formulation of diffusion models brings the diffusion steps into the methodology design, we consider contrast within the context of "negative samples" and "negative steps" (the latter corresponding to the negative intermediate steps). In the derivation above, we use the symbols Z and q to distinguish between these two concepts.

B.5 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION TRAINING
The training process for the proposed contrastive diffusion is explained in Algo. 1.

C ADDITIONAL EXPERIMENTAL DETAILS AND ANALYSIS
C.1 DANCE-TO-MUSIC TASK
Implementation. The sampling rate for all audio signals is 22.5 kHz in our experiments. We use 2-second music samples as in Zhu et al. (2022a) for our main experiments, resulting in 44,100 audio data points for each raw music sequence. For the Music VQ-VAE, we fine-tuned Jukebox Dhariwal et al. (2020) on our data to leverage its pre-learned codebook from a large-scale music dataset (approximately 1.2 million songs). The codebook size K is 2048, with a token dimension d_z = 128, and the hop length L is 128 in our default experimental setting. For the motion module, we deploy a backbone stacked with convolutional layers and residual blocks. The dimension size of the embedding we use for music conditioning is 1024. For the visual module, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinetics Kay et al. (2017) as the visual conditioning information, with a dimension size of 2048. In the implementation of our contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network p_θ. It includes 19 transformer blocks, each consisting of full-attention, cross-attention and feed-forward modules, with a channel size of 1024 per block. We set the initial weight for the contrastive loss to λ = 5e-5. The numbers of intra- and inter-negative samples for each GT music sample are both 10. The AdamW Loshchilov & Hutter (2017) optimizer with β1 = 0.9 and β2 = 0.96 is deployed in our training, with a learning rate of 4.5e-4. We also employ an adaptive weight for the denoising loss, gradually decreasing it as the diffusion step increases and approaches the end of the chain. The visual module, motion module, and contrastive diffusion model are jointly optimized. The architecture of the adopted motion encoder is shown in Tab. 5, which is the same as in Zhu et al. (2022a).
Other than the aforementioned implementation details, we also include the mask token technique, which bears resemblance to those used in language modelling Devlin et al. (2018) and text-to-image synthesis Gu et al. (2022), for our dance-to-music generation task. We adopt a truncation rate of 0.86 in our inference.

MOS Evaluation Test. We asked a total of 32 participants to take part in our subjective Mean Opinion Score (MOS) music evaluations Zhu et al. (2022a); Kumar et al. (2019), 11 of whom are female and the rest male. For the dance-music coherence test, we fuse the generated music samples with the GT videos as post-processing. We then asked each evaluator to rate 20 generated videos with a score of 1 (least coherent) to 5 (most coherent) after watching the processed video clip. Specifically, the participants were asked to pay more attention to the dance-music coherence, in terms of the dance moves corresponding to the music genre and rhythm, rather than the overall music quality, with reference to the GT video clips with the original music. As for the overall quality evaluations, we only play the audio tracks, without the video frames, to each evaluator. As before, they are asked to rate the overall music quality with a score of 1 (worst audio quality) to 5 (best audio quality).

Training Cost. For the dance2music task experiments on the AIST++ dataset, we use 4 NVIDIA RTX A5000 GPUs, and train the model for approximately 2 days. For the same task on the TikTok dance-music dataset, the training takes approximately 1.5 days on the same hardware.

Complete Results for Contrastive Settings. As discussed in our main paper, there are four possible combinations of contrastive settings given the different contrastive diffusion mechanisms and negative sampling methods. Here, we include the complete quantitative scores for the different contrastive settings in Tab. 6. We observe that all four contrastive settings, including the Step-Inter and Sample-Intra settings not reported in our main paper, help to improve the performance. As noted, among all the settings, Step-Intra and Sample-Inter are more reasonable and yield larger improvements for intra-sample data attributes (i.e., beats scores) and instance-level features (i.e., genre accuracy scores), respectively.

Ablation on Music Length. Although we use 2-second musical sequences in the main experiments to make for consistent and fair comparisons with Zhu et al. (2022a), our framework can also synthesize longer musical sequences. In the supplementary, we show generated music sequences of 6 seconds. The quantitative evaluations in terms of different musical sequence lengths are presented in Tab. 7, where we show better performance when synthesizing longer musical sequences.

C.2 TEXT-TO-IMAGE TASK
Implementation. For the text-to-image generation task, we adopt VQ-GAN Esser et al. (2021b) as the discrete encoder and decoder. The codebook size K is 2886, with a token dimension d_z = 256. VQ-GAN converts a 256×256 resolution image to 32×32 discrete tokens. For the textual conditioning, we employ the pre-trained CLIP Radford et al. (2021) model to encode the given textual descriptions. The denoising diffusion model p_θ has 18 transformer blocks and a channel size of 192, which is a similar model scale to the small version of VQ-Diffusion Gu et al. (2022). We use λ = 5e-5 as the contrastive loss weight. As in the dance-to-music task, we also use the adaptive weight that changes within the diffusion stages.
We keep the same truncation rate of 0.86 as in our dance-to-music experiment and in Gu et al. (2022). Unlike in the dance-to-music experiments, where we jointly learn the conditioning encoders, both the VQ-GAN and CLIP models are fixed during the contrastive diffusion training.

Training Cost. For the text2image task experiments on the CUB200 dataset, the training takes approximately 5 days using 4 NVIDIA RTX A5000 GPUs. For the same experiments on the MSCOCO dataset, we run the experiments on Amazon Web Services (AWS) using 8 NVIDIA Tesla V100 GPUs. This task required 10 days of training.

C.3 CLASS-CONDITIONED IMAGE SYNTHESIS TASK
Implementation. For class-conditioned image synthesis, we also adopt the pre-trained VQ-GAN Esser et al. (2021b) as the discrete encoder and decoder. We replace the conditioning encoder with a class embedding optimized during the contrastive diffusion training. The size of the conditional embedding is 512. Other parameters and techniques remain the same as in the text-to-image task.

Training Cost. For the class-conditioned experiments on ImageNet, we use 8 NVIDIA Tesla V100 GPUs running on AWS. This task required 20 days of training.

D MORE QUALITATIVE RESULTS
D.1 GENERATED MUSIC SAMPLES
For qualitative samples of synthesized dance music sequences, please refer to the anonymous page in our supplement with music samples. In addition to the generated music samples on AIST++ Tsuchida et al. (2019); Li et al. (2021) and the TikTok Dance-Music dataset Zhu et al. (2022a), we also include some qualitative samples obtained with music editing operations based on the dance-music genre annotations from AIST++. Specifically, we edit the original paired motion conditioning input with a different dance-music genre using a different dance choreographer.

Discussion on Musical Representations and Audio Quality. It is worth noting that we only compare the overall audio quality with that of D2M-GAN Zhu et al. (2022a). This is due to the nature of the different musical representations in the literature on deep-learning based music generation Gan et al. (2020a); Dong et al. (2018); Huang et al. (2019); Gan et al. (2020b); Aggarwal & Parikh (2021). There are mainly two categories of musical representations adopted in previous works: pre-defined symbolic and learning-based representations Ji et al. (2020); Briot et al. (2020). For the former, typical options include 1D piano-roll and 2D MIDI-based representations. While these works benefit from pre-defined music synthesizers and produce music that does not include raw audio noise, the main limitation is that such representations are usually limited to a single specific instrument, which hinders their flexibility in wider and more complex scenarios such as dance videos. In contrast, learning-based music representations (i.e., musical VQ in our case) rely on well-trained music synthesizers as decoders, but can be used as a unified representation for various musical sounds, e.g., instruments or voices. However, the training of such music encoders and decoders for high-quality audio signals itself remains a challenging problem. Specifically, high-quality audio is a form of high-dimensional data with an extremely large sampling rate, even compared to high-resolution images. For example, the sampling rate for CD-quality audio signals is 44.1 kHz, resulting in 2,646,000 data points for a one-minute musical piece. To this end, existing deep learning based works Dhariwal et al.
(2020); Kumar et al. (2019) for music generation employ methods to reduce the number of dimensions, e.g., by introducing hop lengths and smaller sampling rates. These operations help to make music learning and generation more computationally tractable, but also introduce additional noise into the synthesized audio signals. In this work, we adopt the pre-trained Jukebox model Dhariwal et al. (2020) as our music encoder and decoder for the musical VQ representation. The adopted model has a hop length of 128, which corresponds to the top-level model from the original work Dhariwal et al. (2020). Jukebox employs 3 models: top-, middle-, and bottom-level, with both audio quality and required computation increasing from the first to the last model. As an example, in the supplemental HTML page, we provide music samples directly reconstructed from Jukebox using the top-level model we employ in our work, compared to the ground-truth audio. While the bottom-level model (with a hop length of 8) allows for high-quality audio reconstruction, it requires much more time and computation, not only for training but also for the final inference, e.g., 3 hours to generate a 20-second musical sequence. As the synthesized music from the top-level model includes some audible noise, we apply a noise reduction operation Sainburg et al. (2020). However, the overall audio quality is not a primary factor that we specifically address in this work on cross-modal conditioning and generation, as it largely depends on the specific music encoder and decoder that are employed. This explains why we report similar MOS scores in terms of general audio quality.

D.2 SYNTHESIZED IMAGES
We present more qualitative examples for text-to-image synthesis and class-conditioned image synthesis in Fig. 5, Fig. 6, and Fig. 7.

E FURTHER DISCUSSION ON THE CDCD LOSS
In this section, we provide further discussion of the proposed CDCD loss in terms of various aspects, including its relevance to existing auxiliary losses, the impact of the CDCD strength, and additional experimental results.

E.1 CDCD AS AUXILIARY LOSS
While diffusion models are typically trained and optimized with the conventional variational lower bound loss L_vb, as described in the main paper and Appendix B.2, several different types of auxiliary losses have been proposed to further regularize and improve the learning of diffusion models. Specifically, Dhariwal & Nichol (2021) introduce the idea of classifier-based guidance for denoising diffusion probabilistic models with continuous state space. Classifier-free guidance is proposed in Ho & Salimans (2022). In the area of discrete diffusion formulations Austin et al. (2021); Gu et al. (2022), an auxiliary loss that encourages the model to predict the noiseless token at an arbitrary step is adopted and has proven to help with synthesis quality. Similar to the previous cases, we consider the proposed CDCD loss a type of auxiliary loss, which seeks to provide additional guidance to better learn the conditional distribution p(x|c). Specifically, classifier-free guidance Ho & Salimans (2022) proposes to randomly discard conditioning while learning a conditional diffusion generative model, which bears resemblance to our introduced downsampled contrastive steps in Appendix E.3.

E.2 IMPACT OF CDCD STRENGTH
We further show ablation studies on the parameter λ, the weight of our proposed CDCD loss, which characterizes the strength of this contrastive regularizer.
We conduct the dance-to-music generation experiments with different values of λ and show the results in Tab. 9. We observe from the table that the performance in terms of the beats scores is relatively robust for λ values ranging from 4e-5 to 5e-5. At the same time, we empirically observe that with a larger value of λ, the model converges faster, in fewer training epochs. In the case of the image synthesis task, we are rather cautious about the strength of the imposed contrastive regularizer. Intuitively, the proposed CDCD loss encourages the model to learn a slightly different distribution for negative samples, which could impose a trade-off against the distribution for the actual data given a specific conditioning. Therefore, while a larger value of λ helps with the learning speed, we empirically set λ to 5e-5. Note that this value is adapted from the weight for other auxiliary losses in previous works Gu et al. (2022).

E.3 DOWNSAMPLED CONTRASTIVE STEPS
While we show the performance of the complete step-wise contrastive diffusion in the main paper, here we discuss an alternative way to implement the proposed method at lower computational cost, by downsampling the contrastive steps in the diffusion process. Specifically, we randomly downsample the steps to which the proposed CDCD loss is applied, which shares a similar spirit with classifier-free guidance Ho & Salimans (2022), where the conditioning is randomly dropped out. The experimental results are listed in Tab. 10, where there is little performance drop with downsampled contrastive steps. A minimal sketch of this step-downsampling idea follows.
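The sketch below applies the CDCD term only at randomly kept steps; the keep probability and all names are our own assumptions for illustration, since the paper does not specify the downsampling rate at this level.

```python
import torch

def combined_loss(L_vb: torch.Tensor, L_cdcd: torch.Tensor,
                  lam: float = 5e-5, keep_prob: float = 0.5) -> torch.Tensor:
    """Eq. (5) with the contrastive term applied to a random subset of steps.

    At each sampled diffusion step, the CDCD regularizer is kept with
    probability keep_prob and skipped otherwise, which cuts its extra
    cost roughly in proportion to 1 - keep_prob.
    """
    if torch.rand(()) < keep_prob:
        return L_vb + lam * L_cdcd
    return L_vb
```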
1. What is the main contribution of the paper regarding the regularizer for diffusion processes? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of computational resources and elaborateness? 3. Do you have any questions or concerns about the experiments, such as ablation studies, baselines, and time complexity? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions or recommendations for improving the paper, such as adding Stable Diffusion as a baseline, reporting wall-clock training time, and providing more extensive ablation studies?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper presents a regularizer that enforces that the condition embedding and the latent embedding of the input datum share maximal mutual information. This regularizer helps the diffusion process because the condition becomes rich with information about the latent embeddings, aiding the denoising process. The regularizer term is inspired by a contrastive loss in which the numerator indirectly measures the mutual information between the condition and latent embeddings, and the denominator includes information from conditions and embeddings that should not be related to each other (i.e., mimicking negative samples). The paper also discusses two ways of applying this regularizer: step-wise parallel diffusion and sample-wise auxiliary diffusion. The former allows parallel evaluations of the denoising diffusion process while the latter does not. The paper presents experiments on dance-to-music generation, text-to-image synthesis, and class-conditioned image synthesis showing improvements over the included baselines.

Strengths And Weaknesses
Strengths:
- I find the idea of using negative samples as part of the contrastive-loss-based regularizer to maximize the mutual information between the condition and latent interesting and novel. I think using these negative samples can indeed inform the model about what not to generate and better constrain the data generation.
Weaknesses:
- While I find the idea novel, I think the method is quite elaborate and can imply more computational resources. First, including negative samples as part of the loss will increase computation, making an already expensive denoising diffusion process even more costly. Second, while the proposed step-wise diffusion allows parallelization, it still requires more resources, increasing the cost of an already expensive denoising diffusion process.
- Insufficient experiments: The paper lacks an ablation study of the parameter λ, which controls the contribution of the proposed regularizer to the total loss. According to Section 4.1, λ was set to 5e-5, a value I find too low. It is unclear from the experiments how to set this parameter. More importantly, the impact on performance of increasing the value of λ, and thus enforcing the regularizer more strongly, is not clear.
- The paper also misses an ablation study of the latent encoder. What is the effect of not even using one? Wouldn't a latent encoder likely reduce the information (in the information-theoretical sense) from the original input? Can the proposed methods work on raw signals, i.e., with the input signals directly as latents?
- The experiments in the paragraph "Results and Discussion" and Fig. 4 state that because the proposed method requires fewer steps to converge, it is faster to converge. While the experiments show a reduction in steps, the time cost of each step of the proposed method is unclear. I think demonstrating that the proposed method indeed reduces wall-clock convergence time is more important than counting steps.
- The experiment in Table 3 is missing a more appropriate baseline: Stable Diffusion. I think using Stable Diffusion instead of DALLE makes more sense because Stable Diffusion also uses a latent representation while DALLE does not.
- From the theoretical perspective, is there a proof showing that the proposed regularizer combined with the variational-bound-based loss still preserves the Langevin dynamics in some way?
I think discussing the theoretical guarantees can be informative.

================= Post-Discussion =================
After engaging with the authors in the discussion, I still think the paper can benefit from reporting the wall-clock time of the training phase, adding more extensive ablation studies, and adding Stable Diffusion as a baseline. For the most part, most of my concerns about clarity were addressed. Nevertheless, because I think there are missing experiments, I cannot fully champion the paper, as I think it can benefit from another revision. I will slightly increase my rating to 6 - marginally above the acceptance threshold.

Clarity, Quality, Novelty And Reproducibility
Clarity concerns: While I find the idea of using negative samples in a contrastive-based loss to improve a diffusion process interesting, the narrative is missing intuitive explanations of the "Step-Wise Parallel Diffusion" and "Sample-Wise Auxiliary Diffusion". The justification for the two different designs is unclear from the paper. Also, I think the proposed regularizer discussed in 3.1 requires more discussion. Is it fair to say that the regularizer enforces the diffusion process to also generate negative samples? From Fig. 2, I understand that the negative samples go through a denoising diffusion process too. Thus, is it fair to say that the regularizer makes the model learn two distributions: one that generates the expected data given a condition, while the other generates what is not expected?

Reproducibility concerns: Given the current state of the paper, I find it hard to implement and reproduce the results. The architecture of the conditioning encoders is not discussed, and it is unclear if their parameters are optimized as part of the regularizer. Overall, the equations do not reveal what parameters are learned and are a bit convoluted. For example, in Sec. 3.1, the pdfs involved in the mutual information all seem to be parameterized by \theta. Are they really sharing the same parameters? Is f() the neural network to learn? If so, what is the architecture?
ICLR
Title Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation Abstract Diffusion probabilistic models (DPMs) have become a popular approach to conditional generation, due to their promising results and support for cross-modal synthesis. A key desideratum in conditional synthesis is to achieve high correspondence between the conditioning input and generated output. Most existing methods learn such relationships implicitly, by incorporating the prior into the variational lower bound. In this work, we take a different route—we explicitly enhance input-output connections by maximizing their mutual information. To this end, we introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss and design two contrastive diffusion mechanisms to effectively incorporate it into the denoising process, combining the diffusion training and contrastive learning for the first time by connecting it with the conventional variational objectives. We demonstrate the efficacy of our approach in evaluations with diverse multimodal conditional synthesis tasks: dance-to-music generation, text-to-image synthesis, as well as class-conditioned image synthesis. On each, we enhance the inputoutput correspondence and achieve higher or competitive general synthesis quality. Furthermore, the proposed approach improves the convergence of diffusion models, reducing the number of required diffusion steps by more than 35% on two benchmarks, significantly increasing the inference speed. 1 INTRODUCTION Generative tasks that seek to synthesize data in different modalities, such as audio and images, have attracted much attention. The recently explored diffusion probabilistic models (DPMs) Sohl-Dickstein et al. (2015b) have served as a powerful generative backbone that achieves promising results in both unconditional and conditional generation Kong et al. (2020); Mittal et al. (2021); Lee & Han (2021); Ho et al. (2020); Nichol & Dhariwal (2021); Dhariwal & Nichol (2021); Ho et al. (2022); Hu et al. (2021). Compared to the unconditional case, conditional generation is usually applied in more concrete and practical cross-modality scenarios, e.g., video-based music generation Di et al. (2021); Zhu et al. (2022a); Gan et al. (2020a) and text-based image generation Gu et al. (2022); Ramesh et al. (2021); Li et al. (2019); Ruan et al. (2021). Most existing DPM-based conditional synthesis works Gu et al. (2022); Dhariwal & Nichol (2021) learn the connection between the conditioning and the generated data implicitly by adding a prior to the variational lower bound Sohl-Dickstein et al. (2015b). While such approaches still feature high generation fidelity, the correspondence between the conditioning and the synthesized data can sometimes get lost, as illustrated in the right column in Fig. 1. To this end, we aim to explicitly enhance the input-output faithfulness via their maximized mutual information under the diffusion generative framework for conditional settings in this paper. Examples of our synthesized music audio and image results are given in Fig. 1. Contrastive methods Oord et al. (2018); Bachman et al. (2019); Song & Ermon (2020a) have been proven to be very powerful for data representation learning. Their high-level idea aims to learn the representation z of raw data x based on the assumption that a properly encoded z benefits the ability of a generative model p to reconstruct the raw data given z as prior. This idea can be achieved via optimization of the density ratio p(x|z)p(x) Oord et al. 
(2018) as an entirety, without explicitly modeling the actual generative model p. While the direct optimization of mutual information via generative models p is a challenging problem to implement and train Song & Ermon (2020b); Belghazi et al. (2018) in the conventional contrastive representation learning field, we show that this can be done effectively within our proposed contrastive diffusion framework. Specifically, we reformulate the optimization problem for the desired conditional generative tasks via DPMs by analogy of the above embedding z and raw data x with our conditioning input and synthesized output. We introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss, and design two contrastive diffusion mechanisms to effectively incorporate the CDCD loss into the denoising process: step-wise parallel diffusion, which invokes multiple parallel diffusion processes during contrastive learning, and sample-wise auxiliary diffusion, which maintains one principal diffusion process. We demonstrate that with the proposed contrastive diffusion method, we can not only train effectively so as to maximize the desired mutual information by connecting the CDCD loss with the conventional variational objective function, but also directly optimize the generative network p. The optimized CDCD loss further encourages faster convergence of a DPM model with fewer diffusion steps. We additionally present our intra- and inter-negative sampling methods, which provide internally disordered and instance-level negative samples, respectively. To better illustrate the input-output connections, we conduct our main experiments on the novel cross-modal dance-to-music generation task Zhu et al. (2022a), which aims to generate music audio based on silent dance videos. Compared to other tasks such as text-to-image synthesis, dance-to-music generation explicitly evaluates the input-output correspondence in terms of various cross-modal alignment features such as dance-music beats, genre, and general quality. However, various other generative settings, frameworks, and applications can also benefit from our contrastive diffusion approach, e.g., joint or separate training of conditioning encoders, continuous or discrete conditioning inputs, and diverse input-output modalities, as detailed in Sec. 4. Overall, we achieve results superior or comparable to the state of the art on three conditional synthesis tasks: dance-to-music (datasets: AIST++ Tsuchida et al. (2019); Li et al. (2021), TikTok Dance-Music Zhu et al. (2022a)), text-to-image (datasets: CUB200 Wah et al. (2011), MSCOCO Lin et al. (2014)), and class-conditioned image synthesis (dataset: ImageNet Russakovsky et al. (2015)). Our experimental findings suggest three key takeaways: (1) Improving the input-output connections via maximized mutual information is indeed beneficial for their correspondence and the general fidelity of the results (see Fig. 1 and supplement). (2) Both our proposed step-wise parallel diffusion with intra-negative samples and sample-wise auxiliary diffusion with inter-negative samples show state-of-the-art scores in our evaluations. The former is more beneficial for capturing intra-sample correlations, e.g., musical rhythms, while the latter improves instance-level performance, e.g., music genre and image class.
(3) With maximized mutual information, our conditional contrastive diffusion converges in substantially fewer diffusion steps compared to vanilla DPMs, while maintaining the same or even superior performance (approximately 35% fewer steps for dance-to-music generation and 40% fewer for text-to-image synthesis), thus significantly increasing inference speed. 2 BACKGROUND Diffusion Probabilistic Models. DPMs Sohl-Dickstein et al. (2015b) are a class of generative models that learn to convert a simple Gaussian distribution into a data distribution. This process consists of a forward diffusion process and a reverse denoising process, each consisting of a sequence of T steps that act as a Markov chain. During forward diffusion, an input data sample x0 is gradually “corrupted” at each step t by adding Gaussian noise to the output of step t−1. The reverse denoising process seeks to convert the noisy latent variable xT into the original data sample x0 by removing the noise added during diffusion. The stationary distribution for the final latent variable xT is typically assumed to be a normal distribution, p(xT ) = N(xT |0, I). An extension of this approach replaces the continuous state with a discrete one Sohl-Dickstein et al. (2015a); Hoogeboom et al. (2021); Austin et al. (2021), in which the latent variables x1:T typically take the form of one-hot vectors with K categories. The diffusion process can then be parameterized using a multinomial categorical transition matrix defined as q(xt|xt−1) = Cat(xt; p = xt−1Qt), where [Qt]ij = q(xt = j|xt−1 = i). The reverse process pθ(xt−1|xt) can also be factorized as conditionally independent over the discrete sequences Austin et al. (2021). In both the continuous and discrete state formulations of DPMs Song & Ermon (2020c); Song et al. (2020b); Kingma et al. (2021); Song et al. (2021); Huang et al. (2021); Vahdat et al. (2021), the denoising process pθ can be optimized via the KL divergence between q and pθ in closed form Song et al. (2020a); Nichol & Dhariwal (2021); Ho et al. (2020); Hoogeboom et al. (2021); Austin et al. (2021) through the variational bound on the negative log-likelihood:

\mathcal{L}_{vb} = \mathbb{E}_q\Big[\underbrace{D_{KL}(q(x_T|x_0)\,\|\,p(x_T))}_{L_T} + \sum_{t>1}\underbrace{D_{KL}(q(x_{t-1}|x_t,x_0)\,\|\,p_\theta(x_{t-1}|x_t))}_{L_{t-1}} \underbrace{-\log p_\theta(x_0|x_1)}_{L_0}\Big]. (1)

Existing conditional generation works via DPMs Gu et al. (2022); Dhariwal & Nichol (2021) usually learn the implicit relationship between the conditioning c and the synthesized data x0 by directly adding c as the prior in (1). DPMs with a discrete state space provide more control over the data corruption and denoising than their continuous counterparts Austin et al. (2021); Gu et al. (2022) through the flexible design of the transition matrix, which benefits practical downstream operations such as editing and interactive synthesis Tseng et al. (2020); Cui et al. (2021); Xu et al. (2021). We hence employ contrastive diffusion using a discrete state space in this work. Contrastive Representation Learning. Contrastive learning uses loss functions designed to make neural networks learn to represent the specific similarities and differences between elements in the training data, using positive and negative pairs of data points rather than labels that explicitly define such features. This approach has been successfully applied in learning representations of high-dimensional data Oord et al. (2018); Bachman et al. (2019); He et al. (2020); Song & Ermon (2020a); Chen et al. (2020); Lin et al. (2021).
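As a concrete illustration of the multinomial forward process just described, the sketch below samples q(xt|xt−1) for a toy one-hot token sequence. The uniform transition matrix is one common choice from Austin et al. (2021); the actual matrices used in this work (e.g., incorporating mask tokens) may differ.

```python
import numpy as np

def uniform_transition_matrix(K: int, beta: float) -> np.ndarray:
    # One common choice of Q_t (Austin et al., 2021): keep a token with
    # probability (1 - beta), otherwise resample it uniformly over K categories.
    return (1.0 - beta) * np.eye(K) + beta * np.ones((K, K)) / K

def forward_step(x_prev: np.ndarray, Q_t: np.ndarray, rng) -> np.ndarray:
    # Sample x_t ~ Cat(x_t; p = x_{t-1} Q_t) for a sequence of one-hot tokens.
    probs = x_prev @ Q_t                         # row i is q(x_t | x_{t-1} = token i)
    idx = np.array([rng.choice(Q_t.shape[0], p=p) for p in probs])
    return np.eye(Q_t.shape[0])[idx]             # back to one-hot vectors

rng = np.random.default_rng(0)
K, seq_len = 8, 5
x0 = np.eye(K)[rng.integers(K, size=seq_len)]    # a toy one-hot token sequence
xt = forward_step(x0, uniform_transition_matrix(K, beta=0.2), rng)
```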
Many such works seek to maximize the mutual information between the original data x and its learned representation z under the framework of likelihood-free inference Oord et al. (2018); Song & Ermon (2020a); Wu et al. (2021). The above problem can be formulated as maximizing a density ratio p(x|z)/p(x) that preserves the mutual information between the raw data x and the learned representation z. To achieve this, existing contrastive methods Oord et al. (2018); Durkan et al. (2020); He et al. (2020); Zhang et al. (2021) typically adopt a neural network to directly model the ratio as an entirety and avoid explicitly considering the actual generative model p(x|z), which has proven to be a more challenging problem Song & Ermon (2020b); Belghazi et al. (2018). In contrast, we show that by formulating the conventional contrastive representation learning problem under the generative setting, the properties of DPMs enable us to directly optimize the model p in this work, which can be interpreted as the optimal version of the density ratio Oord et al. (2018). Vector-Quantized Representations for Conditional Generation. Vector quantization is a classical technique in which a high-dimensional space is represented using a discrete set of vectors. More recently, Vector-Quantized (VQ) deep learning models employ this technique to allow for compact and discrete representations of music and image data Oord et al. (2017); Razavi et al. (2019); Esser et al. (2021b); Dhariwal et al. (2020); Chen et al. (2022). Typically, VQ-based models use an encoder-codebook-decoder framework, where the “codebook” contains a fixed number of vectors (entries) to represent the original high-dimensional raw data. The encoder transforms the input x into feature embeddings that are each mapped to the closest corresponding vector in the codebook, while the decoder uses the set of quantized vectors z to reconstruct the input data, producing x′ as illustrated in the upper part of Fig. 2. In this work, we perform the conditional diffusion process on the VQ space (i.e., discrete token sequences) as shown in the bottom part of Fig. 2, which largely reduces the dimensionality of the raw data, thus avoiding expensive raw data decoding and synthesis. As our approach is flexible enough to be employed with various input and output modalities, the exact underlying VQ model we use depends on the target data domain. For music synthesis, we employ a fine-tuned Jukebox Dhariwal et al. (2020) model, while for image generation, we employ VQ-GAN Esser et al. (2021b). See Sec. 4 for further details. We refer to z, the latent quantized representation of x, as z0 below to distinguish it from the latent representations at prior stages in the denoising process. 3 METHOD Here we outline our approach to cross-modal and conditional generation using our proposed discrete contrastive diffusion approach, which is depicted in Fig. 2. In Sec. 3.1, we formulate our Conditional Discrete Contrastive Diffusion loss in detail, and demonstrate how it helps to maximize the mutual information between the conditioning and generated discrete data representations. Sec. 3.2 defines two specific mechanisms for applying this loss within a diffusion model training framework, sample-wise and step-wise. In Sec. 3.3, we detail techniques for constructing negative samples designed to improve the overall quality and coherence of the generated sequences.
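To make the encoder-codebook-decoder pipeline concrete, the following minimal sketch shows the nearest-neighbor lookup that turns encoder features into the discrete tokens that the diffusion process operates on. The sizes mirror the music setting (K = 2048, dz = 128); real VQ models such as Jukebox and VQ-GAN additionally learn the codebook jointly with commitment losses, which this sketch omits.

```python
import numpy as np

def vector_quantize(features: np.ndarray, codebook: np.ndarray):
    # Map each encoder feature to its nearest codebook entry (L2 distance),
    # returning both the discrete token indices and the quantized vectors z.
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    tokens = d.argmin(axis=1)        # the discrete sequence the diffusion model sees
    return tokens, codebook[tokens]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(2048, 128))  # K = 2048 entries, d_z = 128 (music setting)
features = rng.normal(size=(32, 128))    # hypothetical encoder output for 32 positions
tokens, z0 = vector_quantize(features, codebook)
```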
Given the data pair (c, x), where c is the conditioning information from a given input modality (e.g., videos, text, or a class label), our objective is to generate a data sample x in the target modality (e.g., music audio or images) corresponding to c. In the training stage, we first employ and train a VQ-based model to obtain the discrete representation z0 of the data x from the target modality. Next, our diffusion process operates on the encoded latent representation z0 of x. The denoising process recovers the latent representation z0 given the conditioning c, which can be decoded to obtain the reconstruction x′. In inference, we generate z0 based on the conditioning c, and decode the latent VQ representation z0 back to the raw data domain using the decoder of the pre-trained and fixed VQ model. 3.1 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION LOSS We seek to enhance the connection between c and the generated data z0 by maximizing their mutual information, defined as

I(z_0; c) = \sum_{z_0} p_\theta(z_0, c) \log \frac{p_\theta(z_0|c)}{p_\theta(z_0)}.

We introduce a set of negative VQ sequences Z′ = {z1, z2, ..., zN}, encoded from N negative samples X′ = {x1, x2, ..., xN}, and define f(z_0, c) = p_\theta(z_0|c)/p_\theta(z_0). Our proposed Conditional Discrete Contrastive Diffusion (CDCD) loss is:

\mathcal{L}_{CDCD} := -\mathbb{E}\Big[\log \frac{f(z_0, c)}{f(z_0, c) + \sum_{z^j \in Z'} f(z^j_0, c)}\Big]. (2)

The proposed CDCD loss is similar to the categorical cross-entropy loss for classifying the positive sample as in Oord et al. (2018), where our conditioning c and generated data z0 correspond to the learned representation and raw data in that work, and optimization of this loss leads to maximization of I(z0; c). However, the loss in Oord et al. (2018) models the density ratio f(z0, c) as an entirety. In our case, we demonstrate that the properties of DPMs Sohl-Dickstein et al. (2015b); Ho et al. (2020); Austin et al. (2021) enable us to directly optimize the actual distribution pθ within the diffusion process for the desired conditional generation tasks. Specifically, we show the connections between the proposed CDCD loss and the conventional variational loss Lvb (see (1)) in Sec. 3.2, and thus how it contributes to efficient DPM learning. Additionally, we can derive the lower bound for the mutual information as I(z0; c) ≥ log(N) − L_CDCD (see supplement for details), which indicates that a larger number of negative samples increases the lower bound. These two factors allow for faster convergence of a DPM with fewer diffusion steps. 3.2 PARALLEL AND AUXILIARY DIFFUSION PROCESS The CDCD loss in (2) considers the mutual information between c and z0 in a general way, without specifying the intermediate diffusion steps. We propose and analyze two contrastive diffusion mechanisms to efficiently incorporate this loss into DPM learning, and demonstrate that we can directly optimize the generative model pθ in the diffusion process. We present the step-wise parallel diffusion and sample-wise auxiliary diffusion mechanisms, which are distinguished by the specific operations applied to the intermediate negative latent variables z^j_{1:T} of each negative sample x^j. The high-level intuition behind the parallel and auxiliary designs is to emphasize different attributes of the synthesized data for specific applications. In particular, we propose the parallel variant to learn the internal coherence of sequential audio data by emphasizing the gradual change at each time step, while the auxiliary mechanism focuses more on the sample-level connections to the conditioning.
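Before specializing the loss to these two mechanisms, note that the generic form of (2) is a standard (N+1)-way classification of the positive pair, as in InfoNCE. The sketch below assumes the log density ratios log f have already been computed; in the paper they are realized through the diffusion model's variational bounds (Sec. 3.2), so this only illustrates the contrastive form itself.

```python
import torch
import torch.nn.functional as F

def cdcd_loss(log_f_pos: torch.Tensor, log_f_neg: torch.Tensor) -> torch.Tensor:
    # Eq. (2) in generic form: classify the positive pair among 1 + N candidates.
    # log_f_pos: (B,)   log density ratio log f(z_0, c) for the positive sample
    # log_f_neg: (B, N) log ratios log f(z_0^j, c) for the N negative samples
    logits = torch.cat([log_f_pos.unsqueeze(1), log_f_neg], dim=1)  # (B, 1 + N)
    labels = torch.zeros(logits.size(0), dtype=torch.long)          # positive at index 0
    return F.cross_entropy(logits, labels)

# toy usage: 4 conditioning-sample pairs, N = 10 negatives each as in Sec. 4
loss = cdcd_loss(torch.randn(4), torch.randn(4, 10))
```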
Step-Wise Parallel Diffusion. This mechanism not only focuses on the mutual information between c and z0, but also takes the intermediate negative latent variables z^j_{1:T} into account by explicitly invoking the complete diffusion process for each negative sample z^j ∈ Z′. As illustrated in Fig. 2 (bottom left), we initiate N + 1 parallel diffusion processes, among which N are invoked by negative samples. For each negative sample x^j ∈ X′, we explicitly compute its negative latent discrete variables z^j_{0:T}. In this case, (2) becomes (see supplement for the detailed derivation):

\mathcal{L}_{CDCD-Step} := \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_{0:T})}{p_\theta(z_{0:T}|c)} N\,\mathbb{E}_{Z'}\Big[\frac{p_\theta(z^j_{0:T}|c)}{p_\theta(z^j_{0:T})}\Big]\Big] \equiv \mathcal{L}_{vb}(z, c) - C \sum_{z^j \in Z'} \mathcal{L}_{vb}(z^j, c). (3)

The equation above factorizes the proposed CDCD loss under the step-wise parallel diffusion mechanism into two terms, where the first term corresponds to the original variational bound Lvb, and the second term can be interpreted as the negative sum of the variational bounds induced by the negative samples and the provided conditioning c. C is a constant, as detailed in our supplement. Sample-Wise Auxiliary Diffusion. Alternatively, our sample-wise auxiliary diffusion mechanism maintains one principal diffusion process, as in traditional diffusion training, shown in Fig. 2 (bottom right). It contrasts the intermediate positive latent variables z_{1:T} with the negative samples z^j_0 ∈ Z′. In this case, we can write the CDCD loss from (2) as (see supplement for details):

\mathcal{L}_{CDCD-Sample} := \mathbb{E}_q[-\log p_\theta(z_0|z_t, c)] - C \sum_{z^j \in Z'} \mathbb{E}_q[-\log p_\theta(z^j_0|z_t, c)]. (4)

As with the step-wise loss, the CDCD-Sample loss includes two terms. The first refers to sampling directly from the positive z0 at an arbitrary timestep t. The second sums the same auxiliary loss over the negative samples z^j_0. This marginalization operation is based on the Markov chain property, as in previous discrete DPMs Austin et al. (2021); Gu et al. (2022), and imposes direct supervision from the sample data. The first term is similar to the auxiliary denoising objective in Austin et al. (2021); Gu et al. (2022). Both contrastive diffusion mechanisms enable us to effectively incorporate the CDCD loss into our DPM learning process by directly optimizing the actual denoising generative network pθ. Final Loss Function. The final loss function for our contrastive diffusion training process is:

\mathcal{L} = \mathcal{L}_{vb}(z, c) + \lambda \mathcal{L}_{CDCD}, (5)

where Lvb depends on the conditioning c and takes the form L_{t−1} = D_{KL}(q(z_{t−1}|z_t, z_0) || p_θ(z_{t−1}|z_t, c)) as in Gu et al. (2022), with c included as the prior for all intermediate steps. LCDCD refers to either the step-wise parallel diffusion or the sample-wise auxiliary diffusion loss. Empirically, we can omit the first term in (3), or directly optimize LCDCD−Step, in which the standard Lvb is already included. The detailed training algorithm is explained in the supplement. 3.3 INTRA- AND INTER-NEGATIVE SAMPLING Previous contrastive works construct negative samples using techniques such as image augmentation Chen et al. (2020); He et al. (2020) or spatially adjacent image patches Oord et al. (2018). In this work, we categorize our sampling methods into intra- and inter-negative sampling, as in Fig. 3. For intra-sample negative sampling, we construct X′ based on the given original x. This bears resemblance to the patch-based technique in the image domain Oord et al. (2018). For audio data, we first divide the original audio waveform into multiple chunks, and then randomly shuffle their ordering (see the sketch below).
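A minimal sketch of this chunk-shuffling construction is given below; the number of chunks is an illustrative choice, not a value stated in the paper.

```python
import numpy as np

def intra_negative(waveform: np.ndarray, n_chunks: int, rng) -> np.ndarray:
    # Build an intra-negative sample: split the waveform into chunks and
    # shuffle their order, destroying temporal coherence but keeping content.
    chunks = np.array_split(waveform, n_chunks)
    order = rng.permutation(n_chunks)
    return np.concatenate([chunks[i] for i in order])

rng = np.random.default_rng(0)
audio = rng.normal(size=44100)       # a 2-second waveform, as in the experiments
negative = intra_negative(audio, n_chunks=8, rng=rng)  # n_chunks chosen for illustration
```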
For inter-sample negative sampling, X′ consists of instance-level negative samples x′ that differ from the given data pair (c, x). In practice, we define negative samples x′ to be music sequences with different musical genres from x in the music generation task, while x′ denotes images other than x in the image synthesis task (in practice, we choose x′ with different class labels than x). Based on our proposed contrastive diffusion modes and negative sampling methods, there are four possible contrastive settings: step-wise parallel diffusion with either intra- or inter-negative sampling (denoted as Step-Intra and Step-Inter), or sample-wise auxiliary diffusion with either intra- or inter-negative sampling (denoted as Sample-Intra and Sample-Inter). Intuitively, we argue that the Step-Intra and Sample-Inter settings are more reasonable than Step-Inter and Sample-Intra because of the consistency between the diffusion data corruption process and the way the negative samples are constructed. Specifically, the data corruption process in discrete DPMs involves sampling and replacing certain tokens with random or mask tokens at each diffusion step Austin et al. (2021); Gu et al. (2022), which is a chunk-level operation within a given data sequence, similar to the way we construct intra-negative samples by shuffling the chunk-level order. In contrast, the sample-wise auxiliary diffusion seeks to provide sample-level supervision, which is consistent with our inter-negative sampling method. In the interest of clarity and concision, we only present the experimental results for the Step-Intra and Sample-Inter settings in Sec. 4 of our main paper. The complete results obtained with the other contrastive settings and more detailed analysis are included in the supplement. 4 EXPERIMENTS We conduct experiments on three conditional generation tasks: dance-to-music generation, text-to-image synthesis, and class-conditioned image synthesis. For the dance-to-music task, we seek to generate audio waveforms for complex music from human motion and dance video frames. For the text-to-image task, the objective is to generate images from given textual descriptions. Given our emphasis on input-output faithfulness for cross-modal generation, the main analysis is based on the dance-to-music generation task, since the evaluation protocol from Zhu et al. (2022a) explicitly measures such connections in terms of beats, genre, and general correspondence for the generated music. 4.1 DANCE-TO-MUSIC GENERATION Dataset. We use the AIST++ Li et al. (2021) dataset and the TikTok Dance-Music dataset Zhu et al. (2022a) for the dance-to-music experiments. AIST++ is a subset of the AIST dataset Tsuchida et al. (2019), which contains 1020 dance videos and 60 songs performed by professional dancers and filmed in clean studio environments without occlusions. AIST++ provides human motion data in the form of SMPL Loper et al. (2015) parameters and body keypoints, and includes annotations for different genres and choreography styles. The TikTok Dance-Music dataset includes 445 dance videos collected from the social media platform. The 2D skeleton data extracted with OpenPose Cao et al. (2017); Cao et al. (2019) is used as the motion representation. We adopt the official cross-modality splits without overlapping music songs for both datasets. Implementations. The sampling rate for all audio signals is 22.5 kHz in our experiments. We use 2-second music samples as in Zhu et al. (2022a) for the main experiments.
We fine-tuned the pre-trained Jukebox Dhariwal et al. (2020) for our Music VQ-VAE model. For the motion encoder, we deploy a backbone stacked with convolutional layers and residual blocks. For the visual encoder, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinetics Kay et al. (2017) as the visual conditioning. The motion and visual encoder outputs are concatenated to form the final continuous conditioning input to our contrastive diffusion model. For the contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network pθ. It includes 19 transformer blocks, with each block consisting of full attention, cross attention, and feed-forward modules, and a channel size of 1024 per block. We set the initial weight for the contrastive loss to λ = 5e-5. The number N of intra- and inter-negative samples for each GT music sample is 10. The visual encoder, motion encoder, and contrastive diffusion model are jointly optimized. More implementation details are provided in the supplement. Evaluations. The evaluation of synthesized music measures both the conditioning-output correspondence and the general synthesis quality using the metrics introduced in Zhu et al. (2022a). Specifically, the metrics include the beats coverage score, the beats hit score, the genre accuracy score, and two subjective evaluation tests with Mean Opinion Scores (MOS) for musical coherence and general quality. Among these metrics, the beats scores emphasize intra-sample properties, since they calculate the second-level audio onset strength within musical chunks Ellis (2007), while the genre accuracy focuses on instance-level musical attributes of music styles. Detailed explanations of the above metrics can be found in Zhu et al. (2022a). We compare against multiple dance-to-music generation works: Foley Gan et al. (2020a), Dance2Music Aggarwal & Parikh (2021), CMT Di et al. (2021), and D2M-GAN Zhu et al. (2022a). The first three models rely on symbolic discrete MIDI musical representations, while the last one also uses a VQ musical representation. The major difference between symbolic MIDI and discrete VQ musical representations lies in the fact that MIDI is pre-defined for each instrument, while VQ is learning-based. The latter thus enables complex and free music synthesis appropriate for scenarios like dance videos.

Table 1: Quantitative evaluation results for the dance-to-music task on the AIST++ dataset. This table shows the best performance scores we obtain for different contrastive diffusion steps. We report the mean and standard deviations of our contrastive diffusion for three inference tests.

| Method            | Rhythms Coverage ↑ | Rhythms Hit ↑ | Genre Accuracy ↑ | Coherence MOS ↑ | Quality MOS ↑ |
|-------------------|--------------------|---------------|------------------|-----------------|---------------|
| GT Music          | 100                | 100           | 88.5             | 4.7             | 4.8           |
| Foley             | 74.1               | 69.4          | 8.1              | 2.9             | -             |
| Dance2Music       | 83.5               | 82.4          | 7.0              | 3.0             | -             |
| CMT               | 85.5               | 83.5          | 11.6             | 3.0             | -             |
| D2M-GAN           | 88.2               | 84.7          | 24.4             | 3.3             | 3.4           |
| Ours Vanilla      | 89.0±1.1           | 83.8±1.5      | 25.3±0.8         | 3.3             | 3.6           |
| Ours Step-Intra   | 93.9±1.2           | 90.7±1.5      | 25.8±0.6         | 3.6             | 3.5           |
| Ours Sample-Inter | 91.8±1.6           | 86.9±1.4      | 27.2±0.5         | 3.6             | 3.6           |

Table 2: Quantitative evaluation results for the dance-to-music task on the TikTok dataset. We set the default number of diffusion steps to 80.

| Methods           | Beats Coverage / Hit ↑ |
|-------------------|------------------------|
| D2M-GAN           | 88.4 / 82.3            |
| Ours Vanilla      | 88.7 / 81.4            |
| Ours Step-Intra   | 91.8 / 86.3            |
| Ours Sample-Inter | 90.1 / 85.5            |
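For intuition, the beat-based metrics can be read roughly as follows. The sketch below is a simplified interpretation that assumes beat timestamps have already been extracted from onset-strength curves; the exact definitions and matching tolerance follow Zhu et al. (2022a), and the tolerance used here is an assumption.

```python
import numpy as np

def beat_scores(gt_beats: np.ndarray, gen_beats: np.ndarray, tol: float = 0.1):
    # Simplified reading of the beat metrics: coverage compares how many beats
    # the generated music produces relative to ground truth; the hit rate counts
    # generated beats landing within `tol` seconds of some ground-truth beat.
    coverage = len(gen_beats) / max(len(gt_beats), 1)
    hits = sum(float(np.min(np.abs(gt_beats - b)) <= tol) for b in gen_beats)
    return coverage, hits / max(len(gt_beats), 1)

gt = np.array([0.5, 1.0, 1.5, 2.0])   # hypothetical beat times in seconds
gen = np.array([0.52, 1.1, 1.48])
coverage, hit_rate = beat_scores(gt, gen)
```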
Results and Discussion. The quantitative experimental results are shown in Tab. 1 and Tab. 2. Our proposed methods achieve better performance than the competing methods, even with the vanilla version without contrastive mechanisms. Furthermore, we find that the Step-Intra setting is more helpful in increasing the beats scores, while the Sample-Inter setting yields more improvement in the genre accuracy scores. We believe this is due to the evaluation methods of the different metrics. The beats scores measure the chunk-level (i.e., the audio onset strength Ellis (2007)) consistency between the GT and synthesized music samples Zhu et al. (2022a), while the genre scores consider the overall musical attributes of each sample sequence at the instance level. This finding is consistent with our assumptions in Sec. 3.3. Convergence Analysis. We also analyze the impact of the proposed contrastive diffusion on model convergence in terms of diffusion steps. The number of diffusion steps is a significant hyper-parameter for DPMs Sohl-Dickstein et al. (2015b); Nichol & Dhariwal (2021); Austin et al. (2021); Gu et al. (2022); Kingma et al. (2021) that directly influences the inference time and synthesis quality. Previous works have shown that a larger number of diffusion steps usually leads to better model performance, but longer inference times Kingma et al. (2021); Gu et al. (2022). We demonstrate that, with the improved mutual information via the proposed contrastive diffusion method, we can greatly reduce the number of steps needed. As shown in Fig. 4 (left), we observe that the beats scores reach a stable level at approximately 80 steps, ∼35% fewer than the vanilla DPM, which converges in ∼120 steps. More ablation studies and analysis on this task can be found in the supplement. 4.2 CONDITIONAL IMAGE SYNTHESIS Dataset. We conduct text-to-image synthesis on the CUB200 Wah et al. (2011) and MSCOCO Lin et al. (2014) datasets. The CUB200 dataset contains images of 200 bird species. Each image has 10 corresponding text descriptions. The MSCOCO dataset contains 82k images for training and 40k images for testing. Each image has 5 text descriptions. We also perform class-conditioned image generation on ImageNet Deng et al. (2009); Russakovsky et al. (2015). Implementation details for both tasks are provided in the supplement. Evaluations. We adopt two evaluation metrics for text-to-image synthesis: the classic FID score Heusel et al. (2017) as a general measurement of image quality, and the CLIPScore Hessel et al. (2021) to evaluate the correspondence between the given textual caption and the synthesized image. For class-conditioned image synthesis, we use the FID score and a classifier-based accuracy for general and input-output correspondence measurement, respectively. We compare against text-to-image generation methods including StackGAN Zhang et al. (2017), StackGAN++ Zhang et al. (2018), SEGAN Tan et al. (2019), AttnGAN Xu et al. (2018), DM-GAN Zhu et al. (2019), DF-GAN Tao et al. (2020), DAE-GAN Ruan et al. (2021), DALLE Ramesh et al. (2021), and VQ-Diffusion Gu et al. (2022). For experiments on ImageNet, we list result comparisons with ImageBART Esser et al. (2021a), VQ-GAN Esser et al. (2021b), IDDPM Nichol & Dhariwal (2021), and VQ-Diffusion Gu et al. (2022). Notably, VQ-Diffusion Gu et al. (2022) also adopts the discrete diffusion generative backbone, and can be considered the vanilla version without contrastive mechanisms.
Additionally, we provide more comparisons with other methods in terms of dataset, model scale, and training time in the supplement for a more comprehensive and fair understanding of our proposed method. Results and Discussion. The quantitative results are presented in Tab. 3 and Tab. 4. We observe that our contrastive diffusion achieves state-of-the-art performance for both general synthesis fidelity and input-output correspondence, and that the Sample-Inter contrastive setting is more beneficial than Step-Intra for image synthesis. This empirical finding again validates our assumption regarding the contrastive settings in Sec. 3.3, where the Sample-Inter setting helps more with instance-level synthesis quality. Notably, as shown in Fig. 4 (right), our contrastive diffusion method converges at about 60 diffusion steps, while the vanilla version converges at approximately 100 steps on CUB200 Wah et al. (2011), which increases the inference speed by roughly 40%. 5 CONCLUSION While DPMs have demonstrated remarkable potential, improving their training and inference efficiency while maintaining flexible and accurate results for conditional generation is an ongoing challenge, particularly for cross-modal tasks. Our Conditional Discrete Contrastive Diffusion (CDCD) loss addresses this by maximizing the mutual information between the conditioning input and the generated output. Our contrastive diffusion mechanisms and negative sampling methods effectively incorporate this loss into DPM training. Extensive experiments on various cross-modal conditional generation tasks demonstrate the efficacy of our approach in bridging drastically differing domains. ACKNOWLEDGMENT This research is partially supported by NSF SCH-2123521 and Snap unrestricted gift funding. This article solely reflects the opinions and conclusions of its authors and not the funding agents. ETHICS STATEMENT As in other media generation works, there are possible malicious uses of such media to be addressed by oversight organizations and regulatory agencies. Our primary objective as researchers is always to create more reliable and secure AI and machine learning systems that maximally benefit our society. A MORE RELATED WORK In addition to the fields of Diffusion Probabilistic Models, Contrastive Representation Learning, and VQ Representations for Conditional Generation discussed in the main paper, our work is also closely related to the multi-modal learning and generation fields. The research topic of multimodal learning, which incorporates data from various modalities such as audio, vision, and language, has attracted much attention in recent years Baltrušaitis et al. (2018); Zhu et al. (2022b); Wu et al. (2023). General audio-visual learning works typically seek to investigate correlations arising from the intrinsic synchronization of the two modalities Aytar et al. (2016); Korbar et al. (2018); Owens & Efros (2018); Owens et al. (2016); Arandjelovic & Zisserman (2017), and then utilize them in various downstream audio-visual tasks such as audio-visual action recognition Kazakos et al. (2019); Gao et al. (2020), audio-visual event localization and parsing Tian et al. (2018); Zhu et al. (2021a); Wu et al. (2019); Wu & Yang (2021), and audio-visual captioning Rahman et al. (2019); Wang et al. (2018). Works that generate music from visual and/or motion data have also been widely explored in recent years Gan et al. (2020a); Di et al. (2021); Aggarwal & Parikh (2021); Zhu et al. (2022a).
In the vision-and-language area, text generation from visual inputs has been extensively explored in the image and video captioning tasks Zhu et al. (2020; 2021b); Anderson et al. (2018); You et al. (2016); Wang et al. (2017). At the same time, works on image/video generation from text have also attracted much attention with recently released large-scale models Radford et al. (2021); Li et al. (2019); Ruan et al. (2021); Ramesh et al. (2021). B DETAILED PROOF AND TRAINING B.1 LOWER BOUND OF CDCD LOSS We show that the proposed CDCD loss has a lower bound related to the mutual information and the number of negative samples N. The derivations below are similar to those from Oord et al. (2018):

\mathcal{L}_{CDCD} := \mathbb{E}_Z\Bigg[-\log \frac{\frac{p_\theta(z_0|c)}{p_\theta(z_0)}}{\frac{p_\theta(z_0|c)}{p_\theta(z_0)} + \sum_{z^j \in Z'} \frac{p_\theta(z^j_0|c)}{p_\theta(z^j_0)}}\Bigg] (6a)
= \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_0)}{p_\theta(z_0|c)} \sum_{z^j \in Z'} \frac{p_\theta(z^j_0|c)}{p_\theta(z^j_0)}\Big] (6b)
\approx \mathbb{E}_Z \log\Big[1 + N \frac{p_\theta(z_0)}{p_\theta(z_0|c)} \mathbb{E}_{Z'}\Big[\frac{p_\theta(z^j_0|c)}{p_\theta(z^j_0)}\Big]\Big] (6c)
= \mathbb{E}_Z \log\Big[1 + N \frac{p_\theta(z_0)}{p_\theta(z_0|c)}\Big] (6d)
\geq \mathbb{E}_Z \log\Big[N \frac{p_\theta(z_0)}{p_\theta(z_0|c)}\Big] (6e)
= \log(N) - I(z_0, c). (6f)

B.2 CONVENTIONAL VARIATIONAL LOSS The conventional variational loss Lvb is derived as follows Sohl-Dickstein et al. (2015b):

\mathcal{L}_{vb}(x) := \mathbb{E}_q\Big[-\log \frac{p_\theta(x_{0:T})}{q(x_{1:T}|x_0)}\Big]
= \mathbb{E}_q\Big[-\log p(x_T) - \sum_{t>1} \log \frac{p_\theta(x_{t-1}|x_t)}{q(x_t|x_{t-1})} - \log \frac{p_\theta(x_0|x_1)}{q(x_1|x_0)}\Big]
= \mathbb{E}_q\Big[-\log p(x_T) - \sum_{t>1} \log \frac{p_\theta(x_{t-1}|x_t)}{q(x_{t-1}|x_t, x_0)} \cdot \frac{q(x_{t-1}|x_0)}{q(x_t|x_0)} - \log \frac{p_\theta(x_0|x_1)}{q(x_1|x_0)}\Big]
= \mathbb{E}_q\Big[-\log \frac{p(x_T)}{q(x_T|x_0)} - \sum_{t>1} \log \frac{p_\theta(x_{t-1}|x_t)}{q(x_{t-1}|x_t, x_0)} - \log p_\theta(x_0|x_1)\Big]
= \mathbb{E}_q\Big[D_{KL}(q(x_T|x_0)\,\|\,p(x_T)) + \sum_{t>1} D_{KL}(q(x_{t-1}|x_t, x_0)\,\|\,p_\theta(x_{t-1}|x_t)) - \log p_\theta(x_0|x_1)\Big]. (7)

B.3 Lvb WITH CONDITIONING PRIOR Following the unconditional conventional variational loss, we then show its conditional variant with the conditioning c as prior, which has also been adopted in Gu et al. (2022):

\mathcal{L}_{vb}(x, c) = L_0 + L_1 + ... + L_{T-1} + L_T,
L_0 = -\log p_\theta(x_0|x_1, c),
L_{t-1} = D_{KL}(q(x_{t-1}|x_t, x_0)\,\|\,p_\theta(x_{t-1}|x_t, c)),
L_T = D_{KL}(q(x_T|x_0)\,\|\,p(x_T)). (8)

B.4 STEP-WISE AND SAMPLE-WISE CONTRASTIVE DIFFUSION Below, we show the full derivation for the step-wise parallel contrastive diffusion loss. Given that the intermediate variables z1:T are also taken into account in the step-wise contrastive diffusion, we slightly modify the initial notation f(z_0, c) = p_\theta(z_0|c)/p_\theta(z_0) from Eq. (2) in the main paper to f(z, c) = p_\theta(z_{0:T}|c)/p_\theta(z_{0:T}):

\mathcal{L}_{CDCD-Step} := -\mathbb{E}_Z\Big[\log \frac{f(z, c)}{f(z, c) + \sum_{z^j \in Z'} f(z^j, c)}\Big] (9a)
= \mathbb{E}_Z \log\Big[1 + \sum_{z^j \in Z'} \frac{f(z^j, c)}{f(z, c)}\Big] (9b)
= \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_{0:T})}{p_\theta(z_{0:T}|c)} \sum_{z^j \in Z'} \frac{p_\theta(z^j_{0:T}|c)}{p_\theta(z^j_{0:T})}\Big] (9c)
\approx \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_{0:T})}{p_\theta(z_{0:T}|c)} N\,\mathbb{E}_{Z'} \frac{p_\theta(z^j_{0:T}|c)}{p_\theta(z^j_{0:T})}\Big] (same as Eq. (6c)) (9d)
\approx \mathbb{E}_Z \mathbb{E}_q \log\Big[\frac{q(z_{1:T}|z_0)}{p_\theta(z_{0:T}|c)} N \frac{p_\theta(z_{0:T}|c)}{q(z_{1:T}|z_0)}\Big] (conditional p_\theta) (9e)
\approx \mathbb{E}_q\Big[-\log \frac{p_\theta(z_{0:T}|c)}{q(z_{1:T}|z_0)}\Big] - \log N\,\mathbb{E}_{Z'} \mathbb{E}_q\Big[-\log \frac{p_\theta(z_{0:T}|c)}{q(z_{1:T}|z_0)}\Big] (9f)
= \mathcal{L}_{vb}(z, c) - C \sum_{z^j \in Z'} \mathcal{L}_{vb}(z^j, c). (9g)

Algorithm 1 Conditional Discrete Contrastive Diffusion Training. The referenced equations can be found in the main paper.
Input: Initial network parameters θ, contrastive loss weight λ, learning rate η, number of negative samples N, total diffusion steps T, conditioning information c, contrastive mode m ∈ {Step, Sample}.
1: for each training iteration do
2:   t ∼ Uniform({1, 2, ..., T})
3:   z_t ← Sample from q(z_t|z_{t−1})
4:   L_vb ← Σ_{i=1,...,t} L_i ▷ Eq. (1)
5:   if m == Step then
6:     for j = 1, ..., N do
7:       z^j_t ← Sample from q(z^j_t|z^j_{t−1}, c) ▷ from negative variables at previous steps
8:     end for
9:     L_CDCD = −(1/N) Σ_j L^j_vb ▷ Eq. (3)
10:  else if m == Sample then
11:    for j = 1, ..., N do
12:      z_t ← Sample from q(z_t|z^j_0, c) ▷ from negative variables at step 0
13:    end for
14:    L_CDCD = −(1/N) Σ_j L^j_{z_0} ▷ Eq. (4)
15:  end if
16:  L ← L_vb + λ L_CDCD ▷ Eq. (5)
17:  θ ← θ − η∇_θ L
18: end for
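A hedged Python rendering of Algorithm 1 is given below. The `ToyDiffusion` stub and its method names (`q_sample`, `variational_bound`, `aux_nll`) are hypothetical placeholders for the real discrete diffusion model; only the control flow and the loss combination follow the algorithm.

```python
import torch

class ToyDiffusion:
    # Stand-in with the interface Algorithm 1 needs; the real model is the
    # 19-block transformer over VQ tokens described in Sec. 4.
    def q_sample(self, z0, t):                  # corrupt z0 for t steps
        return z0                               # placeholder corruption
    def variational_bound(self, z0, zt, t, c):  # Eq. (1) terms up to step t
        return ((z0 - c) ** 2).mean()           # placeholder scalar loss
    def aux_nll(self, z0, zt, t, c):            # -log p_theta(z0 | z_t, c)
        return ((z0 - c) ** 2).mean()           # placeholder scalar loss

def training_step(model, z0, c, negatives, mode="Step", lam=5e-5, T=80):
    t = int(torch.randint(1, T + 1, (1,)))            # t ~ Uniform({1, ..., T})
    zt = model.q_sample(z0, t)
    L_vb = model.variational_bound(z0, zt, t, c)      # Eq. (1)
    if mode == "Step":                                # step-wise parallel diffusion
        neg = [model.variational_bound(zj, model.q_sample(zj, t), t, c)
               for zj in negatives]                   # Eq. (3), negative branches
    else:                                             # sample-wise auxiliary diffusion
        neg = [model.aux_nll(zj, zt, t, c) for zj in negatives]  # Eq. (4)
    L_cdcd = -torch.stack(neg).mean()                 # -(1/N) * sum of negative terms
    return L_vb + lam * L_cdcd                        # Eq. (5)

model = ToyDiffusion()
z0, c = torch.randn(32), torch.randn(32)
loss = training_step(model, z0, c, negatives=[torch.randn(32) for _ in range(10)])
```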
In the above Eq. (9g), C stands for a constant equal to log N, which can be further adjusted by the weight we select for the CDCD loss as in Eq. (5). Similarly, for the sample-wise auxiliary contrastive diffusion, the loss can be derived as follows:

\mathcal{L}_{CDCD-Sample} := -\mathbb{E}_Z\Big[\log \frac{f(z_0, c)}{f(z_0, c) + \sum_{z^j \in Z'} f(z^j_0, c)}\Big] (10a)
= \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_0)}{p_\theta(z_0|c)} N\,\mathbb{E}_{Z'}\Big[\frac{p_\theta(z^j_0|c)}{p_\theta(z^j_0)}\Big]\Big] (10b)
\approx \mathbb{E}_Z \mathbb{E}_q \log\Big[\frac{q(z_{1:T}|z_0)}{p_\theta(z_0|c)} N \frac{p_\theta(z_0|c)}{q(z_{1:T}|z_0)}\Big] (10c)
\approx \mathbb{E}_q\Big[-\log \frac{p_\theta(z_0|c)}{q(z_{1:T}|z_0)}\Big] - N\,\mathbb{E}_{Z'} \mathbb{E}_q\Big[-\log \frac{p_\theta(z_0|c)}{q(z_{1:T}|z_0)}\Big] (10d)
= \mathbb{E}_q[-\log p_\theta(z_0|z_t, c)] - C \sum_{z^j \in Z'} \mathbb{E}_q[-\log p_\theta(z^j_0|z_t, c)]. (10e)

Note that, from a high-level perspective, our contrastive idea covers two different concepts, while conventional contrastive learning usually focuses only on negative samples. In our case, due to the unique formulation of diffusion models that brings the diffusion steps into the methodology design, we consider the contrast within the context of “negative samples” and “negative steps” (which also correspond to the “negative intermediate steps”). In the derivation above, we use the symbols Z and q to distinguish between these two concepts. B.5 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION TRAINING The training process for the proposed contrastive diffusion is explained in Algo. 1. C ADDITIONAL EXPERIMENTAL DETAILS AND ANALYSIS C.1 DANCE-TO-MUSIC TASK Implementation. The sampling rate for all audio signals is 22.5 kHz in our experiments. We use 2-second music samples as in Zhu et al. (2022a) for our main experiments, resulting in 44,100 audio data points for each raw music sequence. For the Music VQ-VAE, we fine-tuned Jukebox Dhariwal et al. (2020) on our data to leverage its pre-learned codebook from a large-scale music dataset (approximately 1.2 million songs). The codebook size K is 2048, with a token dimension dz = 128, and the hop length L is 128 in our default experimental setting. For the motion module, we deploy a backbone stacked with convolutional layers and residual blocks. The dimension of the embedding we use for music conditioning is 1024. For the visual module, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinetics Kay et al. (2017) as the visual conditioning information, with a dimension of 2048. In the implementation of our contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network pθ. It includes 19 transformer blocks, in which each block consists of full-attention, cross-attention, and feed-forward modules, and the channel size for each block is 1024. We set the initial weight for the contrastive loss to λ = 5e-5. The numbers of intra- and inter-negative samples for each GT music sample are both 10. The AdamW Loshchilov & Hutter (2017) optimizer with β1 = 0.9 and β2 = 0.96 is deployed in our training, with a learning rate of 4.5e-4. We also employ an adaptive weight for the denoising loss, gradually decreasing the weight as the diffusion step increases and approaches the end of the chain. The visual module, motion module, and contrastive diffusion model are jointly optimized. The architecture of the adopted motion encoder is shown in Tab. 5, which is the same as in Zhu et al. (2022a).
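Since the paper describes the motion encoder only as a stack of convolutional layers and residual blocks with a 1024-d output (exact hyperparameters are given in Tab. 5, not reproduced here), the following PyTorch sketch is one plausible instantiation; the input channel count, kernel sizes, and hidden widths are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(ch, ch, kernel_size=3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))    # residual connection

class MotionEncoder(nn.Module):
    # Convolutional layers plus residual blocks mapping a pose sequence to a
    # 1024-d conditioning embedding; only the output size follows the paper.
    def __init__(self, in_ch=72, hid=256, out_dim=1024, n_blocks=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, hid, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            *[ResBlock(hid) for _ in range(n_blocks)],
            nn.AdaptiveAvgPool1d(1))
        self.proj = nn.Linear(hid, out_dim)
    def forward(self, pose):                   # pose: (B, in_ch, T_frames)
        h = self.net(pose).squeeze(-1)         # (B, hid)
        return self.proj(h)                    # (B, 1024)

emb = MotionEncoder()(torch.randn(2, 72, 60))  # e.g., 60 frames of assumed pose features
```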
Other than the aforementioned implementation details, we also include the mask token technique, which bears resemblance to those used in language modelling Devlin et al. (2018) and text-to-image synthesis Gu et al. (2022), for our dance-to-music generation task. We adopt a truncation rate of 0.86 in our inference. MOS Evaluation Test. We asked a total of 32 participants to take part in our subjective Mean Opinion Score (MOS) music evaluations Zhu et al. (2022a); Kumar et al. (2019), of whom 11 are female and the rest male. For the dance-music coherence test, we fuse the generated music samples with the GT videos as post-processing. We then asked each evaluator to rate 20 generated videos with a score of 1 (least coherent) to 5 (most coherent) after watching the processed video clip. Specifically, the participants are asked to pay more attention to the dance-music coherence, in terms of the dance moves corresponding to the music genre and rhythm, rather than the overall music quality, with reference to the GT video clips with the original music. As for the overall quality evaluations, we only play the audio tracks, without the video frames, to each evaluator. As before, they are asked to rate the overall music quality with a score of 1 (worst audio quality) to 5 (best audio quality). Training Cost. For the dance2music task experiments on the AIST++ dataset, we use 4 NVIDIA RTX A5000 GPUs, and train the model for approximately 2 days. For the same task on the TikTok dance-music dataset, the training takes approximately 1.5 days on the same hardware. Complete Results for Contrastive Settings. As discussed in our main paper, there are four possible combinations of contrastive settings given the different contrastive diffusion mechanisms and negative sampling methods. Here, we include the complete quantitative scores for the different contrastive settings in Tab. 6. We observe that all four contrastive settings, including the Step-Inter and Sample-Intra settings that are not reported in our main paper, help to improve the performance. As noted, among all the settings, Step-Intra and Sample-Inter are more reasonable and yield larger improvements for intra-sample data attributes (i.e., beats scores) and instance-level features (i.e., genre accuracy scores), respectively. Ablation on Music Length. Although we use 2-second musical sequences in the main experiments to make for consistent and fair comparisons with Zhu et al. (2022a), our framework can also synthesize longer musical sequences. In the supplementary material, we show generated 6-second music sequences. The quantitative evaluations in terms of different musical sequence lengths are presented in Tab. 7, where we show better performance when synthesizing longer musical sequences. C.2 TEXT-TO-IMAGE TASK Implementation. For the text-to-image generation task, we adopt VQ-GAN Esser et al. (2021b) as the discrete encoder and decoder. The codebook size K is 2886, with a token dimension dz = 256. VQ-GAN converts a 256 × 256 resolution image into 32 × 32 discrete tokens. For the textual conditioning, we employ the pre-trained CLIP Radford et al. (2021) model to encode the given textual descriptions. The denoising diffusion model pθ has 18 transformer blocks and a channel size of 192, which is a model scale similar to the small version of VQ-Diffusion Gu et al. (2022). We use λ = 5e-5 as the contrastive loss weight. As in the dance-to-music task, we also use the adaptive weight that changes over the diffusion stages.
We keep the same truncation rate of 0.86 as in our dance-to-music experiment and in Gu et al. (2022). Unlike in the dance-to-music experiments, where we jointly learn the conditioning encoders, both the VQ-GAN and CLIP models are fixed during the contrastive diffusion training. Training Cost. For the text2image task experiments on the CUB200 dataset, the training takes approximately 5 days using 4 NVIDIA RTX A5000 GPUs. For the same experiments on the MSCOCO dataset, we run the experiments on Amazon Web Services (AWS) using 8 NVIDIA Tesla V100 GPUs. This task required 10 days of training. C.3 CLASS-CONDITIONED IMAGE SYNTHESIS TASK Implementation. For class-conditioned image synthesis, we also adopt the pre-trained VQ-GAN Esser et al. (2021b) as the discrete encoder and decoder. We replace the conditioning encoder with a class embedding optimized during the contrastive diffusion training. The size of the conditional embedding is 512. Other parameters and techniques remain the same as in the text-to-image task. Training Cost. For the class-conditioned experiments on ImageNet, we use 8 NVIDIA Tesla V100 GPUs running on AWS. This task required 20 days of training. D MORE QUALITATIVE RESULTS D.1 GENERATED MUSIC SAMPLES For qualitative samples of synthesized dance music sequences, please refer to our anonymous page with music samples in the supplement. In addition to the generated music samples on AIST++ Tsuchida et al. (2019); Li et al. (2021) and the TikTok Dance-Music Dataset Zhu et al. (2022a), we also include some qualitative samples obtained with music editing operations based on the dance-music genre annotations from AIST++. Specifically, we edit the original paired motion conditioning input with a different dance-music genre using a different dance choreographer. Discussion on Musical Representations and Audio Quality. It is worth noting that we only compare the overall audio quality with that of D2M-GAN Zhu et al. (2022a). This is due to the nature of the different musical representations in the literature of deep-learning based music generation Gan et al. (2020a); Dong et al. (2018); Huang et al. (2019); Gan et al. (2020b); Aggarwal & Parikh (2021). There are two main categories of musical representations adopted in previous works: pre-defined symbolic and learning-based representations Ji et al. (2020); Briot et al. (2020). For the former, symbolic music representations, typical options include 1D piano-roll and 2D MIDI-based representations. While these works benefit from pre-defined music synthesizers and produce music that does not include raw audio noise, their main limitation is that such representations are usually limited to a single specific instrument, which hinders their flexibility in wider and more complex scenarios such as dance videos. In contrast, learning-based music representations (i.e., the musical VQ in our case) rely on well-trained music synthesizers as decoders, but can be used as a unified representation for various musical sounds, e.g., instruments or voices. However, training such music encoders and decoders for high-quality audio signals itself remains a challenging problem. Specifically, high-quality audio is a form of high-dimensional data with an extremely large sampling rate, even compared to high-resolution images. For example, the sampling rate for CD-quality audio signals is 44.1 kHz, resulting in 2,646,000 data points for a one-minute musical piece. To this end, existing deep-learning based works Dhariwal et al.
(2020); Kumar et al. (2019) for music generation employ methods to reduce the number of dimensions, e.g., by introducing hop lengths and smaller sampling rates. These operations help to make music learning and generation more computationally tractable, but also introduce additional noise in the synthesized audio signals. In this work, we adopt the pre-trained Jukebox model Dhariwal et al. (2020) as our music encoder and decoder for the musical VQ representation. The adopted model has a hop length of 128, which corresponds to the top-level model from the original work Dhariwal et al. (2020). Jukebox employs 3 models: top-, middle-, and bottom-level, with both audio quality and required computation increasing from the first to the last. As an example, in the supplemental HTML page, we provide music samples directly reconstructed from Jukebox using the top-level model we employ in our work, compared to the ground-truth audio. While the bottom-level model (with a hop length of 8) allows for high-quality audio reconstruction, it requires much more time and computation, not only for training but also for the final inference, e.g., 3 hours to generate a 20-second musical sequence. As the synthesized music from the top-level model includes some audible noise, we apply a noise reduction operation Sainburg et al. (2020). However, the overall audio quality is not a primary factor that we specifically address in this work on cross-modal conditioning and generation, as it largely depends on the specific music encoder and decoder that are employed. This explains why we report similar MOS scores in terms of the general audio quality. D.2 SYNTHESIZED IMAGES We present more qualitative examples for text-to-image synthesis and class-conditioned image synthesis in Fig. 5, Fig. 6, and Fig. 7. E FURTHER DISCUSSION ON THE CDCD LOSS In this section, we provide further discussion of the proposed CDCD loss in terms of various aspects, including its relation to existing auxiliary losses, the impact of the CDCD strength, and additional experimental results. E.1 CDCD AS AUXILIARY LOSS While diffusion models are typically trained and optimized with the conventional variational lower bound loss Lvb, as described in the main paper and Appendix B.2, several different types of auxiliary losses have been proposed to further regularize and improve the learning of diffusion models. Specifically, Dhariwal & Nichol (2021) introduces the idea of classifier-based guidance for denoising diffusion probabilistic models with continuous state space. Classifier-free guidance is proposed in Ho & Salimans (2022). In the area of discrete diffusion formulations Austin et al. (2021); Gu et al. (2022), an auxiliary loss that encourages the model to predict the noiseless token at an arbitrary step is adopted and has proven to help with the synthesis quality. Similar to the previous cases, we consider the proposed CDCD loss a type of auxiliary loss that seeks to provide additional guidance to better learn the conditional distribution p(x|c). In particular, classifier-free guidance Ho & Salimans (2022) proposes to randomly discard conditioning while learning a conditional diffusion generative model, which bears resemblance to our downsampled contrastive steps introduced in Appendix E.3. E.2 IMPACT OF CDCD STRENGTH We further show ablation studies on the parameter λ, the weight of our proposed CDCD loss that characterizes the strength of this contrastive regularizer.
We conduct the dance-to-music generation experiments with different values of λ, and show the results in Tab. 9. We observe from the table that the performance in terms of the beat scores is relatively robust for λ values ranging from 4e-5 to 5e-5. At the same time, we empirically observe that with a larger value of λ, the model converges faster, in fewer training epochs. In the case of the image synthesis task, we are rather cautious about the strength of the imposed contrastive regularizer. Intuitively, the proposed CDCD loss encourages the model to learn a slightly different distribution for negative samples, which could impose a trade-off with the distribution of the actual data given a specific conditioning. Therefore, while a larger value of λ helps with the learning speed, we empirically set λ to 5e-5. Note that this value is adapted from the weight for other auxiliary losses in previous works Gu et al. (2022). E.3 DOWNSAMPLED CONTRASTIVE STEPS While we show the performance of the complete step-wise contrastive diffusion in the main paper, we discuss here an alternative way to implement the proposed method with less computational cost, by downsampling the contrastive steps in the diffusion process. Specifically, we randomly downsample the steps that carry the proposed CDCD loss, which shares a similar spirit with classifier-free guidance Ho & Salimans (2022), where the conditioning is randomly dropped out. The experimental results are listed in Tab. 10, where there is little performance drop with downsampled contrastive steps.
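A minimal sketch of this step downsampling is shown below; the keep probability is an assumed rate for illustration, as the paper does not state the exact downsampling ratio.

```python
import random

def total_loss(L_vb: float, L_cdcd: float, lam: float = 5e-5,
               contrast_prob: float = 0.5, rng=None) -> float:
    # Downsampled contrastive steps: include the CDCD term at a training step
    # only with probability `contrast_prob` (an assumed rate for illustration).
    rng = rng or random.Random(0)
    if rng.random() < contrast_prob:
        return L_vb + lam * L_cdcd
    return L_vb

print(total_loss(1.0, 0.3))  # toy scalar usage
```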
1. What is the focus of the paper regarding cross-modal generation methods?
2. What are the strengths of the proposed approach, particularly in enhancing cross-modal relationships?
3. What are the weaknesses of the paper, especially regarding the claim about prior models and the potential contradiction between objectives?
4. Do you have any concerns about the construction of inter-negative samples or the choice of baseline models?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper Existing diffusion-based cross-modal generation methods mainly establish the cross-modal relationships by incorporating the cross-modal prior model into the variational lower bound of the diffusion model. However, the authors claim that this method may lead to the loss of the cross-modal correspondence in the denoising process. To overcome this, the authors propose the Conditional Discrete Contrastive Diffusion (CDCD) loss, which enhances the cross-modal relationships by constructing negative samples and introducing a contrastive loss in training. Strengths And Weaknesses Strengths: The idea of introducing a contrastive loss to enhance the cross-modal relationship is reasonable. The proposed pipeline is well illustrated in Figure 2. The proposed approach is clearly described, which makes the manuscript easy to follow. Weaknesses: The authors should provide more evidence to support the claim that incorporating the prior into the variational lower bound can lead to the loss of the cross-modal correspondence. Would the objective of enhancing cross-modal relationships contradict that of increasing sample quality? How would the authors balance the variational loss and the contrastive loss? In the construction of inter-negative samples, the authors take all images x' other than x as negative samples. In this way, similar images may also be considered negative samples. How would the authors address this? In the text-to-image generation task, the authors use VQ-Diffusion-S as the baseline. The results of the proposed approach slightly outperform VQ-Diffusion-S while falling far behind VQ-Diffusion-B. The authors should verify the effectiveness of the proposed approach on larger models. In Table 3, the performance of the proposed approach falls behind DF-GAN. Clarity, Quality, Novelty And Reproducibility The idea of introducing a contrastive loss into the diffusion model is reasonable and novel, which demonstrates the contribution of this paper. The proposed approach is clearly described in the paper.
ICLR
Title Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation Abstract Diffusion probabilistic models (DPMs) have become a popular approach to conditional generation, due to their promising results and support for cross-modal synthesis. A key desideratum in conditional synthesis is to achieve high correspondence between the conditioning input and generated output. Most existing methods learn such relationships implicitly, by incorporating the prior into the variational lower bound. In this work, we take a different route—we explicitly enhance input-output connections by maximizing their mutual information. To this end, we introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss and design two contrastive diffusion mechanisms to effectively incorporate it into the denoising process, combining the diffusion training and contrastive learning for the first time by connecting it with the conventional variational objectives. We demonstrate the efficacy of our approach in evaluations with diverse multimodal conditional synthesis tasks: dance-to-music generation, text-to-image synthesis, as well as class-conditioned image synthesis. On each, we enhance the inputoutput correspondence and achieve higher or competitive general synthesis quality. Furthermore, the proposed approach improves the convergence of diffusion models, reducing the number of required diffusion steps by more than 35% on two benchmarks, significantly increasing the inference speed. 1 INTRODUCTION Generative tasks that seek to synthesize data in different modalities, such as audio and images, have attracted much attention. The recently explored diffusion probabilistic models (DPMs) Sohl-Dickstein et al. (2015b) have served as a powerful generative backbone that achieves promising results in both unconditional and conditional generation Kong et al. (2020); Mittal et al. (2021); Lee & Han (2021); Ho et al. (2020); Nichol & Dhariwal (2021); Dhariwal & Nichol (2021); Ho et al. (2022); Hu et al. (2021). Compared to the unconditional case, conditional generation is usually applied in more concrete and practical cross-modality scenarios, e.g., video-based music generation Di et al. (2021); Zhu et al. (2022a); Gan et al. (2020a) and text-based image generation Gu et al. (2022); Ramesh et al. (2021); Li et al. (2019); Ruan et al. (2021). Most existing DPM-based conditional synthesis works Gu et al. (2022); Dhariwal & Nichol (2021) learn the connection between the conditioning and the generated data implicitly by adding a prior to the variational lower bound Sohl-Dickstein et al. (2015b). While such approaches still feature high generation fidelity, the correspondence between the conditioning and the synthesized data can sometimes get lost, as illustrated in the right column in Fig. 1. To this end, we aim to explicitly enhance the input-output faithfulness via their maximized mutual information under the diffusion generative framework for conditional settings in this paper. Examples of our synthesized music audio and image results are given in Fig. 1. Contrastive methods Oord et al. (2018); Bachman et al. (2019); Song & Ermon (2020a) have been proven to be very powerful for data representation learning. Their high-level idea aims to learn the representation z of raw data x based on the assumption that a properly encoded z benefits the ability of a generative model p to reconstruct the raw data given z as prior. This idea can be achieved via optimization of the density ratio p(x|z)p(x) Oord et al. 
(2018) as an entirety, without explicitly modeling the actual generative model p. While the direct optimization of mutual information via generative models p is a challenging problem to implement and train Song & Ermon (2020b); Belghazi et al. (2018) in the conventional contrastive representation learning field, we show that this can be effectively done within our proposed contrastive diffusion framework. Specifically, we reformulate the optimization problem for the desired conditional generative tasks via DPMs by analogy to the above embedding z and raw data x with our conditioning input and synthesized output. We introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss, and design two contrastive diffusion mechanisms - step-wise parallel diffusion that invokes multiple parallel diffusion processes during contrastive learning, and sample-wise auxiliary diffusion, which maintains one principal diffusion process, to effectively incorporate the CDCD loss into the denoising process. We demonstrate that with the proposed contrastive diffusion method, we can not only effectively train so as to maximize the desired mutual information by connecting the CDCD loss with the conventional variational objective function, but also to directly optimize the generative network p. The optimized CDCD loss further encourages faster convergence of a DPM model with fewer diffusion steps. We additionally present our intra- and inter-negative sampling methods by providing internally disordered and instance-level negative samples, respectively. To better illustrate the input-output connections, we conduct main experiments on the novel crossmodal dance-to-music generation task Zhu et al. (2022a), which aims to generate music audio based on silent dance videos. Compared to other tasks such as text-to-image synthesis, dance-to-music generation explicitly evaluates the input-output correspondence in terms of various cross-modal alignment features such as dance-music beats, genre and general quality. However, various generative settings, frameworks, and applications can also benefit from our contrastive diffusion approach, e.g., joint or separate training of conditioning encoders, continuous or discrete conditioning inputs, and diverse input-output modalities as detailed in Sec. 4. Overall, we achieve results superior or comparable to state-of-the-art on three conditional synthesis tasks: dance-to-music (datasets: AIST++ Tsuchida et al. (2019); Li et al. (2021), TikTok Dance-Music Zhu et al. (2022a)), text-toimage (datasets: CUB200 Wah et al. (2011), MSCOCO Lin et al. (2014)) and class-conditioned image synthesis (dataset: ImageNet Russakovsky et al. (2015)). Our experimental findings suggest three key take-away: 1 Improving the input-output connections via maximized mutual information is indeed beneficial for their correspondence and the general fidelity of the results (see Fig. 1 and supplement). 2 Both our proposed step-wise parallel diffusion with intra-negative samples and sample-wise auxiliary diffusion with inter-negative samples show state-of-the-art scores in our evaluations. The former is more beneficial for capturing the intra-sample correlations, e.g., musical rhythms, while the latter improves the instance-level performance, e.g., music genre and image class. 
(3) With maximized mutual information, our conditional contrastive diffusion converges in substantially fewer diffusion steps than vanilla DPMs, while maintaining the same or even superior performance (approximately 35% fewer steps for dance-to-music generation and 40% fewer for text-to-image synthesis), thus significantly increasing inference speed.

2 BACKGROUND

Diffusion Probabilistic Models. DPMs Sohl-Dickstein et al. (2015b) are a class of generative models that learn to convert a simple Gaussian distribution into a data distribution. This process consists of a forward diffusion process and a reverse denoising process, each consisting of a sequence of T steps that act as a Markov chain. During forward diffusion, an input data sample x_0 is gradually "corrupted" at each step t by adding Gaussian noise to the output of step t-1. The reverse denoising process seeks to convert the noisy latent variable x_T into the original data sample x_0 by removing the noise added during diffusion. The stationary distribution for the final latent variable x_T is typically assumed to be a normal distribution, p(x_T) = N(x_T | 0, I). An extension of this approach replaces the continuous state with a discrete one Sohl-Dickstein et al. (2015a); Hoogeboom et al. (2021); Austin et al. (2021), in which the latent variables x_{1:T} typically take the form of one-hot vectors with K categories. The diffusion process can then be parameterized using a multinomial categorical transition matrix defined as q(x_t | x_{t-1}) = Cat(x_t; p = x_{t-1} Q_t), where [Q_t]_{ij} = q(x_t = j | x_{t-1} = i). The reverse process p_θ(x_{t-1} | x_t) can also be factorized as conditionally independent over the discrete sequences Austin et al. (2021). In both the continuous and discrete state formulations of DPMs Song & Ermon (2020c); Song et al. (2020b); Kingma et al. (2021); Song et al. (2021); Huang et al. (2021); Vahdat et al. (2021), the denoising process p_θ can be optimized by the KL divergence between q and p_θ in closed form Song et al. (2020a); Nichol & Dhariwal (2021); Ho et al. (2020); Hoogeboom et al. (2021); Austin et al. (2021) via the variational bound on the negative log-likelihood:

L_vb = E_q[ D_KL(q(x_T | x_0) || p(x_T)) + Σ_{t>1} D_KL(q(x_{t-1} | x_t, x_0) || p_θ(x_{t-1} | x_t)) − log p_θ(x_0 | x_1) ],  (1)

where the three terms are denoted L_T, L_{t-1}, and L_0, respectively. Existing conditional generation works via DPMs Gu et al. (2022); Dhariwal & Nichol (2021) usually learn the implicit relationship between the conditioning c and the synthesized data x_0 by directly adding c as the prior in (1). DPMs with a discrete state space provide more control over the data corruption and denoising than their continuous counterparts Austin et al. (2021); Gu et al. (2022) through the flexible design of the transition matrix, which benefits practical downstream operations such as editing and interactive synthesis Tseng et al. (2020); Cui et al. (2021); Xu et al. (2021). We hence employ contrastive diffusion using a discrete state space in this work.
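To make this discrete corruption step concrete, below is a minimal PyTorch sketch assuming the simplest uniform transition matrix; the transition matrices used in this and prior works can be more elaborate (e.g., including mask tokens), and all names here are illustrative.

    import torch

    def uniform_transition_matrix(K: int, beta: float) -> torch.Tensor:
        # [Q_t]_ij = q(x_t = j | x_{t-1} = i): keep a token with probability
        # (1 - beta), otherwise resample it uniformly over the K categories.
        return (1.0 - beta) * torch.eye(K) + (beta / K) * torch.ones(K, K)

    def forward_diffusion_step(x_prev: torch.Tensor, Q_t: torch.Tensor) -> torch.Tensor:
        # One step of q(x_t | x_{t-1}) = Cat(x_t; p = x_{t-1} Q_t),
        # where x_prev holds integer token ids of shape (batch, seq_len).
        K = Q_t.shape[0]
        one_hot = torch.nn.functional.one_hot(x_prev, num_classes=K).float()
        probs = one_hot @ Q_t                        # selects row i of Q_t per token
        x_t = torch.multinomial(probs.view(-1, K), num_samples=1)
        return x_t.view_as(x_prev)

    # Example: corrupt a batch of 4 token sequences of length 8 over a 2048-entry codebook.
    x0 = torch.randint(0, 2048, (4, 8))
    x1 = forward_diffusion_step(x0, uniform_transition_matrix(K=2048, beta=0.02))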
Contrastive Representation Learning. Contrastive learning uses loss functions, built from positive and negative pairs of data points, that are designed to make neural networks learn to represent the specific similarities and differences between elements in the training data without labels explicitly defining such features. This approach has been successfully applied in learning representations of high-dimensional data Oord et al. (2018); Bachman et al. (2019); He et al. (2020); Song & Ermon (2020a); Chen et al. (2020); Lin et al. (2021). Many such works seek to maximize the mutual information between the original data x and its learned representation z under the framework of likelihood-free inference Oord et al. (2018); Song & Ermon (2020a); Wu et al. (2021). The above problem can be formulated as maximizing a density ratio p(x|z)/p(x) that preserves the mutual information between the raw data x and the learned representation z. To achieve this, existing contrastive methods Oord et al. (2018); Durkan et al. (2020); He et al. (2020); Zhang et al. (2021) typically adopt a neural network to directly model the ratio as an entirety and avoid explicitly considering the actual generative model p(x|z), which has proven to be a more challenging problem Song & Ermon (2020b); Belghazi et al. (2018). In contrast, we show that by formulating the conventional contrastive representation learning problem under the generative setting, the properties of DPMs enable us to directly optimize the model p in this work, which can be interpreted as the optimal version of the density ratio Oord et al. (2018).

Vector-Quantized Representations for Conditional Generation. Vector quantization is a classical technique in which a high-dimensional space is represented using a discrete number of vectors. More recently, Vector-Quantized (VQ) deep learning models employ this technique to allow for compact and discrete representations of music and image data Oord et al. (2017); Razavi et al. (2019); Esser et al. (2021b); Dhariwal et al. (2020); Chen et al. (2022). Typically, VQ-based models use an encoder-codebook-decoder framework, where the "codebook" contains a fixed number of vectors (entries) to represent the original high-dimensional raw data. The encoder transforms the input x into feature embeddings that are each mapped to the closest corresponding vector in the codebook, while the decoder uses the set of quantized vectors z to reconstruct the input data, producing x′, as illustrated in the upper part of Fig. 2. In this work, we perform the conditional diffusion process in the VQ space (i.e., on discrete token sequences), as shown in the bottom part of Fig. 2, which largely reduces the dimensionality of the raw data, thus avoiding expensive raw-data decoding and synthesis. As our approach is flexible enough to be employed with various input and output modalities, the exact underlying VQ model we use depends on the target data domain. For music synthesis, we employ a fine-tuned Jukebox Dhariwal et al. (2020) model, while for image generation, we employ VQ-GAN Esser et al. (2021b). See Sec. 4 for further details. We refer to z, the latent quantized representation of x, as z_0 below to distinguish it from the latent representation at prior stages in the denoising process.
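For illustration, the codebook lookup at the heart of such VQ models can be sketched as a nearest-neighbor search; this omits the commitment and codebook losses used to train actual VQ-VAE/VQ-GAN models, and the shapes below are only examples.

    import torch

    def vector_quantize(features: torch.Tensor, codebook: torch.Tensor):
        # Map each d-dimensional feature to its nearest codebook entry.
        # features: (n, d) encoder outputs; codebook: (K, d) learned entries.
        dists = torch.cdist(features, codebook)   # pairwise distances, (n, K)
        ids = dists.argmin(dim=1)                 # discrete tokens z_0
        return ids, codebook[ids]                 # token ids and quantized vectors

    # Example: quantize 32 x 32 = 1024 image features of dim 256 against a 2886-entry codebook.
    tokens, quantized = vector_quantize(torch.randn(1024, 256), torch.randn(2886, 256))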
3 METHOD

Here we outline our approach to cross-modal and conditional generation using our proposed discrete contrastive diffusion approach, which is depicted in Fig. 2. In Sec. 3.1, we formulate our Conditional Discrete Contrastive Diffusion loss in detail and demonstrate how it helps to maximize the mutual information between the conditioning and the generated discrete data representations. Sec. 3.2 defines two specific mechanisms for applying this loss within a diffusion model training framework, step-wise and sample-wise. In Sec. 3.3, we detail techniques for constructing negative samples designed to improve the overall quality and coherence of the generated sequences.

Given the data pair (c, x), where c is the conditioning information from a given input modality (e.g., videos, text, or a class label), our objective is to generate a data sample x in the target modality (e.g., music audio or images) corresponding to c. In the training stage, we first employ and train a VQ-based model to obtain the discrete representation z_0 of the data x from the target modality. Next, our diffusion process operates on the encoded latent representation z_0 of x. The denoising process recovers the latent representation z_0 given the conditioning c, which can then be decoded to obtain the reconstruction x′. At inference, we generate z_0 based on the conditioning c, and decode the latent VQ representation z_0 back to the raw data domain using the pre-trained and fixed VQ decoder.

3.1 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION LOSS

We seek to enhance the connection between c and the generated data z_0 by maximizing their mutual information, defined as I(z_0; c) = Σ_{z_0} p_θ(z_0, c) log( p_θ(z_0|c) / p_θ(z_0) ). We introduce a set of negative VQ sequences Z′ = {z^1, z^2, ..., z^N}, encoded from N negative samples X′ = {x^1, x^2, ..., x^N}, and define f(z_0, c) = p_θ(z_0|c) / p_θ(z_0). Our proposed Conditional Discrete Contrastive Diffusion (CDCD) loss is:

L_CDCD := −E[ log( f(z_0, c) / ( f(z_0, c) + Σ_{z^j∈Z′} f(z^j_0, c) ) ) ].  (2)

The proposed CDCD loss is similar to the categorical cross-entropy loss for classifying the positive sample as in Oord et al. (2018), where our conditioning c and generated data z_0 correspond to the learned representation and raw data there, and optimization of this loss leads to the maximization of I(z_0; c). However, the loss in Oord et al. (2018) models the density ratio f(z_0, c) as an entirety. In our case, we demonstrate that the properties of DPMs Sohl-Dickstein et al. (2015b); Ho et al. (2020); Austin et al. (2021) enable us to directly optimize the actual distribution p_θ within the diffusion process for the desired conditional generation tasks. Specifically, we show the connection between the proposed CDCD loss and the conventional variational loss L_vb (see (1)) in Sec. 3.2, and thus how it contributes to efficient DPM learning. Additionally, we can derive a lower bound for the mutual information as I(z_0; c) ≥ log(N) − L_CDCD (see supplement for details), which indicates that a larger number of negative samples increases the lower bound. These two factors allow for faster convergence of a DPM with fewer diffusion steps.

3.2 PARALLEL AND AUXILIARY DIFFUSION PROCESS

The CDCD loss in (2) considers the mutual information between c and z_0 in a general way, without specifying the intermediate diffusion steps. We propose and analyze two contrastive diffusion mechanisms to efficiently incorporate this loss into DPM learning, and demonstrate that we can directly optimize the generative model p_θ in the diffusion process. We present our step-wise parallel diffusion and sample-wise auxiliary diffusion mechanisms, which are distinguished by the specific operations applied to the intermediate negative latent variables z^j_{1:T} for each negative sample x^j. The high-level intuition behind the parallel and auxiliary designs is to emphasize different attributes of the synthesized data for specific applications. In particular, we propose the parallel variant to learn the internal coherence of sequential audio data by emphasizing the gradual change at each time step, while the auxiliary mechanism focuses more on the sample-level connections to the conditioning.
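Before specializing (2) to the two mechanisms below, note that, treating log f as a logit, the CDCD loss is exactly a cross-entropy that selects the positive among the negatives. A minimal sketch, where log_ratio_pos and log_ratio_neg are hypothetical stand-ins for log p_θ(z_0|c) − log p_θ(z_0) as computed by the model:

    import torch

    def cdcd_loss(log_ratio_pos: torch.Tensor, log_ratio_neg: torch.Tensor) -> torch.Tensor:
        # log_ratio_pos: (batch,) values of log f(z_0, c) for the positive sequence.
        # log_ratio_neg: (batch, N) same quantity for the N negative sequences z_0^j.
        logits = torch.cat([log_ratio_pos.unsqueeze(1), log_ratio_neg], dim=1)
        # Cross-entropy against class 0 computes -log f_pos / (f_pos + sum_j f_neg_j),
        # i.e., Eq. (2).
        target = torch.zeros(logits.shape[0], dtype=torch.long)
        return torch.nn.functional.cross_entropy(logits, target)

    # Example with a batch of 4 and N = 10 negatives.
    loss = cdcd_loss(torch.randn(4), torch.randn(4, 10))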
Step-Wise Parallel Diffusion. This mechanism not only focuses on the mutual information between c and z_0, but also takes the intermediate negative latent variables z^j_{1:T} into account by explicitly invoking the complete diffusion process for each negative sample z^j ∈ Z′. As illustrated in Fig. 2 (bottom left), we initiate N + 1 parallel diffusion processes, among which N are invoked by negative samples. For each negative sample x^j ∈ X′, we explicitly compute its negative latent discrete variables z^j_{0:T}. In this case, (2) becomes (see supplement for the detailed derivation):

L_{CDCD-Step} := E_Z log[ 1 + (p_θ(z_{0:T}) / p_θ(z_{0:T}|c)) · N · E_{Z′}[ p_θ(z^j_{0:T}|c) / p_θ(z^j_{0:T}) ] ] ≡ L_vb(z, c) − C Σ_{z^j∈Z′} L_vb(z^j, c).  (3)

The equation above factorizes the proposed CDCD loss under the step-wise parallel diffusion mechanism into two terms, where the first term corresponds to the original variational bound L_vb, and the second term can be interpreted as the negative sum of the variational bounds induced by the negative samples and the provided conditioning c. C is a constant, as detailed in our supplement.

Sample-Wise Auxiliary Diffusion. Alternatively, our sample-wise auxiliary diffusion mechanism maintains one principal diffusion process, as in traditional diffusion training, shown in Fig. 2 (bottom right). It contrasts the intermediate positive latent variables z_{1:T} with the negative samples z^j_0 ∈ Z′. In this case, we can write the CDCD loss from (2) as (see supplement for details):

L_{CDCD-Sample} := E_q[ −log p_θ(z_0|z_t, c) ] − C Σ_{z^j∈Z′} E_q[ −log p_θ(z^j_0|z_t, c) ].  (4)

As with the step-wise loss, the CDCD-Sample loss includes two terms. The first refers to sampling directly from the positive z_0 at an arbitrary timestep t. The second sums the same auxiliary loss over the negative samples z^j_0. This marginalization operation is based on the Markov chain property, as in previous discrete DPMs Austin et al. (2021); Gu et al. (2022), and imposes direct supervision from the sample data. The first term is similar to the auxiliary denoising objective in Austin et al. (2021); Gu et al. (2022). Both contrastive diffusion mechanisms enable us to effectively incorporate the CDCD loss into our DPM learning process by directly optimizing the actual denoising generative network p_θ.

Final Loss Function. The final loss function for our contrastive diffusion training process is:

L = L_vb(z, c) + λ L_CDCD,  (5)

where L_vb depends on the conditioning c and takes the form L_{t−1} = D_KL(q(z_{t−1}|z_t, z_0) || p_θ(z_{t−1}|z_t, c)) as in Gu et al. (2022), with c included as the prior for all intermediate steps. L_CDCD refers to either the step-wise parallel or the sample-wise auxiliary diffusion loss. Empirically, we can either omit the first term in (3) or directly optimize L_{CDCD-Step}, in which the standard L_vb is already included. The detailed training algorithm is explained in the supplement.

3.3 INTRA- AND INTER-NEGATIVE SAMPLING

Previous contrastive works construct negative samples using techniques such as image augmentation Chen et al. (2020); He et al. (2020) or spatially adjacent image patches Oord et al. (2018). In this work, we categorize our sampling methods into intra- and inter-negative sampling, as in Fig. 3. For intra-sample negative sampling, we construct X′ based on the given original x. This bears resemblance to the patch-based technique in the image domain Oord et al. (2018). For audio data, we first divide the original audio waveform into multiple chunks and randomly shuffle their ordering, as sketched below.
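A minimal sketch of this chunk-shuffling construction; the chunk count is illustrative.

    import numpy as np

    def intra_negative(waveform: np.ndarray, n_chunks: int = 8, rng=None) -> np.ndarray:
        # Build an intra-sample negative: cut the waveform into chunks and permute
        # their order, breaking the rhythmic structure while keeping the timbre.
        if rng is None:
            rng = np.random.default_rng()
        chunks = np.array_split(waveform, n_chunks)
        order = rng.permutation(n_chunks)
        return np.concatenate([chunks[i] for i in order])

    # Example: a 2-second clip at 22.05 kHz has 44,100 samples.
    negative = intra_negative(np.random.randn(44100))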
For inter-sample negative sampling, X′ consists of instance-level negative samples x′ that differ from the given data pair (c, x). In practice, we define negative samples x′ to be music sequences whose musical genre differs from that of x in the music generation task, while x′ denotes images other than x in the image synthesis task (in practice, we choose x′ with class labels different from that of x). Based on our proposed contrastive diffusion modes and negative sampling methods, there are four possible contrastive settings: step-wise parallel diffusion with either intra- or inter-negative sampling (denoted as Step-Intra and Step-Inter), or sample-wise auxiliary diffusion with either intra- or inter-negative sampling (denoted as Sample-Intra and Sample-Inter). Intuitively, we argue that the Step-Intra and Sample-Inter settings are more reasonable than Step-Inter and Sample-Intra because of the consistency between the diffusion data corruption process and the way the negative samples are constructed. Specifically, the data corruption process in discrete DPMs samples and replaces certain tokens with random or mask tokens at each diffusion step Austin et al. (2021); Gu et al. (2022), which is a chunk-level operation within a given data sequence, similar to the way we construct intra-negative samples by shuffling the chunk-level order. In contrast, the sample-wise auxiliary diffusion seeks to provide sample-level supervision, which is consistent with our inter-negative sampling method. In the interest of clarity and concision, we only present the experimental results for the Step-Intra and Sample-Inter settings in Sec. 4 of our main paper. The complete results obtained with the other contrastive settings and more detailed analysis are included in the supplement.

4 EXPERIMENTS

We conduct experiments on three conditional generation tasks: dance-to-music generation, text-to-image synthesis, and class-conditioned image synthesis. For the dance-to-music task, we seek to generate audio waveforms of complex music from human motion and dance video frames. For the text-to-image task, the objective is to generate images from given textual descriptions. Given our emphasis on input-output faithfulness for cross-modal generation, the main analysis is based on the dance-to-music generation task, since the evaluation protocol from Zhu et al. (2022a) explicitly measures such connections in terms of beats, genre, and general correspondence for the generated music.

4.1 DANCE-TO-MUSIC GENERATION

Dataset. We use the AIST++ Li et al. (2021) dataset and the TikTok Dance-Music dataset Zhu et al. (2022a) for the dance-to-music experiments. AIST++ is a subset of the AIST dataset Tsuchida et al. (2019), which contains 1020 dance videos and 60 songs performed by professional dancers and filmed in clean studio environment settings without occlusions. AIST++ provides human motion data in the form of SMPL Loper et al. (2015) parameters and body keypoints, and includes annotations for different genres and choreography styles. The TikTok Dance-Music dataset includes 445 dance videos collected from the social media platform. The 2D skeleton data extracted with OpenPose Cao et al. (2017); Cao et al. (2019) is used as the motion representation. We adopt the official cross-modality splits without overlapping music songs for both datasets.

Implementations. The sampling rate for all audio signals is 22.05 kHz in our experiments. We use 2-second music samples as in Zhu et al. (2022a) for the main experiments.
We fine-tune the pre-trained Jukebox Dhariwal et al. (2020) for our Music VQ-VAE model. For the motion encoder, we deploy a backbone stacked with convolutional layers and residual blocks. For the visual encoder, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinetics Kay et al. (2017) as the visual conditioning. The motion and visual encoder outputs are concatenated to form the final continuous conditioning input to our contrastive diffusion model. For the contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network p_θ. It includes 19 transformer blocks, with each block consisting of full-attention, cross-attention, and feed-forward modules, and a channel size of 1024 per block. We set the initial weight for the contrastive loss to λ = 5e-5. The number N of intra- and inter-negative samples for each GT music sample is 10. The visual encoder, motion encoder, and contrastive diffusion model are jointly optimized. More implementation details are provided in the supplement.

Evaluations. The evaluation of synthesized music measures both the conditioning-output correspondence and the general synthesis quality using the metrics introduced in Zhu et al. (2022a). Specifically, the metrics include the beats coverage score, the beats hit score, the genre accuracy score, and two subjective evaluation tests with Mean Opinion Scores (MOS) for musical coherence and general quality. Among these metrics, the beats scores emphasize intra-sample properties, since they are computed from the second-level audio onset strength within musical chunks Ellis (2007), while the genre accuracy focuses on instance-level musical attributes of music style. Detailed explanations of the above metrics can be found in Zhu et al. (2022a). We compare against multiple dance-to-music generation works: Foley Gan et al. (2020a), Dance2Music Aggarwal & Parikh (2021), CMT Di et al. (2021), and D2M-GAN Zhu et al. (2022a). The first three models rely on symbolic discrete MIDI musical representations, while the last one also uses a VQ musical representation. The major difference between the symbolic MIDI and the discrete VQ musical representations lies in the fact that MIDI is pre-defined for each instrument, while VQ is learning-based. The latter thus enables the complex and free music synthesis appropriate for scenarios like dance videos.

Table 1: Quantitative evaluation results for the dance-to-music task on the AIST++ dataset. This table shows the best performance scores we obtain for different contrastive diffusion steps. We report the mean and standard deviation of our contrastive diffusion over three inference tests.

Method            | Beats Coverage ↑ | Beats Hit ↑ | Genre Accuracy ↑ | Coherence MOS ↑ | Quality MOS ↑
GT Music          | 100              | 100         | 88.5             | 4.7             | 4.8
Foley             | 74.1             | 69.4        | 8.1              | 2.9             | -
Dance2Music       | 83.5             | 82.4        | 7.0              | 3.0             | -
CMT               | 85.5             | 83.5        | 11.6             | 3.0             | -
D2M-GAN           | 88.2             | 84.7        | 24.4             | 3.3             | 3.4
Ours Vanilla      | 89.0±1.1         | 83.8±1.5    | 25.3±0.8         | 3.3             | 3.6
Ours Step-Intra   | 93.9±1.2         | 90.7±1.5    | 25.8±0.6         | 3.6             | 3.5
Ours Sample-Inter | 91.8±1.6         | 86.9±1.4    | 27.2±0.5         | 3.6             | 3.6

Table 2: Quantitative evaluation results for the dance-to-music task on the TikTok dataset. We set the default number of diffusion steps to 80.

Method            | Beats Coverage / Hit ↑
D2M-GAN           | 88.4 / 82.3
Ours Vanilla      | 88.7 / 81.4
Ours Step-Intra   | 91.8 / 86.3
Ours Sample-Inter | 90.1 / 85.5
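The exact beat-scoring protocol follows Zhu et al. (2022a) and Ellis (2007); purely as an illustration, coverage and hit scores of this kind might be computed from onset-based beat tracking as in the librosa-based sketch below, where the alignment tolerance is an assumption.

    import librosa
    import numpy as np

    def beat_times(y: np.ndarray, sr: int = 22050) -> np.ndarray:
        # Beat positions (in seconds) tracked from the onset-strength envelope.
        _, frames = librosa.beat.beat_track(y=y, sr=sr)
        return librosa.frames_to_time(frames, sr=sr)

    def coverage_and_hit(gen: np.ndarray, ref: np.ndarray, sr: int = 22050, tol: float = 0.1):
        # Coverage: ratio of generated to reference beat counts.
        # Hit: fraction of reference beats matched by a generated beat within tol seconds.
        b_gen, b_ref = beat_times(gen, sr), beat_times(ref, sr)
        if len(b_gen) == 0 or len(b_ref) == 0:
            return 0.0, 0.0
        hits = sum(np.min(np.abs(b_gen - t)) <= tol for t in b_ref)
        return len(b_gen) / len(b_ref), hits / len(b_ref)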
Results and Discussion. The quantitative experimental results are shown in Tab. 1 and Tab. 2. Our proposed methods achieve better performance than the competing methods, even with the vanilla version without contrastive mechanisms. Furthermore, we find that the Step-Intra setting is more helpful in increasing the beats scores, while the Sample-Inter setting yields larger improvements in the genre accuracy scores. We believe this is due to how the different metrics are computed: the beats scores measure the chunk-level consistency (i.e., the audio onset strength Ellis (2007)) between the GT and synthesized music samples Zhu et al. (2022a), while the genre scores consider the overall musical attributes of each sample sequence at the instance level. This finding is consistent with our assumptions in Sec. 3.3.

Convergence Analysis. We also analyze the impact of the proposed contrastive diffusion on model convergence in terms of diffusion steps. The number of diffusion steps is a significant hyper-parameter for DPMs Sohl-Dickstein et al. (2015b); Nichol & Dhariwal (2021); Austin et al. (2021); Gu et al. (2022); Kingma et al. (2021) that directly influences the inference time and synthesis quality. Previous works have shown that a larger number of diffusion steps usually leads to better model performance, but longer inference times Kingma et al. (2021); Gu et al. (2022). We demonstrate that, with the improved mutual information via the proposed contrastive diffusion method, we can greatly reduce the number of steps needed. As shown in Fig. 4 (left), we observe that the beats scores reach a stable level at approximately 80 steps, ∼35% fewer than the vanilla DPM, which converges in ∼120 steps. More ablation studies and analysis on this task can be found in the supplement.

4.2 CONDITIONAL IMAGE SYNTHESIS

Dataset. We conduct text-to-image synthesis on the CUB200 Wah et al. (2011) and MSCOCO Lin et al. (2014) datasets. The CUB200 dataset contains images of 200 bird species; each image has 10 corresponding text descriptions. The MSCOCO dataset contains 82k images for training and 40k images for testing; each image has 5 text descriptions. We also perform class-conditioned image generation on ImageNet Deng et al. (2009); Russakovsky et al. (2015). Implementation details for both tasks are provided in the supplement.

Evaluations. We adopt two evaluation metrics for text-to-image synthesis: the classic FID score Heusel et al. (2017) as a general measure of image quality, and the CLIPScore Hessel et al. (2021) to evaluate the correspondence between the given textual caption and the synthesized image. For class-conditioned image synthesis, we use the FID score and a classifier-based accuracy for general and input-output correspondence measurement, respectively. We compare against text-to-image generation methods including StackGAN Zhang et al. (2017), StackGAN++ Zhang et al. (2018), SEGAN Tan et al. (2019), AttnGAN Xu et al. (2018), DM-GAN Zhu et al. (2019), DF-GAN Tao et al. (2020), DAE-GAN Ruan et al. (2021), DALLE Ramesh et al. (2021), and VQ-Diffusion Gu et al. (2022). For experiments on ImageNet, we list result comparisons with ImageBART Esser et al. (2021a), VQGAN Esser et al. (2021b), IDDPM Nichol & Dhariwal (2021), and VQ-Diffusion Gu et al. (2022). Notably, VQ-Diffusion Gu et al. (2022) also adopts the discrete diffusion generative backbone and can be considered the vanilla version without contrastive mechanisms.
Additionally, we provide more comparisons with other methods in terms of dataset, model scale, and training time in the supplement for a more comprehensive and fair understanding of our proposed method.

Results and Discussion. The quantitative results are presented in Tab. 3 and Tab. 4. We observe that our contrastive diffusion achieves state-of-the-art performance for both general synthesis fidelity and input-output correspondence, and that the Sample-Inter contrastive setting is more beneficial than Step-Intra for image synthesis. This empirical finding again validates our assumption regarding the contrastive settings in Sec. 3.3, where the Sample-Inter setting helps more with instance-level synthesis quality. Notably, as shown in Fig. 4 (right), our contrastive diffusion method converges at about 60 diffusion steps, while the vanilla version converges at approximately 100 steps on CUB200 Wah et al. (2011), which increases the inference speed by 40%.

5 CONCLUSION

While DPMs have demonstrated remarkable potential, improving their training and inference efficiency while maintaining flexible and accurate results for conditional generation is an ongoing challenge, particularly for cross-modal tasks. Our Conditional Discrete Contrastive Diffusion (CDCD) loss addresses this by maximizing the mutual information between the conditioning input and the generated output. Our contrastive diffusion mechanisms and negative sampling methods effectively incorporate this loss into DPM training. Extensive experiments on various cross-modal conditional generation tasks demonstrate the efficacy of our approach in bridging drastically differing domains.

ACKNOWLEDGMENT

This research is partially supported by NSF SCH-2123521 and Snap unrestricted gift funding. This article solely reflects the opinions and conclusions of its authors and not the funding agents.

ETHICS STATEMENT

As in other media generation works, there are possible malicious uses of such media to be addressed by oversight organizations and regulatory agencies. Our primary objective as researchers is always to create more reliable and secure AI and machine learning systems that maximally benefit our society.

A MORE RELATED WORKS

In addition to the fields of Diffusion Probabilistic Models, Contrastive Representation Learning, and VQ Representations for Conditional Generation discussed in the main paper, our work is also closely related to the multi-modal learning and generation fields. The research topic of multimodal learning, which incorporates data from various modalities such as audio, vision, and language, has attracted much attention in recent years Baltrušaitis et al. (2018); Zhu et al. (2022b); Wu et al. (2023). General audio-visual learning works typically seek to investigate audio-visual correlations arising from their intrinsic synchronization Aytar et al. (2016); Korbar et al. (2018); Owens & Efros (2018); Owens et al. (2016); Arandjelovic & Zisserman (2017), and then utilize them in various downstream audio-visual tasks such as audio-visual action recognition Kazakos et al. (2019); Gao et al. (2020), audio-visual event localization and parsing Tian et al. (2018); Zhu et al. (2021a); Wu et al. (2019); Wu & Yang (2021), and audio-visual captioning Rahman et al. (2019); Wang et al. (2018). Works that generate music from visual and/or motion data have also been widely explored in recent years Gan et al. (2020a); Di et al. (2021); Aggarwal & Parikh (2021); Zhu et al. (2022a).
In the vision-and-language area, text generation from visual data has been extensively explored in the image and video captioning tasks Zhu et al. (2020; 2021b); Anderson et al. (2018); You et al. (2016); Wang et al. (2017). At the same time, works on image/video generation from text have also attracted much attention with recently released large-scale models Radford et al. (2021); Li et al. (2019); Ruan et al. (2021); Ramesh et al. (2021).

B DETAILED PROOF AND TRAINING

B.1 LOWER BOUND OF CDCD LOSS

We show that the proposed CDCD loss has a lower bound related to the mutual information and the number of negative samples N. The derivations below are similar to those from Oord et al. (2018):

L_CDCD := E_Z[ −log( (p_θ(z_0|c)/p_θ(z_0)) / ( p_θ(z_0|c)/p_θ(z_0) + Σ_{z^j∈Z′} p_θ(z^j_0|c)/p_θ(z^j_0) ) ) ]  (6a)
= E_Z log[ 1 + (p_θ(z_0)/p_θ(z_0|c)) Σ_{z^j∈Z′} p_θ(z^j_0|c)/p_θ(z^j_0) ]  (6b)
≈ E_Z log[ 1 + N (p_θ(z_0)/p_θ(z_0|c)) E_{Z′}[ p_θ(z^j_0|c)/p_θ(z^j_0) ] ]  (6c)
= E_Z log[ 1 + N p_θ(z_0)/p_θ(z_0|c) ]  (6d)
≥ E_Z log[ N p_θ(z_0)/p_θ(z_0|c) ]  (6e)
= log(N) − I(z_0; c).  (6f)

B.2 CONVENTIONAL VARIATIONAL LOSS

The conventional variational loss L_vb is derived as follows Sohl-Dickstein et al. (2015b):

L_vb(x) := E_q[ −log( p_θ(x_{0:T}) / q(x_{1:T}|x_0) ) ]
= E_q[ −log p(x_T) − Σ_{t>1} log( p_θ(x_{t−1}|x_t) / q(x_t|x_{t−1}) ) − log( p_θ(x_0|x_1) / q(x_1|x_0) ) ]
= E_q[ −log p(x_T) − Σ_{t>1} log( p_θ(x_{t−1}|x_t) / q(x_{t−1}|x_t, x_0) · q(x_{t−1}|x_0) / q(x_t|x_0) ) − log( p_θ(x_0|x_1) / q(x_1|x_0) ) ]
= E_q[ −log( p(x_T) / q(x_T|x_0) ) − Σ_{t>1} log( p_θ(x_{t−1}|x_t) / q(x_{t−1}|x_t, x_0) ) − log p_θ(x_0|x_1) ]
= E_q[ D_KL(q(x_T|x_0) || p(x_T)) + Σ_{t>1} D_KL(q(x_{t−1}|x_t, x_0) || p_θ(x_{t−1}|x_t)) − log p_θ(x_0|x_1) ].  (7)

B.3 L_vb WITH CONDITIONING PRIOR

Following the unconditional conventional variational loss, we then show its conditional variant with the conditioning c as the prior, which has also been adopted in Gu et al. (2022):

L_vb(x, c) = L_0 + L_1 + ... + L_{T−1} + L_T,
L_0 = −log p_θ(x_0|x_1, c),
L_{t−1} = D_KL(q(x_{t−1}|x_t, x_0) || p_θ(x_{t−1}|x_t, c)),
L_T = D_KL(q(x_T|x_0) || p(x_T)).  (8)

B.4 STEP-WISE AND SAMPLE-WISE CONTRASTIVE DIFFUSION

Below, we show the full derivation for the step-wise parallel contrastive diffusion loss. Given that the intermediate variables z_{1:T} are also taken into account in this step-wise contrastive diffusion, we slightly modify the initial notation f(z_0, c) = p_θ(z_0|c)/p_θ(z_0) from Eq. (2) in the main paper to f(z, c) = p_θ(z_{0:T}|c)/p_θ(z_{0:T}):

L_{CDCD-Step} := −E_Z[ log( f(z, c) / ( f(z, c) + Σ_{z^j∈Z′} f(z^j, c) ) ) ]  (9a)
= E_Z log[ 1 + Σ_{z^j∈Z′} f(z^j, c) / f(z, c) ]  (9b)
= E_Z log[ 1 + (p_θ(z_{0:T}) / p_θ(z_{0:T}|c)) Σ_{z^j∈Z′} p_θ(z^j_{0:T}|c) / p_θ(z^j_{0:T}) ]  (9c)
≈ E_Z log[ 1 + (p_θ(z_{0:T}) / p_θ(z_{0:T}|c)) N E_{Z′}[ p_θ(z^j_{0:T}|c) / p_θ(z^j_{0:T}) ] ]  (same as Eq. (6c))  (9d)
≈ E_Z E_q log[ (q(z_{1:T}|z_0) / p_θ(z_{0:T}|c)) · N · (p_θ(z_{0:T}|c) / q(z_{1:T}|z_0)) ]  (conditional p_θ)  (9e)
≈ E_q[ −log( p_θ(z_{0:T}|c) / q(z_{1:T}|z_0) ) ] − log N · E_{Z′} E_q[ −log( p_θ(z_{0:T}|c) / q(z_{1:T}|z_0) ) ]  (9f)
= L_vb(z, c) − C Σ_{z^j∈Z′} L_vb(z^j, c).  (9g)

In Eq. (9g), C stands for a constant equal to log N, which can be further adjusted by the weight we select for the CDCD loss as in Eq. (5).

Algorithm 1 Conditional Discrete Contrastive Diffusion Training. The referenced equations can be found in the main paper.
Input: Initial network parameters θ, contrastive loss weight λ, learning rate η, number of negative samples N, total diffusion steps T, conditioning information c, contrastive mode m ∈ {Step, Sample}.
1: for each training iteration do
2:   t ∼ Uniform({1, 2, ..., T})
3:   z_t ← sample from q(z_t|z_{t−1})
4:   L_vb ← Σ_{i=1,...,t} L_i   ▷ Eq. 1
5:   if m == Step then
6:     for j = 1, ..., N do
7:       z^j_t ← sample from q(z^j_t|z^j_{t−1}, c)   ▷ from negative variables at previous steps
8:     end for
9:     L_CDCD = −(1/N) Σ L^j_vb   ▷ Eq. 3
10:  else if m == Sample then
11:    for j = 1, ..., N do
12:      z_t ← sample from q(z_t|z^j_0, c)   ▷ from negative variables at step 0
13:    end for
14:    L_CDCD = −(1/N) Σ L^j_{z_0}   ▷ Eq. 4
15:  end if
16:  L ← L_vb + λ L_CDCD   ▷ Eq. 5
17:  θ ← θ − η ∇_θ L
18: end for
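A minimal PyTorch sketch of one iteration of Algorithm 1; q_sample, vb_loss, and aux_loss are hypothetical method names standing in for the sampling and loss terms of Eqs. (1), (3), and (4).

    import torch

    def train_step(model, optimizer, z0, z0_negatives, c, T, lam, mode="Sample"):
        # One optimization step following Algorithm 1.
        t = int(torch.randint(1, T + 1, (1,)))         # t ~ Uniform({1, ..., T})
        zt = model.q_sample(z0, t)                     # corrupt z0 up to step t
        loss_vb = model.vb_loss(z0, c, t)              # variational terms of Eq. (1)
        if mode == "Sample":                           # -log p_theta(z0^j | z_t, c), Eq. (4)
            negs = [model.aux_loss(zj, zt, c) for zj in z0_negatives]
        else:                                          # "Step": one parallel chain per negative, Eq. (3)
            negs = [model.vb_loss(zj, c, t) for zj in z0_negatives]
        loss_cdcd = -torch.stack(negs).mean()          # push the negatives' bounds up
        loss = loss_vb + lam * loss_cdcd
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return float(loss)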
Similarly, for the sample-wise auxiliary contrastive diffusion, the loss can be derived as follows:

L_{CDCD-Sample} := −E_Z[ log( f(z_0, c) / ( f(z_0, c) + Σ_{z^j∈Z′} f(z^j_0, c) ) ) ]  (10a)
= E_Z log[ 1 + (p_θ(z_0) / p_θ(z_0|c)) N E_{Z′}[ p_θ(z^j_0|c) / p_θ(z^j_0) ] ]  (10b)
≈ E_Z E_q log[ (q(z_{1:T}|z_0) / p_θ(z_0|c)) · N · (p_θ(z_0|c) / q(z_{1:T}|z_0)) ]  (10c)
≈ E_q[ −log( p_θ(z_0|c) / q(z_{1:T}|z_0) ) ] − N E_{Z′} E_q[ −log( p_θ(z_0|c) / q(z_{1:T}|z_0) ) ]  (10d)
= E_q[ −log p_θ(z_0|z_t, c) ] − C Σ_{z^j∈Z′} E_q[ −log p_θ(z^j_0|z_t, c) ].  (10e)

Note that, from a high-level perspective, our contrastive idea covers two different concepts, while conventional contrastive learning usually focuses only on negative samples. In our case, due to the unique formulation of diffusion models, which brings the diffusion steps into the methodology design, we consider the contrast in the context of both "negative samples" and "negative steps" (the latter corresponding to the negative intermediate steps). In the derivation above, we use the symbols Z and q to distinguish between these two concepts.

B.5 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION TRAINING

The training process for the proposed contrastive diffusion is explained in Algo. 1.

C ADDITIONAL EXPERIMENTAL DETAILS AND ANALYSIS

C.1 DANCE-TO-MUSIC TASK

Implementation. The sampling rate for all audio signals is 22.05 kHz in our experiments. We use 2-second music samples as in Zhu et al. (2022a) for our main experiments, resulting in 44,100 audio data points for each raw music sequence. For the Music VQ-VAE, we fine-tuned Jukebox Dhariwal et al. (2020) on our data to leverage its pre-learned codebook from a large-scale music dataset (approximately 1.2 million songs). The codebook size K is 2048, with a token dimension d_z = 128, and the hop length L is 128 in our default experimental setting. For the motion module, we deploy a backbone stacked with convolutional layers and residual blocks. The dimension of the embedding we use for music conditioning is 1024. For the visual module, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinetics Kay et al. (2017) as the visual conditioning information, with a dimension of 2048. In the implementation of our contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network p_θ. It includes 19 transformer blocks, each consisting of full-attention, cross-attention, and feed-forward modules, with a channel size of 1024 per block. We set the initial weight for the contrastive loss to λ = 5e-5. The numbers of intra- and inter-negative samples for each GT music sample are both 10. The AdamW Loshchilov & Hutter (2017) optimizer with β1 = 0.9 and β2 = 0.96 is deployed in our training, with a learning rate of 4.5e-4. We also employ an adaptive weight for the denoising loss, gradually decreasing it as the diffusion step increases and approaches the end of the chain. The visual module, motion module, and contrastive diffusion model are jointly optimized. The architecture of the adopted motion encoder is shown in Tab. 5, which is the same as in Zhu et al. (2022a).
Beyond the aforementioned implementation details, we also include a mask token technique for our dance-to-music generation task, which bears resemblance to techniques used in language modelling Devlin et al. (2018) and text-to-image synthesis Gu et al. (2022). We adopt a truncation rate of 0.86 in our inference.

MOS Evaluation Test. A total of 32 participants (11 female, 21 male) took part in our subjective Mean Opinion Score (MOS) music evaluations Zhu et al. (2022a); Kumar et al. (2019). For the dance-music coherence test, we fuse the generated music samples with the GT videos as post-processing. We then asked each evaluator to rate 20 generated videos with a score from 1 (least coherent) to 5 (most coherent) after watching the processed video clip. Specifically, the participants are asked to pay more attention to the dance-music coherence, in terms of the dance moves corresponding to the music genre and rhythm, rather than the overall music quality, with reference to the GT video clips with the original music. For the overall quality evaluations, we only play the audio tracks, without the video frames, to each evaluator. As before, they are asked to rate the overall music quality with a score from 1 (worst audio quality) to 5 (best audio quality).

Training Cost. For the dance-to-music experiments on the AIST++ dataset, we use 4 NVIDIA RTX A5000 GPUs and train the model for approximately 2 days. For the same task on the TikTok Dance-Music dataset, training takes approximately 1.5 days on the same hardware.

Complete Results for Contrastive Settings. As discussed in the main paper, there are four possible combinations of contrastive settings given the different contrastive diffusion mechanisms and negative sampling methods. Here, we include complete quantitative scores for the different contrastive settings in Tab. 6. We observe that all four contrastive settings, including the Step-Inter and Sample-Intra settings not reported in our main paper, help to improve performance. As noted, among all the settings, Step-Intra and Sample-Inter are more reasonable and yield larger improvements for intra-sample data attributes (i.e., beats scores) and instance-level features (i.e., genre accuracy scores), respectively.

Ablation on Music Length. Although we use 2-second musical sequences in the main experiments for consistent and fair comparisons with Zhu et al. (2022a), our framework can also synthesize longer musical sequences. In the supplementary material, we show generated music sequences of 6 seconds. The quantitative evaluations for different musical sequence lengths are presented in Tab. 7, where we show better performance when synthesizing longer musical sequences.
We keep the same truncation rate of 0.86 as in our dance-to-music experiments and in Gu et al. (2022). Unlike in the dance-to-music experiments, where we jointly learn the conditioning encoders, both the VQ-GAN and CLIP models are fixed during the contrastive diffusion training.

Training Cost. For the text-to-image experiments on the CUB200 dataset, training takes approximately 5 days using 4 NVIDIA RTX A5000 GPUs. For the same experiments on the MSCOCO dataset, we run on Amazon Web Services (AWS) using 8 NVIDIA Tesla V100 GPUs; this task required 10 days of training.

C.3 CLASS-CONDITIONED IMAGE SYNTHESIS TASK

Implementation. For class-conditioned image synthesis, we also adopt the pre-trained VQ-GAN Esser et al. (2021b) as the discrete encoder and decoder. We replace the conditioning encoder with a class embedding optimized during the contrastive diffusion training. The size of the conditional embedding is 512. Other parameters and techniques remain the same as in the text-to-image task.

Training Cost. For the class-conditioned experiments on ImageNet, we use 8 NVIDIA Tesla V100 GPUs running on AWS. This task required 20 days of training.

D MORE QUALITATIVE RESULTS

D.1 GENERATED MUSIC SAMPLES

For qualitative samples of synthesized dance music sequences, please refer to the anonymous page with music samples in the supplement. In addition to the generated music samples on AIST++ Tsuchida et al. (2019); Li et al. (2021) and the TikTok Dance-Music dataset Zhu et al. (2022a), we also include some qualitative samples obtained with music editing operations based on the dance-music genre annotations from AIST++. Specifically, we edit the original paired motion conditioning input to a different dance-music genre using a different dance choreography.

Discussion on Musical Representations and Audio Quality. It is worth noting that we only compare the overall audio quality with that of D2M-GAN Zhu et al. (2022a). This is due to the nature of the different musical representations in the literature on deep-learning-based music generation Gan et al. (2020a); Dong et al. (2018); Huang et al. (2019); Gan et al. (2020b); Aggarwal & Parikh (2021). There are mainly two categories of musical representations in previous works: pre-defined symbolic and learning-based representations Ji et al. (2020); Briot et al. (2020). For the former, typical options include 1D piano-roll and 2D MIDI-based representations. While these works benefit from pre-defined music synthesizers and produce music that does not include raw audio noise, their main limitation is that such representations are usually limited to a single specific instrument, which hinders their flexibility in wider and more complex scenarios such as dance videos. In contrast, the learning-based music representations (i.e., the musical VQ in our case) rely on well-trained music synthesizers as decoders, but can be used as a unified representation for various musical sounds, e.g., instruments or voices. However, training such music encoders and decoders for high-quality audio signals itself remains a challenging problem. Specifically, high-quality audio is a form of high-dimensional data with an extremely large sampling rate, even compared to high-resolution images. For example, the sampling rate for CD-quality audio signals is 44.1 kHz, resulting in 2,646,000 data points for a one-minute musical piece. To this end, existing deep-learning-based works Dhariwal et al.
(2020); Kumar et al. (2019) for music generation employ methods to reduce the number of dimensions, e.g., by introducing hop lengths and a smaller sampling rate. These operations help to make music learning and generation more computationally tractable, but also introduce additional noise into the synthesized audio signals. In this work, we adopt the pre-trained Jukebox model Dhariwal et al. (2020) as our music encoder and decoder for the musical VQ representation. The adopted model has a hop length of 128, which corresponds to the top-level model from the original work Dhariwal et al. (2020). Jukebox employs 3 models: top-, middle-, and bottom-level, with both audio quality and required computation increasing from the first to the last. As an example, in the supplemental HTML page, we provide music samples directly reconstructed from Jukebox using the top-level model we employ in our work, compared to the ground-truth audio. While the bottom-level model (with a hop length of 8) allows for high-quality audio reconstruction, it requires much more time and computation, not only for training but also for the final inference, e.g., 3 hours to generate a 20-second musical sequence. As the synthesized music from the top-level model includes some audible noise, we apply a noise reduction operation Sainburg et al. (2020). However, the overall audio quality is not a primary factor that we specifically address in this work on cross-modal conditioning and generation, as it largely depends on the specific music encoder and decoder employed. This explains why we report similar MOS scores in terms of general audio quality.

D.2 SYNTHESIZED IMAGES

We present more qualitative examples for text-to-image synthesis and class-conditioned image synthesis in Fig. 5, Fig. 6, and Fig. 7.

E FURTHER DISCUSSION ON THE CDCD LOSS

In this section, we provide further discussion of the proposed CDCD loss from various aspects, including its relevance to existing auxiliary losses, the impact of the CDCD strength, and additional experimental results.

E.1 CDCD AS AUXILIARY LOSS

While diffusion models are typically trained and optimized with the conventional variational lower bound loss L_vb, as described in the main paper and Appendix B.2, several different types of auxiliary losses have been proposed to further regularize and improve the learning of diffusion models. Specifically, Dhariwal & Nichol (2021) introduces the idea of classifier-based guidance for denoising diffusion probabilistic models with continuous state space. Classifier-free guidance is proposed in Ho & Salimans (2022). For discrete diffusion formulations Austin et al. (2021); Gu et al. (2022), an auxiliary loss that encourages the model to predict the noiseless token at an arbitrary step is adopted and shown to improve synthesis quality. Similar to these cases, we consider the proposed CDCD loss a type of auxiliary loss, which seeks to provide additional guidance to better learn the conditional distribution p(x|c). In particular, classifier-free guidance Ho & Salimans (2022) proposes to randomly discard the conditioning while learning a conditional diffusion generative model, which bears resemblance to our downsampled contrastive steps introduced in Appendix E.3.

E.2 IMPACT OF CDCD STRENGTH

We further show ablation studies on the parameter λ, the weight of our proposed CDCD loss, which characterizes the strength of this contrastive regularizer.
We conduct the dance-to-music generation experiments with different values of λ and show the results in Tab. 9. As the table shows, performance in terms of the beat scores is relatively robust for λ values ranging from 4e-5 to 5e-5. At the same time, we empirically observe that with a larger value of λ, the model converges faster, in fewer training epochs. In the case of the image synthesis task, we are rather cautious about the strength of the imposed contrastive regularizer. Intuitively, the proposed CDCD loss encourages the model to learn a slightly different distribution for negative samples, which could trade off against the distribution of the actual data given a specific conditioning. Therefore, while a larger value of λ helps with learning speed, we empirically set λ to 5e-5. Note that this value is adapted from the weight for other auxiliary losses in previous works Gu et al. (2022).

E.3 DOWNSAMPLED CONTRASTIVE STEPS

While we show the performance of the complete step-wise contrastive diffusion in the main paper, we discuss here an alternative way to implement the proposed method at lower computational cost, by downsampling the contrastive steps in the diffusion process. Specifically, we randomly downsample the steps at which the proposed CDCD loss is applied, which shares a similar spirit with classifier-free guidance Ho & Salimans (2022), where the conditioning is randomly dropped out. The experimental results are listed in Tab. 10, where there is little performance drop with downsampled contrastive steps.
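As an illustration of this step downsampling, below is a minimal sketch in which the contrastive term is simply dropped at random training steps; the drop probability is illustrative only.

    import random

    def cdcd_weight(lam: float = 5e-5, keep_prob: float = 0.5) -> float:
        # Apply the contrastive term only at a random subset of training steps,
        # in the spirit of randomly discarding conditioning in classifier-free guidance.
        return lam if random.random() < keep_prob else 0.0

    # Inside the training loop: loss = loss_vb + cdcd_weight() * loss_cdcd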
1. What is the focus and contribution of the paper on cross-modal music and image generation?
2. What are the strengths of the proposed approach, particularly in combining diffusion training and contrastive learning?
3. What are the weaknesses of the paper regarding the synthesis quality and comparison with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper proposes a so-called Discrete Contrastive Diffusion model for cross-modal music and image generation. The main idea is to explicitly enhance input-output connections by maximizing their mutual information instead of implicitly learning such relationships. Specifically, the authors try to combine diffusion training and contrastive learning via the conventional variational objectives. Experiments on three different tasks (i.e., dance-to-music generation, text-to-image synthesis, and class-conditioned image synthesis) verify the effectiveness of the proposed method by showing higher synthesis quality and faster inference speed compared to other existing diffusion models.

Strengths And Weaknesses
Strengths:
- To the best of my knowledge, this paper is the first to combine diffusion training and contrastive learning to design an effective generative model. The authors propose the CDCD loss and design two contrastive diffusion mechanisms to achieve this goal. The enhanced diffusion model is designed in a reasonable manner.
- The paper is well-written and well-organized. The proposed method is clearly described with all necessary analyses and mathematical details.
- Experiments have been conducted to demonstrate the effectiveness of the proposed method by showing higher synthesis quality and faster inference speed compared to other existing diffusion models.

Weaknesses:
- By zooming in on the synthesized images shown in Fig. 6 and Fig. 7, it seems to me that they are not as good as some SOTA results obtained by DALLE-V2, Imagen, etc. The authors need to explain the reasons and show/compare more qualitative results on the tasks of text-to-image synthesis and class-conditioned image synthesis in their supplemental materials.

Clarity, Quality, Novelty And Reproducibility
Good.
ICLR
Title Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation Abstract Diffusion probabilistic models (DPMs) have become a popular approach to conditional generation, due to their promising results and support for cross-modal synthesis. A key desideratum in conditional synthesis is to achieve high correspondence between the conditioning input and generated output. Most existing methods learn such relationships implicitly, by incorporating the prior into the variational lower bound. In this work, we take a different route—we explicitly enhance input-output connections by maximizing their mutual information. To this end, we introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss and design two contrastive diffusion mechanisms to effectively incorporate it into the denoising process, combining the diffusion training and contrastive learning for the first time by connecting it with the conventional variational objectives. We demonstrate the efficacy of our approach in evaluations with diverse multimodal conditional synthesis tasks: dance-to-music generation, text-to-image synthesis, as well as class-conditioned image synthesis. On each, we enhance the inputoutput correspondence and achieve higher or competitive general synthesis quality. Furthermore, the proposed approach improves the convergence of diffusion models, reducing the number of required diffusion steps by more than 35% on two benchmarks, significantly increasing the inference speed. 1 INTRODUCTION Generative tasks that seek to synthesize data in different modalities, such as audio and images, have attracted much attention. The recently explored diffusion probabilistic models (DPMs) Sohl-Dickstein et al. (2015b) have served as a powerful generative backbone that achieves promising results in both unconditional and conditional generation Kong et al. (2020); Mittal et al. (2021); Lee & Han (2021); Ho et al. (2020); Nichol & Dhariwal (2021); Dhariwal & Nichol (2021); Ho et al. (2022); Hu et al. (2021). Compared to the unconditional case, conditional generation is usually applied in more concrete and practical cross-modality scenarios, e.g., video-based music generation Di et al. (2021); Zhu et al. (2022a); Gan et al. (2020a) and text-based image generation Gu et al. (2022); Ramesh et al. (2021); Li et al. (2019); Ruan et al. (2021). Most existing DPM-based conditional synthesis works Gu et al. (2022); Dhariwal & Nichol (2021) learn the connection between the conditioning and the generated data implicitly by adding a prior to the variational lower bound Sohl-Dickstein et al. (2015b). While such approaches still feature high generation fidelity, the correspondence between the conditioning and the synthesized data can sometimes get lost, as illustrated in the right column in Fig. 1. To this end, we aim to explicitly enhance the input-output faithfulness via their maximized mutual information under the diffusion generative framework for conditional settings in this paper. Examples of our synthesized music audio and image results are given in Fig. 1. Contrastive methods Oord et al. (2018); Bachman et al. (2019); Song & Ermon (2020a) have been proven to be very powerful for data representation learning. Their high-level idea aims to learn the representation z of raw data x based on the assumption that a properly encoded z benefits the ability of a generative model p to reconstruct the raw data given z as prior. This idea can be achieved via optimization of the density ratio p(x|z)p(x) Oord et al. 
(2018) as an entirety, without explicitly modeling the actual generative model p. While the direct optimization of mutual information via generative models p is a challenging problem to implement and train Song & Ermon (2020b); Belghazi et al. (2018) in the conventional contrastive representation learning field, we show that this can be effectively done within our proposed contrastive diffusion framework. Specifically, we reformulate the optimization problem for the desired conditional generative tasks via DPMs by analogy to the above embedding z and raw data x with our conditioning input and synthesized output. We introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss, and design two contrastive diffusion mechanisms - step-wise parallel diffusion that invokes multiple parallel diffusion processes during contrastive learning, and sample-wise auxiliary diffusion, which maintains one principal diffusion process, to effectively incorporate the CDCD loss into the denoising process. We demonstrate that with the proposed contrastive diffusion method, we can not only effectively train so as to maximize the desired mutual information by connecting the CDCD loss with the conventional variational objective function, but also to directly optimize the generative network p. The optimized CDCD loss further encourages faster convergence of a DPM model with fewer diffusion steps. We additionally present our intra- and inter-negative sampling methods by providing internally disordered and instance-level negative samples, respectively. To better illustrate the input-output connections, we conduct main experiments on the novel crossmodal dance-to-music generation task Zhu et al. (2022a), which aims to generate music audio based on silent dance videos. Compared to other tasks such as text-to-image synthesis, dance-to-music generation explicitly evaluates the input-output correspondence in terms of various cross-modal alignment features such as dance-music beats, genre and general quality. However, various generative settings, frameworks, and applications can also benefit from our contrastive diffusion approach, e.g., joint or separate training of conditioning encoders, continuous or discrete conditioning inputs, and diverse input-output modalities as detailed in Sec. 4. Overall, we achieve results superior or comparable to state-of-the-art on three conditional synthesis tasks: dance-to-music (datasets: AIST++ Tsuchida et al. (2019); Li et al. (2021), TikTok Dance-Music Zhu et al. (2022a)), text-toimage (datasets: CUB200 Wah et al. (2011), MSCOCO Lin et al. (2014)) and class-conditioned image synthesis (dataset: ImageNet Russakovsky et al. (2015)). Our experimental findings suggest three key take-away: 1 Improving the input-output connections via maximized mutual information is indeed beneficial for their correspondence and the general fidelity of the results (see Fig. 1 and supplement). 2 Both our proposed step-wise parallel diffusion with intra-negative samples and sample-wise auxiliary diffusion with inter-negative samples show state-of-the-art scores in our evaluations. The former is more beneficial for capturing the intra-sample correlations, e.g., musical rhythms, while the latter improves the instance-level performance, e.g., music genre and image class. 
3 With maximized mutual information, our conditional contrastive diffusion converge in substantially fewer diffusion steps compared to vanilla DPMs, while maintaining the same or even superior performance (approximately 35% fewer steps for dance-to-music generation and 40% fewer for text-to-image synthesis), thus significantly increasing inference speed. 2 BACKGROUND Diffusion Probabilistic Models. DPMs Sohl-Dickstein et al. (2015b) are a class of generative models that learn to convert a simple Gaussian distribution into a data distribution. This process consists of a forward diffusion process and a reverse denoising process, each consisting of a sequence of T steps that act as a Markov chain. During forward diffusion, an input data sample x0 is gradually “corrupted” at each step t by adding Gaussian noise to the output of step t− 1. The reverse denoising process, seeks to convert the noisy latent variable xT into the original data sample x0 by removing the noise added during diffusion. The stationary distribution for the final latent variable xT is typically assumed to be a normal distribution, p(xT ) = N (xT |0, I). An extension of this approach replaces the continuous state with a discrete one Sohl-Dickstein et al. (2015a); Hoogeboom et al. (2021); Austin et al. (2021), in which the latent variables x1:T typically take the form of one-hot vectors with K categories. The diffusion process can then be parameterized using a multinomial categorical transition matrix defined as q(xt|xt−1) = Cat(xt; p = xt−1Qt), where [Qt]ij = q(xt = j|xt−1 = i). The reverse process pθ(xt|xt−1) can also be factorized as conditionally independent over the discrete sequences Austin et al. (2021). In both the continuous and discrete state formulations of DPMs Song & Ermon (2020c); Song et al. (2020b); Kingma et al. (2021); Song et al. (2021); Huang et al. (2021); Vahdat et al. (2021), the denoising process pθ can be optimized by the KL divergence between q and pθ in closed forms Song et al. (2020a); Nichol & Dhariwal (2021); Ho et al. (2020); Hoogeboom et al. (2021); Austin et al. (2021) via the variational bound on the negative log-likelihood: Lvb = Eq[DKL(q(xT |x0)||p(xT ))︸ ︷︷ ︸ LT + ∑ t>1 DKL(q(xt−1|xt, x0)||pθ(xt−1|xt))︸ ︷︷ ︸ Lt−1 − log pθ(x0|x1)︸ ︷︷ ︸ L0 ]. (1) Existing conditional generation works via DPMs Gu et al. (2022); Dhariwal & Nichol (2021) usually learn the implicit relationship between the conditioning c and the synthesized data x0 by directly adding the c as the prior in (1). DPMs with discrete state space provide more controls on the data corruption and denoising compared to its continuous counterpart Austin et al. (2021); Gu et al. (2022) by the flexible designs of transition matrix, which benefits for practical downstream operations such as editing and interactive synthesis Tseng et al. (2020); Cui et al. (2021); Xu et al. (2021). We hence employ contrastive diffusion using a discrete state space in this work. Contrastive Representation Learning. Contrastive learning uses loss functions designed to make neural networks learn to understand and represent the specific similarities and differences between elements in the training data without labels explicitly defining such features, with positive and negative pairs of data points, respectively. This approach has been successfully applied in learning representations of high-dimensional data Oord et al. (2018); Bachman et al. (2019); He et al. (2020); Song & Ermon (2020a); Chen et al. (2020); Lin et al. (2021). 
Many such works seek to maximize the mutual information between the original data x and its learned representation z under the framework of likelihood-free inference Oord et al. (2018); Song & Ermon (2020a); Wu et al. (2021). The above problem can be formulated as maximizing a density ratio p(x|z)/p(x) that preserves the mutual information between the raw data x and the learned representation z. To achieve this, existing contrastive methods Oord et al. (2018); Durkan et al. (2020); He et al. (2020); Zhang et al. (2021) typically adopt a neural network to directly model the ratio as an entirety and avoid explicitly considering the actual generative model p(x|z), which has proven to be a more challenging problem Song & Ermon (2020b); Belghazi et al. (2018). In contrast, we show that by formulating the conventional contrastive representation learning problem in the generative setting, the properties of DPMs enable us to directly optimize the model p, which can be interpreted as the optimal version of the density ratio Oord et al. (2018).

Vector-Quantized Representations for Conditional Generation. Vector quantization is a classical technique in which a high-dimensional space is represented using a finite set of vectors. More recently, Vector-Quantized (VQ) deep learning models employ this technique to allow for compact and discrete representations of music and image data Oord et al. (2017); Razavi et al. (2019); Esser et al. (2021b); Dhariwal et al. (2020); Chen et al. (2022). Typically, VQ-based models use an encoder-codebook-decoder framework, where the "codebook" contains a fixed number of vectors (entries) to represent the original high-dimensional raw data. The encoder transforms the input x into feature embeddings that are each mapped to the closest corresponding vector in the codebook (a minimal code sketch is given below), while the decoder uses the set of quantized vectors z to reconstruct the input data, producing x′ as illustrated in the upper part of Fig. 2. In this work, we perform the conditional diffusion process in the VQ space (i.e., on discrete token sequences) as shown in the bottom part of Fig. 2, which greatly reduces the dimensionality of the raw data, thus avoiding expensive raw-data decoding and synthesis. As our approach is flexible enough to be employed with various input and output modalities, the exact underlying VQ model we use depends on the target data domain. For music synthesis, we employ a fine-tuned Jukebox Dhariwal et al. (2020) model, while for image generation, we employ VQ-GAN Esser et al. (2021b). See Sec. 4 for further details. We refer to z, the latent quantized representation of x, as z0 below to distinguish it from the latent representations at prior stages in the denoising process.

3 METHOD

Here we outline our approach to cross-modal and conditional generation using our proposed discrete contrastive diffusion approach, which is depicted in Fig. 2. In Sec. 3.1, we formulate our Conditional Discrete Contrastive Diffusion loss in detail, and demonstrate how it helps to maximize the mutual information between the conditioning and the generated discrete data representations. Sec. 3.2 defines two specific mechanisms, step-wise and sample-wise, for applying this loss within a diffusion model training framework. In Sec. 3.3, we detail techniques for constructing negative samples designed to improve the overall quality and coherence of the generated sequences.
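As a concrete companion to the encoder-codebook-decoder description in Sec. 2 above, here is a minimal sketch of the nearest-neighbor codebook lookup; the codebook size and token dimension mirror the Jukebox setting reported later (K = 2048, d_z = 128), and all function names are our own, not the paper's code.

```python
import torch

def vector_quantize(z_e: torch.Tensor, codebook: torch.Tensor):
    # z_e: (n, d) continuous encoder outputs; codebook: (K, d) entries.
    # Each embedding is mapped to the index of its nearest codebook
    # vector, yielding the discrete token sequence z0 on which the
    # diffusion process operates.
    dists = torch.cdist(z_e, codebook)     # (n, K) pairwise distances
    tokens = dists.argmin(dim=1)           # discrete indices
    return tokens, codebook[tokens]        # tokens and quantized vectors

codebook = torch.randn(2048, 128)          # K = 2048, d_z = 128
z_e = torch.randn(10, 128)
tokens, z_q = vector_quantize(z_e, codebook)
```

The decoder side (reconstructing x′ from z_q) and the codebook training losses are omitted here, since the paper uses pre-trained VQ models (Jukebox and VQ-GAN).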
Given the data pair (c, x), where c is the conditioning information from a given input modality (e.g., videos, text, or a class label), our objective is to generate a data sample x in the target modality (e.g., music audio or images) corresponding to c. In the training stage, we first employ and train a VQ-based model to obtain the discrete representation z0 of the data x from the target modality. Next, our diffusion process operates on the encoded latent representation z0 of x. The denoising process recovers the latent representation z0 given the conditioning c, which can then be decoded to obtain the reconstruction x′. In inference, we generate z0 based on the conditioning c, and decode the latent VQ representation z0 back to the raw data domain using the pre-trained and fixed VQ decoder.

3.1 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION LOSS

We seek to enhance the connection between c and the generated data z0 by maximizing their mutual information, defined as

I(z_0; c) = \sum_{z_0} p_\theta(z_0, c) \log \frac{p_\theta(z_0|c)}{p_\theta(z_0)}.

We introduce a set of negative VQ sequences Z' = \{z^1, z^2, \dots, z^N\}, encoded from N negative samples X' = \{x^1, x^2, \dots, x^N\}, and define f(z_0, c) = \frac{p_\theta(z_0|c)}{p_\theta(z_0)}. Our proposed Conditional Discrete Contrastive Diffusion (CDCD) loss is:

L_{CDCD} := -\mathbb{E}\Big[\log \frac{f(z_0, c)}{f(z_0, c) + \sum_{z^j \in Z'} f(z^j_0, c)}\Big]. \quad (2)

The proposed CDCD loss is similar to the categorical cross-entropy loss for classifying the positive sample as in Oord et al. (2018), where our conditioning c and generated data z0 correspond to the original learned representation and raw data, and optimization of this loss leads to the maximization of I(z_0; c). However, the loss in Oord et al. (2018) models the density ratio f(z_0, c) as an entirety. In our case, we demonstrate that the properties of DPMs Sohl-Dickstein et al. (2015b); Ho et al. (2020); Austin et al. (2021) enable us to directly optimize the actual distribution pθ within the diffusion process for the desired conditional generation tasks. Specifically, we show the connections between the proposed CDCD loss and the conventional variational loss L_{vb} (see (1)) in Sec. 3.2, and thus how it contributes to efficient DPM learning. Additionally, we can derive a lower bound on the mutual information as I(z_0; c) ≥ \log(N) − L_{CDCD} (see supplement for details), which indicates that a larger number of negative samples increases the lower bound. These two factors allow for faster convergence of a DPM with fewer diffusion steps.

3.2 PARALLEL AND AUXILIARY DIFFUSION PROCESS

The CDCD loss in (2) considers the mutual information between c and z0 in a general way, without specifying the intermediate diffusion steps. We propose and analyze two contrastive diffusion mechanisms to efficiently incorporate this loss into DPM learning, and demonstrate that we can directly optimize the generative model pθ in the diffusion process. We present the step-wise parallel diffusion and the sample-wise auxiliary diffusion mechanisms, which are distinguished by the specific operations applied to the intermediate negative latent variables z^j_{1:T} for each negative sample x^j. The high-level intuition behind the parallel and auxiliary designs is to emphasize different attributes of the synthesized data for specific applications. In particular, we propose the parallel variant to learn the internal coherence of sequential audio data by emphasizing the gradual change at each time step, while the auxiliary mechanism focuses more on the sample-level connections to the conditioning.
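Before detailing the two mechanisms, here is a minimal sketch of the generic CDCD loss in (2): given log density ratios log f for the positive pair and the N negatives, (2) reduces to an (N+1)-way cross-entropy with the positive at index 0. This is our own illustration; how the log-ratios are obtained depends on the mechanism described next.

```python
import torch
import torch.nn.functional as F

def cdcd_loss(log_f_pos: torch.Tensor, log_f_neg: torch.Tensor) -> torch.Tensor:
    # log_f_pos: (B,) values of log f(z0, c) for the positive pairs.
    # log_f_neg: (B, N) values of log f(z0^j, c) for the N negatives.
    # Eq. (2): -log( f_pos / (f_pos + sum_j f_neg_j) ), i.e., a
    # cross-entropy classifying the positive among N + 1 candidates.
    logits = torch.cat([log_f_pos.unsqueeze(1), log_f_neg], dim=1)
    target = torch.zeros(logits.shape[0], dtype=torch.long)
    return F.cross_entropy(logits, target)
```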
Step-Wise Parallel Diffusion. This mechanism not only focuses on the mutual information between c and z0, but also takes the intermediate negative latent variables z^j_{1:T} into account by explicitly invoking the complete diffusion process for each negative sample z^j ∈ Z′. As illustrated in Fig. 2 (bottom left), we initiate N + 1 parallel diffusion processes, among which N are invoked by negative samples. For each negative sample x^j ∈ X′, we explicitly compute its negative latent discrete variables z^j_{0:T}. In this case, (2) becomes (see supplement for the detailed derivation):

L_{CDCD-Step} := \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_{0:T})}{p_\theta(z_{0:T}|c)}\, N\, \mathbb{E}_{Z'}\Big[\frac{p_\theta(z^j_{0:T}|c)}{p_\theta(z^j_{0:T})}\Big]\Big] \equiv L_{vb}(z, c) - C \sum_{z^j \in Z'} L_{vb}(z^j, c). \quad (3)

The equation above factorizes the proposed CDCD loss under the step-wise parallel diffusion mechanism into two terms, where the first term corresponds to the original variational bound L_{vb}, and the second term can be interpreted as the negative sum of the variational bounds induced by the negative samples and the provided conditioning c. C is a constant, as detailed in our supplement.

Sample-Wise Auxiliary Diffusion. Alternatively, our sample-wise auxiliary diffusion mechanism maintains one principal diffusion process, as in traditional diffusion training, shown in Fig. 2 (bottom right). It contrasts the intermediate positive latent variables z_{1:T} with the negative samples z^j_0 ∈ Z′. In this case, we can write the CDCD loss from (2) as (see supplement for details):

L_{CDCD-Sample} := \mathbb{E}_q[-\log p_\theta(z_0|z_t, c)] - C \sum_{z^j \in Z'} \mathbb{E}_q[-\log p_\theta(z^j_0|z_t, c)]. \quad (4)

As with the step-wise loss, the CDCD-Sample loss includes two terms. The first refers to sampling directly from the positive z0 at an arbitrary timestep t. The second sums the same auxiliary loss over the negative samples z^j_0. This marginalization operation is based on the Markov chain property, as in previous discrete DPMs Austin et al. (2021); Gu et al. (2022), and imposes direct supervision from the sample data. The first term is similar to the auxiliary denoising objective in Austin et al. (2021); Gu et al. (2022). Both contrastive diffusion mechanisms enable us to effectively incorporate the CDCD loss into our DPM learning process by directly optimizing the actual denoising generative network pθ.

Final Loss Function. The final loss function for our contrastive diffusion training process is:

L = L_{vb}(z, c) + \lambda L_{CDCD}, \quad (5)

where L_{vb} depends on the conditioning c and takes the form L_{t-1} = D_{KL}(q(z_{t-1}|z_t, z_0)\,\|\,p_\theta(z_{t-1}|z_t, c)) as in Gu et al. (2022), with c included as the prior for all intermediate steps. L_{CDCD} refers to either the step-wise parallel diffusion or the sample-wise auxiliary diffusion loss. Empirically, we can either omit the first term in (3), since (5) already contains L_{vb}, or directly optimize L_{CDCD-Step}, in which the standard L_{vb} is already included. The detailed training algorithm is explained in the supplement.

3.3 INTRA- AND INTER-NEGATIVE SAMPLING

Previous contrastive works construct negative samples using techniques such as image augmentation Chen et al. (2020); He et al. (2020) or spatially adjacent image patches Oord et al. (2018). In this work, we categorize our sampling methods into intra- and inter-negative sampling, as in Fig. 3. For intra-sample negative sampling, we construct X′ based on the given original x. This bears resemblance to the patch-based technique in the image domain Oord et al. (2018). For audio data, we first divide the original audio waveform into multiple chunks and randomly shuffle their ordering, as sketched in code below.
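The chunk-shuffling construction just described can be captured in a few lines; this is a schematic sketch of our own, where the chunk count is an assumed hyper-parameter, not a value from the paper.

```python
import torch

def intra_negative(waveform: torch.Tensor, n_chunks: int = 8) -> torch.Tensor:
    # Intra-sample negative: split the waveform into chunks and shuffle
    # their order, producing an internally disordered version of x.
    chunks = list(torch.chunk(waveform, n_chunks))
    perm = torch.randperm(len(chunks))
    return torch.cat([chunks[i] for i in perm])

x = torch.randn(44100)        # a 2-second clip has 44,100 samples (Sec. 4.1)
x_neg = intra_negative(x)     # same content, scrambled chunk ordering
```

Inter-sample negatives, described next, are instead drawn as whole instances that differ from the positive pair.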
For inter-sample negative sampling, X′ consists of instance-level negative samples x′ that differ from the given data pair (c, x). In practice, we define negative samples x′ to be music sequences whose musical genres differ from that of x in the music generation task, while in the image synthesis task x′ denotes images other than x (specifically, we choose x′ with class labels different from that of x). Based on our proposed contrastive diffusion mechanisms and negative sampling methods, there are four possible contrastive settings: step-wise parallel diffusion with either intra- or inter-negative sampling (denoted Step-Intra and Step-Inter), or sample-wise auxiliary diffusion with either intra- or inter-negative sampling (denoted Sample-Intra and Sample-Inter). Intuitively, we argue that the Step-Intra and Sample-Inter settings are more reasonable than Step-Inter and Sample-Intra because the diffusion data corruption process is consistent with the way the negative samples are constructed. Specifically, the data corruption process in discrete DPMs samples and replaces certain tokens with random or mask tokens at each diffusion step Austin et al. (2021); Gu et al. (2022), which is a chunk-level operation within a given data sequence, similar to the way we construct intra-negative samples by shuffling chunk ordering. In contrast, the sample-wise auxiliary diffusion seeks to provide sample-level supervision, which is consistent with our inter-negative sampling method. In the interest of clarity and concision, we only present the experimental results for the Step-Intra and Sample-Inter settings in Sec. 4 of our main paper. The complete results obtained with the other contrastive settings and more detailed analysis are included in the supplement.

4 EXPERIMENTS

We conduct experiments on three conditional generation tasks: dance-to-music generation, text-to-image synthesis, and class-conditioned image synthesis. For the dance-to-music task, we seek to generate audio waveforms of complex music from human motion and dance video frames. For the text-to-image task, the objective is to generate images from given textual descriptions. Given our emphasis on input-output faithfulness for cross-modal generation, the main analyses are based on the dance-to-music generation task, since the evaluation protocol from Zhu et al. (2022a) explicitly measures such connections in terms of beats, genre, and general correspondence for the generated music.

4.1 DANCE-TO-MUSIC GENERATION

Dataset. We use the AIST++ Li et al. (2021) dataset and the TikTok Dance-Music dataset Zhu et al. (2022a) for the dance-to-music experiments. AIST++ is a subset of the AIST dataset Tsuchida et al. (2019); it contains 1020 dance videos and 60 songs performed by professional dancers and filmed in clean studio environment settings without occlusions. AIST++ provides human motion data in the form of SMPL Loper et al. (2015) parameters and body keypoints, and includes annotations for different genres and choreography styles. The TikTok Dance-Music dataset includes 445 dance videos collected from the social media platform. The 2D skeleton data extracted with OpenPose Cao et al. (2017); Cao et al. (2019) is used as the motion representation. We adopt the official cross-modality splits, with no music songs overlapping between splits, for both datasets.

Implementations. The sampling rate for all audio signals is 22.5 kHz in our experiments. We use 2-second music samples, as in Zhu et al. (2022a), for the main experiments.
We fine-tuned the pre-trained Jukebox Dhariwal et al. (2020) for our Music VQ-VAE model. For the motion encoder, we deploy a backbone stacked with convolutional layers and residual blocks. For the visual encoder, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinetics Kay et al. (2017) as the visual conditioning. The motion and visual encoder outputs are concatenated to form the final continuous conditioning input to our contrastive diffusion model. For the contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network pθ. It includes 19 transformer blocks, each consisting of full-attention, cross-attention, and feed-forward modules, with a channel size of 1024 per block. We set the initial weight for the contrastive loss to λ = 5e−5. The number N of intra- and inter-negative samples for each GT music sample is 10. The visual encoder, motion encoder, and the contrastive diffusion model are jointly optimized. More implementation details are provided in the supplement.

Evaluations. The evaluation of synthesized music measures both the conditioning-output correspondence and the general synthesis quality using the metrics introduced in Zhu et al. (2022a). Specifically, the metrics include the beats coverage score, the beats hit score, the genre accuracy score, and two subjective evaluation tests with Mean Opinion Scores (MOS) for musical coherence and general quality. Among these metrics, the beats scores emphasize intra-sample properties, since they calculate the second-level audio onset strength within musical chunks Ellis (2007), while the genre accuracy focuses on instance-level musical attributes of music styles. Detailed explanations of the above metrics can be found in Zhu et al. (2022a). We compare against multiple dance-to-music generation works: Foley Gan et al. (2020a), Dance2Music Aggarwal & Parikh (2021), CMT Di et al. (2021), and D2M-GAN Zhu et al. (2022a). The first three models rely on symbolic discrete MIDI musical representations, while the last one also uses a VQ musical representation. The major difference between the symbolic MIDI and discrete VQ musical representations lies in the fact that MIDI is pre-defined for each instrument, while VQ is learning-based. The latter thus enables complex and free music synthesis appropriate for scenarios like dance videos.

Table 1: Quantitative evaluation results for the dance-to-music task on the AIST++ dataset. This table shows the best performance scores we obtain for different contrastive diffusion steps. We report the mean and standard deviations of our contrastive diffusion for three inference tests.

Musical features   | Rhythms    | Rhythms  | Genre      | Coherence | Quality
Metrics            | Coverage ↑ | Hit ↑    | Accuracy ↑ | MOS ↑     | MOS ↑
GT Music           | 100        | 100      | 88.5       | 4.7       | 4.8
Foley              | 74.1       | 69.4     | 8.1        | 2.9       | -
Dance2Music        | 83.5       | 82.4     | 7.0        | 3.0       | -
CMT                | 85.5       | 83.5     | 11.6       | 3.0       | -
D2M-GAN            | 88.2       | 84.7     | 24.4       | 3.3       | 3.4
Ours Vanilla       | 89.0±1.1   | 83.8±1.5 | 25.3±0.8   | 3.3       | 3.6
Ours Step-Intra    | 93.9±1.2   | 90.7±1.5 | 25.8±0.6   | 3.6       | 3.5
Ours Sample-Inter  | 91.8±1.6   | 86.9±1.4 | 27.2±0.5   | 3.6       | 3.6

Table 2: Quantitative evaluation results for the dance-to-music task on the TikTok dataset. We set the default number of diffusion steps to 80.

Methods            | Beats Coverage / Hit ↑
D2M-GAN            | 88.4 / 82.3
Ours Vanilla       | 88.7 / 81.4
Ours Step-Intra    | 91.8 / 86.3
Ours Sample-Inter  | 90.1 / 85.5

Results and Discussion. The quantitative experimental results are shown in Tab. 1 and Tab. 2.
Our proposed methods achieve better performance than the competing methods, even with the vanilla version without contrastive mechanisms. Furthermore, we find that the Step-Intra setting is more helpful for increasing the beats scores, while the Sample-Inter setting yields larger improvements in the genre accuracy scores. We believe this is due to the evaluation methods of the different metrics. The beats scores measure the chunk-level (i.e., the audio onset strength Ellis (2007)) consistency between the GT and synthesized music samples Zhu et al. (2022a), while the genre scores consider the overall musical attributes of each sample sequence at the instance level. This finding is consistent with our assumptions in Sec. 3.3.

Convergence Analysis. We also analyze the impact of the proposed contrastive diffusion on model convergence in terms of diffusion steps. The number of diffusion steps is a significant hyper-parameter for DPMs Sohl-Dickstein et al. (2015b); Nichol & Dhariwal (2021); Austin et al. (2021); Gu et al. (2022); Kingma et al. (2021) that directly influences the inference time and synthesis quality. Previous works have shown that a larger number of diffusion steps usually leads to better model performance but longer inference times Kingma et al. (2021); Gu et al. (2022). We demonstrate that, with the improved mutual information obtained via the proposed contrastive diffusion method, we can greatly reduce the number of steps needed. As shown in Fig. 4 (left), we observe that the beats scores reach a stable level at approximately 80 steps, ∼35% fewer than the vanilla DPM, which converges in ∼120 steps. More ablation studies and analysis on this task can be found in the supplement.

4.2 CONDITIONAL IMAGE SYNTHESIS

Dataset. We conduct text-to-image synthesis on the CUB200 Wah et al. (2011) and MSCOCO Lin et al. (2014) datasets. The CUB200 dataset contains images of 200 bird species; each image has 10 corresponding text descriptions. The MSCOCO dataset contains 82k images for training and 40k images for testing; each image has 5 text descriptions. We also perform class-conditioned image generation on ImageNet Deng et al. (2009); Russakovsky et al. (2015). Implementation details for both tasks are provided in the supplement.

Evaluations. We adopt two evaluation metrics for text-to-image synthesis: the classic FID score Heusel et al. (2017) as a general measurement of image quality, and the CLIPScore Hessel et al. (2021) to evaluate the correspondence between the given textual caption and the synthesized image. For class-conditioned image synthesis, we use the FID score and a classifier-based accuracy for general quality and input-output correspondence measurement. We compare against text-to-image generation methods including StackGAN Zhang et al. (2017), StackGAN++ Zhang et al. (2018), SEGAN Tan et al. (2019), AttnGAN Xu et al. (2018), DM-GAN Zhu et al. (2019), DF-GAN Tao et al. (2020), DAE-GAN Ruan et al. (2021), DALLE Ramesh et al. (2021), and VQ-Diffusion Gu et al. (2022). For the experiments on ImageNet, we list result comparisons with ImageBART Esser et al. (2021a), VQ-GAN Esser et al. (2021b), IDDPM Nichol & Dhariwal (2021), and VQ-Diffusion Gu et al. (2022). Specifically, VQ-Diffusion Gu et al. (2022) also adopts the discrete diffusion generative backbone, and can be considered the vanilla version without contrastive mechanisms.
Additionally, we provide more comparisons with other methods in terms of dataset, model scale, and training time in the supplement for a more comprehensive and fair understanding of our proposed method.

Results and Discussion. The quantitative results are presented in Tab. 3 and Tab. 4. We observe that our contrastive diffusion achieves state-of-the-art performance for both general synthesis fidelity and input-output correspondence, and that the Sample-Inter contrastive setting is more beneficial than Step-Intra for image synthesis. This empirical finding again validates our assumption regarding the contrastive settings in Sec. 3.3, where the Sample-Inter setting helps more with instance-level synthesis quality. Notably, as shown in Fig. 4 (right), our contrastive diffusion method converges at about 60 diffusion steps, while the vanilla version converges at approximately 100 steps on CUB200 Wah et al. (2011), increasing inference speed by roughly 40%.

5 CONCLUSION

While DPMs have demonstrated remarkable potential, improving their training and inference efficiency while maintaining flexible and accurate results for conditional generation is an ongoing challenge, particularly for cross-modal tasks. Our Conditional Discrete Contrastive Diffusion (CDCD) loss addresses this by maximizing the mutual information between the conditioning input and the generated output. Our contrastive diffusion mechanisms and negative sampling methods effectively incorporate this loss into DPM training. Extensive experiments on various cross-modal conditional generation tasks demonstrate the efficacy of our approach in bridging drastically differing domains.

ACKNOWLEDGMENT

This research is partially supported by NSF SCH-2123521 and Snap unrestricted gift funding. This article solely reflects the opinions and conclusions of its authors and not the funding agents.

ETHICS STATEMENT

As in other media generation works, there are possible malicious uses of such media to be addressed by oversight organizations and regulatory agencies. Our primary objective as researchers is always to create more reliable and secure AI and machine learning systems that maximally benefit our society.

A MORE RELATED WORKS

In addition to the fields of Diffusion Probabilistic Models, Contrastive Representation Learning, and VQ Representations for Conditional Generation discussed in the main paper, our work is also closely related to the multi-modal learning and generation fields. The research topic of multimodal learning, which incorporates data from various modalities such as audio, vision, and language, has attracted much attention in recent years Baltrušaitis et al. (2018); Zhu et al. (2022b); Wu et al. (2023). General audio-visual learning works typically seek to investigate audio-visual correlations arising from their intrinsic synchronization Aytar et al. (2016); Korbar et al. (2018); Owens & Efros (2018); Owens et al. (2016); Arandjelovic & Zisserman (2017), and then utilize them in various downstream audio-visual tasks such as audio-visual action recognition Kazakos et al. (2019); Gao et al. (2020), audio-visual event localization and parsing Tian et al. (2018); Zhu et al. (2021a); Wu et al. (2019); Wu & Yang (2021), and audio-visual captioning Rahman et al. (2019); Wang et al. (2018). Works that generate music from visual and/or motion data have also been widely explored in recent years Gan et al. (2020a); Di et al. (2021); Aggarwal & Parikh (2021); Zhu et al. (2022a).
In the vision and language area, text generation from visual data has been extensively explored in the image and video captioning tasks Zhu et al. (2020; 2021b); Anderson et al. (2018); You et al. (2016); Wang et al. (2017). At the same time, works on image/video generation from text have also attracted much attention with recently released large-scale models Radford et al. (2021); Li et al. (2019); Ruan et al. (2021); Ramesh et al. (2021).

B DETAILED PROOF AND TRAINING

B.1 LOWER BOUND OF CDCD LOSS

We show that the proposed CDCD loss has a lower bound related to the mutual information and the number of negative samples N. The derivations below are similar to those from Oord et al. (2018):

L_{CDCD} := \mathbb{E}_Z\Big[-\log \frac{\frac{p_\theta(z_0|c)}{p_\theta(z_0)}}{\frac{p_\theta(z_0|c)}{p_\theta(z_0)} + \sum_{z^j \in Z'} \frac{p_\theta(z^j_0|c)}{p_\theta(z^j_0)}}\Big] \quad (6a)
= \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_0)}{p_\theta(z_0|c)} \sum_{z^j \in Z'} \frac{p_\theta(z^j_0|c)}{p_\theta(z^j_0)}\Big] \quad (6b)
\approx \mathbb{E}_Z \log\Big[1 + N \frac{p_\theta(z_0)}{p_\theta(z_0|c)} \mathbb{E}_{Z'}\Big[\frac{p_\theta(z^j_0|c)}{p_\theta(z^j_0)}\Big]\Big] \quad (6c)
= \mathbb{E}_Z \log\Big[1 + N \frac{p_\theta(z_0)}{p_\theta(z_0|c)}\Big] \quad (6d)
\geq \mathbb{E}_Z \log\Big[N \frac{p_\theta(z_0)}{p_\theta(z_0|c)}\Big] \quad (6e)
= \log(N) - I(z_0, c). \quad (6f)

B.2 CONVENTIONAL VARIATIONAL LOSS

The conventional variational loss L_{vb} is derived as follows Sohl-Dickstein et al. (2015b):

L_{vb}(x) := \mathbb{E}_q\Big[-\log \frac{p_\theta(x_{0:T})}{q(x_{1:T}|x_0)}\Big]
= \mathbb{E}_q\Big[-\log p(x_T) - \sum_{t>1} \log \frac{p_\theta(x_{t-1}|x_t)}{q(x_t|x_{t-1})} - \log \frac{p_\theta(x_0|x_1)}{q(x_1|x_0)}\Big]
= \mathbb{E}_q\Big[-\log p(x_T) - \sum_{t>1} \log \frac{p_\theta(x_{t-1}|x_t)}{q(x_{t-1}|x_t, x_0)} \cdot \frac{q(x_{t-1}|x_0)}{q(x_t|x_0)} - \log \frac{p_\theta(x_0|x_1)}{q(x_1|x_0)}\Big]
= \mathbb{E}_q\Big[-\log \frac{p(x_T)}{q(x_T|x_0)} - \sum_{t>1} \log \frac{p_\theta(x_{t-1}|x_t)}{q(x_{t-1}|x_t, x_0)} - \log p_\theta(x_0|x_1)\Big]
= \mathbb{E}_q\Big[D_{KL}(q(x_T|x_0)\,\|\,p(x_T)) + \sum_{t>1} D_{KL}(q(x_{t-1}|x_t, x_0)\,\|\,p_\theta(x_{t-1}|x_t)) - \log p_\theta(x_0|x_1)\Big]. \quad (7)

B.3 L_{vb} WITH CONDITIONING PRIOR

Following the unconditional conventional variational loss, we then show its conditional variant with the conditioning c as prior, which has also been adopted in Gu et al. (2022):

L_{vb}(x, c) = L_0 + L_1 + \dots + L_{T-1} + L_T,
L_0 = -\log p_\theta(x_0|x_1, c),
L_{t-1} = D_{KL}(q(x_{t-1}|x_t, x_0)\,\|\,p_\theta(x_{t-1}|x_t, c)),
L_T = D_{KL}(q(x_T|x_0)\,\|\,p(x_T)). \quad (8)

B.4 STEP-WISE AND SAMPLE-WISE CONTRASTIVE DIFFUSION

Below, we show the full derivation for the step-wise parallel contrastive diffusion loss. Given that the intermediate variables z_{1:T} are also taken into account in this step-wise contrastive diffusion, we slightly modify the initial notation f(z_0, c) = \frac{p_\theta(z_0|c)}{p_\theta(z_0)} from Eq. (2) in the main paper to f(z, c) = \frac{p_\theta(z_{0:T}|c)}{p_\theta(z_{0:T})}.

L_{CDCD-Step} := -\mathbb{E}_Z\Big[\log \frac{f(z, c)}{f(z, c) + \sum_{z^j \in Z'} f(z^j, c)}\Big] \quad (9a)
= \mathbb{E}_Z \log\Big[1 + \sum_{z^j \in Z'} \frac{f(z^j, c)}{f(z, c)}\Big] \quad (9b)
= \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_{0:T})}{p_\theta(z_{0:T}|c)} \sum_{z^j \in Z'} \frac{p_\theta(z^j_{0:T}|c)}{p_\theta(z^j_{0:T})}\Big] \quad (9c)
\approx \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_{0:T})}{p_\theta(z_{0:T}|c)}\, N\, \mathbb{E}_{Z'}\Big[\frac{p_\theta(z^j_{0:T}|c)}{p_\theta(z^j_{0:T})}\Big]\Big] \quad \text{(same as Eq. (6c))} \quad (9d)
\approx \mathbb{E}_Z \mathbb{E}_q \log\Big[\frac{q(z_{1:T}|z_0)}{p_\theta(z_{0:T}|c)}\, N\, \frac{p_\theta(z^j_{0:T}|c)}{q(z^j_{1:T}|z^j_0)}\Big] \quad \text{(conditional } p_\theta\text{)} \quad (9e)
\approx \mathbb{E}_q\Big[-\log \frac{p_\theta(z_{0:T}|c)}{q(z_{1:T}|z_0)}\Big] - \log N\, \mathbb{E}_{Z'}\mathbb{E}_q\Big[-\log \frac{p_\theta(z^j_{0:T}|c)}{q(z^j_{1:T}|z^j_0)}\Big] \quad (9f)
= L_{vb}(z, c) - C \sum_{z^j \in Z'} L_{vb}(z^j, c). \quad (9g)

Algorithm 1 Conditional Discrete Contrastive Diffusion Training. The referenced equations can be found in the main paper.
Input: Initial network parameters θ, contrastive loss weight λ, learning rate η, number of negative samples N, total diffusion steps T, conditioning information c, contrastive mode m ∈ {Step, Sample}.
1: for each training iteration do
2:   t ∼ Uniform({1, 2, ..., T})
3:   z_t ← sample from q(z_t | z_{t−1})
4:   L_vb ← Σ_{i=1,...,t} L_i ▷ Eq. 1
5:   if m == Step then
6:     for j = 1, ..., N do
7:       z^j_t ← sample from q(z^j_t | z^j_{t−1}, c) ▷ from negative variables at previous steps
8:     end for
9:     L_CDCD = −(1/N) Σ_j L^j_vb ▷ Eq. 3
10:  else if m == Sample then
11:    for j = 1, ..., N do
12:      z_t ← sample from q(z_t | z^j_0, c) ▷ from negative variables at step 0
13:    end for
14:    L_CDCD = −(1/N) Σ_j L^j_{z_0} ▷ Eq. 4
15:  end if
16:  L ← L_vb + λ L_CDCD ▷ Eq. 5
17:  θ ← θ − η∇_θ L
18: end for

In Eq. (9g) above, C stands for a constant equal to log N, which can be further adjusted via the weight we select for the CDCD loss as in Eq. 5. Similarly, for the sample-wise auxiliary contrastive diffusion, the loss can be derived as follows:

L_{CDCD-Sample} := -\mathbb{E}_Z\Big[\log \frac{f(z_0, c)}{f(z_0, c) + \sum_{z^j \in Z'} f(z^j_0, c)}\Big] \quad (10a)
= \mathbb{E}_Z \log\Big[1 + \frac{p_\theta(z_0)}{p_\theta(z_0|c)}\, N\, \mathbb{E}_{Z'}\Big[\frac{p_\theta(z^j_0|c)}{p_\theta(z^j_0)}\Big]\Big] \quad (10b)
\approx \mathbb{E}_Z \mathbb{E}_q \log\Big[\frac{q(z_{1:T}|z_0)}{p_\theta(z_0|c)}\, N\, \frac{p_\theta(z^j_0|c)}{q(z^j_{1:T}|z^j_0)}\Big] \quad (10c)
\approx \mathbb{E}_q\Big[-\log \frac{p_\theta(z_0|c)}{q(z_{1:T}|z_0)}\Big] - N\, \mathbb{E}_{Z'}\mathbb{E}_q\Big[-\log \frac{p_\theta(z^j_0|c)}{q(z^j_{1:T}|z^j_0)}\Big] \quad (10d)
= \mathbb{E}_q[-\log p_\theta(z_0|z_t, c)] - C \sum_{z^j \in Z'} \mathbb{E}_q[-\log p_\theta(z^j_0|z_t, c)]. \quad (10e)

Note that, from a high-level perspective, our contrastive idea covers two different concepts, while conventional contrastive learning usually focuses only on negative samples. In our case, due to the unique formulation of diffusion models, which brings the diffusion steps into the methodology design, we consider the contrast in the context of both "negative samples" and "negative steps" (i.e., the negative intermediate steps). In the derivation above, we use the symbols Z and q to distinguish between these two concepts.

B.5 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION TRAINING

The training process for the proposed contrastive diffusion is explained in Algo. 1.

C ADDITIONAL EXPERIMENTAL DETAILS AND ANALYSIS

C.1 DANCE-TO-MUSIC TASK

Implementation. The sampling rate for all audio signals is 22.5 kHz in our experiments. We use 2-second music samples as in Zhu et al. (2022a) for our main experiments, resulting in 44,100 audio data points for each raw music sequence. For the Music VQ-VAE, we fine-tuned Jukebox Dhariwal et al. (2020) on our data to leverage its pre-learned codebook from a large-scale music dataset (approximately 1.2 million songs). The codebook size K is 2048, with a token dimension d_z = 128, and the hop length L is 128 in our default experimental setting. For the motion module, we deploy a backbone stacked with convolutional layers and residual blocks. The dimension of the embedding we use for music conditioning is 1024. For the visual module, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinetics Kay et al. (2017) as the visual conditioning information, with a dimension of 2048. In the implementation of our contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network pθ. It includes 19 transformer blocks, each consisting of full-attention, cross-attention, and feed-forward modules, with a channel size of 1024 per block. We set the initial weight for the contrastive loss to λ = 5e−5. The numbers of intra- and inter-negative samples for each GT music sample are both 10. The AdamW Loshchilov & Hutter (2017) optimizer with β1 = 0.9 and β2 = 0.96 is used in our training, with a learning rate of 4.5e−4. We also employ an adaptive weight for the denoising loss, gradually decreasing it as the diffusion step increases and approaches the end of the chain. The visual module, motion module, and the contrastive diffusion model are jointly optimized. The architecture of the adopted motion encoder is shown in Tab. 5, and is the same as in Zhu et al. (2022a).
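To complement these implementation details, the following is a schematic PyTorch-style rendering of one training iteration of Algorithm 1, under stated assumptions: the diffusion object and its methods (q_sample, variational_bound, nll_z0) are hypothetical placeholders for the corresponding quantities, not the paper's actual API.

```python
import torch

def train_iteration(diffusion, z0, z0_negs, c, mode, lam, optimizer):
    # One iteration of Algorithm 1 (schematic). z0 is the positive VQ
    # token sequence, z0_negs a list of N negative sequences.
    t = torch.randint(1, diffusion.T + 1, (1,)).item()
    z_t = diffusion.q_sample(z0, t)                          # forward corruption
    loss_vb = diffusion.variational_bound(z0, z_t, t, c)     # Eq. (1)
    if mode == "step":
        # Step-wise: run a parallel corruption chain per negative (Eq. 3).
        negs = [diffusion.variational_bound(zj, diffusion.q_sample(zj, t), t, c)
                for zj in z0_negs]
    else:  # mode == "sample"
        # Sample-wise: score each negative z0^j under the principal chain (Eq. 4).
        negs = [diffusion.nll_z0(zj, z_t, t, c) for zj in z0_negs]
    loss_cdcd = -torch.stack(negs).mean()
    loss = loss_vb + lam * loss_cdcd                         # Eq. (5)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```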
In addition to the aforementioned implementation details, we also adopt a mask-token technique that bears resemblance to those used in language modeling Devlin et al. (2018) and text-to-image synthesis Gu et al. (2022) for our dance-to-music generation task. We adopt a truncation rate of 0.86 in our inference.

MOS Evaluation Test. We asked a total of 32 participants to take part in our subjective Mean Opinion Score (MOS) music evaluations Zhu et al. (2022a); Kumar et al. (2019), among which 11 are female and the rest are male. For the dance-music coherence test, we fuse the generated music samples with the GT videos as post-processing. We then asked each evaluator to rate 20 generated videos with a score from 1 (least coherent) to 5 (most coherent) after watching the processed video clip. Specifically, the participants were asked to pay more attention to the dance-music coherence, in terms of the dance moves corresponding to the music genre and rhythm, rather than the overall music quality, with reference to the GT video clips with the original music. As for the overall quality evaluations, we only play the audio tracks, without the video frames, to each evaluator. As before, they are asked to rate the overall music quality with a score from 1 (worst audio quality) to 5 (best audio quality).

Training Cost. For the dance-to-music task experiments on the AIST++ dataset, we use 4 NVIDIA RTX A5000 GPUs and train the model for approximately 2 days. For the same task on the TikTok Dance-Music dataset, training takes approximately 1.5 days on the same hardware.

Complete Results for Contrastive Settings. As discussed in our main paper, there are four possible combinations of contrastive settings given the different contrastive diffusion mechanisms and negative sampling methods. Here, we include complete quantitative scores for the different contrastive settings in Tab. 6. We observe that all four contrastive settings, including the Step-Inter and Sample-Intra settings that are not reported in our main paper, help to improve performance. As noted, among all the settings, Step-Intra and Sample-Inter are more reasonable and yield larger improvements for intra-sample data attributes (i.e., beats scores) and instance-level features (i.e., genre accuracy scores), respectively.

Ablation on Music Length. Although we use 2-second musical sequences in the main experiments to allow consistent and fair comparisons with Zhu et al. (2022a), our framework can also synthesize longer musical sequences. In the supplementary material, we show generated music sequences of 6 seconds. Quantitative evaluations for different musical sequence lengths are presented in Tab. 7, where we show better performance when synthesizing longer musical sequences.

C.2 TEXT-TO-IMAGE TASK

Implementation. For the text-to-image generation task, we adopt VQ-GAN Esser et al. (2021b) as the discrete encoder and decoder. The codebook size K is 2886, with a token dimension d_z = 256. VQ-GAN converts a 256 × 256 resolution image to 32 × 32 discrete tokens. For the textual conditioning, we employ the pre-trained CLIP Radford et al. (2021) model to encode the given textual descriptions. The denoising diffusion model pθ has 18 transformer blocks and a channel size of 192, which is a similar model scale to the small version of VQ-Diffusion Gu et al. (2022). We use λ = 5e−5 as the contrastive loss weight. As in the dance-to-music task, we also use the adaptive weight that changes over the diffusion stages.
We keep the same truncation rate of 0.86 as in our dance-to-music experiment and in Gu et al. (2022). Unlike in the dance-to-music experiments, where we jointly learn the conditioning encoders, both the VQ-GAN and CLIP models are fixed during the contrastive diffusion training.

Training Cost. For the text-to-image task experiments on the CUB200 dataset, training takes approximately 5 days using 4 NVIDIA RTX A5000 GPUs. For the same experiments on the MSCOCO dataset, we run the experiments on Amazon Web Services (AWS) using 8 NVIDIA Tesla V100 GPUs. This task required 10 days of training.

C.3 CLASS-CONDITIONED IMAGE SYNTHESIS TASK

Implementation. For class-conditioned image synthesis, we also adopt the pre-trained VQ-GAN Esser et al. (2021b) as the discrete encoder and decoder. We replace the conditioning encoder with a class embedding optimized during the contrastive diffusion training. The size of the conditional embedding is 512. Other parameters and techniques remain the same as in the text-to-image task.

Training Cost. For the class-conditioned experiments on ImageNet, we use 8 NVIDIA Tesla V100 GPUs running on AWS. This task required 20 days of training.

D MORE QUALITATIVE RESULTS

D.1 GENERATED MUSIC SAMPLES

For qualitative samples of synthesized dance music sequences, please refer to the anonymous page with music samples in the supplement. In addition to the generated music samples on AIST++ Tsuchida et al. (2019); Li et al. (2021) and the TikTok Dance-Music dataset Zhu et al. (2022a), we also include some qualitative samples obtained with music editing operations based on the dance-music genre annotations from AIST++. Specifically, we replace the original paired motion conditioning input with that of a different dance-music genre from a different dance choreographer.

Discussion on Musical Representations and Audio Quality. It is worth noting that we only compare the overall audio quality with that of D2M-GAN Zhu et al. (2022a). This is due to the nature of the different musical representations in the literature on deep-learning based music generation Gan et al. (2020a); Dong et al. (2018); Huang et al. (2019); Gan et al. (2020b); Aggarwal & Parikh (2021). There are mainly two categories of musical representations adopted in previous works: pre-defined symbolic and learning-based representations Ji et al. (2020); Briot et al. (2020). For the former symbolic music representation, typical options include 1D piano-roll and 2D MIDI-based representations. While these works benefit from pre-defined music synthesizers and produce music that does not include raw audio noise, their main limitation is that such representations are usually limited to a single specific instrument, which hinders their flexibility in wider and more complex scenarios such as dance videos. In contrast, learning-based music representations (i.e., musical VQ in our case) rely on well-trained music synthesizers as decoders, but can be used as a unified representation for various musical sounds, e.g., instruments or voices. However, the training of such music encoders and decoders for high-quality audio signals itself remains a challenging problem. Specifically, high-quality audio is a form of high-dimensional data with an extremely large sampling rate, even compared to high-resolution images. For example, the sampling rate for CD-quality audio signals is 44.1 kHz, resulting in 2,646,000 data points for a one-minute musical piece. To this end, existing deep learning based works Dhariwal et al.
(2020); Kumar et al. (2019) for music generation employ methods to reduce the number of dimensions, e.g., by introducing hop lengths and smaller sampling rates. These operations help to make music learning and generation more computationally tractable, but also introduce additional noise into the synthesized audio signals. In this work, we adopt the pre-trained Jukebox model Dhariwal et al. (2020) as our music encoder and decoder for the musical VQ representation. The adopted model has a hop length of 128, which corresponds to the top-level model from the original work Dhariwal et al. (2020). Jukebox employs 3 models: top-, middle-, and bottom-level, with both audio quality and required computation increasing from the first to the last. As an example, in the supplemental HTML page, we provide music samples directly reconstructed by Jukebox using the top-level model we employ in our work, compared to the ground-truth audio. While the bottom-level model (with a hop length of 8) allows for high-quality audio reconstruction, it requires much more time and computation, not only for training but also for the final inference, e.g., 3 hours to generate a 20-second musical sequence. As the synthesized music from the top-level model includes some audible noise, we apply a noise reduction operation Sainburg et al. (2020). However, the overall audio quality is not a primary factor that we specifically address in this work on cross-modal conditioning and generation, as it largely depends on the specific music encoder and decoder that are employed. This explains why we report similar MOS scores in terms of general audio quality.

D.2 SYNTHESIZED IMAGES

We present more qualitative examples for text-to-image synthesis and class-conditioned image synthesis in Fig. 5, Fig. 6, and Fig. 7.

E FURTHER DISCUSSION ON THE CDCD LOSS

In this section, we provide further discussion of the proposed CDCD loss in terms of various aspects, including its relation to existing auxiliary losses, the impact of the CDCD strength, and additional experimental results.

E.1 CDCD AS AUXILIARY LOSS

While diffusion models are typically trained and optimized with the conventional variational lower bound loss L_{vb}, as described in the main paper and Appendix B.2, several types of auxiliary losses have been proposed to further regularize and improve the learning of diffusion models. Specifically, Dhariwal & Nichol (2021) introduce classifier-based guidance for denoising diffusion probabilistic models with a continuous state space, and classifier-free guidance is proposed in Ho & Salimans (2022). In discrete diffusion formulations Austin et al. (2021); Gu et al. (2022), an auxiliary loss that encourages the model to predict the noiseless token at an arbitrary step is adopted and shown to help with synthesis quality. Similar to the previous cases, we consider the proposed CDCD loss a type of auxiliary loss, which seeks to provide additional guidance to better learn the conditional distribution p(x|c). In particular, classifier-free guidance Ho & Salimans (2022) proposes to randomly discard the conditioning while learning a conditional diffusion generative model, which bears resemblance to our downsampled contrastive steps in Appendix E.3.

E.2 IMPACT OF CDCD STRENGTH

We further show ablation studies on the parameter λ, the weight of our proposed CDCD loss, which characterizes the strength of this contrastive regularizer.
We conduct the dance-to-music generation experiments with different values of λ and show the results in Tab. 9. We observe from the table that the performance in terms of the beats scores is relatively robust for λ values ranging from 4e−5 to 5e−5. At the same time, we empirically observe that with a larger value of λ, the model converges faster, in fewer training epochs. In the case of the image synthesis task, we are rather cautious about the strength of the imposed contrastive regularizer. Intuitively, the proposed CDCD loss encourages the model to learn a slightly different distribution for negative samples, which could impose a trade-off against the distribution of the actual data given a specific conditioning. Therefore, while a larger value of λ helps with learning speed, we empirically set λ to 5e−5. Note that this value is adapted from the weight used for other auxiliary losses in previous works Gu et al. (2022).

E.3 DOWNSAMPLED CONTRASTIVE STEPS

While we show the performance of the complete step-wise contrastive diffusion in the main paper, we discuss here an alternative way to implement the proposed method with less computational cost, by downsampling the contrastive steps in the diffusion process. Specifically, we randomly downsample the steps at which the proposed CDCD loss is applied, which shares a similar spirit with classifier-free guidance Ho & Salimans (2022), where the conditioning is randomly dropped. The experimental results are listed in Tab. 10, where we observe little performance drop with downsampled contrastive steps.
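The downsampling itself amounts to gating the contrastive term at random iterations; a minimal sketch of our own follows, where the application probability p_apply is an assumed hyper-parameter (the paper does not report a specific rate).

```python
import torch

def loss_with_downsampled_cdcd(loss_vb, cdcd_term_fn, lam=5e-5, p_apply=0.5):
    # Apply the CDCD term only with probability p_apply at each training
    # iteration; otherwise fall back to the plain variational loss.
    # cdcd_term_fn is a closure computing L_CDCD for the current batch,
    # so the (more expensive) contrastive computation is skipped entirely
    # on iterations where it is not applied.
    if torch.rand(()).item() < p_apply:
        return loss_vb + lam * cdcd_term_fn()
    return loss_vb
```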
1. What is the focus and contribution of the paper on Conditional Discrete Contrastive Diffusion (CDCD)?
2. What are the strengths of the proposed approach, particularly in terms of combining diffusion training and contrastive learning?
3. What are the weaknesses of the paper, if any?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or suggestions regarding the proposed method's applicability and potential impact on multimodal conditional synthesis tasks?
Summary Of The Paper
This paper introduces a Conditional Discrete Contrastive Diffusion (CDCD) loss to enhance input-output connections by maximizing their mutual information. The authors also design two contrastive diffusion mechanisms to incorporate L_CDCD into the denoising process. Diverse multimodal conditional synthesis tasks have been evaluated, and the proposed methods achieve higher or competitive general synthesis quality.

Strengths And Weaknesses
Strengths:
· This work is the first to combine diffusion training and contrastive learning, which provides a new perspective on the research of conditional DPMs.
· The proposed objective function and contrastive diffusion mechanisms show promising quantitative and visual results.
· The framework improves the convergence of diffusion models, reducing the number of required diffusion steps by more than 35% on two benchmarks.
· Proper intra- and inter-negative sampling methods are adopted, and the relevant experiments reveal their effectiveness.
Weaknesses:
· I didn't see any weaknesses.

Clarity, Quality, Novelty And Reproducibility
Quality & Novelty:
· The proofs and experiments are sufficient. Some people in the community will be very interested in this work.
Clarity:
· The authors' writing is good and the ideas are clarified well.
Reproducibility:
· The authors provided implementation details and have already released their code and pre-trained models. I think this work is reproducible.
ICLR
Title
Isometric Autoencoders
Abstract
High dimensional data is often assumed to be concentrated on or near a low-dimensional manifold. Autoencoders (AE) are a popular technique to learn representations of such data by pushing it through a neural network with a low dimension bottleneck while minimizing a reconstruction error. Using high capacity AE often leads to a large collection of minimizers, many of which represent a low dimensional manifold that fits the data well but generalizes poorly. Two sources of bad generalization are: extrinsic, where the learned manifold possesses extraneous parts that are far from the data; and intrinsic, where the encoder and decoder introduce arbitrary distortion in the low dimensional parameterization. An approach taken to alleviate these issues is to add a regularizer that favors a particular solution; common regularizers promote sparsity, small derivatives, or robustness to noise. In this paper, we advocate an isometry (i.e., local distance preserving) regularizer. Specifically, our regularizer encourages: (i) the decoder to be an isometry; and (ii) the encoder to be the decoder's pseudo-inverse, that is, the encoder extends the inverse of the decoder to the ambient space by orthogonal projection. In a nutshell, (i) and (ii) fix both intrinsic and extrinsic degrees of freedom and provide a non-linear generalization of principal component analysis (PCA). Experimenting with the isometry regularizer on dimensionality reduction tasks produces useful low-dimensional data representations.

1 INTRODUCTION

A common assumption is that high dimensional data X ⊂ R^D is sampled from some distribution p concentrated on, or near, some lower d-dimensional submanifold M ⊂ R^D, where d < D. The task of estimating p can therefore be decomposed into: (i) approximating the manifold M; and (ii) approximating p restricted to, or concentrated near, M. In this paper we focus on task (i), mostly known as manifold learning. A common approach to approximating the d-dimensional manifold M, e.g., in (Tenenbaum et al., 2000; Roweis & Saul, 2000; Belkin & Niyogi, 2002; Maaten & Hinton, 2008; McQueen et al., 2016; McInnes et al., 2018), is to embed X in R^d. This is often done by first constructing a graph G where nearby samples in X are connected by edges, and second, optimizing for the locations of the samples in R^d, striving to minimize edge length distortions in G. Autoencoders (AE) can also be seen as a method to learn low dimensional manifold representations of high dimensional data X. AE are designed to reconstruct X as the image of its low dimensional embedding. When restricted to linear encoders and decoders, AE learn linear subspaces; with a mean squared reconstruction loss they reproduce principal component analysis (PCA). Using higher capacity neural networks as the encoder and decoder allows complex manifolds to be approximated. To avoid overfitting, different regularizers are added to the AE loss. Popular regularizers include sparsity promoting (Ranzato et al., 2007; 2008; Glorot et al., 2011), contractive or penalizing large derivatives (Rifai et al., 2011a;b), and denoising (Vincent et al., 2010; Poole et al., 2014). Recent AE regularizers directly promote distance preservation of the encoder (Pai et al., 2019; Peterfreund et al., 2020). In this paper we advocate a novel AE regularization promoting isometry (i.e., local distance preservation), called Isometric-AE (I-AE). Our key idea is to promote the decoder to be isometric, and the encoder to be its pseudo-inverse.
Given an isometric decoder R^d → R^D, there is no well-defined inverse R^D → R^d; we define the pseudo-inverse to be a projection on the image of the decoder composed with the inverse of the decoder restricted to its image. Locally, the I-AE regularization therefore encourages: (i) the differential of the decoder A ∈ R^{D×d} to be an isometry, i.e., A^T A = I_d, where I_d is the d × d identity matrix; and (ii) the differential of the encoder B ∈ R^{d×D} to be the pseudo-inverse (now in the standard linear algebra sense) of the differential of the decoder A ∈ R^{D×d}, namely, B = A^+. In view of (i) this implies B = A^T. This means that locally our decoder and encoder behave like PCA, where the encoder and decoder are linear transformations satisfying (i) and (ii); that is, the PCA encoder can be seen as a composition of an orthogonal projection on the linear subspace spanned by the decoder, followed by an orthogonal transformation (isometry) to the low dimensional space. In a sense, our method can be seen as a version of denoising/contractive AEs (DAE/CAE, respectively). DAE and CAE promote a projection from the ambient space onto the data manifold, but can distort distances and be non-injective. Locally, using differentials again, projection on the learned manifold means (AB)^2 = AB. Indeed, as can readily be checked, conditions (i) and (ii) above imply A(BA)B = AB. This means that I-AE also belongs to the same class as DAE/CAE, capturing the variations in tangent directions of the data manifold M, while ignoring orthogonal variations, which often represent noise (Vincent et al., 2010; Alain & Bengio, 2014). The benefit of I-AE is that its projection on the data manifold is locally an isometry, preserving distances and sampling the learned manifold evenly. That is, I-AE does not shrink or expand the space; locally, it can be imagined as an orthogonal linear transformation. The inset shows results of a simple experiment comparing contractive AE (CAE, bottom) and isometric AE (I-AE, top). Both AEs are trained on the green data points; the red arrows depict the projection of points (in blue) in the vicinity of the data onto the learned manifold (in black), as calculated by applying the encoder followed by the decoder. Note that CAE indeed projects on the learned manifold but not evenly, tending to shrink space around data points; in contrast, I-AE provides a more even sampling of the learned manifold. Experiments confirm that optimizing the I-AE loss results in a close-to-isometric encoder/decoder explaining the data. We further demonstrate the efficacy of I-AE for dimensionality reduction on different standard datasets, showing its benefits over manifold learning and other AE baselines.

2 RELATED WORKS

Manifold learning. Manifold learning generalizes classic dimensionality reduction methods such as PCA (F.R.S., 1901) and MDS (Kruskal, 1964; Sammon, 1969) by aiming to preserve the local geometry of the data. Tenenbaum et al. (2000) use the nn-graph to approximate the geodesic distances over the manifold, followed by MDS to preserve them in the lower dimension. Roweis & Saul (2000); Belkin & Niyogi (2002); Donoho & Grimes (2003) use spectral methods to minimize different distortion energy functions over the graph matrix. Coifman et al. (2005); Coifman & Lafon (2006) approximate the heat diffusion over the manifold by a random walk over the nn-graph, to gain a robust distance measure on the manifold.
Stochastic neighbor embedding algorithms (Hinton & Roweis, 2003; Maaten & Hinton, 2008) capture the local geometry of the data as a mixture of Gaussians around each data point, and try to find a low dimensional mixture model by minimizing the KL-divergence. In a relatively recent work, McInnes et al. (2018) use iterative spectral and embedding optimization using fuzzy sets. Several works have tried to adapt classic manifold learning ideas to neural networks and autoencoders. Pai et al. (2019) suggest embedding high dimensional points into a low dimension with a neural network by constructing a metric between pairs of data points and minimizing the metric distortion energy. Kato et al. (2019) suggest learning an isometric decoder by using noisy latent variables, and prove under certain conditions that this encourages an isometric decoder. Peterfreund et al. (2020) suggest autoencoders that promote the isometry of the encoder over the data by approximating its differential Gram matrix using the sample covariance matrix. Zhan et al. (2018) encourage distance preserving autoencoders by minimizing a metric distortion energy in a common feature space.

Modern autoencoders. There is an extensive literature on extending autoencoders to generative models (task (ii) in Section 1), that is, learning a probability distribution in addition to approximating the data manifold M. Variational autoencoders (VAE) Kingma & Welling (2014) and their variants Makhzani et al. (2015); Burda et al. (2016); Sønderby et al. (2016); Higgins et al. (2017); Tolstikhin et al. (2018); Park et al. (2019); Zhao et al. (2019) are examples of such methods. In essence, these methods augment the AE structure with a learned probabilistic model in the low dimensional (latent) space R^d that is used to approximate the probability P that generated the observed data X. More relevant to our work are recent works suggesting regularizers for deterministic autoencoders that, together with ex-post density estimation in latent space, form a generative model. Ghosh et al. (2020) suggested reducing the decoder degrees of freedom, either by regularizing the norm of the decoder weights or the norm of the decoder differential. Other regularizers of the differential of the decoder, aiming towards a deterministic variant of VAE, were recently suggested in Kumar & Poole (2020); Kumar et al. (2020). In contrast to our method, these methods do not regularize the encoder explicitly.

3 ISOMETRIC AUTOENCODERS

We consider high dimensional data points X = \{x_i\}_{i=1}^n ⊂ R^D sampled from some probability distribution P(x) in R^D concentrated on or near some d-dimensional submanifold M ⊂ R^D, where d < D. Our goal is to compute an isometric autoencoder (I-AE), defined as follows. Let g : R^D → R^d denote the encoder, and f : R^d → R^D the decoder; N is the learned manifold, i.e., the image of the decoder, N = f(R^d). I-AE is defined by the following requirements: (i) the data X is close to N; (ii) f is an isometry; (iii) g is the pseudo-inverse of f. Figure 2 is an illustration of I-AE. Let θ denote the parameters of f, and φ the parameters of g. We enforce requirements (i)-(iii) by prescribing a loss function L(θ, φ) and optimizing it using standard stochastic gradient descent (SGD). We next break down the loss L into its different components. Condition (i) is promoted with the standard reconstruction loss in AE:

L_{rec}(\theta, \phi) = \frac{1}{n} \sum_{i=1}^n \|f(g(x_i)) - x_i\|^2, \quad (1)

where ‖·‖ is the 2-norm. Before handling conditions (ii) and (iii), let us first define the notions of isometry and pseudo-inverse.
A differentiable mapping f between the Euclidean spaces R^d and R^D is a local isometry if it has an orthogonal differential matrix df(z) ∈ R^{D×d}:

df(z)^T df(z) = I_d, \quad (2)

where I_d ∈ R^{d×d} is the identity matrix and df(z)_{ij} = \frac{\partial f^i}{\partial z^j}(z). A local isometry which is also a diffeomorphism is a global isometry. Restricting the decoder to an isometry is beneficial for several reasons. First, the Nash-Kuiper Embedding Theorem Nash (1956) asserts that non-expansive maps can be approximated arbitrarily well by isometries if D ≥ d + 1, and hence promoting an isometry does not limit the expressive power of the decoder. Second, the low dimensional representation of the data computed with an isometric decoder preserves the geometric structure of the data. In particular, volume, length, angles and probability densities are preserved between the low dimensional representation R^d and the learned manifold N. Lastly, for a fixed manifold N there is a huge space of possible decoders such that N = f(R^d). For isometric f, this space is reduced considerably: indeed, consider two isometries parameterizing N, i.e., f_1, f_2 : R^d → N. Then, since a composition of isometries is an isometry, we have that f_2^{-1} ∘ f_1 : R^d → R^d is a dimension-preserving isometry and hence a rigid motion. That is, all decoders of the same manifold are the same up to a rigid motion. For the encoder the situation is different. Since D > d, the encoder g cannot be an isometry in the standard sense. Therefore we ask g to be the pseudo-inverse of f. To that end we define the projection operator p on a submanifold N ⊂ R^D as

p(x) = \arg\min_{x' \in N} \|x - x'\|.

Note that the closest point is not generally unique; however, the Tubular Neighborhood Theorem (see, e.g., Theorem 6.24 in Lee (2013)) implies uniqueness for points x sufficiently close to the manifold N.

Definition 1. We say that g is the pseudo-inverse of f if g can be written as g = f^{-1} ∘ p, where p is the projection on N = f(R^d).

Consequently, if g is the pseudo-inverse of an isometry f, then it extends the standard notion of isometry by projecting every point onto a submanifold N and then applying an isometry between the d-dimensional manifolds N and R^d. See Figure 2 for an illustration.

First-order characterization. To encourage f, g to satisfy the (local) isometry and pseudo-inverse properties (resp.), we first provide a first-order (necessary) characterization using their differentials:

Theorem 1. Let f be a decoder and g an encoder satisfying conditions (ii) and (iii). Then their differentials A = df(z) ∈ R^{D×d}, B = dg(f(z)) ∈ R^{d×D} satisfy

A^T A = I_d, \quad (3)
B B^T = I_d, \quad (4)
B = A^T. \quad (5)

The theorem asserts that the differentials of the encoder and decoder are orthogonal (rectangular) matrices, and that the differential of the encoder is the pseudo-inverse of the differential of the decoder. Before proving this theorem, let us first use it to construct the relevant losses for promoting the isometry of f and the pseudo-inverse g. We need to promote conditions (3), (4), (5). Since we want to avoid computing the full differentials A = df(z), B = dg(f(z)), we replace (3) and (4) with stochastic estimations based on the following lemma. Denote the unit (d−1)-sphere by S^{d-1} = \{z ∈ R^d \,|\, \|z\| = 1\}.

Lemma 1. Let A ∈ R^{D×d}, where d ≤ D. If ‖Au‖ = 1 for all u ∈ S^{d-1}, then A is column-orthogonal, that is, A^T A = I_d.
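As a quick numeric illustration of Lemma 1 (our own sketch, not from the paper): for a column-orthogonal A, the norms ‖Au‖ over random unit vectors u are identically 1, while a generic A produces a spread of values.

```python
import torch

D, d = 10, 3
A_orth, _ = torch.linalg.qr(torch.randn(D, d))    # column-orthogonal A
A_gen = torch.randn(D, d)                         # generic A

u = torch.randn(10000, d)
u = u / u.norm(dim=1, keepdim=True)               # samples on S^{d-1}

print((u @ A_orth.T).norm(dim=1).std())           # ~0: ||Au|| = 1 everywhere
print((u @ A_gen.T).norm(dim=1).std())            # clearly > 0
print(torch.allclose(A_orth.T @ A_orth, torch.eye(d), atol=1e-5))  # True
```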
Therefore, the isometry promoting loss, encouraging (3), is defined by
$$L_{iso}(\theta) = \mathbb{E}_{z,u} \big( \|df(z)u\| - 1 \big)^2, \qquad (6)$$
where $z \sim P_{iso}(\mathbb{R}^d)$, and $P_{iso}(\mathbb{R}^d)$ is a probability measure on $\mathbb{R}^d$; $u \sim P(S^{d-1})$, and $P(S^{d-1})$ is the standard rotation invariant probability measure on the $(d-1)$-sphere $S^{d-1}$. The pseudo-inverse promoting loss, encouraging (4), would be
$$L_{piso}(\phi) = \mathbb{E}_{x,u} \big( \|u^T dg(x)\| - 1 \big)^2, \qquad (7)$$
where $x \sim P(\mathcal{M})$ and $u \sim P(S^{d-1})$. As usual, the expectation with respect to $P(\mathcal{M})$ is computed empirically using the data samples $\mathcal{X}$.

Lastly, (5) might seem challenging to enforce with neural networks; however, the orthogonality of $A, B$ can be leveraged to replace this loss with a more tractable one, asking only that the encoder be the inverse of the decoder over its image:

Lemma 2. Let $A \in \mathbb{R}^{D \times d}$ and $B \in \mathbb{R}^{d \times D}$. If $A^T A = I_d = B B^T$ and $BA = I_d$, then $B = A^+ = A^T$.

Fortunately, this is already taken care of by the reconstruction loss: since a low reconstruction loss in equation 1 forces the encoder and the decoder to be inverses of one another over the data manifold, i.e., $g(f(z)) = z$, it encourages $BA = I_d$ and therefore, by Lemma 2, automatically encourages equation 5. Note that invertibility also implies bijectivity of the encoder/decoder restricted to the data manifold, pushing for global isometries (rather than local ones). Summing all up, we define our loss for I-AE by
$$L(\theta, \phi) = L_{rec}(\theta, \phi) + \lambda_{iso} \big( L_{iso}(\theta) + L_{piso}(\phi) \big), \qquad (8)$$
where $\lambda_{iso}$ is a parameter controlling the isometry-reconstruction trade-off.

3.1 DETAILS AND PROOFS

Let us prove Theorem 1, characterizing the relation of the differentials of isometries and pseudo-inverses, $A = df(z) \in \mathbb{R}^{D \times d}$, $B = dg(f(z)) \in \mathbb{R}^{d \times D}$. First, by the definition of isometry (equation 2), $A^T A = I_d$. We denote by $T_x\mathcal{N}$ the $d$-dimensional tangent space to $\mathcal{N}$ at $x \in \mathcal{N}$; accordingly, $T_x\mathcal{N}^\perp$ denotes the normal tangent space.

Lemma 3. The differential $dp(x) \in \mathbb{R}^{D \times D}$ at $x \in \mathcal{N}$ of the projection operator $p : \mathbb{R}^D \to \mathcal{N}$ is
$$dp(x)u = \begin{cases} u & u \in T_x\mathcal{N} \\ 0 & u \in T_x\mathcal{N}^\perp \end{cases} \qquad (9)$$
That is, $dp(x)$ is the orthogonal projection onto the tangent space of $\mathcal{N}$ at $x$.

Proof. First, consider the squared distance function to $\mathcal{N}$ defined by $\eta(x) = \frac{1}{2} \min_{x' \in \mathcal{N}} \|x - x'\|^2$. The envelope theorem implies that $\nabla \eta(x) = x - p(x)$. Differentiating both sides and rearranging, we get $dp(x) = I_D - \nabla^2 \eta(x)$. As proved in Ambrosio & Soner (1994) (Theorem 3.1), $\nabla^2 \eta(x)$ is the orthogonal projection onto $T_x\mathcal{N}^\perp$.

Let $x = f(z) \in \mathcal{N}$. Since $x \in \mathcal{N}$ we have $p(x) = x$. Condition (iii) asserts that $g(y) = f^{-1}(p(y))$; taking the derivative at $y = x$ we get $dg(x) = df^{-1}(x)\,dp(x)$. Lemma 3 implies that $dp(x) = AA^T$, since $AA^T$ is the orthogonal projection onto $T_x\mathcal{N}$. Furthermore, $df^{-1}(x)$ restricted to $\mathrm{Im}(A)$ is $A^T$. Putting this together we get $B = dg(x) = A^T A A^T = A^T$. This implies that $BB^T = I_d$ and that $B = A^+ = A^T$, which concludes the proof of Theorem 1.

Proof of Lemma 1. Writing the SVD of $A = U\Sigma V^T$, where $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_d)$ holds the singular values of $A$, we get that $\sum_{i=1}^{d} \sigma_i^2 v_i^2 = 1$ for all $v \in S^{d-1}$. Plugging in $v = e_j$, $j \in [d]$ (the standard basis), we get that $\sigma_i = 1$ for all $i \in [d]$, and $A = UV^T$ is orthogonal as claimed.

Proof of Lemma 2. Let $U = [A, V]$, $V \in \mathbb{R}^{D \times (D-d)}$, be a completion of $A$ to an orthogonal matrix in $\mathbb{R}^{D \times D}$. Now, $I_d = BUU^TB^T = I_d + BVV^TB^T$, and since $BVV^TB^T \succeq 0$, this means that $BV = 0$; that is, $B$ annihilates the orthogonal complement of the column space of $A$. A direct computation shows that $BU = A^TU$, which in turn implies $B = A^T = A^+$.

Implementation. Implementing the losses in equation 6 and equation 7 requires making a choice for the probability densities and approximating the expectations. We take $P_{iso}(\mathbb{R}^d)$ to be either a uniform distribution or a Gaussian fit to the latent codes $g(\mathcal{X})$; $P(\mathcal{M})$ is approximated as the uniform distribution on $\mathcal{X}$, as mentioned above. The expectations are estimated using Monte-Carlo sampling: at each iteration we draw samples $\hat{x} \in \mathcal{X}$, $\hat{z} \sim P_{iso}(\mathbb{R}^d)$, $\hat{u} \sim P(S^{d-1})$ and use the approximations
$$L_{iso}(\theta) \approx \big( \|df(\hat{z})\hat{u}\| - 1 \big)^2, \qquad L_{piso}(\phi) \approx \big( \|\hat{u}^T dg(\hat{x})\| - 1 \big)^2.$$
The right differential product $df(\hat{z})\hat{u}$ and the left differential product $\hat{u}^T dg(\hat{x})$ are computed using forward and backward mode automatic differentiation (resp.); their derivatives with respect to the network parameters $\theta, \phi$ are computed by another backward mode automatic differentiation pass.
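Concretely, a minimal PyTorch sketch of these single-sample estimators, assuming `f` and `g` act on single points and using `torch.autograd.functional.jvp`/`vjp` for the forward- and reverse-mode products (a reading of the procedure above, not the authors' code):

```python
import torch
from torch.autograd.functional import jvp, vjp

def sphere_sample(d):
    # u ~ P(S^{d-1}): normalize a standard Gaussian vector
    u = torch.randn(d)
    return u / u.norm()

def iso_loss(f, z_hat):
    # (||df(z)u|| - 1)^2 via a forward-mode (Jacobian-vector) product
    u = sphere_sample(z_hat.shape[0])
    _, Au = jvp(f, (z_hat,), (u,), create_graph=True)   # Au = df(z)u, shape (D,)
    return (Au.norm() - 1.0) ** 2

def piso_loss(g, x_hat, d):
    # (||u^T dg(x)|| - 1)^2 via a reverse-mode (vector-Jacobian) product
    u = sphere_sample(d)
    _, uB = vjp(g, x_hat, u, create_graph=True)         # uB = u^T dg(x), shape (D,)
    return (uB.norm() - 1.0) ** 2
```

The total objective of equation 8 is then `rec_loss(...) + lambda_iso * (iso_loss(...) + piso_loss(...))`, averaged over the minibatch; `create_graph=True` lets the penalties be backpropagated to the network parameters.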
4 EXPERIMENTS

4.1 EVALUATION

We start by evaluating the effectiveness of our suggested I-AE regularizer, addressing the following questions: (i) does our suggested loss $L(\theta, \phi)$ in equation 8 drive I-AE training to converge to an isometry? (ii) What is the effect of the $L_{piso}$ term? In particular, does it encourage better manifold approximations, as conjectured?

To that end, we examined I-AE training on data points $\mathcal{X}$ sampled uniformly from 3D surfaces with known global parameterizations. Figure 3 shows a qualitative comparison of the learned embeddings for various AE regularization techniques: vanilla autoencoder (AE); contractive autoencoder (CAE) (Rifai et al., 2011b); contractive autoencoder with decoder weights tied to the encoder weights (TCAE) (Rifai et al., 2011a); gradient penalty on the decoder (RAE-GP) (Ghosh et al., 2020); and denoising autoencoder with Gaussian noise (DAE) (Vincent et al., 2010). For fairness in evaluation, all methods were trained using the same training hyper-parameters. See the Appendix for the complete experiment details, including the mathematical formulation of the different AE regularizers. In addition, we compared against popular classic manifold learning techniques: U-MAP (McInnes et al., 2018), t-SNE (Maaten & Hinton, 2008) and LLE (Roweis & Saul, 2000). The results demonstrate that I-AE is able to learn an isometric embedding, showing some of the advantages of our method: sampling density and distances between input points are preserved in the learned low dimensional space.

In addition, for the AE methods, we quantitatively evaluate how close the learnt decoder is to an isometry. For this purpose, we triangulate a grid of planar points $\{z_i\} \subset \mathbb{R}^2$ and denote by $\{e_{ij}\}$ the triangle edges incident to grid points $z_i$ and $z_j$. We then measure the edge length ratios $l_{ij} = \|f(z_i) - f(z_j)\| / \|e_{ij}\|$, expected to be $\approx 1$ for all edges $e_{ij}$ under an isometry. In Table 1 we log the standard deviation (Std) of $\{l_{ij}\}$ for I-AE compared to the other regularized AEs. For a fair comparison, we scaled $z_i$ so that the mean of $l_{ij}$ is 1 in all experiments. As can be seen in the table, the distribution of $\{l_{ij}\}$ for I-AE is significantly more concentrated than for the different AE baselines.

Finally, although $L_{iso}$ is already responsible for learning an isometric decoder, the pseudo-inverse encoder (enforced by the loss $L_{piso}$) helps it converge to simpler solutions. We ran AE training with and without the $L_{piso}$ term. Figure 4 shows in gray the learnt decoder surface $\mathcal{N}$ without $L_{piso}$ (left), containing extra (unnatural) surface parts compared to the learnt surface with $L_{piso}$ (right). In both cases we expect (and achieve) a decoder approximating an isometry that passes through the input data points.
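A sketch of this distortion measure, assuming the triangulation is given as a list of index pairs (an illustrative helper, not from the paper):

```python
import torch

def edge_ratio_std(f, z, edges):
    # z: (m, 2) planar grid points; edges: list of (i, j) pairs from the triangulation
    i = torch.tensor([e[0] for e in edges])
    j = torch.tensor([e[1] for e in edges])
    l = (f(z[i]) - f(z[j])).norm(dim=1) / (z[i] - z[j]).norm(dim=1)
    l = l / l.mean()   # rescale so the mean ratio is 1, as in Table 1
    return l.std()
```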
Nevertheless, the pseudo-inverse loss restricts some of the degrees of freedom of the encoder, which in turn leads to a simpler solution.

4.2 DATA VISUALIZATION

In this experiment we evaluate our method on the task of high dimensional data visualization, i.e., reducing high dimensional data to a two dimensional space. Usually the data is not assumed to lie on a manifold of such a low dimension, and it is therefore impossible to preserve all of its geometric properties. A common artifact when squeezing higher dimensional data into the plane is crowding (Maaten & Hinton, 2008), that is, planar embedded points crowding around the origin. We evaluate our method on three standard image datasets: MNIST (LeCun, 1998) (60k handwritten digits), Fashion-MNIST (Xiao et al., 2017) (60k Zalando article images) and COIL20 (Nene et al., 1996) (20 different objects, each imaged at 72 even rotations). For baselines we take: vanilla AE; CAE; RAE-GP; DAE; U-MAP and t-SNE. We use the same architecture for all autoencoder methods on each dataset. MNIST and FMNIST were evaluated in two scenarios: (i) both encoder and decoder are fully-connected (MLP) networks; and (ii) both encoder and decoder are convolutional neural networks (CNN). For the COIL20 dataset, both encoder and decoder are CNNs. Full implementation details and hyper-parameter values can be found in the Appendix.

The results are presented in Figure 5, where each embedded point $z$ is colored by its ground-truth class/label. We make several observations. First, on all the datasets our method is more resilient to crowding compared to the baseline AEs, and provides a more even spread. U-MAP and t-SNE produce better separated clusters. However, this separation can come at a cost: see the COIL20 result (third row) and blow-ups of three of the classes (bottom row). In this dataset we expect evenly spaced points that correspond to the even rotations of the objects in the images. Note (in the blow-ups) that U-MAP maps the three classes on top of each other (non-injectivity of the "encoder"); t-SNE is somewhat better but does not preserve well the distances between pairs of data points (we expect them to be more or less equidistant in this dataset). In I-AE the rings are better separated and the points are more equidistant; the baseline AEs tend to densify the points near the origin. Lastly, considering the inter- and intra-class variations in the MNIST and FMNIST datasets, we are not sure that isometric embeddings should be expected to produce strongly separated clusters as in U-MAP and t-SNE (e.g., think about similar digits of different classes and dissimilar digits of the same class, with distances measured in the Euclidean norm).

4.3 DOWNSTREAM CLASSIFICATION

To quantitatively evaluate the unsupervised low-dimensional embeddings computed with I-AE, we performed the following experiment: we trained simple classifiers on the embedded vectors computed by I-AE and the baseline AEs and compared their performance (i.e., accuracy). Note that the process of learning the embedding is unsupervised and completely oblivious to the labels, which are used solely for training and testing the classifiers. We evaluate on the same datasets as in Section 4.2: for MNIST and FMNIST we use the standard train-test split, and for COIL20 we split 75%-25% randomly. As AE baselines we take vanilla AE, CAE, DAE and RAE-GP, as described above. We repeat each experiment with 3 different latent dimensions, {16, 64, 256}, and use two simple classification algorithms: linear support vector machines (SVM) (Cortes & Vapnik, 1995) and K-nearest neighbors (K-NN) with K = 5; a sketch of this protocol follows.
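A minimal scikit-learn sketch of the frozen-embedding evaluation (names are illustrative; `LinearSVC` is one plausible choice of linear SVM):

```python
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier

def downstream_accuracy(z_train, y_train, z_test, y_test):
    # z_*: embeddings from a frozen, unsupervised encoder; y_*: held-out labels
    svm = LinearSVC().fit(z_train, y_train)
    knn = KNeighborsClassifier(n_neighbors=5).fit(z_train, y_train)
    return svm.score(z_test, y_test), knn.score(z_test, y_test)
```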
Table 2 logs the results: for both types of classifiers, I-AE outperforms the baseline AEs in almost all combinations, with the SVM experiments demonstrating larger margins in favor of I-AE. The K-NN results indicate that the Euclidean metric captures similarity in our embedding, and the SVM results, especially on the MNIST and COIL20 datasets, indicate that I-AE is able to embed the data in an arguably simpler, linearly separable manner. The very high classification rates on COIL20 are probably due to the size and structure of this dataset. Nevertheless, with SVM, already in 16 dimensions I-AE provides an accuracy of 95%, a 5% margin over second place.

4.4 HYPER-PARAMETER SENSITIVITY

To evaluate the effect of $\lambda_{iso}$ on the output, we compared the visualizations and optimized loss values on MNIST and FMNIST, trained with the same CNN architecture as in Section 4.2 with $\lambda_{iso} \in \{0, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1\}$. Figure 6 shows the different visualization results as well as $L_{rec}$, $L_{iso}$, $L_{piso}$ as functions of $\lambda_{iso}$. As can be seen, on both datasets the visualizations and losses are stable for $\lambda_{iso}$ values between 0.01 and 0.5, while a significant change to the embedding is noticeable at 0.75. The trends in the loss values are also rather stable: $L_{iso}$ and $L_{piso}$ start very high for the regular AE, i.e., $\lambda_{iso} = 0$, and quickly stabilize. As for $L_{rec}$, on FMNIST we see a steady increase, while on MNIST it also starts with a steady increase until $\lambda_{iso}$ reaches 0.75, after which it becomes less stable, which is also noticeable in the visualizations.

Figure 6: Sensitivity to hyper-parameters. Top: visualizations of the MNIST (1st row) and FMNIST (2nd row) datasets trained with different $\lambda_{iso}$ values. Bottom: plots of the final train losses as a function of $\lambda_{iso}$; left to right: $L_{rec}$ (linear scale), $L_{iso}$ (log scale), and $L_{piso}$ (log scale).

5 CONCLUSIONS

We have introduced I-AE, a regularizer for autoencoders that promotes isometry of the decoder and pseudo-inversion of the encoder. Our goal was two-fold: (i) producing a favorable low dimensional manifold approximation to high dimensional data, isometrically parameterized so as to preserve, as much as possible, its geometric properties; and (ii) avoiding complex isometric solutions based on the notion of pseudo-inverse. Our regularizers are simple to implement and can be easily incorporated into existing autoencoder architectures. We have tested I-AE on common manifold learning tasks, demonstrating the usefulness of isometric autoencoders. An interesting avenue for future work is to consider task (ii) from section 1, namely incorporating the I-AE losses into a probabilistic model and examining the potential benefits of the isometry prior for generative models. One motivation is the fact that isometries push probability distributions forward by a simple change of coordinates, $P(z) = P(f(z))$.
A APPENDIX

A.1 IMPLEMENTATION DETAILS

All experiments were conducted on a Tesla V100 Nvidia GPU using the PYTORCH framework Paszke et al. (2017).

A.1.1 NOTATION

Table 3 describes the notation for the different network layers.

A.1.2 EVALUATION

Architecture. We used an autoencoder consisting of 5 FC 256 layers followed by a LIN 2 layer for the encoder; similarly, 5 FC 256 layers followed by a LIN 3 layer were used for the decoder.

Training details. All methods were trained for a relatively long period of 100K epochs. Training was done with the ADAM optimizer Kingma & Ba (2014), using a fixed learning rate of 0.001 and a full batch. The I-AE parameter was set to $\lambda_{iso} = 0.01$.

Baselines. The following regularizers were used as baselines: contractive autoencoder (CAE) Rifai et al. (2011b); contractive autoencoder with decoder weights tied to the encoder weights (TCAE) Rifai et al. (2011a); gradient penalty on the decoder (RAE-GP) Ghosh et al. (2020); denoising autoencoder with Gaussian noise (DAE) Vincent et al. (2010). For both CAE and TCAE the regularization term is $\|dg(x)\|^2$; for RAE-GP it is $\|df(z)\|^2$. For U-MAP McInnes et al. (2018), we set the number of neighbours to 30. For t-SNE Maaten & Hinton (2008), we set perplexity = 50.

A.1.3 DATA VISUALIZATION

Architecture. Table 4 lists the complete architecture details for this experiment. Both MNIST and FMNIST were trained with FC-NN and S-CNN, and COIL20 was trained with L-CNN.

Training details. Training was done using the ADAM optimizer Kingma & Ba (2014); the rest of the training details are given in Table 5.

Baselines. The following regularizers were used as baselines: contractive autoencoder (CAE) Rifai et al. (2011b); gradient penalty on the decoder (RAE-GP) Ghosh et al. (2020); denoising autoencoder with Gaussian noise (DAE) Vincent et al. (2010). For CAE the regularization term is $\|dg(x)\|^2$; for RAE-GP it is $\|df(z)\|^2$. We used the official U-MAP implementation McInnes et al. (2018) with random_state = 42, and the Ulyanov (2016) multicore implementation of t-SNE Maaten & Hinton (2008) with default parameters.

A.2 ADDITIONAL EXPERIMENTS

A.2.1 GENERALIZATION IN HIGH DIMENSIONAL SPACE

Next, we evaluate how well our suggested isometric prior induces manifolds that generalize well to unseen data. We experimented with three different image datasets: MNIST (LeCun, 1998); CIFAR-10 (Krizhevsky et al., 2009); and CelebA (Liu et al., 2015). We quantitatively estimate method performance by measuring the L2 distance and the Fréchet Inception Distance (FID) Heusel et al. (2017) on a held-out test set. For each dataset, we used the official train-test splits. For comparison, we selected the following relevant AE-based baselines: vanilla AE (AE); autoencoder trained with weight decay (AEW); contractive autoencoder (CAE); autoencoder with spectral weight normalization (RAE-SN); and autoencoder with L2 regularization on the decoder weights (RAE-L2). RAE-L2 and RAE-SN were recently successfully applied to this data in (Ghosh et al., 2020), demonstrating state-of-the-art performance on this task. In addition, we compare against the Wasserstein autoencoder (WAE) Tolstikhin et al. (2018), chosen as state of the art among generative autoencoders. For evaluation fairness, all methods were trained using the same training hyper-parameters: network architecture, optimizer settings, batch size, number of training epochs and learning rate scheduling. See the appendix for specific hyper-parameter values.
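For concreteness, a PyTorch sketch of the evaluation architecture above (the ReLU activation is an assumption; the paper does not state the nonlinearity):

```python
import torch
import torch.nn as nn

def mlp(sizes):
    # "FC k" = linear layer with k units + activation; the final "LIN k" layer is purely linear
    layers = []
    for a, b in zip(sizes[:-2], sizes[1:-1]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    layers.append(nn.Linear(sizes[-2], sizes[-1]))
    return nn.Sequential(*layers)

# Section 4.1 evaluation setup: 3D surfaces (D = 3) embedded in d = 2
encoder = mlp([3] + [256] * 5 + [2])   # 5 x FC 256, then LIN 2
decoder = mlp([2] + [256] * 5 + [3])   # 5 x FC 256, then LIN 3
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=0.001)
```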
In addition, we generated a validation set out of the training set using 10k samples for the MNIST and CIFAR-10 experiments, whereas for the CelebA experiment we used the official validation set. After each training epoch, we evaluated the L2 reconstruction loss on the validation set and chose the final network weights to be the ones achieving the minimum reconstruction error. We experimented with two variants of the I-AE regularizers: $L_{piso}$ alone and $L_{piso} + L_{iso}$. Table 7 logs the results. Note that I-AE produced results competitive with the current state of the art on this task.

Architecture. For all methods, we used an autoencoder with convolutional and transposed-convolutional layers. Table 6 lists the complete details.

Training details. Training was done with the ADAM optimizer Kingma & Ba (2014), using a learning rate of 0.0005 and batch size 100. The I-AE parameter was set to $\lambda_{iso} = 0.1$.
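A sketch of the validation-based checkpoint selection described above (the training and evaluation routines are hypothetical, caller-supplied placeholders):

```python
import torch

def select_best(encoder, decoder, optimizer, train_loader, val_loader,
                train_one_epoch, eval_l2, num_epochs, path="best.pt"):
    # train_one_epoch / eval_l2 are caller-supplied callables (hypothetical here)
    best = float("inf")
    for _ in range(num_epochs):
        train_one_epoch(encoder, decoder, optimizer, train_loader)
        val = eval_l2(encoder, decoder, val_loader)
        if val < best:   # keep the weights with minimal validation reconstruction
            best = val
            torch.save({"enc": encoder.state_dict(),
                        "dec": decoder.state_dict()}, path)
    return best
```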
1. What is the focus and contribution of the paper on autoencoders?
2. What are the strengths of the proposed approach, particularly in terms of preserving distances and angles?
3. Do you have any concerns regarding the desirability of isometric decoders for modeling data on a manifold?
4. How does the method evenly sample the manifold, and what are the implications for downstream classification tasks?
5. What are the minor points of confusion regarding notation and projection operators?
6. Can you provide more information on estimating the L_iso term and the role of alternating minimization?
Review
Review Update: I appreciate the authors addressing my concerns. I have increased my score accordingly.

Original Review: This paper describes a new type of regularization for the parameters of an autoencoder, one that forces the decoder to be an isometry. The authors present conditions that need to be satisfied by the encoder and decoder parameters, and show empirically that the regularization terms they propose ensure that the resulting autoencoder has an isometric decoder. The paper is well written and easy to follow.

While the authors assert that forcing the decoder to be an isometry is desirable since isometries preserve distances and angles, it is not clear why that is a desirable property when modeling data on a manifold. Distances between points on a data manifold are not usually measured through L2 distances in a latent dimension, and it is not clear why one should require that L2 distances in the high dimensional space be the same as distances in the latent space. The numerical results on reconstruction error that the authors present in the appendix do not indicate any reason to prefer isometric AEs over the other baselines considered. In case there is a setting where isometric AEs can be shown to model the data manifold better than regular AEs, that is not highlighted in the current draft.

The authors claim that isometric autoencoders would "evenly sample the manifold", which is a little confusing, since the sampling of the data manifold is separate from the technique used to model the data (regular AEs vs isometric AEs). The experimental results also do not indicate how the embeddings learned using the proposed method perform on downstream classification tasks, for instance. This comparison would be useful for assessing the usefulness of the embeddings.

A few minor points of confusion: the notation f^{-1} is a little misleading, since the encoder is not necessarily an invertible function from R^d to R^D. If the encoder mapping is restricted to the range of f then this notation is more appropriate. The projection operator that is used to define the pseudo-inverse of the encoder is not necessarily a function, since there could possibly be many points on the manifold that correspond to the same L2 distance from the point being projected. Are there further assumptions on the structure of the data manifold that prevent this from being the case?

Estimating the L_iso term seems to require a distribution over the latent space R^d, which the authors say is computed using a fit of the latent codes g(x), x \in \cal X. Are the latent codes computed using the current estimate of the encoder? If so, is there some sort of alternating minimization happening, which holds the current estimate of the encoder fixed while computing the isometric regularization? If not, how are the latent codes computed?
ICLR
Title
Isometric Autoencoders

Abstract
High dimensional data is often assumed to be concentrated on or near a low-dimensional manifold. Autoencoders (AE) are a popular technique for learning representations of such data by pushing it through a neural network with a low dimension bottleneck while minimizing a reconstruction error. Using high capacity AEs often leads to a large collection of minimizers, many of which represent a low dimensional manifold that fits the data well but generalizes poorly. Two sources of bad generalization are: extrinsic, where the learned manifold possesses extraneous parts that are far from the data; and intrinsic, where the encoder and decoder introduce arbitrary distortion in the low dimensional parameterization. An approach taken to alleviate these issues is to add a regularizer that favors a particular solution; common regularizers promote sparsity, small derivatives, or robustness to noise. In this paper, we advocate an isometry (i.e., local distance preserving) regularizer. Specifically, our regularizer encourages: (i) the decoder to be an isometry; and (ii) the encoder to be the decoder's pseudo-inverse, that is, the encoder extends the inverse of the decoder to the ambient space by orthogonal projection. In a nutshell, (i) and (ii) fix both the intrinsic and extrinsic degrees of freedom and provide a non-linear generalization of principal component analysis (PCA). Experimenting with the isometry regularizer on dimensionality reduction tasks produces useful low-dimensional data representations.

1 INTRODUCTION

A common assumption is that high dimensional data $\mathcal{X} \subset \mathbb{R}^D$ is sampled from some distribution $p$ concentrated on, or near, some lower $d$-dimensional submanifold $\mathcal{M} \subset \mathbb{R}^D$, where $d < D$. The task of estimating $p$ can therefore be decomposed into: (i) approximating the manifold $\mathcal{M}$; and (ii) approximating $p$ restricted to, or concentrated near, $\mathcal{M}$. In this paper we focus on task (i), mostly known as manifold learning.

A common approach to approximating the $d$-dimensional manifold $\mathcal{M}$, e.g., in (Tenenbaum et al., 2000; Roweis & Saul, 2000; Belkin & Niyogi, 2002; Maaten & Hinton, 2008; McQueen et al., 2016; McInnes et al., 2018), is to embed $\mathcal{X}$ in $\mathbb{R}^d$. This is often done by first constructing a graph $G$ where nearby samples in $\mathcal{X}$ are connected by edges, and second, optimizing for the locations of the samples in $\mathbb{R}^d$, striving to minimize edge length distortions in $G$.

Autoencoders (AE) can also be seen as a method for learning low dimensional manifold representations of high dimensional data $\mathcal{X}$. AEs are designed to reconstruct $\mathcal{X}$ as the image of its low dimensional embedding. When restricted to linear encoders and decoders, AEs learn linear subspaces; with a mean squared reconstruction loss they reproduce principal component analysis (PCA). Using higher capacity neural networks as the encoder and decoder allows complex manifolds to be approximated. To avoid overfitting, different regularizers are added to the AE loss. Popular regularizers include sparsity promotion (Ranzato et al., 2007; 2008; Glorot et al., 2011), contraction or penalization of large derivatives (Rifai et al., 2011a;b), and denoising (Vincent et al., 2010; Poole et al., 2014). Recent AE regularizers directly promote distance preservation of the encoder (Pai et al., 2019; Peterfreund et al., 2020).

In this paper we advocate a novel AE regularization promoting isometry (i.e., local distance preservation), called Isometric-AE (I-AE). Our key idea is to promote the decoder to be isometric, and the encoder to be its pseudo-inverse.
Given an isometric decoder $\mathbb{R}^d \to \mathbb{R}^D$, there is no well-defined inverse $\mathbb{R}^D \to \mathbb{R}^d$; we define the pseudo-inverse to be a projection onto the image of the decoder composed with the inverse of the decoder restricted to its image. Locally, the I-AE regularization therefore encourages: (i) the differential of the decoder $A \in \mathbb{R}^{D \times d}$ to be an isometry, i.e., $A^T A = I_d$, where $I_d$ is the $d \times d$ identity matrix; and (ii) the differential of the encoder, $B \in \mathbb{R}^{d \times D}$, to be the pseudo-inverse (now in the standard linear algebra sense) of the differential of the decoder, namely $B = A^+$. In view of (i), this implies $B = A^T$.

This means that locally our decoder and encoder behave like PCA, where the encoder and decoder are linear transformations satisfying (i) and (ii); that is, the PCA encoder can be seen as a composition of an orthogonal projection onto the linear subspace spanned by the decoder, followed by an orthogonal transformation (isometry) to the low dimensional space.

In a sense, our method can be seen as a version of denoising/contractive AEs (DAE/CAE, respectively). DAE and CAE promote a projection from the ambient space onto the data manifold, but can distort distances and be non-injective. Locally, using differentials again, projecting onto the learned manifold means $(AB)^2 = AB$. Indeed, as can readily be checked, conditions (i) and (ii) above imply $A(BA)B = AB$. This means that I-AE also belongs to the same class as DAE/CAE, capturing the variations in tangent directions of the data $\mathcal{M}$ while ignoring orthogonal variations, which often represent noise (Vincent et al., 2010; Alain & Bengio, 2014). The benefit of I-AE is that its projection onto the data manifold is locally an isometry, preserving distances and sampling the learned manifold evenly. That is, I-AE does not shrink or expand the space; locally, it can be imagined as an orthogonal linear transformation.

The inset shows the results of a simple experiment comparing a contractive AE (CAE, bottom) and an isometric AE (I-AE, top). Both AEs are trained on the green data points; the red arrows depict the projection of points (in blue) in the vicinity of the data onto the learned manifold (in black), as calculated by applying the encoder followed by the decoder. Note that CAE indeed projects onto the learned manifold, but not evenly, tending to shrink space around data points; in contrast, I-AE provides a more even sampling of the learned manifold.

Experiments confirm that optimizing the I-AE loss results in a close-to-isometric encoder/decoder explaining the data. We further demonstrate the efficacy of I-AE for dimensionality reduction on different standard datasets, showing its benefits over manifold learning and other AE baselines.

2 RELATED WORKS

Manifold learning. Manifold learning generalizes classic dimensionality reduction methods such as PCA (F.R.S., 1901) and MDS (Kruskal, 1964; Sammon, 1969) by aiming to preserve the local geometry of the data. Tenenbaum et al. (2000) use the nn-graph to approximate the geodesic distances over the manifold, followed by MDS to preserve them in the lower dimension. Roweis & Saul (2000); Belkin & Niyogi (2002); Donoho & Grimes (2003) use spectral methods to minimize different distortion energy functions over the graph matrix. Coifman et al. (2005); Coifman & Lafon (2006) approximate heat diffusion over the manifold by a random walk over the nn-graph, to obtain a robust distance measure on the manifold.
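To make the PCA analogy above concrete, a small NumPy check (illustrative only) that the linear PCA encoder/decoder pair satisfies conditions (i) and (ii) and the induced projection identity $(AB)^2 = AB$:

```python
import numpy as np

# Linear I-AE = PCA: the decoder differential A holds the top-d principal
# directions as orthonormal columns, and the encoder differential is B = A^T.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))               # toy data, D = 10
d = 3
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
A = Vt[:d].T                                  # (D, d) decoder differential
B = A.T                                       # (d, D) encoder differential
assert np.allclose(A.T @ A, np.eye(d))        # condition (i): A^T A = I_d
assert np.allclose(B @ B.T, np.eye(d))        # B B^T = I_d
P = A @ B
assert np.allclose(P @ P, P)                  # (AB)^2 = AB: orthogonal projection
```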
1. What is the main contribution of the paper in terms of manifold learning?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its theoretical foundation and experimental results?
3. Do you have any concerns about the choice of data used for experiments?
4. How does the reviewer assess the clarity and quality of the paper's content?
Review
Review
The paper suggests a novel autoencoder-based method for manifold learning, by encouraging the decoder to be an isometry and the encoder to locally be a pseudo-inverse of the decoder. It is noted that for a linear architecture this gives PCA; therefore, this can be seen as a nonlinear PCA approach. In Theorem 1, the authors claim that for the encoder-decoder solution to have the desired properties, certain equalities have to be satisfied by the local differential matrices of the encoder and decoder. This gives rise to a loss function composed of 3 parts: a reconstruction loss (as usual with autoencoders), plus a loss penalizing non-isometric decoders, plus a loss penalizing an encoder that is not a pseudo-inverse of the decoder. This loss function is claimed to be the main technical novelty of the paper.

In the experimental part, the authors compare the merits of this approach on synthetically generated low dimensional manifolds in high dimensional ambient spaces against other standard manifold learning algorithms, and show that the paper's method outperforms other methods using a measure of distortion of triangle edges on a grid. They also experiment with "real data" (e.g. MNIST), showing the merits of the proposed algorithm when visualizing the 2-dimensional bottleneck of the autoencoder. The comparison here is against other algorithms for high dimensional data visualization.

The overall idea and theory seem interesting. The experiments are a bit disappointing. For the synthetic data, I am not sure I understand why they did not choose something of high dimension. Maybe I am missing something, but would it be impossible to generate, say, a 50 dimensional manifold in 100 dimensions? Maybe the triangulation part will be challenging, but that is not the only way to compare between the various algorithms. As for the real data section (e.g. MNIST), I am not sure I see why you compare your algorithm against algorithms that are intended for 2-d visualization (e.g. t-SNE). Your algorithm does manifold learning. Why not, for instance, take all the images corresponding to some fixed digit (e.g. "3"), which is presumably close to a low (but definitely more than 2...) dimensional manifold, and see how well your manifold learning algorithm reconstructs them?

The editorial level of the paper is not very high, due to grammatical English mistakes. Here are examples (the list is not complete):
p. 1 "Autoencoder (AE) can also be seen" => "Autoencoders can also be seen" or "An autoencoder can also be seen..."
"AE is trying to reconstruct X..." - The present progressive tense is not suitable here. Maybe "AE's try to reconstruct"? Or "AE's are designed to reconstruct..." or "An AE reconstructs..."
p. 2 Manifold learning generalizeS
p. 4 "As-usual" => As usual
p. 5 "Does our suggested Loss... drives" => "drive"
p. 6 Why is "Denoising" capitalized?
"In addition, we compared versus..." => "...compared against..."
ICLR
Title Isometric Autoencoders Abstract High dimensional data is often assumed to be concentrated on or near a low-dimensional manifold. Autoencoders (AE) are a popular technique to learn representations of such data by pushing it through a neural network with a low dimension bottleneck while minimizing a reconstruction error. Using high capacity AE often leads to a large collection of minimizers, many of which represent a low dimensional manifold that fits the data well but generalizes poorly. Two sources of bad generalization are: extrinsic, where the learned manifold possesses extraneous parts that are far from the data; and intrinsic, where the encoder and decoder introduce arbitrary distortion in the low dimensional parameterization. An approach taken to alleviate these issues is to add a regularizer that favors a particular solution; common regularizers promote sparsity, small derivatives, or robustness to noise. In this paper, we advocate an isometry (i.e., local distance preserving) regularizer. Specifically, our regularizer encourages: (i) the decoder to be an isometry; and (ii) the encoder to be the decoder's pseudo-inverse, that is, the encoder extends the inverse of the decoder to the ambient space by orthogonal projection. In a nutshell, (i) and (ii) fix both intrinsic and extrinsic degrees of freedom and provide a non-linear generalization of principal component analysis (PCA). Experimenting with the isometry regularizer on dimensionality reduction tasks produces useful low-dimensional data representations. 1 INTRODUCTION A common assumption is that high dimensional data X ⊂ R^D is sampled from some distribution p concentrated on, or near, some lower d-dimensional submanifold M ⊂ R^D, where d < D. The task of estimating p can therefore be decomposed into: (i) approximate the manifold M; and (ii) approximate p restricted to, or concentrated near, M. In this paper we focus on task (i), mostly known as manifold learning. A common approach to approximate the d-dimensional manifold M, e.g., in (Tenenbaum et al., 2000; Roweis & Saul, 2000; Belkin & Niyogi, 2002; Maaten & Hinton, 2008; McQueen et al., 2016; McInnes et al., 2018), is to embed X in R^d. This is often done by first constructing a graph G where nearby samples in X are connected by edges, and second, optimizing for the locations of the samples in R^d striving to minimize edge length distortions in G. Autoencoders (AE) can also be seen as a method to learn a low dimensional manifold representation of high dimensional data X. AE are designed to reconstruct X as the image of its low dimensional embedding. When AE are restricted to linear encoders and decoders they learn linear subspaces; with a mean squared reconstruction loss they reproduce principal component analysis (PCA). Using higher capacity neural networks as the encoder and decoder allows complex manifolds to be approximated. To avoid overfitting, different regularizers are added to the AE loss. Popular regularizers include sparsity promoting (Ranzato et al., 2007; 2008; Glorot et al., 2011), contractive or penalizing large derivatives (Rifai et al., 2011a;b), and denoising (Vincent et al., 2010; Poole et al., 2014). Recent AE regularizers directly promote distance preservation of the encoder (Pai et al., 2019; Peterfreund et al., 2020). In this paper we advocate a novel AE regularization promoting isometry (i.e., local distance preservation), called Isometric-AE (I-AE). Our key idea is to promote the decoder to be isometric, and the encoder to be its pseudo-inverse.
Given an isometric decoder R^d → R^D, there is no well-defined inverse R^D → R^d; we define the pseudo-inverse to be a projection on the image of the decoder composed with the inverse of the decoder restricted to its image. Locally, the I-AE regularization therefore encourages: (i) the differential of the decoder A ∈ R^{D×d} to be an isometry, i.e., A^T A = I_d, where I_d is the d×d identity matrix; and (ii) the differential of the encoder, B ∈ R^{d×D}, to be the pseudo-inverse (now in the standard linear algebra sense) of the differential of the decoder A ∈ R^{D×d}, namely, B = A^+. In view of (i) this implies B = A^T. This means that locally our decoder and encoder behave like PCA, where the encoder and decoder are linear transformations satisfying (i) and (ii); that is, the PCA encoder can be seen as a composition of an orthogonal projection on the linear subspace spanned by the decoder, followed by an orthogonal transformation (isometry) to the low dimensional space. In a sense, our method can be seen as a version of denoising/contractive AEs (DAE/CAE, respectively). DAE and CAE promote a projection from the ambient space onto the data manifold, but can distort distances and be non-injective. Locally, using differentials again, projection on the learned manifold means (AB)^2 = AB. Indeed, as can be readily checked, conditions (i) and (ii) above imply A(BA)B = AB. This means that I-AE also belongs to the same class as DAE/CAE, capturing the variations in tangent directions of the data, M, while ignoring orthogonal variations which often represent noise (Vincent et al., 2010; Alain & Bengio, 2014). The benefit of I-AE is that its projection on the data manifold is locally an isometry, preserving distances and sampling the learned manifold evenly. That is, I-AE does not shrink or expand the space; locally, it can be imagined as an orthogonal linear transformation. The inset shows results of a simple experiment comparing contractive AE (CAE, bottom) and isometric AE (I-AE, top). Both AEs are trained on the green data points; the red arrows depict the projection of points (in blue) in the vicinity of the data onto the learned manifold (in black), calculated by applying the encoder followed by the decoder. Note that CAE indeed projects on the learned manifold but not evenly, tending to shrink space around data points; in contrast, I-AE provides a more even sampling of the learned manifold. Experiments confirm that optimizing the I-AE loss results in a close-to-isometric encoder/decoder explaining the data. We further demonstrate the efficacy of I-AE for dimensionality reduction on different standard datasets, showing its benefits over manifold learning and other AE baselines.
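To make the linear picture above concrete, here is a small numerical check, a sketch of our own in PyTorch rather than anything from the paper, confirming that a column-orthogonal A together with B = A^T satisfies conditions (i) and (ii), and that AB is an idempotent projection:

```python
import torch

D, d = 10, 3
# A random matrix with orthonormal columns via reduced QR: this plays the
# role of the decoder differential df(z), so A^T A = I_d holds by construction.
A, _ = torch.linalg.qr(torch.randn(D, d))
B = A.T  # candidate encoder differential

I_d = torch.eye(d)
print(torch.allclose(A.T @ A, I_d, atol=1e-6))             # (i)  isometry: A^T A = I_d
print(torch.allclose(B @ B.T, I_d, atol=1e-6))             # co-isometry: B B^T = I_d
print(torch.allclose(B, torch.linalg.pinv(A), atol=1e-5))  # (ii) B = A^+
P = A @ B  # differential of decoder∘encoder
print(torch.allclose(P @ P, P, atol=1e-5))                 # (AB)^2 = AB: a projection
```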
Stochastic neighbor embedding algorithms (Hinton & Roweis, 2003; Maaten & Hinton, 2008) capture the local geometry of the data as a mixture of Gaussians around each data point, and try to find a low dimensional mixture model by minimizing the KL-divergence. In a relatively recent work, McInnes et al. (2018) use iterative spectral and embedding optimization using fuzzy sets. Several works have tried to adapt classic manifold learning ideas to neural networks and autoencoders. Pai et al. (2019) suggest embedding high dimensional points into a low dimension with a neural network by constructing a metric between pairs of data points and minimizing the metric distortion energy. Kato et al. (2019) suggest learning an isometric decoder by using noisy latent variables; they prove that, under certain conditions, this encourages an isometric decoder. Peterfreund et al. (2020) suggest autoencoders that promote the isometry of the encoder over the data by approximating its differential Gram matrix using the sample covariance matrix. Zhan et al. (2018) encourage distance preserving autoencoders by minimizing a metric distortion energy in a common feature space. Modern autoencoders. There is an extensive literature on extending autoencoders to a generative model (task (ii) in section 1), that is, learning a probability distribution in addition to approximating the data manifold M. The variational autoencoder (VAE) Kingma & Welling (2014) and its variants Makhzani et al. (2015); Burda et al. (2016); Sønderby et al. (2016); Higgins et al. (2017); Tolstikhin et al. (2018); Park et al. (2019); Zhao et al. (2019) are examples of such methods. In essence, these methods augment the AE structure with a learned probabilistic model in the low dimensional (latent) space R^d that is used to approximate the probability P that generated the observed data X. More relevant to our work are recent works suggesting regularizers for deterministic autoencoders that, together with ex-post density estimation in latent space, form a generative model. Ghosh et al. (2020) suggested reducing the decoder degrees of freedom, either by regularizing the norm of the decoder weights or the norm of the decoder differential. Other regularizers of the differential of the decoder, aiming towards a deterministic variant of VAE, were recently suggested in Kumar & Poole (2020); Kumar et al. (2020). In contrast to our method, these methods do not regularize the encoder explicitly. 3 ISOMETRIC AUTOENCODERS We consider high dimensional data points X = {x_i}_{i=1}^n ⊂ R^D sampled from some probability distribution P(x) in R^D concentrated on or near some d dimensional submanifold M ⊂ R^D, where d < D. Our goal is to compute an isometric autoencoder (I-AE), defined as follows. Let g : R^D → R^d denote the encoder, and f : R^d → R^D the decoder; N is the learned manifold, i.e., the image of the decoder, N = f(R^d). I-AE is defined by the following requirements: (i) The data X is close to N. (ii) f is an isometry. (iii) g is the pseudo-inverse of f. Figure 2 is an illustration of I-AE. Let θ denote the parameters of f, and φ the parameters of g. We enforce the requirements (i)-(iii) by prescribing a loss function L(θ, φ) and optimizing it using standard stochastic gradient descent (SGD). We next break down the loss L into its different components. Condition (i) is promoted with the standard reconstruction loss in AE: L_rec(θ, φ) = (1/n) ∑_{i=1}^n ‖f(g(x_i)) − x_i‖^2, (1) where ‖·‖ is the 2-norm. Before handling conditions (ii), (iii), let us first define the notions of isometry and pseudo-inverse.
A differentiable mapping f between the Euclidean spaces R^d and R^D is a local isometry if it has an orthogonal differential matrix df(z) ∈ R^{D×d}: df(z)^T df(z) = I_d, (2) where I_d ∈ R^{d×d} is the identity matrix, and df(z)_{ij} = ∂f^i/∂z_j (z). A local isometry which is also a diffeomorphism is a global isometry. Restricting the decoder to an isometry is beneficial for several reasons. First, the Nash-Kuiper Embedding Theorem Nash (1956) asserts that non-expansive maps can be approximated arbitrarily well with isometries if D ≥ d + 1, and hence promoting an isometry does not limit the expressive power of the decoder. Second, the low dimensional representation of the data computed with an isometric encoder preserves the geometric structure of the data. In particular, volume, length, angles and probability densities are preserved between the low dimensional representation R^d and the learned manifold N. Lastly, for a fixed manifold N there is a huge space of possible decoders such that N = f(R^d). For isometric f, this space is reduced considerably: indeed, consider two isometries parameterizing N, i.e., f_1, f_2 : R^d → N. Then, since a composition of isometries is an isometry, we have that f_2^{-1} ∘ f_1 : R^d → R^d is a dimension-preserving isometry and hence a rigid motion. That is, all decoders of the same manifold are the same up to a rigid motion. For the encoder the situation is different. Since D > d the encoder g cannot be an isometry in the standard sense. Therefore we ask g to be the pseudo-inverse of f. To that end we define the projection operator p onto a submanifold N ⊂ R^D as p(x) = argmin_{x′ ∈ N} ‖x − x′‖. Note that the closest point is not generally unique; however, the Tubular Neighborhood Theorem (see e.g., Theorem 6.24 in Lee (2013)) implies uniqueness for points x sufficiently close to the manifold N. Definition 1. We say that g is the pseudo-inverse of f if g can be written as g = f^{-1} ∘ p, where p is the projection on N = f(R^d). Consequently, if g is the pseudo-inverse of an isometry f then it extends the standard notion of isometry by projecting every point on a submanifold N and then applying an isometry between the d-dimensional manifolds N and R^d. See Figure 2 for an illustration. First-order characterization. To encourage f, g to satisfy the (local) isometry and pseudo-inverse properties (resp.), we first provide a first-order (necessary) characterization using their differentials: Theorem 1. Let f be a decoder and g an encoder satisfying conditions (ii), (iii). Then their differentials A = df(z) ∈ R^{D×d}, B = dg(f(z)) ∈ R^{d×D} satisfy A^T A = I_d, (3) B B^T = I_d, (4) B = A^T. (5) The theorem asserts that the differentials of the encoder and decoder are orthogonal (rectangular) matrices, and that the differential of the encoder is the pseudo-inverse of the differential of the decoder. Before proving this theorem, let us first use it to construct the relevant losses for promoting the isometry of f and the pseudo-inverse g. We need to promote conditions (3), (4), (5). Since we want to avoid computing the full differentials A = df(z), B = dg(f(z)), we will replace (3) and (4) with stochastic estimations based on the following lemma: denote the unit (d−1)-sphere by S^{d−1} = {z ∈ R^d : ‖z‖ = 1}. Lemma 1. Let A ∈ R^{D×d}, where d ≤ D. If ‖Au‖ = 1 for all u ∈ S^{d−1}, then A is column-orthogonal, that is, A^T A = I_d.
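Lemma 1 is what licenses replacing the full differential constraints with random one-dimensional probes. A quick Monte-Carlo illustration (our own sketch; the matrix sizes are arbitrary) shows that ‖Au‖ is identically 1 over the sphere only for a column-orthogonal A:

```python
import torch

D, d, n = 10, 3, 100_000
u = torch.randn(n, d)
u = u / u.norm(dim=1, keepdim=True)  # n samples from P(S^{d-1})

A_orth, _ = torch.linalg.qr(torch.randn(D, d))  # column-orthogonal: A^T A = I_d
A_generic = torch.randn(D, d)                   # generic matrix

for name, A in [("orthogonal", A_orth), ("generic", A_generic)]:
    norms = (u @ A.T).norm(dim=1)  # ||A u|| for every sampled u
    print(name, norms.min().item(), norms.max().item())
# The orthogonal A gives norms pinned at 1 (up to float error); the generic A
# spreads over an interval, so the squared deviation from 1 detects it.
```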
Lemma 1 therefore yields the isometry promoting loss, encouraging (3): L_iso(θ) = E_{z,u} (‖df(z)u‖ − 1)^2, (6) where z ∼ P_iso(R^d), and P_iso(R^d) is a probability measure on R^d; u ∼ P(S^{d−1}), and P(S^{d−1}) is the standard rotation invariant probability measure on the (d−1)-sphere S^{d−1}. The pseudo-inverse promoting loss, encouraging (4), is L_piso(φ) = E_{x,u} (‖u^T dg(x)‖ − 1)^2, (7) where x ∼ P(M) and u ∼ P(S^{d−1}). As usual, the expectation with respect to P(M) is computed empirically using the data samples X. Lastly, (5) might seem challenging to enforce with neural networks; however, the orthogonality of A, B can be leveraged to replace this loss with a more tractable one asking that the encoder merely be the inverse of the decoder over its image: Lemma 2. Let A ∈ R^{D×d} and B ∈ R^{d×D}. If A^T A = I_d = B B^T and BA = I_d, then B = A^+ = A^T. Fortunately, this is already taken care of by the reconstruction loss: since a low reconstruction loss in equation 1 forces the encoder and the decoder to be the inverse of one another over the data manifold, i.e., g(f(z)) = z, it encourages BA = I_d and therefore, by Lemma 2, automatically encourages equation 5. Note that invertibility also implies bijectivity of the encoder/decoder restricted to the data manifold, pushing for global isometries (rather than local ones). Summing it all up, we define our loss for I-AE by L(θ, φ) = L_rec(θ, φ) + λ_iso (L_iso(θ) + L_piso(φ)), (8) where λ_iso is a parameter controlling the isometry-reconstruction trade-off. 3.1 DETAILS AND PROOFS. Let us prove Theorem 1, characterizing the relation of the differentials of isometries and their pseudo-inverses, A = df(z) ∈ R^{D×d}, B = dg(f(z)) ∈ R^{d×D}. First, by the definition of isometry (equation 2), A^T A = I_d. We denote by T_x N the d-dimensional tangent space to N at x ∈ N; accordingly, T_x N^⊥ denotes the normal tangent space. Lemma 3. The differential dp(x) ∈ R^{D×D} at x ∈ N of the projection operator p : R^D → N is dp(x)u = u for u ∈ T_x N, and dp(x)u = 0 for u ∈ T_x N^⊥. (9) That is, dp(x) is the orthogonal projection on the tangent space of N at x. Proof. First, consider the squared distance function to N defined by η(x) = (1/2) min_{x′∈N} ‖x − x′‖^2. The envelope theorem implies that ∇η(x) = x − p(x). Differentiating both sides and rearranging, we get dp(x) = I_D − ∇²η(x). As proved in Ambrosio & Soner (1994) (Theorem 3.1), ∇²η(x) is the orthogonal projection on T_x N^⊥. Let x = f(z) ∈ N. Since x ∈ N we have p(x) = x. Condition (iii) asserts that g(y) = f^{-1}(p(y)); taking the derivative at y = x we get dg(x) = df^{-1}(x) dp(x). Lemma 3 implies that dp(x) = A A^T, since A A^T is the orthogonal projection on T_x N. Furthermore, df^{-1}(x) restricted to Im(A) is A^T. Putting this together we get B = dg(x) = A^T A A^T = A^T. This implies that B B^T = I_d, and that B = A^+ = A^T. This concludes the proof of Theorem 1. Proof of Lemma 1. Writing the SVD of A = U Σ V^T, where Σ = diag(σ_1, …, σ_d) holds the singular values of A, we get that ∑_{i=1}^d σ_i² v_i² = 1 for all v ∈ S^{d−1}. Plugging in v = e_j, j ∈ [d] (the standard basis), we get that all σ_i = 1 for i ∈ [d], and A = U V^T is column-orthogonal as claimed. Proof of Lemma 2. Let U = [A, V], V ∈ R^{D×(D−d)}, be a completion of A to an orthogonal matrix in R^{D×D}. Now, I_d = B U U^T B^T = I_d + (BV)(BV)^T, and since (BV)(BV)^T ⪰ 0 this means that BV = 0; that is, B annihilates the orthogonal complement of the column space of A. A direct computation shows that BU = A^T U, which in turn implies B = A^T = A^+. Implementation. Implementing the losses in equation 6 and equation 7 requires making a choice for the probability densities and approximating the expectations; a sketch of one possible implementation is given below.
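As a concrete reference for the choices discussed next, here is a minimal PyTorch sketch of the two regularizers, assuming batched encoder/decoder modules acting on flattened (n, D) and (n, d) tensors; the function names are ours, and the authors' actual implementation may differ in detail:

```python
import torch

def unit_sphere(n, d):
    """n samples from the uniform measure P(S^{d-1})."""
    u = torch.randn(n, d)
    return u / u.norm(dim=1, keepdim=True)

def iae_regularizers(decoder, encoder, x, z):
    """Single-sample Monte-Carlo estimates of L_iso (eq. 6) and L_piso (eq. 7)."""
    d = z.shape[1]
    # L_iso: the product df(z)u is a forward-mode (Jacobian-vector) product.
    u = unit_sphere(z.shape[0], d)
    _, Ju = torch.autograd.functional.jvp(decoder, z, u, create_graph=True)
    l_iso = ((Ju.flatten(1).norm(dim=1) - 1.0) ** 2).mean()
    # L_piso: the product u^T dg(x) is a backward-mode (vector-Jacobian) product.
    v = unit_sphere(x.shape[0], d)
    _, vJ = torch.autograd.functional.vjp(encoder, x, v, create_graph=True)
    l_piso = ((vJ.flatten(1).norm(dim=1) - 1.0) ** 2).mean()
    return l_iso, l_piso
```

The total loss of equation 8 is then obtained as l_rec + λ_iso * (l_iso + l_piso); create_graph=True keeps the differential products differentiable so SGD can back-propagate through them into θ and φ.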
We take P_iso(R^d) to be either a uniform or a Gaussian distribution fit to the latent codes g(X); and P(M) is approximated as the uniform distribution on X, as mentioned above. The expectations are estimated using Monte-Carlo sampling. That is, at each iteration we draw samples x̂ ∈ X, ẑ ∼ P_iso(R^d), û ∼ P(S^{d−1}) and use the approximations L_iso(θ) ≈ (‖df(ẑ)û‖ − 1)^2 and L_piso(φ) ≈ (‖û^T dg(x̂)‖ − 1)^2. The right differential multiplication df(ẑ)û and the left differential multiplication û^T dg(x̂) are computed using forward and backward mode automatic differentiation (resp.). Their derivatives with respect to the networks' parameters θ, φ are computed by another backward mode automatic differentiation. 4 EXPERIMENTS 4.1 EVALUATION We start by evaluating the effectiveness of our suggested I-AE regularizer, addressing the following questions: (i) Does our suggested loss L(θ, φ) in equation 8 drive I-AE training to converge to an isometry? (ii) What is the effect of the L_piso term? In particular, does it encourage better manifold approximations, as conjectured? To that end, we examined I-AE training on data points X sampled uniformly from 3D surfaces with known global parameterizations. Figure 3 shows a qualitative comparison of the learned embeddings for various AE regularization techniques: vanilla autoencoder (AE); contractive autoencoder (CAE) (Rifai et al., 2011b); contractive autoencoder with decoder weights tied to the encoder weights (TCAE) (Rifai et al., 2011a); gradient penalty on the decoder (RAE-GP) (Ghosh et al., 2020); and denoising autoencoder with Gaussian noise (DAE) (Vincent et al., 2010). For fairness in evaluation, all methods were trained using the same training hyper-parameters. See the Appendix for the complete experiment details, including the mathematical formulation of the different AE regularizers. In addition, we compared against popular classic manifold learning techniques: U-MAP (McInnes et al., 2018), t-SNE (Maaten & Hinton, 2008) and LLE (Roweis & Saul, 2000). The results demonstrate that I-AE is able to learn an isometric embedding, showing some of the advantages of our method: sampling density and distances between input points are preserved in the learned low dimensional space. In addition, for the AE methods, we quantitatively evaluate how close the learnt decoder is to an isometry. For this purpose, we triangulate a grid of planar points {z_i} ⊂ R^2. We denote by {e_ij} the triangle edges connecting grid points z_i and z_j. We then measured the edge length ratios l_ij = ‖f(z_i) − f(z_j)‖ / ‖e_ij‖, expected to be ≈ 1 for all edges e_ij under an isometry. In Table 1 we log the standard deviation (Std) of {l_ij} for I-AE compared to other regularized AEs; for a fair comparison, we scaled z_i so the mean of l_ij is 1 in all experiments (a sketch of this measure appears below). As can be seen in the table, the distribution of {l_ij} for I-AE is significantly more concentrated than those of the different AE baselines. Finally, although L_iso is already responsible for learning an isometric decoder, the pseudo-inverse encoder (enforced by the loss L_piso) helps training converge to simpler solutions. We ran AE training with and without the L_piso term. Figure 4 shows in gray the learnt decoder surface, N, without L_piso (left), containing extra (unnatural) surface parts compared to the learnt surface with L_piso (right). In both cases we expect (and achieve) a decoder approximating an isometry that passes through the input data points.
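For reference, the edge-length distortion measure of Table 1 could be computed along these lines (our own sketch; it assumes edges is an (E, 2) index tensor listing the endpoints of the triangulated grid edges, whose construction is omitted):

```python
import torch

def edge_length_std(decoder, grid_z, edges):
    """Std of the edge-length ratios l_ij = ||f(z_i) - f(z_j)|| / ||e_ij||,
    rescaled so that the mean ratio is 1 (as done for Table 1)."""
    with torch.no_grad():
        fz = decoder(grid_z)  # (m, D): decoded grid points f(z_i)
    i, j = edges[:, 0], edges[:, 1]
    l = (fz[i] - fz[j]).norm(dim=1) / (grid_z[i] - grid_z[j]).norm(dim=1)
    l = l / l.mean()  # normalize so mean(l_ij) = 1
    return l.std().item()
```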
The pseudo-inverse loss nevertheless restricts some of the degrees of freedom of the encoder, which in turn leads to a simpler solution. 4.2 DATA VISUALIZATION In this experiment we evaluate our method on the task of high dimensional data visualization, i.e., reducing high dimensional data into a two dimensional space. Usually the data is not assumed to lie on a manifold of such a low dimension, and it is therefore impossible to preserve all of its geometric properties. A common artifact when squeezing higher dimensional data into the plane is crowding (Maaten & Hinton, 2008); that is, the embedded planar points crowd around the origin. We evaluate our method on three standard image datasets: MNIST (LeCun, 1998) (60k handwritten digits), Fashion-MNIST (60k Zalando article images) (Xiao et al., 2017) and COIL20 (Nene et al., 1996) (images of 20 different objects, each rotated through 72 even rotations). As baselines we take: vanilla AE; CAE; RAE-GP; DAE; U-MAP and t-SNE. We use the same architecture for all autoencoder methods on each dataset. MNIST and FMNIST were evaluated in two scenarios: (i) both encoder and decoder are fully-connected (MLP) networks; and (ii) both encoder and decoder are convolutional neural networks (CNN). For the COIL20 dataset, both encoder and decoder are CNNs. Full implementation details and hyper-parameter values can be found in the Appendix. The results are presented in Figure 5, where each embedded point z is colored by its ground-truth class/label. We make several observations. First, on all the datasets our method is more resilient to crowding compared to the baseline AEs, and provides a more even spread. U-MAP and t-SNE produce better separated clusters. However, this separation can come at a cost: see the COIL20 result (third row) and blow-ups of three of the classes (bottom row). In this dataset we expect evenly spaced points that correspond to the even rotations of the objects in the images. Note (in the blow-ups) that U-MAP maps the three classes on top of each other (non-injectivity of the "encoder"); t-SNE is somewhat better but does not preserve well the distances between pairs of data points (we expect them to be more or less equidistant in this dataset). In I-AE the rings are better separated and the points are more equidistant; the baseline AEs tend to densify the points near the origin. Lastly, considering the inter- and intra-class variations in the MNIST and FMNIST datasets, we are not sure that isometric embeddings are expected to produce strongly separated clusters as in U-MAP and t-SNE (e.g., think about similar digits of different classes and dissimilar digits of the same class, with distances measured in the Euclidean norm). 4.3 DOWNSTREAM CLASSIFICATION To quantitatively evaluate the unsupervised low-dimensional embedding computed with the I-AE, we performed the following experiment: we trained simple classifiers on the embedded vectors computed by I-AE and baseline AEs and compared their performance (i.e., accuracy). Note that the process of learning the embedding is unsupervised and completely oblivious to the labels, which are used solely for training and testing the classifiers. We evaluate on the same datasets as in Section 4.2: for MNIST and FMNIST we use the standard train-test split, and for COIL20 we split 75%-25% randomly. As AE baselines we take vanilla AE, CAE, DAE and RAE-GP, as described above; a sketch of this evaluation protocol follows below.
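To make the protocol concrete, here is a sketch of the downstream evaluation; the paper does not name a classifier implementation, so scikit-learn is our assumption, as are the function and variable names:

```python
import torch
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier

def downstream_accuracy(encoder, x_train, y_train, x_test, y_test):
    # Freeze the (unsupervised) encoder and embed both splits;
    # labels are used only by the classifiers, never by the AE.
    with torch.no_grad():
        z_tr = encoder(x_train).cpu().numpy()
        z_te = encoder(x_test).cpu().numpy()
    scores = {}
    for name, clf in [("linear SVM", LinearSVC()),
                      ("5-NN", KNeighborsClassifier(n_neighbors=5))]:
        clf.fit(z_tr, y_train)
        scores[name] = clf.score(z_te, y_test)
    return scores
```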
We repeat each experiment with 3 different latent dimensions, {16, 64, 256}, and use two different simple classification algorithms: linear support vector machines (SVM) (Cortes & Vapnik, 1995) and K-nearest neighbors (K-NN), with K = 5. Table 2 logs the results: for both types of classifiers, I-AE outperforms the baseline AEs in almost all combinations, with the SVM experiments demonstrating larger margins in favor of I-AE. The results of the K-NN indicate that the Euclidean metric captures similarity in our embedding, and the results of the SVM, especially on the MNIST and COIL20 datasets, indicate that I-AE is able to embed the data in an arguably simpler, linearly separable manner. The very high classification rates on COIL20 are probably due to the size and structure of this dataset. Nevertheless, with SVM, already in 16 dimensions I-AE provides an accuracy of 95%, with a 5% margin over the 2nd place. 4.4 HYPER-PARAMETER SENSITIVITY To evaluate the effect of λ_iso on the output, we compared the visualizations and optimized loss values on MNIST and FMNIST, trained with the same CNN architecture as in Section 4.2 with λ_iso ∈ {0, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1}. Figure 6 shows the different visualization results as well as L_rec, L_iso, L_piso as a function of λ_iso. As can be seen, on both datasets the visualizations and losses are stable for λ_iso values between 0.01 and 0.5, while a significant change to the embedding is noticeable at 0.75. The trends in the loss values are also rather stable; L_iso and L_piso start very high in the regular AE, i.e., λ_iso = 0, and quickly stabilize. As for L_rec, on FMNIST we see a steady increase, while on MNIST it also starts with a steady increase until λ_iso reaches 0.75 and then becomes noisier, which is also noticeable in the visualizations. Figure 6: Sensitivity to hyper-parameters. Top: visualizations of MNIST (1st row) and FMNIST (2nd row) trained with different λ_iso values. Bottom: plots of the final train losses as a function of λ_iso; left to right: L_rec (linear scale), L_iso (log scale), and L_piso (log scale). 5 CONCLUSIONS We have introduced I-AE, a regularizer for autoencoders that promotes isometry of the decoder and pseudo-inversion of the encoder. Our goal was two-fold: (i) producing a favorable low dimensional manifold approximation to high dimensional data, isometrically parameterized to preserve, as much as possible, its geometric properties; and (ii) avoiding complex isometric solutions based on the notion of pseudo-inverse. Our regularizers are simple to implement and can be easily incorporated into existing autoencoder architectures. We have tested I-AE on common manifold learning tasks, demonstrating the usefulness of isometric autoencoders. An interesting avenue for future work is to consider task (ii) from section 1, namely incorporating the I-AE losses in a probabilistic model and examining the potential benefits of the isometry prior for generative models. One motivation is the fact that isometries push probability distributions forward by a simple change of coordinates, P(z) = P(f(z)).
A APPENDIX A.1 IMPLEMENTATION DETAILS All experiments were conducted on a Tesla V100 Nvidia GPU using the PYTORCH framework Paszke et al. (2017). A.1.1 NOTATIONS Table 3 describes the notation for the different network layers. A.1.2 EVALUATION Architecture. We used an autoencoder consisting of 5 FC 256 layers followed by a LIN 2 layer for the encoder; similarly, 5 FC 256 layers followed by a LIN 3 layer were used for the decoder. Training details. All methods were trained for a relatively long period of 100K epochs. Training was done with the ADAM optimizer Kingma & Ba (2014), setting a fixed learning rate of 0.001 and a full batch. The I-AE parameter was set to λ_iso = 0.01. Baselines. The following regularizers were used as baselines: contractive autoencoder (CAE) Rifai et al. (2011b); contractive autoencoder with decoder weights tied to the encoder weights (TCAE) Rifai et al. (2011a); gradient penalty on the decoder (RAE-GP) Ghosh et al. (2020); denoising autoencoder with Gaussian noise (DAE) Vincent et al. (2010). For both CAE and TCAE the regularization term is ‖dg(x)‖^2. For RAE-GP the regularization term is ‖df(z)‖^2. For U-MAP McInnes et al. (2018), we set the number of neighbours to 30. For t-SNE Maaten & Hinton (2008), we set the perplexity to 50. A.1.3 DATA VISUALIZATION Architecture. Table 4 lists the complete architecture details for this experiment. Both MNIST and FMNIST were trained with FC-NN and S-CNN, and COIL20 was trained with L-CNN. Training details. Training was done using the ADAM optimizer Kingma & Ba (2014). The rest of the training details are in Table 5. Baselines. The following regularizers were used as baselines: contractive autoencoder (CAE) Rifai et al. (2011b); gradient penalty on the decoder (RAE-GP) Ghosh et al. (2020); denoising autoencoder with Gaussian noise (DAE) Vincent et al. (2010). For CAE the regularization term is ‖dg(x)‖^2. For RAE-GP the regularization term is ‖df(z)‖^2. We used the official U-MAP McInnes et al. (2018) implementation with random_state = 42, and the Ulyanov (2016) multicore implementation of t-SNE Maaten & Hinton (2008) with default parameters. A.2 ADDITIONAL EXPERIMENTS A.2.1 GENERALIZATION IN HIGH DIMENSIONAL SPACE Next, we evaluate how well our suggested isometric prior induces manifolds that generalize well to unseen data. We experimented with three different image datasets: MNIST (LeCun, 1998); CIFAR-10 (Krizhevsky et al., 2009); and CelebA (Liu et al., 2015). We quantitatively estimate each method's performance by measuring the L2 distance and the Fréchet Inception Distance (FID) Heusel et al. (2017) on a held-out test set. For each dataset, we used the official train-test splits. For comparison against baselines, we selected among relevant existing AE-based methods the following: vanilla AE (AE); autoencoder trained with weight decay (AEW); contractive autoencoder (CAE); autoencoder with spectral weight normalization (RAE-SN); and autoencoder with L2 regularization on the decoder weights (RAE-L2). RAE-L2 and RAE-SN were recently successfully applied to this data in (Ghosh et al., 2020), demonstrating state-of-the-art performance on this task. In addition, we compare against the Wasserstein Auto-Encoder (WAE) Tolstikhin et al. (2018), chosen as the state of the art among generative autoencoders. For evaluation fairness, all methods were trained using the same training hyper-parameters: network architecture, optimizer settings, batch size, number of training epochs and learning rate scheduling. See the appendix for specific hyper-parameter values.
In addition, we generated a validation set out of the training set using 10k samples for the MNIST and CIFAR-10 experiments, whereas for the CelebA experiment we used the official validation set. After each training epoch, we evaluated the reconstruction L2 loss on the validation set and chose the final network weights to be those achieving the minimum reconstruction loss. We experimented with two variants of I-AE regularizers: L_piso and L_piso + L_iso. Table 7 logs the results. Note that I-AE produced results competitive with the current SOTA on this task. Architecture. For all methods, we used an autoencoder with Convolutional and Convolutional-transpose layers. Table 6 lists the complete details. Training details. Training was done with the ADAM optimizer Kingma & Ba (2014), setting a learning rate of 0.0005 and batch size 100. The I-AE parameter was set to λ_iso = 0.1.
1. What is the focus of the paper, and what are the authors' contributions to the field of autoencoders? 2. What are the strengths of the proposed approach, particularly in terms of preserving geometric properties in the learned manifold? 3. Are there any weaknesses or limitations in the paper, especially regarding the experimental section? 4. Do you have any concerns or suggestions for improving the proposed method, such as applying it to non-linear cases or exploring practical applications beyond data visualization?
Review
Review The authors propose a new version of the regularized autoencoder where they explicitly regularize the decoder to be locally isometric and the encoder to be the decoder's pseudo-inverse. Through a series of experiments and visualizations, the I-AE exhibits better manifold structure. Regarding the motivation and the math, I like the idea of an isometric regularizer preserving the geometric properties in the learned manifold. The illustration in figure 1 does clearly point out the advantages of I-AE over the contractive autoencoder. The math formulation primarily sticks with a linear version of the autoencoder; it would be great to get some insights for a non-linear counterpart. Regarding the experiments, the authors do successfully show that the I-AE decoder converges to an isometry and that the proposed regularizer promotes a more favorable manifold. However, the experiments mainly rely on visualization but fail to give numeric results. For instance, can I-AE be useful for semi-supervised learning (like VAEs)? How can we practically make use of the isometry property in applications other than data visualization?
ICLR
Title Isometric Autoencoders Abstract High dimensional data is often assumed to be concentrated on or near a lowdimensional manifold. Autoencoders (AE) is a popular technique to learn representations of such data by pushing it through a neural network with a low dimension bottleneck while minimizing a reconstruction error. Using high capacity AE often leads to a large collection of minimizers, many of which represent a low dimensional manifold that fits the data well but generalizes poorly. Two sources of bad generalization are: extrinsic, where the learned manifold possesses extraneous parts that are far from the data; and intrinsic, where the encoder and decoder introduce arbitrary distortion in the low dimensional parameterization. An approach taken to alleviate these issues is to add a regularizer that favors a particular solution; common regularizers promote sparsity, small derivatives, or robustness to noise. In this paper, we advocate an isometry (i.e., local distance preserving) regularizer. Specifically, our regularizer encourages: (i) the decoder to be an isometry; and (ii) the encoder to be the decoder’s pseudo-inverse, that is, the encoder extends the inverse of the decoder to the ambient space by orthogonal projection. In a nutshell, (i) and (ii) fix both intrinsic and extrinsic degrees of freedom and provide a non-linear generalization to principal component analysis (PCA). Experimenting with the isometry regularizer on dimensionality reduction tasks produces useful low-dimensional data representations. 1 INTRODUCTION A common assumption is that high dimensional data X ⊂ RD is sampled from some distribution p concentrated on, or near, some lower d-dimensional submanifoldM⊂ RD, where d < D. The task of estimating p can therefore be decomposed into: (i) approximate the manifoldM; and (ii) approximate p restricted to, or concentrated nearM. In this paper we focus on task (i), mostly known as manifold learning. A common approach to approximate the d-dimensional manifoldM, e.g., in (Tenenbaum et al., 2000; Roweis & Saul, 2000; Belkin & Niyogi, 2002; Maaten & Hinton, 2008; McQueen et al., 2016; McInnes et al., 2018), is to embed X in Rd. This is often done by first constructing a graph G where nearby samples in X are conngected by edges, and second, optimizing for the locations of the samples in Rd striving to minimize edge length distortions in G. Autoencoders (AE) can also be seen as a method to learn low dimensional manifold representation of high dimensional data X . AE are designed to reconstruct X as the image of its low dimensional embedding. When restricting AE to linear encoders and decoders it learns linear subspaces; with mean squared reconstruction loss they reproduce principle component analysis (PCA). Using higher capacity neural networks as the encoder and decoder, allows complex manifolds to be approximated. To avoid overfitting, different regularizers are added to the AE loss. Popular regularizers include sparsity promoting (Ranzato et al., 2007; 2008; Glorot et al., 2011), contractive or penalizing large derivatives (Rifai et al., 2011a;b), and denoising (Vincent et al., 2010; Poole et al., 2014). Recent AE regularizers directly promote distance preservation of the encoder (Pai et al., 2019; Peterfreund et al., 2020). In this paper we advocate a novel AE regularization promoting isometry (i.e., local distance preservation), called Isometric-AE (I-AE). Our key idea is to promote the decoder to be isometric, and the encoder to be its pseudo-inverse. 
Given an isometric decoder Rd → RD, there is no well-defined inverse RD → Rd; we define the pseudo-inverse to be a projection on the image of the decoder composed with the inverse of the decoder restricted to its image. Locally, the I-AE regularization therefore encourages: (i) the differential of the decoder A ∈ RD×d to be an isometry, i.e., ATA = Id, where Id is the d× d identity matrix; and (ii) the differential of the encoder, B ∈ Rd×D to be the pseudo-inverse (now in the standard linear algebra sense) of the differential of the decoder A ∈ RD×d, namely, B = A+. In view of (i) this implies B = AT . This means that locally our decoder and encoder behave like PCA, where the encoder and decoder are linear transformations satisfying (i) and (ii); That is, the PCA encoder can be seen as a composition of an orthogonal projection on the linear subspace spanned by the decoder, followed by an orthogonal transformation (isometry) to the low dimensional space. In a sense, our method can be seen as a version of denoising/contractive AEs (DAE/CAE, respectively). DAE and CAE promote a projection from the ambient space onto the data manifold, but can distort distances and be non-injective. Locally, using differentials again, projection on the learned manifold means (AB)2 = AB. Indeed, as can be readily checked conditions (i) and (ii) above imply A(BA)B = AB. This means that I-AE also belongs to the same class of DAE/CAE, capturing the variations in tangent directions of the data,M, while ignoring orthogonal variations which often represent noise (Vincent et al., 2010; Alain & Bengio, 2014). The benefit in I-AE is that its projection on the data manifold is locally an isometry, preserving distances and sampling the learned manifold evenly. That is, I-AE does not shrink or expand the space; locally, it can be imagined as an orthogonal linear transformation. The inset shows results of a simple experiment comparing contractive AE (CAE-bottom) and isometric AE (I-AE-top). Both AEs are trained on the green data points; the red arrows depict projection of points (in blue) in vicinity of the data onto the learned manifold (in black) as calculated by applying the encoder followed by the decoder. Note that CAE indeed projects on the learned manifold but not evenly, tending to shrink space around data points; in contrast I-AE provides a more even sampling of the learned manifold. Experiments confirm that optimizing the I-AE loss results in a close-to-isometric encoder/decoder explaining the data. We further demonstrate the efficacy of I-AE for dimensionality reduction of different standard datatsets, showing its benefits over manifold learning and other AE baselines. 2 RELATED WORKS Manifold learning. Manifold learning generalizes classic dimensionality reduction methods such as PCA (F.R.S., 1901) and MDS (Kruskal, 1964; Sammon, 1969), by aiming to preserve the local geometry of the data. Tenenbaum et al. (2000) use the nn-graph to approximate the geodesic distances over the manifold, followed by MDS to preserve it in the lower dimension. Roweis & Saul (2000); Belkin & Niyogi (2002); Donoho & Grimes (2003) use spectral methods to minimize different distortion energy functions over the graph matrix. Coifman et al. (2005); Coifman & Lafon (2006) approximate the heat diffusion over the manifold by a random walk over the nn-graph, to gain a robust distance measure on the manifold. 
Stochastic neighboring embedding algorithms (Hinton & Roweis, 2003; Maaten & Hinton, 2008) captures the local geometry of the data as a mixture of Gaussians around each data points, and try to find a low dimension mixture model by minimizing the KL-divergence. In a relatively recent work, McInnes et al. (2018) use iterative spectral and embedding optimization using fuzzy sets. Several works tried to adapt classic manifold learning ideas to neural networks and autoencoders. Pai et al. (2019) suggest to embed high dimensional points into a low dimension with a neural network by constructing a metric between pairs of data points and minimizing the metric distortion energy. Kato et al. (2019) suggest to learn an isometric decoder by using noisy latent variables. They prove under certain conditions that it encourages isometric decoder. Peterfreund et al. (2020) suggest autoencoders that promote the isometry of the encoder over the data by approximating its differential gram matrix using sample covariance matrix. Zhan et al. (2018) encourage distance preserving autoencoders by minimizing metric distortion energy in common feature space. Modern autoencoders. There is an extensive literature on extending autoencoders to a generative model (task (ii) in section 1). That is, learning a probability distribution in addition to approximating the data manifoldM. Variational autoencoder (VAE) Kingma & Welling (2014) and its variants Makhzani et al. (2015); Burda et al. (2016); Sønderby et al. (2016); Higgins et al. (2017); Tolstikhin et al. (2018); Park et al. (2019); Zhao et al. (2019) are examples to such methods. In essence, these methods augment the AE structure with a learned probabilistic model in the low dimensional (latent) space Rd that is used to approximate the probability P that generated the observed data X . More relevant to our work, are recent works suggesting regularizers for deterministic autoencoders that together with ex-post density estimation in latent space forms a generative model. Ghosh et al. (2020) suggested to reduce the decoder degrees of freedom, either by regularizing the norm of the decoder weights or the norm of the decoder differential. Other regularizers of the differential of the decoder, aiming towards a deterministic variant of VAE, were recently suggested in Kumar & Poole (2020); Kumar et al. (2020). In contrast to our method, these methods do not regularize the encoder explicitly. 3 ISOMETRIC AUTOENCODERS We consider high dimensional data points X = {xi}ni=1 ⊂ RD sampled from some probability distribution P (x) in RD concentrated on or near some d dimensional submanifoldM⊂ RD, where d < D. Our goal is to compute isometric autoencoder (I-AE) defined as follows. Let g : RD → Rd denote the encoder, and f : Rd → RD the decoder; N is the learned manifold, i.e., the image of the decoder, N = f(Rd). I-AE is defined by the following requirements: (i) The data X is close to N . (ii) f is an isometry. (iii) g is the pseudo-inverse of f . Figure 2 is an illustration of I-AE. Let θ denote the parameters of f , and φ the parameters of g. We enforce the requirements (i)-(iii) by prescribing a loss function L(θ, φ) and optimize it using standard stochastic gradient descent (SGD). We next break down the loss L to its different components. Condition (i) is promoted with the standard reconstruction loss in AE: Lrec(θ, φ) = 1 n n∑ i=1 ‖f(g(xi))− xi‖2 , (1) where ‖·‖ is the 2-norm. Before handling conditions (ii),(iii) let us first define the notions of isometry and pseudo-inverse. 
A differentiable mapping f between the euclidean spaces Rd and RD is a local isometry if it has an orthogonal differential matrix df(z) ∈ RD×d, df(z)T df(z) = Id, (2) where Id ∈ Rd×d is the identity matrix, and df(z)ij = ∂f i ∂zj (z). A local isometry which is also a diffeomorphism is a global isometry. Restricting the decoder to isometry is beneficial for several reasons. First, Nash-Kuiper Embedding Theorem Nash (1956) asserts that non-expansive maps can be approximated arbitrary well with isometries if D ≥ d + 1 and hence promoting an isometry does not limit the expressive power of the decoder. Second, the low dimensional representation of the data computed with an isometric encoder preserves the geometric structure of the data. In particular volume, length, angles and probability densities are preserved between the low dimensional representation Rd, and the learned manifold N . Lastly, for a fixed manifold N there is a huge space of possible decoders such that N = f(Rd). For isometric f , this space is reduced considerably: Indeed, consider two isometries parameterizing N , i.e., f1, f2 : Rd → N . Then, since composition of isometries is an isometry we have that f−12 ◦ f1 : Rd → Rd is a dimension-preserving isometry and hence a rigid motion. That is, all decoders of the same manifold are the same up to a rigid motion. For the encoder the situation is different. Since D > d the encoder g cannot be an isometry in the standard sense. Therefore we ask g to be the pseudo-inverse of f . For that end we define the projection operator p on a submanifold N ⊂ RD as p(x) = arg min x′∈N ‖x− x′‖ . Note that the closest point is not generally unique, however the Tubular Neighborhood Theorem (see e.g., Theorem 6.24 in Lee (2013)) implies uniqueness for points x sufficiently close to the manifold N . Definition 1. We say the g is the pseudo-inverse of f if g can be written as g = f−1 ◦ p, where p is the projection on N = f(Rd). Consequently, if g is the pseudo-inverse of an isometry f then it extends the standard notion of isometry by projecting every point on a submanifold N and then applying an isometry between the d-dimensional manifolds N and Rd. See Figure 2 for an illustration. First-order characterization. To encourage f, g to satisfy the (local) isometry and the pseudoinverse properties (resp.) we will first provide a first-order (necessary) characterization using their differentials: Theorem 1. Let f be a decoder and g an encoder satisfying conditions (ii),(iii). Then their differentials A = df(z) ∈ RD×d, B = dg(f(z)) ∈ Rd×D satisfy ATA = Id (3) BBT = Id (4) B = AT (5) The theorem asserts that the differentials of the encoder and decoder are orthogonal (rectangular) matrices, and that the encoder is the pseudo-inverse of the differential of the decoder. Before proving this theorem, let us first use it to construct the relevant losses for promoting the isometry of f and pseudo-inverse g. We need to promote conditions (3), (4), (5). Since we want to avoid computing the full differentials A = df(z), B = dg(f(z)), we will replace (3) and (4) with stochastic estimations based on the following lemma: denote the unit d− 1-sphere by Sd−1 = { z ∈ Rd| ‖z‖ = 1 } . Lemma 1. Let A ∈ RD×d, where d ≤ D. If ‖Au‖ = 1 for all u ∈ Sd−1, then A is columnorthogonal, that is ATA = Id. 
Therefore, the isometry promoting loss, encouraging (3), is defined by Liso(θ) = Ez,u ( ‖df(z)u‖ − 1 )2 , (6) where z ∼ Piso(Rd), and Piso(Rd) is a probability measure on Rd; u ∼ P (Sd−1), and P (Sd−1) is the standard rotation invariant probability measure on the d− 1-sphere Sd−1. The pseudo-inverse promoting loss, encouraging (4) would be Lpiso(φ) = Ex,u (∥∥uT dg(x)∥∥− 1)2, (7) where x ∼ P (M) and u ∼ P (Sd−1). As usual, the expectation with respect to P (M) is computed empirically using the data samples X . Lastly, (5) might seem challenging to enforce with neural networks, however the orthogonality of A,B can be leveraged to replace this loss with a more tractable loss asking the encoder is merely the inverse of the decoder over its image: Lemma 2. Let A ∈ RD×d, and B ∈ Rd×D. If ATA = Id = BBT and BA = Id then B = A+ = AT . Fortunately, this is already taken care of by the reconstruction loss: since low reconstruction loss in equation 1 forces the encoder and the decoder to be the inverse of one another over the data manifold, i.e., g(f(z)) = z, it encourages BA = Id and therefore, by Lemma 2, automatically encourages equation 5. Note that invertability also implies bijectivity of the encoder/decoder restricted to the data manifold, pushing for global isometries (rather than local). Summing all up, we define our loss for I-AE by L(θ, φ) = Lrec(θ, φ) + λiso (Liso(θ) + Lpiso(φ)) , (8) where λiso is a parameter controlling the isometry-reconstruction trade-off. 3.1 DETAILS AND PROOFS. Let us prove Theorem 1 characterizing the relation of the differentials of isometries and pseudoisometries, A = df(z) ∈ RD×d, B = dg(f(z)) ∈ Rd×D. First, by definition of isometry (equation 2), ATA = Id. We denote by TxN the d-dimensional tangent space to N at x ∈ N ; accordingly, TxN⊥ denotes the normal tangent space. Lemma 3. The differential dp(x) ∈ RD×D at x ∈ N of the projection operator p : RD → N is dp(x)u = { u u ∈ TxN 0 u ∈ TxN⊥ (9) That is, dp(x) is the orthogonal projection on the tangent space of N at x. Proof. First, consider the squared distance function to N defined by η(x) = 12 minx′∈N ‖x− x ′‖2. The envelope theorem implies that∇η(x) = x− p(x). Differentiating both sides and rearranging we get dp(x) = ID −∇2η(x). As proved in Ambrosio & Soner (1994) (Theorem 3.1), ∇2η(x) is the orthogonal projection on TxN⊥. Let x = f(z) ∈ N . Since x ∈ N we have p(x) = x. Condition (iii) asserts that g(y) = f−1(p(y)); taking the derivative at y = x we get dg(x) = df−1(x)dp(x). Lemma 3 implies that dp(x) = AAT , since AAT is the orthogonal projection on TxN . Furthermore, df−1(x) restricted to Im(A) is AT . Putting this together we get B = dg(x) = ATAAT = AT . This implies that BBT = Id, and that B = A+ = AT . This concludes the proof of Theorem 1. Proof of Lemma 1. Writing the SVD of A = UΣV T , where Σ = diag(σ1, . . . , σd) are the singular values of A, we get that ∑d i=1 σ 2 i v 2 i = 1 for all v ∈ Sd−1. Plugging v = ej , j ∈ [d] (the standard basis) we get that all σi = 1 for i ∈ [d] and A = UV T is orthogonal as claimed. Proof of Lemma 2. Let U = [A,V ], V ∈ RD×(D−d), be a completion of A to an orthogonal matrix in RD×D. Now, Id = BUUTBT = Id + BV V TBT , and since BV V TBT 0 this means that BV = 0, that is B takes to null the orthogonal space to the column space of A. A direct computation shows that BU = ATU which in turn implies B = AT = A+. Implementation. Implementing the losses in equation 6 and equation 7 requires making a choice for the probability densities and approximating the expectations. 
We take Piso(Rd) to be either uniform or gaussian fit to the latent codes g(X ); and P (M) is approximated as the uniform distribution on X , as mentioned above. The expectations are estimated using Monte-Carlo sampling. That is, at each iteration we draw samples x̂ ∈ X , ẑ ∼ Piso(Rd), û ∼ P (Sd−1) and use the approximations Liso(θ) ≈ ( ‖df(ẑ)û‖ − 1 )2 Lpiso(φ) ≈ ( ∥∥ûT dg(x̂)∥∥− 1)2 The right differential multiplication df(ẑ)û and left differential multiplication ûT dg(x̂) are computed using forward and backward mode automatic differentiation (resp.). Their derivatives with respect to the networks’ parameters θ, φ are computed by another backward mode automatic differentiation. 4 EXPERIMENTS 4.1 EVALUATION We start by evaluating the effectiveness of our suggested I-AE regularizer, addressing the following questions: (i) does our suggested loss L (θ, φ) in equation 8 drive I-AE training to converge to an isometry? (ii) What is the effect of the Lpiso term? In particular, does it encourage better manifold approximations as conjectured? To that end, we examined the I-AE training on data points X sampled uniformly from 3D surfaces with known global parameterizations. Figure 3 shows qualitative comparison of the learned embeddings for various AE regularization techniques: Vanilla autoencoder (AE); Contractive autoencoder (CAE) (Rifai et al., 2011b); Contractive autoencoder with decoder weights tied to the encoder weights (TCAE) (Rifai et al., 2011a); Gradient penalty on the decoder (RAE-GP) (Ghosh et al., 2020); and Denoising autoencoder with gaussian noise (DAE) (Vincent et al., 2010). For fairness in evaluation, all methods were trained using the same training hyper-parameters. See Appendix for the complete experiment details including mathematical formulation of the different AE regularizers. In addition, we compared against popular classic manifold learning techniques: U-MAP (McInnes et al., 2018), t-SNE (Maaten & Hinton, 2008) and LLE. (Roweis & Saul, 2000). The results demonstrate that I-AE is able to learn an isometric embedding, showing some of the advantages in our method: sampling density and distances between input points is preserved in the learned low dimensional space. In addition, for the AE methods, we quantitatively evaluate how close is the learnt decoder to an isometry. For this purpose, we triangulate a grid of planar points {zi} ⊂ R2. We denote by {eij} the triangles edges incident to grid points zi and zj . Then, we measured the edge lengths ratio, lij = ‖f (zi)− f (zj)‖/‖eij‖ expected to be ≈ 1 for all edges eij in an isometry. In Table 1 we log the standard deviation (Std) of {lij} for I-AE compared to other regularized AEs. For a fair comparison, we scaled zi so the mean of lij is 1 in all experiments. As can be seen in the table, the distribution of {lij} for I-AE is significantly more concentrated than the different AE baselines. Finally, althoughLiso is already responsible for learning an isometric decoder, the pseudo-inverse encoder (enforced by the lossLpiso) helps it converge to simpler solutions. We ran AE training with and without the Lpiso term. Figure 4 shows in gray the learnt decoder surface, N , without Lpiso (left), containing extra (unnatural) surface parts compared to the learnt surface with Lpiso (right). In both cases we expect (and achieve) a decoder approximating an isometry that passes through the input data points. 
Nevertheless, the pseudo-inverse loss restricts some of the degrees of freedom of the encoder which in turn leads to a simpler solution. 4.2 DATA VISUALIZATION In this experiment we evaluate our method in the task of high dimension data visualization, i.e., reducing high dimensional data into two dimensional space. Usually the data is not assumed to lie on a manifold with such a low dimension, and it is therefore impossible to preserve all of its geometric properties. A common artifact when squeezing higher dimensional data into the plane is crowding (Maaten & Hinton, 2008), that is planar embedded points are crowded around the origin. We evaluate our method on three standard datasets of images: MNIST (LeCun, 1998) (60k handwritten digits), Fashion-MNIST (60k Zalando’s article images) (Xiao et al., 2017) and COIL20 (Nene et al., 1996) (20 different images of object rotated with 72 even rotations). For baselines we take: Vanilla AE; CAE; GP-RAE; DAE; U-MAP and t-SNE. We use the same architecture for all auto-encoder methods on each dataset. MNIST and FMNIST we evaluated in two scenarios: (i) Both encoder and decoder are fully-connected (MLP) networks; and (ii) Both encoder and decoder are Convolutional Neural Network (CNN). For COIL20 dataset both encoder and decoder are Convolutional Neural Network. Full implementation details and hyper-parameters values can be found in the Appendix. The results are presented in figure 5; where each embedded point z is colored by its ground-truth class/label. We make several observation. First, in all the datasets our method is more resilient to crowding compared to the baseline AEs, and provide a more even spread. U-MAP and t-SNE produce better separated clusters. However, this separation can come at a cost: See the COIL20 result (third row) and blow-ups of three of the classes (bottom row). In this dataset we expect evenly spaced points that correspond to the even rotations of the objects in the images. Note (in the blow-ups) that U-MAP maps the three classes on top of each other (non-injectivity of the "encoder"), t-SNE is somewhat better but does not preserve well the distance between pairs of data points (we expect them to be more or less equidistant in this dataset). In I-AE the rings are better separated and points are more equidistant; the baseline AEs tend to densify the points near the origin. Lastly, considering the inter and intra-class variations for the MNIST and FMNIST datasets, we are not sure that isometric embeddings are expected to produce strongly separated clusters as in U-MAP and t-SNE (e.g., think about similar digits of different classes and dissimilar digits of the same class with distances measured in euclidean norm). 4.3 DOWNSTREAM CLASSIFICATION To quantitatively evaluate the unsupervised low-dimensional embedding computed with the I-AE we performed the following experiment: We trained simple classifiers on the embedded vectors computed by I-AE and baseline AEs and compared their performance (i.e., accuracy). Note that the process of learning the embedding is unsupervised and completely oblivious to the labels, which are used solely for training and testing the classifiers. We evaluate on the same datasets as in Section 4.2: In MNIST and FMNIST we use the standard train-test split, and on COIL20 we split 75%-25% randomly. As AE baselines we take vanilla AE, CAE, DAE and RAE-GP, as described above. 
Table 2 logs the results: for both types of classifiers, I-AE outperforms the baseline AEs in almost all combinations, and the SVM experiments demonstrate larger margins in favor of I-AE. The K-NN results indicate that the Euclidean metric captures similarity in our embedding, and the SVM results, especially on the MNIST and COIL20 datasets, indicate that I-AE is able to embed the data in an arguably simpler, linearly separable manner. The very high classification rates on COIL20 are probably due to the size and structure of this dataset. Nevertheless, with SVM, already in 16 dimensions I-AE provides an accuracy of 95%, a 5% margin over the 2nd place.

4.4 HYPER-PARAMETERS SENSITIVITY

To evaluate the effect of λiso on the output, we compared the visualizations and optimized loss values on MNIST and FMNIST, trained with the same CNN architecture as in Section 4.2, with λiso ∈ {0, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1}. Figure 6 shows the different visualization results as well as Lrec, Liso, Lpiso as a function of λiso. As can be seen, on both datasets the visualizations and losses are stable for λiso values between 0.01 and 0.5, and a significant change to the embedding is noticeable at 0.75. The trends in the loss values are also rather stable; Liso and Lpiso start very high for the regular AE, i.e., λiso = 0, and quickly stabilize. As for Lrec, on FMNIST we see a steady increase, while on MNIST it also starts with a steady increase until λiso reaches 0.75, after which it becomes less stable, which is also noticeable in the visualizations.

Figure 6: Sensitivity to hyper-parameters. Top: visualizations of MNIST (1st row) and FMNIST (2nd row) trained with different λiso values. Bottom: plots of the final train losses as a function of λiso; left to right: Lrec (linear scale), Liso (log scale), and Lpiso (log scale).

5 CONCLUSIONS

We have introduced I-AE, a regularizer for autoencoders that promotes isometry of the decoder and pseudo-inversion of the encoder. Our goal was two-fold: (i) producing a favorable low-dimensional manifold approximation to high-dimensional data, isometrically parameterized to preserve, as much as possible, its geometric properties; and (ii) avoiding complex isometric solutions based on the notion of pseudo-inverse. Our regularizers are simple to implement and can easily be incorporated into existing autoencoder architectures. We have tested I-AE on common manifold learning tasks, demonstrating the usefulness of isometric autoencoders. An interesting avenue for future work is to consider task (ii) from Section 1, namely incorporating the I-AE losses in a probabilistic model and examining the potential benefits of the isometry prior for generative models. One motivation is the fact that isometries push probability distributions forward by a simple change of coordinates, P(z) = P(f(z)).
A APPENDIX

A.1 IMPLEMENTATION DETAILS

All experiments were conducted on a Tesla V100 Nvidia GPU using the PyTorch framework (Paszke et al., 2017).

A.1.1 NOTATIONS

Table 3 describes the notation for the different network layers.

A.1.2 EVALUATION

Architecture. We used an autoencoder consisting of 5 FC 256 layers followed by a LIN 2 layer for the encoder; similarly, 5 FC 256 layers followed by a LIN 3 layer were used for the decoder.

Training details. All methods were trained for a relatively long period of 100K epochs. Training was done with the ADAM optimizer (Kingma & Ba, 2014), setting a fixed learning rate of 0.001 and a full batch. The I-AE parameter was set to λiso = 0.01.

Baselines. The following regularizers were used as baselines: Contractive autoencoder (CAE) (Rifai et al., 2011b); Contractive autoencoder with decoder weights tied to the encoder weights (TCAE) (Rifai et al., 2011a); Gradient penalty on the decoder (RAE-GP) (Ghosh et al., 2020); Denoising autoencoder with Gaussian noise (DAE) (Vincent et al., 2010). For both CAE and TCAE the regularization term is ‖dg(x)‖². For RAE-GP the regularization term is ‖df(z)‖². For U-MAP (McInnes et al., 2018), we set the number of neighbours to 30. For t-SNE (Maaten & Hinton, 2008), we set perplexity = 50.

A.1.3 DATA VISUALIZATION

Architecture. Table 4 lists the complete architecture details of this experiment. Both MNIST and FMNIST were trained with FC-NN and S-CNN, and COIL20 was trained with L-CNN.

Training details. Training was done using the ADAM optimizer (Kingma & Ba, 2014). The rest of the training details are in Table 5.

Baselines. The following regularizers were used as baselines: Contractive autoencoder (CAE) (Rifai et al., 2011b); Gradient penalty on the decoder (RAE-GP) (Ghosh et al., 2020); Denoising autoencoder with Gaussian noise (DAE) (Vincent et al., 2010). For CAE the regularization term is ‖dg(x)‖², and for RAE-GP it is ‖df(z)‖² (a short sketch of these penalties follows). We used the official U-MAP (McInnes et al., 2018) implementation with random_state = 42, and the Ulyanov (2016) multicore implementation of t-SNE (Maaten & Hinton, 2008) with default parameters.
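For concreteness, a hypothetical PyTorch sketch of the two Jacobian penalties above (an assumption of one way to compute them, not the authors' code):

import torch
from torch.autograd.functional import jacobian

def frobenius_jacobian_penalty(net, inputs):
    # Sum of squared Frobenius norms of the network Jacobian over a batch;
    # per-sample jacobian() is simple but slow, used here only for clarity.
    total = 0.0
    for xi in inputs:
        J = jacobian(lambda t: net(t.unsqueeze(0)).squeeze(0), xi,
                     create_graph=True)
        total = total + J.pow(2).sum()
    return total

# CAE / TCAE penalty: frobenius_jacobian_penalty(encoder, x)
# RAE-GP penalty:     frobenius_jacobian_penalty(decoder, z)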
A.2 ADDITIONAL EXPERIMENTS

A.2.1 GENERALIZATION IN HIGH-DIMENSIONAL SPACE

Next, we evaluate how well our suggested isometric prior induces manifolds that generalize to unseen data. We experimented with three different image datasets: MNIST (LeCun, 1998), CIFAR-10 (Krizhevsky et al., 2009), and CelebA (Liu et al., 2015). We quantitatively estimate each method's performance by measuring the L2 distance and the Fréchet Inception Distance (FID) (Heusel et al., 2017) on a held-out test set. For each dataset, we used the official train-test splits. As baselines, we selected the following relevant existing AE-based methods: Vanilla AE (AE); autoencoder trained with weight decay (AEW); Contractive autoencoder (CAE); autoencoder with spectral weight normalization (RAE-SN); and autoencoder with L2 regularization on decoder weights (RAE-L2). RAE-L2 and RAE-SN were recently successfully applied to this data in (Ghosh et al., 2020), demonstrating state-of-the-art performance on this task. In addition, we compare against the Wasserstein Auto-Encoder (WAE) (Tolstikhin et al., 2018), chosen as a state-of-the-art generative autoencoder. For evaluation fairness, all methods were trained using the same training hyper-parameters: network architecture, optimizer settings, batch size, number of training epochs and learning rate scheduling; the specific hyper-parameter values are listed below.

In addition, we generated a validation set out of the training set using 10k samples for the MNIST and CIFAR-10 experiments, whereas for the CelebA experiment we used the official validation set. For each training epoch, we evaluated the L2 reconstruction loss on the validation set and chose the final network weights as those achieving the minimum reconstruction loss. We experimented with two variants of the I-AE regularizers: Lpiso and Lpiso + Liso. Table 7 logs the results. Note that I-AE produced results competitive with the current state of the art on this task.

Architecture. For all methods, we used an autoencoder with convolutional and convolutional-transpose layers. Table 6 lists the complete details.

Training details. Training was done with the ADAM optimizer (Kingma & Ba, 2014), setting a learning rate of 0.0005 and batch size 100. The I-AE parameter was set to λiso = 0.1.
1. What is the novel method proposed by the paper for training a local isometric autoencoder?
2. What are the strengths of the paper regarding its theories and writing quality?
3. What are the weaknesses of the paper, particularly regarding its argument for global isometry and experimental results?
4. How does the reviewer assess the importance of preserving local Euclidean distances in manifold learning and related cases?
5. Does the paper adequately discuss the distance metric used in the original space, considering the local geometry of the data?
6. Are there any concerns regarding the benefit of using t-SNE for data visualization, and how does it compare to the proposed method?
7. What are some minor comments regarding typos, notation, and clarity in certain parts of the paper?
Review
Review
Strength: This paper provides a novel method to train a locally isometric autoencoder, which preserves local Euclidean distances well between the original space and the latent space. The theories are well presented and explained. Also, isometry is a very important property in several settings, including manifold learning. Apart from the typos and several tiny errors, the overall writing is sound and smooth.

Weakness: I have to say that the argument of global isometry is too ambitious. In the theory and method parts, the method only guarantees local isometry. The authors do mention that "a local isometry which is also a diffeomorphism is a global isometry" in the bottom paragraph of page 3. However, there is no discussion of the "diffeomorphism" condition in the following sections. Also, the first experiment (3D → 2D) only supports local isometry, since the distance is computed only along the edges of the triangular meshes. Is there any possibility that the authors can provide one more toy example for global isometry, where the data lie on some manifold shape? This would strongly support the global argument.

From my understanding, the distance in the original space is the Euclidean distance, which does not take the local geometry of the data into account. Can the authors comment on this? The distance in the original space should be the geodesic distance when arguing for isometry.

The data visualization experiment is somewhat weak. The benefit of the isometric autoencoder is not well addressed. t-SNE is widely used for visualization with almost nothing wrong; the only benefit comes when arguing for "even" sampling. Can the authors comment on why we need "evenness" in visualization?

Some other minor comments: There is a typo in the last line of the first page; the encoder should be ℝᴰ → ℝᵈ with d < D, and similarly the inverse should be ℝᵈ → ℝᴰ. The differential of the decoder should be the Jacobian matrix, right? This would be clearer than just mentioning the differential. "Table 1" should be "Figure 1"; also, can the authors provide more details about this figure? Is it for illustration only, or is this result actually trained and plotted? The term "evenly" is a strong word that needs a more explicit definition. The order of Figure 3 and Figure 2 is mixed up.
ICLR
Title
DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder

Abstract
Variational autoencoders (VAEs) have shown promise in data-driven conversation modeling. However, most VAE conversation models match the approximate posterior distribution over the latent variables to a simple prior such as the standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., unimodal) scope. In this paper, we propose DialogWAE, a conditional Wasserstein autoencoder (WAE) specially designed for dialogue modeling. Unlike VAEs that impose a simple distribution over the latent variables, DialogWAE models the distribution of data by training a GAN within the latent variable space. Specifically, our model samples from the prior and posterior distributions over the latent variables by transforming context-dependent random noise using neural networks, and minimizes the Wasserstein distance between the two distributions. We further develop a Gaussian mixture prior network to enrich the latent space. Experiments on two popular datasets show that DialogWAE outperforms state-of-the-art approaches in generating more coherent, informative and diverse responses.

1 INTRODUCTION

Neural response generation has been a long-standing interest of natural language research. Most recent approaches to data-driven conversation modeling primarily build upon sequence-to-sequence learning (Cho et al., 2014; Sutskever et al., 2014). Previous research has demonstrated that sequence-to-sequence conversation models often suffer from the safe response problem and fail to generate meaningful, diverse, on-topic responses (Li et al., 2015; Sato et al., 2017).

Conditional variational autoencoders (CVAE) have shown promising results in addressing the safe response issue (Zhao et al., 2017; Shen et al., 2018). A CVAE generates the response conditioned on a latent variable (representing topics, tones and situations of the response) and approximates the posterior distribution over latent variables using a neural network. The latent variable captures variabilities in the dialogue and thus yields more diverse responses. However, previous studies have shown that VAE models tend to suffer from the posterior collapse problem, where the decoder learns to ignore the latent variable and degrades to a vanilla RNN (Shen et al., 2018; Park et al., 2018; Bowman et al., 2015). Furthermore, they match the approximate posterior distribution over the latent variables to a simple prior such as the standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., unimodal) scope (Goyal et al., 2017).

A number of studies have sought GAN-based approaches (Goodfellow et al., 2014; Li et al., 2017a; Xu et al., 2017) which directly model the distribution of the responses. However, adversarial training over discrete tokens is known to be difficult due to non-differentiability. Li et al. (2017a) proposed a hybrid model of GAN and reinforcement learning (RL) where the score predicted by a discriminator is used as a reward to train the generator. However, training with REINFORCE has been observed to be unstable due to the high variance of the gradient estimate (Shen et al., 2017). Xu et al. (2017) make the GAN model differentiable with an approximate word embedding layer. However, their model only injects variability at the word level and is thus limited in representing high-level response variabilities such as topics and situations.
In this paper, we propose DialogWAE, a novel GAN variant for neural conversation modeling. Unlike VAE conversation models that impose a simple distribution over latent variables, DialogWAE models the data distribution by training a GAN within the latent variable space. Specifically, it samples from the prior and posterior distributions over the latent variables by transforming context-dependent random noise with neural networks, and minimizes the Wasserstein distance (Arjovsky et al., 2017) between the prior and the approximate posterior distributions. Furthermore, our model takes into account the multimodal¹ nature of responses by using a Gaussian mixture prior network. Adversarial training with the Gaussian mixture prior network enables DialogWAE to capture a richer latent space, yielding more coherent, informative and diverse responses.

Our main contributions are two-fold: (1) a novel GAN-based model for neural dialogue modeling, which employs a GAN to generate samples of latent variables; and (2) a Gaussian mixture prior network to sample random noise from a multimodal prior distribution. To the best of our knowledge, the proposed DialogWAE is the first GAN conversation model that exploits multimodal latent structures. We evaluate our model on two benchmark datasets, SwitchBoard (Godfrey and Holliman, 1997) and DailyDialog (Li et al., 2017b). The results demonstrate that our model substantially outperforms the state-of-the-art methods in terms of BLEU, word embedding similarity, and distinct. Furthermore, we highlight how the GAN architecture with a Gaussian mixture prior network facilitates the generation of more diverse and informative responses.

2 RELATED WORK

Encoder-decoder variants. To address the "safe response" problem of the naive encoder-decoder conversation model, a number of variants have been proposed. Li et al. (2015) proposed a diversity-promoting objective function to encourage more varied responses. Sato et al. (2017) propose to incorporate various types of situations behind conversations when encoding utterances and decoding their responses. Xing et al. (2017) incorporate topic information into the sequence-to-sequence framework to generate informative and interesting responses. Our work differs from the aforementioned studies in that it does not rely on extra information such as situations and topics.

VAE conversation models. The variational autoencoder (VAE) (Kingma and Welling, 2014) is among the most popular frameworks for dialogue modeling (Zhao et al., 2017; Shen et al., 2018; Park et al., 2018). Serban et al. (2017) propose VHRED, a hierarchical latent variable sequence-to-sequence model that explicitly models multiple levels of variability in the responses. A main challenge for VAE conversation models is the so-called "posterior collapse". To alleviate the problem, Zhao et al. (2017) introduce an auxiliary bag-of-words loss to the decoder; they further incorporate extra dialogue information such as dialogue acts and speaker profiles. Shen et al. (2018) propose a collaborative CVAE model which samples the latent variable by transforming Gaussian noise using neural networks and matches the prior and posterior distributions of the Gaussian noise with a KL divergence. Park et al. (2018) propose the variational hierarchical conversation RNN (VHCR), which incorporates a hierarchical structure over latent variables. DialogWAE addresses the limitation of VAE conversation models by using a GAN architecture in the latent space.
GAN conversation models. Although GANs/CGANs have shown great success in image generation, adapting them to natural dialogue generation is a non-trivial task, due to the non-differentiable nature of natural language tokens (Shen et al., 2017; Xu et al., 2017). Li et al. (2017a) address this problem by combining GAN with reinforcement learning (RL), where the discriminator predicts a reward to optimize the generator. However, training with REINFORCE can be unstable due to the high variance of the sampled gradient (Shen et al., 2017). Xu et al. (2017) make the sequence-to-sequence GAN differentiable by directly multiplying the word probabilities obtained from the decoder with the corresponding word vectors, yielding an approximately vectorized representation of the target sequence. However, their approach injects diversity at the word level rather than at the level of whole responses. DialogWAE differs from existing GAN conversation models in that it shapes the distribution of responses in a high-level latent space rather than over direct tokens, and does not rely on RL, where gradient variances are large.

¹ A multimodal distribution is a continuous probability distribution with two or more modes.

3 PROPOSED APPROACH

3.1 PROBLEM STATEMENT

Let d = [u₁, ..., u_k] denote a dialogue of k utterances, where uᵢ = [w₁, ..., w_|uᵢ|] represents an utterance and w_n denotes the n-th word in uᵢ. Let c = [u₁, ..., u_{k−1}] denote a dialogue context, i.e., the k−1 historical utterances, and let x = u_k be the response, i.e., the next utterance. Our goal is to estimate the conditional distribution pθ(x|c).

As x and c are sequences of discrete tokens, it is non-trivial to find a direct coupling between them. Instead, we introduce a continuous latent variable z that serves as a high-level representation of the response. Response generation can then be viewed as a two-step procedure: a latent variable z is sampled from a distribution pθ(z|c) on a latent space Z, and the response x is decoded from z with pθ(x|z, c). Under this model, the likelihood of a response is

pθ(x|c) = ∫ p(x|c, z) p(z|c) dz.   (1)

The exact log-probability is difficult to compute since it is intractable to marginalize out z. Therefore, we approximate the posterior distribution of z as qφ(z|x, c), which can be computed by a neural network named the recognition network. Using this approximate posterior, we can instead compute the evidence lower bound (ELBO):

log pθ(x|c) = log ∫ p(x|c, z) p(z|c) dz
           ≥ ℓ(x, c) = E_{z∼qφ(z|x,c)}[log pψ(x|c, z)] − KL(qφ(z|x, c) ‖ p(z|c)),   (2)

where p(z|c) represents the prior distribution of z given c and can be modeled with a neural network named the prior network.
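For completeness, the inequality in equation 2 follows from rewriting the marginal as an expectation under qφ and applying Jensen's inequality; a short derivation (our addition, in the paper's notation):

\log p_\theta(x|c)
  = \log \mathbb{E}_{q_\phi(z|x,c)}\left[ \frac{p(x|c,z)\, p(z|c)}{q_\phi(z|x,c)} \right]
  \geq \mathbb{E}_{q_\phi(z|x,c)}\left[ \log \frac{p(x|c,z)\, p(z|c)}{q_\phi(z|x,c)} \right]
  = \mathbb{E}_{q_\phi(z|x,c)}[\log p(x|c,z)] - \mathrm{KL}\left( q_\phi(z|x,c) \,\|\, p(z|c) \right),

where the decoder pψ(x|c, z) models the reconstruction term p(x|c, z).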
3.2 CONDITIONAL WASSERSTEIN AUTO-ENCODERS FOR DIALOGUE MODELING

Conventional VAE conversation models assume that the latent variable z follows a simple prior distribution such as the normal distribution. However, the latent space of real responses is more complicated and difficult to estimate with such a simple distribution, which often leads to the posterior collapse problem (Shen et al., 2018). Inspired by GANs and the adversarial auto-encoder (AAE) (Makhzani et al., 2015; Tolstikhin et al., 2017; Zhao et al., 2018), we model the distribution of z by training a GAN within the latent space, sampling from the prior and posterior over the latent variables by transforming random noise using neural networks.

Specifically, the prior sample z̃ ∼ pθ(z|c) is generated by a generator G from context-dependent random noise ε̃, while the approximate posterior sample z ∼ qφ(z|c, x) is generated by a generator Q from context-dependent random noise ε. Both ε̃ and ε are drawn from normal distributions whose mean and covariance matrix (assumed diagonal) are computed from c by feed-forward neural networks, the prior network and the recognition network, respectively:

z̃ = Gθ(ε̃),  ε̃ ∼ N(ε; µ̃, σ̃²I),  [µ̃; log σ̃²] = W̃ fθ(c) + b̃,   (3)
z = Qφ(ε),  ε ∼ N(ε; µ, σ²I),  [µ; log σ²] = W gφ([x; c]) + b,   (4)

where fθ(·) and gφ(·) are feed-forward neural networks. Our goal is to minimize the divergence between pθ(z|c) and qφ(z|x, c) while maximizing the log-probability of a response reconstructed from z. We thus solve the following problem:

min_{θ,φ,ψ}  −E_{qφ(z|x,c)} log pψ(x|z, c) + W(qφ(z|x, c) ‖ pθ(z|c)),   (5)

where pθ(z|c) and qφ(z|x, c) are neural networks implementing equations 3 and 4, respectively; pψ(x|z, c) is a decoder; and W(·‖·) is the Wasserstein distance between the two distributions (Arjovsky et al., 2017). We choose the Wasserstein distance as the divergence since WGANs have been shown to produce good results in text generation (Zhao et al., 2018).

Figure 1 illustrates an overview of our model. The utterance encoder (an RNN) transforms each utterance (including the response x) in the dialogue into a real-valued vector. For the i-th utterance in the context, the context encoder (an RNN) takes as input the concatenation of its encoding vector and the conversation floor (1 if the utterance is from the speaker of the response, otherwise 0) and computes its hidden state h^ctx_i. The final hidden state of the context encoder is used as the context representation.

At generation time, the model draws a random noise ε̃ from the prior network (PriNet), which transforms c through a feed-forward network followed by two matrix multiplications that produce the mean and diagonal covariance, respectively. The generator G then generates a sample of the latent variable z̃ from the noise through a feed-forward network, and the decoder RNN decodes z̃ into a response.

At training time, the model infers the posterior distribution of the latent variable conditioned on the context c and the response x. The recognition network (RecNet) takes as input the concatenation of x and c and transforms them through a feed-forward network followed by two matrix multiplications that define the normal mean and diagonal covariance, respectively. A Gaussian noise ε is drawn from the recognition network with the re-parametrization trick, and the generator Q transforms it into a sample of the latent variable z through a feed-forward network. The response decoder (an RNN) computes the reconstruction loss:

L_rec = −E_{z=Q(ε), ε∼RecNet(x,c)} log pψ(x|c, z).   (6)

We match the approximate posterior with the prior distribution of z by introducing an adversarial discriminator D which tells apart the prior samples from the posterior samples. D is implemented as a feed-forward neural network which takes as input the concatenation of z and c and outputs a real value. We train D by minimizing the discriminator loss (a code sketch of these components follows):

L_disc = E_{ε∼RecNet(x,c)}[D(Q(ε), c)] − E_{ε̃∼PriNet(c)}[D(G(ε̃), c)].   (7)
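The following is a minimal PyTorch sketch of the latent-space pieces in equations 3–7; module sizes and names are illustrative assumptions, not the authors' exact implementation (in particular, the WGAN gradient penalty is omitted):

import torch
import torch.nn as nn

class NoiseNet(nn.Module):
    # Maps a conditioning vector to a reparametrized Gaussian noise sample
    # (used for both PriNet, fed with c, and RecNet, fed with [x; c]).
    def __init__(self, in_dim, noise_dim, hidden=200):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, noise_dim)
        self.logvar = nn.Linear(hidden, noise_dim)

    def forward(self, h):
        h = self.body(h)
        mu, logvar = self.mu(h), self.logvar(h)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def discriminator_loss(D, G, Q, pri_net, rec_net, x, c):
    # Posterior sample z = Q(eps), eps ~ RecNet(x, c)   (equation 4)
    z_post = Q(rec_net(torch.cat([x, c], dim=-1)))
    # Prior sample z~ = G(eps~), eps~ ~ PriNet(c)       (equation 3)
    z_prior = G(pri_net(c))
    # Wasserstein critic objective                      (equation 7)
    return (D(torch.cat([z_post, c], dim=-1)).mean()
            - D(torch.cat([z_prior, c], dim=-1)).mean())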
3.3 MULTIMODAL RESPONSE GENERATION WITH A GAUSSIAN MIXTURE PRIOR NETWORK

It is usual practice for the prior distribution in the AAE architecture to be a normal distribution. However, responses often have a multimodal nature, reflecting many equally possible situations (Sato et al., 2017), topics and sentiments. Random noise with a normal distribution could restrict the generator to a latent space with a single dominant mode, due to the unimodal nature of the Gaussian distribution. Consequently, the generated responses could follow simple prototypes.

To capture multiple modes in the probability distribution over the latent variable, we further propose to use a distribution that explicitly defines more than one mode; each time, the noise used to generate the latent variable is selected from one of the modes. To this end, we make the prior network capture a mixture of Gaussian distributions, GMM({π_k, µ_k, σ_k²I}, k = 1, ..., K), where π_k, µ_k and σ_k are the parameters of the k-th component. This allows it to learn a multimodal manifold in the latent variable space through a two-step generation process: first choosing a component k with probability π_k, and then sampling Gaussian noise within the selected component:

p(ε|c) = Σ_{k=1}^{K} v_k N(ε; µ_k, σ_k²I),   (8)

where v_k ∈ Δ^{K−1} is a component indicator with class probabilities π₁, ..., π_K, and π_k is the mixture coefficient of the k-th component of the GMM, computed as π_k = exp(e_k) / Σ_{i=1}^{K} exp(e_i), where

[e_k; µ_k; log σ_k²] = W_k fθ(c) + b_k.   (9)

Instead of exact sampling, we use the Gumbel-Softmax re-parametrization (Kusner and Hernández-Lobato, 2016) to sample an instance of v:

v_k = exp((e_k + g_k)/τ) / Σ_{i=1}^{K} exp((e_i + g_i)/τ),   (10)

where g_i is Gumbel noise computed as g_i = −log(−log(u_i)), u_i ∼ U(0, 1), and τ ∈ [0, 1] is the softmax temperature, set to 0.1 in all experiments (see the sampling sketch below). We refer to this framework as DialogWAE-GMP. A comparison of performance with different numbers of prior components is given in Section 5.1.
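A small PyTorch sketch of the Gumbel-Softmax mixture sampling in equations 8–10; the shapes and names are our illustrative assumptions, not the authors' code:

import torch
import torch.nn.functional as F

def sample_gmm_noise(e, mu, logvar, tau=0.1):
    # e: (batch, K) logits; mu, logvar: (batch, K, noise_dim) from equation 9.
    g = -torch.log(-torch.log(torch.rand_like(e)))            # Gumbel(0, 1) noise
    v = F.softmax((e + g) / tau, dim=1)                       # soft indicator, equation 10
    eps_k = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # per-component noise
    return (v.unsqueeze(-1) * eps_k).sum(dim=1)               # mixture sample, equation 8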
3.4 TRAINING

Our model is trained epoch-wise until convergence is reached. In each epoch, we train the model iteratively by alternating two phases: an AE phase, during which the reconstruction loss of decoded responses is minimized, and a GAN phase, which minimizes the Wasserstein distance between the prior and approximate posterior distributions over the latent variables. The detailed procedure is presented in Algorithm 1.

Algorithm 1: DialogWAE Training (UEnc: utterance encoder; CEnc: context encoder; RecNet: recognition network; PriNet: prior network; Dec: decoder). K = 3, n_critic = 5 in all experiments.
In: a dialogue corpus D = {(c_i, x_i)}_{i=1}^{|D|}, the number of prior modes K, discriminator iterations n_critic
1:  Initialize {θ_UEnc, θ_CEnc, θ_PriNet, θ_RecNet, θ_Q, θ_G, θ_D, θ_Dec}
2:  while not converged do
3:    Initialize D
4:    while D has unsampled batches do
5:      Sample a mini-batch of N instances {(x_n, c_n)}_{n=1}^{N} from D
6:      Get the representations of response and context: x_n = UEnc(x_n), c_n = CEnc(c_n)
7:      Sample ε_n from RecNet(x_n, c_n) according to equation 4
8:      Sample ε̂_n from PriNet(c_n, K) according to equations 8–10
9:      Generate z_n = Q(ε_n), z̃_n = G(ε̂_n)
10:     Update {θ_Q, θ_G, θ_PriNet, θ_RecNet} by gradient ascent on the discriminator loss
          L_disc = (1/N) Σ_{n=1}^{N} D(z_n, c_n) − (1/N) Σ_{n=1}^{N} D(z̃_n, c_n)
11:     for i ∈ {1, ..., n_critic} do
12:       Repeat steps 5–9
13:       Update θ_D by gradient descent on the discriminator loss L_disc with gradient penalty
14:     end for
15:     Update {θ_UEnc, θ_CEnc, θ_RecNet, θ_Q, θ_Dec} by gradient descent on the reconstruction loss
          L_rec = −(1/N) Σ_{n=1}^{N} log p(x_n | z_n, c_n)
16:   end while
17: end while

4 EXPERIMENTAL SETUP

Datasets. We evaluate our model on two dialogue datasets, DailyDialog (Li et al., 2017b) and SwitchBoard (Godfrey and Holliman, 1997), which have been widely used in recent studies (Shen et al., 2018; Zhao et al., 2017). DailyDialog contains 13,118 multi-turn conversations about daily life, written for English learners. SwitchBoard contains 2,400 two-way telephone conversations under 70 specified topics. The datasets are separated into training, validation, and test sets with the same ratios as in the baseline papers, that is, 2316:60:62 for SwitchBoard (Zhao et al., 2017) and 10:1:1 for DailyDialog (Shen et al., 2018), respectively.

Metrics. To measure the performance of DialogWAE, we adopt several standard metrics widely used in existing studies: BLEU (Papineni et al., 2002), BOW embedding (Liu et al., 2016) and distinct (Li et al., 2015).

BLEU measures how much a generated response overlaps in n-grams with the reference. We compute BLEU scores for n < 4 using smoothing technique 7 of NLTK² (Chen and Cherry, 2014). For each test context, we sample 10 responses from the models and compute their BLEU scores. We define n-gram precision and n-gram recall as the average and the maximum score, respectively (Zhao et al., 2017); a code sketch of this protocol is given after the metric descriptions.

The BOW embedding metric is the cosine similarity of bag-of-words embeddings between the hypothesis and the reference. We use three variants to compute the word embedding similarity:
1. Greedy: greedily match words in the two utterances based on the cosine similarities between their embeddings, and average the obtained scores (Rus and Lintean, 2012).
2. Average: cosine similarity between the averaged word embeddings of the two utterances (Mitchell and Lapata, 2008).
3. Extrema: cosine similarity between the largest extreme values among the word embeddings of the two utterances (Forgues et al., 2014).
We use GloVe vectors (Pennington et al., 2014) as the embeddings, as discussed later in this section. For each test context, we report the maximum BOW embedding score among the 10 sampled responses.

Distinct computes the diversity of the generated responses: dist-n is defined as the ratio of unique n-grams (n = 1, 2) over all n-grams in the generated responses. As we sample multiple responses for each test context, we evaluate diversity both within and among the sampled responses: intra-dist is the average of the distinct values within each sampled response, and inter-dist is the distinct value among all sampled responses.
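A hypothetical sketch of the BLEU precision/recall protocol described above, using NLTK's smoothing method 7 (the variable names are ours, not the authors'):

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_precision_recall(reference_tokens, sampled_responses, n=3):
    # reference_tokens: a token list; sampled_responses: 10 token lists.
    smooth = SmoothingFunction().method7
    weights = tuple(1.0 / n for _ in range(n))
    scores = [sentence_bleu([reference_tokens], hyp, weights=weights,
                            smoothing_function=smooth)
              for hyp in sampled_responses]
    return sum(scores) / len(scores), max(scores)  # precision = avg, recall = max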
Baselines. We compare the performance of DialogWAE with seven recently proposed baselines for dialogue modeling: (i) HRED: a generalized sequence-to-sequence model with a hierarchical RNN encoder (Serban et al., 2016); (ii) SeqGAN: a GAN-based model for sequence generation (Li et al., 2017a); (iii) CVAE: a conditional VAE model with KL annealing (Zhao et al., 2017); (iv) CVAE-BOW: a conditional VAE model with a BOW loss (Zhao et al., 2017); (v) CVAE-CO: a collaborative conditional VAE model (Shen et al., 2018); (vi) VHRED: a hierarchical VAE model (Serban et al., 2017); and (vii) VHCR: a hierarchical VAE model with conversation modeling (Park et al., 2018).

Training and evaluation details. We use gated recurrent units (GRU) (Cho et al., 2014) for the RNN encoders and decoders. The utterance encoder is a bidirectional GRU with 300 hidden units in each direction. The context encoder and decoder are both GRUs with 300 hidden units. The prior and the recognition networks are both 2-layer feed-forward networks of size 200 with tanh non-linearities. The generators Q and G as well as the discriminator D are 3-layer feed-forward networks with ReLU non-linearities (Nair and Hinton, 2010) and hidden sizes of 200, 200 and 400, respectively. The dimension of the latent variable z is set to 200. The initial weights of all fully connected layers are sampled from a uniform distribution [−0.02, 0.02]. The gradient penalty is used when training D (Gulrajani et al., 2017) and its hyper-parameter λ is set to 10. We set the vocabulary size to 10,000 and map all out-of-vocabulary words to a special token <unk>. The word embedding size is 200, initialized with GloVe vectors pre-trained on Twitter (Pennington et al., 2014). The size of the context window is set to 10, with a maximum utterance length of 40. We sample responses with greedy decoding so that the randomness comes entirely from the latent variables. The baselines were implemented with the same set of hyper-parameters.

All models are implemented with PyTorch 0.4.0³ and fine-tuned with the NAVER Smart Machine Learning (NSML) platform (Sung et al., 2017; Kim et al., 2018). The models are trained end-to-end with mini-batches of 32 examples each. In the AE phase, the models are trained by SGD with an initial learning rate of 1.0 and gradient clipping at 1 (Pascanu et al., 2013); we decay the learning rate by 40% every 10th epoch. In the GAN phase, the models are updated using RMSprop (Tieleman and Hinton) with fixed learning rates of 5×10⁻⁵ and 1×10⁻⁵ for the generator and the discriminator, respectively. We tune the hyper-parameters on the validation set and measure performance on the test set.

² https://www.nltk.org/_modules/nltk/translate/bleu_score.html
³ https://pytorch.org

5 EXPERIMENTAL RESULTS

5.1 QUANTITATIVE ANALYSIS

Tables 1 and 2 show the performance of DialogWAE and the baselines on the two datasets. DialogWAE outperforms the baselines in the majority of the experiments. In terms of BLEU scores, DialogWAE (with a Gaussian mixture prior network) generates more relevant responses, with average recalls of 42.0% and 37.2% on the two datasets. These are significantly higher than those of the CVAE baselines (29.9% and 26.5%). We observe a similar trend in the BOW embedding metrics.

DialogWAE generates more diverse responses than the baselines do. The inter-dist scores are significantly higher than those of the baseline models, indicating that the sampled responses contain more distinct n-grams. DialogWAE does not show better intra-dist scores. We conjecture that this is due to the relatively long responses generated by DialogWAE, as shown in the last columns of both tables: it is highly unlikely for a short response to contain many repeated n-grams.

We further investigate the effect of the number of prior components K. Figure 2 shows the performance of DialogWAE-GMP with respect to K, which we vary from 1 to 9. In most cases, the performance increases with K and decreases once K passes a certain threshold, for example, three. The optimal K on both datasets was around 3. We attribute this degradation to the training difficulty of a mixture density network and the lack of appropriate regularization, which is left for future investigation.
5.2 QUALITATIVE ANALYSIS

Table 3 presents examples of responses generated by the models on the DailyDialog dataset. Due to space limitations, we report the results of CVAE-CO and DialogWAE-GMP, which are the representative models among the baselines and the proposed models, respectively. For each context in the test set, we show three samples of generated responses from each model. As expected, DialogWAE generates more coherent and diverse responses that cover multiple plausible aspects. Furthermore, the generated responses are longer and exhibit more informative content. By contrast, the responses generated by the baseline model exhibit relatively limited variation: although they show some variation in content, most of them share a similar prefix such as "how much".

We further investigate the interpretability of the Gaussian components in the prior network, that is, what each Gaussian component has captured before generation. We pick a dialogue context, "I'd like to invite you to dinner tonight, do you have time?", which was also used in (Shen et al., 2018) for analysis, and generate five responses for each Gaussian component. As shown in Table 4, different Gaussian components generate different types of responses: component 1 expresses a strong will, component 2 expresses some uncertainty, and component 3 generates strongly negative responses. The overlap between components is marginal (around 1/5). These results indicate that the Gaussian mixture prior network can successfully capture the multimodal distribution of the responses.

To validate the previous results, we further conducted a human evaluation with Amazon Mechanical Turk. We randomly selected 50 dialogues from the test set of DailyDialog. For each dialogue context, we generated 10 responses from each of the four models. The responses for each context were inspected by 5 participants, who were asked to choose the model that performs best with regard to coherence, diversity and informativeness, while being blind to the underlying algorithms. The average percentage of times each model was selected as the best for a specific criterion is shown in Table 5. The proposed approach clearly outperforms the current state of the art, CVAE-CO and VHCR, by a large margin in terms of all three metrics. The improvement is especially clear when the Gaussian mixture prior is used.

6 CONCLUSION

In this paper, we introduced a new approach, named DialogWAE, for dialogue modeling. Different from existing VAE models, which impose a simple prior distribution over the latent variables, DialogWAE draws prior and posterior samples of latent variables by transforming context-dependent Gaussian noise using neural networks, and minimizes the Wasserstein distance between the prior and posterior distributions. Furthermore, we enhance the model with a Gaussian mixture prior network to enrich the latent space. Experiments on two widely used datasets show that our model outperforms state-of-the-art VAE models and generates more coherent, informative and diverse responses.

ACKNOWLEDGMENTS

This work was supported by the Creative Industrial Technology Development Program (10053249) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea).
1. What is the main contribution of the paper in dialog response generation?
2. How does the proposed approach differ from prior works in conditional variational autoencoders and adversarial autoencoders?
3. What are the strengths and weaknesses of the proposed Wasserstein GAN approach in dialog modeling?
4. Do you have any concerns regarding the multi-modal distribution sampling in the proposed method?
5. How do the experimental results support the effectiveness of the proposed approach?
6. Are there any limitations or areas for improvement in the proposed method or experimental design?
Review
Review
This paper uses a Wasserstein GAN for conditional modeling in dialogue response generation. The main goal is to learn two network architectures that approximate the posterior distribution and the prior network. Instead of a KL divergence, as in VAE training, they use adversarial training, and instead of a softmax output from the discriminator, they use the Wasserstein distance. They also introduce a multi-modal distribution, a GMM, for sampling from the posterior during training and from the prior at test time. The multi-modal sampling is based on a Gumbel-Softmax over K possible Gaussian distributions. They experiment on the DailyDialog and Switchboard datasets and show promising improvements on quantitative measures like BLEU and BOW embedding similarities, as well as qualitative measures including human evaluations, comparing against a substantial number of baselines.

The paper presents a marriage of a few ideas. First, it uses the conditional structure presented in the ACL 2017 paper "Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders". It's great that they used that paper as a baseline. The extension is to use a GAN objective (the discriminator) as a critic and to use a Wasserstein GAN to resolve the vanishing-gradient issue and produce smooth gradients everywhere. In the ACL 2017 paper they use a KL divergence to make the distributions of the prior and recognition networks as close to each other as possible, so that at test time the prior network can generate samples similar to the true data feature distribution. In this paper, instead of a KL divergence, they use a discriminator as in the 'Adversarial Autoencoders' paper. This paper extends AAE by using the Wasserstein distance instead (a 1-Lipschitz function instead of a softmax for the discriminator). The W-GAN has been shown to produce good results in text generation in this year's ICML 2018 paper 'Adversarially Regularized Autoencoders' (ARAE). The idea there was to resolve the VAE posterior-collapse issue by using a discriminator as a regularizer instead of a KL divergence, with a stronger sampler mapping the generator's output noise into the latent space. Interestingly, the ARAE paper is not cited in this work, which I think is an issue. I understand that that paper was for generation only and not specific to dialogue modeling, but its omission makes claims in the paper misleading, such as: "Unlike VAE conversation models that impose a simple distribution over latent variables, DialogWAE models the data distribution by training a GAN within the latent variable space".

The part that I liked is the use of multimodal Gaussian distributions. I agree with the authors that using a single Gaussian for the approximating distribution limits the sampling space and can weaken the model's capacity for variation. Although this has not been proven for text, in images the Gaussian posteriors during training converge together into a single Gaussian, causing blurry images; in text, this might correspond to dull responses in dialogue. I would like the authors to comment on the interpretability of the components. Perhaps show a sample from each component (in the end, the model decides which mode to choose before generation). Are these GMM components overlapping, and by how much? Can you measure the difference between the means?

I find the experiments extensive, though the datasets are on the weaker side. I like the fact that they included human evaluations.
ICLR
Title DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder Abstract Variational autoencoders (VAEs) have shown a promise in data-driven conversation modeling. However, most VAE conversation models match the approximate posterior distribution over the latent variables to a simple prior such as standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., unimodal) scope. In this paper, we propose DialogWAE, a conditional Wasserstein autoencoder (WAE) specially designed for dialogue modeling. Unlike VAEs that impose a simple distribution over the latent variables, DialogWAE models the distribution of data by training a GAN within the latent variable space. Specifically, our model samples from the prior and posterior distributions over the latent variables by transforming context-dependent random noise using neural networks and minimizes the Wasserstein distance between the two distributions. We further develop a Gaussian mixture prior network to enrich the latent space. Experiments on two popular datasets show that DialogWAE outperforms the state-of-the-art approaches in generating more coherent, informative and diverse responses. 1 INTRODUCTION Neural response generation has been a long interest of natural language research. Most of the recent approaches to data-driven conversation modeling primarily build upon sequence-to-sequence learning (Cho et al., 2014; Sutskever et al., 2014). Previous research has demonstrated that sequenceto-sequence conversation models often suffer from the safe response problem and fail to generate meaningful, diverse on-topic responses (Li et al., 2015; Sato et al., 2017). Conditional variational autoencoders (CVAE) have shown promising results in addressing the safe response issue (Zhao et al., 2017; Shen et al., 2018). CVAE generates the response conditioned on a latent variable - representing topics, tones and situations of the response - and approximate the posterior distribution over latent variables using a neural network. The latent variable captures variabilities in the dialogue and thus generates more diverse responses. However, previous studies have shown that VAE models tend to suffer from the posterior collapse problem, where the decoder learns to ignore the latent variable and degrades to a vanilla RNN (Shen et al., 2018; Park et al., 2018; Bowman et al., 2015). Furthermore, they match the approximate posterior distribution over the latent variables to a simple prior such as standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., unimodal) scope (Goyal et al., 2017). A number of studies have sought GAN-based approaches (Goodfellow et al., 2014; Li et al., 2017a; Xu et al., 2017) which directly model the distribution of the responses. However, adversarial training over discrete tokens has been known to be difficult due to the non-differentiability. Li et al. (2017a) proposed a hybrid model of GAN and reinforcement learning (RL) where the score predicted by a discriminator is used as a reward to train the generator. However, training with REINFORCE has been observed to be unstable due to the high variance of the gradient estimate (Shen et al., 2017). Xu et al. (2017) make the GAN model differentiable with an approximate word embedding layer. However, their model only injects variability at the word level, thus limited to represent high-level response variabilities such as topics and situations. 
In this paper, we propose DialogWAE, a novel variant of GAN for neural conversation modeling. Unlike VAE conversation models that impose a simple distribution over latent variables, DialogWAE models the data distribution by training a GAN within the latent variable space. Specifically, it samples from the prior and posterior distributions over the latent variables by transforming contextdependent random noise with neural networks, and minimizes the Wasserstein distance (Arjovsky et al., 2017) between the prior and the approximate posterior distributions. Furthermore, our model takes into account a multimodal1 nature of responses by using a Gaussian mixture prior network. Adversarial training with the Gaussian mixture prior network enables DialogWAE to capture a richer latent space, yielding more coherent, informative and diverse responses. Our main contributions are two-fold: (1) A novel GAN-based model for neural dialogue modeling, which employs GAN to generate samples of latent variables. (2) A Gaussian mixture prior network to sample random noise from a multimodal prior distribution. To the best of our knowledge, the proposed DialogWAE is the first GAN conversation model that exploits multimodal latent structures. We evaluate our model on two benchmark datasets, SwitchBoard (Godfrey and Holliman, 1997) and DailyDialog (Li et al., 2017b). The results demonstrate that our model substantially outperforms the state-of-the-art methods in terms of BLEU, word embedding similarity, and distinct. Furthermore, we highlight how the GAN architecture with a Gaussian mixture prior network facilitates the generation of more diverse and informative responses. 2 RELATED WORK Encoder-decoder variants To address the “safe response” problem of the naive encoder-decoder conversation model, a number of variants have been proposed. Li et al. (2015) proposed a diversitypromoting objective function to encourage more various responses. Sato et al. (2017) propose to incorporate various types of situations behind conversations when encoding utterances and decoding their responses, respectively. Xing et al. (2017) incorporate topic information into the sequence-tosequence framework to generate informative and interesting responses. Our work is different from the aforementioned studies, as it does not rely on extra information such as situations and topics. VAE conversation models The variational autoencoder (VAE) (Kingma and Welling, 2014) is among the most popular frameworks for dialogue modeling (Zhao et al., 2017; Shen et al., 2018; Park et al., 2018). Serban et al. (2017) propose VHRED, a hierarchical latent variable sequenceto-sequence model that explicitly models multiple levels of variability in the responses. A main challenge for the VAE conversation models is the so-called “posterior collapse”. To alleviate the problem, Zhao et al. (2017) introduce an auxiliary bag-of-words loss to the decoder. They further incorporate extra dialogue information such as dialogue acts and speaker profiles. Shen et al. (2018) propose a collaborative CVAE model which samples the latent variable by transforming a Gaussian noise using neural networks and matches the prior and posterior distributions of the Gaussian noise with KL divergence. Park et al. (2018) propose a variational hierarchical conversation RNN (VHCR) which incorporates a hierarchical structure to latent variables. DialogWAE addresses the limitation of VAE conversation models by using a GAN architecture in the latent space. 
GAN conversation models Although GAN/CGAN has shown great success in image generation, adapting it to natural dialog generators is a non-trivial task. This is due to the non-differentiable nature of natural language tokens (Shen et al., 2017; Xu et al., 2017). Li et al. (2017a) address this problem by combining GAN with Reinforcement Learning (RL) where the discriminator predicts a reward to optimize the generator. However, training with REINFORCE can be unstable due to the high variance of the sampled gradient (Shen et al., 2017). Xu et al. (2017) make the sequenceto-sequence GAN differentiable by directly multiplying the word probabilities obtained from the decoder to the corresponding word vectors, yielding an approximately vectorized representation of the target sequence. However, their approach injects diversity in the word level rather than the level of the whole responses. DialogWAE differs from exiting GAN conversation models in that it shapes the distribution of responses in a high level latent space rather than direct tokens and does not rely on RL where the gradient variances are large. 1A multimodal distribution is a continuous probability distribution with two or more modes. 3 PROPOSED APPROACH 3.1 PROBLEM STATEMENT Let d=[u1, ..., uk] denote a dialogue of k utterances where ui=[w1, ..., w|ui|] represents an utterance andwn denotes the n-th word in ui. Let c=[u1, ..., uk−1] denote a dialogue context, the k-1 historical utterances, and x=uk be a response which means the next utterance. Our goal is to estimate the conditional distribution pθ(x|c). As x and c are sequences of discrete tokens, it is non-trivial to find a direct coupling between them. Instead, we introduce a continuous latent variable z that represents the high-level representation of the response. The response generation can be viewed as a two-step procedure, where a latent variable z is sampled from a distribution pθ(z|c) on a latent space Z , and then the response x is decoded from z with pθ(x|z, c). Under this model, the likelihood of a response is pθ(x|c) = ∫ z p(x|c, z)p(z|c)dz. (1) The exact log-probability is difficult to compute since it is intractable to marginalize out z. Therefore, we approximate the posterior distribution of z as qφ(z|x, c) which can be computed by a neural network named recognition network. Using this approximate posterior, we can instead compute the evidence lower bound (ELBO): log pθ(x|c) = log ∫ z p(x|c, z)p(z|c)dz ≥ `(x, c) = Ez∼qφ(z|x,c)[log pψ(x|c, z)]−KL(qφ(z|x, c)||p(z|c)), (2) where p(z|c) represents the prior distribution of z given c and can be modeled with a neural network named prior network. 3.2 CONDITIONAL WASSERSTEIN AUTO-ENCODERS FOR DIALOGUE MODELING The conventional VAE conversation models assume that the latent variable z follows a simple prior distribution such as the normal distribution. However, the latent space of real responses is more complicated and difficult to be estimated with such a simple distribution. This often leads to the posterior collapse problem (Shen et al., 2018). Inspired by GAN and the adversarial auto-encoder (AAE) (Makhzani et al., 2015; Tolstikhin et al., 2017; Zhao et al., 2018), we model the distribution of z by training a GAN within the latent space. We sample from the prior and posterior over the latent variables by transforming random noise using neural networks. 
Specifically, the prior sample z̃∼pθ(z|c) is generated by a generator G from context-dependent random noise ̃, while the approximate posterior sample z∼qφ(z|c, x) is generated by a generator Q from context-dependent random noise . Both ̃ and are drawn from a normal distribution whose mean and covariance matrix (assumed diagonal) are computed from c with feed-forward neural networks, prior network and recognition network, respectively: z̃ = Gθ(̃), ̃ ∼ N ( ; µ̃, σ̃2I), [ µ̃ log σ̃2 ] = W̃fθ(c) + b̃ (3) z = Qφ( ), ∼ N ( ;µ, σ2I), [ µ log σ2 ] =Wgφ( [ x c ] ) + b, (4) where fθ(·) and gφ(·) are feed-forward neural networks. Our goal is to minimize the divergence between pθ(z|c) and qφ(z|x, c) while maximizing the log-probability of a reconstructed response from z. We thus solve the following problem: min θ,φ,ψ −Eqφ(z|x,c) log pψ(x|z, c) +W (qφ(z|x, c)||pθ(z|c)), (5) where pθ(z|c) and qφ(z|x, c) are neural networks implementing Equations 3 and 4, respectively. pψ(x|z, c) is a decoder. W(·||·) represents the Wasserstein distance between these two distributions (Arjovsky et al., 2017). We choose the Wasserstein distance as the divergence since the WGAN has been shown to produce good results in text generation (Zhao et al., 2018). Figure 1 illustrates an overview of our model. The utterance encoder (RNN) transforms each utterance (including the response x) in the dialogue into a real-valued vector. For the i-th utterance in the context, the context encoder (RNN) takes as input the concatenation of its encoding vector and the conversation floor (1 if the utterance is from the speaker of the response, otherwise 0) and computes its hidden state hctxi . The final hidden state of the context encoder is used as the context representation. At generation time, the model draws a random noise ̃ from the prior network (PriNet) which transforms c through a feed-forward network followed by two matrix multiplications which result in the mean and diagonal covariance, respectively. Then, the generator G generates a sample of latent variable z̃ from the noise through a feed-forward network. The decoder RNN decodes the generated z̃ into a response. At training time, the model infers the posterior distribution of the latent variable conditioned on the context c and the response x. The recognition network (RecNet) takes as input the concatenation of both x and c and transforms them through a feed-forward network followed by two matrix multiplications which define the normal mean and diagonal covariance, respectively. A Gaussian noise is drawn from the recognition network with the re-parametrization trick. Then, the generator Q transforms the Gaussian noise into a sample of latent variable z through a feed-forward network. The response decoder (RNN) computes the reconstruction loss: Lrec = −Ez=Q( ), ∼RecNet(x,c) log pψ(x|c, z) (6) We match the approximate posterior with the prior distributions of z by introducing an adversarial discriminator D which tells apart the prior samples from posterior samples. D is implemented as a feed-forward neural network which takes as input the concatenation of z and c and outputs a real value. We train D by minimizing the discriminator loss: Ldisc = E ∼RecNet(x,c)[D(Q( ), c)]− Ẽ∼PriNet(c)[D(G(̃), c)] (7) 3.3 MULTIMODAL RESPONSE GENERATION WITH A GAUSSIAN MIXTURE PRIOR NETWORK It is a usual practice for the prior distribution in the AAE architecture to be a normal distribution. 
However, responses often have a multimodal nature reflecting many equally possible situations (Sato et al., 2017), topics and sentiments. A random noise with normal distribution could restrict the generator to output a latent space with a single dominant mode due to the unimodal nature of Gaussian distribution. Consequently, the generated responses could follow simple prototypes. To capture multiple modes in the probability distribution over the latent variable, we further propose to use a distribution that explicitly defines more than one mode. Each time, the noise to generate the latent variable is selected from one of the modes. To achieve so, we make the prior network to capture a mixture of Gaussian distributions, namely, GMM({πk, µk, σ2kI}Kk=1), where πk, µk and σk are parameters of the k-th component. This allows it to learn a multimodal manifold in the latent variable space in a two-step generation process – first choosing a component k with πk, and then sampling Gaussian noise within the selected component: p( |c) = K∑ k=1 vkN ( ;µk, σ2kI), (8) Algorithm 1: DialogWAE Training (UEnc: utterance encoder; CEnc: context encoder; RecNet: recognition network; PriNet: prior network; Dec: decoder) K=3, ncritic=5 in all experiments In: a dialog corpus D={(ci, xi)}|D|i=1, the number of prior modes K, discriminator iterations ncritic 1 Initialize {θUEnc, θCEnc, θPriNet, θRecNet, θQ, θG, θD, θDec} 2 while not convergence do 3 Initialize D 4 while D has unsampled batches do 5 Sample a mini-batch of N instances {(xn, cn)}Nn=1 from D 6 Get the representations of context and response xn=UEnc(xn), cn=CEnc(cn) 7 Sample n from RecNet(xn, cn) according to Equation 4 8 Sample ̂n from PriNet(cn, K) according to Equation 8–10 9 Generate zn = Q( n), z̃n = G(̂n) 10 Update {θQ, θG, θPriNet, θRecNet} by gradient ascent on discriminator loss 11 Ldisc = 1N ∑N n=1D(zn, cn)− 1 N ∑N n=1D(z̃n, cn) 12 for i ∈ {1, · · · , ncritic} do 13 Repeat 5–9 14 Update θD by gradient descent on the discriminator loss Ldisc with gradient penalty 15 end 16 Update {θUEnc, θCEnc, θRecNet, θQ, θDec} by gradient descent on the reconstruction loss 17 Lrec = − 1N ∑N n=1 log p(xn|zn, cn) 18 end 19 end where vk∈∆K−1 is a component indicator with class probabilities π1,· · · ,πK ; πk is the mixture coefficient of the k-th component of the GMM. They are computed as πk= exp(ek)∑K i=1 exp(ei) , where ekµk log σ2k =Wkfθ(c) + bk (9) Instead of exact sampling, we use Gumbel-Softmax re-parametrization (Kusner and HernándezLobato, 2016) to sample an instance of v: vk = exp((ek + gk)/τ)∑K i=1 exp((ei + gi)/τ) , (10) where gi is a Gumbel noise computed as gi = −log(−log(ui)), ui ∼ U(0, 1) and τ∈[0,1] is the softmax temperature which is set to 0.1 in all experiments. We refer to this framework as DialogWAE-GMP. A comparison of performance with different numbers of prior components will be shown in Section 5.1. 3.4 TRAINING Our model is trained epochwise until a convergence is reached. In each epoch, we train the model iteratively by alternating two phases− an AE phase during which the reconstruction loss of decoded responses is minimized, and a GAN phase which minimizes the Wasserstein distance between the prior and approximate posterior distributions over the latent variables. 
3.4 TRAINING

Our model is trained epoch-wise until convergence is reached. In each epoch, we train the model iteratively by alternating two phases: an AE phase, during which the reconstruction loss of the decoded responses is minimized, and a GAN phase, which minimizes the Wasserstein distance between the prior and the approximate posterior distributions over the latent variables. The detailed procedures are presented in Algorithm 1.

4 EXPERIMENTAL SETUP

Datasets We evaluate our model on two dialogue datasets, DailyDialog (Li et al., 2017b) and Switchboard (Godfrey and Holliman, 1997), which have been widely used in recent studies (Shen et al., 2018; Zhao et al., 2017). DailyDialog contains 13,118 multi-turn daily-life conversations written for English learners. Switchboard contains 2,400 two-way telephone conversations under 70 specified topics. The datasets are split into training, validation, and test sets with the same ratios as in the baseline papers, that is, 2316:60:62 for Switchboard (Zhao et al., 2017) and 10:1:1 for DailyDialog (Shen et al., 2018), respectively.

Metrics To measure the performance of DialogWAE, we adopt several standard metrics widely used in existing studies: BLEU (Papineni et al., 2002), BOW embedding (Liu et al., 2016) and distinct (Li et al., 2015). BLEU measures how many n-gram overlaps a generated response shares with the reference. We compute BLEU scores for n < 4 using smoothing techniques (smoothing 7)² (Chen and Cherry, 2014). For each test context, we sample 10 responses from the models and compute their BLEU scores. We define n-gram precision and n-gram recall as the average and the maximum score, respectively (Zhao et al., 2017).

The BOW embedding metric is the cosine similarity of bag-of-words embeddings between the hypothesis and the reference. We use three variants of the word embedding similarity: 1. Greedy: greedily match words in the two utterances based on the cosine similarities of their embeddings, and average the obtained scores (Rus and Lintean, 2012). 2. Average: cosine similarity between the averaged word embeddings of the two utterances (Mitchell and Lapata, 2008). 3. Extrema: cosine similarity between the largest extreme values among the word embeddings of the two utterances (Forgues et al., 2014). We use GloVe vectors (Pennington et al., 2014) as the embeddings, which will be discussed later in this section. For each test context, we report the maximum BOW embedding score among the 10 sampled responses.

Distinct computes the diversity of the generated responses. dist-n is defined as the ratio of unique n-grams (n = 1, 2) over all n-grams in the generated responses. As we sample multiple responses for each test context, we evaluate diversity both within and among the sampled responses: intra-dist is the average of the distinct values within each sampled response, and inter-dist is the distinct value among all sampled responses.
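Both metric families reduce to a few lines of code. The sketch below shows BLEU-n with NLTK's smoothing 7 and dist-n, together with the max-over-samples convention used for n-gram recall; the function names and toy token lists are our own.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_n(reference_tokens, hypothesis_tokens, n):
    """BLEU-n with NLTK smoothing 7 (Chen and Cherry, 2014)."""
    weights = tuple(1.0 / n for _ in range(n))
    return sentence_bleu([reference_tokens], hypothesis_tokens,
                         weights=weights,
                         smoothing_function=SmoothingFunction().method7)

def distinct_n(responses, n):
    """dist-n: ratio of unique n-grams over all n-grams. Pass one response
    for intra-dist, all sampled responses together for inter-dist."""
    ngrams = [tuple(r[i:i + n]) for r in responses
              for i in range(len(r) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

hyps = [["i", "would", "love", "to"], ["sorry", "i", "am", "busy", "tonight"]]
ref = ["i", "would", "like", "to", "join", "you"]
print(max(bleu_n(ref, h, 2) for h in hyps))   # n-gram recall: max over samples
print(sum(bleu_n(ref, h, 2) for h in hyps) / len(hyps))  # precision: average
print(distinct_n(hyps, 1))                    # inter-dist-1
```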
Baselines We compare DialogWAE with seven recently proposed baselines for dialogue modeling: (i) HRED: a generalized sequence-to-sequence model with a hierarchical RNN encoder (Serban et al., 2016); (ii) SeqGAN: a GAN-based model for sequence generation (Li et al., 2017a); (iii) CVAE: a conditional VAE model with KL annealing (Zhao et al., 2017); (iv) CVAE-BOW: a conditional VAE model with a BOW loss (Zhao et al., 2017); (v) CVAE-CO: a collaborative conditional VAE model (Shen et al., 2018); (vi) VHRED: a hierarchical VAE model (Serban et al., 2017); and (vii) VHCR: a hierarchical VAE model with conversation modeling (Park et al., 2018).

Training and Evaluation Details We use gated recurrent units (GRU) (Cho et al., 2014) for the RNN encoders and decoders. The utterance encoder is a bidirectional GRU with 300 hidden units in each direction. The context encoder and the decoder are both GRUs with 300 hidden units. The prior and the recognition networks are both 2-layer feed-forward networks of size 200 with tanh non-linearity. The generators Q and G as well as the discriminator D are 3-layer feed-forward networks with ReLU non-linearity (Nair and Hinton, 2010) and hidden sizes of 200, 200 and 400, respectively. The dimension of the latent variable z is set to 200. The initial weights of all fully connected layers are sampled from a uniform distribution [-0.02, 0.02]. A gradient penalty is used when training D (Gulrajani et al., 2017), and its hyper-parameter λ is set to 10. We set the vocabulary size to 10,000 and map all out-of-vocabulary words to a special token <unk>. The word embedding size is 200, initialized with GloVe vectors pre-trained on Twitter (Pennington et al., 2014). The size of the context window is set to 10, with a maximum utterance length of 40. We sample responses with greedy decoding so that the randomness comes entirely from the latent variables. The baselines were implemented with the same set of hyper-parameters.

All models are implemented with PyTorch 0.4.0³ and fine-tuned with the NAVER Smart Machine Learning (NSML) platform (Sung et al., 2017; Kim et al., 2018). The models are trained end-to-end with mini-batches of 32 examples each. In the AE phase, the models are trained by SGD with an initial learning rate of 1.0 and gradient clipping at 1 (Pascanu et al., 2013). We decay the learning rate by 40% every 10th epoch. In the GAN phase, the models are updated using RMSprop (Tieleman and Hinton) with fixed learning rates of 5×10⁻⁵ and 1×10⁻⁵ for the generator and the discriminator, respectively. We tune the hyper-parameters on the validation set and measure the performance on the test set.

² https://www.nltk.org/_modules/nltk/translate/bleu_score.html
³ https://pytorch.org
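As described above, a gradient penalty (Gulrajani et al., 2017) with λ = 10 is applied when training D. For reference, a minimal PyTorch sketch of that term is given below; it is our illustration (the function name and the assumption that D takes (z, c) batches are ours), not the authors' released code.

```python
import torch

def gradient_penalty(D, z_post, z_prior, c, lam=10.0):
    """WGAN-GP term added to L_disc when updating the discriminator D(z, c):
    penalizes deviation of the critic's gradient norm from 1 on points
    interpolated between posterior and prior latent samples."""
    alpha = torch.rand(z_post.size(0), 1, device=z_post.device)
    z_hat = (alpha * z_post + (1 - alpha) * z_prior).requires_grad_(True)
    d_out = D(z_hat, c)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=z_hat,
                                create_graph=True)[0]
    return lam * ((grads.norm(2, dim=1) - 1) ** 2).mean()
```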
5 EXPERIMENTAL RESULTS

5.1 QUANTITATIVE ANALYSIS

Tables 1 and 2 show the performance of DialogWAE and the baselines on the two datasets. DialogWAE outperforms the baselines in the majority of the experiments. In terms of BLEU scores, DialogWAE (with a Gaussian mixture prior network) generates more relevant responses, with average recalls of 42.0% and 37.2% on the two datasets. These are significantly higher than those of the CVAE baselines (29.9% and 26.5%). We observe a similar trend for the BOW embedding metrics.

DialogWAE generates more diverse responses than the baselines do. The inter-dist scores are significantly higher than those of the baseline models, which indicates that the sampled responses contain more distinct n-grams. DialogWAE does not show better intra-dist scores. We conjecture that this is due to the relatively long responses generated by DialogWAE, as shown in the last columns of both tables: it is highly unlikely for a short response to contain many repeated n-grams.

We further investigate the effect of the number of prior components K. Figure 2 shows the performance of DialogWAE-GMP with respect to K, which we vary from 1 to 9. In most cases, the performance increases with K and decreases once K reaches a certain threshold, for example, three. The optimal K on both datasets was around 3. We attribute this degradation to the training difficulty of a mixture density network and the lack of appropriate regularization, which is left for future investigation.

5.2 QUALITATIVE ANALYSIS

Table 3 presents examples of responses generated by the models on the DailyDialog dataset. Due to space limitations, we report the results of CVAE-CO and DialogWAE-GMP, which are representative of the baselines and the proposed models, respectively. For each context in the test set, we show three samples of generated responses from each model. As expected, DialogWAE generates more coherent and diverse responses that cover multiple plausible aspects. Furthermore, the generated responses are long and exhibit informative content. By contrast, the responses generated by the baseline model exhibit relatively limited variation: although they show some variation in content, most of them share a similar prefix such as "how much".

We further investigate the interpretability of the Gaussian components in the prior network, that is, what each Gaussian component has captured before generation. We pick the dialogue context "I'd like to invite you to dinner tonight, do you have time?", which is also used in (Shen et al., 2018) for analysis, and generate five responses for each Gaussian component. As shown in Table 4, different Gaussian components generate different types of responses: component 1 expresses a strong will, component 2 expresses some uncertainty, and component 3 generates strong negative responses. The overlap between components is marginal (around 1/5). The results indicate that the Gaussian mixture prior network can successfully capture the multimodal distribution of the responses.

To validate the previous results, we further conduct a human evaluation on Amazon Mechanical Turk. We randomly selected 50 dialogues from the test set of DailyDialog. For each dialogue context, we generated 10 responses from each of the four models. Responses for each context were inspected by 5 participants, who were asked to choose the model that performs best with regard to coherence, diversity and informativeness, while being blind to the underlying algorithms. The average percentage at which each model was selected as the best for a specific criterion is shown in Table 5. The proposed approach clearly outperforms the current state of the art, CVAE-CO and VHCR, by a large margin in terms of all three metrics. The improvement is especially clear when the Gaussian mixture prior is used.

6 CONCLUSION

In this paper, we introduced a new approach, named DialogWAE, for dialogue modeling. Different from existing VAE models, which impose a simple prior distribution over the latent variables, DialogWAE draws prior and posterior samples of the latent variables by transforming context-dependent Gaussian noise using neural networks, and minimizes the Wasserstein distance between the prior and posterior distributions. Furthermore, we enhance the model with a Gaussian mixture prior network to enrich the latent space. Experiments on two widely used datasets show that our model outperforms state-of-the-art VAE models and generates more coherent, informative and diverse responses.

ACKNOWLEDGMENTS

This work was supported by the Creative Industrial Technology Development Program (10053249) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea).
1. What is the novel dialogue modeling framework proposed by the paper? 2. What are the strengths of the proposed models, particularly in terms of their performance on various metrics and comparisons with baseline methods? 3. Do you have any minor comments or questions regarding the paper, such as missing citations or specific implementation details?
Review
Review This paper proposes a novel dialogue modeling framework, DialogWAE, which adopts a conditional Wasserstein autoencoder to learn a continuous latent variable z that represents the high-level representation of responses. To enrich the diversity of the latent representations and capture multiple modes in the latent variables, the authors propose an advanced version of DialogWAE (DialogWAE-GMP) that models the prior distribution with a mixture of Gaussian distributions instead of a single one. Strength: The idea is clear and the paper is very well written. The authors evaluate the proposed models on a variety of reasonable metrics and compare against seven recently proposed baselines. Results show that both DialogWAE and DialogWAE-GMP generate responses that are both more similar to the references (BLEU and BOW embeddings) and more diverse (inter-dist). Human evaluations also show that the proposed models generate better responses than two representative baselines. Minor comments/questions: 1) Missing citation: the optimization problem of this paper (Equation 5) is similar to that of Adversarially Regularized Autoencoders (ICML 2018). 2) The authors use the Gumbel-Softmax re-parametrization to sample an instance for the Gaussian mixture prior network. Are you using the Straight-Through estimator or the original one? If the original Gumbel-Softmax estimator is used, it would be better to show a comparison between simply using the Softmax and using the Gumbel-Softmax. Since the discrete sampling is not crucial in this case, a mixture of weighted representations may also work. 3) DialogWAE-GMP with the Gaussian mixture prior network achieves great evaluation results and is better than the non-mixture version. I'd be interested to see some analysis of what each Gaussian model has captured. Will different Gaussian models generate different types of responses? Are the differences interpretable?
ICLR
Title DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder Abstract Variational autoencoders (VAEs) have shown a promise in data-driven conversation modeling. However, most VAE conversation models match the approximate posterior distribution over the latent variables to a simple prior such as standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., unimodal) scope. In this paper, we propose DialogWAE, a conditional Wasserstein autoencoder (WAE) specially designed for dialogue modeling. Unlike VAEs that impose a simple distribution over the latent variables, DialogWAE models the distribution of data by training a GAN within the latent variable space. Specifically, our model samples from the prior and posterior distributions over the latent variables by transforming context-dependent random noise using neural networks and minimizes the Wasserstein distance between the two distributions. We further develop a Gaussian mixture prior network to enrich the latent space. Experiments on two popular datasets show that DialogWAE outperforms the state-of-the-art approaches in generating more coherent, informative and diverse responses. 1 INTRODUCTION Neural response generation has been a long interest of natural language research. Most of the recent approaches to data-driven conversation modeling primarily build upon sequence-to-sequence learning (Cho et al., 2014; Sutskever et al., 2014). Previous research has demonstrated that sequenceto-sequence conversation models often suffer from the safe response problem and fail to generate meaningful, diverse on-topic responses (Li et al., 2015; Sato et al., 2017). Conditional variational autoencoders (CVAE) have shown promising results in addressing the safe response issue (Zhao et al., 2017; Shen et al., 2018). CVAE generates the response conditioned on a latent variable - representing topics, tones and situations of the response - and approximate the posterior distribution over latent variables using a neural network. The latent variable captures variabilities in the dialogue and thus generates more diverse responses. However, previous studies have shown that VAE models tend to suffer from the posterior collapse problem, where the decoder learns to ignore the latent variable and degrades to a vanilla RNN (Shen et al., 2018; Park et al., 2018; Bowman et al., 2015). Furthermore, they match the approximate posterior distribution over the latent variables to a simple prior such as standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., unimodal) scope (Goyal et al., 2017). A number of studies have sought GAN-based approaches (Goodfellow et al., 2014; Li et al., 2017a; Xu et al., 2017) which directly model the distribution of the responses. However, adversarial training over discrete tokens has been known to be difficult due to the non-differentiability. Li et al. (2017a) proposed a hybrid model of GAN and reinforcement learning (RL) where the score predicted by a discriminator is used as a reward to train the generator. However, training with REINFORCE has been observed to be unstable due to the high variance of the gradient estimate (Shen et al., 2017). Xu et al. (2017) make the GAN model differentiable with an approximate word embedding layer. However, their model only injects variability at the word level, thus limited to represent high-level response variabilities such as topics and situations. 
In this paper, we propose DialogWAE, a novel variant of GAN for neural conversation modeling. Unlike VAE conversation models that impose a simple distribution over latent variables, DialogWAE models the data distribution by training a GAN within the latent variable space. Specifically, it samples from the prior and posterior distributions over the latent variables by transforming contextdependent random noise with neural networks, and minimizes the Wasserstein distance (Arjovsky et al., 2017) between the prior and the approximate posterior distributions. Furthermore, our model takes into account a multimodal1 nature of responses by using a Gaussian mixture prior network. Adversarial training with the Gaussian mixture prior network enables DialogWAE to capture a richer latent space, yielding more coherent, informative and diverse responses. Our main contributions are two-fold: (1) A novel GAN-based model for neural dialogue modeling, which employs GAN to generate samples of latent variables. (2) A Gaussian mixture prior network to sample random noise from a multimodal prior distribution. To the best of our knowledge, the proposed DialogWAE is the first GAN conversation model that exploits multimodal latent structures. We evaluate our model on two benchmark datasets, SwitchBoard (Godfrey and Holliman, 1997) and DailyDialog (Li et al., 2017b). The results demonstrate that our model substantially outperforms the state-of-the-art methods in terms of BLEU, word embedding similarity, and distinct. Furthermore, we highlight how the GAN architecture with a Gaussian mixture prior network facilitates the generation of more diverse and informative responses. 2 RELATED WORK Encoder-decoder variants To address the “safe response” problem of the naive encoder-decoder conversation model, a number of variants have been proposed. Li et al. (2015) proposed a diversitypromoting objective function to encourage more various responses. Sato et al. (2017) propose to incorporate various types of situations behind conversations when encoding utterances and decoding their responses, respectively. Xing et al. (2017) incorporate topic information into the sequence-tosequence framework to generate informative and interesting responses. Our work is different from the aforementioned studies, as it does not rely on extra information such as situations and topics. VAE conversation models The variational autoencoder (VAE) (Kingma and Welling, 2014) is among the most popular frameworks for dialogue modeling (Zhao et al., 2017; Shen et al., 2018; Park et al., 2018). Serban et al. (2017) propose VHRED, a hierarchical latent variable sequenceto-sequence model that explicitly models multiple levels of variability in the responses. A main challenge for the VAE conversation models is the so-called “posterior collapse”. To alleviate the problem, Zhao et al. (2017) introduce an auxiliary bag-of-words loss to the decoder. They further incorporate extra dialogue information such as dialogue acts and speaker profiles. Shen et al. (2018) propose a collaborative CVAE model which samples the latent variable by transforming a Gaussian noise using neural networks and matches the prior and posterior distributions of the Gaussian noise with KL divergence. Park et al. (2018) propose a variational hierarchical conversation RNN (VHCR) which incorporates a hierarchical structure to latent variables. DialogWAE addresses the limitation of VAE conversation models by using a GAN architecture in the latent space. 
GAN conversation models Although GAN/CGAN has shown great success in image generation, adapting it to natural dialogue generation is a non-trivial task, due to the non-differentiable nature of natural language tokens (Shen et al., 2017; Xu et al., 2017). Li et al. (2017a) address this problem by combining GAN with Reinforcement Learning (RL), where the discriminator predicts a reward to optimize the generator. However, training with REINFORCE can be unstable due to the high variance of the sampled gradient (Shen et al., 2017). Xu et al. (2017) make the sequence-to-sequence GAN differentiable by directly multiplying the word probabilities obtained from the decoder with the corresponding word vectors, yielding an approximately vectorized representation of the target sequence. However, their approach injects diversity at the word level rather than at the level of whole responses. DialogWAE differs from existing GAN conversation models in that it shapes the distribution of responses in a high-level latent space rather than over direct tokens, and it does not rely on RL, where gradient variances are large.

¹ A multimodal distribution is a continuous probability distribution with two or more modes.

3 PROPOSED APPROACH

3.1 PROBLEM STATEMENT

Let d = [u_1, ..., u_k] denote a dialogue of k utterances, where u_i = [w_1, ..., w_{|u_i|}] represents an utterance and w_n denotes the n-th word in u_i. Let c = [u_1, ..., u_{k−1}] denote the dialogue context, i.e., the k−1 historical utterances, and let x = u_k be the response, i.e., the next utterance. Our goal is to estimate the conditional distribution pθ(x|c). As x and c are sequences of discrete tokens, it is non-trivial to find a direct coupling between them. Instead, we introduce a continuous latent variable z as a high-level representation of the response. Response generation can then be viewed as a two-step procedure, where a latent variable z is sampled from a distribution pθ(z|c) on a latent space Z, and the response x is then decoded from z with pθ(x|z, c). Under this model, the likelihood of a response is

$$p_\theta(x|c) = \int_z p(x|c, z)\, p(z|c)\, dz. \tag{1}$$

The exact log-probability is difficult to compute since it is intractable to marginalize out z. Therefore, we approximate the posterior distribution of z as qφ(z|x, c), which can be computed by a neural network named the recognition network. Using this approximate posterior, we can instead compute the evidence lower bound (ELBO):

$$\log p_\theta(x|c) = \log \int_z p(x|c, z)\, p(z|c)\, dz \;\ge\; \ell(x, c) = \mathbb{E}_{z \sim q_\phi(z|x,c)}\left[\log p_\psi(x|c, z)\right] - \mathrm{KL}\!\left(q_\phi(z|x, c) \,\|\, p(z|c)\right), \tag{2}$$

where p(z|c) represents the prior distribution of z given c and can be modeled with a neural network named the prior network.

3.2 CONDITIONAL WASSERSTEIN AUTO-ENCODERS FOR DIALOGUE MODELING

Conventional VAE conversation models assume that the latent variable z follows a simple prior distribution such as the normal distribution. However, the latent space of real responses is more complicated and difficult to estimate with such a simple distribution, which often leads to the posterior collapse problem (Shen et al., 2018). Inspired by GAN and the adversarial auto-encoder (AAE) (Makhzani et al., 2015; Tolstikhin et al., 2017; Zhao et al., 2018), we model the distribution of z by training a GAN within the latent space. We sample from the prior and the posterior over the latent variables by transforming random noise using neural networks.
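As background for Equation 2, when both distributions are diagonal Gaussians the KL term has a closed form, sketched below with hypothetical function names. DialogWAE replaces exactly this term with the adversarially estimated Wasserstein distance just described.

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL(q || p) between diagonal Gaussians: the divergence a
    conventional CVAE would use in the ELBO of Equation 2."""
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1.0).sum(dim=-1)

def neg_elbo(recon_log_prob, mu_q, logvar_q, mu_p, logvar_p):
    """Single-sample Monte-Carlo estimate of the negative ELBO:
    -E_q[log p(x|c,z)] + KL(q(z|x,c) || p(z|c))."""
    return -recon_log_prob + gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
```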
Specifically, the prior sample z̃∼pθ(z|c) is generated by a generator G from context-dependent random noise ̃, while the approximate posterior sample z∼qφ(z|c, x) is generated by a generator Q from context-dependent random noise . Both ̃ and are drawn from a normal distribution whose mean and covariance matrix (assumed diagonal) are computed from c with feed-forward neural networks, prior network and recognition network, respectively: z̃ = Gθ(̃), ̃ ∼ N ( ; µ̃, σ̃2I), [ µ̃ log σ̃2 ] = W̃fθ(c) + b̃ (3) z = Qφ( ), ∼ N ( ;µ, σ2I), [ µ log σ2 ] =Wgφ( [ x c ] ) + b, (4) where fθ(·) and gφ(·) are feed-forward neural networks. Our goal is to minimize the divergence between pθ(z|c) and qφ(z|x, c) while maximizing the log-probability of a reconstructed response from z. We thus solve the following problem: min θ,φ,ψ −Eqφ(z|x,c) log pψ(x|z, c) +W (qφ(z|x, c)||pθ(z|c)), (5) where pθ(z|c) and qφ(z|x, c) are neural networks implementing Equations 3 and 4, respectively. pψ(x|z, c) is a decoder. W(·||·) represents the Wasserstein distance between these two distributions (Arjovsky et al., 2017). We choose the Wasserstein distance as the divergence since the WGAN has been shown to produce good results in text generation (Zhao et al., 2018). Figure 1 illustrates an overview of our model. The utterance encoder (RNN) transforms each utterance (including the response x) in the dialogue into a real-valued vector. For the i-th utterance in the context, the context encoder (RNN) takes as input the concatenation of its encoding vector and the conversation floor (1 if the utterance is from the speaker of the response, otherwise 0) and computes its hidden state hctxi . The final hidden state of the context encoder is used as the context representation. At generation time, the model draws a random noise ̃ from the prior network (PriNet) which transforms c through a feed-forward network followed by two matrix multiplications which result in the mean and diagonal covariance, respectively. Then, the generator G generates a sample of latent variable z̃ from the noise through a feed-forward network. The decoder RNN decodes the generated z̃ into a response. At training time, the model infers the posterior distribution of the latent variable conditioned on the context c and the response x. The recognition network (RecNet) takes as input the concatenation of both x and c and transforms them through a feed-forward network followed by two matrix multiplications which define the normal mean and diagonal covariance, respectively. A Gaussian noise is drawn from the recognition network with the re-parametrization trick. Then, the generator Q transforms the Gaussian noise into a sample of latent variable z through a feed-forward network. The response decoder (RNN) computes the reconstruction loss: Lrec = −Ez=Q( ), ∼RecNet(x,c) log pψ(x|c, z) (6) We match the approximate posterior with the prior distributions of z by introducing an adversarial discriminator D which tells apart the prior samples from posterior samples. D is implemented as a feed-forward neural network which takes as input the concatenation of z and c and outputs a real value. We train D by minimizing the discriminator loss: Ldisc = E ∼RecNet(x,c)[D(Q( ), c)]− Ẽ∼PriNet(c)[D(G(̃), c)] (7) 3.3 MULTIMODAL RESPONSE GENERATION WITH A GAUSSIAN MIXTURE PRIOR NETWORK It is a usual practice for the prior distribution in the AAE architecture to be a normal distribution. 
However, responses often have a multimodal nature reflecting many equally possible situations (Sato et al., 2017), topics and sentiments. A random noise with normal distribution could restrict the generator to output a latent space with a single dominant mode due to the unimodal nature of Gaussian distribution. Consequently, the generated responses could follow simple prototypes. To capture multiple modes in the probability distribution over the latent variable, we further propose to use a distribution that explicitly defines more than one mode. Each time, the noise to generate the latent variable is selected from one of the modes. To achieve so, we make the prior network to capture a mixture of Gaussian distributions, namely, GMM({πk, µk, σ2kI}Kk=1), where πk, µk and σk are parameters of the k-th component. This allows it to learn a multimodal manifold in the latent variable space in a two-step generation process – first choosing a component k with πk, and then sampling Gaussian noise within the selected component: p( |c) = K∑ k=1 vkN ( ;µk, σ2kI), (8) Algorithm 1: DialogWAE Training (UEnc: utterance encoder; CEnc: context encoder; RecNet: recognition network; PriNet: prior network; Dec: decoder) K=3, ncritic=5 in all experiments In: a dialog corpus D={(ci, xi)}|D|i=1, the number of prior modes K, discriminator iterations ncritic 1 Initialize {θUEnc, θCEnc, θPriNet, θRecNet, θQ, θG, θD, θDec} 2 while not convergence do 3 Initialize D 4 while D has unsampled batches do 5 Sample a mini-batch of N instances {(xn, cn)}Nn=1 from D 6 Get the representations of context and response xn=UEnc(xn), cn=CEnc(cn) 7 Sample n from RecNet(xn, cn) according to Equation 4 8 Sample ̂n from PriNet(cn, K) according to Equation 8–10 9 Generate zn = Q( n), z̃n = G(̂n) 10 Update {θQ, θG, θPriNet, θRecNet} by gradient ascent on discriminator loss 11 Ldisc = 1N ∑N n=1D(zn, cn)− 1 N ∑N n=1D(z̃n, cn) 12 for i ∈ {1, · · · , ncritic} do 13 Repeat 5–9 14 Update θD by gradient descent on the discriminator loss Ldisc with gradient penalty 15 end 16 Update {θUEnc, θCEnc, θRecNet, θQ, θDec} by gradient descent on the reconstruction loss 17 Lrec = − 1N ∑N n=1 log p(xn|zn, cn) 18 end 19 end where vk∈∆K−1 is a component indicator with class probabilities π1,· · · ,πK ; πk is the mixture coefficient of the k-th component of the GMM. They are computed as πk= exp(ek)∑K i=1 exp(ei) , where ekµk log σ2k =Wkfθ(c) + bk (9) Instead of exact sampling, we use Gumbel-Softmax re-parametrization (Kusner and HernándezLobato, 2016) to sample an instance of v: vk = exp((ek + gk)/τ)∑K i=1 exp((ei + gi)/τ) , (10) where gi is a Gumbel noise computed as gi = −log(−log(ui)), ui ∼ U(0, 1) and τ∈[0,1] is the softmax temperature which is set to 0.1 in all experiments. We refer to this framework as DialogWAE-GMP. A comparison of performance with different numbers of prior components will be shown in Section 5.1. 3.4 TRAINING Our model is trained epochwise until a convergence is reached. In each epoch, we train the model iteratively by alternating two phases− an AE phase during which the reconstruction loss of decoded responses is minimized, and a GAN phase which minimizes the Wasserstein distance between the prior and approximate posterior distributions over the latent variables. 
The detailed procedures are presented in Algorithm 1 4 EXPERIMENTAL SETUP Datasets We evaluate our model on two dialogue datasets, Dailydialog (Li et al., 2017b) and Switchboard (Godfrey and Holliman, 1997), which have been widely used in recent studies (Shen et al., 2018; Zhao et al., 2017). Dailydialog has 13,118 daily conversations for a English learner in a daily life. Switchboard contains 2,400 two-way telephone conversations under 70 specified topics. The datasets are separated into training, validation, and test sets with the same ratios as in the baseline papers, that is, 2316:60:62 for Switchboard (Zhao et al., 2017) and 10:1:1 for Dailydialog (Shen et al., 2018), respectively. Metrics To measure the performance of DialogWAE, we adopted several standard metrics widely used in existing studies: BLEU (Papineni et al., 2002), BOW Embedding (Liu et al., 2016) and distinct (Li et al., 2015). In particular, BLEU measures how much a generated response contains n-gram overlaps with the reference. We compute BLEU scores for n<4 using smoothing techniques (smoothing 7)2 (Chen and Cherry, 2014). For each test context, we sample 10 responses from the models and compute their BLEU scores. We define n-gram precision and n-gram recall as the average and the maximum score respectively (Zhao et al., 2017). BOW embedding metric is the cosine similarity of bag-of-words embeddings between the hypothesis and the reference. We use three metrics to compute the word embedding similarity: 1. Greedy: greedily matching words in two utterances based on the cosine similarities between their embeddings, and to average the obtained scores (Rus and Lintean, 2012). 2. Average: cosine similarity between the averaged word embeddings in the two utterances (Mitchell and Lapata, 2008). 3. Extrema: cosine similarity between the largest extreme values among the word embeddings in the two utterances (Forgues et al., 2014). We use Glove vectors (Pennington et al., 2014) as the embeddings which will be discussed later in this section. For each test context, we report the maximum BOW embedding score among the 10 sampled responses. Distinct computes the diversity of the generated responses. dist-n is defined as the ratio of unique n-grams (n=1,2) over all n-grams in the generated responses. As we sample multiple responses for each test context, we evaluate diversities for both within and among the sampled responses. We define intra-dist as the average of distinct values within each sampled response and inter-dist as the distinct value among all sampled responses. Baselines We compare the performance of DialogWAE with seven recently-proposed baselines for dialogue modeling: (i) HRED: a generalized sequence-to-sequence model with hierarchical RNN encoder (Serban et al., 2016), (ii) SeqGAN: a GAN based model for sequence generation (Li et al., 2017a), (iii) CVAE: a conditional VAE model with KL-annealing (Zhao et al., 2017), (iv) CVAEBOW: a conditional VAE model with a BOW loss (Zhao et al., 2017), (v) CVAE-CO: a collaborative conditional VAE model (Shen et al., 2018), (vi) VHRED: a hierarchical VAE model (Serban et al., 2017), and (vii) VHCR: a hierarchical VAE model with conversation modeling (Park et al., 2018). Training and Evaluation Details We use the gated recurrent units (GRU) (Cho et al., 2014) for the RNN encoders and decoders. The utterance encoder is a bidirectional GRU with 300 hidden units in each direction. The context encoder and decoder are both GRUs with 300 hidden units. 
The prior and the recognition networks are both 2-layer feed-forward networks of size 200 with tanh non-linearity. The generators Q and G as well as the discriminator D are 3-layer feed-forward networks with ReLU non-linearity (Nair and Hinton, 2010) and hidden sizes of 200, 200 and 400, respectively. The dimension of a latent variable z is set to 200. The initial weights for all fully connected layers are sampled from a uniform distribution [-0.02, 0.02]. The gradient penalty is used when trainingD (Gulrajani et al., 2017) and its hyper-parameter λ is set to 10. We set the vocabulary size to 10,000 and define all the out-of-vocabulary words to a special token <unk>. The word embedding size is 200 and initialized with Glove vectors pre-trained on Twitter (Pennington et al., 2014). The size of context window is set to 10 with a maximum utterance length of 40. We sample responses with greedy decoding so that the randomness entirely come from the latent variables. The baselines were implemented with the same set of hyper-parameters. All the models are implemented with Pytorch 0.4.03, and fine-tuned with NAVER Smart Machine Learning (NSML) platform (Sung et al., 2017; Kim et al., 2018). The models are trained with mini-batches containing 32 examples each in an end-to-end manner. In the AE phase, the models are trained by SGD with an initial learning rate of 1.0 and gradient clipping at 1 (Pascanu et al., 2013). We decay the learning rate by 40% every 10th epoch. In the GAN phase, the models are updated using RMSprop (Tieleman and Hinton) with fixed learning rates of 5×10−5 2https://www.nltk.org/_modules/nltk/translate/bleu_score.html 3https://pytorch.org and 1×10−5 for the generator and the discriminator, respectively. We tune the hyper-parameters on the validation set and measure the performance on the test set. 5 EXPERIMENTAL RESULTS 5.1 QUANTITATIVE ANALYSIS Tables 1 and 2 show the performance of DialogWAE and baselines on the two datasets. DialogWAE outperforms the baselines in the majority of the experiments. In terms of BLEU scores, DialogWAE (with a Gaussian mixture prior network) generates more relevant responses, with the average recall of 42.0% and 37.2% on both of the datasets. These are significantly higher than those of the CVAE baselines (29.9% and 26.5%). We observe a similar trend to the BOW embedding metrics. DialogWAE generates more diverse responses than the baselines do. The inter-dist scores are significantly higher than those of the baseline models. This indicates the sampled responses contain more distinct n-grams. DialogWAE does not show better intra-distinct scores. We conjecture that this is due to the relatively long responses generated by the DialogWAE as shown in the last columns of both tables. It is highly unlikely for there to be many repeated n-grams in a short response. We further investigate the effects of the number of prior components (K). Figure 2 shows the performance of DialogWAE-GMP with respect to the number of prior components K. We vary K from 1 to 9. As shown in the results, in most cases, the performance increases with K and decreases once K reaches a certain threshold, for example, three. The optimal K on both of the datasets was around 3. We attribute this degradation to training difficulty of a mixture density network and the lack of appropriate regularization, which is left for future investigation. 5.2 QUALITATIVE ANALYSIS Table 3 presents examples of responses generated by the models on the DailyDialog dataset. 
Due to the space limitation, we report the results of CVAE-CO and DialogWAE-GMP, which are the representative models among the baselines and the proposed models. For each context in the test set, we show three samples of generated responses from each model. As we expected, DialogWAE generates more coherent and diverse responses that cover multiple plausible aspects. Furthermore, we notice that the generated response is long and exhibits informative content. By contrast, the responses generated by the baseline model exhibit relatively limited variations. Although the responses show some variants in contents, most of them share a similar prefix such as “how much”. We further investigate the interpretability of Gaussian components in the prior network, that is, what each Gaussian model has captured before generation. We pick a dialogue context “I’d like to invite you to dinner tonight, do you have time?” which is also used in (Shen et al., 2018) for analysis and generate five responses for each Gaussian component. As shown in Table 4, different Gaussian models generate different types of responses: component 1 expresses a strong will, while component 2 expresses some uncertainty, and component 3 generates strong negative responses. The overlap between components is marginal (around 1/5). The results indicate that the Gaussian mixture prior network can successfully capture the multimodal distribution of the responses. To validate the previous results, we further conduct a human evaluation with Amazon Mechanical Turk. We randomly selected 50 dialogues from the test set of DailyDialog. For each dialogue context, we generated 10 responses from each of the four models. Responses for each context were inspected by 5 participants who were asked to choose the model which performs the best in regarding to coherence, diversity and informative while being blind to the underlying algorithms. The average percentages that each model was selected as the best to a specific criterion are shown in Table 5. The proposed approach clearly outperforms the current state of the art, CVAE-CO and VHCR, by a large margin in terms of all three metrics. This improvement is especially clear when the Gaussian mixture prior was used. 6 CONCLUSION In this paper, we introduced a new approach, named DialogWAE, for dialogue modeling. Different from existing VAE models which impose a simple prior distribution over the latent variables, DialogWAE samples the prior and posterior samples of latent variables by transforming contextdependent Gaussian noise using neural networks, and minimizes the Wasserstein distance between the prior and posterior distributions. Furthermore, we enhance the model with a Gaussian mixture prior network to enrich the latent space. Experiments on two widely used datasets show that our model outperforms state-of-the-art VAE models and generates more coherent, informative and diverse responses. ACKNOWLEDGMENTS This work was supported by the Creative Industrial Technology Development Program (10053249) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea).
1. What is the focus and contribution of the paper on dialogue response generation? 2. What are the strengths of the proposed approach, particularly in terms of its ability to encode and decode responses in a dialogue? 3. Do you have any concerns regarding the usage of the Wasserstein distance in the proposed model? 4. How does the reviewer assess the originality of the paper's content, particularly in relation to its combination of existing frameworks and tricks? 5. What are the weaknesses of the paper, especially in terms of identifying which aspects contribute to its superior performance?
Review
Review This paper presents a dialogue response generation model based on the framework of adversarial autoencoder. Specifically, the proposed model uses an autoencoder to encode and decode a response in a dialogue, conditioning on the context of the dialogue. The RNN encoded context is used as the prior of the latent variable in the autoencoder, and the whole dialogue (context + response) is used to infer the posterior of the latent variable. The inference is done by the adversarial training to match the prior and the posterior of the latent variable. Besides constructing the prior with a single Gaussian, the variant of the proposed model is also proposed where the prior is constructed with a Gaussian mixture model. My comments are as follows: 1. The paper is well-written and easy to follow. 2. The experiments seem quite strong and the compared models are properly selected. I'm not an expert in the specific area of the dialogue generation. But to me, the results seem convincing to me. 3. The usage of the Wasserstein distance in the proposed model does not make sense to me. Both the adversarial training in AAE and minimising the Wasserstein distance are able to match the prior and posterior of the latent variable. If the former is used in the proposed model, then how is the Wasserstein distance used at the same time? I also checked Algorithm 1 and did not find how the Wasserstein distance comes in. This is the first question that needs the authors to clarify. 4. To me, the significance of this paper mainly goes to combining several existing frameworks and tricks into the specific area of dialogue generation. Although the empirical results show the proposed model outperforms several existing models, my concern is still on the originality of the paper. Specifically, one of the main contributions goes to using the Gaussian mixture to construct the prior, but this is not a whole new idea in VAE or GAN, nor using the Gumbel trick. 5. It is good to see that the authors showed some comparisons between DialogWAE and DialogWAE-GMP, letting us see GMP does help the performance. But a minor concern is that it seems hard to identify which part makes DialogWAE get superior performance than others. Are all the models running with the same experiment settings including the implementation of the RNNs?
ICLR
Title A SPIKING SEQUENTIAL MODEL: RECURRENT LEAKY INTEGRATE-AND-FIRE Abstract Stemming from neuroscience, spiking neural networks (SNNs) are brain-inspired neural networks that offer a versatile solution for fault-tolerant and energy-efficient information processing, owing to the "event-driven" characteristic that mirrors the behavior of biological neurons. However, they remain inferior to artificial neural networks (ANNs) on complicated real-world tasks and have only achieved good results in rather simple applications. While ANNs are often questioned for their expensive processing costs and lack of essential biological plausibility, the temporal characteristic of RNN-based architectures makes them suitable hosts for SNNs, imitating the transition of membrane potential through time. We therefore put forward a brain-inspired Recurrent Leaky Integrate-and-Fire (RLIF) model to overcome a series of challenges, such as discrete binary outputs and dynamical traits. The experimental results show that our recurrent architecture has strong anti-interference ability and strictly follows the SNN guideline that its spike outputs are discrete. Furthermore, the architecture achieves good results on neuromorphic datasets and can be extended to tasks such as text summarization and video understanding.

1 INTRODUCTION

Deep learning and its artificial neural network (ANN) derivatives have been dominating computer science, holding state-of-the-art performance across a wide spread of machine learning application scenarios such as computer vision (Simonyan & Zisserman, 2014), natural language processing (Collobert & Weston, 2008), speech/audio recognition (Hinton et al., 2012) and video understanding (Ye et al., 2015) since the first arising of AlexNet (Krizhevsky et al., 2012); some of them have even beaten humans' cognitive level on certain tasks. However, ANNs fail to take up the advantages of neuronal dynamics, which manifests as high power consumption, relatively slow responses, etc. Spiking neural networks (SNNs) (Maass, 1997), inspired by the propagation of cortex neurons (Perrett et al., 1982; Tuckwell, 1988), have received continuous attention as a new, power-efficient and hardware-friendly technology. In contrast to the mere exploitation of spatial information and the complicated floating-point computation of ANNs, SNNs utilize spatial-temporal dynamics to mimic the bio-behavior of neurons, together with binary-valued computation whose sequential electrical impulses (i.e., spikes) belong to the binary set {0, 1}. Benefiting from the capability of processing binary spiking signals and the consequent efficiency, SNNs are a feasible alternative for the further development of machine learning and neuromorphic applications, and they have long been deployed on neuromorphic hardware including SpiNNaker (Furber et al., 2014), TrueNorth (Akopyan et al., 2015) and Loihi (Davies et al., 2018). ANNs, by contrast, enjoy a well-advanced and proficient training methodology built on the conception of backpropagation (BP) (LeCun et al., 1998) and its derivatives, which gives rise to the convergence of ANNs, together with diverse frameworks (e.g., TensorFlow, PyTorch) that make it succinct and accessible to train ever deeper networks.
For SNNs, however, there are few theoretically supported or potent procedures for tackling the issue of training, which prevents SNNs from going deeper; consequently, SNNs can hardly fulfill real-world complex missions such as video-based recognition/detection or natural language processing. Moreover, there are no practical auxiliary frameworks capable of promoting mature SNN structures, which leads to few applications and slow forward development of SNNs. Various efforts continue to make progress in training, deepening and applying SNNs, but many obstacles block their development at the same time.

As for training, there are many ways to strengthen the accuracy of SNNs besides neuromorphic methodologies such as spike-timing-dependent plasticity (STDP) (Serrano-Gotarredona et al., 2013) and winner-take-all (WTA) (Makhzani & Frey, 2015). In the first alternative scheme, an ANN is trained first and then transformed into an SNN version with the same network structure, whose neurons mimic the behavior of the ANN neurons (Diehl et al., 2015). The other is direct supervised learning, i.e., gradient descent, a superior and prevalent optimization method for this learning procedure. To solve the non-differentiability of spikes, Lee et al. (2016) proposed an alternative that treats membrane potentials as differentiable signals and directly uses the BP algorithm to train deep SNNs. To act in a more bio-plausible way, Ponulak & Kasiński (2010) introduced a remote supervised STDP-like rule capable of learning sequential output spikes. Besides, Urbanczik & Senn (2009) proposed a novel learning rule that embeds information into the spatio-temporal structure of the spike signals during learning. Nevertheless, most of the learning methods presented above engage only a single aspect of either spatial or temporal information.

Applications started to spring up with the arrival of event-based cameras composed of Dynamic Vision Sensors (DVS) (Shi et al., 2018). The mechanism of DVS can be outlined as a simulation of the visual pathways and functionalities of biological visual systems, whose neurons asynchronously communicate and encode visual information from the environment as spatio-temporally sparse light-intensity changes in the form of spikes. On the strength of event-based cameras, diverse event-based datasets have been acquired, such as Poker-DVS, MNIST-DVS (Serrano-Gotarredona & Linares-Barranco, 2015) and CIFAR10-DVS (Wu et al., 2019). Embracing event-based cameras and their derived datasets, a variety of works demonstrate different methodologies intended to make the corresponding components applicable. Peng et al. (2016) proposed an event-based classification method based on static learning, named Bag of Events (BOE), which denotes the events corresponding to the activated pixels of the DVS as a joint probability distribution. The method was tested on multiple datasets such as N-MNIST, MNIST-DVS and Poker-DVS, and it shows that BOE can achieve competitive results in real time for feature extraction and implementation time as well as classification accuracy.
Neil & Liu (2016) proposed a deep CNN to pre-process spiking data from DVS, which is usable in various deep network architectures and achieves an accuracy of 97.40% on the N-MNIST dataset, in spite of its complicated pre-processing approach. In terms of SNNs, Indiveri et al. (2015) proposed an SNN architecture, named Feedforward SNN, which is based on spike-based and temporal learning and achieves 87.41% accuracy on the MNIST-DVS dataset. Stromatias et al. (2015) proposed a composite system, including a convolutional SNN, a non-spiking fully connected classifier, and a spiking output layer, with an accuracy of 97.95%.

Beyond improving the performance and enhancing the convergence rate of SNNs, a question remains: can a single method absorb the advantages of both ANNs and SNNs? To this end, we propose RLIF, with both low computational complexity and biological plausibility, and explore its usage in real-world tasks. In summary, the major contributions of this paper are as follows:

• We propose RLIF, which absorbs biological traits from SNNs, follows the unrolled structure of RNNs, and enables a seamless way to insert it into any sequential model in common deep learning frameworks.

• High throughput can be achieved through the transmission of binary information between an RLIF interlayer and other sequential layers, which meets the basic principle that emitted neuron spike trains are binary-valued. Furthermore, RLIF can be easily extended onto neuromorphic chips thanks to its hardware-friendly peculiarity.

• Experiments conducted on general DVS-based datasets (MNIST-DVS, CIFAR10-DVS) and Chinese text summarization (LCSTS-2.0) show that our RLIF is capable of capturing key information through time and has fewer parameters compared to its counterparts.

2 PREMISE OF UNDERSTANDING RLIF

As mentioned before, the core idea of our architecture is how to absorb the biological traits of SNNs into RNNs. To this end, we first introduce learning algorithms for SNNs and then briefly analyze the basic LIF neuron model, aiming to highlight the parts most relevant to our RLIF.

2.1 LEARNING ALGORITHMS FOR SNN

To the best of our knowledge, learning algorithms for SNNs can be divided into two categories: i) unsupervised learning algorithms represented by spike-timing-dependent plasticity (STDP), and ii) direct supervised learning algorithms represented by gradient-based backpropagation. Classical STDP and its reward-modulated variants (Legenstein et al., 2008; Frémaux & Gerstner, 2016), the typical SNN learning methods which use only local information to update the weights of the model, succumb to difficulties in the convergence of models with many layers on complex datasets (Masquelier & Thorpe, 2007; Diehl & Cook, 2015; Tavanaei & Maida, 2016). Illuminated by the huge success of backpropagation in ANNs, researchers started to explore how backpropagation can be used to train SNNs under the end-to-end paradigm. Lee et al. (2016) and Jin et al. (2018) introduced spatial backpropagation methods for training SNNs, mainly based on conventional backpropagation. To imitate the temporal characteristics of SNNs, Wu et al. (2018) pioneered the use of backpropagation in both spatial and temporal domains to train SNNs directly, achieving state-of-the-art accuracy on the MNIST and N-MNIST datasets.
Huh & Sejnowski (2018) introduce a differentiable formulation of spiking dynamics and derive the exact gradient calculation to achieve this, and Neftci et al. (2019) use surrogate gradient methods to conquer the difficulties associated with the discontinuous nonlinearity. As a further step to increase the speed of training, Wu et al. (2019) convert the leaky integrate-and-fire (LIF) model into an explicitly iterative version so as to train deep SNNs with tens-of-times speedup under backpropagation through time (BPTT).

2.2 LIF NEURON MODEL

Leaky Integrate-and-Fire (LIF) is the most common and simplest model that can effectively model neuron operations and some basic dynamical traits at low computational cost. In general, we describe an LIF neuron (layer l and index i) in differential form as

$$\tau_{mem} \frac{dU^l_i}{dt} = -(U^l_i - U_{rest}) + R I^l_i \tag{1}$$

where U_i refers to the membrane potential, U_{rest} is the resting potential, τ_{mem} is the membrane time constant, R is the input resistance, and I_i is the input current (Gerstner et al., 2014). When the membrane voltage of a neuron reaches its firing threshold ϑ, a spike is released to communicate the neuron's output to other neurons. After each spike, U_i is reset to the original resting potential U_{rest}. Since the input current is typically generated by synaptic currents triggered by the arrival of presynaptic spikes S^l_j, Neftci et al. (2019) model the dynamics, approximating the time course as an exponentially decaying current following each presynaptic spike, by

$$\frac{dI^l_i}{dt} = \underbrace{-\frac{I^l_i}{\tau_{syn}}}_{\text{decay}} + \underbrace{\sum_j W^l_{ij} \cdot S^{l-1}_j}_{\text{feed forward}} + \underbrace{\sum_j V^l_{ij} \cdot S^l_j}_{\text{recurrent}} \tag{2}$$

Based on this, the simulation of a single LIF neuron can be decomposed into solving two linear differential equations. An RNN accepts both the current input x^t and the previous hidden state h^{t−1} and updates the current state via a non-linear activation function σ(·); its basic form is

$$y^t = \sigma(W_x \cdot x^t + W_h \cdot h^{t-1} + b) \tag{3}$$

Apparently, Equation 2 has a similar structure to a basic RNN, which provides the insight for paraphrasing LIF into the recurrent paradigm.
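To make Equations 1–2 concrete, the following NumPy sketch simulates a small recurrent LIF layer with a forward-Euler step; all constants (dt, tau_mem, tau_syn, theta) and names are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def simulate_lif(spikes_in, W, V, T, n, dt=1e-3, tau_mem=10e-3, tau_syn=5e-3,
                 R=1.0, u_rest=0.0, theta=1.0):
    """Forward-Euler simulation of the LIF dynamics in Equations 1-2.
    spikes_in: (T, n_in) binary presynaptic spikes; W: feed-forward weights
    (n, n_in); V: recurrent weights (n, n)."""
    u = np.full(n, u_rest)          # membrane potentials U_i
    i_syn = np.zeros(n)             # synaptic currents I_i
    s = np.zeros(n)                 # this layer's spikes S_i
    out = np.zeros((T, n))
    for t in range(T):
        # Equation 2: leaky synaptic current + feed-forward + recurrent drive
        i_syn += dt * (-i_syn / tau_syn) + W @ spikes_in[t] + V @ s
        # Equation 1: leaky integration of the membrane potential
        u += dt / tau_mem * (-(u - u_rest) + R * i_syn)
        s = (u >= theta).astype(float)     # fire when threshold is crossed
        u = np.where(s > 0, u_rest, u)     # reset fired neurons to U_rest
        out[t] = s
    return out

spikes = simulate_lif(np.random.binomial(1, 0.05, (100, 20)).astype(float),
                      W=0.3 * np.random.randn(10, 20),
                      V=0.1 * np.random.randn(10, 10), T=100, n=10)
```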
3 RLIF ARCHITECTURE

In this section, we present the architecture of the Recurrent Leaky Integrate-and-Fire (RLIF) model. Our principal idea is to endow RLIF with more biological properties while achieving high computational efficiency. As our architecture follows the ANN paradigm, we treat the synaptic current of the SNN as a continuous probability distribution while keeping the spikes discrete through a novel gradient-broadening strategy, which allows standard backpropagation through time in RLIF.

3.1 RLIF DEFINITION

Based on Equations 2 and 3, we bring LIF into the recurrent neural network paradigm; the fused form is described as follows:

$$V^t = U^t + u^{t-1} \tag{4}$$
$$F^t = V^t \ge V^t_{thres} \tag{5}$$
$$u^t_d = F^t \odot V^t_{reset} + \lnot F^t \odot V^t \tag{6}$$
$$u^t = M^t + \beta \tag{7}$$

where V^t is the membrane potential combining the current voltage at timestep t with the recurrent membrane potential from timestep t−1, and F^t denotes whether the current voltage of each neuron has reached its firing threshold V^t_{thres}: a neuron is labeled 1 if it reaches the threshold and 0 otherwise. Next, as shown in Equation 6, we reset the firing neurons to the resting potential and leave the membrane voltage of the other neurons unchanged, yielding the processed membrane potential u^t_d. We then calculate u^t with more biological plausibility, mimicking random noise and accumulating with leakage.

$$Y^t_j = F^t \tag{8}$$

Here, the information fired between layers is Y^t_j (binary output; as depicted in Figure 1, we denote this mode as spike). At the current timestep t, X^t denotes the input, U^t refers to the calculation of the current voltage, and M^t represents the update of the membrane potential. V^t_{reset} is the reset voltage, which produces the same effect (Lee et al., 2016) as V_{reset} in Equation 6 in simulating the inhibitory response of neurons, and V^t_{thres} is the firing threshold determining whether a neuron fires or not. Besides, we propose two patterns for the calculation of U^t and M^t: FC (short for Fully Connected) and Conv (short for Convolution):

$$U^t = \begin{cases} W_{volt} \cdot X^t + b_{volt}, & \text{FC} \\ W_{volt} \otimes X^t + b_{volt}, & \text{Conv} \end{cases} \tag{9}$$

$$M^t = \begin{cases} \alpha \odot u^t_d + b_{mem}, & \text{FC} \\ \alpha \otimes u^t_d + b_{mem}, & \text{Conv} \end{cases} \tag{10}$$

where · represents matrix multiplication, ⊙ the Hadamard product, and ⊗ the convolution product. α is the leakage with which membrane potentials accumulate across discrete timesteps, and β simulates the random noise in mammalian neurons. As depicted in Figure 1, the model is rather simple to extend to complex real-world tasks: we set V^t in Equation 4 to the hidden state h^t of an LSTM (Hochreiter & Schmidhuber, 1997), as shown in Equation 11, and then follow the same procedure as Equations 5 to 8. Note that we replace h^t with the membrane potential u^t while keeping the cell state c^t unchanged. The usage of this RLIF variant (LIF-LSTM) is introduced in the text summarization experiment.

$$\begin{aligned} f^t &= \sigma(W_f \cdot X^t + U_f \cdot u^{t-1} + b_f) \\ i^t &= \sigma(W_i \cdot X^t + U_i \cdot u^{t-1} + b_i) \\ o^t &= \sigma(W_o \cdot X^t + U_o \cdot u^{t-1} + b_o) \\ \tilde{c}^t &= \tanh(W_c \cdot X^t + U_c \cdot u^{t-1} + b_c) \\ c^t &= f^t \odot c^{t-1} + i^t \odot \tilde{c}^t \\ h^t &= o^t \odot \tanh(c^t) \end{aligned} \tag{11}$$

3.2 GRADIENT BROADENING: broadgrad()

A critical problem arises from the spike output mode: the non-differentiability of the binary spike-train outputs. Here, a rectangular function grad(...) (Wu et al., 2018) is chosen to broaden the range of spike derivatives in the backward phase:

$$grad(F^t) = \begin{cases} 1, & V^t \in [V^t_{thres} - a,\; V^t_{thres} + a] \\ 0, & \text{otherwise} \end{cases} \tag{12}$$

As shown in Figure 2, the hyperparameter a is essential in determining the support of grad(F^t), and it has considerable influence on the convergence of the network.
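Equations 4–10, together with the rectangular surrogate of Equation 12, translate naturally into a recurrent PyTorch cell. The sketch below is our reading of the FC pattern rather than the authors' code: we fold b_mem into the β term, and V_thres, V_reset and a are placeholder constants.

```python
import torch
import torch.nn as nn

class BroadGrad(torch.autograd.Function):
    """Spike nonlinearity with the rectangular surrogate of Equation 12:
    forward emits binary spikes, backward passes gradient only where the
    voltage lies within [V_thres - a, V_thres + a]."""
    @staticmethod
    def forward(ctx, v, v_thres, a):
        ctx.save_for_backward(v)
        ctx.v_thres, ctx.a = v_thres, a
        return (v >= v_thres).float()

    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        window = ((v - ctx.v_thres).abs() <= ctx.a).float()
        return grad_out * window, None, None

class RLIFCell(nn.Module):
    """Sketch of one fully connected RLIF step (Equations 4-10, FC pattern)."""
    def __init__(self, in_dim, n, v_thres=0.5, v_reset=0.0, a=0.5):
        super().__init__()
        self.volt = nn.Linear(in_dim, n)                 # Eq. 9: U^t = W_volt X^t + b_volt
        self.alpha = nn.Parameter(torch.full((n,), 0.9)) # leakage
        self.beta = nn.Parameter(torch.zeros(n))         # noise/bias (b_mem folded in)
        self.v_thres, self.v_reset, self.a = v_thres, v_reset, a

    def forward(self, x_t, u_prev):
        v = self.volt(x_t) + u_prev                      # Eq. 4: V^t = U^t + u^{t-1}
        f = BroadGrad.apply(v, self.v_thres, self.a)     # Eq. 5: firing mask F^t
        u_d = f * self.v_reset + (1 - f) * v             # Eq. 6: reset fired neurons
        u = self.alpha * u_d + self.beta                 # Eqs. 7/10: leak + noise term
        return f, u                                      # Eq. 8: binary output Y^t = F^t

cell = RLIFCell(in_dim=64, n=32)
u = torch.zeros(8, 32)
for t in range(10):
    y, u = cell(torch.randn(8, 64), u)                   # binary spikes y through time
```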
First, the pre-processing algorithm slides over the original event-streams, sorted by timestamp, using an event-window of a specific length; the sliding step of the event-window equals its length. As the event-window slides, a new event-stream is generated that represents the data with the same number of event recordings as the window. Finally, each new event-stream is expanded into a three-dimensional data frame, which we call an event-frame. An event-frame is thus obtained by converting a set of events, and it represents the recorded information at one timestep. After T such steps, a record with T timesteps is obtained, containing both the spatial and temporal information of the original event-stream data, with dimension (T, 128, 128, 2).

NETWORK STRUCTURE

As shown in Figure 3, the network receives the event-frame record of T timesteps from the pre-processing module and performs feature extraction. The addition of RLIF is the highlight of our network, serving as the key to highly efficient use of temporal information. A special layer (denoted SumLayer) ultimately transforms the discrete binary event-stream into a continuous representation for the overall prediction by integrating information over all timesteps. Notably, our model does not require complex pre-processing of the raw DVS event stream, yet achieves better performance. Table 1 compares our model with state-of-the-art methods on the MNIST-DVS and CIFAR10-DVS datasets; our model achieves relatively high accuracy on the test sets. We obtain 98.43% accuracy on MNIST-DVS, similar to the performance of an ordinary convolutional network. Compared to MNIST-DVS, CIFAR10-DVS is more complex and contains much more information and noise, but we ultimately achieve an accuracy of 56.93%, better than all compared state-of-the-art methods. To verify the feasibility of the system, we compare the results at scale-4, scale-8, and scale-16, applying the same pre-processing tactics on MNIST-DVS and training the same network model. The final test results are shown in Table 2.

4.2 TEXT SUMMARIZATION

Here, we propose a sequence-to-sequence (Seq2Seq) model with LIF-LSTM (RLIF's variant) on the LCSTS dataset.

DATASET

LCSTS is a large-scale Chinese short text summarization dataset consisting of (short text, summary) pairs collected by (Hu et al., 2015). The whole dataset, comprising more than 2,400,000 pairs, was split into three parts following the same process as (Li et al., 2017; Ma et al., 2018). Notably, we only keep pairs with scores no less than 3; we thus take PART I for training, the filtered PART II for validation, and the filtered PART III for testing. In our experiments, word segmentation is not performed; we take only the Chinese character sequence as input.

EVALUATION METRIC

We use the most common metric for evaluating text summarization: the ROUGE score (Lin, 2004). The core idea of ROUGE is to count the overlapping units between generated summaries and their reference summaries, including n-grams, word sequences, and word pairs. We report ROUGE-1 (unigram), ROUGE-2 (bigram), and ROUGE-L (LCS), as in previous work (Hu et al., 2015; Li et al., 2017; Ma et al., 2018).
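For reference, here is a minimal, illustrative sketch of ROUGE-N recall on tokenized sequences; published scores are normally computed with the official ROUGE toolkit, and ROUGE-L additionally requires the longest common subsequence.

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: clipped overlapping n-grams / n-grams in the reference."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0

# toy usage; for LCSTS the token sequences would be Chinese characters
print(rouge_n_recall("the cat sat".split(), "the cat was sad".split()))  # 0.5
```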
NETWORK STRUCTURE

As shown in Figure 4, our network is based on the sequence-to-sequence model: the encoder is a stack of Layer Normalization (LN) and Bi-LSTM, and the decoder is similar except that the Bi-LSTM is replaced by a unidirectional LIF-LSTM. We use the outputs of the final decoder layer and the final encoder layer to obtain the recurrent attention context through multi-head attention (Vaswani et al., 2017), and a teacher-forcing strategy to supervise the learning of the representation of the source content with the corresponding summary. Under the assumption that words appearing in the summary are likely to occur in the source text, a prior distribution is adopted to make the model prefer words from the text over others. As the results in Table 3 show, LIF-LSTM is a good substitute for LSTM while going a step further in biological plausibility, which demonstrates the feasibility of using LIF-LSTM in real-world tasks.

5 CONCLUSION

In this paper, we propose the Recurrent Leaky Integrate-and-Fire (RLIF) model, which enjoys the complementary advantages of ANNs and SNNs. RLIF is a more in-depth simulation of the mammalian neuron and can be easily plugged into prevalent ANN frameworks with the advantages of BPTT. A hybrid network combining traditional ANN modules and RLIF converges more easily than conventional SNN methods. The experiments show that RLIF and its variant have good application prospects due to their adaptability and stability, especially in the text summarization task. We believe that RLIF and its variant can be applied to many challenging real-world tasks such as neural machine translation and video understanding, which may lead to a shift in the public view of SNNs.

A SUMMARIZATION EXAMPLE OF OUR MODEL AND OTHER WORKS.
1. What is the main contribution of the paper in terms of its neural network architecture? 2. What are the strengths of the proposed model compared to other existing methods? 3. What are the weaknesses or limitations of the paper's claims regarding its advantages? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review This paper proposes a brain-inspired recurrent neural network architecture, named Recurrent Leaky Integrate-and-Fire (RLIF). Computationally, the model is designed to mimic how biological neurons behave, e.g. producing binary values. The hope is that this will allow such computational models to be easily implemented on neuromorphic chips and that the solution will be more energy-efficient. On neuromorphic MNIST and CIFAR, the proposed model achieves higher classification accuracy than the other listed methods. On ROUGE, a text summarization metric, the proposed model achieves competitive performance. I am leaning towards rejecting this paper. The main advantage of the proposed computational model was not supported by evidence in the paper. The presented evidence only suggests that the computational model has the capacity to learn to solve real-world tasks to a degree that is on par with other existing computational models. But what supposedly distinguishes the proposed one from the rest, i.e. being more hardware-friendly and energy-efficient, was not demonstrated.
ICLR
Title A SPIKING SEQUENTIAL MODEL: RECURRENT LEAKY INTEGRATE-AND-FIRE
1. What is the main contribution of the paper, and how does it relate to previous work in neural computation and deep learning? 2. What are the strengths and weaknesses of the proposed architecture, particularly in comparison to other models in the field? 3. How does the reviewer assess the novelty and impact of the paper's content, especially regarding its claim to combine the advantages of Deep Learning and Spiking Neural Nets? 4. Are there any concerns or suggestions regarding the experimental design, methodology, or results presented in the paper? 5. How might the authors improve their work, either in terms of addressing specific criticisms or exploring new avenues for research?
Review
Review #### A. Summarize what the paper claims to do/contribute. Be positive and generous. #### The paper translates the Leaky Integrate-and-Fire model of neural computation via spike trains into a discrete-time RNN core similar to LSTM. The architecture would be readily amenable to the modern deep learning toolkit if not for the non-differentiability of the hard decision to spike or not. The hard decision is made by thresholding. The paper adopts a simple approximation of backpropagating a "gradient" of 1.0 through the operation if the potential is within a neighbourhood [thresh - a, thresh + a] of the threshold, and otherwise 0.0, so the system can be trained by backpropagation. The architecture is tested on a few "neuromorphic" video classification datasets including MNIST-DVS and CIFAR-DVS. Experiments are also run on a text summarization task. #### B. Clearly state your decision (accept or reject) with one or two key reasons for this choice. #### The reviewer thinks the paper should be rejected in its current state. The proposed architecture is a straightforward change to a standard LSTM core. Thus it should be compared head-to-head to LSTM on standard datasets for these models (e.g. classic synthetic tasks, language modeling, speech recognition, machine translation, etc.) with everything else held constant (hidden size, learning rate, sequence length, etc.). It also doesn't really carry over any of the benefits of Spiking Neural Nets, even though it is inspired by Leaky Integrate and Fire, because it operates in discrete time like a normal RNN, just with an extra binary output produced by spiking. It's unclear that a spiking inductive bias is actually useful: even though event-driven computation could in theory allow much less computation, the proposed method does not have that property. So the paper doesn't really provide evidence to back up the claim that the proposed model combines the complementary advantages of Deep Learning and Spiking Neural Nets. #### C. Provide supporting arguments for the reasons for the decision. While the proposed method is inspired in spirit by the leaky integrate-and-fire model, it is operated/trained in discrete time, which does not allow it to achieve the benefits of continuous-time integrate-and-fire models, namely less computation and time-discretization invariance. The conversion of the spiking model to the deep learning framework is rather crude, as the differentiable approximation to the non-differentiable threshold operation is biased and not well motivated either empirically, intuitively, or theoretically (i.e. there are no comparisons to alternative choices). There are newer techniques for marrying continuous-time models and deep learning which seem more promising to investigate to this end (e.g. Neural ODEs). So in summary, the method doesn't have the computational benefits of a biologically plausible spiking algorithm and is not well tested against competing deep learning methods, making it hard to verify the motivation of pushing toward a performant yet biologically plausible algorithm. #### #### D. Provide additional feedback with the aim to improve the paper. Make it clear that these points are here to help, and not necessarily part of your decision assessment. #### There are many grammatical and word-choice mistakes which make the paper hard to read. Mainly, from a practical perspective, the paper would be much improved by showing what benefit the spiking inductive bias confers over a standard LSTM on standard tasks in the deep learning community.
The method/landscape should be developed and studied in further detail until claims can be made about combining the strengths of spiking and deep-learning models.
ICLR
Title A SPIKING SEQUENTIAL MODEL: RECURRENT LEAKY INTEGRATE-AND-FIRE
1. What are the strengths and weaknesses of the proposed approach in training spiking neural networks using backpropagation through time? 2. How do the authors extend the leaky integrate-and-fire neuron model, and what are the differences between the variations proposed? 3. Are there any concerns regarding the presentation and clarity of the equations and variables introduced in the paper? 4. What are the issues with the experimental results presented in the paper, particularly for the vision tasks? 5. Do you have any suggestions for improving the paper or its presentation?
Review
Review Recently, it has been shown that spiking neural networks (SNNs) can be trained efficiently, in a supervised manner, using backpropagation through time. Indeed, the most commonly used spiking neuron model, the leaky integrate-and-fire (LIF) neuron, obeys a differential equation which can be approximated using discrete time steps, leading to a recurrent relation for the potential. The firing threshold causes a non-differentiability issue, but it can be overcome using a surrogate gradient. In practice, this means that SNNs can be trained on GPUs using standard deep learning frameworks such as PyTorch or TensorFlow. Here the authors extend this approach by proposing two variations of the LIF model, called RLIF and LIF-LSTM. However, the presentation of these models is not clear at all. For example: * what is U^t in Equation 4? * what is M^t in Equation 7? * what is the difference (if any) between u^t and u_d^t? Equation 8 is even more obscure. Why bother defining a new variable Y if it is equal to F? What is index j, and why is it used only on the left-hand side of the equation? The description of the LIF-LSTM is even more obscure; nothing is defined. Figure 2 has an error: on the left, with the Heaviside activation function, the gradient is actually defined everywhere (with a value of 0) except on the red segment! In addition, the experiments are not convincing. I am not an expert in NLP, so I will focus on the vision experiments. Table 1 is incomplete: Wu et al. (2019) (which they cite elsewhere!) reached 60.5% on DVS-CIFAR10, which is much better than this paper (56.93%). For all these reasons, I recommend rejection.
ICLR
Title Pareto Rank-Preserving Supernetwork for HW-NAS Abstract In neural architecture search (NAS), training every sampled architecture is very time-consuming and should be avoided. Weight-sharing is a promising solution to speed up the evaluation process. However, a sampled subnetwork is not guaranteed to be estimated precisely unless a complete individual training process is done. Additionally, practical deep learning engineering processes require incorporating realistic hardware-performance metrics into the NAS evaluation process, also known as hardware-aware NAS (HW-NAS). HW-NAS results in a Pareto front, the set of all architectures that optimally trade off the conflicting objectives, i.e., task-specific performance and hardware efficiency. This paper proposes a supernetwork training methodology that preserves the Pareto ranking between its different subnetworks, resulting in more efficient and accurate neural networks for a variety of hardware platforms. The results show a 97% near-Pareto-front approximation in less than 2 GPU days of search, which provides a 2x speed-up compared to state-of-the-art methods. We validate our methodology on NAS-Bench-201, DARTS and ImageNet. Our optimal model achieves 77.2% accuracy (+1.7% compared to the baseline) with an inference time of 3.68ms on an Edge GPU for ImageNet. 1 INTRODUCTION A key element in solving real-world deep learning (DL) problems is the optimal selection of the sequence of operations and their hyperparameters, called the DL architecture. Neural architecture search (NAS) (Santra et al. (2021)) automates the design of DL architectures by searching for the best architecture within a set of possible architectures, called the search space. When considering hardware constraints, hardware-aware neural architecture search (HW-NAS) (Benmeziane et al. (2021); Sekanina (2021)) simultaneously optimizes task-specific performance, such as accuracy, and hardware efficiency, measured by latency, energy consumption, memory occupancy, and chip area. HW-NAS works (Cai et al. (2019); Lin et al. (2021); Wang et al. (2022)) have demonstrated their usefulness and discovered state-of-the-art architectures for image classification (Lin et al. (2021)), object detection (Chen et al. (2019)), and keyword spotting (Busia et al. (2022)). HW-NAS is cast as a multi-objective optimization problem. Techniques for HW-NAS span evolutionary search, Bayesian optimization, reinforcement learning and gradient-based methods. These require evaluating each sampled architecture on the targeted task and hardware platform. However, the evaluation is extremely time-consuming, especially for task-specific performance, which requires training the architecture. Many estimation strategies (White et al. (2021)) are used to alleviate this problem, such as neural-predictor methods (Benmeziane et al. (2022a); Ning et al. (2020)), zero-cost learning (Lopes et al. (2021); Abdelfattah et al. (2021)), and weight sharing (Chu et al. (2021); Chen et al. (2021)). These strategies are evaluated on how well they respect the ground-truth ranking between the architectures in the search space. Weight sharing is an estimation strategy that formulates the search space as a supernetwork. A supernetwork is an over-parameterized architecture in which each path can be sampled. At the end of this sampling, a sub-network of the supernetwork is obtained. In each layer, all possible operations are trained.
With this definition, we can classify weight-sharing NAS into two categories: (1) a two-stage NAS, in which we first train the supernetwork on the targeted task; then, using the pre-trained supernetwork, each sampled sub-network's performance can be estimated by a search strategy, such as an evolutionary algorithm. (2) A one-stage NAS, in which we simultaneously search and train the supernetwork. Additional parameters are assigned to each possible operation per layer, and these parameters are trained to select which operation is appropriate for each layer. Both weight-sharing categories assume that the rank between different sub-networks is preserved: two architectures with the same rank are assumed to have the same accuracy. State-of-the-art works (Zhang et al. (2020); Peng et al. (2021); Zhao et al. (2021)) have highlighted the training inefficiency of this approach by computing the ranking correlation between the architectures' actual rankings and the estimated rankings. Some solutions train the supernetwork with strict fairness constraints to preserve the accuracy ranking, such as FairNAS (Chu et al. (2021)). Others train a graph convolutional network in parallel to fit the performance of sampled sub-networks (Chen et al. (2021)). However, current solutions have two main drawbacks: 1. In the multi-objective context of HW-NAS, different objectives such as accuracy and latency have to be estimated. The result is a Pareto front, a set of architectures that best respects the trade-off between the conflicting objectives. The ranking under a single objective is therefore no longer a sufficient metric for the estimator; in this setting, the dominance concept must be taken into account in the ranking. Errors in either estimate hinder the final Pareto front approximation and affect the search exploration when accuracy and latency are the objectives. 2. Many works (Chen et al. (2021); Zhao et al. (2021); Guo et al. (2020)) attempt to fix the supernetwork sampling after its training. We believe this strategy is inefficient because of how the supernetwork is pre-trained: its accuracy-based ranking correlation is poor. In Dong & Yang (2020), a reduced Kendall's tau-b rank correlation coefficient of 0.47 was obtained on NAS-Bench-201 when using this approach. The accuracy estimation is thus non-conclusive and will mislead any NAS search strategy. To overcome the aforementioned issues, we propose a new training methodology for supernetworks that preserves the Pareto ranking of sub-networks in HW-NAS and avoids additional ranking-correction steps. The contributions of this paper are summarized as follows: • We define the Pareto ranking as a novel metric to compare HW-NAS evaluators in the multi-objective context. Our study shows that optimizing this metric while training the supernetwork increases the Kendall rank correlation coefficient from 0.47 to 0.97 for a vanilla weight-sharing NAS. • We introduce a novel one-stage weight-sharing supernetwork training methodology. The training optimizes the task-specific loss function (e.g., cross-entropy loss) and a Pareto ranking listwise loss function to accurately select the adequate operation per layer. • During training, we prune the operations that are the least likely to appear in the architectures of the optimal Pareto front. The pruning is done by overlapping the worst Pareto-ranked sub-networks and removing the operations that are used only in these sub-networks.
We demonstrate that, using our methodology on three different search spaces, namely NAS-Bench-201 (Dong & Yang (2020)), DARTS (Liu et al. (2019)) and the ProxylessNAS search space (Cai et al. (2019)), we achieve a higher Pareto front approximation than current state-of-the-art methods. For example, we obtain a 97% Pareto front approximation where One-Shot-NAS-GCN (Chen et al. (2021)) achieves only 87% on NAS-Bench-201. 2 BACKGROUND & RELATED WORK This section summarizes the state of the art in accelerating multi-objective optimization for HW-NAS. 2.1 ACCELERATING HARDWARE-AWARE NAS Given a target hardware platform and a DL task, hardware-aware neural architecture search (HW-NAS) (Benmeziane et al. (2021)) automates the design of efficient DL architectures. HW-NAS is a multi-objective optimization problem where different and contradictory objectives, such as accuracy, latency, energy consumption, memory occupancy, and chip area, have to be optimized. HW-NAS has three main components: (1) the search space, (2) the evaluation method, and (3) the search strategy. The main time-consuming component in HW-NAS is the evaluation method, and several state-of-the-art works (White et al. (2021)) have been proposed to alleviate this problem. Predictor-based methods (Ning et al. (2020); Lomurno et al. (2021)) are the most popular strategies, where machine learning models predict the accuracy or latency from architecture features (e.g., number of convolutions, widening factor, etc.) or from representations built with Graph Neural Networks (GNN) (Ning et al. (2020)) and Recurrent Neural Networks (RNN) (Lomurno et al. (2021)). However, these methods are not flexible across search spaces, as they require building a dataset of trained sampled architectures and then training the predictor. Weight-sharing approaches (Chu et al. (2021); Chen et al. (2021); Zhao et al. (2021); Guo et al. (2020)), on the other hand, define the search space as a supernetwork. In each layer, the supernetwork combines the results of the possible operations. A sequence of operations from the input to the output is called a sub-network and constitutes a possible architecture. Training the supernetwork consists of training several paths at once: the input is forwarded through a series of parallel operations whose outputs are summed after each layer. There are two main issues when training a supernetwork: 1. The order of the sampled sub-networks matters: assume we have two sub-networks A and B that both start with the same operation op1 in layer 1. During the first training iteration, A is sampled and op1's weights are adjusted. The second iteration samples B and adjusts op1's weights again. If we then evaluate A, we use the newly adjusted weights of op1, which degrades the estimation. 2. Unfair bias: sub-networks with initially better task-specific performance are more likely to be sampled next and to maintain higher coefficients in a one-stage supernetwork. FairNAS (Chu et al. (2021)) defines strict fairness constraints that ensure each operation's weights are updated the same number of times at each stage. 2.2 MULTI-OBJECTIVE OPTIMIZATION IN HW-NAS Optimizing conflicting objectives simultaneously requires the definition of a decision metric. In multi-objective optimization (Batista et al. (2011)), this metric is the dominance criterion. In a two-objective minimization problem, dominance is defined as follows: if s1 and s2 denote two solutions, s1 dominates s2 (s1 ≻ s2) if and only if ∀i, fi(s1) ≤ fi(s2) and ∃j, fj(s1) < fj(s2).
fi and fj are conflicting objective functions such as latency and accuracy. Under the dominance relation, there is generally no single solution that dominates all the others. We instead build the Pareto front: the set of all non-dominated solutions. The Pareto front approximation is evaluated using the hypervolume metric. The hypervolume measures the area dominated by a Pareto front approximation P with respect to a reference point. The reference point is defined as an architecture with high latency and low accuracy (furthest from the optimal points). Maximizing the hypervolume leads to a high-quality and diverse Pareto front approximation set. In HW-NAS, computing the hardware efficiency is expensive due to the time-consuming deployment and measurement on the hardware, so using multiple performance estimators is popular (Hu et al. (2019); Elsken et al. (2019); Lu et al. (2020); Huang & Chu (2021)). Current multi-objective HW-NAS approaches focus on optimizing the search algorithm at the expense of poor performance estimators. However, using one performance estimator per objective is not optimal (Benmeziane et al. (2022b)). In this paper, we present an original weight-sharing technique that directly predicts a multi-objective metric, called the Pareto ranking. 3 METHODS The core motivation for a novel training methodology is to achieve an efficient sub-network evaluation for HW-NAS. The proposed training methodology must preserve the Pareto ranking between different sub-networks while reducing the overall search time. 3.1 PARETO RANKING In this section, we define the Pareto ranking metric used to train and evaluate the supernetwork. Pareto Ranking Solving the multi-objective optimization problem on a set of sub-networks results in a Pareto front. The set of architectures in this front is denoted F1, i.e., all of these architectures have rank 1. We obtain the lower ranks by successively solving the problem on the set of sub-networks remaining after the previous solutions are pruned. The lowest rank is assigned to the sub-networks that do not dominate any other sub-network. We formally define the Pareto ranking in equation 1, where S is the entire supernetwork, Fk′ is the set of sub-networks ranked k′, and ≻ is the dominance relation. Under this ranking, multiple architectures may share the same rank; this happens when none of them dominates the others. $a \in F_k \iff \nexists\, \hat{a} \in S \setminus \bigcup_{k' < k} F_{k'} \ \text{such that}\ \hat{a} \succ a$ (1) Pareto Ranking Correlation. We evaluate the quality of an estimator using ranking correlations such as Kendall's tau-b correlation or Spearman correlation. Kendall's tau-b determines whether there is a monotonic relationship between two variables and is suitable when the variables contain many tied ranks (Benmeziane et al. (2021)), which is our case. In the rest of the paper, we compute Kendall's tau-b correlation between the ground-truth ranks (i.e., the Pareto ranks obtained from independently training the sub-networks) and the Pareto ranks obtained by evaluating each architecture with the supernetwork's shared weights. 3.2 PARETO RANK-PRESERVING TRAINING Our training methodology aims at preserving the Pareto ranking under the weight-sharing evaluation. Figure 3 shows a representation of the supernetwork definition and the different parameters we aim to learn. A sub-network is a path from the input to the output, and all extracted sub-networks have the same depth.
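Before detailing the training, here is a minimal sketch of the dominance relation of Section 2.2 and the Pareto ranking of equation 1, which repeatedly extracts the non-dominated front. It assumes a two-objective minimization setting (e.g., latency and error rate) with illustrative values, and is not the paper's implementation.

```python
from typing import List, Tuple

def dominates(s1: Tuple[float, ...], s2: Tuple[float, ...]) -> bool:
    """s1 ≻ s2 for minimization: no worse on all objectives, strictly better on one."""
    return all(a <= b for a, b in zip(s1, s2)) and any(a < b for a, b in zip(s1, s2))

def pareto_ranks(solutions: List[Tuple[float, ...]]) -> List[int]:
    """Rank 1 = non-dominated front; rank k = front after removing ranks < k."""
    ranks = [0] * len(solutions)
    remaining = set(range(len(solutions)))
    rank = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(solutions[j], solutions[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

# Example: objectives are (latency_ms, error_rate), both minimized.
subnets = [(3.1, 0.08), (4.0, 0.05), (5.2, 0.05), (2.9, 0.12)]
print(pareto_ranks(subnets))  # (5.2, 0.05) is dominated by (4.0, 0.05) -> rank 2
```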
We train the supernetwork with two goals: 1) improve the task-specific loss function by adjusting W, the task-specific weights of the original model associated with the neural network operations, such as the kernels of a convolution, and 2) improve the Pareto ranking loss between its different paths by adjusting α, the weights associated with the operation selection. α measures which operation is critical and which one is selected. Algorithm 1 and Figure 1 summarize the training procedure. • Step 1: Train with strict fairness. We train our supernetwork using the strict fairness constraint of FairNAS (Chu et al. (2021)). This step adjusts the weights W of all the sub-networks and gives a good starting point for the Pareto ranking training. Additionally, at this point the accuracy estimated via the task-specific loss is reliable; we use these estimates to compute the true Pareto ranks when the benchmark provides no accuracy. • Step 2: Pareto ranking training. In each iteration, we apply: - Training to solve the task: a mini-batch is sampled from the training set, and a sub-network is chosen by taking, in each layer, the operation with the highest α. The operations' weights are updated using the task-specific loss, e.g., cross-entropy loss for image classification. - Pareto rank training: in this phase, we purposefully bias the training towards better Pareto-ranked architectures using the α parameters. The α parameters are trained using the loss function in equation 2. During the forward pass, we Pareto-rank the sampled sub-networks. We compute the number of times an operation opi appears in layer lj among the N top-ranked sub-networks, denoted g(opi, lj); N is a hyperparameter defined before training. We denote by ĝ(opi, lj) the ground truth. Equation 2 computes a hinge loss over all layers of the sampled sub-networks, comparing how often the operation with the highest α appears in the predicted Pareto front versus the ground-truth one: $\mathcal{L} = \sum_{j=1}^{L} \; \sum_{\substack{i \,:\, g(op_i, l_j) > \hat{g}(op_i, l_j),\; i \neq \arg\max(\alpha)}} \max\big[0,\; m - g(\arg\max(\alpha), l_j) - \hat{g}(op_i, l_j)\big]$ (2) We adjust each operation's α parameters and compute each sampled sub-network's latency using a lookup table. We define the predicted Pareto score as $P_s = \sum_{op \in a} \alpha_{op}$, i.e., the sum of the selected operations' alpha values. Next, we compute the listwise ranking loss, defined as the cross-entropy between the ranking scores and the Pareto ranks (ground truth). • Step 3: Pruning by Pareto-ranking sub-networks. We drop the sub-networks furthest from the optimal Pareto front to accelerate the training. First, we select the sub-networks belonging to the first two Pareto ranks. Then, based on the hypervolume improvement (HVI) (Emmerich et al. (2011)), we select n sub-networks. For each layer, the operations never used by any sub-network in this selection are removed. Equation 3 shows how the hypervolume improvement is computed in this context, where oij denotes operation i in layer j, HV denotes the hypervolume function and {Soij} denotes the set of sampled sub-networks using operation i in layer j: $HVI(o_{ij}, P) = HV(P \cup \{S_{o_{ij}}\}) - HV(P \setminus \{S_{o_{ij}}\})$ (3) Finally, going over all the layers and selecting the operations with the highest α suffices to find the most efficient DNN within the search space.
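To illustrate the pruning criterion of equation 3, the sketch below computes a two-objective hypervolume by sweeping the sorted front, and then the hypervolume improvement contributed by the sub-networks that use a given operation. It assumes both objectives are minimized (e.g., latency and error rate) with a fixed reference point, and is a simplified illustration rather than the paper's implementation, which follows Emmerich et al. (2011).

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective minimization front w.r.t. a reference
    point (high latency, high error), computed by sweeping the sorted points."""
    # Keep only points that dominate the reference, sorted by the first objective.
    pts = sorted(p for p in set(front) if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                       # skip points dominated in the sweep
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def hvi(op_subnets, front, ref):
    """Hypervolume improvement of the sub-networks using one operation:
    HV(front ∪ op_subnets) - HV(front \\ op_subnets), as in equation 3."""
    union = list(set(front) | set(op_subnets))
    without = [p for p in front if p not in set(op_subnets)]
    return hypervolume_2d(union, ref) - hypervolume_2d(without, ref)

# Illustrative values: (latency_ms, error_rate) with reference (10.0, 1.0).
front = [(1.0, 0.5), (2.0, 0.3), (4.0, 0.1)]
print(hvi([(2.0, 0.3)], front, ref=(10.0, 1.0)))
```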
Figure 4 shows the training results. We compare our methodology to the strict fairness training of FairNAS (Chu et al. (2021)). During training, the Pareto ranking correlation increases with the quality of the estimations. When using our training methodology without the alpha parameters, the ranking correlation saturates at 0.7; FairNAS shows the same behaviour with reduced variance among the different training runs. However, when the alpha parameters are included, the selection is more efficient and the architectures' rankings are well represented, reaching 0.94.

Algorithm 1 Supernetwork Training Algorithm
Input: search space S; number of epochs for fairness training Nf; number of epochs for Pareto training Np; supernetwork parameters (W, α); training dataloader D; task-specific loss Loss; Pareto ranking loss LossPR; number of sampled sub-networks n
procedure TRAIN
    Initialize W and α for each operation in the supernetwork
    Strict fairness training for Nf epochs
    for i = 1 to Np do
        for data, labels in D do
            Build the model with argmax(α) following Step 2
            Reset the gradients of all W parameters to zero
            Compute gradients based on Loss, data and labels, and update W
        end for
        Sample n sub-networks (models)
        Compute the Pareto ranks of the models and LossPR between the scores and the Pareto ranks
        Update α by its gradients
    end for
end procedure

4 EXPERIMENTS In this section, we evaluate our training methodology on three search spaces: NAS-Bench-201 (Dong & Yang (2020)), DARTS (Liu et al. (2019)) and the ProxylessNAS search space (Cai et al. (2019)). 4.1 SETUP Search Spaces: Several search spaces have been used to evaluate our method's performance. NAS-Bench-201 (Dong & Yang (2020)) is a tabular benchmark that contains 15k convolutional neural networks; each architecture is trained on CIFAR-10, CIFAR-100 and ImageNet-16-120 (Chrabaszcz et al. (2017)), and we use the latency values from HW-NAS-Bench (Li et al. (2021)). DARTS (Liu et al. (2019)) is a supernetwork benchmark that contains $10^{18}$ architectures; each architecture is trained on CIFAR-10 and is transferable to ImageNet. We also validate our methodology on ImageNet using the ProxylessNAS search space (Cai et al. (2019)), whose size reaches $6^{19}$. All training hyperparameters are listed in Table 5 in Appendix F. 4.2 SEARCH RESULTS In these experiments, we consider two objectives: accuracy and latency (inference time). The latency is either given by HW-NAS-Bench (Li et al. (2021)) or computed using a lookup table, as explained in Section 3. Figure 5 shows the Pareto front approximations obtained with different methods on NAS-Bench-201 for CIFAR-10 and on the ProxylessNAS search space for ImageNet. We obtain a 10% hypervolume increase on NAS-Bench-201 and a 43% hypervolume increase on ImageNet compared to the best baselines, One-Shot-NAS-GCN and FairNAS, respectively. 4.2.1 SEARCH ON NAS-BENCH-201 Table 1 shows the results of our methodology on NAS-Bench-201 compared to state-of-the-art methods. PRP-NAS-BL, PRP-NAS-BA and PRP-NAS-O are three architectures sampled from our final Pareto front: BL stands for "Best Latency", BA stands for "Best Accuracy", and O stands for "Optimal". Notably, our architectures obtain highly competitive results. The optimal architecture, PRP-NAS-O, outperforms current state-of-the-art methods in both accuracy and latency. Including hardware awareness during the search allows us to obtain results tailored to the targeted hardware platform. Besides, multiple training runs show the stability of our method compared to other baselines. The acceleration in search cost is mainly due to applying the pruning during training; this cost can vary with the GPU used (we used a V100 GPU to train the supernetwork). Results on other targeted platforms can be found in Appendix B.
4.2.2 SEARCH ON IMAGENET Similar conclusions can be drawn when searching on ImageNet. Table 2 summarizes the results. Our optimal model surpasses FairNAS-A (+1.9%) and One-Shot-NAS-GCN (+1.7%) while running faster. Training on ImageNet is time-consuming due to the higher image resolution, which explains the increase in search cost; we still surpass most methods in terms of search time. We compare two ProxylessNAS architectures; ProxylessNAS-R is specific to mobile inference. When using data augmentation and architectural tricks, namely squeeze-and-excitation and AutoAugment, in the optimal architecture, we achieve 78.6% accuracy on ImageNet. However, this can degrade the latency considerably: on the FPGA ZCU102, the latency increases from 4.63ms to 7.9ms. 4.3 RANKING QUALITY Ranking preservation measures the quality of the evaluation component in NAS. In HW-NAS, we argue that this measure should consider the Pareto ranking instead of the independent ranks of each objective. We compare the different estimators used in HW-NAS using Kendall's tau correlation between the predicted Pareto ranks and the Pareto ranks obtained from independently training the architectures; the latter are extracted from NAS-Bench-201. Figure 6 shows the correlation results. In general, it is harder to train a supernetwork to respect the Pareto ranks because the sub-networks affect each other, i.e., the outputs of each layer are summed together. The increase in Kendall's tau correlation over previous weight-sharing methodologies is due to the improvement in the accuracy estimation provided by the supernetwork. Predictor-based evaluators use learning-to-rank theory and train their predictors only to predict the ranking. Methods such as GATES (Ning et al. (2020)) or BRP-NAS (Dudziak et al. (2020)) train many independent predictors, one per objective. HW-PR-NAS (Benmeziane et al. (2022a)) trains a single predictor to fit the Pareto ranks; however, its methodology is not flexible enough for supernetwork training. 4.3.1 ANALYSIS OF THE α PARAMETER Figure 7 illustrates the evolution of the alpha parameters for each operation in layers 1 and 2 during training. It clearly shows how alpha favors one operation over the others as training proceeds. At the end of training, we take the operations with the highest alpha, which represent the operations used by the architectures in the final Pareto front. If a layer has a clear candidate, such as layer 1, where conv3x3 exceeds 60%, that operation is chosen. If a layer contains multiple operations with similar alpha values, we construct all the paths of that layer. 4.4 BATTERY USAGE PRESERVATION The amount of energy consumed by each model differs, and is mainly attributed to the number of multiply-adds computed. We take supernetwork usage a step further by scheduling the execution of different sub-networks according to the system's battery life. In this experiment, training is done with two objectives: accuracy and energy consumption. Once training is done, only the Pareto front solutions are kept in the supernetwork, thanks to the pruning. We further select s architectures from the final Pareto front; in this experiment, s = 5. The total size of the supernetwork is then reduced to 20.5MB, comparable to MobileNet-V3 Large at 21.11MB. We deploy the model in an always-on smartphone application that repeatedly runs inference classification on one image.
The application initially uses the sub-network with the highest accuracy, and every five hours we switch to a less accurate model to better preserve energy. Figure 8 shows the system's battery life while running the application for 24 hours. We use three scenarios: 1. Worst Battery Usage: from the Pareto front, we select the most accurate architecture. This is the only architecture the application runs and the only one loaded in memory. 2. Best Battery Usage: similar to the worst battery usage, but we select the most energy-efficient architecture. 3. Adequate Battery Usage: we load the complete supernetwork and switch the sub-network every 5 hours. This strategy saves up to 34% of the battery life while using highly accurate models most of the time; the average accuracy of the five selected sub-networks is 75.2%. 5 CONCLUSION This work analyzes hardware-aware weight-sharing NAS, where the multi-objective context requires the estimator to accurately preserve the Pareto rankings between sub-networks. Contrary to standard baselines that estimate each objective independently, we propose a supernetwork training methodology able to preserve the Pareto rankings during the search. Using our methodology, we achieve a 97% near-Pareto-front approximation on the NAS-Bench-201, DARTS, and ProxylessNAS search spaces. We find a 77.2% accuracy model on ImageNet while training the supernetwork for only 3.8 GPU days. Using the supernetwork's capabilities, we saved up to 34% of the battery capacity with an average accuracy of 75.2%. A RESULTS ON IMAGENET B ADDITIONAL RESULTS Table 3 shows the results of our training methodology on the FPGA ZCU102 and Raspberry Pi 3. Our methodology consistently outperforms state-of-the-art methods on different hardware platforms. C NUMBER OF SAMPLED SUB-NETWORKS Figure 9 shows the effect of increasing the number of sampled sub-networks on the search results. Generally, increasing the number of samples increases the hypervolume. The hypervolume is used to evaluate Pareto front approximations: it computes the area contained between the Pareto front points found by the search and a reference point. Our reference point is a pre-sampled architecture from the supernetwork with low accuracy and high latency. When the number of sampled sub-networks is too high, each layer's output is the sum of multiple operations that may or may not belong to the final Pareto front, which induces a bias when adjusting the alpha parameters. D PRUNING ALGORITHM We validate our pruning algorithm by comparing the results of our method with and without it in Table 4. Without pruning, the search time more than doubles, from 3.8 to 8.1 GPU days, while the hypervolume improves only slightly. The most accurate final architecture appears in both Pareto fronts obtained with and without pruning, and the optimal architecture found with pruning is better in terms of both accuracy and latency. The latency is computed on a Jetson Nano Edge GPU. E LATENCY ESTIMATION In this section, we compare different latency estimators to validate the use of a LUT during the search. We randomly extract 1000 architectures from NAS-Bench-201 and 1000 from DARTS and measure the exact latency of each architecture on a Jetson Nano. We train two predictor-based models, namely XGBoost and a 3-layer MLP; the training dataset contains 700 architectures and 300 are used for testing.
On NAS-Bench-201, the architectures execute sequentially, which makes the LUT the most accurate method at ranking the architectures by latency. On DARTS, the XGBoost predictor was the most suitable method, but the LUT was not far behind, with a correlation of 0.915 against 0.942. Building the LUT in our algorithm is simple: registering a hook on the forward function of a PyTorch model is sufficient, and much more direct than calling a surrogate model. We therefore use this strategy to estimate the latency in our method. F TRAINING HYPERPARAMETERS The training hyperparameters are listed in Table 5. It takes 2, 3.8 and 3.8 GPU-days to fully train the supernetwork on NAS-Bench-201, DARTS and the ProxylessNAS search space, respectively. Our training is 5x faster than previous works thanks to the pruning strategy. To be consistent with previous works, we do not employ data augmentation tricks such as cutout or mixup, nor special operations such as squeeze-and-excitation; all of these could further improve the scores on the test set.
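To illustrate the LUT construction described above, the sketch below times each candidate operation and estimates a sub-network's latency as the sum of its per-operation LUT entries. The paper mentions using a forward hook; here the modules are timed directly for simplicity, and the operation set, input shape, and CPU timing via `time.perf_counter` (real GPU measurements would need synchronization) are illustrative assumptions.

```python
import time
import torch
import torch.nn as nn

def build_latency_lut(ops, sample_input, warmup=10, reps=50):
    """Measure each candidate operation once and store (op name -> ms)."""
    lut = {}
    with torch.no_grad():
        for name, op in ops.items():
            for _ in range(warmup):          # warm up caches / allocator
                op(sample_input)
            start = time.perf_counter()
            for _ in range(reps):
                op(sample_input)
            lut[name] = (time.perf_counter() - start) / reps * 1e3
    return lut

def subnetwork_latency(path, lut):
    """Estimated latency of a sub-network = sum of its layers' LUT entries."""
    return sum(lut[name] for name in path)

# Hypothetical candidate operations for one layer of the supernetwork.
ops = {"conv3x3": nn.Conv2d(16, 16, 3, padding=1),
       "conv1x1": nn.Conv2d(16, 16, 1),
       "skip": nn.Identity()}
lut = build_latency_lut(ops, torch.randn(1, 16, 32, 32))
print(subnetwork_latency(["conv3x3", "skip", "conv1x1"], lut))
```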
1. What is the focus and contribution of the paper on hardware-aware NAS? 2. What are the strengths of the proposed approach, particularly in terms of accuracy and latency trade-offs? 3. What are the weaknesses of the paper regarding the derivation of the alpha parameter and modeling of accuracy and latency? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a hardware-aware NAS on a supernetwork that preserves the Pareto ranking (accuracy and latency). It demonstrates a better accuracy-latency trade-off when compared to SOTA approaches. Strengths And Weaknesses Strength A good trade-off between latency and accuracy is achieved using the proposed approach compared to the SOTA. Weakness It is unclear how the alpha parameter is derived and how the correctness of the accuracy / latency modelling affects the quality of the results. Clarity, Quality, Novelty And Reproducibility Clarity This paper is generally well structured. Some details are missing and need to be clarified (see below). Quality The approach is generally well described and is developed based on an existing proven method. Some clarifications are needed to justify the correctness (see below). Novelty This work appears to be an extension of supernetwork training to include the proposed Pareto rank. Reproducibility Cannot be evaluated based on the existing materials.
ICLR
Title Pareto Rank-Preserving Supernetwork for HW-NAS Abstract In neural architecture search (NAS), training every sampled architecture is very time-consuming and should be avoided. Weight-sharing is a promising solution to speed up the evaluation process. However, a sampled subnetwork is not guaranteed to be estimated precisely unless a complete individual training process is done. Additionally, practical deep learning engineering processes require incorporating realistic hardware-performance metrics into the NAS evaluation process, also known as hardware-aware NAS (HW-NAS). HW-NAS results in a Pareto front, the set of all architectures that optimally trade off the conflicting objectives, i.e., task-specific performance and hardware efficiency. This paper proposes a supernetwork training methodology that preserves the Pareto ranking between its different subnetworks, resulting in more efficient and accurate neural networks for a variety of hardware platforms. The results show a 97% near-Pareto-front approximation in less than 2 GPU days of search, which provides a 2x speed-up compared to state-of-the-art methods. We validate our methodology on NAS-Bench-201, DARTS and ImageNet. Our optimal model achieves 77.2% accuracy (+1.7% compared to the baseline) with an inference time of 3.68ms on an Edge GPU for ImageNet. 1 INTRODUCTION A key element in solving real-world deep learning (DL) problems is the optimal selection of the sequence of operations and their hyperparameters, called the DL architecture. Neural architecture search (NAS) (Santra et al. (2021)) automates the design of DL architectures by searching for the best architecture within a set of possible architectures, called the search space. When considering hardware constraints, hardware-aware neural architecture search (HW-NAS) (Benmeziane et al. (2021); Sekanina (2021)) simultaneously optimizes task-specific performance, such as accuracy, and hardware efficiency, measured by latency, energy consumption, memory occupancy, and chip area. HW-NAS works (Cai et al. (2019); Lin et al. (2021); Wang et al. (2022)) have demonstrated their usefulness and discovered state-of-the-art architectures for image classification (Lin et al. (2021)), object detection (Chen et al. (2019)), and keyword spotting (Busia et al. (2022)). HW-NAS is cast as a multi-objective optimization problem. Techniques for HW-NAS span evolutionary search, Bayesian optimization, reinforcement learning and gradient-based methods. These require evaluating each sampled architecture on the targeted task and hardware platform. However, the evaluation is extremely time-consuming, especially for task-specific performance, which requires training the architecture. Many estimation strategies (White et al. (2021)) are used to alleviate this problem, such as neural-predictor methods (Benmeziane et al. (2022a); Ning et al. (2020)), zero-cost learning (Lopes et al. (2021); Abdelfattah et al. (2021)), and weight sharing (Chu et al. (2021); Chen et al. (2021)). These strategies are evaluated on how well they respect the ground-truth ranking between the architectures in the search space. Weight sharing is an estimation strategy that formulates the search space as a supernetwork. A supernetwork is an over-parameterized architecture in which each path can be sampled. At the end of this sampling, a sub-network of the supernetwork is obtained. In each layer, all possible operations are trained.
With this definition, we can classify weight-sharing NAS into two categories: (1) a two-stage NAS, in which we first train the supernetwork on the targeted task; then, using the pre-trained supernetwork, each sampled sub-network's performance can be estimated by a search strategy, such as an evolutionary algorithm. (2) A one-stage NAS, in which we simultaneously search and train the supernetwork. Additional parameters are assigned to each possible operation per layer, and these parameters are trained to select which operation is appropriate for each layer. Both weight-sharing categories assume that the rank between different sub-networks is preserved: two architectures with the same rank are assumed to have the same accuracy. State-of-the-art works (Zhang et al. (2020); Peng et al. (2021); Zhao et al. (2021)) have highlighted the training inefficiency of this approach by computing the ranking correlation between the architectures' actual rankings and the estimated rankings. Some solutions train the supernetwork with strict fairness constraints to preserve the accuracy ranking, such as FairNAS (Chu et al. (2021)). Others train a graph convolutional network in parallel to fit the performance of sampled sub-networks (Chen et al. (2021)). However, current solutions have two main drawbacks: 1. In the multi-objective context of HW-NAS, different objectives such as accuracy and latency have to be estimated. The result is a Pareto front, a set of architectures that best respects the trade-off between the conflicting objectives. The ranking under a single objective is therefore no longer a sufficient metric for the estimator; in this setting, the dominance concept must be taken into account in the ranking. Errors in either estimate hinder the final Pareto front approximation and affect the search exploration when accuracy and latency are the objectives. 2. Many works (Chen et al. (2021); Zhao et al. (2021); Guo et al. (2020)) attempt to fix the supernetwork sampling after its training. We believe this strategy is inefficient because of how the supernetwork is pre-trained: its accuracy-based ranking correlation is poor. In Dong & Yang (2020), a reduced Kendall's tau-b rank correlation coefficient of 0.47 was obtained on NAS-Bench-201 when using this approach. The accuracy estimation is thus non-conclusive and will mislead any NAS search strategy. To overcome the aforementioned issues, we propose a new training methodology for supernetworks that preserves the Pareto ranking of sub-networks in HW-NAS and avoids additional ranking-correction steps. The contributions of this paper are summarized as follows: • We define the Pareto ranking as a novel metric to compare HW-NAS evaluators in the multi-objective context. Our study shows that optimizing this metric while training the supernetwork increases the Kendall rank correlation coefficient from 0.47 to 0.97 for a vanilla weight-sharing NAS. • We introduce a novel one-stage weight-sharing supernetwork training methodology. The training optimizes the task-specific loss function (e.g., cross-entropy loss) and a Pareto ranking listwise loss function to accurately select the adequate operation per layer. • During training, we prune the operations that are the least likely to appear in the architectures of the optimal Pareto front. The pruning is done by overlapping the worst Pareto-ranked sub-networks and removing the operations that are used only in these sub-networks.
We demonstrate that, using our methodology on three different search spaces, namely NAS-Bench-201 (Dong & Yang (2020)), DARTS (Liu et al. (2019)) and the ProxylessNAS search space (Cai et al. (2019)), we achieve a higher Pareto front approximation than current state-of-the-art methods. For example, we obtain a 97% Pareto front approximation where One-Shot-NAS-GCN (Chen et al. (2021)) achieves only 87% on NAS-Bench-201. 2 BACKGROUND & RELATED WORK This section summarizes the state of the art in accelerating multi-objective optimization for HW-NAS. 2.1 ACCELERATING HARDWARE-AWARE NAS Given a target hardware platform and a DL task, hardware-aware neural architecture search (HW-NAS) (Benmeziane et al. (2021)) automates the design of efficient DL architectures. HW-NAS is a multi-objective optimization problem where different and contradictory objectives, such as accuracy, latency, energy consumption, memory occupancy, and chip area, have to be optimized. HW-NAS has three main components: (1) the search space, (2) the evaluation method, and (3) the search strategy. The main time-consuming component in HW-NAS is the evaluation method, and several state-of-the-art works (White et al. (2021)) have been proposed to alleviate this problem. Predictor-based methods (Ning et al. (2020); Lomurno et al. (2021)) are the most popular strategies, where machine learning models predict the accuracy or latency from architecture features (e.g., number of convolutions, widening factor, etc.) or from representations built with Graph Neural Networks (GNN) (Ning et al. (2020)) and Recurrent Neural Networks (RNN) (Lomurno et al. (2021)). However, these methods are not flexible across search spaces, as they require building a dataset of trained sampled architectures and then training the predictor. Weight-sharing approaches (Chu et al. (2021); Chen et al. (2021); Zhao et al. (2021); Guo et al. (2020)), on the other hand, define the search space as a supernetwork. In each layer, the supernetwork combines the results of the possible operations. A sequence of operations from the input to the output is called a sub-network and constitutes a possible architecture. Training the supernetwork consists of training several paths at once: the input is forwarded through a series of parallel operations whose outputs are summed after each layer. There are two main issues when training a supernetwork: 1. The order of the sampled sub-networks matters: assume we have two sub-networks A and B that both start with the same operation op1 in layer 1. During the first training iteration, A is sampled and op1's weights are adjusted. The second iteration samples B and adjusts op1's weights again. If we then evaluate A, we use the newly adjusted weights of op1, which degrades the estimation. 2. Unfair bias: sub-networks with initially better task-specific performance are more likely to be sampled next and to maintain higher coefficients in a one-stage supernetwork. FairNAS (Chu et al. (2021)) defines strict fairness constraints that ensure each operation's weights are updated the same number of times at each stage. 2.2 MULTI-OBJECTIVE OPTIMIZATION IN HW-NAS Optimizing conflicting objectives simultaneously requires the definition of a decision metric. In multi-objective optimization (Batista et al. (2011)), this metric is the dominance criterion. In a two-objective minimization problem, dominance is defined as follows: if s1 and s2 denote two solutions, s1 dominates s2 (s1 ≻ s2) if and only if ∀i, fi(s1) ≤ fi(s2) and ∃j, fj(s1) < fj(s2).
fi and fj are conflicting objective functions such as latency and accuracy. Under the dominance relation, there is generally no single solution that dominates all the others. We instead build the Pareto front: the set of all non-dominated solutions. The Pareto front approximation is evaluated using the hypervolume metric. The hypervolume measures the area dominated by a Pareto front approximation P with respect to a reference point. The reference point is defined as an architecture with high latency and low accuracy (furthest from the optimal points). Maximizing the hypervolume leads to a high-quality and diverse Pareto front approximation set. In HW-NAS, computing the hardware efficiency is expensive due to the time-consuming deployment and measurement on the hardware, so using multiple performance estimators is popular (Hu et al. (2019); Elsken et al. (2019); Lu et al. (2020); Huang & Chu (2021)). Current multi-objective HW-NAS approaches focus on optimizing the search algorithm at the expense of poor performance estimators. However, using one performance estimator per objective is not optimal (Benmeziane et al. (2022b)). In this paper, we present an original weight-sharing technique that directly predicts a multi-objective metric, called the Pareto ranking. 3 METHODS The core motivation for a novel training methodology is to achieve an efficient sub-network evaluation for HW-NAS. The proposed training methodology must preserve the Pareto ranking between different sub-networks while reducing the overall search time. 3.1 PARETO RANKING In this section, we define the Pareto ranking metric used to train and evaluate the supernetwork. Pareto Ranking Solving the multi-objective optimization problem on a set of sub-networks results in a Pareto front. The set of architectures in this front is denoted F1, i.e., all of these architectures have rank 1. We obtain the lower ranks by successively solving the problem on the set of sub-networks remaining after the previous solutions are pruned. The lowest rank is assigned to the sub-networks that do not dominate any other sub-network. We formally define the Pareto ranking in equation 1, where S is the entire supernetwork, Fk′ is the set of sub-networks ranked k′, and ≻ is the dominance relation. Under this ranking, multiple architectures may share the same rank; this happens when none of them dominates the others. $a \in F_k \iff \nexists\, \hat{a} \in S \setminus \bigcup_{k' < k} F_{k'} \ \text{such that}\ \hat{a} \succ a$ (1) Pareto Ranking Correlation. We evaluate the quality of an estimator using ranking correlations such as Kendall's tau-b correlation or Spearman correlation. Kendall's tau-b determines whether there is a monotonic relationship between two variables and is suitable when the variables contain many tied ranks (Benmeziane et al. (2021)), which is our case. In the rest of the paper, we compute Kendall's tau-b correlation between the ground-truth ranks (i.e., the Pareto ranks obtained from independently training the sub-networks) and the Pareto ranks obtained by evaluating each architecture with the supernetwork's shared weights. 3.2 PARETO RANK-PRESERVING TRAINING Our training methodology aims at preserving the Pareto ranking under the weight-sharing evaluation. Figure 3 shows a representation of the supernetwork definition and the different parameters we aim to learn. A sub-network is a path from the input to the output, and all extracted sub-networks have the same depth.
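Before detailing the training, note that the Pareto ranking correlation of Section 3.1 can be computed directly with SciPy, whose `kendalltau` defaults to the tau-b variant that handles the many ties produced by Pareto ranking. A minimal sketch with illustrative rank arrays:

```python
from scipy.stats import kendalltau

# Ground-truth Pareto ranks (from fully trained sub-networks) versus the ranks
# estimated with the supernetwork's shared weights; the values are illustrative.
true_ranks = [1, 1, 2, 3, 2, 4, 1, 3]
pred_ranks = [1, 2, 2, 3, 1, 4, 1, 3]

# scipy.stats.kendalltau uses the tau-b variant by default, which corrects
# for tied ranks on both sides.
tau, p_value = kendalltau(true_ranks, pred_ranks)
print(f"Kendall tau-b = {tau:.3f} (p = {p_value:.3f})")
```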
We train the supernetwork with two goals: 1) improve the task-specific loss function by adjusting W, the task-specific weights of the original model associated with the neural network operations, such as the kernels of a convolution, and 2) improve the Pareto ranking loss between its different paths by adjusting α, the weights associated with the operation selection. α measures which operation is critical and which one is selected. Algorithm 1 and Figure 1 summarize the training procedure. • Step 1: Train with strict fairness. We train our supernetwork using the strict fairness constraint of FairNAS (Chu et al. (2021)). This step adjusts the weights W of all the sub-networks and gives a good starting point for the Pareto ranking training. Additionally, at this point the accuracy estimated via the task-specific loss is reliable; we use these estimates to compute the true Pareto ranks when the benchmark provides no accuracy. • Step 2: Pareto ranking training. In each iteration, we apply: - Training to solve the task: a mini-batch is sampled from the training set, and a sub-network is chosen by taking, in each layer, the operation with the highest α. The operations' weights are updated using the task-specific loss, e.g., cross-entropy loss for image classification. - Pareto rank training: in this phase, we purposefully bias the training towards better Pareto-ranked architectures using the α parameters. The α parameters are trained using the loss function in equation 2. During the forward pass, we Pareto-rank the sampled sub-networks. We compute the number of times an operation opi appears in layer lj among the N top-ranked sub-networks, denoted g(opi, lj); N is a hyperparameter defined before training. We denote by ĝ(opi, lj) the ground truth. Equation 2 computes a hinge loss over all layers of the sampled sub-networks, comparing how often the operation with the highest α appears in the predicted Pareto front versus the ground-truth one: $\mathcal{L} = \sum_{j=1}^{L} \; \sum_{\substack{i \,:\, g(op_i, l_j) > \hat{g}(op_i, l_j),\; i \neq \arg\max(\alpha)}} \max\big[0,\; m - g(\arg\max(\alpha), l_j) - \hat{g}(op_i, l_j)\big]$ (2) We adjust each operation's α parameters and compute each sampled sub-network's latency using a lookup table. We define the predicted Pareto score as $P_s = \sum_{op \in a} \alpha_{op}$, i.e., the sum of the selected operations' alpha values. Next, we compute the listwise ranking loss, defined as the cross-entropy between the ranking scores and the Pareto ranks (ground truth). • Step 3: Pruning by Pareto-ranking sub-networks. We drop the sub-networks furthest from the optimal Pareto front to accelerate the training. First, we select the sub-networks belonging to the first two Pareto ranks. Then, based on the hypervolume improvement (HVI) (Emmerich et al. (2011)), we select n sub-networks. For each layer, the operations never used by any sub-network in this selection are removed. Equation 3 shows how the hypervolume improvement is computed in this context, where oij denotes operation i in layer j, HV denotes the hypervolume function and {Soij} denotes the set of sampled sub-networks using operation i in layer j: $HVI(o_{ij}, P) = HV(P \cup \{S_{o_{ij}}\}) - HV(P \setminus \{S_{o_{ij}}\})$ (3) Finally, going over all the layers and selecting the operations with the highest α suffices to find the most efficient DNN within the search space.
Figure 4 shows the training results. We compare our methodology to the strict fairness training of FairNAS (Chu et al. (2021)). During training, the Pareto ranking correlation increases with the quality of the estimations. When using our training methodology without the alpha parameters, the ranking correlation saturates at 0.7; FairNAS shows the same behaviour with reduced variance among the different training runs. However, when the alpha parameters are included, the selection is more efficient and the architectures' rankings are well represented, reaching 0.94.

Algorithm 1 Supernetwork Training Algorithm
Input: search space S; number of epochs for fairness training Nf; number of epochs for Pareto training Np; supernetwork parameters (W, α); training dataloader D; task-specific loss Loss; Pareto ranking loss LossPR; number of sampled sub-networks n
procedure TRAIN
    Initialize W and α for each operation in the supernetwork
    Strict fairness training for Nf epochs
    for i = 1 to Np do
        for data, labels in D do
            Build the model with argmax(α) following Step 2
            Reset the gradients of all W parameters to zero
            Compute gradients based on Loss, data and labels, and update W
        end for
        Sample n sub-networks (models)
        Compute the Pareto ranks of the models and LossPR between the scores and the Pareto ranks
        Update α by its gradients
    end for
end procedure

4 EXPERIMENTS In this section, we evaluate our training methodology on three search spaces: NAS-Bench-201 (Dong & Yang (2020)), DARTS (Liu et al. (2019)) and the ProxylessNAS search space (Cai et al. (2019)). 4.1 SETUP Search Spaces: Several search spaces have been used to evaluate our method's performance. NAS-Bench-201 (Dong & Yang (2020)) is a tabular benchmark that contains 15k convolutional neural networks; each architecture is trained on CIFAR-10, CIFAR-100 and ImageNet-16-120 (Chrabaszcz et al. (2017)), and we use the latency values from HW-NAS-Bench (Li et al. (2021)). DARTS (Liu et al. (2019)) is a supernetwork benchmark that contains $10^{18}$ architectures; each architecture is trained on CIFAR-10 and is transferable to ImageNet. We also validate our methodology on ImageNet using the ProxylessNAS search space (Cai et al. (2019)), whose size reaches $6^{19}$. All training hyperparameters are listed in Table 5 in Appendix F. 4.2 SEARCH RESULTS In these experiments, we consider two objectives: accuracy and latency (inference time). The latency is either given by HW-NAS-Bench (Li et al. (2021)) or computed using a lookup table, as explained in Section 3. Figure 5 shows the Pareto front approximations obtained with different methods on NAS-Bench-201 for CIFAR-10 and on the ProxylessNAS search space for ImageNet. We obtain a 10% hypervolume increase on NAS-Bench-201 and a 43% hypervolume increase on ImageNet compared to the best baselines, One-Shot-NAS-GCN and FairNAS, respectively. 4.2.1 SEARCH ON NAS-BENCH-201 Table 1 shows the results of our methodology on NAS-Bench-201 compared to state-of-the-art methods. PRP-NAS-BL, PRP-NAS-BA and PRP-NAS-O are three architectures sampled from our final Pareto front: BL stands for "Best Latency", BA stands for "Best Accuracy", and O stands for "Optimal". Notably, our architectures obtain highly competitive results. The optimal architecture, PRP-NAS-O, outperforms current state-of-the-art methods in both accuracy and latency. Including hardware awareness during the search allows us to obtain results tailored to the targeted hardware platform. Besides, multiple training runs show the stability of our method compared to other baselines. The acceleration in search cost is mainly due to applying the pruning during training; this cost can vary with the GPU used (we used a V100 GPU to train the supernetwork). Results on other targeted platforms can be found in Appendix B.
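Relating back to Algorithm 1, the listwise LossPR is described as a cross-entropy between the predicted Pareto scores $P_s$ and the ground-truth Pareto ranks. The paper does not spell out the exact form; one plausible reading, sketched below under that assumption, softmaxes the scores over the n sampled sub-networks and targets a distribution that puts more mass on better-ranked sub-networks.

```python
import torch
import torch.nn.functional as F

def pareto_score(alphas_per_layer):
    """P_s = sum of the selected (highest-alpha) operations' values along a path."""
    return torch.stack([a.max() for a in alphas_per_layer]).sum()

def listwise_pr_loss(scores, ranks):
    """Listwise cross-entropy between predicted scores and Pareto ranks.

    scores: (n,) tensor of P_s values for n sampled sub-networks.
    ranks:  (n,) tensor of ground-truth Pareto ranks (1 = best front).
    """
    log_p = F.log_softmax(scores, dim=0)
    # Turn ranks into a target distribution: lower rank -> larger weight.
    target = F.softmax(-ranks.float(), dim=0)
    return -(target * log_p).sum()

# Example with four sampled sub-networks:
scores = torch.tensor([2.3, 1.1, 3.0, 0.7], requires_grad=True)
ranks = torch.tensor([1, 2, 1, 3])
loss = listwise_pr_loss(scores, ranks)
loss.backward()  # gradients flow back to the alpha parameters via the scores
```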
4.2.2 SEARCH ON IMAGENET Similar conclusions can be drawn when searching on ImageNet. Table 2 summarizes the results. Our optimal model surpasses FairNAS-A (+1.9%) and One-Shot-NAS-GCN (+1.7%) while running faster. Training on ImageNet is time-consuming due to the higher image resolution, which explains the increase in search cost; we still surpass most methods in terms of search time. We compare two ProxylessNAS architectures; ProxylessNAS-R is specific to mobile inference. When using data augmentation and architectural tricks, namely squeeze-and-excitation and AutoAugment, in the optimal architecture, we achieve 78.6% accuracy on ImageNet. However, this can degrade the latency considerably: on the FPGA ZCU102, the latency increases from 4.63ms to 7.9ms. 4.3 RANKING QUALITY Ranking preservation measures the quality of the evaluation component in NAS. In HW-NAS, we argue that this measure should consider the Pareto ranking instead of the independent ranks of each objective. We compare the different estimators used in HW-NAS using Kendall's tau correlation between the predicted Pareto ranks and the Pareto ranks obtained from independently training the architectures; the latter are extracted from NAS-Bench-201. Figure 6 shows the correlation results. In general, it is harder to train a supernetwork to respect the Pareto ranks because the sub-networks affect each other, i.e., the outputs of each layer are summed together. The increase in Kendall's tau correlation over previous weight-sharing methodologies is due to the improvement in the accuracy estimation provided by the supernetwork. Predictor-based evaluators use learning-to-rank theory and train their predictors only to predict the ranking. Methods such as GATES (Ning et al. (2020)) or BRP-NAS (Dudziak et al. (2020)) train many independent predictors, one per objective. HW-PR-NAS (Benmeziane et al. (2022a)) trains a single predictor to fit the Pareto ranks; however, its methodology is not flexible enough for supernetwork training. 4.3.1 ANALYSIS OF THE α PARAMETER Figure 7 illustrates the evolution of the alpha parameters for each operation in layers 1 and 2 during training. It clearly shows how alpha favors one operation over the others as training proceeds. At the end of training, we take the operations with the highest alpha, which represent the operations used by the architectures in the final Pareto front. If a layer has a clear candidate, such as layer 1, where conv3x3 exceeds 60%, that operation is chosen. If a layer contains multiple operations with similar alpha values, we construct all the paths of that layer. 4.4 BATTERY USAGE PRESERVATION The amount of energy consumed by each model differs, and is mainly attributed to the number of multiply-adds computed. We take supernetwork usage a step further by scheduling the execution of different sub-networks according to the system's battery life. In this experiment, training is done with two objectives: accuracy and energy consumption. Once training is done, only the Pareto front solutions are kept in the supernetwork, thanks to the pruning. We further select s architectures from the final Pareto front; in this experiment, s = 5. The total size of the supernetwork is then reduced to 20.5MB, comparable to MobileNet-V3 Large at 21.11MB. We deploy the model in an always-on smartphone application that repeatedly runs inference classification on one image.
The application initially uses the sub-network with the highest accuracy, and every five hours we switch to a less accurate model to better preserve energy. Figure 8 shows the system's battery life while running the application for 24 hours. We use three scenarios: 1. Worst Battery Usage: from the Pareto front, we select the most accurate architecture. This is the only architecture the application runs and the only one loaded in memory. 2. Best Battery Usage: similar to the worst battery usage, but we select the most energy-efficient architecture. 3. Adequate Battery Usage: we load the complete supernetwork and switch the sub-network every 5 hours. This strategy saves up to 34% of the battery life while using highly accurate models most of the time; the average accuracy of the five selected sub-networks is 75.2%. 5 CONCLUSION This work analyzes hardware-aware weight-sharing NAS, where the multi-objective context requires the estimator to accurately preserve the Pareto rankings between sub-networks. Contrary to standard baselines that estimate each objective independently, we propose a supernetwork training methodology able to preserve the Pareto rankings during the search. Using our methodology, we achieve a 97% near-Pareto-front approximation on the NAS-Bench-201, DARTS, and ProxylessNAS search spaces. We find a 77.2% accuracy model on ImageNet while training the supernetwork for only 3.8 GPU days. Using the supernetwork's capabilities, we saved up to 34% of the battery capacity with an average accuracy of 75.2%. A RESULTS ON IMAGENET B ADDITIONAL RESULTS Table 3 shows the results of our training methodology on the FPGA ZCU102 and Raspberry Pi 3. Our methodology consistently outperforms state-of-the-art methods on different hardware platforms. C NUMBER OF SAMPLED SUB-NETWORKS Figure 9 shows the effect of increasing the number of sampled sub-networks on the search results. Generally, increasing the number of samples increases the hypervolume. The hypervolume is used to evaluate Pareto front approximations: it computes the area contained between the Pareto front points found by the search and a reference point. Our reference point is a pre-sampled architecture from the supernetwork with low accuracy and high latency. When the number of sampled sub-networks is too high, each layer's output is the sum of multiple operations that may or may not belong to the final Pareto front, which induces a bias when adjusting the alpha parameters. D PRUNING ALGORITHM We validate our pruning algorithm by comparing the results of our method with and without it in Table 4. Without pruning, the search time more than doubles, from 3.8 to 8.1 GPU days, while the hypervolume improves only slightly. The most accurate final architecture appears in both Pareto fronts obtained with and without pruning, and the optimal architecture found with pruning is better in terms of both accuracy and latency. The latency is computed on a Jetson Nano Edge GPU. E LATENCY ESTIMATION In this section, we compare different latency estimators to validate the use of a LUT during the search. We randomly extract 1000 architectures from NAS-Bench-201 and 1000 from DARTS and measure the exact latency of each architecture on a Jetson Nano. We train two predictor-based models, namely XGBoost and a 3-layer MLP; the training dataset contains 700 architectures and 300 are used for testing.
E LATENCY ESTIMATION

In this section, we compare different latency estimators to validate the use of a LUT during the search. We randomly extract 1000 architectures from NAS-Bench-201 and 1000 from DARTS, and measure the exact latency of each on a Jetson Nano. We train two predictor-based models, namely XGBoost and a 3-layer MLP, on 700 architectures and test on the remaining 300. On NAS-Bench-201, the architectures execute sequentially, which makes the LUT the most accurate at ranking the architectures by latency. On DARTS, the XGBoost predictor was the most suitable method, but the LUT was not far behind, at 0.915 against 0.942. Computing the LUT in our algorithm is simple: a hook on the forward function of a PyTorch model is sufficient, and much more direct than calling a surrogate model. We therefore use this strategy to estimate latency in our method.

F TRAINING HYPERPARAMETERS

The training hyperparameters are listed in Table 5. It takes 2, 3.8, and 3.8 GPU days, respectively, to fully train the supernetwork on the NAS-Bench-201, DARTS, and ProxylessNAS search spaces. Our training is 5x faster than previous works thanks to the pruning strategy. To be consistent with previous works, we do not employ data augmentation tricks such as cutout or mixup, nor special operations such as squeeze-and-excitation. These methods could further improve the scores on the test set.
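As a rough illustration of the hook-based LUT construction described in Appendix E, the sketch below times each leaf operation with PyTorch forward hooks. It is a simplification, not our exact implementation: the function name and `n_runs` parameter are illustrative, and faithful GPU numbers would additionally require warm-up runs and explicit synchronization.

```python
import time
import torch
import torch.nn as nn

def build_latency_lut(model: nn.Module, sample_input: torch.Tensor,
                      n_runs: int = 50):
    """Return a {module name: mean latency in ms} lookup table.

    Leaf modules are timed with forward pre-/post-hooks. This simple
    timer measures wall-clock CPU time only.
    """
    samples, starts, handles = {}, {}, []

    def make_pre_hook(name):
        def pre_hook(module, inputs):
            starts[name] = time.perf_counter()
        return pre_hook

    def make_post_hook(name):
        def post_hook(module, inputs, output):
            elapsed_ms = (time.perf_counter() - starts[name]) * 1e3
            samples.setdefault(name, []).append(elapsed_ms)
        return post_hook

    for name, module in model.named_modules():
        if not list(module.children()):  # leaf operations only
            handles.append(module.register_forward_pre_hook(make_pre_hook(name)))
            handles.append(module.register_forward_hook(make_post_hook(name)))

    model.eval()
    with torch.no_grad():
        for _ in range(n_runs):
            model(sample_input)

    for h in handles:
        h.remove()
    return {name: sum(ts) / len(ts) for name, ts in samples.items()}

# A sub-network's latency is then estimated as the sum of the LUT entries
# of the operations along its path, assuming sequential execution.
```

On a sequential search space such as NAS-Bench-201, summing per-operation entries in this way tracks the measured latency well, which is consistent with the comparison above.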
1. What is the focus and contribution of the paper on Pareto front sub-network identification?
2. What are the strengths of the proposed methodology, particularly in its effectiveness and validation on various search spaces?
3. What are the weaknesses of the paper, especially regarding the second phase of training and its enforcement of fairness?
4. Do you have any concerns or questions regarding the correlation between argmax(α) and Kendall tau rank correlation, and how the 'Truth' in the CELoss(Score, Truth) formulation is calculated?
5. Are there any minor suggestions or comments you have for improving the paper's clarity and quality?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

In this paper, the authors introduce a methodology for Pareto front sub-network identification from a supernetwork. This is done in a two-step process: first conducting fair sampling of sub-networks for Nf epochs, followed by Pareto-rank training for Np epochs. Sub-networks that are furthest from the Pareto front are also dropped to accelerate training. The proposed method achieves state-of-the-art Pareto front approximation (97%) in 2 GPU days. A new parameter, α, is introduced to measure which operation is critical, and a Pareto-ranking loss is used to adjust the α for different paths. The sum of the selected operations' α values is used as the Pareto score, and a cross-entropy loss is taken between the ranking scores and the Pareto ranks (ground truth) to update α.

Strengths And Weaknesses

Strengths:
- The idea is straightforward to implement on existing supernetwork-based training schemes, and seems to be effective from the results.
- The methodology is validated on NAS-Bench-201, DARTS and ImageNet.

Weaknesses:
- It is not clear how the second phase of training (argmax(α)) enforces fairness. Justification and discussion of the implications of unfair sub-network sampling in this stage would be useful.
- A table showing the Pareto approximation effectiveness for each of the tested search spaces would be very useful. The current results highlight the fact that PRP-NAS-BL finds a model with low latency, and PRP-NAS-BA finds a model with high accuracy (as expected). If possible, comparing with the ground-truth optimal architectures in each case may add more value/context to the result.

Questions:
- If the argmax(α) selection scheme does not enforce fairness, it is natural to assume that the correlation between argmax(α) and Kendall tau rank correlation would increase simply due to the nature of the CE-loss minimization, with no bearing on the ground truth.
- How is the 'Truth' in the CELoss(Score, Truth) formulation calculated? If Truth is a list of intermediate accuracies of the sub-networks, then naturally the Kendall tau rank correlation will keep increasing, simply due to sampling bias. Comments on this issue would be appreciated.

Minor comments:
- Table 1 (training hyperparameters) can be moved to the Appendix.
- Fix 3.2: 'Algorithm 1 and Figure 1 summarizes summarizes..'

Clarity, Quality, Novelty And Reproducibility

See above.
ICLR
1. What is the main contribution of the paper on multi-objective Neural Architecture Search (NAS)?
2. What are the strengths and weaknesses of the proposed method, particularly in optimizing for multiple objectives?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the algorithm, its description, and the evaluation of the Pareto parameter training?
5. How does the paper compare to other works in the NAS area, especially those that have explored Pareto-based multi-objective formulations?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

This paper presents a method for multi-objective Neural Architecture Search (NAS), i.e., finding the best architectures based both on task prediction and hardware-based metrics (e.g., latency). The proposed method builds on single-stage supernetwork techniques (the union of possible architectural choices for the final neural architecture) by co-training the supernetwork's parameters and a "Pareto" parameter to find (and rank) the sub-models. They propose to use Pareto ranks as the primary search objective and NAS evaluation criterion. They implemented their algorithm and compare the found ranks of the models on the Pareto frontier with the Pareto ranks of the individually trained sub-models for multiple benchmarks. In addition, the trained Pareto parameter allows them to decrease supernetwork training time by pruning sub-networks predicted not to be on the Pareto front. By doing so, they achieve a 97% Pareto front approximation (vs. 87%) for the resulting sub-models.

Strengths And Weaknesses

Positive points:
- Multi-objective NAS is an important area, and the proposed method (Pareto-based search training) appears novel.
- Evaluation used 3 benchmarks/search spaces (DARTS, NAS-Bench-201, ProxylessNAS) and additionally compares to FairNAS and a recent one-shot GCN-based technique.
- Results are promising: optimal sub-networks often meet or improve accuracy/latency while also taking less time to train.

Negative points:
- The paper is written as if optimizing for multiple objectives was a relatively new idea, but searching for models on the Pareto front has been previously explored in the NAS area. The related work doesn't describe much of the work in that multi-objective space.
- The description of the algorithm is light on details, specifically the Pareto loss function, the Pareto parameter updates, and how architectures are found. This makes reproducibility difficult.
- Given the above limited description of the operation of the Pareto-based training, it isn't clear how easy it is to extend to more objectives.
- There is little to no evaluation of the Pareto parameter training in isolation to understand its behavior/performance.

Clarity, Quality, Novelty And Reproducibility

Overall, this paper appears to propose a relatively novel take on discovering sub-networks on the Pareto frontier. However, a lack of related work and detail around the algorithm muddies the presentation, ultimately making it difficult to assess novelty, reproducibility, and applicability.

The end of Sec 1 states that it achieves a "97% near Pareto front approximation" -- it might be straightforward, but be very clear if it's a critical metric.

Perhaps the largest omission concerns Algorithm 1. Section 3.2's "steps" do not seem to accurately reflect the algorithm. In step 2, "training to solve the task", the bullet only states taking a mini-batch and choosing a sub-network. It does not describe training that sub-network nor give intuition behind that action. The next section, "Pareto-rank training", starts with the sentence "After completing the iteration", but step 2 is part of the iteration? At least the appendix could spell out the "listwise CE loss" function (and Algorithm 1 could refer to it as opposed to just "Loss_{PR}"). Is \alpha just a single parameter (value) or something else (a vector)? What does "Adjust \alpha" mean?

It's not clear whether you're training the sampled sub-networks from scratch. I would assume this was the case, since the Pareto training compares the found Pareto ranks (so far) to "ground truth."
However, part of the point of the work was to avoid expensive retraining of the sub-networks. It looks like the benchmarks provide trained networks, and in one section you mention being able to use those for estimating the Pareto Kendall measure (Sec 4.3). So, does training time (GPU hours) include that time?

Another gap is that the paper does not investigate the performance/convergence of the Pareto training. How does the search proceed with N_p? How many sub-networks should be sampled (n)?

It isn't clear what "hypervolume" is or why the reader should care. It seems to have something to do with the completeness of the frontier. How often will this approach return only rank-1 models? How often will it return the complete frontier? How can we give the reader an intuition or proof of that behavior? I.e., before we compare the final models, it wasn't clear how well the Pareto training performed.

I'm curious if there is a discussion to be had around the differences between Pareto ranks and performance ranks. This mechanism will almost always return all models as rank 1 (Sec 4.4), but performance ranks will likely have many fewer identical ranks. Would it be possible to return a set of sub-networks scored rank 1 that actually all belong to the second frontier? When you compute the "ground truth" rank, was it only Pareto ranking the sampled set? In other words, if the ground-truth set was scored in isolation, they would all look like rank 1, and the correlation would be artificially high.

The end of 3.1 says we'll only use Pareto ranks, but it would still be nice to repeat this in the Fig 5 caption.

Related work should take into consideration other pieces of work that have looked at Pareto-based multi-objective formulations, such as: Elsken, Thomas, Metzen, Jan Hendrik, and Hutter, Frank. Efficient multi-objective neural architecture search via Lamarckian evolution, ICLR 2019; and Hu, Hanzhang, Langford, John, Caruana, Rich, Mukherjee, Saurajit, Horvitz, Eric, and Dey, Debadeepta. Efficient Forward Architecture Search, NeurIPS 2019. BTW, I got these by looking at prior OpenReview submissions at ICLR.
ICLR
Title Pareto Rank-Preserving Supernetwork for HW-NAS Abstract In neural architecture search (NAS), training every sampled architecture is very time-consuming and should be avoided. Weight-sharing is a promising solution to speed up the evaluation process. However, a sampled subnetwork is not guaranteed to be estimated precisely unless a complete individual training process is done. Additionally, practical deep learning engineering processes require incorporating realistic hardware-performance metrics into the NAS evaluation process, also known as hardware-aware NAS (HW-NAS). HW-NAS results in a Pareto front, a set of all architectures that optimize conflicting objectives, i.e. taskspecific performance and hardware efficiency. This paper proposes a supernetwork training methodology that preserves the Pareto ranking between its different subnetworks resulting in more efficient and accurate neural networks for a variety of hardware platforms. The results show a 97% near Pareto front approximation in less than 2 GPU days of search, which provides x2 speed up compared to stateof-the-art methods. We validate our methodology on NAS-Bench-201, DARTS and ImageNet. Our optimal model achieves 77.2% accuracy (+1.7% compared to baseline) with an inference time of 3.68ms on Edge GPU for ImageNet. 1 INTRODUCTION A key element in solving real-world deep learning (DL) problems is the optimal selection of the sequence of operations and their hyperparameters, called DL architecture. Neural architecture search (NAS) (Santra et al. (2021)) automates the design of DL architectures by searching for the best architecture within a set of possible architectures, called search space. When considering hardware constraints, hardware-aware neural architecture search (Benmeziane et al. (2021); Sekanina (2021)) (HW-NAS) simultaneously optimizes the task-specific performance, such as accuracy, and the hardware efficiency computed by the latency, energy consumption, memory occupancy, and chip area. HW-NAS works (Cai et al. (2019); Lin et al. (2021); Wang et al. (2022)) showed the usefulness and discovered state-of-the-art architectures for Image Classification (Lin et al. (2021)), Object detection (Chen et al. (2019)), and Keyword spotting (Busia et al. (2022)). HW-NAS is cast as a multi-objective optimization problem. Techniques for HW-NAS span evolutionary search, Bayesian optimization, reinforcement learning and gradient-based methods. These require evaluating each sampled architecture on the targeted task and hardware platform. However, the evaluation is extremely time-consuming, especially for task-specific performance, which requires training in the architecture. Many estimation strategies (White et al. (2021)) are used to alleviate this problem, such as neural predictor methods (Benmeziane et al. (2022a); Ning et al. (2020)), zero-cost learning (Lopes et al. (2021); Abdelfattah et al. (2021)), and weight sharing (Chu et al. (2021); Chen et al. (2021)). These strategies are evaluated on how well they respect the ground truth ranking between the architectures in the search space. Weight sharing is an estimation strategy that formulates the search space into a supernetwork. A supernetwork is an over-parameterized architecture where each path can be sampled. At the end of this sampling, a sub-network of the supernetwork is obtained. In each layer, all possible operations are trained. 
With this definition, we can classify weight-sharing NAS in two categories: (1)a twostage NAS in which we first train the supernetwork on the targeted task. Then, using the pre-trained supernetwork, each sampled sub-network’s performance can be estimated using a search strategy, such as an evolutionary algorithm. (2) a one-stage NAS in which we simultaneously search and train the supernetwork. Additional parameters are assigned to each possible operation per layer. These parameters are trained to select which operation is appropriate for each layer. Both Weight-sharing categories assume that the rank between different sub-networks is preserved. Two architectures with the same rank imply that they have the same accuracy. State-of-the-art works (Zhang et al. (2020); Peng et al. (2021); Zhao et al. (2021)) have highlighted the training inefficiency in this approach by computing the ranking correlation between the architectures’ actual rankings and the estimated rankings. Some solutions have been proposed to train the supernetwork with strict constraints on fairness to preserve the ranking for accuracy, such as FairNAS (Chu et al. (2021)). Others train a graph convolutional network in parallel to fit the performance of sampled sub-networks Chen et al. (2021). However, current solutions have two main drawbacks: 1. In the multi-objective context of HW-NAS, different objectives such as accuracy and latency have to be estimated. The result is a Pareto front, a set of architectures that better respects the trade-off between the conflicting objectives. The ranking following one objective is no longer a good metric for the estimator. In this setting, we need to take into account the dominance concept in the ranking. Both estimations hinder the final Pareto front approximation and affect the search exploration when considering the accuracy and latency as objectives. 2. Many works (Chen et al. (2021); Zhao et al. (2021); Guo et al. (2020)) attempt to fix the supernetwork sampling after its training. We believe that this strategy is inefficient due to the pre-training of supernetwork. Its accuracy-based ranking correlation is bad. In Dong & Yang (2020), a reduced Kendall’s tau-b rank correlation coefficient of 0.47 has been obtained on NAS-Bench-201 when using this approach. The accuracy estimation is thus non-conclusive and will mislead any NAS search strategy. To overcome the aforementioned issues, we propose a new training methodology for supernetworks to preserve the Pareto ranking of sub-networks in HW-NAS and avoid additional ranking correction steps. The contributions of this paper are summarized as follows: • We define the Pareto ranking as a novel metric to compare HW-NAS evaluator in the multi-objective context. Our study shows that optimizing this metric while training the supernetwork increases the Kendall rank correlation coefficient from 0.47 to 0.97 for a Vanilla Weight-sharing NAS. • We introduce a novel one-stage weight-sharing supernetwork training methodology. The training optimizes the task-specific loss function (e.g. cross-entropy loss) and a Pareto ranking listwise loss function to select the adequate operation per layer accurately. • During training, we prune the operations that are the least likely to be in the architecture of the optimal Pareto front. The pruning is done by overlapping the worst Paretoranked sub-networks and removing the operations that are only used in these sub-networks. 
We demonstrate that using our methodology on three different search spaces, namely NAS-Bench201 (Dong & Yang (2020)), DARTS (Liu et al. (2019)) and ProxylessNAS search space (Cai et al. (2019)), we achieve a higher Pareto front approximation compared to current state-of-the-art methods. For example, we obtained 97% Pareto front approximation when One-Shot- NAS-GCN (Chen et al. (2021)) depicts only 87% on NAS-Bench-201. 2 BACKGROUND & RELATED WORK This section summarizes the state-of-the-art in accelerating multi-objective optimization HW-NAS. 2.1 ACCELERATING HARDWARE-AWARE NAS Given a target hardware platform and a DL task, Hardware-aware Neural Architecture Search (HW-NAS) (Benmeziane et al. (2021)) automates the design of efficient DL architectures. HWNAS is a multi-objective optimization problem where different and contradictory objectives, such as accuracy, latency, energy consumption, memory occupancy, and chip area, have to be optimized. HW-NAS has three main components: (1) the search space ,(2) the evaluation method and (3) the search strategy The main time-consuming component in HW-NAS is the evaluation method. Several state-of-the-art works (White et al. (2021)) have been proposed to alleviate this problem. Predictor-based methods (Ning et al. (2020); Lomurno et al. (2021)) are the most popular strategies where machine learning models are used to predict the accuracy or latency from the architecture features (e.g. number of convolutions, widening factor, etc.) or its representation using Graph Neural Networks (GNN) (Ning et al. (2020)) and Recurrent Neural Networks (RNN) (Lomurno et al. (2021)). However, these methods are not flexible to different search spaces as they require training a sampled dataset and then training the predictor. Weight-sharing approaches (Chu et al. (2021); Chen et al. (2021); Zhao et al. (2021); Guo et al. (2020)), on the other hand, define the search space as a supernetwork. In each layer, the supernetwork combines the results of possible operations. A sequence of operations from the input to the output is called a sub-network and constitutes a possible architecture. Training the supernetwork consists of training several paths at once. The input is forwarded through a series of parallel operations whose outputs are summed after each layer. There are two main issues when training a supernetwork: 1. The order of the sampled sub-networks matters: Assume we have two sub-networks A and B. Both A and B start with the same operation op1 in layer 1. During the first training iteration, A is sampled and op1 weights are adjusted. The second iteration samples B and adjusts op1 weights again. If we want to evaluate A, we would use the new adjusted weights of op1 which degrades the estimation. 2. Unfair Bias: Sub-networks with an initial better task-specific performance are more likely to be sampled next and maintain a higher coefficient in one-stage supernetwork. Fairnas (Chu et al. (2021)) defines strict fairness constraints that ensure that each operation’s weights are updated the same amount of times at each stage. 2.2 MULTI-OBJECTIVE OPTIMIZATION IN HW-NAS Optimizing conflicting objectives simultaneously requires the definition of a decision metric. In multi-objective optimization Batista et al. (2011), this metric is the dominance criteria. In a twoobjectives minimization problem, dominance is defined as: If s1 and s2 denote two solutions, s1 dominates s2 (s1 ≻ s2) if and only if ∀i fi(s1) ≤ fi(s2) AND ∃j fj(s1) < fj(s2). 
fi and fj are conflicting objective functions such as latency and accuracy. Using the dominance, there is no single solution that dominates all the others. We instead build the Pareto front; the set of all dominant solutions. The Pareto front approximation is evaluated using the hypervolume metric. The hypervolume measures the area dominated by a Pareto front approximation P and a reference point. The reference point is defined as an architecture with a high latency and low accuracy (furthest from the optimal points). The maximization of hypervolume leads to a high-qualified and diverse Pareto front approximation set. In HW-NAS, computing the hardware efficiency is expensive due to the time-consuming deployment and measurements on the hardware. Using multiple performance estimators is thus popular Hu et al. (2019); Elsken et al. (2019); Lu et al. (2020); Huang & Chu (2021). Current multi-objective HW-NAS approaches focus on optimizing the search algorithm at the expense of poor performance estimators. However, using a performance estimator per objective is not optimal Benmeziane et al. (2022b). In this paper, we present an original weight-sharing technique that directly predicts a multi-objective metric, called Pareto ranking. 3 METHODS The core motivation for a novel training methodology is to achieve an efficient sub-networks evaluation for HW-NAS. The proposed training methodology must preserve the Pareto ranking between different sub-networks while reducing the overall search time. 3.1 PARETO RANKING In this section, we define the Pareto ranking metric used to train and evaluate the supernetwork. Pareto Ranking Solving the multi-objective optimization problem on a set of sub-networks results in a Pareto front. This set of architectures in this front is denoted as F1, i.e., all the architectures have a rank of 1. We achieve the lower ranks by successfully solving the problem on the set of subnetworks pruned from the previous solutions. The lowest rank is assigned to the sub-networks that do not dominate any sub-network. We formally define the Pareto ranking in equation 1, where S is the entire supernetwork, Fk′ is a set of sub-networks ranked k′, and ≻ is the dominance operation. Using this ranking, multiple architectures may have the same rank. This happens when none of them can dominate the others. a is ranked k ⇐⇒ ∀â ∈ S − ⋃ si∈Fk′∧k′<k , â ≻ a (1) Pareto Ranking Correlation. We evaluate the quality of an estimator using ranking correlations such as Kendall’s tau-b Correlation or Spearman Correlation. Kendall’s tau-b determines whether there is a monotonic relationship between two variables and is suitable when variables contain many tied ranks Benmeziane et al. (2021), which is our case. In the rest of the paper, we compute Kendall’s Tau-b correlation between the ground truth ranks (i.e. the Pareto ranks obtained from independently training the sub-networks), and the Pareto ranks obtained by evaluating each architecture with the supernetwork shared weights. 3.2 PARETO RANK-PRESERVING TRAINING Our training methodology aims at preserving the Pareto ranking obtained by the weight-sharing evaluation. Figure 3 shows a representation of the supernetwork definition and the different parameters we aim to learn. A sub-network is a path from the input to the output. All extracted sub-networks are of the same depth. 
We train the supernetwork with two goals: 1) enhance the task-specific loss function by adjusting W , the task-specific weights of the original model associated with the neural network operations such as the kernels in convolution, and 2) improve the Pareto ranking loss between its different paths by adjusting α, the weights associated with the operation selection. α measures which operation is critical and which one is selected. Algorithm 1 and figure 1 summarize the training procedure. • Step 1: Train with Strict Fairness We train our supernetwork using FairNAS (Chu et al. (2021)) strict fairness constraint. This step adjusts the weights of all the sub-networks W and gives a good starting point for the Pareto ranking training. Additionally, the accuracy estimation on the task-specific loss at this point is well estimated. We use these estimations to compute the true Pareto ranks in case no accuracy was provided by the benchmark. • Step 2: Pareto ranking training For each iteration, we apply: - Training to solve the task: A mini-batch is sampled from the training set, and a subnetwork is chosen according to each operation’s highest α. The operation’s weights are updated using the task-specific loss, e.g., cross-entropy loss for image classification. - Pareto rank training: In this phase, we purposefully bias the training towards better Pareto-ranked architectures using the α parameters. α parameters are trained using the loss function provided in equation 2. During the forward pass, we Pareto rank the sampled sub-networks. We compute the number of times an operation opi appears in layer lj on N top-ranked sub-networks, denoted as g(opi, lj). N is a hyperparameter defined before training. We denote by ĝ(opi, lj), the ground truth. Equation 2 computes the hinge loss over all layers in the sampled sub-networks and compares the number of times the operation with the highest α appears in the predicted Pareto front and the ground truth one. L = L∑ j=1 ∑ i,g(opi,lj)>ĝ(opi,lj),i̸=argmax(α) max[0,m− g(argmax(α), lj)− ĝ(opi, lj)] (2) We adjust each operation’s α parameters and compute each sampled sub-network’s latency using a lookup table. We define the predicted Pareto score according to Ps = ∑ op∈a αop, i.e., the sum of selected operations’ alpha values. Next, we compute the listwise ranking loss defined by the cross entropy between the ranking scores and the Pareto ranks (ground truth). • Step 3: Pruning by Pareto Ranking Sub-networks We drop sub-networks furthest from the optimal Pareto front to accelerate the training. First, we select the sub-networks belonging to the two first Pareto ranks. Then, based on the hypervolume improvement (HVI) (Emmerich et al. (2011)), we select n sub-networks. The operations never used by any sub-network in this selection are removed for each layer. Equation 3 illustrates how the hypervolume improvement is computed in this context. oij denotes operation i in layer j. HV denotes the hypervolume function and {Soij} denotes the set of sampled sub-networks using operation i in layer j. HV I(oij , P ) = HV (P ⋃ {Soij})−HV (P − {Soij}) (3) Finally, going over all the layers to select the operations with the highest α would suffice to find the most efficient DNN within the search space. Figure 4 shows the training results. We compare our methodology to FairNAS (Chu et al. (2021)) strict fairness training. During training, the Pareto ranking correlation increases with the quality of the estimations. 
When using our training methodology without considering the alpha parameters, the ranking correlation saturates at 0.7. FairNAS achieves the same behaviour with reduced variance among the different training runs. However, if we consider the alpha parameters, the selection is more efficient and the architectures’ rankings are well represented with 0.94. Algorithm 1 Supernetwork Training Algorithm Input: Search space S, number of epochs for fairness training Nf , number of epochs for Pareto training Np, Supernetwork parameters (W,α), training dataloader D, task-specific loss Loss, Pareto raking loss LossPR, number of sampled sub-network n procedure TRAIN Initialize W and α for each operation in Supernetwork Strict fairness training for Nf epochs for i=1 to Np do for data, labels in D do Build model with argmax(α) following step 2 Reset gradients to zero for all W parameters Calculate gradients based on Loss, data, labels and update W by gradients end for Sample n sub-networks, models Compute: Pareto rank of models, LossPR between scores and Pareto rank. Update α by gradients end for end procedure 4 EXPERIMENTS In this section, we evaluate our training methodology on three search spaces: NAS-Bench201 (Dong & Yang (2020)), DARTS (Liu et al. (2019)) and ProxylessNAS Search space (Cai et al. (2019)). 4.1 SETUP Search Spaces: Several search spaces have been used to evaluate our method’s performance. NASBench-201 (Dong & Yang (2020)) is a tabular benchmark that contains 15k convolutional neural networks. Each architecture is trained on CIFAR-10, CIFAR-100 and ImageNet-16-120 (Chrabaszcz et al. (2017)). We use the latency values obtained from HW-NAS-Bench (Li et al. (2021)). DARTS Liu et al. (2019) is a supernetwork benchmark that contains 1018 architectures. Each architecture is trained on CIFAR-10 and is transferable to ImageNet. We also validate our methodology on ImageNet using ProxylessNAS search space Cai et al. (2019) whose size goes to 619. All training hyperparameters are listed in Table 5 in Appendix F. 4.2 SEARCH RESULTS In these experiments, we consider two objectives: accuracy and latency (inference time). The latency is either given by HW-NAS-Bench (Li et al. (2021)), or computed using a lookup table as explained in section 3. Figure 5 shows the Pareto front approximations obtained using different methods on NAS-Bench201 for CIFAR-10 and ProxylessNAS Search space for ImageNet. We obtain a 10% hypervolume increase on NAS-Bench-201 and a 43% hypervolume increase on ImageNet compared to the best baselines: One-Shot-NAS-GCN and FairNAS, respectively. 4.2.1 SEARCH ON NAS-BENCH-201 Table 1 shows the results of our methodology on NAS-Bench-201 compared to state-of-the-art methods. PRP-NAS-BL, PRP-NAS-BA and PRP-NAS-O are three sampled architectures from our final Pareto front. BL stands for ”Best Latency”. BA stands for ”Best Accuracy”, and O stands for ”Optimal”. Notably, our architecture obtains highly competitive results. The optimal architecture, PRPNAS-O, outperforms current state-of-the-art methods in accuracy and latency. Including hardware awareness during the search allows us to obtain flexible results according to the targeted hardware platform. Besides, multiple training runs show the stability of our method compared to other baselines. The acceleration in the search cost is mainly due to applying the pruning while training. This cost can vary according to the used GPU. We used GPU V100 to train the supernetwork. Results on other targeted platforms, can be found in Appendix B. 
4.2.2 SEARCH ON IMAGENET Similar conclusions can be drawn when searching on ImageNet. Table 2 summarizes the results. Our optimal model surpasses FairNAS-A (+1.9%) and One-Shot-NAS-GCN (+1.7%) while running faster. Training on ImageNet is time-consuming due to the difference in image resolution, which explains the increase in the search cost. We still surpass most of the methods in terms of search time. We compare two ProxylessNAS architectures; ProxylessNAS-R is specific to mobile inference. When using data augmentation and architecture tricks, namely squeeze-and-excitation and AutoAugment, in the optimal architecture, we achieve 78.6% accuracy on ImageNet. However, these tricks can degrade the latency considerably. On FPGA ZCU102, the latency increases from 4.63ms to 7.9ms. 4.3 RANKING QUALITY The ranking preservation measures the quality of the evaluation component in NAS. In HW-NAS, we argue that this measure should consider the Pareto ranking instead of the independent ranks of each objective. We compare different estimators used in HW-NAS using Kendall’s tau correlation between the predicted Pareto ranks and the Pareto ranks obtained from independently training the architectures. The latter are extracted from NAS-Bench-201. Figure 6 shows the correlation results. In general, it is more complex to train a supernetwork to respect the Pareto ranks because of the impact of the sub-networks on each other, i.e., the outputs of each layer are summed together. The increase in Kendall’s tau correlation of previous weight-sharing methodologies is due to the improvement in the accuracy estimation provided by the supernetwork. Predictor-based evaluators use learning-to-rank theory and train their predictors only to predict the ranking. Methods such as GATES (Ning et al. (2020)) or BRP-NAS (Dudziak et al. (2020)) train many independent predictors, one for each objective. HW-PR-NAS (Benmeziane et al. (2022a)) trains a single predictor to fit the Pareto ranks. However, their methodology is not flexible for supernetwork training. 4.3.1 ANALYSIS OF α PARAMETER Figure 7 illustrates the evolution of the alpha parameters for each operation in layers 1 and 2 during training. It clearly shows how alpha favors one operation over the others during training. At the end of the training, we take the operations with the highest alpha; these are the operations constructing the architectures in the final Pareto front. If one layer has a clear candidate, such as layer 1 with conv3x3 exceeding 60%, this operation is chosen. If a layer contains multiple operations with similar alpha values, we construct all the paths of that layer. 4.4 BATTERY USAGE PRESERVATION The amount of energy consumed by each model can be different. It is mainly attributed to the number of multiply-adds computed. We take supernetwork usage to another level by adequately scheduling the run of different sub-networks according to the system’s battery life. In this experiment, the training is done with two objectives: accuracy and energy consumption. Once the training is done, only the Pareto front solutions are kept in the supernetwork, thanks to the pruning. We further select, from the final Pareto front, s architectures. In this experiment s = 5. The total size of the supernetwork is then reduced to 20.5MB, comparable to MobileNet-V3 Large with 21.11MB. We deploy the model on a smartphone application that is always on. The application repeats the inference classification of one image.
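A minimal sketch of such an energy-aware sub-network switcher is given below; the five-hour interval matches the experiment described next, while the sub-network handles and the loading/inference callbacks are placeholders, not the deployed application's code.

```python
import time

# Pareto-front sub-networks kept after pruning, ordered from most accurate
# (highest energy) to most energy-efficient (lowest accuracy).
SUBNETS = ["pf_rank0", "pf_rank1", "pf_rank2", "pf_rank3", "pf_rank4"]
SWITCH_INTERVAL_S = 5 * 3600       # step down the Pareto front every 5 hours

def run_application(total_hours=24, load_subnet=print, infer=lambda m: None):
    start, idx = time.time(), 0
    load_subnet(SUBNETS[idx])      # start with the most accurate sub-network
    while time.time() - start < total_hours * 3600:
        infer(SUBNETS[idx])        # repeated image-classification inference
        target = min(int((time.time() - start) // SWITCH_INTERVAL_S),
                     len(SUBNETS) - 1)
        if target != idx:          # move to a more frugal sub-network
            idx = target
            load_subnet(SUBNETS[idx])
        time.sleep(1.0)            # pacing; a real app would be event-driven
```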
The application initially uses the sub-network with the highest accuracy. We switch to a less accurate model every five hours to preserve energy. Figure 8 shows the results of the system’s battery life while running the application for 24 hours. We use three scenarios: 1. Worst Battery Usage: From the Pareto front, we select the most accurate architecture. This is the only architecture the application runs and is the only one loaded in memory. 2. Best Battery Usage: Similar to the worst battery usage, but we select the most energy-efficient architecture. 3. Adequate Battery Usage: We load the complete supernetwork and switch the sub-network every 5 hours. Using this strategy helps save up to 34% of the battery life while using highly accurate models most of the time. The average accuracy of the five selected sub-networks is 75.2%. 5 CONCLUSION This work analyzes hardware-aware weight-sharing NAS, where the multi-objective context requires the estimator to accurately preserve the Pareto rankings between sub-networks. Contrary to standard baselines that independently estimate each objective, we propose a supernetwork training methodology able to preserve the Pareto rankings during the search. Using our methodology, we achieve 97% near-Pareto-front approximation on the NAS-Bench-201, DARTS, and ProxylessNAS search spaces. We find a 77.2% accuracy model on ImageNet while only training the supernetwork for 3.8 days. Using the supernetwork capabilities, we saved up to 34% of the battery capacity with an average accuracy of 75.2%. A RESULTS ON IMAGENET B ADDITIONAL RESULTS Table 3 shows the results of our training methodology on FPGA ZCU102 and Raspberry Pi 3. Our methodology consistently outperforms state-of-the-art methods on different hardware platforms. C NUMBER OF SAMPLED SUB-NETWORKS Figure 9 shows the effect of increasing the number of sampled sub-networks on the search results. Generally, increasing the number of samples increases the hypervolume. The hypervolume is used to evaluate Pareto front approximations. It computes the area contained by the Pareto front points found by the search and a reference point. Our reference point is set as a pre-sampled architecture from the supernetwork, with a low accuracy and high latency. When the number of sampled sub-networks is too high, each layer’s output is the sum of multiple operations that may or may not be within the final Pareto front, which induces a bias when adjusting the alpha parameters. D PRUNING ALGORITHM We validate our pruning algorithm by comparing the results of the search with and without it in table 4. Without the pruning, the search time more than doubles, from 3.8 to 8.1 GPU-days, while the hypervolume improves only slightly. The most accurate final architecture is in both Pareto fronts, obtained with and without pruning. The optimal architecture found with pruning is better in terms of accuracy and latency. The latency is computed on a Jetson Nano edge GPU. E LATENCY ESTIMATION In this section, we compare different latency estimators to validate the use of a LUT during the search. We randomly extract 1000 architectures from NAS-Bench-201 and 1000 from DARTS. We measure the exact latency on a Jetson Nano for each architecture. We train two predictor-based models, namely XGBoost and an MLP with 3 layers. The training dataset contains 700 architectures and 300 were used for testing.
On NAS-Bench-201, the architectures have a sequential execution, which made the LUT the most accurate estimator for ranking the architectures by latency. On DARTS, XGBoost was the most suitable method, but the LUT was not far behind, with a correlation of 0.915 against 0.942. Computing the LUT in our algorithm is simple: using a hook during the forward function of a PyTorch model is sufficient and much more direct than calling a surrogate model (see the sketch below). We thus use this strategy to estimate the latency in our method. F TRAINING HYPERPARAMETERS The training hyperparameters are listed in Table 5. It takes 2, 3.8 and 3.8 GPU-days for the NAS-Bench-201, DARTS and ProxylessNAS search spaces, respectively, to fully train each supernetwork. Our training is 5x faster than previous works due to the pruning strategy. To be consistent with previous works, we do not employ data augmentation tricks such as cutout or mixup. We also do not employ any special operations such as squeeze-and-excitation. All these methods can further improve the scores on the test set.
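Here is a hedged sketch of building such a per-operation latency LUT with PyTorch forward hooks. The timing protocol (warm-up runs, averaging) and the leaf-module heuristic are illustrative choices, not necessarily the authors' exact setup.

```python
import time
import torch
import torch.nn as nn

def build_latency_lut(model: nn.Module, sample_input: torch.Tensor,
                      warmup: int = 5, reps: int = 20) -> dict:
    lut, handles = {}, []

    def make_hooks(name):
        def pre_hook(module, inputs):
            module._t0 = time.perf_counter()
        def post_hook(module, inputs, output):
            lut.setdefault(name, []).append(time.perf_counter() - module._t0)
        return pre_hook, post_hook

    for name, module in model.named_modules():
        if len(list(module.children())) == 0:        # time leaf ops only
            pre, post = make_hooks(name)
            handles.append(module.register_forward_pre_hook(pre))
            handles.append(module.register_forward_hook(post))

    with torch.no_grad():
        for _ in range(warmup + reps):
            model(sample_input)
    for h in handles:
        h.remove()
    # Average the timed repetitions, discarding the warm-up runs.
    return {k: sum(v[warmup:]) / reps for k, v in lut.items()}

# An architecture's latency is then estimated as the sum over its layers:
#   est = sum(lut[name] for name in chosen_ops)
```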
1. What is the focus of the paper regarding hardware-aware architecture search? 2. What are the strengths and weaknesses of the proposed supernetwork training strategy? 3. Do you have any concerns or questions regarding the proposed method, especially its description and experiment settings? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a supernetwork training strategy for preserving the Pareto ranking between different subnetworks for hardware-aware architecture search. To maintain a supernetwork with higher ranking correlation, the authors propose to prune the operations based on the optimal Pareto front during the training of the supernetwork. Moreover, the authors define a new metric for better evaluating the architecture in a multi-objective context. Experimental results on three benchmarks demonstrate the effectiveness of the algorithm in some cases. However, the proposed method is not clearly described and the experiments can be further improved. My detailed comments are as follows. Strengths And Weaknesses Positive points: The authors introduce the Pareto ranking as a novel metric for multi-objective architecture search. The authors propose a supernetwork training strategy for preserving the Pareto ranking between different subnetworks. The authors propose to prune the operations based on the optimal Pareto front while training the supernetwork. Negative points: In the Pareto rank-preserving training section, the details of the proposed method are missing. In other words, after reading this paper, I cannot re-implement the proposed method. It would be better for the authors to make it more clear and detailed. a) The description of the listwise ranking loss is not clear. More explanations are needed. b) Detailed descriptions for obtaining the Pareto ranks (ground truth) for different subnetworks are needed. c) The details of the hypervolume improvement should be provided since it’s an important metric for pruning the operations. Besides, the motivation for choosing HVI is unclear. Moreover, the number of sampled networks n seems important for the proposed method. How to choose a good n? Estimating architecture latency using a lookup table has been proposed and investigated in FBNet. In my opinion, this is not a good way to evaluate the latency of candidate architectures. In practice, the inference framework (e.g., TensorRT) would perform layer/operation fusion. For instance, TensorRT will fuse conv and batch norm layers into a single layer. In this case, the overall latency of the architecture is not equal to the sum of that of each layer. The reality is often more complicated than the above case. Thus, I think it would be difficult to accurately estimate the latency based on the lookup table. In Eq. 1, the authors state F_{k'} is a set of sub-networks ranked k'. I have no idea what F_{k'} is, even after carefully reading the paper. More explanations should be provided. The ablation studies on “the number of sampled sub-networks” and “the number of epochs for Pareto training” are missing. It would be better for the authors to provide more ablation studies. Some other multi-objective search algorithms [1][2] should also be compared in Tables 2 and 3. The details of the experimental setting in Figures 3 and 5 are unclear. More explanations are needed. The ablation study on the proposed pruning strategy is missing. More experiments are expected. Minor issues: In Section 3.2, “Algorithm 1 and figure 1 summarizes summarizes the training procedure” should be “Algorithm 1 and figure 1 summarize the training procedure”. Reference [1] NSGANetV2: Evolutionary multi-objective surrogate-assisted neural architecture search. ECCV 2020. [2] PONAS: Progressive one-shot neural architecture search for very efficient deployment.
Clarity, Quality, Novelty And Reproducibility Many important details of the proposed method are missing; I cannot reproduce the proposed method based on the given manuscript.
ICLR
Title Creative Sketch Generation Abstract Sketching or doodling is a popular creative activity that people engage in. However, most existing work in automatic sketch understanding or generation has focused on sketches that are quite mundane. In this work, we introduce two datasets of creative sketches – Creative Birds and Creative Creatures – containing 10k sketches each along with part annotations. We propose DoodlerGAN – a part-based Generative Adversarial Network (GAN) – to generate unseen compositions of novel part appearances. Quantitative evaluations as well as human studies demonstrate that sketches generated by our approach are more creative and of higher quality than existing approaches. In fact, in Creative Birds, subjects prefer sketches generated by DoodlerGAN over those drawn by humans! 1 INTRODUCTION The true sign of intelligence is not knowledge but imagination. – Albert Einstein From serving as a communication tool since prehistoric times to its growing prevalence with ubiquitous touch-screen devices – sketches are an indispensable visual modality. Sketching is often used during brainstorming to help the creative process, and is a popular creative activity in itself. Sketch-related AI so far has primarily focused on mimicking the human ability to perceive rich visual information from simple line drawings (Yu et al., 2015; Li et al., 2018) and to generate minimal depictions that capture the salient aspects of our visual world (Ha & Eck, 2018; Isola et al., 2017). Most existing datasets contain sketches drawn by humans to realistically mimic common objects (Eitz et al., 2012; Sangkloy et al., 2016; Jongejan et al., 2016; Wang et al., 2019). In this work we focus on creative sketches. AI systems that can generate and interpret creative sketches can inspire, enhance or augment the human creative process or final artifact. Concrete scenarios include automatically generating an initial sketch that a user can build on, proposing the next set of strokes or completions based on partial sketches drawn by a user, presenting the user with possible interpretations of the sketch that may inspire further ideas, etc. ∗The work was done when the first author interned at Facebook AI Research. AI for creative sketches is challenging. They are diverse and complex. They are unusual depictions of visual concepts while simultaneously being recognizable. They have subjective interpretations like aesthetics and style, and are semantically rich – often conveying a story or emotions. To facilitate progress in AI-assisted creative sketching, we collect two datasets – Creative Birds and Creative Creatures (Figure 1) – containing 10k creative sketches of birds and generic creatures respectively, along with part annotations (Figure 2 right columns). To engage subjects in a creative exercise during data collection, we take inspiration from a process doodling artists often follow. We setup a sketching interface where subjects are asked to draw an eye arbitrarily around a random initial stroke generated by the interface. Subjects are then asked to imagine a bird or generic creature that incorporates the eye and initial stroke, and draw it one part at a time. Figure 2 shows example sketches from our datasets. Notice the larger diversity and creativity of birds in our dataset than those from existing datasets with more canonical and mundane birds. We focus on creative sketch generation. Generating novel artifacts is key to creativity. 
To this end we propose DoodlerGAN – a part-based Generative Adversarial Network (GAN) that generates novel part appearances and composes them in previously unseen configurations. During inference, the model automatically determines the appropriate order of parts to generate. This makes the model well suited for human-in-the-loop interactive interfaces where it can make suggestions based on user-drawn partial sketches. Quantitative evaluation and human studies show that our approach generates more creative and higher quality sketches than existing approaches. In fact, subjects prefer sketches generated by DoodlerGAN over human sketches from the Creative Birds dataset! Our datasets, code, and a web demo are publicly available 1. 2 RELATED WORK Sketches have been studied extensively as a visual modality that is expressive yet minimal. The sparsity of sketches compared to natural images has inspired novel modelling techniques. We discuss existing sketch datasets and sketch generation approaches. Other related work includes sketch recognition (Yu et al., 2015; Li et al., 2018), sketch-based image retrieval (Yu et al., 2016; Liu et al., 2017; Ribeiro et al., 2020) and generation (Gao et al., 2020; Lu et al., 2018; Park et al., 2019). An overview of deep learning approaches for sketches can be found in this survey (Xu et al., 2020). Sketch datasets. Existing sketch datasets such as TU-Berlin (Eitz et al., 2012), Sketchy (Sangkloy et al., 2016), ImageNet-Sketch (Wang et al., 2019) and QuickDraw (Jongejan et al., 2016) are typically focused on realistic and canonical depictions of everyday objects. For instance, sketches in the Sketchy dataset (Sangkloy et al., 2016) were drawn by humans mimicking a natural image. Sketches in the QuickDraw dataset (Jongejan et al., 2016) were collected in a Pictionary-like game setting – they were drawn under 20 seconds to be easily recognized as a target object category. This is in stark contrast with how people engage in doodling as a creative activity, where they take their time, engage their imagination, and draw previously unseen depictions of visual concepts. These depictions may be quite unrealistic – including exaggerations or combinations of multiple categories. Our datasets contain such creative sketches of birds and generic creatures. See Figures 1 and 2. Our data collection protocol was explicitly designed to engage users in a creative process. Also note that while not the focus of this paper, our datasets are a valuable resource for sketch segmentation approaches. See Section A in the Appendix for further discussion. 1 songweige.github.io/projects/creative_sketech_generation/home.html Sketch generation. Earlier studies generated a sketch from an image via an image-to-image translation approach (Isola et al., 2017; Song et al., 2018; Li et al., 2019b). Subsequently, fueled by the release of large-scale free-hand sketch datasets, methods that generate human-like sketches from scratch have interested many researchers. One of the early works in this direction was SketchRNN (Ha & Eck, 2018) – a sequence-to-sequence Variational Autoencoder (VAE) that models the temporal dependence among strokes in a sketch. Later approaches (Chen et al., 2017; Cao et al., 2019) incorporated a convolutional encoder to capture the spatial layout of sketches. Reinforcement learning has also been studied to model the sequential decision making in sketch generation (Zhou et al., 2018; Huang et al., 2019; Mellor et al., 2019).
In contrast with generating entire sketches, sketch completion approaches generate missing parts given a partial sketch. SketchGAN (Liu et al., 2019) adopted a conditional GAN model as the backbone to generate the missing part. Inspired by large-scale language pretraining approaches (Devlin et al., 2019), SketchBERT (Lin et al., 2020) learns representations that capture the “sketch gestalt”. We propose a part-based approach, DoodlerGAN, for sketch generation. Compared to previous approaches (Ha & Eck, 2018; Cao et al., 2019) that learn to mimic how humans draw, our goal is to create sketches that were not seen in human drawings. The idea of generating different components as parts of the overall sketch can be traced back to compositional models (Chen et al., 2006; Xu et al., 2008). We explore these ideas in the context of deep generative models for sketches. Also relevant are approaches that exploit spatial structure when generating natural images – foreground vs. background (Yang et al., 2017) or objects and their spatial relationships (Johnson et al., 2018). A discussion on creative tools based on sketch generation can be found in Section B of the Appendix. 3 CREATIVE SKETCHES DATASETS To facilitate progress at the intersection of machine learning and artificial creativity in the context of sketches, we collected two datasets: Creative Birds and Creative Creatures. The first is focused on birds, and the second is more diverse and challenging, containing a variety of creatures. 3.1 DATASET COLLECTION Both datasets were collected on Amazon Mechanical Turk using a sketching web interface 2. In order to engage subjects in a creative exercise, and encourage diversity across sketches, the interface generates a random initial stroke formed by connecting K keypoints on the canvas via Bezier curves (a code sketch of this stroke generation is given below). These points were picked via heuristics such that the strokes are smooth with similar lengths and have limited self-occlusion (see Figure 9 in the Appendix). Next, inspired by a process doodling artists (e.g., Michal Levy) use, subjects were told to add an eye to the canvas wherever they like. They were then asked to step back, and visualize how the initial stroke and the eye can be incorporated in a creative sketch of a bird or arbitrary creature (depending on the dataset). Subjects were then asked to draw one part of the bird or creature at a time, indicating via a drop-down menu which part they were drawing. Each part can be drawn using multiple strokes. For Creative Birds, the options include 7 common parts of birds (head, body, beak, tail, mouth, legs, wings). For Creative Creatures, the 16 parts (e.g., paws, horn, fins, wings) cover terrestrial, aquatic, and aerial creatures. In our pilot studies we found that providing subjects this structure of drawing one part at a time increased the quality of sketches, and giving subjects the option to pick parts themselves made the exercise more natural and increased the diversity of sketches. As a by-product, each stroke has a corresponding part annotation (see Figure 2), which can be a rich resource for sketch segmentation and part-based sketch recognition or generation models. Subjects are required to add at least five parts to the sketch (in addition to the eye and initial stroke) before they can submit the sketch. Once done with adding parts, subjects were given the option to add additional details to the sketch. Finally, they were asked to add a free-form natural language phrase to title their sketch to make it more expressive.
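As an aside, below is a minimal NumPy sketch of generating a random initial stroke by joining K keypoints with quadratic Bezier curves, as the interface does. The keypoint heuristics (margins, segment lengths, control-point jitter) are illustrative assumptions, not the authors' exact rules.

```python
import numpy as np

def random_initial_stroke(K=3, canvas=256, steps=30, rng=None):
    rng = rng or np.random.default_rng()
    # K keypoints away from the canvas border, with similar segment lengths.
    pts = [rng.uniform(0.2, 0.8, size=2) * canvas]
    for _ in range(K - 1):
        step = rng.uniform(0.15, 0.25) * canvas            # similar lengths
        angle = rng.uniform(0, 2 * np.pi)
        nxt = pts[-1] + step * np.array([np.cos(angle), np.sin(angle)])
        pts.append(np.clip(nxt, 0.1 * canvas, 0.9 * canvas))
    # Quadratic Bezier between consecutive keypoints, sampled densely to
    # yield a smooth polyline.
    stroke = []
    for p0, p2 in zip(pts[:-1], pts[1:]):
        p1 = (p0 + p2) / 2 + rng.normal(0, 0.05 * canvas, size=2)  # control pt
        t = np.linspace(0, 1, steps)[:, None]
        stroke.append((1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2)
    return np.concatenate(stroke)            # (N, 2) points of the stroke
```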
In this work we do not use the details and phrases, but they may be useful in the future for more detailed sketch generation and automatic creative sketch interpretation. Except for Figure 1, all example sketches shown in the paper are without the details unless noted otherwise. In addition to contributing to the creativity and diversity of sketches, the initial random strokes also add constraints, making sketch creation more challenging. Subjects have more constraints when asked to draw only birds in Creative Birds, so we give them fewer starting constraints (K = 3 keypoints in the initial strokes), but for Creative Creatures subjects have more freedom in the animals they draw, so we provide more constraints (K = 6). Example initial strokes for both datasets and some insights from pilot data collection can be found in Figure 9 and Section C in the Appendix. 2 Our interface can be seen at https://streamable.com/jt4sw1 3.2 DATA ANALYSIS We collected 10k sketches for both datasets. Filtering out sketches from workers who did not follow the instructions well, we have 8067 and 9097 sketches in Creative Birds and Creative Creatures respectively. See Figure 2 for random examples of sketches from both datasets. For comparison, we also show birds from other existing datasets. Notice that the sketches in our datasets are more creative, and sketches from Creative Creatures are more diverse and complex than those in Creative Birds. We conducted a human study comparing 100 sketches from Creative Birds to 100 from QuickDraw across 5 subjects each. Our sketches were rated as more creative 67% of the time. More example sketches from our datasets can be found in Figures 12 and 13 in the Appendix. In Figure 14 in the Appendix we also show examples where different sketches incorporate similar initial strokes, suggesting room for creativity and variation that a generative model could learn. Sketches of individual parts are shown in Figures 10 and 11 in the Appendix. Analysis of the part annotations, including the order in which parts tend to be drawn, is provided in Section D in the Appendix. 4 DOODLERGAN: A PART-BASED SKETCH GENERATION MODEL Objects are typically a configuration of parts (e.g., animals have two or four legs, a mouth below two eyes). Moreover, humans often sketch by drawing one part at a time. We approach creative sketch generation by generating novel appearances of parts and composing them in previously unseen configurations. Another benefit of parts is that creative sketches exhibit large diversity in appearance, but this complexity is significantly reduced if the sketches are decomposed into individual parts. Our approach, DoodlerGAN, is a part-based GAN that sequentially generates one part at a time, while ensuring at each step that the appearance of the parts and partial sketches comes from the corresponding distributions observed in human sketches. Humans do not draw parts in the same order across sketches, but patterns exist. To mimic this, and to adapt well to a (future) human-in-the-loop setting, DoodlerGAN automatically determines the order of parts. Concretely, DoodlerGAN contains two modules: the part generator and the part selector, as shown in Figure 3. Given a part-based representation of a partial sketch, the part selector predicts which part category to draw next. Given a part-based representation of a partial sketch and a part category, the part generator generates a raster image of the part (which represents both the appearance and location of the part).
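Putting the two modules together, the inference procedure can be sketched as follows. The module interfaces (selector and generator signatures, channel layout, the stop class) are assumptions for illustration, not the authors' exact code.

```python
import torch

@torch.no_grad()
def generate_sketch(initial_canvas, selector, generators, parts,
                    min_parts=5, z_dim=128):
    """initial_canvas: (1, len(parts)+1, 64, 64) part-channel raster.
    selector: canvas -> logits over parts plus a final "stop" class.
    generators: dict part_name -> conditional part generator."""
    canvas, drawn = initial_canvas.clone(), set()
    for _ in range(len(parts)):
        logits = selector(canvas).squeeze(0)
        for i, p in enumerate(parts):            # a part is drawn at most once
            if p in drawn:
                logits[i] = float("-inf")
        if len(drawn) < min_parts:               # no stopping before 5 parts
            logits[-1] = float("-inf")
        choice = int(logits.argmax())
        if choice == len(parts):                 # selector chose to stop
            break
        z = torch.randn(1, z_dim)
        part_img = generators[parts[choice]](canvas, z)   # (1, 1, 64, 64)
        # Add the part to its own channel and to the full-sketch channel.
        canvas[:, choice] = torch.max(canvas[:, choice], part_img[:, 0])
        canvas[:, -1] = torch.max(canvas[:, -1], part_img[:, 0])
        drawn.add(parts[choice])
    return canvas
```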
4.1 ARCHITECTURE We represent sketches as raster images (i.e., grids of pixels) as opposed to vector representations of strokes. This makes detailed spatial information readily available, which helps the model better assess where a part connects with the rest of the sketch. A vector representation captures the low-level sequence, which may not be relevant to the overall quality of the generated sketch (e.g., a circular head will look the same whether it is drawn clockwise or counter-clockwise). Our part-based approach models the sequence at an appropriate level of abstraction (parts). The part generator is a conditional GAN based on the StyleGAN2 architecture (Karras et al., 2019; 2020). We generate sketches at a 64×64 resolution. We use a 5-layer StyleGAN2 generator with [512, 256, 128, 64, 32] output channels, starting from a 4×4×64 constant feature map. To encode the input partial sketch, we use a 5-layer CNN with [16, 32, 64, 128, 256] output channels. Each layer contains two convolutional layers with 3×3 kernels followed by a LeakyReLU activation with a negative slope of 0.2. Inspired by the design of U-Net (Ronneberger et al., 2015), we downsample the intermediate feature map after each encoder layer and concatenate it channel-wise with the corresponding layers in the generator. See dashed lines in Figure 3a. This gives the generator access to the hierarchical spatial structure in the input sketch. The input partial sketch is represented by stacking each part as a channel, along with an additional channel for the entire partial sketch (a code sketch of this representation follows this section). If a part has not been drawn, the corresponding channel is a zero image. We find a part-based representation to be crucial for predicting appropriate part locations. For instance, without the part channels, the generated legs were often not connected to the body. Following StyleGAN2, we borrow the discriminator architecture from (Karras et al., 2018). We give the discriminator access to the input partial sketch and the corresponding part channels, so that the discriminator is able to distinguish fake sketches from real ones not just based on the appearance of the part, but also based on whether the generated part is placed at an appropriate location relative to other parts (e.g., heads are often around the eyes). Specifically, similar to the generator, the input to the discriminator is an image with (number of parts + 1) channels. In Figure 3a, the difference between the real and fake data fed into the discriminator occurs in the red channel, where the fake data contains the generated part (as opposed to the real one), and in the last white channel, where the generated (vs. real) part is combined with the input partial sketch using a max operation. The generator and discriminator parameters are not shared across parts. In our experiments, a unified model across parts failed to capture the details in parts such as wings. We also experimented with fine-tuning the model end-to-end but generally observed inferior generation quality. For the part selector, we use the same architecture as the encoder but with a linear layer added at the end to produce logits for different parts. The part selector is also expected to decide when the sketch is complete and no further parts need to be generated. Therefore, the output dimension of the linear layer is set to (number of parts + 1). We convert the human sketches in our dataset to (partial sketch, next-part) pairs and use that to train the part selector in a supervised way.
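A minimal sketch of the part-channel input representation described above is given below. The part list (here including the initial stroke and eye as channels) and the helper name are illustrative assumptions.

```python
import torch

PARTS = ["initial", "eye", "head", "body", "beak", "tail",
         "mouth", "legs", "wings"]  # illustrative bird channel layout

def encode_partial_sketch(part_rasters: dict) -> torch.Tensor:
    """part_rasters: dict part_name -> (64, 64) raster of that part's strokes.
    Returns a (1, len(PARTS)+1, 64, 64) tensor: one channel per part (a zero
    image if the part is undrawn) plus one channel for the full partial
    sketch, obtained as the pixelwise max (union) of the part channels."""
    channels = [part_rasters.get(p, torch.zeros(64, 64)) for p in PARTS]
    channels.append(torch.stack(channels).max(dim=0).values)  # full sketch
    return torch.stack(channels).unsqueeze(0)
```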
4.2 TRAINING AND INFERENCE With strong conditional information, the discriminator in the part generator gets easily optimized and consequently provides zero gradient for the generator. As a result, the generator gets stuck in the early stages of training and only generates the zero image afterwards. We introduce a sparsity loss as an additional regularizer, which is the L2 norm between the sums of pixels in the generated and real parts, encouraging the model to generate parts of similar sizes as the real parts. This helps stabilize the training and helps the generator learn even when little signal is provided by the discriminator. During training, we also augment the training sketches by applying small affine transformations to the vector sketch images before converting them to raster images. During inference, we follow the same procedure as data collection: we first automatically sample an initial stroke from our interface (see Figure 9) and then use the part selector and part generators iteratively to complete the sketch. Conditional GANs have been shown to not be sensitive to noise, generating the same sample for different noise vectors (Zhu et al., 2017; Donahue et al., 2017). We encountered similar issues, especially for parts drawn at a later stage when more conditional information is available. This is expected; the more complete the partial sketch, the fewer ways there are to reasonably complete it. We found that our model is sensitive to minor perturbations of the input partial sketch. To increase the diversity of generations, we propose a simple trick we call conditioning perturbation. We apply random translations sampled from N(0, 2) to the input partial sketch to generate multiple parts and then translate the generated parts back to align with the original partial sketch (both the sparsity loss and this trick are sketched in code at the end of this section). For further details of training, data augmentation and inference see Section F in the Appendix. 5 RESULTS We quantitatively and qualitatively (via human studies) evaluate our approach compared to strong baselines and existing state-of-the-art approaches trained on our creative datasets: StyleGAN2 Unconditional. We train StyleGAN2 using the same hyperparameters and data augmentation settings used for DoodlerGAN, to generate the entire sketches in one step. To avoid mode collapse, we add the minibatch discriminator layer (Salimans et al., 2016) in its discriminator. This represents a state-of-the-art approach in image generation. StyleGAN2 Conditional. This approach is the same as the one described above, but we condition the one-step generation on the initial stroke (encoded using the same encoder as in DoodlerGAN). SketchRNN Unconditional. We use the SketchRNN model (Ha & Eck, 2018) trained on our datasets. This represents the state of the art in sketch generation. We optimized the architecture (encoder, decoder and latent space sizes, temperature γ) and used heuristic post-processing to eliminate obvious failure cases, to adapt this approach to our datasets as best we could. SketchRNN Conditional. This approach is SketchRNN described above, except during inference we fix the first few points and the pen states based on the random initial stroke. These are fed as input to continue the sequential generation of the sketch. Percentage-based. This approach is DoodlerGAN, except instead of using parts, we divide the sketch into 20% chunks (based on the order in which strokes were drawn) and use them as “parts”. A comparison to this approach allows us to demonstrate the effectiveness of semantic parts.
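Before turning to the results, here is a hedged sketch of the two Section 4.2 tricks: the sparsity regularizer and conditioning perturbation. Function names and tensor layouts are assumptions; torch.roll is used for brevity, whereas a padded shift would avoid wrap-around at the borders.

```python
import torch

def sparsity_loss(fake_part: torch.Tensor, real_part: torch.Tensor):
    """L2 between the pixel sums of generated and real parts, encouraging a
    similar "amount of ink". Inputs are (B, 1, 64, 64) raster batches."""
    return (fake_part.sum(dim=(1, 2, 3))
            - real_part.sum(dim=(1, 2, 3))).pow(2).mean()

def conditioning_perturbation(generator, partial_sketch, z, sigma=2.0):
    """Translate the conditioning sketch by roughly N(0, sigma^2) pixels,
    generate a part, then translate the part back into alignment."""
    dx, dy = (int(torch.randn(1).item() * sigma) for _ in range(2))
    shifted = torch.roll(partial_sketch, shifts=(dy, dx), dims=(2, 3))
    part = generator(shifted, z)
    return torch.roll(part, shifts=(-dy, -dx), dims=(2, 3))
```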
Randomly selected generations from each approach are shown in Figure 4. In the StyleGAN2 generations we can see a general bird contour, but clean details are lacking. In SketchRNN we can see the generated eye, but later strokes do not form a coherent sketch. The failure to generate complex sketches with SketchRNN has also been reported in (Ha & Eck, 2018). DoodlerGAN generates noticeably higher quality sketches, with different parts of the sketch clearly discernible. More sketches generated by DoodlerGAN can be found in Figure 15. In the human evaluation, we also compare bird generations from DoodlerGAN trained on Creative Birds to the human-drawn bird sketches from QuickDraw to represent the “best” (oracle) generator one can hope to get by training on QuickDraw. (A natural equivalent for Creative Creatures is unclear.) In addition to the five generation-based methods, we also compare our method with a strong retrieval-based method in Section J in the Appendix. To evaluate, we generate 10,000 sketches using all approaches based on a randomly sampled set of previously unseen initial strokes (for the conditional approaches). 5.1 QUANTITATIVE EVALUATION For quantitative evaluation, we consider two metrics used in previous studies: the Fréchet Inception Distance (FID) (Heusel et al., 2017) and generation diversity (GD) (Cao et al., 2019). For this, we trained an Inception model on the QuickDraw3.8M dataset (Xu et al., 2020). See Section G in the Appendix for more training details. In Table 1 we see that DoodlerGAN has the best FID score relative to other approaches, while maintaining similar diversity. We also introduce two additional metrics. The first is the characteristic score (CS) that checks how often a generated sketch is classified to be a bird (for Creative Birds) or creature (for Creative Creatures) by the trained Inception model. The higher the score – the more recognizable a sketch is as a bird or creature – the better the sketch quality. The second is the semantic diversity score (SDS) that captures how diverse the sketches are in terms of the different creature categories they represent (this is more meaningful for the Creative Creatures dataset). Higher is better. Note that GD captures a more fine-grained notion of diversity. For instance, if all generated sketches are different dog sketches, GD would still be high but SDS would be low. See Section H in the Appendix for further details about these two metrics. In Table 1 we see that DoodlerGAN outperforms existing approaches on both metrics. In fact, DoodlerGAN even outperforms the human sketches on the Creative Birds dataset! This trend repeats in the human evaluation. 5.2 HUMAN EVALUATION Automatic evaluation of image generation and creative artifacts are open research questions. Therefore, we ran human studies on Amazon Mechanical Turk (AMT). Specifically, we showed subjects pairs of sketches – one generated by DoodlerGAN and the other by a competing approach – and asked which one (1) is more creative? (2) looks more like a bird / creature? (3) they like better? (4) is more likely to be drawn by a human? For the conditional baselines, we also asked (5) in which sketch is the initial stroke (displayed in a different color) better integrated with the rest of the sketch? We evaluated 200 random sketches from each approach. Each pair was annotated by 5 unique subjects. Figure 5 shows the percentage of times DoodlerGAN is preferred over the competing approach. We also plot the Bernoulli confidence intervals (p = 0.05).
Values outside the band are statistically significantly different from 50%, rejecting the null hypothesis that both approaches are equally good. For Creative Birds, DoodlerGAN significantly outperforms the five baselines on all five questions. It even beats the real human-drawn sketches, not just from QuickDraw, but also from our Creative Birds dataset on most dimensions! Creative Creatures is a more challenging dataset. DoodlerGAN outperforms other approaches but not human sketches. DoodlerGAN is not statistically significantly better (or worse) than SketchRNN Conditional in terms of creativity. Note that creativity in itself may not be sufficient – an arbitrary pattern of strokes may seem creative but may not be recognized as a bird or creature. The combination of creativity and looking like a bird or creature is a more holistic view, which we believe is better captured by the overall metric of which sketch subjects like better. DoodlerGAN statistically significantly outperforms SketchRNN on that metric. Similarly, subjects cannot differentiate between DoodlerGAN and SketchRNN Conditional in terms of being more likely to be drawn by humans. Note that unlike some other AI tasks (e.g., recognizing image content), an average human need not be the gold standard for creative tasks such as sketching. 5.3 NOVELTY Figure 16 in the Appendix shows generated sketches and their nearest neighbor training sketches using Chamfer distance. We see that the closest training sketches are different, demonstrating that DoodlerGAN is generating previously unseen creative sketches. In fact, by evaluating a sketch classification model trained on QuickDraw on our generated Creative Creatures sketches, we identify several sketches that have a high response (>0.25) to more than one category. We find that our model has generated hybrids of a penguin and mouse, a panda and a snake, a cow and a parrot (see Figure 6) – a strong indication of creativity (Boden, 1998; Yu & Nickerson, 2011)! In the Appendix, we include several other evaluation studies to demonstrate (1) DoodlerGAN’s ability to generate a diverse set of parts in an interpretable fashion, where smoothly varying the input noise to one layer changes the part shape and to another layer changes the position (Section I), (2) DoodlerGAN’s ability to complete a sketch better than a retrieval-based approach (Section J), and (3) the improvements in sketch quality due to the different design choices made in the part generator and selector (Section K). Finally, we also briefly discuss a heuristic method to convert the generated raster images to a vector representation (Section L) that enables rendering sketches at arbitrary resolutions. This conversion process adds an unintended but delightful aesthetic to the sketches! In human studies we find that subjects prefer this aesthetic to the raw generated images 96% of the time. We also ran all our human studies with this aesthetic and found that similar trends hold. 6 CONCLUSION In this work we draw attention to creative sketches. We collect two creative sketch datasets that we hope will encourage future work in creative sketch interpretation, segmentation, and generation. In this paper we focus on the latter, and propose DoodlerGAN – a part-based GAN model for creative sketch generation. We compare our approach to existing approaches via quantitative evaluation and human studies. We find that subjects prefer sketches generated by DoodlerGAN. In fact, for birds, subjects prefer DoodlerGAN’s sketches over those drawn by humans!
There is significant room for improvement in generating more complex creative sketches (e.g., Creative Creatures). Future work also involves exploring human-machine collaborative settings for sketching. ACKNOWLEDGMENTS The Georgia Tech effort was supported in part by AFRL, DARPA, ONR YIP and Amazon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor. A SKETCH SEGMENTATION DATASETS Previous sketch segmentation datasets (Schneider & Tuytelaars, 2016; Yang et al., 2020; Li et al., 2019a; Wu et al., 2018) are too sparsely annotated to effectively train deep models. For example, the SPG dataset (Li et al., 2019a) contains 800 sketches per category with part annotations. Note that most approaches train one model per category. The SketchSeg (Wu et al., 2018) dataset contains 100 labeled sketches per category, further augmented by automatically generating more sketches with a trained SketchRNN (Ha & Eck, 2018) model. Our proposed datasets are exhaustively annotated – all 10k sketches in both datasets have part annotations. B EXISTING CREATIVE TOOLS BASED ON SKETCH GENERATION The growing availability of sketch generation approaches has spurred a variety of creative applications and art projects. The book Dreaming of Electric Sheep (Diaz-Aviles, 2018), inspired by Philip K. Dick’s famous book, contains 10,000 sheep sketches generated by a DCGAN (Radford et al., 2015) trained on the QuickDraw dataset (Jongejan et al., 2016). Quick, Draw! (quickdraw.withgoogle.com) recognizes what a user is drawing as they draw it. AutoDraw (Lab, 2017) returns high quality illustrations or sketches created by experts that match a novice’s sketch. Edges2cats (Hesse, 2017) is an online tool based on pix2pix (Isola et al., 2017) that converts free-hand drawings to realistic images. Our dataset and approach can be used to enable more applications related to drawing creative sketches. C INSIGHTS FROM PILOT DATA COLLECTION We found that when using a shorter initial stroke, the generic creatures were often recognizable as specific animals. But with longer strokes, we more frequently found fictional creatures that are high quality (look like creatures) but not recognizable as a real animal, indicating higher creativity. Piloting longer initial strokes with birds indicated that subjects had trouble incorporating the initial stroke well while maintaining coherence. Shorter strokes led to simpler sketches (still significantly more complex and creative than QuickDraw, Figure 2), making Creative Birds a more approachable first dataset for creative sketch generation, and Creative Creatures a notch above in difficulty. D STROKE- AND PART-LEVEL ANALYSIS We compare our Creative Birds and Creative Creatures datasets to previous datasets in Table 2. As a proxy for complexity, we report the average stroke length (normalized by the image size) and number of strokes across the bird sketches in each dataset. We see that our datasets have the longest and most numerous strokes. The difference is especially stark w.r.t. QuickDraw (twice as long and three times as many strokes), one of the most commonly used sketch datasets. Next we analyze the part annotations in our datasets. Table 3 shows the number of sketches containing each of the parts. Notice that each part in Creative Creatures has significantly fewer examples (in the extreme case, only 302 for paws)
than most parts in Creative Birds, making Creative Creatures additionally challenging. Most parts in Creative Birds are present in more than half the sketches except for the mouth, which we find is often subsumed in the beak. For Creative Creatures, the number of sketches containing different parts has higher variance. Common parts like body and head have similar occurrence in both datasets. Example parts are shown in Figures 10 and 11. Note that the initial stroke can be used as any (entire or partial) part, including the body, but will not be annotated as such. This means that sketches without an annotated body likely still contain a body. In Figures 7 and 8 we analyze the order in which parts tend to be drawn in both datasets. E EXAMPLE SKETCHES, PARTS, AND INITIAL STROKES See Figures 12 and 13 for example sketches from the Creative Birds and Creative Creatures datasets respectively. See Figures 10 and 11 for example parts from both datasets. See Figure 9 for examples of the initial strokes used in the data collection and sketch generation process. F IMPLEMENTATION DETAILS Training. We found that the default learning rates used in the StyleGAN papers lead to over-optimization of the discriminator during the early training phase and prevent effective optimization of the generator on our datasets. Therefore, we did a grid search on the discriminator and generator learning rates in {10^-3, 5×10^-3, 10^-4, 5×10^-4} and batch sizes in {8, 40, 200}, looking for training stability and high generation quality. We picked a learning rate of 10^-4 and a batch size of 40 for both the discriminator and generator. We use the Adam optimizer (Kingma & Ba, 2014), following the StyleGAN papers (Karras et al., 2019; 2020), with β1 = 0, β2 = 0.99, ε = 10^-8. We weight the sparsity loss with a trade-off factor equal to 0.01. As for the part selectors for both datasets, we train each on 80% of the data with learning rate 2×10^-4 and batch size 128 for 100 epochs, by which point the training losses reach a plateau and a reasonable test accuracy is achieved on the other 20% of the data. Specifically, we get 63.58% and 48.86% part selection test accuracy on the Creative Birds and Creative Creatures datasets respectively. The training time of each creature and bird part generator is approximately 4 and 2 days, respectively, on a single NVIDIA Quadro GV100 Volta GPU. Data augmentation. As discussed in Section 4, we apply a random affine transformation to the vector sketches as a data augmentation step, in addition to random horizontal flips, during training (sketched in code below). Specifically, we used two sets of affine transformation parameters: 1) random rotation with angles θ ∈ U[−15°, 15°], random scaling with ratio s ∈ U[0.9, 1.1], random translation with ratio t ∈ U[−0.01, 0.01], and a fixed stroke width l = 2 pixels; 2) θ ∈ U[−45°, 45°], s ∈ U[0.75, 1.25], t ∈ U[−0.05, 0.05], and l ∈ U[0.5, 2.5] pixels. We found that a larger transformation and longer training time are especially useful when training part generators on the Creative Creatures dataset, which is more challenging and whose parts often have fewer examples than Creative Birds. We also found that the larger transformation helps to increase the variation in the generation based on the noise, but the general quality of generated parts suffers on the Creative Birds dataset. Consequently, we apply the larger augmentation when training the creature part generators and the bird eye generator, and the smaller augmentation when training the other models.
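Below is a minimal NumPy sketch of the first augmentation setting applied to stroke points before rasterization; rotating about the canvas center and the helper name are illustrative choices.

```python
import numpy as np

def augment_vector_sketch(strokes, canvas=64, rng=None):
    """strokes: list of (N_i, 2) arrays of stroke points in pixel coordinates.
    Applies rotation U[-15°, 15°], scale U[0.9, 1.1] and translation
    U[-0.01, 0.01] of the canvas size (augmentation setting 1 above)."""
    rng = rng or np.random.default_rng()
    theta = np.deg2rad(rng.uniform(-15, 15))
    s = rng.uniform(0.9, 1.1)
    t = rng.uniform(-0.01, 0.01, size=2) * canvas
    c = np.array([canvas / 2, canvas / 2])         # rotate about the center
    R = s * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    return [(pts - c) @ R.T + c + t for pts in strokes]
```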
We train the creature part generators for 60,000 steps and the bird part generators for 30,000 steps. Inference. For Creative Birds, we noticed that the quality of the generated sketch was higher when the generated eye was not too small and not too close to the initial stroke. Motivated by this observation, we used the following trick: we sample 10 eyes for each initial stroke and rank them based on the pixel sum and distance to the initial stroke. We combine the ranks using Borda count and pick the highest-ranked eye for the following steps. Using this, we see an improvement in the generation quality for Creative Birds, but not Creative Creatures. Since humans in our data collection process were asked to add at least five parts to the sketch, we employ the same rule for our part selector – only after five parts have been added is the selector given the option to predict stop. Once a part has been drawn, that part cannot be selected again. G DETAILS OF MODEL TRAINED TO COMPUTE FID AND DIVERSITY SCORES FID computes the similarity between two image distributions. It first embeds the images into a feature space defined by a trained Inception model (Szegedy et al., 2016). It then fits a multivariate Gaussian to the feature vectors from both sets of images. Finally, it computes the Fréchet distance between the two Gaussians. To this end, we trained an Inception model on the QuickDraw3.8M dataset (Xu et al., 2020). Specifically, we preprocess the dataset based on the instructions in the paper (Xu et al., 2020) and render each vector sketch into a 64×64 raster image. The dataset contains 345 classes and each class contains 9,000 training samples, 1,000 validation samples, and 1,000 test samples. We train the Inception model with the RMSprop optimizer (Tieleman & Hinton, 2017) for 50 epochs. We use an initial learning rate of 0.045 with an exponential decay at a rate of 0.98 after every epoch. The trained model achieved a reasonable test accuracy of 76.42% (the state-of-the-art accuracy is 80.51%, reported in SketchMate (Xu et al., 2020) using 224×224 images). Generation diversity calculates the average pairwise Euclidean distance between features of the sketches, computed with a trained model, to reflect the diversity of the generations. To be consistent, we use the same Inception model trained on QuickDraw3.8M to calculate these features. H PROPOSED METRICS Here we provide more details about the characteristic score (CS) and semantic diversity score (SDS) metrics introduced in Section 5.1. We first construct the label sets B and C, which include the labels from the 345 QuickDraw classes that represent a bird or creature. The details of the two sets can be found below. Then, for the Creative Birds and Creative Creatures datasets, we define the characteristic score as the percentage of the time a generated sketch is classified as a label in B and C respectively. The characteristic score gives us a basic understanding of the generation quality, indicating whether the generations are recognizable as the relevant concepts (birds and creatures) we used to create the dataset. We define the semantic diversity score as follows:

SDS = p_C \sum_{l \in C} -\frac{p_l}{p_C} \log \frac{p_l}{p_C} = -\sum_{l \in C} p_l \log \frac{p_l}{p_C},

where p_l represents the average probability that a sketch in the generation collection is classified as label l, and p_C represents the average probability that a generated sketch is classified as a creature, i.e., p_C = \sum_{l \in C} p_l. The intuition is similar to the Inception Score (IS) (Salimans et al., 2016).
Unlike IS, however, SDS does not penalize a sketch for receiving a high probability for more than one label. In fact, a sketch that looks like a combination of two creatures is probably quite creative! Such examples are shown in Figure 6 in the paper. Thus, in SDS we only look at the sum of the label distribution of the generation collection, capturing two aspects of creativity: across generations, 1) the distribution should have a high variety across creature categories (measured by the entropy of the marginal creature distribution) and 2) the distribution should have most of its mass on creature categories. The bird label set B and creature label set C from the QuickDraw categories are as follows: B = [bird, duck, flamingo, parrot] C = [ant, bear, bee, bird, butterfly, camel, cat, cow, crab, crocodile, dog, dolphin, duck, elephant, fish, flamingo, frog, giraffe, hedgehog, horse, kangaroo, lion, lobster, monkey, mosquito, mouse, octopus, owl, panda, parrot, penguin, pig, rabbit, raccoon, rhinoceros, scorpion, sea turtle, shark, sheep, snail, snake, spider, squirrel, swan, tiger, whale, zebra] I MULTIMODAL PART GENERATION Conditional GANs tend to ignore the input noise and base their generations completely on the conditional information (Isola et al., 2017). We encounter similar challenges in our approach, especially for the parts later in the order, when the conditional information is strong. For the eye generator, when the conditional signal is weak (just the initial stroke), the model responds to the noise well. We demonstrate this with the style mixing experiment conducted in the StyleGAN paper (Karras et al., 2019). Specifically, given the same initial strokes, we sample two random noise vectors and feed them into different layers of the generator. In Figure 18a, each row has the same input noise for the lower generator layers and each column has the same input noise for the higher generator layers. We see that the lower-layer noise controls the position of the generated eyes while the higher-layer noise controls the size and shape of the eyes. However, for generators of later parts, the generation tends to have little variation given different noise vectors, as illustrated by the tail generation results in Figure 18b. Although the generator does not respond to the noise, we find that it is very sensitive to the conditional information. Specifically, if we slightly translate the input partial sketch, the generator generates a part with a different appearance and position. As shown in Figure 17, we randomly translate the input partial sketch, feed it to the generator, and then translate the generated parts back to the correct positions. Figure 18c shows different generations for the tail given the same input partial sketch. We call this trick conditioning perturbation. This trick is motivated by the formulation in SketchRNN (Ha & Eck, 2018), where GMM parameters instead of deterministic points are used as the RNN output as a mechanism to introduce variation. J GENERATIVE VS. RETRIEVAL In addition to the generation-based methods (StyleGAN2, SketchRNN) that we compared to in the main paper, we now consider a strong retrieval-based baseline. Specifically, we hold out 5% of the sketches in our datasets to use as query sketches. We extract the first N parts from these sketches and use them to match against partial sketches containing N parts from the remaining 95% of the dataset, using the average Chamfer distance across parts (sketched in code below). The rest of the matched sketch is returned to complete the query partial sketch.
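A minimal sketch of this matching step follows; the point-set extraction from rasters and the database format are assumptions for illustration.

```python
import numpy as np

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two (N, 2) point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def retrieve_completion(query_parts: dict, database: list):
    """query_parts: part_name -> (N, 2) points of the query's first parts.
    database: list of (candidate_parts, full_sketch) pairs from the 95% split.
    Returns the full sketch whose matching parts are closest on average."""
    best, best_dist = None, float("inf")
    for cand_parts, full_sketch in database:
        common = [p for p in query_parts if p in cand_parts]
        if len(common) != len(query_parts):
            continue                       # must contain the same N parts
        dist = np.mean([chamfer(query_parts[p], cand_parts[p])
                        for p in common])  # average Chamfer across parts
        if dist < best_dist:
            best, best_dist = full_sketch, dist
    return best
```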
A limitation of this approach is that the quality of the match will deteriorate if the application requires multiple completions of the same query partial sketch (e.g., to present as options to a human-in-the-loop). A retrieval approach will have to retrieve farther neighbors to find more completions. A generative approach can simply generate multiple samples as completions, with no systematic degradation in quality across samples, as shown in Figure 18c. To evaluate this via human studies, we hypothesize a scenario where 32 candidate completions are desired. We compare the 32nd retrieved completion with a “32nd” sample from DoodlerGAN (in our approach there is no natural ordering to the generations), and ask human subjects in which sketch the query is better integrated. We experiment with 200 queries containing 2 and 3 parts each. DoodlerGAN is statistically significantly better than the retrieval approach, preferred 54% and 55% of the time with 2- and 3-part queries respectively. Recall that the retrieval approach, by definition, reproduces human sketches as completions. On the other hand, we saw in Figure 16 that DoodlerGAN generates sketches that are different from the training data, making it conceptually a more creative approach. K ABLATION STUDY These ablation studies demonstrate the effectiveness of different design choices in DoodlerGAN. Part generator: Figure 19a shows sketches generated if the entire partial sketch is used as the conditioning information without the part-wise channel representation. We see that it is difficult for the part generator to learn the spatial relationships between different parts. Notice that the head (red part) often does not include the eye inside it. Out of a sample of 512 head generations, we find that without part channels the generated head fails to surround the eye 57.4% of the time, whereas with part channels the head surrounds the eye 88.7% of the time. Figure 19b shows generations without the U-Net architecture, directly concatenating the encoder output with the noise vector instead. We see that the quality of the part composition is much worse than when using the U-Net (Figure 19c), which encodes the spatial information of the conditioning sketch in the feature map. Part selector: Figure 20 shows sketches generated if, instead of using a part selector, we generate parts in a fixed order (the most common order in the dataset). We find that the use of a part selector allows the model to use the initial stroke much more effectively. Consider the two sketches with red boundaries shown in Figure 20a. They have the same initial strokes. When using a part selector, a head is not generated and the initial stroke is used as the head, whereas the generation with a fixed order still generates a head and this error propagates to subsequent part generations. A similar observation can be made for the beak generation in the two sketches with blue boundaries shown in Figure 20a. Furthermore, the part selector plays an important role in choosing appropriate part lists and stopping the generation at an appropriate step. For instance, it is not reasonable to generate all 16 parts in every creature sketch; doing so greatly undermines the quality, as shown in Figure 20b. A part selector is also important if one wants to implement our method in a human-in-the-loop scenario, where the model would need to identify what the human has already drawn and then decide what to draw next. L CONVERTING GENERATED SKETCHES TO VECTOR IMAGES DoodlerGAN generates sketches in raster image format.
One disadvantage of raster images is that their resolution can not be easily increased without introducing artifacts. The benefit of vector images is that they can be rendered at arbitrary resolutions. We convert raster images of sketches to vector representations (SVGs) using the open source tool mkbitmap, which relies on the potrace program ((Selinger, 2003)) based on a polygon tracing algorithm. The parameters used are shown in Table 4. Examples conversions are shown in Figure 21. Notice that in addition to the change in format, the conversion also introduces a different aesthetic to the sketch. We conduct a human study to evaluate this aesthetic. We found that subjects prefer the new aesthetic 96% of the times! Note that this is while keeping the resolution of the two aesthetics the same. We hypothesize that this may be because the converted images do not have any blurry artifacts and the strokes are smoother. For completeness, in Figure 21 we also show a couple of these converted sketches at a higher resolution.
1. What are the strengths and weaknesses of the proposed dataset compared to existing sketch datasets? 2. How does the proposed generative model differ from other GAN models, and what are its advantages and limitations? 3. What are the potential applications or approaches made possible by the new dataset that were not possible before? 4. How effective is the user-in-the-loop interface in generating high-quality sketches, and how does it compare to other interfaces? 5. What insights can be gained from studying the order of individual strokes in sketch datasets, and how might this information improve the generative model? 6. Is there a preference for raster images over vector representations in generative models for sketches, and why? 7. How was the data collection process for the doodling task conducted, and what were the criteria for selecting the 100 sketches for the user study? 8. Are there any ablation studies or additional experiments that could support the claimed benefits of certain architecture choices in the generative model?
Review
Review Summary This paper introduces two creative sketch datasets of birds and creatures, segmented into parts, each with ~10k doodles collected from Amazon MTurk workers. In a user study, people tend to favor sketches from their dataset over similar Google QuickDraw sketches. Additionally, the authors propose a GAN architecture for generating novel sketches in an incremental fashion, one part at a time. They provide many qualitative results as well as human studies to validate their approach. Explanation of Rating While the new datasets look nice, I'm not sure that they are sufficiently different from or better than existing sketch datasets. Overall, the scale, complexity, and diversity of the sketches are roughly the same as some of the other datasets mentioned in the paper. It isn't obvious to me which novel applications or approaches are made possible with this dataset that were not possible before. With respect to the proposed generative model, I think a user-in-the-loop interface is a reasonable approach, but though the model seems compatible with such a system, the authors do not actually implement it. It would be nice to see this interface in practice, since without it, the results do look a bit better but are not particularly more useful than those from other GAN models. Pros The UI and methodology for data collection, in which a user is asked to first draw an eye, followed by semantic parts of their choosing, is interesting and appears to be conducive to better sketch quality. The paper is well written and contains many qualitative results and figures. The labeling of sketches by semantic parts presents an advantage over previous datasets. The iterative/incremental design of the generative model is a nice step towards computer-aided user-in-the-loop creative design. Cons While the authors claim that Figure 2 shows that their dataset is more diverse and creative, I'm not sure if that's objectively the case and if the marginal improvement is sufficiently important for downstream tasks. I understand the authors' argument that part-level granularity is more reasonable than stroke-based granularity, but I think that one major insight that can be gained from studying sketch datasets is with respect to how humans draw. For this reason, it would be useful to have information about the order of individual strokes, which the dataset does not appear to include. I'm not sure about the argument that raster images are a better format for a generative model than vector. It seems like the long-range relationships and connectivity as well as the sparsity inherent in a vector representation are pretty important in a sketch. Consequently, some of the results produced by the model appear to suffer from topological artifacts and poorly-defined line segments. It would be useful to provide a citation for the doodling process used for data collection. How were the 100 sketches from Creative Birds and QuickDraw chosen for the user study? The paper claims the importance of certain architecture choices, e.g., that part channels help the model better predict the part locations. Some ablation study to verify this would be appropriate. Page 4: "subjetcs" (typo) Thank you to the authors for their comments and clarifications. I still am skeptical that the proposed dataset is a fundamental improvement over Quick Draw---ultimately, both datasets contain compact sketches from a single category containing relatively few strokes. 
But given that your updates and clarifications have addressed many of my questions, I am raising my score.
ICLR
Title Creative Sketch Generation Abstract Sketching or doodling is a popular creative activity that people engage in. However, most existing work in automatic sketch understanding or generation has focused on sketches that are quite mundane. In this work, we introduce two datasets of creative sketches – Creative Birds and Creative Creatures – containing 10k sketches each along with part annotations. We propose DoodlerGAN – a part-based Generative Adversarial Network (GAN) – to generate unseen compositions of novel part appearances. Quantitative evaluations as well as human studies demonstrate that sketches generated by our approach are more creative and of higher quality than existing approaches. In fact, in Creative Birds, subjects prefer sketches generated by DoodlerGAN over those drawn by humans! 1 INTRODUCTION The true sign of intelligence is not knowledge but imagination. – Albert Einstein From serving as a communication tool since prehistoric times to its growing prevalence with ubiquitous touch-screen devices – sketches are an indispensable visual modality. Sketching is often used during brainstorming to help the creative process, and is a popular creative activity in itself. Sketch-related AI so far has primarily focused on mimicking the human ability to perceive rich visual information from simple line drawings (Yu et al., 2015; Li et al., 2018) and to generate minimal depictions that capture the salient aspects of our visual world (Ha & Eck, 2018; Isola et al., 2017). Most existing datasets contain sketches drawn by humans to realistically mimic common objects (Eitz et al., 2012; Sangkloy et al., 2016; Jongejan et al., 2016; Wang et al., 2019). In this work we focus on creative sketches. AI systems that can generate and interpret creative sketches can inspire, enhance or augment the human creative process or final artifact. Concrete scenarios include automatically generating an initial sketch that a user can build on, proposing the next set of strokes or completions based on partial sketches drawn by a user, presenting the user with possible interpretations of the sketch that may inspire further ideas, etc. ∗The work was done when the first author interned at Facebook AI Research. AI for creative sketches is challenging. Creative sketches are diverse and complex. They are unusual depictions of visual concepts while simultaneously being recognizable. They have subjective interpretations like aesthetics and style, and are semantically rich – often conveying a story or emotions. To facilitate progress in AI-assisted creative sketching, we collect two datasets – Creative Birds and Creative Creatures (Figure 1) – containing 10k creative sketches of birds and generic creatures respectively, along with part annotations (Figure 2 right columns). To engage subjects in a creative exercise during data collection, we take inspiration from a process doodling artists often follow. We set up a sketching interface where subjects are asked to draw an eye arbitrarily around a random initial stroke generated by the interface. Subjects are then asked to imagine a bird or generic creature that incorporates the eye and initial stroke, and draw it one part at a time. Figure 2 shows example sketches from our datasets. Notice the larger diversity and creativity of the birds in our dataset compared to those from existing datasets with more canonical and mundane birds. We focus on creative sketch generation. Generating novel artifacts is key to creativity. 
To this end we propose DoodlerGAN – a part-based Generative Adversarial Network (GAN) that generates novel part appearances and composes them in previously unseen configurations. During inference, the model automatically determines the appropriate order of parts to generate. This makes the model well suited for human-in-the-loop interactive interfaces where it can make suggestions based on user-drawn partial sketches. Quantitative evaluation and human studies show that our approach generates more creative and higher quality sketches than existing approaches. In fact, subjects prefer sketches generated by DoodlerGAN over human sketches from the Creative Birds dataset! Our datasets, code, and a web demo are publicly available 1. 2 RELATED WORK Sketches have been studied extensively as a visual modality that is expressive yet minimal. The sparsity of sketches compared to natural images has inspired novel modelling techniques. We discuss existing sketch datasets and sketch generation approaches. Other related work includes sketch recognition (Yu et al., 2015; Li et al., 2018), sketch-based image retrieval (Yu et al., 2016; Liu et al., 2017; Ribeiro et al., 2020) and generation (Gao et al., 2020; Lu et al., 2018; Park et al., 2019). An overview of deep learning approaches for sketches can be found in this survey (Xu et al., 2020). Sketch datasets. Existing sketch datasets such as TU-Berlin (Eitz et al., 2012), Sketchy (Sangkloy et al., 2016), ImageNet-Sketch (Wang et al., 2019) and QuickDraw (Jongejan et al., 2016) are typically focused on realistic and canonical depictions of everyday objects. For instance, sketches in the Sketchy dataset (Sangkloy et al., 2016) were drawn by humans mimicking a natural image. Sketches in the QuickDraw dataset (Jongejan et al., 2016) were collected in a Pictionary-like game setting – they were drawn in under 20 seconds to be easily recognized as a target object category. This is in stark contrast with how people engage in doodling as a creative activity, where they take their time, engage their imagination, and draw previously unseen depictions of visual concepts. These depictions may be quite unrealistic – including exaggerations or combinations of multiple categories. Our datasets contain such creative sketches of birds and generic creatures. See Figures 1 and 2. Our data collection protocol was explicitly designed to engage users in a creative process. Also note that while not the focus of this paper, our datasets are a valuable resource for sketch segmentation approaches. See Section A in the Appendix for further discussion. 1songweige.github.io/projects/creative_sketech_generation/home.html Sketch generation. Earlier studies generated a sketch from an image via an image-to-image translation approach (Isola et al., 2017; Song et al., 2018; Li et al., 2019b). Subsequently, fueled by the release of large-scale free-hand sketch datasets, methods that generate human-like sketches from scratch have interested many researchers. One of the early works in this direction was SketchRNN (Ha & Eck, 2018) – a sequence-to-sequence Variational Autoencoder (VAE) that models the temporal dependence among strokes in a sketch. Later approaches (Chen et al., 2017; Cao et al., 2019) incorporated a convolutional encoder to capture the spatial layout of sketches. Reinforcement learning has also been studied to model the sequential decision making in sketch generation (Zhou et al., 2018; Huang et al., 2019; Mellor et al., 2019). 
In contrast with generating entire sketches, sketch completion approaches generate missing parts given a partial sketch. SketchGAN (Liu et al., 2019) adopted a conditional GAN model as the backbone to generate the missing part. Inspired by the large-scale language pretraining approaches (Devlin et al., 2019), SketchBERT (Lin et al., 2020) learns representations that capture the “sketch gestalt”. We propose a part-based approach, DoodlerGAN, for sketch generation. Compared to previous approaches (Ha & Eck, 2018; Cao et al., 2019) that learn to mimic how humans draw, our goal is to create sketches that were not seen in human drawings. The idea of generating different components as parts of the overall sketch can be traced back to compositional models (Chen et al., 2006; Xu et al., 2008). We explore these ideas in the context of deep generative models for sketches. Also relevant are approaches that exploit spatial structure when generating natural images – foreground vs. background (Yang et al., 2017) or objects and their spatial relationships (Johnson et al., 2018). A discussion on creative tools based on sketch generation can be found in Section B of the Appendix. 3 CREATIVE SKETCHES DATASETS To facilitate progress at the intersection of machine learning and artificial creativity in the context of sketches, we collected two datasets: Creative Birds and Creative Creatures. The first is focused on birds, and the second is more diverse and challenging, containing a variety of creatures. 3.1 DATASET COLLECTION Both datasets were collected on Amazon Mechanical Turk using a sketching web interface2. In order to engage subjects in a creative exercise, and encourage diversity across sketches, the interface generates a random initial stroke formed by connecting K keypoints on the canvas via Bezier curves. These points were picked via heuristics such that the strokes are smooth with similar lengths and have limited self-occlusion (see Figure 9 in the Appendix). Next, inspired by a process doodling artists (e.g., Michal Levy) use, subjects were told to add an eye to the canvas wherever they like. They were then asked to step back, and visualize how the initial stroke and the eye can be incorporated in a creative sketch of a bird or arbitrary creature (depending on the dataset). Subjects were then asked to draw one part of the bird or creature at a time, indicating via a drop-down menu which part they were drawing. Each part can be drawn using multiple strokes. For Creative Birds, the options include 7 common parts of birds (head, body, beak, tail, mouth, legs, wings). For Creative Creatures, the 16 parts (e.g., paws, horn, fins, wings) cover terrestrial, aquatic, and aerial creatures. In our pilot studies we found that providing subjects this structure of drawing one part at a time increased the quality of sketches, and giving subjects the option to pick parts themselves made the exercise more natural and increased the diversity of sketches. As a by-product, each stroke has a corresponding part annotation (see Figure 2), which can be a rich resource for sketch segmentation and part-based sketch recognition or generation models. Subjects are required to add at least five parts to the sketch (in addition to the eye and initial stroke) before they can submit the sketch. Once done with adding parts, subjects were given the option to add additional details to the sketch. Finally, they were asked to add a free-form natural language phrase to title their sketch to make it more expressive. 
In this work we do not use the details and phrases, but they may be useful in the future for more detailed sketch generation and automatic creative sketch interpretation. Except for Figure 1, all example sketches shown in the paper are without the details unless noted otherwise. In addition to contributing to the creativity and diversity of sketches, the initial random strokes also add constraints, making sketch creation more challenging. Subjects have more constraints when asked to draw only birds in Creative Birds, so we give them fewer starting constraints (K = 3 keypoints in the initial strokes), but for Creative Creatures subjects have more freedom in the animals they draw, so we provide more constraints (K = 6). Example initial strokes for both datasets and some insights from pilot data collection can be found in Figure 9 and Section C in the Appendix. 2Our interface can be seen at https://streamable.com/jt4sw1 3.2 DATA ANALYSIS We collected 10k sketches for both datasets. Filtering out sketches from workers who did not follow the instructions well, we have 8067 and 9097 sketches in Creative Birds and Creative Creatures respectively. See Figure 2 for random examples of sketches from both datasets. For comparison, we also show birds from other existing datasets. Notice that the sketches in our datasets are more creative, and sketches from Creative Creatures are more diverse and complex than those in Creative Birds. We conducted a human study comparing 100 sketches from Creative Birds to 100 from QuickDraw across 5 subjects each. Our sketches were rated as more creative 67% of the time. More example sketches from our datasets can be found in Figures 12 and 13 in the Appendix. In Figure 14 in the Appendix we also show examples where different sketches incorporate similar initial strokes, suggesting room for creativity and variation that a generative model could learn. Sketches of individual parts are shown in Figures 10 and 11 in the Appendix. Analysis of the part annotations, including the order in which parts tend to be drawn, is provided in Section D in the Appendix. 4 DOODLERGAN: A PART-BASED SKETCH GENERATION MODEL Objects are typically a configuration of parts (e.g., animals have two or four legs, a mouth below two eyes). Moreover, humans often sketch by drawing one part at a time. We approach creative sketch generation by generating novel appearances of parts and composing them in previously unseen configurations. Another benefit of parts is that while creative sketches exhibit large diversity in appearance, this complexity is significantly reduced when the sketches are decomposed into individual parts. Our approach, DoodlerGAN, is a part-based GAN that sequentially generates one part at a time, while ensuring at each step that the appearance of the parts and partial sketches comes from the corresponding distributions observed in human sketches. Humans do not draw parts in the same order across sketches, but patterns exist. To mimic this, and to adapt well to a (future) human-in-the-loop setting, DoodlerGAN automatically determines the order of parts. Concretely, DoodlerGAN contains two modules: the part generator and the part selector, as shown in Figure 3. Given a part-based representation of a partial sketch, the part selector predicts which part category to draw next. Given a part-based representation of a partial sketch and a part category, the part generator generates a raster image of the part (which represents both the appearance and location of the part). 
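To make the interplay between the two modules concrete, here is a minimal Python sketch of the inference loop. The `part_selector` and `part_generators` interfaces are hypothetical stand-ins, not the released implementation (which additionally uses the eye-ranking trick from Section F):

```python
import torch

def generate_sketch(initial_stroke, part_selector, part_generators,
                    part_names, min_parts=5, noise_dim=128):
    """initial_stroke: (64, 64) binary raster of the sampled initial stroke.
    part_generators: one conditional generator per part (parameters not shared)."""
    num_parts = len(part_names)
    # Part-channel representation: one channel per part plus the full partial sketch.
    channels = torch.zeros(1, num_parts + 1, 64, 64)
    channels[0, -1] = initial_stroke
    drawn = []
    for step in range(num_parts):
        logits = part_selector(channels)            # (1, num_parts + 1); last = "stop"
        if drawn:
            logits[0, drawn] = -float("inf")        # each part is drawn at most once
        if step < min_parts:
            logits[0, -1] = -float("inf")           # stopping allowed only after 5 parts
        choice = int(logits.argmax(dim=1))
        if choice == num_parts:
            break                                   # selector predicted "stop"
        z = torch.randn(1, noise_dim)
        part = part_generators[part_names[choice]](channels, z)  # (1, 1, 64, 64)
        channels[0, choice] = part[0, 0]
        # Compose the part into the full-sketch channel with a max, as in Figure 3a.
        channels[0, -1] = torch.maximum(channels[0, -1], part[0, 0])
        drawn.append(choice)
    return channels[0, -1]
```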
4.1 ARCHITECTURE We represent sketches as raster images (i.e., grids of pixels) as opposed to vector representations of strokes. This makes detailed spatial information readily available, which helps the model better assess where a part connects with the rest of the sketch. A vector representation captures the low-level sequence, which may not be relevant to the overall quality of the generated sketch (e.g., a circular head will look the same whether it is drawn clockwise or counter-clockwise). Our part-based approach models the sequence at an appropriate level of abstraction (parts). The part generator is a conditional GAN based on the StyleGAN2 architecture (Karras et al., 2019; 2020). We generate sketches at a 64 × 64 resolution. We use a 5-layer StyleGAN2 generator with [512, 256, 128, 64, 32] output channels, starting from a 4 × 4 × 64 constant feature map. To encode the input partial sketch, we use a 5-layer CNN with [16, 32, 64, 128, 256] output channels. Each layer contains two convolutional layers with 3 × 3 kernels followed by a LeakyReLU activation with a negative slope of 0.2. Inspired by the design of U-Net (Ronneberger et al., 2015), we downsample the intermediate feature map after each encoder layer and concatenate it channel-wise with the corresponding layers in the generator. See dashed lines in Figure 3a. This gives the generator access to the hierarchical spatial structure in the input sketch. The input partial sketch is represented by stacking each part as a channel, along with an additional channel for the entire partial sketch. If a part has not been drawn, the corresponding channel is a zero image. We find a part-based representation to be crucial for predicting appropriate part locations. For instance, without the part channels, the generated legs were often not connected to the body. Following StyleGAN2, we borrow the discriminator architecture from (Karras et al., 2018). We give the discriminator access to the input partial sketch and the corresponding part channels, so that the discriminator is able to distinguish fake sketches from real ones not just based on the appearance of the part, but also based on whether the generated part is placed at an appropriate location relative to other parts (e.g., heads are often around the eyes). Specifically, similar to the generator, the input to the discriminator is an image with (number of parts + 1) channels. In Figure 3a, the difference between the real and fake data fed into the discriminator occurs in the red channel, where the fake data contains the generated part (as opposed to the real one), and the last white channel, where the generated (vs. real) part is combined with the input partial sketch using a max operation. The generator and discriminator parameters are not shared across parts. In our experiments, a unified model across parts failed to capture the details in parts such as wings. We also experimented with fine-tuning the model end-to-end but generally observed inferior generation quality. For the part selector, we use the same architecture as the encoder but with a linear layer added at the end to produce logits for different parts. The part selector is also expected to decide when the sketch is complete and no further parts need to be generated. Therefore, the output dimension of the linear layer is set to (number of parts + 1). We convert the human sketches in our dataset to (partial sketch, next-part) pairs and use them to train the part selector in a supervised way. 
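The part-channel input described above can be illustrated with a short sketch; the rasterized per-part masks are assumed to be given, and the channel ordering is our own convention for illustration:

```python
import numpy as np

def encode_partial_sketch(part_rasters, part_names, size=64):
    """part_rasters: dict mapping a part name to a binary (size, size) raster of
    that part's strokes. Returns a (num_parts + 1, size, size) array where parts
    that have not been drawn yet are zero images and the last channel holds the
    entire partial sketch (the union of all drawn parts)."""
    channels = np.zeros((len(part_names) + 1, size, size), dtype=np.float32)
    for i, name in enumerate(part_names):
        if name in part_rasters:
            channels[i] = part_rasters[name]
    channels[-1] = channels[:-1].max(axis=0)
    return channels
```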
4.2 TRAINING AND INFERENCE With strong conditional information, the discriminator in the part generator gets easily optimized and consequently provides zero gradient for the generator. As a result, the generator gets stuck in the early stages of training and only generates the zero image afterwards. We introduce a sparsity loss as an additional regularizer: the L2 norm of the difference between the pixel sums of the generated and real parts, encouraging the model to generate parts of sizes similar to the real parts. This stabilizes training and lets the generator learn even when little signal is provided by the discriminator. During training, we also augment the training sketches by applying small affine transformations to the vector sketch images before converting them to raster images. During inference, we follow the same procedure as data collection: we first automatically sample an initial stroke from our interface (see Figure 9) and then use the part selector and part generators iteratively to complete the sketch. Conditional GANs have been shown to be insensitive to noise, generating the same sample for different noise vectors (Zhu et al., 2017; Donahue et al., 2017). We encountered similar issues, especially for parts drawn at a later stage when more conditional information is available. This is expected; the more complete the partial sketch, the fewer ways there are to reasonably complete it. We found, however, that our model is sensitive to minor perturbations of the input partial sketch. To increase the diversity in generations, we propose a simple trick we call conditioning perturbation. We apply random translations sampled from N(0, 2) to the input partial sketch to generate multiple parts and then translate the generated parts back to align with the original partial sketch (both tricks are sketched in code after the baseline descriptions below). For further details of training, data augmentation and inference see Section F in the Appendix. 5 RESULTS We quantitatively and qualitatively (via human studies) evaluate our approach compared to strong baselines and existing state-of-the-art approaches trained on our creative datasets: StyleGAN2 Unconditional. We train StyleGAN2 using the same hyperparameters and data augmentation settings used for DoodlerGAN, to generate the entire sketches in one step. To avoid mode collapse, we add the minibatch discriminator layer (Salimans et al., 2016) in its discriminator. This represents a state-of-the-art approach in image generation. StyleGAN2 Conditional. This approach is the same as the one described above, but we condition the one-step generation on the initial stroke (encoded using the same encoder as in DoodlerGAN). SketchRNN Unconditional. We use the SketchRNN model (Ha & Eck, 2018) trained on our datasets. This represents the state-of-the-art in sketch generation. We optimized the architecture (encoder, decoder and latent space sizes, temperature γ) and used heuristic post-processing to eliminate obvious failure cases, to adapt this approach to our datasets as best we could. SketchRNN Conditional. This approach is the SketchRNN described above, except during inference we fix the first few points and the pen states based on the random initial stroke. These are fed as input to continue the sequential generation of the sketch. Percentage-based. This approach is DoodlerGAN, except instead of using parts, we divide the sketch into 20% chunks (based on the order in which strokes were drawn) and use them as “parts”. A comparison to this approach allows us to demonstrate the effectiveness of semantic parts. 
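The following sketch illustrates the two tricks from Section 4.2. The function signatures are illustrative, the 0.01 weighting mentioned in Section F would be applied by the caller, and circular shifts stand in for translations (a reasonable approximation for small offsets on a mostly empty canvas):

```python
import torch

def sparsity_loss(fake_part, real_part):
    # L2 norm of the difference between the pixel sums of generated and real
    # parts (batch-averaged): pushes generated parts toward realistic sizes and
    # keeps the generator from collapsing to the zero image.
    diff = fake_part.sum(dim=(1, 2, 3)) - real_part.sum(dim=(1, 2, 3))
    return diff.pow(2).mean()

def conditioning_perturbation(generator, channels, n_samples=8, std=2.0, noise_dim=128):
    """Generate diverse parts by translating the input partial sketch with shifts
    drawn from N(0, std), generating, and translating the parts back."""
    parts = []
    for _ in range(n_samples):
        dy, dx = (torch.randn(2) * std).round().long().tolist()
        shifted = torch.roll(channels, shifts=(dy, dx), dims=(2, 3))
        part = generator(shifted, torch.randn(1, noise_dim))
        parts.append(torch.roll(part, shifts=(-dy, -dx), dims=(2, 3)))
    return parts
```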
Randomly selected generations from each approach are shown in Figure 4. In StyleGAN2 generations we can see a general bird contour but clean details are lacking. In SketchRNN we can see the generated eye but later strokes do not form a coherent sketch. The failure to generate complex sketches with SketchRNN has also been reported in (Ha & Eck, 2018). DoodlerGAN generates noticeably higher quality sketches, with different parts of the sketch clearly discernible. More sketches generated by DoodlerGAN can be found in Figure 15. In human evaluation, we also compare bird generations from DoodlerGAN trained on Creative Birds to the human-drawn bird sketches from QuickDraw to represent the “best” (oracle) generator one can hope to get by training on QuickDraw. (A natural equivalent for Creative Creatures is unclear.) In addition to the five generation-based methods, we also compare our method with a strong retrieval-based method in Section J in the Appendix. To evaluate, we generate 10,000 sketches using all approaches based on a randomly sampled set of previously unseen initial strokes (for the conditional approaches). 5.1 QUANTITATIVE EVALUATION For quantitative evaluation, we consider two metrics used in previous studies: Fréchet inception distance (FID) (Heusel et al., 2017) and generation diversity (GD) (Cao et al., 2019). For this, we trained an Inception model on the QuickDraw3.8M dataset (Xu et al., 2020). See Section G in the Appendix for more training details. In Table 1 we see that DoodlerGAN has the best FID score relative to other approaches, while maintaining similar diversity. We also introduce two additional metrics. The first is the characteristic score (CS) that checks how often a generated sketch is classified as a bird (for Creative Birds) or creature (for Creative Creatures) by the trained Inception model. The higher the score – the more recognizable a sketch is as a bird or creature – the better the sketch quality. The second is the semantic diversity score (SDS) that captures how diverse the sketches are in terms of the different creature categories they represent (this is more meaningful for the Creative Creatures dataset). Higher is better. Note that GD captures a more fine-grained notion of diversity. For instance, if all generated sketches are different dog sketches, GD would still be high but SDS would be low. See Section H in the Appendix for further details about these two metrics. In Table 1 we see that DoodlerGAN outperforms existing approaches on both metrics. In fact, DoodlerGAN even outperforms the human sketches on the Creative Birds dataset! This trend repeats in human evaluation. 5.2 HUMAN EVALUATION Automatic evaluation of image generation and creative artifacts are open research questions. Therefore, we ran human studies on Amazon Mechanical Turk (AMT). Specifically, we showed subjects pairs of sketches – one generated by DoodlerGAN and the other by a competing approach – and asked which one (1) is more creative? (2) looks more like a bird / creature? (3) they like better? (4) is more likely to be drawn by a human? For the conditional baselines, we also ask (5) in which sketch is the initial stroke (displayed in a different color) better integrated with the rest of the sketch? We evaluated 200 random sketches from each approach. Each pair was annotated by 5 unique subjects. Figure 5 shows the percentage of times DoodlerGAN is preferred over the competing approach. We also plot the Bernoulli confidence intervals (p = 0.05). 
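As an aside, confidence bands of this kind can be reproduced with a standard normal-approximation interval for a Bernoulli proportion; the paper does not state the exact construction it uses, so the following is one common choice with hypothetical counts:

```python
import math

def bernoulli_ci(preferred, total, z=1.96):
    """95% normal-approximation CI for a preference rate (z = 1.96 for p = 0.05)."""
    p_hat = preferred / total
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / total)
    return p_hat - half, p_hat + half

# Hypothetical example: 200 pairs x 5 subjects = 1000 votes, 560 prefer DoodlerGAN.
low, high = bernoulli_ci(560, 1000)
print(f"95% CI: [{low:.3f}, {high:.3f}]")  # a band excluding 0.5 implies significance
```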
Values outside the band are statistically significantly different from 50%, rejecting the null hypothesis that both approaches are equally good. For Creative Birds, DoodlerGAN significantly outperforms the five baselines on all five questions. It even beats the real human-drawn sketches, not just from QuickDraw, but also from our Creative Birds dataset on most dimensions! Creative Creatures is a more challenging dataset. DoodlerGAN outperforms other approaches but not human sketches. DoodlerGAN is not statistically significantly better (or worse) than SketchRNN Conditional in terms of creativity. Note that creativity in itself may not be sufficient – an arbitrary pattern of strokes may seem creative but may not be recognized as a bird or creature. The combination of creativity and looking like a bird or creature is a more holistic view, which we believe is better captured by the overall metric of which sketch subjects like better. DoodlerGAN statistically significantly outperforms SketchRNN on that metric. Similarly, subjects cannot differentiate between DoodlerGAN and SketchRNN Conditional in terms of being more likely to be drawn by humans. Note that unlike some other AI tasks (e.g., recognizing image content), an average human need not be the gold standard for creative tasks such as sketching. 5.3 NOVELTY Figure 16 in the Appendix shows generated sketches and their nearest neighbor training sketches using Chamfer distance. We see that the closest training sketches are different, demonstrating that DoodlerGAN is generating previously unseen creative sketches. In fact, by evaluating a sketch classification model trained on QuickDraw on our generated Creative Creatures sketches, we identify several sketches that have a high response (>0.25) to more than one category. We find that our model has generated hybrids of a penguin and mouse, a panda and a snake, a cow and a parrot (see Figure 6) – a strong indication of creativity (Boden, 1998; Yu & Nickerson, 2011)! In the Appendix, we include several other evaluation studies to demonstrate (1) DoodlerGAN’s ability to generate a diverse set of parts in an interpretable fashion, where smoothly varying the input noise to one layer changes the part shape and to another layer changes the position (Section I), (2) DoodlerGAN’s ability to complete a sketch better than a retrieval-based approach (Section J), and (3) the improvements in sketch quality due to the different design choices made in the part generator and selector (Section K). Finally, we also briefly discuss a heuristic method to convert the generated raster images to a vector representation (Section L) that enables rendering sketches at arbitrary resolutions. This conversion process adds an unintended but delightful aesthetic to the sketches! In human studies we find that subjects prefer this aesthetic to the raw generated images 96% of the time. We also ran all our human studies with this aesthetic and find that similar trends hold. 6 CONCLUSION In this work we draw attention to creative sketches. We collect two creative sketch datasets that we hope will encourage future work in creative sketch interpretation, segmentation, and generation. In this paper we focus on the latter, and propose DoodlerGAN – a part-based GAN model for creative sketch generation. We compare our approach to existing approaches via quantitative evaluation and human studies. We find that subjects prefer sketches generated by DoodlerGAN. In fact, for birds, subjects prefer DoodlerGAN’s sketches over those drawn by humans! 
There is significant room for improvement in generating more complex creative sketches (e.g., Creative Creatures). Future work also involves exploring human-machine collaborative settings for sketching. ACKNOWLEDGMENTS The Georgia Tech effort was supported in part by AFRL, DARPA, ONR YIP and Amazon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor. A SKETCH SEGMENTATION DATASETS Previous sketch segmentation datasets (Schneider & Tuytelaars, 2016; Yang et al., 2020; Li et al., 2019a; Wu et al., 2018) are too sparsely annotated to effectively train deep models. For example, the SPG dataset (Li et al., 2019a) contains 800 sketches per category with part annotations. Note that most approaches train one model per category. The SketchSeg (Wu et al., 2018) dataset contains 100 labeled sketches per category, further augmented by automatically generating more sketches with a trained SketchRNN (Ha & Eck, 2018) model. Our proposed datasets are exhaustively annotated – all 10k sketches in both datasets have part annotations. B EXISTING CREATIVE TOOLS BASED ON SKETCH GENERATION The growing availability of sketch generation approaches has spurred a variety of creative applications and art projects. The book Dreaming of Electric Sheep (Diaz-Aviles, 2018), inspired by Philip K. Dick’s famous book, contains 10,000 sheep sketches generated by DCGAN (Radford et al., 2015) trained on the QuickDraw dataset (Jongejan et al., 2016). Quick, Draw! (quickdraw.withgoogle.com) recognizes what a user is drawing as they draw it. AutoDraw (Lab, 2017) returns high quality illustrations or sketches created by experts that match a novice’s sketch. Edges2cats (Hesse, 2017) is an online tool based on pix2pix (Isola et al., 2017) that converts free-hand drawings to realistic images. Our dataset and approach can be used to enable more applications related to drawing creative sketches. C INSIGHTS FROM PILOT DATA COLLECTION We found that when using a shorter initial stroke, the generic creatures were often recognizable as specific animals. But with longer strokes, we more frequently found fictional creatures that are high quality (look like creatures) but not recognizable as a real animal, indicating higher creativity. Piloting longer initial strokes with birds indicated that subjects had trouble incorporating the initial stroke well while maintaining coherence. Shorter strokes led to simpler sketches (still significantly more complex and creative than QuickDraw, Figure 2), making Creative Birds a more approachable first dataset for creative sketch generation, and Creative Creatures a notch above in difficulty. D STROKE- AND PART-LEVEL ANALYSIS We compare our Creative Birds and Creative Creatures datasets to previous datasets in Table 2. As a proxy for complexity, we report the average stroke length (normalized by the image size) and number of strokes across the bird sketches in each dataset. We see that our datasets have the longest and most numerous strokes. The difference is especially stark w.r.t. QuickDraw (twice as long and three times as many strokes), one of the most commonly used sketch datasets. Next we analyze the part annotations in our datasets. Table 3 shows the number of sketches containing each of the parts. Notice that each part in Creative Creatures has significantly fewer examples (in the extreme case, only 302 for paws) 
than most parts in Creative Birds, making Creative Creatures additionally challenging. Most parts in Creative Birds are present in more than half the sketches, except for the mouth, which we find is often subsumed in the beak. For Creative Creatures, the number of sketches containing different parts has higher variance. Common parts like body and head have similar occurrence in both datasets. Example parts are shown in Figures 10 and 11. Note that the initial stroke can be used as any (entire or partial) part, including the body, but will not be annotated as such. This means that sketches without an annotated body likely still contain a body. In Figures 7 and 8 we analyze the order in which parts tend to be drawn in both datasets. E EXAMPLE SKETCHES, PARTS, AND INITIAL STROKES See Figures 12 and 13 for example sketches from the Creative Birds and Creative Creatures datasets respectively. See Figures 10 and 11 for example parts from both datasets. See Figure 9 for examples of the initial strokes used in the data collection and sketch generation process. F IMPLEMENTATION DETAILS Training. We found that the default learning rates used in the StyleGAN papers led to over-optimization of the discriminator during the early training phase and prevented effective optimization of the generator on our datasets. Therefore, we did a grid search on the discriminator and generator learning rates in {10−3, 5 × 10−3, 10−4, 5 × 10−4} and batch sizes in {8, 40, 200}, looking for training stability and high generation quality. We picked a learning rate of 10−4 and a batch size of 40 for both the discriminator and generator. We use the Adam optimizer (Kingma & Ba, 2014), following the StyleGAN papers (Karras et al., 2019; 2020), with β1 = 0, β2 = 0.99, ε = 10−8. We weight the sparsity loss by a trade-off factor of 0.01. As for the part selectors for both datasets, we train each on 80% of the data with a learning rate of 2 × 10−4 and a batch size of 128 for 100 epochs, at which point the training losses are observed to reach a plateau and a reasonable test accuracy is achieved on the remaining 20% of the data. Specifically, we get 63.58% and 48.86% part selection test accuracy on the Creative Birds and Creative Creatures datasets respectively. The training time for each creature and bird part generator is approximately 4 and 2 days respectively, on a single NVIDIA Quadro GV100 Volta GPU. Data augmentation. As discussed in Section 4, we apply a random affine transformation to the vector sketches as a data augmentation step, in addition to the random horizontal flips during training. Specifically, we used two sets of affine transformation parameters: 1) random rotation with angles θ ∈ U[−15◦, 15◦], random scaling with ratio s ∈ U[0.9, 1.1], random translation with ratio t ∈ U[−0.01, 0.01], and a fixed stroke width l = 2 pixels; 2) θ ∈ U[−45◦, 45◦], s ∈ U[0.75, 1.25], t ∈ U[−0.05, 0.05], and l ∈ U[0.5, 2.5] pixels. We found that a larger transformation and longer training time are especially useful when training part generators on the Creative Creatures dataset, which is more challenging and whose parts often have fewer examples than in Creative Birds. We also found that the larger transformation helps to increase the noise-driven variation in the generations, but the general quality of generated parts suffers on the Creative Birds dataset. Consequently, we apply the larger augmentation when training the creature part generators and the bird eye generator, and the smaller augmentation when training the other models. We train the creature part generators for 60,000 steps and the bird part generators for 30,000 steps. 
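A minimal version of the vector-level augmentation in the smaller setting above might look as follows, operating on a sketch's (N, 2) stroke points in a unit canvas before rasterization; the rotation center and coordinate convention are assumptions, and stroke width is handled at rasterization time:

```python
import numpy as np

def random_affine(points, max_rot_deg=15.0, scale=(0.9, 1.1), max_trans=0.01):
    """Random rotation/scaling/translation of vector sketch points in [0, 1]^2,
    matching augmentation setting 1) described above."""
    theta = np.deg2rad(np.random.uniform(-max_rot_deg, max_rot_deg))
    s = np.random.uniform(*scale)
    t = np.random.uniform(-max_trans, max_trans, size=2)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    center = np.array([0.5, 0.5])  # rotate and scale about the canvas center
    return (points - center) @ (s * rot).T + center + t
```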
Inference. For Creative Birds, we noticed that the quality of the generated sketch was higher when the generated eye was not too small and not too close to the initial stroke. Motivated by this observation, we used the following trick: We sample 10 eyes for each initial stroke and rank them based on the pixel sum and the distance to the initial stroke. We combine the ranks using Borda count and pick the highest ranked eye for the following steps. Using this we see an improvement in the generation quality for Creative Birds, but not for Creative Creatures. Since humans in our data collection process were asked to add at least five parts to the sketch, we employ the same rule for our part selector – only after five parts have been added is the selector given the option to predict stop. Once a part has been drawn, that part cannot be selected again. G DETAILS OF MODEL TRAINED TO COMPUTE FID AND DIVERSITY SCORES FID computes the similarity between two image distributions. It first embeds the images into a feature space defined by a trained Inception model (Szegedy et al., 2016). It then fits a multivariate Gaussian to the feature vectors from both sets of images. Finally, it computes the Fréchet distance between the two Gaussians. To this end, we trained an Inception model on the QuickDraw3.8M dataset (Xu et al., 2020). Specifically, we preprocess the dataset based on the instructions in the paper (Xu et al., 2020) and render each vector sketch into a 64 × 64 raster image. The dataset contains 345 classes and each class contains 9,000 training samples, 1,000 validation samples, and 1,000 test samples. We train the Inception model with the RMSprop optimizer (Tieleman & Hinton, 2017) for 50 epochs. We use an initial learning rate of 0.045 with an exponential decay at a rate of 0.98 after every epoch. The trained model achieved a reasonable test accuracy of 76.42% (the state-of-the-art accuracy is 80.51%, reported in SketchMate (Xu et al., 2020) using 224 × 224 images). Generation diversity (GD) calculates the average pairwise Euclidean distance between features computed for the sketches with a trained model, reflecting the diversity of the generations. To be consistent, we use the same Inception model trained on QuickDraw3.8M to calculate these features. H PROPOSED METRICS Here we provide more details about the characteristic score (CS) and semantic diversity score (SDS) metrics introduced in Section 5.1. We first construct the label sets B and C, which include the labels from the 345 QuickDraw classes that represent a bird or creature. The details of the two sets can be found below. Then for the Creative Birds and Creative Creatures datasets, we define the characteristic score as the percentage of the time a generated sketch is classified as a label in B and C respectively. The characteristic score gives us a basic understanding of the generation quality, indicating whether the sketches are recognizable as the relevant concepts (birds and creatures) we used to create the dataset. We define the semantic diversity score as follows: SDS = p_C ∑_{l∈C} −(p_l / p_C) log(p_l / p_C) = −∑_{l∈C} p_l log(p_l / p_C), where p_l represents the average probability that a sketch in the generation collection is classified as label l, and p_C represents the average probability that a generated sketch is classified as a creature, i.e., p_C = ∑_{l∈C} p_l. 
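Both metrics are direct functions of the Inception model's class probabilities, so they can be transcribed as below; `probs` is an assumed (num_sketches, 345) matrix of softmax outputs, and the label sets are the index versions of B and C listed after this section:

```python
import numpy as np

def characteristic_score(probs, label_set):
    """Fraction of generated sketches whose top class lies in B (birds) or C
    (creatures). probs: (num_sketches, num_classes) softmax outputs."""
    return float(np.mean(np.isin(probs.argmax(axis=1), label_set)))

def semantic_diversity_score(probs, creature_set):
    """SDS = -sum_{l in C} p_l log(p_l / p_C), where p_l is the average probability
    of label l over the collection and p_C = sum_{l in C} p_l."""
    p_l = probs[:, creature_set].mean(axis=0)
    p_C = p_l.sum()
    return float(-(p_l * np.log(p_l / p_C)).sum())
```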
The intuition is similar to the Inception Score (IS) (Salimans et al., 2016). Unlike IS, however, SDS does not penalize a sketch for receiving a high probability for more than one label. In fact, a sketch that looks like a combination of two creatures is probably quite creative! Such examples are shown in Figure 6 in the paper. Thus, in SDS we only look at the sum of the label distribution of the generation collection to capture two aspects of creativity: across generations, 1) the distribution should have high variety across creature categories (measured by the entropy of the marginal creature distribution), and 2) the distribution should place most of its mass on creature categories. The bird label set B and creature label set C from the QuickDraw categories are as follows: B = [bird, duck, flamingo, parrot] C = [ant, bear, bee, bird, butterfly, camel, cat, cow, crab, crocodile, dog, dolphin, duck, elephant, fish, flamingo, frog, giraffe, hedgehog, horse, kangaroo, lion, lobster, monkey, mosquito, mouse, octopus, owl, panda, parrot, penguin, pig, rabbit, raccoon, rhinoceros, scorpion, sea turtle, shark, sheep, snail, snake, spider, squirrel, swan, tiger, whale, zebra] I MULTIMODAL PART GENERATION Conditional GANs tend to ignore the input noise and base their generations completely on the conditional information (Isola et al., 2017). We encounter similar challenges in our approach, especially for the parts later in the order, when the conditional information is strong. For the eye generator, when the conditional signal is weak (just the initial stroke), the model responds to the noise well. We demonstrate this with the style mixing experiment conducted in the StyleGAN paper (Karras et al., 2019). Specifically, given the same initial strokes, we sample two random noise vectors and feed them into different layers of the generator. In Figure 18a, each row has the same input noise for the lower generator layers and each column has the same input noise for the higher generator layers. We see that the lower-layer noise controls the position of the generated eyes while the higher-layer noise controls the size and shape of the eyes. However, for generators of later parts, the generation tends to have little variation given different noise vectors, as illustrated by the tail generation results in Figure 18b. Although the generator does not respond to the noise, we find that it is very sensitive to the conditional information. Specifically, if we slightly translate the input partial sketch, the generator generates parts with different appearances and positions. As shown in Figure 17, we randomly translate the input partial sketch, feed it to the generator, and then translate the generated parts back to the correct positions. Figure 18c shows different generations for the tail given the same input partial sketch. We call this trick conditioning perturbation. This trick is motivated by the formulation in SketchRNN (Ha & Eck, 2018) where GMM parameters instead of deterministic points are used as the RNN output as a mechanism to introduce variation. J GENERATIVE VS. RETRIEVAL In addition to the generation-based methods (StyleGAN2, SketchRNN) that we compared to in the main paper, we now consider a strong retrieval-based baseline. Specifically, we hold out 5% of the sketches in our datasets to use as query sketches. We extract the first N parts from these sketches and use them to match against partial sketches containing N parts from the remaining 95% of the dataset, using the average Chamfer distance across parts. The rest of the matched sketch is returned to complete the query partial sketch. 
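A bare-bones version of this baseline is sketched below, assuming each part is available as a 2D point set; the symmetric Chamfer distance is the standard point-set formulation, and the database interface is hypothetical:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a: (N, 2) and b: (M, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def retrieve_completion(query_parts, database):
    """query_parts: the first N parts of a query sketch, each a (K_i, 2) array.
    database: list of (parts, full_sketch) pairs from the 95% retrieval split.
    Returns the full sketch whose first N parts best match the query under the
    average Chamfer distance across parts."""
    n = len(query_parts)
    best_sketch, best_cost = None, float("inf")
    for parts, full_sketch in database:
        if len(parts) < n:
            continue
        cost = np.mean([chamfer_distance(q, p)
                        for q, p in zip(query_parts, parts[:n])])
        if cost < best_cost:
            best_sketch, best_cost = full_sketch, cost
    return best_sketch
```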
A limitation of this approach is that the quality of the match will deteriorate if the application requires multiple completions of the same query partial sketch (e.g., to present as options to a human-in-the-loop). A retrieval approach will have to retrieve farther neighbors to find more completions. A generative approach can simply generate multiple samples as completions, with no systematic degradation in quality across samples, as shown in Figure 18c. To evaluate this via human studies, we hypothesize a scenario where 32 candidate completions are desired. We compare the 32nd retrieved completion with a “32nd” sample from DoodlerGAN (in our approach there is no natural ordering to the generations), and ask human subjects in which sketch the query is better integrated. We experiment with 200 queries containing 2 and 3 parts each. DoodlerGAN is statistically significantly better than the retrieval approach, preferred 54% and 55% of the time with 2- and 3-part queries respectively. Recall that the retrieval approach, by definition, reproduces human sketches as completions. On the other hand, we saw in Figure 16 that DoodlerGAN generates sketches that are different from the training data, making it conceptually a more creative approach. K ABLATION STUDY These ablation studies demonstrate the effectiveness of different design choices in DoodlerGAN. Part generator: Figure 19a shows sketches generated if the entire partial sketch is used as the conditioning information without the part-wise channel representation. We see that it is difficult for the part generator to learn the spatial relationships between different parts. Notice that the head (red part) often does not include the eye inside it. Out of a sample of 512 head generations, we find that the generated head surrounds the eye only 57.4% of the time when not using part channels, compared to 88.7% of the time with part channels. Figure 19b shows generations without the U-Net architecture, directly concatenating the encoder output with the noise vector instead. We see that the quality of the part composition is much worse than when using the U-Net (Figure 19c), which encodes the spatial information of the conditioning sketch in the feature map. Part selector: Figure 20 shows sketches generated if, instead of using a part selector, we generate parts in a fixed order (the most common order in the dataset). We find that the use of a part selector allows the model to use the initial stroke much more effectively. Consider the two sketches with red boundaries shown in Figure 20a. They have the same initial strokes. When using a part selector, a head is not generated and the initial stroke is used as the head. The generation with a fixed order, in contrast, still generates a head, and this error propagates to subsequent part generations. A similar observation can be made for the beak generation in the two sketches with blue boundaries shown in Figure 20a. Furthermore, the part selector plays an important role in choosing appropriate part lists and stopping the generation at an appropriate step. For instance, it is not reasonable to generate all 16 parts in every creature sketch, which greatly undermines the quality, as shown in Figure 20b. A part selector is also important if one wants to implement our method in a human-in-the-loop scenario where the model would need to identify what the human has already drawn, and then decide what to draw next. L CONVERTING GENERATED SKETCHES TO VECTOR IMAGES DoodlerGAN generates sketches in raster image format. 
One disadvantage of raster images is that their resolution cannot be easily increased without introducing artifacts. The benefit of vector images is that they can be rendered at arbitrary resolutions. We convert raster images of sketches to vector representations (SVGs) using the open source tool mkbitmap, which relies on the potrace program (Selinger, 2003) based on a polygon tracing algorithm. The parameters used are shown in Table 4. Example conversions are shown in Figure 21. Notice that in addition to the change in format, the conversion also introduces a different aesthetic to the sketch. We conduct a human study to evaluate this aesthetic. We found that subjects prefer the new aesthetic 96% of the time! Note that this is while keeping the resolution of the two aesthetics the same. We hypothesize that this may be because the converted images do not have any blurry artifacts and the strokes are smoother. For completeness, in Figure 21 we also show a couple of these converted sketches at a higher resolution.
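Such a conversion can be scripted as follows; since Table 4 is not reproduced here, the flag values are illustrative placeholders, and the commands assume mkbitmap and potrace are installed (mkbitmap reads PNM/BMP input):

```python
import subprocess

def raster_to_svg(bmp_path, svg_path):
    """Vectorize a raster sketch saved as BMP into an SVG via mkbitmap + potrace."""
    pbm_path = bmp_path.rsplit(".", 1)[0] + ".pbm"
    # High-pass filter, upscale, and threshold the sketch into a clean bitmap;
    # the numeric values are placeholders, not the parameters from Table 4.
    subprocess.run(["mkbitmap", "-f", "2", "-s", "2", "-t", "0.48",
                    "-o", pbm_path, bmp_path], check=True)
    # Trace the bitmap into a scalable SVG with potrace's polygon tracing.
    subprocess.run(["potrace", "--svg", "-o", svg_path, pbm_path], check=True)
```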
1. What are the contributions of the paper regarding creative datasets and their applications? 2. What are the strengths of the proposed approach in sketch generation, particularly in its multi-stage generation fashion? 3. Do you have any concerns or questions regarding the experimental design or results? 4. How does the reviewer assess the clarity and quality of the paper's content?
Review
Review This paper puts forward two creative datasets - Creative Birds and Creative Creatures - which have corresponding part annotations. Based on a part selector and a part generator, the authors demonstrate a multi-stage sketch generation approach, DoodlerGAN, which achieves state of the art compared with other methods on several metrics. The topic of creative generation is interesting and is gradually becoming a future research direction in the image generation field. There are some pros: The datasets provide a novel perspective by decoupling the basic elements of sketch images with fine edge-level semantic labels and text descriptions. These annotations could be used in fine-grained domain adaptation tasks, text-to-sketch generation, or some reverse tasks. The reviewer thinks this contribution is meaningful and solid. The sketch generation fashion is enlightening. Although the idea of multi-stage generation has been used in many synthesis tasks, it is mainly for different resolutions or granularities (local and global). Combining partial strokes into an overall sketch image is a domain-specific technique, which the authors reimplement in a deep learning way. The reviewer has to say that it may have some limitations in other fields; it would be appreciated if the authors could share new possibilities, but that is not the main point. The experiments are solid. Both the quantitative evaluation and the human evaluation outperform other baseline methods, which is convincing. Some concerns are also raised: How are the predicted labels added to the generator? This does not seem to be shown in Figure 3(a); the reviewer wants to know more details. The reviewer is confused about the noise range in Figure 3(b), which is N(0,1); however, there is a different description in Sec. 4.2, namely N(0,2). Which one is the true sampling distribution? The full loss functions are missing from the main body and should be presented more clearly.