NIPS

Title
Provably Efficient Causal Reinforcement Learning with Confounded Observational Data
Abstract
Empowered by neural networks, deep reinforcement learning (DRL) achieves tremendous empirical success. However, DRL requires a large dataset by interacting with the environment, which is unrealistic in critical scenarios such as autonomous driving and personalized medicine. In this paper, we study how to incorporate the dataset collected in the offline setting to improve the sample efficiency in the online setting. To incorporate the observational data, we face two challenges. (a) The behavior policy that generates the observational data may depend on unobserved random variables (confounders), which affect the received rewards and transition dynamics. (b) Exploration in the online setting requires quantifying the uncertainty given both the observational and interventional data. To tackle such challenges, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments.
1 Introduction
Empowered by the breakthrough in neural networks, deep reinforcement learning (DRL) achieves significant empirical successes in various scenarios [19, 23, 36, 37]. Learning an expressive function approximator necessitates collecting a large dataset. Specifically, in the online setting, it requires the agent to interact with the environment for a large number of steps. For example, to learn a human-level policy for playing Atari games, the agent has to interact with a simulator for more than $10^8$ steps [13]. However, in most scenarios, we do not have access to a simulator that allows for trial and error without any cost. Meanwhile, in critical scenarios, e.g., autonomous driving and personalized medicine, trial and error in the real world is unsafe and even unethical. As a result, it remains challenging to apply DRL to more scenarios.
To bypass such a barrier, we study how to incorporate the dataset collected offline, namely the observational data, to improve the sample efficiency of RL in the online setting [21]. In contrast to the interventional data collected online in possibly expensive ways, observational data are often abundantly available in various scenarios. For example, in autonomous driving, we have access to trajectories generated by the drivers. As another example, in personalized medicine, we have access to electronic health records from doctors. However, to incorporate the observational data in a provably efficient way, we have to address two challenges.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
• The observational data are possibly confounded. Specifically, there often exist unobserved random variables, namely confounders, that causally affect the agent and the environment at the same time. In particular, the policy used to generate the observational data, namely the behavior policy, possibly depends on the confounders. Meanwhile, the confounders possibly affect the received rewards and the transition dynamics. In the example of autonomous driving [9, 22], the drivers may be affected by complicated traffic or poor road design, resulting in traffic accidents even without misconduct. The complicated traffic and poor road design subsequently affect both the action of the drivers and the outcome. Therefore, it is unclear from the observational data whether the accidents are due to the actions adopted by the drivers. Agents trained with such observational data may be unwilling to take any actions under complicated traffic, jeopardizing the safety of passengers. In the example of personalized medicine [8, 29], the patients may not be compliant with prescriptions and instructions, which subsequently affects both the treatment and the outcome. As another example, the doctor may prescribe medicine to patients based on patients’ socioeconomic status (which could be inferred by the doctor through interacting with the patients). Meanwhile, socioeconomic status affects the patients’ health condition and subsequently plays the role of the confounder. In both scenarios, such confounders may be unavailable due to privacy or ethical concerns. Such a confounding issue makes the observational data uninformative and even misleading for identifying and estimating the causal effect, which is crucial for decision-making in the online setting. In all the examples, it is unclear from the observational data whether the outcome is due to the actions adopted.
• Even without the confounding issue, it remains unclear how the observational data may facilitate exploration in the online setting, which is the key to the sample efficiency of RL. At the core of exploration is uncertainty quantification. Specifically, quantifying the uncertainty that remains given the dataset collected up to the current step, including the observational data and the interventional data, allows us to construct a bonus. When incorporated into the reward, such a bonus encourages the agent to explore the less visited state-action pairs with more uncertainty. In particular, constructing such a bonus requires quantifying the amount of information carried over by the observational data from the offline setting, which also plays a key role in characterizing the regret, especially how much the observational data may facilitate reducing the regret. Uncertainty quantification becomes even more challenging when the observational data are confounded. Specifically, as the behavior policy depends on the confounders, there is a mismatch between the data generating processes in the offline setting and the online setting. As a result, it remains challenging to quantify how much information carried over from the offline setting is useful for the online setting, as the observational data are uninformative and even misleading due to the confounding issue.
Contribution. To study causal reinforcement learning, we propose a class of Markov decision processes (MDPs), namely confounded MDPs, which captures the data generating processes in both the offline setting and the online setting as well as their mismatch due to the confounding issue. In particular, we study two tractable cases of confounded MDPs in the episodic setting with linear function approximation [7, 16, 42, 43].
• In the first case, the confounders are partially observed in the observational data. Assuming that an observed subset of the confounders satisfies the backdoor criterion [32], we propose the deconfounded optimistic value iteration (DOVI) algorithm, which explicitly corrects for the confounding bias in the observational data using the backdoor adjustment.
• In the second case, the confounders are unobserved in the observational data. Assuming that there exists an observed set of intermediate states that satisfies the frontdoor criterion [32], we propose an extension of DOVI, namely DOVI+, which explicitly corrects for the confounding bias in the observational data using the composition of two backdoor adjustments. We remark that DOVI+ follows the same principle of design as DOVI and defer the discussion of DOVI+ to §A.
In both cases, the adjustments allow DOVI and DOVI+ to incorporate the observational data into the interventional data while bypassing the confounding issue. It further enables estimating the causal effect of a policy on the received rewards and the transition dynamics with enlarged effective sample size. Moreover, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information carried over from the offline setting.
In particular, we prove that DOVI and DOVI+ attain a $\Delta_H \cdot \sqrt{d^3 H^3 T}$-regret up to logarithmic factors, where $d$ is the dimension of features, $H$ is the length of each episode, and $T = HK$ is the number of steps taken in the online setting, where $K$ is the number of episodes. Here the multiplicative factor $\Delta_H > 0$ depends on $d$, $H$, and a notion of information gain that quantifies the amount of information obtained from the interventional data additionally when given the properly adjusted observational data. When the observational data are unavailable or uninformative upon the adjustments, $\Delta_H$ is a logarithmic factor. Correspondingly, DOVI and DOVI+ attain the optimal $\sqrt{T}$-regret achievable in the pure online setting [7, 16, 42, 43]. When the observational data are sufficiently informative upon the adjustments, $\Delta_H$ decreases towards zero as the effective sample size of the observational data increases, which quantifies how much the observational data may facilitate exploration in the online setting.
Related Work. Our work is related to the study of causal bandit [20]. The goal of causal bandit is to obtain the optimal intervention in the online setting where the data generating process is described by a causal diagram. The previous study establishes causal bandit algorithms in the online setting [26, 34], the offline setting [17, 18], and a combination of both settings [11]. In contrast to this line of work, we study causal RL in a combination of the online setting and the offline setting. Causal RL is more challenging than causal bandit, which corresponds to $H = 1$, as it involves the transition dynamics and poses a greater challenge for exploration. See §B for a detailed literature review on causal bandit.
Our work is related to the study of causal RL considered in various settings. [45] propose a model-based RL algorithm that solves dynamic treatment regimes (DTR), which involve a combination of the online setting and the offline setting. Their algorithm hinges on the analysis of sensitivity [3, 27, 38, 44], which constructs a set of feasible models of the transition dynamics based on the confounded observational data. Correspondingly, their algorithm achieves exploration by choosing an optimistic model of the transition dynamics from such a feasible set. In contrast, we propose a model-free RL algorithm, which achieves exploration through the bonus based on a notion of information gain. It is worth mentioning that the assumption of [45] is weaker than ours, as theirs does not allow for identifying the causal effect. As a result of partial identification, the regret of their algorithm is the same as the regret in the pure online setting as $T \to +\infty$. In contrast, our work instantiates the following framework for handling confounders in reinforcement learning. (a) First, we propose the estimation equation based on the observations, which identifies the causal effect of actions on the cumulative reward. (b) Second, we conduct point estimation and uncertainty quantification based on the observations and the estimation equation. (c) Finally, we conduct exploration based on the uncertainty quantification and achieve the regret reduction in the online setting. Consequently, the regret of our algorithm is smaller than the regret in the pure online setting by a multiplicative factor for all $T$. [25] propose a model-based RL algorithm in a combination of the online setting and the offline setting. Their algorithm uses a variational autoencoder (VAE) for estimating a structural causal model (SCM) based on the confounded observational data. In particular, their algorithm utilizes the actor-critic algorithm to obtain the optimal policy in such an SCM. However, the regret of their algorithm remains unclear. [6] propose a model-based RL algorithm in the pure online setting that learns the optimal policy in a partially observable Markov decision process (POMDP). The regret of their algorithm also remains unclear. [35] utilize generative adversarial reinforcement learning to reconstruct the transition dynamics with confounders, and [40] propose a model-based approach for POMDPs based on adjustment with proxy variables. [30] consider off-policy policy evaluation under one-decision confounding and construct worst-case bounds with theoretical guarantees. [4] utilize states and actions as proxy variables to tackle off-policy policy evaluation with confounders. In contrast, our work utilizes backdoor and frontdoor adjustments to handle confounded observations.
2 Confounded Reinforcement Learning
Structural Causal Model. We denote a structural causal model (SCM) [32] by a tuple $(A, B, F, P)$. Here $A$ is the set of exogenous (unobserved) variables, $B$ is the set of endogenous (observed) variables, $F$ is the set of structural functions capturing the causal relations, each of which determines an endogenous variable $v \in B$ based on the other exogenous and endogenous variables, and $P$ is the distribution of all the exogenous variables. We say that a pair of variables $Y$ and $Z$ are confounded by a variable $W$ if they are both caused by $W$.
An intervention on a set of endogenous variables X ⊆ B assigns a value x to X regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by do(X = x) the intervention on X and write do(x) if it is clear from the context. Similarly, a stochastic intervention [10, 28] on a set of endogenous variables X ⊆ B assigns a distribution p to X regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by do(X ∼ p) the stochastic intervention on X .
Confounded Markov Decision Process. To characterize a Markov decision process (MDP) in the offline setting with observational data, which are possibly confounded, we introduce an SCM, where the endogenous variables are the states $\{s_h\}_{h\in[H]}$, actions $\{a_h\}_{h\in[H]}$, and rewards $\{r_h\}_{h\in[H]}$. Let $\{w_h\}_{h\in[H]}$ be the confounders. In §3, we assume that the confounders are partially observed, while in §A, we assume that they are unobserved. The set of structural functions $F$ consists of the transition of states $s_{h+1} \sim P_h(\cdot \mid s_h, a_h, w_h)$, the transition of confounders $w_h \sim \widetilde{P}_h(\cdot \mid s_h)$, the behavior policy $a_h \sim \nu_h(\cdot \mid s_h, w_h)$, which depends on the confounder $w_h$, and the reward function $r_h(s_h, a_h, w_h)$. See Figure 1 for the causal diagram that describes such an SCM.
Here $a_h$ and $s_{h+1}$ are confounded by $w_h$ in addition to $s_h$. We denote such a confounded MDP by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{W}, H, P, r)$, where $H$ is the length of an episode, $\mathcal{S}$, $\mathcal{A}$, and $\mathcal{W}$ are the spaces of states, actions, and confounders, respectively, $r = \{r_h\}_{h\in[H]}$ is the set of reward functions, and $P = \{P_h, \widetilde{P}_h\}_{h\in[H]}$ is the set of transition kernels. In the sequel, we assume without loss of generality that $r_h$ takes values in $[0, 1]$ for all $h \in [H]$. In the online setting that allows for intervention, we assume that the confounders $\{w_h\}_{h\in[H]}$ are unobserved. A policy $\pi = \{\pi_h\}_{h\in[H]}$ induces the stochastic intervention $\mathrm{do}(a_1 \sim \pi_1(\cdot \mid s_1), \ldots, a_H \sim \pi_H(\cdot \mid s_H))$, which does not depend on the confounders. In particular, an agent interacts with the environment as follows. At the beginning of the $k$-th episode, the environment arbitrarily selects an initial state $s_1^k$ and the agent selects a policy $\pi^k = \{\pi_h^k\}_{h\in[H]}$. At the $h$-th step of the $k$-th episode, the agent observes the state $s_h^k$ and takes the action $a_h^k \sim \pi_h^k(\cdot \mid s_h^k)$. The environment randomly selects the confounder $w_h^k \sim \widetilde{P}_h(\cdot \mid s_h^k)$, which is unobserved, and the agent receives the reward $r_h^k = r_h(s_h^k, a_h^k, w_h^k)$. The environment then transits into the next state $s_{h+1}^k \sim P_h(\cdot \mid s_h^k, a_h^k, w_h^k)$.
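To make the difference between the two data generating processes concrete, the following minimal Python sketch simulates one step of the offline process (where the behavior policy observes the confounder) and one step of the online process (where the policy does not). The environment and all probability tables are invented toy placeholders, not anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, nW = 3, 2, 2                                    # toy sizes (hypothetical)

# Toy tables: P_tilde_h(w | s), P_h(s' | s, a, w), r_h(s, a, w), all made up for illustration.
P_tilde = rng.dirichlet(np.ones(nW), size=nS)
P = rng.dirichlet(np.ones(nS), size=(nS, nA, nW))
r = rng.uniform(size=(nS, nA, nW))

def offline_step(s, behavior_policy):
    """Offline data generation: the behavior policy nu_h(. | s, w) sees the confounder w."""
    w = rng.choice(nW, p=P_tilde[s])
    a = rng.choice(nA, p=behavior_policy[s, w])
    s_next = rng.choice(nS, p=P[s, a, w])
    return s, a, w, r[s, a, w], s_next                   # (s, a, u, r, s') tuple

def online_step(s, policy):
    """Online interaction: the policy only sees the state; w stays hidden and unrecorded."""
    w = rng.choice(nW, p=P_tilde[s])                     # sampled by the environment
    a = rng.choice(nA, p=policy[s])                      # do(a ~ pi_h(. | s))
    s_next = rng.choice(nS, p=P[s, a, w])
    return s, a, r[s, a, w], s_next

behavior = rng.dirichlet(np.ones(nA), size=(nS, nW))     # depends on (s, w)
online_pi = rng.dirichlet(np.ones(nA), size=nS)          # depends on s only
print(offline_step(0, behavior))
print(online_step(0, online_pi))
```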
For a policy $\pi = \{\pi_h\}_{h\in[H]}$, which does not depend on the confounders $\{w_h\}_{h\in[H]}$, we define the value function $V^\pi = \{V_h^\pi\}_{h\in[H]}$ as follows,
$$V_h^\pi(s) = \mathbb{E}_\pi\Big[\sum_{j=h}^{H} r_j(s_j, a_j, w_j) \,\Big|\, s_h = s\Big], \quad \forall h \in [H], \qquad (2.1)$$
where we denote by $\mathbb{E}_\pi$ the expectation with respect to the confounders $\{w_j\}_{j=h}^{H}$ and the trajectory $\{(s_j, a_j)\}_{j=h}^{H}$, starting from the state $s_h = s$ and following the policy $\pi$. Correspondingly, we define the action-value function $Q^\pi = \{Q_h^\pi\}_{h\in[H]}$ as follows,
$$Q_h^\pi(s, a) = \mathbb{E}_\pi\Big[\sum_{j=h}^{H} r_j(s_j, a_j, w_j) \,\Big|\, s_h = s, \mathrm{do}(a_h = a)\Big], \quad \forall h \in [H]. \qquad (2.2)$$
We assess the performance of an algorithm using the regret against the globally optimal policy $\pi^* = \{\pi_h^*\}_{h\in[H]}$ in hindsight after $K$ episodes, which is defined as follows,
$$\mathrm{Regret}(T) = \max_\pi \sum_{k=1}^{K} \big( V_1^\pi(s_1^k) - V_1^{\pi^k}(s_1^k) \big) = \sum_{k=1}^{K} \big( V_1^{\pi^*}(s_1^k) - V_1^{\pi^k}(s_1^k) \big). \qquad (2.3)$$
Here $T = HK$ is the total number of steps.
Our goal is to design an algorithm that minimizes the regret defined in (2.3), where $\pi^*$ does not depend on the confounders $\{w_h\}_{h\in[H]}$. In the online setting that allows for intervention, it is well understood how to minimize such a regret [2, 14–16]. However, it remains unclear how to efficiently utilize the observational data obtained in the offline setting, which are possibly confounded. In real-world applications, e.g., autonomous driving and personalized medicine, such observational data are often abundant, whereas intervention in the online setting is often restricted. We refer to §C for a comparison between the confounded MDP and other extensions of MDP, including the dynamic treatment regime (DTR), partially observable MDP (POMDP), and contextual MDP (CMDP).
Why is Incorporating Confounded Observational Data Challenging? Straightforwardly incorporating the confounded observational data into an online algorithm possibly leads to an undesirable regret due to the mismatch between the online and offline data generating processes. In particular, due to the existence of the confounders $\{w_h\}_{h\in[H]}$, which are partially observed (§3) or unobserved (§A), the conditional probability $\mathbb{P}(s_{h+1} \mid s_h, a_h)$ in the offline setting is different from the causal effect $\mathbb{P}(s_{h+1} \mid s_h, \mathrm{do}(a_h))$ in the online setting [33]. More specifically, it holds that
$$\mathbb{P}(s_{h+1} \mid s_h, a_h) = \frac{\mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \mid s_h)}\big[ P_h(s_{h+1} \mid s_h, a_h, w_h) \cdot \nu_h(a_h \mid s_h, w_h) \big]}{\mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \mid s_h)}\big[ \nu_h(a_h \mid s_h, w_h) \big]},$$
$$\mathbb{P}\big(s_{h+1} \mid s_h, \mathrm{do}(a_h)\big) = \mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \mid s_h)}\big[ P_h(s_{h+1} \mid s_h, a_h, w_h) \big].$$
In other words, without proper covariate adjustments [32], the confounded observational data may not be informative for estimating the transition dynamics and the associated action-value function in the online setting. To this end, we propose an algorithm that incorporates the confounded observational data in a provably efficient manner. Moreover, our analysis quantifies the amount of information carried over by the confounded observational data from the offline setting and to what extent it helps reduce the regret in the online setting.
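The mismatch above is easy to verify numerically. The sketch below builds one step of a toy confounded MDP (all probability tables are invented for illustration) and compares the confounded conditional $\mathbb{P}(s_{h+1}\mid s_h, a_h)$ with the interventional quantity $\mathbb{P}(s_{h+1}\mid s_h, \mathrm{do}(a_h))$; the two differ whenever the behavior policy depends on the confounder.

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, nW = 3, 2, 2                                   # toy sizes (hypothetical)
P_tilde = rng.dirichlet(np.ones(nW), size=nS)          # P_tilde_h(w | s)
P = rng.dirichlet(np.ones(nS), size=(nS, nA, nW))      # P_h(s' | s, a, w)
nu = rng.dirichlet(np.ones(nA), size=(nS, nW))         # behavior policy nu_h(a | s, w)

def conditional(s, a):
    """P(s' | s, a) as observed offline: w is integrated against its posterior given (s, a)."""
    num = sum(P_tilde[s, w] * nu[s, w, a] * P[s, a, w] for w in range(nW))
    den = sum(P_tilde[s, w] * nu[s, w, a] for w in range(nW))
    return num / den

def interventional(s, a):
    """P(s' | s, do(a)): w is integrated against P_tilde_h(. | s) only."""
    return sum(P_tilde[s, w] * P[s, a, w] for w in range(nW))

s, a = 0, 1
print("conditional    :", np.round(conditional(s, a), 3))
print("interventional :", np.round(interventional(s, a), 3))
```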
3 Algorithm and Theory for Partially Observed Confounder
In this section, we propose the Deconfounded Optimistic Value Iteration (DOVI) algorithm. DOVI handles the case where the confounders are unobserved in the online setting but are partially observed in the offline setting. We then characterize the regret of DOVI. We defer to §A the extension of DOVI, namely DOVI+, which handles the case where the confounders are unobserved in both the online setting and the offline setting.
3.1 Algorithm
Backdoor Adjustment. In the online setting that allows for intervention, the causal effect of $a_h$ on $s_{h+1}$ given $s_h$, that is, $\mathbb{P}(s_{h+1} \mid s_h, \mathrm{do}(a_h))$, plays a key role in the estimation of the action-value function. Meanwhile, the confounded observational data may not allow us to identify the causal effect $\mathbb{P}(s_{h+1} \mid s_h, \mathrm{do}(a_h))$ if the confounder $w_h$ is unobserved. However, if the confounder $w_h$ is partially observed in the offline setting, the observed subset $u_h$ of $w_h$ allows us to identify the causal effect $\mathbb{P}(s_{h+1} \mid s_h, \mathrm{do}(a_h))$, as long as $u_h$ satisfies the following backdoor criterion.

Assumption 3.1 (Backdoor Criterion [32, 33]). In the SCM defined in §2 and its induced directed acyclic graph (DAG), for all $h \in [H]$, there exists an observed subset $u_h$ of $w_h$ that satisfies the backdoor criterion, that is,

• the elements of $u_h$ are not descendants of $a_h$, and

• conditioning on $s_h$, the elements of $u_h$ d-separate every path between $a_h$ and $s_{h+1}$, $r_h$ that has an incoming arrow into $a_h$.
See Figure 2 for an example that satisfies the backdoor criterion. In particular, we identify the causal effect $\mathbb{P}(s_{h+1} \mid s_h, \mathrm{do}(a_h))$ as follows.

Proposition 3.2 (Backdoor Adjustment [32]). Under Assumption 3.1, it holds for all $h \in [H]$ that
$$\mathbb{P}\big(s_{h+1} \mid s_h, \mathrm{do}(a_h)\big) = \mathbb{E}_{u_h \sim \mathbb{P}(\cdot \mid s_h)}\big[ \mathbb{P}(s_{h+1} \mid s_h, a_h, u_h) \big],$$
$$\mathbb{E}\big[ r_h(s_h, a_h, w_h) \mid s_h, \mathrm{do}(a_h) \big] = \mathbb{E}_{u_h \sim \mathbb{P}(\cdot \mid s_h)}\big[ \mathbb{E}[ r_h(s_h, a_h, w_h) \mid s_h, a_h, u_h ] \big].$$
Here $(s_{h+1}, s_h, a_h, u_h)$ follows the SCM defined in §2, which generates the confounded observational data.

Proof. See [32] for a detailed proof.
With a slight abuse of notation, we write $\mathbb{P}(s_{h+1} \mid s_h, a_h, u_h)$ as $P_h(s_{h+1} \mid s_h, a_h, u_h)$ and $\mathbb{P}(u_h \mid s_h)$ as $\widetilde{P}_h(u_h \mid s_h)$, since they are induced by the SCM defined in §2. In the sequel, we define $\mathcal{U}$ as the space of the observed confounder $u_h$ and write $r_h = r_h(s_h, a_h, w_h)$ for notational simplicity.

Backdoor-Adjusted Bellman Equation. We now formulate the Bellman equation for the confounded MDP. It holds for all $(s_h, a_h) \in \mathcal{S} \times \mathcal{A}$ that
$$Q_h^\pi(s_h, a_h) = \mathbb{E}_\pi\Big[\sum_{j=h}^{H} r_j \,\Big|\, s_h, \mathrm{do}(a_h)\Big] = \mathbb{E}\big[ r_h \mid s_h, \mathrm{do}(a_h) \big] + \mathbb{E}_{s_{h+1}}\big[ V_{h+1}^\pi(s_{h+1}) \big],$$
where $\mathbb{E}_{s_{h+1}}$ denotes the expectation with respect to $s_{h+1} \sim \mathbb{P}(\cdot \mid s_h, \mathrm{do}(a_h))$. Here $\mathbb{E}[ r_h \mid s_h, \mathrm{do}(a_h) ]$ and $\mathbb{P}(\cdot \mid s_h, \mathrm{do}(a_h))$ are characterized in Proposition 3.2. In the sequel, we define the following transition operator and counterfactual reward function,
$$(\mathbb{P}_h V)(s_h, a_h) = \mathbb{E}_{s_{h+1} \sim \mathbb{P}(\cdot \mid s_h, \mathrm{do}(a_h))}\big[ V(s_{h+1}) \big], \quad \forall V: \mathcal{S} \to \mathbb{R},\ (s_h, a_h) \in \mathcal{S} \times \mathcal{A}, \qquad (3.1)$$
$$R_h(s_h, a_h) = \mathbb{E}\big[ r_h \mid s_h, \mathrm{do}(a_h) \big], \quad \forall (s_h, a_h) \in \mathcal{S} \times \mathcal{A}. \qquad (3.2)$$
We have the following Bellman equation,
$$Q_h^\pi(s_h, a_h) = R_h(s_h, a_h) + (\mathbb{P}_h V_{h+1}^\pi)(s_h, a_h), \quad \forall h \in [H],\ (s_h, a_h) \in \mathcal{S} \times \mathcal{A}. \qquad (3.3)$$
Correspondingly, the Bellman optimality equation takes the following form,
$$Q_h^*(s_h, a_h) = R_h(s_h, a_h) + (\mathbb{P}_h V_{h+1}^*)(s_h, a_h), \qquad V_h^*(s_h) = \max_{a_h \in \mathcal{A}} Q_h^*(s_h, a_h), \qquad (3.4)$$
which holds for all $h \in [H]$ and $(s_h, a_h) \in \mathcal{S} \times \mathcal{A}$. Such a Bellman optimality equation allows us to adapt the least-squares value iteration (LSVI) algorithm [2, 5, 14, 16, 31].
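As a sanity check of (3.4), the following sketch runs exact backward induction on a toy tabular confounded MDP, using the backdoor-adjusted transition and reward from Proposition 3.2. The tables are invented placeholders rather than anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
H, nS, nA, nU = 4, 3, 2, 2                                 # toy sizes (hypothetical)
P_tilde = rng.dirichlet(np.ones(nU), size=(H, nS))         # P_tilde_h(u | s)
P = rng.dirichlet(np.ones(nS), size=(H, nS, nA, nU))       # P_h(s' | s, a, u)
r = rng.uniform(size=(H, nS, nA, nU))                      # E[r_h | s, a, u]

V = np.zeros((H + 1, nS))                                  # V_{H+1} = 0
Q = np.zeros((H, nS, nA))
for h in reversed(range(H)):
    for s in range(nS):
        for a in range(nA):
            # Backdoor adjustment: average the model over u ~ P_tilde_h(. | s).
            R_sa = np.dot(P_tilde[h, s], r[h, s, a])       # R_h(s, a) in (3.2)
            P_do = P_tilde[h, s] @ P[h, s, a]              # P(s' | s, do(a))
            Q[h, s, a] = R_sa + P_do @ V[h + 1]            # Bellman backup (3.4)
    V[h] = Q[h].max(axis=1)

print("Q_1(s, a):\n", np.round(Q[0], 3))
```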
Linear Function Approximation. We focus on the following setting with linear transition kernels and reward functions [7, 16, 42, 43], which corresponds to a linear SCM [33].

Assumption 3.3 (Linear Confounded MDP). We assume that
$$P_h(s_{h+1} \mid s_h, a_h, u_h) = \langle \phi_h(s_h, a_h, u_h), \mu_h(s_{h+1}) \rangle, \quad \forall h \in [H],\ (s_{h+1}, s_h, a_h) \in \mathcal{S} \times \mathcal{S} \times \mathcal{A},$$
where $\phi_h(\cdot, \cdot, \cdot)$ and $\mu_h(\cdot) = (\mu_{1,h}(\cdot), \ldots, \mu_{d,h}(\cdot))^\top$ are $\mathbb{R}^d$-valued functions. We assume that $\sum_{i=1}^{d} \|\mu_{i,h}\|_1^2 \le d$ and $\|\phi_h(s_h, a_h, u_h)\|_2 \le 1$ for all $h \in [H]$ and $(s_h, a_h, u_h) \in \mathcal{S} \times \mathcal{A} \times \mathcal{U}$. Meanwhile, we assume that
$$\mathbb{E}[ r_h \mid s_h, a_h, u_h ] = \phi_h(s_h, a_h, u_h)^\top \theta_h, \quad \forall h \in [H],\ (s_h, a_h, u_h) \in \mathcal{S} \times \mathcal{A} \times \mathcal{U}, \qquad (3.5)$$
where $\theta_h \in \mathbb{R}^d$ and $\|\theta_h\|_2 \le \sqrt{d}$ for all $h \in [H]$.
Such a linear setting generalizes the tabular setting where $\mathcal{S}$, $\mathcal{A}$, and $\mathcal{U}$ are finite.

Proposition 3.4. We define the backdoor-adjusted feature as follows,
$$\psi_h(s_h, a_h) = \mathbb{E}_{u_h \sim \widetilde{P}_h(\cdot \mid s_h)}\big[ \phi_h(s_h, a_h, u_h) \big], \quad \forall h \in [H],\ (s_h, a_h) \in \mathcal{S} \times \mathcal{A}. \qquad (3.6)$$
Under Assumption 3.1, it holds that
$$\mathbb{P}(s_{h+1} \mid s_h, \mathrm{do}(a_h)) = \langle \psi_h(s_h, a_h), \mu_h(s_{h+1}) \rangle, \quad \forall h \in [H],\ (s_{h+1}, s_h, a_h) \in \mathcal{S} \times \mathcal{S} \times \mathcal{A}.$$
Moreover, the action-value functions $Q_h^\pi$ and $Q_h^*$ are linear in the backdoor-adjusted feature $\psi_h$ for all $\pi$.
Proof. See §F.1 for a detailed proof.
Such an observation allows us to estimate the action-value function based on the backdoor-adjusted features $\{\psi_h\}_{h\in[H]}$ in the online setting. See §D for a detailed discussion. In the sequel, we assume that either the density of $\{\widetilde{P}_h(\cdot \mid s_h)\}_{h\in[H]}$ is known or the backdoor-adjusted feature $\{\psi_h\}_{h\in[H]}$ is known.
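To make (3.6) concrete, here is a minimal sketch of computing the backdoor-adjusted feature when $\mathcal{U}$ is finite and $\widetilde{P}_h(\cdot \mid s_h)$ is known, as assumed above; the feature map and confounder distribution used here are placeholders, not part of the paper.

```python
import numpy as np

d, nU = 5, 3                                              # toy feature dimension / confounder support

def phi(s, a, u):
    """Hypothetical feature map phi_h(s, a, u) in R^d."""
    local_rng = np.random.default_rng(abs(hash((s, a, u))) % (2**32))
    v = local_rng.normal(size=d)
    return v / np.linalg.norm(v)                          # keeps ||phi||_2 <= 1

def p_tilde(s):
    """Hypothetical known confounder distribution P_tilde_h(. | s) over {0, ..., nU-1}."""
    w = np.arange(1, nU + 1, dtype=float) + s
    return w / w.sum()

def psi(s, a):
    """Backdoor-adjusted feature (3.6): psi_h(s, a) = E_{u ~ P_tilde_h(.|s)}[phi_h(s, a, u)]."""
    probs = p_tilde(s)
    return sum(probs[u] * phi(s, a, u) for u in range(nU))

print(np.round(psi(s=0, a=1), 3))
```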
In the sequel, we introduce the DOVI algorithm (Algorithm 1). Each iteration of DOVI consists of two components, namely point estimation, where we estimate $Q_h^*$ based on the confounded observational data and the interventional data, and uncertainty quantification, where we construct the upper confidence bound (UCB) of the point estimator.
Algorithm 1 Deconfounded Optimistic Value Iteration (DOVI) for Confounded MDP

Require: Observational data $\{(s_h^i, a_h^i, u_h^i, r_h^i)\}_{i\in[n], h\in[H]}$, tuning parameters $\lambda, \beta > 0$, backdoor-adjusted feature $\{\psi_h\}_{h\in[H]}$, which is defined in (3.6).
1: Initialization: Set $\{Q_h^0, V_h^0\}_{h\in[H]}$ as zero functions and $V_{H+1}^k$ as a zero function for $k \in [K]$.
2: for $k = 1, \ldots, K$ do
3:   for $h = H, \ldots, 1$ do
4:     Set $\omega_h^k \leftarrow \mathrm{argmin}_{\omega \in \mathbb{R}^d} \sum_{\tau=1}^{k-1} \big(r_h^\tau + V_{h+1}^\tau(s_{h+1}^\tau) - \omega^\top \psi_h(s_h^\tau, a_h^\tau)\big)^2 + \lambda\|\omega\|_2^2 + L_h^k(\omega)$, where $L_h^k$ is defined in (3.8).
5:     Set $Q_h^k(\cdot, \cdot) \leftarrow \min\{\psi_h(\cdot, \cdot)^\top \omega_h^k + \Gamma_h^k(\cdot, \cdot),\ H - h\}$, where $\Gamma_h^k$ is defined in (3.12).
6:     Set $\pi_h^k(\cdot \mid s_h) \leftarrow \mathrm{argmax}_{a_h \in \mathcal{A}} Q_h^k(s_h, a_h)$ for all $s_h \in \mathcal{S}$.
7:     Set $V_h^k(\cdot) \leftarrow \langle \pi_h^k(\cdot \mid \cdot), Q_h^k(\cdot, \cdot) \rangle_{\mathcal{A}}$.
8:   end for
9:   Obtain $s_1^k$ from the environment.
10:  for $h = 1, \ldots, H$ do
11:    Take $a_h^k \sim \pi_h^k(\cdot \mid s_h^k)$. Obtain $r_h^k = r_h(s_h^k, a_h^k, u_h^k)$ and $s_{h+1}^k$.
12:  end for
13: end for
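The following Python sketch mirrors the structure of Algorithm 1 on a toy problem with finite state and action spaces. It is only a schematic of the updates (3.9), (3.10), and (3.12); the environment, the feature maps, and all hyperparameter values are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
H, K, nS, nA, nU = 3, 25, 3, 2, 2
d = nS * nA                                   # one-hot features over (s, a); illustrative choice
lam, beta = 1.0, 0.5                          # tuning parameters (illustrative values)

# Toy confounded MDP (all tables invented): P_tilde(u|s), P(s'|s,a,u), r(s,a,u), behavior nu(a|s,u).
P_tilde = rng.dirichlet(np.ones(nU), size=nS)
P = rng.dirichlet(np.ones(nS), size=(nS, nA, nU))
R = rng.uniform(size=(nS, nA, nU))
nu = rng.dirichlet(np.ones(nA), size=(nS, nU))

def phi(s, a, u):
    x = np.zeros(d)
    x[s * nA + a] = (u + 1) / nU              # phi_h(s, a, u): one-hot in (s, a), scaled by u
    return x

def psi(s, a):                                # backdoor-adjusted feature (3.6)
    return sum(P_tilde[s, u] * phi(s, a, u) for u in range(nU))

# Offline (confounded) data: the behavior policy sees u.
n = 200
offline = []
for _ in range(n):
    s = rng.integers(nS)
    u = rng.choice(nU, p=P_tilde[s])
    a = rng.choice(nA, p=nu[s, u])
    offline.append((s, a, u, R[s, a, u], rng.choice(nS, p=P[s, a, u])))

online = [[] for _ in range(H)]               # interventional data per step h

for _ in range(K):
    # Backward induction (lines 3-8 of Algorithm 1).
    V_next = np.zeros(nS)
    Q = np.zeros((H, nS, nA))
    for h in reversed(range(H)):
        Lam, b = lam * np.eye(d), np.zeros(d)
        for (s, a, r, s2) in online[h]:       # interventional term of (3.9)-(3.10)
            x = psi(s, a)
            Lam += np.outer(x, x)
            b += x * (r + V_next[s2])
        for (s, a, u, r, s2) in offline:      # observational regularizer L^k_h in (3.8)
            x = phi(s, a, u)
            Lam += np.outer(x, x)
            b += x * (r + V_next[s2])
        omega = np.linalg.solve(Lam, b)
        Lam_inv = np.linalg.inv(Lam)
        for s in range(nS):
            for a in range(nA):
                x = psi(s, a)
                bonus = beta * np.sqrt(np.log1p(x @ Lam_inv @ x))  # (3.12) via the determinant lemma
                Q[h, s, a] = min(x @ omega + bonus, H - h)
        V_next = Q[h].max(axis=1)
    # Online rollout with the greedy policy (lines 9-12); the confounder stays hidden.
    s = int(rng.integers(nS))
    for h in range(H):
        a = int(Q[h, s].argmax())
        u = rng.choice(nU, p=P_tilde[s])
        s2 = int(rng.choice(nS, p=P[s, a, u]))
        online[h].append((s, a, R[s, a, u], s2))
        s = s2

print("Q_1 after", K, "episodes:\n", np.round(Q[0], 2))
```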
Point Estimation. To solve the Bellman optimality equation in (3.4), we minimize the empirical mean-squared Bellman error as follows at each step,
$$\omega_h^k \leftarrow \mathrm{argmin}_{\omega \in \mathbb{R}^d} \sum_{\tau=1}^{k-1} \big( r_h^\tau + V_{h+1}^\tau(s_{h+1}^\tau) - \omega^\top \psi_h(s_h^\tau, a_h^\tau) \big)^2 + \lambda \|\omega\|_2^2 + L_h^k(\omega), \quad h = H, \ldots, 1, \qquad (3.7)$$
where we set $V_{H+1}^k = 0$ for all $k \in [K]$ and $V_{h+1}^\tau$ is defined in Line 7 of Algorithm 1 for all $(\tau, h) \in [K] \times [H-1]$. Here $k$ is the index of the episode, $\lambda > 0$ is a tuning parameter, and $L_h^k$ is a regularizer, which is constructed based on the confounded observational data. More specifically, we define
$$L_h^k(\omega) = \sum_{i=1}^{n} \big( r_h^i + V_{h+1}^k(s_{h+1}^i) - \omega^\top \phi_h(s_h^i, a_h^i, u_h^i) \big)^2, \quad \forall (k, h) \in [K] \times [H], \qquad (3.8)$$
which corresponds to the least-squares loss for regressing $r_h^i + V_{h+1}^k(s_{h+1}^i)$ against $\phi_h(s_h^i, a_h^i, u_h^i)$ for all $i \in [n]$. Here $\{(s_h^i, a_h^i, u_h^i, r_h^i)\}_{(i,h)\in[n]\times[H]}$ are the confounded observational data, where $u_h^i \sim \widetilde{P}_h(\cdot \mid s_h^i)$, $s_{h+1}^i \sim P_h(\cdot \mid s_h^i, a_h^i, u_h^i)$, and $a_h^i \sim \nu_h(\cdot \mid s_h^i, w_h^i)$ with $\nu = \{\nu_h\}_{h\in[H]}$ being the behavior policy. Here recall that, with a slight abuse of notation, we write $\mathbb{P}(s_{h+1} \mid s_h, a_h, u_h)$ as $P_h(s_{h+1} \mid s_h, a_h, u_h)$ and $\mathbb{P}(u_h \mid s_h)$ as $\widetilde{P}_h(u_h \mid s_h)$, since they are induced by the SCM defined in §2. The update in (3.7) takes the following explicit form,
$$\omega_h^k \leftarrow (\Lambda_h^k)^{-1} \Big( \sum_{\tau=1}^{k-1} \psi_h(s_h^\tau, a_h^\tau) \cdot \big( V_{h+1}^k(s_{h+1}^\tau) + r_h^\tau \big) + \sum_{i=1}^{n} \phi_h(s_h^i, a_h^i, u_h^i) \cdot \big( V_{h+1}^k(s_{h+1}^i) + r_h^i \big) \Big), \qquad (3.9)$$
where
$$\Lambda_h^k = \sum_{\tau=1}^{k-1} \psi_h(s_h^\tau, a_h^\tau) \psi_h(s_h^\tau, a_h^\tau)^\top + \sum_{i=1}^{n} \phi_h(s_h^i, a_h^i, u_h^i) \phi_h(s_h^i, a_h^i, u_h^i)^\top + \lambda I. \qquad (3.10)$$
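As a quick check on the closed form, the sketch below builds random stand-in features and targets (purely synthetic) and verifies numerically that the ridge solution (3.9)-(3.10) coincides with the minimizer of the regularized objective (3.7) solved directly as a stacked ridge regression.

```python
import numpy as np

rng = np.random.default_rng(4)
d, k_minus_1, n, lam = 4, 10, 15, 1.0                  # toy sizes (hypothetical)

Psi = rng.normal(size=(k_minus_1, d))                  # psi_h(s^tau, a^tau), online rows
y_on = rng.normal(size=k_minus_1)                      # r^tau + V_{h+1}(s^tau_{h+1})
Phi = rng.normal(size=(n, d))                          # phi_h(s^i, a^i, u^i), offline rows
y_off = rng.normal(size=n)                             # r^i + V_{h+1}(s^i_{h+1})

# Closed form (3.9)-(3.10).
Lam = Psi.T @ Psi + Phi.T @ Phi + lam * np.eye(d)
omega_closed = np.linalg.solve(Lam, Psi.T @ y_on + Phi.T @ y_off)

# Direct minimization of (3.7): stacked ridge regression over both datasets.
X = np.vstack([Psi, Phi])
y = np.concatenate([y_on, y_off])
omega_direct = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

print(np.allclose(omega_closed, omega_direct))         # True
```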
Uncertainty Quantification. We now construct the UCB $\Gamma_h^k(\cdot, \cdot)$ of the point estimator $\psi_h(\cdot, \cdot)^\top \omega_h^k$ obtained from (3.9), which encourages the exploration of the less visited state-action pairs. To this end, we employ the following notion of information gain to motivate the UCB,
$$\Gamma_h^k(s_h^k, a_h^k) \propto H(\omega_h^k \mid \xi^{k-1}) - H\big(\omega_h^k \mid \xi^{k-1} \cup \{(s_h^k, a_h^k)\}\big), \qquad (3.11)$$
where $H(\omega_h^k \mid \xi^{k-1})$ is the differential entropy of the random variable $\omega_h^k$ given the data $\xi^{k-1}$. In particular, $\xi^{k-1} = \{(s_h^\tau, a_h^\tau, r_h^\tau)\}_{(\tau,h)\in[k-1]\times[H]} \cup \{(s_h^i, a_h^i, u_h^i, r_h^i)\}_{(i,h)\in[n]\times[H]}$ consists of the confounded observational data and the interventional data up to the $(k-1)$-th episode. However, it is challenging to characterize the distribution of $\omega_h^k$. To this end, we consider a Bayesian counterpart of the confounded MDP, where the prior of $\omega_h^k$ is $N(0, I/\lambda)$ and the residual of the regression problem in (3.7) is $N(0, 1)$. In such a “parallel” confounded MDP, the posterior of $\omega_h^k$ follows $N(\mu_{k,h}, (\Lambda_h^k)^{-1})$, where $\Lambda_h^k$ is defined in (3.10) and $\mu_{k,h}$ coincides with the right-hand side of (3.9). Moreover, it holds for all $(s_h^k, a_h^k) \in \mathcal{S} \times \mathcal{A}$ that
$$H(\omega_h^k \mid \xi^{k-1}) = 1/2 \cdot \log\det\big( (2\pi e)^d \cdot (\Lambda_h^k)^{-1} \big),$$
$$H\big(\omega_h^k \mid \xi^{k-1} \cup \{(s_h^k, a_h^k)\}\big) = 1/2 \cdot \log\det\big( (2\pi e)^d \cdot (\Lambda_h^k + \psi_h(s_h^k, a_h^k)\psi_h(s_h^k, a_h^k)^\top)^{-1} \big).$$
Correspondingly, we employ the following UCB, which instantiates (3.11), that is,
$$\Gamma_h^k(s_h^k, a_h^k) = \beta \cdot \Big( \log\det\big( \Lambda_h^k + \psi_h(s_h^k, a_h^k)\psi_h(s_h^k, a_h^k)^\top \big) - \log\det\big( \Lambda_h^k \big) \Big)^{1/2} \qquad (3.12)$$
for all $(s_h^k, a_h^k) \in \mathcal{S} \times \mathcal{A}$. Here $\beta > 0$ is a tuning parameter. We highlight that, although the information gain in (3.11) relies on the “parallel” confounded MDP, the UCB in (3.12), which is used in Line 5 of Algorithm 1, does not rely on the Bayesian perspective. Also, our analysis establishes the frequentist regret.
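For intuition, the log-determinant difference in (3.12) can be evaluated cheaply: by the matrix determinant lemma it equals $\log(1 + \psi^\top (\Lambda_h^k)^{-1} \psi)$, so the bonus is large exactly when $\psi_h(s, a)$ points in a direction the data have not covered. The snippet below (with a made-up $\Lambda$ and feature vector) checks that the two expressions agree.

```python
import numpy as np

rng = np.random.default_rng(5)
d, beta = 4, 1.0
A = rng.normal(size=(30, d))
Lam = A.T @ A + np.eye(d)                 # a stand-in for Lambda^k_h (positive definite)
psi = rng.normal(size=d)

_, logdet1 = np.linalg.slogdet(Lam + np.outer(psi, psi))
_, logdet0 = np.linalg.slogdet(Lam)
bonus_logdet = beta * np.sqrt(logdet1 - logdet0)                         # (3.12) as written
bonus_lemma = beta * np.sqrt(np.log1p(psi @ np.linalg.solve(Lam, psi)))  # determinant lemma form

print(np.isclose(bonus_logdet, bonus_lemma))   # True
```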
Regularization with Observational Data: A Bayesian Perspective. In the “parallel” confounded MDP, it holds that
$$\omega_h^k \sim N(0, I/\lambda), \qquad \omega_h^k \mid \xi^0 \sim N\big(\mu_{1,h}, (\Lambda_h^1)^{-1}\big), \qquad \omega_h^k \mid \xi^{k-1} \sim N\big(\mu_{k,h}, (\Lambda_h^k)^{-1}\big),$$
where $\mu_{k,h}$ coincides with the right-hand side of (3.9) and $\mu_{1,h}$ is defined by setting $k = 1$ in $\mu_{k,h}$. Here $\xi^0 = \{(s_h^i, a_h^i, u_h^i, r_h^i)\}_{(i,h)\in[n]\times[H]}$ are the confounded observational data. Hence, the regularizer $L_h^k$ in (3.8) corresponds to using $\omega_h^k \mid \xi^0$ as the prior for the Bayesian regression problem given only the interventional data $\xi^{k-1} \setminus \xi^0 = \{(s_h^\tau, a_h^\tau, r_h^\tau)\}_{(\tau,h)\in[k-1]\times[H]}$.
3.2 Theory
The following theorem characterizes the regret of DOVI, which is defined in (2.3).
Theorem 3.5 (Regret of DOVI). Let $\beta = C d H \sqrt{\log(d(T + nH)/\zeta)}$ and $\lambda = 1$, where $C > 0$ and $\zeta \in (0, 1]$ are absolute constants. Under Assumptions 3.1 and 3.3, it holds with probability at least $1 - 5\zeta/2$ that
$$\mathrm{Regret}(T) \le C' \cdot \Delta_H \cdot \sqrt{d^3 H^3 T} \cdot \sqrt{\log\big( d(T + nH)/\zeta \big)}, \qquad (3.13)$$
where $C' > 0$ is an absolute constant and
$$\Delta_H = \frac{1}{\sqrt{d H^2}} \sum_{h=1}^{H} \big( \log\det(\Lambda_h^{K+1}) - \log\det(\Lambda_h^1) \big)^{1/2}. \qquad (3.14)$$
Proof. See §F.3 for a detailed proof.
Note that $\Lambda_h^{K+1} \preceq (n + K + \lambda) I$ and $\Lambda_h^1 \succeq \lambda I$ for all $h \in [H]$. Hence, it holds that $\Delta_H = O(\sqrt{\log(n + K + 1)})$ in the worst case. Thus, the regret of DOVI is $O(\sqrt{d^3 H^3 T})$ up to logarithmic factors, which is optimal in the total number of steps $T$ if we only consider the online setting. However, $\Delta_H$ is possibly much smaller than $O(\sqrt{\log(n + K + 1)})$, depending on the amount of information carried over by the confounded observational data from the offline setting, which is quantified in the following.
Interpretation of $\Delta_H$: An Information-Theoretic Perspective. Let $\omega_h^*$ be the parameter of the globally optimal action-value function $Q_h^*$, which corresponds to $\pi^*$ in (2.3). Recall that we denote by $\xi^0$ and $\xi^K$ the confounded observational data $\{(s_h^i, a_h^i, u_h^i, r_h^i)\}_{(i,h)\in[n]\times[H]}$ and the union $\{(s_h^i, a_h^i, u_h^i, r_h^i)\}_{(i,h)\in[n]\times[H]} \cup \{(s_h^k, a_h^k, r_h^k)\}_{(k,h)\in[K]\times[H]}$ of the confounded observational data and the interventional data up to the $K$-th episode, respectively. We consider the aforementioned Bayesian counterpart of the confounded MDP, where the prior of $\omega_h^*$ is also $N(0, I/\lambda)$. In such a “parallel” confounded MDP, we have
$$\omega_h^* \sim N(0, I/\lambda), \qquad \omega_h^* \mid \xi^0 \sim N\big(\mu_{1,h}^*, (\Lambda_h^1)^{-1}\big), \qquad \omega_h^* \mid \xi^K \sim N\big(\mu_{K,h}^*, (\Lambda_h^{K+1})^{-1}\big), \qquad (3.15)$$
where
$$\mu_{1,h}^* = (\Lambda_h^1)^{-1} \sum_{i=1}^{n} \phi_h(s_h^i, a_h^i, u_h^i) \cdot \big( V_{h+1}^*(s_{h+1}^i) + r_h^i \big),$$
$$\mu_{K,h}^* = (\Lambda_h^{K+1})^{-1} \Big( \Lambda_h^1 \mu_{1,h}^* + \sum_{\tau=1}^{K} \psi_h(s_h^\tau, a_h^\tau) \cdot \big( V_{h+1}^*(s_{h+1}^\tau) + r_h^\tau \big) \Big).$$
It then holds for the right-hand side of (3.14) that
$$1/2 \cdot \log\det(\Lambda_h^{K+1}) - 1/2 \cdot \log\det(\Lambda_h^1) = H(\omega_h^* \mid \xi^0) - H(\omega_h^* \mid \xi^K). \qquad (3.16)$$
The left-hand side of (3.16) characterizes the information gain of intervention in the online setting given the confounded observational data in the offline setting. In other words, if the confounded observational data are sufficiently informative upon the backdoor adjustment, then $\Delta_H$ is small, which implies that the regret is small. More specifically, the matrices $(\Lambda_h^1)^{-1}$ and $(\Lambda_h^{K+1})^{-1}$ defined in (3.10) characterize the ellipsoidal confidence sets given $\xi^0$ and $\xi^K$, respectively. If the confounded observational data are sufficiently informative upon the backdoor adjustment, $\Lambda_h^{K+1}$ is close to $\Lambda_h^1$. To illustrate, let $\{\psi_h(s_h^\tau, a_h^\tau)\}_{(\tau,h)\in[K]\times[H]}$ and $\{\phi_h(s_h^i, a_h^i, u_h^i)\}_{(i,h)\in[n]\times[H]}$ be sampled uniformly at random from the canonical basis $\{e_\ell\}_{\ell\in[d]}$ of $\mathbb{R}^d$. It then holds that $\Lambda_h^{K+1} \approx (K + n)I/d + \lambda I$ and $\Lambda_h^1 \approx nI/d + \lambda I$. Hence, for $\lambda = 1$ and sufficiently large $n$ and $K$, we have $\Delta_H = O(\sqrt{\log(1 + K/(n + d))}) = O(\sqrt{K/(n + d)})$. For example, for $n = \Omega(K^2)$, it holds that $\Delta_H = O(n^{-1/2})$, which implies that the regret of DOVI is $O(n^{-1/2} \cdot \sqrt{d^3 H^3 T})$. In other words, if the confounded observational data are sufficiently informative upon the backdoor adjustment, the regret of DOVI can be arbitrarily small given a sufficiently large sample size $n$ of the confounded observational data, which is often the case in practice [8, 9, 21, 22, 29].
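The canonical-basis illustration above is easy to reproduce numerically: the sketch below samples the features uniformly from $\{e_\ell\}_{\ell\in[d]}$, forms $\Lambda_h^1$ and $\Lambda_h^{K+1}$ as in (3.10), and reports $\Delta_H$ from (3.14) for increasing offline sample sizes $n$, showing it shrink as the observational data grow. All sizes are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(6)
d, H, K, lam = 10, 5, 500, 1.0

def delta_H(n):
    """Compute Delta_H from (3.14) when features are drawn uniformly from the canonical basis."""
    total = 0.0
    for _ in range(H):
        counts_off = np.bincount(rng.integers(d, size=n), minlength=d)               # offline phi's
        counts_all = counts_off + np.bincount(rng.integers(d, size=K), minlength=d)  # plus online psi's
        _, ld1 = np.linalg.slogdet(np.diag(counts_off + lam))                        # log det Lambda_h^1
        _, ldK = np.linalg.slogdet(np.diag(counts_all + lam))                        # log det Lambda_h^{K+1}
        total += np.sqrt(ldK - ld1)
    return total / np.sqrt(d * H**2)

for n in [0, 100, 1000, 10000, 100000]:
    print(f"n = {n:6d}   Delta_H = {delta_H(n):.3f}")
```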
4 Conclusion
In this paper, we propose the deconfounded optimistic value iteration (DOVI) algorithm and its variant DOVI+, which incorporate the confounded observational data into online reinforcement learning in a provably efficient manner. DOVI and DOVI+ explicitly adjust for the confounding bias in the observational data via the backdoor and frontdoor adjustments, respectively. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which considers the amount of information acquired from the offline dataset. We further conduct regret analysis of DOVI and DOVI+. Our analysis suggests that practitioners can tackle the confounding issue in the offline dataset by estimating the counterfactual reward for value function estimation, given that a proper adjustment such as the backdoor or frontdoor adjustment is available. In the case of backdoor and frontdoor adjustments, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments, suggesting that one can exploit the confounded observational data in reinforcement learning upon proper adjustments. In our future study, we wish to incorporate proxy variables that are native to MDPs for the adjustments of the offline dataset, such as the variables exploited by [4, 24, 40].
Acknowledgements
Zhaoran Wang acknowledges National Science Foundation (Awards 2048075, 2008827, 2015568, 1934931), Simons Institute (Theory of Reinforcement Learning), Amazon, J.P. Morgan, and Two Sigma for their support. Zhuoran Yang acknowledges Simons Institute (Theory of Reinforcement Learning). The authors also thank the anonymous reviewers, whose invaluable suggestions helped the authors improve the paper.

Prompt

1. What is the main contribution of the paper regarding reinforcement learning with confounded observational data?
2. How does the proposed algorithm compare to other approaches in terms of sample efficiency and regret bound?
3. Can you explain how the frontdoor and backdoor criteria are used in the context of reinforcement learning?
4. How does the paper's approach differ from previous works that have addressed confounding in reinforcement learning?
5. Are there any limitations to the applicability of the proposed algorithms in real-world scenarios? If so, what are they?
6. How would you respond to the reviewer's concern about the paper's connection to existing reinforcement learning literature?
7. Can you elaborate on how the partially observed MDP relates to Block MDP?
8. How should observational data be collected to incorporate confounders in an online setup?
9. What practical scenarios can you think of where the assumptions of the backdoor criterion hold?
10. How would you address the reviewer's suggestion for a conclusion or discussion section summarizing the contribution and results?

Summary Of The Paper
This paper proposes an algorithm to learn the optimal policy in reinforcement learning in the presence of confounded observational data. It proposes to remove confounders from the data and improve sample efficiency in online settings. It addresses two scenarios, partially observed and unobserved confounders, by using two techniques, the backdoor and the frontdoor criterion. It further theoretically analyzes the regret bound in these settings.
Review
Originality: This paper addresses an important problem of leveraging confounded observational data in learning an optimal policy in the reinforcement learning framework. The work is novel, and it incorporates some well-known techniques (frontdoor and backdoor adjustment) in the context of reinforcement learning. Related work needs to be included on how this work falls under the existing (deep) reinforcement learning framework.
Quality: The work is technically sound, while an empirical evaluation is needed to confirm the effectiveness of the proposed algorithms.
Clarity: The paper is well-written and easy to follow. Necessary assumptions are stated.
Significance: The work has value and has the potential to be used by others. However, further discussion is needed on how the work fits into the existing reinforcement learning literature.
Detailed comments:
The proposed algorithms and bounds are analyzed based on the linearity assumptions (i.e., linear SCM, linear confounded MDP). How does the analysis depend on this assumption? How can it be generalized to non-linear cases? If it is not straightforward, I suggest explicitly mentioning this assumption in the abstract and introduction.
The paper motivates the neural network function approximation in deep reinforcement learning. However, it is still unclear to me if such a scenario is discussed in the context of the proposed method.
Does Algorithm 1 assume a tabular reinforcement learning setup? If that is the case, how do those bounds hold with function approximation, such as using a neural network to approximate the value function, Q-function, and policy? These are the setups used extensively in the current RL literature (deep reinforcement learning).
In Algorithm 1, how is the online data used to estimate the value function, Q-function, and policy? It seems the online data from lines 10-12 is not saved in any buffer. How does it then influence the policy?
How do these assumptions, e.g., that a frontdoor adjustment exists, map onto real-world examples?
How are these methods relevant to deep reinforcement learning, as motivated in the abstract and intro? Some discussion along this line is needed, though the author mentions this is relevant to causal bandits.
How the partially observed confounder is handled, that is, the backdoor criterion assumption, limits the use of the proposed algorithms. What are some practical scenarios where this assumption holds in the context of reinforcement learning? Justifying this would be useful for readers. How can a practical RL algorithm be derived?
Backdoor adjustment limits the applicability of the algorithm/method. Can you give some practical scenarios where these assumptions hold?
What if the confounder is fully observed? What difference will it make compared to the partially observed case? How does it impact the regret bound or algorithmic convergence?
It is not clear why the reward function in equation (3.2) is called the counterfactual reward function.
Is the observational data (first line of Algorithm 1) collected in an offline setting? How is the online data (lines 10-12 in Algorithm 1) used? Is it stored in a buffer and then merged with the observational data? What policy (random or human demonstration) was used to collect the observational data?
Empirical evaluation will strengthen the paper and verify the feasibility of the proposed algorithm in practice. While I understand this paper focuses on theory, the feasibility of the assumptions needs to be discussed in the paper. For example, an example can be added stating the assumptions and how they relate to some real-world settings, which would help readers better understand the implications of the proposed algorithms.
The literature review should address how this paper’s algorithm fits into existing RL literature, which seems to be missing in the paper. Section B discusses the comparison with causal bandits.
How does the confounded MDP relate to the previously proposed Block MDP[1]?
The observational data require an additional record of confounders (partially observed confounders and intermediate states), which a standard policy in the RL framework does not provide. So how does the observational data need to be collected to be incorporated in the online setup? This needs to be discussed in the paper.
The paper seems to end abruptly without a proper conclusion. I suggest adding a conclusion or discussion section summarizing the contribution and results.
Reference: [1] Provably efficient RL with rich observations via latent state decoding. Simon Du, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal, Miroslav Dudik, and John Langford. In International Conference on Machine Learning, pages 1665–1674. PMLR, 2019.
NIPS | Title
Provably Efficient Causal Reinforcement Learning with Confounded Observational Data
Abstract
Empowered by neural networks, deep reinforcement learning (DRL) achieves tremendous empirical success. However, DRL requires a large dataset by interacting with the environment, which is unrealistic in critical scenarios such as autonomous driving and personalized medicine. In this paper, we study how to incorporate the dataset collected in the offline setting to improve the sample efficiency in the online setting. To incorporate the observational data, we face two challenges. (a) The behavior policy that generates the observational data may depend on unobserved random variables (confounders), which affect the received rewards and transition dynamics. (b) Exploration in the online setting requires quantifying the uncertainty given both the observational and interventional data. To tackle such challenges, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments.
N/A
Empowered by neural networks, deep reinforcement learning (DRL) achieves tremendous empirical success. However, DRL requires a large dataset by interacting with the environment, which is unrealistic in critical scenarios such as autonomous driving and personalized medicine. In this paper, we study how to incorporate the dataset collected in the offline setting to improve the sample efficiency in the online setting. To incorporate the observational data, we face two challenges. (a) The behavior policy that generates the observational data may depend on unobserved random variables (confounders), which affect the received rewards and transition dynamics. (b) Exploration in the online setting requires quantifying the uncertainty given both the observational and interventional data. To tackle such challenges, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments.
1 Introduction
Empowered by the breakthrough in neural networks, deep reinforcement learning (DRL) achieves significant empirical successes in various scenarios [19, 23, 36, 37]. Learning an expressive function approximator necessitates collecting a large dataset. Specifically, in the online setting, it requires the agent to interact with the environment for a large number of steps. For example, to learn a human-level policy for playing Atari games, the agent has to interact with a simulator for more than 108 steps [13]. However, in most scenarios, we do not have access to a simulator that allows for trial and error without any cost. Meanwhile, in critical scenarios, e.g., autonomous driving and personalized medicine, trial and error in the real world is unsafe and even unethical. As a result, it remains challenging to apply DRL to more scenarios.
To bypass such a barrier, we study how to incorporate the dataset collected offline, namely the observational data, to improve the sample efficiency of RL in the online setting [21]. In contrast to the interventional data collected online in possibly expensive ways, observational data are often abundantly available in various scenarios. For example, in autonomous driving, we have access to trajectories generated by the drivers. As another example, in personalized medicine, we have access to electronic health records from doctors. However, to incorporate the observational data in a provably efficient way, we have to address two challenges.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
• The observational data are possibly confounded. Specifically, there often exist unobserved random variables, namely confounders, that causally affect the agent and the environment at the same time. In particular, the policy used to generate the observational data, namely the behavior policy, possibly depends on the confounders. Meanwhile, the confounders possibly affect the received rewards and the transition dynamics. In the example of autonomous driving [9, 22], the drivers may be affected by complicated traffic or poor road design, resulting in traffic accidents even without misconduct. The complicated traffic and poor road design subsequently affect both the action of the drivers and the outcome. Therefore, it is unclear from the observational data whether the accidents are due to the actions adopted by the drivers. Agents trained with such observational data may be unwilling to take any actions under complicated traffic, jeopardizing the safety of passengers. In the example of personalized medicine [8, 29], the patients may not be compliant with prescriptions and instructions, which subsequently affects both the treatment and the outcome. As another example, the doctor may prescribe medicine to patients based on patients’ socioeconomic status (which could be inferred by the doctor through interacting with the patients). Meanwhile, socioeconomic status affects the patients’ health condition and subsequently plays the role of the confounder. In both scenarios, such confounders may be unavailable due to privacy or ethical concerns. Such a confounding issue makes the observational data uninformative and even misleading for identifying and estimating the causal effect, which is crucial for decision-making in the online setting. In all the examples, it is unclear from the observational data whether the outcome is due to the actions adopted.
• Even without the confounding issue, it remains unclear how the observational data may facilitate exploration in the online setting, which is the key to the sample efficiency of RL. At the core of exploration is uncertainty quantification. Specifically, quantifying the uncertainty that remains given the dataset collected up to the current step, including the observational data and the interventional data, allows us to construct a bonus. When incorporated into the reward, such a bonus encourages the agent to explore the less visited state-action pairs with more uncertainty. In particular, constructing such a bonus requires quantifying the amount of information carried over by the observational data from the offline setting, which also plays a key role in characterizing the regret, especially how much the observational data may facilitate reducing the regret. Uncertainty quantification becomes even more challenging when the observational data are confounded. Specifically, as the behavior policy depends on the confounders, there is a mismatch between the data generating processes in the offline setting and the online setting. As a result, it remains challenging to quantify how much information carried over from the offline setting is useful for the online setting, as the observational data are uninformative and even misleading due to the confounding issue.
Contribution. To study causal reinforcement learning, we propose a class of Markov decision processes (MDPs), namely confounded MDPs, which captures the data generating processes in both the offline setting and the online setting as well as their mismatch due to the confounding issue. In particular, we study two tractable cases of confounded MDPs in the episodic setting with linear function approximation [7, 16, 42, 43].
• In the first case, the confounders are partially observed in the observational data. Assuming that an observed subset of the confounders satisfies the backdoor criterion [32], we propose the deconfounded optimistic value iteration (DOVI) algorithm, which explicitly corrects for the confounding bias in the observational data using the backdoor adjustment.
• In the second case, the confounders are unobserved in the observational data. Assuming that there exists an observed set of intermediate states that satisfies the frontdoor criterion [32], we propose an extension of DOVI, namely DOVI+, which explicitly corrects for the confounding bias in the observational data using the composition of two backdoor adjustments. We remark that DOVI+ follows the same principle of design as DOVI and defer the discussion of DOVI+ to §A.
In both cases, the adjustments allow DOVI and DOVI+ to incorporate the observational data into the interventional data while bypassing the confounding issue. It further enables estimating the causal effect of a policy on the received rewards and the transition dynamics with enlarged effective sample size. Moreover, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information carried over from the offline setting.
In particular, we prove that DOVI and DOVI+ attain the ∆H · √ d3H3T -regret up to logarithmic factors, where d is the dimension of features, H is the length of each episode, and T = HK is the number of steps taken in the online setting, where K is the number of episodes. Here the multiplicative factor ∆H > 0 depends on d, H , and a notion of information gain that quantifies the amount of information obtained from the interventional data additionally when given the properly adjusted observational data. When the observational data are unavailable or uninformative upon the adjustments, ∆H is a logarithmic factor. Correspondingly, DOVI and DOVI+ attain the optimal√ T -regret achievable in the pure online setting [7, 16, 42, 43]. When the observational data are sufficiently informative upon the adjustments, ∆H decreases towards zero as the effective sample size of the observational data increases, which quantifies how much the observational data may facilitate exploration in the online setting.
Related Work. Our work is related to the study of causal bandit [20]. The goal of causal bandit is to obtain the optimal intervention in the online setting where the data generating process is described by a causal diagram. The previous study establishes causal bandit algorithms in the online setting [26, 34], the offline setting [17, 18], and a combination of both settings [11]. In contrast to this line of work, we study causal RL in a combination of the online setting and the offline setting. Causal RL is more challenging than causal bandit, which corresponds toH = 1, as it involves the transition dynamics and is more challenging in exploration. See §B for a detailed literature review on causal bandit.
Our work is related to the study of causal RL considered in various settings. [45] propose a modelbased RL algorithm that solves dynamic treatment regimes (DTR), which involve a combination of the online setting and the offline setting. Their algorithm hinges on the analysis of sensitivity [3, 27, 38, 44], which constructs a set of feasible models of the transition dynamics based on the confounded observational data. Correspondingly, their algorithm achieves exploration by choosing an optimistic model of the transition dynamics from such a feasible set. In contrast, we propose a model-free RL algorithm, which achieves exploration through the bonus based on a notion of information gain. It is worth mentioning that the assumption of [45] is weaker than ours as theirs does not allow for identifying the causal effect. As a result of partial identification, the regret of their algorithm is the same as the regret in the pure online setting as T → +∞. In contrast, our work instantiates the following framework in handling confounders for reinforcement learning. (a) First, we propose the estimation equation based on the observations, which identifies the causal effect of actions on the cumulative reward. (b) Second, we conduct point estimation and uncertainty quantification based on observations and the estimation equation. (c) Finally, we conduct exploration based on the uncertainty quantification and achieve the regret reduction in the online setting. Consequently, the regret of our algorithm is smaller than the regret in the pure online setting by a multiplicative factor for all T . [25] propose a model-based RL algorithm in a combination of the online setting and the offline setting. Their algorithm uses a variational autoencoder (VAE) for estimating a structural causal model (SCM) based on the confounded observational data. In particular, their algorithm utilizes the actor-critic algorithm to obtain the optimal policy in such an SCM. However, the regret of their algorithm remains unclear. [6] propose a model-based RL algorithm in the pure online setting that learns the optimal policy in a partially observable Markov decision process (POMDP). The regret of their algorithm also remains unclear. [35] utilize generative adversarial reinforcement learning to reconstruct transition dynamics with confounder, and [40] propose a model-based approach for POMDP based on adjustment with proxy variables. [30] consider offpolicy policy evaluation under one-decision confounding and constructs worst-case bounds with theoretical guarantee. [4] utilizes states and actions as proxy variables to tackle off-policy policy evaluation with confounders. In contrast, our work utilizes backdoor and frontdoor adjustments to handle confounded observation.
2 Confounded Reinforcement Learning
Structural Causal Model. We denote a structural causal model (SCM) [32] by a tuple (A,B, F, P ). Here A is the set of exogenous (unobserved) variables, B is the set of endogenous (observed) variables, F is the set of structural functions capturing the causal relations, which determines an endogenous variable v ∈ B based on the other exogenous and endogenous variables, and P is the distribution of all the exogenous variables. We say that a pair of variables Y and Z are confounded by a variable W if they are both caused by W .
An intervention on a set of endogenous variables X ⊆ B assigns a value x to X regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by do(X = x) the intervention on X and write do(x) if it is clear from the context. Similarly, a stochastic intervention [10, 28] on a set of endogenous variables X ⊆ B assigns a distribution p to X regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by do(X ∼ p) the stochastic intervention on X .
Confounded Markov Decision Process. To characterize a Markov decision process (MDP) in the offline setting with observational data, which are possibly confounded, we introduce an SCM, where the endogenous variables are the states {sh}h∈[H], actions {ah}h∈[H], and rewards {rh}h∈[H]. Let {wh}h∈[H] be the confounders. In §3, we assume that the confounders are partially observed, while in §A, we assume that they are unobserved. The set of structural functions F consists of the transition of states sh+1 ∼ Ph(· | sh, ah, wh), the transition of confounders wh ∼ P̃h(· | sh), the behavior policy ah ∼ νh(· | sh, wh), which depends on the confounder wh, and the reward function rh(sh, ah, wh). See Figure 1 for the causal diagram that describes such an SCM.
Here ah and sh+1 are confounded by wh in addition to sh. We denote such a confounded MDP by the tuple (S, A, W, H, P, r), where H is the length of an episode, S, A, and W are the spaces of states, actions, and confounders, respectively, r = {rh}h∈[H] is the set of reward functions, and P = {Ph, P̃h}h∈[H] is the set of transition kernels. In the sequel, we assume without loss of generality that rh takes value in [0, 1] for all h ∈ [H]. In the online setting that allows for intervention, we assume that the confounders {wh}h∈[H] are unobserved. A policy π = {πh}h∈[H] induces the stochastic intervention do(a1 ∼ π1(· | s1), . . . , aH ∼ πH(· | sH)), which does not depend on the confounders. In particular, an agent interacts with the environment as follows. At the beginning of the k-th episode, the environment arbitrarily selects an initial state sk1 and the agent selects a policy πk = {πkh}h∈[H]. At the h-th step of the k-th episode, the agent observes the state skh and takes the action akh ∼ πkh(· | skh). The environment randomly selects the confounder wkh ∼ P̃h(· | skh), which is unobserved, and the agent receives the reward rkh = rh(skh, akh, wkh). The environment then transits into the next state skh+1 ∼ Ph(· | skh, akh, wkh).
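For concreteness, the following self-contained Python sketch simulates one episode of this interaction protocol in a toy confounded MDP; the dynamics, reward table, and policy here are hypothetical stand-ins invented for illustration and are not taken from the paper.

```python
# One episode of online interaction in a confounded MDP: the confounder w_h is drawn
# by the environment and never revealed to the agent; toy dynamics, made-up numbers.
import numpy as np

H, S, A, W = 4, 3, 2, 2
rng = np.random.default_rng(0)
P_w = rng.dirichlet(np.ones(W), size=S)             # P~_h(w | s), shared across h for brevity
P_s = rng.dirichlet(np.ones(S), size=(S, A, W))     # P_h(s' | s, a, w)
r = rng.uniform(size=(S, A, W))                     # r_h(s, a, w)

def policy(s):                                      # any policy that ignores w
    return rng.integers(A)

s = 0
for h in range(H):
    a = policy(s)
    w = rng.choice(W, p=P_w[s])                     # unobserved by the agent
    reward = r[s, a, w]                             # observed
    s = rng.choice(S, p=P_s[s, a, w])               # observed
    print(f"h={h}: a={a}, r={reward:.2f}, s'={s}")
```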
For a policy π = {πh}h∈H , which does not depend on the confounders {wh}h∈[H], we define the value function V π = {V πh }h∈[H] as follows,
V πh (s) = Eπ [ H∑ j=h rj(sj , aj , wj) ∣∣∣∣ sh = s], ∀h ∈ [H], (2.1) where we denote by Eπ the expectation with respect to the confounders {wj}Hj=h and the trajectory {(sj , aj)}Hj=h, starting from the state sj = s and following the policy π. Correspondingly, we define the action-value function Qπ = {Qπh}h∈[H] as follows,
Qπh(s, a) = Eπ [ H∑ j=h rj(sj , aj , wj) ∣∣∣∣ sh = s,do(ah = a)], ∀h ∈ [H]. (2.2)
We assess the performance of an algorithm using the regret against the globally optimal policy π∗ = {π∗h}h∈[H] in hindsight after K episodes, which is defined as follows,
$$\mathrm{Regret}(T) = \max_{\pi} \sum_{k=1}^{K} \big( V^{\pi}_1(s^k_1) - V^{\pi^k}_1(s^k_1) \big) = \sum_{k=1}^{K} \big( V^{\pi^*}_1(s^k_1) - V^{\pi^k}_1(s^k_1) \big). \qquad (2.3)$$
Here T = HK is the total number of steps.
Our goal is to design an algorithm that minimizes the regret defined in (2.3), where π∗ does not depend on the confounders {wh}h∈[H]. In the online setting that allows for intervention, it is well understood how to minimize such a regret [2, 14–16]. However, it remains unclear how to efficiently utilize the observational data obtained in the offline setting, which are possibly confounded. In real-world applications, e.g., autonomous driving and personalized medicine, such observational data are often abundant, whereas intervention in the online setting is often restricted. We refer to §C for a comparison between the confounded MDP and other extensions of MDP, including the dynamic treatment regime (DTR), partially observable MDP (POMDP), and contextual MDP (CMDP).
Why is Incorporating Confounded Observational Data Challenging? Straightforwardly incorporating the confounded observational data into an online algorithm possibly leads to an undesirable regret due to the mismatch between the online and offline data generating processes. In particular, due to the existence of the confounders {wh}h∈[H], which are partially observed (§3) or unobserved (§A), the conditional probability P(sh+1 | sh, ah) in the offline setting is different from the causal effect P(sh+1 | sh,do(ah)) in the online setting [33]. More specifically, it holds that
$$\mathbb{P}(s_{h+1} \mid s_h, a_h) = \frac{\mathbb{E}_{w_h \sim \tilde{P}_h(\cdot \mid s_h)}\big[ P_h(s_{h+1} \mid s_h, a_h, w_h) \cdot \nu_h(a_h \mid s_h, w_h) \big]}{\mathbb{E}_{w_h \sim \tilde{P}_h(\cdot \mid s_h)}\big[ \nu_h(a_h \mid s_h, w_h) \big]}, \qquad \mathbb{P}\big(s_{h+1} \mid s_h, \mathrm{do}(a_h)\big) = \mathbb{E}_{w_h \sim \tilde{P}_h(\cdot \mid s_h)}\big[ P_h(s_{h+1} \mid s_h, a_h, w_h) \big].$$
In other words, without proper covariate adjustments [32], the confounded observational data may not be informative for estimating the transition dynamics and the associated action-value function in the online setting. To this end, we propose an algorithm that incorporates the confounded observational data in a provably efficient manner. Moreover, our analysis quantifies the amount of information carried over by the confounded observational data from the offline setting and to what extent it helps reduce the regret in the online setting.
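To illustrate the mismatch numerically, the following sketch evaluates both quantities in a toy example with a binary confounder; all transition kernels and behavior-policy values are made up for this illustration and are not taken from the paper.

```python
# Toy check that P(s' | s, a) from confounded logs differs from P(s' | s, do(a)).
# Binary confounder w, binary action a, two next states; all numbers are hypothetical.
import numpy as np

p_w = np.array([0.5, 0.5])                    # P~_h(w | s) for a fixed state s
nu = np.array([[0.9, 0.1],                    # behavior policy nu_h(a | s, w): rows w, cols a
               [0.2, 0.8]])
P_next = np.array([[[0.8, 0.2], [0.6, 0.4]],  # P_h(s' | s, a, w): indices [w][a][s']
                   [[0.3, 0.7], [0.1, 0.9]]])

a = 0
# Confounded conditional: E_w[P(s'|s,a,w) * nu(a|s,w)] / E_w[nu(a|s,w)]
numer = sum(p_w[w] * nu[w, a] * P_next[w, a] for w in range(2))
denom = sum(p_w[w] * nu[w, a] for w in range(2))
conditional = numer / denom

# Causal effect under do(a): E_w[P(s'|s,a,w)], with w drawn from P~_h(w | s) only.
interventional = sum(p_w[w] * P_next[w, a] for w in range(2))

print("P(s' | s, a=0)      =", conditional)      # skewed toward the w = 0 dynamics
print("P(s' | s, do(a=0))  =", interventional)   # plain average over w
```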
3 Algorithm and Theory for Partially Observed Confounder
In this section, we propose the Deconfounded Optimistic Value Iteration (DOVI) algorithm. DOVI handles the case where the confounders are unobserved in the online setting but are partially observed in the offline setting. We then characterize the regret of DOVI. We defer the extension of DOVI, namely DOVI+, to §A which handles the case where the confounders are unobserved in both the online setting and the offline setting.
3.1 Algorithm
Backdoor Adjustment. In the online setting that allows for intervention, the causal effect of ah on sh+1 given sh, that is, P(sh+1 | sh,do(ah)), plays a key role in the estimation of the action-value function. Meanwhile, the confounded observational data may not allow us to identify the causal effect P(sh+1 | sh,do(ah)) if the confounder wh is unobserved. However, if the confounder wh is partially observed in the offline setting, the observed subset uh of wh allows us to identify the causal effect P(sh+1 | sh,do(ah)), as long as uh satisfies the following backdoor criterion. Assumption 3.1 (Backdoor Criterion [32, 33]). In the SCM defined in §2 and its induced directed acyclic graph (DAG), for all h ∈ [H], there exists an observed subset uh of wh that satisfies the backdoor criterion, that is,
• the elements of uh are not the descendants of ah, and
• conditioning on sh, the elements of uh d-separate every path between ah and sh+1, rh that has an incoming arrow into ah.
See Figure 2 for an example that satisfies the backdoor criterion. In particular, we identify the causal effect P(sh+1 | sh,do(ah)) as follows.
Proposition 3.2 (Backdoor Adjustment [32]). Under Assumption 3.1, it holds for all h ∈ [H] that
$$\mathbb{P}\big(s_{h+1} \mid s_h, \mathrm{do}(a_h)\big) = \mathbb{E}_{u_h \sim \mathbb{P}(\cdot \mid s_h)}\big[ \mathbb{P}(s_{h+1} \mid s_h, a_h, u_h) \big], \qquad \mathbb{E}\big[ r_h(s_h, a_h, w_h) \mid s_h, \mathrm{do}(a_h) \big] = \mathbb{E}_{u_h \sim \mathbb{P}(\cdot \mid s_h)}\Big[ \mathbb{E}\big[ r_h(s_h, a_h, w_h) \mid s_h, a_h, u_h \big] \Big].$$
Here (s_{h+1}, s_h, a_h, u_h) follows the SCM defined in §2, which generates the confounded observational data.
Proof. See [32] for a detailed proof.
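As an illustration of how the backdoor adjustment would be applied to logged data, the following sketch contrasts the naive conditional estimate with the adjusted estimate on a tiny hypothetical dataset of tuples (s, a, u, s'); the counts are fabricated solely for this example.

```python
# Backdoor adjustment from logged tuples (s, a, u, s'): a minimal tabular sketch.
# The adjusted estimate averages over P(u | s), whereas the naive conditional
# implicitly averages over P(u | s, a), which is biased by the behavior policy.
from collections import Counter, defaultdict

logs = [  # (s, a, u, s_next), a tiny made-up confounded dataset
    (0, 0, 0, 1), (0, 0, 0, 1), (0, 0, 0, 0), (0, 0, 1, 0),
    (0, 1, 1, 1), (0, 1, 1, 1), (0, 1, 0, 0), (0, 1, 1, 0),
]

def prob(counter):
    total = sum(counter.values())
    return {k: v / total for k, v in counter.items()}

s, a = 0, 0
# Empirical P^(u | s)
p_u = prob(Counter(u for (si, ai, u, sn) in logs if si == s))
# Empirical P^(s' | s, a, u) for each u
p_next_given_u = {}
for u in p_u:
    c = Counter(sn for (si, ai, ui, sn) in logs if (si, ai, ui) == (s, a, u))
    p_next_given_u[u] = prob(c)
# Backdoor-adjusted estimate of P(s' | s, do(a)): sum_u P^(u | s) * P^(s' | s, a, u)
adjusted = defaultdict(float)
for u, pu in p_u.items():
    for sn, p in p_next_given_u.get(u, {}).items():
        adjusted[sn] += pu * p

naive = prob(Counter(sn for (si, ai, ui, sn) in logs if (si, ai) == (s, a)))
print("naive    P^(s' | s, a)      =", dict(naive))
print("adjusted P^(s' | s, do(a))  =", dict(adjusted))
```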
With a slight abuse of notation, we write P(sh+1 | sh, ah, uh) as Ph(sh+1 | sh, ah, uh) and P(uh | sh) as P̃h(uh | sh), since they are induced by the SCM defined in §2. In the sequel, we define U as the space of the observed confounders uh and write rh = rh(sh, ah, wh) for notational simplicity.
Backdoor-Adjusted Bellman Equation. We now formulate the Bellman equation for the confounded MDP. It holds for all (sh, ah) ∈ S × A that
$$Q^\pi_h(s_h, a_h) = \mathbb{E}_\pi\Big[ \sum_{j=h}^{H} r_j(s_j, a_j, u_j) \,\Big|\, s_h, \mathrm{do}(a_h) \Big] = \mathbb{E}\big[ r_h \mid s_h, \mathrm{do}(a_h) \big] + \mathbb{E}_{s_{h+1}}\big[ V^\pi_{h+1}(s_{h+1}) \big],$$
where $\mathbb{E}_{s_{h+1}}$ denotes the expectation with respect to $s_{h+1} \sim \mathbb{P}(\cdot \mid s_h, \mathrm{do}(a_h))$. Here $\mathbb{E}[r_h \mid s_h, \mathrm{do}(a_h)]$ and $\mathbb{P}(\cdot \mid s_h, \mathrm{do}(a_h))$ are characterized in Proposition 3.2. In the sequel, we define the following transition operator and counterfactual reward function,
(PhV )(sh, ah) = Esh+1∼P(· | sh,do(ah)) [ V (sh+1) ] , ∀V : S 7→ R, (sh, ah) ∈ S ×A, (3.1)
Rh(sh, ah) = E [ rh ∣∣ sh,do(ah)], ∀(sh, ah) ∈ S ×A. (3.2)
We have the following Bellman equation, Qπh(sh, ah) = Rh(sh, ah) + (PhV πh+1)(sh, ah), ∀h ∈ [H], (sh, ah) ∈ S ×A. (3.3)
Correspondingly, the Bellman optimality equation takes the following form,
$$Q^*_h(s_h, a_h) = R_h(s_h, a_h) + (\mathbb{P}_h V^*_{h+1})(s_h, a_h), \qquad V^*_h(s_h) = \max_{a_h \in \mathcal{A}} Q^*_h(s_h, a_h), \qquad (3.4)$$
which holds for all h ∈ [H] and (sh, ah) ∈ S × A. Such a Bellman optimality equation allows us to adapt the least-squares value iteration (LSVI) algorithm [2, 5, 14, 16, 31].
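Assuming the adjusted transition P(s' | s, do(a)) and the counterfactual reward R(s, a) are available (e.g., in a tabular setting), the Bellman optimality recursion (3.4) can be solved by a standard backward pass; the sketch below substitutes randomly generated toy quantities for the true adjusted model.

```python
# Finite-horizon value iteration with the backdoor-adjusted transition and reward,
# i.e., recursion (3.4) run backwards from h = H; toy sizes and made-up numbers.
import numpy as np

H, S, A = 3, 2, 2
rng = np.random.default_rng(0)
# P_do[h, s, a, s'] stands for P(s' | s, do(a)) at step h (already adjusted).
P_do = rng.dirichlet(np.ones(S), size=(H, S, A))
# R[h, s, a] stands for E[r_h | s, do(a)] (already adjusted), in [0, 1].
R = rng.uniform(size=(H, S, A))

V = np.zeros((H + 1, S))               # V_{H+1} = 0
Q = np.zeros((H, S, A))
for h in reversed(range(H)):
    Q[h] = R[h] + P_do[h] @ V[h + 1]   # Q*_h = R_h + P_h V*_{h+1}
    V[h] = Q[h].max(axis=1)            # V*_h(s) = max_a Q*_h(s, a)

print("greedy policy per (h, s):", Q.argmax(axis=2))
print("optimal values V*_1:", V[0])
```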
Linear Function Approximation. We focus on the following setting with linear transition kernels and reward functions [7, 16, 42, 43], which corresponds to a linear SCM [33].
Assumption 3.3 (Linear Confounded MDP). We assume that
$$P_h(s_{h+1} \mid s_h, a_h, u_h) = \langle \phi_h(s_h, a_h, u_h), \mu_h(s_{h+1}) \rangle, \quad \forall h \in [H],\ (s_{h+1}, s_h, a_h) \in \mathcal{S} \times \mathcal{S} \times \mathcal{A},$$
where $\phi_h(\cdot,\cdot,\cdot)$ and $\mu_h(\cdot) = (\mu_{1,h}(\cdot), \ldots, \mu_{d,h}(\cdot))^\top$ are $\mathbb{R}^d$-valued functions. We assume that $\sum_{i=1}^{d} \|\mu_{i,h}\|_1^2 \le d$ and $\|\phi_h(s_h, a_h, u_h)\|_2 \le 1$ for all $h \in [H]$ and $(s_h, a_h, u_h) \in \mathcal{S} \times \mathcal{A} \times \mathcal{U}$. Meanwhile, we assume that
$$\mathbb{E}[r_h \mid s_h, a_h, u_h] = \phi_h(s_h, a_h, u_h)^\top \theta_h, \quad \forall h \in [H],\ (s_h, a_h, u_h) \in \mathcal{S} \times \mathcal{A} \times \mathcal{U}, \qquad (3.5)$$
where $\theta_h \in \mathbb{R}^d$ and $\|\theta_h\|_2 \le \sqrt{d}$ for all $h \in [H]$.
Such a linear setting generalizes the tabular setting where S , A, and U are finite. Proposition 3.4. We define the backdoor-adjusted feature as follows,
ψh(sh, ah) = Euh∼P̃h(· | sh) [ φh(sh, ah, uh) ] , ∀h ∈ [H], (sh, ah) ∈ S ×A. (3.6)
Under Assumption 3.1, it holds that
P(sh+1 | sh,do(ah)) = 〈ψh(sh, ah), µh(sh+1)〉, ∀h ∈ [H], (sh+1, sh, ah) ∈ S × S ×A. Moreover, the action-value functions Qπh and Q ∗ h are linear in the backdoor-adjusted feature ψh for all π.
Proof. See §F.1 for a detailed proof.
Such an observation allows us to estimate the action-value function based on the backdoor-adjusted features {ψh}h∈[H] in the online setting. See §D for a detailed discussion. In the sequel, we assume that either the density of {P̃h(· | sh)}h∈[H] is known or the backdoor-adjusted feature {ψh}h∈[H] is known.
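The following sketch shows one way to form the backdoor-adjusted feature (3.6) when the confounder space is finite and the density P̃_h(· | s_h) is known; the feature map and the density used here are hypothetical placeholders rather than the paper's.

```python
# Backdoor-adjusted feature (3.6): psi_h(s, a) = E_{u ~ P~_h(. | s)}[phi_h(s, a, u)].
# With a finite confounder space the expectation is a weighted sum.
import numpy as np

d, num_u = 4, 3

def phi(s, a, u):
    """Hypothetical feature map phi_h(s, a, u) in R^d: a canonical basis vector."""
    return np.eye(d)[(s + a + u) % d]

def p_u_given_s(s):
    """Hypothetical density P~_h(u | s) over the finite confounder space {0, ..., num_u - 1}."""
    w = np.arange(1, num_u + 1) + s
    return w / w.sum()

def psi(s, a):
    weights = p_u_given_s(s)
    return sum(weights[u] * phi(s, a, u) for u in range(num_u))

print("psi_h(s=0, a=1) =", psi(0, 1))
print("||psi_h(s=0, a=1)||_2 =", np.linalg.norm(psi(0, 1)))
```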
In the sequel, we introduce the DOVI algorithm (Algorithm 1). Each iteration of DOVI consists of two components, namely point estimation, where we estimateQ∗h based on the confounded observational data and the interventional data, and uncertainty quantification, where we construct the upper confidence bound (UCB) of the point estimator.
Algorithm 1 Deconfounded Optimistic Value Iteration (DOVI) for Confounded MDP
Require: Observational data {(s^i_h, a^i_h, u^i_h, r^i_h)}_{i∈[n], h∈[H]}, tuning parameters λ, β > 0, backdoor-adjusted feature {ψ_h}_{h∈[H]}, which is defined in (3.6).
1: Initialization: Set {Q^0_h, V^0_h}_{h∈[H]} as zero functions and V^k_{H+1} as a zero function for k ∈ [K].
2: for k = 1, . . . , K do
3:   for h = H, . . . , 1 do
4:     Set ω^k_h ← argmin_{ω∈R^d} Σ_{τ=1}^{k−1} (r^τ_h + V^τ_{h+1}(s^τ_{h+1}) − ω^⊤ ψ_h(s^τ_h, a^τ_h))^2 + λ‖ω‖_2^2 + L^k_h(ω), where L^k_h is defined in (3.8).
5:     Set Q^k_h(·, ·) ← min{ψ_h(·, ·)^⊤ ω^k_h + Γ^k_h(·, ·), H − h}, where Γ^k_h is defined in (3.12).
6:     Set π^k_h(· | s_h) ← argmax_{a_h∈A} Q^k_h(s_h, a_h) for all s_h ∈ S.
7:     Set V^k_h(·) ← 〈π^k_h(· | ·), Q^k_h(·, ·)〉_A.
8:   end for
9:   Obtain s^k_1 from the environment.
10:  for h = 1, . . . , H do
11:    Take a^k_h ∼ π^k_h(· | s^k_h). Obtain r^k_h = r_h(s^k_h, a^k_h, u^k_h) and s^k_{h+1}.
12:  end for
13: end for
Point Estimation. To solve the Bellman optimality equation in (3.4), we minimize the empirical mean-squared Bellman error as follows at each step,
$$\omega^k_h \leftarrow \mathop{\mathrm{argmin}}_{\omega \in \mathbb{R}^d} \sum_{\tau=1}^{k-1} \big( r^\tau_h + V^\tau_{h+1}(s^\tau_{h+1}) - \omega^\top \psi_h(s^\tau_h, a^\tau_h) \big)^2 + \lambda \|\omega\|_2^2 + L^k_h(\omega), \quad h = H, \ldots, 1, \qquad (3.7)$$
where we set V kH+1 = 0 for all k ∈ [K] and V τh+1 is defined in Line 7 of Algorithm 1 for all (τ, h) ∈ [K] × [H − 1]. Here k is the index of episode, λ > 0 is a tuning parameter, and Lkh is a regularizer, which is constructed based on the confounded observational data. More specifically, we define
Lkh(ω) = n∑ i=1 ( rih + V k h+1(s i h+1)− ω>φh(sih, aih, uih) )2 , ∀(k, h) ∈ [K]× [H], (3.8)
which corresponds to the least-squares loss for regressing rih + V k h+1(s i h+1) against φh(s i h, a i h, u i h) for all i ∈ [n]. Here {(sih, aih, uih, rih)}(i,h)∈[n]×[H] are the confounded observational data, where
uih ∼ P̃h(· | sih), sih+1 ∼ Ph(· | sih, aih, uih), and aih ∼ νh(· | sih, wih) with ν = {νh}h∈[H] being the behavior policy. Here recall that, with a slight abuse of notation, we write P(sh+1 | sh, ah, uh) as Ph(sh+1 | sh, ah, uh) and P(uh | sh) as P̃h(uh | sh), since they are induced by the SCM defined in §2. The update in (3.7) takes the following explicit form,
$$\omega^k_h \leftarrow (\Lambda^k_h)^{-1} \Big( \sum_{\tau=1}^{k-1} \psi_h(s^\tau_h, a^\tau_h) \cdot \big( V^k_{h+1}(s^\tau_{h+1}) + r^\tau_h \big) + \sum_{i=1}^{n} \phi_h(s^i_h, a^i_h, u^i_h) \cdot \big( V^k_{h+1}(s^i_{h+1}) + r^i_h \big) \Big), \qquad (3.9)$$
where
$$\Lambda^k_h = \sum_{\tau=1}^{k-1} \psi_h(s^\tau_h, a^\tau_h) \psi_h(s^\tau_h, a^\tau_h)^\top + \sum_{i=1}^{n} \phi_h(s^i_h, a^i_h, u^i_h) \phi_h(s^i_h, a^i_h, u^i_h)^\top + \lambda I. \qquad (3.10)$$
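A minimal sketch of the closed-form update (3.9)-(3.10), treating the offline features φ_h and online backdoor-adjusted features ψ_h as given design matrices; the features and targets below are random toy data rather than quantities produced by Algorithm 1.

```python
# Closed-form update (3.9)-(3.10): ridge regression over pooled offline features
# phi(s, a, u) and online backdoor-adjusted features psi(s, a); synthetic data.
import numpy as np

rng = np.random.default_rng(2)
d, n_off, k_minus_1, lam = 5, 200, 30, 1.0

Phi = rng.standard_normal((n_off, d)) / np.sqrt(d)      # offline rows phi_h(s^i, a^i, u^i)
Psi = rng.standard_normal((k_minus_1, d)) / np.sqrt(d)  # online rows psi_h(s^tau, a^tau)
y_off = rng.uniform(size=n_off)       # targets V_{h+1}(s^i_{h+1}) + r^i_h (toy values)
y_on = rng.uniform(size=k_minus_1)    # targets V_{h+1}(s^tau_{h+1}) + r^tau_h

# Lambda^k_h in (3.10): Gram matrices of both datasets plus lambda * I.
Lambda = Psi.T @ Psi + Phi.T @ Phi + lam * np.eye(d)
# omega^k_h in (3.9): regularized least squares over the pooled data.
omega = np.linalg.solve(Lambda, Psi.T @ y_on + Phi.T @ y_off)

print("omega_h^k:", np.round(omega, 3))
```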
Uncertainty Quantification. We now construct the UCB Γkh(·, ·) of the point estimator ψh(·, ·)>ωkh obtained from (3.9), which encourages the exploration of the less visited state-action pairs. To this end, we employ the following notion of information gain to motivate the UCB,
Γkh(s k h, a k h) ∝ H(ωkh | ξk−1)−H ( ωkh | ξk−1 ∪ {(skh, akh)} ) , (3.11)
where H(ωkh | ξk−1) is the differential entropy of the random variable ωkh given the data ξk−1. In particular, ξk−1 = {(sτh, aτh, rτh)}(τ,h)∈[k−1]×[H] ∪ {(sih, aih, uih, rih)}(i,h)∈[n]×[H] consists of the confounded observational data and the interventional data up to the (k − 1)-th episode. However, it is challenging to characterize the distribution of ωkh. To this end, we consider a Bayesian counterpart of the confounded MDP, where the prior of ωkh is N(0, I/λ) and the residual of the regression problem in (3.7) is N(0, 1). In such a “parallel” confounded MDP, the posterior of ωkh follows N(µk,h, (Λ k h) −1), where Λkh is defined in (3.10) and µk,h coincides with the right-hand side of (3.9). Moreover, it holds for all (skh, a k h) ∈ S ×A that
$$H(\omega^k_h \mid \xi^{k-1}) = 1/2 \cdot \log\det\big( (2\pi e)^d \cdot (\Lambda^k_h)^{-1} \big),$$
$$H\big(\omega^k_h \mid \xi^{k-1} \cup \{(s^k_h, a^k_h)\}\big) = 1/2 \cdot \log\det\big( (2\pi e)^d \cdot (\Lambda^k_h + \psi_h(s^k_h, a^k_h)\psi_h(s^k_h, a^k_h)^\top)^{-1} \big).$$
Correspondingly, we employ the following UCB, which instantiates (3.11), that is,
$$\Gamma^k_h(s^k_h, a^k_h) = \beta \cdot \big( \log\det\big( \Lambda^k_h + \psi_h(s^k_h, a^k_h)\psi_h(s^k_h, a^k_h)^\top \big) - \log\det(\Lambda^k_h) \big)^{1/2} \qquad (3.12)$$
for all (skh, a k h) ∈ S × A. Here β > 0 is a tuning parameter. We highlight that, although the information gain in (3.11) relies on the “parallel” confounded MDP, the UCB in (3.12), which is used in Line 5 of Algorithm 1, does not rely on the Bayesian perspective. Also, our analysis establishes the frequentist regret.
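The bonus (3.12) can be computed directly from the log-determinants, or equivalently (by the matrix determinant lemma) from the quadratic form ψ^⊤(Λ^k_h)^{-1}ψ, which makes explicit that the bonus is small in directions already covered by the pooled data. The sketch below uses synthetic features and is only meant to illustrate this computation.

```python
# Information-gain bonus (3.12): Gamma = beta * sqrt(logdet(Lambda + psi psi^T) - logdet(Lambda)).
# For a rank-one update this gap equals log(1 + psi^T Lambda^{-1} psi).
import numpy as np

rng = np.random.default_rng(3)
d, beta = 5, 1.0
X = rng.standard_normal((50, d)) / np.sqrt(d)   # past feature rows (offline + online)
Lambda = X.T @ X + np.eye(d)                    # Lambda^k_h with lambda = 1

def bonus(psi):
    sign1, logdet1 = np.linalg.slogdet(Lambda + np.outer(psi, psi))
    sign0, logdet0 = np.linalg.slogdet(Lambda)
    return beta * np.sqrt(logdet1 - logdet0)

psi_new = rng.standard_normal(d) / np.sqrt(d)
# Equivalent closed form via the matrix determinant lemma.
closed = beta * np.sqrt(np.log(1.0 + psi_new @ np.linalg.solve(Lambda, psi_new)))
print("Gamma via log-det:", bonus(psi_new))
print("Gamma via lemma  :", closed)
```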
Regularization with Observational Data: A Bayesian Perspective. In the “parallel” confounded MDP, it holds that
ωkh ∼ N(0, I/λ), ωkh | ξ0 ∼ N ( µ1,h, (Λ 1 h) −1), ωkh | ξk−1 ∼ N(µk,h, (Λkh)−1),
where µk,h coincides with the right-hand side of (3.9) and µ1,h is defined by setting k = 1 in µk,h. Here ξ0 = {(sih, aih, uih, rih)}(i,h)∈[n]×[H] are the confounded observational data. Hence, the regularizer Lkh in (3.8) corresponds to using ω k h | ξ0 as the prior for the Bayesian regression problem given only the interventional data ξk−1 \ ξ0 = {(sτh, aτh, rτh)}(τ,h)∈[k−1]×[H].
3.2 Theory
The following theorem characterizes the regret of DOVI, which is defined in (2.3).
Theorem 3.5 (Regret of DOVI). Let $\beta = C d H \sqrt{\log(d(T + nH)/\zeta)}$ and $\lambda = 1$, where $C > 0$ and $\zeta \in (0, 1]$ are absolute constants. Under Assumptions 3.1 and 3.3, it holds with probability at least $1 - 5\zeta/2$ that
$$\mathrm{Regret}(T) \le C' \cdot \Delta_H \cdot \sqrt{d^3 H^3 T} \cdot \sqrt{\log\big( d(T + nH)/\zeta \big)}, \qquad (3.13)$$
where $C' > 0$ is an absolute constant and
$$\Delta_H = \frac{1}{\sqrt{d H^2}} \sum_{h=1}^{H} \big( \log\det(\Lambda^{K+1}_h) - \log\det(\Lambda^1_h) \big)^{1/2}. \qquad (3.14)$$
Proof. See §F.3 for a detailed proof.
Note that $\Lambda^{K+1}_h \preceq (n + K + \lambda) I$ and $\Lambda^1_h \succeq \lambda I$ for all $h \in [H]$. Hence, it holds that $\Delta_H = O(\sqrt{\log(n + K + 1)})$ in the worst case. Thus, the regret of DOVI is $O(\sqrt{d^3 H^3 T})$ up to logarithmic factors, which is optimal in the total number of steps $T$ if we only consider the online setting. However, $\Delta_H$ is possibly much smaller than $O(\sqrt{\log(n + K + 1)})$, depending on the amount of information carried over by the confounded observational data from the offline setting, which is quantified in the following.
Interpretation of ∆H: An Information-Theoretic Perspective. Let ω∗h be the parameter of the globally optimal action-value function Q∗h, which corresponds to π∗ in (2.3). Recall that we denote by ξ0 and ξK the confounded observational data {(sih, aih, uih, rih)}(i,h)∈[n]×[H] and the union {(sih, aih, uih, rih)}(i,h)∈[n]×[H] ∪ {(skh, akh, rkh)}(k,h)∈[K]×[H] of the confounded observational data and the interventional data up to the K-th episode, respectively. We consider the aforementioned Bayesian counterpart of the confounded MDP, where the prior of ω∗h is also N(0, I/λ). In such a “parallel” confounded MDP, we have
ω∗h ∼ N(0, I/λ), ω∗h | ξ0 ∼ N ( µ∗1,h, (Λ 1 h) −1), ω∗h | ξK ∼ N(µ∗K,h, (ΛK+1h )−1), (3.15)
where
$$\mu^*_{1,h} = (\Lambda^1_h)^{-1} \sum_{i=1}^{n} \phi_h(s^i_h, a^i_h, u^i_h) \cdot \big( V^*_{h+1}(s^i_{h+1}) + r^i_h \big),$$
$$\mu^*_{K,h} = (\Lambda^{K+1}_h)^{-1} \Big( \Lambda^1_h \mu^*_{1,h} + \sum_{\tau=1}^{K} \psi_h(s^\tau_h, a^\tau_h) \cdot \big( V^*_{h+1}(s^\tau_{h+1}) + r^\tau_h \big) \Big).$$
It then holds for the right-hand side of (3.14) that
1/2 · log det(ΛK+1h )− 1/2 · log det(Λ 1 h) = H(ω ∗ h | ξ0)−H(ω∗h | ξK). (3.16)
The left-hand side of (3.16) characterizes the information gain of intervention in the online setting given the confounded observational data in the offline setting. In other words, if the confounded observational data are sufficiently informative upon the backdoor adjustment, then ∆H is small, which implies that the regret is small. More specifically, the matrices (Λ1h)−1 and (ΛK+1h)−1 defined in (3.10) characterize the ellipsoidal confidence sets given ξ0 and ξK, respectively. If the confounded observational data are sufficiently informative upon the backdoor adjustment, ΛK+1h is close to Λ1h. To illustrate, let {ψh(sτh, aτh)}(τ,h)∈[K]×[H] and {φh(sih, aih, uih)}(i,h)∈[n]×[H] be sampled uniformly at random from the canonical basis $\{e_\ell\}_{\ell \in [d]}$ of $\mathbb{R}^d$. It then holds that ΛK+1h ≈ (K + n)I/d + λI and Λ1h ≈ nI/d + λI. Hence, for λ = 1 and sufficiently large n and K, we have $\Delta_H = O(\sqrt{\log(1 + K/(n + d))}) = O(\sqrt{K/(n + d)})$. For example, for $n = \Omega(K^2)$, it holds that $\Delta_H = O(n^{-1/2})$, which implies that the regret of DOVI is $O(n^{-1/2} \cdot \sqrt{d^3 H^3 T})$. In other words, if the confounded observational data are sufficiently informative upon the backdoor adjustment, the regret of DOVI can be arbitrarily small given a sufficiently large sample size n of the confounded observational data, which is often the case in practice [8, 9, 21, 22, 29].
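To see the claimed scaling, one can compute ∆_H in (3.14) directly for features drawn from the canonical basis, as in the illustration above; the sketch below uses synthetic draws and is not an experiment from the paper.

```python
# Delta_H in (3.14) for canonical-basis features, illustrating the O(sqrt(K/(n+d))) regime:
# Lambda^1 ~ (n/d) I + I and Lambda^{K+1} ~ ((n+K)/d) I + I, so each log-det gap is
# roughly d * log(1 + K/(n+d)); all quantities below are synthetic.
import numpy as np

d, H, lam = 10, 5, 1.0

def delta_H(n, K):
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(H):
        E = np.eye(d)
        off = E[rng.integers(0, d, size=n)]   # offline features phi ~ uniform over {e_l}
        on = E[rng.integers(0, d, size=K)]    # online features psi ~ uniform over {e_l}
        L1 = off.T @ off + lam * np.eye(d)
        LK1 = L1 + on.T @ on
        gap = np.linalg.slogdet(LK1)[1] - np.linalg.slogdet(L1)[1]
        total += np.sqrt(gap)
    return total / np.sqrt(d * H**2)

for n in [0, 1_000, 100_000]:
    print(f"n = {n:>6}:  Delta_H ~ {delta_H(n, K=1_000):.4f}")
```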
4 Conclusion
In this paper, we propose the deconfounded optimistic value iteration (DOVI) algorithm and its variant DOVI+, which incorporate the confounded observational data into online reinforcement learning in a provably efficient manner. DOVI and DOVI+ explicitly adjust for the confounding bias in the observational data via the backdoor and frontdoor adjustments, respectively. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which considers the amount of information acquired from the offline dataset. We further conduct regret analysis of DOVI and DOVI+. Our analysis suggests that practitioners can tackle the confounding issue in the offline dataset by estimating the counterfactual reward for value function estimation, given that a proper adjustment such as the backdoor or frontdoor adjustment is available. In the cases of the backdoor and frontdoor adjustments, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments, suggesting that one can exploit the confounded observational data in reinforcement learning upon proper adjustments. In our future study, we wish to incorporate proxy variables that are native to MDPs for the adjustments of the offline dataset, such as the variables exploited by [4, 24, 40].
Acknowledgements
Zhaoran Wang acknowledges National Science Foundation (Awards 2048075, 2008827, 2015568, 1934931), Simons Institute (Theory of Reinforcement Learning), Amazon, J.P. Morgan, and Two Sigma for their support. Zhuoran Yang acknowledges Simons Institute (Theory of Reinforcement Learning). The authors also thank the anonymous reviewers, whose invaluable suggestions helped the authors improve the paper.
1. What is the focus of the paper regarding incorporating offline observational data in online reinforcement learning?
2. What are the strengths of the proposed approach, particularly in leveraging tools from causal inference?
3. Do you have any concerns or confusion regarding the definitions and applications of interventions and observational data in the paper?
4. Are there any potential improvements or additions that could enhance the paper's contributions?
5. Can you provide any references or examples that support the relevance and usefulness of the paper's theoretical results in real-life problems?
Summary Of The Paper
This is a technical paper presenting how to incorporate offline observational data to improve the sample efficiency in the online reinforcement learning setting. The issue is the potential presence of unobserved confounders in the observational data, which impact the transition dynamics and the rewards, and how to adjust the exploration bonus used in the online setting. The authors suggest an algorithm (DOVI) which adjusts for the confounding bias (where the confounders are partially observed or unobserved). They then derive a bound on the regret when using linear function approximation, which shows that the regret is smaller than the optimal online regret thanks to the use of offline observational data if they are informative.
Review
The paper is well-written and clear. There are no experiments. While this is not particularly an issue for this kind of technical paper, a toy example illustrating the advantage of using offline observational data and the different regimes of the regret bound would have been beneficial.
The paper is well-motivated: it is very often the case that observational data are available and it is indeed relevant to try to use this data to reduce the sample cost of existing deep reinforcement learning solutions, especially when simulators are not available as is the case for most real-life problems (engineering systems, health, ....). Leveraging tools from causal inference is an interesting direction that is increasingly popular. I believe the theoretical result to be interesting for the community.
Major comments:
the definition of an intervention states that the value is assigned regardless of the other exogenous and endogenous variables. However, in the online context, when using a policy, the authors use the do operator (as an intervention), but the policy selects an action based on the state s. This is confusing, as the action is thus not assigned regardless of the state, which is an endogenous variable.
Are the authors the first to state a Backdoor Adjusted Bellman Equation? this is not clear in the paper.
It is also not clear to me where the distinction between the observational and interventional data is in algorithm 1?
Minor comments:
the abstract format is not correct.
line 4: semple -> sample
lines 56 and 57: please give references.
NIPS | Title
Provably Efficient Causal Reinforcement Learning with Confounded Observational Data
Abstract
Empowered by neural networks, deep reinforcement learning (DRL) achieves tremendous empirical success. However, DRL requires a large dataset by interacting with the environment, which is unrealistic in critical scenarios such as autonomous driving and personalized medicine. In this paper, we study how to incorporate the dataset collected in the offline setting to improve the sample efficiency in the online setting. To incorporate the observational data, we face two challenges. (a) The behavior policy that generates the observational data may depend on unobserved random variables (confounders), which affect the received rewards and transition dynamics. (b) Exploration in the online setting requires quantifying the uncertainty given both the observational and interventional data. To tackle such challenges, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments.
N/A
Empowered by neural networks, deep reinforcement learning (DRL) achieves tremendous empirical success. However, DRL requires a large dataset by interacting with the environment, which is unrealistic in critical scenarios such as autonomous driving and personalized medicine. In this paper, we study how to incorporate the dataset collected in the offline setting to improve the sample efficiency in the online setting. To incorporate the observational data, we face two challenges. (a) The behavior policy that generates the observational data may depend on unobserved random variables (confounders), which affect the received rewards and transition dynamics. (b) Exploration in the online setting requires quantifying the uncertainty given both the observational and interventional data. To tackle such challenges, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments.
1 Introduction
Empowered by the breakthrough in neural networks, deep reinforcement learning (DRL) achieves significant empirical successes in various scenarios [19, 23, 36, 37]. Learning an expressive function approximator necessitates collecting a large dataset. Specifically, in the online setting, it requires the agent to interact with the environment for a large number of steps. For example, to learn a human-level policy for playing Atari games, the agent has to interact with a simulator for more than 108 steps [13]. However, in most scenarios, we do not have access to a simulator that allows for trial and error without any cost. Meanwhile, in critical scenarios, e.g., autonomous driving and personalized medicine, trial and error in the real world is unsafe and even unethical. As a result, it remains challenging to apply DRL to more scenarios.
To bypass such a barrier, we study how to incorporate the dataset collected offline, namely the observational data, to improve the sample efficiency of RL in the online setting [21]. In contrast to the interventional data collected online in possibly expensive ways, observational data are often abundantly available in various scenarios. For example, in autonomous driving, we have access to trajectories generated by the drivers. As another example, in personalized medicine, we have access to electronic health records from doctors. However, to incorporate the observational data in a provably efficient way, we have to address two challenges.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
• The observational data are possibly confounded. Specifically, there often exist unobserved random variables, namely confounders, that causally affect the agent and the environment at the same time. In particular, the policy used to generate the observational data, namely the behavior policy, possibly depends on the confounders. Meanwhile, the confounders possibly affect the received rewards and the transition dynamics. In the example of autonomous driving [9, 22], the drivers may be affected by complicated traffic or poor road design, resulting in traffic accidents even without misconduct. The complicated traffic and poor road design subsequently affect both the action of the drivers and the outcome. Therefore, it is unclear from the observational data whether the accidents are due to the actions adopted by the drivers. Agents trained with such observational data may be unwilling to take any actions under complicated traffic, jeopardizing the safety of passengers. In the example of personalized medicine [8, 29], the patients may not be compliant with prescriptions and instructions, which subsequently affects both the treatment and the outcome. As another example, the doctor may prescribe medicine to patients based on patients’ socioeconomic status (which could be inferred by the doctor through interacting with the patients). Meanwhile, socioeconomic status affects the patients’ health condition and subsequently plays the role of the confounder. In both scenarios, such confounders may be unavailable due to privacy or ethical concerns. Such a confounding issue makes the observational data uninformative and even misleading for identifying and estimating the causal effect, which is crucial for decision-making in the online setting. In all the examples, it is unclear from the observational data whether the outcome is due to the actions adopted.
• Even without the confounding issue, it remains unclear how the observational data may facilitate exploration in the online setting, which is the key to the sample efficiency of RL. At the core of exploration is uncertainty quantification. Specifically, quantifying the uncertainty that remains given the dataset collected up to the current step, including the observational data and the interventional data, allows us to construct a bonus. When incorporated into the reward, such a bonus encourages the agent to explore the less visited state-action pairs with more uncertainty. In particular, constructing such a bonus requires quantifying the amount of information carried over by the observational data from the offline setting, which also plays a key role in characterizing the regret, especially how much the observational data may facilitate reducing the regret. Uncertainty quantification becomes even more challenging when the observational data are confounded. Specifically, as the behavior policy depends on the confounders, there is a mismatch between the data generating processes in the offline setting and the online setting. As a result, it remains challenging to quantify how much information carried over from the offline setting is useful for the online setting, as the observational data are uninformative and even misleading due to the confounding issue.
Contribution. To study causal reinforcement learning, we propose a class of Markov decision processes (MDPs), namely confounded MDPs, which captures the data generating processes in both the offline setting and the online setting as well as their mismatch due to the confounding issue. In particular, we study two tractable cases of confounded MDPs in the episodic setting with linear function approximation [7, 16, 42, 43].
• In the first case, the confounders are partially observed in the observational data. Assuming that an observed subset of the confounders satisfies the backdoor criterion [32], we propose the deconfounded optimistic value iteration (DOVI) algorithm, which explicitly corrects for the confounding bias in the observational data using the backdoor adjustment.
• In the second case, the confounders are unobserved in the observational data. Assuming that there exists an observed set of intermediate states that satisfies the frontdoor criterion [32], we propose an extension of DOVI, namely DOVI+, which explicitly corrects for the confounding bias in the observational data using the composition of two backdoor adjustments. We remark that DOVI+ follows the same principle of design as DOVI and defer the discussion of DOVI+ to §A.
In both cases, the adjustments allow DOVI and DOVI+ to incorporate the observational data into the interventional data while bypassing the confounding issue. It further enables estimating the causal effect of a policy on the received rewards and the transition dynamics with enlarged effective sample size. Moreover, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information carried over from the offline setting.
In particular, we prove that DOVI and DOVI+ attain the ∆H · √ d3H3T -regret up to logarithmic factors, where d is the dimension of features, H is the length of each episode, and T = HK is the number of steps taken in the online setting, where K is the number of episodes. Here the multiplicative factor ∆H > 0 depends on d, H , and a notion of information gain that quantifies the amount of information obtained from the interventional data additionally when given the properly adjusted observational data. When the observational data are unavailable or uninformative upon the adjustments, ∆H is a logarithmic factor. Correspondingly, DOVI and DOVI+ attain the optimal√ T -regret achievable in the pure online setting [7, 16, 42, 43]. When the observational data are sufficiently informative upon the adjustments, ∆H decreases towards zero as the effective sample size of the observational data increases, which quantifies how much the observational data may facilitate exploration in the online setting.
Related Work. Our work is related to the study of causal bandit [20]. The goal of causal bandit is to obtain the optimal intervention in the online setting where the data generating process is described by a causal diagram. The previous study establishes causal bandit algorithms in the online setting [26, 34], the offline setting [17, 18], and a combination of both settings [11]. In contrast to this line of work, we study causal RL in a combination of the online setting and the offline setting. Causal RL is more challenging than causal bandit, which corresponds toH = 1, as it involves the transition dynamics and is more challenging in exploration. See §B for a detailed literature review on causal bandit.
Our work is related to the study of causal RL considered in various settings. [45] propose a modelbased RL algorithm that solves dynamic treatment regimes (DTR), which involve a combination of the online setting and the offline setting. Their algorithm hinges on the analysis of sensitivity [3, 27, 38, 44], which constructs a set of feasible models of the transition dynamics based on the confounded observational data. Correspondingly, their algorithm achieves exploration by choosing an optimistic model of the transition dynamics from such a feasible set. In contrast, we propose a model-free RL algorithm, which achieves exploration through the bonus based on a notion of information gain. It is worth mentioning that the assumption of [45] is weaker than ours as theirs does not allow for identifying the causal effect. As a result of partial identification, the regret of their algorithm is the same as the regret in the pure online setting as T → +∞. In contrast, our work instantiates the following framework in handling confounders for reinforcement learning. (a) First, we propose the estimation equation based on the observations, which identifies the causal effect of actions on the cumulative reward. (b) Second, we conduct point estimation and uncertainty quantification based on observations and the estimation equation. (c) Finally, we conduct exploration based on the uncertainty quantification and achieve the regret reduction in the online setting. Consequently, the regret of our algorithm is smaller than the regret in the pure online setting by a multiplicative factor for all T . [25] propose a model-based RL algorithm in a combination of the online setting and the offline setting. Their algorithm uses a variational autoencoder (VAE) for estimating a structural causal model (SCM) based on the confounded observational data. In particular, their algorithm utilizes the actor-critic algorithm to obtain the optimal policy in such an SCM. However, the regret of their algorithm remains unclear. [6] propose a model-based RL algorithm in the pure online setting that learns the optimal policy in a partially observable Markov decision process (POMDP). The regret of their algorithm also remains unclear. [35] utilize generative adversarial reinforcement learning to reconstruct transition dynamics with confounder, and [40] propose a model-based approach for POMDP based on adjustment with proxy variables. [30] consider offpolicy policy evaluation under one-decision confounding and constructs worst-case bounds with theoretical guarantee. [4] utilizes states and actions as proxy variables to tackle off-policy policy evaluation with confounders. In contrast, our work utilizes backdoor and frontdoor adjustments to handle confounded observation.
2 Confounded Reinforcement Learning
Structural Causal Model. We denote a structural causal model (SCM) [32] by a tuple (A,B, F, P ). Here A is the set of exogenous (unobserved) variables, B is the set of endogenous (observed) variables, F is the set of structural functions capturing the causal relations, which determines an endogenous variable v ∈ B based on the other exogenous and endogenous variables, and P is the distribution of all the exogenous variables. We say that a pair of variables Y and Z are confounded by a variable W if they are both caused by W .
An intervention on a set of endogenous variables X ⊆ B assigns a value x to X regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by do(X = x) the intervention on X and write do(x) if it is clear from the context. Similarly, a stochastic intervention [10, 28] on a set of endogenous variables X ⊆ B assigns a distribution p to X regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by do(X ∼ p) the stochastic intervention on X .
Confounded Markov Decision Process. To characterize a Markov decision process (MDP) in the offline setting with observational data, which are possibly confounded, we introduce an SCM, where the endogenous variables are the states {sh}h∈[H], actions {ah}h∈[H], and rewards {rh}h∈[H]. Let {wh}h∈[H] be the confounders. In §3, we assume that the confounders are partially observed, while in §A, we assume that they are unobserved. The set of structural functions F consists of the transition of states sh+1 ∼ Ph(· | sh, ah, wh), the transition of confounders wh ∼ P̃h(· | sh), the behavior policy ah ∼ νh(· | sh, wh), which depends on the confounder wh, and the reward function rh(sh, ah, wh). See Figure 1 for the causal diagram that describes such an SCM.
Here ah and sh+1 are confounded by wh in addition to sh. We denote such a confounded MDP by the tuple (S,A,W, H,P, r), where H is the length of an episode, S, A, andW are the spaces of states, actions, and confounders, respectively, r = {rh}h∈[H] is the set of reward functions, and P = {Ph, P̃h}h∈H is the set of transition kernels. In the sequel, we assume without loss of generality that rh takes value in [0, 1] for all h ∈ [H]. In the online setting that allows for intervention, we assume that the confounders {wh}h∈[H] are unobserved. A policy π = {πh}h∈[H] induces the stochastic intervention do(a1 ∼ π1(· | s1), . . . , aH ∼ πH(· | sH)), which does not depend on the confounders. In particular, an agent interacts with the environment as follows. At the beginning of the k-th episode, the environment arbitrarily selects an initial state sk1 and the agent selects a policy π
k = {πkh}h∈[H]. At the h-th step of the k-th episode, the agent observes the state skh and takes the action a k h ∼ πkh(· | skh). The environment randomly selects the confounder wkh ∼ P̃h(· | skh), which is unobserved, and the agent receives the reward rkh = rh(s k h, a k h, w k h). The environment then transits into the next state skh+1 ∼ Ph(· | skh, akh, wkh).
For a policy π = {πh}h∈H , which does not depend on the confounders {wh}h∈[H], we define the value function V π = {V πh }h∈[H] as follows,
V πh (s) = Eπ [ H∑ j=h rj(sj , aj , wj) ∣∣∣∣ sh = s], ∀h ∈ [H], (2.1) where we denote by Eπ the expectation with respect to the confounders {wj}Hj=h and the trajectory {(sj , aj)}Hj=h, starting from the state sj = s and following the policy π. Correspondingly, we define the action-value function Qπ = {Qπh}h∈[H] as follows,
Qπh(s, a) = Eπ [ H∑ j=h rj(sj , aj , wj) ∣∣∣∣ sh = s,do(ah = a)], ∀h ∈ [H]. (2.2)
We assess the performance of an algorithm using the regret against the globally optimal policy π∗ = {π∗h}h∈[H] in hindsight after K episodes, which is defined as follows,
Regret(T ) = max π K∑ k=1 ( V π1 (s k 1)− V π k 1 (s k 1) ) = K∑ k=1 ( V π ∗ 1 (s k 1)− V π k 1 (s k 1) ) . (2.3)
Here T = HK is the total number of steps.
Our goal is to design an algorithm that minimizes the regret defined in (2.3), where π∗ does not depend on the confounders {wh}h∈[H]. In the online setting that allows for intervention, it is well understood how to minimize such a regret [2, 14–16]. However, it remains unclear how to efficiently utilize the observational data obtained in the offline setting, which are possibly confounded. In realworld applications, e.g., autonomous driving and personalized medicine, such observational data are often abundant, whereas intervention in the online setting is often restricted. We refer to §C for a comparison between the confounded MDP and other extensions of MDP, including the dynamics treatment regime (DTR), partially observable MDP (POMDP), and contextual MDP (CMDP).
Why is Incorporating Confounded Observational Data Challenging? Straightforwardly incorporating the confounded observational data into an online algorithm possibly leads to an undesirable regret due to the mismatch between the online and offline data generating processes. In particular, due to the existence of the confounders {wh}h∈[H], which are partially observed (§3) or unobserved (§A), the conditional probability P(sh+1 | sh, ah) in the offline setting is different from the causal effect P(sh+1 | sh,do(ah)) in the online setting [33]. More specifically, it holds that
P(sh+1 | sh, ah) = Ewh∼P̃h(· | sh)
[ Ph(sh+1 | sh, ah, wh) · νh(ah | sh, wh) ] Ewh∼P̃h(· | sh) [ νh(ah | sh, wh)
] , P ( sh+1
∣∣ sh,do(ah)) = Ewh∼P̃h(· | sh)[Ph(· | sh, ah, wh)]. In other words, without proper covariate adjustments [32], the confounded observational data may be not informative for estimating the transition dynamics and the associated action-value function in the online setting. To this end, we propose an algorithm that incorporates the confounded observational data in a provably efficient manner. Moreover, our analysis quantifies the amount of information carried over by the confounded observational data from the offline setting and to what extent it helps reducing the regret in the online setting.
3 Algorithm and Theory for Partially Observed Confounder
In this section, we propose the Deconfounded Optimistic Value Iteration (DOVI) algorithm. DOVI handles the case where the confounders are unobserved in the online setting but are partially observed in the offline setting. We then characterize the regret of DOVI. We defer the extension of DOVI, namely DOVI+, to §A which handles the case where the confounders are unobserved in both the online setting and the offline setting.
3.1 Algorithm
Backdoor Adjustment. In the online setting that allows for intervention, the causal effect of ah on sh+1 given sh, that is, P(sh+1 | sh,do(ah)), plays a key role in the estimation of the action-value function. Meanwhile, the confounded observational data may not allow us to identify the causal effect P(sh+1 | sh,do(ah)) if the confounder wh is unobserved. However, if the confounder wh is partially observed in the offline setting, the observed subset uh of wh allows us to identify the causal effect P(sh+1 | sh,do(ah)), as long as uh satisfies the following backdoor criterion. Assumption 3.1 (Backdoor Criterion [32, 33]). In the SCM defined in §2 and its induced directed acyclic graph (DAG), for all h ∈ [H], there exists an observed subset uh of wh that satisfies the backdoor criterion, that is,
• the elements of uh are not the descendants of ah, and
• conditioning on sh, the elements of uh d-separate every path between ah and sh+1, rh that has an incoming arrow into ah.
See Figure 2 for an example that satisfies the backdoor criterion. In particular, we identify the causal effect P(sh+1 | sh,do(ah)) as follows.
Proposition 3.2 (Backdoor Adjustment [32]). Under Assumption 3.1, it holds for all h ∈ [H] that P ( sh+1 ∣∣ sh,do(ah)) = Euh∼P(· | sh)[P(sh+1 | sh, ah, uh)], E [ rh(sh, ah, wh)
∣∣ sh,do(ah)] = Euh∼P(· | sh)[E[rh(sh, ah, wh) ∣∣ sh, ah, uh]]. Here (sh+1, sh, ah, uh) follows the SCM defined in §2, which generates the confounded observational data.
Proof. See [32] for a detailed proof.
With a slight abuse of notation, we write P(sh+1 | sh, ah, uh) as Ph(sh+1 | sh, ah, uh) and P(uh | sh) as P̃h(uh | sh), since they are induced by the SCM defined in §2. In the sequel, we define U the space of observed state uh and write rh = rh(sh, ah, wh) for notational simplicity. Backdoor-Adjusted Bellman Equation. We now formulate the Bellman equation for the confounded MDP. It holds for all (sh, ah) ∈ S ×A that
Qπh(sh, ah) = Eπ [ H∑ j=h rj(sj , aj , uj) ∣∣∣∣ sh,do(ah)] = E[rh ∣∣ sh,do(ah)]+ Esh+1[V πh+1(sh+1)], where Esh+1 denotes the expectation with respect to sh+1 ∼ P(·
∣∣ sh,do(ah)). Here E[rh
∣∣ sh,do(ah)] and P(· ∣∣ sh,do(ah)) are characterized in Proposition 3.2. In the sequel, we define the following transition operator and counterfactual reward function,
(PhV )(sh, ah) = Esh+1∼P(· | sh,do(ah)) [ V (sh+1) ] , ∀V : S 7→ R, (sh, ah) ∈ S ×A, (3.1)
Rh(sh, ah) = E [ rh ∣∣ sh,do(ah)], ∀(sh, ah) ∈ S ×A. (3.2)
We have the following Bellman equation, Qπh(sh, ah) = Rh(sh, ah) + (PhV πh+1)(sh, ah), ∀h ∈ [H], (sh, ah) ∈ S ×A. (3.3)
Correspondingly, the Bellman optimality equation takes the following form, Q∗h(sh, ah) = Rh(sh, ah) + (PhV ∗h+1)(sh, ah), V ∗h (sh) = max
ah∈A Q∗h(sh, ah), (3.4)
which holds for all h ∈ [H] and (sh, ah) ∈ S × A. Such a Bellman optimality equation allows us to adapt the least-squares value iteration (LSVI) algorithm [2, 5, 14, 16, 31].
Linear Function Approximation. We focus on the following setting with linear transition kernels and reward functions [7, 16, 42, 43], which corresponds to a linear SCM [33]. Assumption 3.3 (Linear Confounded MDP). We assume that Ph(sh+1 | sh, ah, uh) = 〈φh(sh, ah, uh), µh(sh+1)〉, ∀h ∈ [H], (sh+1, sh, ah) ∈ S × S ×A, where φh(·, ·, ·) and µh(·) = (µ1,h(·), . . . , µd,h(·))> are Rd-valued functions. We assume that∑d i=1 ‖µi,h‖21 ≤ d and ‖φh(sh, ah, uh)‖2 ≤ 1 for all h ∈ [H] and (sh, ah, uh) ∈ S × A × U . Meanwhile, we assume that E[rh | sh, ah, uh] = φh(sh, ah, uh)>θh, ∀h ∈ [H], (sh, ah, uh) ∈ S ×A× U , (3.5) where θh ∈ Rd and ‖θh‖2 ≤ √ d for all h ∈ [H].
Such a linear setting generalizes the tabular setting where S , A, and U are finite. Proposition 3.4. We define the backdoor-adjusted feature as follows,
ψh(sh, ah) = Euh∼P̃h(· | sh) [ φh(sh, ah, uh) ] , ∀h ∈ [H], (sh, ah) ∈ S ×A. (3.6)
Under Assumption 3.1, it holds that
P(sh+1 | sh,do(ah)) = 〈ψh(sh, ah), µh(sh+1)〉, ∀h ∈ [H], (sh+1, sh, ah) ∈ S × S ×A. Moreover, the action-value functions Qπh and Q ∗ h are linear in the backdoor-adjusted feature ψh for all π.
Proof. See §F.1 for a detailed proof.
Such an observation allows us to estimate the action-value function based on the backdoor-adjusted features {ψh}h∈[H] in the online setting. See §D for a detailed discussion. In the sequel, we assume that either the density of {P̃h(· | sh)}h∈[H] is known or the backdoor-adjusted feature {ψh}h∈[H] is known.
In the sequel, we introduce the DOVI algorithm (Algorithm 1). Each iteration of DOVI consists of two components, namely point estimation, where we estimateQ∗h based on the confounded observational data and the interventional data, and uncertainty quantification, where we construct the upper confidence bound (UCB) of the point estimator.
Algorithm 1 Deconfounded Optimistic Value Iteration (DOVI) for Confounded MDP
Require: Observational data {(sih, aih, uih, rih)}i∈[n],h∈[H], tuning parameters λ, β > 0, backdooradjusted feature {ψh}h∈[H], which is defined in (3.6).
1: Initialization: Set {Q0h, V 0h }h∈[H] as zero functions and V kH+1 as a zero function for k ∈ [K]. 2: for k = 1, . . . ,K do 3: for h = H, . . . , 1 do 4: Set ωkh ← argminω∈Rd ∑k−1 τ=1(r τ h + V τ h+1(s τ h+1) − ω>ψh(sτh, aτh))2 + λ‖ω‖22 + Lkh(ω), where Lkh is defined in (3.8). 5: Set Qkh(·, ·)← min{ψh(·, ·)>ωkh + Γkh(·, ·), H − h}, where Γkh is defined in (3.12). 6: Set πkh(· | sh)← argmaxah∈AQ k h(sh, ah) for all sh ∈ S. 7: Set V kh (·)← 〈πkh(· | ·), Qkh(·, ·)〉A. 8: end for 9: Obtain sk1 from the environment.
10: for h = 1, . . . ,H do 11: Take akh ∼ πkh(· | skh). Obtain rkh = rh(skh, akh, ukh) and skh+1. 12: end for 13: end for
Point Estimation. To solve the Bellman optimality equation in (3.4), we minimize the empirical mean-squared Bellman error as follows at each step,
ωkh ← argmin ω∈Rd k−1∑ τ=1 ( rτh + V τ h+1(s τ h+1)− ω>ψh(sτh, aτh) )2 + λ‖ω‖22 + Lkh(ω), h = H, . . . , 1,
(3.7)
where we set V kH+1 = 0 for all k ∈ [K] and V τh+1 is defined in Line 7 of Algorithm 1 for all (τ, h) ∈ [K] × [H − 1]. Here k is the index of episode, λ > 0 is a tuning parameter, and Lkh is a regularizer, which is constructed based on the confounded observational data. More specifically, we define
Lkh(ω) = n∑ i=1 ( rih + V k h+1(s i h+1)− ω>φh(sih, aih, uih) )2 , ∀(k, h) ∈ [K]× [H], (3.8)
which corresponds to the least-squares loss for regressing rih + V k h+1(s i h+1) against φh(s i h, a i h, u i h) for all i ∈ [n]. Here {(sih, aih, uih, rih)}(i,h)∈[n]×[H] are the confounded observational data, where
uih ∼ P̃h(· | sih), sih+1 ∼ Ph(· | sih, aih, uih), and aih ∼ νh(· | sih, wih) with ν = {νh}h∈[H] being the behavior policy. Here recall that, with a slight abuse of notation, we write P(sh+1 | sh, ah, uh) as Ph(sh+1 | sh, ah, uh) and P(uh | sh) as P̃h(uh | sh), since they are induced by the SCM defined in §2. The update in (3.7) takes the following explicit form,
ωkh ← (Λkh)−1 ( k−1∑ τ=1 ψh(s τ h, a τ h) · ( V kh+1(s τ h+1) + r τ h ) +
n∑ i=1 φh(s i h, a i h, u i h) · ( V kh+1(s i h+1) + r i h )) , (3.9)
where
Λkh = k−1∑ τ=1 ψh(s τ h, a τ h)ψh(s τ h, a τ h) > + n∑ i=1 φh(s i h, a i h, u i h)φh(s i h, a i h, u i h) > + λI. (3.10)
Uncertainty Quantification. We now construct the UCB Γkh(·, ·) of the point estimator ψh(·, ·)>ωkh obtained from (3.9), which encourages the exploration of the less visited state-action pairs. To this end, we employ the following notion of information gain to motivate the UCB,
Γkh(s k h, a k h) ∝ H(ωkh | ξk−1)−H ( ωkh | ξk−1 ∪ {(skh, akh)} ) , (3.11)
where H(ωkh | ξk−1) is the differential entropy of the random variable ωkh given the data ξk−1. In particular, ξk−1 = {(sτh, aτh, rτh)}(τ,h)∈[k−1]×[H] ∪ {(sih, aih, uih, rih)}(i,h)∈[n]×[H] consists of the confounded observational data and the interventional data up to the (k − 1)-th episode. However, it is challenging to characterize the distribution of ωkh. To this end, we consider a Bayesian counterpart of the confounded MDP, where the prior of ωkh is N(0, I/λ) and the residual of the regression problem in (3.7) is N(0, 1). In such a “parallel” confounded MDP, the posterior of ωkh follows N(µk,h, (Λ k h) −1), where Λkh is defined in (3.10) and µk,h coincides with the right-hand side of (3.9). Moreover, it holds for all (skh, a k h) ∈ S ×A that
H(ωkh | ξk−1) = 1/2 · log det ( (2πe)d · (Λkh)−1 ) ,
H ( ωkh ∣∣ ξk−1 ∪ {(skh, akh)}) = 1/2 · log det((2πe)d · (Λkh + ψh(skh, akh)ψh(skh, akh)>)−1).
Correspondingly, we employ the following UCB, which instantiates (3.11), that is,
Γkh(s k h, a k h) = β ·
( log det ( Λkh + ψh(s k h, a k h)ψh(s k h, a k h) >)− log det(Λkh))1/2 (3.12)
for all (skh, a k h) ∈ S × A. Here β > 0 is a tuning parameter. We highlight that, although the information gain in (3.11) relies on the “parallel” confounded MDP, the UCB in (3.12), which is used in Line 5 of Algorithm 1, does not rely on the Bayesian perspective. Also, our analysis establishes the frequentist regret.
Regularization with Observational Data: A Bayesian Perspective. In the “parallel” confounded MDP, it holds that
ωkh ∼ N(0, I/λ), ωkh | ξ0 ∼ N ( µ1,h, (Λ 1 h) −1), ωkh | ξk−1 ∼ N(µk,h, (Λkh)−1),
where µk,h coincides with the right-hand side of (3.9) and µ1,h is defined by setting k = 1 in µk,h. Here ξ0 = {(sih, aih, uih, rih)}(i,h)∈[n]×[H] are the confounded observational data. Hence, the regularizer Lkh in (3.8) corresponds to using ω k h | ξ0 as the prior for the Bayesian regression problem given only the interventional data ξk−1 \ ξ0 = {(sτh, aτh, rτh)}(τ,h)∈[k−1]×[H].
3.2 Theory
The following theorem characterizes the regret of DOVI, which is defined in (2.3).
Theorem 3.5 (Regret of DOVI). Let β = CdH √
log(d(T + nH)/ζ) and λ = 1, where C > 0 and ζ ∈ (0, 1] are absolute constants. Under Assumptions 3.1 and 3.3, it holds with probability at least 1− 5ζ/2 that
Regret(T ) ≤ C ′ ·∆H · √ d3H3T · √ log ( d(T + nH)/ζ ) , (3.13)
where C ′ > 0 is an absolute constant and
∆H = 1√ dH2 H∑ h=1 ( log det(ΛK+1h )− log det(Λ 1 h) )1/2 . (3.14)
Proof. See §F.3 for a detailed proof.
Note that ΛK+1h (n + K + λ)I and Λ1h λI for all h ∈ [H]. Hence, it holds that ∆H = O( √ log(n+K + 1)) in the worst case. Thus, the regret of DOVI isO( √ d3H3T ) up to logarithmic factors, which is optimal in the total number of steps T if we only consider the online setting. However, ∆H is possibly much smaller than O( √ log(n+K + 1)), depending on the amount of information carried over by the confounded observational data from the offline setting, which is quantified in the following.
Interpretation of ∆H : An Information-Theoretic Perspective. Let ω∗h be the parameter of the globally optimal action-value function Q∗h, which corresponds to π
∗ in (2.3). Recall that we denote by ξ0 and ξK the confounded observational data {(sih, aih, uih, rih)}(i,h)∈[n]×[H] and the union {(sih, aih, uih, rih)}(i,h)∈[n]×[H] ∪ {(skh, akh, rkh)}(k,h)∈[K]×[H] of the confounded observational data and the interventional data up to the K-th episode, respectively. We consider the aforementioned Bayesian counterpart of the confounded MDP, where the prior of ω∗h is also N(0, I/λ). In such a “parallel” confounded MDP, we have
ω∗h ∼ N(0, I/λ), ω∗h | ξ0 ∼ N ( µ∗1,h, (Λ 1 h) −1), ω∗h | ξK ∼ N(µ∗K,h, (ΛK+1h )−1), (3.15)
where
µ∗1,h = (Λ 1 h) −1 n∑ i=1 φh(s i h, a i h, u i h) · ( V ∗h+1(s i h+1) + r i h ) ,
µ∗K,h = (Λ K+1 h )
−1 (
Λ1hµ ∗ 1,h + K∑ τ=1 ψh(s τ h, a τ h) · ( V ∗h+1(s τ h+1) + r τ h )) .
It then holds for the right-hand side of (3.14) that
1/2 · log det(ΛK+1h )− 1/2 · log det(Λ 1 h) = H(ω ∗ h | ξ0)−H(ω∗h | ξK). (3.16)
The left-hand side of (3.16) characterizes the information gain of intervention in the online setting given the confounded observational data in the offline setting. In other words, if the confounded observational data are sufficiently informative upon the backdoor adjustment, then ∆H is small, which implies that the regret is small. More specifically, the matrices (Λ1h) −1 and (ΛK+1h ) −1 defined in (3.10) characterize the ellipsoidal confidence sets given ξ0 and ξK , respectively. If the confounded observational data are sufficiently informative upon the backdoor adjustment, ΛK+1h is close to Λ1h. To illustrate, let {ψh(sτh, aτh)}(τ,h)∈[K]×[H] and {φh(sih, aih, uih)}(i,h)∈[n]×[H] be sampled uniformly at random from the canonical basis {e`}`∈[d] of Rd. It then holds that ΛK+1h ≈ (K + n)I/d + λI and Λ1h ≈ nI/d + λI . Hence, for λ = 1 and sufficiently large n and K, we have ∆H = O( √ log(1 +K/(n+ d))) = O( √ K/(n+ d)). For example, for n = Ω(K2), it holds that ∆H = O(n−1/2), which implies that the regret of DOVI is O(n−1/2 · √ d3H3T ). In other words, if the confounded observational data are sufficiently informative upon the backdoor adjustment, the regret of DOVI can be arbitrarily small given a sufficiently large sample size n of the confounded observational data, which is often the case in practice [8, 9, 21, 22, 29].
4 Conclusion
In this paper, we propose the deconfounded optimistic value iteration (DOVI) algorithm and its variant DOVI+, which incorporate the confounded observational data to the online reinforcement learning in a provably efficient manner. DOVI and DOVI+ explicitly adjust for the confounding bias in the observational data via the backdoor and frontdoor adjustments, respectively. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which considers the amount of information acquired from the offline dataset. We further conduct regret analysis of DOVI and DOVI+. Our analysis suggests that practitioners can tackle the confounding issue in the offline dataset by estimating the counterfactual reward for value function estimations, given that a proper adjustment such as the backdoor or frontdoor adjustment is available. In the case of backdoor and frontdoor adjustment, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments, suggesting that one can exploit the confounded observational data in reinforcement learning upon proper adjustments. In our future study, we wish to incorporate proxy variables that are native to MDPs for the adjustments of the offline dataset, such as the variables exploited by [4, 24, 40].
Acknowledgements
Zhaoran Wang acknowledges National Science Foundation (Awards 2048075, 2008827, 2015568, 1934931), Simons Institute (Theory of Reinforcement Learning), Amazon, J.P. Morgan, and Two Sigma for their support. Zhuoran Yang acknowledges Simons Institute (Theory of Reinforcement Learning). The authors also thank the anonymous reviewers, whose invaluable suggestions helped the authors improve the paper. | 1. What is the focus and contribution of the paper regarding online reinforcement learning with unobserved confounding?
2. What are the strengths and weaknesses of the proposed algorithm for online learning that incorporates offline data as a regularization term?
3. How does the paper address the issue of unobserved confounding in logged data, and what conditions does it propose for identifying RL models from confounded data?
4. What recent literature on using confounded off-line RL data could the paper consider including in its related work section?
5. Are there any concerns or suggestions regarding the presentation and discussion of DOVI+ in the paper's appendix? | Summary Of The Paper
Review | Summary Of The Paper
The paper studies the problem of online reinforcement learning while making use of existing offline data that was previously logged by some behavior policy, where there is unobserved confounding in the logged data. They provide some conditions under which the RL model can be identified from the confounded logged data, and under these conditions propose a variant of optimistic value iteration for online learning that incorporates this offline data as a regularization term, which has an intuitive Bayesian interpretation based on conditioning on this data. Finally, they provide regret bounds for their algorithm, with a leading term that incorporates the effect of the logged data and decays to zero as the amount of logged data grows to infinity.
Review
Overall, the paper has many positive aspects; it presents a well-motivated algorithm for an important problem, and provides regret bounds that shrink in terms of the amount of confounded logged data available. In this sense, the paper seems to fill an important niche in the literature. That being said, the paper has several significant weaknesses outlined below, which limits my recommendation. In particular, it feels somewhat lacking/incomplete without any kind of empirical evaluation.
Major weaknesses:
There is no empirical evaluation of their proposed algorithm, or even an empirical “proof of concept” on synthetic data. This makes it extremely difficult to assess how the algorithm is likely to fare in practice.
The backdoor criterion in Assumption 3.1 seems to be extremely central to the theory, but it is not explained at all. Ideally, there would be an explanation of when we expect this requirement to be satisfied versus not. Furthermore, it would be very beneficial in motivating the entire paper to provide a compelling clear/concrete example of a problem with unmeasured confounding that fits within your framework and satisfies assumption 3.1. Without such an example, it is difficult to assess how useful the theory is.
Along a similar vein to the previous concern, the confounding model presented in section 2 seems very weird and non-standard (since, the way it is written, it seems like the confounders are caused by the observed state). This is in contrast to standard kinds of confounding models, where e.g. confounders are independent at each time step, follow in a Markov chain, or follow the POMDP model, etc. Again, it would be great to have a compelling example of a problem with this kind of confounding structure.
The method is presented as being “provably efficient”. But what does this actually mean? It doesn’t seem to be actually justified as efficient in any concrete sense. One possible sense of efficiency is in terms of semiparametric estimation, but that doesn’t seem to be applied anywhere here. Another sense could be in terms of optimal regret, but this doesn’t seem to be established; although the ∆H term in the regret bound has the nice property that it should decay as n → ∞ as discussed in the paper, nothing as far as I can tell rules out the possibility of an alternative algorithm that has faster regret decay as n → ∞. Given this, presenting the work as “provably efficient” in the title is extremely misleading.
There seems to be a bunch of important recent literature on using confounded off-line RL data that is missing from the related work, for example Namkoong et al. (2020) “Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding”, and Bennett et al. (2021) “Off-policy Evaluation in Infinite-horizon Reinforcement Learning with Latent Confounders”.
Other minor issues / comments / suggestions / typos:
Typo in line 4 “semple" -> “sample”
On line 41 you say that “The complicated traffic and poor road design subsequently affect both the action of the drivers and the outcome”. But do we expect these things to be unobserved? This only makes sense as an example of confounding if they are things not observed in the state space, but which the agent took into account. It would be good to elaborate here.
For fairness, since all presentation and discussion of DOVI+ was deferred to appendix, I am not taking it into account when assessing the paper.
The discussion of structural causal models at the start of section 2 is very confusingly presented and mostly doesn’t seem to contribute much other than defining the “do” notation (it presents a bunch of notation that is then never used). Also, it would be good to define what the “do calculus” notation means more explicitly for people not already familiar with it.
The definition of Regret in equation 2.3 defines it as a random quantity (since the definition involves the initial state s_1^k for each k ∈ [K]), which seems somewhat odd. Should there have been an expectation over the initial state?
You refer to Figure 4 on line 216, but there appears to be no Figure 4 in the paper.
In a bunch of different places, you talk about replacing P with P_h. I don’t understand what is meant to be the difference between these two functions; this needs to be explained concretely.
In assumption 3.3, is ϕ_h meant to be a fixed given feature map? This is not explained.
Also, in assumption 3.3, you refer to the reward function parameters as θ_h, but later in the paper this appears to instead be referred to as ω_h.
Typo on lines 249-250 “is know” -> “is known”.
What is the interpretation of the β tuning parameter in your algorithm? This does not appear to be clearly explained. Also, what would be the expected impact of increasing versus decreasing β on how the algorithm operates? |
NIPS | Title
Active Exploration for Learning Symbolic Representations
Abstract
We introduce an online active exploration algorithm for data-efficiently learning an abstract symbolic model of an environment. Our algorithm is divided into two parts: the first part quickly generates an intermediate Bayesian symbolic model from the data that the agent has collected so far, which the agent can then use along with the second part to guide its future exploration towards regions of the state space that the model is uncertain about. We show that our algorithm outperforms random and greedy exploration policies on two different computer game domains. The first domain is an Asteroids-inspired game with complex dynamics but basic logical structure. The second is the Treasure Game, with simpler dynamics but more complex logical structure.
1 Introduction
Much work has been done in artificial intelligence and robotics on how high-level state abstractions can be used to significantly improve planning [19]. However, building these abstractions is difficult, and consequently, they are typically hand-crafted [15, 13, 7, 4, 5, 6, 20, 9].
A major open question is then the problem of abstraction: how can an intelligent agent learn high-level models that can be used to improve decision making, using only noisy observations from its high-dimensional sensor and actuation spaces? Recent work [11, 12] has shown how to automatically generate symbolic representations suitable for planning in high-dimensional, continuous domains. This work is based on the hierarchical reinforcement learning framework [1], where the agent has access to high-level skills that abstract away the low-level details of control. The agent then learns representations for the (potentially abstract) effect of using these skills. For instance, opening a door is a high-level skill, while knowing that opening a door typically allows one to enter a building would be part of the representation for this skill. The key result of that work was that the symbols required to determine the probability of a plan succeeding are directly determined by characteristics of the skills available to an agent. The agent can learn these symbols autonomously by exploring the environment, which removes the need to hand-design symbolic representations of the world.
It is therefore possible to learn the symbols by naively collecting samples from the environment, for example by random exploration. However, in an online setting the agent should be able to use its previously collected data to compute an exploration policy that leads to better data efficiency. We introduce such an algorithm, which is divided into two parts: the first part quickly generates an intermediate Bayesian symbolic model from the data that the agent has collected so far, while the second part uses the model plus Monte-Carlo tree search to guide the agent’s future exploration towards regions of the state space that the model is uncertain about. We show that our algorithm is significantly more data-efficient than more naive methods in two different computer game domains. The first domain is an Asteroids-inspired game with complex dynamics but basic logical structure. The second is the Treasure Game, with simpler dynamics but more complex logical structure.
2 Background
As a motivating example, imagine deciding the route you are going to take to the grocery store; instead of planning over the various sequences of muscle contractions that you would use to complete the trip, you would consider a small number of high-level alternatives such as whether to take one route or another. You would also avoid considering how your exact low-level state affected your decision making, and instead use an abstract (symbolic) representation of your state with components such as whether you are at home or at work, whether you have to get gas, whether there is traffic, etc. This simplification reduces computational complexity, and allows for increased generalization over past experiences. In the following sections, we introduce the frameworks that we use to represent the agent’s high-level skills, and symbolic models for those skills.
2.1 Semi-Markov Decision Processes
We assume that the agent’s environment can be described by a semi-Markov decision process (SMDP), given by a tuple D = (S,O,R, P, γ), where S ⊆ Rd is a d-dimensional continuous state space, O(s) returns a set of temporally extended actions, or options [19] available in state s ∈ S, R(s′, t, s, o) and P (s′, t | s, o) are the reward received and probability of termination in state s′ ∈ S after t time steps following the execution of option o ∈ O(s) in state s ∈ S, and γ ∈ (0, 1] is a discount factor. In this paper, we are not concerned with the time taken to execute o, so we use P (s′ | s, o) = ∫ P (s′, t | s, o)dt.
An option o is given by three components: πo, the option policy that is executed when the option is invoked, Io, the initiation set consisting of the states where the option can be executed from, and βo(s)→ [0, 1], the termination condition, which returns the probability that the option will terminate upon reaching state s. Learning models for the initiation set, rewards, and transitions for each option, allows the agent to reason about the effect of its actions in the environment. To learn these option models, the agent has the ability to collect observations of the forms (s,O(s)) when entering a state s and (s, o, s′, r, t) upon executing option o from s.
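To make the option interface concrete, a minimal sketch of how such observations could be logged is given below. It is purely illustrative (the paper does not prescribe an implementation) and assumes a gym-style environment whose step() returns the next low-level state and reward.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Option:
    name: str
    policy: Callable[[np.ndarray], int]        # pi_o: low-level action for the current state
    initiation: Callable[[np.ndarray], bool]   # I_o: can the option be started here?
    termination: Callable[[np.ndarray], float] # beta_o: probability of terminating in a state

def run_option(env, s_start: np.ndarray, o: Option, rng=None):
    """Execute o from s_start and return the (s, o, s', r, t) observation tuple."""
    rng = rng or np.random.default_rng()
    s, total_reward, t = s_start, 0.0, 0
    while True:
        s, r = env.step(o.policy(s))           # hypothetical environment interface
        total_reward, t = total_reward + r, t + 1
        if rng.random() < o.termination(s):
            return (s_start, o.name, s, total_reward, t)
```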
2.2 Abstract Representations for Planning
We are specifically interested in learning option models which allow the agent to easily evaluate the success probability of plans. A plan is a sequence of options to be executed from some starting state, and it succeeds if and only if it is able to be run to completion (regardless of the reward). Thus, a plan {o1, o2, ..., on} with starting state s succeeds if and only if s ∈ Io1 and the termination state of each option (except for the last) lies in the initiation set of the following option, i.e. s′ ∼ P (s′ | s, o1) ∈ Io2 , s′′ ∼ P (s′′ | s′, o2) ∈ Io3 , and so on. Recent work [11, 12] has shown how to automatically generate a symbolic representation that supports such queries, and is therefore suitable for planning. This work is based on the idea of a probabilistic symbol, a compact representation of a distribution over infinitely many continuous, low-level states. For example, a probabilistic symbol could be used to classify whether or not the agent is currently in front of a door, or one could be used to represent the state that the agent would find itself in after executing its ‘open the door’ option. In both cases, using probabilistic symbols also allows the agent to be uncertain about its state.
The following two probabilistic symbols are provably sufficient for evaluating the success probability of any plan [12]; the probabilistic precondition: Pre(o) = P (s ∈ Io), which expresses the probability that an option o can be executed from each state s ∈ S, and the probabilistic image operator:
$$\mathrm{Im}(o, Z) = \frac{\int_S P(s' \mid s, o)\, Z(s)\, P(I_o \mid s)\, ds}{\int_S Z(s)\, P(I_o \mid s)\, ds},$$
which represents the distribution over termination states if an option o is executed from a distribution over starting states Z. These symbols can be used to compute the probability that each successive option in the plan can be executed, and these probabilities can then be multiplied to compute the overall success probability of the plan; see Figure 1 for a visual demonstration of a plan of length 2.
Subgoal Options Unfortunately, it is difficult to model Im(o, Z) for arbitrary options, so we focus on restricted types of options. A subgoal option [17] is a special class of option where the distribution over termination states (referred to as the subgoal) is independent of the distribution over starting
states that it was executed from, e.g. if you make the decision to walk to your kitchen, the end result will be the same regardless of where you started from.
For subgoal options, the image operator can be replaced with the effects distribution: Eff(o) = Im(o, Z),∀Z(S), the resulting distribution over states after executing o from any start distribution Z(S). Planning with a set of subgoal options is simple because for each ordered pair of options oi and oj , it is possible to compute and store G(oi, oj), the probability that oj can be executed immediately after executing oi: G(oi, oj) = ∫ S Pre(oj , s)Eff(oi)(s)ds.
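As a small illustration of the resulting planning query (a hypothetical sketch, not code from the paper): once the pairwise execution probabilities G(o_i, o_j) are stored, scoring a plan reduces to one precondition evaluation followed by a product of table lookups.

```python
import numpy as np

def plan_success_probability(plan, s0, pre, G):
    """plan: option indices; pre(o, s): P(s in I_o); G[i, j]: P(o_j executable right after o_i)."""
    prob = pre(plan[0], s0)
    for o_prev, o_next in zip(plan, plan[1:]):
        prob *= G[o_prev, o_next]
    return prob

# Toy example with two options and a hand-filled G matrix.
G = np.array([[0.0, 0.9],
              [0.7, 0.0]])
print(plan_success_probability([0, 1], s0=None, pre=lambda o, s: 0.95, G=G))  # 0.855
```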
We use the following two generalizations of subgoal options: abstract subgoal options model the more general case where executing an option leads to a subgoal for a subset of the state variables (called the mask), leaving the rest unchanged. For example, walking to the kitchen leaves the amount of gas in your car unchanged. More formally, the state vector can be partitioned into two parts s = [a, b], such that executing o leaves the agent in state s′ = [a, b′], where P (b′) is independent of the distribution over starting states. The second generalization is the (abstract) partitioned subgoal option, which can be partitioned into a finite number of (abstract) subgoal options. For instance, an option for opening doors is not a subgoal option because there are many doors in the world, however it can be partitioned into a set of subgoal options, with one for every door.
The subgoal (and abstract subgoal) assumptions propose that the exact state from which option execution starts does not really affect the options that can be executed next. This is somewhat restrictive and often does not hold for options as given, but can hold for options once they have been partitioned. Additionally, the assumptions need only hold approximately in practice.
3 Online Active Symbol Acquisition
Previous approaches for learning symbolic models from data [11, 12] used random exploration. However, real world data from high-level skills is very expensive to collect, so it is important to use a more data-efficient approach. In this section, we introduce a new method for learning abstract models data-efficiently. Our approach maintains a distribution over symbolic models which is updated after every new observation. This distribution is used to choose the sequence of options that in expectation maximally reduces the amount of uncertainty in the posterior distribution over models. Our approach has two components: an active exploration algorithm which takes as input a distribution over symbolic models and returns the next option to execute, and an algorithm for quickly building a distribution over symbolic models from data. The second component is an improvement upon previous approaches in that it returns a distribution and is fast enough to be updated online, both of which we require.
3.1 Fast Construction of a Distribution over Symbolic Option Models
Now we show how to construct a more general model than G that can be used for planning with abstract partitioned subgoal options. The advantages of our approach versus previous methods are that our algorithm is much faster, and the resulting model is Bayesian, both of which are necessary for the active exploration algorithm introduced in the next section.
Recall that the agent can collect observations of the forms (s, o, s′) upon executing option o from s, and (s,O(s)) when entering a state s, where O(s) is the set of available options in state s. Given a sequence of observations of this form, the first step of our approach is to find the factors [12],
partitions of state variables that always change together in the observed data. For example, consider a robot which has options for moving to the nearest table and picking up a glass on an adjacent table. Moving to a table changes the x and y coordinates of the robot without changing the joint angles of the robot’s arms, while picking up a glass does the opposite. Thus, the x and y coordinates and the arm joint angles of the robot belong to different factors. Splitting the state space into factors reduces the number of potential masks (see end of Section 2.2) because we assume that if state variables i and j always change together in the observations, then this will always occur, e.g. we assume that moving to the table will never move the robot’s arms.1
Finding the Factors Compute the set of observed masks M from the (s, o, s′) observations: each observation’s mask is the subset of state variables that differ substantially between s and s′. Since we work in continuous, stochastic domains, we must detect the difference between minor random noise (independent of the action) and a substantial change in a state variable caused by action execution. In principle this requires modeling action-independent and action-dependent differences, and distinguishing between them, but this is difficult to implement. Fortunately we have found that in practice allowing some noise and having a simple threshold is often effective, even in more noisy and complex domains. For each state variable i, let Mi ⊆M be the subset of the observed masks that contain i. Two state variables i and j belong to the same factor f ∈ F if and only if Mi = Mj . Each factor f is given by a set of state variables and thus corresponds to a subspace Sf . The factors are updated after every new observation.
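The mask-and-factor computation just described is straightforward to implement; the sketch below is a hypothetical rendering of it, assuming states are numeric vectors and using a fixed noise threshold to decide whether a state variable "differs substantially".

```python
import numpy as np

def find_factors(transitions, threshold=1e-3):
    """transitions: list of (s, o, s_next) tuples with array-like states.
    Returns factors: lists of state-variable indices whose observed mask sets coincide."""
    masks = set()
    for s, _, s_next in transitions:
        diff = np.abs(np.asarray(s_next, float) - np.asarray(s, float))
        changed = tuple(np.flatnonzero(diff > threshold))
        if changed:
            masks.add(changed)
    dim = len(transitions[0][0])
    # M_i: the observed masks that contain state variable i.
    containing = {i: frozenset(m for m in masks if i in m) for i in range(dim)}
    groups = {}
    for i, key in containing.items():
        groups.setdefault(key, []).append(i)
    return list(groups.values())
```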
Let S∗ be the set of states that the agent has observed and let S∗f be the projection of S ∗ onto the subspace Sf for some factor f , e.g. in the previous example there is a S∗f which consists of the set of observed robot (x, y) coordinates. It is important to note that the agent’s observations come only from executing partitioned abstract subgoal options. This means that S∗f consists only of abstract subgoals, because for each s ∈ S∗, sf was either unchanged from the previous state, or changed to another abstract subgoal. In the robot example, all (x, y) observations must be adjacent to a table because the robot can only execute an option that terminates with it adjacent to a table or one that does not change its (x, y) coordinates. Thus, the states in S∗ can be imagined as a collection of abstract subgoals for each of the factors. Our next step is to build a set of symbols for each factor to represent its abstract subgoals, which we do using unsupervised clustering.
Finding the Symbols For each factor f ∈ F , we find the set of symbols Zf by clustering S∗f . Let Zf (sf ) be the corresponding symbol for state s and factor f . We then map the observed states s ∈ S∗ to their corresponding symbolic states sd = {Zf (sf ),∀f ∈ F}, and the observations (s,O(s)) and (s, o, s′) to (sd, O(s)) and (sd, o, s′d), respectively.
In the robot example, the (x, y) observations would be clustered around tables that the robot could travel to, so there would be a symbol corresponding to each table.
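One way to instantiate the clustering step is sketched below. It is hypothetical in that the paper does not commit to a particular clustering algorithm; a density-based clusterer is used here purely for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_symbols(observed_states, factor, eps=0.5):
    """Cluster the projection of the observed states onto one factor.
    Returns the cluster centers and a function Z_f mapping a state to its symbol id."""
    projected = np.asarray(observed_states, float)[:, list(factor)]
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(projected)
    centers = np.vstack([projected[labels == k].mean(axis=0) for k in np.unique(labels)])

    def Z_f(s):
        p = np.asarray(s, float)[list(factor)]
        return int(np.argmin(np.linalg.norm(centers - p, axis=1)))

    return centers, Z_f
```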
We want to build our models within the symbolic state space Sd. Thus we define the symbolic precondition, Pre(o, sd), which returns the probability that the agent can execute an option from some symbolic state, and the symbolic effects distribution for a subgoal option o, Eff (o), maps to a subgoal distribution over symbolic states. For example, the robot’s ‘move to the nearest table’ option maps the robot’s current (x, y) symbol to the one which corresponds to the nearest table.
The next step is to partition the options into abstract subgoal options (in the symbolic state space), e.g. we want to partition the ‘move to the nearest table’ option in the symbolic state space so that the symbolic states in each partition have the same nearest table.
Partitioning the Options For each option o, we initialize the partitioning P o so that each symbolic state starts in its own partition. We use independent Bayesian sparse Dirichlet-categorical models [18] for the symbolic effects distribution of each option partition.2 We then perform Bayesian Hierarchical Clustering [8] to merge partitions which have similar symbolic effects distributions.3
1The factors assumption is not strictly necessary as we can assign each state variable to its own factor. However, using this uncompressed representation can lead to an exponential increase in the size of the symbolic state space and a corresponding increase in the sample complexity of learning the symbolic models.
2We use sparse Dirichlet-categorical models because there are a combinatorial number of possible symbolic state transitions, but we expect that each partition has non-zero probability for only a small number of them.
3We use the closed form solutions for Dirichlet-multinomial models provided by the paper.
Algorithm 1 Fast Construction of a Distribution over Symbolic Option Models
1: Find the set of observed masks M.
2: Find the factors F.
3: ∀f ∈ F, find the set of symbols Z_f.
4: Map the observed states s ∈ S∗ to symbolic states s^d ∈ S∗^d.
5: Map the observations (s, O(s)) and (s, o, s′) to (s^d, O(s)) and (s^d, o, s′^d).
6: ∀o ∈ O, initialize P^o and perform Bayesian Hierarchical Clustering on it.
7: ∀o ∈ O, find A^o and F^o_∗.
There is a special case where the agent has observed that an option o was available in some symbolic states S^d_a, but has yet to actually execute it from any s^d ∈ S^d_a. These are not included in the Bayesian Hierarchical Clustering; instead we have a special prior for the partition of o that they belong to. After completing the merge step, the agent has a partitioning P^o for each option o. Our prior is that with probability q^o,4 each s^d ∈ S^d_a belongs to the partition p^o ∈ P^o which contains the symbolic states most similar to s^d, and with probability 1 − q^o each s^d belongs to its own partition. To determine the partition which is most similar to some symbolic state, we first find A^o, the smallest subset of factors which can still be used to correctly classify P^o. We then map each s^d ∈ S^d_a to the most similar partition by trying to match s^d masked by A^o with a masked symbolic state already in one of the partitions. If there is no match, s^d is placed in its own partition.
Our final consideration is how to model the symbolic preconditions. The main concern is that many factors are often irrelevant for determining if some option can be executed. For example, whether or not you have keys in your pocket does not affect whether you can put on your shoe.
Modeling the Symbolic Preconditions Given an option o and subset of factors F^o, let S^d_{F^o} be the symbolic state space projected onto F^o. We use independent Bayesian Beta-Bernoulli models for the symbolic precondition of o in each masked symbolic state s^d_{F^o} ∈ S^d_{F^o}. For each option o, we use Bayesian model selection to find the subset of factors F^o_∗ which maximizes the likelihood of the symbolic precondition models.
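A minimal sketch of the Beta-Bernoulli machinery this paragraph relies on is given below (hypothetical helper names, uniform Beta(1, 1) priors assumed); the summed log marginal likelihood is the quantity that Bayesian model selection over candidate factor subsets F^o would compare.

```python
from math import lgamma

def log_evidence(outcomes, alpha=1.0, beta=1.0):
    """Beta-Bernoulli marginal likelihood of a list of 0/1 'option was executable' outcomes."""
    k, n = sum(outcomes), len(outcomes)
    return (lgamma(alpha + beta) - lgamma(alpha) - lgamma(beta)
            + lgamma(alpha + k) + lgamma(beta + n - k) - lgamma(alpha + beta + n))

def precondition_score(observations, factor_subset):
    """observations: (symbolic_state_dict, executable_flag) pairs.
    Groups outcomes by the symbolic state masked to factor_subset and sums the evidence."""
    groups = {}
    for sd, flag in observations:
        key = tuple(sd[f] for f in factor_subset)
        groups.setdefault(key, []).append(flag)
    return sum(log_evidence(v) for v in groups.values())
```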
The final result is a distribution over symbolic option models H, which consists of the combined sets of independent symbolic precondition models {Pre(o, s^d_{F^o_∗}); ∀o ∈ O, ∀s^d_{F^o_∗} ∈ S^d_{F^o_∗}} and independent symbolic effects distribution models {Eff(o, p^o); ∀o ∈ O, ∀p^o ∈ P^o}. The complete procedure is given in Algorithm 1. A symbolic option model h ∼ H can be sampled by drawing parameters for each of the Bernoulli and categorical distributions from the corresponding Beta and sparse Dirichlet distributions, and drawing outcomes for each q^o. It is also possible to consider distributions over other parts of the model such as the symbolic state space and/or a more complicated one for the option partitionings, which we leave for future work.
3.2 Optimal Exploration
In the previous section we have shown how to efficiently compute a distribution over symbolic option models H . Recall that the ultimate purpose of H is to compute the success probabilities of plans (see Section 2.2). Thus, the quality of H is determined by the accuracy of its predicted plan success probabilities, and efficiently learning H corresponds to selecting the sequence of observations which maximizes the expected accuracy of H . However, it is difficult to calculate the expected accuracy of H over all possible plans, so we define a proxy measure to optimize which is intended to represent the amount of uncertainty in H . In this section, we introduce our proxy measure, followed by an algorithm for finding the exploration policy which optimizes it. The algorithm operates in an online manner, building H from the data collected so far, using H to select an option to execute, updating H with the new observation, and so on.
First we define the standard deviation σH , the quantity we use to represent the amount of uncertainty in H . To define the standard deviation, we need to also define the distance and mean.
4This is a user specified parameter.
We define the distance K from h2 ∈ H to h1 ∈ H , to be the sum of the Kullback-Leibler (KL) divergences5 between their individual symbolic effect distributions plus the sum of the KL divergences between their individual symbolic precondition distributions:6
$$K(h_1, h_2) = \sum_{o \in O} \Bigl[\, \sum_{s^d_{F^o_*} \in S^d_{F^o_*}} D_{\mathrm{KL}}\bigl(\mathrm{Pre}^{h_1}(o, s^d_{F^o_*}) \,\big\|\, \mathrm{Pre}^{h_2}(o, s^d_{F^o_*})\bigr) + \sum_{p^o \in P^o} D_{\mathrm{KL}}\bigl(\mathrm{Eff}^{h_1}(o, p^o) \,\big\|\, \mathrm{Eff}^{h_2}(o, p^o)\bigr) \Bigr].$$
We define the mean, E[H], to be the symbolic option model such that each Bernoulli symbolic precondition and categorical symbolic effects distribution is equal to the mean of the corresponding Beta or sparse Dirichlet distribution:
$$\forall o \in O,\ \forall p^o \in P^o, \quad \mathrm{Eff}^{E[H]}(o, p^o) = E_{h \sim H}\bigl[\mathrm{Eff}^{h}(o, p^o)\bigr],$$
$$\forall o \in O,\ \forall s^d_{F^o_*} \in S^d_{F^o_*}, \quad \mathrm{Pre}^{E[H]}(o, s^d_{F^o_*}) = E_{h \sim H}\bigl[\mathrm{Pre}^{h}(o, s^d_{F^o_*})\bigr].$$
The standard deviation σH is then simply: σH = E_{h∼H}[K(h, E[H])]. This represents the expected amount of information which is lost if E[H] is used to approximate H. Now we define the optimal exploration policy for the agent, which aims to maximize the expected reduction in σH after H is updated with new observations. Let H(w) be the posterior distribution over symbolic models when H is updated with symbolic observations w (the partitioning is not updated, only the symbolic effects distribution and symbolic precondition models), and let W(H, i, π) be the distribution over symbolic observations drawn from the posterior of H if the agent follows policy π for i steps. We define the optimal exploration policy π∗ as:
$$\pi^* = \operatorname*{argmax}_{\pi}\; \sigma_H - E_{w \sim W(H, i, \pi)}\bigl[\sigma_{H(w)}\bigr].$$
For the convenience of our algorithm, we rewrite the second term by switching the order of the expectations: E_{w∼W(H,i,π)}[E_{h∼H(w)}[K(h, E[H(w)])]] = E_{h∼H}[E_{w∼W(h,i,π)}[K(h, E[H(w)])]]. Note that the objective function is non-Markovian because H is continuously updated with the agent’s new observations, which changes σH. This means that π∗ is non-stationary, so Algorithm 2 approximates π∗ in an online manner using Monte-Carlo tree search (MCTS) [3] with the UCT tree policy [10]. π_T is the combined tree and rollout policy for MCTS, given tree T.
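The quantity σH has no convenient closed form over the full model, but it is straightforward to estimate by sampling. The sketch below is hypothetical and covers only the effects part of the model, represented as a dictionary of Dirichlet parameter vectors.

```python
import numpy as np

def estimate_sigma_H(dirichlet_params, n_samples=200, seed=0):
    """Monte-Carlo estimate of sigma_H = E_{h~H}[K(h, E[H])] over the effects distributions.
    dirichlet_params: {(option, partition): alpha vector}."""
    rng = np.random.default_rng(seed)
    means = {k: a / a.sum() for k, a in dirichlet_params.items()}
    total = 0.0
    for _ in range(n_samples):
        for k, a in dirichlet_params.items():
            p = rng.dirichlet(a)          # sampled effects distribution of one model h
            m = means[k]                  # corresponding distribution of the mean model E[H]
            total += float(np.sum(p * (np.log(p + 1e-12) - np.log(m + 1e-12))))  # KL(p || m)
    return total / n_samples

params = {("go-left", 0): np.array([3.0, 1.0, 1.0]),
          ("go-right", 0): np.array([10.0, 2.0, 1.0])}
print(estimate_sigma_H(params))
```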
There is a special case when the agent simulates the observation of a previously unobserved transition, which can occur under the sparse Dirichlet-categorical model. In this case, the amount of information gained is very large, and furthermore, the agent is likely to transition to a novel symbolic state. Rather than modeling the unexplored state space, if an unobserved transition is encountered during an MCTS update, the update immediately terminates with a large bonus to the score, similar to the approach of the R-max algorithm [2]. The form of the bonus is −zg, where g is the depth at which the update terminated and z is a constant. The bonus reflects the opportunity cost of not experiencing something novel as quickly as possible, and in practice it tends to dominate (as it should).
4 The Asteroids Domain
The Asteroids domain is shown in Figure 2a and was implemented using the pybox2d physics simulator. The agent controls a ship by either applying a thrust in the direction it is facing or applying a torque in either direction. The goal of the agent is to be able to navigate the environment without colliding with any of the four “asteroids.” The agent’s starting location is next to asteroid 1. The agent is given the following 6 options (see Appendix A for additional details):
1. move-counterclockwise and move-clockwise: the ship moves from the current face it is adjacent to, to the midpoint of the face which is counterclockwise/clockwise on the same asteroid from the current face. Only available if the ship is at an asteroid.
5The KL divergence has previously been used in other active exploration scenarios [16, 14]. 6Similarly to other active exploration papers, we define the distance to depend only on the transition models
and not the reward models.
Algorithm 2 Optimal Exploration
Input: Number of remaining option executions i.
1: while i ≥ 0 do
2: Build H from observations (Algorithm 1).
3: Initialize tree T for MCTS.
4: while number of updates < threshold do
5: Sample a symbolic model h ∼ H.
6: Do an MCTS update of T with dynamics given by h.
7: Terminate the current update if depth g ≥ i, or an unobserved transition is encountered.
8: Store simulated observations w ∼ W(h, g, π_T).
9: Score = K(h, E[H]) − K(h, E[H(w)]) − zg.
10: end while
11: return most visited child of root node.
12: Execute corresponding option; update observations; i--.
13: end while
2. move-to-asteroid-1, move-to-asteroid-2, move-to-asteroid-3, and move-to-asteroid-4: the ship moves to the midpoint of the closest face of asteroid 1-4 to which it has an unobstructed path. Only available if the ship is not already at the asteroid and an unobstructed path to some face exists.
Exploring with these options results in only one factor (for the entire state space), with symbols corresponding to each of the 35 asteroid faces as shown in Figure 2a.
Results We tested the performance of three exploration algorithms: random, greedy, and our algorithm. For the greedy algorithm, the agent first computes the symbolic state space using steps 1-5 of Algorithm 1, and then chooses the option with the lowest execution count from its current symbolic state. The hyperparameter settings that we use for our algorithm are given in Appendix A.
Figures 3a, 3b, and 3c show the percentage of time that the agent spends on exploring asteroids 1, 3, and 4, respectively. The random and greedy policies have difficulty escaping asteroid 1, and are rarely able to reach asteroid 4. On the other hand, our algorithm allocates its time much more proportionally. Figure 3d shows the number of symbolic transitions that the agent has not observed (out of 115 possible).7 As we discussed in Section 3, the number of unobserved symbolic transitions is a good representation of the amount of information that the models are missing from the environment.
Our algorithm significantly outperforms random and greedy exploration. Note that these results are using an uninformative prior and the performance of our algorithm could be significantly improved by
7We used Algorithm 1 to build symbolic models from the data gathered by each exploration algorithm.
starting with more information about the environment. To try to give additional intuition, in Appendix A we show heatmaps of the (x, y) coordinates visited by each of the exploration algorithms.
5 The Treasure Game Domain
The Treasure Game [12], shown in Figure 2b, features an agent in a 2D, 528× 528 pixel video-game like world, whose goal is to obtain treasure and return to its starting position on a ladder at the top of the screen. The 9-dimensional state space is given by the x and y positions of the agent, key, and treasure, the angles of the two handles, and the state of the lock.
The agent is given 9 options: go-left, go-right, up-ladder, down-ladder, jump-left, jump-right, down-right, down-left, and interact. See Appendix A for a more detailed description of the options and the environment dynamics. Given these options, the 7 factors with their corresponding number of symbols are: player-x, 10; player-y, 9; handle1-angle, 2; handle2-angle, 2; key-x and key-y, 3; bolt-locked, 2; and goldcoin-x and goldcoin-y, 2.
Results We tested the performance of the same three algorithms: random, greedy, and our algorithm. Figure 4a shows the fraction of time that the agent spends without having the key and with the lock still locked. Figures 4b and 4c show the number of times that the agent obtains the key and treasure, respectively. Figure 4d shows the number of unobserved symbolic transitions (out of 240 possible). Again, our algorithm performs significantly better than random and greedy exploration. The data
from our algorithm has much better coverage, and thus leads to more accurate symbolic models. For instance in Figure 4c you can see that random and greedy exploration did not obtain the treasure after 200 executions; without that data the agent would not know that it should have a symbol that corresponds to possessing the treasure.
6 Conclusion
We have introduced a two-part algorithm for data-efficiently learning an abstract symbolic representation of an environment which is suitable for planning with high-level skills. The first part of the algorithm quickly generates an intermediate Bayesian symbolic model directly from data. The second part guides the agent’s exploration towards areas of the environment that the model is uncertain about. This algorithm is useful when the cost of data collection is high, as is the case in most real world artificial intelligence applications. Our results show that the algorithm is significantly more data efficient than using more naive exploration policies.
7 Acknowledgements
This research was supported in part by the National Institutes of Health under award number R01MH109177. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. | 1. What is the main contribution of the paper in the field of artificial intelligence and decision-making?
2. How does the proposed method differ from previous works in terms of its exploration strategy and use of symbolic representation?
3. Can you explain the two-phase strategy used in the proposed method and how it drives exploration based on prediction goodness?
4. How was the effectiveness of the approach demonstrated in the two domains used in the experiments?
5. What are the strengths and weaknesses of the paper regarding its clarity, density, and comparisons with other works? | Review | Review
The paper proposes an active exploration scheme for more efficiently building a symbolic representation of the world in which the agent operates. This builds on earlier work that introduces a rich symbolic language for representing decision making problems in continuous state spaces. The current work proposes a two-phase strategy - the first phase builds a Bayesian model for the options based on data gathered through execution of the options. The second phase uses an MCTS-based exploration strategy that drives the exploration based on the goodness of prediction of the occurrence of the symbols, given a set of options. They demonstrate the utility of the approach on two domains - one inspired by an asteroids game and the other a treasure hunting game. The line of inquiry of this paper is very promising and quite needed. A truly intelligent agent should be able to actively seek experience that would better enable it to "understand" the domain.
The paper is somewhat well written. The basic setting of the work is well explained, but it is a little hard to understand the framework proposed in this work, since the writing in sections 3.1 and 3.2 is a bit dense and hard to follow. It is not entirely clear what is the specific feature that makes the approach data efficient. The comparisons are against random and greedy exploration. Given that the greedy algorithm is a count based exploration strategy, it is surprising that it does so poorly. Is there some explanation for this? It would be nice to see some interpretation of the nature of the models learned when the active exploration strategy is used.
One nitpick: If one ignores the duration of option completion, then what we have is an MDP and no longer an SMDP. |
NIPS | Title
Active Exploration for Learning Symbolic Representations
Abstract
We introduce an online active exploration algorithm for data-efficiently learning an abstract symbolic model of an environment. Our algorithm is divided into two parts: the first part quickly generates an intermediate Bayesian symbolic model from the data that the agent has collected so far, which the agent can then use along with the second part to guide its future exploration towards regions of the state space that the model is uncertain about. We show that our algorithm outperforms random and greedy exploration policies on two different computer game domains. The first domain is an Asteroids-inspired game with complex dynamics but basic logical structure. The second is the Treasure Game, with simpler dynamics but more complex logical structure.
1 Introduction
Much work has been done in artificial intelligence and robotics on how high-level state abstractions can be used to significantly improve planning [19]. However, building these abstractions is difficult, and consequently, they are typically hand-crafted [15, 13, 7, 4, 5, 6, 20, 9].
A major open question is then the problem of abstraction: how can an intelligent agent learn high-level models that can be used to improve decision making, using only noisy observations from its high-dimensional sensor and actuation spaces? Recent work [11, 12] has shown how to automatically generate symbolic representations suitable for planning in high-dimensional, continuous domains. This work is based on the hierarchical reinforcement learning framework [1], where the agent has access to high-level skills that abstract away the low-level details of control. The agent then learns representations for the (potentially abstract) effect of using these skills. For instance, opening a door is a high-level skill, while knowing that opening a door typically allows one to enter a building would be part of the representation for this skill. The key result of that work was that the symbols required to determine the probability of a plan succeeding are directly determined by characteristics of the skills available to an agent. The agent can learn these symbols autonomously by exploring the environment, which removes the need to hand-design symbolic representations of the world.
It is therefore possible to learn the symbols by naively collecting samples from the environment, for example by random exploration. However, in an online setting the agent should be able to use its previously collected data to compute an exploration policy that leads to better data efficiency. We introduce such an algorithm, which is divided into two parts: the first part quickly generates an intermediate Bayesian symbolic model from the data that the agent has collected so far, while the second part uses the model plus Monte-Carlo tree search to guide the agent’s future exploration towards regions of the state space that the model is uncertain about. We show that our algorithm is significantly more data-efficient than more naive methods in two different computer game domains. The first domain is an Asteroids-inspired game with complex dynamics but basic logical structure. The second is the Treasure Game, with simpler dynamics but more complex logical structure.
2 Background
As a motivating example, imagine deciding the route you are going to take to the grocery store; instead of planning over the various sequences of muscle contractions that you would use to complete the trip, you would consider a small number of high-level alternatives such as whether to take one route or another. You would also avoid considering how your exact low-level state affected your decision making, and instead use an abstract (symbolic) representation of your state with components such as whether you are at home or at work, whether you have to get gas, whether there is traffic, etc. This simplification reduces computational complexity, and allows for increased generalization over past experiences. In the following sections, we introduce the frameworks that we use to represent the agent’s high-level skills, and symbolic models for those skills.
2.1 Semi-Markov Decision Processes
We assume that the agent’s environment can be described by a semi-Markov decision process (SMDP), given by a tuple D = (S,O,R, P, γ), where S ⊆ Rd is a d-dimensional continuous state space, O(s) returns a set of temporally extended actions, or options [19] available in state s ∈ S, R(s′, t, s, o) and P (s′, t | s, o) are the reward received and probability of termination in state s′ ∈ S after t time steps following the execution of option o ∈ O(s) in state s ∈ S, and γ ∈ (0, 1] is a discount factor. In this paper, we are not concerned with the time taken to execute o, so we use P (s′ | s, o) = ∫ P (s′, t | s, o)dt.
An option o is given by three components: πo, the option policy that is executed when the option is invoked, Io, the initiation set consisting of the states where the option can be executed from, and βo(s)→ [0, 1], the termination condition, which returns the probability that the option will terminate upon reaching state s. Learning models for the initiation set, rewards, and transitions for each option, allows the agent to reason about the effect of its actions in the environment. To learn these option models, the agent has the ability to collect observations of the forms (s,O(s)) when entering a state s and (s, o, s′, r, t) upon executing option o from s.
2.2 Abstract Representations for Planning
We are specifically interested in learning option models which allow the agent to easily evaluate the success probability of plans. A plan is a sequence of options to be executed from some starting state, and it succeeds if and only if it is able to be run to completion (regardless of the reward). Thus, a plan {o1, o2, ..., on} with starting state s succeeds if and only if s ∈ Io1 and the termination state of each option (except for the last) lies in the initiation set of the following option, i.e. s′ ∼ P (s′ | s, o1) ∈ Io2 , s′′ ∼ P (s′′ | s′, o2) ∈ Io3 , and so on. Recent work [11, 12] has shown how to automatically generate a symbolic representation that supports such queries, and is therefore suitable for planning. This work is based on the idea of a probabilistic symbol, a compact representation of a distribution over infinitely many continuous, low-level states. For example, a probabilistic symbol could be used to classify whether or not the agent is currently in front of a door, or one could be used to represent the state that the agent would find itself in after executing its ‘open the door’ option. In both cases, using probabilistic symbols also allows the agent to be uncertain about its state.
The following two probabilistic symbols are provably sufficient for evaluating the success probability of any plan [12]; the probabilistic precondition: Pre(o) = P (s ∈ Io), which expresses the probability that an option o can be executed from each state s ∈ S, and the probabilistic image operator:
$$\mathrm{Im}(o, Z) = \frac{\int_S P(s' \mid s, o)\, Z(s)\, P(I_o \mid s)\, ds}{\int_S Z(s)\, P(I_o \mid s)\, ds},$$
which represents the distribution over termination states if an option o is executed from a distribution over starting states Z. These symbols can be used to compute the probability that each successive option in the plan can be executed, and these probabilities can then be multiplied to compute the overall success probability of the plan; see Figure 1 for a visual demonstration of a plan of length 2.
Subgoal Options Unfortunately, it is difficult to model Im(o, Z) for arbitrary options, so we focus on restricted types of options. A subgoal option [17] is a special class of option where the distribution over termination states (referred to as the subgoal) is independent of the distribution over starting
states that it was executed from, e.g. if you make the decision to walk to your kitchen, the end result will be the same regardless of where you started from.
For subgoal options, the image operator can be replaced with the effects distribution: Eff(o) = Im(o, Z),∀Z(S), the resulting distribution over states after executing o from any start distribution Z(S). Planning with a set of subgoal options is simple because for each ordered pair of options oi and oj , it is possible to compute and store G(oi, oj), the probability that oj can be executed immediately after executing oi: G(oi, oj) = ∫ S Pre(oj , s)Eff(oi)(s)ds.
We use the following two generalizations of subgoal options: abstract subgoal options model the more general case where executing an option leads to a subgoal for a subset of the state variables (called the mask), leaving the rest unchanged. For example, walking to the kitchen leaves the amount of gas in your car unchanged. More formally, the state vector can be partitioned into two parts s = [a, b], such that executing o leaves the agent in state s′ = [a, b′], where P (b′) is independent of the distribution over starting states. The second generalization is the (abstract) partitioned subgoal option, which can be partitioned into a finite number of (abstract) subgoal options. For instance, an option for opening doors is not a subgoal option because there are many doors in the world, however it can be partitioned into a set of subgoal options, with one for every door.
The subgoal (and abstract subgoal) assumptions propose that the exact state from which option execution starts does not really affect the options that can be executed next. This is somewhat restrictive and often does not hold for options as given, but can hold for options once they have been partitioned. Additionally, the assumptions need only hold approximately in practice.
3 Online Active Symbol Acquisition
Previous approaches for learning symbolic models from data [11, 12] used random exploration. However, real world data from high-level skills is very expensive to collect, so it is important to use a more data-efficient approach. In this section, we introduce a new method for learning abstract models data-efficiently. Our approach maintains a distribution over symbolic models which is updated after every new observation. This distribution is used to choose the sequence of options that in expectation maximally reduces the amount of uncertainty in the posterior distribution over models. Our approach has two components: an active exploration algorithm which takes as input a distribution over symbolic models and returns the next option to execute, and an algorithm for quickly building a distribution over symbolic models from data. The second component is an improvement upon previous approaches in that it returns a distribution and is fast enough to be updated online, both of which we require.
3.1 Fast Construction of a Distribution over Symbolic Option Models
Now we show how to construct a more general model than G that can be used for planning with abstract partitioned subgoal options. The advantages of our approach versus previous methods are that our algorithm is much faster, and the resulting model is Bayesian, both of which are necessary for the active exploration algorithm introduced in the next section.
Recall that the agent can collect observations of the forms (s, o, s′) upon executing option o from s, and (s,O(s)) when entering a state s, where O(s) is the set of available options in state s. Given a sequence of observations of this form, the first step of our approach is to find the factors [12],
partitions of state variables that always change together in the observed data. For example, consider a robot which has options for moving to the nearest table and picking up a glass on an adjacent table. Moving to a table changes the x and y coordinates of the robot without changing the joint angles of the robot’s arms, while picking up a glass does the opposite. Thus, the x and y coordinates and the arm joint angles of the robot belong to different factors. Splitting the state space into factors reduces the number of potential masks (see end of Section 2.2) because we assume that if state variables i and j always change together in the observations, then this will always occur, e.g. we assume that moving to the table will never move the robot’s arms.1
Finding the Factors Compute the set of observed masks M from the (s, o, s′) observations: each observation’s mask is the subset of state variables that differ substantially between s and s′. Since we work in continuous, stochastic domains, we must detect the difference between minor random noise (independent of the action) and a substantial change in a state variable caused by action execution. In principle this requires modeling action-independent and action-dependent differences, and distinguishing between them, but this is difficult to implement. Fortunately we have found that in practice allowing some noise and having a simple threshold is often effective, even in more noisy and complex domains. For each state variable i, let Mi ⊆M be the subset of the observed masks that contain i. Two state variables i and j belong to the same factor f ∈ F if and only if Mi = Mj . Each factor f is given by a set of state variables and thus corresponds to a subspace Sf . The factors are updated after every new observation.
Let S∗ be the set of states that the agent has observed and let S∗f be the projection of S ∗ onto the subspace Sf for some factor f , e.g. in the previous example there is a S∗f which consists of the set of observed robot (x, y) coordinates. It is important to note that the agent’s observations come only from executing partitioned abstract subgoal options. This means that S∗f consists only of abstract subgoals, because for each s ∈ S∗, sf was either unchanged from the previous state, or changed to another abstract subgoal. In the robot example, all (x, y) observations must be adjacent to a table because the robot can only execute an option that terminates with it adjacent to a table or one that does not change its (x, y) coordinates. Thus, the states in S∗ can be imagined as a collection of abstract subgoals for each of the factors. Our next step is to build a set of symbols for each factor to represent its abstract subgoals, which we do using unsupervised clustering.
Finding the Symbols For each factor f ∈ F , we find the set of symbols Zf by clustering S∗f . Let Zf (sf ) be the corresponding symbol for state s and factor f . We then map the observed states s ∈ S∗ to their corresponding symbolic states sd = {Zf (sf ),∀f ∈ F}, and the observations (s,O(s)) and (s, o, s′) to (sd, O(s)) and (sd, o, s′d), respectively.
In the robot example, the (x, y) observations would be clustered around tables that the robot could travel to, so there would be a symbol corresponding to each table.
We want to build our models within the symbolic state space Sd. Thus we define the symbolic precondition, Pre(o, sd), which returns the probability that the agent can execute an option from some symbolic state, and the symbolic effects distribution for a subgoal option o, Eff (o), maps to a subgoal distribution over symbolic states. For example, the robot’s ‘move to the nearest table’ option maps the robot’s current (x, y) symbol to the one which corresponds to the nearest table.
The next step is to partition the options into abstract subgoal options (in the symbolic state space), e.g. we want to partition the ‘move to the nearest table’ option in the symbolic state space so that the symbolic states in each partition have the same nearest table.
Partitioning the Options For each option o, we initialize the partitioning P o so that each symbolic state starts in its own partition. We use independent Bayesian sparse Dirichlet-categorical models [18] for the symbolic effects distribution of each option partition.2 We then perform Bayesian Hierarchical Clustering [8] to merge partitions which have similar symbolic effects distributions.3
1The factors assumption is not strictly necessary as we can assign each state variable to its own factor. However, using this uncompressed representation can lead to an exponential increase in the size of the symbolic state space and a corresponding increase in the sample complexity of learning the symbolic models.
2We use sparse Dirichlet-categorical models because there are a combinatorial number of possible symbolic state transitions, but we expect that each partition has non-zero probability for only a small number of them.
3We use the closed form solutions for Dirichlet-multinomial models provided by the paper.
Algorithm 1 Fast Construction of a Distribution over Symbolic Option Models
1: Find the set of observed masks M.
2: Find the factors F.
3: ∀f ∈ F, find the set of symbols Z_f.
4: Map the observed states s ∈ S∗ to symbolic states s^d ∈ S∗^d.
5: Map the observations (s, O(s)) and (s, o, s′) to (s^d, O(s)) and (s^d, o, s′^d).
6: ∀o ∈ O, initialize P^o and perform Bayesian Hierarchical Clustering on it.
7: ∀o ∈ O, find A^o and F^o_∗.
There is a special case where the agent has observed that an option o was available in some symbolic states S^d_a, but has yet to actually execute it from any s^d ∈ S^d_a. These are not included in the Bayesian Hierarchical Clustering; instead we have a special prior for the partition of o that they belong to. After completing the merge step, the agent has a partitioning P^o for each option o. Our prior is that with probability q^o,4 each s^d ∈ S^d_a belongs to the partition p^o ∈ P^o which contains the symbolic states most similar to s^d, and with probability 1 − q^o each s^d belongs to its own partition. To determine the partition which is most similar to some symbolic state, we first find A^o, the smallest subset of factors which can still be used to correctly classify P^o. We then map each s^d ∈ S^d_a to the most similar partition by trying to match s^d masked by A^o with a masked symbolic state already in one of the partitions. If there is no match, s^d is placed in its own partition.
Our final consideration is how to model the symbolic preconditions. The main concern is that many factors are often irrelevant for determining if some option can be executed. For example, whether or not you have keys in your pocket does not affect whether you can put on your shoe.
Modeling the Symbolic Preconditions Given an option o and a subset of factors F o, let SdF o be the symbolic state space projected onto F o. We use independent Bayesian Beta-Bernoulli models for the symbolic precondition of o in each masked symbolic state sdF o ∈ SdF o . For each option o, we use Bayesian model selection to find the subset of factors F o∗ which maximizes the likelihood of the symbolic precondition models.
The final result is a distribution over symbolic option models H, which consists of the combined sets of independent symbolic precondition models {Pre(o, sdF o∗ ); ∀o ∈ O, ∀sdF o∗ ∈ SdF o∗ } and independent symbolic effects distribution models {Eff (o, po); ∀o ∈ O, ∀po ∈ P o}. The complete procedure is given in Algorithm 1. A symbolic option model h ∼ H can be sampled by drawing parameters for each of the Bernoulli and categorical distributions from the corresponding Beta and sparse Dirichlet distributions, and drawing outcomes for each qo. It is also possible to consider distributions over other parts of the model such as the symbolic state space and/or a more complicated one for the option partitionings, which we leave for future work.
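As a concrete illustration of the sampling step just described, here is a minimal sketch assuming the posterior is stored as Beta parameters per precondition entry and sparse Dirichlet concentration vectors per effects entry; the container layout and names are assumptions, not part of the original method description.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_symbolic_model(precond_params, effect_params, q_o):
    """Draw one symbolic option model h ~ H.

    precond_params: {(option, masked_symbolic_state): (alpha, beta)} Beta posteriors
    effect_params:  {(option, partition): concentration_vector} sparse Dirichlet posteriors
    q_o:            {option: probability used by the special-case partition prior}
    """
    h = {"pre": {}, "eff": {}, "merge": {}}
    for key, (a, b) in precond_params.items():
        h["pre"][key] = rng.beta(a, b)          # Bernoulli success probability
    for key, alpha in effect_params.items():
        h["eff"][key] = rng.dirichlet(np.asarray(alpha, dtype=float))
    for o, q in q_o.items():
        h["merge"][o] = bool(rng.random() < q)  # outcome drawn for each q_o
    return h
```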
3.2 Optimal Exploration
In the previous section we have shown how to efficiently compute a distribution over symbolic option models H . Recall that the ultimate purpose of H is to compute the success probabilities of plans (see Section 2.2). Thus, the quality of H is determined by the accuracy of its predicted plan success probabilities, and efficiently learning H corresponds to selecting the sequence of observations which maximizes the expected accuracy of H . However, it is difficult to calculate the expected accuracy of H over all possible plans, so we define a proxy measure to optimize which is intended to represent the amount of uncertainty in H . In this section, we introduce our proxy measure, followed by an algorithm for finding the exploration policy which optimizes it. The algorithm operates in an online manner, building H from the data collected so far, using H to select an option to execute, updating H with the new observation, and so on.
First we define the standard deviation σH , the quantity we use to represent the amount of uncertainty in H . To define the standard deviation, we need to also define the distance and mean.
4This is a user specified parameter.
We define the distance K from h2 ∈ H to h1 ∈ H , to be the sum of the Kullback-Leibler (KL) divergences5 between their individual symbolic effect distributions plus the sum of the KL divergences between their individual symbolic precondition distributions:6
K(h1, h2) = ∑_{o ∈ O} [ ∑_{s^d_{F^o_*} ∈ S^d_{F^o_*}} D_KL( Pre^{h1}(o, s^d_{F^o_*}) || Pre^{h2}(o, s^d_{F^o_*}) ) + ∑_{p^o ∈ P^o} D_KL( Eff^{h1}(o, p^o) || Eff^{h2}(o, p^o) ) ].
We define the mean, E[H], to be the symbolic option model such that each Bernoulli symbolic precondition and categorical symbolic effects distribution is equal to the mean of the corresponding Beta or sparse Dirichlet distribution:
∀o ∈ O, ∀p^o ∈ P^o :  Eff^{E[H]}(o, p^o) = E_{h∼H}[ Eff^{h}(o, p^o) ],
∀o ∈ O, ∀s^d_{F^o_*} ∈ S^d_{F^o_*} :  Pre^{E[H]}(o, s^d_{F^o_*}) = E_{h∼H}[ Pre^{h}(o, s^d_{F^o_*}) ].
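A small sketch of these two quantities, assuming each sampled model follows the dictionary layout used in the sampling sketch above; the helper names are illustrative.

```python
import numpy as np

def kl_categorical(p, q, eps=1e-12):
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def kl_bernoulli(p, q, eps=1e-12):
    return kl_categorical([p, 1.0 - p], [q, 1.0 - q], eps)

def distance(h1, h2):
    """K(h1, h2): summed KL divergences over all precondition and effects entries."""
    d = sum(kl_bernoulli(h1["pre"][k], h2["pre"][k]) for k in h1["pre"])
    d += sum(kl_categorical(h1["eff"][k], h2["eff"][k]) for k in h1["eff"])
    return d

def mean_model(precond_params, effect_params):
    """E[H]: plug in the Beta mean a/(a+b) and the Dirichlet mean alpha/sum(alpha)."""
    return {
        "pre": {k: a / (a + b) for k, (a, b) in precond_params.items()},
        "eff": {k: np.asarray(alpha, dtype=float) / np.sum(alpha)
                for k, alpha in effect_params.items()},
    }
```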
The standard deviation σH is then simply: σH = E_{h∼H}[K(h, E[H])]. This represents the expected amount of information which is lost if E[H] is used to approximate H. Now we define the optimal exploration policy for the agent, which aims to maximize the expected reduction in σH after H is updated with new observations. Let H(w) be the posterior distribution over symbolic models when H is updated with symbolic observations w (the partitioning is not updated, only the symbolic effects distribution and symbolic precondition models), and let W(H, i, π) be the distribution over symbolic observations drawn from the posterior of H if the agent follows policy π for i steps. We define the optimal exploration policy π∗ as:
π∗ = argmax_π ( σH − E_{w∼W(H,i,π)}[ σ_{H(w)} ] ).
For the convenience of our algorithm, we rewrite the second term by switching the order of the expectations: E_{w∼W(H,i,π)}[ E_{h∼H(w)}[ K(h, E[H(w)]) ] ] = E_{h∼H}[ E_{w∼W(h,i,π)}[ K(h, E[H(w)]) ] ]. Note that the objective function is non-Markovian because H is continuously updated with the agent’s new observations, which changes σH. This means that π∗ is non-stationary, so Algorithm 2 approximates π∗ in an online manner using Monte-Carlo tree search (MCTS) [3] with the UCT tree policy [10]. πT is the combined tree and rollout policy for MCTS, given tree T.
There is a special case when the agent simulates the observation of a previously unobserved transition, which can occur under the sparse Dirichlet-categorical model. In this case, the amount of information gained is very large, and furthermore, the agent is likely to transition to a novel symbolic state. Rather than modeling the unexplored state space, instead, if an unobserved transition is encountered during an MCTS update, it immediately terminates with a large bonus to the score, a similar approach to that of the R-max algorithm [2]. The form of the bonus is -zg, where g is the depth that the update terminated and z is a constant. The bonus reflects the opportunity cost of not experiencing something novel as quickly as possible, and in practice it tends to dominate (as it should).
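A minimal sketch of the per-update score used inside this search (step 9 of Algorithm 2, shown further below), with the distance K passed in as a function; the names and argument layout are assumptions.

```python
def mcts_update_score(h, mean_H, mean_H_w, depth, z, distance_fn):
    """Score of one simulated MCTS update: K(h, E[H]) - K(h, E[H(w)]) - z * g,
    where g is the depth at which the update terminated."""
    return distance_fn(h, mean_H) - distance_fn(h, mean_H_w) - z * depth
```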
4 The Asteroids Domain
The Asteroids domain is shown in Figure 2a and was implemented using physics simulator pybox2d. The agent controls a ship by either applying a thrust in the direction it is facing or applying a torque in either direction. The goal of the agent is to be able to navigate the environment without colliding with any of the four “asteroids.” The agent’s starting location is next to asteroid 1. The agent is given the following 6 options (see Appendix A for additional details):
1. move-counterclockwise and move-clockwise: the ship moves from the current face it is adjacent to, to the midpoint of the face which is counterclockwise/clockwise on the same asteroid from the current face. Only available if the ship is at an asteroid.
5The KL divergence has previously been used in other active exploration scenarios [16, 14].
6Similarly to other active exploration papers, we define the distance to depend only on the transition models and not the reward models.
Algorithm 2 Optimal Exploration
Input: Number of remaining option executions i.
1: while i ≥ 0 do
2:   Build H from observations (Algorithm 1).
3:   Initialize tree T for MCTS.
4:   while number updates < threshold do
5:     Sample a symbolic model h ∼ H .
6:     Do an MCTS update of T with dynamics given by h.
7:     Terminate current update if depth g is ≥ i, or unobserved transition is encountered.
8:     Store simulated observations w ∼ W (h, g, πT ).
9:     Score = K(h,E[H])−K(h,E[H(w)])− zg.
10:   end while
11:   return most visited child of root node.
12:   Execute corresponding option; Update observations; i--.
13: end while
2. move-to-asteroid-1, move-to-asteroid-2, move-to-asteroid-3, and move-to-asteroid-4: the ship moves to the midpoint of the closest face of asteroid 1-4 to which it has an unobstructed path. Only available if the ship is not already at the asteroid and an unobstructed path to some face exists.
Exploring with these options results in only one factor (for the entire state space), with symbols corresponding to each of the 35 asteroid faces as shown in Figure 2a.
Results We tested the performance of three exploration algorithms: random, greedy, and our algorithm. For the greedy algorithm, the agent first computes the symbolic state space using steps 1-5 of Algorithm 1, and then chooses the option with the lowest execution count from its current symbolic state. The hyperparameter settings that we use for our algorithm are given in Appendix A.
Figures 3a, 3b, and 3c show the percentage of time that the agent spends on exploring asteroids 1, 3, and 4, respectively. The random and greedy policies have difficulty escaping asteroid 1, and are rarely able to reach asteroid 4. On the other hand, our algorithm allocates its time much more proportionally. Figure 3d shows the number of symbolic transitions that the agent has not observed (out of 115 possible).7 As we discussed in Section 3, the number of unobserved symbolic transitions is a good representation of the amount of information that the models are missing from the environment.
Our algorithm significantly outperforms random and greedy exploration. Note that these results are using an uninformative prior and the performance of our algorithm could be significantly improved by
7We used Algorithm 1 to build symbolic models from the data gathered by each exploration algorithm.
starting with more information about the environment. To try to give additional intuition, in Appendix A we show heatmaps of the (x, y) coordinates visited by each of the exploration algorithms.
5 The Treasure Game Domain
The Treasure Game [12], shown in Figure 2b, features an agent in a 2D, 528× 528 pixel video-game like world, whose goal is to obtain treasure and return to its starting position on a ladder at the top of the screen. The 9-dimensional state space is given by the x and y positions of the agent, key, and treasure, the angles of the two handles, and the state of the lock.
The agent is given 9 options: go-left, go-right, up-ladder, down-ladder, jump-left, jump-right, downright, down-left, and interact. See Appendix A for a more detailed description of the options and the environment dynamics. Given these options, the 7 factors with their corresponding number of symbols are: player-x, 10; player-y, 9; handle1-angle, 2; handle2-angle, 2; key-x and key-y, 3; bolt-locked, 2; and goldcoin-x and goldcoin-y, 2.
Results We tested the performance of the same three algorithms: random, greedy, and our algorithm. Figure 4a shows the fraction of time that the agent spends without having the key and with the lock still locked. Figures 4b and 4c show the number of times that the agent obtains the key and treasure, respectively. Figure 4d shows the number of unobserved symbolic transitions (out of 240 possible). Again, our algorithm performs significantly better than random and greedy exploration. The data
from our algorithm has much better coverage, and thus leads to more accurate symbolic models. For instance in Figure 4c you can see that random and greedy exploration did not obtain the treasure after 200 executions; without that data the agent would not know that it should have a symbol that corresponds to possessing the treasure.
6 Conclusion
We have introduced a two-part algorithm for data-efficiently learning an abstract symbolic representation of an environment which is suitable for planning with high-level skills. The first part of the algorithm quickly generates an intermediate Bayesian symbolic model directly from data. The second part guides the agent’s exploration towards areas of the environment that the model is uncertain about. This algorithm is useful when the cost of data collection is high, as is the case in most real world artificial intelligence applications. Our results show that the algorithm is significantly more data efficient than using more naive exploration policies.
7 Acknowledgements
This research was supported in part by the National Institutes of Health under award number R01MH109177. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. | 1. What is the focus and contribution of the paper regarding unsupervised hierarchical reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its current form?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. What are the limitations and potential risks associated with the method for finding factors, as well as the assumption made regarding the coverage of the initial data?
5. How do the proposed options in Section 4 align with or contradict the desired properties discussed in Section 2.2?
6. Can you provide additional explanations or justifications for specific aspects of the paper, such as the definition of the "greedy" baseline or the difference in performance between the greedy and MCTS versions in Figure 4? | Review | Review
This is a very interesting paper, with multiple complementary ideas. It advocates model-based active exploration (model learning + seeking regions of uncertainty). Instead of doing this in raw state space, it proposes a method for abstracting states to symbols based on factoring and clustering the state space. The exploration is then done by MCTS-planning in a (sampled) symbolic model. The task setup evaluates pure exploration (ignoring all rewards) on two different domains.
This approach to unsupervised hierarchical reinforcement learning is novel and ambitious, the paper is clear and well-written. The proposed method may be somewhat brittle in its current form, and it is unclear to what problem complexity it can scale, but that can be resolved by future work. My main recommendation is to add a thorough discussion of its weaknesses and limitations.
Other comments:
* Section 2.1: maybe it's not necessary to introduce discounts and rewards at all, given that neither is used in the paper?
* Section 3.1: the method for finding the factors seems very brittle, and to rely on disentangled feature representations that are not noisy. Please discuss these limitations, and maybe hint at how factors could be found if the observations were a noisy sensory stream like vision.
* Line 192: freezing the partitioning in the first iteration seems like a risky choice that makes strong assumptions about the coverage of the initial data. At least discuss the limitations of this.
* Section 4: there is a mismatch between these options and the desired properties discussed in section 2.2: in particular, the proposed options are not 'subgoal options' because their distribution over termination states strongly depends on the start states? Same for the Treasure Game.
* Line 218: explicitly define what the 'greedy' baseline is.
* Figure 4: Comparing the greedy results between (b) and (c), it appears that whenever a key is obtained, the treasure is almost always found too, contrasting with the MCTS version that explores a lot of key-but-no-treasure states. Can you explain this? |
NIPS | Title
Active Exploration for Learning Symbolic Representations
Abstract
We introduce an online active exploration algorithm for data-efficiently learning an abstract symbolic model of an environment. Our algorithm is divided into two parts: the first part quickly generates an intermediate Bayesian symbolic model from the data that the agent has collected so far, which the agent can then use along with the second part to guide its future exploration towards regions of the state space that the model is uncertain about. We show that our algorithm outperforms random and greedy exploration policies on two different computer game domains. The first domain is an Asteroids-inspired game with complex dynamics but basic logical structure. The second is the Treasure Game, with simpler dynamics but more complex logical structure.
1 Introduction
Much work has been done in artificial intelligence and robotics on how high-level state abstractions can be used to significantly improve planning [19]. However, building these abstractions is difficult, and consequently, they are typically hand-crafted [15, 13, 7, 4, 5, 6, 20, 9].
A major open question is then the problem of abstraction: how can an intelligent agent learn highlevel models that can be used to improve decision making, using only noisy observations from its high-dimensional sensor and actuation spaces? Recent work [11, 12] has shown how to automatically generate symbolic representations suitable for planning in high-dimensional, continuous domains. This work is based on the hierarchical reinforcement learning framework [1], where the agent has access to high-level skills that abstract away the low-level details of control. The agent then learns representations for the (potentially abstract) effect of using these skills. For instance, opening a door is a high-level skill, while knowing that opening a door typically allows one to enter a building would be part of the representation for this skill. The key result of that work was that the symbols required to determine the probability of a plan succeeding are directly determined by characteristics of the skills available to an agent. The agent can learn these symbols autonomously by exploring the environment, which removes the need to hand-design symbolic representations of the world.
It is therefore possible to learn the symbols by naively collecting samples from the environment, for example by random exploration. However, in an online setting the agent should be able to use its previously collected data to compute an exploration policy which leads to better data efficiency. We introduce such an algorithm, which is divided into two parts: the first part quickly generates an intermediate Bayesian symbolic model from the data that the agent has collected so far, while the second part uses the model plus Monte-Carlo tree search to guide the agent’s future exploration towards regions of the state space that the model is uncertain about. We show that our algorithm is significantly more data-efficient than more naive methods in two different computer game domains. The first domain is an Asteroids-inspired game with complex dynamics but basic logical structure. The second is the Treasure Game, with simpler dynamics but more complex logical structure.
2 Background
As a motivating example, imagine deciding the route you are going to take to the grocery store; instead of planning over the various sequences of muscle contractions that you would use to complete the trip, you would consider a small number of high-level alternatives such as whether to take one route or another. You also would avoid considering how your exact low-level state affected your decision making, and instead use an abstract (symbolic) representation of your state with components such as whether you are at home or at work, whether you have to get gas, whether there is traffic, etc. This simplification reduces computational complexity, and allows for increased generalization over past experiences. In the following sections, we introduce the frameworks that we use to represent the agent’s high-level skills, and symbolic models for those skills.
2.1 Semi-Markov Decision Processes
We assume that the agent’s environment can be described by a semi-Markov decision process (SMDP), given by a tuple D = (S,O,R, P, γ), where S ⊆ Rd is a d-dimensional continuous state space, O(s) returns a set of temporally extended actions, or options [19] available in state s ∈ S, R(s′, t, s, o) and P (s′, t | s, o) are the reward received and probability of termination in state s′ ∈ S after t time steps following the execution of option o ∈ O(s) in state s ∈ S, and γ ∈ (0, 1] is a discount factor. In this paper, we are not concerned with the time taken to execute o, so we use P (s′ | s, o) = ∫ P (s′, t | s, o)dt.
An option o is given by three components: πo, the option policy that is executed when the option is invoked, Io, the initiation set consisting of the states where the option can be executed from, and βo(s)→ [0, 1], the termination condition, which returns the probability that the option will terminate upon reaching state s. Learning models for the initiation set, rewards, and transitions for each option, allows the agent to reason about the effect of its actions in the environment. To learn these option models, the agent has the ability to collect observations of the forms (s,O(s)) when entering a state s and (s, o, s′, r, t) upon executing option o from s.
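To make this interface concrete, here is a minimal sketch of the option components and the two kinds of observations; reward and duration are omitted since only (s, o, s′) is used later, and all type names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple
import numpy as np

@dataclass
class Option:
    """One temporally extended action o = (pi_o, I_o, beta_o)."""
    policy: Callable[[np.ndarray], int]         # pi_o: low-level action for a state
    initiation: Callable[[np.ndarray], bool]    # I_o: can o be executed from s?
    termination: Callable[[np.ndarray], float]  # beta_o(s): probability of stopping at s

@dataclass
class ExplorationData:
    """The two kinds of observations the agent can collect."""
    availability: List[Tuple[np.ndarray, List[int]]] = field(default_factory=list)      # (s, O(s))
    transitions: List[Tuple[np.ndarray, int, np.ndarray]] = field(default_factory=list)  # (s, o, s')
```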
2.2 Abstract Representations for Planning
We are specifically interested in learning option models which allow the agent to easily evaluate the success probability of plans. A plan is a sequence of options to be executed from some starting state, and it succeeds if and only if it is able to be run to completion (regardless of the reward). Thus, a plan {o1, o2, ..., on} with starting state s succeeds if and only if s ∈ Io1 and the termination state of each option (except for the last) lies in the initiation set of the following option, i.e. s′ ∼ P (s′ | s, o1) ∈ Io2 , s′′ ∼ P (s′′ | s′, o2) ∈ Io3 , and so on. Recent work [11, 12] has shown how to automatically generate a symbolic representation that supports such queries, and is therefore suitable for planning. This work is based on the idea of a probabilistic symbol, a compact representation of a distribution over infinitely many continuous, low-level states. For example, a probabilistic symbol could be used to classify whether or not the agent is currently in front of a door, or one could be used to represent the state that the agent would find itself in after executing its ‘open the door’ option. In both cases, using probabilistic symbols also allows the agent to be uncertain about its state.
The following two probabilistic symbols are provably sufficient for evaluating the success probability of any plan [12]; the probabilistic precondition: Pre(o) = P (s ∈ Io), which expresses the probability that an option o can be executed from each state s ∈ S, and the probabilistic image operator:
Im(o, Z) = ( ∫_S P(s′ | s, o) Z(s) P(Io | s) ds ) / ( ∫_S Z(s) P(Io | s) ds ),
which represents the distribution over termination states if an option o is executed from a distribution over starting states Z. These symbols can be used to compute the probability that each successive option in the plan can be executed, and these probabilities can then be multiplied to compute the overall success probability of the plan; see Figure 1 for a visual demonstration of a plan of length 2.
Subgoal Options Unfortunately, it is difficult to model Im(o, Z) for arbitrary options, so we focus on restricted types of options. A subgoal option [17] is a special class of option where the distribution over termination states (referred to as the subgoal) is independent of the distribution over starting
states that it was executed from, e.g. if you make the decision to walk to your kitchen, the end result will be the same regardless of where you started from.
For subgoal options, the image operator can be replaced with the effects distribution: Eff(o) = Im(o, Z),∀Z(S), the resulting distribution over states after executing o from any start distribution Z(S). Planning with a set of subgoal options is simple because for each ordered pair of options oi and oj , it is possible to compute and store G(oi, oj), the probability that oj can be executed immediately after executing oi: G(oi, oj) = ∫ S Pre(oj , s)Eff(oi)(s)ds.
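A small sketch of this plan evaluation for subgoal options, assuming Pre and the pairwise matrix G have already been estimated; the variable names are illustrative.

```python
def plan_success_probability(plan, start_symbol, pre, G):
    """Success probability of a plan (o_1, ..., o_n) of subgoal options.

    pre[o][s] is Pre(o, s) evaluated at the symbolic start state, and G[oi][oj] is
    the probability that oj is executable immediately after oi.
    """
    p = pre[plan[0]][start_symbol]
    for o_prev, o_next in zip(plan, plan[1:]):
        p *= G[o_prev][o_next]
    return p
```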
We use the following two generalizations of subgoal options: abstract subgoal options model the more general case where executing an option leads to a subgoal for a subset of the state variables (called the mask), leaving the rest unchanged. For example, walking to the kitchen leaves the amount of gas in your car unchanged. More formally, the state vector can be partitioned into two parts s = [a, b], such that executing o leaves the agent in state s′ = [a, b′], where P (b′) is independent of the distribution over starting states. The second generalization is the (abstract) partitioned subgoal option, which can be partitioned into a finite number of (abstract) subgoal options. For instance, an option for opening doors is not a subgoal option because there are many doors in the world, however it can be partitioned into a set of subgoal options, with one for every door.
The subgoal (and abstract subgoal) assumptions propose that the exact state from which option execution starts does not really affect the options that can be executed next. This is somewhat restrictive and often does not hold for options as given, but can hold for options once they have been partitioned. Additionally, the assumptions need only hold approximately in practice.
3 Online Active Symbol Acquisition
Previous approaches for learning symbolic models from data [11, 12] used random exploration. However, real world data from high-level skills is very expensive to collect, so it is important to use a more data-efficient approach. In this section, we introduce a new method for learning abstract models data-efficiently. Our approach maintains a distribution over symbolic models which is updated after every new observation. This distribution is used to choose the sequence of options that in expectation maximally reduces the amount of uncertainty in the posterior distribution over models. Our approach has two components: an active exploration algorithm which takes as input a distribution over symbolic models and returns the next option to execute, and an algorithm for quickly building a distribution over symbolic models from data. The second component is an improvement upon previous approaches in that it returns a distribution and is fast enough to be updated online, both of which we require.
3.1 Fast Construction of a Distribution over Symbolic Option Models
Now we show how to construct a more general model than G that can be used for planning with abstract partitioned subgoal options. The advantages of our approach versus previous methods are that our algorithm is much faster, and the resulting model is Bayesian, both of which are necessary for the active exploration algorithm introduced in the next section.
Recall that the agent can collect observations of the forms (s, o, s′) upon executing option o from s, and (s,O(s)) when entering a state s, where O(s) is the set of available options in state s. Given a sequence of observations of this form, the first step of our approach is to find the factors [12],
partitions of state variables that always change together in the observed data. For example, consider a robot which has options for moving to the nearest table and picking up a glass on an adjacent table. Moving to a table changes the x and y coordinates of the robot without changing the joint angles of the robot’s arms, while picking up a glass does the opposite. Thus, the x and y coordinates and the arm joint angles of the robot belong to different factors. Splitting the state space into factors reduces the number of potential masks (see end of Section 2.2) because we assume that if state variables i and j always change together in the observations, then this will always occur, e.g. we assume that moving to the table will never move the robot’s arms.1
Finding the Factors Compute the set of observed masks M from the (s, o, s′) observations: each observation’s mask is the subset of state variables that differ substantially between s and s′. Since we work in continuous, stochastic domains, we must detect the difference between minor random noise (independent of the action) and a substantial change in a state variable caused by action execution. In principle this requires modeling action-independent and action-dependent differences, and distinguishing between them, but this is difficult to implement. Fortunately we have found that in practice allowing some noise and having a simple threshold is often effective, even in more noisy and complex domains. For each state variable i, let Mi ⊆M be the subset of the observed masks that contain i. Two state variables i and j belong to the same factor f ∈ F if and only if Mi = Mj . Each factor f is given by a set of state variables and thus corresponds to a subspace Sf . The factors are updated after every new observation.
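As a rough illustration, the sketch below groups state variables into factors from observed transitions using the simple change-threshold heuristic described above; the threshold value and the names are assumptions.

```python
import numpy as np

def find_factors(transitions, threshold=1e-3):
    """Group state variables that always change together into factors.

    transitions: list of (s, o, s_next) observations with s, s_next as 1-D arrays.
    A variable is in an observation's mask if it changes by more than `threshold`
    (the simple noise threshold described in the text).
    """
    dim = len(transitions[0][0])
    masks = {tuple(np.abs(s2 - s1) > threshold) for s1, _, s2 in transitions}
    membership = [frozenset(m for m in masks if m[i]) for i in range(dim)]  # M_i
    factors = {}
    for i, key in enumerate(membership):
        factors.setdefault(key, []).append(i)   # i and j share a factor iff M_i == M_j
    return list(factors.values())
```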
Let S∗ be the set of states that the agent has observed and let S∗f be the projection of S ∗ onto the subspace Sf for some factor f , e.g. in the previous example there is a S∗f which consists of the set of observed robot (x, y) coordinates. It is important to note that the agent’s observations come only from executing partitioned abstract subgoal options. This means that S∗f consists only of abstract subgoals, because for each s ∈ S∗, sf was either unchanged from the previous state, or changed to another abstract subgoal. In the robot example, all (x, y) observations must be adjacent to a table because the robot can only execute an option that terminates with it adjacent to a table or one that does not change its (x, y) coordinates. Thus, the states in S∗ can be imagined as a collection of abstract subgoals for each of the factors. Our next step is to build a set of symbols for each factor to represent its abstract subgoals, which we do using unsupervised clustering.
Finding the Symbols For each factor f ∈ F , we find the set of symbols Zf by clustering S∗f . Let Zf (sf ) be the corresponding symbol for state s and factor f . We then map the observed states s ∈ S∗ to their corresponding symbolic states sd = {Zf (sf ),∀f ∈ F}, and the observations (s,O(s)) and (s, o, s′) to (sd, O(s)) and (sd, o, s′d), respectively.
In the robot example, the (x, y) observations would be clustered around tables that the robot could travel to, so there would be a symbol corresponding to each table.
We want to build our models within the symbolic state space Sd. Thus we define the symbolic precondition, Pre(o, sd), which returns the probability that the agent can execute an option from some symbolic state, and the symbolic effects distribution for a subgoal option o, Eff (o), maps to a subgoal distribution over symbolic states. For example, the robot’s ‘move to the nearest table’ option maps the robot’s current (x, y) symbol to the one which corresponds to the nearest table.
The next step is to partition the options into abstract subgoal options (in the symbolic state space), e.g. we want to partition the ‘move to the nearest table’ option in the symbolic state space so that the symbolic states in each partition have the same nearest table.
Partitioning the Options For each option o, we initialize the partitioning P o so that each symbolic state starts in its own partition. We use independent Bayesian sparse Dirichlet-categorical models [18] for the symbolic effects distribution of each option partition.2 We then perform Bayesian Hierarchical Clustering [8] to merge partitions which have similar symbolic effects distributions.3
1The factors assumption is not strictly necessary as we can assign each state variable to its own factor. However, using this uncompressed representation can lead to an exponential increase in the size of the symbolic state space and a corresponding increase in the sample complexity of learning the symbolic models.
2We use sparse Dirichlet-categorical models because there are a combinatorial number of possible symbolic state transitions, but we expect that each partition has non-zero probability for only a small number of them.
3We use the closed form solutions for Dirichlet-multinomial models provided by the paper.
Algorithm 1 Fast Construction of a Distribution over Symbolic Option Models
1: Find the set of observed masks M .
2: Find the factors F .
3: ∀f ∈ F , find the set of symbols Zf .
4: Map the observed states s ∈ S∗ to symbolic states sd ∈ S∗d.
5: Map the observations (s,O(s)) and (s, o, s′) to (sd, O(s)) and (sd, o, s′d).
6: ∀o ∈ O, initialize P o and perform Bayesian Hierarchical Clustering on it.
7: ∀o ∈ O, find Ao and F o∗ .
There is a special case where the agent has observed that an option o was available in some symbolic states Sda , but has yet to actually execute it from any sd ∈ Sda . These are not included in the Bayesian Hierarchical Clustering; instead we have a special prior for the partition of o that they belong to. After completing the merge step, the agent has a partitioning P o for each option o. Our prior is that with probability qo,4 each sd ∈ Sda belongs to the partition po ∈ P o which contains the symbolic states most similar to sd, and with probability 1− qo each sd belongs to its own partition. To determine the partition which is most similar to some symbolic state, we first find Ao, the smallest subset of factors which can still be used to correctly classify P o. We then map each sd ∈ Sda to the most similar partition by trying to match sd masked by Ao with a masked symbolic state already in one of the partitions. If there is no match, sd is placed in its own partition.
Our final consideration is how to model the symbolic preconditions. The main concern is that many factors are often irrelevant for determining if some option can be executed. For example, whether or not you have keys in your pocket does not affect whether you can put on your shoe.
Modeling the Symbolic Preconditions Given an option o and a subset of factors F o, let SdF o be the symbolic state space projected onto F o. We use independent Bayesian Beta-Bernoulli models for the symbolic precondition of o in each masked symbolic state sdF o ∈ SdF o . For each option o, we use Bayesian model selection to find the subset of factors F o∗ which maximizes the likelihood of the symbolic precondition models.
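A rough sketch of how this model selection could be scored: the log marginal likelihood of a Beta-Bernoulli model is accumulated per candidate factor subset, and the subset with the highest total is kept. The uniform Beta(1, 1) prior and the function name are assumptions, not choices stated in the text.

```python
from scipy.special import betaln

def precondition_log_evidence(counts, a0=1.0, b0=1.0):
    """Log marginal likelihood of Beta-Bernoulli precondition models for one option.

    counts: {masked_symbolic_state: (times_available, times_unavailable)} for one
    candidate factor subset F^o. Summing the per-state evidence scores that subset.
    """
    total = 0.0
    for n_pos, n_neg in counts.values():
        total += betaln(a0 + n_pos, b0 + n_neg) - betaln(a0, b0)
    return total
```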
The final result is a distribution over symbolic option models H, which consists of the combined sets of independent symbolic precondition models {Pre(o, sdF o∗ ); ∀o ∈ O, ∀sdF o∗ ∈ SdF o∗ } and independent symbolic effects distribution models {Eff (o, po); ∀o ∈ O, ∀po ∈ P o}. The complete procedure is given in Algorithm 1. A symbolic option model h ∼ H can be sampled by drawing parameters for each of the Bernoulli and categorical distributions from the corresponding Beta and sparse Dirichlet distributions, and drawing outcomes for each qo. It is also possible to consider distributions over other parts of the model such as the symbolic state space and/or a more complicated one for the option partitionings, which we leave for future work.
3.2 Optimal Exploration
In the previous section we have shown how to efficiently compute a distribution over symbolic option models H . Recall that the ultimate purpose of H is to compute the success probabilities of plans (see Section 2.2). Thus, the quality of H is determined by the accuracy of its predicted plan success probabilities, and efficiently learning H corresponds to selecting the sequence of observations which maximizes the expected accuracy of H . However, it is difficult to calculate the expected accuracy of H over all possible plans, so we define a proxy measure to optimize which is intended to represent the amount of uncertainty in H . In this section, we introduce our proxy measure, followed by an algorithm for finding the exploration policy which optimizes it. The algorithm operates in an online manner, building H from the data collected so far, using H to select an option to execute, updating H with the new observation, and so on.
First we define the standard deviation σH , the quantity we use to represent the amount of uncertainty in H . To define the standard deviation, we need to also define the distance and mean.
4This is a user specified parameter.
We define the distance K from h2 ∈ H to h1 ∈ H , to be the sum of the Kullback-Leibler (KL) divergences5 between their individual symbolic effect distributions plus the sum of the KL divergences between their individual symbolic precondition distributions:6
K(h1, h2) = ∑_{o ∈ O} [ ∑_{s^d_{F^o_*} ∈ S^d_{F^o_*}} D_KL( Pre^{h1}(o, s^d_{F^o_*}) || Pre^{h2}(o, s^d_{F^o_*}) ) + ∑_{p^o ∈ P^o} D_KL( Eff^{h1}(o, p^o) || Eff^{h2}(o, p^o) ) ].
We define the mean, E[H], to be the symbolic option model such that each Bernoulli symbolic precondition and categorical symbolic effects distribution is equal to the mean of the corresponding Beta or sparse Dirichlet distribution:
∀o ∈ O, ∀p^o ∈ P^o :  Eff^{E[H]}(o, p^o) = E_{h∼H}[ Eff^{h}(o, p^o) ],
∀o ∈ O, ∀s^d_{F^o_*} ∈ S^d_{F^o_*} :  Pre^{E[H]}(o, s^d_{F^o_*}) = E_{h∼H}[ Pre^{h}(o, s^d_{F^o_*}) ].
The standard deviation σH is then simply: σH = E_{h∼H}[K(h, E[H])]. This represents the expected amount of information which is lost if E[H] is used to approximate H. Now we define the optimal exploration policy for the agent, which aims to maximize the expected reduction in σH after H is updated with new observations. Let H(w) be the posterior distribution over symbolic models when H is updated with symbolic observations w (the partitioning is not updated, only the symbolic effects distribution and symbolic precondition models), and let W(H, i, π) be the distribution over symbolic observations drawn from the posterior of H if the agent follows policy π for i steps. We define the optimal exploration policy π∗ as:
π∗ = argmax_π ( σH − E_{w∼W(H,i,π)}[ σ_{H(w)} ] ).
For the convenience of our algorithm, we rewrite the second term by switching the order of the expectations: E_{w∼W(H,i,π)}[ E_{h∼H(w)}[ K(h, E[H(w)]) ] ] = E_{h∼H}[ E_{w∼W(h,i,π)}[ K(h, E[H(w)]) ] ]. Note that the objective function is non-Markovian because H is continuously updated with the agent’s new observations, which changes σH. This means that π∗ is non-stationary, so Algorithm 2 approximates π∗ in an online manner using Monte-Carlo tree search (MCTS) [3] with the UCT tree policy [10]. πT is the combined tree and rollout policy for MCTS, given tree T.
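As a concrete illustration, a Monte Carlo estimate of σH (and, applied to the updated posterior H(w), of the term inside the expectation over w) only needs model samples, the mean model, and the distance K; the sample count and names below are assumptions.

```python
def estimate_sigma(sample_model, mean_of_H, distance_fn, n_samples=50):
    """Monte Carlo estimate of sigma_H = E_{h~H}[K(h, E[H])].

    sample_model() draws h ~ H, mean_of_H is E[H], and distance_fn is K.
    """
    draws = [sample_model() for _ in range(n_samples)]
    return sum(distance_fn(h, mean_of_H) for h in draws) / n_samples
```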
There is a special case when the agent simulates the observation of a previously unobserved transition, which can occur under the sparse Dirichlet-categorical model. In this case, the amount of information gained is very large, and furthermore, the agent is likely to transition to a novel symbolic state. Rather than modeling the unexplored state space, instead, if an unobserved transition is encountered during an MCTS update, it immediately terminates with a large bonus to the score, a similar approach to that of the R-max algorithm [2]. The form of the bonus is -zg, where g is the depth that the update terminated and z is a constant. The bonus reflects the opportunity cost of not experiencing something novel as quickly as possible, and in practice it tends to dominate (as it should).
4 The Asteroids Domain
The Asteroids domain is shown in Figure 2a and was implemented using physics simulator pybox2d. The agent controls a ship by either applying a thrust in the direction it is facing or applying a torque in either direction. The goal of the agent is to be able to navigate the environment without colliding with any of the four “asteroids.” The agent’s starting location is next to asteroid 1. The agent is given the following 6 options (see Appendix A for additional details):
1. move-counterclockwise and move-clockwise: the ship moves from the current face it is adjacent to, to the midpoint of the face which is counterclockwise/clockwise on the same asteroid from the current face. Only available if the ship is at an asteroid.
5The KL divergence has previously been used in other active exploration scenarios [16, 14].
6Similarly to other active exploration papers, we define the distance to depend only on the transition models and not the reward models.
Algorithm 2 Optimal Exploration
Input: Number of remaining option executions i.
1: while i ≥ 0 do
2:   Build H from observations (Algorithm 1).
3:   Initialize tree T for MCTS.
4:   while number updates < threshold do
5:     Sample a symbolic model h ∼ H .
6:     Do an MCTS update of T with dynamics given by h.
7:     Terminate current update if depth g is ≥ i, or unobserved transition is encountered.
8:     Store simulated observations w ∼ W (h, g, πT ).
9:     Score = K(h,E[H])−K(h,E[H(w)])− zg.
10:   end while
11:   return most visited child of root node.
12:   Execute corresponding option; Update observations; i--.
13: end while
2. move-to-asteroid-1, move-to-asteroid-2, move-to-asteroid-3, and move-to-asteroid-4: the ship moves to the midpoint of the closest face of asteroid 1-4 to which it has an unobstructed path. Only available if the ship is not already at the asteroid and an unobstructed path to some face exists.
Exploring with these options results in only one factor (for the entire state space), with symbols corresponding to each of the 35 asteroid faces as shown in Figure 2a.
Results We tested the performance of three exploration algorithms: random, greedy, and our algorithm. For the greedy algorithm, the agent first computes the symbolic state space using steps 1-5 of Algorithm 1, and then chooses the option with the lowest execution count from its current symbolic state. The hyperparameter settings that we use for our algorithm are given in Appendix A.
Figures 3a, 3b, and 3c show the percentage of time that the agent spends on exploring asteroids 1, 3, and 4, respectively. The random and greedy policies have difficulty escaping asteroid 1, and are rarely able to reach asteroid 4. On the other hand, our algorithm allocates its time much more proportionally. Figure 3d shows the number of symbolic transitions that the agent has not observed (out of 115 possible).7 As we discussed in Section 3, the number of unobserved symbolic transitions is a good representation of the amount of information that the models are missing from the environment.
Our algorithm significantly outperforms random and greedy exploration. Note that these results are using an uninformative prior and the performance of our algorithm could be significantly improved by
7We used Algorithm 1 to build symbolic models from the data gathered by each exploration algorithm.
starting with more information about the environment. To try to give additional intuition, in Appendix A we show heatmaps of the (x, y) coordinates visited by each of the exploration algorithms.
5 The Treasure Game Domain
The Treasure Game [12], shown in Figure 2b, features an agent in a 2D, 528× 528 pixel video-game like world, whose goal is to obtain treasure and return to its starting position on a ladder at the top of the screen. The 9-dimensional state space is given by the x and y positions of the agent, key, and treasure, the angles of the two handles, and the state of the lock.
The agent is given 9 options: go-left, go-right, up-ladder, down-ladder, jump-left, jump-right, downright, down-left, and interact. See Appendix A for a more detailed description of the options and the environment dynamics. Given these options, the 7 factors with their corresponding number of symbols are: player-x, 10; player-y, 9; handle1-angle, 2; handle2-angle, 2; key-x and key-y, 3; bolt-locked, 2; and goldcoin-x and goldcoin-y, 2.
Results We tested the performance of the same three algorithms: random, greedy, and our algorithm. Figure 4a shows the fraction of time that the agent spends without having the key and with the lock still locked. Figures 4b and 4c show the number of times that the agent obtains the key and treasure, respectively. Figure 4d shows the number of unobserved symbolic transitions (out of 240 possible). Again, our algorithm performs significantly better than random and greedy exploration. The data
from our algorithm has much better coverage, and thus leads to more accurate symbolic models. For instance in Figure 4c you can see that random and greedy exploration did not obtain the treasure after 200 executions; without that data the agent would not know that it should have a symbol that corresponds to possessing the treasure.
6 Conclusion
We have introduced a two-part algorithm for data-efficiently learning an abstract symbolic representation of an environment which is suitable for planning with high-level skills. The first part of the algorithm quickly generates an intermediate Bayesian symbolic model directly from data. The second part guides the agent’s exploration towards areas of the environment that the model is uncertain about. This algorithm is useful when the cost of data collection is high, as is the case in most real world artificial intelligence applications. Our results show that the algorithm is significantly more data efficient than using more naive exploration policies.
7 Acknowledgements
This research was supported in part by the National Institutes of Health under award number R01MH109177. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. | 1. How can an agent actively explore to build a probabilistic symbolic model of the environment from a given set of options and state abstractions?
2. Does building a probabilistic symbolic model help make future exploration easier via MCTS?
3. What are the contributions of the paper, particularly regarding the MCTS algorithm?
4. How can the paper improve its clarity and articulation in the writing and experiments?
5. What are the implications of the restrictive assumptions made in the paper, such as the restriction to subgoal options and the assumption that state variables always change together in the observations?
6. How can the experimental setup be improved to better visualize and discuss interesting questions, such as the evolution of the symbol model with active exploration and the space of policies at various intermediate epochs? | Review | Review
Key question asked in the paper: Can an agent actively explore to build a probabilistic symbolic model of the environment from a given set of option and state abstractions? And does this help make future exploration easy via MCTS? This paper proposes a knowledge representation framework for addressing these questions.
Proposal: A known continuous state space and discrete set of options (restricted to subgoal options) is given. An options model is treated as a semi-MDP process. A plan is defined as a sequence of options. The idea of a probabilistic symbol is invoked from earlier work to refer to a distribution over infinitely many continuous low-level states. The idea of state masks is introduced to find independent factors of variation. Then each abstract subgoal option is defined as a policy that leads to a subgoal for the masked states, e.g. opening a door. But since there could be many doors in the environment, the idea of a partitioned abstract subgoal option is proposed to bind together subgoal options for each instance of a door. The agent then uses these partitioned abstract subgoal options during exploration. Optimal exploration then is defined as finding a policy via MCTS that leads to the greatest reduction of uncertainty in the distribution over the proposed symbolic option models.
Feedback:
- There are many ideas proposed in this paper. This makes it hard to clearly communicate the contributions. This work builds on previous work from Konidaris et al. Is the novel contribution in the MCTS algorithm? The contributions should be made more explicit for unfamiliar readers.
- It is difficult to keep track of what's going on in sec 3. I would additionally make alg1 more self-contained and use one of the working examples (asteroid or maze) to explain things.
- Discuss the assumption of restriction to subgoal options. What kinds of behaviors does this framework not permit? Discuss this in the paper.
- Another restrictive assumption: "We use factors to reduce the number of potential masks, i.e. we assume that if state variables i and j always change together in the observations, then this will always occur. An example of a factor could be the (x, y, z) position of your keys, because they are almost never moved along only one axis". What are the implications of this assumption on the possible range of behaviors?
- The experimental setup needs to be developed more to answer, visualize and discuss a few interesting questions: (1) how does the symbol model change with active exploration? (2) visualize the space of policies at various intermediate epochs. Without this it is hard to get an intuition for the ideas and ways to expand them in the future.
- I believe this is a very important line of work. However, clarity and articulation in the writing/experiments remains my main concern for a clear accept. Since some of the abstractions are fairly new, the authors also need to clearly discuss the implication of all their underlying assumptions. |
NIPS | Title
UnModNet: Learning to Unwrap a Modulo Image for High Dynamic Range Imaging
Abstract
A conventional camera often suffers from over- or under-exposure when recording a real-world scene with a very high dynamic range (HDR). In contrast, a modulo camera with a Markov random field (MRF) based unwrapping algorithm can theoretically accomplish unbounded dynamic range but shows degraded performance when there are modulus-intensity ambiguity, strong local contrast, and color misalignment. In this paper, we reformulate the modulo image unwrapping problem into a series of binary labeling problems and propose a modulo edge-aware model, named UnModNet, to iteratively estimate the binary rollover masks of the modulo image for unwrapping. Experimental results show that our approach can generate 12-bit HDR images from 8-bit modulo images reliably, and runs much faster than the previous MRF-based algorithm thanks to the GPU acceleration.
1 Introduction
Real-world scenes have a very high dynamic range (HDR) so that object contours are mostly lost in the over-exposed and under-exposed regions when captured by a conventional camera with a limited dynamic range and saved as an 8-bit image. To increase the dynamic range of captured images, many HDR reconstruction approaches have been proposed to increase the camera bit depth via hardware modifications [22, 36], as well as using computational methods to merge multi-bracketed captures [5] or a series of bursts [17]. Yet the dynamic range they can achieve is limited and the details of the HDR content often cannot be faithfully recovered. A modulo camera [59] can theoretically achieve unbounded dynamic range by recording the least significant bits of the irradiance signal, i.e., the camera hardware “resets” the scene radiance arriving at the sensor before reading it out whenever it reaches saturation (e.g., for an 8-bit image, 256 will be reset to 0 and re-start the counting again as long as the shutter keeps open). By unwrapping the captured modulo image with a customized Markov random field (MRF) based algorithm, the HDR image could be practically restored, as shown in Figure 1 (top left). We denote the irradiance of an HDR image as I = {I(x, y, c)}, and its corresponding modulo image as Im = {Im(x, y, c)}, where (x, y) is the pixel coordinate and c denotes the color channel index. Im is equivalent to the least significant N bits of I. As illustrated in the bottom row of Figure 1, their relationship can be expressed as:
Im = mod(I, 2^N) or I = Im + 2^N · K, (1) where K = {K(x, y, c)} is the number of rollovers per pixel.
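As a minimal illustration of this relationship, the sketch below wraps an HDR irradiance image into its modulo counterpart and recovers the rollover count; the names and the integer dtype are assumptions.

```python
import numpy as np

def wrap_to_modulo(irradiance, n_bits=8):
    """Simulate an N-bit modulo capture of an HDR irradiance image, as in Equation (1).

    irradiance: integer array I of shape (H, W, C). Returns the modulo image Im and the
    per-pixel rollover count K, so that I = Im + 2**n_bits * K. For n_bits = 8 the
    returned modulo image fits in a standard 8-bit frame.
    """
    irradiance = np.asarray(irradiance, dtype=np.int64)
    modulo = np.mod(irradiance, 2 ** n_bits)   # Im: least significant N bits
    rollovers = irradiance >> n_bits           # K: how many times the sensor reset
    return modulo, rollovers
```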
However, as shown in Figure 1 (top right), the MRF-based unwrapping algorithm [59] is not robust due to several fundamental issues:
(1) Modulus-intensity ambiguity. It is difficult to discriminate whether a pixel value is a modulus or an intensity (non-modulus). The previous method would often incorrectly unwrap non-modulo pixels as it used a cost function without a data term during optimization.
(2) Strong local contrast. Dense modulo fringes (marked with green arrows in the bottom row of Figure 1) are usually caused by strong local contrast in irradiance. The previous method often fails around these regions as it only focused on local smoothness and ignored contextual information and structural patterns.
(3) Color misalignment. The previous method independently unwraps each color channel, resulting in severe color misalignment artifacts across three channels, so it cannot handle RGB images robustly.
In this paper, we reformulate the unwrapping of a modulo image into a series of binary labeling problems and propose a learning-based framework named UnModNet, as shown in Figure 2, to iteratively estimate the binary rollover mask of the input modulo image. Concretely, we have some key observations on the characteristics of modulo images: continuous irradiance regions are split up by the modulo operation, resulting in a large edge magnitude around modulo fringes; the over-exposed regions are concentratedly distributed in an image, which makes the modulo pixels likely to cluster in local regions. Based on these unique features of modulo pixels and edges, we design UnModNet to be two-stage accordingly: the first stage is a modulo edge separator that estimates channel-wise edges unique to modulo images; the second stage is a rollover mask predictor that achieves high-accuracy rollover mask prediction with the guidance of modulo edges.
To summarize, our learning strategy for modulo image unwrapping proposes three customized model designs to solve the three issues in the previous MRF-based algorithm [59] as follows:
(1) Modulo edge separator is proposed to distinguish the semantic and boundary information of the scene to relieve the modulus-intensity ambiguity and indicate correct regions to unwrap in a context-aware manner.
(2) Rollover mask predictor is adopted to deal with strong local contrast and dense modulo fringes to increase the capability of unwrapping a higher dynamic range in a structure-aware manner.
(3) Consistent color prediction is achieved by joint unwrapping across RGB channels, so that our model restores natural color appearance reliably.
Experimental results show that our approach can generate 12-bit HDR images from 8-bit modulo images reliably, and runs much faster than the previous MRF-based algorithm [59] thanks to the GPU acceleration.
2 Related Work
Multi-image HDR reconstruction. One of the most representative multi-image HDR reconstruction methods, proposed by Debevec and Malik [5], merges several low dynamic range (LDR) photographs under different exposures. However, it suffers from ghosting artifacts in HDR results when there is misalignment caused by camera movement or scene change during the exposure time. This problem provokes a series of studies on ghosting removal in HDR images [25, 39, 43]. Instead of using bracketed exposures, Hasinoff et al. [17] fused a burst of frames of constant exposure, which reduces the exposure time substantially and makes alignment more robust. Recently, several deep convolutional neural networks (CNNs) based approaches [24, 56, 58] have been developed to rebuild an HDR image from multiple LDR images. In contrast, we focus on single-image HDR reconstruction.
Single-image HDR reconstruction. Single-image HDR reconstruction, which aims to reconstruct the HDR image from a single LDR image, is also named as inverse tone mapping [1]. It is free of ghosting artifacts but more challenging than its multi-image counterpart due to the lack of irradiance information in badly-exposed areas. This ill-posed problem can be solved by several approaches [32, 41] based on numerical optimization. Recently, two categories of new methods emerged: learning-based HDR restoration, which hallucinates plausible HDR content from a single LDR image; and unconventional cameras, which captures additional information in a single photo from the scene. Learning-based methods discover HDR image priors from a large amount of training data. HDRCNN [6] adopted an encoder-decoder architecture to restore saturated areas in LDR images. ExpandNet [31] concatenated and fused different levels of features extracted by CNN to get HDR images directly. Endo et al. [7] used CNN to predict the LDR images under multiple exposures and merged them by the classical method [5]. Metzler et al. [34] jointly optimized a diffractive optical element-based encoder and a CNN-based decoder to recover saturated scene details. Liu et al. [30] trained a CNN to reverse the camera pipeline to reconstruct the HDR image. Methods using unconventional cameras attempt to gather additional cues about dynamic range from the scene to address the ill-posed nature of this problem. Nayar et al. [36] placed an optical mask adjacent to a conventional image detector array to make spatially varying pixel exposures. Hirakawa and Simon [22] placed a combination of photographic filter over the lens and color filter array on a conventional camera sensor. Neuromorphic cameras are also shown to be useful in guiding the process of HDR imaging [16, 53, 55]. Furthermore, some concept cameras have been proposed. Tumblin et al. [49] proposed a log-gradient camera that does well in capturing detailed high contrast scenes. Zhao et al. [59] proposed a modulo camera-based framework to push out the boundary of the dynamic range.
Phase unwrapping. Phase unwrapping is a classic signal processing problem that refers to recovering the original phase value from the principal value (wrapped phase). It is widely used in domains like optical metrology [8], synthetic aperture radar (SAR) interferometry [15], and medical imaging [4]. Phase unwrapping can be solved by Poisson’s equation [46], MRF-based iterative method [2], path-following method [20], etc. Recently deep CNNs have also been used to handle this problem [42, 47, 52]. However, these methods are designed for handling phase images, which have completely different properties from natural images.
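For intuition, the classic 1D version of this problem can be sketched in a few lines of NumPy (our own toy illustration, not taken from any of the cited methods):

import numpy as np

# A smooth phase signal growing from 0 to about 6*pi radians.
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)
# Wrapping maps every value into the principal interval (-pi, pi].
wrapped = np.angle(np.exp(1j * true_phase))
# Classic unwrapping: whenever a jump between neighbors exceeds pi,
# add or subtract a multiple of 2*pi to restore continuity.
recovered = np.unwrap(wrapped)
print(np.allclose(recovered, true_phase))  # True for this smooth signal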
Natural image unwrapping. Natural image unwrapping aims to recover the original scene radiance from its modulo counterpart, which is exactly what the modulo camera-based framework [59] aims to achieve. Although it is analogous to phase unwrapping, methods designed for the phase unwrapping problem cannot be directly applied because phase images and natural images are two types of data with a huge domain gap. Recently, several solutions [28, 44, 45] have been proposed to deal with the natural image unwrapping problem. However, these methods require multiple modulo images as input and cannot work when only a single modulo image is available. By exploring natural image statistics, the MRF-based algorithm proposed in [59] successfully demonstrates the
feasibility of unwrapping a single modulo image to expand the dynamic range. However, failure cases are also commonly observed, as in the example shown in Figure 1 (top right).
3 Method
In this section, we first introduce the iterative formulation of the problem, and show the overall pipeline of UnModNet in Section 3.1 and Figure 2. Then, we detail our two-stage UnModNet model designs in Section 3.2 and Section 3.3. Implementation details are presented in Section 3.4.
3.1 Problem formulation and overall pipeline
We aim to restore the ground truth HDR image I by unwrapping a single modulo image Im captured by a modulo camera. According to Equation (1), this is equivalent to estimating the number of rollovers K. In a probabilistic framework, our goal is to estimate $\arg\max_{K} P(K \mid I_m)$. In theory, the label space of K is the whole non-negative integer space {0, 1, 2, . . .}; given a modulo image, even bounding this label space is non-trivial, which makes it challenging to estimate the likelihood directly.
Therefore, we make the model more tractable by factorizing over the number of rollovers K as
$P(K \mid I_m) = \prod_{k=1}^{\infty} P\big(M^{(k+1)} \mid M^{(1)}, \dots, M^{(k)}, I_m\big) \, P\big(M^{(1)} \mid I_m\big), \qquad (2)$
where $M^{(k)} = \{M^{(k)}(x, y, c)\}$ represents the binary rollover mask in the k-th factor, which satisfies
$M^{(k)}(x, y, c) = \begin{cases} 1 & \text{if } k \le K(x, y, c) \\ 0 & \text{otherwise} \end{cases} \quad \text{and} \quad \sum_{k=1}^{\infty} M^{(k)} = K, \qquad (3)$
as shown in Figure 3 (left). With an arbitrary number of binary rollover masks, we can always render an updated modulo image by
$I_m^{(k)} = I_m + 2^N \cdot \big(M^{(1)} + \cdots + M^{(k)}\big), \qquad (4)$
so we further transform Equation (2) into
$P(K \mid I_m) = \prod_{k=0}^{\infty} P\big(M^{(k+1)} \mid I_m^{(k)}\big), \qquad (5)$
where $I_m^{(0)} = I_m$. Thus, estimating $P(M^{(k+1)} \mid I_m^{(k)})$ in the k-th factor is equivalent to estimating the corresponding binary rollover mask given a modulo image, and the original problem becomes an iterative per-pixel binary labeling problem that terminates when $M^{(k+1)} = 0$.
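To make the decomposition in Equations (3)-(4) concrete, the following NumPy sketch (an illustration we provide, not part of the released code) builds the binary rollover masks from a rollover count map K and reconstructs the HDR image from the modulo image:

import numpy as np

N = 8                                    # modulo bit depth, period 2^N = 256
I = np.array([[100, 300], [700, 900]])   # toy ground-truth HDR intensities
K = I // (2 ** N)                        # number of rollovers per pixel
I_m = I % (2 ** N)                       # modulo image, Equation (1)

# Equation (3): the k-th binary mask is 1 wherever at least k rollovers occurred.
masks = [(K >= k).astype(np.int64) for k in range(1, int(K.max()) + 1)]
assert (sum(masks) == K).all()           # the masks sum back to K

# Equation (4): adding one period per mask recovers the HDR image.
I_rec = I_m + (2 ** N) * sum(masks)
assert (I_rec == I).all()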
As shown in Figure 2, UnModNet takes a single modulo image Im as input, iteratively updates it by predicting the binary rollover mask M, and outputs the HDR result I once the algorithm terminates. Each unwrapping iteration can be written as:
$I_m^{(k+1)} = I_m^{(k)} + 2^N \cdot M^{(k+1)} = I_m^{(k)} + g\big(I_m^{(k)}\big), \qquad (6)$
where g represents the proposed UnModNet. An example is shown in Figure 3 (right).
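The resulting inference loop of Equation (6) can be sketched as follows; predict_rollover_mask is a placeholder for the trained UnModNet stages, not the authors' actual API:

import numpy as np

def unwrap_modulo(I_m, predict_rollover_mask, N=8, max_iters=16):
    # I_m: modulo image of shape (H, W, 3) with values in [0, 2^N - 1].
    # predict_rollover_mask: callable returning a binary mask of the same shape.
    I = I_m.astype(np.int64)
    for _ in range(max_iters):
        M = predict_rollover_mask(I)     # predict the next binary rollover mask M^(k+1)
        if not M.any():                  # terminate once the mask is all zero
            break
        I = I + (2 ** N) * M             # add one period where a rollover remains
    return I                             # estimated HDR image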
3.2 Modulo edge separator
Although the pixel intensity distribution within modulo regions is irregular, modulo images always contain distinctive dense edges. In particular, a modulo camera introduces abrupt intensity changes in regions of continuous irradiance, resulting in modulo edges with large magnitude. Edges are effective cues for various image restoration tasks, such as reflection separation [29], moiré pattern removal [18], and image inpainting [37], because the sparse nature of edges relieves the ill-posedness of these problems. Similarly, we expect a modulo edge separator to assist our goal of rollover mask prediction.
We first design a network module to predict channel-wise modulo edges Em from a single modulo image Im, as shown in the first stage of Figure 2. Modulo edges Em, which encode boundary information about modulo regions, can be defined as Em = bin(El − En), where bin stands for binarization, El denotes the channel-wise edge map (edges of the modulo image Im), and En represents the intensity edges (edges of the ground truth HDR image I). Since the modulo edges appear when the “reset” of intensity from maximum to zero is triggered, their magnitude should be larger than that of most intensity edges. As a simple verification, by measuring the average edge magnitude over 3000 synthetic modulo images, we find that the magnitude of El − En is around 4 times larger than that of En.2 This property is helpful for separating Em from En. To better exploit it, we propose to learn the residual between El and Em instead of predicting Em directly. Channel-wise Laplace kernels are used to filter the input modulo image Im to obtain the edge map El. Such a network can be described as:
$E_m = E_l + g_e\big(\mathrm{cat}(E_l, I_m)\big), \qquad (7)$
where ge denotes the backbone network and cat stands for feature concatenation. In practice, we construct ge using an autoencoder [21] architecture with residual bottleneck blocks [19] to boost network depth, non-local operations [54] to enlarge receptive fields, and skip-connections to magnify the response of modulo edges.
Obtaining the modulo edges Em makes unwrapping much easier because they serve as a prior that contains abundant boundary information about modulo regions. The modulo edges are jointly predicted for all channels, resulting in a more consistent estimation.
2Please refer to the supplementary material for more details about modulo edges and experimental validation.
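A minimal PyTorch sketch of Equation (7) is given below; the exact g_e architecture is only specified in the supplementary material, so the backbone is abstracted into a generic module and the rest is an assumed simplification:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Channel-wise (depthwise) Laplace kernel used to compute the edge map E_l.
LAPLACE = torch.tensor([[0., 1., 0.],
                        [1., -4., 1.],
                        [0., 1., 0.]])

def laplace_edges(img):                          # img: (B, 3, H, W)
    kernel = LAPLACE.to(img).view(1, 1, 3, 3).repeat(3, 1, 1, 1)
    return F.conv2d(img, kernel, padding=1, groups=3)

class ModuloEdgeSeparator(nn.Module):
    def __init__(self, backbone):                # backbone g_e: maps 6 channels -> 3 channels
        super().__init__()
        self.backbone = backbone

    def forward(self, I_m):                      # I_m: (B, 3, H, W) modulo image
        E_l = laplace_edges(I_m)                 # edges of the modulo image
        residual = self.backbone(torch.cat([E_l, I_m], dim=1))
        return E_l + residual                    # Equation (7): E_m = E_l + g_e(cat(E_l, I_m))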
3.3 Rollover mask predictor
We have observed that modulo pixels are more likely to cluster in local regions, which is consistent with the fact that high dynamic range pixels are usually concentrated in small areas of an image (see Figure 1). This makes modulo regions distinct from intensity regions, since visually they show unnatural color appearances. Moreover, two modulo images that differ by only one binary rollover mask (say $I_m^{(k)}$ and $I_m^{(k+1)}$) share similar structure patterns, i.e., $I_m^{(k+1)}$ can be viewed as an updated modulo image whose maximum intensity is “one period” (in our case 256) larger than that of $I_m^{(k)}$.
We therefore design another network module to predict the binary rollover mask M, given a modulo image Im and its channel-wise modulo edges Em as input, as shown in the second stage of Figure 2. Directly feeding the concatenation of Im and Em to the network makes it hard for the model to converge because of the large domain gap between the two types of data. To overcome this difficulty, we use convolutions and non-local blocks to extract the local and global features of Im and Em, and fuse them with a concatenation and a squeeze-and-excitation (SE) block [23]. The SE block learns normalized weights for each channel and recalibrates the feature maps by re-weighting them. The predicted binary rollover mask produced by this module can be expressed mathematically as follows:
$M = g_m\Big(\mathrm{SE}\big(\mathrm{cat}(\mathcal{F}_i(I_m), \mathcal{F}_e(E_m))\big)\Big), \qquad (8)$
where gm denotes the backbone network, SE represents the SE block, and Fi and Fe indicate the feature extraction processes for Im and Em, respectively. For gm, we choose the Attention U-Net architecture [40], and use residual bottleneck blocks and strided convolutions to replace the double convolution blocks and max-pooling layers at each scale, respectively.
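To illustrate the fusion in Equation (8), here is a rough PyTorch sketch with a minimal squeeze-and-excitation block; the feature extractors F_i, F_e and the Attention U-Net backbone g_m are passed in as placeholders (names chosen for illustration only, not the authors' released interfaces):

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    # Squeeze-and-excitation: learn per-channel weights and re-weight the feature maps.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze to (B, C), excite to weights in [0, 1]
        return x * w.unsqueeze(-1).unsqueeze(-1)

class RolloverMaskPredictor(nn.Module):
    def __init__(self, feat_i, feat_e, backbone, channels):
        super().__init__()
        self.feat_i, self.feat_e = feat_i, feat_e    # F_i and F_e feature extractors
        self.se = SEBlock(channels)                  # fuses the concatenated features
        self.backbone = backbone                     # g_m, e.g. an Attention U-Net

    def forward(self, I_m, E_m):
        fused = self.se(torch.cat([self.feat_i(I_m), self.feat_e(E_m)], dim=1))
        logits = self.backbone(fused)                # Equation (8)
        return torch.sigmoid(logits)                 # threshold at 0.5 to obtain the binary mask M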
Once the rollover masks become available, we can treat the unwrapping problem as an iterative per-pixel binary labeling problem, as discussed in Section 3.1. With the semantic information provided by the modulo image Im and the boundary information provided by the modulo edges Em, the estimation of the rollover mask tends to be more robust and the unwrapped image suffers less from modulus-intensity ambiguity.
3.4 Implementation details
Loss function. The total loss function of UnModNet is L = α · Le + Lm, where Le defines the loss of the modulo edge separator, Lm defines the loss of the rollover mask predictor, and α is set to 1.0 empirically. The binary cross entropy loss is used for both Le and Lm.
Dataset preparation. Learning-based methods depend heavily on training data, but there is no existing dataset for our task. Therefore, we collect HDR images from a variety of image and video sources [10, 11, 12, 13, 14, 27, 38, 57] and propose an effective dataset creation pipeline. The generation of the ground truth HDR image I can be expressed as $I = \lfloor (2^B - 1) \cdot \mathrm{clip}(E \cdot \Delta t, [0, 1]) \rfloor$, where B denotes the quantization bit depth, E indicates the relative irradiance values of each raw HDR image (E ∈ [0, 1]), and ∆t is an appropriate exposure time to control the over-exposure rate.3 The corresponding modulo image Im and LDR image Il can be calculated by Equation (1) (N is set to 8 for 8-bit modulo images) and Il = clip(I, [0, 255]), respectively. We choose B = 12 (i.e., 12-bit HDR images with a maximum intensity of 4095) and set the over-exposure rate between 5% and 30%. The images are resized and randomly cropped to 256 × 256 patches during training, and cropped to 512 × 512 patches for testing.
Training strategy. We implement UnModNet4 using PyTorch and apply a two-stage training strategy. First, to ensure a stable initialization of the training process, we train the modulo edge separator and the rollover mask predictor independently for 400 and 200 epochs, respectively. Then, we fix the modulo edge separator and train the entire network end-to-end for another 200 epochs. The ADAM optimizer [26] is used with an initial learning rate of 1 × 10−4 for the first 200 epochs, which linearly decays to 5 × 10−5 over the next 200 epochs. Dropout noise [48] and instance normalization [51] are added during training.
3More details about the dataset creation pipeline can be found in the supplementary material. 4Detailed network architecture can be found in the supplementary material.
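The data generation described above can be summarized by a short NumPy sketch (a simplification under the stated settings B = 12 and N = 8; exposure selection and over-exposure-rate control are omitted):

import numpy as np

def make_training_triplet(E, delta_t, B=12, N=8):
    # E: relative irradiance in [0, 1]; delta_t: chosen exposure time.
    I = np.floor((2 ** B - 1) * np.clip(E * delta_t, 0.0, 1.0))   # ground-truth HDR image
    I_m = np.mod(I, 2 ** N)                                       # 8-bit modulo image, Equation (1)
    I_l = np.clip(I, 0, 255)                                      # conventional LDR image
    return I, I_m, I_l

# Training loss: L = alpha * L_e + L_m, with binary cross entropy for both terms (alpha = 1.0).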
4 Experiments
4.1 Evaluation on synthetic data
We compare the results of UnModNet to the MRF-based algorithm [59] which takes a single modulo image as input and three state-of-the-art learning-based HDR reconstruction methods which take a single LDR image as input: DrTMO [7], ExpandNet [31], and HDRCNN [6]. Since our method keeps the same set of parameters for all test cases, for a fair comparison, we fix the parameters of the MRF algorithm for evaluation as well. Note that comparing with learning-based single-image HDR reconstruction methods (DrTMO [7], ExpandNet [31], and HDRCNN [6]) might be a bit unfair because of the difference in types of input data (LDR image vs. modulo image), and we conduct such a comparison to show the effectiveness of using modulo images w.r.t. state-of-the-art single-image approaches. Visual quality comparisons of tone-mapped HDR images are shown in
Figure 4 (more synthetic results can be found in the supplementary material). Compared to the MRF-based algorithm using a modulo image, our model is robust under strong local contrast or dense modulo fringes, while avoiding unwrapping incorrect regions and color misalignment. For example, the lighthouse (red box) in the middle row of Figure 4, which has drastic dynamic range changes, is correctly unwrapped by UnModNet, while the MRF-based algorithm fails to discriminate modulo pixels from intensity regions and suffers from severe color misalignment artifacts. Compared to learning-based methods using an LDR image, our method performs better in recovering high-contrast areas and resembles the ground truth more closely. To evaluate the results quantitatively, we adopt four frequently-used image quality metrics: SSIM, MS-SSIM (multi-scale SSIM), PSNR, and Q-Score (produced by HDR-VDP-2.2 [35]). Results are shown in Table 1 (also for the examples in Figure 4). Our model consistently outperforms the MRF-based and learning-based HDR reconstruction methods on all metrics. Furthermore, we evaluate the runtime of UnModNet on an NVIDIA 2080Ti GPU and the MRF-based algorithm on an Intel Core i7-8700K CPU (using a single core). Note that the MRF-based algorithm cannot benefit from GPU acceleration, so we can only run it on a CPU. At each iteration, UnModNet takes around 200 ms to process a 512 × 512 modulo image, which is around 120 times faster than the MRF-based algorithm.
4.2 Evaluation on real data
Modulo images from real RGB images. We use a Fujifilm X-T20 mirrorless digital camera6 to create a real dataset from RGB images. First, we take a series of images (around 7–9) with bracketed exposures, and use the classical multi-image HDR reconstruction method [5] to merge them into an HDR image. The exposure value of each image is increased by 2 stops. Then, we use the dataset generation pipeline proposed in Section 3.4 to obtain the ground truth I, the modulo image Im, and the corresponding LDR image Il. As shown in Figure 5, our model is able to reconstruct visually impressive HDR images with fewer artifacts and higher quantitative scores than other methods.
Modulo images from a real sensor. There are several technologies that can wrap the scene radiance into a modulo image before converting it to digital signals, such as the digital-pixel focal plane array (DFPA) [3, 9, 50] (used in [59]), programmable readout circuits [33], and intelligent vision sensors (e.g., Sony IMX5008). We configure a retina-inspired fovea-like sampling model (FSM)
6https://fujifilm-x.com/global/products/cameras/x-t20/ 7More real RGB results can be found in the supplementary material. 8https://www.sony.net/SonyInfo/News/Press/202005/20-037E/
4.3 Ablation study
To verify the validity of each model design choice, we conduct a series of ablation studies and show comparisons in Table 2. We first show the effectiveness of our iterative unwrapping pipeline by comparing with a model that directly predicts the number of rollovers K. Then, we verify the necessity of the modulo edge separator by removing it and show the effectiveness of learning the residual between the edge map El and modulo edges Em in the modulo edge separator by removing the Laplace operation. Finally, we validate the two-stage training strategy by training the entire network in an end-to-end manner.
5 Conclusion
We presented a learning-based framework for modulo image unwrapping to realize high dynamic range imaging. To deal with the ill-posedness of this problem, we reformulated it into a series of binary labeling problems and proposed UnModNet to iteratively estimate the binary rollover masks of an input modulo image. Our model design solved some fundamental issues in the previous MRF-based algorithm [59], including modulus-intensity ambiguity, strong local contrast, and color misalignment.
The highest bit depth that can be achieved by the existing model is constrained by the configuration of training data.10 As future work, we plan to extend UnModNet to support dynamic bit depth.
9Please refer to the supplementary material for how we use SpiCam-Mod to capture modulo images. 10We demonstrate 16-bit HDR reconstruction results in the supplementary material.
Broader Impact
Our research is about a new camera framework that aims to capture high-quality HDR images. It could be integrated into the image processing pipeline of camera sensors to improve the ability of recording scenes with a very high dynamic range. The users of mobile cameras may benefit from this research because they could conveniently take photos without being annoyed by over- or under-exposure artifacts. Besides, it might be helpful to build a scientific imaging system that needs to record high dynamic range scenes, such as astronomy and microscope cameras.
Although the modulo camera-based framework could theoretically achieve unbounded dynamic range, its generalization capability is limited by the diversity of the training data. The unwrapping algorithm may fail when the captured scene has a very high dynamic range which exceeds the maximum dynamic range of the images in the training data by a large margin. If that happens in a large region of pixels, we would recommend using LDR images instead since they have more natural color appearances.
Acknowledgments and Disclosure of Funding
This work was supported in part by National Natural Science Foundation of China under Grant No. 61872012, No. 61876007, National Key R&D Program of China (2019YFF0302902), Beijing Academy of Artificial Intelligence (BAAI), Beijing major science and technology projects (Z191100010618003), and Australian Research Council Grant DE-180101438.
1. What is the focus of the paper regarding image processing?
2. What are the strengths of the proposed method, particularly its structure and heuristics?
3. What are the weaknesses of the paper, especially regarding its problem scope and baseline evaluation?
4. How does the reviewer assess the significance and practicality of the problem addressed in the paper?
5. What kind of comparison would the reviewer suggest for a more informative evaluation?
Summary and Contributions
This paper presents a deep learning solution to the problem of processing an image from a "modulo camera". This is a highly experimental camera in which the sensor "wraps around" instead of saturating, which theoretically gives it an unbounded dynamic range. This paper shows how to frame this problem such that it can be solved with neural networks.
Strengths
As someone who is interested in computational imaging, I find modulo cameras to be quite interesting and compelling. The proposed method seems reasonable to me. The "rollover" structure and the heuristics that are used to inform the treatment of edges and "fringes" seem solid and insightful to me. The paper very clearly demonstrates that the proposed model beats the baseline from [55] (which appears to be based on graph cuts) by a very large margin.
Weaknesses
My primary concern with this paper is that the problem it is addressing is *extremely* niche --- Modulo cameras are a somewhat obscure problem even within the realm of the computational imaging community. If I was reviewing this paper for a computational imaging/photography conference, I would be more charitable towards this paper. But this subject is unlikely to be of interest to the general NeurIPS audience, and this paper seems unlikely to reach its intended audience if presented at NeurIPS. And the specifics of this neural network architecture are so specifically tailored to this particular problem that I'm not sure what a general ML researcher could come away from this paper with, nor am I convinced that this is a problem that should be popularized with ML researchers as, again, a solution to this problem has limited practical value given that modulo cameras are still a largely hypothetical concept. My other concern with this paper (which would be a significant concern even if I were reviewing this paper in a computational imaging conference) is that the baseline evaluation is misleading. The most important comparison is to [55], which is the paper that originates the idea of a modulo camera, and which the evaluation suggests is outperformed by an enormous margin (I'm actually a bit confused as to why this margin is so significant as the images shown in [55] look much better than the images shown here in the evaluation against [55], but I'm willing to believe that the images in [55] were cherry-picked). This evaluation would have been easier to parse if the authors had simply used some of the images presented in [55], so at least a qualitative comparison could have been made. But my biggest concern is comparison with the "baseline" techniques in Table 1 other than [55], which are *not for the modulo camera task*! The techniques of [7, 30, 6] are learning-based methods for regressing from an LDR image *from a conventional camera* to an HDR image --- they are not techniques for processing modulo images. The input to the baseline techniques is not the same input as is used by the proposed model. This means that this baseline evaluation is not actually evaluating the accuracy of the proposed model, it is evaluating the accuracy of a modulo camera versus a conventional camera. I do not see the point of this evaluation outside of a computational imaging context. The actual evaluation that I would like to see is against other basic neural networks, or against other non-learned techniques for processing modulo imagery. I am not sure very many such techniques exist because (as previously stated) this problem is not well-studied, but a quick search let me find three papers that appear to propose solutions to the problem of [55]: “Robust Multi-Image HDR Reconstruction for the Modulo Camera”, “Reconstruction from Periodic Nonlinearities, with Applications to HDR Imaging”, and “Signal Reconstruction from Modulo Observations”. Or, the baseline evaluation could be performed by applying conventional CNNs (U-Nets, etc) to the modulo imaging tasks, as this would show that the architecture presented here is indeed necessary for this task. But I see very little value in having "baseline" comparisons in which the input comes from a completely different kind of camera as the images used as input to the proposed model, if the goal of the evaluation is to show the value of the model (and not the kind of camera). |
1. What is the focus and contribution of the paper regarding phase unwarping of modulo HDR images?
2. What are the strengths of the proposed approach, particularly in its formulation and ablation study?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works and lack of consideration of physical limitations?
4. Do you have any concerns about the choice of modulo image bit depth and its connection to the sensor's full well capacity?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions
The paper proposes a deep-learning-based technique for 2D phase unwrapping of modulo HDR images. The authors demonstrate an improvement over an MRF-based solution.
Strengths
(S1) Although the idea of modulo HDR camera is not novel, the formulation of the problem as a binary labeling problem is novel and interesting. The ablation study justifies the approach.
(S2) The proposed technique seems to outperform the previous work (but see also W1 below).
(S3) The paper is well written and presented, the evaluation is solid, compared with a good number of methods (but see also W2).
(S4) Tested with both 12 and 16-bit modulo 256 cameras.
(S5) Tested with the "spike" camera.
Weaknesses
(W1) The results for the MRF-based method [55] look suspiciously bad. The results shown in the original paper look much better for similar images. Yet, the current submission does not include any of the standard HDR images that are also shown in [55]. Are those results generated with the original implementation?
(W2) Could any of the 2D phase unwrapping methods based on deep learning, mentioned in Related Work, be used to reconstruct an HDR image? If not, it should be explained. If yes, a comparison with one such method should be included.
(W3) The method does not consider the physical limitations of such modulo cameras: noise characteristics and blooming on the sensor. A proper camera noise model, for example the one from Aguerrebere, C., Delon, J., Gousseau, Y., & Musé, P. (2013), "Study of the digital camera acquisition process and statistical modeling of the sensor raw data", could be used to generate synthetic data.
(W4) The claim that the method is 120 times faster than modulo unwrapping is not entirely fair: does the compared algorithm run on the same GPU? The details on the MRF implementation should be included.
(W5) The choice of 8 bits for the modulo image seems arbitrary - most sensors are equipped with at least a 12-bit ADC. But the actual motivation for the modulo camera should be the limited full well capacity. It would be good if the choice of the "modulo" was linked to the physical properties of the sensor.
(W6) The SSIM and PSNR results should not be computed on the linear irradiance values. They should be computed with logarithmic or PU-transforms (see for example https://doi.org/10.1117/12.765095).
(W7) Can the method process images larger than 512x512?
NIPS | Title
UnModNet: Learning to Unwrap a Modulo Image for High Dynamic Range Imaging
Abstract
A conventional camera often suffers from overor under-exposure when recording a real-world scene with a very high dynamic range (HDR). In contrast, a modulo camera with a Markov random field (MRF) based unwrapping algorithm can theoretically accomplish unbounded dynamic range but shows degenerate performances when there are modulus-intensity ambiguity, strong local contrast, and color misalignment. In this paper, we reformulate the modulo image unwrapping problem into a series of binary labeling problems and propose a modulo edge-aware model, named as UnModNet, to iteratively estimate the binary rollover masks of the modulo image for unwrapping. Experimental results show that our approach can generate 12-bit HDR images from 8-bit modulo images reliably, and runs much faster than the previous MRF-based algorithm thanks to the GPU acceleration.
1 Introduction
Real-world scenes have a very high dynamic range (HDR) so that object contours are mostly lost in the over-exposed and under-exposed regions when captured by a conventional camera with a limited dynamic range and saved as an 8-bit image. To increase the dynamic range of captured images, many HDR reconstruction approaches have been proposed to increase the camera bit depth via hardware modifications [22, 36], as well as using computational methods to merge multi-bracketed captures [5] or a series of bursts [17]. Yet the dynamic range they can achieve is limited and the details of the HDR content often cannot be faithfully recovered. A modulo camera [59] can theoretically achieve unbounded dynamic range by recording the least significant bits of the irradiance signal, i.e., the camera hardware “resets” the scene radiance arriving at the sensor before reading it out whenever it reaches saturation (e.g., for an 8-bit image, 256 will be reset to 0 and re-start the counting again as long as the shutter keeps open). By unwrapping the captured modulo image with a customized Markov random field (MRF) based algorithm, the HDR image could be practically restored, as shown in Figure 1 (top left). We denote the irradiance of an HDR image as I = {I(x, y, c)}, and its corresponding modulo image as Im = {Im(x, y, c)}, where (x, y) is the pixel coordinate and c denotes the color channel index. Im is equivalent to the least significant N bits of I. As illustrated in the bottom row of Figure 1, their relationship can be expressed as:
Im = mod(I, 2N ) or I = Im + 2N ·K, (1) where K = {K(x, y, c)} is the number of rollovers per pixel.
∗Corresponding author.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
However, as shown in Figure 1 (top right), the MRF-based unwrapping algorithm [59] is not robust due to several fundamental issues:
(1) Modulus-intensity ambiguity. It is difficult to discriminate whether a pixel value is a modulus or an intensity (non-modulus). The previous method would often incorrectly unwrap non-modulo pixels as it used a cost function without a data term during optimization.
(2) Strong local contrast. Dense modulo fringes (marked with green arrows in the bottom row of Figure 1) are usually caused by strong local contrast in irradiance. The previous method often fails around these regions as it only focused on local smoothness and ignored contextual information and structural patterns.
(3) Color misalignment. The previous method independently unwraps each color channel, resulting in severe color misalignment artifacts across three channels, so it cannot handle RGB images robustly.
In this paper, we reformulate the unwrapping of a modulo image into a series of binary labeling problems and propose a learning-based framework named UnModNet, as shown in Figure 2, to iteratively estimate the binary rollover mask of the input modulo image. Concretely, we have some key observations on the characteristics of modulo images: continuous irradiance regions are split up by the modulo operation, resulting in a large edge magnitude around modulo fringes; the over-exposed regions are concentratedly distributed in an image, which makes the modulo pixels likely to cluster in local regions. Based on these unique features of modulo pixels and edges, we design UnModNet to be two-stage accordingly: the first stage is a modulo edge separator that estimates channel-wise edges unique to modulo images; the second stage is a rollover mask predictor that achieves high-accuracy rollover mask prediction with the guidance of modulo edges.
To summarize, our learning strategy for modulo image unwrapping proposes three customized model designs to solve the three issues in the previous MRF-based algorithm [59] as follows:
(1) Modulo edge separator is proposed to distinguish the semantic and boundary information of the scene to relieve the modulus-intensity ambiguity and indicate correct regions to unwrap in a context-aware manner.
(2) Rollover mask predictor is adopted to deal with strong local contrast and dense modulo fringes to increase the capability of unwrapping a higher dynamic range in a structure-aware manner.
(3) Consistent color prediction is achieved by joint unwrapping across RGB channels, so that our model restores natural color appearance reliably.
Experimental results show that our approach can generate 12-bit HDR images from 8-bit modulo images reliably, and runs much faster than the previous MRF-based algorithm [59] thanks to the GPU acceleration.
2 Related Work
Multi-image HDR reconstruction. One of the most representative multi-image HDR reconstruction methods, proposed by Debevec and Malik [5], merges several low dynamic range (LDR) photographs under different exposures. However, it suffers from ghosting artifacts in HDR results when there is misalignment caused by camera movement or scene change during the exposure time. This problem provokes a series of studies on ghosting removal in HDR images [25, 39, 43]. Instead of using bracketed exposures, Hasinoff et al. [17] fused a burst of frames of constant exposure, which reduces the exposure time substantially and makes alignment more robust. Recently, several deep convolutional neural networks (CNNs) based approaches [24, 56, 58] have been developed to rebuild an HDR image from multiple LDR images. In contrast, we focus on single-image HDR reconstruction.
Single-image HDR reconstruction. Single-image HDR reconstruction, which aims to reconstruct the HDR image from a single LDR image, is also known as inverse tone mapping [1]. It is free of ghosting artifacts but more challenging than its multi-image counterpart due to the lack of irradiance information in badly-exposed areas. This ill-posed problem can be solved by several approaches [32, 41] based on numerical optimization. Recently, two categories of new methods emerged: learning-based HDR restoration, which hallucinates plausible HDR content from a single LDR image; and unconventional cameras, which capture additional information about the scene in a single photo. Learning-based methods discover HDR image priors from a large amount of training data. HDRCNN [6] adopted an encoder-decoder architecture to restore saturated areas in LDR images. ExpandNet [31] concatenated and fused different levels of features extracted by a CNN to get HDR images directly. Endo et al. [7] used a CNN to predict the LDR images under multiple exposures and merged them by the classical method [5]. Metzler et al. [34] jointly optimized a diffractive optical element-based encoder and a CNN-based decoder to recover saturated scene details. Liu et al. [30] trained a CNN to reverse the camera pipeline to reconstruct the HDR image. Methods using unconventional cameras attempt to gather additional cues about dynamic range from the scene to address the ill-posed nature of this problem. Nayar et al. [36] placed an optical mask adjacent to a conventional image detector array to make spatially varying pixel exposures. Hirakawa and Simon [22] placed a combination of a photographic filter over the lens and a color filter array on a conventional camera sensor. Neuromorphic cameras have also been shown to be useful in guiding the process of HDR imaging [16, 53, 55]. Furthermore, some concept cameras have been proposed. Tumblin et al. [49] proposed a log-gradient camera that does well in capturing detailed high contrast scenes. Zhao et al. [59] proposed a modulo camera-based framework to push out the boundary of the dynamic range.
Phase unwrapping. Phase unwrapping is a classic signal processing problem that refers to recovering the original phase value from the principal value (wrapped phase). It is widely used in domains like optical metrology [8], synthetic aperture radar (SAR) interferometry [15], and medical imaging [4]. Phase unwrapping can be solved by Poisson’s equation [46], MRF-based iterative method [2], path-following method [20], etc. Recently deep CNNs have also been used to handle this problem [42, 47, 52]. However, these methods are designed for handling phase images, which have completely different properties from natural images.
Natural image unwrapping. Natural image unwrapping aims to recover the original scene radiance from its modulo counterpart, which is previously defined as what the modulo camera-based framework [59] tries to achieve. Although it is analogous to phase unwrapping, methods designed for phase unwrapping problem cannot be directly applied because phase images and natural images are two types of data with a huge domain gap. Recently, several solutions [28, 44, 45] have been proposed to deal with the natural image unwrapping problem. However, these methods require multiple modulo images as input and refuse to work when only a single modulo image is available. By exploring natural image statistics, the MRF-based algorithm proposed in [59] successfully demonstrates the
feasibility of unwrapping a single modulo image to expand the dynamic range. However, failure cases are also commonly observed, as the example shown in Figure 1 (top right).
3 Method
In this section, we first introduce the iterative formulation of the problem, and show the overall pipeline of UnModNet in Section 3.1 and Figure 2. Then, we detail our two-stage UnModNet model designs in Section 3.2 and Section 3.3. Implementation details are presented in Section 3.4.
3.1 Problem formulation and overall pipeline
We aim to restore the ground truth HDR image I by unwrapping a single modulo image Im captured by a modulo camera. According to Equation (1), this is equivalent to estimating the number of rollovers K. Putting it into a probabilistic framework, our goal is to estimate argmax_K P(K|Im). Theoretically, the label space of K is the whole non-negative integer space {0, 1, 2, . . .}; given a modulo image, it is non-trivial even to narrow down this label space, which poses a major challenge to directly estimating the likelihood.
Therefore, we make the model more tractable by factorizing over the number of rollovers K as
P(K \mid I_m) = \prod_{k=1}^{\infty} P\big(M^{(k+1)} \mid M^{(1)}, \ldots, M^{(k)}, I_m\big) \, P\big(M^{(1)} \mid I_m\big), \qquad (2)
where M^{(k)} = {M^{(k)}(x, y, c)} represents the binary rollover mask in the k-th factor term, which satisfies
M^{(k)}(x, y, c) = \begin{cases} 1 & \text{if } k \le K(x, y, c) \\ 0 & \text{otherwise} \end{cases} \quad \text{and} \quad \sum_{k=1}^{\infty} M^{(k)} = K, \qquad (3)
as shown in Figure 3 (left). With an arbitrary number of binary rollover masks, we can always render an updated modulo image by
I_m^{(k)} = I_m + 2^N \cdot \big(M^{(1)} + \cdots + M^{(k)}\big), \qquad (4)
so we further transform Equation (2) into
P(K \mid I_m) = \prod_{k=0}^{\infty} P\big(M^{(k+1)} \mid I_m^{(k)}\big), \qquad (5)
where I_m^{(0)} = I_m. To this end, estimating P(M^{(k+1)} \mid I_m^{(k)}) in the k-th factor is equivalent to estimating the corresponding binary rollover mask given a modulo image, and the original problem becomes an iterative per-pixel binary labeling problem that terminates when M^{(k+1)} = 0.
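As an illustrative single-pixel sketch of Equations (1)-(5), with the numbers chosen purely for illustration:

```python
# Single-pixel illustration of Equations (1)-(5) with an 8-bit modulo sensor (N = 8).
N = 8
I = 700                                             # true irradiance of one pixel
I_m = I % (2 ** N)                                  # recorded modulo value: 188
K = I // (2 ** N)                                   # number of rollovers: 2

# Binary rollover masks M^(k) from Equation (3): 1 while k <= K, then 0 forever.
masks = [1 if k <= K else 0 for k in range(1, 5)]   # [1, 1, 0, 0]

# Updated modulo values from Equation (4); unwrapping stops once the next mask is 0.
I_m_1 = I_m + 2 ** N * masks[0]                     # 444
I_m_2 = I_m_1 + 2 ** N * masks[1]                   # 700 == I, so the iteration terminates
```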
As shown in Figure 2, UnModNet takes a single modulo image Im as input, iteratively updates it by predicting the binary rollover mask M until the algorithm terminates, and outputs the HDR result I. Each unwrapping iteration can be written as:
I_m^{(k+1)} = I_m^{(k)} + 2^N \cdot M^{(k+1)} = I_m^{(k)} + g\big(I_m^{(k)}\big), \qquad (6)
where g represents the proposed UnModNet. An example is shown in Figure 3 (right).
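A minimal sketch of this loop in PyTorch-style Python is given below; here `unmodnet` is a placeholder for the trained two-stage model, and the iteration cap is an added safeguard rather than part of the method:

```python
import torch

def unwrap(I_m, unmodnet, N=8, max_iters=16):
    """Iteratively apply Equation (6): I_m^(k+1) = I_m^(k) + 2^N * M^(k+1)."""
    I_k = I_m.clone().float()
    for _ in range(max_iters):
        M = unmodnet(I_k)            # predicted binary rollover mask M^(k+1), same shape as I_k
        if M.sum() == 0:             # termination: no pixel rolls over any more
            break
        I_k = I_k + (2 ** N) * M     # add one "period" (256 for N = 8) to the masked pixels
    return I_k                       # reconstructed HDR estimate
```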
3.2 Modulo edge separator
Although the pixel intensity distribution of modulo regions is irregular, modulo images always contain distinctive dense edges. In particular, we recognize that a modulo camera brings abrupt intensity changes into continuous irradiance regions, resulting in modulo edges with large magnitude. Edges are effective cues for various image restoration tasks, such as reflection separation [29], moiré pattern removal [18], image inpainting [37], etc., because the sparse nature of edges could relieve the ill-posedness of these problems. Similarly, we expect that a modulo edge separator could assist our goal of rollover mask prediction.
We first design a network module to predict channel-wise modulo edges Em from a single modulo image Im, as shown in the first stage of Figure 2. Modulo edges Em, which encode boundary information about modulo regions, can be defined as Em = bin(El − En), where bin stands for binarization, El denotes the channel-wise edge map (edges of the modulo image Im), and En represents the intensity edges (edges of the ground truth HDR image I). Since the modulo edges appear when a “reset” of intensity from maximum to zero is triggered, their magnitudes should be larger than those of most intensity edges. As a simple verification, by measuring the average edge magnitude of 3000 synthetic modulo images, we find that the magnitude of El − En is around 4 times larger than that of En.2 This is helpful for the separation of Em from En. To better exploit this property, we propose to learn the residual between El and Em instead of predicting Em directly. Channel-wise Laplace kernels are used to filter the input modulo image Im to obtain the edge map El. Such a network can be described as:
Em = El + ge(cat(El, Im)), (7)
where ge denotes the backbone network and cat stands for feature concatenation. In practice, we construct ge using an autoencoder [21] architecture with residual bottleneck blocks [19] to boost network depth, non-local operations [54] to enlarge receptive fields, and skip-connections to magnify the response of modulo edges.
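A rough sketch of this stage is given below; `g_e` is a placeholder for the autoencoder backbone, and the exact 3×3 Laplace kernel values are an assumption for illustration, since the text only states that channel-wise Laplace kernels are used:

```python
import torch
import torch.nn.functional as F

LAPLACE = torch.tensor([[0., 1., 0.],
                        [1., -4., 1.],
                        [0., 1., 0.]]).view(1, 1, 3, 3)

def modulo_edges(I_m, g_e):
    """Equation (7): E_m = E_l + g_e(cat(E_l, I_m)), with E_l from channel-wise Laplace filtering."""
    C = I_m.shape[1]
    kernel = LAPLACE.repeat(C, 1, 1, 1).to(I_m.device, I_m.dtype)
    E_l = F.conv2d(I_m, kernel, padding=1, groups=C)       # channel-wise edge map E_l
    return E_l + g_e(torch.cat([E_l, I_m], dim=1))         # residual prediction of E_m
```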
Obtaining modulo edges Em makes unwrapping much easier because modulo edges can be used as a prior that contains abundant boundary information about modulo regions. The modulo edges are jointly predicted for all channels, resulting in a more consistent estimation.
2Please refer to the supplementary material for more details about modulo edges and experimental validation.
3.3 Rollover mask predictor
We have observed that modulo pixels are more likely to cluster in local regions, which is consistent with the fact that high dynamic range pixels are usually concentrated in small areas of an image (see Figure 1). This makes modulo regions distinctive from intensity regions since visually they show unnatural color appearances. Moreover, two modulo images that differ by only one binary rollover mask (say I_m^{(k)} and I_m^{(k+1)}) share similar structure patterns, i.e., I_m^{(k+1)} can be viewed as an updated modulo image whose maximum intensity is “one period” (in our case 256) larger than that of I_m^{(k)}.
We therefore design another network module to predict the binary rollover mask M, given a modulo image Im and its channel-wise modulo edges Em as input, as shown in the second stage of Figure 2. Directly feeding the concatenation of Im and Em to the network makes the model hard to converge, because of the large domain gaps between the two types of data. To overcome this difficulty, we use convolutions and non-local blocks to extract the local and global features of Im and Em, and fuse them with a concatenation and a squeeze-and-excitation (SE) block [23]. SE block learns normalized weights in each channel and recalibrates feature maps by re-weighting them. The predicted binary rollover mask produced from this module can be presented mathematically as follows:
M = gm(SE(cat(Fi(Im),Fe(Em)))), (8)
where gm denotes the backbone network, SE represents the SE block, Fi and Fe indicate the feature extraction processes for Im and Em respectively. As for gm, we choose Attention U-Net architecture [40], and use residual bottleneck blocks and strided convolutions to substitute double convolution blocks and max-pooling layers in each scale respectively.
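A sketch of the fusion in Equation (8) is given below; `F_i`, `F_e`, and `g_m` are placeholders for the feature extractors and the Attention U-Net backbone, and the SE reduction ratio is an assumed value:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation [23]: learn per-channel weights and recalibrate the fused features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))                    # squeeze: global average pooling
        return x * w.unsqueeze(-1).unsqueeze(-1)           # excite: re-weight each channel

def predict_mask(I_m, E_m, F_i, F_e, se, g_m):
    """Equation (8): M = g_m(SE(cat(F_i(I_m), F_e(E_m))))."""
    fused = se(torch.cat([F_i(I_m), F_e(E_m)], dim=1))
    return g_m(fused)                                      # per-pixel rollover mask prediction
```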
Once the rollover masks become available, we can treat the unwrapping problem as an iterative per-pixel binary labeling problem, as discussed in Section 3.1. With the semantic information provided by modulo images Im and the boundary information provided by modulo edges Em, the estimation of the rollover mask tends to be more robust and the unwrapped image suffers less from modulus-intensity ambiguity.
3.4 Implementation details
Loss function. The total loss function of UnModNet is L = α · Le + Lm, where Le defines the loss of the modulo edge separator, Lm defines the loss of the rollover mask predictor, and α is set to 1.0 empirically. The binary cross entropy loss is used for both Le and Lm.

Dataset preparation. Learning-based methods depend heavily on training data, but there is no existing dataset for our task. Therefore, we collect HDR images from a variety of image and video sources [10, 11, 12, 13, 14, 27, 38, 57] and propose an effective dataset creation pipeline. The generation of the ground truth HDR image I can be expressed as I = ⌊(2^B − 1) · clip(E · ∆t, [0, 1])⌋, where B denotes the quantization bit depth, E indicates the relative irradiance values of each raw HDR image (E ∈ [0, 1]), and ∆t is an appropriate exposure time to control the over-exposure rate.3 The corresponding modulo image Im and LDR image Il can be calculated by Equation (1) (N is set to 8 for 8-bit modulo images) and Il = clip(I, [0, 255]) respectively. We choose B = 12 (i.e., 12-bit HDR images with a maximum intensity of 4095) and set the over-exposure rate between 5% and 30%. The images are resized and randomly cropped to 256 × 256 patches during the training process, and cropped to 512 × 512 patches for testing.

Training strategy. We implement UnModNet4 using PyTorch and apply a two-stage training strategy. First, to ensure a stable initialization of the training process, we train the modulo edge separator and rollover mask predictor independently for 400 and 200 epochs respectively. Then, we fix the modulo edge separator and train the entire network end-to-end for another 200 epochs. The ADAM optimizer [26] is used with an initial learning rate of 1 × 10^{-4} for the first 200 epochs, and a linear decay to 5 × 10^{-5} in the next 200 epochs. Dropout noise [48] and instance normalization [51] are added during training.
3More details about the dataset creation pipeline can be found in the supplementary material. 4Detailed network architecture can be found in the supplementary material.
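A condensed sketch of the dataset preparation step described in Section 3.4 is given below (the search for an exposure time ∆t that yields the target 5%-30% over-exposure rate is omitted):

```python
import numpy as np

def make_training_triplet(E, delta_t, B=12, N=8):
    """Synthesize (HDR, modulo, LDR) images from relative irradiance E in [0, 1]."""
    I = np.floor((2 ** B - 1) * np.clip(E * delta_t, 0.0, 1.0))   # ground-truth 12-bit HDR image
    I_m = np.mod(I, 2 ** N)                                       # 8-bit modulo image, Equation (1)
    I_l = np.clip(I, 0, 255)                                      # conventional 8-bit LDR image
    return I, I_m, I_l
```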
4 Experiments
4.1 Evaluation on synthetic data
We compare the results of UnModNet to the MRF-based algorithm [59], which takes a single modulo image as input, and three state-of-the-art learning-based HDR reconstruction methods which take a single LDR image as input: DrTMO [7], ExpandNet [31], and HDRCNN [6]. Since our method keeps the same set of parameters for all test cases, for a fair comparison, we fix the parameters of the MRF algorithm for evaluation as well. Note that comparing with learning-based single-image HDR reconstruction methods (DrTMO [7], ExpandNet [31], and HDRCNN [6]) might be a bit unfair because of the difference in types of input data (LDR image vs. modulo image); we conduct such a comparison to show the effectiveness of using modulo images w.r.t. state-of-the-art single-image approaches. Visual quality comparisons of tone-mapped HDR images are shown in Figure 4.5 Compared to the MRF-based algorithm using a modulo image, our model is robust under strong local contrast or dense modulo fringes, while avoiding unwrapping incorrect regions and color misalignment. For example, the lighthouse (red box) in the middle row of Figure 4, which has drastic dynamic range changes, is correctly unwrapped by UnModNet, while the MRF-based algorithm fails to discriminate modulo pixels from intensity pixels and suffers from severe color misalignment artifacts. Compared to learning-based methods using an LDR image, our method performs better in recovering high-contrast areas and resembles the ground truth more closely. To evaluate the results quantitatively, we adopt four frequently-used image quality metrics: SSIM, MS-SSIM (multi-scale SSIM), PSNR, and Q-Score (produced by HDR-VDP-2.2 [35]). Results are shown in Table 1 (also for the examples in Figure 4). Our model consistently outperforms the MRF-based and learning-based HDR reconstruction methods on all metrics. Furthermore, we evaluate the runtime of UnModNet on an NVIDIA 2080Ti GPU and the MRF-based algorithm on an Intel Core i7-8700K CPU (using a single core). Note that the MRF-based algorithm cannot benefit from GPU acceleration, so we can only run it on a CPU. At each iteration, UnModNet takes around 200 ms to process a 512 × 512 modulo image, which is around 120 times faster than the MRF-based algorithm.
5More synthetic results can be found in the supplementary material.
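For reference, a minimal implementation of the PSNR metric against the 12-bit peak value is shown below (whether metrics are computed on linear or tone-mapped images is not specified here, so this is only the standard definition):

```python
import numpy as np

def psnr(pred, target, peak=4095.0):
    """Standard peak signal-to-noise ratio; peak = 2^12 - 1 for 12-bit HDR images."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```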
4.2 Evaluation on real data
Modulo images from real RGB images. We use a Fujifilm X-T20 mirrorless digital camera6 to create a real dataset from RGB images. First, we take a series of images (around 7 ∼ 9) with bracketed exposures, and use the classical multi-image HDR reconstruction method [5] to merge them into an HDR image. The exposure value is increased by 2 stops between consecutive images. Then, we use the dataset generation pipeline proposed in Section 3.4 to get the ground truth I, the modulo image Im, and the corresponding LDR image Il. As shown in Figure 5, our model is able to reconstruct visually impressive HDR images with fewer artifacts and higher quantitative scores than other methods.7
Modulo images from a real sensor. There are several technologies which could mod the scene radiance into a modulo image before converting it to digital signals, such as the digital-pixel focal plane array (DFPA) [3, 9, 50] (used in [59]), programmable readout circuits [33], and intelligent vision sensors (e.g., Sony IMX5008), etc. We configure a retina-inspired fovea-like sampling model (FSM) based spike camera as a modulo camera (SpiCam-Mod) to capture real modulo images.9
6https://fujifilm-x.com/global/products/cameras/x-t20/ 7More real RGB results can be found in the supplementary material. 8https://www.sony.net/SonyInfo/News/Press/202005/20-037E/
4.3 Ablation study
To verify the validity of each model design choice, we conduct a series of ablation studies and show comparisons in Table 2. We first show the effectiveness of our iterative unwrapping pipeline by comparing with a model that directly predicts the number of rollovers K. Then, we verify the necessity of the modulo edge separator by removing it and show the effectiveness of learning the residual between the edge map El and modulo edges Em in the modulo edge separator by removing the Laplace operation. Finally, we validate the two-stage training strategy by training the entire network in an end-to-end manner.
5 Conclusion
We presented a learning-based framework for modulo image unwrapping to realize high dynamic range imaging. To deal with the ill-posedness of this problem, we reformulated it into a series of binary labeling problems and proposed UnModNet to iteratively estimate the binary rollover masks of an input modulo image. Our model design solved some fundamental issues in the previous MRF-based algorithm [59], including modulus-intensity ambiguity, strong local contrast, and color misalignment.
The highest bit depth that can be achieved by the existing model is constrained by the configuration of training data.10 As future work, we plan to extend UnModNet to support dynamic bit depth.
9Please refer to the supplementary material for how we use SpiCam-Mod to capture modulo images. 10We demonstrate 16-bit HDR reconstruction results in the supplementary material.
Broader Impact
Our research is about a new camera framework that aims to capture high-quality HDR images. It could be integrated into the image processing pipeline of camera sensors to improve the ability of recording scenes with a very high dynamic range. The users of mobile cameras may benefit from this research because they could conveniently take photos without being annoyed by over- or under-exposure artifacts. Besides, it might be helpful to build a scientific imaging system that needs to record high dynamic range scenes, such as astronomy and microscope cameras.
Although the modulo camera-based framework could theoretically achieve unbounded dynamic range, its generalization capability is limited by the diversity of the training data. The unwrapping algorithm may fail when the captured scene has a very high dynamic range which exceeds the maximum dynamic range of the images in the training data by a large margin. If that happens in a large region of pixels, we would recommend using LDR images instead since they have more natural color appearances.
Acknowledgments and Disclosure of Funding
This work was supported in part by National Natural Science Foundation of China under Grant No. 61872012, No. 61876007, National Key R&D Program of China (2019YFF0302902), Beijing Academy of Artificial Intelligence (BAAI), Beijing major science and technology projects (Z191100010618003), and Australian Research Council Grant DE-180101438. | 1. What is the main contribution of the paper in the field of computational photography?
2. What are the strengths of the proposed approach, particularly in its problem formulation and comparison with other methods?
3. What are the weaknesses of the paper regarding its analysis of related work and reporting of runtime comparisons?
4. Are there any minor issues or suggestions for improvement in the paper's presentation or content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper proposes a two-stage network to recover a high dynamic range (HDR) image from its modulo measurements. The two-stage supervised training, with intermediate supervision on modulo edges, is intended to make the task easier for the network to learn. This paper demonstrates better performance than the compared methods.
Strengths
+ The problem formulation, which decomposes the rollover mask K into several binary rollover masks M, appears to be correct.
+ This paper provides a good set of comparisons with other methods on different images using different metrics.
+ This paper falls under the area of computational photography, which is one of the NeurIPS subject areas. Modulo sensors have been proposed in recent years, but these sensors are not as widely used as conventional high dynamic range sensors.
Weaknesses
- The pros and cons of different methods are not comprehensively analyzed. In Section 2, Related Work - Single-image HDR reconstruction, this paper just listed some state-of-the-art methods without specifically stating their advantages and disadvantages.
- This paper claims that the proposed method runs 120 times faster than the previous MRF-based algorithm. However, it does not report the run-time of the compared methods.
- Minor issues:
-- Eqn. 6 and Fig. 2 do not match.
-- The dashed-line box of “UnModNet” should not include the \plusdot on the right hand side of the “Rollover mask predictor”. |
NIPS | Title
UnModNet: Learning to Unwrap a Modulo Image for High Dynamic Range Imaging
Abstract
A conventional camera often suffers from over- or under-exposure when recording a real-world scene with a very high dynamic range (HDR). In contrast, a modulo camera with a Markov random field (MRF) based unwrapping algorithm can theoretically accomplish unbounded dynamic range but shows degenerate performance when there are modulus-intensity ambiguity, strong local contrast, and color misalignment. In this paper, we reformulate the modulo image unwrapping problem into a series of binary labeling problems and propose a modulo edge-aware model, named UnModNet, to iteratively estimate the binary rollover masks of the modulo image for unwrapping. Experimental results show that our approach can generate 12-bit HDR images from 8-bit modulo images reliably, and runs much faster than the previous MRF-based algorithm thanks to GPU acceleration.
1 Introduction
Real-world scenes have a very high dynamic range (HDR) so that object contours are mostly lost in the over-exposed and under-exposed regions when captured by a conventional camera with a limited dynamic range and saved as an 8-bit image. To increase the dynamic range of captured images, many HDR reconstruction approaches have been proposed to increase the camera bit depth via hardware modifications [22, 36], as well as using computational methods to merge multi-bracketed captures [5] or a series of bursts [17]. Yet the dynamic range they can achieve is limited and the details of the HDR content often cannot be faithfully recovered. A modulo camera [59] can theoretically achieve unbounded dynamic range by recording the least significant bits of the irradiance signal, i.e., the camera hardware “resets” the scene radiance arriving at the sensor before reading it out whenever it reaches saturation (e.g., for an 8-bit image, 256 will be reset to 0 and re-start the counting again as long as the shutter keeps open). By unwrapping the captured modulo image with a customized Markov random field (MRF) based algorithm, the HDR image could be practically restored, as shown in Figure 1 (top left). We denote the irradiance of an HDR image as I = {I(x, y, c)}, and its corresponding modulo image as Im = {Im(x, y, c)}, where (x, y) is the pixel coordinate and c denotes the color channel index. Im is equivalent to the least significant N bits of I. As illustrated in the bottom row of Figure 1, their relationship can be expressed as:
I_m = \mathrm{mod}(I, 2^N) \quad \text{or} \quad I = I_m + 2^N \cdot K, \qquad (1)
where K = {K(x, y, c)} is the number of rollovers per pixel.
2. What are the strengths of the proposed approach, particularly in terms of novelty and problem reformulation?
3. What are the weaknesses of the paper regarding the provided dataset and potential overfitting?
4. How does the reviewer assess the clarity and effectiveness of the presented ideas and experiments?
5. Are there any concerns or suggestions regarding the general applicability and future research directions? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper proposes a learning-based approach for modulo image unwrapping. A modulo image only records the modulus rather than the absolute values of image luminance, and therefore it can store 12-bit illuminance in 8-bit storage. Most importantly, physically, it allows capturing exposure over a much larger dynamic range without sensor under-flow or overflow. Modulus images need to be unwrapped/reconstructed for further utilization and visualization. Existing methods are rule-based, and this work proposes a deep neural network based approach. Three customized model designs (modulo edge separator, rollover mask predictor, and consistent color prediction) were proposed to solve the three fundamental issues (modulus-intensity ambiguity, strong local contrast, and color misalignment) in the previous MRF-based unwrapping algorithm [55]. A real modulo camera based on a spike camera was employed to prove the proposed concept. Experimental results show that the overall performance is superior to the previous MRF-based unwrapping algorithm and several state-of-the-art learning-based HDR reconstruction methods which take a single LDR image as input.
Strengths
The idea of using a deep network for phase unwrapping is novel and interesting. The idea of using a modulo camera for HDR was proposed 5 years ago, and it has unique advantages in realizing “unbounded” HDR using a single image. However, the original solution of solving an MRF problem with hand-crafted priors and no data term was quite fragile. This paper is the first work that learns the unwrapping process of a modulus image. By reformulating the problem as a series of binary labeling problems and iteratively estimating the binary rollover mask of the input modulo image, which is a novel point of view, the problem can be solved in a much more stable manner. All modules of UnModNet (modulo edge separator, rollover mask predictor, and consistent color prediction) are carefully and specially designed to deal with the problem of unwrapping a modulus image, which means the authors did spend effort observing and analyzing the properties of modulus images. The idea is clearly presented and the experiments are sufficient and reasonable. The performance improvement is significant according to its own reporting. Sufficient experimental results validate that the proposed model designs successfully solved the three fundamental issues (modulus-intensity ambiguity, strong local contrast, and color misalignment) in the previous MRF-based algorithm. In addition to extensive synthetic and real simulation (using multi-bracketed methods to capture real data) experiments, this paper adopted a retina-inspired fovea-like sampling model (FSM) based spike camera to reconfigure it as a modulo camera, which makes the verification convincing and suggests an alternative solution for a real modulo sensor.
Weaknesses
The paper mentions there is no existing dataset for the task and has therefore created its own dataset. However, I do not find much information about this dataset. The performance improvement is quite significant; however, it would be better to show some evidence that it is not overfitting. Is there any evidence of general applicability? Since this paper targets a quite unique problem, intuitive illustrations are important for people who are unfamiliar with this topic to follow easily. For example, the 3D color bar in Figure 1 is not easy to understand; it would be better to have some extra explanation or just use a 2D one instead. |
NIPS | Title
Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems
Abstract
Nonconvex minimax problems appear frequently in emerging machine learning applications, such as generative adversarial networks and adversarial learning. Simple algorithms such as the gradient descent ascent (GDA) are the common practice for solving these nonconvex games and receive lots of empirical success. Yet, it is known that these vanilla GDA algorithms with constant stepsize can potentially diverge even in the convex-concave setting. In this work, we show that for a subclass of nonconvex-nonconcave objectives satisfying a so-called two-sided Polyak-Łojasiewicz inequality, the alternating gradient descent ascent (AGDA) algorithm converges globally at a linear rate and the stochastic AGDA achieves a sublinear rate. We further develop a variance reduced algorithm that attains a provably faster rate than AGDA when the problem has the finite-sum structure.
1 Introduction
We consider minimax optimization problems of the forms
\min_{x \in \mathbb{R}^{d_1}} \ \max_{y \in \mathbb{R}^{d_2}} \ f(x, y) \qquad (1)
where f(x, y) is a possibly nonconvex-nonconcave function. Recent emerging applications in machine learning further stimulate a surge of interest in minimax problems. For example, generative adversarial networks (GANs) [23] can be viewed as a two-player game between a generator that produces synthetic data and a discriminator that differentiates between true and synthetic data. Other applications include reinforcement learning [9, 10, 11], robust optimization [42, 43], adversarial machine learning [54, 37], and so on. In many of these applications, f(x, y) may be stochastic, namely, f(x, y) = \mathbb{E}[F(x, y; \xi)], which corresponds to the expected loss of some random data ξ; or f(x, y) may have the finite-sum structure, namely, f(x, y) = \frac{1}{n} \sum_{i=1}^{n} f_i(x, y), which corresponds to the empirical loss over n data points.
The most frequently used methods for solving minimax problems are the gradient descent ascent (GDA) algorithms (or their stochastic variants), with either simultaneous or alternating updates of the primal-dual variables, referred to as SGDA and AGDA, respectively. While these algorithms have received much empirical success especially in adversarial training, it is known that GDA algorithms with constant stepsizes could fail to converge even for bilinear games [22, 40]; when they do converge, the stable limit point may not be a local Nash equilibrium [13, 38]. On the other hand, GDA algorithms can converge linearly to the saddle point for strongly-convex-strongly-concave functions [17]. Moreover, for many simple nonconvex-nonconcave objective functions, such as f(x, y) = x^2 + 3\sin^2 x \sin^2 y - 4y^2 - 10\sin^2 y, we observe that GDA algorithms with constant
stepsizes converge to the global Nash equilibrium (see Figure 1). These facts naturally raise a question: Is there a general condition under which GDA algorithms converge to the global optima?
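As a quick illustration of this observation, the following sketch runs AGDA with an arbitrarily chosen constant stepsize on the objective above; both iterates approach the saddle point at the origin:

```python
import numpy as np

# f(x, y) = x^2 + 3 sin^2(x) sin^2(y) - 4 y^2 - 10 sin^2(y)
def grad_x(x, y):
    return 2 * x + 6 * np.sin(x) * np.cos(x) * np.sin(y) ** 2

def grad_y(x, y):
    return 6 * np.sin(x) ** 2 * np.sin(y) * np.cos(y) - 8 * y - 20 * np.sin(y) * np.cos(y)

x, y, eta = 1.0, 1.0, 0.02
for _ in range(5000):
    x = x - eta * grad_x(x, y)       # descent step for the min player
    y = y + eta * grad_y(x, y)       # ascent step for the max player, using the updated x
print(x, y)                          # both values end up very close to the origin (0, 0)
```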
Furthermore, the use of variance reduction techniques has played a prominent role in improving the convergence over stochastic or batch algorithms for both convex and nonconvex minimization problems [27, 52, 53, 58]. However, when it comes to the minimax problems, there are limited results, except under convex-concave setting [49, 15]. This leads to another open question: Can we improve GDA algorithms for nonconvex-nonconcave minimax problems?
1.1 Our contributions
In this paper, we address these two questions and specifically focus on the alternating gradient descent ascent, namely AGDA. This is due to several considerations. First of all, using alternating updates of GDA is more stable than simultaneous updates [22, 2] and often converges faster in practice. Note that for a convex-concave matrix game, SGDA may diverge while AGDA is proven to always have bounded iterates [22]. See Figure 2 for a simple illustration. Secondly, AGDA is widely used for training GANs and other minimax problems in practice; see e.g., [33, 41]. Yet there is a lack of discussion on the convergence of AGDA for general minimax problems in the literature, even for the favorable strongly-convex-strongly-concave setting. Alternating updating schemes are perceived more challenging to analyze than simultaneous updates; the latter treats two variables equally and has been extensively studied in vast literature of variational inequality. Our main contributions are summarized as follows.
Two-sided PL condition. First, we identify a general condition that relaxes the convex-concavity requirement of the objective function while still guaranteeing global convergence of AGDA and stochastic AGDA (Stoc-AGDA). We call this the two-sided PL condition, which requires that both players’ utility functions satisfy the Polyak-Łojasiewicz (PL) inequality [50]. The two-sided PL condition is very general and is satisfied by many important classes of functions: (a) all strongly-convex-strongly-concave functions; (b) all PL-strongly-concave functions (discussed in [24]); and (c) many nonconvex-nonconcave objectives. Such conditions also hold true for various applications, including robust least squares, generative adversarial imitation learning for linear quadratic regulator (LQR) dynamics [5], the zero-sum linear quadratic game [63], and potentially many others in adversarial learning [14], robust phase retrieval [55, 64], robust control [18], etc. We first investigate the landscape of objectives under the two-sided PL condition. In particular, we show that three notions of optimality (saddle point, minimax point, and stationary point) are equivalent.
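For reference, a differentiable function g with minimum value g* satisfies the PL inequality with constant µ > 0 if ‖∇g(x)‖² ≥ 2µ(g(x) − g*) for all x. Up to constants and notation, which may differ slightly from the formal definition given later, the two-sided condition then reads:

```latex
\|\nabla_x f(x,y)\|^2 \;\ge\; 2\mu_1 \big( f(x,y) - \min_{x'} f(x',y) \big),
\qquad
\|\nabla_y f(x,y)\|^2 \;\ge\; 2\mu_2 \big( \max_{y'} f(x,y') - f(x,y) \big),
\qquad \text{for all } (x,y),
```

for some constants µ1, µ2 > 0.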
Global convergence of AGDA. We show that under the two-sided PL condition, AGDA with proper constant stepsizes converges globally to a saddle point at a linear rate of O((1 − κ^{-3})^t), while Stoc-AGDA with proper diminishing stepsizes converges to a saddle point at a sublinear rate of O(κ^5/t), where κ is the underlying condition number. To the best of our knowledge, this is the first result on the global convergence of a class of nonconvex-nonconcave problems. In contrast, most previous work deals with nonconvex-concave problems and obtains convergence to stationary points. On the other hand, because all strongly-convex-strongly-concave and PL-strongly-concave functions naturally satisfy the two-sided PL condition, our analysis fills the theoretical gap with the first convergence results of AGDA under these settings.
Variance reduced algorithm. For minimax problems with the finite-sum structure, we introduce a variance-reduced AGDA algorithm (VR-AGDA) that leverages the idea of the stochastic variance reduced gradient (SVRG) [27, 52] with the alternating updates. We prove that VR-AGDA achieves the complexity of O((n + n^{2/3} κ^3) log(1/ε)), which improves over the O(n κ^3 log(1/ε)) complexity of AGDA and the O(κ^5/ε) complexity of Stoc-AGDA when applied to finite-sum minimax problems. Our numerical experiments further demonstrate that VR-AGDA performs significantly better than AGDA and Stoc-AGDA, especially for problems with large condition numbers. To our best knowledge, this is the first work to provide a variance-reduced algorithm and theoretical guarantees in the nonconvex-nonconcave regime of minimax optimization. In contrast, most previous variance-reduced algorithms require full or partial strong convexity and only apply to simultaneous updates.
Nonconvex-PL games. Lastly, as a side contribution, we show that for a broader class of nonconvex-nonconcave problems under only a one-sided PL condition, AGDA converges to an ε-stationary point within O(ε^{-2}) iterations and is thus optimal among all first-order algorithms. Our result shaves off a logarithmic factor of the best-known rate achieved by the multi-step GDA algorithm [47]. This directly implies the same convergence rate on nonconvex-strongly-concave objectives, and to our best knowledge, we are the first to show the convergence of AGDA on this class of functions. Due to the page limitation, we defer this result to Appendix ??.
1.2 Related work
Nonconvex minimax problems. There has been a recent surge in research on solving minimax optimization beyond the convex-concave regime [54, 8, 51, 56, 30, 47, 1, 32, 3, 48], but these works differ from ours in various respects. Most of them focus on the nonconvex-concave regime and aim for convergence to stationary points of minimax problems [8, 54, 31, 56]. The algorithms in these works require solving the inner maximization or some sub-problems to high accuracy, which is different from AGDA. Lin et al. [30] proposed an inexact proximal point method to find an ε-stationary point for a class of weakly-convex-weakly-concave minimax problems. Their convergence result relies on assuming the existence of a solution to the corresponding Minty variational inequality, which is hard to verify. Abernethy et al. [1] showed the linear convergence of a second-order iterative algorithm, called Hamiltonian gradient descent, for a subclass of "sufficiently bilinear" functions. Very recently, Xu et al. [60] and Boţ and Böhm [4] analyze AGDA in the nonconvex-(strongly-)concave setting. There is also a line of work on understanding the dynamics in minimax games [39, 20, 19, 21, 12, 25].
Variance-reduced minimax optimization. Palaniappan and Bach [49], Luo et al. [34], Chavdarova et al. [7] provided linear-convergent algorithms for strongly-convex-strongly-concave objectives, based on simultaneous updates. Du and Hu [15] extended the result to convex-strongly-concave objectives with full-rank coupling bilinear term. In contrast, we are dealing with a much broader class of objectives that are possibly nonconvex-nonconcave. We point out that Luo et al. [35] and Xu et al. [59] recently introduced variance-reduced algorithms for finding the stationary point of nonconvex-strongly-concave problems, which is again different from our setting.
2 Global optima and two-sided PL condition
Throughout this paper, we assume that the function f(x, y) in (1) is continuously differentiable and has Lipschitz gradient. Here ‖ · ‖ is used to denote the Euclidean norm. Assumption 1 (Lipschitz gradient). There exists a positive constant l > 0 such that
max{‖∇yf (x1, y1)−∇yf (x2, y2)‖ , ‖∇xf (x1, y1)−∇xf (x2, y2)‖} ≤ l[‖x1 − x2‖+‖y1 − y2‖],
holds for all x1, x2 ∈ Rd1 , y1, y2 ∈ Rd2 .
We now define three notions of optimality for minimax problems. The most direct notion of optimality is the global minimax point, at which x∗ is an optimal solution to the function g(x) := max_y f(x, y) and y∗ is an optimal solution to max_y f(x∗, y). In the two-player zero-sum game, the notion of saddle point is also widely used [57, 44]. For a saddle point (x∗, y∗), x∗ is an optimal solution to min_x f(x, y∗) and y∗ is an optimal solution to max_y f(x∗, y).
Definition 1 (Global optima).
1. (x∗, y∗) is a global minimax point, if for any (x, y): f(x∗, y) ≤ f(x∗, y∗) ≤ max_{y′} f(x, y′).
2. (x∗, y∗) is a saddle point, if for any (x, y): f(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗).
3. (x∗, y∗) is a stationary point, if ∇_x f(x∗, y∗) = ∇_y f(x∗, y∗) = 0.
For general nonconvex-nonconcave minimax problems, these three notions of optimality are not necessarily equivalent. A stationary point may not be a saddle point or a global minimax point; a global minimax point may not be a saddle point or a stationary point. Note that for minimax problems, a saddle point or a global minimax point may not always exist. However, since our goal in this paper is to find global optima, in the remainder of the paper, we assume that a saddle point always exists. Assumption 2 (Existence of saddle point). The objective function f has at least one saddle point. We also assume that for any fixed y, min_{x∈R^{d_1}} f(x, y) has a nonempty solution set and a finite optimal value, and for any fixed x, max_{y∈R^{d_2}} f(x, y) has a nonempty solution set and a finite optimal value.
For unconstrained minimization problems: minx∈Rn f(x), Polyak [50] proposed Polyak-Łojasiewicz (PL) condition, which is sufficient to show global linear convergence for gradient descent without assuming convexity. Specifically, a function f(·) satisfies PL condition if it has a nonempty solution set and a finite optimal value f∗, and there exists some µ > 0 such that ‖∇f(x)‖2 ≥ 2µ(f(x) − f∗),∀x. As discussed in Karimi et al. [28], PL condition is weaker, or not stronger, than other well-known conditions that guarantee linear convergence for gradient descent, such as error bounds (EB) [36], weak strong convexity (WSC) [45] and restricted secant inequality (RSI) [61].
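For completeness, the standard one-line argument for why strong convexity implies the PL inequality (this step is not spelled out in the paper) is as follows: if f is µ-strongly convex, then
f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (µ/2)‖y − x‖²,
and minimizing both sides over y (the right-hand side is minimized at y = x − (1/µ)∇f(x)) gives
f∗ ≥ f(x) − (1/(2µ))‖∇f(x)‖², i.e., ‖∇f(x)‖² ≥ 2µ(f(x) − f∗).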
We introduce a straightforward generalization of the PL condition to the minimax problem: function f(x, y) satisfies the PL condition with constant µ1 with respect to x, and -f satisfies PL condition with constant µ2 with respect to y. We formally state this in the following definition. Definition 2 (Two-sided PL condition). A continuously differentiable function f(x, y) satisfies the two-sided PL condition if there exist constants µ1, µ2 > 0 such that: ∀x, y,
‖∇_x f(x, y)‖^2 ≥ 2µ_1 [f(x, y) − min_x f(x, y)],   ‖∇_y f(x, y)‖^2 ≥ 2µ_2 [max_y f(x, y) − f(x, y)].
The two-sided PL condition does not imply convexity-concavity, and it is a much weaker condition than strong-convexity-strong-concavity. In Lemma 2.1, we show that three notions of optimality are equivalent under the two-sided PL condition. Note that they may not be unique. Lemma 2.1. If the objective function f(x, y) satisfies the two-sided PL condition, then the following holds true:
(saddle point)⇔ (global minimax)⇔ (stationary point).
Below we give some examples that satisfy this condition. Example 1. The nonconvex-nonconcave function in the introduction, f(x, y) = x2+3 sin2 x sin2 y− 4y2 − 10 sin2 y satisfies the two-sided PL condition with µ1 = 1/16, µ2 = 1/11 (see Appendix ??). Example 2. f(x, y) = F (Ax,By), where F (·, ·) is strongly-convex-strongly-concave and A and B are arbitrary matrices, satisfies the two-sided PL condition. Example 3. The generative adversarial imitation learning for LQR can be formulated as minK maxθm(K, θ), where m is strongly-concave in terms of θ and satisfies PL condition in terms of K (see [5] for more details), thus satisfying the two-sided PL condition. Example 4. In a zero-sum linear quadratic (LQ) game, the system dynamics are characterized by xt+1 = Axt + But + Cvt, where xt is the system state and ut, vt are the control inputs from two-players. After parameterizing the policies of two players by ut = −Kxt and vt = −Lxt, the
value function is C(K, L) = E_{x_0∼D} [ Σ_{t=0}^∞ ( x_t^T Q x_t + (K x_t)^T R_u (K x_t) − (L x_t)^T R_v (L x_t) ) ], where D is the distribution of the initial state x_0 (see [63] for more details). Player 1 (player 2) wants to minimize (maximize) C(K,L), and the game is formulated as min_K max_L C(K,L). Fixing L (or K), C(·, L) (or −C(K, ·)) becomes an objective of an LQR problem, and therefore satisfies the PL condition [18] when argmin_K C(K,L) and argmax_L C(K,L) are well-defined.
The two-sided PL condition includes rich classes of functions, including: (a) all strongly-convex-strongly-concave functions; (b) some convex-concave functions (e.g., Example 2); (c) some nonconvex-strongly-concave functions (e.g., Example 3); (d) some nonconvex-nonconcave functions (e.g., Examples 1 and 4). Under the two-sided PL condition, the function g(x) := max_y f(x, y) satisfies the PL condition with µ_1 (see Appendix ??). Moreover, g is also L-smooth with L := l + l^2/µ_2 [47]. Finally, we denote µ = min(µ_1, µ_2) and κ = l/µ, which represents the condition number of the problem.
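To make the condition concrete, a minimal numerical sanity check of Example 1 on a bounded grid could look as follows (a sketch of ours, not part of the paper's analysis; the grid range is an arbitrary choice, so this is an illustration rather than a proof). Both reported violations should be nonpositive if the constants µ_1 = 1/16 and µ_2 = 1/11 are valid on the grid.

import numpy as np

def f(x, y):
    return x**2 + 3*np.sin(x)**2*np.sin(y)**2 - 4*y**2 - 10*np.sin(y)**2

def fx(x, y):  # partial derivative of f in x
    return 2*x + 3*np.sin(2*x)*np.sin(y)**2

def fy(x, y):  # partial derivative of f in y
    return 3*np.sin(x)**2*np.sin(2*y) - 8*y - 10*np.sin(2*y)

grid = np.linspace(-3.0, 3.0, 601)
X, Y = np.meshgrid(grid, grid)                     # x varies along axis=1, y along axis=0
min_over_x = f(X, Y).min(axis=1, keepdims=True)    # approximate min_x f(x, y)
max_over_y = f(X, Y).max(axis=0, keepdims=True)    # approximate max_y f(x, y)

# Both violations should be <= 0 on the grid if the two-sided PL constants hold.
viol_x = (2*(1/16)*(f(X, Y) - min_over_x) - fx(X, Y)**2).max()
viol_y = (2*(1/11)*(max_over_y - f(X, Y)) - fy(X, Y)**2).max()
print("max violation of the x-side PL inequality:", viol_x)
print("max violation of the y-side PL inequality:", viol_y)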
3 Global convergence of AGDA and Stoc-AGDA
In this section, we establish the convergence rate of the stochastic alternating gradient descent ascent (Stoc-AGDA) algorithm, which we present in Algorithm 1, under the two-sided PL condition. Stoc-AGDA updates the variables x and y sequentially using stochastic gradient descent/ascent steps. Here we make standard assumptions about the stochastic gradients G_x(x, y, ξ) and G_y(x, y, ξ). Assumption 3 (Bounded variance). G_x(x, y, ξ) and G_y(x, y, ξ) are unbiased stochastic estimators of ∇_x f(x, y) and ∇_y f(x, y) and have variances bounded by σ^2 > 0.
Algorithm 1 Stoc-AGDA
1: Input: (x_0, y_0), stepsizes {τ_1^t}_t > 0, {τ_2^t}_t > 0
2: for all t = 0, 1, 2, ... do
3:   Draw two i.i.d. samples ξ_1^t, ξ_2^t ∼ P(ξ)
4:   x_{t+1} ← x_t − τ_1^t G_x(x_t, y_t, ξ_1^t)
5:   y_{t+1} ← y_t + τ_2^t G_y(x_{t+1}, y_t, ξ_2^t)
6: end for
Note that Stoc-AGDA with constant stepsizes (i.e., τ_1^t = τ_1 and τ_2^t = τ_2) and noiseless stochastic gradients (i.e., σ^2 = 0) reduces to AGDA:
xt+1 = xt − τ1∇xf(xt, yt), yt+1 = yt + τ2∇yf(xt+1, yt). (2)
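As a minimal illustration of update (2) (our own sketch, not code from the paper), the loop below runs AGDA on the Example 1 objective; the constant stepsizes are small illustrative values with τ_1 < τ_2 rather than the theoretical choices of Theorem 3.2.

import numpy as np

def fx(x, y):  # d/dx of the Example 1 objective
    return 2*x + 3*np.sin(2*x)*np.sin(y)**2

def fy(x, y):  # d/dy of the Example 1 objective
    return 3*np.sin(x)**2*np.sin(2*y) - 8*y - 10*np.sin(2*y)

tau1, tau2 = 0.02, 0.03      # illustrative constant stepsizes with tau1 < tau2
x, y = 2.0, 1.5              # arbitrary starting point
for t in range(5000):
    x = x - tau1 * fx(x, y)  # descent step on x at the current y
    y = y + tau2 * fy(x, y)  # ascent step on y at the freshly updated x
print(x, y)                  # should approach the saddle point (0, 0) of this objective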
We measure the inaccuracy of (xt, yt) through the potential function
Pt := at + λ · bt, (3)
where at = E[g(xt) − g∗], bt = E[g(xt) − f(xt, yt)] and the balance parameter λ > 0 will be specified later in the theorems. Recall that g(x) := maxy f(x, y) and g∗ = minx g(x). This metric is driven by the definition of minimax point, because g(x)− g∗ and g(x)− f(x, y) are non-negative for any (x, y), and both equal to 0 if and only if (x, y) is a minimax point.
Stoc-AGDA with constant stepsizes We first consider Stoc-AGDA with constant stepsizes. We show that {(x_t, y_t)}_t will converge linearly to a neighbourhood of the optimal set. Theorem 3.1. Suppose Assumptions 1, 2, 3 hold and f(x, y) satisfies the two-sided PL condition with µ_1 and µ_2. Define P_t := a_t + (1/10) b_t. If we run Algorithm 1 with τ_2^t = τ_2 ≤ 1/l and τ_1^t = τ_1 ≤ µ_2^2 τ_2 / (18 l^2), then
P_t ≤ (1 − µ_1 τ_1 / 2)^t P_0 + δ,   (4)
where δ = [ (1 − µ_2 τ_2)(L + l) τ_1^2 + l τ_2^2 + 10 L τ_1^2 ] σ^2 / (10 µ_1 τ_1).
Remark 1. In the theorem above, we choose τ_1 smaller than τ_2, τ_1/τ_2 ≤ µ_2^2/(18 l^2), because our potential function is not symmetric in x and y. Another reason is that we want y_t to approach y^*(x_t) ∈ argmax_y f(x_t, y) faster so that ∇_x f(x_t, y_t) is a better approximation of ∇g(x_t) (∇g(x) = ∇_x f(x, y^*(x)); see Nouiehed et al. [47]). Indeed, it is common to use different learning rates for x and y in GDA algorithms for nonconvex minimax problems; see, e.g., Jin et al. [26] and Lin et al. [31]. Note that the ratio between these two learning rates is quite crucial here. We also observe empirically that when the same learning rate is used, even if small, the algorithm may not converge to saddle points. Remark 2. When t→∞, P_t → δ. If τ_1 → 0 and τ_2^2/τ_1 → 0, the error term δ will go to 0. When using smaller stepsizes, the algorithm reaches a smaller neighbourhood of the saddle point, yet at the cost of a slower rate, as the contraction factor also deteriorates.
Linear convergence of AGDA Setting σ^2 = 0, it follows immediately from the previous theorem that AGDA converges linearly under the two-sided PL condition. Moreover, we have the following: Theorem 3.2. Suppose Assumptions 1, 2 hold and f(x, y) satisfies the two-sided PL condition with µ_1 and µ_2. Define P_t := a_t + (1/10) b_t. If we run AGDA with τ_1 = µ_2^2/(18 l^3) and τ_2 = 1/l, then
P_t ≤ (1 − µ_1 µ_2^2 / (36 l^3))^t P_0.   (5)
Furthermore, {(xt, yt)}t converges to some saddle point (x∗, y∗), and
‖x_t − x^*‖^2 + ‖y_t − y^*‖^2 ≤ α (1 − µ_1 µ_2^2 / (36 l^3))^t P_0,   (6)
where α is a constant depending on µ1, µ2 and l.
The above theorem implies that the limit point of {(x_t, y_t)}_t is a saddle point and that the distance to the saddle point decreases in the order of O((1 − κ^{-3})^t). Note that in the special case when the objective is strongly-convex-strongly-concave, it is known that SGDA (GDA with simultaneous updates) achieves an O(κ^2 log(1/ε)) iteration complexity (see, e.g., Facchinei and Pang [17]) and this can be further improved to match the lower complexity bound O(κ log(1/ε)) [62] by extragradient methods [29] or Nesterov’s dual extrapolation [46]. However, these results heavily rely on the strong monotonicity of the corresponding variational inequality, which does not apply here. Our analysis technique is totally different. Since the general two-sided PL condition contains a much broader class of functions, we do not expect to achieve the same dependency on κ, especially for a simple algorithm like AGDA. Note that even the multi-step GDA in [47] results in the same κ^3 dependency, but without a linear convergence rate. Hence, our conjecture is that the κ^3 dependency of AGDA cannot be improved without modifying the algorithm. We leave this investigation for future work.
Stoc-AGDA with diminishing stepsizes While Stoc-AGDA with constant stepsizes only converges linearly to a neighbourhood of the saddle point, Stoc-AGDA with diminishing stepsizes converges to the saddle point, but at a sublinear rate O(1/t). Theorem 3.3. Suppose Assumptions 1, 2, 3 hold and f(x, y) satisfies the two-sided PL condition with µ_1 and µ_2. Define P_t = a_t + (1/10) b_t. If we run Algorithm 1 with stepsizes τ_1^t = β/(γ + t) and τ_2^t = 18 l^2 β / (µ_2^2 (γ + t)) for some β > 2/µ_1 and γ > 0 such that τ_1^1 ≤ min{1/L, µ_2^2/(18 l^2)}, then we have
P_t ≤ ν / (γ + t),   where ν := max{ γ P_0, [ (L + l) β^2 + 18^2 l^5 β^2 / µ_2^4 + 10 L β^2 ] σ^2 / (10 µ_1 β − 20) }.   (7)
Remark 3. Note that the rate is affected by ν, and the first term in the definition of ν is controlled by the initial point. In practice, we can find a good initial point by running Stoc-AGDA with constant stepsizes so that only the second term in the definition of ν matters. Then by choosing β = 3/µ_1, we have ν = O( l^5 σ^2 / (µ_1^2 µ_2^4) ). Thus, the convergence rate of Stoc-AGDA is O( κ^5 σ^2 / (µ t) ).
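For concreteness, the sketch below (ours, not the authors' code) runs Algorithm 1 with an O(1/t) stepsize schedule in the spirit of Theorem 3.3 on the Example 1 objective, using synthetic Gaussian noise as a stand-in for a stochastic gradient oracle; β, γ, σ and the fixed ratio τ_2^t/τ_1^t are illustrative choices rather than the exact theoretical constants.

import numpy as np

rng = np.random.default_rng(0)

def fx(x, y): return 2*x + 3*np.sin(2*x)*np.sin(y)**2
def fy(x, y): return 3*np.sin(x)**2*np.sin(2*y) - 8*y - 10*np.sin(2*y)

beta, gamma, sigma = 20.0, 2000.0, 0.5
x, y = 2.0, 1.5
for t in range(50000):
    tau1 = beta / (gamma + t)      # tau_1^t = beta / (gamma + t)
    tau2 = 3.0 * tau1              # tau_2^t kept proportional to tau_1^t, with tau_2 > tau_1
    x = x - tau1 * (fx(x, y) + sigma * rng.standard_normal())   # noisy descent step on x
    y = y + tau2 * (fy(x, y) + sigma * rng.standard_normal())   # noisy ascent step at the updated x
print(x, y)   # the iterates drift toward the saddle point (0, 0) as the noise is averaged out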
4 Stochastic variance-reduced AGDA algorithm
In this section, we study the minimax problem with the finite-sum structure: min_x max_y f(x, y) = (1/n) Σ_{i=1}^n f_i(x, y), which arises ubiquitously in machine learning. We are especially interested in the
Algorithm 2 VR-AGDA
1: Input: (x̃_0, ỹ_0), stepsizes τ_1, τ_2, iteration numbers N, T
2: for all k = 0, 1, 2, ... do
3:   for all t = 0, 1, 2, ..., T − 1 do
4:     x_{t,0} = x̃_t, y_{t,0} = ỹ_t
5:     compute ∇_x f(x̃_t, ỹ_t) = (1/n) Σ_{i=1}^n ∇_x f_i(x̃_t, ỹ_t) and ∇_y f(x̃_t, ỹ_t) = (1/n) Σ_{i=1}^n ∇_y f_i(x̃_t, ỹ_t)
6:     for all j = 0 to N − 1 do
7:       sample i.i.d. indices i_j^1, i_j^2 uniformly from [n]
8:       x_{t,j+1} = x_{t,j} − τ_1 [∇_x f_{i_j^1}(x_{t,j}, y_{t,j}) − ∇_x f_{i_j^1}(x̃_t, ỹ_t) + ∇_x f(x̃_t, ỹ_t)]
9:       y_{t,j+1} = y_{t,j} + τ_2 [∇_y f_{i_j^2}(x_{t,j+1}, y_{t,j}) − ∇_y f_{i_j^2}(x̃_t, ỹ_t) + ∇_y f(x̃_t, ỹ_t)]
10:     end for
11:     x̃_{t+1} = x_{t,N}, ỹ_{t+1} = y_{t,N}
12:   end for
13:   choose (x^k, y^k) from {{(x_{t,j}, y_{t,j})}_{j=0}^{N−1}}_{t=0}^{T−1} uniformly at random
14:   x̃_0 = x^k, ỹ_0 = y^k
15: end for
case when n is large. We assume the overall objective function f(x, y) satisfies the two-sided PL condition with µ1 and µ2, but do not assume each fi to satisfy the two-sided PL condition. Instead of Assumption 1, we assume each component fi has Lipschitz gradients.
Assumption 4. Each fi has l-Lipschitz gradients.
If we run AGDA with full gradients to solve the finite-sum minimax problem, the total complexity for finding an ε-optimal solution is O(n κ^3 log(1/ε)) by Theorem 3.2. Despite the linear convergence, the per-iteration cost is high and the complexity can be huge when the number of components n and the condition number κ are large. Instead, if we run Stoc-AGDA, this leads to the total complexity O( κ^5 σ^2 / (µ ε) ) by Remark 3, which has a worse dependence on ε.
Motivated by the recent success of the stochastic variance reduced gradient (SVRG) technique [27, 52, 49], we introduce the VR-AGDA algorithm (presented in Algorithm 2), which combines AGDA with SVRG so that the linear convergence is preserved while improving the dependency on n and κ. VR-AGDA can be viewed as applying SVRG to AGDA with restarting: at every epoch k, we restart the SVRG subroutine by initializing it with (x^k, y^k), which is randomly selected from the previous SVRG subroutine. This is partly inspired by the GD-SVRG algorithm for minimizing PL functions [52]. Notice that when T = 1, VR-AGDA reduces to a double-loop algorithm which is similar to the SVRG for saddle point problems proposed by Palaniappan and Bach [49], except for several notable differences: (i) we are using alternating updates rather than simultaneous updates, (ii) as a result, we need to sample two independent indices rather than one at each iteration, and (iii) most importantly, we are dealing with possibly nonconvex-nonconcave objectives that satisfy the two-sided PL condition. The following two theorems capture the convergence of VR-AGDA under different parameter setups.
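To make the updates of Algorithm 2 concrete, here is a compact sketch (ours, not the authors' code) of the T = 1 variant on a small finite-sum robust-least-squares instance with M = I from Section 5.1; for simplicity the snapshot is set to the last inner iterate instead of the uniformly sampled restart point of line 13, and the stepsizes and epoch length are illustrative choices that may need tuning, not the theoretical ones of Theorems 4.1-4.2.

import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 200, 20, 3.0
A = rng.standard_normal((n, m))
y0 = A @ rng.standard_normal(m) + 0.1 * rng.standard_normal(n)

# f_i(x, y) = n*(a_i^T x - y_i)^2 - n*lam*(y_i - y0_i)^2, so that
# (1/n) * sum_i f_i(x, y) = ||A x - y||^2 - lam*||y - y0||^2.
def gx_i(i, x, y):                        # gradient of f_i with respect to x
    return 2 * n * (A[i] @ x - y[i]) * A[i]

def gy_i(i, x, y):                        # gradient of f_i with respect to y
    g = np.zeros(n)
    g[i] = -2 * n * (A[i] @ x - y[i]) - 2 * n * lam * (y[i] - y0[i])
    return g

def full_gx(x, y): return 2 * A.T @ (A @ x - y)
def full_gy(x, y): return -2 * (A @ x - y) - 2 * lam * (y - y0)

tau1, tau2, N = 1e-4, 5e-4, n
xs, ys = np.zeros(m), np.zeros(n)         # snapshot point (x~, y~)
for epoch in range(100):
    mx, my = full_gx(xs, ys), full_gy(xs, ys)        # full gradients at the snapshot (line 5)
    x, y = xs.copy(), ys.copy()
    for j in range(N):
        i1, i2 = rng.integers(n), rng.integers(n)    # two independent indices (line 7)
        x = x - tau1 * (gx_i(i1, x, y) - gx_i(i1, xs, ys) + mx)   # variance-reduced descent (line 8)
        y = y + tau2 * (gy_i(i2, x, y) - gy_i(i2, xs, ys) + my)   # variance-reduced ascent (line 9)
    xs, ys = x, y                                    # new snapshot
print(np.linalg.norm(full_gx(xs, ys)), np.linalg.norm(full_gy(xs, ys)))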
Theorem 4.1. Suppose Assumptions 2 and 4 hold and f(x, y) satisfies the two-sided PL condition with µ_1 and µ_2. Define P_k = a_k + (1/20) b_k, where a_k = E[g(x^k) − g^*] and b_k = E[g(x^k) − f(x^k, y^k)]. If we run VR-AGDA with τ_1 = β/(28 κ^8 l), τ_2 = β/(l κ^6), N = ⌊α β^{-2/3} κ^9 (2 + 4 β^{1/2} κ^{-3})^{-1}⌋ and T = 1, where α, β are constants irrelevant to l, n, µ_1, µ_2, then P_{k+1} ≤ (1/2) P_k. This implies a total complexity of O( (n + κ^9) log(1/ε) ) for VR-AGDA to achieve an ε-optimal solution.
Theorem 4.2. Under the same assumptions as Theorem 4.1, if we run VR-AGDA with τ_1 = β/(28 κ^2 l n^{2/3}), τ_2 = β/(l n^{2/3}), N = ⌊α β^{-2/3} n (2 + 4 β^{1/2} n^{-1/3})^{-1}⌋, and T = ⌈κ^3 n^{-1/3}⌉, where α, β are constants irrelevant to l, n, µ_1, µ_2, then P_{k+1} ≤ (1/2) P_k. This implies a complexity of O( (n + n^{2/3} κ^3) log(1/ε) ) for VR-AGDA to achieve an ε-optimal solution.
Remark 4. Theorems 4.1 and 4.2 are different in their choices of stepsizes and iteration numbers, which gives rise to different complexities. VR-AGDA with the second setting has a lower complexity than the first setting in the regime n ≤ κ9, but the first setting allows for a simpler double-loop algorithm with T = 1. The two theorems imply that VR-AGDA always improves over AGDA. To the best of our knowledge, this is also the first theoretical analysis of variance-reduced algorithms with alternating updating rules for minimax optimization.
5 Numerical experiments
We present experiments on two applications: robust least square and imitation learning for LQR. We mainly focus on the comparison between AGDA, Stoc-AGDA, and VR-AGDA, which are the only algorithms with known theoretical guarantees. Because of their simplicity, only a few hyperparameters are involved, and they are tuned by grid search.
5.1 Robust least square
We consider the least square problems with coefficient matrix A ∈ Rn×m and noisy vector y0 ∈ Rn subject to bounded deterministic perturbation δ. Robust least square (RLS) minimizes the worst case residual, and can be formulated as [16]: minx maxδ:‖δ‖≤ρ ‖Ax− y‖2, where δ = y0 − y. We consider RLS with soft constraint:
minx maxy F (x, y) := ‖Ax− y‖2M − λ‖y − y0‖2M , (8)
where we adopt the general M-(semi-)norm ‖x‖_M^2 = x^T M x and M is positive semi-definite. F(x, y) satisfies the two-sided PL condition when λ > 1, because it can be written as the composition of a strongly-convex-strongly-concave function and an affine function (Example 2). However, F(x, y) is not strongly convex in x, and when M is not full-rank, it is not strongly concave in y.
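A direct transcription of objective (8) and its partial gradients (a sketch of ours; any of the AGDA-type methods from Section 3 can then be run on these two callables):

import numpy as np

def F(x, y, A, y0, M, lam):
    r, d = A @ x - y, y - y0
    return r @ M @ r - lam * (d @ M @ d)

def grad_x(x, y, A, y0, M, lam):           # = 2 A^T M (A x - y)
    return 2 * A.T @ (M @ (A @ x - y))

def grad_y(x, y, A, y0, M, lam):           # = -2 M (A x - y) - 2 lam M (y - y0)
    return -2 * M @ (A @ x - y) - 2 * lam * (M @ (y - y0))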
Datasets. We use three datasets in the experiments, and two of them are generated in the same way as in Du and Hu [15]. We generate the first dataset with n = 1000 and m = 500 by sampling rows of A from a Gaussian N(0, I_n) distribution and setting y_0 = A x^* + ε with x^* from a Gaussian N(0, 1) and ε from a Gaussian N(0, 0.01). We set M = I_n and λ = 3. The second dataset is the rescaled aquatic toxicity dataset by Cassotti et al. [6], which uses 8 molecular descriptors of 546 chemicals to predict quantitative acute aquatic toxicity towards Daphnia Magna. We use M = I and λ = 2 for this dataset. The third dataset is generated with A ∈ R^{1000×500} from a Gaussian N(0, Σ) where Σ_{i,j} = 2^{−|i−j|/10}, M being rank-deficient with positive eigenvalues sampled from [0.2, 1.8], and λ = 1.5. These three datasets represent cases with low, medium, and high condition numbers, respectively.
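For reproducibility of the synthetic cases, a sketch of the first and third data-generation recipes described above (the RNG seed and the fraction of zero eigenvalues in M are our choices; everything else follows the text):

import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 500

# Dataset 1: Gaussian A, y0 = A x* + eps with eps ~ N(0, 0.01), M = I, lambda = 3.
A1 = rng.standard_normal((n, m))
x_star = rng.standard_normal(m)
y0_1 = A1 @ x_star + 0.1 * rng.standard_normal(n)
M1, lam1 = np.eye(n), 3.0

# Dataset 3: correlated rows with Sigma_ij = 2^(-|i-j|/10), rank-deficient M, lambda = 1.5.
idx = np.arange(m)
Sigma = 2.0 ** (-np.abs(idx[:, None] - idx[None, :]) / 10)
A3 = rng.multivariate_normal(np.zeros(m), Sigma, size=n)
eigs = np.concatenate([np.zeros(n // 5), rng.uniform(0.2, 1.8, n - n // 5)])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M3, lam3 = (Q * eigs) @ Q.T, 1.5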
Evaluation. We compare four algorithms: AGDA, Stoc-AGDA, VR-AGDA and extragradient (EG) with fine-tuned stepsizes. For Stoc-AGDA, we choose constant stepsizes to form a fair comparison with the other two. We report the potential function value, i.e., Pt described in our theorems, and distance to the limit point ‖(xt, yt) − (x∗, y∗)‖2. These errors are plotted against the number of gradient evaluations normalized by n (i.e., number of full gradients). Results are reported in Figure 3. We observe that VR-AGDA and AGDA both exhibit linear convergence, and the speedup of VR-AGDA is fairly significant when the condition number is large, whereas Stoc-AGDA progresses fast at the beginning and stagnates later on. These numerical results clearly validate our theoretical findings. EG performs poorly in this example.
5.2 Generative adversarial imitation learning for LQR
The optimal control problem for LQR can be formulated as [18]:
minimize over policies {π_t}:   E_{x_0∼D} [ Σ_{t=0}^∞ ( x_t^T Q x_t + u_t^T R u_t ) ]   subject to x_{t+1} = A x_t + B u_t,  u_t = π_t(x_t),
where x_t ∈ R^d is a state, u_t ∈ R^k is a control, D is the distribution of the initial state x_0, and π_t is a policy. It is known that the optimal policy is linear: u_t = −K^* x_t, where K^* ∈ R^{k×d}. If we parametrize the policy in the linear form, u_t = −K x_t, the problem can be written as: min_K C(K; Q, R) := E_{x_0∼D} [ Σ_{t=0}^∞ ( x_t^T Q x_t + (K x_t)^T R (K x_t) ) ],
where the trajectory is induced by LQR dynamics and policy K. In generative adversarial imitation learning for LQR, the trajectories induced by an expert policy KE are observed and part of the goal is to learn the cost function parameters Q and R from the expert. This can be formulated as a minimax problem [5]:
min_K max_{(Q,R)∈Θ} { m(K, Q, R) := C(K; Q, R) − C(K_E; Q, R) − Φ(Q, R) },
where Θ = {(Q, R) : α_Q I ⪯ Q ⪯ β_Q I, α_R I ⪯ R ⪯ β_R I} and Φ is a strongly-convex regularizer. We sample n initial points x_0^{(1)}, x_0^{(2)}, ..., x_0^{(n)} from D and approximate C(K; Q, R) by the sample average C_n(K; Q, R) := (1/n) Σ_{i=1}^n [ Σ_{t=0}^∞ ( x_t^T Q x_t + u_t^T R u_t ) ]_{x_0 = x_0^{(i)}}. We then consider:
min_K max_{(Q,R)∈Θ} { m_n(K, Q, R) := C_n(K; Q, R) − C_n(K_E; Q, R) − Φ(Q, R) }.   (9)
Note that mn satisfies the PL condition in terms of K [18], and mn is strongly-concave in terms of (Q,R), so the function satisfies the two-sided PL condition.
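For intuition, a finite-horizon Monte Carlo sketch (ours) of the sample-average cost C_n(K; Q, R) and of the objective m_n in (9); the paper instead evaluates these quantities and their gradients exactly via the compact forms of Fazel et al. [18] and Cai et al. [5], and the horizon truncation below is only an approximation.

import numpy as np

def lqr_cost(K, Q, R, A, B, X0, horizon=200):
    # average of sum_t x_t^T Q x_t + u_t^T R u_t over sampled initial states,
    # with closed-loop dynamics x_{t+1} = A x_t + B u_t and u_t = -K x_t
    total = 0.0
    for x0 in X0:
        x = np.array(x0, dtype=float)
        for _ in range(horizon):
            u = -K @ x
            total += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u
    return total / len(X0)

def gail_objective(K, Q, R, K_expert, A, B, X0, Q_bar, R_bar, lam=1.0):
    # m_n(K, Q, R) = C_n(K; Q, R) - C_n(K_E; Q, R) - Phi(Q, R)
    phi = lam * (np.linalg.norm(Q - Q_bar) ** 2 + np.linalg.norm(R - R_bar) ** 2)
    return lqr_cost(K, Q, R, A, B, X0) - lqr_cost(K_expert, Q, R, A, B, X0) - phi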
In our experiment, we use Φ(Q, R) = λ(‖Q − Q̄‖^2 + ‖R − R̄‖^2) for some Q̄, R̄ and λ = 1. We generate datasets with different d and k: (1) d = 3, k = 2; (2) d = 20, k = 10; (3) d = 30, k = 20. The initial distribution D is N(0, I_d) and we sample n = 100 initial points. The exact gradients can be computed based on the compact forms established in Fazel et al. [18], Cai et al. [5]. We compare AGDA and VR-AGDA under fine-tuned stepsizes, and track their errors in terms of ‖K_t − K^*‖^2 + ‖Q_t − Q^*‖_F^2 + ‖R_t − R^*‖_F^2. The result is reported in Figure 4, which again indicates that VR-AGDA significantly outperforms AGDA.
6 Conclusion
In this paper, we identify a subclass of nonconvex-nonconcave minimax problems, represented by the so-called two-sided PL condition, for which AGDA and Stoc-AGDA converge to global saddle points. We also propose the first linearly-convergent variance-reduced AGDA algorithm that is provably faster than AGDA for this subclass of minimax problems. We hope this work can shed some light on the understanding of nonconvex-nonconcave minimax optimization: (1) different learning rates for the two players are essential in GDA algorithms with alternating updates; (2) convexity-concavity is not a watershed for guaranteeing global convergence of GDA algorithms.
Acknowledgments and Disclosure of Funding
This work was supported in part by ONR grant W911NF-15-1-0479, NSF CCF-1704970, and NSF CMMI-1761699.
Broader Impact
With the boom of neural networks in every corner of machine learning, the understanding of nonconvex optimization, especially minimax optimization, becomes increasingly important. On one hand, the surge of interest in generative adversarial networks (GAN) has brought revolutionary success in many practical applications such as face synthesis, text-to-image synthesis, and text generation. On the other hand, even the simplest algorithms such as gradient descent ascent (GDA), although widely adopted by practitioners and researchers in the field, lack theoretical understanding. It is imperative to develop a strong fundamental understanding of the success of these simple algorithms in the nonconvex regime, both to expand the usability of the methods and to accelerate future deployment in a principled and interpretable manner.
Theory. This paper takes an initial and substantial step towards the understanding of nonconvex-nonconcave min-max optimization problems with "hidden convexity" as well as the convergence of the simplest alternating GDA algorithm. Despite its popularity, this algorithm has not been carefully analyzed even in the convex regime. The theory developed in this work helps explain when and why GDA performs well, how to choose stepsizes, and how to improve GDA properly. These are obviously basic yet important questions that need to be addressed in order to guide future development.
Applications. The downstream applications include but not limited to generative adversarial networks, the actor-critic game in reinforcement learning, robust machine learning and control, and other applications in games and social economics. This work could potentially inspire more interest in broadening the applicability of GDA in practice. | 1. What is the main contribution of the paper regarding AGDA algorithms for min-max problems?
2. What are the strengths and weaknesses of the proposed approach, particularly concerning its convergence rates and assumptions?
3. Do you have any concerns about the applicability and importance of the results under specific conditions?
4. How does the reviewer assess the clarity and quality of the analysis and presentation in the paper? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper studies the convergence of AGDA for min-max problems under the PL condition. Under the two-sided PL condition, the authors show AGDA has linear convergence. Furthermore, the authors present a stochastic version of AGDA, which has a sublinear rate. With a variance reduction technique, the rate is improved to linear.
Strengths
The major contribution of the work is to show that different versions of AGDA algorithms have linear or sublinear rates for min-max problems under the PL condition, which need not be convex-concave. Nonconvex-nonconcave min-max problems are an important subject in ML, and there haven’t been many fundamental works on this.
Weaknesses
Though there are some merits of the paper, here are a bunch of major problems of the submission: 1. PL condition is a very strong assumption. Although it does not require convexity-concavity, it is a global condition, which roughly requires similar properties of strong convexity-concavity. I agree there are some applications of min-max problems under PL condition, as mentioned in the paper, but the applications are extremely limited, and I am not sure they are important applications to ML community. In general, nonconvex-nonconcave min-max problems won’t satisfy PL condition. To this extend, the title of the paper is a bit misleading, and it should mention PL condition explicitly. 2. It is unclear to me that AGDA is a good algorithm for min-max problem, and Figure 1 and 2 are a bit misleading. I agree that AGDA is more stable than GDA, which has been shown to be a bad algorithm for min-max problems. However, AGDA is less stable than EG. For example, when f(x,y) = xy, GDA diverges, AGDA circles, and EG converges. The objectives in Figure 1 and Figure 2 are locally strongly-convex strongly-concave, thus all three algorithms should converge linearly when the step-size is small enough. 3. The linear convergence rate shown in the paper is slow. \kappa^3 is too slow a rate for linear convergence. In most of the real problems \kappa>1000, in which case the bound becomes almost meaningless. This bound should be improvable, and the \kappa^9 bound is definitely improvable. If the authors do not agree with this, then please provide a lower-bound argument. 4. The numerical experiments are a bit toy. Robust least square is convex-concave (strongly convex-strongly concave if I understand correctly). I don’t understand the objective of LQR example, but that is a tiny problem in dimension. Is it also convex-concave? 5. The PL condition number is not known in advance, but it is required in the algorithm. This may add another layer of tuning when running the algorithm in practice. =========[after rebuttal]============ Thanks for your response, and I updated the score correspondingly. Though I see some merits of the paper, the reasons that I still did not support acceptance are: 1. The applications of two-side PL conditions are limited. The citations mentioned in the response are mostly minimization problem, not saddle-point problem. The two-side PL condition is very strong, and avoids the difficulty of nonconvex-nonconcave minimax problem, i.e., cycling. 2. The bound seems to be sub-optimal to me. One can quickly show that AGDA has O(kappa^2 log(1/eps)) rate for strongly-convex-strongly-concave problem. Though the conditions are not identical, they are similar. I invite the authors to think deeply on the possibility to improve the bound. 3. The analysis presented in the paper is hard to read, in particular the variance-reduction part. I am not sure what the readers can learn from the analysis of the submission. I suggest to improve the presentation of the analysis. |
NIPS | Title
Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems
Abstract
Nonconvex minimax problems appear frequently in emerging machine learning applications, such as generative adversarial networks and adversarial learning. Simple algorithms such as the gradient descent ascent (GDA) are the common practice for solving these nonconvex games and receive lots of empirical success. Yet, it is known that these vanilla GDA algorithms with constant stepsize can potentially diverge even in the convex-concave setting. In this work, we show that for a subclass of nonconvex-nonconcave objectives satisfying a so-called two-sided Polyak-Łojasiewicz inequality, the alternating gradient descent ascent (AGDA) algorithm converges globally at a linear rate and the stochastic AGDA achieves a sublinear rate. We further develop a variance reduced algorithm that attains a provably faster rate than AGDA when the problem has the finite-sum structure.
1 Introduction
We consider minimax optimization problems of the form
min_{x∈R^{d_1}} max_{y∈R^{d_2}} f(x, y)   (1)
where f(x, y) is a possibly nonconvex-nonconcave function. Recent emerging applications in machine learning further stimulate a surge of interest in minimax problems. For example, generative adversarial networks (GANs) [23] can be viewed as a two-player game between a generator that produces synthetic data and a discriminator that differentiates between true and synthetic data. Other applications include reinforcement learning [9, 10, 11], robust optimization [42, 43], adversarial machine learning [54, 37], and so on. In many of these applications, f(x, y) may be stochastic, namely, f(x, y) = E[F(x, y; ξ)], which corresponds to the expected loss of some random data ξ; or f(x, y) may have the finite-sum structure, namely, f(x, y) = (1/n) Σ_{i=1}^n f_i(x, y), which corresponds to the empirical loss over n data points.
The most frequently used methods for solving minimax problems are the gradient descent ascent (GDA) algorithms (or their stochastic variants), with either simultaneous or alternating updates of the primal-dual variables, referred to as SGDA and AGDA, respectively. While these algorithms have received much empirical success especially in adversarial training, it is known that GDA algorithms with constant stepsizes could fail to converge even for the bilinear games [22, 40]; when they do converge, the stable limit point may not be a local Nash equilibrium [13, 38]. On the other hand, GDA algorithms can converge linearly to the saddle point for strongly-convex-strongly-concave functions [17]. Moreover, for many simple nonconvex-nonconcave objective functions, such as, f(x, y) = x2 + 3 sin2 x sin2 y − 4y2 − 10 sin2 y, we observe that GDA algorithms with constant
stepsizes converge to the global Nash equilibrium (see Figure 1). These facts naturally raise a question: Is there a general condition under which GDA algorithms converge to the global optima?
Furthermore, the use of variance reduction techniques has played a prominent role in improving the convergence over stochastic or batch algorithms for both convex and nonconvex minimization problems [27, 52, 53, 58]. However, when it comes to the minimax problems, there are limited results, except under convex-concave setting [49, 15]. This leads to another open question: Can we improve GDA algorithms for nonconvex-nonconcave minimax problems?
1.1 Our contributions
In this paper, we address these two questions and specifically focus on the alternating gradient descent ascent, namely AGDA. This is due to several considerations. First of all, using alternating updates of GDA is more stable than simultaneous updates [22, 2] and often converges faster in practice. Note that for a convex-concave matrix game, SGDA may diverge while AGDA is proven to always have bounded iterates [22]. See Figure 2 for a simple illustration. Secondly, AGDA is widely used for training GANs and other minimax problems in practice; see e.g., [33, 41]. Yet there is a lack of discussion on the convergence of AGDA for general minimax problems in the literature, even for the favorable strongly-convex-strongly-concave setting. Alternating updating schemes are perceived more challenging to analyze than simultaneous updates; the latter treats two variables equally and has been extensively studied in vast literature of variational inequality. Our main contributions are summarized as follows.
Two-sided PL condition. First, we identify a general condition that relaxes the convex-concavity requirement of the objective function while still guaranteeing global convergence of AGDA and stochastic AGDA (Stoc-AGDA). We call this the two-sided PL condition, which requires that both players’ utility functions satisfy the Polyak-Łojasiewicz (PL) inequality [50]. The two-sided PL condition is very general and is satisfied by many important classes of functions: (a) all strongly-convex-strongly-concave functions; (b) all PL-strongly-concave functions (discussed in [24]); and (c) many nonconvex-nonconcave objectives. Such conditions also hold for various applications, including robust least square, generative adversarial imitation learning for linear quadratic regulator (LQR) dynamics [5], zero-sum linear quadratic games [63], and potentially many others in adversarial learning [14], robust phase retrieval [55, 64], robust control [18], etc. We first investigate the landscape of objectives under the two-sided PL condition. In particular, we show that three notions of optimality, namely saddle point, minimax point, and stationary point, are equivalent.
Global convergence of AGDA. We show that under the two-sided PL condition, AGDA with proper constant stepsizes converges globally to a saddle point at a linear rate of O((1 − κ^{-3})^t), while Stoc-AGDA with proper diminishing stepsizes converges to a saddle point at a sublinear rate of O(κ^5/t), where κ is the underlying condition number. To the best of our knowledge, this is the first result on the global convergence of a class of nonconvex-nonconcave problems. In contrast, most previous work deals with nonconvex-concave problems and obtains convergence to stationary points. On the other hand, because all strongly-convex-strongly-concave and PL-strongly-concave functions naturally satisfy the two-sided PL condition, our analysis fills the theoretical gap with the first convergence results of AGDA under these settings.
Variance reduced algorithm. For minimax problems with the finite-sum structure, we introduce a variance-reduced AGDA algorithm (VR-AGDA) that leverages the idea of the stochastic variance reduced gradient (SVRG) [27, 52] with the alternating updates. We prove that VR-AGDA achieves the complexity of O((n + n^{2/3} κ^3) log(1/ε)), which improves over the O(n κ^3 log(1/ε)) complexity of AGDA and the O(κ^5/ε) complexity of Stoc-AGDA when applied to finite-sum minimax problems. Our numerical experiments further demonstrate that VR-AGDA performs significantly better than AGDA and Stoc-AGDA, especially for problems with large condition numbers. To our best knowledge, this is the first work to provide a variance-reduced algorithm and theoretical guarantees in the nonconvex-nonconcave regime of minimax optimization. In contrast, most previous variance-reduced algorithms require full or partial strong convexity and only apply to simultaneous updates.
Nonconvex-PL games. Lastly, as a side contribution, we show that for a broader class of nonconvex-nonconcave problems under only a one-sided PL condition, AGDA converges to an ε-stationary point within O(ε^{-2}) iterations and is thus optimal among all first-order algorithms. Our result shaves off a logarithmic factor of the best-known rate achieved by the multi-step GDA algorithm [47]. This directly implies the same convergence rate on nonconvex-strongly-concave objectives, and to our best knowledge, we are the first to show the convergence of AGDA on this class of functions. Due to the page limitation, we defer this result to Appendix ??.
1.2 Related work
Nonconvex minimax problems. There has been a recent surge in research on solving minimax optimization beyond the convex-concave regime [54, 8, 51, 56, 30, 47, 1, 32, 3, 48], but these works differ from ours in various respects. Most of them focus on the nonconvex-concave regime and aim for convergence to stationary points of minimax problems [8, 54, 31, 56]. The algorithms in these works require solving the inner maximization or some sub-problems to high accuracy, which is different from AGDA. Lin et al. [30] proposed an inexact proximal point method to find an ε-stationary point for a class of weakly-convex-weakly-concave minimax problems. Their convergence result relies on assuming the existence of a solution to the corresponding Minty variational inequality, which is hard to verify. Abernethy et al. [1] showed the linear convergence of a second-order iterative algorithm, called Hamiltonian gradient descent, for a subclass of "sufficiently bilinear" functions. Very recently, Xu et al. [60] and Boţ and Böhm [4] analyze AGDA in the nonconvex-(strongly-)concave setting. There is also a line of work on understanding the dynamics in minimax games [39, 20, 19, 21, 12, 25].
Variance-reduced minimax optimization. Palaniappan and Bach [49], Luo et al. [34], Chavdarova et al. [7] provided linear-convergent algorithms for strongly-convex-strongly-concave objectives, based on simultaneous updates. Du and Hu [15] extended the result to convex-strongly-concave objectives with full-rank coupling bilinear term. In contrast, we are dealing with a much broader class of objectives that are possibly nonconvex-nonconcave. We point out that Luo et al. [35] and Xu et al. [59] recently introduced variance-reduced algorithms for finding the stationary point of nonconvex-strongly-concave problems, which is again different from our setting.
2 Global optima and two-sided PL condition
Throughout this paper, we assume that the function f(x, y) in (1) is continuously differentiable and has Lipschitz gradient. Here ‖ · ‖ is used to denote the Euclidean norm. Assumption 1 (Lipschitz gradient). There exists a positive constant l > 0 such that
max{‖∇yf (x1, y1)−∇yf (x2, y2)‖ , ‖∇xf (x1, y1)−∇xf (x2, y2)‖} ≤ l[‖x1 − x2‖+‖y1 − y2‖],
holds for all x1, x2 ∈ Rd1 , y1, y2 ∈ Rd2 .
We now define three notions of optimality for minimax problems. The most direct notion of optimality is the global minimax point, at which x∗ is an optimal solution to the function g(x) := max_y f(x, y) and y∗ is an optimal solution to max_y f(x∗, y). In the two-player zero-sum game, the notion of saddle point is also widely used [57, 44]. For a saddle point (x∗, y∗), x∗ is an optimal solution to min_x f(x, y∗) and y∗ is an optimal solution to max_y f(x∗, y).
Definition 1 (Global optima).
1. (x∗, y∗) is a global minimax point, if for any (x, y): f(x∗, y) ≤ f(x∗, y∗) ≤ max_{y′} f(x, y′).
2. (x∗, y∗) is a saddle point, if for any (x, y): f(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗).
3. (x∗, y∗) is a stationary point, if ∇_x f(x∗, y∗) = ∇_y f(x∗, y∗) = 0.
For general nonconvex-nonconcave minimax problems, these three notions of optimality are not necessarily equivalent. A stationary point may not be a saddle point or a global minimax point; a global minimax point may not be a saddle point or a stationary point. Note that for minimax problems, a saddle point or a global minimax point may not always exist. However, since our goal in this paper is to find global optima, in the remainder of the paper, we assume that a saddle point always exists. Assumption 2 (Existence of saddle point). The objective function f has at least one saddle point. We also assume that for any fixed y, minx∈Rd1 f(x, y) has a nonempty solution set and a optimal value, and for any fixed x, maxy∈Rd2 f(x, y) has a nonempty solution set and a finite optimal value.
For unconstrained minimization problems: minx∈Rn f(x), Polyak [50] proposed Polyak-Łojasiewicz (PL) condition, which is sufficient to show global linear convergence for gradient descent without assuming convexity. Specifically, a function f(·) satisfies PL condition if it has a nonempty solution set and a finite optimal value f∗, and there exists some µ > 0 such that ‖∇f(x)‖2 ≥ 2µ(f(x) − f∗),∀x. As discussed in Karimi et al. [28], PL condition is weaker, or not stronger, than other well-known conditions that guarantee linear convergence for gradient descent, such as error bounds (EB) [36], weak strong convexity (WSC) [45] and restricted secant inequality (RSI) [61].
We introduce a straightforward generalization of the PL condition to the minimax problem: function f(x, y) satisfies the PL condition with constant µ1 with respect to x, and -f satisfies PL condition with constant µ2 with respect to y. We formally state this in the following definition. Definition 2 (Two-sided PL condition). A continuously differentiable function f(x, y) satisfies the two-sided PL condition if there exist constants µ1, µ2 > 0 such that: ∀x, y,
‖∇_x f(x, y)‖^2 ≥ 2µ_1 [f(x, y) − min_x f(x, y)],   ‖∇_y f(x, y)‖^2 ≥ 2µ_2 [max_y f(x, y) − f(x, y)].
The two-sided PL condition does not imply convexity-concavity, and it is a much weaker condition than strong-convexity-strong-concavity. In Lemma 2.1, we show that three notions of optimality are equivalent under the two-sided PL condition. Note that they may not be unique. Lemma 2.1. If the objective function f(x, y) satisfies the two-sided PL condition, then the following holds true:
(saddle point)⇔ (global minimax)⇔ (stationary point).
Below we give some examples that satisfy this condition. Example 1. The nonconvex-nonconcave function in the introduction, f(x, y) = x2+3 sin2 x sin2 y− 4y2 − 10 sin2 y satisfies the two-sided PL condition with µ1 = 1/16, µ2 = 1/11 (see Appendix ??). Example 2. f(x, y) = F (Ax,By), where F (·, ·) is strongly-convex-strongly-concave and A and B are arbitrary matrices, satisfies the two-sided PL condition. Example 3. The generative adversarial imitation learning for LQR can be formulated as minK maxθm(K, θ), where m is strongly-concave in terms of θ and satisfies PL condition in terms of K (see [5] for more details), thus satisfying the two-sided PL condition. Example 4. In a zero-sum linear quadratic (LQ) game, the system dynamics are characterized by xt+1 = Axt + But + Cvt, where xt is the system state and ut, vt are the control inputs from two-players. After parameterizing the policies of two players by ut = −Kxt and vt = −Lxt, the
value function is C(K, L) = E_{x_0∼D} [ Σ_{t=0}^∞ ( x_t^T Q x_t + (K x_t)^T R_u (K x_t) − (L x_t)^T R_v (L x_t) ) ], where D is the distribution of the initial state x_0 (see [63] for more details). Player 1 (player 2) wants to minimize (maximize) C(K,L), and the game is formulated as min_K max_L C(K,L). Fixing L (or K), C(·, L) (or −C(K, ·)) becomes an objective of an LQR problem, and therefore satisfies the PL condition [18] when argmin_K C(K,L) and argmax_L C(K,L) are well-defined.
The two-sided PL condition includes rich classes of functions, including: (a) all strongly-convex-strongly-concave functions; (b) some convex-concave functions (e.g., Example 2); (c) some nonconvex-strongly-concave functions (e.g., Example 3); (d) some nonconvex-nonconcave functions (e.g., Examples 1 and 4). Under the two-sided PL condition, the function g(x) := max_y f(x, y) satisfies the PL condition with µ_1 (see Appendix ??). Moreover, g is also L-smooth with L := l + l^2/µ_2 [47]. Finally, we denote µ = min(µ_1, µ_2) and κ = l/µ, which represents the condition number of the problem.
3 Global convergence of AGDA and Stoc-AGDA
In this section, we establish the convergence rate of the stochastic alternating gradient descent ascent (Stoc-AGDA) algorithm, which we present in Algorithm 1, under the two-sided PL condition. StocAGDA updates variables x and y sequentially using stochastic gradient descent/ascent steps. Here we make standard assumptions about stochastic gradients Gx(x, y, ξ) and Gy(x, y, ξ). Assumption 3 (Bounded variance). Gx(x, y, ξ) and Gy(x, y, ξ) are unbiased stochastic estimators of∇xf(x, y) and∇yf(x, y) and have variances bounded by σ2 > 0.
Algorithm 1 Stoc-AGDA
1: Input: (x_0, y_0), stepsizes {τ_1^t}_t > 0, {τ_2^t}_t > 0
2: for all t = 0, 1, 2, ... do
3:   Draw two i.i.d. samples ξ_1^t, ξ_2^t ∼ P(ξ)
4:   x_{t+1} ← x_t − τ_1^t G_x(x_t, y_t, ξ_1^t)
5:   y_{t+1} ← y_t + τ_2^t G_y(x_{t+1}, y_t, ξ_2^t)
6: end for
Note that Stoc-AGDA with constant stepsizes (i.e., τ_1^t = τ_1 and τ_2^t = τ_2) and noiseless stochastic gradients (i.e., σ^2 = 0) reduces to AGDA:
xt+1 = xt − τ1∇xf(xt, yt), yt+1 = yt + τ2∇yf(xt+1, yt). (2)
We measure the inaccuracy of (xt, yt) through the potential function
Pt := at + λ · bt, (3)
where at = E[g(xt) − g∗], bt = E[g(xt) − f(xt, yt)] and the balance parameter λ > 0 will be specified later in the theorems. Recall that g(x) := maxy f(x, y) and g∗ = minx g(x). This metric is driven by the definition of minimax point, because g(x)− g∗ and g(x)− f(x, y) are non-negative for any (x, y), and both equal to 0 if and only if (x, y) is a minimax point.
Stoc-AGDA with constant stepsizes We first consider Stoc-AGDA with constant stepsizes. We show that {(x_t, y_t)}_t will converge linearly to a neighbourhood of the optimal set. Theorem 3.1. Suppose Assumptions 1, 2, 3 hold and f(x, y) satisfies the two-sided PL condition with µ_1 and µ_2. Define P_t := a_t + (1/10) b_t. If we run Algorithm 1 with τ_2^t = τ_2 ≤ 1/l and τ_1^t = τ_1 ≤ µ_2^2 τ_2 / (18 l^2), then
P_t ≤ (1 − µ_1 τ_1 / 2)^t P_0 + δ,   (4)
where δ = [ (1 − µ_2 τ_2)(L + l) τ_1^2 + l τ_2^2 + 10 L τ_1^2 ] σ^2 / (10 µ_1 τ_1).
Remark 1. In the theorem above, we choose τ1 smaller than τ2, τ1/τ2 ≤ µ22/(18l2), because our potential function is not symmetric about x and y. Another reason is because we want yt
to approach y∗(xt) ∈ arg maxy f(xt, y) faster so that ∇xf(xt, yt) is a better approximation for ∇g(xt) (∇g(x) = ∇xf(x, y∗(x)), see Nouiehed et al. [47]). Indeed, it is common to use different learning rates for x and y in GDA algorithms for nonconvex minimax problems; see e.g., Jin et al. [26] and Lin et al. [31]. Note that the ratio between these two learning rates is quite crucial here. We also observe empirically when the same learning rate is used, even if small, the algorithm may not converge to saddle points. Remark 2. When t→∞, Pt → δ. If τ1 → 0 and τ22 /τ1 → 0, the error term δ will go to 0. When using smaller stepsizes, the algorithm reaches a smaller neighbour of the saddle point yet at the cost of a slower rate, as the contraction factor also deteriorates.
Linear convergence of AGDA Setting σ^2 = 0, it follows immediately from the previous theorem that AGDA converges linearly under the two-sided PL condition. Moreover, we have the following: Theorem 3.2. Suppose Assumptions 1, 2 hold and f(x, y) satisfies the two-sided PL condition with µ_1 and µ_2. Define P_t := a_t + (1/10) b_t. If we run AGDA with τ_1 = µ_2^2/(18 l^3) and τ_2 = 1/l, then
P_t ≤ (1 − µ_1 µ_2^2 / (36 l^3))^t P_0.   (5)
Furthermore, {(xt, yt)}t converges to some saddle point (x∗, y∗), and
‖x_t − x^*‖^2 + ‖y_t − y^*‖^2 ≤ α (1 − µ_1 µ_2^2 / (36 l^3))^t P_0,   (6)
where α is a constant depending on µ1, µ2 and l.
The above theorem implies that the limit point of {(x_t, y_t)}_t is a saddle point and that the distance to the saddle point decreases in the order of O((1 − κ^{-3})^t). Note that in the special case when the objective is strongly-convex-strongly-concave, it is known that SGDA (GDA with simultaneous updates) achieves an O(κ^2 log(1/ε)) iteration complexity (see, e.g., Facchinei and Pang [17]) and this can be further improved to match the lower complexity bound O(κ log(1/ε)) [62] by extragradient methods [29] or Nesterov’s dual extrapolation [46]. However, these results heavily rely on the strong monotonicity of the corresponding variational inequality, which does not apply here. Our analysis technique is totally different. Since the general two-sided PL condition contains a much broader class of functions, we do not expect to achieve the same dependency on κ, especially for a simple algorithm like AGDA. Note that even the multi-step GDA in [47] results in the same κ^3 dependency, but without a linear convergence rate. Hence, our conjecture is that the κ^3 dependency of AGDA cannot be improved without modifying the algorithm. We leave this investigation for future work.
Stoc-AGDA with diminishing stepsizes While Stoc-AGDA with constant stepsizes only converges linearly to a neighbourhood of the saddle point, Stoc-AGDA with diminishing stepsizes converges to the saddle point, but at a sublinear rate O(1/t). Theorem 3.3. Suppose Assumptions 1, 2, 3 hold and f(x, y) satisfies the two-sided PL condition with µ_1 and µ_2. Define P_t = a_t + (1/10) b_t. If we run Algorithm 1 with stepsizes τ_1^t = β/(γ + t) and τ_2^t = 18 l^2 β / (µ_2^2 (γ + t)) for some β > 2/µ_1 and γ > 0 such that τ_1^1 ≤ min{1/L, µ_2^2/(18 l^2)}, then we have
P_t ≤ ν / (γ + t),   where ν := max{ γ P_0, [ (L + l) β^2 + 18^2 l^5 β^2 / µ_2^4 + 10 L β^2 ] σ^2 / (10 µ_1 β − 20) }.   (7)
Remark 3. Note that the rate is affected by ν, and the first term in the definition of ν is controlled by the initial point. In practice, we can find a good initial point by running Stoc-AGDA with constant stepsizes so that only the second term in the definition of ν matters. Then by choosing β = 3/µ_1, we have ν = O( l^5 σ^2 / (µ_1^2 µ_2^4) ). Thus, the convergence rate of Stoc-AGDA is O( κ^5 σ^2 / (µ t) ).
4 Stochastic variance-reduced AGDA algorithm
In this section, we study the minimax problem with the finite-sum structure: min_x max_y f(x, y) = (1/n) Σ_{i=1}^n f_i(x, y), which arises ubiquitously in machine learning. We are especially interested in the
Algorithm 2 VR-AGDA
1: Input: (x̃_0, ỹ_0), stepsizes τ_1, τ_2, iteration numbers N, T
2: for all k = 0, 1, 2, ... do
3:   for all t = 0, 1, 2, ..., T − 1 do
4:     x_{t,0} = x̃_t, y_{t,0} = ỹ_t
5:     compute ∇_x f(x̃_t, ỹ_t) = (1/n) Σ_{i=1}^n ∇_x f_i(x̃_t, ỹ_t) and ∇_y f(x̃_t, ỹ_t) = (1/n) Σ_{i=1}^n ∇_y f_i(x̃_t, ỹ_t)
6:     for all j = 0 to N − 1 do
7:       sample i.i.d. indices i_j^1, i_j^2 uniformly from [n]
8:       x_{t,j+1} = x_{t,j} − τ_1 [∇_x f_{i_j^1}(x_{t,j}, y_{t,j}) − ∇_x f_{i_j^1}(x̃_t, ỹ_t) + ∇_x f(x̃_t, ỹ_t)]
9:       y_{t,j+1} = y_{t,j} + τ_2 [∇_y f_{i_j^2}(x_{t,j+1}, y_{t,j}) − ∇_y f_{i_j^2}(x̃_t, ỹ_t) + ∇_y f(x̃_t, ỹ_t)]
10:     end for
11:     x̃_{t+1} = x_{t,N}, ỹ_{t+1} = y_{t,N}
12:   end for
13:   choose (x^k, y^k) from {{(x_{t,j}, y_{t,j})}_{j=0}^{N−1}}_{t=0}^{T−1} uniformly at random
14:   x̃_0 = x^k, ỹ_0 = y^k
15: end for
case when n is large. We assume the overall objective function f(x, y) satisfies the two-sided PL condition with µ1 and µ2, but do not assume each fi to satisfy the two-sided PL condition. Instead of Assumption 1, we assume each component fi has Lipschitz gradients.
Assumption 4. Each fi has l-Lipschitz gradients.
If we run AGDA with full gradients to solve the finite-sum minimax problem, the total complexity for finding an ε-optimal solution is O(n κ^3 log(1/ε)) by Theorem 3.2. Despite the linear convergence, the per-iteration cost is high and the complexity can be huge when the number of components n and the condition number κ are large. Instead, if we run Stoc-AGDA, this leads to the total complexity O( κ^5 σ^2 / (µ ε) ) by Remark 3, which has a worse dependence on ε.
Motivated by the recent success of stochastic variance reduced gradient (SVRG) technique [27, 52, 49], we introduce the VR-AGDA algorithm (presented in Algorithm 2), that combines AGDA with SVRG so that the linear convergence is preserved while improving the dependency on n and κ. VR-AGDA can be viewed as the applying SVRG to AGDA with restarting: at every epoch k, we restart the SVRG subroutine by initializing it with (xk, yk), which is randomly selected from previous SVRG subroutine. This is partly inspired by the GD-SVRG algorithm for minimizing PL functions [52]. Notice when T = 1, VR-AGDA reduces to a double-loop algorithm which is similar to the SVRG for saddle point problems proposed by Palaniappan and Bach [49], except for several notable differences: (i) we are using the alternating updates rather than simultaneous updates, (ii) as a result, we require to sample two independent indices rather than one at each iteration, and (iii) most importantly, we are dealing with possibly nonconvex-nonconcave objectives that satisfy the two-sided PL condition. The following two theorems capture the convergence of VR-AGDA under different parameter setups.
Theorem 4.1. Suppose Assumptions 2 and 4 hold and f(x, y) satisfies the two-sided PL condition with µ_1 and µ_2. Define P_k = a_k + (1/20) b_k, where a_k = E[g(x^k) − g^*] and b_k = E[g(x^k) − f(x^k, y^k)]. If we run VR-AGDA with τ_1 = β/(28 κ^8 l), τ_2 = β/(l κ^6), N = ⌊α β^{-2/3} κ^9 (2 + 4 β^{1/2} κ^{-3})^{-1}⌋ and T = 1, where α, β are constants irrelevant to l, n, µ_1, µ_2, then P_{k+1} ≤ (1/2) P_k. This implies a total complexity of O( (n + κ^9) log(1/ε) ) for VR-AGDA to achieve an ε-optimal solution.
Theorem 4.2. Under the same assumptions as Theorem 4.1, if we run VR-AGDA with τ_1 = β/(28 κ^2 l n^{2/3}), τ_2 = β/(l n^{2/3}), N = ⌊α β^{-2/3} n (2 + 4 β^{1/2} n^{-1/3})^{-1}⌋, and T = ⌈κ^3 n^{-1/3}⌉, where α, β are constants irrelevant to l, n, µ_1, µ_2, then P_{k+1} ≤ (1/2) P_k. This implies a complexity of O( (n + n^{2/3} κ^3) log(1/ε) ) for VR-AGDA to achieve an ε-optimal solution.
Remark 4. Theorems 4.1 and 4.2 are different in their choices of stepsizes and iteration numbers, which gives rise to different complexities. VR-AGDA with the second setting has a lower complexity than the first setting in the regime n ≤ κ9, but the first setting allows for a simpler double-loop algorithm with T = 1. The two theorems imply that VR-AGDA always improves over AGDA. To the best of our knowledge, this is also the first theoretical analysis of variance-reduced algorithms with alternating updating rules for minimax optimization.
5 Numerical experiments
We present experiments on two applications: robust least square and imitation learning for LQR. We mainly focus on the comparison between AGDA, Stoc-AGDA, and VR-AGDA, which are the only algorithms with known theoretical guarantees. Because of their simplicity, only few hyperparameters are induced and are tuned based on grid search.
5.1 Robust least square
We consider the least square problems with coefficient matrix A ∈ Rn×m and noisy vector y0 ∈ Rn subject to bounded deterministic perturbation δ. Robust least square (RLS) minimizes the worst case residual, and can be formulated as [16]: minx maxδ:‖δ‖≤ρ ‖Ax− y‖2, where δ = y0 − y. We consider RLS with soft constraint:
minx maxy F (x, y) := ‖Ax− y‖2M − λ‖y − y0‖2M , (8)
where we adopt the general M-(semi-)norm in: ‖x‖2M = xTMx and M is positive semi-definite. F (x, y) satisfies the two-sided PL condition when λ > 1, because it can be written as the composition of a strongly-convex-strongly-concave function and an affine function (Example 2). However, F (x, y) is not strongly convex about x, and when M is not full-rank, it is not strongly concave about y.
Datasets. We use three datasets in the experiments, and two of them are generated in the same way as in Du and Hu [15]. We generate the first dataset with n = 1000 and m = 500 by sampling the rows of A from a Gaussian N(0, I) distribution and setting y0 = Ax* + ε, with the entries of x* drawn from the Gaussian N(0, 1) and those of ε from the Gaussian N(0, 0.01). We set M = I_n and λ = 3. The second dataset is the rescaled aquatic toxicity dataset by Cassotti et al. [6], which uses 8 molecular descriptors of 546 chemicals to predict quantitative acute aquatic toxicity towards Daphnia Magna. We use M = I and λ = 2 for this dataset. The third dataset is generated with A ∈ R^{1000×500} drawn from a Gaussian N(0, Σ) where Σ_{i,j} = 2^{−|i−j|/10}, M rank-deficient with positive eigenvalues sampled from [0.2, 1.8], and λ = 1.5. These three datasets represent cases with low, medium, and high condition numbers, respectively.
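The first synthetic dataset described above can be reproduced along the following lines; the random seed and the exact way the noise is drawn are our own choices and are not specified in the text.

```python
import numpy as np

def make_gaussian_rls_dataset(n=1000, m=500, noise_std=0.1, seed=0):
    """Rows of A are standard Gaussian, y0 = A x* + eps with x* ~ N(0,1) and eps ~ N(0, 0.01)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, m))
    x_star = rng.standard_normal(m)
    eps = noise_std * rng.standard_normal(n)   # entrywise variance 0.01
    y0 = A @ x_star + eps
    M = np.eye(n)                              # M = I_n; lam = 3 for this dataset
    return A, y0, M, x_star
```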
Evaluation. We compare four algorithms: AGDA, Stoc-AGDA, VR-AGDA and extragradient (EG) with fine-tuned stepsizes. For Stoc-AGDA, we choose constant stepsizes to form a fair comparison with the other two. We report the potential function value, i.e., Pt described in our theorems, and distance to the limit point ‖(xt, yt) − (x∗, y∗)‖2. These errors are plotted against the number of gradient evaluations normalized by n (i.e., number of full gradients). Results are reported in Figure 3. We observe that VR-AGDA and AGDA both exhibit linear convergence, and the speedup of VR-AGDA is fairly significant when the condition number is large, whereas Stoc-AGDA progresses fast at the beginning and stagnates later on. These numerical results clearly validate our theoretical findings. EG performs poorly in this example.
5.2 Generative adversarial imitation learning for LQR
The optimal control problem for LQR can be formulated as [18]:
minimize_{π_t}  E_{x0∼D} Σ_{t=0}^∞ ( x_tᵀ Q x_t + u_tᵀ R u_t )   such that   x_{t+1} = A x_t + B u_t,  u_t = π_t(x_t),
where x_t ∈ R^d is a state, u_t ∈ R^k is a control, D is the distribution of the initial state x0, and π_t is a policy. It is known that the optimal policy is linear: u_t = −K* x_t, where K* ∈ R^{k×d}. If we parametrize the policy in the linear form u_t = −K x_t, the problem can be written as:
min_K C(K; Q, R) := E_{x0∼D} [ Σ_{t=0}^∞ ( x_tᵀ Q x_t + (K x_t)ᵀ R (K x_t) ) ],
where the trajectory is induced by the LQR dynamics and the policy K. In generative adversarial imitation learning for LQR, the trajectories induced by an expert policy K_E are observed and part of the goal is to learn the cost function parameters Q and R from the expert. This can be formulated as a minimax problem [5]:
min_K max_{(Q,R)∈Θ} { m(K, Q, R) := C(K; Q, R) − C(K_E; Q, R) − Φ(Q, R) },
where Θ = {(Q, R) : α_Q I ⪯ Q ⪯ β_Q I, α_R I ⪯ R ⪯ β_R I} and Φ is a strongly-convex regularizer. We sample n initial points x_0^{(1)}, x_0^{(2)}, ..., x_0^{(n)} from D and approximate C(K; Q, R) by the sample average C_n(K; Q, R) := (1/n) Σ_{i=1}^n [ Σ_{t=0}^∞ ( x_tᵀ Q x_t + u_tᵀ R u_t ) ]_{x0 = x_0^{(i)}}. We then consider:
min_K max_{(Q,R)∈Θ} { m_n(K, Q, R) := C_n(K; Q, R) − C_n(K_E; Q, R) − Φ(Q, R) }.  (9)
Note that mn satisfies the PL condition in terms of K [18], and mn is strongly-concave in terms of (Q,R), so the function satisfies the two-sided PL condition.
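For completeness, the sketch below shows one way to approximate C_n(K; Q, R) and the objective (9) by truncating the infinite horizon; it assumes the closed-loop dynamics are stable and is only illustrative, since the experiments use the exact compact forms from Fazel et al. [18] and Cai et al. [5] instead.

```python
import numpy as np

def lqr_cost(K, Q, R, A, B, init_states, horizon=200):
    """Sample-average LQR cost C_n(K; Q, R) with the policy u_t = -K x_t, truncated at `horizon`."""
    total = 0.0
    for x0 in init_states:
        x, cost = x0.copy(), 0.0
        for _ in range(horizon):
            u = -K @ x
            cost += x @ (Q @ x) + u @ (R @ u)
            x = A @ x + B @ u
        total += cost
    return total / len(init_states)

def gail_objective(K, Q, R, K_expert, A, B, init_states, Q_bar, R_bar, lam=1.0):
    """m_n(K, Q, R) = C_n(K;Q,R) - C_n(K_E;Q,R) - lam*(||Q - Q_bar||^2 + ||R - R_bar||^2), Eq. (9)."""
    reg = lam * (np.linalg.norm(Q - Q_bar) ** 2 + np.linalg.norm(R - R_bar) ** 2)
    return lqr_cost(K, Q, R, A, B, init_states) - lqr_cost(K_expert, Q, R, A, B, init_states) - reg
```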
In our experiment, we use Φ(Q, R) = λ(‖Q − Q̄‖^2 + ‖R − R̄‖^2) for some Q̄, R̄ and λ = 1. We generate datasets with different dimensions d and k: (1) d = 3, k = 2; (2) d = 20, k = 10; (3) d = 30, k = 20. The initial distribution D is N(0, I_d) and we sample n = 100 initial points. The exact gradients can be computed based on the compact forms established in Fazel et al. [18] and Cai et al. [5]. We compare AGDA and VR-AGDA under fine-tuned stepsizes, and track their errors in terms of ‖K_t − K*‖^2 + ‖Q_t − Q*‖_F^2 + ‖R_t − R*‖_F^2. The result is reported in Figure 4, which again indicates that VR-AGDA significantly outperforms AGDA.
6 Conclusion
In this paper, we identify a subclass of nonconvex-nonconcave minimax problems, characterized by the so-called two-sided PL condition, for which AGDA and Stoc-AGDA converge to global saddle points. We also propose the first linearly convergent variance-reduced AGDA algorithm that is provably faster than AGDA for this subclass of minimax problems. We hope this work can shed some light on the understanding of nonconvex-nonconcave minimax optimization: (1) different learning rates for the two players are essential in GDA algorithms with alternating updates; (2) convexity-concavity is not a watershed for guaranteeing global convergence of GDA algorithms.
Acknowledgments and Disclosure of Funding
This work was supported in part by ONR grant W911NF-15-1-0479, NSF CCF-1704970, and NSF CMMI-1761699.
Broader Impact
With the boom of neural networks in every corner of machine learning, the understanding of nonconvex optimization, and especially minimax optimization, becomes increasingly important. On one hand, the surge of interest in generative adversarial networks (GANs) has brought revolutionary success in many practical applications such as face synthesis, text-to-image synthesis, and text generation. On the other hand, even the simplest algorithms such as gradient descent ascent (GDA), although widely adopted by practitioners and researchers in the field, lack theoretical understanding. It is imperative to develop a strong fundamental understanding of the success of these simple algorithms in the nonconvex regime, both to expand the usability of the methods and to accelerate future deployment in a principled and interpretable manner.
Theory. This paper takes an initial and substantial step towards the understanding of nonconvex-nonconcave min-max optimization problems with "hidden convexity", as well as the convergence of the simplest alternating GDA algorithm. Despite its popularity, this algorithm has not been carefully analyzed even in the convex regime. The theory developed in this work helps explain when and why GDA performs well, how to choose stepsizes, and how to improve GDA properly. These are obviously basic yet important questions that need to be addressed in order to guide future development.
Applications. The downstream applications include, but are not limited to, generative adversarial networks, the actor-critic game in reinforcement learning, robust machine learning and control, and other applications in games and social economics. This work could potentially inspire more interest in broadening the applicability of GDA in practice. | 1. What is the main contribution of the paper regarding AGDA and Stoc-AGDA under the two-sided Polyak-Łojasiewicz condition?
2. What are the strengths of the proposed variance reduction version of AGDA?
3. What are the weaknesses of the paper, particularly in the technical aspects?
4. Do you have any questions regarding the proof procedure and results under the two-sided Polyak-Łojasiewicz condition?
5. How does the paper's approach differ from existing techniques in nonconvex optimization under PL setting? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper studies AGDA/Stoc-AGDA under the two-sided Polyak-Łojasiewicz (PL) condition. Moreover, this paper proposes a variance-reduced version of AGDA and achieves better complexity results. The motivation and the results are interesting. However, my main concerns lie in the technical part. For the two-sided PL condition, the proof procedure and results can be expected by following the proof steps in the setting of one-sided Polyak-Łojasiewicz for min-max problems, i.e., [35]. For the VR-AGDA part, the variance reduction technique has been well studied in nonconvex optimization under the PL setting (i.e., [r1] and so on). Hence, it appears that extending these existing techniques to the minimax problem does not seem to involve much new technical development. Based on my evaluation, I suggest weakly rejecting this paper. However, I am happy to improve my score if my questions in the weaknesses part are satisfactorily addressed. [35] M. Nouiehed, M. Sanjabi, T. Huang, J. D. Lee, and M. Razaviyayn. Solving a class of nonconvex min-max games using iterative first order methods. In Advances in Neural Information Processing Systems, pages 14905–14916, 2019. [r1] Z. Li and J. Li, A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization. In Advances in Neural Information Processing Systems, 2018. ---------------------- post rebuttal comments The authors' response clarified their technical novelty, and I am happy to improve my rating to acceptance.
Strengths
1. The paper studies AGDA-type algorithms under the two-sided Polyak-Łojasiewicz condition. The paper shows that AGDA with properly chosen constant stepsizes converges globally to a saddle point at a linear rate of O((1-\kappa^{-3})^t). 2. The paper also explores variance reduction for AGDA, and shows that VR-AGDA achieves the complexity of O((n + n^{2/3}\kappa^3)log(1/\epsilon)), which improves over previous results.
Weaknesses
I have the following concerns/questions: 1. Since the minimization problem with the PL condition (i.e., the one-sided PL problem) has already been studied in the literature, what exactly is the technical difficulty in generalizing the analysis under the one-sided PL condition to the two-sided PL condition? 2. For the VR-AGDA part, since SVRG for the minimization problem under PL has already been studied in the literature, what exactly is the technical difficulty in generalizing such analysis to VR-AGDA under the two-sided PL condition? What is new in the technical development in this paper? How is it different from the typical minimax techniques? It would be great if the authors could point out their technical novelties in specific proof steps. I am willing to improve my score if the above questions are satisfactorily addressed.
NIPS | Title
Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems
Abstract
Nonconvex minimax problems appear frequently in emerging machine learning applications, such as generative adversarial networks and adversarial learning. Simple algorithms such as the gradient descent ascent (GDA) are the common practice for solving these nonconvex games and receive lots of empirical success. Yet, it is known that these vanilla GDA algorithms with constant stepsize can potentially diverge even in the convex-concave setting. In this work, we show that for a subclass of nonconvex-nonconcave objectives satisfying a so-called two-sided Polyak-Łojasiewicz inequality, the alternating gradient descent ascent (AGDA) algorithm converges globally at a linear rate and the stochastic AGDA achieves a sublinear rate. We further develop a variance reduced algorithm that attains a provably faster rate than AGDA when the problem has the finite-sum structure.
1 Introduction
We consider minimax optimization problems of the form
min_{x∈R^{d1}} max_{y∈R^{d2}} f(x, y),  (1)
where f(x, y) is a possibly nonconvex-nonconcave function. Recent emerging applications in machine learning further stimulate a surge of interest in minimax problems. For example, generative adversarial networks (GANs) [23] can be viewed as a two-player game between a generator that produces synthetic data and a discriminator that differentiates between true and synthetic data. Other applications include reinforcement learning [9, 10, 11], robust optimization [42, 43], adversarial machine learning [54, 37], and so on. In many of these applications, f(x, y) may be stochastic, namely, f(x, y) = E[F (x, y; ξ)], which corresponds to the expected loss of some random data ξ; or f(x, y) may have the finite-sum structure, namely, f(x, y) = 1n ∑n i=1 fi(x, y), which corresponds to the empirical loss over n data points.
The most frequently used methods for solving minimax problems are the gradient descent ascent (GDA) algorithms (or their stochastic variants), with either simultaneous or alternating updates of the primal-dual variables, referred to as SGDA and AGDA, respectively. While these algorithms have received much empirical success especially in adversarial training, it is known that GDA algorithms with constant stepsizes could fail to converge even for the bilinear games [22, 40]; when they do converge, the stable limit point may not be a local Nash equilibrium [13, 38]. On the other hand, GDA algorithms can converge linearly to the saddle point for strongly-convex-strongly-concave functions [17]. Moreover, for many simple nonconvex-nonconcave objective functions, such as, f(x, y) = x2 + 3 sin2 x sin2 y − 4y2 − 10 sin2 y, we observe that GDA algorithms with constant
stepsizes converge to the global Nash equilibrium (see Figure 1). These facts naturally raise a question: Is there a general condition under which GDA algorithms converge to the global optima?
Furthermore, the use of variance reduction techniques has played a prominent role in improving the convergence over stochastic or batch algorithms for both convex and nonconvex minimization problems [27, 52, 53, 58]. However, when it comes to the minimax problems, there are limited results, except under convex-concave setting [49, 15]. This leads to another open question: Can we improve GDA algorithms for nonconvex-nonconcave minimax problems?
1.1 Our contributions
In this paper, we address these two questions and specifically focus on the alternating gradient descent ascent, namely AGDA. This is due to several considerations. First of all, using alternating updates of GDA is more stable than simultaneous updates [22, 2] and often converges faster in practice. Note that for a convex-concave matrix game, SGDA may diverge while AGDA is proven to always have bounded iterates [22]. See Figure 2 for a simple illustration. Secondly, AGDA is widely used for training GANs and other minimax problems in practice; see e.g., [33, 41]. Yet there is a lack of discussion on the convergence of AGDA for general minimax problems in the literature, even for the favorable strongly-convex-strongly-concave setting. Alternating updating schemes are perceived more challenging to analyze than simultaneous updates; the latter treats two variables equally and has been extensively studied in vast literature of variational inequality. Our main contributions are summarized as follows.
Two-sided PL condition. First, we identify a general condition that relaxes the convex-concavity requirement of the objective function while still guaranteeing global convergence of AGDA and stochastic AGDA (Stoc-AGDA). We call this the two-sided PL condition, which requires that both players’ utility functions satisfy the Polyak-Łojasiewicz (PL) inequality [50]. The two-sided PL condition is very general and is satisfied by many important classes of functions: (a) all strongly-convex-strongly-concave functions; (b) all PL-strongly-concave functions (discussed in [24]); and (c) many nonconvex-nonconcave objectives. Such conditions also hold true for various applications, including robust least square, generative adversarial imitation learning for linear quadratic regulator (LQR) dynamics [5], zero-sum linear quadratic games [63], and potentially many others in adversarial learning [14], robust phase retrieval [55, 64], robust control [18], and so on. We first investigate the landscape of objectives under the two-sided PL condition. In particular, we show that the three notions of optimality (saddle point, minimax point, and stationary point) are equivalent.
Global convergence of AGDA. We show that under the two-sided PL condition, AGDA with proper constant stepsizes converges globally to a saddle point at a linear rate of O((1 − κ^{-3})^t), while Stoc-AGDA with proper diminishing stepsizes converges to a saddle point at a sublinear rate of O(κ^5/t), where κ is the underlying condition number. To the best of our knowledge, this is the first result on the global convergence of a class of nonconvex-nonconcave problems. In contrast, most previous work deals with nonconvex-concave problems and obtains convergence to stationary points. On the other hand, because all strongly-convex-strongly-concave and PL-strongly-concave functions naturally satisfy the two-sided PL condition, our analysis fills the theoretical gap with the first convergence results for AGDA under these settings.
Variance reduced algorithm. For minimax problems with the finite-sum structure, we introduce a variance-reduced AGDA algorithm (VR-AGDA) that leverages the idea of the stochastic variance reduced gradient (SVRG) [27, 52] with alternating updates. We prove that VR-AGDA achieves the complexity of O((n + n^{2/3}κ^3) log(1/ε)), which improves over the O(nκ^3 log(1/ε)) complexity of AGDA and the O(κ^5/ε) complexity of Stoc-AGDA when applied to finite-sum minimax problems. Our numerical experiments further demonstrate that VR-AGDA performs significantly better than AGDA and Stoc-AGDA, especially for problems with large condition numbers. To the best of our knowledge, this is the first work to provide a variance-reduced algorithm and theoretical guarantees in the nonconvex-nonconcave regime of minimax optimization. In contrast, most previous variance-reduced algorithms require full or partial strong convexity and only apply to simultaneous updates.
Nonconvex-PL games. Lastly, as a side contribution, we show that for a broader class of nonconvex-nonconcave problems under only a one-sided PL condition, AGDA converges to an ε-stationary point within O(ε^{−2}) iterations, and is thus optimal among all first-order algorithms. Our result shaves off a logarithmic factor from the best-known rate achieved by the multi-step GDA algorithm [47]. This directly implies the same convergence rate on nonconvex-strongly-concave objectives, and to the best of our knowledge, we are the first to show the convergence of AGDA on this class of functions. Due to the page limitation, we defer this result to the appendix.
1.2 Related work
Nonconvex minimax problems. There has been a recent surge in research on solving minimax optimization beyond the convex-concave regime [54, 8, 51, 56, 30, 47, 1, 32, 3, 48], but these works differ from ours in various respects. Most of them focus on the nonconvex-concave regime and aim for convergence to stationary points of minimax problems [8, 54, 31, 56]. Algorithms in these works require solving the inner maximization or some sub-problems with high accuracy, which is different from AGDA. Lin et al. [30] proposed an inexact proximal point method to find an ε-stationary point for a class of weakly-convex-weakly-concave minimax problems. Their convergence result relies on assuming the existence of a solution to the corresponding Minty variational inequality, which is hard to verify. Abernethy et al. [1] showed the linear convergence of a second-order iterative algorithm, called Hamiltonian gradient descent, for a subclass of "sufficiently bilinear" functions. Very recently, Xu et al. [60] and Boţ and Böhm [4] analyzed AGDA in the nonconvex-(strongly-)concave setting. There is also a line of work on understanding the dynamics in minimax games [39, 20, 19, 21, 12, 25].
Variance-reduced minimax optimization. Palaniappan and Bach [49], Luo et al. [34], Chavdarova et al. [7] provided linear-convergent algorithms for strongly-convex-strongly-concave objectives, based on simultaneous updates. Du and Hu [15] extended the result to convex-strongly-concave objectives with full-rank coupling bilinear term. In contrast, we are dealing with a much broader class of objectives that are possibly nonconvex-nonconcave. We point out that Luo et al. [35] and Xu et al. [59] recently introduced variance-reduced algorithms for finding the stationary point of nonconvex-strongly-concave problems, which is again different from our setting.
2 Global optima and two-sided PL condition
Throughout this paper, we assume that the function f(x, y) in (1) is continuously differentiable and has Lipschitz gradient. Here ‖ · ‖ is used to denote the Euclidean norm. Assumption 1 (Lipschitz gradient). There exists a positive constant l > 0 such that
max{‖∇yf (x1, y1)−∇yf (x2, y2)‖ , ‖∇xf (x1, y1)−∇xf (x2, y2)‖} ≤ l[‖x1 − x2‖+‖y1 − y2‖],
holds for all x1, x2 ∈ Rd1 , y1, y2 ∈ Rd2 .
We now define three notions of optimality for minimax problems. The most direct notion of optimality is global minimax point, at which x∗ is an optimal solution to the function g(x) := max_y f(x, y) and y∗ is an optimal solution to max_y f(x∗, y). In the two-player zero-sum game, the notion of saddle point is also widely used [57, 44]. For a saddle point (x∗, y∗), x∗ is an optimal solution to min_x f(x, y∗) and y∗ is an optimal solution to max_y f(x∗, y).
Definition 1 (Global optima).
1. (x∗, y∗) is a global minimax point, if for any (x, y) : f(x∗, y) ≤ f(x∗, y∗) ≤ maxy′ f(x, y′). 2. (x∗, y∗) is a saddle point, if for any (x, y) : f(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗).
3. (x∗, y∗) is a stationary point, if : ∇xf(x∗, y∗) = ∇yf(x∗, y∗) = 0.
For general nonconvex-nonconcave minimax problems, these three notions of optimality are not necessarily equivalent. A stationary point may not be a saddle point or a global minimax point; a global minimax point may not be a saddle point or a stationary point. Note that for minimax problems, a saddle point or a global minimax point may not always exist. However, since our goal in this paper is to find global optima, in the remainder of the paper, we assume that a saddle point always exists. Assumption 2 (Existence of saddle point). The objective function f has at least one saddle point. We also assume that for any fixed y, min_{x∈R^{d1}} f(x, y) has a nonempty solution set and a finite optimal value, and for any fixed x, max_{y∈R^{d2}} f(x, y) has a nonempty solution set and a finite optimal value.
For unconstrained minimization problems: minx∈Rn f(x), Polyak [50] proposed Polyak-Łojasiewicz (PL) condition, which is sufficient to show global linear convergence for gradient descent without assuming convexity. Specifically, a function f(·) satisfies PL condition if it has a nonempty solution set and a finite optimal value f∗, and there exists some µ > 0 such that ‖∇f(x)‖2 ≥ 2µ(f(x) − f∗),∀x. As discussed in Karimi et al. [28], PL condition is weaker, or not stronger, than other well-known conditions that guarantee linear convergence for gradient descent, such as error bounds (EB) [36], weak strong convexity (WSC) [45] and restricted secant inequality (RSI) [61].
We introduce a straightforward generalization of the PL condition to the minimax problem: function f(x, y) satisfies the PL condition with constant µ1 with respect to x, and -f satisfies PL condition with constant µ2 with respect to y. We formally state this in the following definition. Definition 2 (Two-sided PL condition). A continuously differentiable function f(x, y) satisfies the two-sided PL condition if there exist constants µ1, µ2 > 0 such that: ∀x, y,
‖∇xf(x, y)‖^2 ≥ 2µ1 [ f(x, y) − min_x f(x, y) ],   ‖∇yf(x, y)‖^2 ≥ 2µ2 [ max_y f(x, y) − f(x, y) ].
The two-sided PL condition does not imply convexity-concavity, and it is a much weaker condition than strong-convexity-strong-concavity. In Lemma 2.1, we show that three notions of optimality are equivalent under the two-sided PL condition. Note that they may not be unique. Lemma 2.1. If the objective function f(x, y) satisfies the two-sided PL condition, then the following holds true:
(saddle point)⇔ (global minimax)⇔ (stationary point).
Below we give some examples that satisfy this condition. Example 1. The nonconvex-nonconcave function in the introduction, f(x, y) = x^2 + 3 sin^2(x) sin^2(y) − 4y^2 − 10 sin^2(y), satisfies the two-sided PL condition with µ1 = 1/16, µ2 = 1/11 (see the appendix). Example 2. f(x, y) = F(Ax, By), where F(·, ·) is strongly-convex-strongly-concave and A and B are arbitrary matrices, satisfies the two-sided PL condition. Example 3. Generative adversarial imitation learning for LQR can be formulated as min_K max_θ m(K, θ), where m is strongly concave in θ and satisfies the PL condition in K (see [5] for more details), thus satisfying the two-sided PL condition. Example 4. In a zero-sum linear quadratic (LQ) game, the system dynamics are characterized by x_{t+1} = A x_t + B u_t + C v_t, where x_t is the system state and u_t, v_t are the control inputs of the two players. After parameterizing the policies of the two players by u_t = −K x_t and v_t = −L x_t, the value function is
C(K, L) = E_{x0∼D} { Σ_{t=0}^∞ [ x_tᵀ Q x_t + (K x_t)ᵀ R_u (K x_t) − (L x_t)ᵀ R_v (L x_t) ] },
where D is the distribution of the initial state x0 (see [63] for more details). Player 1 (player 2) wants to minimize (maximize) C(K, L), and the game is formulated as min_K max_L C(K, L). Fixing L (or K), C(·, L) (or −C(K, ·)) becomes the objective of an LQR problem, and therefore satisfies the PL condition [18] when argmin_K C(K, L) and argmax_L C(K, L) are well-defined.
The two-sided PL condition includes rich classes of functions, including: (a) all strongly-convex-strongly-concave functions; (b) some convex-concave functions (e.g., Example 2); (c) some nonconvex-strongly-concave functions (e.g., Example 3); (d) some nonconvex-nonconcave functions (e.g., Examples 1 and 4). Under the two-sided PL condition, the function g(x) := max_y f(x, y) satisfies the PL condition with µ1 (see the appendix). Moreover, g is also L-smooth with L := l + l^2/µ2 [47]. Finally, we denote µ = min(µ1, µ2) and κ = l/µ, which represents the condition number of the problem.
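The constants claimed in Example 1 can be sanity-checked numerically; the snippet below evaluates both PL inequalities on a grid, with the inner min over x and max over y themselves approximated on a one-dimensional grid, so this is a heuristic check rather than a proof.

```python
import numpy as np

def f(x, y):
    return x**2 + 3*np.sin(x)**2*np.sin(y)**2 - 4*y**2 - 10*np.sin(y)**2

def fx(x, y):  # partial derivative in x
    return 2*x + 3*np.sin(2*x)*np.sin(y)**2

def fy(x, y):  # partial derivative in y
    return 3*np.sin(x)**2*np.sin(2*y) - 8*y - 10*np.sin(2*y)

grid = np.linspace(-5.0, 5.0, 401)
mu1, mu2 = 1.0/16.0, 1.0/11.0
worst = np.inf
for x in np.linspace(-3.0, 3.0, 61):
    for y in np.linspace(-3.0, 3.0, 61):
        min_x = f(grid, y).min()          # approximate min over x for fixed y
        max_y = f(x, grid).max()          # approximate max over y for fixed x
        lhs1, rhs1 = fx(x, y)**2, 2*mu1*(f(x, y) - min_x)
        lhs2, rhs2 = fy(x, y)**2, 2*mu2*(max_y - f(x, y))
        worst = min(worst, lhs1 - rhs1, lhs2 - rhs2)
print("smallest slack over the grid (should be >= 0 up to grid error):", worst)
```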
3 Global convergence of AGDA and Stoc-AGDA
In this section, we establish the convergence rate of the stochastic alternating gradient descent ascent (Stoc-AGDA) algorithm, which we present in Algorithm 1, under the two-sided PL condition. StocAGDA updates variables x and y sequentially using stochastic gradient descent/ascent steps. Here we make standard assumptions about stochastic gradients Gx(x, y, ξ) and Gy(x, y, ξ). Assumption 3 (Bounded variance). Gx(x, y, ξ) and Gy(x, y, ξ) are unbiased stochastic estimators of∇xf(x, y) and∇yf(x, y) and have variances bounded by σ2 > 0.
Algorithm 1 Stoc-AGDA
1: Input: (x_0, y_0), stepsizes {τ_1^t}_t > 0, {τ_2^t}_t > 0
2: for all t = 0, 1, 2, ... do
3:   Draw two i.i.d. samples ξ_1^t, ξ_2^t ∼ P(ξ)
4:   x_{t+1} ← x_t − τ_1^t G_x(x_t, y_t, ξ_1^t)
5:   y_{t+1} ← y_t + τ_2^t G_y(x_{t+1}, y_t, ξ_2^t)
6: end for
Note that Stoc-AGDA with constant stepsizes (i.e., τ_1^t = τ1 and τ_2^t = τ2) and noiseless stochastic gradient (i.e., σ^2 = 0) reduces to AGDA:
xt+1 = xt − τ1∇xf(xt, yt), yt+1 = yt + τ2∇yf(xt+1, yt). (2)
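A direct Python transcription of Algorithm 1 is sketched below; the gradient-oracle interface and the Gaussian noise model are our own choices, and with noise_std = 0 and constant stepsizes the loop reduces to the deterministic AGDA updates in (2).

```python
import numpy as np

def stoc_agda(grad_x, grad_y, x0, y0, tau1, tau2, n_iters, noise_std=0.0, seed=0):
    """Stochastic alternating GDA (Algorithm 1) with optional Gaussian gradient noise."""
    rng = np.random.default_rng(seed)
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    trajectory = [(x.copy(), y.copy())]
    for t in range(n_iters):
        t1 = tau1(t) if callable(tau1) else tau1   # allows diminishing stepsizes
        t2 = tau2(t) if callable(tau2) else tau2
        gx = grad_x(x, y) + noise_std * rng.standard_normal(x.shape)
        x = x - t1 * gx                            # descent step on x
        gy = grad_y(x, y) + noise_std * rng.standard_normal(y.shape)
        y = y + t2 * gy                            # ascent step on y, using the new x
        trajectory.append((x.copy(), y.copy()))
    return x, y, trajectory
```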
We measure the inaccuracy of (xt, yt) through the potential function
Pt := at + λ · bt, (3)
where at = E[g(xt) − g∗], bt = E[g(xt) − f(xt, yt)] and the balance parameter λ > 0 will be specified later in the theorems. Recall that g(x) := maxy f(x, y) and g∗ = minx g(x). This metric is driven by the definition of minimax point, because g(x)− g∗ and g(x)− f(x, y) are non-negative for any (x, y), and both equal to 0 if and only if (x, y) is a minimax point.
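As an illustration, the following sketch runs the deterministic AGDA updates (2) on the toy objective from the introduction and reports the potential P_t with λ = 1/10 as in the theorems; g(x) = max_y f(x, y) is approximated on a one-dimensional grid, and the stepsizes are hand-picked rather than the theoretical ones.

```python
import numpy as np

def f(x, y):
    return x**2 + 3*np.sin(x)**2*np.sin(y)**2 - 4*y**2 - 10*np.sin(y)**2

def grad_x(x, y):
    return 2*x + 3*np.sin(2*x)*np.sin(y)**2

def grad_y(x, y):
    return 3*np.sin(x)**2*np.sin(2*y) - 8*y - 10*np.sin(2*y)

ys = np.linspace(-3.0, 3.0, 2001)
g = lambda x: f(x, ys).max()        # g(x) = max_y f(x, y), grid approximation
g_star = 0.0                        # for this example the saddle point is (0, 0) with g* = 0

x, y, lam = 1.0, 1.0, 0.1           # initial point and balance parameter lambda = 1/10
tau1, tau2 = 0.01, 0.05             # hand-picked stepsizes (tau1 < tau2, as the theory suggests)
for t in range(2000):
    x = x - tau1 * grad_x(x, y)
    y = y + tau2 * grad_y(x, y)
P_t = (g(x) - g_star) + lam * (g(x) - f(x, y))
print("final iterate:", (x, y), "potential P_t:", P_t)
```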
Stoc-AGDA with constant stepsizes. We first consider Stoc-AGDA with constant stepsizes. We show that {(x_t, y_t)}_t will converge linearly to a neighbourhood of the optimal set.
Theorem 3.1. Suppose Assumptions 1, 2, 3 hold and f(x, y) satisfies the two-sided PL condition with µ1 and µ2. Define P_t := a_t + (1/10)·b_t. If we run Algorithm 1 with τ_2^t = τ2 ≤ 1/l and τ_1^t = τ1 ≤ µ2^2 τ2 / (18 l^2), then
P_t ≤ (1 − (1/2) µ1 τ1)^t P_0 + δ,  (4)
where δ = [ (1 − µ2τ2)(L + l)τ1^2 + l τ2^2 + 10 L τ1^2 ] / (10 µ1 τ1) · σ^2.
Remark 1. In the theorem above, we choose τ1 smaller than τ2, with τ1/τ2 ≤ µ2^2/(18 l^2), because our potential function is not symmetric in x and y. Another reason is that we want y_t to approach y*(x_t) ∈ argmax_y f(x_t, y) faster, so that ∇xf(x_t, y_t) is a better approximation of ∇g(x_t) (recall ∇g(x) = ∇xf(x, y*(x)); see Nouiehed et al. [47]). Indeed, it is common to use different learning rates for x and y in GDA algorithms for nonconvex minimax problems; see, e.g., Jin et al. [26] and Lin et al. [31]. Note that the ratio between these two learning rates is quite crucial here. We also observe empirically that when the same learning rate is used, even if small, the algorithm may not converge to saddle points.
Remark 2. When t → ∞, P_t → δ. If τ1 → 0 and τ2^2/τ1 → 0, the error term δ will go to 0. When using smaller stepsizes, the algorithm reaches a smaller neighbourhood of the saddle point, yet at the cost of a slower rate, as the contraction factor also deteriorates.
Linear convergence of AGDA. Setting σ^2 = 0, it follows immediately from the previous theorem that AGDA converges linearly under the two-sided PL condition. Moreover, we have the following:
Theorem 3.2. Suppose Assumptions 1, 2 hold and f(x, y) satisfies the two-sided PL condition with µ1 and µ2. Define P_t := a_t + (1/10)·b_t. If we run AGDA with τ1 = µ2^2/(18 l^3) and τ2 = 1/l, then
P_t ≤ (1 − µ1 µ2^2/(36 l^3))^t P_0.  (5)
Furthermore, {(x_t, y_t)}_t converges to some saddle point (x∗, y∗), and
‖x_t − x∗‖^2 + ‖y_t − y∗‖^2 ≤ α (1 − µ1 µ2^2/(36 l^3))^t P_0,  (6)
where α is a constant depending on µ1, µ2 and l.
The above theorem implies that the limit point of {(x_t, y_t)}_t is a saddle point and that the distance to the saddle point decreases in the order of O((1 − κ^{-3})^t). Note that in the special case when the objective is strongly-convex-strongly-concave, it is known that SGDA (GDA with simultaneous updates) achieves an O(κ^2 log(1/ε)) iteration complexity (see, e.g., Facchinei and Pang [17]), and this can be further improved to match the lower complexity bound O(κ log(1/ε)) [62] by extragradient methods [29] or Nesterov’s dual extrapolation [46]. However, these results heavily rely on the strong monotonicity of the corresponding variational inequality, which does not apply here. Our analysis technique is totally different. Since the general two-sided PL condition covers a much broader class of functions, we do not expect to achieve the same dependency on κ, especially for a simple algorithm like AGDA. Note that even the multi-step GDA in [47] results in the same κ^3 dependency, but without a linear convergence rate. Hence, our conjecture is that the κ^3 dependency of AGDA cannot be improved without modifying the algorithm. We leave this investigation for future work.
Stoc-AGDA with diminishing stepsizes. While Stoc-AGDA with constant stepsizes only converges linearly to a neighbourhood of the saddle point, Stoc-AGDA with diminishing stepsizes converges to the saddle point, but at a sublinear rate O(1/t).
Theorem 3.3. Suppose Assumptions 1, 2, 3 hold and f(x, y) satisfies the two-sided PL condition with µ1 and µ2. Define P_t = a_t + (1/10)·b_t. If we run Algorithm 1 with stepsizes τ_1^t = β/(γ + t) and τ_2^t = 18 l^2 β / (µ2^2 (γ + t)) for some β > 2/µ1 and γ > 0 such that τ_1^1 ≤ min{1/L, µ2^2/(18 l^2)}, then we have
P_t ≤ ν / (γ + t),  where ν := max{ γ P_0, [ (L + l)β^2 + 18^2 l^5 β^2/µ2^4 + 10 L β^2 ] σ^2 / (10 µ1 β − 20) }.  (7)
Remark 3. Note the rate is affected by ν, and the first term in the definition of ν is controlled by the initial point. In practice, we can find a good initial point by running Stoc-AGDA with constant stepsizes so that only the second term in the definition of ν matters. Then by choosing β = 3/µ1, we have ν = O(l^5 σ^2 / (µ1^2 µ2^4)). Thus, the convergence rate of Stoc-AGDA is O(κ^5 σ^2 / (µ t)).
4 Stochastic variance-reduced AGDA algorithm
In this section, we study the minimax problem with the finite-sum structure: min_x max_y f(x, y) = (1/n) Σ_{i=1}^n f_i(x, y), which arises ubiquitously in machine learning. We are especially interested in the
Algorithm 2 VR-AGDA
1: Input: (x̃_0, ỹ_0), stepsizes τ1, τ2, iteration numbers N, T
2: for all k = 0, 1, 2, ... do
3:   for all t = 0, 1, 2, ..., T − 1 do
4:     x_{t,0} = x̃_t, y_{t,0} = ỹ_t
5:     compute ∇xf(x̃_t, ỹ_t) = (1/n) Σ_{i=1}^n ∇xf_i(x̃_t, ỹ_t) and ∇yf(x̃_t, ỹ_t) = (1/n) Σ_{i=1}^n ∇yf_i(x̃_t, ỹ_t)
6:     for all j = 0 to N − 1 do
7:       sample i.i.d. indices i_j^1, i_j^2 uniformly from [n]
8:       x_{t,j+1} = x_{t,j} − τ1 [∇xf_{i_j^1}(x_{t,j}, y_{t,j}) − ∇xf_{i_j^1}(x̃_t, ỹ_t) + ∇xf(x̃_t, ỹ_t)]
9:       y_{t,j+1} = y_{t,j} + τ2 [∇yf_{i_j^2}(x_{t,j+1}, y_{t,j}) − ∇yf_{i_j^2}(x̃_t, ỹ_t) + ∇yf(x̃_t, ỹ_t)]
10:      end for
11:      x̃_{t+1} = x_{t,N}, ỹ_{t+1} = y_{t,N}
12:   end for
13:   choose (x^k, y^k) from {{(x_{t,j}, y_{t,j})}_{j=0}^{N−1}}_{t=0}^{T−1} uniformly at random
14:   x̃_0 = x^k, ỹ_0 = y^k
15: end for
case when n is large. We assume the overall objective function f(x, y) satisfies the two-sided PL condition with µ1 and µ2, but do not assume each fi to satisfy the two-sided PL condition. Instead of Assumption 1, we assume each component fi has Lipschitz gradients.
Assumption 4. Each fi has l-Lipschitz gradients.
If we run AGDA with full gradients to solve the finite-sum minimax problem, the total complexity for finding an ε-optimal solution is O(nκ^3 log(1/ε)) by Theorem 3.2. Despite the linear convergence, the per-iteration cost is high and the complexity can be huge when the number of components n and the condition number κ are large. Instead, if we run Stoc-AGDA, this leads to a total complexity of O(κ^5 σ^2/(µ ε)) by Remark 3, which has a worse dependence on ε.
Motivated by the recent success of the stochastic variance reduced gradient (SVRG) technique [27, 52, 49], we introduce the VR-AGDA algorithm (presented in Algorithm 2), which combines AGDA with SVRG so that linear convergence is preserved while the dependency on n and κ is improved. VR-AGDA can be viewed as applying SVRG to AGDA with restarting: at every epoch k, we restart the SVRG subroutine by initializing it with (x^k, y^k), which is selected uniformly at random from the iterates of the previous SVRG subroutine. This is partly inspired by the GD-SVRG algorithm for minimizing PL functions [52]. Notice that when T = 1, VR-AGDA reduces to a double-loop algorithm similar to the SVRG for saddle point problems proposed by Palaniappan and Bach [49], except for several notable differences: (i) we use alternating updates rather than simultaneous updates, (ii) as a result, we need to sample two independent indices rather than one at each iteration, and (iii) most importantly, we deal with possibly nonconvex-nonconcave objectives that satisfy the two-sided PL condition. The following two theorems capture the convergence of VR-AGDA under different parameter setups.
Theorem 4.1. Suppose Assumptions 2 and 4 hold and f(x, y) satisfies the two-sided PL condition with µ1 and µ2. Define P^k = a^k + (1/20)·b^k, where a^k = E[g(x^k) − g^*] and b^k = E[g(x^k) − f(x^k, y^k)]. If we run VR-AGDA with τ1 = β/(28κ^8·l), τ2 = β/(l·κ^6), N = ⌊αβ^{−2/3}κ^9(2 + 4β^{1/2}κ^{−3})^{−1}⌋ and T = 1, where α, β are constants irrelevant to l, n, µ1, µ2, then P^{k+1} ≤ (1/2)·P^k. This implies a total complexity of
O((n + κ^9) log(1/ε))
for VR-AGDA to achieve an ε-optimal solution.
Theorem 4.2. Under the same assumptions as Theorem 4.1, if we run VR-AGDA with τ1 = β/(28κ^2·l·n^{2/3}), τ2 = β/(l·n^{2/3}), N = ⌊αβ^{−2/3}·n·(2 + 4β^{1/2}n^{−1/3})^{−1}⌋, and T = ⌈κ^3·n^{−1/3}⌉, where α, β are constants irrelevant to l, n, µ1, µ2, then P^{k+1} ≤ (1/2)·P^k. This implies a total complexity of
O((n + n^{2/3}κ^3) log(1/ε))
for VR-AGDA to achieve an ε-optimal solution.
Remark 4. Theorems 4.1 and 4.2 differ in their choices of stepsizes and iteration numbers, which gives rise to different complexities. VR-AGDA with the second setting has a lower complexity than the first setting in the regime n ≤ κ^9, but the first setting allows for a simpler double-loop algorithm with T = 1. The two theorems imply that VR-AGDA always improves over AGDA. To the best of our knowledge, this is also the first theoretical analysis of variance-reduced algorithms with alternating updating rules for minimax optimization.
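A compact Python sketch of Algorithm 2 is given below; the per-component gradient oracles grad_x_i and grad_y_i are an assumed interface rather than part of the paper, and the stepsizes and the epoch lengths N, T should be set as in Theorem 4.1 or 4.2.

```python
import numpy as np

def vr_agda(grad_x_i, grad_y_i, n, x0, y0, tau1, tau2, N, T, n_epochs, seed=0):
    """VR-AGDA (Algorithm 2): SVRG-style variance-reduced alternating GDA with restarts."""
    rng = np.random.default_rng(seed)
    full_gx = lambda x, y: sum(grad_x_i(i, x, y) for i in range(n)) / n
    full_gy = lambda x, y: sum(grad_y_i(i, x, y) for i in range(n)) / n
    x_tilde, y_tilde = np.array(x0, float), np.array(y0, float)
    for _ in range(n_epochs):                          # outer restart loop (index k)
        iterates = []
        for _ in range(T):
            x, y = x_tilde.copy(), y_tilde.copy()
            mu_x, mu_y = full_gx(x_tilde, y_tilde), full_gy(x_tilde, y_tilde)  # full gradients at the snapshot
            for _ in range(N):
                i1, i2 = rng.integers(n), rng.integers(n)                      # two independent indices
                vx = grad_x_i(i1, x, y) - grad_x_i(i1, x_tilde, y_tilde) + mu_x
                x = x - tau1 * vx
                vy = grad_y_i(i2, x, y) - grad_y_i(i2, x_tilde, y_tilde) + mu_y
                y = y + tau2 * vy
                iterates.append((x.copy(), y.copy()))
            x_tilde, y_tilde = x.copy(), y.copy()                              # new snapshot x̃_{t+1} = x_{t,N}
        j = rng.integers(len(iterates))
        x_tilde, y_tilde = iterates[j]                                         # restart from a random iterate
    return x_tilde, y_tilde
```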
5 Numerical experiments
We present experiments on two applications: robust least square and imitation learning for LQR. We mainly focus on the comparison between AGDA, Stoc-AGDA, and VR-AGDA, which are the only algorithms with known theoretical guarantees. Because of their simplicity, only a few hyperparameters are involved, and they are tuned by grid search.
5.1 Robust least square
We consider least square problems with coefficient matrix A ∈ R^{n×m} and noisy vector y0 ∈ R^n subject to a bounded deterministic perturbation δ. Robust least square (RLS) minimizes the worst-case residual and can be formulated as [16]: min_x max_{δ: ‖δ‖≤ρ} ‖Ax − y‖^2, where δ = y0 − y. We consider RLS with a soft constraint:
min_x max_y F(x, y) := ‖Ax − y‖_M^2 − λ‖y − y0‖_M^2,  (8)
where we adopt the general M-(semi-)norm ‖x‖_M^2 = xᵀMx and M is positive semi-definite. F(x, y) satisfies the two-sided PL condition when λ > 1, because it can be written as the composition of a strongly-convex-strongly-concave function and an affine function (Example 2). However, F(x, y) is not strongly convex in x, and when M is not full-rank, it is not strongly concave in y.
Datasets. We use three datasets in the experiments, and two of them are generated in the same way as in Du and Hu [15]. We generate the first dataset with n = 1000 and m = 500 by sampling the rows of A from a Gaussian N(0, I) distribution and setting y0 = Ax* + ε, with the entries of x* drawn from the Gaussian N(0, 1) and those of ε from the Gaussian N(0, 0.01). We set M = I_n and λ = 3. The second dataset is the rescaled aquatic toxicity dataset by Cassotti et al. [6], which uses 8 molecular descriptors of 546 chemicals to predict quantitative acute aquatic toxicity towards Daphnia Magna. We use M = I and λ = 2 for this dataset. The third dataset is generated with A ∈ R^{1000×500} drawn from a Gaussian N(0, Σ) where Σ_{i,j} = 2^{−|i−j|/10}, M rank-deficient with positive eigenvalues sampled from [0.2, 1.8], and λ = 1.5. These three datasets represent cases with low, medium, and high condition numbers, respectively.
Evaluation. We compare four algorithms: AGDA, Stoc-AGDA, VR-AGDA and extragradient (EG) with fine-tuned stepsizes. For Stoc-AGDA, we choose constant stepsizes to form a fair comparison with the other two. We report the potential function value, i.e., Pt described in our theorems, and distance to the limit point ‖(xt, yt) − (x∗, y∗)‖2. These errors are plotted against the number of gradient evaluations normalized by n (i.e., number of full gradients). Results are reported in Figure 3. We observe that VR-AGDA and AGDA both exhibit linear convergence, and the speedup of VR-AGDA is fairly significant when the condition number is large, whereas Stoc-AGDA progresses fast at the beginning and stagnates later on. These numerical results clearly validate our theoretical findings. EG performs poorly in this example.
5.2 Generative adversarial imitation learning for LQR
The optimal control problem for LQR can be formulated as [18]:
minimize_{π_t}  E_{x0∼D} Σ_{t=0}^∞ ( x_tᵀ Q x_t + u_tᵀ R u_t )   such that   x_{t+1} = A x_t + B u_t,  u_t = π_t(x_t),
where x_t ∈ R^d is a state, u_t ∈ R^k is a control, D is the distribution of the initial state x0, and π_t is a policy. It is known that the optimal policy is linear: u_t = −K* x_t, where K* ∈ R^{k×d}. If we parametrize the policy in the linear form u_t = −K x_t, the problem can be written as:
min_K C(K; Q, R) := E_{x0∼D} [ Σ_{t=0}^∞ ( x_tᵀ Q x_t + (K x_t)ᵀ R (K x_t) ) ],
where the trajectory is induced by the LQR dynamics and the policy K. In generative adversarial imitation learning for LQR, the trajectories induced by an expert policy K_E are observed and part of the goal is to learn the cost function parameters Q and R from the expert. This can be formulated as a minimax problem [5]:
min_K max_{(Q,R)∈Θ} { m(K, Q, R) := C(K; Q, R) − C(K_E; Q, R) − Φ(Q, R) },
where Θ = {(Q, R) : α_Q I ⪯ Q ⪯ β_Q I, α_R I ⪯ R ⪯ β_R I} and Φ is a strongly-convex regularizer. We sample n initial points x_0^{(1)}, x_0^{(2)}, ..., x_0^{(n)} from D and approximate C(K; Q, R) by the sample average C_n(K; Q, R) := (1/n) Σ_{i=1}^n [ Σ_{t=0}^∞ ( x_tᵀ Q x_t + u_tᵀ R u_t ) ]_{x0 = x_0^{(i)}}. We then consider:
min_K max_{(Q,R)∈Θ} { m_n(K, Q, R) := C_n(K; Q, R) − C_n(K_E; Q, R) − Φ(Q, R) }.  (9)
Note that mn satisfies the PL condition in terms of K [18], and mn is strongly-concave in terms of (Q,R), so the function satisfies the two-sided PL condition.
In our experiment, we use Φ(Q, R) = λ(‖Q − Q̄‖^2 + ‖R − R̄‖^2) for some Q̄, R̄ and λ = 1. We generate datasets with different dimensions d and k: (1) d = 3, k = 2; (2) d = 20, k = 10; (3) d = 30, k = 20. The initial distribution D is N(0, I_d) and we sample n = 100 initial points. The exact gradients can be computed based on the compact forms established in Fazel et al. [18] and Cai et al. [5]. We compare AGDA and VR-AGDA under fine-tuned stepsizes, and track their errors in terms of ‖K_t − K*‖^2 + ‖Q_t − Q*‖_F^2 + ‖R_t − R*‖_F^2. The result is reported in Figure 4, which again indicates that VR-AGDA significantly outperforms AGDA.
6 Conclusion
In this paper, we identify a subclass of nonconvex-nonconcave minimax problems, characterized by the so-called two-sided PL condition, for which AGDA and Stoc-AGDA converge to global saddle points. We also propose the first linearly convergent variance-reduced AGDA algorithm that is provably faster than AGDA for this subclass of minimax problems. We hope this work can shed some light on the understanding of nonconvex-nonconcave minimax optimization: (1) different learning rates for the two players are essential in GDA algorithms with alternating updates; (2) convexity-concavity is not a watershed for guaranteeing global convergence of GDA algorithms.
Acknowledgments and Disclosure of Funding
This work was supported in part by ONR grant W911NF-15-1-0479, NSF CCF-1704970, and NSF CMMI-1761699.
Broader Impact
With the boom of neural networks in every corner of machine learning, the understanding of nonconvex optimization, and especially minimax optimization, becomes increasingly important. On one hand, the surge of interest in generative adversarial networks (GANs) has brought revolutionary success in many practical applications such as face synthesis, text-to-image synthesis, and text generation. On the other hand, even the simplest algorithms such as gradient descent ascent (GDA), although widely adopted by practitioners and researchers in the field, lack theoretical understanding. It is imperative to develop a strong fundamental understanding of the success of these simple algorithms in the nonconvex regime, both to expand the usability of the methods and to accelerate future deployment in a principled and interpretable manner.
Theory. This paper takes an initial and substantial step towards the understanding of nonconvex-nonconcave min-max optimization problems with "hidden convexity", as well as the convergence of the simplest alternating GDA algorithm. Despite its popularity, this algorithm has not been carefully analyzed even in the convex regime. The theory developed in this work helps explain when and why GDA performs well, how to choose stepsizes, and how to improve GDA properly. These are obviously basic yet important questions that need to be addressed in order to guide future development.
Applications. The downstream applications include, but are not limited to, generative adversarial networks, the actor-critic game in reinforcement learning, robust machine learning and control, and other applications in games and social economics. This work could potentially inspire more interest in broadening the applicability of GDA in practice. | 1. What is the focus and contribution of the paper regarding the alternating gradient descent ascent algorithm?
2. What are the strengths of the paper, particularly in its writing quality and experimental descriptions?
3. What are the weaknesses of the paper regarding practical problems and algorithm acceleration?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This is an interesting paper which studies the convergence of the alternating gradient descent ascent algorithm for two-sided PL min-max optimization problems.
Strengths
The paper is well-written and the contributions are explained clearly. The initial experiments in Figure 1 and Figure 2 describe the problem very well. To the best knowledge of the reviewer, this is the first paper analyzing two-sided PL problems rigorously.
Weaknesses
- The reviewer would appreciate some discussion on the possibility of accelerating the proposed algorithm and whether its rate is optimal. - The paper would have been improved if more discussion were included on what practical problems potentially satisfy the two-sided PL assumption (other than simple examples). ======== Edits after the rebuttal: I am lowering my score for two reasons: 1) My concern about the practicality of the two-sided PL condition is not fully addressed. I personally would have preferred seeing one concrete example of a two-sided PL min-max problem rather than many single-PL minimization examples. 2) The readability of the variance-reduced part: this was the concern of some other reviewers and, when I checked again, I agree with their comment. Also, one comment mentioned in the discussion between the reviewers concerns the specificity of the title. The title seems too generic, while it could be more to the point by mentioning "two-sided PL".
NIPS | Title
Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems
Abstract
Nonconvex minimax problems appear frequently in emerging machine learning applications, such as generative adversarial networks and adversarial learning. Simple algorithms such as the gradient descent ascent (GDA) are the common practice for solving these nonconvex games and receive lots of empirical success. Yet, it is known that these vanilla GDA algorithms with constant stepsize can potentially diverge even in the convex-concave setting. In this work, we show that for a subclass of nonconvex-nonconcave objectives satisfying a so-called two-sided Polyak-Łojasiewicz inequality, the alternating gradient descent ascent (AGDA) algorithm converges globally at a linear rate and the stochastic AGDA achieves a sublinear rate. We further develop a variance reduced algorithm that attains a provably faster rate than AGDA when the problem has the finite-sum structure.
1 Introduction
We consider minimax optimization problems of the form
min_{x∈R^{d1}} max_{y∈R^{d2}} f(x, y),  (1)
where f(x, y) is a possibly nonconvex-nonconcave function. Recent emerging applications in machine learning further stimulate a surge of interest in minimax problems. For example, generative adversarial networks (GANs) [23] can be viewed as a two-player game between a generator that produces synthetic data and a discriminator that differentiates between true and synthetic data. Other applications include reinforcement learning [9, 10, 11], robust optimization [42, 43], adversarial machine learning [54, 37], and so on. In many of these applications, f(x, y) may be stochastic, namely, f(x, y) = E[F (x, y; ξ)], which corresponds to the expected loss of some random data ξ; or f(x, y) may have the finite-sum structure, namely, f(x, y) = 1n ∑n i=1 fi(x, y), which corresponds to the empirical loss over n data points.
The most frequently used methods for solving minimax problems are the gradient descent ascent (GDA) algorithms (or their stochastic variants), with either simultaneous or alternating updates of the primal-dual variables, referred to as SGDA and AGDA, respectively. While these algorithms have received much empirical success especially in adversarial training, it is known that GDA algorithms with constant stepsizes could fail to converge even for the bilinear games [22, 40]; when they do converge, the stable limit point may not be a local Nash equilibrium [13, 38]. On the other hand, GDA algorithms can converge linearly to the saddle point for strongly-convex-strongly-concave functions [17]. Moreover, for many simple nonconvex-nonconcave objective functions, such as, f(x, y) = x2 + 3 sin2 x sin2 y − 4y2 − 10 sin2 y, we observe that GDA algorithms with constant
stepsizes converge to the global Nash equilibrium (see Figure 1). These facts naturally raise a question: Is there a general condition under which GDA algorithms converge to the global optima?
Furthermore, the use of variance reduction techniques has played a prominent role in improving the convergence over stochastic or batch algorithms for both convex and nonconvex minimization problems [27, 52, 53, 58]. However, when it comes to the minimax problems, there are limited results, except under convex-concave setting [49, 15]. This leads to another open question: Can we improve GDA algorithms for nonconvex-nonconcave minimax problems?
1.1 Our contributions
In this paper, we address these two questions and specifically focus on the alternating gradient descent ascent, namely AGDA. This is due to several considerations. First of all, using alternating updates of GDA is more stable than simultaneous updates [22, 2] and often converges faster in practice. Note that for a convex-concave matrix game, SGDA may diverge while AGDA is proven to always have bounded iterates [22]. See Figure 2 for a simple illustration. Secondly, AGDA is widely used for training GANs and other minimax problems in practice; see e.g., [33, 41]. Yet there is a lack of discussion on the convergence of AGDA for general minimax problems in the literature, even for the favorable strongly-convex-strongly-concave setting. Alternating updating schemes are perceived more challenging to analyze than simultaneous updates; the latter treats two variables equally and has been extensively studied in vast literature of variational inequality. Our main contributions are summarized as follows.
Two-sided PL condition. First, we identify a general condition that relaxes the convex-concavity requirement of the objective function while still guaranteeing global convergence of AGDA and stochastic AGDA (Stoc-AGDA). We call this the two-sided PL condition, which requires that both players’ utility functions satisfy the Polyak-Łojasiewicz (PL) inequality [50]. The two-sided PL condition is very general and is satisfied by many important classes of functions: (a) all strongly-convex-strongly-concave functions; (b) all PL-strongly-concave functions (discussed in [24]); and (c) many nonconvex-nonconcave objectives. Such conditions also hold true for various applications, including robust least square, generative adversarial imitation learning for linear quadratic regulator (LQR) dynamics [5], zero-sum linear quadratic games [63], and potentially many others in adversarial learning [14], robust phase retrieval [55, 64], robust control [18], and so on. We first investigate the landscape of objectives under the two-sided PL condition. In particular, we show that the three notions of optimality (saddle point, minimax point, and stationary point) are equivalent.
Global convergence of AGDA. We show that under the two-sided PL condition, AGDA with proper constant stepsizes converges globally to a saddle point at a linear rate of O((1 − κ^{-3})^t), while Stoc-AGDA with proper diminishing stepsizes converges to a saddle point at a sublinear rate of O(κ^5/t), where κ is the underlying condition number. To the best of our knowledge, this is the first result on the global convergence of a class of nonconvex-nonconcave problems. In contrast, most previous work deals with nonconvex-concave problems and obtains convergence to stationary points. On the other hand, because all strongly-convex-strongly-concave and PL-strongly-concave functions naturally satisfy the two-sided PL condition, our analysis fills the theoretical gap with the first convergence results for AGDA under these settings.
Variance reduced algorithm. For minimax problems with the finite-sum structure, we introduce a variance-reduced AGDA algorithm (VR-AGDA) that leverages the idea of the stochastic variance reduced gradient (SVRG) [27, 52] with alternating updates. We prove that VR-AGDA achieves the complexity of O((n + n^{2/3}κ^3) log(1/ε)), which improves over the O(nκ^3 log(1/ε)) complexity of AGDA and the O(κ^5/ε) complexity of Stoc-AGDA when applied to finite-sum minimax problems. Our numerical experiments further demonstrate that VR-AGDA performs significantly better than AGDA and Stoc-AGDA, especially for problems with large condition numbers. To the best of our knowledge, this is the first work to provide a variance-reduced algorithm and theoretical guarantees in the nonconvex-nonconcave regime of minimax optimization. In contrast, most previous variance-reduced algorithms require full or partial strong convexity and only apply to simultaneous updates.
Nonconvex-PL games. Lastly, as a side contribution, we show that for a broader class of nonconvex-nonconcave problems under only a one-sided PL condition, AGDA converges to an ε-stationary point within O(ε^{−2}) iterations, and is thus optimal among all first-order algorithms. Our result shaves off a logarithmic factor from the best-known rate achieved by the multi-step GDA algorithm [47]. This directly implies the same convergence rate on nonconvex-strongly-concave objectives, and to the best of our knowledge, we are the first to show the convergence of AGDA on this class of functions. Due to the page limitation, we defer this result to the appendix.
1.2 Related work
Nonconvex minimax problems. There has been a recent surge in research on solving minimax optimization beyond the convex-concave regime [54, 8, 51, 56, 30, 47, 1, 32, 3, 48], but these works differ from ours in various respects. Most of them focus on the nonconvex-concave regime and aim for convergence to stationary points of minimax problems [8, 54, 31, 56]. Algorithms in these works require solving the inner maximization or some sub-problems with high accuracy, which is different from AGDA. Lin et al. [30] proposed an inexact proximal point method to find an ε-stationary point for a class of weakly-convex-weakly-concave minimax problems. Their convergence result relies on assuming the existence of a solution to the corresponding Minty variational inequality, which is hard to verify. Abernethy et al. [1] showed the linear convergence of a second-order iterative algorithm, called Hamiltonian gradient descent, for a subclass of "sufficiently bilinear" functions. Very recently, Xu et al. [60] and Boţ and Böhm [4] analyzed AGDA in the nonconvex-(strongly-)concave setting. There is also a line of work on understanding the dynamics in minimax games [39, 20, 19, 21, 12, 25].
Variance-reduced minimax optimization. Palaniappan and Bach [49], Luo et al. [34], Chavdarova et al. [7] provided linear-convergent algorithms for strongly-convex-strongly-concave objectives, based on simultaneous updates. Du and Hu [15] extended the result to convex-strongly-concave objectives with full-rank coupling bilinear term. In contrast, we are dealing with a much broader class of objectives that are possibly nonconvex-nonconcave. We point out that Luo et al. [35] and Xu et al. [59] recently introduced variance-reduced algorithms for finding the stationary point of nonconvex-strongly-concave problems, which is again different from our setting.
2 Global optima and two-sided PL condition
Throughout this paper, we assume that the function f(x, y) in (1) is continuously differentiable and has Lipschitz gradient. Here ‖ · ‖ is used to denote the Euclidean norm. Assumption 1 (Lipschitz gradient). There exists a positive constant l > 0 such that
max{‖∇yf (x1, y1)−∇yf (x2, y2)‖ , ‖∇xf (x1, y1)−∇xf (x2, y2)‖} ≤ l[‖x1 − x2‖+‖y1 − y2‖],
holds for all x1, x2 ∈ Rd1 , y1, y2 ∈ Rd2 .
We now define three notions of optimality for minimax problems. The most direct notion of optimality is global minimax point, at which x∗ is an optimal solution to the function g(x) := max_y f(x, y) and y∗ is an optimal solution to max_y f(x∗, y). In the two-player zero-sum game, the notion of saddle point is also widely used [57, 44]. For a saddle point (x∗, y∗), x∗ is an optimal solution to min_x f(x, y∗) and y∗ is an optimal solution to max_y f(x∗, y).
Definition 1 (Global optima).
1. (x∗, y∗) is a global minimax point, if for any (x, y) : f(x∗, y) ≤ f(x∗, y∗) ≤ maxy′ f(x, y′). 2. (x∗, y∗) is a saddle point, if for any (x, y) : f(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗).
3. (x∗, y∗) is a stationary point, if : ∇xf(x∗, y∗) = ∇yf(x∗, y∗) = 0.
For general nonconvex-nonconcave minimax problems, these three notions of optimality are not necessarily equivalent. A stationary point may not be a saddle point or a global minimax point; a global minimax point may not be a saddle point or a stationary point. Note that for minimax problems, a saddle point or a global minimax point may not always exist. However, since our goal in this paper is to find global optima, in the remainder of the paper, we assume that a saddle point always exists. Assumption 2 (Existence of saddle point). The objective function f has at least one saddle point. We also assume that for any fixed y, min_{x∈R^{d1}} f(x, y) has a nonempty solution set and a finite optimal value, and for any fixed x, max_{y∈R^{d2}} f(x, y) has a nonempty solution set and a finite optimal value.
For unconstrained minimization problems: minx∈Rn f(x), Polyak [50] proposed Polyak-Łojasiewicz (PL) condition, which is sufficient to show global linear convergence for gradient descent without assuming convexity. Specifically, a function f(·) satisfies PL condition if it has a nonempty solution set and a finite optimal value f∗, and there exists some µ > 0 such that ‖∇f(x)‖2 ≥ 2µ(f(x) − f∗),∀x. As discussed in Karimi et al. [28], PL condition is weaker, or not stronger, than other well-known conditions that guarantee linear convergence for gradient descent, such as error bounds (EB) [36], weak strong convexity (WSC) [45] and restricted secant inequality (RSI) [61].
We introduce a straightforward generalization of the PL condition to the minimax problem: the function f(x, y) satisfies the PL condition with constant µ1 with respect to x, and −f satisfies the PL condition with constant µ2 with respect to y. We formally state this in the following definition. Definition 2 (Two-sided PL condition). A continuously differentiable function f(x, y) satisfies the two-sided PL condition if there exist constants µ1, µ2 > 0 such that: ∀x, y,
‖∇_x f(x, y)‖² ≥ 2µ1[f(x, y) − min_x f(x, y)],   ‖∇_y f(x, y)‖² ≥ 2µ2[max_y f(x, y) − f(x, y)].
The two-sided PL condition does not imply convexity-concavity, and it is a much weaker condition than strong-convexity-strong-concavity. In Lemma 2.1, we show that the three notions of optimality are equivalent under the two-sided PL condition. Note that these optima may not be unique. Lemma 2.1. If the objective function f(x, y) satisfies the two-sided PL condition, then the following holds true:
(saddle point)⇔ (global minimax)⇔ (stationary point).
Below we give some examples that satisfy this condition.
Example 1. The nonconvex-nonconcave function in the introduction, f(x, y) = x² + 3 sin²x sin²y − 4y² − 10 sin²y, satisfies the two-sided PL condition with µ1 = 1/16, µ2 = 1/11 (see Appendix ??).
Example 2. f(x, y) = F(Ax, By), where F(·, ·) is strongly-convex-strongly-concave and A and B are arbitrary matrices, satisfies the two-sided PL condition.
Example 3. Generative adversarial imitation learning for LQR can be formulated as min_K max_θ m(K, θ), where m is strongly concave in θ and satisfies the PL condition in K (see [5] for more details), thus satisfying the two-sided PL condition.
Example 4. In a zero-sum linear quadratic (LQ) game, the system dynamics are characterized by x_{t+1} = Ax_t + Bu_t + Cv_t, where x_t is the system state and u_t, v_t are the control inputs of the two players. After parameterizing the policies of the two players by u_t = −Kx_t and v_t = −Lx_t, the
value function is C(K,L) = E_{x0∼D}[∑_{t=0}^∞ (x_tᵀ Q x_t + (Kx_t)ᵀ R_u (Kx_t) − (Lx_t)ᵀ R_v (Lx_t))], where D is the distribution of the initial state x0 (see [63] for more details). Player 1 (player 2) wants to minimize (maximize) C(K,L), and the game is formulated as min_K max_L C(K,L). Fixing L (or K), C(·, L) (or −C(K, ·)) becomes the objective of an LQR problem, and therefore satisfies the PL condition [18] when argmin_K C(K,L) and argmax_L C(K,L) are well-defined.
The two-sided PL condition covers rich classes of functions, including: (a) all strongly-convex-strongly-concave functions; (b) some convex-concave functions (e.g., Example 2); (c) some nonconvex-strongly-concave functions (e.g., Example 3); (d) some nonconvex-nonconcave functions (e.g., Examples 1 and 4). Under the two-sided PL condition, the function g(x) := max_y f(x, y) satisfies the PL condition with constant µ1 (see Appendix ??). Moreover, g is also L-smooth with L := l + l²/µ2 [47]. Finally, we denote µ = min(µ1, µ2) and κ = l/µ, which represents the condition number of the problem.
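To make the definition concrete, the following sketch numerically spot-checks the two-sided PL inequalities for Example 1. This is our own illustration rather than the paper's verification: the inner min over x and max over y are approximated by a grid search on [−3, 3] (which contains the relevant minimizers and maximizers), so the reported slacks are only heuristic.

```python
# Heuristic numerical check (ours) of the two-sided PL inequalities for Example 1,
# f(x, y) = x^2 + 3 sin^2(x) sin^2(y) - 4 y^2 - 10 sin^2(y), with the claimed
# constants mu1 = 1/16 and mu2 = 1/11. The inner min/max are grid approximations.
import numpy as np

def f(x, y):
    return x**2 + 3 * np.sin(x)**2 * np.sin(y)**2 - 4 * y**2 - 10 * np.sin(y)**2

def grad_x(x, y):
    return 2 * x + 3 * np.sin(2 * x) * np.sin(y)**2

def grad_y(x, y):
    return 3 * np.sin(x)**2 * np.sin(2 * y) - 8 * y - 10 * np.sin(2 * y)

grid = np.linspace(-3.0, 3.0, 801)
xs, ys = np.meshgrid(grid, grid, indexing="ij")   # xs varies along axis 0, ys along axis 1
vals = f(xs, ys)
min_over_x = vals.min(axis=0)                     # approximate min_x f(x, y) for each y
max_over_y = vals.max(axis=1)                     # approximate max_y f(x, y) for each x

mu1, mu2 = 1 / 16, 1 / 11
slack_x = grad_x(xs, ys)**2 - 2 * mu1 * (vals - min_over_x[None, :])
slack_y = grad_y(xs, ys)**2 - 2 * mu2 * (max_over_y[:, None] - vals)
print("worst slack, x-side PL inequality:", slack_x.min())   # expected >= 0 up to grid error
print("worst slack, y-side PL inequality:", slack_y.min())
```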
3 Global convergence of AGDA and Stoc-AGDA
In this section, we establish the convergence rate of the stochastic alternating gradient descent ascent (Stoc-AGDA) algorithm, which we present in Algorithm 1, under the two-sided PL condition. Stoc-AGDA updates the variables x and y sequentially using stochastic gradient descent/ascent steps. Here we make standard assumptions about the stochastic gradients G_x(x, y, ξ) and G_y(x, y, ξ). Assumption 3 (Bounded variance). G_x(x, y, ξ) and G_y(x, y, ξ) are unbiased stochastic estimators of ∇_x f(x, y) and ∇_y f(x, y) and have variances bounded by σ² > 0.
Algorithm 1 Stoc-AGDA
1: Input: (x0, y0), stepsizes {τ_1^t}_t > 0, {τ_2^t}_t > 0
2: for all t = 0, 1, 2, ... do
3:   Draw two i.i.d. samples ξ_1^t, ξ_2^t ∼ P(ξ)
4:   x_{t+1} ← x_t − τ_1^t G_x(x_t, y_t, ξ_1^t)
5:   y_{t+1} ← y_t + τ_2^t G_y(x_{t+1}, y_t, ξ_2^t)
6: end for
Note that Stoc-AGDA with constant stepsizes (i.e., τ t1 = τ1 and τ t 2 = τ2) and noiseless stochastic gradient (i.e., σ2 = 0) reduces to AGDA:
xt+1 = xt − τ1∇xf(xt, yt), yt+1 = yt + τ2∇yf(xt+1, yt). (2)
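For illustration, a minimal implementation of the deterministic update (2) on the Example 1 objective is given below. The stepsizes and initialization are illustrative choices of ours, not the values prescribed by the theorems; they only respect the asymmetry τ1 < τ2 discussed in Remark 1.

```python
# A minimal AGDA loop (ours) implementing update (2) on the Example 1 objective.
# Stepsizes are small illustrative constants with tau1 < tau2, not tuned or
# theoretically prescribed values.
import numpy as np

def grad_x(x, y):
    return 2 * x + 3 * np.sin(2 * x) * np.sin(y)**2

def grad_y(x, y):
    return 3 * np.sin(x)**2 * np.sin(2 * y) - 8 * y - 10 * np.sin(2 * y)

tau1, tau2 = 0.01, 0.02
x, y = 1.0, 0.5                      # arbitrary initialization
for t in range(5000):
    x = x - tau1 * grad_x(x, y)      # gradient descent step on x
    y = y + tau2 * grad_y(x, y)      # gradient ascent step on y, using the updated x
print(x, y)                          # expected to approach (0, 0), a saddle point of this objective
```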
We measure the inaccuracy of (xt, yt) through the potential function
Pt := at + λ · bt, (3)
where at = E[g(xt) − g∗], bt = E[g(xt) − f(xt, yt)], and the balance parameter λ > 0 will be specified later in the theorems. Recall that g(x) := max_y f(x, y) and g∗ = min_x g(x). This metric is motivated by the definition of the global minimax point: g(x) − g∗ and g(x) − f(x, y) are non-negative for any (x, y), and both equal 0 if and only if (x, y) is a global minimax point.
Stoc-AGDA with constant stepsizes We first consider Stoc-AGDA with constant stepsizes. We show that {(xt, yt)}t converges linearly to a neighbourhood of the optimal set. Theorem 3.1. Suppose Assumptions 1, 2, 3 hold and f(x, y) satisfies the two-sided PL condition with µ1 and µ2. Define Pt := at + (1/10) bt. If we run Algorithm 1 with τ_2^t = τ2 ≤ 1/l and τ_1^t = τ1 ≤ µ2²τ2/(18l²), then
Pt ≤ (1 − (1/2) µ1τ1)^t P0 + δ,   (4)
where δ = [(1 − µ2τ2)(L + l)τ1² + lτ2² + 10Lτ1²] σ² / (10µ1τ1).
Remark 1. In the theorem above, we choose τ1 smaller than τ2, with τ1/τ2 ≤ µ2²/(18l²), because our potential function is not symmetric in x and y. Another reason is that we want yt to approach y∗(xt) ∈ argmax_y f(xt, y) faster, so that ∇_x f(xt, yt) is a better approximation of ∇g(xt) (recall ∇g(x) = ∇_x f(x, y∗(x)); see Nouiehed et al. [47]). Indeed, it is common to use different learning rates for x and y in GDA algorithms for nonconvex minimax problems; see e.g., Jin et al. [26] and Lin et al. [31]. Note that the ratio between these two learning rates is quite crucial here. We also observe empirically that when the same learning rate is used, even if small, the algorithm may not converge to saddle points.
Remark 2. When t → ∞, Pt → δ. If τ1 → 0 and τ2²/τ1 → 0, the error term δ goes to 0. When using smaller stepsizes, the algorithm reaches a smaller neighbourhood of the saddle point, yet at the cost of a slower rate, as the contraction factor also deteriorates.
Linear convergence of AGDA Setting σ² = 0, it follows immediately from the previous theorem that AGDA converges linearly under the two-sided PL condition. Moreover, we have the following: Theorem 3.2. Suppose Assumptions 1, 2 hold and f(x, y) satisfies the two-sided PL condition with µ1 and µ2. Define Pt := at + (1/10) bt. If we run AGDA with τ1 = µ2²/(18l³) and τ2 = 1/l, then
Pt ≤ (1 − µ1µ2²/(36l³))^t P0.   (5)
Furthermore, {(xt, yt)}t converges to some saddle point (x∗, y∗), and
‖xt − x∗‖² + ‖yt − y∗‖² ≤ α (1 − µ1µ2²/(36l³))^t P0,   (6)
where α is a constant depending on µ1, µ2 and l.
The above theorem implies that the limit point of {(xt, yt)}t is a saddle point and that the distance to the saddle point decreases at the rate O((1 − κ⁻³)^t). Note that in the special case when the objective is strongly-convex-strongly-concave, it is known that SGDA (GDA with simultaneous updates) achieves an O(κ² log(1/ε)) iteration complexity (see, e.g., Facchinei and Pang [17]), and this can be further improved to match the lower complexity bound O(κ log(1/ε)) [62] by extragradient methods [29] or Nesterov’s dual extrapolation [46]. However, these results heavily rely on the strong monotonicity of the corresponding variational inequality, which does not apply here. Our analysis technique is entirely different. Since the general two-sided PL condition contains a much broader class of functions, we do not expect to achieve the same dependency on κ, especially for a simple algorithm like AGDA. Note that even the multi-step GDA in [47] results in the same κ³ dependency, but without a linear convergence rate. Hence, our conjecture is that the κ³ dependency of AGDA cannot be improved without modifying the algorithm. We leave this investigation for future work.
Stoc-AGDA with diminishing stepsizes While Stoc-AGDA with constant stepsizes only converges linearly to a neighbourhood of the saddle point, Stoc-AGDA with diminishing stepsizes converges to the saddle point, but at a sublinear rate O(1/t). Theorem 3.3. Suppose Assumptions 1, 2, 3 hold and f(x, y) satisfies the two-sided PL condition with µ1 and µ2. Define Pt = at + (1/10) bt. If we run Algorithm 1 with stepsizes τ_1^t = β/(γ + t) and τ_2^t = 18l²β/(µ2²(γ + t)) for some β > 2/µ1 and γ > 0 such that τ_1^1 ≤ min{1/L, µ2²/(18l²)}, then we have
Pt ≤ ν/(γ + t),  where  ν := max{ γP0,  [(L + l)β² + 18²l⁵β²/µ2⁴ + 10Lβ²] σ² / (10µ1β − 20) }.   (7)
Remark 3. Note that the rate is affected by ν, and the first term in the definition of ν is controlled by the initial point. In practice, we can find a good initial point by running Stoc-AGDA with constant stepsizes so that only the second term in the definition of ν matters. Then by choosing β = 3/µ1, we have ν = O(l⁵σ²/(µ1²µ2⁴)). Thus, the convergence rate of Stoc-AGDA is O(κ⁵σ²/(µt)).
4 Stochastic variance-reduced AGDA algorithm
In this section, we study the minimax problem with the finite-sum structure: min_x max_y f(x, y) = (1/n) ∑_{i=1}^n f_i(x, y), which arises ubiquitously in machine learning. We are especially interested in the case when n is large.
Algorithm 2 VR-AGDA
1: input: (x̃0, ỹ0), stepsizes τ1, τ2, iteration numbers N, T
2: for all k = 0, 1, 2, ... do
3:   for all t = 0, 1, 2, ..., T − 1 do
4:     x_{t,0} = x̃_t, y_{t,0} = ỹ_t
5:     compute ∇_x f(x̃_t, ỹ_t) = (1/n) ∑_{i=1}^n ∇_x f_i(x̃_t, ỹ_t) and ∇_y f(x̃_t, ỹ_t) = (1/n) ∑_{i=1}^n ∇_y f_i(x̃_t, ỹ_t)
6:     for all j = 0 to N − 1 do
7:       sample i.i.d. indices i_j^1, i_j^2 uniformly from [n]
8:       x_{t,j+1} = x_{t,j} − τ1[∇_x f_{i_j^1}(x_{t,j}, y_{t,j}) − ∇_x f_{i_j^1}(x̃_t, ỹ_t) + ∇_x f(x̃_t, ỹ_t)]
9:       y_{t,j+1} = y_{t,j} + τ2[∇_y f_{i_j^2}(x_{t,j+1}, y_{t,j}) − ∇_y f_{i_j^2}(x̃_t, ỹ_t) + ∇_y f(x̃_t, ỹ_t)]
10:      end for
11:      x̃_{t+1} = x_{t,N}, ỹ_{t+1} = y_{t,N}
12:   end for
13:   choose (x^k, y^k) from {{(x_{t,j}, y_{t,j})}_{j=0}^{N−1}}_{t=0}^{T−1} uniformly at random
14:   x̃_0 = x^k, ỹ_0 = y^k
15: end for
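To make the inner loop of Algorithm 2 concrete, the following schematic (ours, not released code) implements a single epoch given user-supplied per-component gradient oracles grad_x_i(i, x, y) and grad_y_i(i, x, y); the outer loops that repeat the epoch T times and restart from a uniformly sampled iterate are omitted.

```python
# Schematic of one VR-AGDA epoch (ours) for f = (1/n) * sum_i f_i.
# grad_x_i and grad_y_i are assumed to be user-supplied per-component gradient oracles.
import numpy as np

def vr_agda_epoch(x_ref, y_ref, grad_x_i, grad_y_i, n, tau1, tau2, N, rng):
    # full gradients at the snapshot (reference) point
    gx_full = np.mean([grad_x_i(i, x_ref, y_ref) for i in range(n)], axis=0)
    gy_full = np.mean([grad_y_i(i, x_ref, y_ref) for i in range(n)], axis=0)
    x, y = x_ref.copy(), y_ref.copy()
    iterates = []
    for _ in range(N):
        i1, i2 = rng.integers(n), rng.integers(n)            # two independent indices
        vx = grad_x_i(i1, x, y) - grad_x_i(i1, x_ref, y_ref) + gx_full
        x = x - tau1 * vx                                    # variance-reduced descent step on x
        vy = grad_y_i(i2, x, y) - grad_y_i(i2, x_ref, y_ref) + gy_full
        y = y + tau2 * vy                                    # variance-reduced ascent step on y (uses new x)
        iterates.append((x.copy(), y.copy()))
    return x, y, iterates   # the outer loop restarts from an iterate drawn uniformly at random
```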
We assume the overall objective function f(x, y) satisfies the two-sided PL condition with µ1 and µ2, but we do not assume that each f_i satisfies the two-sided PL condition. Instead of Assumption 1, we assume each component f_i has Lipschitz gradients.
Assumption 4. Each fi has l-Lipschitz gradients.
If we run AGDA with full gradients to solve the finite-sum minimax problem, the total complexity for finding an ε-optimal solution is O(nκ³ log(1/ε)) by Theorem 3.2. Despite the linear convergence, the per-iteration cost is high and the complexity can be huge when the number of components n and the condition number κ are large. Instead, if we run Stoc-AGDA, this leads to a total complexity of O(κ⁵σ²/(µ²ε)) by Remark 3, which has worse dependence on ε.
Motivated by the recent success of the stochastic variance reduced gradient (SVRG) technique [27, 52, 49], we introduce the VR-AGDA algorithm (presented in Algorithm 2), which combines AGDA with SVRG so that linear convergence is preserved while improving the dependence on n and κ. VR-AGDA can be viewed as applying SVRG to AGDA with restarting: at every epoch k, we restart the SVRG subroutine by initializing it with (x^k, y^k), which is randomly selected from the previous SVRG subroutine. This is partly inspired by the GD-SVRG algorithm for minimizing PL functions [52]. Notice that when T = 1, VR-AGDA reduces to a double-loop algorithm which is similar to the SVRG for saddle point problems proposed by Palaniappan and Bach [49], except for several notable differences: (i) we use alternating updates rather than simultaneous updates, (ii) as a result, we need to sample two independent indices rather than one at each iteration, and (iii) most importantly, we are dealing with possibly nonconvex-nonconcave objectives that satisfy the two-sided PL condition. The following two theorems capture the convergence of VR-AGDA under different parameter setups.
Theorem 4.1. Suppose Assumptions 2 and 4 hold and f(x, y) satisfies the two-sided PL condition with µ1 and µ2. Define P^k = a^k + (1/20) b^k, where a^k = E[g(x^k) − g∗] and b^k = E[g(x^k) − f(x^k, y^k)]. If we run VR-AGDA with τ1 = β/(28κ⁸l), τ2 = β/(lκ⁶), N = ⌊αβ^{−2/3}κ⁹(2 + 4β^{1/2}κ^{−3})^{−1}⌋ and T = 1, where α, β are constants independent of l, n, µ1, µ2, then P^{k+1} ≤ (1/2) P^k. This implies a total complexity of O((n + κ⁹) log(1/ε)) for VR-AGDA to achieve an ε-optimal solution.
Theorem 4.2. Under the same assumptions as Theorem 4.1, if we run VR-AGDA with τ1 = β/(28κ²l n^{2/3}), τ2 = β/(l n^{2/3}), N = ⌊αβ^{−2/3} n (2 + 4β^{1/2} n^{−1/3})^{−1}⌋, and T = ⌈κ³ n^{−1/3}⌉, where α, β are constants independent of l, n, µ1, µ2, then P^{k+1} ≤ (1/2) P^k. This implies a total complexity of O((n + n^{2/3}κ³) log(1/ε)) for VR-AGDA to achieve an ε-optimal solution.
Remark 4. Theorems 4.1 and 4.2 are different in their choices of stepsizes and iteration numbers, which gives rise to different complexities. VR-AGDA with the second setting has a lower complexity than the first setting in the regime n ≤ κ9, but the first setting allows for a simpler double-loop algorithm with T = 1. The two theorems imply that VR-AGDA always improves over AGDA. To the best of our knowledge, this is also the first theoretical analysis of variance-reduced algorithms with alternating updating rules for minimax optimization.
5 Numerical experiments
We present experiments on two applications: robust least squares and generative adversarial imitation learning for LQR. We mainly focus on the comparison between AGDA, Stoc-AGDA, and VR-AGDA, which are the only algorithms with known theoretical guarantees. Because of their simplicity, only a few hyperparameters are introduced, and they are tuned via grid search.
5.1 Robust least square
We consider least squares problems with a coefficient matrix A ∈ R^{n×m} and a noisy vector y0 ∈ R^n subject to a bounded deterministic perturbation δ. Robust least squares (RLS) minimizes the worst-case residual, and can be formulated as [16]: min_x max_{δ:‖δ‖≤ρ} ‖Ax − y‖², where δ = y0 − y. We consider RLS with a soft constraint:
minx maxy F (x, y) := ‖Ax− y‖2M − λ‖y − y0‖2M , (8)
where we adopt the general M-(semi-)norm ‖x‖²_M = xᵀMx with M positive semi-definite. F(x, y) satisfies the two-sided PL condition when λ > 1, because it can be written as the composition of a strongly-convex-strongly-concave function and an affine function (Example 2). However, F(x, y) is not strongly convex in x, and when M is not full-rank, it is not strongly concave in y.
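As a rough illustration of this setup, the sketch below (ours) runs plain AGDA on objective (8) with synthetic stand-ins for A, y0, M, and λ; the stepsizes are small untuned constants, not the fine-tuned values used in the reported experiments.

```python
# AGDA on the soft-constrained robust least squares objective (8), with synthetic
# data standing in for the datasets of Section 5.1 (our sketch, untuned stepsizes).
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim = 200, 50
A = rng.normal(size=(n_samples, dim))
y0 = A @ rng.normal(size=dim) + 0.1 * rng.normal(size=n_samples)
M = np.eye(n_samples)
lam = 3.0

def grad_x(x, y):
    return 2 * A.T @ M @ (A @ x - y)

def grad_y(x, y):
    return -2 * M @ (A @ x - y) - 2 * lam * M @ (y - y0)

tau1, tau2 = 1e-4, 1e-2
x, y = np.zeros(dim), y0.copy()
for t in range(5000):
    x = x - tau1 * grad_x(x, y)   # descent on x
    y = y + tau2 * grad_y(x, y)   # ascent on y
print(np.linalg.norm(grad_x(x, y)), np.linalg.norm(grad_y(x, y)))  # should both be small
```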
Datasets. We use three datasets in the experiments; two of them are generated in the same way as in Du and Hu [15]. We generate the first dataset with n = 1000 and m = 500 by sampling the rows of A from a Gaussian N(0, I_n) distribution and setting y0 = Ax∗ + ε, with x∗ drawn from a Gaussian N(0, 1) and ε from a Gaussian N(0, 0.01). We set M = I_n and λ = 3. The second dataset is the rescaled aquatic toxicity dataset of Cassotti et al. [6], which uses 8 molecular descriptors of 546 chemicals to predict quantitative acute aquatic toxicity towards Daphnia Magna. We use M = I and λ = 2 for this dataset. The third dataset is generated with A ∈ R^{1000×500} from a Gaussian N(0, Σ) where Σ_{i,j} = 2^{−|i−j|/10}, M rank-deficient with positive eigenvalues sampled from [0.2, 1.8], and λ = 1.5. These three datasets represent cases with low, medium, and high condition numbers, respectively.
Evaluation. We compare four algorithms: AGDA, Stoc-AGDA, VR-AGDA and extragradient (EG) with fine-tuned stepsizes. For Stoc-AGDA, we choose constant stepsizes to form a fair comparison with the other two. We report the potential function value, i.e., Pt described in our theorems, and distance to the limit point ‖(xt, yt) − (x∗, y∗)‖2. These errors are plotted against the number of gradient evaluations normalized by n (i.e., number of full gradients). Results are reported in Figure 3. We observe that VR-AGDA and AGDA both exhibit linear convergence, and the speedup of VR-AGDA is fairly significant when the condition number is large, whereas Stoc-AGDA progresses fast at the beginning and stagnates later on. These numerical results clearly validate our theoretical findings. EG performs poorly in this example.
5.2 Generative adversarial imitation learning for LQR
The optimal control problem for LQR can be formulated as [18]:
minimize over {π_t}:  E_{x0∼D} ∑_{t=0}^∞ (x_tᵀ Q x_t + u_tᵀ R u_t)   such that x_{t+1} = Ax_t + Bu_t,  u_t = π_t(x_t),
where x_t ∈ R^d is the state, u_t ∈ R^k is the control, D is the distribution of the initial state x0, and π_t is a policy. It is known that the optimal policy is linear: u_t = −K∗x_t, where K∗ ∈ R^{k×d}. If we parametrize the policy in the linear form u_t = −Kx_t, the problem can be written as: min_K C(K;Q,R) := E_{x0∼D}[∑_{t=0}^∞ (x_tᵀ Q x_t + (Kx_t)ᵀ R (Kx_t))],
where the trajectory is induced by LQR dynamics and policy K. In generative adversarial imitation learning for LQR, the trajectories induced by an expert policy KE are observed and part of the goal is to learn the cost function parameters Q and R from the expert. This can be formulated as a minimax problem [5]:
min_K max_{(Q,R)∈Θ} { m(K,Q,R) := C(K;Q,R) − C(K_E;Q,R) − Φ(Q,R) },
where Θ = {(Q,R) : α_Q I ⪯ Q ⪯ β_Q I, α_R I ⪯ R ⪯ β_R I} and Φ is a strongly-convex regularizer. We sample n initial points x_0^{(1)}, x_0^{(2)}, ..., x_0^{(n)} from D and approximate C(K;Q,R) by the sample average C_n(K;Q,R) := (1/n) ∑_{i=1}^n [∑_{t=0}^∞ (x_tᵀ Q x_t + u_tᵀ R u_t)]|_{x_0 = x_0^{(i)}}. We then consider:
min_K max_{(Q,R)∈Θ} { m_n(K,Q,R) := C_n(K;Q,R) − C_n(K_E;Q,R) − Φ(Q,R) }.   (9)
Note that mn satisfies the PL condition in terms of K [18], and mn is strongly-concave in terms of (Q,R), so the function satisfies the two-sided PL condition.
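A simple way to evaluate the sample-average cost C_n(K;Q,R) in practice is to roll out the closed-loop dynamics; the sketch below (ours) truncates the infinite horizon at a finite length, which is an extra approximation and implicitly assumes A − BK is stable (the compact forms in [18, 5] avoid this truncation).

```python
# Finite-horizon Monte Carlo estimate (ours) of C_n(K; Q, R) from Section 5.2.
# init_states holds the sampled initial points x_0^{(i)}; the infinite sum is
# truncated at `horizon`, an approximation that assumes A - B K is stable.
import numpy as np

def cost_estimate(K, Q, R, A, B, init_states, horizon=200):
    total = 0.0
    for x0 in init_states:
        x, cost = np.asarray(x0, dtype=float), 0.0
        for _ in range(horizon):
            u = -K @ x
            cost += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u               # closed-loop dynamics x_{t+1} = (A - B K) x_t
        total += cost
    return total / len(init_states)
```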
In our experiments, we use Φ(Q,R) = λ(‖Q − Q̄‖² + ‖R − R̄‖²) for some Q̄, R̄ and λ = 1. We generate problem instances with different d and k: (1) d = 3, k = 2; (2) d = 20, k = 10; (3) d = 30, k = 20. The initial distribution D is N(0, I_d) and we sample n = 100 initial points. The exact gradients can be computed based on the compact forms established in Fazel et al. [18] and Cai et al. [5]. We compare AGDA and VR-AGDA under fine-tuned stepsizes, and track their errors in terms of ‖K_t − K∗‖² + ‖Q_t − Q∗‖²_F + ‖R_t − R∗‖²_F. The results are reported in Figure 4, which again indicates that VR-AGDA significantly outperforms AGDA.
6 Conclusion
In this paper, we identify a subclass of nonconvex-nonconcave minimax problems, characterized by the so-called two-sided PL condition, for which AGDA and Stoc-AGDA converge to global saddle points. We also propose the first linearly convergent variance-reduced AGDA algorithm, which is provably faster than AGDA, for this subclass of minimax problems. We hope this work can shed some light on the understanding of nonconvex-nonconcave minimax optimization: (1) different learning rates for the two players are essential in GDA algorithms with alternating updates; (2) convexity-concavity is not a watershed for guaranteeing global convergence of GDA algorithms.
Acknowledgments and Disclosure of Funding
This work was supported in part by ONR grant W911NF-15-1-0479, NSF CCF-1704970, and NSF CMMI-1761699.
Broader Impact
With the boom of neural networks in every corner of machine learning, the understanding of nonconvex optimization, especially minimax optimization, becomes increasingly important. On one hand, the surge of interest in generative adversarial networks (GANs) has brought revolutionary success in many practical applications such as face synthesis, text-to-image synthesis, and text generation. On the other hand, even the simplest algorithms, such as gradient descent ascent (GDA), although widely adopted by practitioners and researchers in the field, lack theoretical understanding. It is imperative to develop a strong fundamental understanding of the success of these simple algorithms in the nonconvex regime, both to expand the usability of the methods and to accelerate future deployment in a principled and interpretable manner.
Theory. This paper takes an initial and substantial step towards the understanding of nonconvex-nonconcave minimax optimization problems with "hidden convexity", as well as the convergence of the simplest alternating GDA algorithm. Despite its popularity, this algorithm has not been carefully analyzed even in the convex regime. The theory developed in this work helps explain when and why GDA performs well, how to choose stepsizes, and how to improve GDA properly. These are basic yet important questions that need to be addressed in order to guide future development.
Applications. The downstream applications include but are not limited to generative adversarial networks, the actor-critic game in reinforcement learning, robust machine learning and control, and other applications in games and social economics. This work could potentially inspire more interest in broadening the applicability of GDA in practice. | 1. What is the focus and contribution of the paper on nonconvex-nonconcave minimax optimization problems?
2. What are the strengths of the proposed AGDA algorithms, particularly in terms of global convergence guarantees and convergence rates?
3. What are the weaknesses of the paper regarding the extension of the PL condition and its discussion?
4. How does the reviewer assess the suitability of the numerical experiments in demonstrating the theoretical results?
5. Do you have any other questions or concerns about the paper that the reviewer did not mention? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper studies various alternating gradient descent ascent (AGDA) algorithms specifically for a class of nonconvex-nonconcave minimax optimization problems. The authors propose to extend the notion of Polyak-Łojasiewicz (PL) functions to bivariate functions, establishing the so-called two-sided PL condition. For objectives satisfying the two-sided PL condition, the authors prove that AGDA and its stochastic variant converge globally at a linear rate and a sublinear rate, respectively. A stochastic variance-reduced algorithm is also proposed for problems with finite-sum structure. Numerical experiments are performed to study the convergence behavior of the proposed algorithms.
Strengths
This work studies a class of nonconvex-nonconcave minimax optimization problems and may well be one of the earliest works on this topic to provide global convergence guarantees and the corresponding convergence rates. The authors also propose a stochastic variance-reduced AGDA algorithm which converges provably faster than AGDA.
Weaknesses
The proposal of extending the PL condition to a two-sided sense seems interesting, but the discussion of the two-sided PL condition is insufficient. How much more general is the two-sided PL condition than convex-concave or strongly-convex-strongly-concave? And how does it compare to a one-sided PL condition plus (strong) convexity/concavity? The authors do not illustrate such ideas in enough detail. In addition, the choices of the numerical experiments do not reveal the necessity of introducing the two-sided PL condition either. The authors should have chosen examples of nonconvex-nonconcave minimax optimization problems which satisfy the two-sided PL condition. Robust least squares is convex-concave, and GAIL for LQR is strongly concave in m. The (two-sided) PL condition is a more general condition which might include convex-concave functions, but for the sake of motivating the two-sided PL condition, examples that are convex-concave, convex-nonconcave, or nonconvex-concave should be excluded. Otherwise, it is a strong deviation from the central issue this work is dealing with. Then, the authors are actually not solving nonconvex-nonconcave minimax optimization problems in the experiments. In short, the choices of numerical experiments are not suitable (say, to demonstrate the theoretical results).
NIPS | Title
Grounding Representation Similarity Through Statistical Testing
Abstract
To understand neural network behavior, recent works quantitatively compare different networks’ learned representations using canonical correlation analysis (CCA), centered kernel alignment (CKA), and other dissimilarity measures. Unfortunately, these widely used measures often disagree on fundamental observations, such as whether deep networks differing only in random initialization learn similar representations. These disagreements raise the question: which, if any, of these dissimilarity measures should we believe? We provide a framework to ground this question through a concrete test: measures should have sensitivity to changes that affect functional behavior, and specificity against changes that do not. We quantify this through a variety of functional behaviors including probing accuracy and robustness to distribution shift, and examine changes such as varying random initialization and deleting principal components. We find that current metrics exhibit different weaknesses, note that a classical baseline performs surprisingly well, and highlight settings where all metrics appear to fail, thus providing a challenge set for further improvement.
1 Introduction
Understanding neural networks is not only scientifically interesting, but critical for applying deep networks in high-stakes situations. Recent work has highlighted the value of analyzing not just the final outputs of a network, but also its intermediate representations [20, 29]. This has motivated the development of representation similarity measures, which can provide insight into how different training schemes, architectures, and datasets affect networks’ learned representations.
A number of similarity measures have been proposed, including centered kernel alignment (CKA) [13], ones based on canonical correlation analysis (CCA) [24, 30], single neuron alignment [20], vector space alignment [3, 6, 32], and others [2, 9, 16, 18, 21, 39]. Unfortunately, these different measures tell different stories. For instance, CKA and projection weighted CCA disagree on which layers of different networks are most similar [13]. This lack of consensus is worrying, as measures are often designed according to different and incompatible intuitive desiderata, such as whether finding a one-to-one assignment, or finding few-to-one mappings, between neurons is more appropriate [20]. As a community, we need well-chosen formal criteria for evaluating metrics to avoid over-reliance on intuition and the pitfalls of too many researcher degrees of freedom [17].
In this paper we view representation dissimilarity measures as implicitly answering a classification question–whether two representations are essentially similar or importantly different. Thus, in analogy to statistical testing, we can evaluate them based on their sensitivity to important change and specificity (non-responsiveness) against unimportant changes or noise.
As a warm-up, we first initially consider two intuitive criteria: first, that metrics should have specificity against random initialization; and second, that they should be sensitive to deleting important principal
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
components (those that affect probing accuracy). Unfortunately, popular metrics fail at least one of these two tests. CCA is not specific – random initialization noise overwhelms differences between even far-apart layers in a network (Section 3.1). CKA on the other hand is not sensitive, failing to detect changes in all but the top 10 principal components of a representation (Section 3.2).
We next construct quantitative benchmarks to evaluate a dissimilarity measure’s quality. To move beyond our intuitive criteria, we need a ground truth. For this we turn to the functional behavior of the representations we are comparing, measured through probing accuracy (an indicator of syntactic information) [4, 27, 35] and out-of-distribution performance of the model they belong to [7, 23, 25]. We then score dissimilarity measures based on their rank correlation with these measured functional differences. Overall our benchmarks contain 30,480 examples and vary representations across several axes including random seed, layer depth, and low-rank approximation (Section 4)1.
Our benchmarks confirm our two intuitive observations: on subtasks that consider layer depth and principal component deletion, we measure the rank correlation with probing accuracy and find CCA and CKA lacking as the previous warm-up experiments suggested. Meanwhile, the Orthogonal Procrustes distance, a classical but often overlooked2 dissimilarity measure, balances gracefully between CKA and CCA and consistently performs well. This underscores the need for systematic evaluation, otherwise we may fall to recency bias that undervalues classical baselines.
Other subtasks measure correlation with OOD accuracy, motivated by the observation that random initialization sometimes has large effects on OOD performance [23]. We find that dissimilarity measures can sometimes predict OOD performance using only the in-distribution representations, but we also identify a challenge set on which none of the measures do statistically better than chance. We hope this challenge set will help measure and spur progress in the future.
2 Problem Setup: Metrics and Models
Our goal is to quantify the similarity between two different groups of neurons (usually layers). We do this by comparing how their activations behave on the same dataset. Thus for a layer with p1 neurons, we define A ∈ R^{p1×n}, the matrix of activations of the p1 neurons on n data points, to be that layer’s raw representation of the data. Similarly, let B ∈ R^{p2×n} be a matrix of the activations of p2 neurons on the same n data points. We center and normalize these representations before computing dissimilarity, per standard practice. Specifically, for a raw representation A we first subtract the mean value from each column, then divide by the Frobenius norm, to produce the normalized representation A*, used in all our dissimilarity computations. In this work we study dissimilarity measures d(A*, B*) that allow for quantitative comparisons of representations both within and across different networks. We colloquially refer to values of d(A*, B*) as distances, although they do not necessarily satisfy the triangle inequality required of a proper metric.
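A literal transcription of this preprocessing step (our own sketch, following the description above) is:

```python
# Normalize a raw representation (ours): subtract each column's mean, then divide
# by the Frobenius norm. `raw` has shape (num_neurons, num_examples).
import numpy as np

def normalize(raw):
    centered = raw - raw.mean(axis=0, keepdims=True)   # remove each column's mean
    return centered / np.linalg.norm(centered)          # scale to unit Frobenius norm
```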
We study five dissimilarity measures: centered kernel alignment (CKA), three measures derived from canonical correlation analysis (CCA), and a measure derived from the orthogonal Procrustes problem.
Centered kernel alignment (CKA) uses an inner product to quantify similarity between two representations. It is based on the idea that one can first choose a kernel, compute the n × n kernel matrix for each representation, and then measure similarity as the alignment between these two kernel matrices. The measure of similarity thus depends on one’s choice of kernel; in this work we consider Linear CKA:
d_Linear CKA(A,B) = 1 − ‖ABᵀ‖²_F / (‖AAᵀ‖_F ‖BBᵀ‖_F)   (1)
as proposed in Kornblith et al. [13]. Other choices of kernel are also valid; we focus on Linear CKA here since Kornblith et al. [13] report similar results from using either a linear or RBF kernel.
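A direct implementation of Eq. (1) for already-normalized representations might look as follows (our sketch, not the authors' released code):

```python
# Linear CKA dissimilarity (ours), transcribing Eq. (1).
# A and B are centered/normalized representations of shape (neurons, examples).
import numpy as np

def linear_cka_distance(A, B):
    cross = np.linalg.norm(A @ B.T, "fro") ** 2
    return 1.0 - cross / (np.linalg.norm(A @ A.T, "fro") * np.linalg.norm(B @ B.T, "fro"))
```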
Canonical correlation analysis (CCA) finds orthogonal bases (w_A^i, w_B^i) for two matrices such that after projection onto w_A^i, w_B^i, the projected matrices have maximally correlated rows. For 1 ≤ i ≤ p1,
1Code to replicate our results can be found at https://github.com/js-d/sim_metric. 2For instance, Raghu et al. [30] and Morcos et al. [24] do not mention it, and Kornblith et al. [13] relegates it
to the appendix; although Smith et al. [32] does use it to analyze word embeddings and prefers it to CCA.
the ith canonical correlation coefficient ρ_i is computed as follows:
ρ_i = max_{w_A^i, w_B^i} ⟨w_A^iᵀ A, w_B^iᵀ B⟩ / (‖w_A^iᵀ A‖ · ‖w_B^iᵀ B‖)   (2)
s.t. ⟨w_A^iᵀ A, w_A^jᵀ A⟩ = 0, ∀j < i,   ⟨w_B^iᵀ B, w_B^jᵀ B⟩ = 0, ∀j < i   (3)
To transform the vector of correlation coefficients into a scalar measure, two options considered previously [13] are the mean correlation coefficient, ρ̄_CCA, and the mean squared correlation coefficient, R²_CCA, defined as follows:
d_ρ̄CCA(A,B) = 1 − (1/p1) Σ_i ρ_i,   d_R²CCA(A,B) = 1 − (1/p1) Σ_i ρ_i²   (4)
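For reference, because the representations are centered, the canonical correlations can be obtained as the singular values of the product of orthonormal bases for the two row spaces; the sketch below (ours) computes the summaries in Eq. (4) under the assumption that A and B are full rank with p1 ≤ p2 ≤ n.

```python
# CCA summary distances of Eq. (4) (our sketch). A: (p1, n), B: (p2, n), assumed
# full rank with p1 <= p2 <= n; canonical correlations are the singular values of
# Qa^T Qb, where Qa, Qb are orthonormal bases of the two row spaces.
import numpy as np

def cca_distances(A, B):
    Qa, _ = np.linalg.qr(A.T)                                  # (n, p1) orthonormal basis
    Qb, _ = np.linalg.qr(B.T)                                  # (n, p2) orthonormal basis
    rho = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), 0.0, 1.0)
    p1 = A.shape[0]
    return 1.0 - rho.sum() / p1, 1.0 - (rho**2).sum() / p1     # (mean-CCA, R^2-CCA) distances
```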
To improve the robustness of CCA, Morcos et al. [24] propose projection-weighted CCA (PWCCA) as another scalar summary of CCA:
d_PWCCA(A,B) = 1 − (Σ_i α_i ρ_i) / (Σ_i α_i),   α_i = Σ_j |⟨h_i, a_j⟩|   (5)
where a_j is the jth row of A, and h_i = w_A^iᵀ A is the projection of A onto the ith canonical direction. We find that PWCCA performs far better than ρ̄_CCA and R²_CCA, so we focus on PWCCA in the main text, but include results on the other two measures in the appendix.
The orthogonal Procrustes problem consists of finding the left-rotation of A that is closest to B in Frobenius norm, i.e. solving the optimization problem:
min_R ‖B − RA‖²_F, subject to RᵀR = I.   (6)
The minimum is the squared orthogonal Procrustes distance between A and B, and is equal to
d_Proc(A,B) = ‖A‖²_F + ‖B‖²_F − 2‖AᵀB‖_*,   (7)
where ‖ · ‖_* is the nuclear norm [31]. Unlike the other metrics, the orthogonal Procrustes distance is not normalized between 0 and 1, although for normalized A*, B* it lies in [0, 2].
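Rather than transcribing the closed form (7), the sketch below (ours) evaluates the distance by solving the alignment problem (6) directly: the optimal rotation is recovered from the SVD of BAᵀ. It assumes p1 ≤ p2 so that an R with orthonormal columns exists.

```python
# Orthogonal Procrustes distance via problem (6) (our sketch): align A to B with the
# best orthogonal map and return the residual. A: (p1, n), B: (p2, n), with p1 <= p2.
import numpy as np

def procrustes_distance(A, B):
    U, _, Vt = np.linalg.svd(B @ A.T, full_matrices=False)
    R = U @ Vt                                   # optimal alignment, R^T R = I
    return np.linalg.norm(B - R @ A, "fro") ** 2
```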
2.1 Models we study
In this work we study representations of both text and image inputs. For text, we investigate representations computed by Transformer architectures in the BERT model family [8] on sentences from the Multigenre Natural Language Inference (MNLI) dataset [40]. We study BERT models of two sizes: BERT base, with 12 hidden layers of 768 neurons, and BERT medium, with 8 hidden layers of 512 neurons. We use the same architectures as in the open source BERT release3, but to generate diversity we study 3 variations of these models:
1. 10 BERT base models pretrained with different random seeds but not finetuned for particular tasks, released by Zhong et al. [41]4. 2. 10 BERT medium models initialized from pretrained models released by Zhong et al. [41], that we further finetuned on MNLI with 10 different finetuning seeds (100 models total). 3. 100 BERT base models that were initialized from the pretrained BERT model in [8] and finetuned on MNLI with different seeds, released by McCoy et al. [23]5.
For images, we investigate representations computed by ResNets [11] on CIFAR-10 test set images [14]. We train 100 ResNet-14 models6 from random initialization with different seeds on the CIFAR-10 training set and collect representations after each convolutional layer.
Further training details, as well as checks that our training protocols result in models with comparable performance to the original model releases, can be found in Appendix A.
3available at https://github.com/google-research/bert 4available at https://github.com/ruiqi-zhong/acl2021-instance-level 5available at https://github.com/tommccoy1/hans/tree/master/berts_of_a_feather 6from https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py
3 Warm-up: Intuitive Tests for Sensitivity and Specificity
When designing dissimilarity measures, researchers usually consider invariants that these measures should not be sensitive to [13]; for example, symmetries in neural networks imply that permuting the neurons in a fully connected layer does not change the representations learned. We take this one step further and frame dissimilarity measures as answering whether representations are essentially the same, or importantly different. We can then evaluate measures based on whether they respond to important changes (sensitivity) while ignoring changes that don’t matter (specificity).
Assessing sensitivity and specificity requires a ground truth–which representations are truly different? To answer this, we begin with the following two intuitions7: 1) neural network representations trained on the same data but from different random initializations are similar, and 2) representations lose crucial information as principal components are deleted. These motivate the following intuitive tests of specificity and sensitivity: we expect a dissimilarity measure to: 1) assign a small distance between architecturally identical neural networks that only differ in initialization seed, and 2) assign a large distance between a representation A and the representation  after deleting important principal components (enough to affect accuracy). We will see that PWCCA fails the first test (specificity), while CKA fails the second (sensitivity).
3.1 Specificity against changes to random seed
Neural networks with the same architecture trained from different random initializations show many similarities, such as highly correlated predictions on in-distribution data points [23]. Thus it seems natural to expect a good similarity measure to assign small distances between architecturally corresponding layers of networks that are identical except for initialization seed.
To check this property, we take two BERT base models pre-trained with different random seeds and, for every layer in the first model, compute its dissimilarity to every layer in both the first and second model. We do this for 5 separate pairs of models and average the results. To pass the intuitive specificity test, a dissimilarity measure should assign relatively small distances between a layer in the first network and its corresponding layer in the second network.
Figure 1 displays the average pair-wise PWCCA, CKA, and Orthogonal Procrustes distances between layers of two networks differing only in random seed. According to PWCCA, these networks’ representations are quite dissimilar; for instance, the two layer 7 representations are further apart
7Note we will see later that these intuitions need refinement.
than they are from any other layer in the same network. PWCCA is thus not specific against random initialization, as it can outweigh even large changes in layer depth.
In contrast, CKA can separate layer 7 in a different network from layers 4 or 10 in the same network, showing better specificity to random initialization. Orthogonal Procrustes exhibits smaller but non-trivial specificity, distinguishing layers once they are 4-5 layers apart.
3.2 Sensitivity to removing principal components
Dissimilarity measures should also be sensitive to deleting important principal components of a representation.8 To quantify which components are important, we fix a layer of a pre-trained BERT base model and measure how probing accuracy degrades as principal components are deleted (starting from the smallest component), since probing accuracy is a common measure of the information captured in a representation [4]. We probe linear classification performance on the Stanford Sentiment Tree Bank task (SST-2) [33], following the experimental protocol in Tamkin et al. [34]. Figure 3b shows how probing accuracy degrades with component deletion. Ideally, dissimilarity measures should be large by the time probing accuracy has decreased substantially.
To assess whether a dissimilarity measure is large, we need a baseline to compare to. For each measure, we define a dissimilarity score to be above the detectable threshold if it is larger than the dissimilarity score between networks with different random initialization. Figure 2 plots the dissimilarity induced by deleting principal components, as well as this baseline.
For the last layer of BERT, CKA requires 97% of a representation’s principal components to be deleted for the dissimilarity to be detectable; after deleting these components, probing accuracy shown in Figure 3b drops significantly from 80% to 63% (chance is 50%). CKA thus fails to detect large accuracy drops and so fails our intuitive sensitivity test.
Other metrics perform better: Orthogonal Procrustes’s detection threshold is ∼85% of the principal components, corresponding to an accuracy drop from 80% to 70%. PWCCA’s threshold is ∼55% of the principal components, corresponding to an accuracy drop from 80% to 75%.
PWCCA’s failure of specificity and CKA’s failure of sensitivity on these intuitive tests are worrying. However, before declaring definitive failure, in the next section, we turn to making our assessments more rigorous.
8 For a representation A, we define Â_{−k}, the result of deleting the k smallest principal components from A, as follows: we compute the singular value decomposition UΣVᵀ = A, construct U_{−k} ∈ R^{p×(p−k)} by dropping the lowest k singular vectors of U, and finally take Â_{−k} = U_{−k}ᵀ A.
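A direct implementation of this deletion operation (our sketch) is:

```python
# Delete the k smallest principal components of a representation A (p x n),
# following footnote 8 (our sketch).
import numpy as np

def delete_smallest_components(A, k):
    U, _, _ = np.linalg.svd(A, full_matrices=False)   # columns of U sorted by decreasing singular value
    U_keep = U[:, : U.shape[1] - k]                    # drop the k lowest singular vectors
    return U_keep.T @ A
```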
4 Rigorously Evaluating Dissimilarity Metrics
In the previous section, we saw that CKA and PWCCA each failed intuitive tests, based on sensitivity to principal components and specificity to random initialization. However, these were based primarily on intuitive, qualitative desiderata. Is there some way for us to make these tests more rigorous and quantitative?
First consider the intuitive layer specificity test (Section 3.1), which revealed that random initialization affects PWCCA more than large changes in layer depth. To justify why this is undesirable, we can turn to probing accuracy, which is strongly affected by layer depth, and only weakly affected by random seed (Figure 3a). This suggests a path forward: we can ground the layer test in the concrete differences in functionality captured by the probe.
More generally, we want metrics to be sensitive to changes that affect functionality, while ignoring those that don’t. This motivates the following general procedure, given a distance metric d and a functionality f (which assigns a real number to a given representation):
1. Collect a set S of representations that differ along one or more axes of interest (e.g. layer depth, random seed).
2. Choose a reference representation A ∈ S. When f is an accuracy metric, it is reasonable to choose A = argmax_{A∈S} f(A).9
3. For every representation B ∈ S:
   • Compute |f(A) − f(B)|
   • Compute d(A,B)
4. Report the rank correlation between |f(A) − f(B)| and d(A,B) (measured by Kendall’s τ or Spearman’s ρ).
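Step 4 can be computed with standard rank-correlation routines; in the sketch below (ours), dists and func_gaps are assumed to hold d(A, B) and |f(A) − f(B)| over all B ∈ S, in matching order.

```python
# Rank-correlation score for a dissimilarity measure against a functionality (ours).
from scipy.stats import kendalltau, spearmanr

def benchmark_score(dists, func_gaps):
    tau, _ = kendalltau(func_gaps, dists)      # Kendall's tau
    rho, _ = spearmanr(func_gaps, dists)       # Spearman's rho
    return tau, rho
```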
The above procedure provides a quantitative measure of how well the distance metric d responds to the functionality f . For instance, in the layer specificity test, since depth affects probing accuracy strongly while random seed affects it only weakly, a dissimilarity measure with high rank correlation will be strongly responsive to layer depth and weakly responsive to seed; thus rank correlation quantitatively formalizes the test from Section 3.1.
Correlation metrics also capture properties that our intuition might miss. For instance, Figure 3a shows that some variation in random seed actually does affect accuracy, and our procedure rewards metrics that pick up on this, while the intuitive sensitivity test would penalize them.
Our procedure requires choosing a collection of models S; the crucial feature of S is that it contains models with diverse behavior according to f . Different sets S, combined with a functional difference f , can be thought of as miniature “benchmarks" that surface complementary perspectives on dissimilarity measures’ responsiveness to that functional difference. In the rest of this section, we instantiate this quantitative benchmark for several choices of f and S, starting with the layer and principal component tests from Section 3 and continuing on to several tests of OOD performance.
The overall results are summarized in Table 1. Note that for any single benchmark, we expect the correlation coefficients to be significantly lower than 1, since the metric D must capture all important axes of variation while f measures only one type of functionality. A good metric is one that has consistently high correlation across many different functional measures.
Benchmark 1: Layer depth. We turn the layer test into a benchmark for both text and images. For the text setting, we construct a set S of 120 representations by pretraining 10 BERT base models with different initialization seeds and including each of the 12 BERT layers as a representation. We separately consider two functionalities f: probing accuracy on QNLI [37] and SST-2 [33]. To compute the rank correlation, we take the reference representation A to be the representation with highest probing accuracy. We compute the Kendall’s τ and Spearman’s ρ rank correlations between the dissimilarities and the probing accuracy differences and report the results in Table 1.
9Choosing the highest accuracy model as the reference makes it more likely that as accuracy changes, models are on average becoming more dissimilar. A low accuracy model may be on the “periphery” of model space, where it is dissimilar to models with high accuracy, but potentially even more dissimilar to other low accuracy models that make different mistakes.
For the image setting, we similarly construct a set S of 70 representations by training 5 ResNet-14 models with different initialization seeds and including each of the 14 layers’ representations. We also consider two functionalities f for these vision models: probing accuracy on CIFAR-100 [14] and on SVHN [26], and compute rank correlations in the same way.
We find that PWCCA has lower rank correlations compared to CKA and Procrustes for both language probing tasks. This corroborates the intuitive specificity test (Section 3.1), suggesting that PWCCA registers too large of a dissimilarity across random initializations. For the vision tasks, CKA and Procrustes achieve similar rank correlations, while PWCCA cannot be computed because n < d.
Benchmark 2: Principal component (PC) deletion. We next quantify the PC deletion test from Section 3.2, by constructing a set S of representations that vary in both random initialization and fraction of principal components deleted. We pretrain 10 BERT base models with different initializations, and for each pretrained model we obtain 14 different representations by deleting that representation’s k smallest principal components, with k ∈ {0, 100, 200, 300, 400, 500, 600, 650, 700, 725, 750, 758, 763, 767}. Thus S has 10 × 14 = 140 elements. The representations themselves are the layer-ℓ activations, for ℓ ∈ {8, 9, . . . , 12},10 so there are 5 different choices of S. We use SST-2 probing accuracy as the functionality of interest f, and select the reference representation A as the element in S with highest accuracy. Rank correlation
10Earlier layers have near-chance accuracy on probing tasks, so we ignore them.
results are consistent across the 5 choices of S (Appendix C), so we report the average as a summary statistic in Table 1.
We find that PWCCA has the highest rank correlation between dissimilarity and probing accuracy, followed by Procrustes, and distantly followed by CKA. This corroborates the intuitive observations from Section 3.2 that CKA is not sensitive to principal component deletion.
4.1 Investigating variation in OOD performance across random seeds
So far our benchmarks have been based on probing accuracy, which only measures in-distribution behavior (the train and test set of the probe are typically i.i.d.). In addition, the BERT models were always pretrained on language modeling but not finetuned for classification. To add diversity to our benchmarks, we next consider the out-of-distribution performance of language and vision models trained for classification tasks.
Benchmark 3: Changing fine-tuning seeds. McCoy et al. [23] show that a single pretrained BERT base model finetuned on MNLI with different random initializations will produce models with similar in-distribution performance, but widely variable performance on out-of-distribution data. We thus create a benchmark S out of McCoy et al.’s 100 released fine-tuned models, using OOD accuracy on the “Lexical Heuristic (Non-entailment)" subset of the HANS dataset [22] as our functionality f . This functionality is associated with the entire model, rather than an individual layer (in contrast to the probing functionality), but we consider one layer at a time to measure whether dissimilarities
between representations at that layer correlate with f . This allows us to also localize whether certain layers are more predictive of f .
We construct 12 different S (one for each of the 12 layers of BERT base), taking the reference representation A to be that of the highest accuracy model according to f . As before, we report each dissimilarity measure’s rank correlation with f in Table 1, averaged over the 12 runs.
All three dissimilarity measures correlate with OOD accuracy, with Orthogonal Procrustes and PWCCA being more correlated than CKA. Since the representations in our benchmarks were computed on in-distribution MNLI data, this has the interesting implication that dissimilarity measures can detect OOD differences without access to OOD data. It also implies that random initialization leads to meaningful functional differences that are picked up by these measures, especially Procrustes and PWCCA. Contrast this with our intuitive specificity test in Section 3.1, where all sensitivity to random initialization was seen as a shortcoming. Our more quantitative benchmark here suggests that some of that sensitivity tracks true functionality.
To check that the differences in rank correlation for Procrustes, PWCCA, and CKA are statistically significant, we compute bootstrap estimates of their 95% confidence intervals. With 2000 bootstrapped samples, we find statistically significant differences between all pairs of measures for most choices of layer depth S, so we conclude PWCCA > Orthogonal Procrustes > CKA (the full results are in Appendix E). We do not apply this procedure for the previous two benchmarks, because the different models have correlated randomness and so any p-value based on independence assumptions would be invalid.
Benchmark 4: Challenge sets: Changing pretraining and fine-tuning seeds. We also construct benchmarks using models trained from scratch with different random seeds (for language, this is pretraining and fine-tuning, and for vision, this is standard training). For language, we construct benchmarks from a collection of 100 BERT medium models, trained with all combinations of 10 pretraining and 10 fine-tuning seeds. The models are fine-tuned on MNLI, and we consider two different functionalities of interest f : accuracy on the OOD Antonymy stress test and on the OOD Numerical stress test [25], which both show significant variation in accuracy across models (see Figure 3d). We obtain 8 different sets S (one for each of the 8 layer depths in BERT medium), again taking A to be the representation of the highest-accuracy model according to f . Rank correlations for each dissimilarity measure are averaged over the 8 runs and reported in Table 1.
For vision, we construct benchmarks from a collection of 100 ResNet-14 models, trained with different random seeds on CIFAR-10. We consider 19 different functionalities of interest—the 19 types of corruptions in the CIFAR-10C dataset [12], which show significant variation in accuracy across models (see Figure 3c). We obtain 14 different sets S (one for each of the 14 layers), taking A to be the representation of the highest-accuracy model according to f. Rank correlations for each dissimilarity measure are averaged over the 14 runs and over the 19 corruption types and reported in Table 1. Results for each of the 19 corruptions individually can be found in Appendix D.
None of the dissimilarity measures show a large rank correlation for either the language or vision tasks, and for the Numerical stress test, at most layers, the associated p-values (assuming independence) are non-significant at the 0.05 level (see Appendix C). 11 Thus we conclude that all measures fail to be sensitive to OOD accuracy in these settings. One reason for this could be that there is less variation in the OOD accuracies compared to the previous experiment with the HANS dataset (there accuracies varied from 0 to nearly 60%). Another reason could be that it is harder to correctly account for both pretraining and fine-tuning variation at the same time. Either way, we hope that future dissimilarity measures can improve upon these results, and we present this benchmark as a challenge task to motivate progress.
5 Discussion
In this work we proposed a quantitative measure for evaluating similarity metrics, based on the rank correlation with functional behavior. Using this, we generated tasks motivated by sensitivity to
11See Appendix C for p-values as produced by sci-kit learn. Strictly speaking, the p-values are invalid because they assume independence, but the pretraining seed induces correlations. However, correctly accounting for these would tend to make the p-values larger, thus preserving our conclusion of non-significance .
deleting important directions, specificity to random initialization, and sensitivity to out-of-distribution performance. Popular existing metrics such as CKA and CCA often performed poorly on these tasks, sometimes in striking ways. Meanwhile, the classical Orthogonal Procrustes transform attained consistently good performance.
Given the success of Orthogonal Procrustes, it is worth reflecting on how it differs from the other metrics and why it might perform well. To do so, we consider a simplified case where A and B have the same singular vectors but different singular values. Thus without loss of generality A = Λ1 and B = Λ2, where the Λi are both diagonal. In this case, the Orthogonal Procrustes distance reduces to ‖Λ1 − Λ2‖²_F, or the sum of the squared distances between the singular values. We will see that both CCA and CKA reduce to less reasonable formulae in this case.
Orthogonal Procrustes vs. CCA. All three metrics derived from CCA assign zero distance even when the (non-zero) singular values are arbitrarily different. This is because CCA correlation coefficients are invariant to all invertible linear transformations. This invariance property may help explain why CCA metrics generally find layers within the same network to be much more similar than networks trained with different randomness. Random initialization introduces noise, particularly in unimportant principal components, while representations within the same network more easily preserve these components, and CCA may place too much weight on their associated correlation coefficients.
Orthogonal Procrustes vs. CKA. In contrast to the squared distance of Orthogonal Procrustes, CKA actually reduces to a quartic function based on the dot products between the squared entries of Λ1 and Λ2. As a consequence, CKA is dominated by representations’ largest singular values, leaving it insensitive to meaningful differences in smaller singular values as illustrated in Figure 2. This lack of sensitivity to moderate-sized differences may help explain why CKA fails to track out-of-distribution error effectively.
In addition to helping understand similarity measures, our benchmarks pinpoint directions for improvement. No method was sensitive to accuracy on the Numerical stress test in our challenge set, possibly due to a lower signal-to-noise ratio. Since Orthogonal Procrustes performed well on most of our tasks, it could be a promising foundation for a new measure, and recent work shows how to regularize Orthogonal Procrustes to handle high noise [28]. Perhaps similar techniques could be adapted here.
An alternative to our benchmarking approach is to directly define two representations’ dissimilarity as their difference in a functional behavior of interest. Feng et al. [9] take this approach, defining dissimilarity as difference in accuracy on a handful of probing tasks. One drawback of this approach is that a small set of probes may not capture all the differences in representations, so it is useful to base dissimilarity measures on representations’ intrinsic properties. Intrinsically defined dissimilarities also have the potential to highlight new functional behaviors, as we found that representations with similar in-distribution probing accuracy often have highly variable OOD accuracy.
A limitation of our work is that we only consider a handful of model variations and functional behaviors, and restricting our attention to these settings could overlook other important considerations. To address this, we envision a paradigm in which a rich tapestry of benchmarks are used to ground and validate neural network interpretations. Other axes of variation in models could include training on more or fewer examples, training on shuffled labels vs. real labels, training from specifically chosen initializations [10], and using different architectures. Other functional behaviors to examine could include modularity and meta-learning capabilities. Benchmarks could also be applied to other interpretability tools beyond dissimilarity. For example, sensitivity to deleting principal components could provide an additional sanity check for saliency maps and other visualization tools [1].
More broadly, many interpretability tools are designed as audits of models, although it is often unclear what characteristics of the models are consistently audited. We position this work as a counter-audit, where by collecting models that differ in functional behavior, we can assess whether the interpretability tools CKA, PWCCA, etc., accurately reflect the behavioral differences. Many other types of counter-audits may be designed to assess other interpretability tools. For example, models that have backdoors built into them to misclassify certain inputs provide counter-audits for interpretability tools that explain model predictions–these explanations should reflect any backdoors present [5, 15, 19, 38]. We are hopeful that more comprehensive checks on interpretability tools will provide deeper understanding of neural networks, and more reliable models.
Acknowledgments and Disclosure of Funding
Thanks to Ruiqi Zhong for helpful comments and assistance in finetuning models, and thanks to Daniel Rothchild and our anonymous reviewers for helpful discussion. FD is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1752814 and the Open Philanthropy Project AI Fellows Program. JSD is supported by the NSF Division of Mathematical Sciences Grant No. 2031985. | 1. What is the main contribution of the paper regarding neural network representations?
2. What are the strengths of the proposed approach, particularly in comparing dissimilarity measures?
3. What are the weaknesses of the paper, especially regarding the experiment section?
4. Do you have any concerns about the methodology used in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper focuses on giving a consistent meaning to (dis)similarity metrics among neural networks based on representations out of intermediate layers.
This consistent meaning is given by comparing the ordering that a dissimilarity measure induces over representations with the ordering induced by a "functional behavior". The functional behavior is defined as any function of the representations, including but not limited to the accuracy on a given task.
The authors focus on three dissimilarity measures. First, they explain intuitively how these measures behave when networks are trained from different initializations. Then they give intuitions about what happens when principal components of the representations are removed. Then they show that they can arrive at similar conclusions using their rank-comparison proposal with a functional behavior of the representations.
The authors then verify that the three measures can detect out-of-distribution differences when the pre-trained weights are the same but the fine-tuning seed is different. However, they construct a benchmark to show that when the pre-trained weights are also different, these three metrics do not show a statistically significant rank correlation, posing an open question about which metric is suitable in such a scenario.
Review
What is the motivation behind using orthogonal Procrustes besides the fact that it's invariant to left orthogonal transformations? What is it capturing intuitively? Geometrically it seems to be relevant for representations that need to be compared modulo a rotation factor, but neural network representations do not necessarily get optimized to produce such a rotation effect.
The first paragraph of page 5 and the footnote on page 5 together are a little confusing. Do you mean the smaller principal components are removed first or the largest ones? Or by quantifying do you mean you sort them based on how much they decrease probing accuracy?
Besides these, I think the paper is novel enough and has good clarity. In terms of significance, I think it provides an intuitive framework for giving meaning to dissimilarity measures which is beneficial for future research in this field. |
NIPS | Title
Grounding Representation Similarity Through Statistical Testing
Abstract
To understand neural network behavior, recent works quantitatively compare different networks’ learned representations using canonical correlation analysis (CCA), centered kernel alignment (CKA), and other dissimilarity measures. Unfortunately, these widely used measures often disagree on fundamental observations, such as whether deep networks differing only in random initialization learn similar representations. These disagreements raise the question: which, if any, of these dissimilarity measures should we believe? We provide a framework to ground this question through a concrete test: measures should have sensitivity to changes that affect functional behavior, and specificity against changes that do not. We quantify this through a variety of functional behaviors including probing accuracy and robustness to distribution shift, and examine changes such as varying random initialization and deleting principal components. We find that current metrics exhibit different weaknesses, note that a classical baseline performs surprisingly well, and highlight settings where all metrics appear to fail, thus providing a challenge set for further improvement.
1 Introduction
Understanding neural networks is not only scientifically interesting, but critical for applying deep networks in high-stakes situations. Recent work has highlighted the value of analyzing not just the final outputs of a network, but also its intermediate representations [20, 29]. This has motivated the development of representation similarity measures, which can provide insight into how different training schemes, architectures, and datasets affect networks’ learned representations.
A number of similarity measures have been proposed, including centered kernel alignment (CKA) [13], ones based on canonical correlation analysis (CCA) [24, 30], single neuron alignment [20], vector space alignment [3, 6, 32], and others [2, 9, 16, 18, 21, 39]. Unfortunately, these different measures tell different stories. For instance, CKA and projection weighted CCA disagree on which layers of different networks are most similar [13]. This lack of consensus is worrying, as measures are often designed according to different and incompatible intuitive desiderata, such as whether finding a one-to-one assignment, or finding few-to-one mappings, between neurons is more appropriate [20]. As a community, we need well-chosen formal criteria for evaluating metrics to avoid over-reliance on intuition and the pitfalls of too many researcher degrees of freedom [17].
In this paper we view representation dissimilarity measures as implicitly answering a classification question–whether two representations are essentially similar or importantly different. Thus, in analogy to statistical testing, we can evaluate them based on their sensitivity to important change and specificity (non-responsiveness) against unimportant changes or noise.
As a warm-up, we first consider two intuitive criteria: first, that metrics should have specificity against random initialization; and second, that they should be sensitive to deleting important principal
components (those that affect probing accuracy). Unfortunately, popular metrics fail at least one of these two tests. CCA is not specific – random initialization noise overwhelms differences between even far-apart layers in a network (Section 3.1). CKA on the other hand is not sensitive, failing to detect changes in all but the top 10 principal components of a representation (Section 3.2).
We next construct quantitative benchmarks to evaluate a dissimilarity measure’s quality. To move beyond our intuitive criteria, we need a ground truth. For this we turn to the functional behavior of the representations we are comparing, measured through probing accuracy (an indicator of syntactic information) [4, 27, 35] and out-of-distribution performance of the model they belong to [7, 23, 25]. We then score dissimilarity measures based on their rank correlation with these measured functional differences. Overall our benchmarks contain 30,480 examples and vary representations across several axes including random seed, layer depth, and low-rank approximation (Section 4)1.
Our benchmarks confirm our two intuitive observations: on subtasks that consider layer depth and principal component deletion, we measure the rank correlation with probing accuracy and find CCA and CKA lacking as the previous warm-up experiments suggested. Meanwhile, the Orthogonal Procrustes distance, a classical but often overlooked2 dissimilarity measure, balances gracefully between CKA and CCA and consistently performs well. This underscores the need for systematic evaluation, otherwise we may fall to recency bias that undervalues classical baselines.
Other subtasks measure correlation with OOD accuracy, motivated by the observation that random initialization sometimes has large effects on OOD performance [23]. We find that dissimilarity measures can sometimes predict OOD performance using only the in-distribution representations, but we also identify a challenge set on which none of the measures do statistically better than chance. We hope this challenge set will help measure and spur progress in the future.
2 Problem Setup: Metrics and Models
Our goal is to quantify the similarity between two different groups of neurons (usually layers). We do this by comparing how their activations behave on the same dataset. Thus for a layer with p1 neurons, we define A ∈ R^{p1×n}, the matrix of activations of the p1 neurons on n data points, to be that layer’s raw representation of the data. Similarly, let B ∈ R^{p2×n} be a matrix of the activations of p2 neurons on the same n data points. We center and normalize these representations before computing dissimilarity, per standard practice. Specifically, for a raw representation A we first subtract the mean value from each column, then divide by the Frobenius norm, to produce the normalized representation A*, used in all our dissimilarity computations. In this work we study dissimilarity measures d(A*, B*) that allow for quantitative comparisons of representations both within and across different networks. We colloquially refer to values of d(A*, B*) as distances, although they do not necessarily satisfy the triangle inequality required of a proper metric.
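As an illustration of this preprocessing step, the following sketch (our own, not the authors' released code) applies the centering and Frobenius-norm scaling described above; the axis used for mean-centering follows the column convention stated in the text and is an assumption.

```python
import numpy as np

def normalize_representation(A_raw):
    """Produce the normalized representation A* from a raw (p, n) activation matrix.

    Follows the description above: subtract the mean value from each column,
    then divide by the Frobenius norm of the centered matrix.
    """
    A = A_raw - A_raw.mean(axis=0, keepdims=True)  # center each column (one column per data point)
    # use axis=1 instead to center each neuron across data points, if that convention is intended
    return A / np.linalg.norm(A)                   # Frobenius-norm scaling
```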
We study five dissimilarity measures: centered kernel alignment (CKA), three measures derived from canonical correlation analysis (CCA), and a measure derived from the orthogonal Procrustes problem.
Centered kernel alignment (CKA) uses an inner product to quantify similarity between two representations. It is based on the idea that one can first choose a kernel, compute the n⇥ n kernel matrix for each representation, and then measure similarity as the alignment between these two kernel matrices. The measure of similarity thus depends on one’s choice of kernel; in this work we consider Linear CKA:
$$d_{\mathrm{Linear\,CKA}}(A,B) \;=\; 1 - \frac{\|AB^\top\|_F^2}{\|AA^\top\|_F\,\|BB^\top\|_F} \qquad (1)$$
as proposed in Kornblith et al. [13]. Other choices of kernel are also valid; we focus on Linear CKA here since Kornblith et al. [13] report similar results from using either a linear or RBF kernel.
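A minimal sketch of the Linear CKA distance in equation (1), assuming A and B are already normalized (p1 × n and p2 × n matrices over the same n examples); this is our own illustration rather than the authors' implementation.

```python
import numpy as np

def linear_cka_distance(A, B):
    """Linear CKA distance of equation (1); A is (p1, n), B is (p2, n)."""
    cross = np.linalg.norm(A @ B.T, ord='fro') ** 2
    norm_a = np.linalg.norm(A @ A.T, ord='fro')
    norm_b = np.linalg.norm(B @ B.T, ord='fro')
    return 1.0 - cross / (norm_a * norm_b)
```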
Canonical correlation analysis (CCA) finds orthogonal bases (w_A^i, w_B^i) for two matrices such that after projection onto w_A^i, w_B^i, the projected matrices have maximally correlated rows. For 1 ≤ i ≤ p1, the i-th canonical correlation coefficient ρ_i is computed as follows:
$$\rho_i \;=\; \max_{w_A^i,\, w_B^i}\; \frac{\langle {w_A^i}^{\top} A,\; {w_B^i}^{\top} B\rangle}{\|{w_A^i}^{\top} A\|\cdot\|{w_B^i}^{\top} B\|} \qquad (2)$$
$$\text{s.t.}\quad \langle {w_A^i}^{\top} A,\, {w_A^j}^{\top} A\rangle = 0 \;\;\forall j<i, \qquad \langle {w_B^i}^{\top} B,\, {w_B^j}^{\top} B\rangle = 0 \;\;\forall j<i \qquad (3)$$
1 Code to replicate our results can be found at https://github.com/js-d/sim_metric.
2 For instance, Raghu et al. [30] and Morcos et al. [24] do not mention it, and Kornblith et al. [13] relegates it to the appendix; although Smith et al. [32] does use it to analyze word embeddings and prefers it to CCA.
To transform the vector of correlation coefficients into a scalar measure, two options considered previously [13] are the mean correlation coefficient, ρ̄_CCA, and the mean squared correlation coefficient, R²_CCA, defined as follows:
$$d_{\bar\rho\,\mathrm{CCA}}(A,B) \;=\; 1 - \frac{1}{p_1}\sum_i \rho_i, \qquad d_{R^2\,\mathrm{CCA}}(A,B) \;=\; 1 - \frac{1}{p_1}\sum_i \rho_i^2 \qquad (4)$$
To improve the robustness of CCA, Morcos et al. [24] propose projection-weighted CCA (PWCCA) as another scalar summary of CCA:
$$d_{\mathrm{PWCCA}}(A,B) \;=\; 1 - \frac{\sum_i \alpha_i \rho_i}{\sum_i \alpha_i}, \qquad \alpha_i = \sum_j |\langle h_i, a_j\rangle| \qquad (5)$$
where a_j is the j-th row of A, and h_i = {w_A^i}^⊤ A is the projection of A onto the i-th canonical direction. We find that PWCCA performs far better than ρ̄_CCA and R²_CCA, so we focus on PWCCA in the main text, but include results on the other two measures in the appendix.
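One standard way to obtain the coefficients in equations (2)-(3) is through orthonormal bases for the two row spaces; the sketch below (our own illustration, not the released code) computes them and the scalar distances of equations (4)-(5). It assumes centered inputs with p1 ≤ p2 ≤ n and glosses over numerical issues such as rank deficiency.

```python
import numpy as np

def cca_distances(A, B):
    """A: (p1, n), B: (p2, n). Returns the mean-CCA, R^2-CCA, and PWCCA distances."""
    Qa, _ = np.linalg.qr(A.T)                      # orthonormal basis for A's row space, (n, p1)
    Qb, _ = np.linalg.qr(B.T)                      # orthonormal basis for B's row space, (n, p2)
    U, rho, _ = np.linalg.svd(Qa.T @ Qb, full_matrices=False)
    rho = np.clip(rho, 0.0, 1.0)                   # canonical correlation coefficients rho_i

    d_mean_cca = 1.0 - rho.mean()                  # equation (4), left
    d_r2_cca = 1.0 - (rho ** 2).mean()             # equation (4), right

    # PWCCA weights alpha_i = sum_j |<h_i, a_j>|, with h_i the i-th canonical variate of A
    H = Qa @ U                                     # columns are the canonical variates h_i, (n, p1)
    alpha = np.abs(A @ H).sum(axis=0)
    d_pwcca = 1.0 - (alpha * rho).sum() / alpha.sum()   # equation (5)
    return d_mean_cca, d_r2_cca, d_pwcca
```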
The orthogonal Procrustes problem consists of finding the left-rotation of A that is closest to B in Frobenius norm, i.e. solving the optimization problem:
$$\min_R\; \|B - RA\|_F^2, \quad \text{subject to } R^\top R = I. \qquad (6)$$
The minimum is the squared orthogonal Procrustes distance between A and B, and is equal to
$$d_{\mathrm{Proc}}(A,B) \;=\; \|A\|_F^2 + \|B\|_F^2 - 2\,\|A^\top B\|_*, \qquad (7)$$
where ‖·‖_* is the nuclear norm [31]. Unlike the other metrics, the orthogonal Procrustes distance is not normalized between 0 and 1, although for normalized A*, B* it lies in [0, 2].
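A sketch of the squared Procrustes distance computed via the nuclear norm, following equation (7) with pre-normalized inputs; this is our own illustration, and whether the inner matrix is AᵀB or ABᵀ depends on the storage convention used for the representations.

```python
import numpy as np

def procrustes_distance(A, B):
    """Squared orthogonal Procrustes distance of equation (7) for normalized A, B."""
    nuclear = np.linalg.norm(A.T @ B, ord='nuc')   # sum of singular values
    return np.linalg.norm(A) ** 2 + np.linalg.norm(B) ** 2 - 2.0 * nuclear
```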
2.1 Models we study
In this work we study representations of both text and image inputs. For text, we investigate representations computed by Transformer architectures in the BERT model family [8] on sentences from the Multigenre Natural Language Inference (MNLI) dataset [40]. We study BERT models of two sizes: BERT base, with 12 hidden layers of 768 neurons, and BERT medium, with 8 hidden layers of 512 neurons. We use the same architectures as in the open source BERT release3, but to generate diversity we study 3 variations of these models:
1. 10 BERT base models pretrained with different random seeds but not finetuned for particular tasks, released by Zhong et al. [41]4.
2. 10 BERT medium models initialized from pretrained models released by Zhong et al. [41], that we further finetuned on MNLI with 10 different finetuning seeds (100 models total).
3. 100 BERT base models that were initialized from the pretrained BERT model in [8] and finetuned on MNLI with different seeds, released by McCoy et al. [23]5.
For images, we investigate representations computed by ResNets [11] on CIFAR-10 test set images [14]. We train 100 ResNet-14 models6 from random initialization with different seeds on the CIFAR-10 training set and collect representations after each convolutional layer.
Further training details, as well as checks that our training protocols result in models with comparable performance to the original model releases, can be found in Appendix A.
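As a sketch of how per-layer representations of this kind can be collected with the HuggingFace transformers library (the model name, example sentences, and mean-pooling choice here are illustrative assumptions, not the exact pipeline used in the paper):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

sentences = ["A soccer game with multiple males playing.",
             "Some men are playing a sport."]   # placeholder MNLI-style sentences
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# outputs.hidden_states is a tuple: the embedding layer plus one tensor per encoder layer,
# each of shape (batch, sequence_length, hidden_size).
layer = 7
token_states = outputs.hidden_states[layer]
mask = batch["attention_mask"].unsqueeze(-1)
sentence_repr = (token_states * mask).sum(1) / mask.sum(1)   # mean-pool over tokens
A = sentence_repr.T.numpy()   # (hidden_size, n), matching the p x n convention of Section 2
```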
3 available at https://github.com/google-research/bert
4 available at https://github.com/ruiqi-zhong/acl2021-instance-level
5 available at https://github.com/tommccoy1/hans/tree/master/berts_of_a_feather
6 from https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py
3 Warm-up: Intuitive Tests for Sensitivity and Specificity
When designing dissimilarity measures, researchers usually consider invariants that these measures should not be sensitive to [13]; for example, symmetries in neural networks imply that permuting the neurons in a fully connected layer does not change the representations learned. We take this one step further and frame dissimilarity measures as answering whether representations are essentially the same, or importantly different. We can then evaluate measures based on whether they respond to important changes (sensitivity) while ignoring changes that don’t matter (specificity).
Assessing sensitivity and specificity requires a ground truth: which representations are truly different? To answer this, we begin with the following two intuitions7: 1) neural network representations trained on the same data but from different random initializations are similar, and 2) representations lose crucial information as principal components are deleted. These motivate the following intuitive tests of specificity and sensitivity: we expect a dissimilarity measure to: 1) assign a small distance between architecturally identical neural networks that only differ in initialization seed, and 2) assign a large distance between a representation A and the representation Â after deleting important principal components (enough to affect accuracy). We will see that PWCCA fails the first test (specificity), while CKA fails the second (sensitivity).
3.1 Specificity against changes to random seed
Neural networks with the same architecture trained from different random initializations show many similarities, such as highly correlated predictions on in-distribution data points [23]. Thus it seems natural to expect a good similarity measure to assign small distances between architecturally corresponding layers of networks that are identical except for initialization seed.
To check this property, we take two BERT base models pre-trained with different random seeds and, for every layer in the first model, compute its dissimilarity to every layer in both the first and second model. We do this for 5 separate pairs of models and average the results. To pass the intuitive specificity test, a dissimilarity measure should assign relatively small distances between a layer in the first network and its corresponding layer in the second network.
Figure 1 displays the average pair-wise PWCCA, CKA, and Orthogonal Procrustes distances between layers of two networks differing only in random seed. According to PWCCA, these networks’ representations are quite dissimilar; for instance, the two layer 7 representations are further apart than they are from any other layer in the same network. PWCCA is thus not specific against random initialization, as it can outweigh even large changes in layer depth.
7 Note we will see later that these intuitions need refinement.
In contrast, CKA can separate layer 7 in a different network from layers 4 or 10 in the same network, showing better specificity to random initialization. Orthogonal Procrustes exhibits smaller but non-trivial specificity, distinguishing layers once they are 4-5 layers apart.
3.2 Sensitivity to removing principal components
Dissimilarity measures should also be sensitive to deleting important principal components of a representation.8 To quantify which components are important, we fix a layer of a pre-trained BERT base model and measure how probing accuracy degrades as principal components are deleted (starting from the smallest component), since probing accuracy is a common measure of the information captured in a representation [4]. We probe linear classification performance on the Stanford Sentiment Tree Bank task (SST-2) [33], following the experimental protocol in Tamkin et al. [34]. Figure 3b shows how probing accuracy degrades with component deletion. Ideally, dissimilarity measures should be large by the time probing accuracy has decreased substantially.
To assess whether a dissimilarity measure is large, we need a baseline to compare to. For each measure, we define a dissimilarity score to be above the detectable threshold if it is larger than the dissimilarity score between networks with different random initialization. Figure 2 plots the dissimilarity induced by deleting principal components, as well as this baseline.
For the last layer of BERT, CKA requires 97% of a representation’s principal components to be deleted for the dissimilarity to be detectable; after deleting these components, probing accuracy shown in Figure 3b drops significantly from 80% to 63% (chance is 50%). CKA thus fails to detect large accuracy drops and so fails our intuitive sensitivity test.
Other metrics perform better: Orthogonal Procrustes’s detection threshold is ~85% of the principal components, corresponding to an accuracy drop from 80% to 70%. PWCCA’s threshold is ~55% of principal components, corresponding to an accuracy drop from 80% to 75%.
PWCCA’s failure of specificity and CKA’s failure of sensitivity on these intuitive tests are worrying. However, before declaring definitive failure, in the next section, we turn to making our assessments more rigorous.
8 For a representation A, we define Â_{−k}, the result of deleting the k smallest principal components from A, as follows: we compute the singular value decomposition UΣV^⊤ = A, construct U_{−k} ∈ R^{p×(p−k)} by dropping the lowest k singular vectors of U, and finally take Â_{−k} = U_{−k}^⊤ A.
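The deletion operation in footnote 8 can be written compactly with an SVD; the sketch below is our own illustration.

```python
import numpy as np

def delete_smallest_components(A, k):
    """Delete the k smallest principal components of A (shape (p, n)), per footnote 8."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # singular values in descending order
    U_keep = U[:, : U.shape[1] - k]                    # drop the k lowest left singular vectors
    return U_keep.T @ A                                # a (p - k) x n representation
```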
4 Rigorously Evaluating Dissimilarity Metrics
In the previous section, we saw that CKA and PWCCA each failed intuitive tests, based on sensitivity to principal components and specificity to random initialization. However, these were based primarily on intuitive, qualitative desiderata. Is there some way for us to make these tests more rigorous and quantitative?
First consider the intuitive layer specificity test (Section 3.1), which revealed that random initialization affects PWCCA more than large changes in layer depth. To justify why this is undesirable, we can turn to probing accuracy, which is strongly affected by layer depth, and only weakly affected by random seed (Figure 3a). This suggests a path forward: we can ground the layer test in the concrete differences in functionality captured by the probe.
More generally, we want metrics to be sensitive to changes that affect functionality, while ignoring those that don’t. This motivates the following general procedure, given a distance metric d and a functionality f (which assigns a real number to a given representation):
1. Collect a set S of representations that differ along one or more axes of interest (e.g. layer depth, random seed).
2. Choose a reference representation A ∈ S. When f is an accuracy metric, it is reasonable to choose A = argmax_{A∈S} f(A).9
3. For every representation B ∈ S:
• Compute |f(A) − f(B)|
• Compute d(A,B)
4. Report the rank correlation between |f(A) − f(B)| and d(A,B) (measured by Kendall’s τ or Spearman’s ρ), as sketched below.
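A minimal sketch of steps 1-4; the representation set, functionality values, and distance function are placeholders supplied by the user, and this is our own illustration rather than the paper's benchmark code.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def benchmark_rank_correlation(reps, f_values, dist_fn):
    """reps: list of representations in S; f_values: f(B) for each; dist_fn: d(A, B)."""
    ref = int(np.argmax(f_values))                 # step 2: reference A = argmax f
    A, f_A = reps[ref], f_values[ref]
    func_diffs, dists = [], []
    for B, f_B in zip(reps, f_values):             # step 3
        func_diffs.append(abs(f_A - f_B))
        dists.append(dist_fn(A, B))
    tau, _ = kendalltau(func_diffs, dists)         # step 4
    rho, _ = spearmanr(func_diffs, dists)
    return tau, rho
```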
The above procedure provides a quantitative measure of how well the distance metric d responds to the functionality f . For instance, in the layer specificity test, since depth affects probing accuracy strongly while random seed affects it only weakly, a dissimilarity measure with high rank correlation will be strongly responsive to layer depth and weakly responsive to seed; thus rank correlation quantitatively formalizes the test from Section 3.1.
Correlation metrics also capture properties that our intuition might miss. For instance, Figure 3a shows that some variation in random seed actually does affect accuracy, and our procedure rewards metrics that pick up on this, while the intuitive sensitivity test would penalize them.
Our procedure requires choosing a collection of models S; the crucial feature of S is that it contains models with diverse behavior according to f . Different sets S, combined with a functional difference f , can be thought of as miniature “benchmarks" that surface complementary perspectives on dissimilarity measures’ responsiveness to that functional difference. In the rest of this section, we instantiate this quantitative benchmark for several choices of f and S, starting with the layer and principal component tests from Section 3 and continuing on to several tests of OOD performance.
The overall results are summarized in Table 1. Note that for any single benchmark, we expect the correlation coefficients to be significantly lower than 1, since the metric D must capture all important axes of variation while f measures only one type of functionality. A good metric is one that has consistently high correlation across many different functional measures.
Benchmark 1: Layer depth. We turn the layer test into a benchmark for both text and images. For the text setting, we construct a set S of 120 representations by pretraining 10 BERT base models with different initialization seeds and including each of the 12 BERT layers as a representation. We separately consider two functionalities f : probing accuracy on QNLI [37] and SST-2 [33]. To compute the rank correlation, we take the reference representation A to be the representation with highest probing accuracy. We compute the Kendall’s ⌧ and Spearman’s ⇢ rank correlations between the dissimilarities and the probing accuracy differences and report the results in Table 1.
9Choosing the highest accuracy model as the reference makes it more likely that as accuracy changes, models are on average becoming more dissimilar. A low accuracy model may be on the “periphery” of model space, where it is dissimilar to models with high accuracy, but potentially even more dissimilar to other low accuracy models that make different mistakes.
For the image setting, we similarly construct a set S of 70 representations by training 5 ResNet-14 models with different initialization seeds and including each of the 14 layers’ representations. We also consider two functionalities f for these vision models: probing accuracy on CIFAR-100 [14] and on SVHN [26], and compute rank correlations in the same way.
We find that PWCCA has lower rank correlations compared to CKA and Procrustes for both language probing tasks. This corroborates the intuitive specificity test (Section 3.1), suggesting that PWCCA registers too large of a dissimilarity across random initializations. For the vision tasks, CKA and Procrustes achieve similar rank correlations, while PWCCA cannot be computed because n < d.
Benchmark 2: Principal component (PC) deletion. We next quantify the PC deletion test from Section 3.2, by constructing a set S of representations that vary in both random initialization and fraction of principal components deleted. We pretrain 10 BERT base models with different initializations, and for each pretrained model we obtain 14 different representations by deleting that representation’s k smallest principal components, with k ∈ {0, 100, 200, 300, 400, 500, 600, 650, 700, 725, 750, 758, 763, 767}. Thus S has 10 × 14 = 140 elements. The representations themselves are the layer-ℓ activations, for ℓ ∈ {8, 9, . . . , 12},10 so there are 5 different choices of S. We use SST-2 probing accuracy as the functionality of interest f, and select the reference representation A as the element in S with highest accuracy. Rank correlation results are consistent across the 5 choices of S (Appendix C), so we report the average as a summary statistic in Table 1.
10 Earlier layers have near-chance accuracy on probing tasks, so we ignore them.
We find that PWCCA has the highest rank correlation between dissimilarity and probing accuracy, followed by Procrustes, and distantly followed by CKA. This corroborates the intuitive observations from Section 3.2 that CKA is not sensitive to principal component deletion.
4.1 Investigating variation in OOD performance across random seeds
So far our benchmarks have been based on probing accuracy, which only measures in-distribution behavior (the train and test set of the probe are typically i.i.d.). In addition, the BERT models were always pretrained on language modeling but not finetuned for classification. To add diversity to our benchmarks, we next consider the out-of-distribution performance of language and vision models trained for classification tasks.
Benchmark 3: Changing fine-tuning seeds. McCoy et al. [23] show that a single pretrained BERT base model finetuned on MNLI with different random initializations will produce models with similar in-distribution performance, but widely variable performance on out-of-distribution data. We thus create a benchmark S out of McCoy et al.’s 100 released fine-tuned models, using OOD accuracy on the “Lexical Heuristic (Non-entailment)" subset of the HANS dataset [22] as our functionality f . This functionality is associated with the entire model, rather than an individual layer (in contrast to the probing functionality), but we consider one layer at a time to measure whether dissimilarities
between representations at that layer correlate with f . This allows us to also localize whether certain layers are more predictive of f .
We construct 12 different S (one for each of the 12 layers of BERT base), taking the reference representation A to be that of the highest accuracy model according to f . As before, we report each dissimilarity measure’s rank correlation with f in Table 1, averaged over the 12 runs.
All three dissimilarity measures correlate with OOD accuracy, with Orthogonal Procrustes and PWCCA being more correlated than CKA. Since the representations in our benchmarks were computed on in-distribution MNLI data, this has the interesting implication that dissimilarity measures can detect OOD differences without access to OOD data. It also implies that random initialization leads to meaningful functional differences that are picked up by these measures, especially Procrustes and PWCCA. Contrast this with our intuitive specificity test in Section 3.1, where all sensitivity to random initialization was seen as a shortcoming. Our more quantitative benchmark here suggests that some of that sensitivity tracks true functionality.
To check that the differences in rank correlation for Procrustes, PWCCA, and CKA are statistically significant, we compute bootstrap estimates of their 95% confidence intervals. With 2000 bootstrapped samples, we find statistically significant differences between all pairs of measures for most choices of layer depth S, so we conclude PWCCA > Orthogonal Procrustes > CKA (the full results are in Appendix E). We do not apply this procedure for the previous two benchmarks, because the different models have correlated randomness and so any p-value based on independence assumptions would be invalid.
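A sketch of such a bootstrap (our own illustration): models are resampled with replacement and the rank correlation is recomputed on each draw; the 2,000 draws match the text, everything else is a placeholder.

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_rank_correlation_ci(func_diffs, dists, n_boot=2000, seed=0):
    """95% percentile bootstrap CI for the rank correlation between |f(A)-f(B)| and d(A,B)."""
    rng = np.random.default_rng(seed)
    func_diffs, dists = np.asarray(func_diffs), np.asarray(dists)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(dists), size=len(dists))   # resample models with replacement
        rho, _ = spearmanr(func_diffs[idx], dists[idx])
        stats.append(rho)
    return np.percentile(stats, [2.5, 97.5])
```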
Benchmark 4: Challenge sets: Changing pretraining and fine-tuning seeds. We also construct benchmarks using models trained from scratch with different random seeds (for language, this is pretraining and fine-tuning, and for vision, this is standard training). For language, we construct benchmarks from a collection of 100 BERT medium models, trained with all combinations of 10 pretraining and 10 fine-tuning seeds. The models are fine-tuned on MNLI, and we consider two different functionalities of interest f : accuracy on the OOD Antonymy stress test and on the OOD Numerical stress test [25], which both show significant variation in accuracy across models (see Figure 3d). We obtain 8 different sets S (one for each of the 8 layer depths in BERT medium), again taking A to be the representation of the highest-accuracy model according to f . Rank correlations for each dissimilarity measure are averaged over the 8 runs and reported in Table 1.
For vision, we construct benchmarks from a collection of 100 ResNet-14 models, trained with different random seeds on CIFAR-10. We consider 19 different functionalities of interest: the 19 types of corruptions in the CIFAR-10C dataset [12], which show significant variation in accuracy across models (see Figure 3c). We obtain 14 different sets S (one for each of the 14 layers), taking A to be the representation of the highest-accuracy model according to f. Rank correlations for each dissimilarity measure are averaged over the 14 runs and over the 19 corruption types and reported in Table 1. Results for each of the 19 corruptions individually can be found in Appendix D.
None of the dissimilarity measures show a large rank correlation for either the language or vision tasks, and for the Numerical stress test, at most layers, the associated p-values (assuming independence) are non-significant at the 0.05 level (see Appendix C).11 Thus we conclude that all measures fail to be sensitive to OOD accuracy in these settings. One reason for this could be that there is less variation in the OOD accuracies compared to the previous experiment with the HANS dataset (there, accuracies varied from 0 to nearly 60%). Another reason could be that it is harder to correctly account for both pretraining and fine-tuning variation at the same time. Either way, we hope that future dissimilarity measures can improve upon these results, and we present this benchmark as a challenge task to motivate progress.
5 Discussion
In this work we proposed a quantitative measure for evaluating similarity metrics, based on the rank correlation with functional behavior. Using this, we generated tasks motivated by sensitivity to deleting important directions, specificity to random initialization, and sensitivity to out-of-distribution performance. Popular existing metrics such as CKA and CCA often performed poorly on these tasks, sometimes in striking ways. Meanwhile, the classical Orthogonal Procrustes transform attained consistently good performance.
11 See Appendix C for p-values as produced by scikit-learn. Strictly speaking, the p-values are invalid because they assume independence, but the pretraining seed induces correlations. However, correctly accounting for these would tend to make the p-values larger, thus preserving our conclusion of non-significance.
Given the success of Orthogonal Procrustes, it is worth reflecting on how it differs from the other metrics and why it might perform well. To do so, we consider a simplified case where A and B have the same singular vectors but different singular values. Thus without loss of generality A = Λ1 and B = Λ2, where the Λi are both diagonal. In this case, the Orthogonal Procrustes distance reduces to ‖Λ1 − Λ2‖_F², or the sum of the squared distances between the singular values. We will see that both CCA and CKA reduce to less reasonable formulae in this case.
Orthogonal Procrustes vs. CCA. All three metrics derived from CCA assign zero distance even when the (non-zero) singular values are arbitrarily different. This is because CCA correlation coefficients are invariant to all invertible linear transformations. This invariance property may help explain why CCA metrics generally find layers within the same network to be much more similar than networks trained with different randomness. Random initialization introduces noise, particularly in unimportant principal components, while representations within the same network more easily preserve these components, and CCA may place too much weight on their associated correlation coefficients.
Orthogonal Procrustes vs. CKA. In contrast to the squared distance of Orthogonal Procrustes, CKA actually reduces to a quartic function based on the dot products between the squared entries of Λ1 and Λ2. As a consequence, CKA is dominated by representations’ largest singular values, leaving it insensitive to meaningful differences in smaller singular values as illustrated in Figure 2. This lack of sensitivity to moderate-sized differences may help explain why CKA fails to track out-of-distribution error effectively.
In addition to helping understand similarity measures, our benchmarks pinpoint directions for improvement. No method was sensitive to accuracy on the Numerical stress test in our challenge set, possibly due to a lower signal-to-noise ratio. Since Orthogonal Procrustes performed well on most of our tasks, it could be a promising foundation for a new measure, and recent work shows how to regularize Orthogonal Procrustes to handle high noise [28]. Perhaps similar techniques could be adapted here.
An alternative to our benchmarking approach is to directly define two representations’ dissimilarity as their difference in a functional behavior of interest. Feng et al. [9] take this approach, defining dissimilarity as difference in accuracy on a handful of probing tasks. One drawback of this approach is that a small set of probes may not capture all the differences in representations, so it is useful to base dissimilarity measures on representations’ intrinsic properties. Intrinsically defined dissimilarities also have the potential to highlight new functional behaviors, as we found that representations with similar in-distribution probing accuracy often have highly variable OOD accuracy.
A limitation of our work is that we only consider a handful of model variations and functional behaviors, and restricting our attention to these settings could overlook other important considerations. To address this, we envision a paradigm in which a rich tapestry of benchmarks are used to ground and validate neural network interpretations. Other axes of variation in models could include training on more or fewer examples, training on shuffled labels vs. real labels, training from specifically chosen initializations [10], and using different architectures. Other functional behaviors to examine could include modularity and meta-learning capabilities. Benchmarks could also be applied to other interpretability tools beyond dissimilarity. For example, sensitivity to deleting principal components could provide an additional sanity check for saliency maps and other visualization tools [1].
More broadly, many interpretability tools are designed as audits of models, although it is often unclear what characteristics of the models are consistently audited. We position this work as a counter-audit, where by collecting models that differ in functional behavior, we can assess whether the interpretability tools CKA, PWCCA, etc., accurately reflect the behavioral differences. Many other types of counter-audits may be designed to assess other interpretability tools. For example, models that have backdoors built into them to misclassify certain inputs provide counter-audits for interpretability tools that explain model predictions–these explanations should reflect any backdoors present [5, 15, 19, 38]. We are hopeful that more comprehensive checks on interpretability tools will provide deeper understanding of neural networks, and more reliable models.
Acknowledgments and Disclosure of Funding
Thanks to Ruiqi Zhong for helpful comments and assistance in finetuning models, and thanks to Daniel Rothchild and our anonymous reviewers for helpful discussion. FD is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1752814 and the Open Philanthropy Project AI Fellows Program. JSD is supported by the NSF Division of Mathematical Sciences Grant No. 2031985. | 1. What are the contributions and findings of the paper regarding the evaluation of neural network representations using CCA, CKA, and orthogonal Procrustes distance?
2. What are the inconsistencies in the current metrics used for measuring representation similarity, and how does the proposed method address these issues?
3. How does the proposed method provide a better understanding of representation similarities, and what are its limitations?
4. What are the reviewer's concerns regarding the example used to demonstrate the proposed method, and how might the authors address them?
5. What are the reviewer's questions regarding the use of OOD data, the training of multiple models, and the reporting of standard deviations or variances? | Summary Of The Paper
Review | Summary Of The Paper
This paper evaluates recent efforts to analyze learned representations obtained by neural networks using canonical correlation analysis (CCA), centered kernel alignment (CKA), and orthogonal Procrustes distance. The authors pointed out some inconsistencies across these metrics and suggested a method to measure how well a distance metric tracks some functionality.
Review
The paper is clearly written, easy to follow, and the findings regarding CCA, CKA, and orthogonal Procrustes are interesting and valuable. I'm a little bit puzzled by the quantitative measure and its contribution. If I understand it correctly, the authors first claim that the current metrics are not consistent concerning sensitivity and specificity. The authors demonstrated it with an example. Then, the authors presented a different method, grounded by some downstream task, using the same example to assess the same thing. Can we use this method to better understand representation similarities or only to compare different distance metrics?
Overall, the method and analysis are interesting. I have some questions for the authors:
I agree with the authors that according to Figure 1 two different networks' representations at layer 7 have a higher PWCCA distance than that between layer 7 and any other layer within the same network. However, we still see the same behavior across layers for different models, meaning the distance to layer 7 is smaller than to any other layer. Can the authors say something about that? Maybe this is a normalization issue? Will it be different if we do it with a randomly initialized network / a network that was trained for another task?
In lines 257-259 the authors wrote: "The representations were computed on in-distribution MNLI data, meaning that the dissimilarity measures can detect OOD differences without access to OOD data". However, the authors did use OOD data (HANS) to get accuracies and learn f, no? Am I missing something?
The authors mentioned that they trained several models for each setting and averaged the results; can the authors also share the STDs / variances?
NIPS | Title
Grounding Representation Similarity Through Statistical Testing
Abstract
To understand neural network behavior, recent works quantitatively compare different networks’ learned representations using canonical correlation analysis (CCA), centered kernel alignment (CKA), and other dissimilarity measures. Unfortunately, these widely used measures often disagree on fundamental observations, such as whether deep networks differing only in random initialization learn similar representations. These disagreements raise the question: which, if any, of these dissimilarity measures should we believe? We provide a framework to ground this question through a concrete test: measures should have sensitivity to changes that affect functional behavior, and specificity against changes that do not. We quantify this through a variety of functional behaviors including probing accuracy and robustness to distribution shift, and examine changes such as varying random initialization and deleting principal components. We find that current metrics exhibit different weaknesses, note that a classical baseline performs surprisingly well, and highlight settings where all metrics appear to fail, thus providing a challenge set for further improvement.
1 Introduction
Understanding neural networks is not only scientifically interesting, but critical for applying deep networks in high-stakes situations. Recent work has highlighted the value of analyzing not just the final outputs of a network, but also its intermediate representations [20, 29]. This has motivated the development of representation similarity measures, which can provide insight into how different training schemes, architectures, and datasets affect networks’ learned representations.
A number of similarity measures have been proposed, including centered kernel alignment (CKA) [13], ones based on canonical correlation analysis (CCA) [24, 30], single neuron alignment [20], vector space alignment [3, 6, 32], and others [2, 9, 16, 18, 21, 39]. Unfortunately, these different measures tell different stories. For instance, CKA and projection weighted CCA disagree on which layers of different networks are most similar [13]. This lack of consensus is worrying, as measures are often designed according to different and incompatible intuitive desiderata, such as whether finding a one-to-one assignment, or finding few-to-one mappings, between neurons is more appropriate [20]. As a community, we need well-chosen formal criteria for evaluating metrics to avoid over-reliance on intuition and the pitfalls of too many researcher degrees of freedom [17].
In this paper we view representation dissimilarity measures as implicitly answering a classification question–whether two representations are essentially similar or importantly different. Thus, in analogy to statistical testing, we can evaluate them based on their sensitivity to important change and specificity (non-responsiveness) against unimportant changes or noise.
As a warm-up, we first consider two intuitive criteria: first, that metrics should have specificity against random initialization; and second, that they should be sensitive to deleting important principal
components (those that affect probing accuracy). Unfortunately, popular metrics fail at least one of these two tests. CCA is not specific – random initialization noise overwhelms differences between even far-apart layers in a network (Section 3.1). CKA on the other hand is not sensitive, failing to detect changes in all but the top 10 principal components of a representation (Section 3.2).
We next construct quantitative benchmarks to evaluate a dissimilarity measure’s quality. To move beyond our intuitive criteria, we need a ground truth. For this we turn to the functional behavior of the representations we are comparing, measured through probing accuracy (an indicator of syntactic information) [4, 27, 35] and out-of-distribution performance of the model they belong to [7, 23, 25]. We then score dissimilarity measures based on their rank correlation with these measured functional differences. Overall our benchmarks contain 30,480 examples and vary representations across several axes including random seed, layer depth, and low-rank approximation (Section 4)1.
Our benchmarks confirm our two intuitive observations: on subtasks that consider layer depth and principal component deletion, we measure the rank correlation with probing accuracy and find CCA and CKA lacking as the previous warm-up experiments suggested. Meanwhile, the Orthogonal Procrustes distance, a classical but often overlooked2 dissimilarity measure, balances gracefully between CKA and CCA and consistently performs well. This underscores the need for systematic evaluation, otherwise we may fall to recency bias that undervalues classical baselines.
Other subtasks measure correlation with OOD accuracy, motivated by the observation that random initialization sometimes has large effects on OOD performance [23]. We find that dissimilarity measures can sometimes predict OOD performance using only the in-distribution representations, but we also identify a challenge set on which none of the measures do statistically better than chance. We hope this challenge set will help measure and spur progress in the future.
2 Problem Setup: Metrics and Models
Our goal is to quantify the similarity between two different groups of neurons (usually layers). We do this by comparing how their activations behave on the same dataset. Thus for a layer with p1 neurons, we define A ∈ R^{p1×n}, the matrix of activations of the p1 neurons on n data points, to be that layer’s raw representation of the data. Similarly, let B ∈ R^{p2×n} be a matrix of the activations of p2 neurons on the same n data points. We center and normalize these representations before computing dissimilarity, per standard practice. Specifically, for a raw representation A we first subtract the mean value from each column, then divide by the Frobenius norm, to produce the normalized representation A*, used in all our dissimilarity computations. In this work we study dissimilarity measures d(A*, B*) that allow for quantitative comparisons of representations both within and across different networks. We colloquially refer to values of d(A*, B*) as distances, although they do not necessarily satisfy the triangle inequality required of a proper metric.
We study five dissimilarity measures: centered kernel alignment (CKA), three measures derived from canonical correlation analysis (CCA), and a measure derived from the orthogonal Procrustes problem.
Centered kernel alignment (CKA) uses an inner product to quantify similarity between two representations. It is based on the idea that one can first choose a kernel, compute the n⇥ n kernel matrix for each representation, and then measure similarity as the alignment between these two kernel matrices. The measure of similarity thus depends on one’s choice of kernel; in this work we consider Linear CKA:
$$d_{\mathrm{Linear\,CKA}}(A,B) \;=\; 1 - \frac{\|AB^\top\|_F^2}{\|AA^\top\|_F\,\|BB^\top\|_F} \qquad (1)$$
as proposed in Kornblith et al. [13]. Other choices of kernel are also valid; we focus on Linear CKA here since Kornblith et al. [13] report similar results from using either a linear or RBF kernel.
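For reference, CKA can equivalently be expressed as an alignment between centered Gram matrices, which is where a non-linear kernel such as the RBF would enter. The sketch below is our own illustration of that general form; the median-heuristic bandwidth is an assumption, not a setting from the paper.

```python
import numpy as np

def centered_gram(K):
    """Double-center an n x n Gram matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def rbf_gram(X):
    """X: (n, p) examples-by-features. RBF Gram matrix with a median-heuristic bandwidth."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    sigma2 = np.median(sq_dists[sq_dists > 0])     # illustrative bandwidth choice
    return np.exp(-sq_dists / (2.0 * sigma2))

def cka_distance_from_grams(Ka, Kb):
    """1 minus the alignment between two centered Gram matrices."""
    Ka, Kb = centered_gram(Ka), centered_gram(Kb)
    return 1.0 - np.sum(Ka * Kb) / (np.linalg.norm(Ka) * np.linalg.norm(Kb))

# Usage with representations A (p1, n) and B (p2, n):
# d = cka_distance_from_grams(rbf_gram(A.T), rbf_gram(B.T))
```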
Canonical correlation analysis (CCA) finds orthogonal bases (w_A^i, w_B^i) for two matrices such that after projection onto w_A^i, w_B^i, the projected matrices have maximally correlated rows. For 1 ≤ i ≤ p1, the i-th canonical correlation coefficient ρ_i is computed as follows:
$$\rho_i \;=\; \max_{w_A^i,\, w_B^i}\; \frac{\langle {w_A^i}^{\top} A,\; {w_B^i}^{\top} B\rangle}{\|{w_A^i}^{\top} A\|\cdot\|{w_B^i}^{\top} B\|} \qquad (2)$$
$$\text{s.t.}\quad \langle {w_A^i}^{\top} A,\, {w_A^j}^{\top} A\rangle = 0 \;\;\forall j<i, \qquad \langle {w_B^i}^{\top} B,\, {w_B^j}^{\top} B\rangle = 0 \;\;\forall j<i \qquad (3)$$
1 Code to replicate our results can be found at https://github.com/js-d/sim_metric.
2 For instance, Raghu et al. [30] and Morcos et al. [24] do not mention it, and Kornblith et al. [13] relegates it to the appendix; although Smith et al. [32] does use it to analyze word embeddings and prefers it to CCA.
To transform the vector of correlation coefficients into a scalar measure, two options considered previously [13] are the mean correlation coefficient, ρ̄_CCA, and the mean squared correlation coefficient, R²_CCA, defined as follows:
$$d_{\bar\rho\,\mathrm{CCA}}(A,B) \;=\; 1 - \frac{1}{p_1}\sum_i \rho_i, \qquad d_{R^2\,\mathrm{CCA}}(A,B) \;=\; 1 - \frac{1}{p_1}\sum_i \rho_i^2 \qquad (4)$$
To improve the robustness of CCA, Morcos et al. [24] propose projection-weighted CCA (PWCCA) as another scalar summary of CCA:
$$d_{\mathrm{PWCCA}}(A,B) \;=\; 1 - \frac{\sum_i \alpha_i \rho_i}{\sum_i \alpha_i}, \qquad \alpha_i = \sum_j |\langle h_i, a_j\rangle| \qquad (5)$$
where a_j is the j-th row of A, and h_i = {w_A^i}^⊤ A is the projection of A onto the i-th canonical direction. We find that PWCCA performs far better than ρ̄_CCA and R²_CCA, so we focus on PWCCA in the main text, but include results on the other two measures in the appendix.
The orthogonal Procrustes problem consists of finding the left-rotation of A that is closest to B in Frobenius norm, i.e. solving the optimization problem:
$$\min_R\; \|B - RA\|_F^2, \quad \text{subject to } R^\top R = I. \qquad (6)$$
The minimum is the squared orthogonal Procrustes distance between A and B, and is equal to
$$d_{\mathrm{Proc}}(A,B) \;=\; \|A\|_F^2 + \|B\|_F^2 - 2\,\|A^\top B\|_*, \qquad (7)$$
where ‖·‖_* is the nuclear norm [31]. Unlike the other metrics, the orthogonal Procrustes distance is not normalized between 0 and 1, although for normalized A*, B* it lies in [0, 2].
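The minimizer of problem (6) can itself be recovered in closed form via an SVD (the classical solution to the orthogonal Procrustes problem). The sketch below is our own illustration, assuming p1 ≤ p2 and a full-rank A.

```python
import numpy as np

def procrustes_rotation(A, B):
    """Return R (shape (p2, p1)) with orthonormal columns minimizing ||B - R A||_F^2.

    A: (p1, n), B: (p2, n). Classical SVD-based solution.
    """
    U, _, Vt = np.linalg.svd(A @ B.T, full_matrices=False)   # A B^T is (p1, p2)
    return Vt.T @ U.T

# With this R, ||B - R @ A||_F ** 2 attains the minimum of problem (6).
```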
2.1 Models we study
In this work we study representations of both text and image inputs. For text, we investigate representations computed by Transformer architectures in the BERT model family [8] on sentences from the Multigenre Natural Language Inference (MNLI) dataset [40]. We study BERT models of two sizes: BERT base, with 12 hidden layers of 768 neurons, and BERT medium, with 8 hidden layers of 512 neurons. We use the same architectures as in the open source BERT release3, but to generate diversity we study 3 variations of these models:
1. 10 BERT base models pretrained with different random seeds but not finetuned for particular tasks, released by Zhong et al. [41]4.
2. 10 BERT medium models initialized from pretrained models released by Zhong et al. [41], that we further finetuned on MNLI with 10 different finetuning seeds (100 models total).
3. 100 BERT base models that were initialized from the pretrained BERT model in [8] and finetuned on MNLI with different seeds, released by McCoy et al. [23]5.
For images, we investigate representations computed by ResNets [11] on CIFAR-10 test set images [14]. We train 100 ResNet-14 models6 from random initialization with different seeds on the CIFAR-10 training set and collect representations after each convolutional layer.
Further training details, as well as checks that our training protocols result in models with comparable performance to the original model releases, can be found in Appendix A.
3 available at https://github.com/google-research/bert
4 available at https://github.com/ruiqi-zhong/acl2021-instance-level
5 available at https://github.com/tommccoy1/hans/tree/master/berts_of_a_feather
6 from https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py
3 Warm-up: Intuitive Tests for Sensitivity and Specificity
When designing dissimilarity measures, researchers usually consider invariants that these measures should not be sensitive to [13]; for example, symmetries in neural networks imply that permuting the neurons in a fully connected layer does not change the representations learned. We take this one step further and frame dissimilarity measures as answering whether representations are essentially the same, or importantly different. We can then evaluate measures based on whether they respond to important changes (sensitivity) while ignoring changes that don’t matter (specificity).
Assessing sensitivity and specificity requires a ground truth: which representations are truly different? To answer this, we begin with the following two intuitions7: 1) neural network representations trained on the same data but from different random initializations are similar, and 2) representations lose crucial information as principal components are deleted. These motivate the following intuitive tests of specificity and sensitivity: we expect a dissimilarity measure to: 1) assign a small distance between architecturally identical neural networks that only differ in initialization seed, and 2) assign a large distance between a representation A and the representation Â after deleting important principal components (enough to affect accuracy). We will see that PWCCA fails the first test (specificity), while CKA fails the second (sensitivity).
3.1 Specificity against changes to random seed
Neural networks with the same architecture trained from different random initializations show many similarities, such as highly correlated predictions on in-distribution data points [23]. Thus it seems natural to expect a good similarity measure to assign small distances between architecturally corresponding layers of networks that are identical except for initialization seed.
To check this property, we take two BERT base models pre-trained with different random seeds and, for every layer in the first model, compute its dissimilarity to every layer in both the first and second model. We do this for 5 separate pairs of models and average the results. To pass the intuitive specificity test, a dissimilarity measure should assign relatively small distances between a layer in the first network and its corresponding layer in the second network.
Figure 1 displays the average pair-wise PWCCA, CKA, and Orthogonal Procrustes distances between layers of two networks differing only in random seed. According to PWCCA, these networks’ representations are quite dissimilar; for instance, the two layer 7 representations are further apart than they are from any other layer in the same network. PWCCA is thus not specific against random initialization, as it can outweigh even large changes in layer depth.
7 Note we will see later that these intuitions need refinement.
In contrast, CKA can separate layer 7 in a different network from layers 4 or 10 in the same network, showing better specificity to random initialization. Orthogonal Procrustes exhibits smaller but non-trivial specificity, distinguishing layers once they are 4-5 layers apart.
3.2 Sensitivity to removing principal components
Dissimilarity measures should also be sensitive to deleting important principal components of a representation.8 To quantify which components are important, we fix a layer of a pre-trained BERT base model and measure how probing accuracy degrades as principal components are deleted (starting from the smallest component), since probing accuracy is a common measure of the information captured in a representation [4]. We probe linear classification performance on the Stanford Sentiment Tree Bank task (SST-2) [33], following the experimental protocol in Tamkin et al. [34]. Figure 3b shows how probing accuracy degrades with component deletion. Ideally, dissimilarity measures should be large by the time probing accuracy has decreased substantially.
To assess whether a dissimilarity measure is large, we need a baseline to compare to. For each measure, we define a dissimilarity score to be above the detectable threshold if it is larger than the dissimilarity score between networks with different random initialization. Figure 2 plots the dissimilarity induced by deleting principal components, as well as this baseline.
For the last layer of BERT, CKA requires 97% of a representation’s principal components to be deleted for the dissimilarity to be detectable; after deleting these components, probing accuracy shown in Figure 3b drops significantly from 80% to 63% (chance is 50%). CKA thus fails to detect large accuracy drops and so fails our intuitive sensitivity test.
Other metrics perform better: Orthogonal Procrustes’s detection threshold is ~85% of the principal components, corresponding to an accuracy drop from 80% to 70%. PWCCA’s threshold is ~55% of principal components, corresponding to an accuracy drop from 80% to 75%.
PWCCA’s failure of specificity and CKA’s failure of sensitivity on these intuitive tests are worrying. However, before declaring definitive failure, in the next section, we turn to making our assessments more rigorous.
8 For a representation A, we define Â_{−k}, the result of deleting the k smallest principal components from A, as follows: we compute the singular value decomposition UΣV^⊤ = A, construct U_{−k} ∈ R^{p×(p−k)} by dropping the lowest k singular vectors of U, and finally take Â_{−k} = U_{−k}^⊤ A.
4 Rigorously Evaluating Dissimilarity Metrics
In the previous section, we saw that CKA and PWCCA each failed intuitive tests, based on sensitivity to principal components and specificity to random initialization. However, these were based primarily on intuitive, qualitative desiderata. Is there some way for us to make these tests more rigorous and quantitative?
First consider the intuitive layer specificity test (Section 3.1), which revealed that random initialization affects PWCCA more than large changes in layer depth. To justify why this is undesirable, we can turn to probing accuracy, which is strongly affected by layer depth, and only weakly affected by random seed (Figure 3a). This suggests a path forward: we can ground the layer test in the concrete differences in functionality captured by the probe.
More generally, we want metrics to be sensitive to changes that affect functionality, while ignoring those that don’t. This motivates the following general procedure, given a distance metric d and a functionality f (which assigns a real number to a given representation):
1. Collect a set S of representations that differ along one or more axes of interest (e.g. layer depth, random seed).
2. Choose a reference representation A ∈ S. When f is an accuracy metric, it is reasonable to choose A = argmax_{A∈S} f(A).9
3. For every representation B ∈ S:
   • Compute |f(A) − f(B)|
   • Compute d(A, B)
4. Report the rank correlation between |f(A) − f(B)| and d(A, B) (measured by Kendall's τ or Spearman's ρ).
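For concreteness, steps 2-4 can be sketched in a few lines of Python with scipy; the names reps (a dict mapping identifiers to representations), f, and d are placeholders we introduce for illustration, and this is our sketch rather than the paper's released benchmark code.

import numpy as np
from scipy.stats import kendalltau, spearmanr

def benchmark_metric(d, f, reps):
    # Rank correlation between a dissimilarity d and a functionality f over a set of representations.
    # reps: dict {name: representation}; d(A, B) and f(A) are callables (illustrative sketch only).
    names = list(reps)
    ref = max(names, key=lambda name: f(reps[name]))                    # step 2: highest-f reference
    gaps = np.array([abs(f(reps[ref]) - f(reps[b])) for b in names])    # step 3: |f(A) - f(B)|
    dists = np.array([d(reps[ref], reps[b]) for b in names])            # step 3: d(A, B)
    tau, _ = kendalltau(gaps, dists)                                    # step 4: rank correlations
    rho, _ = spearmanr(gaps, dists)
    return tau, rho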
The above procedure provides a quantitative measure of how well the distance metric d responds to the functionality f . For instance, in the layer specificity test, since depth affects probing accuracy strongly while random seed affects it only weakly, a dissimilarity measure with high rank correlation will be strongly responsive to layer depth and weakly responsive to seed; thus rank correlation quantitatively formalizes the test from Section 3.1.
Correlation metrics also capture properties that our intuition might miss. For instance, Figure 3a shows that some variation in random seed actually does affect accuracy, and our procedure rewards metrics that pick up on this, while the intuitive sensitivity test would penalize them.
Our procedure requires choosing a collection of models S; the crucial feature of S is that it contains models with diverse behavior according to f . Different sets S, combined with a functional difference f , can be thought of as miniature “benchmarks" that surface complementary perspectives on dissimilarity measures’ responsiveness to that functional difference. In the rest of this section, we instantiate this quantitative benchmark for several choices of f and S, starting with the layer and principal component tests from Section 3 and continuing on to several tests of OOD performance.
The overall results are summarized in Table 1. Note that for any single benchmark, we expect the correlation coefficients to be significantly lower than 1, since the metric d must capture all important axes of variation while f measures only one type of functionality. A good metric is one that has consistently high correlation across many different functional measures.
Benchmark 1: Layer depth. We turn the layer test into a benchmark for both text and images. For the text setting, we construct a set S of 120 representations by pretraining 10 BERT base models with different initialization seeds and including each of the 12 BERT layers as a representation. We separately consider two functionalities f: probing accuracy on QNLI [37] and SST-2 [33]. To compute the rank correlation, we take the reference representation A to be the representation with highest probing accuracy. We compute the Kendall's τ and Spearman's ρ rank correlations between the dissimilarities and the probing accuracy differences and report the results in Table 1.
9Choosing the highest accuracy model as the reference makes it more likely that as accuracy changes, models are on average becoming more dissimilar. A low accuracy model may be on the “periphery” of model space, where it is dissimilar to models with high accuracy, but potentially even more dissimilar to other low accuracy models that make different mistakes.
For the image setting, we similarly construct a set S of 70 representations by training 5 ResNet-14 models with different initialization seeds and including each of the 14 layers’ representations. We also consider two functionalities f for these vision models: probing accuracy on CIFAR-100 [14] and on SVHN [26], and compute rank correlations in the same way.
We find that PWCCA has lower rank correlations compared to CKA and Procrustes for both language probing tasks. This corroborates the intuitive specificity test (Section 3.1), suggesting that PWCCA registers too large of a dissimilarity across random initializations. For the vision tasks, CKA and Procrustes achieve similar rank correlations, while PWCCA cannot be computed because n < d.
Benchmark 2: Principal component (PC) deletion. We next quantify the PC deletion test from Section 3.2, by constructing a set S of representations that vary in both random initialization and fraction of principal components deleted. We pretrain 10 BERT base models with different initializations, and for each pretrained model we obtain 14 different representations by deleting that representation's k smallest principal components, with k ∈ {0, 100, 200, 300, 400, 500, 600, 650, 700, 725, 750, 758, 763, 767}. Thus S has 10 × 14 = 140 elements. The representations themselves are the layer-ℓ activations, for ℓ ∈ {8, 9, . . . , 12},10 so there are 5 different choices of S. We use SST-2 probing accuracy as the functionality of interest f, and select the reference representation A as the element in S with highest accuracy. Rank correlation
10Earlier layers have near-chance accuracy on probing tasks, so we ignore them.
results are consistent across the 5 choices of S (Appendix C), so we report the average as a summary statistic in Table 1.
We find that PWCCA has the highest rank correlation between dissimilarity and probing accuracy, followed by Procrustes, and distantly followed by CKA. This corroborates the intuitive observations from Section 3.2 that CKA is not sensitive to principal component deletion.
4.1 Investigating variation in OOD performance across random seeds
So far our benchmarks have been based on probing accuracy, which only measures in-distribution behavior (the train and test set of the probe are typically i.i.d.). In addition, the BERT models were always pretrained on language modeling but not finetuned for classification. To add diversity to our benchmarks, we next consider the out-of-distribution performance of language and vision models trained for classification tasks.
Benchmark 3: Changing fine-tuning seeds. McCoy et al. [23] show that a single pretrained BERT base model finetuned on MNLI with different random initializations will produce models with similar in-distribution performance, but widely variable performance on out-of-distribution data. We thus create a benchmark S out of McCoy et al.’s 100 released fine-tuned models, using OOD accuracy on the “Lexical Heuristic (Non-entailment)" subset of the HANS dataset [22] as our functionality f . This functionality is associated with the entire model, rather than an individual layer (in contrast to the probing functionality), but we consider one layer at a time to measure whether dissimilarities
between representations at that layer correlate with f . This allows us to also localize whether certain layers are more predictive of f .
We construct 12 different S (one for each of the 12 layers of BERT base), taking the reference representation A to be that of the highest accuracy model according to f . As before, we report each dissimilarity measure’s rank correlation with f in Table 1, averaged over the 12 runs.
All three dissimilarity measures correlate with OOD accuracy, with Orthogonal Procrustes and PWCCA being more correlated than CKA. Since the representations in our benchmarks were computed on in-distribution MNLI data, this has the interesting implication that dissimilarity measures can detect OOD differences without access to OOD data. It also implies that random initialization leads to meaningful functional differences that are picked up by these measures, especially Procrustes and PWCCA. Contrast this with our intuitive specificity test in Section 3.1, where all sensitivity to random initialization was seen as a shortcoming. Our more quantitative benchmark here suggests that some of that sensitivity tracks true functionality.
To check that the differences in rank correlation for Procrustes, PWCCA, and CKA are statistically significant, we compute bootstrap estimates of their 95% confidence intervals. With 2000 bootstrapped samples, we find statistically significant differences between all pairs of measures for most choices of layer depth S, so we conclude PWCCA > Orthogonal Procrustes > CKA (the full results are in Appendix E). We do not apply this procedure for the previous two benchmarks, because the different models have correlated randomness and so any p-value based on independence assumptions would be invalid.
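As an illustrative sketch of this bootstrap (with hypothetical variable names, not the authors' code): gaps holds |f(A) − f(B)| for the models as a numpy array, and dists_1, dists_2 hold two measures' distances to the reference at a fixed layer.

import numpy as np
from scipy.stats import kendalltau

def bootstrap_tau_difference(gaps, dists_1, dists_2, n_boot=2000, seed=0):
    # 95% bootstrap confidence interval for the difference in Kendall's tau
    # between two dissimilarity measures, resampling models with replacement.
    rng = np.random.default_rng(seed)
    n = len(gaps)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        t1, _ = kendalltau(gaps[idx], dists_1[idx])
        t2, _ = kendalltau(gaps[idx], dists_2[idx])
        diffs.append(t1 - t2)
    low, high = np.percentile(diffs, [2.5, 97.5])
    return low, high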
Benchmark 4: Challenge sets: Changing pretraining and fine-tuning seeds. We also construct benchmarks using models trained from scratch with different random seeds (for language, this is pretraining and fine-tuning, and for vision, this is standard training). For language, we construct benchmarks from a collection of 100 BERT medium models, trained with all combinations of 10 pretraining and 10 fine-tuning seeds. The models are fine-tuned on MNLI, and we consider two different functionalities of interest f : accuracy on the OOD Antonymy stress test and on the OOD Numerical stress test [25], which both show significant variation in accuracy across models (see Figure 3d). We obtain 8 different sets S (one for each of the 8 layer depths in BERT medium), again taking A to be the representation of the highest-accuracy model according to f . Rank correlations for each dissimilarity measure are averaged over the 8 runs and reported in Table 1.
For vision, we construct benchmarks from a collection of 100 ResNet-14 models, trained with different random seeds on CIFAR-10. We consider 19 different functionalities of interest—the 19 types of corruptions in the CIFAR-10C dataset [12], which show significant variation in accuracy across models (see Figure 3c). We obtain 14 different sets S (one for each of the 14 layers), taking A to be the representation of the highest-accuracy model according to f. Rank correlations for each dissimilarity measure are averaged over the 14 runs and over the 19 corruption types and reported in Table 1. Results for each of the 19 corruptions individually can be found in Appendix D.
None of the dissimilarity measures show a large rank correlation for either the language or vision tasks, and for the Numerical stress test, at most layers, the associated p-values (assuming independence) are non-significant at the 0.05 level (see Appendix C).11 Thus we conclude that all measures fail to be sensitive to OOD accuracy in these settings. One reason for this could be that there is less variation in the OOD accuracies compared to the previous experiment with the HANS dataset (there, accuracies varied from 0 to nearly 60%). Another reason could be that it is harder to correctly account for both pretraining and fine-tuning variation at the same time. Either way, we hope that future dissimilarity measures can improve upon these results, and we present this benchmark as a challenge task to motivate progress.
5 Discussion
In this work we proposed a quantitative measure for evaluating similarity metrics, based on the rank correlation with functional behavior. Using this, we generated tasks motivated by sensitivity to
11See Appendix C for p-values as produced by scikit-learn. Strictly speaking, the p-values are invalid because they assume independence, but the pretraining seed induces correlations. However, correctly accounting for these would tend to make the p-values larger, thus preserving our conclusion of non-significance.
deleting important directions, specificity to random initialization, and sensitivity to out-of-distribution performance. Popular existing metrics such as CKA and CCA often performed poorly on these tasks, sometimes in striking ways. Meanwhile, the classical Orthogonal Procrustes transform attained consistently good performance.
Given the success of Orthogonal Procrustes, it is worth reflecting on how it differs from the other metrics and why it might perform well. To do so, we consider a simplified case where A and B have the same singular vectors but different singular values. Thus without loss of generality A = Λ1 and B = Λ2, where the Λi are both diagonal. In this case, the Orthogonal Procrustes distance reduces to ‖Λ1 − Λ2‖²_F, or the sum of the squared distances between the singular values. We will see that both CCA and CKA reduce to less reasonable formulae in this case.
Orthogonal Procrustes vs. CCA. All three metrics derived from CCA assign zero distance even when the (non-zero) singular values are arbitrarily different. This is because CCA correlation coefficients are invariant to all invertible linear transformations. This invariance property may help explain why CCA metrics generally find layers within the same network to be much more similar than networks trained with different randomness. Random initialization introduces noise, particularly in unimportant principal components, while representations within the same network more easily preserve these components, and CCA may place too much weight on their associated correlation coefficients.
Orthogonal Procrustes vs. CKA. In contrast to the squared distance of Orthogonal Procrustes, CKA actually reduces to a quartic function based on the dot products between the squared entries of Λ1 and Λ2. As a consequence, CKA is dominated by representations' largest singular values, leaving it insensitive to meaningful differences in smaller singular values as illustrated in Figure 2. This lack of sensitivity to moderate-sized differences may help explain why CKA fails to track out-of-distribution error effectively.
In addition to helping understand similarity measures, our benchmarks pinpoint directions for improvement. No method was sensitive to accuracy on the Numerical stress test in our challenge set, possibly due to a lower signal-to-noise ratio. Since Orthogonal Procrustes performed well on most of our tasks, it could be a promising foundation for a new measure, and recent work shows how to regularize Orthogonal Procrustes to handle high noise [28]. Perhaps similar techniques could be adapted here.
An alternative to our benchmarking approach is to directly define two representations’ dissimilarity as their difference in a functional behavior of interest. Feng et al. [9] take this approach, defining dissimilarity as difference in accuracy on a handful of probing tasks. One drawback of this approach is that a small set of probes may not capture all the differences in representations, so it is useful to base dissimilarity measures on representations’ intrinsic properties. Intrinsically defined dissimilarities also have the potential to highlight new functional behaviors, as we found that representations with similar in-distribution probing accuracy often have highly variable OOD accuracy.
A limitation of our work is that we only consider a handful of model variations and functional behaviors, and restricting our attention to these settings could overlook other important considerations. To address this, we envision a paradigm in which a rich tapestry of benchmarks are used to ground and validate neural network interpretations. Other axes of variation in models could include training on more or fewer examples, training on shuffled labels vs. real labels, training from specifically chosen initializations [10], and using different architectures. Other functional behaviors to examine could include modularity and meta-learning capabilities. Benchmarks could also be applied to other interpretability tools beyond dissimilarity. For example, sensitivity to deleting principal components could provide an additional sanity check for saliency maps and other visualization tools [1].
More broadly, many interpretability tools are designed as audits of models, although it is often unclear what characteristics of the models are consistently audited. We position this work as a counter-audit, where by collecting models that differ in functional behavior, we can assess whether the interpretability tools CKA, PWCCA, etc., accurately reflect the behavioral differences. Many other types of counter-audits may be designed to assess other interpretability tools. For example, models that have backdoors built into them to misclassify certain inputs provide counter-audits for interpretability tools that explain model predictions–these explanations should reflect any backdoors present [5, 15, 19, 38]. We are hopeful that more comprehensive checks on interpretability tools will provide deeper understanding of neural networks, and more reliable models.
Acknowledgments and Disclosure of Funding
Thanks to Ruiqi Zhong for helpful comments and assistance in finetuning models, and thanks to Daniel Rothchild and our anonymous reviewers for helpful discussion. FD is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1752814 and the Open Philanthropy Project AI Fellows Program. JSD is supported by the NSF Division of Mathematical Sciences Grant No. 2031985. | 1. What are the main contributions and novel aspects introduced by the paper regarding representational similarity metrics?
2. How does the reviewer assess the quality and technical soundness of the submission?
3. What are the strengths and weaknesses of the proposed methods and analyses according to the reviewer?
4. Do you have any questions or suggestions regarding the clarity and organization of the submission?
5. How does the reviewer evaluate the significance and impact of the work in the field? | Summary Of The Paper
Review | Summary Of The Paper
The work is motivated by a desire to better understand representational similarity metrics and ensure they obey certain desirable properties. Specifically, representational similarity measures should be sensitive to changes that affect functional behavior, and specific (invariant) to changes that do not affect functional behavior. The authors operationalize these desiderata as 1) sensitivity to deletions of principal components in activation matrices, and 2) invariance to variations between models that arise from different random seeds. They find that CCA-based methods are less specific than CKA and Orthogonal Procrustes Analysis (OPA), while CKA is less sensitive than CCA and OPA. The authors then establish “benchmarks” by examining how certain variables and interventions (e.g. deleting principal components, changes in random seed, fine-tuning) affect the correlation between representational similarity metrics and performance metrics (mainly linguistic probes). The authors find that the benchmarks generally recapitulate the initial findings re: specificity and sensitivity. OPA seems to fall in between CKA and CCA with respect to the trade-off between sensitivity and specificity, leading the authors to encourage its adoption.
Review
Excellent response. Score updated from 3 to 7. Details below
Originality: Are the tasks or methods new? Is the work a novel combination of well-known techniques? (This can be valuable!) Is it clear how this work differs from previous contributions? Is related work adequately cited?
It takes a set of established representational similarity methods, prescribes desirable operating principles, and attempts to understand and benchmark them with regard to the principles. The clear articulation of desirable operating principles is somewhat novel, and the principal component deletion and benchmarking components of this work are novel combinations of known techniques. The effects of random seeds/initialization on representational similarity metrics have already been examined by Kornblith et al. (2019), though they did not examine OPA. Otherwise, related work appears adequately cited.
Quality: Is the submission technically sound? Are claims well supported (e.g., by theoretical analysis or experimental results)? Are the methods used appropriate? Is this a complete piece of work or work in progress? Are the authors careful and honest about evaluating both the strengths and weaknesses of their work?
In its current form, I think the scope of the paper is limited. The analyses are limited to transformer models and linguistic probing tasks. Extending the analyses to CNNs and even vision transformers (and multiple datasets) would allow the authors to make stronger claims about generality and increase the relevance of this work. This could be accomplished with layer-wise probing tasks, similarly to the language models. Comparisons should also be made between different model architectures.
More generally, I am curious about the following issue: The authors state that “Metrics should have specificity against random initialization” (L41-42). If two different initializations drawn from the same distribution result in two networks with very different behavior (i.e. different outputs for the same inputs), do we want a representational similarity metric to detect this? My assumption is yes, but perhaps I’m mistaken. This should be discussed.
The authors state “Neural network representations trained on the same data but from different random initializations are similar” (L125-126). Is this axiomatic, or a claim that representational similarity measures are intended to test? If it’s the latter, can you expand on what Kornblith et al. (2019) report?
I believe it’s implicit that “similarity” between networks always means representational similarity, but I think it’s important that you explicitly dissociate representational vs. functional similarity. If two networks have very different representations but very similar outputs, would these networks be “similar”? By the definition of Sundararajan et al. (2018; Axiomatic Attribution for Deep Networks), these networks are functionally equivalent. To this end, comparing distributions (of outputs/classifier layers) could be more informative than just comparing task accuracies.
Doing more probe-representation benchmarks seems like an easy way to strengthen your work, as probing tasks are cheap to run and there are a lot of them.
Clarity: Is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improving its clarity.) Does it adequately inform the reader? (Note that a superbly written paper provides enough information for an expert reader to reproduce its results.)
The submission is sufficiently clear and organized, but I have a few questions and suggestions:
Looking at Table 1, CKA seems to be most consistent, yet the authors state “the classical Orthogonal Procrustes transform attained consistently good performance” (ln 289-290). Can these opinions be reconciled? Perhaps by quantification?
The results in Lines 169-172 should be presented in a figure, e.g. plotting PC-deletion vs. accuracy curves.
More information should be provided about the rows in Table 1. For example, the OOD conditions should be explicitly labeled as such.
Significance: Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach? I think the work is well-motivated.
The authors state “As a community, we need well-chosen formal criteria for evaluating metrics to avoid over-reliance on intuition and the pitfalls of too many researcher degrees of freedom” (L30-31), and I strongly agree with this. But as stated in the “Quality” section of this review, in its current form the work’s primary (and severe) limitation is that it only uses transformer models and language tasks. However, I am optimistic about the potential relevance and impact of this work if my concerns are addressed. |
NIPS | Title
Grounding Representation Similarity Through Statistical Testing
Abstract
To understand neural network behavior, recent works quantitatively compare different networks’ learned representations using canonical correlation analysis (CCA), centered kernel alignment (CKA), and other dissimilarity measures. Unfortunately, these widely used measures often disagree on fundamental observations, such as whether deep networks differing only in random initialization learn similar representations. These disagreements raise the question: which, if any, of these dissimilarity measures should we believe? We provide a framework to ground this question through a concrete test: measures should have sensitivity to changes that affect functional behavior, and specificity against changes that do not. We quantify this through a variety of functional behaviors including probing accuracy and robustness to distribution shift, and examine changes such as varying random initialization and deleting principal components. We find that current metrics exhibit different weaknesses, note that a classical baseline performs surprisingly well, and highlight settings where all metrics appear to fail, thus providing a challenge set for further improvement.
1 Introduction
Understanding neural networks is not only scientifically interesting, but critical for applying deep networks in high-stakes situations. Recent work has highlighted the value of analyzing not just the final outputs of a network, but also its intermediate representations [20, 29]. This has motivated the development of representation similarity measures, which can provide insight into how different training schemes, architectures, and datasets affect networks’ learned representations.
A number of similarity measures have been proposed, including centered kernel alignment (CKA) [13], ones based on canonical correlation analysis (CCA) [24, 30], single neuron alignment [20], vector space alignment [3, 6, 32], and others [2, 9, 16, 18, 21, 39]. Unfortunately, these different measures tell different stories. For instance, CKA and projection weighted CCA disagree on which layers of different networks are most similar [13]. This lack of consensus is worrying, as measures are often designed according to different and incompatible intuitive desiderata, such as whether finding a one-to-one assignment, or finding few-to-one mappings, between neurons is more appropriate [20]. As a community, we need well-chosen formal criteria for evaluating metrics to avoid over-reliance on intuition and the pitfalls of too many researcher degrees of freedom [17].
In this paper we view representation dissimilarity measures as implicitly answering a classification question–whether two representations are essentially similar or importantly different. Thus, in analogy to statistical testing, we can evaluate them based on their sensitivity to important change and specificity (non-responsiveness) against unimportant changes or noise.
As a warm-up, we first initially consider two intuitive criteria: first, that metrics should have specificity against random initialization; and second, that they should be sensitive to deleting important principal
components (those that affect probing accuracy). Unfortunately, popular metrics fail at least one of these two tests. CCA is not specific – random initialization noise overwhelms differences between even far-apart layers in a network (Section 3.1). CKA on the other hand is not sensitive, failing to detect changes in all but the top 10 principal components of a representation (Section 3.2).
We next construct quantitative benchmarks to evaluate a dissimilarity measure’s quality. To move beyond our intuitive criteria, we need a ground truth. For this we turn to the functional behavior of the representations we are comparing, measured through probing accuracy (an indicator of syntactic information) [4, 27, 35] and out-of-distribution performance of the model they belong to [7, 23, 25]. We then score dissimilarity measures based on their rank correlation with these measured functional differences. Overall our benchmarks contain 30,480 examples and vary representations across several axes including random seed, layer depth, and low-rank approximation (Section 4)1.
Our benchmarks confirm our two intuitive observations: on subtasks that consider layer depth and principal component deletion, we measure the rank correlation with probing accuracy and find CCA and CKA lacking as the previous warm-up experiments suggested. Meanwhile, the Orthogonal Procrustes distance, a classical but often overlooked2 dissimilarity measure, balances gracefully between CKA and CCA and consistently performs well. This underscores the need for systematic evaluation, otherwise we may fall to recency bias that undervalues classical baselines.
Other subtasks measure correlation with OOD accuracy, motivated by the observation that random initialization sometimes has large effects on OOD performance [23]. We find that dissimilarity measures can sometimes predict OOD performance using only the in-distribution representations, but we also identify a challenge set on which none of the measures do statistically better than chance. We hope this challenge set will help measure and spur progress in the future.
2 Problem Setup: Metrics and Models
Our goal is to quantify the similarity between two different groups of neurons (usually layers). We do this by comparing how their activations behave on the same dataset. Thus for a layer with p1 neurons, we define A ∈ R^{p1×n}, the matrix of activations of the p1 neurons on n data points, to be that layer's raw representation of the data. Similarly, let B ∈ R^{p2×n} be a matrix of the activations of p2 neurons on the same n data points. We center and normalize these representations before computing dissimilarity, per standard practice. Specifically, for a raw representation A we first subtract the mean value from each column, then divide by the Frobenius norm, to produce the normalized representation A*, used in all our dissimilarity computations. In this work we study dissimilarity measures d(A*, B*) that allow for quantitative comparisons of representations both within and across different networks. We colloquially refer to values of d(A*, B*) as distances, although they do not necessarily satisfy the triangle inequality required of a proper metric.
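In code, this preprocessing amounts to the following small numpy sketch (our illustration of the description above, not the released implementation):

import numpy as np

def normalize_representation(A):
    # Subtract the mean value from each column, then divide by the Frobenius norm.
    A_centered = A - A.mean(axis=0, keepdims=True)
    return A_centered / np.linalg.norm(A_centered)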
We study five dissimilarity measures: centered kernel alignment (CKA), three measures derived from canonical correlation analysis (CCA), and a measure derived from the orthogonal Procrustes problem.
Centered kernel alignment (CKA) uses an inner product to quantify similarity between two representations. It is based on the idea that one can first choose a kernel, compute the n × n kernel matrix for each representation, and then measure similarity as the alignment between these two kernel matrices. The measure of similarity thus depends on one's choice of kernel; in this work we consider Linear CKA:
d_{\mathrm{Linear\,CKA}}(A,B) = 1 - \frac{\|AB^\top\|_F^2}{\|AA^\top\|_F \, \|BB^\top\|_F} \qquad (1)
as proposed in Kornblith et al. [13]. Other choices of kernel are also valid; we focus on Linear CKA here since Kornblith et al. [13] report similar results from using either a linear or RBF kernel.
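As an illustration, Eq. (1) can be computed directly from the normalized representations; the numpy sketch below is ours, not the authors' released implementation.

import numpy as np

def linear_cka_distance(A, B):
    # Linear CKA distance of Eq. (1) for representations A (p1 x n) and B (p2 x n).
    cross = np.linalg.norm(A @ B.T, 'fro') ** 2      # ||A B^T||_F^2
    norm_a = np.linalg.norm(A @ A.T, 'fro')          # ||A A^T||_F
    norm_b = np.linalg.norm(B @ B.T, 'fro')          # ||B B^T||_F
    return 1.0 - cross / (norm_a * norm_b)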
Canonical correlation analysis (CCA) finds orthogonal bases (w_A^i, w_B^i) for two matrices such that, after projection onto w_A^i, w_B^i, the projected matrices have maximally correlated rows. For 1 ≤ i ≤ p1, the ith canonical correlation coefficient ρ_i is computed as follows:
\rho_i = \max_{w_A^i, w_B^i} \frac{\langle w_A^{i\top} A, \; w_B^{i\top} B \rangle}{\|w_A^{i\top} A\| \cdot \|w_B^{i\top} B\|} \qquad (2)
\text{s.t.} \quad \langle w_A^{i\top} A, \, w_A^{j\top} A \rangle = 0 \;\; \forall j < i, \qquad \langle w_B^{i\top} B, \, w_B^{j\top} B \rangle = 0 \;\; \forall j < i \qquad (3)
To transform the vector of correlation coefficients into a scalar measure, two options considered previously [13] are the mean correlation coefficient, ρ̄_CCA, and the mean squared correlation coefficient, R²_CCA, defined as follows:
d_{\bar{\rho}\,\mathrm{CCA}}(A,B) = 1 - \frac{1}{p_1} \sum_i \rho_i, \qquad d_{R^2 \mathrm{CCA}}(A,B) = 1 - \frac{1}{p_1} \sum_i \rho_i^2 \qquad (4)
To improve the robustness of CCA, Morcos et al. [24] propose projection-weighted CCA (PWCCA) as another scalar summary of CCA:
d_{\mathrm{PWCCA}}(A,B) = 1 - \frac{\sum_i \alpha_i \rho_i}{\sum_i \alpha_i}, \qquad \alpha_i = \sum_j |\langle h_i, a_j \rangle| \qquad (5)
where a_j is the jth row of A, and h_i = w_A^{i\top} A is the projection of A onto the ith canonical direction. We find that PWCCA performs far better than ρ̄_CCA and R²_CCA, so we focus on PWCCA in the main text, but include results on the other two measures in the appendix.
1Code to replicate our results can be found at https://github.com/js-d/sim_metric.
2For instance, Raghu et al. [30] and Morcos et al. [24] do not mention it, and Kornblith et al. [13] relegates it to the appendix; although Smith et al. [32] does use it to analyze word embeddings and prefers it to CCA.
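As a rough illustration of Eq. (5), the numpy sketch below computes the canonical correlations via a QR/SVD route; it assumes n ≥ p1, p2 and full-rank, pre-normalized inputs, and it omits the preprocessing (e.g. dimensionality reduction) often used in practice, so it should be read as our sketch rather than the reference implementation.

import numpy as np

def pwcca_distance(A, B):
    # PWCCA distance of Eq. (5) for A (p1 x n) and B (p2 x n); assumes n >= p1, p2.
    Qa, _ = np.linalg.qr(A.T)                # orthonormal basis for the row space of A
    Qb, _ = np.linalg.qr(B.T)
    U, rho, _ = np.linalg.svd(Qa.T @ Qb)     # canonical correlations rho_i (descending)
    k = min(A.shape[0], B.shape[0])
    rho = rho[:k]
    H = Qa @ U[:, :k]                        # canonical variates h_i of A, one per column
    alpha = np.abs(A @ H).sum(axis=0)        # alpha_i = sum_j |<h_i, a_j>|
    return 1.0 - float((alpha * rho).sum() / alpha.sum())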
The orthogonal Procrustes problem consists of finding the left-rotation of A that is closest to B in Frobenius norm, i.e. solving the optimization problem:
\min_R \; \|B - RA\|_F^2, \quad \text{subject to } R^\top R = I. \qquad (6)
The minimum is the squared orthogonal Procrustes distance between A and B, and is equal to
d_{\mathrm{Proc}}(A,B) = \|A\|_F^2 + \|B\|_F^2 - 2\,\|AB^\top\|_*, \qquad (7)
where ‖·‖_* is the nuclear norm [31]. Unlike the other metrics, the orthogonal Procrustes distance is not normalized between 0 and 1, although for normalized A*, B* it lies in [0, 2].
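Eq. (7) translates directly into code; the short numpy sketch below (ours, not the released implementation) assumes A and B are already centered and normalized as described above.

import numpy as np

def procrustes_distance(A, B):
    # Squared orthogonal Procrustes distance of Eq. (7) for A (p1 x n) and B (p2 x n).
    nuclear = np.linalg.norm(A @ B.T, ord='nuc')   # nuclear norm of A B^T
    return np.linalg.norm(A, 'fro') ** 2 + np.linalg.norm(B, 'fro') ** 2 - 2.0 * nuclear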
2.1 Models we study
In this work we study representations of both text and image inputs. For text, we investigate representations computed by Transformer architectures in the BERT model family [8] on sentences from the Multigenre Natural Language Inference (MNLI) dataset [40]. We study BERT models of two sizes: BERT base, with 12 hidden layers of 768 neurons, and BERT medium, with 8 hidden layers of 512 neurons. We use the same architectures as in the open source BERT release3, but to generate diversity we study 3 variations of these models:
1. 10 BERT base models pretrained with different random seeds but not finetuned for particular tasks, released by Zhong et al. [41]4.
2. 10 BERT medium models initialized from pretrained models released by Zhong et al. [41], that we further finetuned on MNLI with 10 different finetuning seeds (100 models total).
3. 100 BERT base models that were initialized from the pretrained BERT model in [8] and finetuned on MNLI with different seeds, released by McCoy et al. [23]5.
For images, we investigate representations computed by ResNets [11] on CIFAR-10 test set images [14]. We train 100 ResNet-14 models6 from random initialization with different seeds on the CIFAR-10 training set and collect representations after each convolutional layer.
Further training details, as well as checks that our training protocols result in models with comparable performance to the original model releases, can be found in Appendix A.
3available at https://github.com/google-research/bert
4available at https://github.com/ruiqi-zhong/acl2021-instance-level
5available at https://github.com/tommccoy1/hans/tree/master/berts_of_a_feather
6from https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py
3 Warm-up: Intuitive Tests for Sensitivity and Specificity
When designing dissimilarity measures, researchers usually consider invariants that these measures should not be sensitive to [13]; for example, symmetries in neural networks imply that permuting the neurons in a fully connected layer does not change the representations learned. We take this one step further and frame dissimilarity measures as answering whether representations are essentially the same, or importantly different. We can then evaluate measures based on whether they respond to important changes (sensitivity) while ignoring changes that don’t matter (specificity).
Assessing sensitivity and specificity requires a ground truth–which representations are truly different? To answer this, we begin with the following two intuitions7: 1) neural network representations trained on the same data but from different random initializations are similar, and 2) representations lose crucial information as principal components are deleted. These motivate the following intuitive tests of specificity and sensitivity: we expect a dissimilarity measure to: 1) assign a small distance between architecturally identical neural networks that only differ in initialization seed, and 2) assign a large distance between a representation A and the representation Â obtained after deleting important principal components (enough to affect accuracy). We will see that PWCCA fails the first test (specificity), while CKA fails the second (sensitivity).
3.1 Specificity against changes to random seed
Neural networks with the same architecture trained from different random initializations show many similarities, such as highly correlated predictions on in-distribution data points [23]. Thus it seems natural to expect a good similarity measure to assign small distances between architecturally corresponding layers of networks that are identical except for initialization seed.
To check this property, we take two BERT base models pre-trained with different random seeds and, for every layer in the first model, compute its dissimilarity to every layer in both the first and second model. We do this for 5 separate pairs of models and average the results. To pass the intuitive specificity test, a dissimilarity measure should assign relatively small distances between a layer in the first network and its corresponding layer in the second network.
Figure 1 displays the average pair-wise PWCCA, CKA, and Orthogonal Procrustes distances between layers of two networks differing only in random seed. According to PWCCA, these networks’ representations are quite dissimilar; for instance, the two layer 7 representations are further apart
7Note we will see later that these intuitions need refinement.
than they are from any other layer in the same network. PWCCA is thus not specific against random initialization, as the effect of random initialization can outweigh even large changes in layer depth.
In contrast, CKA can separate layer 7 in a different network from layers 4 or 10 in the same network, showing better specificity to random initialization. Orthogonal Procrustes exhibits smaller but non-trivial specificity, distinguishing layers once they are 4-5 layers apart.
3.2 Sensitivity to removing principal components
Dissimilarity measures should also be sensitive to deleting important principal components of a representation.8 To quantify which components are important, we fix a layer of a pre-trained BERT base model and measure how probing accuracy degrades as principal components are deleted (starting from the smallest component), since probing accuracy is a common measure of the information captured in a representation [4]. We probe linear classification performance on the Stanford Sentiment Tree Bank task (SST-2) [33], following the experimental protocol in Tamkin et al. [34]. Figure 3b shows how probing accuracy degrades with component deletion. Ideally, dissimilarity measures should be large by the time probing accuracy has decreased substantially.
To assess whether a dissimilarity measure is large, we need a baseline to compare to. For each measure, we define a dissimilarity score to be above the detectable threshold if it is larger than the dissimilarity score between networks with different random initialization. Figure 2 plots the dissimilarity induced by deleting principal components, as well as this baseline.
For the last layer of BERT, CKA requires 97% of a representation’s principal components to be deleted for the dissimilarity to be detectable; after deleting these components, probing accuracy shown in Figure 3b drops significantly from 80% to 63% (chance is 50%). CKA thus fails to detect large accuracy drops and so fails our intuitive sensitivity test.
Other metrics perform better: Orthogonal Procrustes's detection threshold is ~85% of the principal components, corresponding to an accuracy drop from 80% to 70%. PWCCA's threshold is ~55% of principal components, corresponding to an accuracy drop from 80% to 75%.
PWCCA’s failure of specificity and CKA’s failure of sensitivity on these intuitive tests are worrying. However, before declaring definitive failure, in the next section, we turn to making our assessments more rigorous.
8For a representation A, we define Â_{−k}, the result of deleting the k smallest principal components from A, as follows: we compute the singular value decomposition UΣV⊤ = A, construct U_{−k} ∈ R^{p×(p−k)} by dropping the lowest k singular vectors of U, and finally take Â_{−k} = U_{−k}⊤ A.
4 Rigorously Evaluating Dissimilarity Metrics
In the previous section, we saw that CKA and PWCCA each failed intuitive tests, based on sensitivity to principal components and specificity to random initialization. However, these were based primarily on intuitive, qualitative desiderata. Is there some way for us to make these tests more rigorous and quantitative?
First consider the intuitive layer specificity test (Section 3.1), which revealed that random initialization affects PWCCA more than large changes in layer depth. To justify why this is undesirable, we can turn to probing accuracy, which is strongly affected by layer depth, and only weakly affected by random seed (Figure 3a). This suggests a path forward: we can ground the layer test in the concrete differences in functionality captured by the probe.
More generally, we want metrics to be sensitive to changes that affect functionality, while ignoring those that don’t. This motivates the following general procedure, given a distance metric d and a functionality f (which assigns a real number to a given representation):
1. Collect a set S of representations that differ along one or more axes of interest (e.g. layer depth, random seed).
2. Choose a reference representation A ∈ S. When f is an accuracy metric, it is reasonable to choose A = argmax_{A∈S} f(A).9
3. For every representation B ∈ S:
   • Compute |f(A) − f(B)|
   • Compute d(A, B)
4. Report the rank correlation between |f(A) − f(B)| and d(A, B) (measured by Kendall's τ or Spearman's ρ).
The above procedure provides a quantitative measure of how well the distance metric d responds to the functionality f . For instance, in the layer specificity test, since depth affects probing accuracy strongly while random seed affects it only weakly, a dissimilarity measure with high rank correlation will be strongly responsive to layer depth and weakly responsive to seed; thus rank correlation quantitatively formalizes the test from Section 3.1.
Correlation metrics also capture properties that our intuition might miss. For instance, Figure 3a shows that some variation in random seed actually does affect accuracy, and our procedure rewards metrics that pick up on this, while the intuitive sensitivity test would penalize them.
Our procedure requires choosing a collection of models S; the crucial feature of S is that it contains models with diverse behavior according to f . Different sets S, combined with a functional difference f , can be thought of as miniature “benchmarks" that surface complementary perspectives on dissimilarity measures’ responsiveness to that functional difference. In the rest of this section, we instantiate this quantitative benchmark for several choices of f and S, starting with the layer and principal component tests from Section 3 and continuing on to several tests of OOD performance.
The overall results are summarized in Table 1. Note that for any single benchmark, we expect the correlation coefficients to be significantly lower than 1, since the metric d must capture all important axes of variation while f measures only one type of functionality. A good metric is one that has consistently high correlation across many different functional measures.
Benchmark 1: Layer depth. We turn the layer test into a benchmark for both text and images. For the text setting, we construct a set S of 120 representations by pretraining 10 BERT base models with different initialization seeds and including each of the 12 BERT layers as a representation. We separately consider two functionalities f: probing accuracy on QNLI [37] and SST-2 [33]. To compute the rank correlation, we take the reference representation A to be the representation with highest probing accuracy. We compute the Kendall's τ and Spearman's ρ rank correlations between the dissimilarities and the probing accuracy differences and report the results in Table 1.
9Choosing the highest accuracy model as the reference makes it more likely that as accuracy changes, models are on average becoming more dissimilar. A low accuracy model may be on the “periphery” of model space, where it is dissimilar to models with high accuracy, but potentially even more dissimilar to other low accuracy models that make different mistakes.
For the image setting, we similarly construct a set S of 70 representations by training 5 ResNet-14 models with different initialization seeds and including each of the 14 layers’ representations. We also consider two functionalities f for these vision models: probing accuracy on CIFAR-100 [14] and on SVHN [26], and compute rank correlations in the same way.
We find that PWCCA has lower rank correlations compared to CKA and Procrustes for both language probing tasks. This corroborates the intuitive specificity test (Section 3.1), suggesting that PWCCA registers too large of a dissimilarity across random initializations. For the vision tasks, CKA and Procrustes achieve similar rank correlations, while PWCCA cannot be computed because n < d.
Benchmark 2: Principal component (PC) deletion. We next quantify the PC deletion test from Section 3.2, by constructing a set S of representations that vary in both random initialization and fraction of principal components deleted. We pretrain 10 BERT base models with different initializations, and for each pretrained model we obtain 14 different representations by deleting that representation's k smallest principal components, with k ∈ {0, 100, 200, 300, 400, 500, 600, 650, 700, 725, 750, 758, 763, 767}. Thus S has 10 × 14 = 140 elements. The representations themselves are the layer-ℓ activations, for ℓ ∈ {8, 9, . . . , 12},10 so there are 5 different choices of S. We use SST-2 probing accuracy as the functionality of interest f, and select the reference representation A as the element in S with highest accuracy. Rank correlation
10Earlier layers have near-chance accuracy on probing tasks, so we ignore them.
results are consistent across the 5 choices of S (Appendix C), so we report the average as a summary statistic in Table 1.
We find that PWCCA has the highest rank correlation between dissimilarity and probing accuracy, followed by Procrustes, and distantly followed by CKA. This corroborates the intuitive observations from Section 3.2 that CKA is not sensitive to principal component deletion.
4.1 Investigating variation in OOD performance across random seeds
So far our benchmarks have been based on probing accuracy, which only measures in-distribution behavior (the train and test set of the probe are typically i.i.d.). In addition, the BERT models were always pretrained on language modeling but not finetuned for classification. To add diversity to our benchmarks, we next consider the out-of-distribution performance of language and vision models trained for classification tasks.
Benchmark 3: Changing fine-tuning seeds. McCoy et al. [23] show that a single pretrained BERT base model finetuned on MNLI with different random initializations will produce models with similar in-distribution performance, but widely variable performance on out-of-distribution data. We thus create a benchmark S out of McCoy et al.’s 100 released fine-tuned models, using OOD accuracy on the “Lexical Heuristic (Non-entailment)" subset of the HANS dataset [22] as our functionality f . This functionality is associated with the entire model, rather than an individual layer (in contrast to the probing functionality), but we consider one layer at a time to measure whether dissimilarities
between representations at that layer correlate with f . This allows us to also localize whether certain layers are more predictive of f .
We construct 12 different S (one for each of the 12 layers of BERT base), taking the reference representation A to be that of the highest accuracy model according to f . As before, we report each dissimilarity measure’s rank correlation with f in Table 1, averaged over the 12 runs.
All three dissimilarity measures correlate with OOD accuracy, with Orthogonal Procrustes and PWCCA being more correlated than CKA. Since the representations in our benchmarks were computed on in-distribution MNLI data, this has the interesting implication that dissimilarity measures can detect OOD differences without access to OOD data. It also implies that random initialization leads to meaningful functional differences that are picked up by these measures, especially Procrustes and PWCCA. Contrast this with our intuitive specificity test in Section 3.1, where all sensitivity to random initialization was seen as a shortcoming. Our more quantitative benchmark here suggests that some of that sensitivity tracks true functionality.
To check that the differences in rank correlation for Procrustes, PWCCA, and CKA are statistically significant, we compute bootstrap estimates of their 95% confidence intervals. With 2000 bootstrapped samples, we find statistically significant differences between all pairs of measures for most choices of layer depth S, so we conclude PWCCA > Orthogonal Procrustes > CKA (the full results are in Appendix E). We do not apply this procedure for the previous two benchmarks, because the different models have correlated randomness and so any p-value based on independence assumptions would be invalid.
Benchmark 4: Challenge sets: Changing pretraining and fine-tuning seeds. We also construct benchmarks using models trained from scratch with different random seeds (for language, this is pretraining and fine-tuning, and for vision, this is standard training). For language, we construct benchmarks from a collection of 100 BERT medium models, trained with all combinations of 10 pretraining and 10 fine-tuning seeds. The models are fine-tuned on MNLI, and we consider two different functionalities of interest f : accuracy on the OOD Antonymy stress test and on the OOD Numerical stress test [25], which both show significant variation in accuracy across models (see Figure 3d). We obtain 8 different sets S (one for each of the 8 layer depths in BERT medium), again taking A to be the representation of the highest-accuracy model according to f . Rank correlations for each dissimilarity measure are averaged over the 8 runs and reported in Table 1.
For vision, we construct benchmarks from a collection of 100 ResNet-14 models, trained with different random seeds on CIFAR-10. We consider 19 different functionalities of interest—the 19 types of corruptions in the CIFAR-10C dataset [12], which show significant variation in accuracy across models (see Figure 3c). We obtain 14 different sets S (one for each of the 14 layers), taking A to be the representation of the highest-accuracy model according to f. Rank correlations for each dissimilarity measure are averaged over the 14 runs and over the 19 corruption types and reported in Table 1. Results for each of the 19 corruptions individually can be found in Appendix D.
None of the dissimilarity measures show a large rank correlation for either the language or vision tasks, and for the Numerical stress test, at most layers, the associated p-values (assuming independence) are non-significant at the 0.05 level (see Appendix C).11 Thus we conclude that all measures fail to be sensitive to OOD accuracy in these settings. One reason for this could be that there is less variation in the OOD accuracies compared to the previous experiment with the HANS dataset (there, accuracies varied from 0 to nearly 60%). Another reason could be that it is harder to correctly account for both pretraining and fine-tuning variation at the same time. Either way, we hope that future dissimilarity measures can improve upon these results, and we present this benchmark as a challenge task to motivate progress.
5 Discussion
In this work we proposed a quantitative measure for evaluating similarity metrics, based on the rank correlation with functional behavior. Using this, we generated tasks motivated by sensitivity to
11See Appendix C for p-values as produced by scikit-learn. Strictly speaking, the p-values are invalid because they assume independence, but the pretraining seed induces correlations. However, correctly accounting for these would tend to make the p-values larger, thus preserving our conclusion of non-significance.
deleting important directions, specificity to random initialization, and sensitivity to out-of-distribution performance. Popular existing metrics such as CKA and CCA often performed poorly on these tasks, sometimes in striking ways. Meanwhile, the classical Orthogonal Procrustes transform attained consistently good performance.
Given the success of Orthogonal Procrustes, it is worth reflecting on how it differs from the other metrics and why it might perform well. To do so, we consider a simplified case where A and B have the same singular vectors but different singular values. Thus without loss of generality A = Λ1 and B = Λ2, where the Λi are both diagonal. In this case, the Orthogonal Procrustes distance reduces to ‖Λ1 − Λ2‖²_F, or the sum of the squared distances between the singular values. We will see that both CCA and CKA reduce to less reasonable formulae in this case.
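This reduction is easy to check numerically; the small numpy sketch below (our illustration) compares the closed form ‖Λ1 − Λ2‖²_F against the general Procrustes formula of Eq. (7) on random nonnegative diagonal matrices.

import numpy as np

rng = np.random.default_rng(0)
p = 6
lam1, lam2 = rng.random(p), rng.random(p)      # nonnegative singular values
A, B = np.diag(lam1), np.diag(lam2)            # representations sharing singular vectors

procrustes = (np.linalg.norm(A, 'fro') ** 2 + np.linalg.norm(B, 'fro') ** 2
              - 2 * np.linalg.norm(A @ B.T, ord='nuc'))
closed_form = np.sum((lam1 - lam2) ** 2)       # ||Lambda_1 - Lambda_2||_F^2
assert np.isclose(procrustes, closed_form)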
Orthogonal Procrustes vs. CCA. All three metrics derived from CCA assign zero distance even when the (non-zero) singular values are arbitrarily different. This is because CCA correlation coefficients are invariant to all invertible linear transformations. This invariance property may help explain why CCA metrics generally find layers within the same network to be much more similar than networks trained with different randomness. Random initialization introduces noise, particularly in unimportant principal components, while representations within the same network more easily preserve these components, and CCA may place too much weight on their associated correlation coefficients.
Orthogonal Procrustes vs. CKA. In contrast to the squared distance of Orthogonal Procrustes, CKA actually reduces to a quartic function based on the dot products between the squared entries of Λ1 and Λ2. As a consequence, CKA is dominated by representations' largest singular values, leaving it insensitive to meaningful differences in smaller singular values as illustrated in Figure 2. This lack of sensitivity to moderate-sized differences may help explain why CKA fails to track out-of-distribution error effectively.
In addition to helping understand similarity measures, our benchmarks pinpoint directions for improvement. No method was sensitive to accuracy on the Numerical stress test in our challenge set, possibly due to a lower signal-to-noise ratio. Since Orthogonal Procrustes performed well on most of our tasks, it could be a promising foundation for a new measure, and recent work shows how to regularize Orthogonal Procrustes to handle high noise [28]. Perhaps similar techniques could be adapted here.
An alternative to our benchmarking approach is to directly define two representations’ dissimilarity as their difference in a functional behavior of interest. Feng et al. [9] take this approach, defining dissimilarity as difference in accuracy on a handful of probing tasks. One drawback of this approach is that a small set of probes may not capture all the differences in representations, so it is useful to base dissimilarity measures on representations’ intrinsic properties. Intrinsically defined dissimilarities also have the potential to highlight new functional behaviors, as we found that representations with similar in-distribution probing accuracy often have highly variable OOD accuracy.
A limitation of our work is that we only consider a handful of model variations and functional behaviors, and restricting our attention to these settings could overlook other important considerations. To address this, we envision a paradigm in which a rich tapestry of benchmarks are used to ground and validate neural network interpretations. Other axes of variation in models could include training on more or fewer examples, training on shuffled labels vs. real labels, training from specifically chosen initializations [10], and using different architectures. Other functional behaviors to examine could include modularity and meta-learning capabilities. Benchmarks could also be applied to other interpretability tools beyond dissimilarity. For example, sensitivity to deleting principal components could provide an additional sanity check for saliency maps and other visualization tools [1].
More broadly, many interpretability tools are designed as audits of models, although it is often unclear what characteristics of the models are consistently audited. We position this work as a counter-audit, where by collecting models that differ in functional behavior, we can assess whether the interpretability tools CKA, PWCCA, etc., accurately reflect the behavioral differences. Many other types of counter-audits may be designed to assess other interpretability tools. For example, models that have backdoors built into them to misclassify certain inputs provide counter-audits for interpretability tools that explain model predictions–these explanations should reflect any backdoors present [5, 15, 19, 38]. We are hopeful that more comprehensive checks on interpretability tools will provide deeper understanding of neural networks, and more reliable models.
Acknowledgments and Disclosure of Funding
Thanks to Ruiqi Zhong for helpful comments and assistance in finetuning models, and thanks to Daniel Rothchild and our anonymous reviewers for helpful discussion. FD is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1752814 and the Open Philanthropy Project AI Fellows Program. JSD is supported by the NSF Division of Mathematical Sciences Grant No. 2031985. | 1. What is the focus of the paper regarding learned representation similarity measures?
2. What are the strengths and weaknesses of the proposed benchmarks and stress tests?
3. How does the reviewer assess the novelty and significance of the paper's contributions?
4. Are there any concerns regarding the evaluation methodology and technical details? If yes, please specify.
5. Can you provide suggestions for improving the clarity and interpretation of the results? | Summary Of The Paper
Review | Summary Of The Paper
The authors present a number of benchmarks and stress tests for similarity measures of learned representations of neural networks. Three similarity metrics are compared for their sensitivity and specificity of the similarity measure under a number of conditions, such as out-of-domain data or low rank approximations of hidden layer activation. Empirical results suggest that there is no clear trend for one or the other similarity metric that would be robust against all types of tests proposed.
Update: After reading the authors' responses I found that they addressed most of my concerns sufficiently. My main concern was that CCA a) is not evaluated with cross-validation and instead canonical correlations are computed on training data, and b) that the CCA projections are not regularized. These two points together can lead to overfitted CCA projections and biased estimates of the canonical correlations, especially in the settings addressed in the present study. This could explain the poor results of CCA in the comparisons.
The authors responded that this is how the PWCCA method was applied and evaluated, and that the overfitted, non-cross-validated canonical correlations are a 'feature' of PWCCA. I wouldn't agree with that personally, but I think it's a valid point -- after all, many researchers do not evaluate unsupervised methods on held-out test data, so there's probably some value in examining and comparing how these (wrong and biased) metrics computed on training data behave.
So I think it's a valuable contribution and I increased my score accordingly.
Review
Originality:
The probing tasks proposed are novel to the best of my knowledge, however they are a combination of previously proposed methods. It’s an important contribution to evaluate these different methods, and the idea to compare these methods under a variety of different conditions is also interesting.
Quality:
The submission is technically sound for most parts, but there are some technical details and some motivations/intuitions that could be made clearer.
For instance, in line 99 the authors write that "We find that PWCCA performs far better than …"; here it would be helpful to see what performance is measured.
If canonical correlations are not “working right” it could also be due to how they are evaluated. With high dimensional low rank data sets (and some large layers might be of that kind), CCA requires regularisation, and it makes sense to avoid overfitted canonical correlations (ccs) by evaluating the ccs on held-out data, just like with any supervised model. In line 302 the authors also suggest that CCA might have overfitted, but if it’s really overfitted it would not result in high held-out ccs. So both the hyper parameter optimisation for the CCA regularisation parameter as well as the evaluation on held out data might change the CCA results and I would assume the ccs are more trustworthy when cross-validated. That said, the other alternative similarity measures are probably more attractive as they do not require fitting parameters.
The idea with most probing tests makes a lot of sense, but I’m not sure I understand what exactly is measured in the experiments with removal of PC components, or rather how that measure would help to understand something about representations; it's hard to relate that transformation to something that happens to networks in applications. A minor general remark on the selection of components: I think it could be helpful to discard components based on the amount of variance they explain, not their absolute number.
In general it seems the authors often use the term “performance” and it is not really clear what is meant by that. It seems the methods are better or worse on different tasks, but it could be better explained what exactly is measured across all tasks, and what we can conclude from the results, beyond the observations for the single experiments.
Also, there are a number of packages for perturbations and augmentations that would be worth integrating into these tests.
Clarity:
The manuscript is well written and structured, but there could be a bit more structure in the interpretation of results or the experimental design. I was asking myself what the results imply and what specificity and sensitivity results tell us about the networks if the probing tasks are so specific and lead to different results for most similarity metrics. Maybe one could evaluate just robustness of the similarity metrics under perturbations. Or maybe there is a way to add more probing tests or modify the metric to see more consistent differences between similarity metrics. In the end one is interested in things like which metric should I use to evaluate which aspect.
Significance:
Developing tests to evaluate similarity measures is a relevant research direction, and the tests proposed in this work are an important contribution. There are some aspects of the evaluation that could be improved, and the heterogeneity of the empirical results suggests that it could be difficult to draw general conclusions about neural representations.
NIPS | Title
Greedy Sampling for Approximate Clustering in the Presence of Outliers
Abstract
Greedy algorithms such as adaptive sampling (k-means++) and furthest point traversal are popular choices for clustering problems. On the one hand, they possess good theoretical approximation guarantees, and on the other, they are fast and easy to implement. However, one main issue with these algorithms is the sensitivity to noise/outliers in the data. In this work we show that for k-means and k-center clustering, simple modifications to the well-studied greedy algorithms result in nearly identical guarantees, while additionally being robust to outliers. For instance, in the case of k-means++, we show that a simple thresholding operation on the distances suffices to obtain an O(log k) approximation to the objective. We obtain similar results for the simpler k-center problem. Finally, we show experimentally that our algorithms are easy to implement and scale well. We also measure their ability to identify noisy points added to a dataset.
1 Introduction
Clustering is one of the fundamental problems in data analysis. There are several formulations that have been very successful in applications, including k-means, k-median, k-center, and various notions of hierarchical clustering (see [19, 12] and references therein).
In this paper we will consider k-means and k-center clustering. These are both extremely well-studied. The classic algorithm of Gonzalez [16] for k-center clustering achieves a factor 2 approximation, and it is NP-hard to improve upon this for general metrics, unless P equals NP. For k-means, the classic algorithm is due to Lloyd [23], proposed over 35 years ago. Somewhat recently, [4] (see also [25]) proposed a popular variant, known as “k-means++”. This algorithm remedies one of the main drawbacks of Lloyd’s algorithm, which is the lack of theoretical guarantees. [4] proved that the k-means++ algorithm yields an O(log k) approximation to the k-means objective (and also improves performance in practice). By way of more complex algorithms, [21] gave a local search based algorithm that achieves a constant factor approximation. Recently, this has been improved by [2], which is the best known approximation algorithm for the problem. The best known hardness results rule out polynomial time approximation schemes [3, 11].
The algorithms of Gonzalez (also known as furthest point traversal) and [4] are appealing also due to their simplicity and efficiency. However, one main drawback in these algorithms is their sensitivity to corruptions/outliers in the data. Imagine 10k of the points of a dataset are corrupted and the coordinates take large values. Then both furthest point traversal as well as k-means++ end up choosing only the outliers. The goal of our work is to remedy this problem, and achieve the simplicity and scalability of these algorithms, while also being robust in a provable sense.
Specifically, our motivation will be to study clustering problems when some of the input points are (possibly adversarially) corrupted, or are outliers. Corruption of inputs is known to make even simple learning problems extremely difficult to deal with. For instance, learning linear classifiers in the presence of even a small fraction of noisy labels is a notoriously hard problem (see [18, 5]
and references therein). The field of high dimensional robust statistics has recently seen a lot of progress on various problems in both supervised and unsupervised learning (see [20, 14]). The main difference between our work and the works in robust statistics is that our focus is not to estimate a parameter related to a distribution, but to instead produce clusterings that are near-optimal in terms of an objective that is defined solely on inliers.
Formulating clustering with outliers. Let OPT_full(X) denote the k-center or k-means objective on a set of points X. Now, given a set of points that also includes outliers, the goal in clustering with outliers (see [7, 17, 22]) is to partition the points X into X_in and X_out so as to minimize OPT_full(X_in). To avoid the trivial case of setting X_in = ∅, we require |X_out| ≤ z, for some parameter z that is also given. Thus, we define the optimum OPT of the k-clustering with outliers problem as

$$\mathrm{OPT} := \min_{|X_{\mathrm{out}}| \le z} \mathrm{OPT}_{\mathrm{full}}(X \setminus X_{\mathrm{out}}).$$
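As a concrete reading of this definition, the following small Python sketch (ours, not from the paper) evaluates a candidate set of centers under the outlier-aware objective: the z points farthest from the centers are discarded and the objective is computed on the remainder.

import numpy as np

def objective_with_outliers(X, centers, z, kind="kmeans"):
    # Distance from each point to its nearest center.
    d = np.min(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
    keep = np.argsort(d)[: len(X) - z]  # discard the z farthest points as outliers
    if kind == "kmeans":
        return np.sum(d[keep] ** 2)     # k-means: sum of squared distances
    return np.max(d[keep])              # k-center: maximum distance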
This way of defining the objective has also found use for other problems such as PCA with outliers (also known as robust PCA, see [6] and references therein). For the problems we consider, namely k-center and k-means, there are many existing works that provide approximation algorithms for OPT as defined above. The early work of [7] studied the problem of k-median and facility location in this setup. The algorithms provided were based on linear programming relaxations, and were primarily motivated by the theoretical question of the power of such relaxations. Recently, [17] gives a more practical local search based algorithm, with running time quadratic in the number of points (which can also be reduced to a quadratic dependence on z, in the case z ≪ n). Both of these algorithms are bi-criteria approximations (defined formally below). In other words, they allow the algorithm to discard > z outliers, while obtaining a good approximation to the objective value OPT. In practice, this corresponds to declaring a small number of the inliers as outliers. In applications where the true clusters are robust to small perturbations, such algorithms are acceptable.
The recent result of [22] (and the earlier result of [10] for k-median) go beyond bi-criteria approximation. They prove that for k-means clustering, one can obtain a factor 50 approximation to the value of OPT, while declaring at most z points as outliers, as desired. While this effectively settles the complexity of the problem, there are many key drawbacks. First, the algorithm is based on an iterative procedure that solves a linear programming relaxation in each step, which can be very inefficient in practice (and hard to implement). Further, in many applications, it may be necessary to improve on the (factor 50) approximation guarantee, potentially at the cost of choosing more clusters or slightly weakening the bound on the number of outliers.
Our main results aim to address this drawback. We prove that very simple variants of the classic Gonzalez algorithm for k-center, and the k-means++ algorithm for k-means, result in approximation guarantees. The catch is that we only obtain bi-criteria results. To state our results, we will define the following notion.
Definition 1. Consider an algorithm for the k-clustering (means/center) problem that, on input X, k, z, outputs k' centers (allowed to be slightly more than k), along with a partition X = X'_in ∪ X'_out that satisfies (a) |X'_out| ≤ βz, and (b) the objective value of assigning the points X'_in to the output centers is at most α · OPT. Then we say that the algorithm obtains an (α, β) approximation using k' centers, for the k-clustering problem with outliers.
Note that while our main results only output k centers, clustering algorithms are also well-studied when the number of clusters is not strictly specified. This is common in practice, where the application only demands a rough bound on the number of clusters. Indeed, the k-means++ algorithm is known to achieve much better approximations (constant as opposed to O(log k)) for the problem without outliers, when the number of centers output is O(k) instead of k [1, 26].
1.1 Our results.
K-center clustering in metric spaces. For k-center, our algorithm is a variant of furthest point traversal, in which instead of selecting the furthest point from the current set of centers, we choose a random point that is not too far from the current set. Our results are the following.
Theorem 1.1. Let z, k, ε > 0 be given parameters, and X = X_in ∪ X_out be a set of points in a metric space with |X_out| ≤ z. There is an efficient randomized algorithm that with probability 3/4 outputs a (2 + ε, 4 log k)-approximation using precisely k centers to the k-center with outliers problem.
Remark – guessing the optimum. The additional ε in the approximation is because we require guessing the value of the optimum. This is quite standard in clustering problems, and can be done by a binary search. If OPT is assumed to lie in the range (c, cΔ) for some c > 0, then it can be estimated up to an error of cε in time O(log(Δ/ε)), which gets added as a factor in the running time of the algorithm. In practice, this is often easy to achieve with Δ = poly(n). We will thus assume a knowledge of the optimum value in both our algorithms.
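A minimal sketch of this guessing step (our own illustration; the call signature solve(X, k, z, r) of the clustering subroutine and the acceptance budget are assumptions made for the example) tries a geometric grid of radius guesses and keeps the smallest one whose solution discards few enough points:

import numpy as np

def guess_opt(X, k, z, solve, lo, hi, eps=0.1):
    """Geometric search over the radius guess r in [lo, hi].
    solve(X, k, z, r) is assumed to return (centers, num_discarded)."""
    budget = int(z * (4 * np.log(k) + 1))  # outlier budget suggested by Theorem 1.1
    r = lo
    while r <= hi:
        centers, discarded = solve(X, k, z, r)
        if discarded <= budget:
            return r, centers
        r *= 1 + eps                       # O(log(hi/lo)/eps) guesses in total
    return hi, None

Since the underlying algorithm only succeeds with constant probability, in practice one would rerun the subroutine a few times for each guess.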
Also, note that the algorithm outputs exactly k centers, and obtains the same (factor 2, up to ε) approximation to the objective as the Gonzalez algorithm, but after discarding O(z log k) points as outliers. Next, we will show that if we allow the algorithm to output > k centers, one can achieve a better dependence on the number of points discarded.
Theorem 1.2. Let z, k, c, ε > 0 be given parameters, and X = X_in ∪ X_out be a set of points in a metric space with |X_out| ≤ z. There is an efficient randomized algorithm that with probability 3/4 outputs a (2 + ε, (1 + c)/c)-approximation using (1 + c)k centers to the k-center with outliers problem.
As c increases, note that the algorithm outputs very close to z outliers. In other words, the number of points it falsely discards as outliers is small (at the expense of larger k).
K-means clustering. Here, our main contribution is to study an algorithm called T-kmeans++, a variant of D2 sampling (i.e. k-means++), in which the distances are thresholded appropriately before probabilities are computed. For this simple variant, we will establish robust guarantees that nearly match the guarantees known for k-means++ without any outliers.
Theorem 1.3. Let z, k, β be given parameters, and X = X_in ∪ X_out be a set of points in Euclidean space with |X_out| ≤ z. There is an efficient randomized algorithm that with probability 3/4 gives an (O(log k), O(log k))-approximation using k centers to the k-means with outliers problem on X.
The algorithm outputs an O(log k) approximation to the objective value (similar to k-means++). However, the algorithm may discard up to O(z log k) points as outliers. Note also that when z = 0, we recover the usual k-means++ guarantee. As in the case of k-center, we ask if allowing a bi-criteria approximation improves the dependence on the number of outliers. Here, an additional dimension also comes into play. For k-means++, it is known that choosing O(k) centers lets us approximate the k-means objective up to an O(1) factor (see, for instance, [1, 4, 25]). We can thus ask if a similar result is possible in the presence of outliers. We show that the answer to both the questions is yes.
Theorem 1.4. Let z, k, β, c be given parameters, and X = X_in ∪ X_out be a set of points in a metric space with |X_out| ≤ z. Let δ > 0 be an arbitrary constant. There is an efficient randomized algorithm that with probability 3/4 gives a (β + 64, (1 + c)(1 + δ)/(c(1 − δ)))-approximation using (1 + c)k centers to the k-means with outliers problem on X.
Given the simplicity of our procedure, it is essentially as fast as k-means++ (modulo the step of guessing the optimum value, which adds a logarithmic overhead). Assuming that this is O(log n), our running times are all Õ(kn). In particular, the procedure is significantly faster than local search approaches [17], as well as linear programming based algorithms [22, 10]. Our run times also compare well with those of recent, coreset based approaches to clustering with outliers, such as those of [9, 24] (see also references therein).
1.2 Overview of techniques
To show all our results, we consider simple randomized modifications of classic algorithms, specifically Gonzales’ algorithm and the k-means++ algorithm. Our modifications, in effect, place a threshold on the probability of any single point being chosen. The choice of the threshold ensures that during the entire course of the algorithm, only a small number of outlier points will be chosen. Our analysis thus needs to keep track of (a) the number of points being chosen, (b) the number of inlier clusters from which we have chosen points (and in the case of k-means, points that are “close to the center”), (c) number of “wasted” iterations, due to choosing outliers. We use different potential functions to keep track of these quantities and measure progress. These potentials are directly inspired by the elegant analysis of the k-means++ algorithm provided in [13] (which is conceptually simpler than the original one in [4]).
2 Warm-up: Metric k-center in the presence of outliers
Let (X, d) be a metric space. Recall that the classic Gonzalez algorithm [16] for k-center works by maintaining a set of centers S, and at each step finding the point x ∈ X that is furthest from S and adding it to S. After k iterations, a simple argument shows that the S obtained gives a factor 2 approximation to the best k centers in terms of the k-center objective.
As we described earlier, this furthest point traversal algorithm is very susceptible to the presence of outliers. In particular, if the input X includes z > k points that are far away from the rest of the points, all the points selected (except possibly the first) will be outliers. Our main idea to overcome this problem is to ensure that no single point is too likely to be picked in each step. Consider the simple strategy of choosing one of the 2z points furthest away from S (uniformly at random; we are assuming n ≥ 2z + k). This ensures that in every step, there is at least a 1/2 probability of picking an inlier (as there are only z outliers). In what follows, we will improve upon this basic idea and show that it leads to a good approximation to the objective restricted to the inliers.
The algorithm for proving Theorems 1.1 and 1.2 is very simple: in every step, a center is added to the current solution by choosing a uniformly random point in the dataset that is at a distance > 2r from the current centers. As discussed in Section 1.2, our proofs of both the theorems employ an appropriately designed potential function, adapted from [13].
Algorithm 1: k-center with outliers
Input: points X ⊆ R^d, parameters k, z, r; r is a guess for OPT
Output: a set S_ℓ ⊆ X of size ℓ
1: Initialize S_0 = ∅
2: for t = 1 to ℓ do
3:   Let F_t be the set of all points that are at a distance > 2r from S_{t−1}, i.e., F_t := {x ∈ X : d(x, S_{t−1}) > 2r}
4:   Let x be a point sampled u.a.r. from F_t
5:   S_t = S_{t−1} ∪ {x}
6: return S_ℓ
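For concreteness, a direct NumPy sketch of Algorithm 1 might look as follows (this is our own illustrative implementation; it assumes Euclidean distances, although the algorithm itself works in any metric space):

import numpy as np

def k_center_with_outliers(X, ell, r, rng=None):
    """Algorithm 1: repeatedly add a uniformly random point that is farther
    than 2r from the current centers. Returns the chosen centers and the
    indices of points still uncovered (the declared outliers)."""
    rng = np.random.default_rng(rng)
    dist = np.full(len(X), np.inf)            # distance to the current center set
    centers = []
    for _ in range(ell):
        far = np.flatnonzero(dist > 2 * r)    # the set F_t
        if far.size == 0:                     # everything is already covered
            break
        c = rng.choice(far)                   # sample u.a.r. from F_t
        centers.append(c)
        dist = np.minimum(dist, np.linalg.norm(X - X[c], axis=1))
    uncovered = np.flatnonzero(dist > 2 * r)
    return X[centers], uncovered

Running it with ell = k corresponds to Theorem 1.1, and with ell = (1 + c)k to the bi-criteria guarantee of Theorem 1.2.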
Notation. Let C_1, C_2, . . . , C_k be the optimal clusters. So by definition, ∪_i C_i = X_in. Let F_t be the set of far away points at time t, as defined in the algorithm. Thus F_t includes both inliers and outliers. A simple observation about the algorithm is the following.
Observation 1. Suppose that the guess r is ≥ OPT, and consider any iteration t of the algorithm. Let u ∈ C_i be one of the chosen centers (i.e., u ∈ S_t). Then C_i ∩ F_t = ∅, and thus no other point in C_i can be subsequently added as a center.
Finally, we denote by E_i^(t) the set of points in cluster C_i that are at a distance > 2r from S_t, i.e., we define E_i^(t) := C_i ∩ F_t. The observation above implies that E_i^(t) = ∅ whenever S_t contains some u ∈ C_i. But the converse is not necessarily true (since all the points in C_i could be at a distance < 2r from points in other clusters, which happened to be picked in S_t).
Next, let n_t denote the number of clusters i such that C_i ∩ S_t = ∅, i.e., the number of clusters none of whose points were selected so far. We are now ready to analyze the algorithm.
2.1 Algorithm choosing k-centers
We will now analyze the execution of Algorithm 1 for k iterations, thereby establishing Theorem 1.1.
The key step is to define the appropriate potential function. To this end, let w_t denote the number of times that one of the outliers was added to the set S in the first t iterations, i.e., w_t = |X_out ∩ S_t|. The potential we consider is now:

$$\Phi_t := \frac{w_t \, |F_t \cap X_{\mathrm{in}}|}{n_t}. \qquad (1)$$
Our main lemma bounds the expected increase in Φ_t, conditioned on any choice of S_t (recall that S_t determines n_t).
Lemma 1. Let S_t be any set of centers chosen in the first t iterations, for some t ≥ 0. We have

$$\mathbb{E}_{t+1}\left[\Phi_{t+1} - \Phi_t \mid S_t\right] \le \frac{z}{n_t}.$$

As usual, E_{t+1} denotes an expectation only over the (t + 1)-th step. Let us first see how the lemma implies Theorem 1.1.
Proof of Theorem 1.1. The idea is to repeatedly apply Lemma 1. Since we do not know the values of n_t, we use the simple lower bound n_t ≥ k − t, for any t < k. Along with the observation that Φ_0 = 0 (since w_0 = 0), we have

$$\mathbb{E}[\Phi_k] = \sum_{t=0}^{k-1} \mathbb{E}[\Phi_{t+1} - \Phi_t] \le \sum_{t=0}^{k-1} \frac{z}{k-t} \le z H_k,$$

where H_k is the k-th Harmonic number. Thus by Markov’s inequality, Pr[Φ_k ≤ 4zH_k] ≥ 3/4. By the definition of Φ_k, this means that with probability at least 3/4,

$$\frac{w_k \, |F_k \cap X_{\mathrm{in}}|}{n_k} \le 4z \ln k.$$
The key observation is that we always have w_k = n_k. This is because if the set S_k did not intersect n_k of the optimal clusters, then since S_k cannot include two points from the same cluster (as we observed earlier), precisely n_k of the iterations must have chosen outliers. This means that with probability at least 3/4, we have |F_k ∩ X_in| ≤ 4z ln k. This means that after k iterations, with probability at least 3/4, at most 4z ln k of the inliers are at a distance > 2r away from the chosen set S_k. Thus the total number of points at a distance > 2r away from S_k is at most z(4 ln k + 1). This completes the proof of the theorem.
We thus only need to show Lemma 1.
Proof of Lemma 1. For simplicity, let us write e_i := |E_i^(t)| = |C_i ∩ F_t|. In other words, e_i is the number of points in the i-th optimal cluster that are at distance > 2r from S_t. Let us also write F = Σ_i e_i. By definition, we have that F = |F_t ∩ X_in|.
Then, the sampling in the (t + 1)-th iteration samples an inlier with probability F/|F_t|, and an outlier with probability 1 − F/|F_t|. If an inlier is sampled, the value n_t reduces by 1, but w_t stays the same. If an outlier is sampled, the value n_t stays the same, while w_t increases by 1. The value of |F_t ∩ X_in| is non-increasing. If a point in C_i is chosen (which happens with probability e_i/|F_t|), it reduces by at least e_i. Thus, we have

$$\mathbb{E}[\Phi_{t+1}] \le \sum_{i=1}^{k} \frac{e_i}{|F_t|} \cdot \frac{w_t (F - e_i)}{n_t - 1} + \left(1 - \frac{F}{|F_t|}\right) \frac{(w_t + 1) F}{n_t}. \qquad (2)$$

The first term on the RHS can be simplified as

$$\frac{w_t}{|F_t|(n_t - 1)} \sum_i e_i (F - e_i) = \frac{w_t}{|F_t|(n_t - 1)} \left(F^2 - \sum_i e_i^2\right).$$

The number of non-zero e_i is at most n_t, by definition. Thus we have Σ_i e_i² ≥ F²/n_t. Plugging this into (2) and simplifying, we have

$$\mathbb{E}[\Phi_{t+1}] \le \frac{w_t F^2}{|F_t| n_t} + \left(1 - \frac{F}{|F_t|}\right) \frac{(w_t + 1) F}{n_t} = \Phi_t + \left(1 - \frac{F}{|F_t|}\right) \frac{F}{n_t}.$$

The proof now follows by using the simple facts: (1 − F/|F_t|) ≤ z/|F_t| (which is true because there are at most z outliers) and F ≤ |F_t| (which is true by definition, because F = |X_in ∩ F_t|).
This completes the analysis of Algorithm 1 when the number of centers ℓ is exactly k.
2.2 Bi-criteria approximation
Next, we see that running Algorithm 1 for ℓ = (1 + c)k iterations results in covering more clusters (thus resulting in fewer outliers). Thus we end up with a tradeoff between the number of centers chosen and the number of points the algorithm declares as outliers (while obtaining the same approximation (factor 2) for the objective OPT – Theorem 1.2). The potential function now needs modification. The details are deferred to Section A.1.
3 k-means via thresholded adaptive sampling
Next we consider the k-means problem when some of the points are outliers. Here we propose a variant of the k-means++ procedure (see [4]), which we call T-kmeans++. Our algorithm, like k-means++, is an iterative algorithm that samples a point to be a centroid at each iteration according to a probability that depends on the distance to the current set of centers. However, we avoid the problem of picking too many outliers by simply thresholding the distances.
Notation. Let us start with some notation that we use for the remainder of the paper. The points X are now in a Euclidean space (as opposed to an arbitrary metric space in Section 2). We assume as before that |X| = n, and X = X_in ∪ X_out, where |X_out| = z, which is a known parameter. Additionally, β will be a parameter that we will control. For the purposes of defining the algorithm, we assume that we have a guess for the optimum objective value, denoted OPT.
Now, for any set of centers C, we define

$$\tau(x, C) = \min\left(d(x, C)^2,\ \frac{\beta \cdot \mathrm{OPT}}{z}\right). \qquad (3)$$

We follow the standard practice of defining the distance to an empty set to be ∞. Next, for any set of points U, define τ(U, C) = Σ_{x∈U} τ(x, C). Note that the parameter β lets us interpolate between uniform sampling (β → 0) and classic D² sampling (β → ∞). In our results, choosing a higher β has the effect of reducing the number of points we declare as outliers, at the expense of having a worse guarantee on the approximation ratio for the objective.
We can now state our algorithm (denoted Algorithm 2)
Algorithm 2: Thresholded Adaptive Sampling – T-kmeans++
Input: a set of points X ⊆ R^d, parameters k, z, and a guess for the optimum OPT.
Output: a set S_ℓ ⊆ X of size ℓ.
1: Initialize S_0 = ∅.
2: for t = 1 . . . ℓ do
3:   sample a point x from the distribution p(x) = τ(x, S_{t−1}) / Σ_{x'∈X} τ(x', S_{t−1})  (with τ as defined in (3))
4:   set S_t = S_{t−1} ∪ {x}.
5: return S_ℓ
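A minimal NumPy sketch of T-kmeans++ (our own illustration; it assumes Euclidean points and a known guess for OPT, and uses squared Euclidean distances as in the paper):

import numpy as np

def t_kmeanspp(X, ell, z, beta, opt_guess, rng=None):
    """Algorithm 2: adaptive (D^2) sampling with thresholded distances.
    Points are sampled with probability proportional to
    tau(x, S) = min(d(x, S)^2, beta * OPT / z)."""
    rng = np.random.default_rng(rng)
    cap = beta * opt_guess / z            # the threshold in Eq. (3)
    sq_dist = np.full(len(X), np.inf)     # d(x, S)^2 for the current centers S
    centers = []
    for _ in range(ell):
        tau = np.minimum(sq_dist, cap)    # thresholding caps any single point's weight
        c = rng.choice(len(X), p=tau / tau.sum())
        centers.append(c)
        sq_dist = np.minimum(sq_dist, np.sum((X - X[c]) ** 2, axis=1))
    return X[centers]

Because the first iteration has all distances equal to the cap, the first center is chosen uniformly at random; as beta grows the procedure approaches standard k-means++ seeding.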
The key to the analysis is the following observation: instead of the k-means objective, it suffices to bound the quantity Σ_{x∈X} τ(x, S_ℓ).
Lemma 2. Let C be a set of centers, and suppose that τ(X, C) ≤ α · OPT. Then we can partition X into X'_in and X'_out such that

1. Σ_{x∈X'_in} d(x, C)² ≤ α · OPT, and
2. |X'_out| ≤ αz/β.
Proof. The proof follows easily from the definition of τ (Eq. (3)). Let X'_out be the set of points for which d(x, C)² > β · OPT/z, and let X'_in be X \ X'_out. Then by definition (and the bound on τ(X, C)), we have

$$\sum_{x \in X'_{\mathrm{in}}} d(x, C)^2 + |X'_{\mathrm{out}}| \cdot \frac{\beta \, \mathrm{OPT}}{z} \le \alpha \cdot \mathrm{OPT}.$$

Both the terms on the LHS are non-negative. Dropping the second term gives the first part of the lemma, and dropping the first term (and rearranging the resulting inequality) gives the second part of the lemma.
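In code, the partition in Lemma 2 is just a thresholding step; here is a small sketch (ours) given a set of centers C, the outlier budget z, the threshold parameter beta, and the guess for OPT:

import numpy as np

def split_by_threshold(X, centers, z, beta, opt_guess):
    """Lemma 2 partition: points whose squared distance to the centers exceeds
    beta * OPT / z are declared outliers; the rest are the inliers."""
    sq_dist = np.min(
        np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2), axis=1
    )
    is_outlier = sq_dist > beta * opt_guess / z
    inlier_cost = np.sum(sq_dist[~is_outlier])   # bounded as in part 1 of the lemma
    num_outliers = int(np.sum(is_outlier))       # bounded by (alpha / beta) * z in part 2
    return inlier_cost, num_outliers, is_outlier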
3.1 k-means with outliers: an O(log k) approximation
Our first result is an analog of the theorem of [4], for the setting in which we have outliers in the data. As in the case of k-center clustering, we use a potential based analysis (inspired from [13]).
Theorem 3.1. Running Algorithm 2 for k iterations outputs a set S_k that satisfies

$$\mathbb{E}[\tau(X, S_k)] \le (\beta + O(1)) \log k \cdot \mathrm{OPT}.$$
We note that Theorem 3.1 together with Lemma 2 directly implies Theorem 1.3. Thus the main step is to prove Theorem 3.1. This is done using a potential function as before, but requires a more careful argument than the one for k-center (specifically, the goal is not to include some point from a cluster, but to include a “central” one). Please see the supplement, section A.2 for details.
3.2 Bi-criteria approximation
Theorem 3.2. Consider running Algorithm 2 for ℓ = (1 + c)k iterations, where c > 0 is a constant. Then for any δ > 0, with probability δ, the set S_ℓ satisfies

$$\tau(X, S_\ell) \le \frac{(\beta + 64)(1 + c)\,\mathrm{OPT}}{(1 - \delta)\,c}.$$

Note that this theorem directly implies Theorem 1.4 by repeating the algorithm O(1/δ) times. Once again, we use a slightly different potential function from the one for the O(log k) approximation. We defer the details of the proof to Section A.3 of the supplementary material.
4 Experiments
In this section, we demonstrate the empirical performance of our algorithm on multiple real and synthetic datasets, and compare it to existing heuristics. We observe that the algorithm generally behaves better than known heuristics, both in accuracy and (especially in) the running time. Our real and synthetic datasets are designed in a manner similar to [17]. All real datasets we use are available from the UCI repository [15].
k-center with outliers. We will evaluate Algorithm 1 on synthetic data sets, where points are generated according to a mixture of d-dimensional Gaussians. The outliers in this case are chosen randomly in an appropriate bounding box.
Metrics. For k-center, we choose synthetic datasets because we wish to measure the cluster recall, i.e., the fraction of true clusters from which points are chosen by the algorithm. (Ideally, if we choose k centers, we wish to have precisely one point chosen from each cluster, so the cluster recall is 1). We compute this quantity for three algorithms: the first is the trivial baseline of choosing k' random points from the dataset (denoted Random). The second and third are KC-Outlier and Gonzalez respectively. Figure 1 shows the recall as we vary the number of centers chosen. Note that when k = 20, even when roughly k' = 23 centers are chosen, we have a perfect recall (i.e., all the clusters are chosen) for our algorithm. Meanwhile Random and Gonzalez take considerably longer to find all the clusters.
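The data generation and the recall metric just described can be sketched as follows (our own reconstruction; the mixture parameters, bounding box, and cluster sizes are assumptions, since the text does not specify them):

import numpy as np

def make_synthetic(k, n_per_cluster, z, d, spread=10.0, rng=None):
    """k Gaussian clusters plus z outliers drawn uniformly from a bounding box."""
    rng = np.random.default_rng(rng)
    means = rng.uniform(-spread, spread, size=(k, d))
    inliers = np.concatenate([m + rng.standard_normal((n_per_cluster, d)) for m in means])
    outliers = rng.uniform(-3 * spread, 3 * spread, size=(z, d))
    X = np.concatenate([inliers, outliers])
    labels = np.concatenate([np.repeat(np.arange(k), n_per_cluster), np.full(z, -1)])
    return X, labels          # label -1 marks an outlier

def cluster_recall(chosen_idx, labels, k):
    """Fraction of the k true clusters containing at least one chosen center."""
    hit = {int(labels[i]) for i in chosen_idx if labels[i] >= 0}
    return len(hit) / k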
k-means with outliers. Once again, we demonstrate the cluster recall on a synthetic dataset. In this case, we compare our algorithm with a heuristic proposed in [17]: running k-means++ followed by an iteration of “outlier-sensitive Lloyd’s iteration”, proposed in [8]. We also ran the latter procedure as a post-processing step for our algorithm. Figure 2 reports the cluster recall and the value of the k-means objective for the algorithms. Unlike the case of k-center, the T-kmeans++ algorithm can potentially choose points in one cluster multiple times. However, we consistently observe that T-kmeans++ outperforms the other heuristics.
Finally, we perform experiments on three datasets:
1. NIPS (a dataset from the conference NIPS over 1987-2015): clustering was done on the rows of a 11463 × 50 matrix (the number of columns was reduced via SVD).
2. The MNIST digit-recognition dataset: clustering was performed on the rows of a 60000 × 40 matrix (again, SVD was used to reduce the number of columns).
3. Skin Dataset (available via the UCI database): clustering was performed on the rows of a 245,057 × 3 matrix (original dataset).
In order to simulate corruptions, we randomly choose 2.5% of the points in the datasets and corrupt all the coordinates by adding independent noise in a pre-defined range. The following table outlines the results. We report the outlier recall, i.e., the fraction of true outliers designated as outliers by the algorithm. For fair comparison, we make all the algorithms output precisely z outliers. Our results indicate slightly better recall values for T-kmeans++. For some data sets (e.g. Skin), the k-means objective value is worse for T-kmeans++. Thus in this case, the outliers are not “sufficiently corrupting” the original clustering.¹
Dataset  k   KM recall  TKM recall  KM objective  TKM objective
NIPS     10  0.960      0.977       4173211       4167724
NIPS     20  0.939      0.973       4046443       4112852
NIPS     30  0.924      0.978       3956768       4115889
Skin     10  0.619      0.667       7726552       7439527
Skin     20  0.642      0.690       5936156       5637427
Skin     30  0.630      0.690       5164635       4853001
MNIST    10  0.985      0.988       1.546 × 10^8  1.513 × 10^8
MNIST    20  0.982      0.989       1.475 × 10^8  1.449 × 10^8
MNIST    30  0.977      0.986       1.429 × 10^8  1.412 × 10^8
Table showing outlier recall for KM (k-means++) and TKM (T-kmeans++) along with the k-means cost.
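The corruption protocol behind these numbers can be sketched as follows (our own reconstruction of the description above; the specific noise range is an assumption, since only a “pre-defined range” is stated in the text):

import numpy as np

def corrupt_and_score(X, frac=0.025, noise_range=100.0, rng=None):
    """Corrupt a random fraction of the rows with uniform noise and return a
    scorer for outlier recall: the fraction of corrupted rows that appear among
    the points an algorithm declares as outliers."""
    rng = np.random.default_rng(rng)
    X = X.copy()
    n, d = X.shape
    corrupted = rng.choice(n, size=int(frac * n), replace=False)
    X[corrupted] += rng.uniform(-noise_range, noise_range, size=(len(corrupted), d))

    def outlier_recall(declared_idx):
        return len(set(map(int, declared_idx)) & set(map(int, corrupted))) / len(corrupted)

    return X, corrupted, outlier_recall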
5 Conclusion
We proposed simple variants of known greedy heuristics for two popular clustering settings (k-center and k-means clustering) in order to deal with outliers/noise in the data. We proved approximation guarantees, comparing to the corresponding objectives on only the inliers. The algorithms are also easy to implement, run in Õ(kn) time, and perform well on both real and synthetic datasets.
¹ An anonymous reviewer suggested experiments on the kddcup-1999 dataset (as in [9]). However, we observed that treating certain labels as outliers as done in the prior work is not meaningful: the outliers turn out to be closer to one of the cluster centers than many points in that cluster. | 1. What are the key contributions and novel aspects introduced by the paper in modifying popular approximation algorithms for the k-centers and k-means clustering problems?
2. What are the strengths of the paper, particularly in its theoretical analysis and empirical results?
3. Do you have any concerns or questions regarding the paper, such as the availability of a good estimate of OPT (the minimum objective value) or minor comments/issues/typos? | Review | Review
This paper proposes simple algorithmic modifications to popular approximation algorithms for the k-centers and k-means clustering problems. These modifications limit the selection of outliers, and thereby allow the authors to translate existing theoretical properties of the standard algorithms to the corresponding problems where outliers are present. The modifications in essence either constrain the selection of initial centroids based on their distance from the current set, or reduce the probabilities associated with selecting outliers that arise in the kmeans++ algorithm, which has d^2-proportional probabilities. Empirical results show that the methods work well on some popular benchmarks. The paper is clear and well written, and the methods, although simple modifications of existing algorithms, are intuitive. The theoretical analysis is also well presented and persuasive. My main concern here surrounds the availability of a good estimate of OPT (the minimum objective value), especially for the k-means problem. The authors claim that this is a common assumption in clustering literature, but don't provide a reference. To conclude, a few minor comments/issues/typos: 1. Theorem 1.2 appears to dominate Theorem 1.1 when c = 1. If my understanding is correct, what then is the use of Theorem 1.1? 2. In Theorem 1.4 there is a typo in the statement of the approximation (extra right brace after "\beta + 64"). 3. In line 235 I presume you mean "... considerably more technical than our analysis for K-CENTERS..."
NIPS | Title
Greedy Sampling for Approximate Clustering in the Presence of Outliers
Abstract
Greedy algorithms such as adaptive sampling (k-means++) and furthest point traversal are popular choices for clustering problems. One the one hand, they possess good theoretical approximation guarantees, and on the other, they are fast and easy to implement. However, one main issue with these algorithms is the sensitivity to noise/outliers in the data. In this work we show that for k-means and k-center clustering, simple modifications to the well-studied greedy algorithms result in nearly identical guarantees, while additionally being robust to outliers. For instance, in the case of k-means++, we show that a simple thresholding operation on the distances suffices to obtain an O(log k) approximation to the objective. We obtain similar results for the simpler k-center problem. Finally, we show experimentally that our algorithms are easy to implement and scale well. We also measure their ability to identify noisy points added to a dataset.
1 Introduction
Clustering is one of the fundamental problems in data analysis. There are several formulations that have been very successful in applications, including k-means, k-median, k-center, and various notions of hierarchical clustering (see [19, 12] and references there-in).
In this paper we will consider k-means and k-center clustering. These are both extremely well-studied. The classic algorithm of Gonzalez [16] for k-center clustering achieves a factor 2 approximation, and it is NP-hard to improve upon this for general metrics, unless P equals NP. For k-means, the classic algorithm is due to Lloyd [23], proposed over 35 years ago. Somewhat recently, [4] (see also [25]) proposed a popular variant, known as “k-means++”. This algorithm remedies one of the main drawbacks of Lloyd’s algorithm, which is the lack of theoretical guarantees. [4] proved that the k-means++ algorithm yields an O(log k) approximation to the k-means objective (and also improves performance in practice). By way of more complex algorithms, [21] gave a local search based algorithm that achieves a constant factor approximation. Recently, this has been improved by [2], which is the best known approximation algorithm for the problem. The best known hardness results rule out polynomial time approximation schemes [3, 11].
The algorithms of Gonzalez (also known as furthest point traversal) and [4] are appealing also due to their simplicity and efficiency. However, one main drawback in these algorithms is their sensitivity to corruptions/outliers in the data. Imagine 10k of the points of a dataset are corrupted and the coordinates take large values. Then both furthest point traversal as well as k-means++ end up choosing only the outliers. The goal of our work is to remedy this problem, and achieve the simplicity and scalability of these algorithms, while also being robust in a provable sense.
Specifically, our motivation will be to study clustering problems when some of the input points are (possibly adversarially) corrupted, or are outliers. Corruption of inputs is known to make even simple learning problems extremely difficult to deal with. For instance, learning linear classifiers in the presence of even a small fraction of noisy labels is a notoriously hard problem (see [18, 5]
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
and references therein). The field of high dimensional robust statistics has recently seen a lot of progress on various problems in both supervised and unsupervised learning (see [20, 14]). The main difference between our work and the works in robust statistics is that our focus is not to estimate a parameter related to a distribution, but to instead produce clusterings that are near-optimal in terms of an objective that is defined solely on inliers.
Formulating clustering with outliers. Let OPTfull(X) denote the k-center or k-means objective on a set of points X . Now, given a set of points that also includes outliers, the goal in clustering with outliers (see [7, 17, 22]) is to partition the points X into Xin and Xout so as to minimize OPTfull(Xin). To avoid the trivial case of setting Xin = ;, we rquire |Xout| z, for some parameter z that is also given. Thus, we define the optimum OPT of the k-clustering with outliers problem as
OPT := min |Xout|z OPTfull(X \Xout).
This way of defining the objective has also found use for other problems such as PCA with outliers (also known as robust PCA, see [6] and references therein). For the problems we consider, namely k-center and k-means, there are many existing works that provide approximation algorithms for OPT as defined above. The early work of [7] studied the problem of k-median and facility location in this setup. The algorithms provided were based on linear programming relaxations, and were primarily motivated by the theoretical question of the power of such relaxations. Recently, [17] gives a more practical local search based algorithm, with running time quadratic in the number of points (which can also be reduced to a quadratic dependence on z, in the case z ⌧ n). Both of these algorithms are bi-criteria approximations (defined formally below). In other words, they allow the algorithm to discard > z outliers, while obtaining a good approximation to the objective value OPT. In practice, this corresponds to declaring a small number of the inliers as outliers. In applications where the true clusters are robust to small perturbations, such algorithms are acceptable.
The recent result of [22] (and the earlier result of [10] for k-median) go beyond bi-criteria approximation. They prove that for k-means clustering, one can obtain a factor 50 approximation to the value of OPT, while declaring at most z points as outliers, as desired. While this effectively settles the complexity of the problem, there are many key drawbacks. First, the algorithm is based on an iterative procedure that solves a linear programming relaxation in each step, which can be very inefficient in practice (and hard to implement). Further, in many applications, it may be necessary to improve on the (factor 50) approximation guarantee, potentially at the cost of choosing more clusters or slightly weakening the bound on the number of outliers.
Our main results aim to address this drawback. We prove that very simple variants of the classic Gonzalez algorithm for k-center, and the k-means++ algorithm for k-means result in approximation guarantees. The catch is that we only obtain bi-criteria results. To state our results, we will define the following notion. Definition 1. Consider an algorithm for the k-clustering (means/center) problem that on input X, k, z, outputs k0 centers (allowed to be slightly more than k), along with a partition X = X 0
in [X 0 out that
satisfies (a) |X 0 out | z, and (b) the objective value of assigning the points X 0 in to the output centers is at most ↵ · OPT. Then we say that the algorithm obtains an (↵, ) approximation using k0 centers, for the k-clustering problem with outliers.
Note that while our main results only output k centers, clustering algorithms are also well-studied when the number of clusters is not strictly specified. This is common in practice, where the application only demands a rough bound on the number of clusters. Indeed, the k-means++ algorithm is known to achieve much better approximations (constant as opposed to O(log k)) for the problem without outliers, when the number of centers output is O(k) instead of k [1, 26].
1.1 Our results.
K-center clustering in metric spaces. For k-center, our algorithm is a variant of furthest point traversal, in which instead of selecting the furthest point from the current set of centers, we choose a random point that is not too far from the current set. Our results are the following. Theorem 1.1. Let z, k, " > 0 be given parameters, and X = Xin [Xout be a set of points in a metric space with |Xout| z. There is an efficient randomized algorithm that with probability 3/4 outputs a (2 + ", 4 log k)-approximation using precisely k centers to the k-center with outliers problem.
Remark – guessing the optimum. The additional " in the approximation is because we require guessing the value of the optimum. This is quite standard in clustering problems, and can be done by a binary search. If OPT is assumed to lie in the range (c, c ) for some c > 0, then it can be estimated up to an error of c" in time O(log( /")), which gets added as a factor in the running time of the algorithm. In practice, this is often easy to achieve with = poly(n). We will thus assume a knowledge of the optimum value in both our algorithms.
Also, note that the algorithm outputs exactly k centers, and obtains the same (factor 2, up to ") approximation to the objective as the Gonzalez algorithm, but after discarding O(z log k) points as outliers. Next, we will show that if we allow the algorithm to output > k centers, one can achieve a better dependence on the number of points discarded.
Theorem 1.2. Let z, k, c, " > 0 be given parameters, and X = Xin [Xout be a set of points in a metric space with |Xout| z. There is an efficient randomized algorithm that with probability 3/4 outputs a (2+ ", (1+ c)/c)-approximation using (1+ c)k centers to the k-center w/ outliers problem.
As c increases, note that the algorithm outputs very close to z outliers. In other words, the number of points it falsely discards as outliers is small (at the expense of larger k).
K-means clustering. Here, our main contribution is to study an algorithm called T-kmeans++, a variant of D2 sampling (i.e. k-means++), in which the distances are thresholded appropriately before probabilities are computed. For this simple variant, we will establish robust guarantees that nearly match the guarantees known for k-means++ without any outliers.
Theorem 1.3. Let z, k, be given parameters, and X = Xin [Xout be a set of points in Euclidean space with |Xout| z. There is an efficient randomized algorithm that with probability 3/4 gives an (O(log k), O(log k))-approximation using k centers to the k-means with outliers problem on X .
The algorithm outputs an O(log k) approximation to the objective value (similar to k-means++). However, the algorithm may discard up to O(z log k) points as outliers. Note also that when z = 0, we recover the usual k-means++ guarantee. As in the case of k-center, we ask if allowing a bi-criteria approximation improves the dependence on the number of outliers. Here, an additional dimension also comes into play. For k-means++, it is known that choosing O(k) centers lets us approximate the k-means objective up to an O(1) factor (see, for instance, [1, 4, 25]). We can thus ask if a similar result is possible in the presence of outliers. We show that the answer to both the questions is yes.
Theorem 1.4. Let z, k, , c be given parameters, and X = Xin [Xout be a set of points in a metric space with |Xout| z. Let > 0 be an arbitrary constant. There is an efficient randomized algorithm that with probability 3/4 gives a (( +64), (1+c)(1+ )/c(1 ))-approximation using (1+c)k centers to the k-center with outliers problem on X .
Given the simplicity of our procedure, it is essentially as fast as k-means++ (modulo the step of guessing the optimum value, which adds a logarithmic overhead). Assuming that this is O(log n), our running times are all eOkn. In particular, the procedure is significantly faster than local search approaches [17], as well as linear programming based algorithms [22, 10]. Our run times also compare well with those of recent, coreset based approaches to clustering with outliers, such as those of [9, 24] (see also references therein).
1.2 Overview of techniques
To show all our results, we consider simple randomized modifications of classic algorithms, specifically Gonzales’ algorithm and the k-means++ algorithm. Our modifications, in effect, place a threshold on the probability of any single point being chosen. The choice of the threshold ensures that during the entire course of the algorithm, only a small number of outlier points will be chosen. Our analysis thus needs to keep track of (a) the number of points being chosen, (b) the number of inlier clusters from which we have chosen points (and in the case of k-means, points that are “close to the center”), (c) number of “wasted” iterations, due to choosing outliers. We use different potential functions to keep track of these quantities and measure progress. These potentials are directly inspired by the elegant analysis of the k-means++ algorithm provided in [13] (which is conceptually simpler than the original one in [4]).
2 Warm-up: Metric k-center in the presence of outliers
Let (X, d) be a metric space. Recall that the classic Gonzalez algorithm [16] for k-center works by maintaining a set of centers S, and at each step finding the point x 2 X that is furthest from S and adding it to X . After k iterations, a simple argument shows that the S obtained gives a factor 2 approximation to the best k centers in terms of the k-center objective.
As we described earlier, this furthest point traversal algorithm is very susceptible to the presence of outliers. In particular, if the input X includes z > k points that are far away from the rest of the points, all the points selected (except possibly the first) will be outliers. Our main idea to overcome this problem is to ensure that no single point is too likely to be picked in each step. Consider the simple strategy of choosing one of the 2z points furthest away from S (uniformly at random; we are assuming n 2z + k). This ensures that in every step, there is at least a 1/2 probability of picking an inlier (as there are only z outliers). In what follows, we will improve upon this basic idea and show that it leads to a good approximation to the objective restricted to the inliers.
The algorithm for proving Theorems 1.1 and 1.2 is very simple: in every step, a center is added to the current solution by choosing a uniformly random point in the dataset that is at a distance > 2r from the current centers. As discussed in Section 1.2, our proofs of both the theorems employ an appropriately designed potential function, adapted from [13].
Algorithm 1 k-center with outliers Input: points X ✓ Rd, parameters k, z, r; r is a guess for OPT Output: a set S` ✓ X of size `
1: Initialize S0 = ; 2: for t = 1 to ` do 3: Let Ft be the set of all points that are at a distance > 2r from St 1. I.e.,
Ft := {x 2 X : d(x, St 1) > 2r}
4: Let x be a point sampled u.a.r from Ft 5: St = St 1 [ {x} 6: return S`
Notation. Let C1, C2, . . . , Ck be the optimal clusters. So by definition, [iCi = Xin. Let Ft be the set of far away points at time t, as defined in the algorithm. Thus Ft includes both inliers and outliers. A simple observation about the algorithm is the following Observation 1. Suppose that the guess of r is OPT, and consider any iteration t of the algorithm. Let u 2 Ci be one of the chosen centers (i.e., u 2 St). Then Ci \ Ft = ;, and thus no other point in Ci can be subsequently added as a center.
Finally, we denote by E(t)i the set of points in cluster Ci that are at a distance 2r from St. I.e., we define E(t)i := Ci \ Ft. The observation above implies that E (t) i = ; whenever St contains some u 2 Ci. But the converse is not necessarily true (since all the points in Ci could be at a distance < 2r from points in other clusters, which happened to be picked in St).
Next, let nt denote the number of clusters i such that Ci \ St = ;, i.e., the number of clusters none of whose points were selected so far. We are now ready to analyze the algorithm.
2.1 Algorithm choosing k-centers
We will now analyze the execution of Algorithm 1 for k iterations, thereby establishing Theorem 1.1.
The key step is to define the appropriate potential function. To this end, let wt denote the number of times that one of the outliers was added to the set S in the first t iterations. I.e., wt = |Xout \ St|. The potential we consider is now:
t := wt|Ft \Xin|
nt . (1)
Our main lemma bounds the expected increase in t, conditioned on any choice of St (recall that St determines nt). Lemma 1. Let St be any set of centers chosen in the first t iterations, for some t 0. We have
E t+1
[ t+1 t | St] z
nt .
As usual, Et+1 denotes an expectation only over the (t+ 1)th step. Let us first see how the lemma implies Theorem 1.1.
Proof of Theorem 1.1. The idea is to repeatedly apply Lemma 1. Since we do not know the values of nt, we use the simple lower bound nt k t, for any t < k. Along with the observation that 0 = 0 (since w0 = 0), we have
E[ k] = k 1X
t=0
E[ t+1 t] k 1X
t=0
z
k t zHk,
where Hk is the kth Harmonic number. Thus by Markov’s inequality, Pr[ k 4zHk] 3/4. By the definition of k, this means that with probability at least 3/4,
wk|Ft \Xin| nk 4z ln k.
The key observation is that we always have wk = nk. This is because if the set Sk did not intersect nk of the optimal clusters, then since Sk cannot include two points from the same cluster (as we observed earlier), precisely nk of the iterations must have chosen outliers. This means that with probability at least 3/4, we have |Ft \Xin| 4z ln k. This means that after k iterations, with probability at least 3/4, at most 4z ln k of the inliers are at a distance > 2r away from the chosen set Sk. Thus the total number of points at a distance > 2r away from Sk is at most z(4 ln k + 1). This completes the proof of the theorem.
We thus only need to show Lemma 1.
Proof of Lemma 1. For simplicity, let us write ei := |E(t)i | = |Ci \ Ft|. In other words ei is the number of points in the ith optimal cluster that are at distance > 2r from St. Let us also write F = P i ei. By definition, we have that F = |Ft \Xin|.
Then, the sampling in the (t+ 1)th iteration samples an inlier with probability F/|Ft|, and an outlier with probability 1 F|Ft| . If an inlier is sampled, the value nt reduces by 1, but wt stays the same. If an outlier is sampled, the value nt stays the same, while wt increases by 1. The value of |Ft \Xin| is non-increasing. If a point in Ci is chosen (which happens with probability ei/|Ft|), it reduces by at least ei. Thus, we have
E[ t+1] kX
i=1
ei |Ft| wt(F ei) nt 1 + ✓ 1 F|Ft| ◆ (wt + 1)F nt . (2)
The first term on the RHS can be simplified as
wt |Ft|(nt 1)
X
i
ei(F ei) = wt
|Ft|(nt 1)
F 2
X
i
e2i
!
The number of non-zero ei is at most nt, by definition. Thus we have P i e 2 i F 2/nt. Plugging this into (2) and simplifying, we have
E[ t+1] wtF 2
|Ft|nt + ✓ 1 F|Ft| ◆ (wt + 1)F nt = t + ✓ 1 F|Ft| ◆ F nt .
The proof now follows by using the simple facts: ⇣ 1 F|Ft| ⌘ z|Ft| (which is true because there are at most z outliers) and F |Ft| (which is true by definition, because F = |Xin \ Ft|).
This completes the analysis of Algorithm 1 when the number of centers ` is exactly k.
2.2 Bi-criteria approximation
Next, we see that running Algorithm 1 for ` = (1 + c)k iterations results in covering more clusters (thus resulting in fewer outliers). Thus we end up with a tradeoff between the number of centers chosen and the number of points the algorithm declares as outliers (while obtaining the same approximation (factor 2) for the objective OPT – Theorem 1.2). The potential function now needs modification. The details are deferred to Section A.1.
3 k-means via thresholded adaptive sampling
Next we consider the k-means problem when some of the points are outliers. Here we propose a variant of the k-means++ procedure (see [4]), which we call T-kmeans++. Our algorithm, like k-means++, is an iterative algorithm that samples a point to be a centroid at each iteration according to a probability that depends on the distance to the current set of centers. However, we avoid the problem of picking too many outliers by simply thresholding the distances.
Notation. Let us start with some notation that we use for the remainder of the paper. The points X are now in Euclidean space (as opposed to an arbitrary metric space in Section 2). We assume as before that |X| = n, and X = X_in ∪ X_out, where |X_out| = z, which is a known parameter. Additionally, β will be a parameter that we will control. For the purposes of defining the algorithm, we assume that we have a guess for the optimum objective value, denoted OPT.

Now, for any set of centers C, we define

τ(x, C) = min( d(x, C)², β · OPT/z ).   (3)

We follow the standard practice of defining the distance to an empty set to be ∞. Next, for any set of points U, define τ(U, C) = Σ_{x∈U} τ(x, C). Note that the parameter β lets us interpolate between uniform sampling (β → 0) and classic D² sampling (β → ∞). In our results, choosing a higher β has the effect of reducing the number of points we declare as outliers, at the expense of having a worse guarantee on the approximation ratio for the objective.
We can now state our algorithm (denoted Algorithm 2).

Algorithm 2 Thresholded Adaptive Sampling – T-kmeans++
Input: a set of points X ⊆ R^d, parameters k, z, and a guess for the optimum OPT.
Output: a set S ⊆ X of size ℓ.
1: Initialize S_0 = ∅.
2: for t = 1 ... ℓ do
3:   sample a point x from the distribution p(x) = τ(x, S_{t−1}) / Σ_{x'∈X} τ(x', S_{t−1})  (with τ as defined in (3))
4:   set S_t = S_{t−1} ∪ {x}.
5: return S_ℓ
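To make the sampling rule concrete, the following is a minimal Python sketch of Algorithm 2 for points in R^d. It is an illustrative rendering, not a reference implementation: the function name, the array-based interface, and the assumption that the guess OPT and the parameter β are given are all choices made here for exposition.

```python
import numpy as np

def t_kmeanspp(X, ell, z, beta, opt, seed=0):
    """X: (n, d) array of points. Returns ell centers chosen by thresholded adaptive sampling."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    cap = beta * opt / z                        # threshold from Eq. (3)
    tau = np.full(n, cap)                       # distance to the empty set is infinite, so tau starts at cap
    centers = []
    for _ in range(ell):
        probs = tau / tau.sum()
        idx = rng.choice(n, p=probs)            # sample proportionally to tau
        centers.append(X[idx])
        d2 = np.sum((X - X[idx]) ** 2, axis=1)  # squared distances to the new center
        tau = np.minimum(tau, np.minimum(d2, cap))
    return np.array(centers)
```

Compared to plain k-means++, the only change is the cap on each point's sampling weight, which is what prevents far-away outliers from dominating the distribution.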
The key to the analysis is the following observation: instead of the k-means objective, it suffices to bound the quantity Σ_{x∈X} τ(x, S_ℓ).

Lemma 2. Let C be a set of centers, and suppose that τ(X, C) ≤ α · OPT. Then we can partition X into X′_in and X′_out such that

1. Σ_{x∈X′_in} d(x, C)² ≤ α · OPT, and
2. |X′_out| ≤ αz/β.

Proof. The proof follows easily from the definition of τ (Eq. (3)). Let X′_out be the set of points for which d(x, C)² > β · OPT/z, and let X′_in be X \ X′_out. Then by definition (and the bound on τ(X, C)), we have

Σ_{x∈X′_in} d(x, C)² + |X′_out| · β · OPT/z ≤ α · OPT.

Both terms on the LHS are non-negative, so each is individually at most α · OPT. The bound on the first term gives the first part of the lemma, and the bound on the second term gives the second part.
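A small Python sketch of the partition used in this proof (illustrative only; β, the guess OPT, and the set of centers are assumed to be given):

```python
def split_by_threshold(X, centers, beta, opt, z, dist):
    """Partition X into (X_in, X_out) as in the proof of Lemma 2:
    a point is declared an outlier iff its squared distance to the
    nearest center exceeds beta * opt / z."""
    threshold = beta * opt / z
    X_in, X_out = [], []
    for x in X:
        d2 = min(dist(x, c) ** 2 for c in centers)
        (X_out if d2 > threshold else X_in).append(x)
    return X_in, X_out
```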
3.1 k-means with outliers: an O(log k) approximation
Our first result is an analog of the theorem of [4], for the setting in which we have outliers in the data. As in the case of k-center clustering, we use a potential-based analysis (inspired by [13]).

Theorem 3.1. Running Algorithm 2 for k iterations outputs a set S_k that satisfies

E[τ(X, S_k)] ≤ (β + O(1)) log k · OPT.
We note that Theorem 3.1 together with Lemma 2 directly implies Theorem 1.3. Thus the main step is to prove Theorem 3.1. This is done using a potential function as before, but requires a more careful argument than the one for k-center (specifically, the goal is not to include some point from a cluster, but to include a “central” one). Please see the supplement, section A.2 for details.
3.2 Bi-criteria approximation
Theorem 3.2. Consider running Algorithm 2 for ℓ = (1 + c)k iterations, where c > 0 is a constant. Then for any δ > 0, with probability δ, the set S_ℓ satisfies

τ(X, S_ℓ) ≤ (β + 64)(1 + c) OPT / ((1 − δ)c).

Note that this theorem directly implies Theorem 1.4 by repeating the algorithm O(1/δ) times. Once again, we use a slightly different potential function from the one for the O(log k) approximation. We defer the details of the proof to Section A.3 of the supplementary material.
4 Experiments
In this section, we demonstrate the empirical performance of our algorithms on multiple real and synthetic datasets, and compare them to existing heuristics. We observe that the algorithms generally behave better than known heuristics, both in accuracy and (especially) in running time. Our real and synthetic datasets are designed in a manner similar to [17]. All real datasets we use are available from the UCI repository [15].
k-center with outliers. We evaluate Algorithm 1 on synthetic data sets, where points are generated according to a mixture of d-dimensional Gaussians. The outliers in this case are chosen randomly in an appropriate bounding box.
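A sketch of one way to generate such a synthetic instance; the exact cluster counts, spreads, and box size used in the experiments are not specified in the text, so the numbers below are placeholders.

```python
import numpy as np

def make_instance(k=20, pts_per_cluster=500, z=250, d=2, spread=0.5, box=100.0, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-box, box, size=(k, d))                  # cluster centers in the bounding box
    inliers = np.concatenate(
        [c + spread * rng.standard_normal((pts_per_cluster, d)) for c in centers])
    outliers = rng.uniform(-box, box, size=(z, d))                 # outliers chosen randomly in the box
    labels = np.repeat(np.arange(k), pts_per_cluster)              # ground-truth cluster of each inlier
    return inliers, outliers, labels
```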
Metrics. For k-center, we choose synthetic datasets because we wish to measure the cluster recall, i.e., the fraction of true clusters from which points are chosen by the algorithm. (Ideally, if we choose k centers, we wish to have precisely one point chosen from each cluster, so the cluster recall is 1.) We compute this quantity for three algorithms: the first is the trivial baseline of choosing k′ random points from the dataset (denoted Random); the second and third are KC-Outlier and Gonzalez, respectively. Figure 1 shows the recall as we vary the number of centers chosen. Note that when k = 20, even when roughly k′ = 23 centers are chosen, we have perfect recall (i.e., all the clusters are covered) for our algorithm, while Random and Gonzalez take considerably longer to find all the clusters.
k-means with outliers. Once again, we demonstrate the cluster recall on a synthetic dataset. In this case, we compare our algorithm with a heuristic proposed in [17]: running k-means++ followed by an iteration of “outlier-sensitive Lloyd's iteration”, proposed in [8]. We also ran the latter procedure as a post-processing step for our algorithm. Figure 2 reports the cluster recall and the value of the k-means objective for the algorithms. Unlike the case of k-center, the T-kmeans++ algorithm can potentially choose points in one cluster multiple times. However, we consistently observe that T-kmeans++ outperforms the other heuristics.
Finally, we perform experiments on three datasets:
1. NIPS (a dataset from the conference NIPS over 1987–2015): clustering was done on the rows of a 11463 × 50 matrix (the number of columns was reduced via SVD).

2. The MNIST digit-recognition dataset: clustering was performed on the rows of a 60000 × 40 matrix (again, SVD was used to reduce the number of columns).

3. Skin dataset (available via the UCI database): clustering was performed on the rows of a 245,057 × 3 matrix (the original dataset).
In order to simulate corruptions, we randomly choose 2.5% of the points in the datasets and corrupt all the coordinates by adding independent noise in a pre-defined range. The following table outlines the results. We report the outlier recall, i.e., the fraction of true outliers designated as outliers by the algorithm. For fair comparison, we make all the algorithms output precisely z outliers. Our results indicate slightly better recall values for T-kmeans++. For some data sets (e.g. Skin), the k-means objective value is worse for T-kmeans++. Thus in this case, the outliers are not “sufficiently corrupting” the original clustering.1
Dataset   k    KM recall   TKM recall   KM objective    TKM objective
NIPS      10   0.960       0.977        4173211         4167724
NIPS      20   0.939       0.973        4046443         4112852
NIPS      30   0.924       0.978        3956768         4115889
Skin      10   0.619       0.667        7726552         7439527
Skin      20   0.642       0.690        5936156         5637427
Skin      30   0.630       0.690        5164635         4853001
MNIST     10   0.985       0.988        1.546 × 10^8    1.513 × 10^8
MNIST     20   0.982       0.989        1.475 × 10^8    1.449 × 10^8
MNIST     30   0.977       0.986        1.429 × 10^8    1.412 × 10^8

Table showing outlier recall for KM (k-means++) and TKM (T-kmeans++), along with the k-means cost.
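For reference, the corruption-and-evaluation protocol described above can be sketched as follows. The 2.5% corruption rate follows the text; the noise range and every other implementation detail are assumptions made here for illustration.

```python
import numpy as np

def corrupt(data, frac=0.025, noise=50.0, seed=0):
    """Randomly pick a fraction of the rows and add independent noise to all coordinates."""
    rng = np.random.default_rng(seed)
    out_idx = rng.choice(len(data), size=int(frac * len(data)), replace=False)
    corrupted = data.copy()
    corrupted[out_idx] += rng.uniform(-noise, noise, size=(len(out_idx), data.shape[1]))
    return corrupted, set(out_idx.tolist())

def outlier_recall(true_outliers, flagged):
    """Fraction of the truly corrupted points that the algorithm flags as outliers."""
    return len(true_outliers & set(flagged)) / len(true_outliers)
```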
5 Conclusion
We proposed simple variants of known greedy heuristics for two popular clustering settings (k-center and k-means clustering) in order to deal with outliers/noise in the data. We proved approximation guarantees, comparing to the corresponding objectives on only the inliers. The algorithms are also easy to implement, run in Õ(kn) time, and perform well on both real and synthetic datasets.
1An anonymous reviewer suggested experiments on the kddcup-1999 dataset (as in [9]). However, we observed that treating certain labels as outliers as done in the prior work is not meaningful: the outliers turn out to be closer to one of the cluster centers than many points in that cluster. | 1. What are the strengths and weaknesses of the paper's contributions to clustering algorithms with outliers?
2. How does the paper's successive sampling algorithm compare to similar algorithms in recent works, such as Distributed partial clustering and A practical algorithm for distributed clustering and outlier detection?
3. How does the paper's analysis of its algorithms differ from previous potential-based analyses, such as the theorem of [4]?
4. What are the limitations of the paper's experimental results, and how could they be improved by comparing the proposed algorithm to other algorithms and using real-world datasets with ground truth?
5. Why does the paper report "cluster recall" for k-center but "outlier recall" for k-means, and how do these measurements differ?
6. How does the additional threshold in the definition of τ(x, C) help in the outlier setting, and what is the intuition behind this choice? | Review | Review
I have a mixed feeling about this paper. On one hand, it contains some nice and simple ideas such as threshold sampling (for k-means) and successive sampling (for k-center), which can potentially be useful in practice. On the other hand, I have the following concerns. First, it seems that the authors were not aware of recent works on clustering with outliers, including: [*] Distributed partial clustering, SPAA'17 [**] A practical algorithm for distributed clustering and outlier detection, NIPS'18 Though these algorithms are (mainly) designed for the distributed models, they can be used in the centralized setting as well. In fact, [**] used a successive sampling procedure (originally from "Optimal time bounds for approximate clustering", UAI'02) that is similar to Algorithm 1 in this paper. Certainly, the problems targeted in [**] are k-median/means, while Algorithm 1 is designed for k-center, but the underlying ideas (i.e., iterative sampling from uncovered points) look to be very similar. I hope the authors can make a careful comparison between these algorithms. Moreover, in [*] a centralized (O(1), 2)-bicriteria algorithm is designed for k-means with outliers. The algorithm uses k centers and has running time close to linear in terms of the dataset size. It looks like this result is strictly better than Thm 1.3 in terms of approximation guarantee? Second, the author mentioned in Section 3 that "Our first result is an analog of the theorem of [4], for the setting in which we have outliers in the data. As in the case of k-center clustering, we use a potential based analysis (inspired from [12])." I hope that some discussion on the novelty of the algorithm and analysis can be included in the main text. Otherwise it appears that Alg. 2 and its analysis are very incremental. The experiments part looks very brief. -- Please give the details about how the synthetic dataset is generated, at the level that others can repeat the experiments. -- The proposed algorithm should be compared with the one in [**], for k-means. -- There are real world data sets with ground truth (i.e., which are the outliers) available, such as KddCup99 (http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html). It will be good to see the performance of the proposed algorithm on real world datasets. -- Why for k-center one reports "cluster recall", while for k-means one reports "outlier recall"? It would be good to see both measurements on both cases. -- Better to give some intuition before the mathematical lemmas and proofs on why the extra threshold in the definition of \tau(x, C) helps in the outlier setting. Other comments: -- Intro, second paragraph, the last two sentences look to contradictory to each other? -- Alg. 1, why not directly use k instead of \ell? |
NIPS | Title
Greedy Sampling for Approximate Clustering in the Presence of Outliers
Abstract
Greedy algorithms such as adaptive sampling (k-means++) and furthest point traversal are popular choices for clustering problems. On the one hand, they possess good theoretical approximation guarantees, and on the other, they are fast and easy to implement. However, one main issue with these algorithms is their sensitivity to noise/outliers in the data. In this work we show that for k-means and k-center clustering, simple modifications to the well-studied greedy algorithms result in nearly identical guarantees, while additionally being robust to outliers. For instance, in the case of k-means++, we show that a simple thresholding operation on the distances suffices to obtain an O(log k) approximation to the objective. We obtain similar results for the simpler k-center problem. Finally, we show experimentally that our algorithms are easy to implement and scale well. We also measure their ability to identify noisy points added to a dataset.
1 Introduction
Clustering is one of the fundamental problems in data analysis. There are several formulations that have been very successful in applications, including k-means, k-median, k-center, and various notions of hierarchical clustering (see [19, 12] and references there-in).
In this paper we will consider k-means and k-center clustering. These are both extremely well-studied. The classic algorithm of Gonzalez [16] for k-center clustering achieves a factor 2 approximation, and it is NP-hard to improve upon this for general metrics, unless P equals NP. For k-means, the classic algorithm is due to Lloyd [23], proposed over 35 years ago. Somewhat recently, [4] (see also [25]) proposed a popular variant, known as “k-means++”. This algorithm remedies one of the main drawbacks of Lloyd’s algorithm, which is the lack of theoretical guarantees. [4] proved that the k-means++ algorithm yields an O(log k) approximation to the k-means objective (and also improves performance in practice). By way of more complex algorithms, [21] gave a local search based algorithm that achieves a constant factor approximation. Recently, this has been improved by [2], which is the best known approximation algorithm for the problem. The best known hardness results rule out polynomial time approximation schemes [3, 11].
The algorithms of Gonzalez (also known as furthest point traversal) and [4] are appealing also due to their simplicity and efficiency. However, one main drawback in these algorithms is their sensitivity to corruptions/outliers in the data. Imagine 10k of the points of a dataset are corrupted and the coordinates take large values. Then both furthest point traversal as well as k-means++ end up choosing only the outliers. The goal of our work is to remedy this problem, and achieve the simplicity and scalability of these algorithms, while also being robust in a provable sense.
Specifically, our motivation will be to study clustering problems when some of the input points are (possibly adversarially) corrupted, or are outliers. Corruption of inputs is known to make even simple learning problems extremely difficult to deal with. For instance, learning linear classifiers in the presence of even a small fraction of noisy labels is a notoriously hard problem (see [18, 5]
and references therein). The field of high dimensional robust statistics has recently seen a lot of progress on various problems in both supervised and unsupervised learning (see [20, 14]). The main difference between our work and the works in robust statistics is that our focus is not to estimate a parameter related to a distribution, but to instead produce clusterings that are near-optimal in terms of an objective that is defined solely on inliers.
Formulating clustering with outliers. Let OPT_full(X) denote the k-center or k-means objective on a set of points X. Now, given a set of points that also includes outliers, the goal in clustering with outliers (see [7, 17, 22]) is to partition the points X into X_in and X_out so as to minimize OPT_full(X_in). To avoid the trivial case of setting X_in = ∅, we require |X_out| ≤ z, for some parameter z that is also given. Thus, we define the optimum OPT of the k-clustering with outliers problem as

OPT := min_{|X_out| ≤ z} OPT_full(X \ X_out).
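On very small instances this definition can be made concrete by brute force. The sketch below is exponential-time and purely illustrative; it instantiates OPT_full as the k-center objective with centers restricted to input points, which is an assumption made here for simplicity.

```python
from itertools import combinations

def kcenter_cost(points, centers, dist):
    # k-center objective: max over points of the distance to the nearest center
    return max(min(dist(p, c) for c in centers) for p in points)

def opt_with_outliers(points, k, z, dist):
    """Brute-force OPT for tiny instances: try every outlier set of size z
    (removing exactly z points suffices when n - z >= k) and every choice
    of k centers among the remaining points."""
    best = float("inf")
    idx = range(len(points))
    for out in combinations(idx, z):
        inliers = [points[i] for i in idx if i not in set(out)]
        for centers in combinations(inliers, k):
            best = min(best, kcenter_cost(inliers, centers, dist))
    return best

# Example usage on a toy 1-D instance (assumed data):
# pts = [0.0, 0.1, 0.2, 5.0, 5.1, 100.0]; dist = lambda a, b: abs(a - b)
# opt_with_outliers(pts, k=2, z=1, dist=dist)  -> 0.1 (the point 100.0 is dropped)
```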
This way of defining the objective has also found use for other problems such as PCA with outliers (also known as robust PCA, see [6] and references therein). For the problems we consider, namely k-center and k-means, there are many existing works that provide approximation algorithms for OPT as defined above. The early work of [7] studied the problem of k-median and facility location in this setup. The algorithms provided were based on linear programming relaxations, and were primarily motivated by the theoretical question of the power of such relaxations. Recently, [17] gives a more practical local search based algorithm, with running time quadratic in the number of points (which can also be reduced to a quadratic dependence on z, in the case z ≪ n). Both of these algorithms are bi-criteria approximations (defined formally below). In other words, they allow the algorithm to discard > z outliers, while obtaining a good approximation to the objective value OPT. In practice, this corresponds to declaring a small number of the inliers as outliers. In applications where the true clusters are robust to small perturbations, such algorithms are acceptable.
The recent result of [22] (and the earlier result of [10] for k-median) go beyond bi-criteria approximation. They prove that for k-means clustering, one can obtain a factor 50 approximation to the value of OPT, while declaring at most z points as outliers, as desired. While this effectively settles the complexity of the problem, there are many key drawbacks. First, the algorithm is based on an iterative procedure that solves a linear programming relaxation in each step, which can be very inefficient in practice (and hard to implement). Further, in many applications, it may be necessary to improve on the (factor 50) approximation guarantee, potentially at the cost of choosing more clusters or slightly weakening the bound on the number of outliers.
Our main results aim to address this drawback. We prove that very simple variants of the classic Gonzalez algorithm for k-center, and of the k-means++ algorithm for k-means, result in approximation guarantees. The catch is that we only obtain bi-criteria results. To state our results, we define the following notion.

Definition 1. Consider an algorithm for the k-clustering (means/center) problem that on input X, k, z outputs k′ centers (allowed to be slightly more than k), along with a partition X = X′_in ∪ X′_out that satisfies (a) |X′_out| ≤ βz, and (b) the objective value of assigning the points X′_in to the output centers is at most α · OPT. Then we say that the algorithm obtains an (α, β) approximation using k′ centers, for the k-clustering problem with outliers.
Note that while our main results only output k centers, clustering algorithms are also well-studied when the number of clusters is not strictly specified. This is common in practice, where the application only demands a rough bound on the number of clusters. Indeed, the k-means++ algorithm is known to achieve much better approximations (constant as opposed to O(log k)) for the problem without outliers, when the number of centers output is O(k) instead of k [1, 26].
1.1 Our results.
K-center clustering in metric spaces. For k-center, our algorithm is a variant of furthest point traversal in which, instead of selecting the furthest point from the current set of centers, we choose a random point that is not too far from the current set. Our results are the following.

Theorem 1.1. Let z, k, ε > 0 be given parameters, and let X = X_in ∪ X_out be a set of points in a metric space with |X_out| ≤ z. There is an efficient randomized algorithm that with probability 3/4 outputs a (2 + ε, 4 log k)-approximation using precisely k centers to the k-center with outliers problem.
Remark – guessing the optimum. The additional ε in the approximation arises because we need to guess the value of the optimum. This is quite standard in clustering problems, and can be done by a binary search. If OPT is assumed to lie in the range (c, cΔ) for some c > 0, then it can be estimated up to an error of cε in time O(log(Δ/ε)), which gets added as a factor in the running time of the algorithm. In practice, this is often easy to achieve with Δ = poly(n). We will thus assume a knowledge of the optimum value in both our algorithms.
Also, note that the algorithm outputs exactly k centers, and obtains the same (factor 2, up to ε) approximation to the objective as the Gonzalez algorithm, but after discarding O(z log k) points as outliers. Next, we show that if we allow the algorithm to output more than k centers, one can achieve a better dependence on the number of points discarded.
Theorem 1.2. Let z, k, c, ε > 0 be given parameters, and let X = X_in ∪ X_out be a set of points in a metric space with |X_out| ≤ z. There is an efficient randomized algorithm that with probability 3/4 outputs a (2 + ε, (1 + c)/c)-approximation using (1 + c)k centers to the k-center with outliers problem.
As c increases, note that the algorithm outputs very close to z outliers. In other words, the number of points it falsely discards as outliers is small (at the expense of larger k).
K-means clustering. Here, our main contribution is to study an algorithm called T-kmeans++, a variant of D2 sampling (i.e. k-means++), in which the distances are thresholded appropriately before probabilities are computed. For this simple variant, we will establish robust guarantees that nearly match the guarantees known for k-means++ without any outliers.
Theorem 1.3. Let z, k, β be given parameters, and let X = X_in ∪ X_out be a set of points in Euclidean space with |X_out| ≤ z. There is an efficient randomized algorithm that with probability 3/4 gives an (O(log k), O(log k))-approximation using k centers to the k-means with outliers problem on X.
The algorithm outputs an O(log k) approximation to the objective value (similar to k-means++). However, the algorithm may discard up to O(z log k) points as outliers. Note also that when z = 0, we recover the usual k-means++ guarantee. As in the case of k-center, we ask if allowing a bi-criteria approximation improves the dependence on the number of outliers. Here, an additional dimension also comes into play. For k-means++, it is known that choosing O(k) centers lets us approximate the k-means objective up to an O(1) factor (see, for instance, [1, 4, 25]). We can thus ask if a similar result is possible in the presence of outliers. We show that the answer to both the questions is yes.
Theorem 1.4. Let z, k, β, c be given parameters, and let X = X_in ∪ X_out be a set of points in a metric space with |X_out| ≤ z. Let δ > 0 be an arbitrary constant. There is an efficient randomized algorithm that with probability 3/4 gives a ((β + 64), (1 + c)(1 + δ)/(c(1 − δ)))-approximation using (1 + c)k centers to the k-means with outliers problem on X.
Given the simplicity of our procedures, they are essentially as fast as k-means++ (modulo the step of guessing the optimum value, which adds a logarithmic overhead). Assuming that this overhead is O(log n), our running times are all Õ(kn). In particular, the procedures are significantly faster than local search approaches [17], as well as linear programming based algorithms [22, 10]. Our run times also compare well with those of recent, coreset-based approaches to clustering with outliers, such as those of [9, 24] (see also references therein).
1.2 Overview of techniques
To show all our results, we consider simple randomized modifications of classic algorithms, specifically Gonzales’ algorithm and the k-means++ algorithm. Our modifications, in effect, place a threshold on the probability of any single point being chosen. The choice of the threshold ensures that during the entire course of the algorithm, only a small number of outlier points will be chosen. Our analysis thus needs to keep track of (a) the number of points being chosen, (b) the number of inlier clusters from which we have chosen points (and in the case of k-means, points that are “close to the center”), (c) number of “wasted” iterations, due to choosing outliers. We use different potential functions to keep track of these quantities and measure progress. These potentials are directly inspired by the elegant analysis of the k-means++ algorithm provided in [13] (which is conceptually simpler than the original one in [4]).
2 Warm-up: Metric k-center in the presence of outliers
Let (X, d) be a metric space. Recall that the classic Gonzalez algorithm [16] for k-center works by maintaining a set of centers S and, at each step, finding the point x ∈ X that is furthest from S and adding it to S. After k iterations, a simple argument shows that the S obtained gives a factor 2 approximation to the best k centers in terms of the k-center objective.
As we described earlier, this furthest point traversal algorithm is very susceptible to the presence of outliers. In particular, if the input X includes z > k points that are far away from the rest of the points, all the points selected (except possibly the first) will be outliers. Our main idea to overcome this problem is to ensure that no single point is too likely to be picked in each step. Consider the simple strategy of choosing one of the 2z points furthest away from S (uniformly at random; we are assuming n ≥ 2z + k). This ensures that in every step, there is at least a 1/2 probability of picking an inlier (as there are only z outliers). In what follows, we will improve upon this basic idea and show that it leads to a good approximation to the objective restricted to the inliers.
The algorithm for proving Theorems 1.1 and 1.2 is very simple: in every step, a center is added to the current solution by choosing a uniformly random point in the dataset that is at a distance > 2r from the current centers. As discussed in Section 1.2, our proofs of both the theorems employ an appropriately designed potential function, adapted from [13].
Algorithm 1 k-center with outliers
Input: points X ⊆ R^d, parameters k, z, r; r is a guess for OPT
Output: a set S_ℓ ⊆ X of size ℓ
1: Initialize S_0 = ∅
2: for t = 1 to ℓ do
3:   Let F_t be the set of all points that are at a distance > 2r from S_{t−1}, i.e., F_t := {x ∈ X : d(x, S_{t−1}) > 2r}
4:   Let x be a point sampled u.a.r. from F_t
5:   S_t = S_{t−1} ∪ {x}
6: return S_ℓ
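Here is a minimal Python sketch of Algorithm 1, purely for illustration (the function name and interface are assumptions, not reference code); `dist` is the underlying metric, and the loop stops early if no point remains at distance more than 2r from the current centers.

```python
import random

def kcenter_with_outliers(X, ell, r, dist, seed=0):
    """Pick ell centers; at each step choose a uniformly random point
    farther than 2r from the current centers (Algorithm 1)."""
    rng = random.Random(seed)
    S = []
    for _ in range(ell):
        far = [x for x in X if all(dist(x, s) > 2 * r for s in S)]
        if not far:          # every point is already within 2r of S
            break
        S.append(rng.choice(far))
    return S
```

With the guess r obtained by binary search over the optimum (as in the Remark above), the points that remain at distance greater than 2r from the returned set can then be declared outliers.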
Notation. Let C_1, C_2, ..., C_k be the optimal clusters. So by definition, ∪_i C_i = X_in. Let F_t be the set of far-away points at time t, as defined in the algorithm. Thus F_t includes both inliers and outliers. A simple observation about the algorithm is the following.

Observation 1. Suppose that the guess of r is OPT, and consider any iteration t of the algorithm. Let u ∈ C_i be one of the chosen centers (i.e., u ∈ S_t). Then C_i ∩ F_t = ∅, and thus no other point in C_i can be subsequently added as a center.

Finally, we denote by E_i^(t) the set of points in cluster C_i that are at a distance > 2r from S_t, i.e., we define E_i^(t) := C_i ∩ F_t. The observation above implies that E_i^(t) = ∅ whenever S_t contains some u ∈ C_i. But the converse is not necessarily true (since all the points in C_i could be at a distance < 2r from points in other clusters which happened to be picked in S_t).
Next, let n_t denote the number of clusters i such that C_i ∩ S_t = ∅, i.e., the number of clusters none of whose points has been selected so far. We are now ready to analyze the algorithm.
2.1 Algorithm choosing k-centers
We will now analyze the execution of Algorithm 1 for k iterations, thereby establishing Theorem 1.1.
The key step is to define the appropriate potential function. To this end, let w_t denote the number of times that one of the outliers was added to the set S in the first t iterations, i.e., w_t = |X_out ∩ S_t|. The potential we consider is now:

Φ_t := w_t |F_t ∩ X_in| / n_t.   (1)
Our main lemma bounds the expected increase in Φ_t, conditioned on any choice of S_t (recall that S_t determines n_t).

Lemma 1. Let S_t be any set of centers chosen in the first t iterations, for some t ≥ 0. We have

E_{t+1}[Φ_{t+1} − Φ_t | S_t] ≤ z / n_t.
As usual, E_{t+1} denotes an expectation only over the (t+1)-th step. Let us first see how the lemma implies Theorem 1.1.
Proof of Theorem 1.1. The idea is to repeatedly apply Lemma 1. Since we do not know the values of n_t, we use the simple lower bound n_t ≥ k − t, for any t < k. Along with the observation that Φ_0 = 0 (since w_0 = 0), we have

E[Φ_k] = Σ_{t=0}^{k−1} E[Φ_{t+1} − Φ_t] ≤ Σ_{t=0}^{k−1} z/(k − t) ≤ z H_k,
where H_k is the k-th Harmonic number. Thus by Markov's inequality, Pr[Φ_k ≤ 4zH_k] ≥ 3/4. By the definition of Φ_k, this means that with probability at least 3/4,

w_k |F_k ∩ X_in| / n_k ≤ 4z ln k.

The key observation is that we always have w_k = n_k. This is because if the set S_k did not intersect n_k of the optimal clusters, then since S_k cannot include two points from the same cluster (as we observed earlier), precisely n_k of the iterations must have chosen outliers. Hence, with probability at least 3/4, we have |F_k ∩ X_in| ≤ 4z ln k; that is, after k iterations at most 4z ln k of the inliers are at a distance > 2r from the chosen set S_k. Thus the total number of points at a distance > 2r from S_k is at most z(4 ln k + 1). This completes the proof of the theorem.
We thus only need to show Lemma 1.
Proof of Lemma 1. For simplicity, let us write e_i := |E_i^(t)| = |C_i ∩ F_t|. In other words, e_i is the number of points in the i-th optimal cluster that are at distance > 2r from S_t. Let us also write F = Σ_i e_i. By definition, we have that F = |F_t ∩ X_in|.

Then, the sampling in the (t+1)-th iteration samples an inlier with probability F/|F_t|, and an outlier with probability 1 − F/|F_t|. If an inlier is sampled, the value n_t reduces by 1, but w_t stays the same. If an outlier is sampled, the value n_t stays the same, while w_t increases by 1. The value of |F_t ∩ X_in| is non-increasing; if a point in C_i is chosen (which happens with probability e_i/|F_t|), it reduces by at least e_i. Thus, we have

E[Φ_{t+1}] ≤ Σ_{i=1}^{k} (e_i/|F_t|) · w_t(F − e_i)/(n_t − 1) + (1 − F/|F_t|) · (w_t + 1)F/n_t.   (2)

The first term on the RHS can be simplified as

(w_t / (|F_t|(n_t − 1))) · Σ_i e_i(F − e_i) = (w_t / (|F_t|(n_t − 1))) · (F² − Σ_i e_i²).

The number of non-zero e_i is at most n_t, by definition. Thus we have Σ_i e_i² ≥ F²/n_t. Plugging this into (2) and simplifying, we have

E[Φ_{t+1}] ≤ w_t F² / (|F_t| n_t) + (1 − F/|F_t|) · (w_t + 1)F/n_t = Φ_t + (1 − F/|F_t|) · F/n_t.

The proof now follows by using the simple facts: (1 − F/|F_t|) ≤ z/|F_t| (which is true because there are at most z outliers) and F ≤ |F_t| (which is true by definition, because F = |X_in ∩ F_t|).
This completes the analysis of Algorithm 1 when the number of centers ℓ is exactly k.
2.2 Bi-criteria approximation
Next, we see that running Algorithm 1 for ℓ = (1 + c)k iterations results in covering more clusters (thus resulting in fewer outliers). We thus end up with a tradeoff between the number of centers chosen and the number of points the algorithm declares as outliers, while obtaining the same factor-2 approximation for the objective OPT (Theorem 1.2). The potential function now needs modification; the details are deferred to Section A.1.
3 k-means via thresholded adaptive sampling
Next we consider the k-means problem when some of the points are outliers. Here we propose a variant of the k-means++ procedure (see [4]), which we call T-kmeans++. Our algorithm, like k-means++, is an iterative algorithm that samples a point to be a centroid at each iteration according to a probability that depends on the distance to the current set of centers. However, we avoid the problem of picking too many outliers by simply thresholding the distances.
Notation. Let us start with some notation that we use for the remainder of the paper. The points X are now in Euclidean space (as opposed to an arbitrary metric space in Section 2). We assume as before that |X| = n, and X = X_in ∪ X_out, where |X_out| = z, which is a known parameter. Additionally, β will be a parameter that we will control. For the purposes of defining the algorithm, we assume that we have a guess for the optimum objective value, denoted OPT.

Now, for any set of centers C, we define

τ(x, C) = min( d(x, C)², β · OPT/z ).   (3)

We follow the standard practice of defining the distance to an empty set to be ∞. Next, for any set of points U, define τ(U, C) = Σ_{x∈U} τ(x, C). Note that the parameter β lets us interpolate between uniform sampling (β → 0) and classic D² sampling (β → ∞). In our results, choosing a higher β has the effect of reducing the number of points we declare as outliers, at the expense of having a worse guarantee on the approximation ratio for the objective.
We can now state our algorithm (denoted Algorithm 2).

Algorithm 2 Thresholded Adaptive Sampling – T-kmeans++
Input: a set of points X ⊆ R^d, parameters k, z, and a guess for the optimum OPT.
Output: a set S ⊆ X of size ℓ.
1: Initialize S_0 = ∅.
2: for t = 1 ... ℓ do
3:   sample a point x from the distribution p(x) = τ(x, S_{t−1}) / Σ_{x'∈X} τ(x', S_{t−1})  (with τ as defined in (3))
4:   set S_t = S_{t−1} ∪ {x}.
5: return S_ℓ
The key to the analysis is the following observation: instead of the k-means objective, it suffices to bound the quantity Σ_{x∈X} τ(x, S_ℓ).

Lemma 2. Let C be a set of centers, and suppose that τ(X, C) ≤ α · OPT. Then we can partition X into X′_in and X′_out such that

1. Σ_{x∈X′_in} d(x, C)² ≤ α · OPT, and
2. |X′_out| ≤ αz/β.

Proof. The proof follows easily from the definition of τ (Eq. (3)). Let X′_out be the set of points for which d(x, C)² > β · OPT/z, and let X′_in be X \ X′_out. Then by definition (and the bound on τ(X, C)), we have

Σ_{x∈X′_in} d(x, C)² + |X′_out| · β · OPT/z ≤ α · OPT.

Both terms on the LHS are non-negative, so each is individually at most α · OPT. The bound on the first term gives the first part of the lemma, and the bound on the second term gives the second part.
3.1 k-means with outliers: an O(log k) approximation
Our first result is an analog of the theorem of [4], for the setting in which we have outliers in the data. As in the case of k-center clustering, we use a potential-based analysis (inspired by [13]).

Theorem 3.1. Running Algorithm 2 for k iterations outputs a set S_k that satisfies

E[τ(X, S_k)] ≤ (β + O(1)) log k · OPT.
We note that Theorem 3.1 together with Lemma 2 directly implies Theorem 1.3. Thus the main step is to prove Theorem 3.1. This is done using a potential function as before, but requires a more careful argument than the one for k-center (specifically, the goal is not to include some point from a cluster, but to include a “central” one). Please see the supplement, section A.2 for details.
3.2 Bi-criteria approximation
Theorem 3.2. Consider running Algorithm 2 for ℓ = (1 + c)k iterations, where c > 0 is a constant. Then for any δ > 0, with probability δ, the set S_ℓ satisfies

τ(X, S_ℓ) ≤ (β + 64)(1 + c) OPT / ((1 − δ)c).

Note that this theorem directly implies Theorem 1.4 by repeating the algorithm O(1/δ) times. Once again, we use a slightly different potential function from the one for the O(log k) approximation. We defer the details of the proof to Section A.3 of the supplementary material.
4 Experiments
In this section, we demonstrate the empirical performance of our algorithms on multiple real and synthetic datasets, and compare them to existing heuristics. We observe that the algorithms generally behave better than known heuristics, both in accuracy and (especially) in running time. Our real and synthetic datasets are designed in a manner similar to [17]. All real datasets we use are available from the UCI repository [15].
k-center with outliers. We evaluate Algorithm 1 on synthetic data sets, where points are generated according to a mixture of d-dimensional Gaussians. The outliers in this case are chosen randomly in an appropriate bounding box.
Metrics. For k-center, we choose synthetic datasets because we wish to measure the cluster recall, i.e., the fraction of true clusters from which points are chosen by the algorithm. (Ideally, if we choose k centers, we wish to have precisely one point chosen from each cluster, so the cluster recall is 1.) We compute this quantity for three algorithms: the first is the trivial baseline of choosing k′ random points from the dataset (denoted Random); the second and third are KC-Outlier and Gonzalez, respectively. Figure 1 shows the recall as we vary the number of centers chosen. Note that when k = 20, even when roughly k′ = 23 centers are chosen, we have perfect recall (i.e., all the clusters are covered) for our algorithm, while Random and Gonzalez take considerably longer to find all the clusters.
k-means with outliers. Once again, we demonstrate the cluster recall on a synthetic dataset. In this case, we compare our algorithm with a heuristic proposed in [17]: running k-means++ followed by an iteration of “outlier-sensitive Lloyd's iteration”, proposed in [8]. We also ran the latter procedure as a post-processing step for our algorithm. Figure 2 reports the cluster recall and the value of the k-means objective for the algorithms. Unlike the case of k-center, the T-kmeans++ algorithm can potentially choose points in one cluster multiple times. However, we consistently observe that T-kmeans++ outperforms the other heuristics.
Finally, we perform experiments on three datasets:
1. NIPS (a dataset from the conference NIPS over 1987–2015): clustering was done on the rows of a 11463 × 50 matrix (the number of columns was reduced via SVD).

2. The MNIST digit-recognition dataset: clustering was performed on the rows of a 60000 × 40 matrix (again, SVD was used to reduce the number of columns).

3. Skin dataset (available via the UCI database): clustering was performed on the rows of a 245,057 × 3 matrix (the original dataset).
In order to simulate corruptions, we randomly choose 2.5% of the points in the datasets and corrupt all the coordinates by adding independent noise in a pre-defined range. The following table outlines the results. We report the outlier recall, i.e., the fraction of true outliers designated as outliers by the algorithm. For fair comparison, we make all the algorithms output precisely z outliers. Our results indicate slightly better recall values for T-kmeans++. For some data sets (e.g. Skin), the k-means objective value is worse for T-kmeans++. Thus in this case, the outliers are not “sufficiently corrupting” the original clustering.1
Dataset   k    KM recall   TKM recall   KM objective    TKM objective
NIPS      10   0.960       0.977        4173211         4167724
NIPS      20   0.939       0.973        4046443         4112852
NIPS      30   0.924       0.978        3956768         4115889
Skin      10   0.619       0.667        7726552         7439527
Skin      20   0.642       0.690        5936156         5637427
Skin      30   0.630       0.690        5164635         4853001
MNIST     10   0.985       0.988        1.546 × 10^8    1.513 × 10^8
MNIST     20   0.982       0.989        1.475 × 10^8    1.449 × 10^8
MNIST     30   0.977       0.986        1.429 × 10^8    1.412 × 10^8

Table showing outlier recall for KM (k-means++) and TKM (T-kmeans++), along with the k-means cost.
5 Conclusion
We proposed simple variants of known greedy heuristics for two popular clustering settings (k-center and k-means clustering) in order to deal with outliers/noise in the data. We proved approximation guarantees, comparing to the corresponding objectives on only the inliers. The algorithms are also easy to implement, run in Õ(kn) time, and perform well on both real and synthetic datasets.
1An anonymous reviewer suggested experiments on the kddcup-1999 dataset (as in [9]). However, we observed that treating certain labels as outliers as done in the prior work is not meaningful: the outliers turn out to be closer to one of the cluster centers than many points in that cluster. | 1. How does the reviewer assess the experimental section of the paper?
2. What does the reviewer suggest to enhance the experimental section?
3. What is the reviewer's opinion on the simplicity and implementability of the proposed algorithms?
4. Does the reviewer think that the paper lacks a running time analysis and comparison with other known methods? If so, why?
5. Are there any lower bound arguments that the reviewer thinks the authors should consider in their theoretical analysis? If so, what kind of combinations are they?
6. How does the reviewer summarize the contributions and limitations of the paper? | Review | Review
- Experimental section does not compare the results of the suggested algorithms with the other known algorithms using other techniques such as local search. A more elaborate experimental section will help. - Even though the theoretical results are nice, the main selling point of the paper from a usability viewpoint is the simple algorithms that are fast and easy to implement (and hence debug). However, the running time analysis and comparison with other known methods are missing. Such an analysis will help see the results of this paper in the right perspective. - On the theoretical front, are there lower bounds arguments of the form: for fixed c=1 and a=2 is there an (a,b,c) algorithm with b = O(1)? There are many such combinations possible. Do the authors know about such lower bounds? It would be nice to include this in the discussion to be able to evaluate the nice upper bounds given in this work. |
NIPS | Title
Fair Sortition Made Transparent
Abstract
Sortition is an age-old democratic paradigm, widely manifested today through the random selection of citizens’ assemblies. Recently-deployed algorithms select assemblies maximally fairly, meaning that subject to demographic quotas, they give all potential participants as equal a chance as possible of being chosen. While these fairness gains can bolster the legitimacy of citizens’ assemblies and facilitate their uptake, existing algorithms remain limited by their lack of transparency. To overcome this hurdle, in this work we focus on panel selection by uniform lottery, which is easy to realize in an observable way. By this approach, the final assembly is selected by uniformly sampling some pre-selected set of m possible assemblies. We provide theoretical guarantees on the fairness attainable via this type of uniform lottery, as compared to the existing maximally fair but opaque algorithms, for two different fairness objectives. We complement these results with experiments on real-world instances that demonstrate the viability of the uniform lottery approach as a method of selecting assemblies both fairly and transparently.
1 Introduction
In a citizens’ assembly, a panel of randomly chosen citizens is convened to deliberate and ultimately make recommendations on a policy issue. The defining aspect of citizens’ assemblies is the randomness of the process, sortition, by which participants are chosen. In practice, the sortition process works as follows: first, volunteers are solicited via thousands of letters or phone calls, which target individuals chosen uniformly at random. Those who respond affirmatively form the pool of volunteers, from which a final panel will be chosen. Finally, a selection algorithm is used to randomly select some pre-specified number k of pool members for the panel. To ensure adequate representation of demographic groups, the chosen panel is often constrained to satisfy some upper and lower quotas on feature categories such as age, gender, and ethnicity. We call a quota-satisfying panel of size k a feasible panel. As this process illustrates, citizens’ assemblies offer a way to involve the public in informed decision-making. This potential for civic participation has recently spurred a global resurgence in the popularity of citizens assemblies; they have been commissioned by governments and led to policy changes at the national level [19, 23, 12].
Prompted by the growing impact of citizens’ assemblies, there has been a recent flurry of computer scientific research on sortition, and in particular, on the fairness of the procedure by which participants are chosen [2, 13, 12]. The most practicable result to date is a family of selection algorithms proposed by Flanigan et al. [12], which are distinguished from their predecessors by their use of randomness toward the goal of fairness: while previously-used algorithms selected pool members in
a random but ad-hoc fashion, these new algorithms are maximally fair, ensuring that pool members have as equal probability as possible of being chosen for the panel, subject to the quotas.1 To encompass the many interpretations of “as equal as possible,” these algorithms permit the optimization of any fairness objective with certain convexity properties. There is now a publicly available implementation of the techniques of Flanigan et al. [12], called Panelot, which optimizes the egalitarian notion that no pool member has too little selection probability via the Leximin objective from fair division [21, 14]. This algorithm has already been deployed by several groups of panel organizers, and has been used to select dozens of panels worldwide.
Fairness gains in the panel selection process can lend legitimacy to citizens’ assemblies and potentially increase their adoption, but only insofar as the public trusts that these gains are truly realized. Currently, the potential for public trust in the panel selection process is limited by multiple factors. First, the latest panel selection algorithms select the final panel via behind-the-scenes computation. When panels are selected in this manner, observers cannot even verify that any given pool member has any chance of being chosen for the panel. A second and more fundamental hurdle is that randomness and probability, which are central to the sortition process, have been shown in many contexts to be difficult for people to understand and reason about [24, 20, 28]. Aiming to address these shortcomings, we propose and pursue the following notion of transparency in panel selection:
Transparency: Observers should be able to, without reasoning in-depth about probability, (1) understand the probabilities with which each individual will be chosen for the panel in theory, and (2) verify that individuals are actually selected with these probabilities in practice.
In this paper, we aim to achieve transparency and fairness simultaneously: this means advancing the defined goal of transparency, while preserving the fairness gains obtained by maximally fair selection algorithms. Although this task is reminiscent of existing AI research on trade-offs between fairness or transparency with other desirable objectives [4, 11, 3, 27], to our knowledge, this is the first investigation of the trade-off between fairness and transparency.
Setting aside for a moment the goal of fairness, we consider a method of random decision-making that is already common in the public sphere: the uniform lottery. To satisfy quotas, a uniform lottery for sortition must randomize not over individuals, but over entire feasible panels. In fact, this approach has been suggested by practitioners, and was even used in 2020 to select a citizens’ assembly in Michigan. The following example, which closely mirrors that real-world pilot,2 illustrates that panel selection via uniform lottery is naturally consistent with the transparency notion we pursue.
Suppose we construct 1000 feasible panels from a pool (possibly with duplicates), numbered 000- 999, and publish an (anonymized) list of which pool members are on each panel. We then inform spectators that we will choose each panel with equal probability. This satisfies criterion (1): spectators can easily understand that all panels will be chosen with the same probability of 1/1000, and can easily determine each individual’s selection probability by counting the number of panels containing the individual. To satisfy criterion (2), we enact the lottery by drawing each of the three digits of the final panel number individually from lottery machines. Lottery spectators can confirm that each ball is drawn with equal probability; this provides confirmation that panels are indeed being chosen with uniform probabilities, thus confirming the enactment of the proposed individual selection probabilities. In addition to its conventionality as a source of randomness, decision-making via drawing lottery balls invites an exciting spectacle, which can promote engagement with citizens’ assemblies.
This simple method neatly satisfies our transparency criteria, but it has one obvious downside: a uniform lottery over an arbitrary set of feasible panels does not guarantee any measure of equal probabilities to individuals. In fact, it is not even clear that the fairest possible uniform lottery over m panels, where m is a number conducive to selection by physical lottery (e.g. m =1000), would not be significantly less fair than maximally fair algorithms, which sample the fairest possible unconstrained distribution over panels. For example, if m is too small, there may be no uniform lottery which gives all individuals non-zero selection probability, even if each individual appears
1Quotas can preclude giving individuals exactly equal probabilities: if the panel must be 1/2 men, 1/2 women but the pool is split 3/4 men, 1/4 women, then some women must be chosen more often than some men.
2Of By For’s pilot of live panel selection via lottery can be viewed at https://vimeo.com/458304880# t=17m59s from 17:59 to 21:23. For a more detailed description, see Figure 3 and surrounding text in [12].
on some feasible panel (and so can attain a non-zero selection probability under an unconstrained distribution).
Fortunately, empirical evidence suggests that there is hope: in the 2020 pilot mentioned above, a uniform lottery over m =1000 panels was found that nearly matched the fairness of the maximally fair distribution generated by Panelot. Motivated by this anecdotal evidence, we aim to understand whether such a fair uniform lottery is guaranteed to exist in general, and if it does, how to find it. We summarize this goal in the following research questions:
Does there exist a uniform lottery over m panels that nearly preserves the fairness of the maximally fair unconstrained distribution over panels? And, algorithmically, how do we compute such a uniform lottery?
Results and Contributions. After describing the model in Section 2, in Section 3 we prove that it is possible to round an (essentially) arbitrary distribution over panels to a uniform lottery while preserving all individuals' selection probabilities up to only a small bounded deviation. These results use tools from discrepancy theory and randomized rounding. Intuitively, this bounded change in selection probabilities implies bounded losses in fairness; we formalize this intuition in Section 4, showing that there exists in general a uniform lottery that is nearly maximally fair, with respect to multiple choices of fairness objective. Although we would ideally like to give such bounds for the Leximin fairness objective, due to its use in practice, we cannot succinctly represent bounds for this objective because it is not scalar valued. We therefore give bounds for Maximin, a closely related egalitarian objective which only considers the minimum selection probability given to any pool member [7]. We additionally give upper bounds on the loss in Nash Welfare [21], a similarly well-established fairness objective that has also been implemented in panel selection tools [18].
Finally, in Section 5, we consider the algorithmic question in practice: given a maximally fair distribution over panels, can we actually find nearly maximally fair uniform lotteries that match our theoretical guarantees? To answer this question, we implement two standard rounding algorithms, along with near-optimal (but more computationally intensive) integer programming methods, for finding uniform lotteries. We then evaluate the performance of these algorithms in 11 real-world panel selection instances. We find that in all instances, we can compute uniform lotteries that nearly exactly preserve not only fairness with respect to both objectives, but entire sets of Leximin-optimal marginals, meaning that from the perspective of individuals, there is essentially no difference between using a uniform lottery versus the optimal unconstrained distribution sampled by the latest algorithms. We discuss these results, their implications, and how they can be deployed directly into the existing panel selection pipeline in Section 6.
2 Model
Panel Selection Problem. First, we formally define the task of panel selection for citizens’ assemblies. Let N = [n] be the pool of volunteers for the panel—individuals from the population who have indicated their willingness to participate in response to an invitation. Let F = {ft}t denote a fixed set of features of interest. Each feature ft : N → Ωt maps each pool member to their value of that feature, where Ωt is the set of ft’s possible values. For example, for feature ft = “gender”, we might have Ωt = {“male”,“female”, “non-binary”}. We define individual i’s feature vector F (i) = (ft(i))t ∈ ∏ t Ωt to be the vector encoding their values for all features in F .
As is done in practice and in previous research [13, 12], we impose that the chosen panel P must be a subset of the pool of size k, and must be representative of the broader population with respect to the features in F . This representativeness is imposed via quotas: for each feature f and corresponding value v ∈ Ω, we may have lower and upper quotas lf,v and uf,v. These quotas require that the panel contain between lf,v and uf,v individuals i such that f(i) = v.
In terms of these parameters, we define an instance of the panel selection problem as: given (N, k, F, l, u)—a pool, panel size, set of features, and sets of lower and upper quotas—randomly select a feasible panel, where a feasible panel is any set of individuals P from the collection K:
K := { P ⊆ N, |P| = k : lf,v ≤ |{i ∈ P : f(i) = v}| ≤ uf,v for all f, v }.
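A minimal sketch of this feasibility check in Python, purely for illustration; the feature and quota encoding below is an assumption made here, not the interface of any deployed selection tool.

```python
def is_feasible(panel, features, lower, upper, k):
    """panel: set of pool-member ids; features[i] is a dict {feature: value};
    lower/upper map (feature, value) pairs to quota bounds."""
    if len(panel) != k:
        return False
    counts = {}
    for i in panel:
        for f, v in features[i].items():
            counts[(f, v)] = counts.get((f, v), 0) + 1
    for key, lo in lower.items():
        if counts.get(key, 0) < lo:
            return False
    for key, hi in upper.items():
        if counts.get(key, 0) > hi:
            return False
    return True
```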
Maximally Fair Selection Algorithms. A selection algorithm is a procedure that solves instances of the panel selection problem. A selection algorithm’s level of fairness on a given instance is determined by its panel distribution p, the (possibly implicit) distribution over K from which it draws the final panel. Because we care about fairness to individual pool members, we evaluate the fairness of p in terms of the fairness of selection probabilities, or marginals, that p implies for all pool members.3 We denote the vector of marginals implied by p as π, and we will sometimes specify a panel distribution as p, π to explicitly denote this pair. We say that π is realizable if it is implied by some distribution p over the feasible panels K. Maximally fair selection algorithms are those which solve the panel selection problem by sampling a specifically chosen p: one which implies marginals π that allocate probability as fairly as possible across pool members. The fairness of p, π is measured by a fairness objective F , which maps an allocation—in this case, of selection probability to pool members—to a real number measuring the allocation’s fairness. Fixing an instance, a fairness objective F , and a panel distribution p, we express the fairness of p as F(p). Existing maximally fair selection algorithms can maximize a wide range of fairness objectives, including those considered in this paper.
Leximin, Maximin, and Nash Welfare. Of the three fairness objectives we consider in this paper, Maximin and Nash Welfare (NW) have succinct formulae. For p, π they are defined as follows, where πi is the marginal of individual i:
Maximin(p) := min_{i∈N} πi,    NW(p) := ( ∏_i πi )^{1/n}.
Intuitively, NW maximizes the geometric mean, prioritizing the marginal πi of each individual i in proportion to π−1i . Maximin maximizes the marginal probability of the individual least likely to be selected. Finally, Leximin is a refinement of Maximin, and is defined by the following algorithm: first, optimize Maximin; then, fixing the minimum marginal as a lower bound on any marginal, maximize the second-lowest marginal; and so on.
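For concreteness, here is a small Python sketch that evaluates these objectives on a given vector of marginals (the evaluation only, not the optimization); it assumes all marginals are strictly positive.

```python
import math

def maximin(marginals):
    return min(marginals)

def nash_welfare(marginals):
    # geometric mean of the marginals; computed in log-space for numerical stability
    return math.exp(sum(math.log(p) for p in marginals) / len(marginals))

def leximin_key(marginals):
    # Leximin compares sorted marginal vectors lexicographically, smallest entry first
    return sorted(marginals)
```

A distribution p is Leximin-preferred to q exactly when leximin_key of p's marginals is lexicographically larger than that of q's marginals.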
Our task: quantize a maximally fair panel distribution with minimal fairness loss. We define a 1/m-quantized panel distribution as a distribution over all feasible panels K in which all probabilities are integer multiples of 1/m. We use p̄ to denote a panel distribution with this property. Formally, while an (unconstrained) panel distribution p lies in D := {p ∈ R_+^{|K|} : ‖p‖_1 = 1}, a 1/m-quantized panel distribution p̄ lies in D̄ := {p̄ ∈ (Z_+/m)^{|K|} : ‖p̄‖_1 = 1}. Note that a 1/m-quantized distribution p̄ immediately translates to a physical uniform lottery over m panels (with duplicates): if p̄ assigns probability ℓ/m to panel P, then the corresponding physical uniform lottery would contain ℓ duplicates of P. Thus, if we can compute a 1/m-quantized panel distribution p̄ with fairness F(p̄), then we have designed a physical uniform lottery over m panels with that same level of fairness.
Our goal follows directly from this observation: we want to show that, given an instance and a desired lottery size m, we can compute a 1/m-quantized distribution p̄ that is nearly as fair, with respect to a fairness notion F, as the maximally fair panel distribution in this instance, p∗ ∈ arg max_{p∈D} F(p). We define the fairness loss in this quantization process to be the difference F(p∗) − F(p̄). We are aided in this task by the existence of practical algorithms for computing p∗ due to Flanigan et al. [12], which allows us to use p∗ as an input to the quantization procedure we hope to design. For intuition, we illustrate this quantization task in Figure 1, where π∗, π̄ are the marginals implied by p∗, p̄, respectively. Since the fairness of p∗, p̄ is computed in terms of π∗, π̄, it is intuitive that a quantization process that results in small marginal discrepancy, defined as the maximum change in any marginal ‖π∗ − π̄‖∞, should also have small fairness loss. This idea motivates the upcoming section, in which we give quantization procedures with provably bounded marginal discrepancy, forming the foundation for our later bounds on fairness loss.
3A panel distribution p implies a unique vector of marginals π as follows: fixing p, π, a pool member i’s marginal selection probability πi is equal to the probability of drawing a panel from p containing that pool member. For a more detailed introduction to the connection between panel distributions and marginals, we refer readers to Flanigan et al. [12].
Figure 1: The quantization task takes as input a maximally fair panel distribution p∗ (implying marginals π∗), and outputs a 1/m-quantized panel distribution p̄ (implying marginals π̄).
3 Theoretical Bounds on Marginal Discrepancy
Here we prove that for a fixed panel distribution p, π, there exists a uniform lottery p̄, π̄ such that ‖π − π̄‖∞ is bounded. Preliminarily, we note that it is intuitive that bounds on this discrepancy should approach 0 as m becomes large with respect to n and k. To see why, begin by fixing some distribution p, π over panels: as m becomes large, we approach the scenario in which a uniform lottery p̄ can assign panels arbitrary probabilities, providing increasingly close approximations to p. Since the marginals πi are continuous with respect to p, as p̄→ p we have that π̄i → πi for all i. While this argument demonstrates convergence, it provides neither efficient algorithms nor tight bounds on the rate of convergence. In this section, our task is therefore to bound the rate of this convergence as a function of m and the other parameters of the instance. All omitted proofs of results from this section are included in Appendix B.
3.1 Worst-Case Upper Bounds
Our first set of upper bounds results from rounding STANDARD LP, the LP that most directly arises from our problem. This LP is defined in terms of a panel distribution p, π, and M, an n × |K| matrix describing which individuals are on which feasible panels: M_{i,P} = 1 if i ∈ P and M_{i,P} = 0 otherwise.
STANDARD LP:
    Mp = π    (3.1)
    ‖p‖_1 = 1    (3.2)
    p ≥ 0.
Here, (3.1) specifies n total constraints. Our goal is to round p to a uniform lottery p̄ over m panels (so the entries of p̄ are multiples of 1/m) such that (3.2) is maintained exactly, and no constraint in (3.1) is relaxed by too much, i.e., ‖Mp − Mp̄‖∞ = ‖π − π̄‖∞ remains small. Randomized rounding is a natural first approach. Any randomized rounding scheme satisfying negative association (which includes several that respect (3.2)) yields the following bound:
Theorem 3.1. For any realizable π, we may efficiently randomly generate p̄ such that its marginals π̄ satisfy
    ‖π − π̄‖∞ = O( √(n log n) / m ).
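To ground the objects appearing in STANDARD LP, the following sketch (ours, not the paper's code; names are illustrative) builds the incidence matrix M from a list of feasible panels and evaluates the marginals π = Mp and the discrepancy ‖π − π̄‖∞ that Theorems 3.1–3.3 bound.

```python
import numpy as np

def incidence_matrix(n, panels):
    """M[i, j] = 1 iff pool member i is on feasible panel j (the matrix M of
    STANDARD LP); panels are given as iterables of 0-indexed member ids."""
    M = np.zeros((n, len(panels)))
    for j, panel in enumerate(panels):
        for i in panel:
            M[i, j] = 1.0
    return M

def marginals(M, p):
    """pi = Mp: each member's probability of appearing on the drawn panel."""
    return M @ np.asarray(p, dtype=float)

def marginal_discrepancy(M, p, p_bar):
    """The quantity ||pi - pi_bar||_inf controlled by Theorems 3.1-3.3."""
    return float(np.max(np.abs(marginals(M, p) - marginals(M, p_bar))))
```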
Fortunately, there is potential for improvement: randomized rounding does not make full use of the fact that M is k-column sparse, due to each panel in K containing exactly k individuals. We use this sparsity to get a stronger bound when n ≫ k^2, which is a practically significant parameter regime. The proof applies a dependent rounding algorithm based on a theorem of Beck and Fiala [1], with a modification that ensures the exact satisfaction of constraint (3.2).
Theorem 3.2. For any realizable π, we may efficiently construct p̄ such that its marginals π̄ satisfy
‖π − π̄‖∞ ≤ k/m.
This bound is already meaningful in practice, where k ≪ m is ensured by the fact that m is chosen, along with k, prior to panel selection. Note also that k is typically on the order of 100
(Table 1), whereas a uniform lottery can in practice be easily made orders of magnitude larger, as each additional factor of 10 in the size of the uniform lottery requires drawing only one more ball (and there is no fairness cost to drawing a larger lottery, since increasing m allows for uniform lotteries which better approximate the unconstrained optimal distribution).
3.2 Beyond-Worst-Case Upper Bounds
As we will demonstrate in Section 3.3, we cannot hope for a better worst-case upper bound than poly(k)/m. We thus shift our consideration to instances which are “simple” in their feature structure, having a small number of features (Theorem B.7), a limited number of unique feature vectors in the pool (Theorem 3.3), or multiple individuals that share each feature vector present (Theorem B.8). The beyond-worst-case bounds given by Theorem 3.3 and Theorem B.8 asymptotically dominate our worst-case bounds in Theorem 3.1 and Theorem 3.2, respectively. Moreover, Theorem 3.3 dominates all other upper bounds in 10 of the 11 practical instances studied in Section 5.
We note that while our worst-case upper bounds implied the near-preservation of any realizable set of marginals π, some of our beyond-worst-case results apply only to realizable π which are anonymous, meaning that the π_i are equal for all i with equal feature vectors. We contend that any reasonable set of marginals should have this property,4 and furthermore that the “anonymization” of any realizable π is also realizable (Claim B.6); hence this restriction is insignificant. Our beyond-worst-case bounds also differ from our worst-case bounds in that they depart from the paradigm of rounding p, instead randomizing over panels that may fall outside the support of p.
The main beyond-worst-case bound we give, stated below, is parameterized by |C|, where C is the set of unique feature vectors that appear in the pool. All omitted proofs and other beyond worst-case results are stated and proven in Appendix B.
Theorem 3.3. If π is anonymous and realizable, then we may efficiently construct p̄ such that its marginals π̄ satisfy
‖π − π̄‖∞ = O( √(|C| log |C|) / m ).
|C| is at most n, so this bound dominates Theorem 3.1. In 10 of the 11 real-world instances we study, |C| is also smaller than k^2 (Appendix A), in which case this bound also dominates Theorem 3.2. At a high level, our beyond-worst-case upper bounds are obtained not by directly rounding p, but instead by using the structure of the sortition instance to abstract the problem into one about “types.” For this bound we then solve an LP in terms of “types,” round that LP, and then reconstruct a rounded panel distribution p̄, π̄ from the “type” solution. In particular, the types of individuals are the feature vectors which appear in the pool, and types of panels are the multisets of k feature vectors that satisfy the instance quotas. Fixing an instance, we project some p into type space by viewing it as a distribution p over types of panels K, inducing marginals τ_c for each type of individual c ∈ C. To begin, we define the TYPE LP, which is analogous to Eq. (3.1). We let Q be the type analog of M, so that entry Q_{c,j} is the number of individuals i with F(i) = c contained in panels of type j ∈ K.5 Then,
TYPE LP:
    Q p = τ    (3.3)
    ‖p‖_1 = 1    (3.4)
    p ≥ 0.
We round p in this LP to a panel type distribution p̄ while preserving (3.4). All that remains, then, is to construct some p̄, π̄ such that p is consistent with p̄ and ‖π − π̄‖∞ is small. This p̄ is in general supported by panels outside of supp(p), unlike the p̄ obtained by Theorem 3.1. It is the anonymity of π which allows us to construct these new panels and prove that they are feasible for the instance.
4The class of all anonymous marginals π includes the maximizers π∗ of all reasonable fairness objectives, and second, this condition is satisfied by all existing selection algorithms used in practice, to our knowledge.
5Completing the analogy, C,K, Q, p, p̄, τ are the “type” versions of N,K,M, p, p̄, π from the original LP.
3.3 Lower Bounds
This method of using bounded discrepancy to derive nearly fairness-optimal uniform lotteries has its limits, since there are even sparse M and fractional x for which no integer x̄ yields a nearby Mx̄. In the worst case, we establish lower bounds by modifying those of Beck and Fiala [25]:
Theorem 3.4. There exist p, π for which, over all uniform lotteries p̄, π̄,
    min_{p̄∈D̄} ‖π − π̄‖∞ = Ω( √k / m ).
Our k-dependent upper and lower bounds are separated by a factor of √k, matching the current upper and lower bounds of the Beck–Fiala conjecture as applied to linear discrepancy (also known as the lattice approximation problem [26]). The respective gaps are incomparable, however, since for a given x ∈ [0, 1]^n, the former problem aims to minimize ‖M(x − x̄)‖∞ over x̄ ∈ {0, 1}^n, while we aim to do the same over a subset of the x̄ ∈ Z^n for which Σ_j x_j = Σ_j x̄_j (see Lemma B.4).
4 Theoretical Bounds on Fairness Loss
Since the fairness of a distribution p is determined by its marginals π, it is intuitive that if uniform lotteries incur only small marginal discrepancy (per Section 3), then they should also incur only small fairness losses. This should hold for any fairness notion that is sufficiently “smooth” (i.e., doesn’t change too quickly with changing marginals) in the vicinity of p, π.
Although our bounds from Section 3 apply to any reasonable initial distribution p, we are particularly concerned with bounding fairness loss from maximally fair initial distributions p∗. Here, we specifically consider such p∗ that are optimal with respect to Maximin and NW. We note that, since there exist anonymous p∗, π∗ that maximize these objectives, we can apply any upper bound from Section 3 to upper bound ‖π∗ − π̄‖∞. We defer omitted proofs to Appendix C.
4.1 Maximin
Since Leximin is the fairness objective optimized by the maximally fair algorithm used in practice, it would be most natural to start with a p∗ that is Leximin-optimal and bound fairness loss with respect to this objective. However, the fact that Leximin fairness cannot be represented by a single scalar value prevents us from formulating such an approximation guarantee. Instead, we first pursue bounds on the closely-related objective, Maximin. We argue that in the most meaningful sense, a worst-case Maximin guarantee is a Leximin guarantee: such a bound would show limited loss in the minimum marginal, and it is Leximin’s lexicographically first priority to maximize the minimum marginal.
First, we show there exists some p̄, π̄ that gives bounded Maximin loss from p∗, π∗, the Maximin-optimal unconstrained distribution. This bound follows from Theorems 3.3 and B.8, using the simple observation that p̄ can decrease the lowest marginal given by p∗ by no more than ‖π∗ − π̄‖∞. Here n_min := min_c n_c denotes the smallest number of individuals which share any feature vector c ∈ C.
Corollary 4.1. By Theorems 3.3 and B.8, for Maximin-optimal p∗, there exists a uniform lottery p̄ that satisfies
    Maximin(p∗) − Maximin(p̄) = (1/m) · O( min{ √(|C| log |C|), k/(n_min + 1) } ).
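Spelling out the observation behind Corollary 4.1 (a brief sketch; the omitted formal proofs are in Appendix C): for every pool member i we have π̄_i ≥ π∗_i − ‖π∗ − π̄‖∞ ≥ Maximin(p∗) − ‖π∗ − π̄‖∞, so taking the minimum over i gives Maximin(p̄) ≥ Maximin(p∗) − ‖π∗ − π̄‖∞; the stated bound then follows by instantiating ‖π∗ − π̄‖∞ with the upper bounds of Theorems 3.3 and B.8.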
Theorem 3.4 demonstrates that we cannot get an upper bound on Maximin loss stronger than O(√k/m) using a uniform bound on changes in all π_i. However, since Maximin is concerned only with the smallest π_i, it seems plausible that better upper bounds on Maximin loss could result from rounding π while tightly controlling only losses in the smallest π_i’s, while giving freer rein to larger marginals. We show that this is not the case by further modifying the instances from Theorem 3.4 to obtain the following lower bound on the Maximin loss:
Theorem 4.1. There exists a Maximin-optimal p∗ such that, for all uniform lotteries p̄,
Maximin(p∗) − Maximin(p̄) = Ω( √k / m ).
4.2 Nash Welfare
As NW has also garnered interest from practitioners and is applicable in practice [18], we upper-bound the NW fairness loss. Unlike Maximin loss, an upper bound on NW loss does not immediately follow from one on ‖π − π̄‖∞, because decreases in smaller marginals have a larger negative impact on the NW. As a result, the upper bound on NW resulting from Section 3 is slightly weaker than that on Maximin:
Theorem 4.2. For NW-optimal p∗, there exists a uniform lottery p̄ that satisfies
NW(p∗) − NW(p̄) = (k/m) · O( min{ √(|C| log |C|), k/(n_min + 1) } ).
We give an overview of the proof of Theorem 4.2. To begin, fix a NW-optimizing panel distribution p∗, π∗. Before applying our upper bounds on marginal discrepancy from Section 3, we must contend with the fact that if this bounded loss is suffered by already-tiny marginals, the NW may decrease substantially or even go to 0. Thus, we first prove Lemmas 4.1 and 4.2, which together imply that no marginal in π∗ is smaller than 1/n.
Lemma 4.1. For NW-optimal p∗ over a support of panels supp(p∗), there exists a constant λ ∈ R_+ such that, for all P ∈ supp(p∗), Σ_{i∈P} 1/π∗_i = λ.
Lemma 4.2. For NW-optimal p∗, π∗, we have that π∗i ≥ 1/n for all i ∈ N .
Lemma 4.1 follows from the fact that the partial derivative of NW with respect to the probability it assigns a given panel must be the same as that with respect to any other panel at p∗ (otherwise, mass in the distribution could be shifted to increase the NW). Lemma 4.2 then follows by the additional observation that E_{P∼p∗}[ Σ_{i∈P} 1/π∗_i ] = n.
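To make this step explicit (a sketch; the full proof is deferred to Appendix C): taking expectations of the identity in Lemma 4.1 over P ∼ p∗ gives λ = E_{P∼p∗}[ Σ_{i∈P} 1/π∗_i ] = Σ_{i∈N} Pr[i ∈ P] · (1/π∗_i) = Σ_{i∈N} π∗_i/π∗_i = n, using that every π∗_i is positive at the NW optimum (otherwise NW(p∗) = 0). Since every summand of Σ_{i∈P} 1/π∗_i is positive, each individual i on a supported panel satisfies 1/π∗_i ≤ λ = n, i.e., π∗_i ≥ 1/n, which is Lemma 4.2.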
Finally, Lemma 4.3 follows from the fact that Lemma 4.2 limits the potential multiplicative, and therefore additive, impact on the NW of decreasing any marginal by ‖π − π̄‖∞:
Lemma 4.3. For NW-optimal p∗, π∗, there exists a uniform lottery p̄, π̄ that satisfies NW(p∗) − NW(p̄) ≤ k ‖π∗ − π̄‖∞.
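A sketch of how Lemma 4.2 yields Lemma 4.3 (details in Appendix C): write ε = ‖π∗ − π̄‖∞. Each π̄_i ≥ π∗_i − ε = π∗_i(1 − ε/π∗_i) ≥ π∗_i(1 − nε), since π∗_i ≥ 1/n by Lemma 4.2. Taking geometric means, NW(p̄) ≥ (1 − nε) · NW(p∗), so NW(p∗) − NW(p̄) ≤ nε · NW(p∗) ≤ nε · (k/n) = kε, where NW(p∗) ≤ (1/n) Σ_i π∗_i = k/n because the geometric mean is at most the arithmetic mean and every feasible panel contains exactly k individuals (so the marginals sum to k). When nε > 1 the bound holds trivially, since the loss is at most NW(p∗) ≤ k/n ≤ kε.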
As the NW-optimal marginals π∗ are anonymous, we can apply the upper bounds given by Theorem 3.3 and Theorem B.8 to show the existence of a p̄, π̄ satisfying the claim of the theorem.
5 Practical Algorithms for Computing Fair Uniform Lotteries
Algorithms. First, we implement versions of two existing rounding algorithms, which are implicit in our worst-case upper bounds.6 The first is Pipage rounding [16], or PIPAGE, a randomized rounding scheme satisfying negative association [10]. The second is BECK-FIALA, the dependent rounding scheme used in the proof of Theorem 3.2. To benchmark these algorithms against the highest level of fairness they could possibly achieve, we use integer programming (IP) to compute the fairest possible uniform lotteries over supp(p∗), the panels over which p∗ randomizes.7 We define IP-MAXIMIN and IP-NW to find uniform lotteries over supp(p∗) maximizing Maximin and NW, respectively. We remark that the performance of these IPs is still subject to our theoretical upper and lower bounds. We provide implementation details in Appendix D.1.
One question is whether we should prefer the IPs or the rounding algorithms for real-world applications. Although IP-MAXIMIN appears to find good solutions at practicable speeds, IP-NW converges to optimality prohibitively slowly in some instances (see Appendix D.2 for runtimes). At the same time, we find that our simpler rounding algorithms give near-optimal uniform lotteries with respect to both fairness objectives. Also in favor of simpler rounding algorithms, many randomized rounding procedures (including Pipage rounding) have the advantage that they exactly
6We do not implement the algorithm implicit in Theorem 3.3 because our results already present sufficient alternatives for finding excellent uniform lotteries in practice.
7Note that these lotteries are not necessarily universally optimal, as they can randomize over only supp(p∗); conceivably, one could find a fairer uniform lottery by also randomizing over panels not in supp(p∗). However, PIPAGE and BECK-FIALA are also restricted in this way, and thus must be weakly dominated by the IP.
preserve marginals over the combined steps of randomly rounding to a uniform lottery and then randomly sampling it—a guarantee that is much more challenging to achieve with IPs.
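For intuition on what a PIPAGE-style procedure does in this single-constraint setting, the following sketch (ours; the implementation evaluated here and detailed in Appendix D.1 may differ) repeatedly shifts probability mass between two fractional entries of m·p, with move probabilities chosen so that expectations are preserved, until every entry is an integer; dividing by m yields a 1/m-quantized p̄ whose marginals equal π in expectation.

```python
import math
import random

def pipage_round(p, m, tol=1e-9):
    """Round x = m*p to integers pair by pair, keeping sum(x) = m throughout
    and E[x_i] = m*p_i; the result divided by m is a 1/m-quantized p_bar."""
    x = [m * pi for pi in p]

    def is_fractional(v):
        return min(v - math.floor(v), math.ceil(v) - v) > tol

    while True:
        frac = [idx for idx, v in enumerate(x) if is_fractional(v)]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        up = min(math.ceil(x[i]) - x[i], x[j] - math.floor(x[j]))
        down = min(x[i] - math.floor(x[i]), math.ceil(x[j]) - x[j])
        if random.random() < down / (up + down):  # chosen so E[x] is unchanged
            x[i] += up
            x[j] -= up
        else:
            x[i] -= down
            x[j] += down
    return [round(v) / m for v in x]
```

For example, with p = (1/3, 1/3, 1/3) and m = 1000, each scaled entry is 333.33...; the procedure randomly assigns 334/1000 to one panel and 333/1000 to the other two, so the three panel probabilities remain correct in expectation.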
Uniform lotteries nearly exactly preserve Maximin and Nash Welfare fairness. We first measure the fairness of uniform lotteries produced by these algorithms in 11 real-world panel selection instances from 7 different organizations worldwide (instance details in Appendix A). In all experiments, we generate a lottery of size m = 1000. This is fairly small; it requires drawing only 3 balls from lottery machines, and in one instance we have that m < n. We nevertheless see excellent performance of all algorithms, and note that this performance will only improve with larger m.
Figure 2 shows the Maximin fairness of the uniform lottery computed by PIPAGE, BECK-FIALA, and IP-MAXIMIN for each instance. For intuition, recall that the level of Maximin fairness given by any lottery is exactly the minimum marginal assigned to any individual by that lottery. The upper edges of the gray boxes in Fig. 2 correspond to the optimal fairness attained by an unconstrained distribution p∗. These experiments reveal that the cost of transparency to Maximin-fairness is practically non-existent: across instances, the quantized distributions computed by IP-MAXIMIN decrease the minimum marginal by at most 2.1/m, amounting to a loss of no more than 0.0021 in the minimum marginal probability in any instance. Visually, we can see that this loss is negligible relative to the original magnitude of even the smallest marginals given by p∗. Surprisingly, though PIPAGE and BECK-FIALA do not aim to optimize any fairness objective, they achieve only slightly larger losses in Maximin fairness, with PIPAGE outperforming BECK-FIALA. Finally, the heights of the gray boxes indicate that our theoretical bounds are often meaningful in practice, giving lower bounds on Maximin fairness well above zero in nine out of eleven instances. We note these bounds only tighten with larger m. We present similarly encouraging results on NW loss in Appendix D.3.
Uniform lotteries nearly preserve all Leximin marginals. We remain one step away from practice: our examination of Maximin does not address whether uniform lotteries can attain the finer-tuned fairness properties of the Leximin-optimal distributions currently used in practice. Fortunately, our results from Section 3 imply the existence of a quantized p̄ that closely approximates all marginals given by the Leximin-optimal distribution p∗, π∗. We evaluate the extent to which PIPAGE and BECK-FIALA preserve these marginals in Fig. 3. They are benchmarked against a new IP, IP-MARGINALS, which computes the uniform lottery over supp(p∗) minimizing ‖π∗ − π̄‖∞.
Figure 3 demonstrates that in the instance “sf(a)”, all algorithms produce marginals that deviate negligibly from those given by π∗. Analogous results on remaining instances appear in Appendix D.4 and show similar results. As was the case for Maximin, we see that our theoretical bounds are meaningful, but that we can consistently outperform them in real-world instances.
6 Discussion
Our aim was to show that uniform lotteries can preserve fairness, and our results ultimately suggest this, along with something stronger: that in practical instances, uniform lotteries can reliably almost exactly replicate the entire set of marginals given by the optimal unconstrained panel distribution. Our rounding algorithms can thus be plugged directly into the existing panel selection pipeline with essentially no impact on individuals’ selection probabilities, thus enabling translation of the output of Panelot (and other maximally fair algorithms) to a nearly maximally fair and transparent panel selection procedure. We note that our methods are not just compatible with ball-drawing lotteries, but any form of uniform physical randomness (e.g. dice, wheel-spinning, etc.).
Although we achieve our stated notion of transparency, a limitation of this notion is that it focuses on the final stage of the panel selection process. A more holistic notion of transparency might require that onlookers can verify that the panel is not being intentionally stacked with certain individuals. This work does not fully enable such verification: although onlookers can now observe individuals’ marginals, they still cannot verify that these marginals are actually maximally fair without verifying the underlying optimization algorithms. In particular, in the common case where quotas require even maximally fair panel distributions to select certain individuals with probability near one, onlookers cannot distinguish those from unfair distributions engineered such that one or more pool members are chosen with probability near one.
In research on economics, fair division, and other areas of AI, randomness is often proposed as a tool to make real-world systems fairer [17, 6, 15]. Nonetheless, in practice, these systems (with a few exceptions, such as school choice [22]) remain stubbornly deterministic. Among the hurdles to bringing the theoretical benefits of randomness into practice is that allocation mechanisms fare best when they can be readily understood, and that randomness can be perceived as undesirable or suspect. Sortition is a rather unique paradigm at the heart of this tension: it relies centrally on randomness, while in the public sphere it is attaining increasing political influence. It is therefore a uniquely high-impact domain in which to study how to combine the benefits of randomness, such as fairness, with transparency. We hope that this work and its potential for impact will inspire the investigation of fairness-transparency tradeoffs in other AI applications.
Acknowledgements. We would foremost like to thank Paul Gölz for helpful technical conversations and insights on the practical motivations for this research. We also thank Anupam Gupta for helpful technical conversations. Finally, we thank several organizations for supplying real-world citizens’ assembly data, including the Sortition Foundation, the Center for Climate Assemblies, Healthy Democracy, MASS LBP, Nexus Institute, Of by For, and New Democracy.
Funding and Competing Interests. This work was partially supported by National Science Foundation grants CCF-2007080, IIS-2024287 and CCF-1733556; and by Office of Naval Research grant N00014-20-1-2488. Bailey Flanigan is supported by the National Science Foundation Graduate Research Fellowship and the Fannie and John Hertz Foundation. None of the authors have competing interests. | 1. What is the focus of the paper regarding transparency in sortition?
2. What are the strengths of the proposed solution, particularly in terms of interpretability and verifiability?
3. What are the weaknesses of the paper, especially regarding its theoretical contributions?
4. Do you have any concerns about the possibility of biasing the uniformly selected panel?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper discusses transparency in the context of sortition - selecting citizens to participate in a “Citizens’ Assembly” for the purpose of making policy recommendations. They seek to design algorithms which are approximately fair (with respect to three metrics - Maximin, Leximin, and NW) while being more interpretable/verifiable than disclosing selection probabilities. Their proposed solution is to design a uniform lottery, which selects a panel uniformly at random from a precomputed set of panels. They bound the fairness loss and deviation in marginals due to restricting to uniform lotteries, discuss LP rounding algorithms to find such lotteries, and run experiments on 11 real-world instances comparing the algorithms.
Review
The paper is well-written, clearly explaining the problem and techniques used. The theoretical results are basically applications of known randomized rounding algorithms over a natural linear program for this problem. The empirical results seem to indicate that imposing their definition of transparency is not harmful in terms of performance, and their methods seem implementable in practice. I liked the premise of the paper to make randomization more "transparent" by preselecting a set of m panels, each of which could be a feasible panel, and then providing a uniform lottery on these to make the process look transparent. The main drawback to the paper is that the theoretical contribution is small—the LP rounding techniques used are not quite original.
I wonder though, if the set of m panels that are computed could be modified in a way to actually bias any uniformly selected panel. What's the guarantee that the m panels selected are all possible panels? In fact, moving the upper and lower bounds slightly could have a significant impact on the kind of panels obtained by uniformly sampling the feasible set.
NIPS | Title
Fair Sortition Made Transparent
Abstract
Sortition is an age-old democratic paradigm, widely manifested today through the random selection of citizens’ assemblies. Recently-deployed algorithms select assemblies maximally fairly, meaning that subject to demographic quotas, they give all potential participants as equal a chance as possible of being chosen. While these fairness gains can bolster the legitimacy of citizens’ assemblies and facilitate their uptake, existing algorithms remain limited by their lack of transparency. To overcome this hurdle, in this work we focus on panel selection by uniform lottery, which is easy to realize in an observable way. By this approach, the final assembly is selected by uniformly sampling some pre-selected set of m possible assemblies. We provide theoretical guarantees on the fairness attainable via this type of uniform lottery, as compared to the existing maximally fair but opaque algorithms, for two different fairness objectives. We complement these results with experiments on real-world instances that demonstrate the viability of the uniform lottery approach as a method of selecting assemblies both fairly and transparently.
1 Introduction
In a citizens’ assembly, a panel of randomly chosen citizens is convened to deliberate and ultimately make recommendations on a policy issue. The defining aspect of citizens’ assemblies is the randomness of the process, sortition, by which participants are chosen. In practice, the sortition process works as follows: first, volunteers are solicited via thousands of letters or phone calls, which target individuals chosen uniformly at random. Those who respond affirmatively form the pool of volunteers, from which a final panel will be chosen. Finally, a selection algorithm is used to randomly select some pre-specified number k of pool members for the panel. To ensure adequate representation of demographic groups, the chosen panel is often constrained to satisfy some upper and lower quotas on feature categories such as age, gender, and ethnicity. We call a quota-satisfying panel of size k a feasible panel. As this process illustrates, citizens’ assemblies offer a way to involve the public in informed decision-making. This potential for civic participation has recently spurred a global resurgence in the popularity of citizens assemblies; they have been commissioned by governments and led to policy changes at the national level [19, 23, 12].
Prompted by the growing impact of citizens’ assemblies, there has been a recent flurry of computer scientific research on sortition, and in particular, on the fairness of the procedure by which participants are chosen [2, 13, 12]. The most practicable result to date is a family of selection algorithms proposed by Flanigan et al. [12], which are distinguished from their predecessors by their use of randomness toward the goal of fairness: while previously-used algorithms selected pool members in
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
a random but ad-hoc fashion, these new algorithms are maximally fair, ensuring that pool members have as equal probability as possible of being chosen for the panel, subject to the quotas.1 To encompass the many interpretations of “as equal as possible,” these algorithms permit the optimization of any fairness objective with certain convexity properties. There is now a publicly available implementation of the techniques of Flanigan et al. [12], called Panelot, which optimizes the egalitarian notion that no pool member has too little selection probability via the Leximin objective from fair division [21, 14]. This algorithm has already been deployed by several groups of panel organizers, and has been used to select dozens of panels worldwide.
Fairness gains in the panel selection process can lend legitimacy to citizens’ assemblies and potentially increase their adoption, but only insofar as the public trusts that these gains are truly realized. Currently, the potential for public trust in the panel selection process is limited by multiple factors. First, the latest panel selection algorithms select the final panel via behind-the-scenes computation. When panels are selected in this manner, observers cannot even verify that any given pool member has any chance of being chosen for the panel. A second and more fundamental hurdle is that randomness and probability, which are central to the sortition process, have been shown in many contexts to be difficult for people to understand and reason about [24, 20, 28]. Aiming to address these shortcomings, we propose and pursue the following notion of transparency in panel selection:
Transparency: Observers should be able to, without reasoning in-depth about probability, (1) understand the probabilities with which each individual will be chosen for the panel in theory, and (2) verify that individuals are actually selected with these probabilities in practice.
In this paper, we aim to achieve transparency and fairness simultaneously: this means advancing the defined goal of transparency, while preserving the fairness gains obtained by maximally fair selection algorithms. Although this task is reminiscent of existing AI research on trade-offs between fairness or transparency with other desirable objectives [4, 11, 3, 27], to our knowledge, this is the first investigation of the trade-off between fairness and transparency.
Setting aside for a moment the goal of fairness, we consider a method of random decision-making that is already common in the public sphere: the uniform lottery. To satisfy quotas, a uniform lottery for sortition must randomize not over individuals, but over entire feasible panels. In fact, this approach has been suggested by practitioners, and was even used in 2020 to select a citizens’ assembly in Michigan. The following example, which closely mirrors that real-world pilot,2 illustrates that panel selection via uniform lottery is naturally consistent with the transparency notion we pursue.
Suppose we construct 1000 feasible panels from a pool (possibly with duplicates), numbered 000-999, and publish an (anonymized) list of which pool members are on each panel. We then inform spectators that we will choose each panel with equal probability. This satisfies criterion (1): spectators can easily understand that all panels will be chosen with the same probability of 1/1000, and can easily determine each individual’s selection probability by counting the number of panels containing the individual. To satisfy criterion (2), we enact the lottery by drawing each of the three digits of the final panel number individually from lottery machines. Lottery spectators can confirm that each ball is drawn with equal probability; this provides confirmation that panels are indeed being chosen with uniform probabilities, thus confirming the enactment of the proposed individual selection probabilities. In addition to its conventionality as a source of randomness, decision-making via drawing lottery balls invites an exciting spectacle, which can promote engagement with citizens’ assemblies.
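A minimal simulation of this ball-drawing step (ours; in the live event each digit comes from a physical lottery machine, and the list of 1000 panels is published in advance):

```python
import random

def draw_panel_by_digits(lottery):
    """Final draw: with len(lottery) = 1000 pre-published panels numbered
    000-999, draw each of the three digits uniformly at random and return the
    index and the chosen panel."""
    digits = [random.randrange(10) for _ in range(3)]
    index = 100 * digits[0] + 10 * digits[1] + digits[2]
    return index, lottery[index]
```

A pool member's selection probability under this procedure is simply the fraction of the 1000 published panels that contain them, which any onlooker can count.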
This simple method neatly satisfies our transparency criteria, but it has one obvious downside: a uniform lottery over an arbitrary set of feasible panels does not guarantee any measure of equal probabilities to individuals. In fact, it is not even clear that the fairest possible uniform lottery over m panels, where m is a number conducive to selection by physical lottery (e.g. m =1000), would not be significantly less fair than maximally fair algorithms, which sample the fairest possible unconstrained distribution over panels. For example, if m is too small, there may be no uniform lottery which gives all individuals non-zero selection probability, even if each individual appears
1Quotas can preclude giving individuals exactly equal probabilities: if the panel must be 1/2 men, 1/2 women but the pool is split 3/4 men, 1/4 women, then some women must be chosen more often than some men.
2Of By For’s pilot of live panel selection via lottery can be viewed at https://vimeo.com/458304880# t=17m59s from 17:59 to 21:23. For a more detailed description, see Figure 3 and surrounding text in [12].
on some feasible panel (and so can attain a non-zero selection probability under an unconstrained distribution).
Fortunately, empirical evidence suggests that there is hope: in the 2020 pilot mentioned above, a uniform lottery over m =1000 panels was found that nearly matched the fairness of the maximally fair distribution generated by Panelot. Motivated by this anecdotal evidence, we aim to understand whether such a fair uniform lottery is guaranteed to exist in general, and if it does, how to find it. We summarize this goal in the following research questions:
Does there exist a uniform lottery overm panels that nearly preserves the fairness of the maximally fair unconstrained distribution over panels? And, Algorithmically, how do we compute such a uniform lottery?
Results and Contributions. After describing the model in Section 2, in Section 3 we prove that it is possible to round an (essentially) arbitrary distribution over panels to a uniform lottery while preserving all individuals’ selection probabilities up to only a small bounded deviation. These results use tools from discrepancy theory and randomized rounding. Intuitively, this bounded change in selection probabilities implies bounded losses in fairness; we formalize this intuition in Section 4, showing that there exists in general a uniform lottery that is nearly maximally fair, with respect to multiple choices of fairness objective. Although we would ideally like to give such bounds for the Leximin fairness objective, due to its use in practice, we cannot succinctly represent bounds for this objective because it is not scalar valued. We therefore give bounds for Maximin, a closely related egalitarian objective which only considers the minimum selection probability given to any pool member [7]. We discuss in Section 4 why bounds on loss in Maximin fairness are, in the most meaningful sense, also bounds on loss in Leximin fairness. We additionally give upper bounds on the loss in Nash Welfare [21], a similarly well-established fairness objective that has also been implemented in panel selection tools [18].
Finally, in Section 5, we consider the algorithmic question in practice: given a maximally fair distribution over panels, can we actually find nearly maximally fair uniform lotteries that match our theoretical guarantees? To answer this question, we implement two standard rounding algorithms, along with near-optimal (but more computationally intensive) integer programming methods, for finding uniform lotteries. We then evaluate the performance of these algorithms in 11 real-world panel selection instances. We find that in all instances, we can compute uniform lotteries that nearly exactly preserve not only fairness with respect to both objectives, but entire sets of Leximin-optimal marginals, meaning that from the perspective of individuals, there is essentially no difference between using a uniform lottery versus the optimal unconstrained distribution sampled by the latest algorithms. We discuss these results, their implications, and how they can be deployed directly into the existing panel selection pipeline in Section 6.
2 Model
Panel Selection Problem. First, we formally define the task of panel selection for citizens’ assemblies. Let N = [n] be the pool of volunteers for the panel—individuals from the population who have indicated their willingness to participate in response to an invitation. Let F = {ft}t denote a fixed set of features of interest. Each feature ft : N → Ωt maps each pool member to their value of that feature, where Ωt is the set of ft’s possible values. For example, for feature ft = “gender”, we might have Ωt = {“male”,“female”, “non-binary”}. We define individual i’s feature vector F (i) = (ft(i))t ∈ ∏ t Ωt to be the vector encoding their values for all features in F .
As is done in practice and in previous research [13, 12], we impose that the chosen panel P must be a subset of the pool of size k, and must be representative of the broader population with respect to the features in F . This representativeness is imposed via quotas: for each feature f and corresponding value v ∈ Ω, we may have lower and upper quotas lf,v and uf,v. These quotas require that the panel contain between lf,v and uf,v individuals i such that f(i) = v.
In terms of these parameters, we define an instance of the panel selection problem as: given (N, k, F, l, u)—a pool, panel size, set of features, and sets of lower and upper quotas—randomly select a feasible panel, where a feasible panel is any set of individuals P from the collection K:
K := { P ∈ (N choose k) : l_{f,v} ≤ |{i ∈ P : f(i) = v}| ≤ u_{f,v} for all f, v }.
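The quota constraints defining K are straightforward to check for any candidate panel; a minimal sketch (ours; the data layout is illustrative and not from the paper):

```python
def is_feasible(panel, F, lower, upper, k):
    """Check whether a candidate panel (a collection of pool-member ids) lies
    in K: it must have size k and meet every lower/upper quota. F[i][f] is
    member i's value for feature f; lower/upper map (feature, value) pairs to
    quotas and are assumed to share the same key set."""
    if len(panel) != k:
        return False
    for (f, v), lo in lower.items():
        count = sum(1 for i in panel if F[i][f] == v)
        if not (lo <= count <= upper[(f, v)]):
            return False
    return True
```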
Maximally Fair Selection Algorithms. A selection algorithm is a procedure that solves instances of the panel selection problem. A selection algorithm’s level of fairness on a given instance is determined by its panel distribution p, the (possibly implicit) distribution over K from which it draws the final panel. Because we care about fairness to individual pool members, we evaluate the fairness of p in terms of the fairness of selection probabilities, or marginals, that p implies for all pool members.3 We denote the vector of marginals implied by p as π, and we will sometimes specify a panel distribution as p, π to explicitly denote this pair. We say that π is realizable if it is implied by some distribution p over the feasible panels K. Maximally fair selection algorithms are those which solve the panel selection problem by sampling a specifically chosen p: one which implies marginals π that allocate probability as fairly as possible across pool members. The fairness of p, π is measured by a fairness objective F , which maps an allocation—in this case, of selection probability to pool members—to a real number measuring the allocation’s fairness. Fixing an instance, a fairness objective F , and a panel distribution p, we express the fairness of p as F(p). Existing maximally fair selection algorithms can maximize a wide range of fairness objectives, including those considered in this paper.
Leximin, Maximin, and Nash Welfare. Of the three fairness objectives we consider in this paper, Maximin and Nash Welfare (NW) have succinct formulae. For p, π they are defined as follows, where πi is the marginal of individual i:
Maximin(p) := min_{i∈N} π_i,    NW(p) := ( ∏_{i∈N} π_i )^{1/n}.
Intuitively, NW maximizes the geometric mean, prioritizing the marginal π_i of each individual i in proportion to π_i^{-1}. Maximin maximizes the marginal probability of the individual least likely to be selected. Finally, Leximin is a refinement of Maximin, and is defined by the following algorithm: first, optimize Maximin; then, fixing the minimum marginal as a lower bound on any marginal, maximize the second-lowest marginal; and so on.
Our task: quantize a maximally fair panel distribution with minimal fairness loss. We define a 1/m-quantized panel distribution as a distribution over all feasible panels K in which all probabilities are integer multiples of 1/m. We use p̄ to denote a panel distribution with this property. Formally, while an (unconstrained) panel distribution p lies in D := {p ∈ R_+^{|K|} : ‖p‖_1 = 1}, a 1/m-quantized panel distribution p̄ lies in D̄ := {p̄ ∈ (Z_+/m)^{|K|} : ‖p̄‖_1 = 1}. Note that a 1/m-quantized distribution p̄ immediately translates to a physical uniform lottery over m panels (with duplicates): if p̄ assigns probability ℓ/m to panel P, then the corresponding physical uniform lottery would contain ℓ duplicates of P. Thus, if we can compute a 1/m-quantized panel distribution p̄ with fairness F(p̄), then we have designed a physical uniform lottery over m panels with that same level of fairness.
Our goal follows directly from this observation: we want to show that given an instance and desired lottery size m, we can compute a 1/m-quantized distribution p̄ that is nearly as fair, with respect to a fairness notion F, as the maximally fair panel distribution in this instance, p∗ ∈ arg max_{p∈D} F(p). We define the fairness loss in this quantization process to be the difference F(p∗) − F(p̄). We are aided in this task by the existence of practical algorithms for computing p∗ due to Flanigan et al. [12], which allows us to use p∗ as an input to the quantization procedure we hope to design. For intuition, we illustrate this quantization task in Figure 1, where π∗, π̄ are the marginals implied by p∗, p̄, respectively. Since the fairness of p∗ and p̄ is computed in terms of π∗ and π̄, it is intuitive that a quantization process that results in small marginal discrepancy, defined as the maximum change in any marginal ‖π − π̄‖∞, should also have small fairness loss. This idea motivates the upcoming section, in which we give quantization procedures with provably bounded marginal discrepancy, forming the foundation for our later bounds on fairness loss.
3A panel distribution p implies a unique vector of marginals π as follows: fixing p, π, a pool member i’s marginal selection probability πi is equal to the probability of drawing a panel from p containing that pool member. For a more detailed introduction to the connection between panel distributions and marginals, we refer readers to Flanigan et al. [12].
Figure 1: The quantization task takes as input a maximally fair panel distribution p∗ (implying marginals π∗), and outputs a 1/m-quantized panel distribution p̄ (implying marginals π̄).
3 Theoretical Bounds on Marginal Discrepancy
Here we prove that for a fixed panel distribution p, π, there exists a uniform lottery p̄, π̄ such that ‖π − π̄‖∞ is bounded. Preliminarily, we note that it is intuitive that bounds on this discrepancy should approach 0 as m becomes large with respect to n and k. To see why, begin by fixing some distribution p, π over panels: as m becomes large, we approach the scenario in which a uniform lottery p̄ can assign panels arbitrary probabilities, providing increasingly close approximations to p. Since the marginals πi are continuous with respect to p, as p̄→ p we have that π̄i → πi for all i. While this argument demonstrates convergence, it provides neither efficient algorithms nor tight bounds on the rate of convergence. In this section, our task is therefore to bound the rate of this convergence as a function of m and the other parameters of the instance. All omitted proofs of results from this section are included in Appendix B.
3.1 Worst-Case Upper Bounds
Our first set of upper bounds results from rounding STANDARD LP, the LP that most directly arises from our problem. This LP is defined in terms of a panel distribution p, π, and M, an n × |K| matrix describing which individuals are on which feasible panels: Mi,P = 1 if i ∈ P and Mi,P = 0 otherwise.
STANDARD LP
Mp = π (3.1)
‖p‖1 = 1 (3.2)
p ≥ 0.
Here, (3.1) specifies n total constraints. Our goal is to round p to a uniform lottery p̄ over m panels (so the entries of p̄ are multiples of 1/m) such that (3.2) is maintained exactly, and no constraint in (3.1) is relaxed by too much, i.e., ‖Mp − Mp̄‖∞ = ‖π − π̄‖∞ remains small. Randomized rounding is a natural first approach. Any randomized rounding scheme satisfying negative association (which includes several that respect (3.2)) yields the following bound:
Theorem 3.1. For any realizable π, we may efficiently randomly generate p̄ such that its marginals π̄ satisfy
‖π − π̄‖∞ = O(√(n log n)/m).
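For intuition, the following is a minimal sketch (our own Python, with hypothetical names) of one sum-preserving, unbiased way to quantize a panel distribution: floor m·p and allocate the leftover units by systematic sampling of the fractional parts. This only illustrates the object being constructed; Theorem 3.1 is stated for negatively associated rounding schemes, such as the Pipage rounding used in Section 5, which need not coincide with this particular scheme.

```python
import numpy as np

def quantize(p, m, rng=None):
    """Round a panel distribution p (indexed by the panels in its support)
    to a 1/m-quantized distribution p_bar with sum(p_bar) == 1 and
    E[p_bar] == p entrywise: floor m*p, then hand out the remaining units
    by systematic sampling of the fractional parts."""
    rng = np.random.default_rng() if rng is None else rng
    q = m * np.asarray(p, dtype=float)
    base = np.floor(q).astype(int)
    frac = q - base
    r = m - int(base.sum())                      # leftover units to place
    if r > 0:
        order = rng.permutation(len(q))          # random order of panels
        cum = np.cumsum(frac[order])
        u = rng.uniform()                        # one random offset
        pos = np.searchsorted(cum, u + np.arange(r), side="left")
        pos = np.minimum(pos, len(q) - 1)        # guard against float error
        np.add.at(base, order[pos], 1)           # each hit panel gets one extra unit
    return base / m
```

Each panel receives the extra unit with probability exactly equal to its fractional part, so marginals are preserved in expectation; the discrepancy bounds above concern how far a single realization can drift.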
Fortunately, there is potential for improvement: randomized rounding does not make full use of the fact that M is k-column sparse, due to each panel in K containing exactly k individuals. We use this sparsity to get a stronger bound when n ≳ k², which is a practically significant parameter regime. The proof applies a dependent rounding algorithm based on a theorem of Beck and Fiala [1], with a modification that ensures the exact satisfaction of constraint (3.2). Theorem 3.2. For any realizable π, we may efficiently construct p̄ such that its marginals π̄ satisfy
‖π − π̄‖∞ ≤ k/m.
This bound is already meaningful in practice, where k ≪ m is ensured by the fact that m is pre-chosen along with k prior to panel selection. Note also that k is typically on the order of 100 (Table 1), whereas a uniform lottery can in practice easily be made orders of magnitude larger, as each additional factor of 10 in the size of the uniform lottery requires drawing only one more ball (and there is no fairness cost to drawing a larger lottery, since increasing m allows for uniform lotteries which better approximate the unconstrained optimal distribution).
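For a rough sense of scale, the two worst-case bounds can be tabulated for a few lottery sizes; the values of k and n below are our own illustrative magnitudes (k on the order of 100, as stated above), not figures reported in the paper.

```python
from math import log, sqrt

k, n = 100, 1500                          # illustrative panel and pool sizes
for m in (1_000, 10_000, 100_000):        # 3, 4, or 5 lottery balls
    thm31 = sqrt(n * log(n)) / m          # Theorem 3.1 bound
    thm32 = k / m                         # Theorem 3.2 bound
    print(m, round(thm31, 4), round(thm32, 4))
```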
3.2 Beyond-Worst-Case Upper Bounds
As we will demonstrate in Section 3.3, we cannot hope for a better worst-case upper bound than poly(k)/m. We thus shift our consideration to instances which are “simple” in their feature structure, having a small number of features (Theorem B.7), a limited number of unique feature vectors in the pool (Theorem 3.3), or multiple individuals that share each feature vector present (Theorem B.8). The beyond-worst-case bounds given by Theorem 3.3 and Theorem B.8 asymptotically dominate our worst-case bounds in Theorem 3.1 and Theorem 3.2, respectively. Moreover, Theorem 3.3 dominates all other upper bounds in 10 of the 11 practical instances studied in Section 5.
We note that while our worst-case upper bounds imply the near-preservation of any realizable set of marginals π, some of our beyond-worst-case results apply only to realizable π which are anonymous, meaning that the πi are equal for all i with equal feature vectors. We contend that any reasonable set of marginals should have this property,4 and furthermore that the "anonymization" of any realizable π is also realizable (Claim B.6); hence this restriction is insignificant. Our beyond-worst-case bounds also differ from our worst-case bounds in that they depart from the paradigm of rounding p, instead randomizing over panels that may fall outside the support of p.
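As a concrete illustration of the anonymization step (a sketch with hypothetical names; the formal statement is Claim B.6 in the paper's appendix, which we do not reproduce here): average each individual's marginal over all pool members sharing their feature vector.

```python
from collections import defaultdict

def anonymize(feature_vectors, pi):
    """feature_vectors[i] is a tuple of i's feature values; pi[i] is i's
    marginal.  Returns marginals in which equal feature vectors receive
    equal (averaged) selection probability."""
    groups = defaultdict(list)
    for i, fv in enumerate(feature_vectors):
        groups[fv].append(i)
    pi_anon = list(pi)
    for members in groups.values():
        avg = sum(pi[i] for i in members) / len(members)
        for i in members:
            pi_anon[i] = avg
    return pi_anon
```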
The main beyond-worst-case bound we give, stated below, is parameterized by |C|, where C is the set of unique feature vectors that appear in the pool. All omitted proofs and other beyond worst-case results are stated and proven in Appendix B.
Theorem 3.3. If π is anonymous and realizable, then we may efficiently construct p̄ such that its marginals π̄ satisfy
‖π − π̄‖∞ = O(√(|C| log |C|)/m).
|C| is at most n, so this bound dominates Theorem 3.1. In 10 of the 11 real-world instances we study, |C| is also smaller than k² (Appendix A), in which case this bound also dominates Theorem 3.2. At a high level, our beyond-worst-case upper bounds are obtained not by directly rounding p, but instead using the structure of the sortition instance to abstract the problem into one about "types." For this bound we then solve an LP in terms of "types," round that LP, and then reconstruct a rounded panel distribution p̄, π̄ from the "type" solution. In particular, the types of individuals are the feature vectors which appear in the pool, and types of panels are the multisets of k feature vectors that satisfy the instance quotas. Fixing an instance, we project some p into type space by viewing it as a distribution p over types of panels K, inducing marginals τc for each type of individual c ∈ C. To begin, we define the TYPE LP, which is analogous to Eq. (3.1). We let Q be the type analog of M, so that entry Qcj is the number of individuals i with F (i) = c contained in panels of type j ∈ K.5 Then,
TYPE LP
Q p = τ (3.3)
‖p‖1 = 1 (3.4)
p ≥ 0.
We round p in this LP to a panel type distribution p̄ while preserving (3.4). All that remains, then, is to construct some p̄, π̄ such that p is consistent with p̄ and ‖π − π̄‖∞ is small. This p̄ is in general supported by panels outside of supp(p), unlike the p̄ obtained by Theorem 3.1. It is the anonymity of π which allows us to construct these new panels and prove that they are feasible for the instance.
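To make the type-space projection concrete, the following sketch (our own helper names) groups pool members by feature vector, views each panel as a multiset of feature vectors, and aggregates panel probabilities by type. The exact normalization convention for τ follows the paper's Appendix B, which we only approximate here by taking τc to be the per-individual marginal of type c.

```python
from collections import Counter, defaultdict

def project_to_types(panels, probs, feature_vectors):
    """panels: list of panels (lists of pool-member indices); probs: their
    probabilities under p; feature_vectors[i]: i's feature vector (tuple).
    Returns a distribution over panel types and per-individual type
    marginals, where a panel type is the multiset of feature vectors it
    contains."""
    type_dist = defaultdict(float)
    for panel, pr in zip(panels, probs):
        ptype = tuple(sorted(Counter(feature_vectors[i] for i in panel).items()))
        type_dist[ptype] += pr
    n_c = Counter(feature_vectors)               # pool members per type
    tau = defaultdict(float)
    for ptype, pr in type_dist.items():
        for c, seats in ptype:
            tau[c] += pr * seats / n_c[c]        # anonymous marginal of type c
    return dict(type_dist), dict(tau)
```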
4First, the class of all anonymous marginals π includes the maximizers π∗ of all reasonable fairness objectives; and second, this condition is satisfied by all existing selection algorithms used in practice, to our knowledge.
5Completing the analogy, C,K, Q, p, p̄, τ are the “type” versions of N,K,M, p, p̄, π from the original LP.
3.3 Lower Bounds
This method of using bounded discrepancy to derive nearly fairness-optimal uniform lotteries has its limits, since there are even sparse M and fractional x for which no integer x̄ yields nearby Mx̄. In the worst case, we establish lower bounds by modifying those of Beck and Fiala [25]:
Theorem 3.4. There exist p, π for which all uniform lotteries p̄, π̄ satisfy
‖π − π̄‖∞ = Ω(√k/m).
Our k-dependent upper and lower bounds are separated by a factor of √k, matching the current upper and lower bounds of the Beck-Fiala conjecture as applied to linear discrepancy (also known as the lattice approximation problem [26]). The respective gaps are incomparable, however, since for a given x ∈ [0, 1]^n, the former problem aims to minimize ‖M(x − x̄)‖∞ over x̄ ∈ {0, 1}^n, while we aim to do the same over a subset of the x̄ ∈ Z^n for which Σj xj = Σj x̄j (see Lemma B.4).
4 Theoretical Bounds on Fairness Loss
Since the fairness of a distribution p is determined by its marginals π, it is intuitive that if uniform lotteries incur only small marginal discrepancy (per Section 3), then they should also incur only small fairness losses. This should hold for any fairness notion that is sufficiently “smooth” (i.e., doesn’t change too quickly with changing marginals) in the vicinity of p, π.
Although our bounds from Section 3 apply to any reasonable initial distribution p, we are particularly concerned with bounding fairness loss from maximally fair initial distributions p∗. Here, we specifically consider such p∗ that are optimal with respect to Maximin and NW. We note that, since there exist anonymous p∗, π∗ that maximize these objectives, we can apply any upper bound from Section 3 to upper bound ‖π∗ − π̄‖∞. We defer omitted proofs to Appendix C.
4.1 Maximin
Since Leximin is the fairness objective optimized by the maximally fair algorithm used in practice, it would be most natural to start with a p∗ that is Leximin-optimal and bound fairness loss with respect to this objective. However, the fact that Leximin fairness cannot be represented by a single scalar value prevents us from formulating such an approximation guarantee. Instead, we first pursue bounds on the closely-related objective, Maximin. We argue that in the most meaningful sense, a worst-case Maximin guarantee is a Leximin guarantee: such a bound would show limited loss in the minimum marginal, and it is Leximin’s lexicographically first priority to maximize the minimum marginal.
First, we show there exists some p̄, π̄ that gives bounded Maximin loss from p∗, π∗, the Maximin-optimal unconstrained distribution. This bound follows from Theorems 3.3 and B.8, using the simple observation that p̄ can decrease the lowest marginal given by p∗ by no more than ‖π∗ − π̄‖∞. Here nmin := minc nc denotes the smallest number of individuals which share any feature vector c ∈ C.
Corollary 4.1. By Theorems 3.3 and B.8, for Maximin-optimal p∗, there exists a uniform lottery p̄ that satisfies
Maximin(p∗) − Maximin(p̄) = (1/m) · O(min{√(|C| log |C|), k/nmin + 1}).
Theorem 3.4 demonstrates that we cannot get an upper bound on Maximin loss stronger than O(√k/m) using a uniform bound on changes in all πi. However, since Maximin is concerned only with the smallest πi, it seems plausible that better upper bounds on Maximin loss could result from rounding π while tightly controlling only losses in the smallest πi's, while giving freer rein to larger marginals. We show that this is not the case by further modifying the instances from Theorem 3.4 to obtain the following lower bound on the Maximin loss:
Theorem 4.1. There exists a Maximin-optimal p∗ such that, for all uniform lotteries p̄,
Maximin(p∗) − Maximin(p̄) = Ω(√k/m).
4.2 Nash Welfare
As NW has also garnered interest from practitioners and is applicable in practice [18], we upper-bound the NW fairness loss. Unlike Maximin loss, an upper bound on NW loss does not immediately follow from one on ‖π − π̄‖∞, because decreases in smaller marginals have a larger negative impact on the NW. As a result, the upper bound on NW resulting from Section 3 is slightly weaker than that on Maximin:
Theorem 4.2. For NW-optimal p∗, there exists a uniform lottery p̄ that satisfies
NW(p∗) − NW(p̄) = (k/m) · O(min{√(|C| log |C|), k/nmin + 1}).
We give an overview of the proof of Theorem 4.2. To begin, fix a NW-optimizing panel distribution p∗, π∗. Before applying our upper bounds on marginal discrepancy from Section 3, we must contend with the fact that if this bounded loss is suffered by already-tiny marginals, the NW may decrease substantially or even go to 0. Thus, we first prove Lemmas 4.1 and 4.2, which together imply that no marginal in π∗ is smaller than 1/n.
Lemma 4.1. For NW-optimal p∗ over a support of panels supp(p∗), there exists a constant λ ∈ R+ such that, for all P ∈ supp(p∗),∑i∈P 1/π∗i = λ.
Lemma 4.2. For NW-optimal p∗, π∗, we have that π∗i ≥ 1/n for all i ∈ N .
Lemma 4.1 follows from the fact that the partial derivative of NW with respect to the probability it assigns a given panel must be the same as that with respect to any other panel at p∗ (otherwise, mass in the distribution could be shifted to increase the NW). Lemma 4.2 then follows by the additional observation that E_{P∼p∗}[Σi∈P 1/π∗i] = n.
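For completeness, this observation and the way the two lemmas chain together can be written out in one line (our own rendering of the step, assuming every pool member appears on some feasible panel, so that NW(p∗) > 0 at the optimum):

```latex
\mathbb{E}_{P\sim p^*}\Bigl[\sum_{i\in P}\tfrac{1}{\pi^*_i}\Bigr]
  \;=\; \sum_{i\in N}\Pr_{P\sim p^*}[\,i\in P\,]\cdot\tfrac{1}{\pi^*_i}
  \;=\; \sum_{i\in N}\tfrac{\pi^*_i}{\pi^*_i}
  \;=\; n,
\qquad\text{so by Lemma 4.1, } \lambda = n.
```

Hence, for any i lying on some supported panel P, 1/π∗i ≤ Σj∈P 1/π∗j = n, giving π∗i ≥ 1/n as claimed in Lemma 4.2.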
Finally Lemma 4.3 follows from the fact that Lemma 4.2 limits the potential multiplicative, and therefore additive, impact on the NW of decreasing any marginal by ‖π − π̄‖∞: Lemma 4.3. For NW-optimal p∗, π∗, there exists a uniform lottery p̄, π̄ that satisfies NW(p∗) − NW(p̄) ≤ k ‖π∗ − π̄‖∞.
As the NW-optimal marginals π∗ are anonymous, we can apply the upper bounds given by Theorem 3.3 and Theorem B.8 to show the existence of a p̄, π̄ satisfying the claim of the theorem.
5 Practical Algorithms for Computing Fair Uniform Lotteries
Algorithms. First, we implement versions of two existing rounding algorithms, which are implicit in our worst-case upper bounds.6 The first is Pipage rounding [16], or PIPAGE, a randomized rounding scheme satisfying negative association [10]. The second is BECK-FIALA, the dependent rounding scheme used in the proof of Theorem 3.2. To benchmark these algorithms against the highest level of fairness they could possibly achieve, we use integer programming (IP) to compute the fairest possible uniform lotteries over supp(p∗), the panels over which p∗ randomizes.7 We define IP-MAXIMIN and IP-NW to find uniform lotteries over supp(p∗) maximizing Maximin and NW, respectively. We remark that the performance of these IPs is still subject to our theoretical upper and lower bounds. We provide implementation details in Appendix D.1.
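To give a sense of what such an IP looks like, here is a minimal sketch in the spirit of IP-MAXIMIN, written with the open-source PuLP modelling library. The solver choice, variable names, and exact formulation are ours and need not match the implementation described in Appendix D.1.

```python
import pulp

def ip_maximin_sketch(panels, m):
    """Choose integer multiplicities y_P >= 0 over the panels in supp(p*),
    summing to m, so as to maximize the minimum marginal (in units of 1/m).
    panels: list of panels, each a set of pool-member indices."""
    pool = sorted(set().union(*panels))
    prob = pulp.LpProblem("ip_maximin", pulp.LpMaximize)
    y = [pulp.LpVariable(f"y_{j}", lowBound=0, cat="Integer")
         for j in range(len(panels))]
    t = pulp.LpVariable("t", lowBound=0)          # m times the minimum marginal
    prob += t                                     # objective: maximize t
    prob += pulp.lpSum(y) == m                    # uniform lottery over m panels
    for i in pool:                                # every pool member's count >= t
        prob += pulp.lpSum(y[j] for j, P in enumerate(panels) if i in P) >= t
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [int(v.value()) for v in y], t.value() / m
```

The rounding algorithms, by contrast, never invoke a solver: they only perturb the optimal p∗, which is what allows them to preserve marginals in expectation across the rounding and sampling steps discussed below.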
One question is whether we should prefer the IPs or the rounding algorithms for real-world applications. Although IP-MAXIMIN appears to find good solutions at practicable speeds, IP-NW converges to optimality prohibitively slowly in some instances (see Appendix D.2 for runtimes). At the same time, we find that our simpler rounding algorithms give near-optimal uniform lotteries with respect to both fairness objectives. Also in favor of simpler rounding algorithms, many randomized rounding procedures (including Pipage rounding) have the advantage that they exactly
6We do not implement the algorithm implicit in Theorem 3.3 because our results already present sufficient alternatives for finding excellent uniform lotteries in practice.
7Note that these lotteries are not necessarily universally optimal, as they can randomize over only supp(p∗); conceivably, one could find a fairer uniform lottery by also randomizing over panels not in supp(p∗). However, PIPAGE and BECK-FIALA are also restricted in this way, and thus must be weakly dominated by the IP.
preserve marginals over the combined steps of randomly rounding to a uniform lottery and then randomly sampling it—a guarantee that is much more challenging to achieve with IPs.
Uniform lotteries nearly exactly preserve Maximin and Nash Welfare fairness. We first measure the fairness of uniform lotteries produced by these algorithms in 11 real-world panel selection instances from 7 different organizations worldwide (instance details in Appendix A). In all experiments, we generate a lottery of size m = 1000. This is fairly small; it requires drawing only 3 balls from lottery machines, and in one instance we have that m < n. We nevertheless see excellent performance of all algorithms, and note that this performance will only improve with larger m.
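For concreteness, a toy sketch of how a computed 1/m-quantized distribution is turned into the kind of observable ball-drawing lottery described above; the code and names are our own illustration, and it assumes m is a power of ten (e.g., m = 1000) so that the winning index can be drawn digit by digit.

```python
import random

def realize_lottery(panels, p_bar, m):
    """Expand a 1/m-quantized distribution into an explicit numbered list of
    m panels (with duplicates), then pick the winner by drawing each digit
    of its index uniformly at random, as with physical lottery balls."""
    tickets = []
    for panel, prob in zip(panels, p_bar):
        tickets.extend([panel] * round(prob * m))   # prob is a multiple of 1/m
    assert len(tickets) == m
    digits = len(str(m - 1))                        # e.g. 3 balls for m = 1000
    index = int("".join(str(random.randrange(10)) for _ in range(digits)))
    return index, tickets[index]

def observed_marginal(tickets, i):
    """An onlooker's check of pool member i's selection probability: the
    fraction of the listed panels that contain i."""
    return sum(i in panel for panel in tickets) / len(tickets)
```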
Figure 2 shows the Maximin fairness of the uniform lottery computed by PIPAGE, BECK-FIALA, and IP-MAXIMIN for each instance. For intuition, recall that the level of Maximin fairness given by any lottery is exactly the minimum marginal assigned to any individual by that lottery. The upper edges of the gray boxes in Fig. 2 correspond to the optimal fairness attained by an unconstrained distribution p∗. These experiments reveal that the cost of transparency to Maximin-fairness is practically non-existent: across instances, the quantized distributions computed by IP-MAXIMIN decrease the minimum marginal by at most 2.1/m, amounting to a loss of no more than 0.0021 in the minimum marginal probability in any instance. Visually, we can see that this loss is negligible relative to the original magnitude of even the smallest marginals given by p∗. Surprisingly, though PIPAGE and BECK-FIALA do not aim to optimize any fairness objective, they achieve only slightly larger losses in Maximin fairness, with PIPAGE outperforming BECK-FIALA. Finally, the heights of the gray boxes indicate that our theoretical bounds are often meaningful in practice, giving lower bounds on Maximin fairness well above zero in nine out of eleven instances. We note these bounds only tighten with larger m. We present similarly encouraging results on NW loss in Appendix D.3.
Uniform lotteries nearly preserve all Leximin marginals. We still remain one step away from practice: our examination of Maximin does not address whether uniform lotteries can attain the finer-tuned fairness properties of the Leximin-optimal distributions currently used in practice. Fortunately, our results from Section 3 imply the existence of a quantized p̄ that closely approximates all marginals given by the Leximin-optimal distribution p∗, π∗. We evaluate the extent to which PIPAGE and BECK-FIALA preserve these marginals in Fig. 3. They are benchmarked against a new IP, IP-MARGINALS, which computes the uniform lottery over supp(p∗) minimizing ‖π∗ − π̄‖∞.
Figure 3 demonstrates that in the instance “sf(a)”, all algorithms produce marginals that deviate negligibly from those given by π∗. Analogous results on remaining instances appear in Appendix D.4 and show similar results. As was the case for Maximin, we see that our theoretical bounds are meaningful, but that we can consistently outperform them in real-world instances.
6 Discussion
Our aim was to show that uniform lotteries can preserve fairness, and our results ultimately suggest this, along with something stronger: that in practical instances, uniform lotteries can reliably almost exactly replicate the entire set of marginals given by the optimal unconstrained panel distribution. Our rounding algorithms can thus be plugged directly into the existing panel selection pipeline with essentially no impact on individuals’ selection probabilities, thus enabling translation of the output of Panelot (and other maximally fair algorithms) to a nearly maximally fair and transparent panel selection procedure. We note that our methods are not just compatible with ball-drawing lotteries, but any form of uniform physical randomness (e.g. dice, wheel-spinning, etc.).
Although we achieve our stated notion of transparency, a limitation of this notion is that it focuses on the final stage of the panel selection process. A more holistic notion of transparency might require that onlookers can verify that the panel is not being intentionally stacked with certain individuals. This work does not fully enable such verification: although onlookers can now observe individuals’ marginals, they still cannot verify that these marginals are actually maximally fair without verifying the underlying optimization algorithms. In particular, in the common case where quotas require even maximally fair panel distributions to select certain individuals with probability near one, onlookers cannot distinguish those from unfair distributions engineered such that one or more pool members are chosen with probability near one.
In research on economics, fair division, and other areas of AI, randomness is often proposed as a tool to make real-world systems fairer [17, 6, 15]. Nonetheless, in practice, these systems (with a few exceptions, such as school choice [22]) remain stubbornly deterministic. Among the hurdles to bringing the theoretical benefits of randomness into practice is that allocation mechanisms fare best when they can be readily understood, and that randomness can be perceived as undesirable or suspect. Sortition is a rather unique paradigm at the heart of this tension: it relies centrally on randomness, while in the public sphere it is attaining increasing political influence. It is therefore a uniquely high-impact domain in which to study how to combine the benefits of randomness, such as fairness, with transparency. We hope that this work and its potential for impact will inspire the investigation of fairness-transparency tradeoffs in other AI applications.
Acknowledgements. We would foremost like to thank Paul Gölz for helpful technical conversations and insights on the practical motivations for this research. We also thank Anupam Gupta for helpful technical conversations. Finally, we thank several organizations for supplying real-world citizens’ assembly data, including the Sortition Foundation, the Center for Climate Assemblies, Healthy Democracy, MASS LBP, Nexus Institute, Of by For, and New Democracy.
Funding and Competing Interests. This work was partially supported by National Science Foundation grants CCF-2007080, IIS-2024287 and CCF-1733556; and by Office of Naval Research grant N00014-20-1-2488. Bailey Flanigan is supported by the National Science Foundation Graduate Research Fellowship and the Fannie and John Hertz Foundation. None of the authors have competing interests. | 1. What is the main contribution of the paper regarding transparency in assembly selection?
2. How does the proposed approach achieve fairness and transparency in panel selection?
3. Can the reviewer understand the connection between |K| and m?
4. Does the reviewer have difficulty parsing the definition of Maximin?
5. Can the current results hold if the objective is replaced with a group fairness measure?
6. Is k ≤ m holds for sure?
7. Are there any other minor suggestions or questions regarding the presentation or content of the paper? | Summary Of The Paper
Review | Summary Of The Paper
Additional to the maximally-fair consideration, this paper concerns the transparency in assembly selection problem. Specifically, the paper introduces the notion of transparency in panel selection as people can easy understand the probabilities with which each individual will be chosen for the panel and verify that individuals are actually selected with these probabilities. To achieve this, the paper studies the
m
-uniform lottery where the selection probability of each feasible panel must be multiples of
1
/
m
. The paper shows that there exists a uniform lottery over
m
panels that can nearly preserve the fairness of the maximally-fair unconstrained distribution over panels.
Furthermore, the paper uses the fairness loss and marginal discrepancy to quantify the closeness of such lottery and the lottery obtained in an unconstrained (no transparency requirement) setting. The paper characterizes several upper bounds for these measures and further strengthen these bounds in more structured settings.
Lastly, the paper conducts experiments on real-world panel selection instances to demonstrate the viability of the uniform lottery approach as a method of selecting assemblies both fairly and transparently.
Review
Originality, quality, clarity, and significance
I should first mention that I feel I'm not a suitable reviewer for this paper, as the main topics are outside my areas of expertise – I have indicated this in my low confidence for my score. Although I cannot judge its novelty adequately, the problem studied in this paper is well motivated and exhibits solid applications in real-world. The theoretically results around the problem setting appear correct and the experiments seem to be comprehensive. In terms of the presentation, the paper is overall well written. I would like to appreciate author's efforts in introduction section to make the motivation of the problem well defined.
Questions and comments:
are there any connections between
|
K
|
and
m
? I'm a bit confusing here, as in Line 90, you mentioned "uniform lottery over
m
panels", and
K
is the set of all feasible panels by definition in Line 129.
I found it a bit difficult to parse the definition of Maximin, it might simply because that I'm not in this area. Seems that Maximin can not bound the difference of marginal of individuals, which is usually the focus of individual fairness. Could author explain this a little bit?
The fairness objectives considered in the paper are more from the point of view of individuals. I'm wondering whether the current results still hold if you replace the objectives to a group fairness measure?
Does
k
≤
m
hold for sure? If
k
>
m
, Theorem 3.2 seems to be meaningless?
Others:
Line 248: "be most natural start with" -> starting
Line 281, you probably want to unify Nash-Welfare optimal and NW-optimal.
Line 291, it should be "Claim B.6"?
==== Post Rebuttal =====
Thank the authors for the clear and detailed feedback. Taking everything into consideration, I'd like to keep my rating as-is. |
NIPS | Title
Fair Sortition Made Transparent
Abstract
Sortition is an age-old democratic paradigm, widely manifested today through the random selection of citizens’ assemblies. Recently-deployed algorithms select assemblies maximally fairly, meaning that subject to demographic quotas, they give all potential participants as equal a chance as possible of being chosen. While these fairness gains can bolster the legitimacy of citizens’ assemblies and facilitate their uptake, existing algorithms remain limited by their lack of transparency. To overcome this hurdle, in this work we focus on panel selection by uniform lottery, which is easy to realize in an observable way. By this approach, the final assembly is selected by uniformly sampling some pre-selected set of m possible assemblies. We provide theoretical guarantees on the fairness attainable via this type of uniform lottery, as compared to the existing maximally fair but opaque algorithms, for two different fairness objectives. We complement these results with experiments on real-world instances that demonstrate the viability of the uniform lottery approach as a method of selecting assemblies both fairly and transparently.
1 Introduction
In a citizens’ assembly, a panel of randomly chosen citizens is convened to deliberate and ultimately make recommendations on a policy issue. The defining aspect of citizens’ assemblies is the randomness of the process, sortition, by which participants are chosen. In practice, the sortition process works as follows: first, volunteers are solicited via thousands of letters or phone calls, which target individuals chosen uniformly at random. Those who respond affirmatively form the pool of volunteers, from which a final panel will be chosen. Finally, a selection algorithm is used to randomly select some pre-specified number k of pool members for the panel. To ensure adequate representation of demographic groups, the chosen panel is often constrained to satisfy some upper and lower quotas on feature categories such as age, gender, and ethnicity. We call a quota-satisfying panel of size k a feasible panel. As this process illustrates, citizens’ assemblies offer a way to involve the public in informed decision-making. This potential for civic participation has recently spurred a global resurgence in the popularity of citizens assemblies; they have been commissioned by governments and led to policy changes at the national level [19, 23, 12].
Prompted by the growing impact of citizens’ assemblies, there has been a recent flurry of computer scientific research on sortition, and in particular, on the fairness of the procedure by which participants are chosen [2, 13, 12]. The most practicable result to date is a family of selection algorithms proposed by Flanigan et al. [12], which are distinguished from their predecessors by their use of randomness toward the goal of fairness: while previously-used algorithms selected pool members in
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
a random but ad-hoc fashion, these new algorithms are maximally fair, ensuring that pool members have as equal probability as possible of being chosen for the panel, subject to the quotas.1 To encompass the many interpretations of “as equal as possible,” these algorithms permit the optimization of any fairness objective with certain convexity properties. There is now a publicly available implementation of the techniques of Flanigan et al. [12], called Panelot, which optimizes the egalitarian notion that no pool member has too little selection probability via the Leximin objective from fair division [21, 14]. This algorithm has already been deployed by several groups of panel organizers, and has been used to select dozens of panels worldwide.
Fairness gains in the panel selection process can lend legitimacy to citizens’ assemblies and potentially increase their adoption, but only insofar as the public trusts that these gains are truly realized. Currently, the potential for public trust in the panel selection process is limited by multiple factors. First, the latest panel selection algorithms select the final panel via behind-the-scenes computation. When panels are selected in this manner, observers cannot even verify that any given pool member has any chance of being chosen for the panel. A second and more fundamental hurdle is that randomness and probability, which are central to the sortition process, have been shown in many contexts to be difficult for people to understand and reason about [24, 20, 28]. Aiming to address these shortcomings, we propose and pursue the following notion of transparency in panel selection:
Transparency: Observers should be able to, without reasoning in-depth about probability, (1) understand the probabilities with which each individual will be chosen for the panel in theory, and (2) verify that individuals are actually selected with these probabilities in practice.
In this paper, we aim to achieve transparency and fairness simultaneously: this means advancing the defined goal of transparency, while preserving the fairness gains obtained by maximally fair selection algorithms. Although this task is reminiscent of existing AI research on trade-offs between fairness or transparency with other desirable objectives [4, 11, 3, 27], to our knowledge, this is the first investigation of the trade-off between fairness and transparency.
Setting aside for a moment the goal of fairness, we consider a method of random decision-making that is already common in the public sphere: the uniform lottery. To satisfy quotas, a uniform lottery for sortition must randomize not over individuals, but over entire feasible panels. In fact, this approach has been suggested by practitioners, and was even used in 2020 to select a citizens’ assembly in Michigan. The following example, which closely mirrors that real-world pilot,2 illustrates that panel selection via uniform lottery is naturally consistent with the transparency notion we pursue.
Suppose we construct 1000 feasible panels from a pool (possibly with duplicates), numbered 000- 999, and publish an (anonymized) list of which pool members are on each panel. We then inform spectators that we will choose each panel with equal probability. This satisfies criterion (1): spectators can easily understand that all panels will be chosen with the same probability of 1/1000, and can easily determine each individual’s selection probability by counting the number of panels containing the individual. To satisfy criterion (2), we enact the lottery by drawing each of the three digits of the final panel number individually from lottery machines. Lottery spectators can confirm that each ball is drawn with equal probability; this provides confirmation that panels are indeed being chosen with uniform probabilities, thus confirming the enactment of the proposed individual selection probabilities. In addition to its conventionality as a source of randomness, decision-making via drawing lottery balls invites an exciting spectacle, which can promote engagement with citizens’ assemblies.
This simple method neatly satisfies our transparency criteria, but it has one obvious downside: a uniform lottery over an arbitrary set of feasible panels does not guarantee any measure of equal probabilities to individuals. In fact, it is not even clear that the fairest possible uniform lottery over m panels, where m is a number conducive to selection by physical lottery (e.g. m =1000), would not be significantly less fair than maximally fair algorithms, which sample the fairest possible unconstrained distribution over panels. For example, if m is too small, there may be no uniform lottery which gives all individuals non-zero selection probability, even if each individual appears
1Quotas can preclude giving individuals exactly equal probabilities: if the panel must be 1/2 men, 1/2 women but the pool is split 3/4 men, 1/4 women, then some women must be chosen more often than some men.
2Of By For’s pilot of live panel selection via lottery can be viewed at https://vimeo.com/458304880# t=17m59s from 17:59 to 21:23. For a more detailed description, see Figure 3 and surrounding text in [12].
on some feasible panel (and so can attain a non-zero selection probability under an unconstrained distribution).
Fortunately, empirical evidence suggests that there is hope: in the 2020 pilot mentioned above, a uniform lottery over m =1000 panels was found that nearly matched the fairness of the maximally fair distribution generated by Panelot. Motivated by this anecdotal evidence, we aim to understand whether such a fair uniform lottery is guaranteed to exist in general, and if it does, how to find it. We summarize this goal in the following research questions:
Does there exist a uniform lottery overm panels that nearly preserves the fairness of the maximally fair unconstrained distribution over panels? And, Algorithmically, how do we compute such a uniform lottery?
Results and Contributions. After describing the model in Section 2, in Section 3 we prove that it is possible to round an (essentially) arbitrary distribution over panels to a uniform lottery while preserving all individuals’ selection probabilities up to only a small bounded deviation. These results use tools from discrepancy theory and randomized rounding. Intuitively, this bounded change in selection probabilities implies bounded losses in fairness; we formalize this intuition in Section 4, showing that there exists in general a uniform lottery that is nearly maximally fair, with respect to multiple choices of fairness objective. Although we would ideally like to give such bounds for the Leximin fairness objective, due to its use practice, we cannot succinctly represent bounds for this objective because it is not scalar valued. We therefore give bounds for Maximin, a closely related egalitarian objective which only considers the minimum selection probability given to any pool member [7]. We discuss in Section 4 why bounds on loss in Maximin fairness are, in the most meaningful sense, also bounds on loss in Leximin fairness. We additionally give upper bounds on the loss in Nash Welfare [21], a similarly well-established fairness objective that has also been implemented in panel selection tools [18].
Finally, in Section 5, we consider the algorithmic question in practice: given a maximally fair distribution over panels, can we actually find nearly maximally fair uniform lotteries that match our theoretical guarantees? To answer this question, we implement two standard rounding algorithms, along with near-optimal (but more computationally intensive) integer programming methods, for finding uniform lotteries. We then evaluate the performance of these algorithms in 11 real-world panel selection instances. We find that in all instances, we can compute uniform lotteries that nearly exactly preserve not only fairness with respect to both objectives, but entire sets of Leximin-optimal marginals, meaning that from the perspective of individuals, there is essentially no difference between using a uniform lottery versus the optimal unconstrained distribution sampled by the latest algorithms. We discuss these results, their implications, and how they can be deployed directly into the existing panel selection pipeline in Section 6.
2 Model
Panel Selection Problem. First, we formally define the task of panel selection for citizens’ assemblies. Let N = [n] be the pool of volunteers for the panel—individuals from the population who have indicated their willingness to participate in response to an invitation. Let F = {ft}t denote a fixed set of features of interest. Each feature ft : N → Ωt maps each pool member to their value of that feature, where Ωt is the set of ft’s possible values. For example, for feature ft = “gender”, we might have Ωt = {“male”,“female”, “non-binary”}. We define individual i’s feature vector F (i) = (ft(i))t ∈ ∏ t Ωt to be the vector encoding their values for all features in F .
As is done in practice and in previous research [13, 12], we impose that the chosen panel P must be a subset of the pool of size k, and must be representative of the broader population with respect to the features in F . This representativeness is imposed via quotas: for each feature f and corresponding value v ∈ Ω, we may have lower and upper quotas lf,v and uf,v. These quotas require that the panel contain between lf,v and uf,v individuals i such that f(i) = v.
In terms of these parameters, we define an instance of the panel selection problem as: given (N, k, F, l, u)—a pool, panel size, set of features, and sets of lower and upper quotas—randomly select a feasible panel, where a feasible panel is any set of individuals P from the collection K:
K := { P ∈ (Nk) : lf,v ≤ |{i ∈ P : f(i) = v}| ≤ uf,v for all f, v } .
Maximally Fair Selection Algorithms. A selection algorithm is a procedure that solves instances of the panel selection problem. A selection algorithm’s level of fairness on a given instance is determined by its panel distribution p, the (possibly implicit) distribution over K from which it draws the final panel. Because we care about fairness to individual pool members, we evaluate the fairness of p in terms of the fairness of selection probabilities, or marginals, that p implies for all pool members.3 We denote the vector of marginals implied by p as π, and we will sometimes specify a panel distribution as p, π to explicitly denote this pair. We say that π is realizable if it is implied by some distribution p over the feasible panels K. Maximally fair selection algorithms are those which solve the panel selection problem by sampling a specifically chosen p: one which implies marginals π that allocate probability as fairly as possible across pool members. The fairness of p, π is measured by a fairness objective F , which maps an allocation—in this case, of selection probability to pool members—to a real number measuring the allocation’s fairness. Fixing an instance, a fairness objective F , and a panel distribution p, we express the fairness of p as F(p). Existing maximally fair selection algorithms can maximize a wide range of fairness objectives, including those considered in this paper.
Leximin, Maximin, and Nash Welfare. Of the three fairness objectives we consider in this paper, Maximin and Nash Welfare (NW) have succinct formulae. For p, π they are defined as follows, where πi is the marginal of individual i:
Maximin(p) := min_{i∈N} πi,    NW(p) := ( ∏_{i∈N} πi )^{1/n}.
Intuitively, NW maximizes the geometric mean, prioritizing the marginal πi of each individual i in proportion to π−1i . Maximin maximizes the marginal probability of the individual least likely to be selected. Finally, Leximin is a refinement of Maximin, and is defined by the following algorithm: first, optimize Maximin; then, fixing the minimum marginal as a lower bound on any marginal, maximize the second-lowest marginal; and so on.
Our task: quantize a maximally fair panel distribution with minimal fairness loss. We define a 1/m-quantized panel distribution as a distribution over all feasible panels K in which all probabilities are integer multiples of 1/m. We use p̄ to denote a panel distribution with this property. Formally, while an (unconstrained) panel distribution p lies in D := {p ∈ R|K|+ : ‖p‖1 = 1}, a 1/m-quantized panel distribution in p̄ lies in D := {p̄ ∈ (Z+/m)|K| : ‖p̄‖1 = 1}. Note that a 1/m-quantized distribution p̄ immediately translates to a physical uniform lottery of over m panels (with duplicates): if p̄ assigns probability `/m to panel P , then the corresponding physical uniform lottery would contain ` duplicates of P . Thus, if we can compute a 1/m-quantized panel distribution p̄ with fairness F(p̄), then we have designed a physical uniform lottery over m panels with that same level of fairness.
Our goal follows directly from this observation: we want to show that given an instance and desired lottery size m, we can compute a 1/m-quantized distribution p̄ that is nearly as fair, with respect to a fairness notion F , as the maximally fair panel distribution in this instance p∗ ∈ arg maxp∈D F(p). We define the fairness loss in this quantization process to be the difference F(p∗) − F(p̄). We are aided in this task by the existence of practical algorithms for computing p∗ Flanigan et al. [12], which allows us to use p∗ as an input to the quantization procedure we hope to design. For intuition, we illustrate this quantization task in Figure 1, where π∗, π̄ are the marginals implied by p∗, p̄, respectively. Since the fairness of p∗, p̄ are computed in terms of π∗, π̄, it is intuitive that a quantization process that results in small marginal discrepancy, defined as the maximum change in any marginal ‖π− π̄‖∞, should also have small fairness loss. This idea motivates the upcoming section, in which we give quantization procedures with provably bounded marginal discrepancy, forming the foundation for our later bounds on fairness loss.
3A panel distribution p implies a unique vector of marginals π as follows: fixing p, π, a pool member i’s marginal selection probability πi is equal to the probability of drawing a panel from p containing that pool member. For a more detailed introduction to the connection between panel distributions and marginals, we refer readers to Flanigan et al. [12].
Figure 1: The quantization task takes as input a maximally fair panel distribution p∗ (implying marginals π∗), and outputs a 1/m-quantized panel distribution p̄ (implying marginals π̄).
3 Theoretical Bounds on Marginal Discrepancy
Here we prove that for a fixed panel distribution p, π, there exists a uniform lottery p̄, π̄ such that ‖π − π̄‖∞ is bounded. Preliminarily, we note that it is intuitive that bounds on this discrepancy should approach 0 as m becomes large with respect to n and k. To see why, begin by fixing some distribution p, π over panels: as m becomes large, we approach the scenario in which a uniform lottery p̄ can assign panels arbitrary probabilities, providing increasingly close approximations to p. Since the marginals πi are continuous with respect to p, as p̄→ p we have that π̄i → πi for all i. While this argument demonstrates convergence, it provides neither efficient algorithms nor tight bounds on the rate of convergence. In this section, our task is therefore to bound the rate of this convergence as a function of m and the other parameters of the instance. All omitted proofs of results from this section are included in Appendix B.
3.1 Worst-Case Upper Bounds
Our first set of upper bounds results from rounding STANDARD LP, the LP that most directly arises from our problem. This LP is defined in terms of a panel distribution p, π, and M, an n × |K| matrix describing which individuals are on which feasible panels: Mi,P = 1 if i ∈ P and Mi,P = 0 otherwise.

STANDARD LP:
    Mp = π        (3.1)
    ‖p‖1 = 1      (3.2)
    p ≥ 0.

Here, (3.1) specifies n total constraints. Our goal is to round p to a uniform lottery p̄ over m panels (so the entries of p̄ are multiples of 1/m) such that (3.2) is maintained exactly, and no constraint in (3.1) is relaxed by too much, i.e., ‖Mp − Mp̄‖∞ = ‖π − π̄‖∞ remains small. Randomized rounding is a natural first approach. Any randomized rounding scheme satisfying negative association (which includes several that respect (3.2)) yields the following bound:
Theorem 3.1. For any realizable π, we may efficiently randomly generate p̄ such that its marginals π̄ satisfy
    ‖π − π̄‖∞ = O(√(n log n) / m).
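To make the rounding step concrete, here is a minimal sketch of one such randomized rounding: it draws m panels from p and takes the empirical distribution (multinomial counts, which sum to m by construction, so (3.2) holds exactly). The data layout and function names are illustrative assumptions, not the exact scheme analyzed in the proof of Theorem 3.1.

```python
import numpy as np

def marginals(panels, probs, n):
    """Selection probability of each of the n pool members implied by (panels, probs)."""
    pi = np.zeros(n)
    for P, q in zip(panels, probs):
        for i in P:
            pi[i] += q
    return pi

def multinomial_round(panels, probs, m, seed=None):
    """Round p to a uniform lottery over m panels by sampling panel counts
    from a multinomial distribution; the counts sum to m by construction."""
    rng = np.random.default_rng(seed)
    counts = rng.multinomial(m, probs)
    return [P for P, c in zip(panels, counts) for _ in range(c)]

# Example: compare marginals before and after rounding.
panels = [(0, 1), (1, 2), (0, 2)]        # toy feasible panels of size k = 2 over n = 3 members
probs  = [0.5, 0.3, 0.2]
lottery = multinomial_round(panels, probs, m=1000, seed=0)
pi     = marginals(panels, probs, n=3)
pi_bar = marginals(lottery, [1.0 / len(lottery)] * len(lottery), n=3)
print(np.max(np.abs(pi - pi_bar)))       # empirical ‖π − π̄‖∞
```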
Fortunately, there is potential for improvement: randomized rounding does not make full use of the fact that M is k-column sparse, due to each panel in K containing exactly k individuals. We use this sparsity to get a stronger bound when n ≫ k², which is a practically significant parameter regime. The proof applies a dependent rounding algorithm based on a theorem of Beck and Fiala [1], with a modification that ensures the exact satisfaction of constraint (3.2). Theorem 3.2. For any realizable π, we may efficiently construct p̄ such that its marginals π̄ satisfy
‖π − π̄‖∞ ≤ k/m.
This bound is already meaningful in practice, where k ≪ m is ensured by the fact that m is pre-chosen along with k prior to panel selection. Note also that k is typically on the order of 100
(Table 1), whereas a uniform lottery can in practice be easily made orders of magnitude larger, as each additional factor of 10 in the size of the uniform lottery requires drawing only one more ball (and there is no fairness cost to drawing a larger lottery, since increasing m allows for uniform lotteries which better approximate the unconstrained optimal distribution).
3.2 Beyond-Worst-Case Upper Bounds
As we will demonstrate in Section 3.3, we cannot hope for a better worst-case upper bound than poly(k)/m. We thus shift our consideration to instances which are “simple” in their feature structure, having a small number of features (Theorem B.7), a limited number of unique feature vectors in the pool (Theorem 3.3), or multiple individuals that share each feature vector present (Theorem B.8). The beyond-worst-case bounds given by Theorem 3.3 and Theorem B.8 asymptotically dominate our worst-case bounds in Theorem 3.1 and Theorem 3.2, respectively. Moreover, Theorem 3.3 dominates all other upper bounds in 10 of the 11 practical instances studied in Section 5.
We note that while our worst-case upper bounds implied the near-preservation of any realizable set of marginals π, some of our beyond-worst-case results apply to only realizable π which are anonymous, meaning that πi are equal for all i with equal feature vectors. We contend that any reasonable set of marginals should have this property,4 and furthermore that the “anonymization” of any realizable π is also realizable (Claim B.6); hence this restriction is insignificant. Our beyondworst-case bounds also differ from our worst-case bounds in that they depart from the paradigm of rounding p, instead randomizing over panels that may fall outside the support of p.
The main beyond-worst-case bound we give, stated below, is parameterized by |C|, where C is the set of unique feature vectors that appear in the pool. All omitted proofs and other beyond worst-case results are stated and proven in Appendix B.
Theorem 3.3. If π is anonymous and realizable, then we may efficiently construct p̄ such that its marginals π̄ satisfy
    ‖π − π̄‖∞ = O(√(|C| log |C|) / m).
|C| is at most n, so this bound dominates Theorem 3.1. In 10 of the 11 real-world instances we study, |C| is also smaller than k² (Appendix A), in which case this bound also dominates Theorem 3.2. At a high level, our beyond-worst-case upper bounds are obtained not by directly rounding p, but instead using the structure of the sortition instance to abstract the problem into one about “types.” For this bound we then solve an LP in terms of “types,” round that LP, and then reconstruct a rounded panel distribution p̄, π̄ from the “type” solution. In particular, the types of individuals are the feature vectors which appear in the pool, and the types of panels are the multisets of k feature vectors that satisfy the instance quotas. Fixing an instance, we project some p into type space by viewing it as a distribution p over types of panels K, inducing a marginal τc for each type of individual c ∈ C. To begin, we define the TYPE LP, which is analogous to Eq. (3.1). We let Q be the type analog of M, so that entry Qcj is the number of individuals i with F(i) = c contained in panels of type j ∈ K.5 Then,
TYPE LP:
    Q p = τ       (3.3)
    ‖p‖1 = 1      (3.4)
    p ≥ 0.
We round p in this LP to a panel-type distribution p̄ while preserving (3.4). All that remains, then, is to construct some p̄, π̄ that is consistent with this rounded type distribution and for which ‖π − π̄‖∞ is small. This p̄ is in general supported by panels outside of supp(p), unlike the p̄ obtained by Theorem 3.1. It is the anonymity of π which allows us to construct these new panels and prove that they are feasible for the instance.
4The class of all anonymous marginals π includes the maximizers π∗ of all reasonable fairness objectives, and second, this condition is satisfied by all existing selection algorithms used in practice, to our knowledge.
5Completing the analogy, C,K, Q, p, p̄, τ are the “type” versions of N,K,M, p, p̄, π from the original LP.
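As a concrete illustration of the type abstraction, the sketch below builds the set of types C, the per-type pool counts, and the matrix Q from a pool and a list of quota-satisfying panel types. The data layout (feature vectors as tuples, panel types as Counters) is an assumption made for illustration rather than the paper's implementation.

```python
from collections import Counter

def build_type_lp_data(pool_feature_vectors, panel_types):
    """pool_feature_vectors: one feature-vector tuple per pool member.
    panel_types: list of Counters, each mapping a feature vector to the number
    of seats it fills on that panel type (a multiset of size k).
    Returns the distinct types C, the pool counts n_c per type, and the
    |C| x |panel_types| matrix Q with Q[c][j] seats of type c on panel type j."""
    C = sorted(set(pool_feature_vectors))
    n_c = Counter(pool_feature_vectors)
    Q = [[panel.get(c, 0) for panel in panel_types] for c in C]
    return C, n_c, Q

# Toy pool with two features (gender, age bracket) and two k = 2 panel types.
pool = [("F", "18-30"), ("F", "18-30"), ("M", "31-60"), ("M", "61+")]
panel_types = [Counter({("F", "18-30"): 1, ("M", "31-60"): 1}),
               Counter({("F", "18-30"): 1, ("M", "61+"): 1})]
C, n_c, Q = build_type_lp_data(pool, panel_types)
```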
3.3 Lower Bounds
This method of using bounded discrepancy to derive nearly fairness-optimal uniform lotteries has its limits, since there are even sparse M and fractional x for which no integer x̄ yields nearby Mx̄. In the worst case, we establish lower bounds by modifying those of Beck and Fiala [25]:
Theorem 3.4. There exist p, π for which, for all uniform lotteries p̄, π̄,
    ‖π − π̄‖∞ = Ω(√k / m).
Our k-dependent upper and lower bounds are separated by a factor of √ k, matching the current upper and lower bounds of the Beck-Fiala conjecture as applied to linear discrepancy (also known as the lattice approximation problem [26]). The respective gaps are incomparable, however, since for a given x ∈ [0, 1]n, the former problem aims to minimize ‖M(x− x̄)‖∞ over x̄ ∈ {0, 1}n, while we aim to do the same over a subset of the x̄ ∈ Zn for which∑j xj = ∑ j x̄j (see Lemma B.4).
4 Theoretical Bounds on Fairness Loss
Since the fairness of a distribution p is determined by its marginals π, it is intuitive that if uniform lotteries incur only small marginal discrepancy (per Section 3), then they should also incur only small fairness losses. This should hold for any fairness notion that is sufficiently “smooth” (i.e., doesn’t change too quickly with changing marginals) in the vicinity of p, π.
Although our bounds from Section 3 apply to any reasonable initial distribution p, we are particularly concerned with bounding fairness loss from maximally fair initial distributions p∗. Here, we specifically consider such p∗ that are optimal with respect to Maximin and NW. We note that, since there exist anonymous p∗, π∗ that maximize these objectives, we can apply any upper bound from Section 3 to upper bound ‖π∗ − π̄‖∞. We defer omitted proofs to Appendix C.
4.1 Maximin
Since Leximin is the fairness objective optimized by the maximally fair algorithm used in practice, it would be most natural to start with a p∗ that is Leximin-optimal and bound fairness loss with respect to this objective. However, the fact that Leximin fairness cannot be represented by a single scalar value prevents us from formulating such an approximation guarantee. Instead, we first pursue bounds on the closely-related objective, Maximin. We argue that in the most meaningful sense, a worst-case Maximin guarantee is a Leximin guarantee: such a bound would show limited loss in the minimum marginal, and it is Leximin’s lexicographically first priority to maximize the minimum marginal.
First, we show there exists some p̄, π̄ that gives bounded Maximin loss from p∗, π∗, the Maximinoptimal unconstrained distribution. This bound follows from Theorems 3.3 and B.8, using the simple observation that p̄ can decrease the lowest marginal given by p∗ by no more than ‖π∗ − π̄‖∞. Here nmin := minc nc denotes the smallest number of individuals which share any feature vector c ∈ C. Corollary 4.1. By Theorem 3.3 and B.8, for Maximin-optimal p∗, there exists a uniform lottery p̄ that satisfies
    Maximin(p∗) − Maximin(p̄) = (1/m) · O( min{ √(|C| log |C|), k/(nmin + 1) } ).
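For a rough sense of scale, the bound can be evaluated numerically; the constant hidden by the O(·) is unspecified, so we set it to 1 here purely for illustration.

```python
import math

def maximin_loss_bound(m, num_types, k, n_min, const=1.0):
    """Evaluate the Corollary 4.1 upper bound with an assumed constant."""
    return (const / m) * min(math.sqrt(num_types * math.log(num_types)),
                             k / (n_min + 1))

# e.g., 60 distinct feature vectors in the pool, k = 100, n_min = 2, m = 1000
print(maximin_loss_bound(m=1000, num_types=60, k=100, n_min=2))
```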
Theorem 3.4 demonstrates that we cannot get an upper bound on Maximin loss stronger than O(√k/m) using a uniform bound on changes in all πi. However, since Maximin is concerned only with the smallest πi, it seems plausible that better upper bounds on Maximin loss could result from rounding π while tightly controlling only losses in the smallest πi's, while giving freer rein to larger marginals. We show that this is not the case by further modifying the instances from Theorem 3.4 to obtain the following lower bound on the Maximin loss:
Theorem 4.1. There exists a Maximin-optimal p∗ such that, for all uniform lotteries p̄,
    Maximin(p∗) − Maximin(p̄) = Ω(√k / m).
4.2 Nash Welfare
As NW has also garnered interest by practitioners and is applicable in practice [18], we upper-bound the NW fairness loss. Unlike Maximin loss, an upper bound on NW loss does not immediately follow from one on ‖π − π̄‖∞, because decreases in smaller marginals have larger negative impact on the NW. As a result, the upper bound on NW resulting from Section 3 is slightly weaker than that on Maximin:
Theorem 4.2. For NW-optimal p∗, there exists a uniform lottery p̄ that satisfies
    NW(p∗) − NW(p̄) = (k/m) · O( min{ √(|C| log |C|), k/(nmin + 1) } ).
We give an overview of the proof of Theorem 4.2. To begin, fix a NW-optimizing panel distribution p∗, π∗. Before applying our upper bounds on marginal discrepancy from Section 3, we must contend with the fact that if this bounded loss is suffered by already-tiny marginals, the NW may decrease substantially or even go to 0. Thus, we first prove Lemmas 4.1 and 4.2, which together imply that no marginal in π∗ is smaller than 1/n.
Lemma 4.1. For NW-optimal p∗ over a support of panels supp(p∗), there exists a constant λ ∈ R+ such that, for all P ∈ supp(p∗), ∑i∈P 1/π∗i = λ.
Lemma 4.2. For NW-optimal p∗, π∗, we have that π∗i ≥ 1/n for all i ∈ N .
Lemma 4.1 follows from the fact that the partial derivative of NW with respect to the probability it assigns a given panel must be the same as that with respect to any other panel at p∗ (otherwise, mass in the distribution could be shifted to increase the NW). Lemma 4.2 then follows by the additional observation that E_{P∼p∗}[ ∑i∈P 1/π∗i ] = n.
Finally Lemma 4.3 follows from the fact that Lemma 4.2 limits the potential multiplicative, and therefore additive, impact on the NW of decreasing any marginal by ‖π − π̄‖∞: Lemma 4.3. For NW-optimal p∗, π∗, there exists a uniform lottery p̄, π̄ that satisfies NW(p∗) − NW(p̄) ≤ k ‖π∗ − π̄‖∞.
As the NW-optimal marginals π∗ are anonymous, we can apply the upper bounds given by Theorem 3.3 and Theorem B.8 to show the existence of a p̄, π̄ satisfying the claim of the theorem.
5 Practical Algorithms for Computing Fair Uniform Lotteries
Algorithms. First, we implement versions of two existing rounding algorithms, which are implicit in our worst-case upper bounds.6 The first is Pipage rounding [16], or PIPAGE, a randomized rounding scheme satisfying negative association [10]. The second is BECK-FIALA, the dependent rounding scheme used in the proof of Theorem 3.2. To benchmark these algorithms against the highest level of fairness they could possibly achieve, we use integer programming (IP) to compute the fairest possible uniform lotteries over supp(p∗), the panels over which p∗ randomizes.7 We define IP-MAXIMIN and IP-NW to find uniform lotteries over supp(p∗) maximizing Maximin and NW, respectively. We remark that the performance of these IPs is still subject to our theoretical upper and lower bounds. We provide implementation details in Appendix D.1.
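Neither PIPAGE nor BECK-FIALA is reproduced here; as a simple stand-in, the sketch below uses systematic (dependent) rounding, which also keeps every panel's count within one of m·pj and preserves marginals in expectation. The interface is an assumption made for illustration.

```python
import numpy as np

def systematic_round(probs, m, seed=None):
    """Round a distribution over panels to integer copy-counts that sum to m.
    Each panel j receives floor(m*p_j) or ceil(m*p_j) copies, and the expected
    count is exactly m*p_j, so marginals are preserved in expectation."""
    rng = np.random.default_rng(seed)
    targets = m * np.asarray(probs, dtype=float)
    edges = np.concatenate(([0.0], np.cumsum(targets)))   # spans [0, m]
    points = rng.uniform() + np.arange(m)                  # m evenly spaced points
    idx = np.searchsorted(edges, points, side="right") - 1
    idx = np.clip(idx, 0, len(probs) - 1)                  # guard against float round-off
    counts = np.zeros(len(probs), dtype=int)
    for j in idx:
        counts[j] += 1
    return counts          # counts[j] / m is the quantized probability of panel j

counts = systematic_round([0.37, 0.25, 0.21, 0.17], m=1000, seed=1)
print(counts, counts.sum())   # -> [370 250 210 170] 1000
```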
One question is whether we should prefer the IPs or the rounding algorithms for real-world applications. Although IP-MAXIMIN appears to find good solutions at practicable speeds, IP-NW converges to optimality prohibitively slowly in some instances (see Appendix D.2 for runtimes). At the same time, we find that our simpler rounding algorithms give near-optimal uniform lotteries with respect to both fairness objectives. Also in favor of simpler rounding algorithms, many randomized rounding procedures (including Pipage rounding) have the advantage that they exactly
6We do not implement the algorithm implicit in Theorem 3.3 because our results already present sufficient alternatives for finding excellent uniform lotteries in practice.
7Note that these lotteries are not necessarily universally optimal, as they can randomize over only supp(p∗); conceivably, one could find a fairer uniform lottery by also randomizing over panels not in supp(p∗). However, PIPAGE and BECK-FIALA are also restricted in this way, and thus must be weakly dominated by the IP.
preserve marginals over the combined steps of randomly rounding to a uniform lottery and then randomly sampling it—a guarantee that is much more challenging to achieve with IPs.
Uniform lotteries nearly exactly preserve Maximin, Nash Welfare fairness. We first measure the fairness of uniform lotteries produced by these algorithms in 11 real-world panel selection instances from 7 different organizations worldwide (instance details in Appendix A). In all experiments, we generate a lottery of size m = 1000. This is fairly small; it requires drawing only 3 balls from lottery machines, and in one instance we have that m < n. We nevertheless see excellent performance of all algorithms, and note that this performance will only improve with larger m.
Figure 2 shows the Maximin fairness of the uniform lottery computed by PIPAGE, BECK-FIALA, and IP-MAXIMIN for each instance. For intuition, recall that the level of Maximin fairness given by any lottery is exactly the minimum marginal assigned to any individual by that lottery. The upper edges of the gray boxes in Fig. 2 correspond to the optimal fairness attained by an unconstrained distribution p∗. These experiments reveal that the cost of transparency to Maximin-fairness is practically non-existent: across instances, the quantized distributions computed by IP-MAXIMIN decrease the minimum marginal by at most 2.1/m, amounting to a loss of no more than 0.0021 in the minimum marginal probability in any instance. Visually, we can see that this loss is negligible relative to the original magnitude of even the smallest marginals given by p∗. Surprisingly, though PIPAGE and BECK-FIALA do not aim to optimize any fairness objective, they achieve only slightly larger losses in Maximin fairness, with PIPAGE outperforming BECK-FIALA. Finally, the heights of the gray boxes indicate that our theoretical bounds are often meaningful in practice, giving lower bounds on Maximin fairness well above zero in nine out of eleven instances. We note these bounds only tighten with larger m. We present similarly encouraging results on NW loss in Appendix D.3.
Uniform lotteries nearly preserve all Leximin marginals. We still remain one step away from practice: our examination of Maximin does not address whether uniform lotteries can attain the finer-tuned fairness properties of the Leximin-optimal distributions currently used in practice. Fortunately, our results from Section 3 imply the existence of a quantized p̄ that closely approximates all marginals given by the Leximin-optimal distribution p∗, π∗. We evaluate the extent to which PIPAGE and BECK-FIALA preserve these marginals in Fig. 3. They are benchmarked against a new IP, IP-MARGINALS, which computes the uniform lottery over supp(p∗) minimizing ‖π∗ − π̄‖∞.
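A minimal version of such an IP can be written with an off-the-shelf MILP modeling library; the sketch below, which assumes the PuLP package and its bundled CBC solver are available, minimizes the maximum marginal deviation over uniform lotteries supported on a given list of panels. It is an illustrative formulation, not the authors' implementation of IP-MARGINALS.

```python
import pulp

def ip_marginals(panels, pi_star, n, m):
    """Choose integer copy-counts c_j (summing to m) over the given panels to
    minimize the worst deviation between pi_star and the lottery's marginals."""
    prob = pulp.LpProblem("ip_marginals", pulp.LpMinimize)
    c = [pulp.LpVariable(f"c_{j}", lowBound=0, upBound=m, cat="Integer")
         for j in range(len(panels))]
    d = pulp.LpVariable("max_dev", lowBound=0)
    prob += d                                    # objective: minimize worst deviation
    prob += pulp.lpSum(c) == m                   # exactly m (possibly duplicated) panels
    for i in range(n):
        cover = pulp.lpSum(c[j] for j, P in enumerate(panels) if i in P)
        prob += cover - m * pi_star[i] <= m * d  # |cover/m - pi_star_i| <= d
        prob += m * pi_star[i] - cover <= m * d
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [int(v.value()) for v in c], d.value()

# Toy usage on three panels over n = 3 pool members.
counts, dev = ip_marginals([(0, 1), (0, 2), (1, 2)], pi_star=[0.8, 0.6, 0.6], n=3, m=10)
```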
Figure 3 demonstrates that in the instance “sf(a)”, all algorithms produce marginals that deviate negligibly from those given by π∗. Analogous results on remaining instances appear in Appendix D.4 and show similar results. As was the case for Maximin, we see that our theoretical bounds are meaningful, but that we can consistently outperform them in real-world instances.
6 Discussion
Our aim was to show that uniform lotteries can preserve fairness, and our results ultimately suggest this, along with something stronger: that in practical instances, uniform lotteries can reliably almost exactly replicate the entire set of marginals given by the optimal unconstrained panel distribution. Our rounding algorithms can thus be plugged directly into the existing panel selection pipeline with essentially no impact on individuals’ selection probabilities, thus enabling translation of the output of Panelot (and other maximally fair algorithms) to a nearly maximally fair and transparent panel selection procedure. We note that our methods are not just compatible with ball-drawing lotteries, but any form of uniform physical randomness (e.g. dice, wheel-spinning, etc.).
Although we achieve our stated notion of transparency, a limitation of this notion is that it focuses on the final stage of the panel selection process. A more holistic notion of transparency might require that onlookers can verify that the panel is not being intentionally stacked with certain individuals. This work does not fully enable such verification: although onlookers can now observe individuals’ marginals, they still cannot verify that these marginals are actually maximally fair without verifying the underlying optimization algorithms. In particular, in the common case where quotas require even maximally fair panel distributions to select certain individuals with probability near one, onlookers cannot distinguish those from unfair distributions engineered such that one or more pool members are chosen with probability near one.
In research on economics, fair division, and other areas of AI, randomness is often proposed as a tool to make real-world systems fairer [17, 6, 15]. Nonetheless, in practice, these systems (with a few exceptions, such as school choice [22]) remain stubbornly deterministic. Among the hurdles to bringing the theoretical benefits of randomness into practice is that allocation mechanisms fare best when they can be readily understood, and that randomness can be perceived as undesirable or suspect. Sortition is a rather unique paradigm at the heart of this tension: it relies centrally on randomness, while in the public sphere it is attaining increasing political influence. It is therefore a uniquely high-impact domain in which to study how to combine the benefits of randomness, such as fairness, with transparency. We hope that this work and its potential for impact will inspire the investigation of fairness-transparency tradeoffs in other AI applications.
Acknowledgements. We would foremost like to thank Paul Gölz for helpful technical conversations and insights on the practical motivations for this research. We also thank Anupam Gupta for helpful technical conversations. Finally, we thank several organizations for supplying real-world citizens’ assembly data, including the Sortition Foundation, the Center for Climate Assemblies, Healthy Democracy, MASS LBP, Nexus Institute, Of by For, and New Democracy.
Funding and Competing Interests. This work was partially supported by National Science Foundation grants CCF-2007080, IIS-2024287 and CCF-1733556; and by Office of Naval Research grant N00014-20-1-2488. Bailey Flanigan is supported by the National Science Foundation Graduate Research Fellowship and the Fannie and John Hertz Foundation. None of the authors have competing interests. | 1. What is the focus of the paper regarding sortition methods for panel selection?
2. What are the strengths of the proposed approach, particularly in terms of fairness guarantees?
3. Do you have any concerns or questions about the method's ability to ensure Leximin solutions?
4. How do the upper bounds for the fairness difference between the p* and \bar{p} vary between Maximin and Nash Welfare notions?
5. What is the significance of the extra factor of k between the two upper bounds?
6. How does the reviewer assess the overall contribution and practical relevance of the paper in the context of real-world applications? | Summary Of The Paper
Review | Summary Of The Paper
This paper gives a method for selecting panels via sortition that offers transparency that can be theoretically guaranteed and empirically audited. Their method is based on uniform lotteries and aims to achieve Maximin and Nash Welfare notions of fairness.
Review
While previous works present the merits of sortition algorithms and give methods to better sample for sortition based committee selection, this work furthers these goals by analyzing uniform lottery with fairness constraints.
The worst-case marginal discrepancy bounds seem reasonable and it is great to see a tighter upper bound for n >> k^2 case since that is likely applicable in most applications.
I am not sure I fully understand the argument of why a worst-case Maximin guarantee is a Leximin guarantee. It seems like uniform lottery does not necessarily preclude Leximin solutions.
It is nice that the authors consider both Maximin and Nash Welfare when bounding the fairness difference between the p* and \bar{p}. What explains the extra factor of k between the Maximin and Nash upper bounds?
Considering that sortition is increasingly used in the real world, I believe this paper makes an important contribution in analyzing fairness costs of a transparent sortition process: uniform lottery. |
NIPS | Title
Fair Sortition Made Transparent
Abstract
Sortition is an age-old democratic paradigm, widely manifested today through the random selection of citizens’ assemblies. Recently-deployed algorithms select assemblies maximally fairly, meaning that subject to demographic quotas, they give all potential participants as equal a chance as possible of being chosen. While these fairness gains can bolster the legitimacy of citizens’ assemblies and facilitate their uptake, existing algorithms remain limited by their lack of transparency. To overcome this hurdle, in this work we focus on panel selection by uniform lottery, which is easy to realize in an observable way. By this approach, the final assembly is selected by uniformly sampling some pre-selected set of m possible assemblies. We provide theoretical guarantees on the fairness attainable via this type of uniform lottery, as compared to the existing maximally fair but opaque algorithms, for two different fairness objectives. We complement these results with experiments on real-world instances that demonstrate the viability of the uniform lottery approach as a method of selecting assemblies both fairly and transparently.
1 Introduction
In a citizens’ assembly, a panel of randomly chosen citizens is convened to deliberate and ultimately make recommendations on a policy issue. The defining aspect of citizens’ assemblies is the randomness of the process, sortition, by which participants are chosen. In practice, the sortition process works as follows: first, volunteers are solicited via thousands of letters or phone calls, which target individuals chosen uniformly at random. Those who respond affirmatively form the pool of volunteers, from which a final panel will be chosen. Finally, a selection algorithm is used to randomly select some pre-specified number k of pool members for the panel. To ensure adequate representation of demographic groups, the chosen panel is often constrained to satisfy some upper and lower quotas on feature categories such as age, gender, and ethnicity. We call a quota-satisfying panel of size k a feasible panel. As this process illustrates, citizens’ assemblies offer a way to involve the public in informed decision-making. This potential for civic participation has recently spurred a global resurgence in the popularity of citizens assemblies; they have been commissioned by governments and led to policy changes at the national level [19, 23, 12].
Prompted by the growing impact of citizens’ assemblies, there has been a recent flurry of computer scientific research on sortition, and in particular, on the fairness of the procedure by which participants are chosen [2, 13, 12]. The most practicable result to date is a family of selection algorithms proposed by Flanigan et al. [12], which are distinguished from their predecessors by their use of randomness toward the goal of fairness: while previously-used algorithms selected pool members in
a random but ad-hoc fashion, these new algorithms are maximally fair, ensuring that pool members have as equal probability as possible of being chosen for the panel, subject to the quotas.1 To encompass the many interpretations of “as equal as possible,” these algorithms permit the optimization of any fairness objective with certain convexity properties. There is now a publicly available implementation of the techniques of Flanigan et al. [12], called Panelot, which optimizes the egalitarian notion that no pool member has too little selection probability via the Leximin objective from fair division [21, 14]. This algorithm has already been deployed by several groups of panel organizers, and has been used to select dozens of panels worldwide.
Fairness gains in the panel selection process can lend legitimacy to citizens’ assemblies and potentially increase their adoption, but only insofar as the public trusts that these gains are truly realized. Currently, the potential for public trust in the panel selection process is limited by multiple factors. First, the latest panel selection algorithms select the final panel via behind-the-scenes computation. When panels are selected in this manner, observers cannot even verify that any given pool member has any chance of being chosen for the panel. A second and more fundamental hurdle is that randomness and probability, which are central to the sortition process, have been shown in many contexts to be difficult for people to understand and reason about [24, 20, 28]. Aiming to address these shortcomings, we propose and pursue the following notion of transparency in panel selection:
Transparency: Observers should be able to, without reasoning in-depth about probability, (1) understand the probabilities with which each individual will be chosen for the panel in theory, and (2) verify that individuals are actually selected with these probabilities in practice.
In this paper, we aim to achieve transparency and fairness simultaneously: this means advancing the defined goal of transparency, while preserving the fairness gains obtained by maximally fair selection algorithms. Although this task is reminiscent of existing AI research on trade-offs between fairness or transparency with other desirable objectives [4, 11, 3, 27], to our knowledge, this is the first investigation of the trade-off between fairness and transparency.
Setting aside for a moment the goal of fairness, we consider a method of random decision-making that is already common in the public sphere: the uniform lottery. To satisfy quotas, a uniform lottery for sortition must randomize not over individuals, but over entire feasible panels. In fact, this approach has been suggested by practitioners, and was even used in 2020 to select a citizens’ assembly in Michigan. The following example, which closely mirrors that real-world pilot,2 illustrates that panel selection via uniform lottery is naturally consistent with the transparency notion we pursue.
Suppose we construct 1000 feasible panels from a pool (possibly with duplicates), numbered 000- 999, and publish an (anonymized) list of which pool members are on each panel. We then inform spectators that we will choose each panel with equal probability. This satisfies criterion (1): spectators can easily understand that all panels will be chosen with the same probability of 1/1000, and can easily determine each individual’s selection probability by counting the number of panels containing the individual. To satisfy criterion (2), we enact the lottery by drawing each of the three digits of the final panel number individually from lottery machines. Lottery spectators can confirm that each ball is drawn with equal probability; this provides confirmation that panels are indeed being chosen with uniform probabilities, thus confirming the enactment of the proposed individual selection probabilities. In addition to its conventionality as a source of randomness, decision-making via drawing lottery balls invites an exciting spectacle, which can promote engagement with citizens’ assemblies.
This simple method neatly satisfies our transparency criteria, but it has one obvious downside: a uniform lottery over an arbitrary set of feasible panels does not guarantee any measure of equal probabilities to individuals. In fact, it is not even clear that the fairest possible uniform lottery over m panels, where m is a number conducive to selection by physical lottery (e.g. m =1000), would not be significantly less fair than maximally fair algorithms, which sample the fairest possible unconstrained distribution over panels. For example, if m is too small, there may be no uniform lottery which gives all individuals non-zero selection probability, even if each individual appears
1Quotas can preclude giving individuals exactly equal probabilities: if the panel must be 1/2 men, 1/2 women but the pool is split 3/4 men, 1/4 women, then some women must be chosen more often than some men.
2Of By For’s pilot of live panel selection via lottery can be viewed at https://vimeo.com/458304880# t=17m59s from 17:59 to 21:23. For a more detailed description, see Figure 3 and surrounding text in [12].
on some feasible panel (and so can attain a non-zero selection probability under an unconstrained distribution).
Fortunately, empirical evidence suggests that there is hope: in the 2020 pilot mentioned above, a uniform lottery over m =1000 panels was found that nearly matched the fairness of the maximally fair distribution generated by Panelot. Motivated by this anecdotal evidence, we aim to understand whether such a fair uniform lottery is guaranteed to exist in general, and if it does, how to find it. We summarize this goal in the following research questions:
Does there exist a uniform lottery over m panels that nearly preserves the fairness of the maximally fair unconstrained distribution over panels? And, algorithmically, how do we compute such a uniform lottery?
Results and Contributions. After describing the model in Section 2, in Section 3 we prove that it is possible to round an (essentially) arbitrary distribution over panels to a uniform lottery while preserving all individuals’ selection probabilities up to only a small bounded deviation. These results use tools from discrepancy theory and randomized rounding. Intuitively, this bounded change in selection probabilities implies bounded losses in fairness; we formalize this intuition in Section 4, showing that there exists in general a uniform lottery that is nearly maximally fair, with respect to multiple choices of fairness objective. Although we would ideally like to give such bounds for the Leximin fairness objective, due to its use in practice, we cannot succinctly represent bounds for this objective because it is not scalar-valued. We therefore give bounds for Maximin, a closely related egalitarian objective which only considers the minimum selection probability given to any pool member [7]. We discuss in Section 4 why bounds on loss in Maximin fairness are, in the most meaningful sense, also bounds on loss in Leximin fairness. We additionally give upper bounds on the loss in Nash Welfare [21], a similarly well-established fairness objective that has also been implemented in panel selection tools [18].
Finally, in Section 5, we consider the algorithmic question in practice: given a maximally fair distribution over panels, can we actually find nearly maximally fair uniform lotteries that match our theoretical guarantees? To answer this question, we implement two standard rounding algorithms, along with near-optimal (but more computationally intensive) integer programming methods, for finding uniform lotteries. We then evaluate the performance of these algorithms in 11 real-world panel selection instances. We find that in all instances, we can compute uniform lotteries that nearly exactly preserve not only fairness with respect to both objectives, but entire sets of Leximin-optimal marginals, meaning that from the perspective of individuals, there is essentially no difference between using a uniform lottery versus the optimal unconstrained distribution sampled by the latest algorithms. We discuss these results, their implications, and how they can be deployed directly into the existing panel selection pipeline in Section 6.
2 Model
Panel Selection Problem. First, we formally define the task of panel selection for citizens’ assemblies. Let N = [n] be the pool of volunteers for the panel—individuals from the population who have indicated their willingness to participate in response to an invitation. Let F = {ft}t denote a fixed set of features of interest. Each feature ft : N → Ωt maps each pool member to their value of that feature, where Ωt is the set of ft’s possible values. For example, for feature ft = “gender”, we might have Ωt = {“male”,“female”, “non-binary”}. We define individual i’s feature vector F (i) = (ft(i))t ∈ ∏ t Ωt to be the vector encoding their values for all features in F .
As is done in practice and in previous research [13, 12], we impose that the chosen panel P must be a subset of the pool of size k, and must be representative of the broader population with respect to the features in F . This representativeness is imposed via quotas: for each feature f and corresponding value v ∈ Ω, we may have lower and upper quotas lf,v and uf,v. These quotas require that the panel contain between lf,v and uf,v individuals i such that f(i) = v.
In terms of these parameters, we define an instance of the panel selection problem as: given (N, k, F, l, u)—a pool, panel size, set of features, and sets of lower and upper quotas—randomly select a feasible panel, where a feasible panel is any set of individuals P from the collection K:
K := { P ∈ (Nk) : lf,v ≤ |{i ∈ P : f(i) = v}| ≤ uf,v for all f, v } .
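To make the quota constraints concrete, membership in K can be tested directly; the sketch below assumes a simple data representation (a dict of feature values per individual and (feature, value)-keyed quota dicts) chosen only for illustration.

```python
from collections import Counter

def is_feasible(panel, k, features, lower, upper):
    """Check whether `panel` (a set of pool-member ids) lies in K: it has size k
    and meets every lower/upper quota on (feature, value) pairs."""
    if len(panel) != k:
        return False
    counts = Counter((f, v) for i in panel for f, v in features[i].items())
    meets_lower = all(counts[key] >= lo for key, lo in lower.items())
    meets_upper = all(counts[key] <= up for key, up in upper.items())
    return meets_lower and meets_upper

# Toy instance: k = 2, at least one woman, at most one person aged 61+.
features = {0: {"gender": "F", "age": "18-30"},
            1: {"gender": "M", "age": "61+"},
            2: {"gender": "M", "age": "31-60"}}
lower = {("gender", "F"): 1}
upper = {("age", "61+"): 1}
print(is_feasible({0, 1}, 2, features, lower, upper))  # True
print(is_feasible({1, 2}, 2, features, lower, upper))  # False: no woman
```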
Maximally Fair Selection Algorithms. A selection algorithm is a procedure that solves instances of the panel selection problem. A selection algorithm’s level of fairness on a given instance is determined by its panel distribution p, the (possibly implicit) distribution over K from which it draws the final panel. Because we care about fairness to individual pool members, we evaluate the fairness of p in terms of the fairness of selection probabilities, or marginals, that p implies for all pool members.3 We denote the vector of marginals implied by p as π, and we will sometimes specify a panel distribution as p, π to explicitly denote this pair. We say that π is realizable if it is implied by some distribution p over the feasible panels K. Maximally fair selection algorithms are those which solve the panel selection problem by sampling a specifically chosen p: one which implies marginals π that allocate probability as fairly as possible across pool members. The fairness of p, π is measured by a fairness objective F , which maps an allocation—in this case, of selection probability to pool members—to a real number measuring the allocation’s fairness. Fixing an instance, a fairness objective F , and a panel distribution p, we express the fairness of p as F(p). Existing maximally fair selection algorithms can maximize a wide range of fairness objectives, including those considered in this paper.
Leximin, Maximin, and Nash Welfare. Of the three fairness objectives we consider in this paper, Maximin and Nash Welfare (NW) have succinct formulae. For p, π they are defined as follows, where πi is the marginal of individual i:
    Maximin(p) := min_{i∈N} πi,        NW(p) := ( ∏_{i∈N} πi )^{1/n}.
Intuitively, NW maximizes the geometric mean, prioritizing the marginal πi of each individual i in proportion to 1/πi. Maximin maximizes the marginal probability of the individual least likely to be selected. Finally, Leximin is a refinement of Maximin, and is defined by the following algorithm: first, optimize Maximin; then, fixing the minimum marginal as a lower bound on any marginal, maximize the second-lowest marginal; and so on.
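Both objectives are straightforward to evaluate once the marginals implied by a panel distribution are computed; the short sketch below does so on a toy instance (the panels and probabilities are illustrative assumptions).

```python
import numpy as np

def fairness_objectives(panels, probs, n):
    """Return the marginals π implied by (panels, probs) over n pool members,
    together with Maximin(p) = min_i π_i and NW(p) = (prod_i π_i)^(1/n)."""
    pi = np.zeros(n)
    for P, q in zip(panels, probs):
        for i in P:
            pi[i] += q
    maximin = float(pi.min())
    nash = float(np.prod(pi) ** (1.0 / n)) if np.all(pi > 0) else 0.0
    return pi, maximin, nash

panels = [(0, 1), (0, 2), (1, 2)]   # three feasible panels of size k = 2 over n = 3
probs  = [0.4, 0.4, 0.2]
pi, maximin, nash = fairness_objectives(panels, probs, n=3)
print(pi, maximin, nash)            # [0.8 0.6 0.6] 0.6 ~0.66
```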
Our task: quantize a maximally fair panel distribution with minimal fairness loss. We define a 1/m-quantized panel distribution as a distribution over all feasible panels K in which all probabilities are integer multiples of 1/m. We use p̄ to denote a panel distribution with this property. Formally, while an (unconstrained) panel distribution p lies in D := {p ∈ R_+^{|K|} : ‖p‖1 = 1}, a 1/m-quantized panel distribution p̄ lies in D̄ := {p̄ ∈ (Z_+/m)^{|K|} : ‖p̄‖1 = 1}. Note that a 1/m-quantized distribution p̄ immediately translates to a physical uniform lottery over m panels (with duplicates): if p̄ assigns probability ℓ/m to panel P, then the corresponding physical uniform lottery would contain ℓ duplicates of P. Thus, if we can compute a 1/m-quantized panel distribution p̄ with fairness F(p̄), then we have designed a physical uniform lottery over m panels with that same level of fairness.
Our goal follows directly from this observation: we want to show that given an instance and desired lottery size m, we can compute a 1/m-quantized distribution p̄ that is nearly as fair, with respect to a fairness notion F, as the maximally fair panel distribution in this instance, p∗ ∈ arg max_{p∈D} F(p). We define the fairness loss in this quantization process to be the difference F(p∗) − F(p̄). We are aided in this task by the existence of practical algorithms for computing p∗ (Flanigan et al. [12]), which allows us to use p∗ as an input to the quantization procedure we hope to design. For intuition, we illustrate this quantization task in Figure 1, where π∗, π̄ are the marginals implied by p∗, p̄, respectively. Since the fairness of p∗, p̄ is computed in terms of π∗, π̄, it is intuitive that a quantization process that results in small marginal discrepancy, defined as the maximum change in any marginal ‖π − π̄‖∞, should also have small fairness loss. This idea motivates the upcoming section, in which we give quantization procedures with provably bounded marginal discrepancy, forming the foundation for our later bounds on fairness loss.
3A panel distribution p implies a unique vector of marginals π as follows: fixing p, π, a pool member i’s marginal selection probability πi is equal to the probability of drawing a panel from p containing that pool member. For a more detailed introduction to the connection between panel distributions and marginals, we refer readers to Flanigan et al. [12].
Figure 1: The quantization task takes as input a maximally fair panel distribution p∗ (implying marginals π∗), and outputs a 1/m-quantized panel distribution p̄ (implying marginals π̄).
3 Theoretical Bounds on Marginal Discrepancy
Here we prove that for a fixed panel distribution p, π, there exists a uniform lottery p̄, π̄ such that ‖π − π̄‖∞ is bounded. Preliminarily, we note that it is intuitive that bounds on this discrepancy should approach 0 as m becomes large with respect to n and k. To see why, begin by fixing some distribution p, π over panels: as m becomes large, we approach the scenario in which a uniform lottery p̄ can assign panels arbitrary probabilities, providing increasingly close approximations to p. Since the marginals πi are continuous with respect to p, as p̄→ p we have that π̄i → πi for all i. While this argument demonstrates convergence, it provides neither efficient algorithms nor tight bounds on the rate of convergence. In this section, our task is therefore to bound the rate of this convergence as a function of m and the other parameters of the instance. All omitted proofs of results from this section are included in Appendix B.
3.1 Worst-Case Upper Bounds
Our first set of upper bounds results from rounding STANDARD LP, the LP that most directly arises from our problem. This LP is defined in terms of a panel distribution p, π, and M, an n × |K| matrix describing which individuals are on which feasible panels: Mi,P = 1 if i ∈ P and Mi,P = 0 otherwise.

STANDARD LP:
    Mp = π        (3.1)
    ‖p‖1 = 1      (3.2)
    p ≥ 0.

Here, (3.1) specifies n total constraints. Our goal is to round p to a uniform lottery p̄ over m panels (so the entries of p̄ are multiples of 1/m) such that (3.2) is maintained exactly, and no constraint in (3.1) is relaxed by too much, i.e., ‖Mp − Mp̄‖∞ = ‖π − π̄‖∞ remains small. Randomized rounding is a natural first approach. Any randomized rounding scheme satisfying negative association (which includes several that respect (3.2)) yields the following bound:
Theorem 3.1. For any realizable π, we may efficiently randomly generate p̄ such that its marginals π̄ satisfy
    ‖π − π̄‖∞ = O(√(n log n) / m).
Fortunately, there is potential for improvement: randomized rounding does not make full use of the fact that M is k-column sparse, due to each panel in K containing exactly k individuals. We use this sparsity to get a stronger bound when n ≫ k², which is a practically significant parameter regime. The proof applies a dependent rounding algorithm based on a theorem of Beck and Fiala [1], with a modification that ensures the exact satisfaction of constraint (3.2). Theorem 3.2. For any realizable π, we may efficiently construct p̄ such that its marginals π̄ satisfy
‖π − π̄‖∞ ≤ k/m.
This bound is already meaningful in practice, where k ≪ m is ensured by the fact that m is pre-chosen along with k prior to panel selection. Note also that k is typically on the order of 100
(Table 1), whereas a uniform lottery can in practice be easily made orders of magnitude larger, as each additional factor of 10 in the size of the uniform lottery requires drawing only one more ball (and there is no fairness cost to drawing a larger lottery, since increasing m allows for uniform lotteries which better approximate the unconstrained optimal distribution).
3.2 Beyond-Worst-Case Upper Bounds
As we will demonstrate in Section 3.3, we cannot hope for a better worst-case upper bound than poly(k)/m. We thus shift our consideration to instances which are “simple” in their feature structure, having a small number of features (Theorem B.7), a limited number of unique feature vectors in the pool (Theorem 3.3), or multiple individuals that share each feature vector present (Theorem B.8). The beyond-worst-case bounds given by Theorem 3.3 and Theorem B.8 asymptotically dominate our worst-case bounds in Theorem 3.1 and Theorem 3.2, respectively. Moreover, Theorem 3.3 dominates all other upper bounds in 10 of the 11 practical instances studied in Section 5.
We note that while our worst-case upper bounds implied the near-preservation of any realizable set of marginals π, some of our beyond-worst-case results apply to only realizable π which are anonymous, meaning that πi are equal for all i with equal feature vectors. We contend that any reasonable set of marginals should have this property,4 and furthermore that the “anonymization” of any realizable π is also realizable (Claim B.6); hence this restriction is insignificant. Our beyondworst-case bounds also differ from our worst-case bounds in that they depart from the paradigm of rounding p, instead randomizing over panels that may fall outside the support of p.
The main beyond-worst-case bound we give, stated below, is parameterized by |C|, where C is the set of unique feature vectors that appear in the pool. All omitted proofs and other beyond worst-case results are stated and proven in Appendix B.
Theorem 3.3. If π is anonymous and realizable, then we may efficiently construct p̄ such that its marginals π̄ satisfy
    ‖π − π̄‖∞ = O(√(|C| log |C|) / m).
|C| is at most n, so this bound dominates Theorem 3.1. In 10 of the 11 real-world instances we study, |C| is also smaller than k² (Appendix A), in which case this bound also dominates Theorem 3.2. At a high level, our beyond-worst-case upper bounds are obtained not by directly rounding p, but instead using the structure of the sortition instance to abstract the problem into one about “types.” For this bound we then solve an LP in terms of “types,” round that LP, and then reconstruct a rounded panel distribution p̄, π̄ from the “type” solution. In particular, the types of individuals are the feature vectors which appear in the pool, and the types of panels are the multisets of k feature vectors that satisfy the instance quotas. Fixing an instance, we project some p into type space by viewing it as a distribution p over types of panels K, inducing a marginal τc for each type of individual c ∈ C. To begin, we define the TYPE LP, which is analogous to Eq. (3.1). We let Q be the type analog of M, so that entry Qcj is the number of individuals i with F(i) = c contained in panels of type j ∈ K.5 Then,
TYPE LP:
    Q p = τ       (3.3)
    ‖p‖1 = 1      (3.4)
    p ≥ 0.
We round p in this LP to a panel-type distribution p̄ while preserving (3.4). All that remains, then, is to construct some p̄, π̄ that is consistent with this rounded type distribution and for which ‖π − π̄‖∞ is small. This p̄ is in general supported by panels outside of supp(p), unlike the p̄ obtained by Theorem 3.1. It is the anonymity of π which allows us to construct these new panels and prove that they are feasible for the instance.
4The class of all anonymous marginals π includes the maximizers π∗ of all reasonable fairness objectives, and second, this condition is satisfied by all existing selection algorithms used in practice, to our knowledge.
5Completing the analogy, C,K, Q, p, p̄, τ are the “type” versions of N,K,M, p, p̄, π from the original LP.
3.3 Lower Bounds
This method of using bounded discrepancy to derive nearly fairness-optimal uniform lotteries has its limits, since there are even sparse M and fractional x for which no integer x̄ yields nearby Mx̄. In the worst case, we establish lower bounds by modifying those of Beck and Fiala [25]:
Theorem 3.4. There exist p, π for which, for all uniform lotteries p̄, π̄,
    ‖π − π̄‖∞ = Ω(√k / m).
Our k-dependent upper and lower bounds are separated by a factor of √ k, matching the current upper and lower bounds of the Beck-Fiala conjecture as applied to linear discrepancy (also known as the lattice approximation problem [26]). The respective gaps are incomparable, however, since for a given x ∈ [0, 1]n, the former problem aims to minimize ‖M(x− x̄)‖∞ over x̄ ∈ {0, 1}n, while we aim to do the same over a subset of the x̄ ∈ Zn for which∑j xj = ∑ j x̄j (see Lemma B.4).
4 Theoretical Bounds on Fairness Loss
Since the fairness of a distribution p is determined by its marginals π, it is intuitive that if uniform lotteries incur only small marginal discrepancy (per Section 3), then they should also incur only small fairness losses. This should hold for any fairness notion that is sufficiently “smooth” (i.e., doesn’t change too quickly with changing marginals) in the vicinity of p, π.
Although our bounds from Section 3 apply to any reasonable initial distribution p, we are particularly concerned with bounding fairness loss from maximally fair initial distributions p∗. Here, we specifically consider such p∗ that are optimal with respect to Maximin and NW. We note that, since there exist anonymous p∗, π∗ that maximize these objectives, we can apply any upper bound from Section 3 to upper bound ‖π∗ − π̄‖∞. We defer omitted proofs to Appendix C.
4.1 Maximin
Since Leximin is the fairness objective optimized by the maximally fair algorithm used in practice, it would be most natural to start with a p∗ that is Leximin-optimal and bound fairness loss with respect to this objective. However, the fact that Leximin fairness cannot be represented by a single scalar value prevents us from formulating such an approximation guarantee. Instead, we first pursue bounds on the closely-related objective, Maximin. We argue that in the most meaningful sense, a worst-case Maximin guarantee is a Leximin guarantee: such a bound would show limited loss in the minimum marginal, and it is Leximin’s lexicographically first priority to maximize the minimum marginal.
First, we show there exists some p̄, π̄ that gives bounded Maximin loss from p∗, π∗, the Maximinoptimal unconstrained distribution. This bound follows from Theorems 3.3 and B.8, using the simple observation that p̄ can decrease the lowest marginal given by p∗ by no more than ‖π∗ − π̄‖∞. Here nmin := minc nc denotes the smallest number of individuals which share any feature vector c ∈ C. Corollary 4.1. By Theorem 3.3 and B.8, for Maximin-optimal p∗, there exists a uniform lottery p̄ that satisfies
    Maximin(p∗) − Maximin(p̄) = (1/m) · O( min{ √(|C| log |C|), k/nmin + 1 } ).
Theorem 3.4 demonstrates that we cannot get an upper bound on Maximin loss stronger than O(√k/m) using a uniform bound on changes in all πi. However, since Maximin is concerned only with the smallest πi, it seems plausible that better upper bounds on Maximin loss could result from rounding π while tightly controlling only losses in the smallest πi's, while giving freer rein to larger marginals. We show that this is not the case by further modifying the instances from Theorem 3.4 to obtain the following lower bound on the Maximin loss:
Theorem 4.1. There exists a Maximin-optimal p∗ such that, for all uniform lotteries p̄,
    Maximin(p∗) − Maximin(p̄) = Ω(√(k/m)).
4.2 Nash Welfare
As NW has also garnered interest by practitioners and is applicable in practice [18], we upper-bound the NW fairness loss. Unlike Maximin loss, an upper bound on NW loss does not immediately follow from one on ‖π − π̄‖∞, because decreases in smaller marginals have larger negative impact on the NW. As a result, the upper bound on NW resulting from Section 3 is slightly weaker than that on Maximin:
Theorem 4.2. For NW-optimal p∗, there exists a uniform lottery p̄ that satisfies
    NW(p∗) − NW(p̄) = (k/m) · O( min{ √(|C| log |C|), k/nmin + 1 } ).
We give an overview of the proof of Theorem 4.2. To begin, fix a NW-optimizing panel distribution p∗, π∗. Before applying our upper bounds on marginal discrepancy from Section 3, we must contend with the fact that if this bounded loss is suffered by already-tiny marginals, the NW may decrease substantially or even go to 0. Thus, we first prove Lemmas 4.1 and 4.2, which together imply that no marginal in π∗ is smaller than 1/n.
Lemma 4.1. For NW-optimal p∗ over a support of panels supp(p∗), there exists a constant λ ∈ R+ such that, for all P ∈ supp(p∗),∑i∈P 1/π∗i = λ.
Lemma 4.2. For NW-optimal p∗, π∗, we have that π∗i ≥ 1/n for all i ∈ N .
Lemma 4.1 follows from the fact that the partial derivative of NW with respect to the probability it assigns a given panel must be the same as that with respect to any other panel at p∗ (otherwise, mass in the distribution could be shifted to increase the NW). Lemma 4.2 then follows by the additional observation that EP∼p∗[∑i∈P 1/π∗i] = n.
Finally Lemma 4.3 follows from the fact that Lemma 4.2 limits the potential multiplicative, and therefore additive, impact on the NW of decreasing any marginal by ‖π − π̄‖∞: Lemma 4.3. For NW-optimal p∗, π∗, there exists a uniform lottery p̄, π̄ that satisfies NW(p∗) − NW(p̄) ≤ k ‖π∗ − π̄‖∞.
As the NW-optimal marginals π∗ are anonymous, we can apply the upper bounds given by Theorem 3.3 and Theorem B.8 to show the existence of a p̄, π̄ satisfying the claim of the theorem.
5 Practical Algorithms for Computing Fair Uniform Lotteries
Algorithms. First, we implement versions of two existing rounding algorithms, which are implicit in our worst-case upper bounds.6 The first is Pipage rounding [16], or PIPAGE, a randomized rounding scheme satisfying negative association [10]. The second is BECK-FIALA, the dependent rounding scheme used in the proof of Theorem 3.2. To benchmark these algorithms against the highest level of fairness they could possibly achieve, we use integer programming (IP) to compute the fairest possible uniform lotteries over supp(p∗), the panels over which p∗ randomizes.7 We define IPMAXIMIN and IP-NW to find uniform lotteries over supp(p∗) maximizing Maximin and NW, respectively. We remark that the performance of these IPs is still subject to our theoretical upper and lower bounds. We provide implementation details in Appendix D.1.
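For intuition, here is a simplified, generic pipage-style rounding sketch. This is not the paper's PIPAGE implementation; it assumes a fractional input vector whose entries lie in [0, 1] and sum to an integer (e.g., the fractional parts of m · p), and it only illustrates the core idea of expectation-preserving dependent rounding.

```python
# Simplified sketch of randomized pipage-style rounding: round x in [0,1]^n with
# integral sum to a 0/1 vector, preserving the sum exactly and each coordinate's
# expectation. Not the paper's implementation; illustration only.
import random

def pipage_round(x, rng=random.Random(0)):
    x = list(x)
    frac = [i for i, v in enumerate(x) if 0 < v < 1]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        up = min(1 - x[i], x[j])     # largest move increasing x[i] (and decreasing x[j])
        down = min(x[i], 1 - x[j])   # largest move decreasing x[i] (and increasing x[j])
        if rng.random() < down / (up + down):   # chosen so E[x[i]] and E[x[j]] are unchanged
            x[i], x[j] = x[i] + up, x[j] - up
        else:
            x[i], x[j] = x[i] - down, x[j] + down
        frac = [t for t in frac if 0 < x[t] < 1]
    return [round(v) for v in x]

print(pipage_round([0.3, 0.7, 0.5, 0.5]))   # sums to 2, so exactly two entries become 1
```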
One question is whether we should prefer the IPs or the rounding algorithms for real-world applications. Although IP-MAXIMIN appears to find good solutions at practicable speeds, IP-NW converges to optimality prohibitively slowly in some instances (see Appendix D.2 for runtimes). At the same time, we find that our simpler rounding algorithms give near-optimal uniform lotteries with respect to both fairness objectives. Also in favor of simpler rounding algorithms, many randomized rounding procedures (including Pipage rounding) have the advantage that they exactly
6We do not implement the algorithm implicit in Theorem 3.3 because our results already present sufficient alternatives for finding excellent uniform lotteries in practice.
7Note that these lotteries are not necessarily universally optimal, as they can randomize over only supp(p∗); conceivably, one could find a fairer uniform lottery by also randomizing over panels not in supp(p∗). However, PIPAGE and BECK-FIALA are also restricted in this way, and thus must be weakly dominated by the IP.
preserve marginals over the combined steps of randomly rounding to a uniform lottery and then randomly sampling it—a guarantee that is much more challenging to achieve with IPs.
Uniform lotteries nearly exactly preserve Maximin, Nash Welfare fairness. We first measure the fairness of uniform lotteries produced by these algorithms in 11 real-world panel selection instances from 7 different organizations worldwide (instance details in Appendix A). In all experiments, we generate a lottery of size m = 1000. This is fairly small; it requires drawing only 3 balls from lottery machines, and in one instance we have that m < n. We nevertheless see excellent performance of all algorithms, and note that this performance will only improve with larger m.
Figure 2 shows the Maximin fairness of the uniform lottery computed by PIPAGE, BECK-FIALA, and IP-MAXIMIN for each instance. For intuition, recall that the level of Maximin fairness given by any lottery is exactly the minimum marginal assigned to any individual by that lottery. The upper edges of the gray boxes in Fig. 2 correspond to the optimal fairness attained by an unconstrained distribution p∗. These experiments reveal that the cost of transparency to Maximin-fairness is practically non-existent: across instances, the quantized distributions computed by IP-MAXIMIN decrease the minimum marginal by at most 2.1/m, amounting to a loss of no more than 0.0021 in the minimum marginal probability in any instance. Visually, we can see that this loss is negligible relative to the original magnitude of even the smallest marginals given by p∗. Surprisingly, though PIPAGE and BECK-FIALA do not aim to optimize any fairness objective, they achieve only slightly larger losses in Maximin fairness, with PIPAGE outperforming BECK-FIALA. Finally, the heights of the gray boxes indicate that our theoretical bounds are often meaningful in practice, giving lower bounds on Maximin fairness well above zero in nine out of eleven instances. We note these bounds only tighten with larger m. We present similarly encouraging results on NW loss in Appendix D.3.
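As a toy illustration of this measurement (the pool and lottery below are hypothetical, not one of the 11 instances), the Maximin value of a uniform lottery is simply the smallest empirical marginal it induces:

```python
# Toy sketch: Maximin fairness of a uniform lottery = min over pool members of
# (# lottery panels containing that member) / m.
from collections import Counter

pool = ["a", "b", "c", "d", "e"]
lottery = [("a", "b", "c"), ("a", "d", "e"), ("b", "c", "d"), ("a", "b", "e")]  # m = 4, k = 3

counts = Counter(i for panel in lottery for i in panel)
marginals = {i: counts[i] / len(lottery) for i in pool}
print("marginals:", marginals)            # a, b get 0.75; c, d, e get 0.5
print("Maximin:", min(marginals.values()))
```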
Uniform lotteries nearly preserve all Leximin marginals. We still remain one step away from practice: our examination of Maximin does not address whether uniform lotteries can attain the finer-tuned fairness properties of the Leximin-optimal distributions currently used in practice. Fortunately, our results from Section 3 imply the existence of a quantized p̄ that closely approximates all marginals given by the Leximin-optimal distribution p∗, π∗. We evaluate the extent to which PIPAGE and BECK-FIALA preserve these marginals in Fig. 3. They are benchmarked against a new IP, IP-MARGINALS, which computes the uniform lottery over supp(p∗) minimizing ‖π∗ − π̄‖∞.
Figure 3 demonstrates that in the instance “sf(a)”, all algorithms produce marginals that deviate negligibly from those given by π∗. Analogous results on remaining instances appear in Appendix D.4 and show similar results. As was the case for Maximin, we see that our theoretical bounds are meaningful, but that we can consistently outperform them in real-world instances.
6 Discussion
Our aim was to show that uniform lotteries can preserve fairness, and our results ultimately suggest this, along with something stronger: that in practical instances, uniform lotteries can reliably almost exactly replicate the entire set of marginals given by the optimal unconstrained panel distribution. Our rounding algorithms can thus be plugged directly into the existing panel selection pipeline with essentially no impact on individuals’ selection probabilities, thus enabling translation of the output of Panelot (and other maximally fair algorithms) to a nearly maximally fair and transparent panel selection procedure. We note that our methods are not just compatible with ball-drawing lotteries, but any form of uniform physical randomness (e.g. dice, wheel-spinning, etc.).
Although we achieve our stated notion of transparency, a limitation of this notion is that it focuses on the final stage of the panel selection process. A more holistic notion of transparency might require that onlookers can verify that the panel is not being intentionally stacked with certain individuals. This work does not fully enable such verification: although onlookers can now observe individuals’ marginals, they still cannot verify that these marginals are actually maximally fair without verifying the underlying optimization algorithms. In particular, in the common case where quotas require even maximally fair panel distributions to select certain individuals with probability near one, onlookers cannot distinguish those from unfair distributions engineered such that one or more pool members are chosen with probability near one.
In research on economics, fair division, and other areas of AI, randomness is often proposed as a tool to make real-world systems fairer [17, 6, 15]. Nonetheless, in practice, these systems (with a few exceptions, such as school choice [22]) remain stubbornly deterministic. Among the hurdles to bringing the theoretical benefits of randomness into practice is that allocation mechanisms fare best when they can be readily understood, and that randomness can be perceived as undesirable or suspect. Sortition is a rather unique paradigm at the heart of this tension: it relies centrally on randomness, while in the public sphere it is attaining increasing political influence. It is therefore a uniquely high-impact domain in which to study how to combine the benefits of randomness, such as fairness, with transparency. We hope that this work and its potential for impact will inspire the investigation of fairness-transparency tradeoffs in other AI applications.
Acknowledgements. We would foremost like to thank Paul Gölz for helpful technical conversations and insights on the practical motivations for this research. We also thank Anupam Gupta for helpful technical conversations. Finally, we thank several organizations for supplying real-world citizens' assembly data, including the Sortition Foundation, the Center for Climate Assemblies, Healthy Democracy, MASS LBP, Nexus Institute, Of by For, and New Democracy.
Funding and Competing Interests. This work was partially supported by National Science Foundation grants CCF-2007080, IIS-2024287 and CCF-1733556; and by Office of Naval Research grant N00014-20-1-2488. Bailey Flanigan is supported by the National Science Foundation Graduate Research Fellowship and the Fannie and John Hertz Foundation. None of the authors have competing interests. | 1. What is the focus of the paper regarding citizens' panel selection?
2. What are the three key aspects of the method developed in the paper?
3. Is the reviewer familiar with the relevant literature for evaluating the paper's content?
4. How does the reviewer assess the paper's accessibility in terms of ease of following the arguments?
5. Does the reviewer believe the problem addressed in the paper, particularly the transparency requirement, is significant enough? | Summary Of The Paper
Review | Summary Of The Paper
This paper develops and characterizes methods for selecting a citizens' panel that is representative (satisfying upper and lower quotas on demographics' representation), fair (each eligible citizen has as equal a chance as possible of being selected), and transparent (where the final panel is chosen uniformly at random from a set of candidate panels in a public lottery).
Review
While I am not familiar with the sortition literature and did not check the mathematical details of this paper, I found it interesting and reasonably easy to follow. I'll let other reviewers to determine whether the problem studied (in particular, the transparency requirement) is important. |
NIPS | Title
On the non-universality of deep learning: quantifying the cost of symmetry
Abstract
We prove limitations on what neural networks trained by noisy gradient descent (GD) can efficiently learn. Our results apply whenever GD training is equivariant, which holds for many standard architectures and initializations. As applications, (i) we characterize the functions that fully-connected networks can weak-learn on the binary hypercube and unit sphere, demonstrating that depth-2 is as powerful as any other depth for this task; (ii) we extend the merged-staircase necessity result for learning with latent low-dimensional structure [ABM22] to beyond the meanfield regime. Under cryptographic assumptions, we also show hardness results for learning with fully-connected networks trained by stochastic gradient descent (SGD).
N/A
We prove limitations on what neural networks trained by noisy gradient descent (GD) can efficiently learn. Our results apply whenever GD training is equivariant, which holds for many standard architectures and initializations. As applications, (i) we characterize the functions that fully-connected networks can weak-learn on the binary hypercube and unit sphere, demonstrating that depth-2 is as powerful as any other depth for this task; (ii) we extend the merged-staircase necessity result for learning with latent low-dimensional structure [ABM22] to beyond the meanfield regime. Under cryptographic assumptions, we also show hardness results for learning with fully-connected networks trained by stochastic gradient descent (SGD).
1 Introduction
Over the last decade, deep learning has made advances in areas as diverse as image classification [KSH12], language translation [BCB14], classical board games [SHS+18], and programming [LCC+22]. Neural networks trained with gradient-based optimizers have surpassed classical methods for these tasks, raising the question: can we hope for deep learning methods to eventually replace all other learning algorithms? In other words, is deep learning a universal learning paradigm? Recently, [AS20, AKM+21] proved that in a certain sense the answer is yes: any PAC-learning algorithm [Val84] can be efficiently implemented as a neural network trained by stochastic gradient descent; analogously, any Statistical Query algorithm [Kea98] can be efficiently implemented as a neural network trained by noisy gradient descent.
However, there is a catch: the result of [AS20] relies on a carefully crafted network architecture with memory and computation modules, which is capable of emulating an arbitrary learning algorithm. This is far from the architectures which have been shown to be successful in practice. Neural networks in practice do incorporate domain knowledge, but they have more “regularity” than the architectures of [AS20], in the sense that they do not rely on heterogeneous and carefully assigned initial weights (e.g., convolutional networks and transformers for image recognition and language processing [LB+95, LKF10, VSP+17], graph neural networks for analyzing graph data [GMS05, BZSL13, VCC+17], and networks specialized for particle physics [BAO+20]). We therefore refine our question:
Is deep learning with “regular” architectures and initializations a universal learning paradigm? If not, can we quantify its limitations when architectures and data are not well aligned?
We would like an answer applicable to a wide range of architectures. In order to formalize the problem and develop a general theory, we take an approach similar to [Ng04, Sha18, LZA21] of understanding deep learning through the equivariance group G (a.k.a., symmetry group) of the learning algorithm.
Definition 1.1 (G-equivariant algorithm). A randomized algorithm A that takes in a data distribution D ∈ P(X × Y)1 and outputs a function A(D) : X → Y is said to be G-equivariant if for all g ∈ G
    A(D) =d A(g(D)) ∘ g,    (G-equivariance)
where =d denotes equality in distribution. Here g is a group element that acts on the data space X, and so is viewed as a function g : X → X, and g(D) is the distribution of (g(x), y), where (x, y) ∼ D.
In the case that the algorithm A is deep learning on the distribution D, the equivariance group depends on the optimizer, the architecture, and the network initialization [Ng04, LZA21].2
Examples of G-equivariant algorithms in deep learning In many deep learning settings, the equivariance group of the learning algorithm is large. Thus, in this paper, we call an algorithm “regular” if it has a large equivariance group. For example, SGD training of fully-connected networks with Gaussian initialization is orthogonally-equivariant [Ng04]; and is permutation-equivariant if we add skip connections [HZRS16]. SGD training of convolutional networks is translationally-equivariant if circular convolutions are used [SNPP19], and SGD training of i.i.d.-initialized transformers without positional embeddings is equivariant to permutations of tokens [VSP+17]. Furthermore, [LZA21, Theorem C.1] provides general conditions under which a deep learning algorithm is equivariant. See also the preliminaries in Section 2.
Summary of this work Based off of G-equivariance, we prove limitations on what “regular” neural networks trained by noisy gradient descent (GD) or stochastic gradient descent (SGD) can efficiently learn, implying a separation with the initializations and architectures considered in [AS20]. For GD, we prove a master theorem that enables two novel applications: (a) characterizing which functions can be efficiently weak-learned by fully-connected (FC) networks on both the hypercube and the unit sphere; and (b) a necessity result for which functions on the hypercube with latent low-dimensional structure can be efficiently learned. See Sections 1.2 and 1.3 for more details.
1.1 Related work
Most prior work on computational lower bounds for deep learning has focused on proving limitations of kernel methods (a.k.a. linear methods). Starting with [Bar93] and more recently with [WLLM19, AL19, KMS20, AL20, Hsu, HSSV21, ABM22] it is known that there are problems on which kernel methods provably fail. These results apply to training neural networks in the Neural Tangent Kernel (NTK) regime [JGH18], but do not apply to more general nonlinear training. Furthermore, for specific architectures such as FC architectures [GMMM21, Mis22] and convolutional architectures [MM21], the kernel and random features models at initialization are well understood, yielding stronger lower bounds for training in the NTK regime.
For nonlinear training, which is the setting of this paper, considerably less is known. In the context of sample complexity, [Ng04] introduced the study of the equivariance group of SGD, and constructed a distribution on d dimensions with an Ω(d) versus O(1) sample complexity separation for learning with an SGD-trained FC architecture versus an arbitrary algorithm. More recently, [LZA21] built on [Ng04] to show an O(1) versus Ω(d²) sample-complexity separation between SGD-trained convolutional and FC architectures. In this paper, we also analyze the equivariance group of the training algorithm, but with the goal of proving superpolynomial computational lower bounds.
In the context of computational lower bounds, it is known that networks trained with noisy3 gradient descent (GD) fall under the Statistical Query (SQ) framework [Kea98], which allows showing computational limitations for GD training based on SQ lower bounds. This has been combined in [AS20, SSS17, MS20, ACHM22] with the permutation symmetry of GD-training of i.i.d. FC networks to prove impossibility of efficiently learning high-degree parities and polynomials. In
1The set of probability distributions on Ω is denoted by P(Ω). You should think of D ∈ P(X × Y) as a distribution of pairs (x, y) of covariates and labels.
2Note that the equivariance group of a training algorithm should not be confused with the equivariance group of an architecture in the context of geometric deep learning [BBCV21]. In that context, G-equivariance refers to the property of a neural network architecture fNN(·; θ) : X → Y that fNN(g(x); θ) = g(fNN(x; θ)) for all x ∈ X and all group elements g ∈ G. In that case, G acts on both the input in X and output in Y.
3Here the noise is used to control the gradients’ precision as in [AS20, AKM+21].
our work, we show that these arguments can be viewed in the broader context of more general group symmetries, yielding stronger lower bounds than previously known. For stochastic gradient descent (SGD) training, [ABM22] proves a computational limitation for training of two-layer meanfield networks, but their result applies only when SGD converges to the mean-field limit, and does not apply to more general architectures beyond two-layer networks. Finally, most related to our SGD hardness result is [Sha18], which shows limitations of SGD-trained FC networks under a cryptographic assumption. However, the argument of [Sha18] relies on training being equivariant to linear transformations of the data, and therefore requires that data be whitened or preconditioned. Instead, our result for SGD does not require any preprocessing steps.
There is also recent work showing sample complexity benefits of invariant/equivariant neural network architectures [MMM21, EZ21, Ele21, BVB21, Ele22]. In contrast, we study equivariant training algorithms. These are distinct concepts: a deep learning algorithm can be G-equivariant, while the neural network architecture is neither G-invariant nor G-equivariant. For example, a FC network is not invariant to orthogonal transformations of the input. However, if we initialize it with Gaussian weights and train with SGD, then the learning algorithm is equivariant to orthogonal transformations of the input (see Proposition 2.5 below).
1.2 Contribution 1: Lower bounds for noisy gradient descent (GD)
Consider the supervised learning setup where we train a neural network fNN(·; θ) : X → R parametrized by θ ∈ R^p to minimize the mean-squared error on a data distribution D ∈ P(X × R),
    ℓD(θ) = E(x,y)∼D[(y − fNN(x; θ))²].    (1)
The noisy Gradient Descent (GD) training algorithm randomly initializes θ0 ∼ µθ for some initialization distribution µθ ∈ P(R^p), and then iteratively updates the parameters with step size η > 0 in a direction gD(θ^k) approximating the population loss gradient, plus Gaussian noise ξ^k ∼ N(0, τ²I),
    θ^{k+1} = θ^k − η gD(θ^k) + ξ^k.    (GD)
Up to a constant factor, gD(θ) is the population loss gradient, except we have clipped the gradients of the network with the projection operator ΠB(0,R) to lie in the ball B(0, R) = {z : ‖z‖2 ≤ R} ⊂ R^p,4
    gD(θ) = −E(x,y)∼D[(y − fNN(x; θ)) (ΠB(0,R)∇θ fNN(x; θ))].
Clipping the gradients is often used in practice to avoid instability from exploding gradients (see, e.g., [ZHSJ19] and references within). In our context, clipping ensures that the injected noise ξ^k is on the same scale as the gradient ∇θ fNN of the network and so it controls the gradients' precision. Similarly to the works [AS20, AKM+21, ACHM22], we consider noisy gradient descent training to be efficient if the following conditions are met.
Definition 1.2 (Efficiency of GD, informal). GD training is efficient if the clipping radius R, step size η, and inverse noise magnitude 1/τ are all polynomially-bounded in d, since then (GD) can be efficiently implemented using noisy minibatch SGD5.
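For concreteness, the following is a minimal sketch (a toy two-layer network and synthetic data chosen here for illustration, not the paper's setup) of a single (GD) update with per-example gradient clipping and injected Gaussian noise:

```python
# Minimal sketch of one noisy, clipped GD step on a toy two-layer ReLU network.
# The per-example network gradient is projected onto the ball B(0, R) before being
# combined with the residual, and N(0, tau^2 I) noise is added to the update.
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 32
W, a = rng.normal(size=(m, d)) / np.sqrt(d), rng.normal(size=m) / np.sqrt(m)

def f_nn(x, W, a):                       # f(x) = a^T relu(W x)
    return a @ np.maximum(W @ x, 0.0)

def grad_f(x, W, a):                     # gradient of f w.r.t. (W, a), flattened
    h = W @ x
    return np.concatenate([np.outer(a * (h > 0), x).ravel(), np.maximum(h, 0.0)])

def noisy_gd_step(X, y, W, a, eta=0.1, R=1.0, tau=0.01):
    g = np.zeros(W.size + a.size)
    for x_i, y_i in zip(X, y):
        gi = grad_f(x_i, W, a)
        norm = np.linalg.norm(gi)
        if norm > R:                     # projection onto B(0, R): gradient clipping
            gi *= R / norm
        g += -(y_i - f_nn(x_i, W, a)) * gi
    g /= len(y)
    theta = np.concatenate([W.ravel(), a]) - eta * g + rng.normal(scale=tau, size=g.size)
    return theta[:W.size].reshape(W.shape), theta[W.size:]

X = rng.choice([-1.0, 1.0], size=(64, d))
y = X[:, 0] * X[:, 1]                    # toy target
W, a = noisy_gd_step(X, y, W, a)
```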
We prove that some data distributions cannot be efficiently learned by G-equivariant GD training. For this, we introduce the G-alignment:
Definition 1.3 (G-alignment). Let G be a compact group, let µX ∈ P(X) be a distribution over data points, and let f ∈ L²(µX) be a labeling function. The G-alignment of (µX, f) is:
    C((µX, f); G) = sup_h Eg∼µG[ Ex∼µX[f(g(x)) h(x)]² ],
where µG is the Haar measure of G and the supremum is over h ∈ L²(µX) such that ‖h‖2 = 1.
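When µX is uniform on a finite set, the supremum over unit-norm h becomes a top-eigenvalue problem, so the G-alignment can be estimated numerically. The sketch below is a toy example of my own construction (not from the paper) for the signed-permutation group acting on the hypercube, using Monte Carlo over group elements:

```python
# Sketch: estimate the G-alignment of Definition 1.3 for mu_X uniform on {+1,-1}^d.
# sup_{||h||=1} E_g[E_x[f(g(x)) h(x)]^2] equals lambda_max(E_g[f_g f_g^T]) / |X|,
# where f_g is the vector (f(g(x)))_x. Here G is the signed-permutation group.
import itertools
import numpy as np

rng = np.random.default_rng(0)
d = 4
X = np.array(list(itertools.product([1, -1], repeat=d)), dtype=float)
N = len(X)

def f(points):                          # example target: the monomial x_1 * x_2
    return points[:, 0] * points[:, 1]

M = np.zeros((N, N))
num_g = 2000
for _ in range(num_g):
    s = rng.choice([1.0, -1.0], size=d)
    perm = rng.permutation(d)
    fg = f(X[:, perm] * s)              # f(g(x)) with g(x) = (s_1 x_{perm(1)}, ...)
    M += np.outer(fg, fg) / num_g

alignment = np.linalg.eigvalsh(M)[-1] / N
print("estimated alignment:", alignment)   # Lemma 3.5 below gives 1/binom(4,2) = 1/6
```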
In our applications, we use tools from representation theory (see e.g., [Kna96]) to evaluate the G-alignment. Using the G-alignment, we can prove a master theorem for lower bounds:
Theorem 1.4 (GD lower bound, informal statement of Theorem 3.1). Let Df ∈ P(X × R) be the distribution of (x, f(x)) for x ∼ µX. If µX is G-invariant6 and the G-alignment of (µX, f) is small, then f cannot be efficiently learned by a G-equivariant GD algorithm.
4Note that if fNN is an R-Lipschitz model, then gD(θ) will simply be the population gradient of the loss. 5Efficient implementability by minibatch SGD assumes bounded residual errors. 6Meaning that if x ∼ µX, then for any g ∈ G, we also have g(x) ∼ µX.
Proof ideas We first make an observation of [Ng04]: if a G-equivariant algorithm can learn the function f by training on the distribution Df, then, for any group element g ∈ G, it can learn f ∘ g by training on the distribution Df∘g. In other words, the algorithm can learn the class of functions F = {f ∘ g : g ∈ G}, which can potentially be much larger than just the singleton set {f}. We conclude by showing that the class of functions F cannot be efficiently learned by GD training. The intuition is that the G-alignment measures the diversity of the functions in F. If the G-alignment is small, then there is no function h that correlates with most of the functions in F, which can be used to show F is hard to learn by gradient descent.
This type of argument appears in [AS20, ACHM22] in the specific case of Boolean functions and for permutation equivariance; our proof both applies to a more general setting (beyond Boolean functions and permutations) and yields sharper bounds; see Appendix A.3. Our bound can also be interpreted in terms of the Statistical Query framework, as we discuss in Appendix A.4. While Theorem 1.4 is intuitively simple, we demonstrate its power and ease-of-use by deriving two new applications.
Application: Characterization of weak-learnability by fully-connected (FC) networks In our first application, we consider weak-learnability: when can a function be learned non-negligibly better than just outputting the estimate fNN ≡ 0? Using Theorem 1.4, we characterize which functions over the binary hypercube f : {+1, −1}^d → R and over the sphere f : S^{d−1} → R are efficiently weak-learnable by GD-trained FC networks with i.i.d. symmetric and i.i.d. Gaussian initialization, respectively. The takeaway is that a function f : {+1, −1}^d → R is weak-learnable if and only if it has a nonnegligible Fourier coefficient of order O(1) or d − O(1). Similarly, a function f : S^{d−1} → R is weak-learnable if and only if it has nonnegligible projection onto the degree-O(1) spherical harmonics. Perhaps surprisingly, such functions can be efficiently weak-learned by 2-layer fully-connected networks, which shows that adding more depth does not help. This application is presented in Section 3.1.
Application: Evidence for the staircase property In our second application, we consider learning a target function f : {+1, −1}^d → R that only depends on the first P coordinates, f(x) = h(x1, . . . , xP). Our regime of interest here is when the function h : {+1, −1}^P → R remains fixed and the dimension d grows, since this models the situation where a latent low-dimensional space determines the labels in a high-dimensional dataset. Recently, [ABM22] studied SGD-training of mean-field two-layer networks, and gave a near-characterization of which functions can be learned to arbitrary accuracy ε in Oh,ε(d) samples, in terms of the merged-staircase property (MSP). Using Theorem 1.4, we prove that the MSP is necessary for GD-learnability whenever training is permutation-equivariant (which applies beyond the 2-layer mean-field regime) and we also generalize it beyond leaps of size 1. Details are in Section 3.2.
1.3 Contribution 2: Hardness for stochastic gradient descent (SGD)
The second part of this paper concerns Stochastic Gradient Descent (SGD) training, which randomly initializes the weights θ0 ∼ µθ, and then iteratively trains the parameters with the following update rule to try to minimize the loss (1):
    θ^{k+1} = θ^k − η ∇θ(y^{k+1} − fNN(x^{k+1}; θ))² |θ=θ^k,    (SGD)
where (y^{k+1}, x^{k+1}) ∼ D is a fresh sample on each iteration, and η > 0 is the learning rate.7
Proving computational lower bounds for SGD is a notoriously difficult problem [AKM+21], exacerbated by the fact that for general architectures SGD can be used to simulate any polynomial-time learning algorithm [AS20]. However, we demonstrate that one can prove hardness results for SGD training based off of cryptographic assumptions when the training algorithm has a large equivariance group. We demonstrate the non-universality of SGD on a standard FC architecture. Theorem 1.5 (Hardness for SGD, informal statement of Theorem 4.4). Under the assumption that the Learning Parities with Noise (LPN) problem8 is hard, FC neural networks with Gaussian initialization
7For brevity, we focus on one-pass SGD with a single fresh sample per iteration. Our results extend to empirical risk minimization (ERM) setting and to mini-batch SGD, see Remark E.1.
8See Section 4 and Appendix D.3 for definitions and discussion on LPN.
trained by SGD cannot learn fmod8 : {+1, −1}^d → {0, . . . , 7},
    fmod8(x) ≡ ∑_{i=1}^d xi (mod 8),
in polynomial time from noisy samples (x, fmod8(x) + ξ) where x ∼ {+1, −1}^d and ξ ∼ N(0, 1).
This result shows a limitation of SGD training based on an average-case reduction from a cryptographic problem. The closest prior result is in [Sha18], which proved hardness results for learning with SGD on FC networks, but required preprocessing the data with a whitening transformation.
Proof idea The FC architecture and Gaussian initialization are necessary: an architecture that outputted fmod8(x) at initialization would trivially achieve zero loss. However, SGD on Gaussian-initialized FC networks is sign-flip equivariant, and this symmetry makes fmod8 hard to learn. If a sign-flip equivariant algorithm can learn the function fmod8(x) from noisy samples, then it can learn the function fmod8(x ⊙ s) from noisy samples, where s ∈ {+1, −1}^d is an unknown sign-flip vector, and ⊙ denotes elementwise product. However, this latter problem is hard under standard cryptographic assumptions. More details in Section 4.
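A small sketch (toy parameters, not from the paper's proofs) of the learning problem in Theorem 1.5, together with the sign-flip-planted variant that a sign-flip-equivariant learner would implicitly also have to solve:

```python
# Sketch: noisy samples (x, f_mod8(x) + xi) on the hypercube, and the planted
# variant f_mod8(x ⊙ s) for a hidden sign-flip vector s.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 5

def f_mod8(x):
    return x.sum(axis=-1) % 8           # sum_i x_i (mod 8), entries of x in {+1, -1}

x = rng.choice([1, -1], size=(n, d))
y = f_mod8(x) + rng.normal(size=n)      # label noise xi ~ N(0, 1)

s = rng.choice([1, -1], size=d)         # hidden sign-flip vector
y_planted = f_mod8(x * s) + rng.normal(size=n)
print(f_mod8(x), y.round(2))
```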
2 Preliminaries
Notation Let Hd = {+1, −1}^d be the binary hypercube, and S^{d−1} = {x ∈ R^d : ‖x‖2 = 1} be the unit sphere. The law of a random variable X is L(X). If S is a finite set, then X ∼ S stands for X ∼ Unif[S]. Also let x ∼ S^{d−1} denote x drawn from the uniform Haar measure on S^{d−1}. For a set Ω, let P(Ω) be the set of distributions on Ω. Let ⊙ be the elementwise product. For any µX ∈ P(X) and group G acting on X, we say µX is G-invariant if g(x) =d x for x ∼ µX and any g ∈ G.
2.1 Equivariance of GD and SGD
We define GD and SGD equivariance separately.
Definition 2.1. Let AGD be the algorithm that takes in data distribution D ∈ P(X × R), runs (GD) on initialization θ0 ∼ µθ for k steps, and outputs the function AGD(D) = fNN(·; θ^k).
We say "(fNN, µθ)-GD is G-equivariant" if AGD is G-equivariant in the sense of Definition 1.1.
Definition 2.2. Let ASGD be the algorithm that takes in samples (xi, yi)i∈[n], runs (SGD) on initialization θ0 ∼ µθ for n steps, and outputs ASGD((xi, yi)i∈[n]) = fNN(·; θ^k).
We say "(fNN, µθ)-SGD is G-equivariant" if ASGD((xi, yi)i∈[n]) =d ASGD((g(xi), yi)i∈[n]) ∘ g for any g ∈ G, and any samples (xi, yi)i∈[n].
2.2 Regularity conditions on networks imply equivariances of GD and SGD
We take a data space X ⊆ R^d, and consider the following groups that act on R^d.
Definition 2.3. Define the following groups and actions:
• Let Gperm = Sd denote the group of permutations on [d]. An element σ ∈ Gperm acts on x ∈ R^d in the standard way: σ(x) = (xσ(1), . . . , xσ(d)).
• Let Gsign,perm denote the group of signed permutations; an element g = (s, σ) ∈ Gsign,perm is given by a sign-flip vector s ∈ Hd and a permutation σ ∈ Gperm. It acts on x ∈ R^d by g(x) = s ⊙ σ(x) = (s1xσ(1), . . . , sdxσ(d)).9
• Let Grot = SO(d) ⊆ GL(d, R) denote the rotation group. An element g ∈ Grot is a rotation matrix that acts on x ∈ R^d by matrix multiplication.
9The group product is g1g2 = (s1, σ1)(s2, σ2) = (s1 ⊙ σ1(s2), σ1σ2).
Under mild conditions on the neural network architecture and initialization, GD and SGD training are known to be Gperm-, Gsign,perm-, or Grot-equivariant [Ng04, LZA21].
Assumption 2.4 (Fully-connected i.i.d. first layer and no skip connections from the input). We can decompose the parameters as θ = (W, ψ), where W ∈ R^{m×d} is the matrix of the first-layer weights, and there is a function gNN(·; ψ) : R^m → R such that fNN(x; θ) = gNN(Wx; ψ). Furthermore, the initialization distribution is µθ = µW × µψ, where µW = µw^{⊗(m×d)} for µw ∈ P(R).
Notice that Assumption 2.4 is satisfied by FC networks with i.i.d. initialization. Under assumptions on µw, we obtain equivariances of GD and SGD (see Appendix E for proofs).
Proposition 2.5 ([Ng04, LZA21]). Under Assumption 2.4, GD and SGD are Gperm-equivariant. If µw is sign-flip symmetric, then GD and SGD are Gsign,perm-equivariant. If µw = N(0, σ²) for some σ, then GD and SGD are Grot-equivariant.
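The mechanism behind Proposition 2.5 can be checked numerically. The sketch below (a toy two-layer network, plain SGD, and my own hypothetical setup rather than the paper's) verifies the pathwise identity underlying rotation equivariance: training on rotated data from the correspondingly rotated first-layer initialization produces the same function composed with the rotation; combined with the rotation invariance of the Gaussian initialization, this gives equivariance in distribution.

```python
# Sketch: if data are rotated by an orthogonal Q and the first layer is initialized
# at W0 Q^T instead of W0, SGD follows a mirrored trajectory and f_B(Qx) = f_A(x).
import numpy as np

rng = np.random.default_rng(0)
d, m, n, eta, steps = 8, 16, 50, 0.01, 100

def sgd(X, y, W, a):
    W, a = W.copy(), a.copy()
    for t in range(steps):
        x_t, y_t = X[t % n], y[t % n]
        h = W @ x_t
        resid = a @ np.maximum(h, 0.0) - y_t
        g_a = resid * np.maximum(h, 0.0)
        g_W = resid * np.outer(a * (h > 0), x_t)
        a, W = a - eta * g_a, W - eta * g_W
    return W, a

X, y = rng.normal(size=(n, d)), rng.normal(size=n)
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))           # random orthogonal matrix
W0, a0 = rng.normal(size=(m, d)) / np.sqrt(d), rng.normal(size=m) / np.sqrt(m)

W_A, a_A = sgd(X, y, W0, a0)                           # original data
W_B, a_B = sgd(X @ Q.T, y, W0 @ Q.T, a0)               # rotated data, rotated init

x = rng.normal(size=d)
f_A = a_A @ np.maximum(W_A @ x, 0.0)
f_B = a_B @ np.maximum(W_B @ (Q @ x), 0.0)
print(abs(f_A - f_B))                                  # ~0 up to floating-point error
```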
3 Lower bounds for learning with GD
In this section, let D(f, µX) ∈ P(X × R) denote the distribution of (x, f(x)) where x ∼ µX. We give a master theorem for computational lower bounds for learning with G-equivariant GD.
Theorem 3.1 (GD lower bound using G-alignment). Let G be a compact group, and let fNN(·; θ) : X → R be an architecture and µθ ∈ P(R^p) be an initialization such that GD is G-equivariant. Fix any G-invariant distribution µX ∈ P(X), any label function f* ∈ L²(µX), and any baseline function α ∈ L²(µX) satisfying α ∘ g = α for all g ∈ G. Let θ^k be the random weights after k time-steps of GD training with noise parameter τ > 0, step size η > 0, and clipping radius R > 0 on the distribution D = D(f*, µX). Then, for any ε > 0,
    Pθ^k[ ℓD(θ^k) ≤ ‖f* − α‖²_{L²(µX)} − ε ] ≤ ηR√(kC)/(2τ) + C/ε,
where C = C((f* − α, µX); G) is the G-alignment of Definition 1.3.
As discussed in Section 1.2, the theorem states that if the G-alignment C is very small, then GD training cannot efficiently improve on the trivial loss from outputting α: either the number of steps k, the gradient precision R/τ, or the step size η have to be very large in order to learn. Appendix A shows a generalization of the theorem for learning a class of functions F = {f1, . . . , fm} instead of just a single function f*. This result goes beyond the lower bound of [AS20] even when G is the trivial group with one element: the main improvement is that Theorem 3.1 proves hardness for learning real-valued functions beyond just Boolean-valued functions. We demonstrate the usefulness of the theorem through two new applications in Sections 3.1 and 3.2.
3.1 Application: Characterizing weak-learnability by FC networks
In our first application of Theorem 3.1, we consider FC architectures with i.i.d. initialization, and show how to use their training equivariances to characterize what functions they can weak-learn: i.e., for what target functions f* they can efficiently achieve a non-negligible correlation after training.
Definition 3.2 (Weak learnability). Let {µd}d∈N be a family of distributions µd ∈ P(Xd), and let {fd}d∈N be a family of functions fd ∈ L²(µd). Finally, let {f̃d}d∈N be a family of estimators, where f̃d is a random function in L²(µd). We say that {fd, µd}d∈N is "weak-learned" by the family of estimators {f̃d}d∈N if there are constants d0, C > 0 such that for all d > d0,
    Pf̃d[ ‖fd − f̃d‖²_{L²(µd)} ≤ ‖fd‖²_{L²(µd)} − d^{−C} ] ≥ 9/10.    (2)
The constant 9/10 in the definition is arbitrary. In words, weak-learning measures whether the family of estimators {f̃d} has a non-negligible edge over simply estimating with the identically zero functions f̃d ≡ 0. We study weak-learnability by GD-trained FC networks.
Definition 3.3. We say that {fd, µd}d∈N is efficiently weak-learnable by GD-trained FC networks if there are FC networks and initializations {fNN,d, µθ,d}, and hyperparameters {ηd, kd, Rd, τd} such that for some constant c > 0,
• Hyperparameters are polynomial size: 0 ≤ ηd, kd, Rd, 1/τd ≤ O(d^c);
• {f̃d} weak-learns {fd, µd} in the sense of Definition 3.2, where f̃d = fNN(·; θd) for weights θd that are GD-trained on D(fd, µd) for kd steps with step size ηd, clipping radius Rd, and noise τd, starting from initialization µθ,d.
If µθ,d is i.i.d. copies of a symmetric distribution, we say that the FC networks are symmetrically-initialized, and Gaussian-initialized if µθ,d is i.i.d. copies of a Gaussian distribution.
3.1.1 Functions on hypercube, FC networks with i.i.d. symmetric initialization
Let us first consider functions on the Boolean hypercube f : Hd → R. These can be uniquely written as a multilinear polynomial
    f(x) = ∑_{S⊆[d]} f̂(S) ∏_{i∈S} xi,
where f̂(S) are the Fourier coefficients of f [O'D14]. We characterize weak learnability of functions on the hypercube in terms of their Fourier coefficients. The full proof is deferred to Appendix B.1.
Theorem 3.4. Let {fd}d∈N be a family of functions fd : Hd → R with ‖fd‖_{L²(Hd)} ≤ 1. Then {fd, Hd} is efficiently weak-learnable by GD-trained symmetrically-initialized FC networks if and only if there is a constant C > 0 such that for each d ∈ N there is Sd ⊆ [d] with |Sd| ≤ C or |Sd| ≥ d − C, and |f̂d(Sd)| ≥ Ω(d^{−C}).
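As a concrete illustration (small d and a toy target of my own choosing, not from the paper), the Fourier coefficients in this criterion can be computed by direct enumeration; Theorem 3.4 then asks whether a non-negligible coefficient sits on a set S with |S| = O(1) or |S| ≥ d − O(1):

```python
# Sketch: Boolean Fourier coefficients f_hat(S) = E_x[f(x) * prod_{i in S} x_i],
# computed by brute force for small d.
import itertools
import numpy as np

d = 6
X = np.array(list(itertools.product([1, -1], repeat=d)), dtype=float)

def fourier_coeffs(f_vals):
    coeffs = {}
    for r in range(d + 1):
        for S in itertools.combinations(range(d), r):
            chi_S = X[:, list(S)].prod(axis=1) if S else np.ones(len(X))
            coeffs[S] = float(np.mean(f_vals * chi_S))
    return coeffs

f_vals = X[:, 0] * X[:, 1] * X[:, 2]     # toy target: a degree-3 monomial
nonzero = {S: v for S, v in fourier_coeffs(f_vals).items() if abs(v) > 1e-9}
print(nonzero)                           # {(0, 1, 2): 1.0}: all weight at level 3
```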
The algorithmic result can be achieved by two-layer FC networks, and relies on a random features analysis where each network weight is initialized to 0 with probability 1 − p, and +1 or −1 with equal probability p/2.10 Therefore, for weak learning on the hypercube, two-layer networks are as good as networks of any depth. For the converse impossibility result, we apply Theorem 3.1, recalling that GD is Gsign,perm-equivariant by Proposition 2.5, and noting that the Gsign,perm-alignment is:
Lemma 3.5. Let f : Hd → R. Then C((f, Hd); Gsign,perm) = max_{k∈[d]} binom(d, k)^{−1} ∑_{S⊆[d], |S|=k} f̂(S)².
Proof. In the following, let s ∼ Hd and σ ∼ Gperm, so that g = (s, σ) ∼ Gsign,perm. Also let x, x′ ∼ Hd be independent. For any h : Hd → R, by (a) tensorizing, (b) expanding f in the Fourier basis, (c) the orthogonality relation Es[χS(s)χS′(s)] = δS,S′, and (d) tensorizing,
    Eg[Ex[f(g(x))h(x)]²] = Eσ,s[Ex[f(s ⊙ σ(x))h(x)]²]
    (a)= Eσ,s,x,x′[f(s ⊙ σ(x)) f(s ⊙ σ(x′)) h(x) h(x′)]
    (b)= Ex,x′,σ[ ∑_{S,S′⊆[d]} f̂(S) f̂(S′) h(x) h(x′) χS(σ(x)) χS′(σ(x′)) Es[χS(s) χS′(s)] ]
    (c)= Ex,x′,σ[ ∑_{S⊆[d]} f̂(S)² h(x) h(x′) χS(σ(x)) χS(σ(x′)) ]
    (d)= Eσ[ ∑_{S⊆[d]} f̂(S)² Ex[h(x) χS(σ(x))]² ]
    = ∑_{S⊆[d]} f̂(S)² Eσ[ĥ(σ^{−1}(S))²]
    = ∑_{S⊆[d]} f̂(S)² binom(d, |S|)^{−1} ∑_{S′ : |S′|=|S|} ĥ(S′)².
And since ∑_{S′ : |S′|=|S|} ĥ(S′)² ≤ ‖h‖²_{L²(Hd)}, the supremum over h such that ‖h‖_{L²(Hd)} = 1 is achieved by taking h(x) = χS(x) for some S.
10Surprisingly, this means that the full parity function f*(x) = ∏_{i=1}^d xi can be efficiently learned with such initializations. See Appendix B.
So if the Fourier coefficients of f are negligible for all S s.t. min(|S|, d − |S|) ≤ O(1), then the Gsign,perm-alignment of f is negligible. By Theorem 3.1, this means f cannot be learned efficiently. In Appendix B.1.2 we give a concrete example of a hard function that was not previously known.
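Lemma 3.5 makes this a one-line computation once the Fourier weight at each level is known. A minimal sketch (the weight profiles below are hypothetical) follows.

```python
# Sketch: the sign-permutation alignment from Lemma 3.5,
# max_k binom(d, k)^{-1} * (Fourier weight of f at level k).
from math import comb

def sign_perm_alignment(d, level_weights):
    # level_weights[k] = sum of f_hat(S)^2 over |S| = k (assumed precomputed)
    return max(w / comb(d, k) for k, w in level_weights.items() if w > 0)

d = 100
print(sign_perm_alignment(d, {1: 1.0}))        # weight on single coordinates: 1/d
print(sign_perm_alignment(d, {d // 2: 1.0}))   # weight at level d/2: about 2^{-d}, negligible
```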
3.1.2 Functions on sphere, FC networks with i.i.d. Gaussian initialization
We now study learning a target function on the unit sphere, f ∈ L²(S^{d−1}), where we take the standard Lebesgue measure on S^{d−1}. A key fact in harmonic analysis is that L²(S^{d−1}) can be written as the direct sum of subspaces spanned by spherical harmonics of each degree (see, e.g., [Hoc12]),
    L²(S^{d−1}) = ⊕_{l=0}^∞ Vd,l,
where Vd,l ⊆ L²(S^{d−1}) is the space of degree-l spherical harmonics, which is of dimension
    dim(Vd,l) = ((2l + d − 2)/l) · binom(l + d − 3, l − 1).
Let ΠVd,l : L²(S^{d−1}) → Vd,l be the projection operator to the space of degree-l spherical harmonics. In Appendix B.2, we prove this characterization of weak-learnability for functions on the sphere:
Theorem 3.6. Let {fd}d∈N be a family of functions fd : S^{d−1} → R with ‖fd‖_{L²(S^{d−1})} ≤ 1. Then {fd, S^{d−1}} is efficiently weak-learnable by GD-trained Gaussian-initialized FC networks if and only if there is a constant C > 0 such that ∑_{l=0}^C ‖ΠVd,l fd‖² ≥ d^{−C}.
The algorithmic result can again be achieved by two-layer FC networks, and is a consequence of the analysis of the random feature kernel in [GMMM21], which shows that the projection of fd onto the low-degree spherical harmonics can be efficiently learned. For the impossibility result, we apply Theorem 3.1, noting that GD is Grot-equivariant by Proposition 2.5, and the Grot-alignment is:
Lemma 3.7. Let f ∈ L²(S^{d−1}). Then C((f, S^{d−1}); Grot) = max_{l∈Z≥0} ‖ΠVd,l f‖² / dim(Vd,l).
Proof. The Grot-alignment is computed using the representation theory of Grot, specifically the Schur orthogonality theorem (see, e.g., [Ser77, Kna96]). For any l, the subspace Vd,l is invariant to the action of Grot, meaning that we may define the representation ρl of Grot, which for any g ∈ Grot, f ∈ Vd,l is given by ρl(g) : Vd,l → Vd,l and ρl(g)f = f ∘ g^{−1}. Furthermore, ρl is a unitary, irreducible representation, and ρl is not equivalent to ρl′, for any l ≠ l′ (see e.g., [Sta90, Theorem 1]). Therefore, by the Schur orthogonality relations [Kna96, Corollary 4.10], for any v1, w1 ∈ Vd,l1 and v2, w2 ∈ Vd,l2, we have
    Eg∼Grot[⟨ρl1(g)v1, w1⟩_{L²(S^{d−1})} ⟨ρl2(g)v2, w2⟩_{L²(S^{d−1})}] = δl1l2 ⟨v1, v2⟩_{L²(S^{d−1})} ⟨w1, w2⟩_{L²(S^{d−1})} / dim(Vd,l1).    (3)
Let g ∼ Grot, drawn from the Haar probability measure. For any h ∈ L²(S^{d−1}) such that ‖h‖²_{L²(S^{d−1})} = 1, by (a) the decomposition of L²(S^{d−1}) into subspaces of spherical harmonics, (b) the Grot-invariance of each subspace Vd,l, and (c) the Schur orthogonality relations in (3),
    Eg[⟨f ∘ g, h⟩²_{L²(S^{d−1})}]
    (a)= ∑_{l1,l2=0}^∞ Eg[⟨ΠVd,l1(f ∘ g), ΠVd,l1 h⟩_{L²(S^{d−1})} ⟨ΠVd,l2(f ∘ g), ΠVd,l2 h⟩_{L²(S^{d−1})}]
    (b)= ∑_{l1,l2=0}^∞ Eg[⟨(ΠVd,l1 f) ∘ g, ΠVd,l1 h⟩_{L²(S^{d−1})} ⟨(ΠVd,l2 f) ∘ g, ΠVd,l2 h⟩_{L²(S^{d−1})}]
    (c)= ∑_{l=0}^∞ (1/dim(Vd,l)) ‖ΠVd,l f‖²_{L²(S^{d−1})} ‖ΠVd,l h‖²_{L²(S^{d−1})}
    ≤ ( ∑_{l=0}^∞ ‖ΠVd,l h‖²_{L²(S^{d−1})} ) · max_{l∈Z≥0} (1/dim(Vd,l)) ‖ΠVd,l f‖²_{L²(S^{d−1})}
    = max_{l∈Z≥0} (1/dim(Vd,l)) ‖ΠVd,l f‖²_{L²(S^{d−1})}.
Let l* be the optimal value of l in the last line, which is known to exist by the fact that ‖ΠVd,l f‖² ≤ ‖f‖² and dim(Vd,l) → ∞ as l → ∞. The inequality is achieved by h = ΠVd,l* f / ‖ΠVd,l* f‖.
This implies that the Grot-alignment of f is negligible if and only if its projection to the low-order spherical harmonics is negligible. By Theorem 3.1, this implies the necessity result of Theorem 3.6.
3.2 Application: Extending the merged-staircase property necessity result
In our second application, we study the setting of learning a sparse function on the binary hypercube (a.k.a. a junta) that depends on only P ≪ d coordinates of the input x, i.e.,
    f*(x) = h*(x1, . . . , xP),
where h* : HP → R. The regime of interest to us is when h* is fixed and d → ∞, representing a hidden signal in a high-dimensional dataset. This setting was studied by [ABM22], who identified the "merged-staircase property" (MSP) as an extension of [ABB+21]. We generalize the MSP below.
Definition 3.8 (l-MSP). For l ∈ Z+ and h* : HP → R, we say that h* satisfies the merged staircase property with leap l (i.e., l-MSP) if its set of nonzero Fourier coefficients S = {S : ĥ*(S) ≠ 0} can be ordered as S = {S1, . . . , Sm} such that for all i ∈ [m], |Si \ ∪_{j<i} Sj| ≤ l.
For example, h*(x) = x1 + x1x2 + x1x2x3 satisfies 1-MSP; h*(x) = x1x2 + x1x2x3 satisfies 2-MSP, but not 1-MSP because of the leap required to learn x1x2; similarly h*(x) = x1x2x3 + x4 satisfies 3-MSP but not 2-MSP. If h* satisfies l-MSP for some small l, then the function f* can be learned greedily in an efficient manner, by iteratively discovering the coordinates on which it depends. In [ABM22] it was proved that the 1-MSP property nearly characterized which sparse functions could be ε-learned in Oε,h*(d) samples by one-pass SGD training in the mean-field regime.
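Checking l-MSP is itself a simple greedy procedure: repeatedly add any remaining support set that introduces at most l new coordinates. A minimal sketch (my own illustration, not from the paper) reproduces the three examples above.

```python
# Sketch: greedy check of the merged-staircase property with leap l (Definition 3.8).
def satisfies_l_msp(support, l):
    remaining, seen = list(support), set()
    while remaining:
        nxt = next((S for S in remaining if len(set(S) - seen) <= l), None)
        if nxt is None:
            return False
        seen |= set(nxt)
        remaining.remove(nxt)
    return True

print(satisfies_l_msp([{1}, {1, 2}, {1, 2, 3}], l=1))   # True:  x1 + x1x2 + x1x2x3
print(satisfies_l_msp([{1, 2}, {1, 2, 3}], l=1))        # False: x1x2 + x1x2x3 needs leap 2
print(satisfies_l_msp([{1, 2}, {1, 2, 3}], l=2))        # True
print(satisfies_l_msp([{1, 2, 3}, {4}], l=2))           # False: x1x2x3 + x4 needs leap 3
```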
We prove the MSP necessity result for GD training. On the one hand, our necessity result is for a different training algorithm, GD, which injects noise during training. On the other, our result is much more general since it applies whenever GD is permutation-equivariant, which includes training of FC networks and ResNets of any depth (whereas the necessity result of [ABM22] applies only to two-layer architectures in the mean-field regime). We also generalize the result to any leap l.
Theorem 3.9 (l-MSP necessity). Let fNN(·; θ) : Hd → R be an architecture and µθ ∈ P(R^p) be an initialization such that GD is Gperm-equivariant. Let θ^k be the random weights after k steps of GD training with noise parameter τ > 0, step size η, and clipping radius R on the distribution D = D(f*, Hd). Suppose that f*(x) = h*(z), where z = (x1, . . . , xP) and h* : HP → R does not satisfy l-MSP for some l ∈ Z+. Then there are constants C, ε0 > 0 depending on h* such that
    Pθ^k[ ℓD(θ^k) ≤ ε0 ] ≤ (CηR/(2τ)) √(k/d^{l+1}) + C/d^{l+1}.
The interpretation is that if h* does not satisfy l-MSP, then to learn f* to better than ε0 error with constant probability, we need at least Ωh*,ε(d^{l+1}) steps of (GD) on a network with step size η = Oh*,ε(1), clipping radius R = Oh*,ε(1), and noise level τ = Ωh*,ε(1). The proof is deferred to Appendix C. It proceeds by first isolating the "easily-reachable" coordinates T ⊆ [P], and subtracting their contribution from f*. We then bound the G-alignment of the resulting function, where G is the permutation group on [d] \ T.
4 Hardness for learning with SGD
In this section, for γ > 0, we let D(f, µX, γ) ∈ P(X × R) denote the distribution of (x, f(x) + ξ) where x ∼ µX and ξ ∼ N(0, γ²) is independent noise.
We show that the equivariance of SGD on certain architectures implies that the function fmod8 : Hd → {0, . . . , 7} given by
    fmod8(x) ≡ ∑_i xi (mod 8)    (4)
is hard for SGD-trained, i.i.d. symmetrically-initialized FC networks. Our hardness result relies on a cryptographic assumption to prove superpolynomial lower bounds for SGD learning. For any S ⊆ [d], let χS : Hd → {+1, −1} be the parity function χS(x) = ∏_{i∈S} xi.
Definition 4.1. The learning parities with Gaussian noise, (d, n, γ)-LPGN, problem is parametrized by d, n ∈ Z>0 and γ ∈ R>0. An instance (S, q, (xi, yi)i∈[n]) consists of (i) an unknown subset S ⊆ [d] of size |S| = ⌊d/2⌋, and (ii) a known query vector q ∼ Hd, and i.i.d. samples (xi, yi)i∈[n] ∼ D(χS, Hd, γ). The task is to return χS(q) ∈ {+1, −1}.11
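A minimal sketch (toy parameters, not from the paper's reduction) of sampling a (d, n, γ)-LPGN instance:

```python
# Sketch: sample a (d, n, gamma)-LPGN instance -- a hidden parity set S of size
# floor(d/2), a public query q, and n noisy parity samples (x_i, chi_S(x_i) + xi_i).
import numpy as np

def lpgn_instance(d, n, gamma, rng=np.random.default_rng(0)):
    S = rng.choice(d, size=d // 2, replace=False)          # hidden subset
    q = rng.choice([1, -1], size=d)                        # known query vector
    x = rng.choice([1, -1], size=(n, d))
    y = x[:, S].prod(axis=1) + gamma * rng.normal(size=n)  # chi_S(x) + N(0, gamma^2) noise
    return S, q, x, y

S, q, x, y = lpgn_instance(d=30, n=200, gamma=0.5)
print(q[S].prod(), y[:5].round(2))                         # the answer chi_S(q), some samples
```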
Our cryptographic assumption is that poly(d)-size circuits cannot succeed on LPGN.
Definition 4.2. Let γ > 0. We say γ-LPGN is poly(d)-time solvable if there is a sequence of sample sizes {nd}d∈N and circuits {Ad}d∈N such that nd, size(Ad) ≤ poly(d), and Ad solves (d, nd, γ)-LPGN with success probability at least 9/10, when inputs are rounded to poly(d) bits.
Assumption 4.3. Fix γ. The γ-LPGN-hardness assumption is: γ-LPGN is not poly(d)-time solvable.
The LPGN problem is simply the standard Learning Parities with Noise problem (LPN) [BKW03], except with Gaussian noise instead of binary classification noise, and we are also promised that |S| = ⌊d/2⌋. In Appendix D.3, we derive Assumption 4.3 from the standard hardness of LPN. We now state our SGD hardness result.
Theorem 4.4. Let {fNN,d, µθ,d}d∈N be a family of networks and initializations satisfying Assumption 2.4 (fully-connected) with i.i.d. symmetric initialization. Let γ > 0, and let {nd} be sample sizes such that (fNN,d, µθ,d)-SGD training on nd samples from D(fmod8, Hd, γ) rounded to poly(d) bits yields parameters θd with
    Eθd[ ‖fmod8 − fNN(·; θd)‖² ] ≤ 0.0001.
Then, under (γ/2)-LPGN hardness, (fNN,d, µθ,d)-SGD on nd samples cannot run in poly(d) time.
In order to prove Theorem 4.4, we use the sign-flip equivariance of gradient descent guaranteed by the symmetry in the initialization. A sign-flip equivariant network that learns fmod8(x) from γ-noisy samples is capable of solving the harder problem of learning fmod8(x ⊙ s) from γ-noisy samples, where s ∈ Hd is an unknown sign-flip vector. However, through an average-case reduction we show that this problem is (γ/2)-LPGN-hard. Therefore the theorem follows by contradiction.
5 Discussion
The general GD lower bound in Theorem 3.1 and the approach for basing hardness of SGD training on cryptographic assumptions in Theorem 4.4 could be further developed to other settings.
There are limitations of the results to address in future work. First, the GD lower bound requires adding noise to the gradients, which can hinder training. Second, real-world data distributions are typically not invariant to a group of transformations, so the results obtained by this work may not apply. It is open to develop results for distributions that are approximately invariant.
Finally, it is open whether computational lower bounds for SGD/GD training can be shown beyond those implied by equivariance. For example, consider the function f : Hd → {+1, −1} that computes the "full parity", i.e., the parity of all of the inputs f(x) = ∏_{i=1}^d xi. Past work has empirically shown that SGD on FC networks with Gaussian initialization [SSS17, AS20, NY21] fails to learn this function. Proving this would represent a significant advance, since there is no obvious equivariance that implies that the full parity is hard to learn; in fact we have shown weak-learnability with symmetric Rad(1/2) initialization, in which case training is Gsign,perm-equivariant.
Acknowledgements
We thank Jason Altschuler, Guy Bresler, Elisabetta Cornacchia, Sonia Hashim, Jan Hazla, Hannah Lawrence, Theodor Misiakiewicz, Dheeraj Nagaraj, and Philippe Rigollet for stimulating discussions. We thank the Simons Foundation and the NSF for supporting us through the Collaboration on the Theoretical Foundations of Deep Learning (deepfoundations.ai). This work was done in part while E.B. was visiting the Simons Institute for the Theory of Computing and the Bernoulli Center at EPFL, and was generously supported by Apple with an AI/ML fellowship.
11More formally, one would express this as a probabilistic promise problem [Ale03]. | 1. What are the main contributions and strengths of the paper regarding Lipschitz models and GD/SGD optimization?
2. What are the weaknesses and limitations of the paper, particularly regarding the assumption of bounded Lipschitz constant and its implications for neural networks?
3. How does the reviewer assess the significance and relevance of the two types of results presented in the paper?
4. What are the concerns regarding the second result's relation to prior work in [AS20], specifically regarding universality and efficiency?
5. Are there any suggestions or recommendations for future research related to this paper's topics? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies the learnability of Lipschitz models with GD/SGD optimization. It assumes that the learning algorithm satisfies G-equivariant for some non-trivial symmetry group G (which holds for GD/SGD learning in neural networks), and shows two types of results:
With GD learning, the paper characterizes the learnability of Lipschitz models on 1) functions of both the hypercube and the unit sphere; 2) functions with latent low-dimensional structure.
For the function given in Eq. (4), with SGD learning, a fully-connected network cannot learn and evaluate the function (4) in polynomial time with respect to the input dimension d.
Strengths And Weaknesses
Strength:
The first result characterizes the learnability of a rather general class of learning models. The second result provides a nice example that neural networks, despite their universal approximation ability, cannot efficiently learn.
Weakness:
My major concern is that it requires the architecture to have bounded Lipschitz constant over parameters. However, almost all neural networks have unbounded Lipschitz constants. The authors argue (in the supplementary materials) that, in practice, we can clip the gradients to yield bounded derivatives. However, for a purely theoretical paper, this is not a sound justification. Clipping the gradient is equivalent to changing the learning algorithm, but the objective function remains unchanged, still having an unbounded Lipschitz constant.
In that sense, Section 3 only holds for models with a bounded Lipschitz constant, for example, one-hidden-layer networks with fixed output layers and bounded-gradient activations. This deviates from the major claims --- "regular architectures and initialization" of neural networks. Meanwhile, the authors criticize that the prior work [AS20] does not "reflect architectures used in practice", which is unfair.
To rigorously prove for neural network training, the authors can either show i) GD operates in a bounded region, within which we have bounded Lipschitz, or ii) the theorem holds for gradient-clipping GD. Please consider addressing this problem. Otherwise, it is not eligible to claim the results hold for general DNN. Note that prior works (such as [Ele22]) that require C-Lipschitz on the learning model did not claim to hold for NN in their theorems.
Another concern is that the second result does not seem to counter [AS20] directly. [AS20] shows there exists a DNN that can emulate any efficient learning algorithm (that is able to learn in poly-time). It assumes that at least one poly-time efficient algorithm exists. However, the non-universality shown in Theorem 4.4 relies on γ-LPGN hardness, i.e., the assumption that γ-LPGN is not poly-time solvable by any algorithm. There is a gap between the "universality" discussed in [AS20] and this paper. It is still possible that a practical DNN with G-equivariance SGD learning can emulate any efficient algorithm.
Questions
Please refer to the [weakness] part.
Limitations
Please refer to the [weakness] part. |
NIPS | Title
On the non-universality of deep learning: quantifying the cost of symmetry
Abstract
We prove limitations on what neural networks trained by noisy gradient descent (GD) can efficiently learn. Our results apply whenever GD training is equivariant, which holds for many standard architectures and initializations. As applications, (i) we characterize the functions that fully-connected networks can weak-learn on the binary hypercube and unit sphere, demonstrating that depth-2 is as powerful as any other depth for this task; (ii) we extend the merged-staircase necessity result for learning with latent low-dimensional structure [ABM22] to beyond the meanfield regime. Under cryptographic assumptions, we also show hardness results for learning with fully-connected networks trained by stochastic gradient descent (SGD).
N/A
We prove limitations on what neural networks trained by noisy gradient descent (GD) can efficiently learn. Our results apply whenever GD training is equivariant, which holds for many standard architectures and initializations. As applications, (i) we characterize the functions that fully-connected networks can weak-learn on the binary hypercube and unit sphere, demonstrating that depth-2 is as powerful as any other depth for this task; (ii) we extend the merged-staircase necessity result for learning with latent low-dimensional structure [ABM22] to beyond the meanfield regime. Under cryptographic assumptions, we also show hardness results for learning with fully-connected networks trained by stochastic gradient descent (SGD).
1 Introduction
Over the last decade, deep learning has made advances in areas as diverse as image classification [KSH12], language translation [BCB14], classical board games [SHS+18], and programming [LCC+22]. Neural networks trained with gradient-based optimizers have surpassed classical methods for these tasks, raising the question: can we hope for deep learning methods to eventually replace all other learning algorithms? In other words, is deep learning a universal learning paradigm? Recently, [AS20, AKM+21] proved that in a certain sense the answer is yes: any PAC-learning algorithm [Val84] can be efficiently implemented as a neural network trained by stochastic gradient descent; analogously, any Statistical Query algorithm [Kea98] can be efficiently implemented as a neural network trained by noisy gradient descent.
However, there is a catch: the result of [AS20] relies on a carefully crafted network architecture with memory and computation modules, which is capable of emulating an arbitrary learning algorithm. This is far from the architectures which have been shown to be successful in practice. Neural networks in practice do incorporate domain knowledge, but they have more “regularity” than the architectures of [AS20], in the sense that they do not rely on heterogeneous and carefully assigned initial weights (e.g., convolutional networks and transformers for image recognition and language processing [LB+95, LKF10, VSP+17], graph neural networks for analyzing graph data [GMS05, BZSL13, VCC+17], and networks specialized for particle physics [BAO+20]). We therefore refine our question:
Is deep learning with “regular” architectures and initializations a universal learning paradigm? If not, can we quantify its limitations when architectures and data are not well aligned?
We would like an answer applicable to a wide range of architectures. In order to formalize the problem and develop a general theory, we take an approach similar to [Ng04, Sha18, LZA21] of understanding deep learning through the equivariance group G (a.k.a., symmetry group) of the learning algorithm.
Definition 1.1 (G-equivariant algorithm). A randomized algorithm $A$ that takes in a data distribution $D \in \mathcal{P}(\mathcal{X} \times \mathcal{Y})$¹ and outputs a function $A(D) : \mathcal{X} \to \mathcal{Y}$ is said to be G-equivariant if for all $g \in G$
$$A(D) \stackrel{d}{=} A(g(D)) \circ g. \qquad \text{(G-equivariance)}$$
Here $g$ is a group element that acts on the data space $\mathcal{X}$, and so is viewed as a function $g : \mathcal{X} \to \mathcal{X}$, and $g(D)$ is the distribution of $(g(x), y)$, where $(x, y) \sim D$.
In the case that the algorithm A is deep learning on the distribution D, the equivariance group depends on the optimizer, the architecture, and the network initialization [Ng04, LZA21].2
Examples of G-equivariant algorithms in deep learning In many deep learning settings, the equivariance group of the learning algorithm is large. Thus, in this paper, we call an algorithm “regular” if it has a large equivariance group. For example, SGD training of fully-connected networks with Gaussian initialization is orthogonally-equivariant [Ng04]; and is permutation-equivariant if we add skip connections [HZRS16]. SGD training of convolutional networks is translationally-equivariant if circular convolutions are used [SNPP19], and SGD training of i.i.d.-initialized transformers without positional embeddings is equivariant to permutations of tokens [VSP+17]. Furthermore, [LZA21, Theorem C.1] provides general conditions under which a deep learning algorithm is equivariant. See also the preliminaries in Section 2.
Summary of this work Based off of G-equivariance, we prove limitations on what “regular” neural networks trained by noisy gradient descent (GD) or stochastic gradient descent (SGD) can efficiently learn, implying a separation with the initializations and architectures considered in [AS20]. For GD, we prove a master theorem that enables two novel applications: (a) characterizing which functions can be efficiently weak-learned by fully-connected (FC) networks on both the hypercube and the unit sphere; and (b) a necessity result for which functions on the hypercube with latent low-dimensional structure can be efficiently learned. See Sections 1.2 and 1.3 for more details.
1.1 Related work
Most prior work on computational lower bounds for deep learning has focused on proving limitations of kernel methods (a.k.a. linear methods). Starting with [Bar93] and more recently with [WLLM19, AL19, KMS20, AL20, Hsu, HSSV21, ABM22] it is known that there are problems on which kernel methods provably fail. These results apply to training neural networks in the Neural Tangent Kernel (NTK) regime [JGH18], but do not apply to more general nonlinear training. Furthermore, for specific architectures such as FC architectures [GMMM21, Mis22] and convolutional architectures [MM21], the kernel and random features models at initialization are well understood, yielding stronger lower bounds for training in the NTK regime.
For nonlinear training, which is the setting of this paper, considerably less is known. In the context of sample complexity, [Ng04] introduced the study of the equivariance group of SGD, and constructed a distribution on $d$ dimensions with an $\Omega(d)$ versus $O(1)$ sample complexity separation for learning with an SGD-trained FC architecture versus an arbitrary algorithm. More recently, [LZA21] built on [Ng04] to show an $O(1)$ versus $\Omega(d^2)$ sample-complexity separation between SGD-trained convolutional and FC architectures. In this paper, we also analyze the equivariance group of the training algorithm, but with the goal of proving superpolynomial computational lower bounds.
In the context of computational lower bounds, it is known that networks trained with noisy3 gradient descent (GD) fall under the Statistical Query (SQ) framework [Kea98], which allows showing computational limitations for GD training based on SQ lower bounds. This has been combined in [AS20, SSS17, MS20, ACHM22] with the permutation symmetry of GD-training of i.i.d. FC networks to prove impossibility of efficiently learning high-degree parities and polynomials. In
¹ The set of probability distributions on $\Omega$ is denoted by $\mathcal{P}(\Omega)$. You should think of $D \in \mathcal{P}(\mathcal{X} \times \mathcal{Y})$ as a distribution of pairs $(x, y)$ of covariates and labels.
² Note that the equivariance group of a training algorithm should not be confused with the equivariance group of an architecture in the context of geometric deep learning [BBCV21]. In that context, G-equivariance refers to the property of a neural network architecture $f_{NN}(\cdot;\theta) : \mathcal{X} \to \mathcal{Y}$ that $f_{NN}(g(x);\theta) = g(f_{NN}(x;\theta))$ for all $x \in \mathcal{X}$ and all group elements $g \in G$. In that case, $G$ acts on both the input in $\mathcal{X}$ and the output in $\mathcal{Y}$.
³ Here the noise is used to control the gradients' precision as in [AS20, AKM+21].
our work, we show that these arguments can be viewed in the broader context of more general group symmetries, yielding stronger lower bounds than previously known. For stochastic gradient descent (SGD) training, [ABM22] proves a computational limitation for training of two-layer meanfield networks, but their result applies only when SGD converges to the mean-field limit, and does not apply to more general architectures beyond two-layer networks. Finally, most related to our SGD hardness result is [Sha18], which shows limitations of SGD-trained FC networks under a cryptographic assumption. However, the argument of [Sha18] relies on training being equivariant to linear transformations of the data, and therefore requires that data be whitened or preconditioned. Instead, our result for SGD does not require any preprocessing steps.
There is also recent work showing sample complexity benefits of invariant/equivariant neural network architectures [MMM21, EZ21, Ele21, BVB21, Ele22]. In contrast, we study equivariant training algorithms. These are distinct concepts: a deep learning algorithm can be G-equivariant, while the neural network architecture is neither G-invariant nor G-equivariant. For example, a FC network is not invariant to orthogonal transformations of the input. However, if we initialize it with Gaussian weights and train with SGD, then the learning algorithm is equivariant to orthogonal transformations of the input (see Proposition 2.5 below).
1.2 Contribution 1: Lower bounds for noisy gradient descent (GD)
Consider the supervised learning setup where we train a neural network $f_{NN}(\cdot;\theta) : \mathcal{X} \to \mathbb{R}$ parametrized by $\theta \in \mathbb{R}^p$ to minimize the mean-squared error on a data distribution $D \in \mathcal{P}(\mathcal{X} \times \mathbb{R})$,
$$\ell_D(\theta) = \mathbb{E}_{(x,y)\sim D}\big[(y - f_{NN}(x;\theta))^2\big]. \qquad (1)$$
The noisy Gradient Descent (GD) training algorithm randomly initializes $\theta^0 \sim \mu_\theta$ for some initialization distribution $\mu_\theta \in \mathcal{P}(\mathbb{R}^p)$, and then iteratively updates the parameters with step size $\eta > 0$ in a direction $g_D(\theta^k)$ approximating the population loss gradient, plus Gaussian noise $\xi^k \sim \mathcal{N}(0, \tau^2 I)$,
$$\theta^{k+1} = \theta^k - \eta g_D(\theta^k) + \xi^k. \qquad \text{(GD)}$$
Up to a constant factor, $g_D(\theta)$ is the population loss gradient, except we have clipped the gradients of the network with the projection operator $\Pi_{B(0,R)}$ to lie in the ball $B(0,R) = \{z : \|z\|_2 \le R\} \subset \mathbb{R}^p$,⁴
$$g_D(\theta) = -\mathbb{E}_{(x,y)\sim D}\big[(y - f_{NN}(x;\theta))\,\big(\Pi_{B(0,R)} \nabla_\theta f_{NN}(x;\theta)\big)\big].$$
Clipping the gradients is often used in practice to avoid instability from exploding gradients (see, e.g., [ZHSJ19] and references within). In our context, clipping ensures that the injected noise $\xi^k$ is on the same scale as the gradient $\nabla_\theta f_{NN}$ of the network and so it controls the gradients' precision. Similarly to the works [AS20, AKM+21, ACHM22], we consider noisy gradient descent training to be efficient if the following conditions are met.
Definition 1.2 (Efficiency of GD, informal). GD training is efficient if the clipping radius $R$, step size $\eta$, and inverse noise magnitude $1/\tau$ are all polynomially-bounded in $d$, since then (GD) can be efficiently implemented using noisy minibatch SGD.⁵
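To make the update rule concrete, the following sketch performs one noisy, gradient-clipped GD step on a toy two-layer ReLU network with square loss. It is our own illustration, not code from the paper: the architecture, data, and hyperparameter values are arbitrary, and the population expectation in $g_D$ is approximated by an empirical average.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 10, 32, 512          # input dim, hidden width, sample size
eta, tau, R = 0.1, 0.01, 10.0  # step size, noise level, clipping radius

# Toy data: x uniform on the hypercube, y = x_1 * x_2 (a degree-2 parity).
X = rng.choice([-1.0, 1.0], size=(n, d))
y = X[:, 0] * X[:, 1]

# Two-layer network f(x; theta) = a^T relu(W x), with theta = (W, a).
W = rng.normal(0, 1 / np.sqrt(d), size=(m, d))
a = rng.normal(0, 1 / np.sqrt(m), size=m)

def noisy_gd_step(W, a):
    pre = X @ W.T                      # (n, m) pre-activations
    act = np.maximum(pre, 0.0)         # ReLU activations
    f = act @ a                        # network outputs f(x_i; theta)
    resid = y - f                      # residuals (y - f)
    g_W = np.zeros_like(W)
    g_a = np.zeros_like(a)
    for i in range(n):
        # Per-example gradient of the network output wrt (W, a), then project to B(0, R).
        dW = np.outer(a * (pre[i] > 0), X[i])
        da = act[i]
        norm = np.sqrt((dW ** 2).sum() + (da ** 2).sum())
        scale = min(1.0, R / (norm + 1e-12))
        # Accumulate g_D(theta) = -E[(y - f) * clipped gradient], estimated empirically.
        g_W += -resid[i] * scale * dW / n
        g_a += -resid[i] * scale * da / n
    # Noisy GD update: theta <- theta - eta * g_D(theta) + N(0, tau^2 I).
    W_new = W - eta * g_W + tau * rng.normal(size=W.shape)
    a_new = a - eta * g_a + tau * rng.normal(size=a.shape)
    return W_new, a_new

W, a = noisy_gd_step(W, a)
```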
We prove that some data distributions cannot be efficiently learned by G-equivariant GD training. For this, we introduce the G-alignment:
Definition 1.3 (G-alignment). Let $G$ be a compact group, let $\mu_{\mathcal{X}} \in \mathcal{P}(\mathcal{X})$ be a distribution over data points, and let $f \in L^2(\mu_{\mathcal{X}})$ be a labeling function. The G-alignment of $(\mu_{\mathcal{X}}, f)$ is
$$\mathcal{C}((\mu_{\mathcal{X}}, f); G) = \sup_{h}\, \mathbb{E}_{g\sim\mu_G}\big[\mathbb{E}_{x\sim\mu_{\mathcal{X}}}[f(g(x))\,h(x)]^2\big],$$
where $\mu_G$ is the Haar measure of $G$ and the supremum is over $h \in L^2(\mu_{\mathcal{X}})$ such that $\|h\|_2 = 1$.
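The supremum over $h$ makes the G-alignment awkward to compute directly, but for any fixed unit-norm test function $h$ the inner expectation is a Monte Carlo-estimable lower bound on it. The sketch below is our own illustration (not from the paper): it estimates $\mathbb{E}_g[\mathbb{E}_x[f(g(x))h(x)]^2]$ when $G$ is the group of signed permutations (sign flips composed with coordinate permutations) acting on the hypercube, with an arbitrary choice of $f$ and $h$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_g, n_x = 8, 2000, 4096                # dimension, group samples, data samples

f = lambda x: x[:, 0] * x[:, 1]            # example label function f(x) = x_1 x_2
h = lambda x: x[:, 0] * x[:, 1]            # fixed test function with E[h^2] = 1

X = rng.choice([-1.0, 1.0], size=(n_x, d)) # x ~ Unif({+1, -1}^d)

vals = []
for _ in range(n_g):
    s = rng.choice([-1.0, 1.0], size=d)    # random sign flips
    perm = rng.permutation(d)              # random coordinate permutation
    gX = s * X[:, perm]                    # g(x) = s ⊙ pi(x)
    vals.append(np.mean(f(gX) * h(X)) ** 2)

print("Monte Carlo lower bound on the G-alignment:", np.mean(vals))
```

For this choice of $f$ and $h$ the estimate concentrates near $1/\binom{8}{2} \approx 0.036$, which matches the closed-form alignment given later in Lemma 3.5.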
In our applications, we use tools from representation theory (see, e.g., [Kna96]) to evaluate the G-alignment. Using the G-alignment, we can prove a master theorem for lower bounds:
Theorem 1.4 (GD lower bound, informal statement of Theorem 3.1). Let $D_f \in \mathcal{P}(\mathcal{X} \times \mathbb{R})$ be the distribution of $(x, f(x))$ for $x \sim \mu_{\mathcal{X}}$. If $\mu_{\mathcal{X}}$ is G-invariant⁶ and the G-alignment of $(\mu_{\mathcal{X}}, f)$ is small, then $f$ cannot be efficiently learned by a G-equivariant GD algorithm.
⁴ Note that if $f_{NN}$ is an $R$-Lipschitz model, then $g_D(\theta)$ will simply be the population gradient of the loss. ⁵ Efficient implementability by minibatch SGD assumes bounded residual errors. ⁶ Meaning that if $x \sim \mu_{\mathcal{X}}$, then for any $g \in G$, we also have $g(x) \sim \mu_{\mathcal{X}}$.
Proof ideas We first make an observation of [Ng04]: if a G-equivariant algorithm can learn the function $f$ by training on the distribution $D_f$, then, for any group element $g \in G$, it can learn $f \circ g$ by training on the distribution $D_{f \circ g}$. In other words, the algorithm can learn the class of functions $\mathcal{F} = \{f \circ g : g \in G\}$, which can potentially be much larger than just the singleton set $\{f\}$. We conclude by showing that the class of functions $\mathcal{F}$ cannot be efficiently learned by GD training. The intuition is that the G-alignment measures the diversity of the functions in $\mathcal{F}$. If the G-alignment is small, then there is no function $h$ that correlates with most of the functions in $\mathcal{F}$, which can be used to show $\mathcal{F}$ is hard to learn by gradient descent.
This type of argument appears in [AS20, ACHM22] in the specific case of Boolean functions and for permutation equivariance; our proof both applies to a more general setting (beyond Boolean functions and permutations) and yields sharper bounds; see Appendix A.3. Our bound can also be interpreted in terms of the Statistical Query framework, as we discuss in Appendix A.4. While Theorem 1.4 is intuitively simple, we demonstrate its power and ease-of-use by deriving two new applications.
Application: Characterization of weak-learnability by fully-connected (FC) networks In our first application, we consider weak-learnability: when can a function be learned non-negligibly better than just outputting the estimate $f_{NN} \equiv 0$? Using Theorem 1.4, we characterize which functions over the binary hypercube $f : \{+1,-1\}^d \to \mathbb{R}$ and over the sphere $f : S^{d-1} \to \mathbb{R}$ are efficiently weak-learnable by GD-trained FC networks with i.i.d. symmetric and i.i.d. Gaussian initialization, respectively. The takeaway is that a function $f : \{+1,-1\}^d \to \mathbb{R}$ is weak-learnable if and only if it has a nonnegligible Fourier coefficient of order $O(1)$ or $d - O(1)$. Similarly, a function $f : S^{d-1} \to \mathbb{R}$ is weak-learnable if and only if it has nonnegligible projection onto the degree-$O(1)$ spherical harmonics. Perhaps surprisingly, such functions can be efficiently weak-learned by 2-layer fully-connected networks, which shows that adding more depth does not help. This application is presented in Section 3.1.
Application: Evidence for the staircase property In our second application, we consider learning a target function $f : \{+1,-1\}^d \to \mathbb{R}$ that only depends on the first $P$ coordinates, $f(x) = h(x_1, \dots, x_P)$. Our regime of interest here is when the function $h : \{+1,-1\}^P \to \mathbb{R}$ remains fixed and the dimension $d$ grows, since this models the situation where a latent low-dimensional space determines the labels in a high-dimensional dataset. Recently, [ABM22] studied SGD-training of mean-field two-layer networks, and gave a near-characterization of which functions can be learned to arbitrary accuracy $\epsilon$ in $O_{h,\epsilon}(d)$ samples, in terms of the merged-staircase property (MSP). Using Theorem 1.4, we prove that the MSP is necessary for GD-learnability whenever training is permutation-equivariant (which applies beyond the 2-layer mean-field regime) and we also generalize it beyond leaps of size 1. Details are in Section 3.2.
1.3 Contribution 2: Hardness for stochastic gradient descent (SGD)
The second part of this paper concerns Stochastic Gradient Descent (SGD) training, which randomly initializes the weights $\theta^0 \sim \mu_\theta$, and then iteratively trains the parameters with the following update rule to try to minimize the loss (1):
$$\theta^{k+1} = \theta^k - \eta \nabla_\theta \big(y_{k+1} - f_{NN}(x_{k+1};\theta)\big)^2 \,\Big|_{\theta=\theta^k}, \qquad \text{(SGD)}$$
where $(y_{k+1}, x_{k+1}) \sim D$ is a fresh sample on each iteration, and $\eta > 0$ is the learning rate.⁷
Proving computational lower bounds for SGD is a notoriously difficult problem [AKM+21], exacerbated by the fact that for general architectures SGD can be used to simulate any polynomial-time learning algorithm [AS20]. However, we demonstrate that one can prove hardness results for SGD training based off of cryptographic assumptions when the training algorithm has a large equivariance group. We demonstrate the non-universality of SGD on a standard FC architecture. Theorem 1.5 (Hardness for SGD, informal statement of Theorem 4.4). Under the assumption that the Learning Parities with Noise (LPN) problem8 is hard, FC neural networks with Gaussian initialization
7For brevity, we focus on one-pass SGD with a single fresh sample per iteration. Our results extend to empirical risk minimization (ERM) setting and to mini-batch SGD, see Remark E.1.
8See Section 4 and Appendix D.3 for definitions and discussion on LPN.
trained by SGD cannot learn $f_{\mathrm{mod}8} : \{+1,-1\}^d \to \{0, \dots, 7\}$,
$$f_{\mathrm{mod}8}(x) \equiv \sum_{i=1}^{d} x_i \pmod 8,$$
in polynomial time from noisy samples $(x, f_{\mathrm{mod}8}(x) + \xi)$ where $x \sim \{+1,-1\}^d$ and $\xi \sim \mathcal{N}(0, 1)$.
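For concreteness, here is a small sketch (our own, not from the paper) of the learning problem in Theorem 1.5: it draws the noisy samples $(x, f_{\mathrm{mod}8}(x) + \xi)$ that an SGD learner would be trained on, with arbitrary choices of $d$ and $n$.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 50, 10_000

X = rng.choice([-1, 1], size=(n, d))            # x ~ Unif({+1, -1}^d)
f_mod8 = np.mod(X.sum(axis=1), 8)               # f_mod8(x) = sum_i x_i (mod 8)
y = f_mod8 + rng.normal(0.0, 1.0, size=n)       # noisy labels, xi ~ N(0, 1)
```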
This result shows a limitation of SGD training based on an average-case reduction from a cryptographic problem. The closest prior result is in [Sha18], which proved hardness results for learning with SGD on FC networks, but required preprocessing the data with a whitening transformation.
Proof idea The FC architecture and Gaussian initialization are necessary: an architecture that outputted $f_{\mathrm{mod}8}(x)$ at initialization would trivially achieve zero loss. However, SGD on Gaussian-initialized FC networks is sign-flip equivariant, and this symmetry makes $f_{\mathrm{mod}8}$ hard to learn. If a sign-flip equivariant algorithm can learn the function $f_{\mathrm{mod}8}(x)$ from noisy samples, then it can learn the function $f_{\mathrm{mod}8}(x \odot s)$ from noisy samples, where $s \in \{+1,-1\}^d$ is an unknown sign-flip vector, and $\odot$ denotes elementwise product. However, this latter problem is hard under standard cryptographic assumptions. More details in Section 4.
2 Preliminaries
Notation Let $\mathcal{H}_d = \{+1,-1\}^d$ be the binary hypercube, and $S^{d-1} = \{x \in \mathbb{R}^d : \|x\|_2 = 1\}$ be the unit sphere. The law of a random variable $X$ is $\mathcal{L}(X)$. If $S$ is a finite set, then $X \sim S$ stands for $X \sim \mathrm{Unif}[S]$. Also let $x \sim S^{d-1}$ denote $x$ drawn from the uniform Haar measure on $S^{d-1}$. For a set $\Omega$, let $\mathcal{P}(\Omega)$ be the set of distributions on $\Omega$. Let $\odot$ be the elementwise product. For any $\mu_{\mathcal{X}} \in \mathcal{P}(\mathcal{X})$ and group $G$ acting on $\mathcal{X}$, we say $\mu_{\mathcal{X}}$ is G-invariant if $g(x) \stackrel{d}{=} x$ for $x \sim \mu_{\mathcal{X}}$ and any $g \in G$.
2.1 Equivariance of GD and SGD
We define GD and SGD equivariance separately.
Definition 2.1. Let $A_{GD}$ be the algorithm that takes in a data distribution $D \in \mathcal{P}(\mathcal{X} \times \mathbb{R})$, runs (GD) on initialization $\theta^0 \sim \mu_\theta$ for $k$ steps, and outputs the function $A_{GD}(D) = f_{NN}(\cdot;\theta^k)$.
We say "$(f_{NN}, \mu_\theta)$-GD is G-equivariant" if $A_{GD}$ is G-equivariant in the sense of Definition 1.1.
Definition 2.2. Let $A_{SGD}$ be the algorithm that takes in samples $(x_i, y_i)_{i\in[n]}$, runs (SGD) on initialization $\theta^0 \sim \mu_\theta$ for $n$ steps, and outputs $A_{SGD}((x_i, y_i)_{i\in[n]}) = f_{NN}(\cdot;\theta^n)$.
We say "$(f_{NN}, \mu_\theta)$-SGD is G-equivariant" if $A_{SGD}((x_i, y_i)_{i\in[n]}) \stackrel{d}{=} A_{SGD}((g(x_i), y_i)_{i\in[n]}) \circ g$ for any $g \in G$ and any samples $(x_i, y_i)_{i\in[n]}$.
2.2 Regularity conditions on networks imply equivariances of GD and SGD
We take a data space $\mathcal{X} \subseteq \mathbb{R}^d$, and consider the following groups that act on $\mathbb{R}^d$.
Definition 2.3. Define the following groups and actions:
• Let $G_{\text{perm}} = S_d$ denote the group of permutations on $[d]$. An element $\pi \in G_{\text{perm}}$ acts on $x \in \mathbb{R}^d$ in the standard way: $\pi(x) = (x_{\pi(1)}, \dots, x_{\pi(d)})$.
• Let $G_{\text{sign,perm}}$ denote the group of signed permutations; an element $g = (s, \pi) \in G_{\text{sign,perm}}$ is given by a sign-flip vector $s \in \mathcal{H}_d$ and a permutation $\pi \in G_{\text{perm}}$. It acts on $x \in \mathbb{R}^d$ by $g(x) = s \odot \pi(x) = (s_1 x_{\pi(1)}, \dots, s_d x_{\pi(d)})$.⁹
• Let $G_{\text{rot}} = SO(d) \subseteq GL(d, \mathbb{R})$ denote the rotation group. An element $g \in G_{\text{rot}}$ is a rotation matrix that acts on $x \in \mathbb{R}^d$ by matrix multiplication.
⁹ The group product is $g_1 g_2 = (s_1, \pi_1)(s_2, \pi_2) = (s_1 \odot \pi_1(s_2), \pi_1 \pi_2)$.
Under mild conditions on the neural network architecture and initialization, GD and SGD training are known to be $G_{\text{perm}}$-, $G_{\text{sign,perm}}$-, or $G_{\text{rot}}$-equivariant [Ng04, LZA21].
Assumption 2.4 (Fully-connected i.i.d. first layer and no skip connections from the input). We can decompose the parameters as $\theta = (W, \psi)$, where $W \in \mathbb{R}^{m \times d}$ is the matrix of the first-layer weights, and there is a function $g_{NN}(\cdot;\psi) : \mathbb{R}^m \to \mathbb{R}$ such that $f_{NN}(x;\theta) = g_{NN}(Wx;\psi)$. Furthermore, the initialization distribution is $\mu_\theta = \mu_W \times \mu_\psi$, where $\mu_W = \mu_w^{\otimes(m \times d)}$ for $\mu_w \in \mathcal{P}(\mathbb{R})$.
Notice that Assumption 2.4 is satisfied by FC networks with i.i.d. initialization. Under assumptions on $\mu_w$, we obtain equivariances of GD and SGD (see Appendix E for proofs).
Proposition 2.5 ([Ng04, LZA21]). Under Assumption 2.4, GD and SGD are $G_{\text{perm}}$-equivariant. If $\mu_w$ is sign-flip symmetric, then GD and SGD are $G_{\text{sign,perm}}$-equivariant. If $\mu_w = \mathcal{N}(0, \sigma^2)$ for some $\sigma$, then GD and SGD are $G_{\text{rot}}$-equivariant.
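The sign-flip case of Proposition 2.5 can be checked numerically in small instances via the usual coupling argument: if the data are sign-flipped and the columns of the first-layer initialization are flipped accordingly, the SGD trajectories match, and the trained predictor composed with the sign-flip agrees with the original one. The sketch below is our own illustration with an arbitrary toy two-layer network; since an i.i.d. symmetric initialization is invariant to the column flip, this coupling yields the distributional statement.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, n, eta, steps = 6, 16, 200, 0.05, 25

X = rng.choice([-1.0, 1.0], size=(n, d))
y = X[:, 0] * X[:, 1]
s = rng.choice([-1.0, 1.0], size=d)              # a fixed sign-flip g(x) = s ⊙ x

def sgd(X, y, W, a):
    W, a = W.copy(), a.copy()
    for i in range(steps):
        x_i, y_i = X[i], y[i]
        pre = W @ x_i
        act = np.maximum(pre, 0.0)
        resid = y_i - act @ a
        # gradient of (y - f)^2 with respect to (W, a)
        gW = -2.0 * resid * np.outer(a * (pre > 0), x_i)
        ga = -2.0 * resid * act
        W, a = W - eta * gW, a - eta * ga
    return W, a

W0 = rng.normal(size=(m, d))                     # i.i.d. symmetric initialization
a0 = rng.normal(size=m)

W1, a1 = sgd(X, y, W0, a0)                       # trained on D
W2, a2 = sgd(X * s, y, W0 * s, a0)               # trained on g(D), coupled init W0 diag(s)

x_test = rng.choice([-1.0, 1.0], size=d)
f1 = np.maximum(W1 @ x_test, 0.0) @ a1           # A(D)(x)
f2 = np.maximum(W2 @ (s * x_test), 0.0) @ a2     # A(g(D))(g(x))
print(np.isclose(f1, f2))                        # True: the predictors agree
```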
3 Lower bounds for learning with GD
In this section, let $D(f, \mu_{\mathcal{X}}) \in \mathcal{P}(\mathcal{X} \times \mathbb{R})$ denote the distribution of $(x, f(x))$ where $x \sim \mu_{\mathcal{X}}$. We give a master theorem for computational lower bounds for learning with G-equivariant GD.
Theorem 3.1 (GD lower bound using G-alignment). Let $G$ be a compact group, and let $f_{NN}(\cdot;\theta) : \mathcal{X} \to \mathbb{R}$ be an architecture and $\mu_\theta \in \mathcal{P}(\mathbb{R}^p)$ be an initialization such that GD is G-equivariant. Fix any G-invariant distribution $\mu_{\mathcal{X}} \in \mathcal{P}(\mathcal{X})$, any label function $f_* \in L^2(\mu_{\mathcal{X}})$, and any baseline function $\alpha \in L^2(\mu_{\mathcal{X}})$ satisfying $\alpha \circ g = \alpha$ for all $g \in G$. Let $\theta^k$ be the random weights after $k$ time-steps of GD training with noise parameter $\tau > 0$, step size $\eta > 0$, and clipping radius $R > 0$ on the distribution $D = D(f_*, \mu_{\mathcal{X}})$. Then, for any $\epsilon > 0$,
$$\mathbb{P}_{\theta^k}\Big[\ell_D(\theta^k) \le \|f_* - \alpha\|^2_{L^2(\mu_{\mathcal{X}})} - \epsilon\Big] \le \frac{\eta R \sqrt{kC}}{2\tau} + \frac{C}{\epsilon},$$
where $C = \mathcal{C}((f_* - \alpha, \mu_{\mathcal{X}}); G)$ is the G-alignment of Definition 1.3.
As discussed in Section 1.2, the theorem states that if the G-alignment $C$ is very small, then GD training cannot efficiently improve on the trivial loss from outputting $\alpha$: either the number of steps $k$, the gradient precision $R/\tau$, or the step size $\eta$ have to be very large in order to learn. Appendix A shows a generalization of the theorem for learning a class of functions $\mathcal{F} = \{f_1, \dots, f_m\}$ instead of just a single function $f_*$. This result goes beyond the lower bound of [AS20] even when $G$ is the trivial group with one element: the main improvement is that Theorem 3.1 proves hardness for learning real-valued functions beyond just Boolean-valued functions. We demonstrate the usefulness of the theorem through two new applications in Sections 3.1 and 3.2.
3.1 Application: Characterizing weak-learnability by FC networks
In our first application of Theorem 3.1, we consider FC architectures with i.i.d. initialization, and show how to use their training equivariances to characterize what functions they can weak-learn: i.e., for what target functions f⇤ they can efficiently achieve a non-negligible correlation after training. Definition 3.2 (Weak learnability). Let {µd}d2N be a family of distributions µd 2 P(Xd), and let {fd}d2N be a family of functions fd 2 L2(µd). Finally, let {f̃d}d2N be a family of estimators, where f̃d is a random function in L2(µd). We say that {fd, µd}d2N is “weak-learned” by the family of estimators {f̃d}d2N if there are constants d0, C > 0 such that for all d > d0,
Pf̃d [kfd f̃dk 2 L2(µd) kfdk 2 L2(µd) d C ] 9/10. (2)
The constant 9/10 in the definition is arbitrary. In words, weak-learning measures whether the family of estimators $\{\tilde f_d\}$ has a non-negligible edge over simply estimating with the identically zero functions $\tilde f_d \equiv 0$. We study weak-learnability by GD-trained FC networks.
Definition 3.3. We say that $\{f_d, \mu_d\}_{d\in\mathbb{N}}$ is efficiently weak-learnable by GD-trained FC networks if there are FC networks and initializations $\{f_{NN,d}, \mu_{\theta,d}\}$, and hyperparameters $\{\eta_d, k_d, R_d, \tau_d\}$ such that for some constant $c > 0$,
• Hyperparameters are polynomial size: $0 \le \eta_d, k_d, R_d, 1/\tau_d \le O(d^c)$;
• $\{\tilde f_d\}$ weak-learns $\{f_d, \mu_d\}$ in the sense of Definition 3.2, where $\tilde f_d = f_{NN}(\cdot;\theta_d)$ for weights $\theta_d$ that are GD-trained on $D(f_d, \mu_d)$ for $k_d$ steps with step size $\eta_d$, clipping radius $R_d$, and noise $\tau_d$, starting from initialization $\mu_{\theta,d}$.
If $\mu_{\theta,d}$ is i.i.d. copies of a symmetric distribution, we say that the FC networks are symmetrically-initialized, and Gaussian-initialized if $\mu_{\theta,d}$ is i.i.d. copies of a Gaussian distribution.
3.1.1 Functions on hypercube, FC networks with i.i.d. symmetric initialization
Let us first consider functions on the Boolean hypercube $f : \mathcal{H}_d \to \mathbb{R}$. These can be uniquely written as a multilinear polynomial
$$f(x) = \sum_{S \subseteq [d]} \hat f(S) \prod_{i \in S} x_i,$$
where $\hat f(S)$ are the Fourier coefficients of $f$ [O'D14]. We characterize weak learnability of functions on the hypercube in terms of their Fourier coefficients. The full proof is deferred to Appendix B.1.
Theorem 3.4. Let $\{f_d\}_{d\in\mathbb{N}}$ be a family of functions $f_d : \mathcal{H}_d \to \mathbb{R}$ with $\|f_d\|_{L^2(\mathcal{H}_d)} \le 1$. Then $\{f_d, \mathcal{H}_d\}$ is efficiently weak-learnable by GD-trained symmetrically-initialized FC networks if and only if there is a constant $C > 0$ such that for each $d \in \mathbb{N}$ there is $S_d \subseteq [d]$ with $|S_d| \le C$ or $|S_d| \ge d - C$, and $|\hat f_d(S_d)| \ge \Omega(d^{-C})$.
The algorithmic result can be achieved by two-layer FC networks, and relies on a random-features analysis where each network weight is initialized to $0$ with probability $1 - p$, and to $+1$ or $-1$ with equal probability $p/2$.¹⁰ Therefore, for weak learning on the hypercube, two-layer networks are as good as networks of any depth. For the converse impossibility result, we apply Theorem 3.1, recalling that GD is $G_{\text{sign,perm}}$-equivariant by Proposition 2.5, and noting that the $G_{\text{sign,perm}}$-alignment is:
Lemma 3.5. Let $f : \mathcal{H}_d \to \mathbb{R}$. Then $\mathcal{C}((f, \mathcal{H}_d); G_{\text{sign,perm}}) = \max_{k \in [d]} \binom{d}{k}^{-1} \sum_{S \subseteq [d],\, |S| = k} \hat f(S)^2$.
Proof. In the following, let $s \sim \mathcal{H}_d$ and $\pi \sim G_{\text{perm}}$, so that $g = (s, \pi) \sim G_{\text{sign,perm}}$. Also let $x, x' \sim \mathcal{H}_d$ be independent. For any $h : \mathcal{H}_d \to \mathbb{R}$, by (a) tensorizing, (b) expanding $f$ in the Fourier basis, (c) the orthogonality relation $\mathbb{E}_s[\chi_S(s)\chi_{S'}(s)] = \delta_{S,S'}$, and (d) tensorizing,
$$\begin{aligned}
\mathbb{E}_g\big[\mathbb{E}_x[f(g(x))h(x)]^2\big]
&= \mathbb{E}_{\pi,s}\big[\mathbb{E}_x[f(s \odot \pi(x))h(x)]^2\big] \\
&\overset{(a)}{=} \mathbb{E}_{\pi,s,x,x'}\big[f(s \odot \pi(x))\, f(s \odot \pi(x'))\, h(x)\, h(x')\big] \\
&\overset{(b)}{=} \mathbb{E}_{x,x',\pi}\Big[\sum_{S,S' \subseteq [d]} \hat f(S)\hat f(S')\, h(x) h(x')\, \chi_S(\pi(x)) \chi_{S'}(\pi(x'))\, \mathbb{E}_s[\chi_S(s)\chi_{S'}(s)]\Big] \\
&\overset{(c)}{=} \mathbb{E}_{x,x',\pi}\Big[\sum_{S \subseteq [d]} \hat f(S)^2\, h(x) h(x')\, \chi_S(\pi(x)) \chi_S(\pi(x'))\Big] \\
&\overset{(d)}{=} \mathbb{E}_\pi\Big[\sum_{S \subseteq [d]} \hat f(S)^2\, \mathbb{E}_x[h(x)\chi_S(\pi(x))]^2\Big] \\
&= \sum_{S \subseteq [d]} \hat f(S)^2\, \mathbb{E}_\pi\big[\hat h(\pi^{-1}(S))^2\big] \\
&= \sum_{S \subseteq [d]} \hat f(S)^2 \binom{d}{|S|}^{-1} \sum_{S' : |S'| = |S|} \hat h(S')^2.
\end{aligned}$$
And since $\sum_{S' : |S'| = |S|} \hat h(S')^2 \le \|h\|^2_{L^2(\mathcal{H}_d)}$, the supremum over $h$ such that $\|h\|_{L^2(\mathcal{H}_d)} = 1$ is achieved by taking $h(x) = \chi_S(x)$ for some $S$.
¹⁰ Surprisingly, this means that the full parity function $f_*(x) = \prod_{i=1}^{d} x_i$ can be efficiently learned with such initializations. See Appendix B.
So if the Fourier coefficients of $f$ are negligible for all $S$ such that $\min(|S|, d - |S|) \le O(1)$, then the $G_{\text{sign,perm}}$-alignment of $f$ is negligible. By Theorem 3.1, this means $f$ cannot be learned efficiently. In Appendix B.1.2 we give a concrete example of a hard function that was not previously known.
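As an illustration of Lemma 3.5 (our own sketch, not the paper's code), one can enumerate the Fourier coefficients of a small Boolean function by brute force and evaluate the closed-form alignment. Functions whose Fourier mass sits only on mid-size sets have a much smaller alignment than functions with low-degree mass, which is exactly what drives the lower bound.

```python
import numpy as np
from itertools import combinations
from math import comb

d = 10
# All 2^d points of the hypercube: bit j of index i determines coordinate j.
X = np.array([[1.0 if (i >> j) & 1 else -1.0 for j in range(d)] for i in range(2 ** d)])

def fourier_mass_by_degree(f_vals):
    """mass[k] = sum over |S| = k of fhat(S)^2, with fhat(S) = E_x[f(x) chi_S(x)]."""
    mass = np.zeros(d + 1)
    for k in range(d + 1):
        for S in combinations(range(d), k):
            chi_S = X[:, list(S)].prod(axis=1)   # empty product gives the constant 1
            mass[k] += np.mean(f_vals * chi_S) ** 2
    return mass

def sign_perm_alignment(f_vals):
    mass = fourier_mass_by_degree(f_vals)
    # Evaluate the Lemma 3.5 formula (including k = 0 for completeness).
    return max(mass[k] / comb(d, k) for k in range(d + 1))

f_low = X[:, 0] * X[:, 1]                        # degree-2 parity: alignment 1/C(10,2)
f_mid = X[:, : d // 2].prod(axis=1)              # degree-5 parity: alignment 1/C(10,5)
print(sign_perm_alignment(f_low), sign_perm_alignment(f_mid))
```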
3.1.2 Functions on sphere, FC networks with i.i.d. Gaussian initialization
We now study learning a target function on the unit sphere, $f \in L^2(S^{d-1})$, where we take the standard Lebesgue measure on $S^{d-1}$. A key fact in harmonic analysis is that $L^2(S^{d-1})$ can be written as the direct sum of subspaces spanned by spherical harmonics of each degree (see, e.g., [Hoc12]):
$$L^2(S^{d-1}) = \bigoplus_{l=0}^{\infty} V_{d,l},$$
where $V_{d,l} \subseteq L^2(S^{d-1})$ is the space of degree-$l$ spherical harmonics, which is of dimension
$$\dim(V_{d,l}) = \frac{2l + d - 2}{l} \binom{l + d - 3}{l - 1}.$$
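For intuition about why only the low-degree part of a function on the sphere can be weak-learned, note how quickly $\dim(V_{d,l})$ grows with $l$; the alignment in Lemma 3.7 below divides by this dimension. A small sketch of the dimension formula (our own, for illustration):

```python
from math import comb

def dim_spherical_harmonics(d, l):
    """Dimension of the space of degree-l spherical harmonics on S^{d-1}."""
    if l == 0:
        return 1
    return (2 * l + d - 2) * comb(l + d - 3, l - 1) // l

d = 100
print([dim_spherical_harmonics(d, l) for l in range(5)])  # grows roughly like d^l / l!
```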
Let $\Pi_{V_{d,l}} : L^2(S^{d-1}) \to V_{d,l}$ be the projection operator to the space of degree-$l$ spherical harmonics. In Appendix B.2, we prove this characterization of weak-learnability for functions on the sphere:
Theorem 3.6. Let $\{f_d\}_{d\in\mathbb{N}}$ be a family of functions $f_d : S^{d-1} \to \mathbb{R}$ with $\|f_d\|_{L^2(S^{d-1})} \le 1$. Then $\{f_d, S^{d-1}\}$ is efficiently weak-learnable by GD-trained Gaussian-initialized FC networks if and only if there is a constant $C > 0$ such that $\sum_{l=0}^{C} \|\Pi_{V_{d,l}} f_d\|^2 \ge d^{-C}$.
The algorithmic result can again be achieved by two-layer FC networks, and is a consequence of the analysis of the random feature kernel in [GMMM21], which shows that the projection of $f_d$ onto the low-degree spherical harmonics can be efficiently learned. For the impossibility result, we apply Theorem 3.1, noting that GD is $G_{\text{rot}}$-equivariant by Proposition 2.5, and the $G_{\text{rot}}$-alignment is:
Lemma 3.7. Let $f \in L^2(S^{d-1})$. Then $\mathcal{C}((f, S^{d-1}); G_{\text{rot}}) = \max_{l \in \mathbb{Z}_{\ge 0}} \|\Pi_{V_{d,l}} f\|^2 / \dim(V_{d,l})$.
Proof. The $G_{\text{rot}}$-alignment is computed using the representation theory of $G_{\text{rot}}$, specifically the Schur orthogonality theorem (see, e.g., [Ser77, Kna96]). For any $l$, the subspace $V_{d,l}$ is invariant to the action of $G_{\text{rot}}$, meaning that we may define the representation $\rho_l$ of $G_{\text{rot}}$, which for any $g \in G_{\text{rot}}$, $f \in V_{d,l}$ is given by $\rho_l(g) : V_{d,l} \to V_{d,l}$ and $\rho_l(g) f = f \circ g^{-1}$. Furthermore, $\rho_l$ is a unitary, irreducible representation, and $\rho_l$ is not equivalent to $\rho_{l'}$ for any $l \ne l'$ (see, e.g., [Sta90, Theorem 1]). Therefore, by the Schur orthogonality relations [Kna96, Corollary 4.10], for any $v_1, w_1 \in V_{d,l_1}$ and $v_2, w_2 \in V_{d,l_2}$, we have
$$\mathbb{E}_{g \sim G_{\text{rot}}}\big[\langle \rho_{l_1}(g) v_1, w_1\rangle_{L^2(S^{d-1})} \langle \rho_{l_2}(g) v_2, w_2\rangle_{L^2(S^{d-1})}\big] = \delta_{l_1 l_2}\, \langle v_1, v_2\rangle_{L^2(S^{d-1})} \langle w_1, w_2\rangle_{L^2(S^{d-1})} / \dim(V_{d,l_1}). \qquad (3)$$
Let $g \sim G_{\text{rot}}$, drawn from the Haar probability measure. For any $h \in L^2(S^{d-1})$ such that $\|h\|^2_{L^2(S^{d-1})} = 1$, by (a) the decomposition of $L^2(S^{d-1})$ into subspaces of spherical harmonics, (b) the $G_{\text{rot}}$-invariance of each subspace $V_{d,l}$, and (c) the Schur orthogonality relations in (3),
$$\begin{aligned}
\mathbb{E}_g\big[\langle f \circ g, h\rangle^2_{L^2(S^{d-1})}\big]
&\overset{(a)}{=} \sum_{l_1, l_2 = 0}^{\infty} \mathbb{E}_g\big[\langle \Pi_{V_{d,l_1}}(f \circ g), \Pi_{V_{d,l_1}} h\rangle_{L^2(S^{d-1})} \langle \Pi_{V_{d,l_2}}(f \circ g), \Pi_{V_{d,l_2}} h\rangle_{L^2(S^{d-1})}\big] \\
&\overset{(b)}{=} \sum_{l_1, l_2 = 0}^{\infty} \mathbb{E}_g\big[\langle (\Pi_{V_{d,l_1}} f) \circ g, \Pi_{V_{d,l_1}} h\rangle_{L^2(S^{d-1})} \langle (\Pi_{V_{d,l_2}} f) \circ g, \Pi_{V_{d,l_2}} h\rangle_{L^2(S^{d-1})}\big] \\
&\overset{(c)}{=} \sum_{l=0}^{\infty} \frac{1}{\dim(V_{d,l})} \|\Pi_{V_{d,l}} f\|^2_{L^2(S^{d-1})} \|\Pi_{V_{d,l}} h\|^2_{L^2(S^{d-1})} \\
&\le \Big(\sum_{l=0}^{\infty} \|\Pi_{V_{d,l}} h\|^2_{L^2(S^{d-1})}\Big) \max_{l \in \mathbb{Z}_{\ge 0}} \frac{1}{\dim(V_{d,l})} \|\Pi_{V_{d,l}} f\|^2_{L^2(S^{d-1})} \\
&= \max_{l \in \mathbb{Z}_{\ge 0}} \frac{1}{\dim(V_{d,l})} \|\Pi_{V_{d,l}} f\|^2_{L^2(S^{d-1})}.
\end{aligned}$$
Let $l_*$ be the optimal value of $l$ in the last line, which is known to exist by the fact that $\|\Pi_{V_{d,l}} f\|^2 \le \|f\|^2$ and $\dim(V_{d,l}) \to \infty$ as $l \to \infty$. The inequality is achieved by $h = \Pi_{V_{d,l_*}} f / \|\Pi_{V_{d,l_*}} f\|$.
This implies that the Grot-alignment of f is negligible if and only if its projection to the low-order spherical harmonics is negligible. By Theorem 3.1, this implies the necessity result of Theorem 3.6.
3.2 Application: Extending the merged-staircase property necessity result
In our second application, we study the setting of learning a sparse function on the binary hypercube (a.k.a. a junta) that depends on only $P \ll d$ coordinates of the input $x$, i.e.,
$$f_*(x) = h_*(x_1, \dots, x_P),$$
where $h_* : \mathcal{H}_P \to \mathbb{R}$. The regime of interest to us is when $h_*$ is fixed and $d \to \infty$, representing a hidden signal in a high-dimensional dataset. This setting was studied by [ABM22], who identified the "merged-staircase property" (MSP) as an extension of [ABB+21]. We generalize the MSP below.
Definition 3.8 (l-MSP). For $l \in \mathbb{Z}_+$ and $h_* : \mathcal{H}_P \to \mathbb{R}$, we say that $h_*$ satisfies the merged staircase property with leap $l$ (i.e., $l$-MSP) if its set of nonzero Fourier coefficients $\mathcal{S} = \{S : \hat h_*(S) \ne 0\}$ can be ordered as $\mathcal{S} = \{S_1, \dots, S_m\}$ such that for all $i \in [m]$, $|S_i \setminus \cup_{j<i} S_j| \le l$.
For example, $h_*(x) = x_1 + x_1 x_2 + x_1 x_2 x_3$ satisfies 1-MSP; $h_*(x) = x_1 x_2 + x_1 x_2 x_3$ satisfies 2-MSP, but not 1-MSP because of the leap required to learn $x_1 x_2$; similarly $h_*(x) = x_1 x_2 x_3 + x_4$ satisfies 3-MSP but not 2-MSP. If $h_*$ satisfies $l$-MSP for some small $l$, then the function $f_*$ can be learned greedily in an efficient manner, by iteratively discovering the coordinates on which it depends. In [ABM22] it was proved that the 1-MSP property nearly characterized which sparse functions could be $\epsilon$-learned in $O_{\epsilon,h_*}(d)$ samples by one-pass SGD training in the mean-field regime.
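The leap structure in Definition 3.8 is easy to check mechanically for a small function given its nonzero Fourier support. The greedy check below is our own sketch: it returns the smallest leap $l$ for which the support satisfies $l$-MSP (picking, at each step, the set that introduces the fewest new coordinates is optimal for this quantity), and it reproduces the three examples above.

```python
def minimal_leap(support):
    """Smallest l such that the family of nonzero Fourier sets satisfies l-MSP."""
    remaining = [frozenset(S) for S in support]
    covered, leap = set(), 0
    while remaining:
        # Greedily pick the set introducing the fewest new coordinates.
        S = min(remaining, key=lambda T: len(T - covered))
        leap = max(leap, len(S - covered))
        covered |= S
        remaining.remove(S)
    return leap

print(minimal_leap([{1}, {1, 2}, {1, 2, 3}]))   # 1:  x1 + x1 x2 + x1 x2 x3
print(minimal_leap([{1, 2}, {1, 2, 3}]))        # 2:  x1 x2 + x1 x2 x3
print(minimal_leap([{1, 2, 3}, {4}]))           # 3:  x1 x2 x3 + x4
```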
We prove the MSP necessity result for GD training. On the one hand, our necessity result is for a different training algorithm, GD, which injects noise during training. On the other, our result is much more general since it applies whenever GD is permutation-equivariant, which includes training of FC networks and ResNets of any depth (whereas the necessity result of [ABM22] applies only to two-layer architectures in the mean-field regime). We also generalize the result to any leap $l$.
Theorem 3.9 (l-MSP necessity). Let $f_{NN}(\cdot;\theta) : \mathcal{H}_d \to \mathbb{R}$ be an architecture and $\mu_\theta \in \mathcal{P}(\mathbb{R}^p)$ be an initialization such that GD is $G_{\text{perm}}$-equivariant. Let $\theta^k$ be the random weights after $k$ steps of GD training with noise parameter $\tau > 0$, step size $\eta$, and clipping radius $R$ on the distribution $D = D(f_*, \mathcal{H}_d)$. Suppose that $f_*(x) = h_*(z)$ where $h_* : \mathcal{H}_P \to \mathbb{R}$ does not satisfy $l$-MSP for some $l \in \mathbb{Z}_+$. Then there are constants $C, \epsilon_0 > 0$ depending on $h_*$ such that
$$\mathbb{P}_{\theta^k}\big[\ell_D(\theta^k) \le \epsilon_0\big] \le \frac{C \eta R}{2\tau} \sqrt{\frac{k}{d^{l+1}}} + \frac{C}{d^{l+1}}.$$
The interpretation is that if $h_*$ does not satisfy $l$-MSP, then to learn $f_*$ to better than $\epsilon_0$ error with constant probability, we need at least $\Omega_{h_*,\epsilon}(d^{l+1})$ steps of (GD) on a network with step size $\eta = O_{h_*,\epsilon}(1)$, clipping radius $R = O_{h_*,\epsilon}(1)$, and noise level $\tau = \Omega_{h_*,\epsilon}(1)$. The proof is deferred to Appendix C. It proceeds by first isolating the "easily-reachable" coordinates $T \subseteq [P]$ and subtracting their contribution from $f_*$. We then bound the G-alignment of the resulting function, where $G$ is the permutation group on $[d] \setminus T$.
4 Hardness for learning with SGD
In this section, for $\sigma > 0$, we let $D(f, \mu_{\mathcal{X}}, \sigma) \in \mathcal{P}(\mathcal{X} \times \mathbb{R})$ denote the distribution of $(x, f(x) + \xi)$ where $x \sim \mu_{\mathcal{X}}$ and $\xi \sim \mathcal{N}(0, \sigma^2)$ is independent noise.
We show that the equivariance of SGD on certain architectures implies that the function $f_{\mathrm{mod}8} : \mathcal{H}_d \to \{0, \dots, 7\}$ given by
$$f_{\mathrm{mod}8}(x) \equiv \sum_{i} x_i \pmod 8 \qquad (4)$$
is hard for SGD-trained, i.i.d. symmetrically-initialized FC networks. Our hardness result relies on a cryptographic assumption to prove superpolynomial lower bounds for SGD learning. For any $S \subseteq [d]$, let $\chi_S : \mathcal{H}_d \to \{+1,-1\}$ be the parity function $\chi_S(x) = \prod_{i \in S} x_i$.
Definition 4.1. The learning parities with Gaussian noise, $(d, n, \sigma)$-LPGN, problem is parametrized by $d, n \in \mathbb{Z}_{>0}$ and $\sigma \in \mathbb{R}_{>0}$. An instance $(S, q, (x_i, y_i)_{i\in[n]})$ consists of (i) an unknown subset $S \subseteq [d]$ of size $|S| = \lfloor d/2 \rfloor$, and (ii) a known query vector $q \sim \mathcal{H}_d$, and i.i.d. samples $(x_i, y_i)_{i\in[n]} \sim D(\chi_S, \mathcal{H}_d, \sigma)$. The task is to return $\chi_S(q) \in \{+1, -1\}$.¹¹
Our cryptographic assumption is that poly(d)-size circuits cannot succeed on LPGN.
Definition 4.2. Let $\sigma > 0$. We say $\sigma$-LPGN is poly(d)-time solvable if there is a sequence of sample sizes $\{n_d\}_{d\in\mathbb{N}}$ and circuits $\{A_d\}_{d\in\mathbb{N}}$ such that $n_d, \mathrm{size}(A_d) \le \mathrm{poly}(d)$, and $A_d$ solves $(d, n_d, \sigma)$-LPGN with success probability at least 9/10, when inputs are rounded to poly(d) bits.
Assumption 4.3. Fix $\sigma$. The $\sigma$-LPGN-hardness assumption is: $\sigma$-LPGN is not poly(d)-time solvable.
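To make the problem concrete, here is a small sketch (ours, with arbitrary parameter choices) of sampling a $(d, n, \sigma)$-LPGN instance as in Definition 4.1: a hidden parity set $S$ of size $\lfloor d/2 \rfloor$, a query point $q$, and $n$ Gaussian-noisy parity samples.

```python
import numpy as np

def sample_lpgn(d, n, sigma, rng):
    S = rng.choice(d, size=d // 2, replace=False)              # hidden subset, |S| = floor(d/2)
    q = rng.choice([-1, 1], size=d)                            # known query vector
    X = rng.choice([-1, 1], size=(n, d))
    y = X[:, S].prod(axis=1) + rng.normal(0.0, sigma, size=n)  # chi_S(x) + Gaussian noise
    return S, q, (X, y)                                        # task: output chi_S(q)

S, q, (X, y) = sample_lpgn(d=30, n=1000, sigma=0.5, rng=np.random.default_rng(4))
```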
The LPGN problem is simply the standard Learning Parities with Noise problem (LPN) [BKW03], except with Gaussian noise instead of binary classification noise, and we are also promised that $|S| = \lfloor d/2 \rfloor$. In Appendix D.3, we derive Assumption 4.3 from the standard hardness of LPN. We now state our SGD hardness result.
Theorem 4.4. Let $\{f_{NN,d}, \mu_{\theta,d}\}_{d\in\mathbb{N}}$ be a family of networks and initializations satisfying Assumption 2.4 (fully-connected) with i.i.d. symmetric initialization. Let $\sigma > 0$, and let $\{n_d\}$ be sample sizes such that $(f_{NN,d}, \mu_{\theta,d})$-SGD training on $n_d$ samples from $D(f_{\mathrm{mod}8}, \mathcal{H}_d, \sigma)$ rounded to poly(d) bits yields parameters $\theta_d$ with
$$\mathbb{E}_{\theta_d}\big[\|f_{\mathrm{mod}8} - f_{NN}(\cdot;\theta_d)\|^2\big] \le 0.0001.$$
Then, under $(\sigma/2)$-LPGN hardness, $(f_{NN,d}, \mu_{\theta,d})$-SGD on $n_d$ samples cannot run in poly(d) time.
In order to prove Theorem 4.4, we use the sign-flip equivariance of gradient descent guaranteed by the symmetry in the initialization. A sign-flip equivariant network that learns $f_{\mathrm{mod}8}(x)$ from $\sigma$-noisy samples is capable of solving the harder problem of learning $f_{\mathrm{mod}8}(x \odot s)$ from $\sigma$-noisy samples, where $s \in \mathcal{H}_d$ is an unknown sign-flip vector. However, through an average-case reduction we show that this problem is $(\sigma/2)$-LPGN-hard. Therefore the theorem follows by contradiction.
5 Discussion
The general GD lower bound in Theorem 3.1 and the approach for basing hardness of SGD training on cryptographic assumptions in Theorem 4.4 could be further developed to other settings.
There are limitations of the results to address in future work. First, the GD lower bound requires adding noise to the gradients, which can hinder training. Second, real-world data distributions are typically not invariant to a group of transformations, so the results obtained by this work may not apply. It is open to develop results for distributions that are approximately invariant.
Finally, it is open whether computational lower bounds for SGD/GD training can be shown beyond those implied by equivariance. For example, consider the function $f : \mathcal{H}_d \to \{+1,-1\}$ that computes the "full parity", i.e., the parity of all of the inputs, $f(x) = \prod_{i=1}^{d} x_i$. Past work has empirically shown that SGD on FC networks with Gaussian initialization [SSS17, AS20, NY21] fails to learn this function. Proving this would represent a significant advance, since there is no obvious equivariance that implies that the full parity is hard to learn; in fact we have shown weak-learnability with symmetric Rad(1/2) initialization, in which case training is $G_{\text{sign,perm}}$-equivariant.
Acknowledgements
We thank Jason Altschuler, Guy Bresler, Elisabetta Cornacchia, Sonia Hashim, Jan Hazla, Hannah Lawrence, Theodor Misiakiewicz, Dheeraj Nagaraj, and Philippe Rigollet for stimulating discussions. We thank the Simons Foundation and the NSF for supporting us through the Collaboration on the Theoretical Foundations of Deep Learning (deepfoundations.ai). This work was done in part while E.B. was visiting the Simons Institute for the Theory of Computing and the Bernoulli Center at EPFL, and was generously supported by Apple with an AI/ML fellowship.
11More formally, one would express this as a probabilistic promise problem [Ale03]. | 1. What is the focus of the paper regarding Gradient Descent and Stochastic Gradient Descent training algorithms?
2. What are the strengths of the proposed approach, particularly in introducing new concepts and providing novel insights?
3. What are the weaknesses of the paper regarding the detail provided in the appendix and minor notational issues?
4. How does the reviewer assess the clarity and quality of writing in the paper?
5. What is the limitation of the paper regarding the assumption of G-invariance in real-world datasets? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies lower bounds of Gradient Descent (with noise) (GD) and Stochastic Gradient Descent (SGD) training algorithms that are also G-equivariant. The paper introduces the concept of G-orbit alignment, which is used towards proving the first main result, a computational lower bound on G-equivariant GD, which appears novel. Two applications of this lower bound are provided in terms of weak learnability of functions on the hypercube, hypersphere, and a necessity result on the merged staircase property. As a separate, but interesting in its own right contribution, the authors provide a first-of-its-kind hardness result for SGD using fully-connected networks on the Learning Parities with Gaussian Noise problem.
Strengths And Weaknesses
Disclaimer I did not check the veracity of the proofs beyond Appendix A.
This paper enjoys several strengths. In my opinion, the paper studies an extremely interesting problem and makes progress by providing several novel insights. I especially appreciated the idea of G-orbit alignment and its use in characterizing weak learnability. This is both intuitively clear and a great application. I also equally found the main results of the lower bounds on G-equivariant GD insightful and a significant step over previous work. The computational hardness results for SGD seemed interesting but this aspect was a bit beyond my area of expertise. The authors could do a bit more towards making this section more approachable given that the paper does not completely exhaust the 9-page limit. My main criticism of this work is that a lot of the interesting detail in the Appendix could have been brought forward to the main text. For example, the impossibility results for both applications that actually utilize Theorem 3.1 could be better served in the main paper. Overall, I found this paper to be a thoroughly enjoyable read as overall the clarity and high quality of writing made difficult ideas mostly approachable but at the same time did not water down their perceived impact.
Minor
line 147 compact groups, not any groups
minor notational issues. In Definition 2.4, $\psi$ and $\tilde{\theta}$ are not defined
Definition 3.2 seems a bit contrived. Can you provide a bit more intuition for the constant 9/10?
Questions
One of the central assumptions in much of the technical results is that $\mu_{\mathcal{X}}$ is G-invariant. Oftentimes real-world datasets, due to noise or other randomness in the data collection process, exhibit soft equivariance rather than hard equivariance. Can you provide commentary on how this specifically affects the weak learnability results, i.e., does depth now matter beyond 2-layer networks?
Limitations
N/A |
NIPS | Title
On the non-universality of deep learning: quantifying the cost of symmetry
Abstract
We prove limitations on what neural networks trained by noisy gradient descent (GD) can efficiently learn. Our results apply whenever GD training is equivariant, which holds for many standard architectures and initializations. As applications, (i) we characterize the functions that fully-connected networks can weak-learn on the binary hypercube and unit sphere, demonstrating that depth-2 is as powerful as any other depth for this task; (ii) we extend the merged-staircase necessity result for learning with latent low-dimensional structure [ABM22] to beyond the meanfield regime. Under cryptographic assumptions, we also show hardness results for learning with fully-connected networks trained by stochastic gradient descent (SGD).
1 Introduction
Over the last decade, deep learning has made advances in areas as diverse as image classification [KSH12], language translation [BCB14], classical board games [SHS+18], and programming [LCC+22]. Neural networks trained with gradient-based optimizers have surpassed classical methods for these tasks, raising the question: can we hope for deep learning methods to eventually replace all other learning algorithms? In other words, is deep learning a universal learning paradigm? Recently, [AS20, AKM+21] proved that in a certain sense the answer is yes: any PAC-learning algorithm [Val84] can be efficiently implemented as a neural network trained by stochastic gradient descent; analogously, any Statistical Query algorithm [Kea98] can be efficiently implemented as a neural network trained by noisy gradient descent.
However, there is a catch: the result of [AS20] relies on a carefully crafted network architecture with memory and computation modules, which is capable of emulating an arbitrary learning algorithm. This is far from the architectures which have been shown to be successful in practice. Neural networks in practice do incorporate domain knowledge, but they have more “regularity” than the architectures of [AS20], in the sense that they do not rely on heterogeneous and carefully assigned initial weights (e.g., convolutional networks and transformers for image recognition and language processing [LB+95, LKF10, VSP+17], graph neural networks for analyzing graph data [GMS05, BZSL13, VCC+17], and networks specialized for particle physics [BAO+20]). We therefore refine our question:
Is deep learning with “regular” architectures and initializations a universal learning paradigm? If not, can we quantify its limitations when architectures and data are not well aligned?
We would like an answer applicable to a wide range of architectures. In order to formalize the problem and develop a general theory, we take an approach similar to [Ng04, Sha18, LZA21] of understanding deep learning through the equivariance group G (a.k.a., symmetry group) of the learning algorithm.
Definition 1.1 (G-equivariant algorithm). A randomized algorithm $A$ that takes in a data distribution $D \in \mathcal{P}(\mathcal{X} \times \mathcal{Y})$¹ and outputs a function $A(D) : \mathcal{X} \to \mathcal{Y}$ is said to be G-equivariant if for all $g \in G$
$$A(D) \stackrel{d}{=} A(g(D)) \circ g. \qquad \text{(G-equivariance)}$$
Here $g$ is a group element that acts on the data space $\mathcal{X}$, and so is viewed as a function $g : \mathcal{X} \to \mathcal{X}$, and $g(D)$ is the distribution of $(g(x), y)$, where $(x, y) \sim D$.
In the case that the algorithm A is deep learning on the distribution D, the equivariance group depends on the optimizer, the architecture, and the network initialization [Ng04, LZA21].2
Examples of G-equivariant algorithms in deep learning In many deep learning settings, the equivariance group of the learning algorithm is large. Thus, in this paper, we call an algorithm “regular” if it has a large equivariance group. For example, SGD training of fully-connected networks with Gaussian initialization is orthogonally-equivariant [Ng04]; and is permutation-equivariant if we add skip connections [HZRS16]. SGD training of convolutional networks is translationally-equivariant if circular convolutions are used [SNPP19], and SGD training of i.i.d.-initialized transformers without positional embeddings is equivariant to permutations of tokens [VSP+17]. Furthermore, [LZA21, Theorem C.1] provides general conditions under which a deep learning algorithm is equivariant. See also the preliminaries in Section 2.
Summary of this work Based off of G-equivariance, we prove limitations on what “regular” neural networks trained by noisy gradient descent (GD) or stochastic gradient descent (SGD) can efficiently learn, implying a separation with the initializations and architectures considered in [AS20]. For GD, we prove a master theorem that enables two novel applications: (a) characterizing which functions can be efficiently weak-learned by fully-connected (FC) networks on both the hypercube and the unit sphere; and (b) a necessity result for which functions on the hypercube with latent low-dimensional structure can be efficiently learned. See Sections 1.2 and 1.3 for more details.
1.1 Related work
Most prior work on computational lower bounds for deep learning has focused on proving limitations of kernel methods (a.k.a. linear methods). Starting with [Bar93] and more recently with [WLLM19, AL19, KMS20, AL20, Hsu, HSSV21, ABM22] it is known that there are problems on which kernel methods provably fail. These results apply to training neural networks in the Neural Tangent Kernel (NTK) regime [JGH18], but do not apply to more general nonlinear training. Furthermore, for specific architectures such as FC architectures [GMMM21, Mis22] and convolutional architectures [MM21], the kernel and random features models at initialization are well understood, yielding stronger lower bounds for training in the NTK regime.
For nonlinear training, which is the setting of this paper, considerably less is known. In the context of sample complexity, [Ng04] introduced the study of the equivariance group of SGD, and constructed a distribution on d dimensions with a ⌦(d) versus O(1) sample complexity separation for learning with an SGD-trained FC architecture versus an arbitrary algorithm. More recently, [LZA21] built on [Ng04] to show a O(1) versus ⌦(d2) sample-complexity separation between SGD-trained convolutional and FC architectures. In this paper, we also analyze the equivariance group of the training algorithm, but with the goal of proving superpolynomial computational lower bounds.
In the context of computational lower bounds, it is known that networks trained with noisy3 gradient descent (GD) fall under the Statistical Query (SQ) framework [Kea98], which allows showing computational limitations for GD training based on SQ lower bounds. This has been combined in [AS20, SSS17, MS20, ACHM22] with the permutation symmetry of GD-training of i.i.d. FC networks to prove impossibility of efficiently learning high-degree parities and polynomials. In
¹ The set of probability distributions on $\Omega$ is denoted by $\mathcal{P}(\Omega)$. You should think of $D \in \mathcal{P}(\mathcal{X} \times \mathcal{Y})$ as a distribution of pairs $(x, y)$ of covariates and labels.
² Note that the equivariance group of a training algorithm should not be confused with the equivariance group of an architecture in the context of geometric deep learning [BBCV21]. In that context, G-equivariance refers to the property of a neural network architecture $f_{NN}(\cdot;\theta) : \mathcal{X} \to \mathcal{Y}$ that $f_{NN}(g(x);\theta) = g(f_{NN}(x;\theta))$ for all $x \in \mathcal{X}$ and all group elements $g \in G$. In that case, $G$ acts on both the input in $\mathcal{X}$ and the output in $\mathcal{Y}$.
³ Here the noise is used to control the gradients' precision as in [AS20, AKM+21].
our work, we show that these arguments can be viewed in the broader context of more general group symmetries, yielding stronger lower bounds than previously known. For stochastic gradient descent (SGD) training, [ABM22] proves a computational limitation for training of two-layer meanfield networks, but their result applies only when SGD converges to the mean-field limit, and does not apply to more general architectures beyond two-layer networks. Finally, most related to our SGD hardness result is [Sha18], which shows limitations of SGD-trained FC networks under a cryptographic assumption. However, the argument of [Sha18] relies on training being equivariant to linear transformations of the data, and therefore requires that data be whitened or preconditioned. Instead, our result for SGD does not require any preprocessing steps.
There is also recent work showing sample complexity benefits of invariant/equivariant neural network architectures [MMM21, EZ21, Ele21, BVB21, Ele22]. In contrast, we study equivariant training algorithms. These are distinct concepts: a deep learning algorithm can be G-equivariant, while the neural network architecture is neither G-invariant nor G-equivariant. For example, a FC network is not invariant to orthogonal transformations of the input. However, if we initialize it with Gaussian weights and train with SGD, then the learning algorithm is equivariant to orthogonal transformations of the input (see Proposition 2.5 below).
1.2 Contribution 1: Lower bounds for noisy gradient descent (GD)
Consider the supervised learning setup where we train a neural network $f_{NN}(\cdot;\theta) : \mathcal{X} \to \mathbb{R}$ parametrized by $\theta \in \mathbb{R}^p$ to minimize the mean-squared error on a data distribution $D \in \mathcal{P}(\mathcal{X} \times \mathbb{R})$,
$$\ell_D(\theta) = \mathbb{E}_{(x,y)\sim D}\big[(y - f_{NN}(x;\theta))^2\big]. \qquad (1)$$
The noisy Gradient Descent (GD) training algorithm randomly initializes $\theta^0 \sim \mu_\theta$ for some initialization distribution $\mu_\theta \in \mathcal{P}(\mathbb{R}^p)$, and then iteratively updates the parameters with step size $\eta > 0$ in a direction $g_D(\theta^k)$ approximating the population loss gradient, plus Gaussian noise $\xi^k \sim \mathcal{N}(0, \tau^2 I)$,
$$\theta^{k+1} = \theta^k - \eta g_D(\theta^k) + \xi^k. \qquad \text{(GD)}$$
Up to a constant factor, $g_D(\theta)$ is the population loss gradient, except we have clipped the gradients of the network with the projection operator $\Pi_{B(0,R)}$ to lie in the ball $B(0,R) = \{z : \|z\|_2 \le R\} \subset \mathbb{R}^p$,⁴
$$g_D(\theta) = -\mathbb{E}_{(x,y)\sim D}\big[(y - f_{NN}(x;\theta))\,\big(\Pi_{B(0,R)} \nabla_\theta f_{NN}(x;\theta)\big)\big].$$
Clipping the gradients is often used in practice to avoid instability from exploding gradients (see, e.g., [ZHSJ19] and references within). In our context, clipping ensures that the injected noise $\xi^k$ is on the same scale as the gradient $\nabla_\theta f_{NN}$ of the network and so it controls the gradients' precision. Similarly to the works [AS20, AKM+21, ACHM22], we consider noisy gradient descent training to be efficient if the following conditions are met.
Definition 1.2 (Efficiency of GD, informal). GD training is efficient if the clipping radius $R$, step size $\eta$, and inverse noise magnitude $1/\tau$ are all polynomially-bounded in $d$, since then (GD) can be efficiently implemented using noisy minibatch SGD.⁵
We prove that some data distributions cannot be efficiently learned by G-equivariant GD training. For this, we introduce the G-alignment:
Definition 1.3 (G-alignment). Let $G$ be a compact group, let $\mu_{\mathcal{X}} \in \mathcal{P}(\mathcal{X})$ be a distribution over data points, and let $f \in L^2(\mu_{\mathcal{X}})$ be a labeling function. The G-alignment of $(\mu_{\mathcal{X}}, f)$ is
$$\mathcal{C}((\mu_{\mathcal{X}}, f); G) = \sup_{h}\, \mathbb{E}_{g\sim\mu_G}\big[\mathbb{E}_{x\sim\mu_{\mathcal{X}}}[f(g(x))\,h(x)]^2\big],$$
where $\mu_G$ is the Haar measure of $G$ and the supremum is over $h \in L^2(\mu_{\mathcal{X}})$ such that $\|h\|_2 = 1$.
In our applications, we use tools from representation theory (see, e.g., [Kna96]) to evaluate the G-alignment. Using the G-alignment, we can prove a master theorem for lower bounds:
Theorem 1.4 (GD lower bound, informal statement of Theorem 3.1). Let $D_f \in \mathcal{P}(\mathcal{X} \times \mathbb{R})$ be the distribution of $(x, f(x))$ for $x \sim \mu_{\mathcal{X}}$. If $\mu_{\mathcal{X}}$ is G-invariant⁶ and the G-alignment of $(\mu_{\mathcal{X}}, f)$ is small, then $f$ cannot be efficiently learned by a G-equivariant GD algorithm.
⁴ Note that if $f_{NN}$ is an $R$-Lipschitz model, then $g_D(\theta)$ will simply be the population gradient of the loss. ⁵ Efficient implementability by minibatch SGD assumes bounded residual errors. ⁶ Meaning that if $x \sim \mu_{\mathcal{X}}$, then for any $g \in G$, we also have $g(x) \sim \mu_{\mathcal{X}}$.
Proof ideas We first make an observation of [Ng04]: if a G-equivariant algorithm can learn the function f by training on the distribution Df , then, for any group element g 2 G, it can learn f g by training on the distribution Df g. In other words, the algorithm can learn the class of functions F = {f g : g 2 g}, which can potentially be much larger than just the singleton set {f}. We conclude by showing that the class of functions F cannot be efficiently learned by GD training. The intuition is that the G-alignment measures the diversity of the functions in F . If the G-alignment is small, then there is no function h that correlates with most of the functions in F , which can be used to show F is hard to learn by gradient descent.
This type of argument appears in [AS20, ACHM22] in the specific case of Boolean functions and for permutation equivariance; our proof both applies to a more general setting (beyond Boolean functions and permutations) and yields sharper bounds; see Appendix A.3. Our bound can also be interpreted in terms of the Statistical Query framework, as we discuss in Appendix A.4. While Theorem 1.4 is intuitively simple, we demonstrate its power and ease-of-use by deriving two new applications.
Application: Characterization of weak-learnability by fully-connected (FC) networks In our first application, we consider weak-learnability: when can a function be learned non-negligibly better than just outputting the estimate fNN ⌘ 0? Using Theorem 1.4, we characterize which functions over the binary hypercube f : {+1, 1}d ! R and over the sphere f : Sd 1 ! R are efficiently weak-learnable by GD-trained FC networks with i.i.d. symmetric and i.i.d. Gaussian initialization, respectively. The takeaway is that a function f : {+1, 1}d ! R is weak-learnable if and only if it has a nonnegligible Fourier coefficient of order O(1) or d O(1). Similarly, a function f : Sd 1 ! R is weak-learnable if and only if it has nonnegligible projection onto the degree-O(1) spherical harmonics. Perhaps surprisingly, such functions can be efficiently weak-learned by 2-layer fully-connected networks, which shows that adding more depth does not help. This application is presented in Section 3.1.
Application: Evidence for the staircase property In our second application, we consider learning a target function f : {+1, 1}d ! R that only depends on the first P coordinates, f(x) = h(x1, . . . , xP ). Our regime of interest here is when the function hand : {+1, 1}P ! R remains fixed and the dimension d grows, since this models the situation where a latent low-dimensional space determines the labels in a high-dimensional dataset. Recently, [ABM22] studied SGD-training of mean-field two-layer networks, and gave a near-characterization of which functions can be learned to arbitrary accuracy ✏ in Oh,✏(d) samples, in terms of the merged-staircase property (MSP). Using Theorem 1.4, we prove that the MSP is necessary for GD-learnability whenever training is permutation-equivariant (which applies beyond the 2-layer mean-field regime) and we also generalize it beyond leaps of size 1. Details are in Section 3.2.
1.3 Contribution 2: Hardness for stochastic gradient descent (SGD)
The second part of this paper concerns Stochastic Gradient Descent (SGD) training, which randomly initializes the weights $\theta^0 \sim \mu_\theta$, and then iteratively trains the parameters with the following update rule to try to minimize the loss (1):
$$\theta^{k+1} = \theta^k - \eta \nabla_\theta \big(y_{k+1} - f_{NN}(x_{k+1};\theta)\big)^2 \,\Big|_{\theta=\theta^k}, \qquad \text{(SGD)}$$
where $(y_{k+1}, x_{k+1}) \sim D$ is a fresh sample on each iteration, and $\eta > 0$ is the learning rate.⁷
Proving computational lower bounds for SGD is a notoriously difficult problem [AKM+21], exacerbated by the fact that for general architectures SGD can be used to simulate any polynomial-time learning algorithm [AS20]. However, we demonstrate that one can prove hardness results for SGD training based off of cryptographic assumptions when the training algorithm has a large equivariance group. We demonstrate the non-universality of SGD on a standard FC architecture. Theorem 1.5 (Hardness for SGD, informal statement of Theorem 4.4). Under the assumption that the Learning Parities with Noise (LPN) problem8 is hard, FC neural networks with Gaussian initialization
7For brevity, we focus on one-pass SGD with a single fresh sample per iteration. Our results extend to empirical risk minimization (ERM) setting and to mini-batch SGD, see Remark E.1.
8See Section 4 and Appendix D.3 for definitions and discussion on LPN.
trained by SGD cannot learn $f_{\mathrm{mod}8} : \{+1,-1\}^d \to \{0, \dots, 7\}$,
$$f_{\mathrm{mod}8}(x) \equiv \sum_{i=1}^{d} x_i \pmod 8,$$
in polynomial time from noisy samples $(x, f_{\mathrm{mod}8}(x) + \xi)$ where $x \sim \{+1,-1\}^d$ and $\xi \sim \mathcal{N}(0, 1)$.
This result shows a limitation of SGD training based on an average-case reduction from a cryptographic problem. The closest prior result is in [Sha18], which proved hardness results for learning with SGD on FC networks, but required preprocessing the data with a whitening transformation.
Proof idea The FC architecture and Gaussian initialization are necessary: an architecture that outputted fmod8(x) at initialization would trivially achieve zero loss. However, SGD on Gaussianinitialized FC networks is sign-flip equivariant, and this symmetry makes fmod8 hard to learn. If a sign-flip equivariant algorithm can learn the function fmod8(x) from noisy samples, then it can learn the function fmod8(x s) from noisy samples, where s 2 {+1, 1}d is an unknown sign-flip vector, and denotes elementwise product. However, this latter problem is hard under standard cryptographic assumptions. More details in Section 4.
2 Preliminaries
Notation Let $\mathcal{H}_d = \{+1,-1\}^d$ be the binary hypercube, and $S^{d-1} = \{x \in \mathbb{R}^d : \|x\|_2 = 1\}$ be the unit sphere. The law of a random variable $X$ is $\mathcal{L}(X)$. If $S$ is a finite set, then $X \sim S$ stands for $X \sim \mathrm{Unif}[S]$. Also let $x \sim S^{d-1}$ denote $x$ drawn from the uniform Haar measure on $S^{d-1}$. For a set $\Omega$, let $\mathcal{P}(\Omega)$ be the set of distributions on $\Omega$. Let $\odot$ be the elementwise product. For any $\mu_{\mathcal{X}} \in \mathcal{P}(\mathcal{X})$ and group $G$ acting on $\mathcal{X}$, we say $\mu_{\mathcal{X}}$ is G-invariant if $g(x) \stackrel{d}{=} x$ for $x \sim \mu_{\mathcal{X}}$ and any $g \in G$.
2.1 Equivariance of GD and SGD
We define GD and SGD equivariance separately. Definition 2.1. Let AGD be the algorithm that takes in data distribution D 2 P(X ⇥ R), runs (GD) on initialization ✓0 ⇠ µ✓ for k steps, and outputs the function AGD(D) = fNN(·;✓k)
We say “(fNN, µ✓)-GD is G-equivariant” if AGD is G-equivariant in the sense of Definition 1.1. Definition 2.2. Let ASGD be the algorithm that takes in samples (xi, yi)i2[n], runs (SGD) on initialization ✓0 ⇠ µ✓ for n steps, and outputs ASGD((xi, yi)i2[n]) = fNN(·;✓k).
We say “(f_NN, μ_θ)-SGD is G-equivariant” if A_SGD((x_i, y_i)_{i∈[n]}) is equal in distribution to A_SGD((g(x_i), y_i)_{i∈[n]}) ∘ g, for any g ∈ G and any samples (x_i, y_i)_{i∈[n]}.
2.2 Regularity conditions on networks imply equivariances of GD and SGD
We take a data space X ⊆ R^d, and consider the following groups that act on R^d. Definition 2.3. Define the following groups and actions:

• Let G_perm = S_d denote the group of permutations on [d]. An element σ ∈ G_perm acts on x ∈ R^d in the standard way: σ(x) = (x_{σ(1)}, . . . , x_{σ(d)}).

• Let G_sign,perm denote the group of signed permutations, an element g = (s, σ) ∈ G_sign,perm is given by a sign-flip vector s ∈ H_d and a permutation σ ∈ G_perm. It acts on x ∈ R^d by g(x) = s ⊙ σ(x) = (s₁ x_{σ(1)}, . . . , s_d x_{σ(d)}).9

• Let G_rot = SO(d) ⊆ GL(d, R) denote the rotation group. An element g ∈ G_rot is a rotation matrix that acts on x ∈ R^d by matrix multiplication.

9 The group product is g₁g₂ = (s₁, σ₁)(s₂, σ₂) = (s₁ ⊙ σ₁(s₂), σ₁σ₂).
Under mild conditions on the neural network architecture and initialization, GD and SGD training are known to be G_perm-, G_sign,perm-, or G_rot-equivariant [Ng04, LZA21]. Assumption 2.4 (Fully-connected i.i.d. first layer and no skip connections from the input). We can decompose the parameters as θ = (W, ψ), where W ∈ R^{m×d} is the matrix of the first-layer weights and ψ collects the remaining parameters, and there is a function g_NN(·; ψ) : R^m → R such that f_NN(x; θ) = g_NN(Wx; ψ). Furthermore, the initialization distribution is μ_θ = μ_W × μ_ψ, where μ_W = μ_w^{⊗(m×d)} for μ_w ∈ P(R).
Notice that Assumption 2.4 is satisfied by FC networks with i.i.d. initialization. Under assumptions on μ_w, we obtain equivariances of GD and SGD (see Appendix E for proofs.) Proposition 2.5 ([Ng04, LZA21]). Under Assumption 2.4, GD and SGD are G_perm-equivariant. If μ_w is sign-flip symmetric, then GD and SGD are G_sign,perm-equivariant. If μ_w = N(0, σ²) for some σ, then GD and SGD are G_rot-equivariant.
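To make Proposition 2.5 concrete, the following is a minimal NumPy sketch (ours, not from the paper) of the deterministic fact underlying permutation-equivariance: if the inputs are permuted and the first-layer columns of the initialization are permuted accordingly, one SGD step on a two-layer network produces the same function up to that permutation. Since an i.i.d. first-layer initialization is invariant to column permutations, this yields equivariance in distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 6, 8                      # input dimension, hidden width
W = rng.normal(size=(m, d))      # i.i.d. first-layer init
a = rng.normal(size=m)           # second-layer init
perm = rng.permutation(d)        # a group element sigma in G_perm

def forward(W, a, x):
    return a @ np.tanh(W @ x)

def sgd_step(W, a, x, y, lr=0.1):
    # one step of plain SGD on the squared loss (f(x) - y)^2
    h = np.tanh(W @ x)
    resid = forward(W, a, x) - y
    a_new = a - lr * 2 * resid * h
    W_new = W - lr * 2 * resid * np.outer(a * (1 - h**2), x)
    return W_new, a_new

x, y = rng.choice([-1.0, 1.0], size=d), 1.0

# Train on (x, y) from init (W, a); train on (sigma(x), y) from the
# column-permuted init (W[:, perm], a).  The two learned functions agree
# after composing with sigma, which is exactly the equivariance identity.
W1, a1 = sgd_step(W, a, x, y)
W2, a2 = sgd_step(W[:, perm], a, x[perm], y)

x_test = rng.choice([-1.0, 1.0], size=d)
print(np.allclose(forward(W1, a1, x_test), forward(W2, a2, x_test[perm])))  # True
```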
3 Lower bounds for learning with GD
In this section, let D(f, μ_X) ∈ P(X × R) denote the distribution of (x, f(x)) where x ∼ μ_X. We give a master theorem for computational lower bounds for learning with G-equivariant GD. Theorem 3.1 (GD lower bound using G-alignment). Let G be a compact group, and let f_NN(·; θ) : X → R be an architecture and μ_θ ∈ P(R^p) be an initialization such that GD is G-equivariant. Fix any G-invariant distribution μ_X ∈ P(X), any label function f* ∈ L²(μ_X), and any baseline function α ∈ L²(μ_X) satisfying α ∘ g = α for all g ∈ G. Let θ_k be the random weights after k time-steps of GD training with noise parameter τ > 0, step size η > 0, and clipping radius R > 0 on the distribution D = D(f*, μ_X). Then, for any ε > 0,
P_{θ_k}[ ℓ_D(θ_k) ≤ ‖f* − α‖²_{L²(μ_X)} − ε ] ≤ η R √(kC) / (2τ) + C/ε,
where C = C((f* − α, μ_X); G) is the G-alignment of Definition 1.3.
As discussed in Section 1.2, the theorem states that if the G-alignment C is very small, then GD training cannot efficiently improve on the trivial loss from outputting α: either the number of steps k, the gradient precision R/τ, or the step size η have to be very large in order to learn. Appendix A shows a generalization of the theorem for learning a class of functions F = {f₁, . . . , f_m} instead of just a single function f*. This result goes beyond the lower bound of [AS20] even when G is the trivial group with one element: the main improvement is that Theorem 3.1 proves hardness for learning real-valued functions beyond just Boolean-valued functions. We demonstrate the usefulness of the theorem through two new applications in Sections 3.1 and 3.2.
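As a worked instance (our illustration, not an example taken from the paper), take f* to be a single parity χ_S(x) = Π_{i∈S} x_i on the hypercube, with baseline α = 0 and G = G_perm, which by Proposition 2.5 below covers GD on FC networks with i.i.d. initialization. The orbit of χ_S under permutations consists of all parities of degree |S|, and the alignment reduces to a counting bound (it matches the general formula of Lemma 3.5 below):

```latex
% G_perm-alignment of a single parity \chi_S on H_d (illustrative computation)
\mathcal{C}\big((H_d, \chi_S); G_{\mathrm{perm}}\big)
  \;=\; \sup_{\|h\|_{L^2}=1} \mathbb{E}_{\sigma}\!\left[\widehat{h}(\sigma(S))^2\right]
  \;=\; \binom{d}{|S|}^{-1},
\qquad \text{so } |S| = d/2 \;\Rightarrow\; \mathcal{C} = \binom{d}{d/2}^{-1} = 2^{-\Theta(d)} .
```

Plugging this into Theorem 3.1, beating the trivial loss ‖χ_S‖² = 1 by any constant ε requires k, R/τ, or η to be exponentially large in d, recovering the classical hardness of middle-degree parities for permutation-equivariant training.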
3.1 Application: Characterizing weak-learnability by FC networks
In our first application of Theorem 3.1, we consider FC architectures with i.i.d. initialization, and show how to use their training equivariances to characterize what functions they can weak-learn: i.e., for what target functions f⇤ they can efficiently achieve a non-negligible correlation after training. Definition 3.2 (Weak learnability). Let {µd}d2N be a family of distributions µd 2 P(Xd), and let {fd}d2N be a family of functions fd 2 L2(µd). Finally, let {f̃d}d2N be a family of estimators, where f̃d is a random function in L2(µd). We say that {fd, µd}d2N is “weak-learned” by the family of estimators {f̃d}d2N if there are constants d0, C > 0 such that for all d > d0,
P_{f̃_d}[ ‖f_d − f̃_d‖²_{L²(μ_d)} ≤ ‖f_d‖²_{L²(μ_d)} − d^{−C} ] ≥ 9/10.   (2)
The constant 9/10 in the definition is arbitrary. In words, weak-learning measures whether the family of estimators {f̃d} has a non-negligible edge over simply estimating with the identically zero functions f̃d ⌘ 0. We study weak-learnability by GD-trained FC networks. Definition 3.3. We say that {fd, µd}d2N is efficiently weak-learnable by GD-trained FC networks if there are FC networks and initializations {fNN,d, µ✓,d}, and hyperparameters {⌘d, kd, Rd, ⌧d} such that for some constant c > 0,
• Hyperparameters are polynomial size: 0 ≤ η_d, k_d, R_d, 1/τ_d ≤ O(d^c);
• {f̃d} weak-learns {fd, µd} in the sense of Definition 3.2, where f̃d = fNN(·;✓d) for weights ✓d that are GD-trained on D(fd, µd) for kd steps with step size ⌘d, clipping radius Rd, and noise ⌧d, starting from initialization µ✓,d.
If µ✓,d is i.i.d copies of a symmetric distribution, we say that the FC networks are symmetricallyinitialized, and Gaussian-initialized if µ✓,d is i.i.d. copies of a Gaussian distribution.
3.1.1 Functions on hypercube, FC networks with i.i.d. symmetric initialization
Let us first consider functions on the Boolean hypercube f : Hd ! R. These can be uniquely written as a multilinear polynomial
f(x) = Σ_{S⊆[d]} f̂(S) Π_{i∈S} x_i,
where f̂(S) are the Fourier coefficients of f [O'D14]. We characterize weak learnability of functions on the hypercube in terms of their Fourier coefficients. The full proof is deferred to Appendix B.1. Theorem 3.4. Let {f_d}_{d∈N} be a family of functions f_d : H_d → R with ‖f_d‖_{L²(H_d)} ≤ 1. Then {f_d, H_d} is efficiently weak-learnable by GD-trained symmetrically-initialized FC networks if and only if there is a constant C > 0 such that for each d ∈ N there is S_d ⊆ [d] with |S_d| ≤ C or |S_d| ≥ d − C, and |f̂_d(S_d)| ≥ Ω(d^{−C}).
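As an illustration of the criterion in Theorem 3.4 (our own sketch, not code from the paper), the Boolean Fourier coefficients of a small function can be computed by brute force, and one can then look for a non-negligible coefficient supported on a set of size at most C or at least d − C:

```python
import itertools
import numpy as np

d = 10
cube = np.array(list(itertools.product([-1, 1], repeat=d)), dtype=float)

def f(x):
    # example target: a degree-2 term plus a near-full-degree term
    return x[0] * x[1] + np.prod(x[2:])

vals = np.array([f(x) for x in cube])

def fourier_coeff(S):
    # \hat f(S) = E_x[f(x) * prod_{i in S} x_i] under the uniform measure
    chi = np.prod(cube[:, S], axis=1) if S else np.ones(len(cube))
    return np.mean(vals * chi)

# scan low-order and high-order supports (|S| <= C or |S| >= d - C), per Theorem 3.4
C = 2
best = 0.0
for k in list(range(C + 1)) + list(range(d - C, d + 1)):
    for S in itertools.combinations(range(d), k):
        best = max(best, abs(fourier_coeff(list(S))))
print("largest low/high-order coefficient:", best)   # non-negligible => weak-learnable
```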
The algorithmic result can be achieved by two-layer FC networks, and relies on random features analysis where each network weight is initialized to 0 with probability 1 − p, and +1 or −1 with equal probability p/2.10 Therefore, for weak learning on the hypercube, two-layer networks are as good as networks of any depth. For the converse impossibility result, we apply Theorem 3.1, recalling that GD is G_sign,perm-equivariant by Proposition 2.5, and noting that the G_sign,perm-alignment is:
Lemma 3.5. Let f : H_d → R. Then C((f, H_d); G_sign,perm) = max_{k∈[d]} \binom{d}{k}^{−1} Σ_{S⊆[d], |S|=k} f̂(S)².
Proof. In the following, let s ∼ H_d and σ ∼ G_perm, so that g = (s, σ) ∼ G_sign,perm. Also let x, x′ ∼ H_d be independent. For any h : H_d → R, by (a) tensorizing, (b) expanding f in the Fourier basis, (c) the orthogonality relation E_s[χ_S(s) χ_{S′}(s)] = δ_{S,S′}, and (d) tensorizing,

E_g[ E_x[f(g(x)) h(x)]² ] = E_{σ,s}[ E_x[f(s ⊙ σ(x)) h(x)]² ]
(a) = E_{σ,s,x,x′}[ f(s ⊙ σ(x)) f(s ⊙ σ(x′)) h(x) h(x′) ]
(b) = E_{x,x′,σ}[ Σ_{S,S′⊆[d]} f̂(S) f̂(S′) h(x) h(x′) χ_S(σ(x)) χ_{S′}(σ(x′)) E_s[χ_S(s) χ_{S′}(s)] ]
(c) = E_{x,x′,σ}[ Σ_{S⊆[d]} f̂(S)² h(x) h(x′) χ_S(σ(x)) χ_S(σ(x′)) ]
(d) = E_σ[ Σ_{S⊆[d]} f̂(S)² E_x[h(x) χ_S(σ(x))]² ]
= Σ_{S⊆[d]} f̂(S)² E_σ[ ĥ(σ^{−1}(S))² ]
= Σ_{S⊆[d]} f̂(S)² \binom{d}{|S|}^{−1} Σ_{S′ : |S′|=|S|} ĥ(S′)².

And since Σ_{S′ : |S′|=|S|} ĥ(S′)² ≤ ‖h‖²_{L²(H_d)}, the supremum over h such that ‖h‖_{L²(H_d)} = 1 is achieved by taking h(x) = χ_S(x) for some S.
10 Surprisingly, this means that the full parity function f*(x) = Π_{i=1}^d x_i can be efficiently learned with such initializations. See Appendix B.
So if the Fourier coefficients of f are negligible for all S s.t. min(|S|, d − |S|) ≤ O(1), then the G_sign,perm-alignment of f is negligible. By Theorem 3.1, this means f cannot be learned efficiently. In Appendix B.1.2 we give a concrete example of a hard function that was not previously known.
3.1.2 Functions on sphere, FC networks with i.i.d. Gaussian initialization
We now study learning a target function on the unit sphere, f 2 L2(Sd 1), where we take the standard Lebesgue measure on Sd 1. A key fact in harmonic analysis is that L2(Sd 1) can be written as the direct sum of subspaces spanned by spherical harmonics of each degree (see, e.g., [Hoc12]).
L²(S^{d−1}) = ⊕_{l=0}^∞ V_{d,l},

where V_{d,l} ⊆ L²(S^{d−1}) is the space of degree-l spherical harmonics, which is of dimension

dim(V_{d,l}) = (2l + d − 2)/l · \binom{l + d − 3}{l − 1}.
Let Π_{V_{d,l}} : L²(S^{d−1}) → V_{d,l} be the projection operator to the space of degree-l spherical harmonics. In Appendix B.2, we prove this characterization of weak-learnability for functions on the sphere: Theorem 3.6. Let {f_d}_{d∈N} be a family of functions f_d : S^{d−1} → R with ‖f_d‖_{L²(S^{d−1})} ≤ 1. Then {f_d, S^{d−1}} is efficiently weak-learnable by GD-trained Gaussian-initialized FC networks if and only if there is a constant C > 0 such that Σ_{l=0}^C ‖Π_{V_{d,l}} f_d‖² ≥ d^{−C}.
The algorithmic result can again be achieved by two-layer FC networks, and is a consequence of the analysis of the random feature kernel in [GMMM21], which shows that the projection of f_d onto the low-degree spherical harmonics can be efficiently learned. For the impossibility result, we apply Theorem 3.1, noting that GD is G_rot-equivariant by Proposition 2.5, and the G_rot-alignment is: Lemma 3.7. Let f ∈ L²(S^{d−1}). Then C((f, S^{d−1}); G_rot) = max_{l∈Z≥0} ‖Π_{V_{d,l}} f‖² / dim(V_{d,l}).
Proof. The G_rot-alignment is computed using the representation theory of G_rot, specifically the Schur orthogonality theorem (see, e.g., [Ser77, Kna96]). For any l, the subspace V_{d,l} is invariant to action by G_rot, meaning that we may define the representation ρ_l of G_rot, which for any g ∈ G_rot, f ∈ V_{d,l} is given by ρ_l(g) : V_{d,l} → V_{d,l} and ρ_l(g) f = f ∘ g^{−1}. Furthermore, ρ_l is a unitary, irreducible representation, and ρ_l is not equivalent to ρ_{l′} for any l ≠ l′ (see e.g., [Sta90, Theorem 1]). Therefore, by the Schur orthogonality relations [Kna96, Corollary 4.10], for any v₁, w₁ ∈ V_{d,l₁} and v₂, w₂ ∈ V_{d,l₂}, we have

E_{g∼G_rot}[ ⟨ρ_{l₁}(g) v₁, w₁⟩_{L²(S^{d−1})} ⟨ρ_{l₂}(g) v₂, w₂⟩_{L²(S^{d−1})} ] = δ_{l₁ l₂} ⟨v₁, v₂⟩_{L²(S^{d−1})} ⟨w₁, w₂⟩_{L²(S^{d−1})} / dim(V_{d,l₁}).   (3)
Let g ∼ G_rot, drawn from the Haar probability measure. For any h ∈ L²(S^{d−1}) such that ‖h‖²_{L²(S^{d−1})} = 1, by (a) the decomposition of L²(S^{d−1}) into subspaces of spherical harmonics, (b) the G_rot-invariance of each subspace V_{d,l}, and (c) the Schur orthogonality relations in (3),

E_g[ ⟨f ∘ g, h⟩²_{L²(S^{d−1})} ]
(a) = Σ_{l₁,l₂=0}^∞ E_g[ ⟨Π_{V_{d,l₁}}(f ∘ g), Π_{V_{d,l₁}} h⟩ ⟨Π_{V_{d,l₂}}(f ∘ g), Π_{V_{d,l₂}} h⟩ ]
(b) = Σ_{l₁,l₂=0}^∞ E_g[ ⟨(Π_{V_{d,l₁}} f) ∘ g, Π_{V_{d,l₁}} h⟩ ⟨(Π_{V_{d,l₂}} f) ∘ g, Π_{V_{d,l₂}} h⟩ ]
(c) = Σ_{l=0}^∞ (1/dim(V_{d,l})) ‖Π_{V_{d,l}} f‖²_{L²(S^{d−1})} ‖Π_{V_{d,l}} h‖²_{L²(S^{d−1})}
≤ ( Σ_{l=0}^∞ ‖Π_{V_{d,l}} h‖²_{L²(S^{d−1})} ) · max_{l∈Z≥0} (1/dim(V_{d,l})) ‖Π_{V_{d,l}} f‖²_{L²(S^{d−1})}
= max_{l∈Z≥0} (1/dim(V_{d,l})) ‖Π_{V_{d,l}} f‖²_{L²(S^{d−1})}.

Let l* be the optimal value of l in the last line, which is known to exist by the fact that ‖Π_{V_{d,l}} f‖² ≤ ‖f‖² and dim(V_{d,l}) → ∞ as l → ∞. The inequality is achieved by h = Π_{V_{d,l*}} f / ‖Π_{V_{d,l*}} f‖.
This implies that the Grot-alignment of f is negligible if and only if its projection to the low-order spherical harmonics is negligible. By Theorem 3.1, this implies the necessity result of Theorem 3.6.
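For intuition about the scale of dim(V_{d,l}) in the above, here is a small sketch (ours, not from the paper) evaluating the dimension formula; for fixed l the dimension grows polynomially in d (roughly d^l / l!), which is why only the projections onto low-degree harmonics can produce a non-negligible G_rot-alignment.

```python
from math import comb

def dim_V(d, l):
    # dimension of the space of degree-l spherical harmonics on S^{d-1}
    if l == 0:
        return 1
    return (2 * l + d - 2) * comb(l + d - 3, l - 1) // l

for l in range(5):
    print(l, dim_V(1000, l))
# 0 1, 1 1000, 2 500499, ... roughly d^l / l! for fixed l and large d
```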
3.2 Application: Extending the merged-staircase property necessity result
In our second application, we study the setting of learning a sparse function on the binary hypercube (a.k.a. a junta) that depends on only P ≪ d coordinates of the input x, i.e.,

f*(x) = h*(x_1, . . . , x_P),

where h* : H_P → R. The regime of interest to us is when h* is fixed and d → ∞, representing a hidden signal in a high-dimensional dataset. This setting was studied by [ABM22], who identified the "merged-staircase property" (MSP) as an extension of [ABB+21]. We generalize the MSP below. Definition 3.8 (l-MSP). For l ∈ Z₊ and h* : H_P → R, we say that h* satisfies the merged staircase property with leap l (i.e., l-MSP) if its set of nonzero Fourier coefficients S = {S : ĥ*(S) ≠ 0} can be ordered as S = {S₁, . . . , S_m} such that for all i ∈ [m], |S_i \ ∪_{j<i} S_j| ≤ l.
For example, h⇤(x) = x1 + x1x2 + x1x2x3 satisfies 1-MSP; h⇤(x) = x1x2 + x1x2x3 satisfies 2-MSP, but not 1-MSP because of the leap required to learn x1x2; similarly h⇤(x) = x1x2x3 + x4 satisfies 3-MSP but not 2-MSP. If h⇤ satisfies l-MSP for some small l, then the function f⇤ can be learned greedily in an efficient manner, by iteratively discovering the coordinates on which it depends. In [ABM22] it was proved that the 1-MSP property nearly characterized which sparse functions could be ✏-learned in O✏,h⇤(d) samples by one-pass SGD training in the mean-field regime.
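The leap condition in Definition 3.8 can be checked greedily: repeatedly look for a not-yet-used support set that adds at most l new coordinates. Below is a small sketch of such a checker (our illustration; the function name is ours), reproducing the three examples above:

```python
def satisfies_l_msp(supports, l):
    """supports: list of sets, the nonzero Fourier supports of h*."""
    remaining = [frozenset(S) for S in supports]
    covered = set()
    while remaining:
        # greedily take any set introducing at most l new coordinates
        pick = next((S for S in remaining if len(S - covered) <= l), None)
        if pick is None:
            return False
        covered |= pick
        remaining.remove(pick)
    return True

# h(x) = x1 + x1x2 + x1x2x3   -> 1-MSP
print(satisfies_l_msp([{1}, {1, 2}, {1, 2, 3}], 1))        # True
# h(x) = x1x2 + x1x2x3        -> 2-MSP but not 1-MSP
print(satisfies_l_msp([{1, 2}, {1, 2, 3}], 1),
      satisfies_l_msp([{1, 2}, {1, 2, 3}], 2))             # False True
# h(x) = x1x2x3 + x4          -> 3-MSP but not 2-MSP
print(satisfies_l_msp([{1, 2, 3}, {4}], 2),
      satisfies_l_msp([{1, 2, 3}, {4}], 3))                # False True
```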
We prove the MSP necessity result for GD training. On the one hand, our necessity result is for a different training algorithm, GD, which injects noise during training. On the other, our result is much more general since it applies whenever GD is permutation-equivariant, which includes training of FC networks and ResNets of any depth (whereas the necessity result of [ABM22] applies only to two-layer architectures in the mean-field regime). We also generalize the result to any leap l. Theorem 3.9 (l-MSP necessity). Let fNN(·;✓) : Hd ! R be an architecture and µ✓ 2 P(Rp) be an initialization such that GD is Gperm-equivariant. Let ✓k be the random weights after k steps of GD training with noise parameter ⌧ > 0, step size ⌘, and clipping radius R on the distribution D = D(f⇤,Hd). Suppose that f⇤(x) = h⇤(z) where h⇤ : HP ! R does not satisfy l-MSP for some l 2 Z+. Then there are constants C, ✏0 > 0 depending on h⇤ such that
P_{θ_k}[ ℓ_D(θ_k) ≤ ε₀ ] ≤ (C η R / (2τ)) √(k / d^{l+1}) + C / d^{l+1}.
The interpretation is that if h⇤ does not satisfy l-MSP, then to learn f⇤ to better than ✏0 error with constant probability, we need at least ⌦h⇤,✏(dl+1) steps of (GD) on a network with step size ⌘ = Oh⇤,✏(1), clipping radius R = Oh⇤,✏(1), and noise level ⌧ = ⌦h⇤,✏(1). The proof is deferred to Appendix C. It proceeds by first isolating the “easily-reachable” coordinates T ✓ [P ], and subtracting their contribution from f⇤. We then bound G-alignment of the resulting function, where G is the permutation group on [d] \ T .
4 Hardness for learning with SGD
In this section, for σ > 0, we let D(f, μ_X, σ) ∈ P(X × R) denote the distribution of (x, f(x) + ξ) where x ∼ μ_X and ξ ∼ N(0, σ²) is independent noise.
We show that the equivariance of SGD on certain architectures implies that the function f_mod8 : H_d → {0, . . . , 7} given by

f_mod8(x) ≡ Σ_i x_i (mod 8)   (4)
is hard for SGD-trained, i.i.d. symmetrically-initialized FC networks. Our hardness result relies on a cryptographic assumption to prove superpolynomial lower bounds for SGD learning. For any S ⊆ [d], let χ_S : H_d → {+1, −1} be the parity function χ_S(x) = Π_{i∈S} x_i.
Definition 4.1. The learning parities with Gaussian noise, (d, n, σ)-LPGN, problem is parametrized by d, n ∈ Z_{>0} and σ ∈ R_{>0}. An instance (S, q, (x_i, y_i)_{i∈[n]}) consists of (i) an unknown subset S ⊆ [d] of size |S| = ⌊d/2⌋, and (ii) a known query vector q ∼ H_d, and i.i.d. samples (x_i, y_i)_{i∈[n]} ∼ D(χ_S, H_d, σ). The task is to return χ_S(q) ∈ {+1, −1}.11
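To make Definition 4.1 concrete, here is a small sketch (ours, not from the paper) that samples a (d, n, σ)-LPGN instance; an algorithm for the problem sees only q and the noisy samples, not S:

```python
import numpy as np

def sample_lpgn(d, n, sigma, rng):
    S = rng.choice(d, size=d // 2, replace=False)        # hidden support, |S| = floor(d/2)
    q = rng.choice([-1, 1], size=d)                       # known query point
    X = rng.choice([-1, 1], size=(n, d))
    y = np.prod(X[:, S], axis=1) + sigma * rng.normal(size=n)   # chi_S(x) + Gaussian noise
    target = np.prod(q[S])                                # the answer chi_S(q) in {+1, -1}
    return S, q, X, y, target

rng = np.random.default_rng(0)
S, q, X, y, target = sample_lpgn(d=20, n=100, sigma=1.0, rng=rng)
print(target, y[:3])
```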
Our cryptographic assumption is that poly(d)-size circuits cannot succeed on LPGN. Definition 4.2. Let σ > 0. We say σ-LPGN is poly(d)-time solvable if there is a sequence of sample sizes {n_d}_{d∈N} and circuits {A_d}_{d∈N} such that n_d, size(A_d) ≤ poly(d), and A_d solves (d, n_d, σ)-LPGN with success probability at least 9/10, when inputs are rounded to poly(d) bits. Assumption 4.3. Fix σ. The σ-LPGN-hardness assumption is: σ-LPGN is not poly(d)-time solvable.
The LPGN problem is simply the standard Learning Parities with Noise problem (LPN) [BKW03], except with Gaussian noise instead of binary classification noise, and we are also promised that |S| = ⌊d/2⌋. In Appendix D.3, we derive Assumption 4.3 from the standard hardness of LPN. We now state our SGD hardness result. Theorem 4.4. Let {f_NN,d, μ_θ,d}_{d∈N} be a family of networks and initializations satisfying Assumption 2.4 (fully-connected) with i.i.d. symmetric initialization. Let σ > 0, and let {n_d} be sample sizes such that (f_NN,d, μ_θ,d)-SGD training on n_d samples from D(f_mod8, H_d, σ) rounded to poly(d) bits yields parameters θ_d with

E_{θ_d}[ ‖f_mod8 − f_NN(·; θ_d)‖² ] ≤ 0.0001.

Then, under (σ/2)-LPGN hardness, (f_NN,d, μ_θ,d)-SGD on n_d samples cannot run in poly(d) time.
In order to prove Theorem 4.4, we use the sign-flip equivariance of gradient descent guaranteed by the symmetry in the initialization. A sign-flip equivariant network that learns f_mod8(x) from σ-noisy samples is capable of solving the harder problem of learning f_mod8(x ⊙ s) from σ-noisy samples, where s ∈ H_d is an unknown sign-flip vector. However, through an average-case reduction we show that this problem is (σ/2)-LPGN-hard. Therefore the theorem follows by contradiction.
5 Discussion
The general GD lower bound in Theorem 3.1 and the approach for basing hardness of SGD training on cryptographic assumptions in Theorem 4.4 could be further developed to other settings.
There are limitations of the results to address in future work. First, the GD lower bound requires adding noise to the gradients, which can hinder training. Second, real-world data distributions are typically not invariant to a group of transformations, so the results obtained by this work may not apply. It is open to develop results for distributions that are approximately invariant.
Finally, it is open whether computational lower bounds for SGD/GD training can be shown beyond those implied by equivariance. For example, consider the function f : H_d → {+1, −1} that computes the "full parity", i.e., the parity of all of the inputs f(x) = Π_{i=1}^d x_i. Past work has empirically shown that SGD on FC networks with Gaussian initialization [SSS17, AS20, NY21] fails to learn this function. Proving this would represent a significant advance, since there is no obvious equivariance that implies that the full parity is hard to learn — in fact we have shown weak-learnability with symmetric Rad(1/2) initialization, in which case training is G_sign,perm-equivariant.
Acknowledgements
We thank Jason Altschuler, Guy Bresler, Elisabetta Cornacchia, Sonia Hashim, Jan Hazla, Hannah Lawrence, Theodor Misiakiewicz, Dheeraj Nagaraj, and Philippe Rigollet for stimulating discussions. We thank the Simons Foundation and the NSF for supporting us through the Collaboration on the Theoretical Foundations of Deep Learning (deepfoundations.ai). This work was done in part while E.B. was visiting the Simons Institute for the Theory of Computing and the Bernoulli Center at EPFL, and was generously supported by Apple with an AI/ML fellowship.
11More formally, one would express this as a probabilistic promise problem [Ale03]. | 1. What is the main contribution of the paper regarding deep learning?
2. What are the strengths of the proposed approach, particularly in terms of novelty and mathematical tools?
3. What are the weaknesses of the paper, especially regarding its abstraction and lack of empirical studies?
4. Do you have any questions or concerns regarding the paper's assumptions and definitions?
5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper mainly focused on the universality of deep learning. Specifically, the authors showed that when the training algorithm satisfies a certain symmetry/equivariance, deep learning is no longer universal. Then, they characterize two settings for gradient descent (GD): 1) the functions that fully-connected networks can weakly learn on the binary hypercube as well as the unit sphere, and 2) the functions with latent low-dimensional structure that neural networks can learn. Lastly, the authors extend the results to stochastic GD.
Strengths And Weaknesses
Strength:
The paper has good novelty and defines a lot of new concepts, e.g., G-orbit-alignment, etc. It pushes the boundary of the research.
Uses some decent mathematical tools.
Weakness:
The paper is very abstract, and some definitions are not easy to think about. Could the authors give some examples to help readers better understand the newly defined concepts?
The authors always assume that the GD training algorithm is G-equivariant, but in practice, I was wondering if this condition could hold.
The paper lacks some empirical study to support the authors' findings.
Questions
In Line 237, the authors mentioned that if the G-orbit-alignment is very small, then GD training cannot efficiently improve on the trivial loss. Could the authors elaborate on this part a bit more, since the right-hand side in the equation between lines 235 and 236 also depends on other factors?
As I mentioned in the weakness part, some definitions and assumptions are abstract, could the authors give some examples for readers to follow easily?
Could the authors provide some numerical studies to demonstrate that the applications mentioned in the paper indeed work in practice?
Limitations
N/A. |
NIPS | Title
On the non-universality of deep learning: quantifying the cost of symmetry
Abstract
We prove limitations on what neural networks trained by noisy gradient descent (GD) can efficiently learn. Our results apply whenever GD training is equivariant, which holds for many standard architectures and initializations. As applications, (i) we characterize the functions that fully-connected networks can weak-learn on the binary hypercube and unit sphere, demonstrating that depth-2 is as powerful as any other depth for this task; (ii) we extend the merged-staircase necessity result for learning with latent low-dimensional structure [ABM22] to beyond the meanfield regime. Under cryptographic assumptions, we also show hardness results for learning with fully-connected networks trained by stochastic gradient descent (SGD).
1 Introduction
Over the last decade, deep learning has made advances in areas as diverse as image classification [KSH12], language translation [BCB14], classical board games [SHS+18], and programming [LCC+22]. Neural networks trained with gradient-based optimizers have surpassed classical methods for these tasks, raising the question: can we hope for deep learning methods to eventually replace all other learning algorithms? In other words, is deep learning a universal learning paradigm? Recently, [AS20, AKM+21] proved that in a certain sense the answer is yes: any PAC-learning algorithm [Val84] can be efficiently implemented as a neural network trained by stochastic gradient descent; analogously, any Statistical Query algorithm [Kea98] can be efficiently implemented as a neural network trained by noisy gradient descent.
However, there is a catch: the result of [AS20] relies on a carefully crafted network architecture with memory and computation modules, which is capable of emulating an arbitrary learning algorithm. This is far from the architectures which have been shown to be successful in practice. Neural networks in practice do incorporate domain knowledge, but they have more “regularity” than the architectures of [AS20], in the sense that they do not rely on heterogeneous and carefully assigned initial weights (e.g., convolutional networks and transformers for image recognition and language processing [LB+95, LKF10, VSP+17], graph neural networks for analyzing graph data [GMS05, BZSL13, VCC+17], and networks specialized for particle physics [BAO+20]). We therefore refine our question:
Is deep learning with “regular” architectures and initializations a universal learning paradigm? If not, can we quantify its limitations when architectures and data are not well aligned?
We would like an answer applicable to a wide range of architectures. In order to formalize the problem and develop a general theory, we take an approach similar to [Ng04, Sha18, LZA21] of understanding deep learning through the equivariance group G (a.k.a., symmetry group) of the learning algorithm.
Definition 1.1 (G-equivariant algorithm). A randomized algorithm A that takes in a data distribution D ∈ P(X × Y)1 and outputs a function A(D) : X → Y is said to be G-equivariant if for all g ∈ G

A(D) is equal in distribution to A(g(D)) ∘ g.   (G-equivariance)

Here g is a group element that acts on the data space X, and so is viewed as a function g : X → X, and g(D) is the distribution of (g(x), y), where (x, y) ∼ D.
In the case that the algorithm A is deep learning on the distribution D, the equivariance group depends on the optimizer, the architecture, and the network initialization [Ng04, LZA21].2
Examples of G-equivariant algorithms in deep learning In many deep learning settings, the equivariance group of the learning algorithm is large. Thus, in this paper, we call an algorithm “regular” if it has a large equivariance group. For example, SGD training of fully-connected networks with Gaussian initialization is orthogonally-equivariant [Ng04]; and is permutation-equivariant if we add skip connections [HZRS16]. SGD training of convolutional networks is translationally-equivariant if circular convolutions are used [SNPP19], and SGD training of i.i.d.-initialized transformers without positional embeddings is equivariant to permutations of tokens [VSP+17]. Furthermore, [LZA21, Theorem C.1] provides general conditions under which a deep learning algorithm is equivariant. See also the preliminaries in Section 2.
Summary of this work Based off of G-equivariance, we prove limitations on what “regular” neural networks trained by noisy gradient descent (GD) or stochastic gradient descent (SGD) can efficiently learn, implying a separation with the initializations and architectures considered in [AS20]. For GD, we prove a master theorem that enables two novel applications: (a) characterizing which functions can be efficiently weak-learned by fully-connected (FC) networks on both the hypercube and the unit sphere; and (b) a necessity result for which functions on the hypercube with latent low-dimensional structure can be efficiently learned. See Sections 1.2 and 1.3 for more details.
1.1 Related work
Most prior work on computational lower bounds for deep learning has focused on proving limitations of kernel methods (a.k.a. linear methods). Starting with [Bar93] and more recently with [WLLM19, AL19, KMS20, AL20, Hsu, HSSV21, ABM22] it is known that there are problems on which kernel methods provably fail. These results apply to training neural networks in the Neural Tangent Kernel (NTK) regime [JGH18], but do not apply to more general nonlinear training. Furthermore, for specific architectures such as FC architectures [GMMM21, Mis22] and convolutional architectures [MM21], the kernel and random features models at initialization are well understood, yielding stronger lower bounds for training in the NTK regime.
For nonlinear training, which is the setting of this paper, considerably less is known. In the context of sample complexity, [Ng04] introduced the study of the equivariance group of SGD, and constructed a distribution on d dimensions with a ⌦(d) versus O(1) sample complexity separation for learning with an SGD-trained FC architecture versus an arbitrary algorithm. More recently, [LZA21] built on [Ng04] to show a O(1) versus ⌦(d2) sample-complexity separation between SGD-trained convolutional and FC architectures. In this paper, we also analyze the equivariance group of the training algorithm, but with the goal of proving superpolynomial computational lower bounds.
In the context of computational lower bounds, it is known that networks trained with noisy3 gradient descent (GD) fall under the Statistical Query (SQ) framework [Kea98], which allows showing computational limitations for GD training based on SQ lower bounds. This has been combined in [AS20, SSS17, MS20, ACHM22] with the permutation symmetry of GD-training of i.i.d. FC networks to prove impossibility of efficiently learning high-degree parities and polynomials. In
1The set of probability distributions on ⌦ is denoted by P(⌦). You should think of D 2 P(X ⇥ Y) as a distribution of pairs (x, y) of covariates and labels.
2Note that the equivariance group of a training algorithm should not be confused with the equivariance group of an architecture in the context of geometric deep learning [BBCV21]. In that context, G-equivariance refers to the property of a neural network architecture fNN(·;✓) : X ! Y that fNN(g(x);✓) = g(fNN(x;✓)) for all x 2 X and all group elements g 2 G. In that case, G acts on both the input in X and output in Y .
3Here the noise is used to control the gradients’ precision as in [AS20, AKM+21].
our work, we show that these arguments can be viewed in the broader context of more general group symmetries, yielding stronger lower bounds than previously known. For stochastic gradient descent (SGD) training, [ABM22] proves a computational limitation for training of two-layer meanfield networks, but their result applies only when SGD converges to the mean-field limit, and does not apply to more general architectures beyond two-layer networks. Finally, most related to our SGD hardness result is [Sha18], which shows limitations of SGD-trained FC networks under a cryptographic assumption. However, the argument of [Sha18] relies on training being equivariant to linear transformations of the data, and therefore requires that data be whitened or preconditioned. Instead, our result for SGD does not require any preprocessing steps.
There is also recent work showing sample complexity benefits of invariant/equivariant neural network architectures [MMM21, EZ21, Ele21, BVB21, Ele22]. In contrast, we study equivariant training algorithms. These are distinct concepts: a deep learning algorithm can be G-equivariant, while the neural network architecture is neither G-invariant nor G-equivariant. For example, a FC network is not invariant to orthogonal transformations of the input. However, if we initialize it with Gaussian weights and train with SGD, then the learning algorithm is equivariant to orthogonal transformations of the input (see Proposition 2.5 below).
1.2 Contribution 1: Lower bounds for noisy gradient descent (GD)
Consider the supervised learning setup where we train a neural network f_NN(·; θ) : X → R parametrized by θ ∈ R^p to minimize the mean-squared error on a data distribution D ∈ P(X × R),

ℓ_D(θ) = E_{(x,y)∼D}[(y − f_NN(x; θ))²].   (1)
The noisy Gradient Descent (GD) training algorithm randomly initializes θ₀ ∼ μ_θ for some initialization distribution μ_θ ∈ P(R^p), and then iteratively updates the parameters with step size η > 0 in a direction g_D(θ_k) approximating the population loss gradient, plus Gaussian noise ξ_k ∼ N(0, τ²I),

θ_{k+1} = θ_k − η g_D(θ_k) + ξ_k.   (GD)
Up to a constant factor, g_D(θ) is the population loss gradient, except we have clipped the gradients of the network with the projection operator Π_{B(0,R)} to lie in the ball B(0, R) = {z : ‖z‖₂ ≤ R} ⊂ R^p,4

g_D(θ) = −E_{(x,y)∼D}[(y − f_NN(x; θ)) (Π_{B(0,R)} ∇_θ f_NN(x; θ))].

Clipping the gradients is often used in practice to avoid instability from exploding gradients (see, e.g., [ZHSJ19] and references within). In our context, clipping ensures that the injected noise ξ_k is on the same scale as the gradient ∇_θ f_NN of the network and so it controls the gradients' precision. Similarly to the works [AS20, AKM+21, ACHM22], we consider noisy gradient descent training to be efficient if the following conditions are met. Definition 1.2 (Efficiency of GD, informal). GD training is efficient if the clipping radius R, step size η, and inverse noise magnitude 1/τ are all polynomially-bounded in d, since then (GD) can be efficiently implemented using noisy minibatch SGD5.
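A minimal sketch of one noisy, clipped (GD) step as described above (our illustration; in particular, the population expectation is replaced by an empirical average over a batch, which is an assumption on our part):

```python
import numpy as np

def noisy_clipped_gd_step(theta, f, grad_f, X, y, eta, R, tau, rng):
    """One noisy, clipped GD step on the squared loss (illustrative sketch).

    f(x, theta) -> scalar prediction; grad_f(x, theta) -> gradient of f w.r.t. theta.
    """
    g = np.zeros_like(theta)
    for x_i, y_i in zip(X, y):
        grad_net = grad_f(x_i, theta)
        norm = np.linalg.norm(grad_net)
        if norm > R:                               # project the network gradient onto B(0, R)
            grad_net = grad_net * (R / norm)
        g += (f(x_i, theta) - y_i) * grad_net      # descent direction for (y - f)^2, up to a factor 2
    g /= len(X)
    xi = tau * rng.normal(size=theta.shape)        # injected noise xi ~ N(0, tau^2 I)
    return theta - eta * g + xi

# tiny usage example with a linear model f(x, theta) = <theta, x>
rng = np.random.default_rng(0)
theta = rng.normal(size=3)
X, y = rng.normal(size=(32, 3)), rng.normal(size=32)
theta = noisy_clipped_gd_step(theta, lambda x, t: t @ x, lambda x, t: x,
                              X, y, eta=0.1, R=1.0, tau=0.01, rng=rng)
```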
We prove that some data distributions cannot be efficiently learned by G-equivariant GD training. For this, we introduce the G-alignment: Definition 1.3 (G-alignment). Let G be a compact group, let μ_X ∈ P(X) be a distribution over data points, and let f ∈ L²(μ_X) be a labeling function. The G-alignment of (μ_X, f) is:

C((μ_X, f); G) = sup_h E_{g∼μ_G}[ E_{x∼μ_X}[f(g(x)) h(x)]² ],

where μ_G is the Haar measure of G and the supremum is over h ∈ L²(μ_X) such that ‖h‖₂ = 1.
In our applications, we use tools from representation theory (see e.g., [Kna96]) to evaluate the G-alignment. Using the G-alignment, we can prove a master theorem for lower bounds: Theorem 1.4 (GD lower bound, informal statement of Theorem 3.1). Let Df 2 P(X ⇥ R) be the distribution of (x, f(x)) for x ⇠ µX . If µX is G-invariant6 and the G-alignment of (µX , f) is small, then f cannot be efficiently learned by a G-equivariant GD algorithm.
4Note that if fNN is an R-Lipschitz model, then gD(✓) will simply be the population gradient of the loss. 5Efficient implementability by minibatch SGD assumes bounded residual errors. 6Meaning that if x ⇠ µX , then for any g 2 G, we also have g(x) ⇠ µX .
Proof ideas We first make an observation of [Ng04]: if a G-equivariant algorithm can learn the function f by training on the distribution D_f, then, for any group element g ∈ G, it can learn f ∘ g by training on the distribution D_{f∘g}. In other words, the algorithm can learn the class of functions F = {f ∘ g : g ∈ G}, which can potentially be much larger than just the singleton set {f}. We conclude by showing that the class of functions F cannot be efficiently learned by GD training. The intuition is that the G-alignment measures the diversity of the functions in F. If the G-alignment is small, then there is no function h that correlates with most of the functions in F, which can be used to show F is hard to learn by gradient descent.
This type of argument appears in [AS20, ACHM22] in the specific case of Boolean functions and for permutation equivariance; our proof both applies to a more general setting (beyond Boolean functions and permutations) and yields sharper bounds; see Appendix A.3. Our bound can also be interpreted in terms of the Statistical Query framework, as we discuss in Appendix A.4. While Theorem 1.4 is intuitively simple, we demonstrate its power and ease-of-use by deriving two new applications.
Application: Characterization of weak-learnability by fully-connected (FC) networks In our first application, we consider weak-learnability: when can a function be learned non-negligibly better than just outputting the estimate f_NN ≡ 0? Using Theorem 1.4, we characterize which functions over the binary hypercube f : {+1, −1}^d → R and over the sphere f : S^{d−1} → R are efficiently weak-learnable by GD-trained FC networks with i.i.d. symmetric and i.i.d. Gaussian initialization, respectively. The takeaway is that a function f : {+1, −1}^d → R is weak-learnable if and only if it has a nonnegligible Fourier coefficient of order O(1) or d − O(1). Similarly, a function f : S^{d−1} → R is weak-learnable if and only if it has nonnegligible projection onto the degree-O(1) spherical harmonics. Perhaps surprisingly, such functions can be efficiently weak-learned by 2-layer fully-connected networks, which shows that adding more depth does not help. This application is presented in Section 3.1.
Application: Evidence for the staircase property In our second application, we consider learning a target function f : {+1, 1}d ! R that only depends on the first P coordinates, f(x) = h(x1, . . . , xP ). Our regime of interest here is when the function hand : {+1, 1}P ! R remains fixed and the dimension d grows, since this models the situation where a latent low-dimensional space determines the labels in a high-dimensional dataset. Recently, [ABM22] studied SGD-training of mean-field two-layer networks, and gave a near-characterization of which functions can be learned to arbitrary accuracy ✏ in Oh,✏(d) samples, in terms of the merged-staircase property (MSP). Using Theorem 1.4, we prove that the MSP is necessary for GD-learnability whenever training is permutation-equivariant (which applies beyond the 2-layer mean-field regime) and we also generalize it beyond leaps of size 1. Details are in Section 3.2.
1.3 Contribution 2: Hardness for stochastic gradient descent (SGD)
The second part of this paper concerns Stochastic Gradient Descent (SGD) training, which randomly initializes the weights ✓0 ⇠ µ✓ , and then iteratively trains the parameters with the following update rule to try to minimize the loss (1):
✓k+1 = ✓k ⌘r✓(y fNN(xk+1;✓)) 2 |✓=✓k , (SGD)
where (yk+1,xk+1) ⇠ D is a fresh sample on each iteration, and ⌘ > 0 is the learning rate.7
Proving computational lower bounds for SGD is a notoriously difficult problem [AKM+21], exacerbated by the fact that for general architectures SGD can be used to simulate any polynomial-time learning algorithm [AS20]. However, we demonstrate that one can prove hardness results for SGD training based off of cryptographic assumptions when the training algorithm has a large equivariance group. We demonstrate the non-universality of SGD on a standard FC architecture. Theorem 1.5 (Hardness for SGD, informal statement of Theorem 4.4). Under the assumption that the Learning Parities with Noise (LPN) problem8 is hard, FC neural networks with Gaussian initialization
7For brevity, we focus on one-pass SGD with a single fresh sample per iteration. Our results extend to empirical risk minimization (ERM) setting and to mini-batch SGD, see Remark E.1.
8See Section 4 and Appendix D.3 for definitions and discussion on LPN.
trained by SGD cannot learn fmod8 : {+1, 1}d ! {0, . . . , 7},
fmod8(x) ⌘ dX
i=1
xi (mod 8),
in polynomial time from noisy samples (x, fmod8(x) + ⇠) where x ⇠ {+1, 1}d and ⇠ ⇠ N (0, 1).
This result shows a limitation of SGD training based on an average-case reduction from a cryptographic problem. The closest prior result is in [Sha18], which proved hardness results for learning with SGD on FC networks, but required preprocessing the data with a whitening transformation.
Proof idea The FC architecture and Gaussian initialization are necessary: an architecture that outputted fmod8(x) at initialization would trivially achieve zero loss. However, SGD on Gaussianinitialized FC networks is sign-flip equivariant, and this symmetry makes fmod8 hard to learn. If a sign-flip equivariant algorithm can learn the function fmod8(x) from noisy samples, then it can learn the function fmod8(x s) from noisy samples, where s 2 {+1, 1}d is an unknown sign-flip vector, and denotes elementwise product. However, this latter problem is hard under standard cryptographic assumptions. More details in Section 4.
2 Preliminaries
Notation Let Hd = {+1, 1}d be the binary hypercube, and Sd 1 = {x 2 Rd : kxk2 = 1} be the unit sphere. The law of a random variable X is L(X). If S is a finite set, then X ⇠ S stands for X ⇠ Unif[S]. Also let x ⇠ Sd 1 denote x drawn from the uniform Haar measure on Sd 1. For a set ⌦, let P(⌦) be the set of distributions on ⌦. Let be the elementwise product. For any µX 2 P(X ), and group G acting on X , we say µX is G-invariant if g(x) d = x for x ⇠ µX and any g 2 G.
2.1 Equivariance of GD and SGD
We define GD and SGD equivariance separately. Definition 2.1. Let AGD be the algorithm that takes in data distribution D 2 P(X ⇥ R), runs (GD) on initialization ✓0 ⇠ µ✓ for k steps, and outputs the function AGD(D) = fNN(·;✓k)
We say “(fNN, µ✓)-GD is G-equivariant” if AGD is G-equivariant in the sense of Definition 1.1. Definition 2.2. Let ASGD be the algorithm that takes in samples (xi, yi)i2[n], runs (SGD) on initialization ✓0 ⇠ µ✓ for n steps, and outputs ASGD((xi, yi)i2[n]) = fNN(·;✓k).
We say “(fNN, µ✓)-SGD is G-equivariant” if ASGD((xi, yi)i2[n]) d = ASGD((g(xi), yi)i2[n]) g for any g 2 G, and any samples (xi, yi)i2[n].
2.2 Regularity conditions on networks imply equivariances of GD and SGD
We take a data space X ✓ Rd, and consider the following groups that act on Rd. Definition 2.3. Define the following groups and actions:
• Let Gperm = Sd denote the group of permutations on [d]. An element 2 Gperm acts on x 2 Rd in the standard way: (x) = (x (1), . . . , x (d)).
• Let Gsign,perm denote the group of signed permutations, an element g = (s, ) 2 Gsign,perm is given by a sign-flip vector s 2 Hd and a permutation 2 Gperm. It acts on x 2 Rd by g(x) = s (x) = (s1x (1), . . . , sdx (d)).9
• Let Grot = SO(d) ✓ GL(d,R) denote the rotation group. An element g 2 Grot is a rotation matrix that acts on x 2 Rd by matrix multiplication.
9The group product is g1g2 = (s1, 1)(s2, 2) = (s1 1(s2), 1 2).
Under mild conditions on the neural network architecture and initialization, GD and SGD training are known to be Gperm-, Gsign,perm-, or Grot-equivariant [Ng04, LZA21]. Assumption 2.4 (Fully-connected i.i.d. first layer and no skip connections from the input). We can decompose the parameters as ✓ = (W , ), where W 2 Rm⇥d is the matrix of the first-layer weights, and there is a function gNN(·; ) : Rm ! R such that fNN(x;✓) = gNN(Wx; ). Furthermore, the initialization distribution is µ✓ = µW ⇥ µ , where µW = µ ⌦(m⇥d) w for µw 2 P(R).
Notice that Assumption 2.4 is satisfied by FC networks with i.i.d. initialization. Under assumptions on µw, we obtain equivariances of GD and SGD (see Appendix E for proofs.) Proposition 2.5 ([Ng04, LZA21]). Under Assumption 2.4, GD and SGD are Gperm-equivariant. If µw is sign-flip symmetric, then GD and SGD are Gsign,perm-equivariant. If µw = N (0, 2) for some , then GD and SGD are Grot-equivariant.
3 Lower bounds for learning with GD
In this section, let D(f, µX ) 2 P(X ⇥ R) denote the distribution of (x, f(x)) where x ⇠ µX . We give a master theorem for computational lower bounds for learning with G-equivariant GD. Theorem 3.1 (GD lower bound using G-alignment). Let G be a compact group, and let fNN(·;✓) : X ! R be an architecture and µ✓ 2 P(Rp) be an initialization such that GD is G-equivariant. Fix any G-invariant distribution µX 2 P(X ), any label function f⇤ 2 L2(µX ), and any baseline function ↵ 2 L2(µX ) satisfying ↵ g = ↵ for all g 2 G. Let ✓k be the random weights after k time-steps of GD training with noise parameter ⌧ > 0, step size ⌘ > 0, and clipping radius R > 0 on the distribution D = D(f⇤, µX ). Then, for any ✏ > 0,
P✓k [`D(✓k) kf⇤ ↵k2L2(µX ) ✏] ⌘R
p kC 2⌧ + C ✏ ,
where C = C((f⇤ ↵, µX );G) is the G-alignment of Definition 1.3.
As discussed in Section 1.2, the theorem states that if the G-alignment C is very small, then GD training cannot efficiently improve on the trivial loss from outputting ↵: either the number of steps k, the gradient precision R/⌧ , or the step size ⌘ have to be very large in order to learn. Appendix A shows a generalization of the theorem for learning a class of functions F = {f1, . . . , fm} instead of just a single function f⇤. This result goes beyond the lower bound of [AS20] even when G is the trivial group with one element: the main improvement is that Theorem 3.1 proves hardness for learning real-valued functions beyond just Boolean-valued functions. We demonstrate the usefulness of the theorem through two new applications in Sections 3.1 and 3.2.
3.1 Application: Characterizing weak-learnability by FC networks
In our first application of Theorem 3.1, we consider FC architectures with i.i.d. initialization, and show how to use their training equivariances to characterize what functions they can weak-learn: i.e., for what target functions f⇤ they can efficiently achieve a non-negligible correlation after training. Definition 3.2 (Weak learnability). Let {µd}d2N be a family of distributions µd 2 P(Xd), and let {fd}d2N be a family of functions fd 2 L2(µd). Finally, let {f̃d}d2N be a family of estimators, where f̃d is a random function in L2(µd). We say that {fd, µd}d2N is “weak-learned” by the family of estimators {f̃d}d2N if there are constants d0, C > 0 such that for all d > d0,
Pf̃d [kfd f̃dk 2 L2(µd) kfdk 2 L2(µd) d C ] 9/10. (2)
The constant 9/10 in the definition is arbitrary. In words, weak-learning measures whether the family of estimators {f̃d} has a non-negligible edge over simply estimating with the identically zero functions f̃d ⌘ 0. We study weak-learnability by GD-trained FC networks. Definition 3.3. We say that {fd, µd}d2N is efficiently weak-learnable by GD-trained FC networks if there are FC networks and initializations {fNN,d, µ✓,d}, and hyperparameters {⌘d, kd, Rd, ⌧d} such that for some constant c > 0,
• Hyperparameters are polynomial size: 0 ⌘d, kd, Rd, 1/⌧d O(dc);
• {f̃d} weak-learns {fd, µd} in the sense of Definition 3.2, where f̃d = fNN(·;✓d) for weights ✓d that are GD-trained on D(fd, µd) for kd steps with step size ⌘d, clipping radius Rd, and noise ⌧d, starting from initialization µ✓,d.
If µ✓,d is i.i.d copies of a symmetric distribution, we say that the FC networks are symmetricallyinitialized, and Gaussian-initialized if µ✓,d is i.i.d. copies of a Gaussian distribution.
3.1.1 Functions on hypercube, FC networks with i.i.d. symmetric initialization
Let us first consider functions on the Boolean hypercube f : Hd ! R. These can be uniquely written as a multilinear polynomial
f(x) = X
S✓[d]
f̂(S) Y
i2S
xi,
where f̂(S) are the Fourier coefficients of f [O’D14]. We characterize weak learnability of functions on the hypercube in terms of their Fourier coefficients. The full proof is deferred to Appendix B.1. Theorem 3.4. Let {fd}d2N be a family of functions fd : Hd ! R with kfdkL2(Hd) 1. Then {fd,Hd} is efficiently weak-learnable by GD-trained symmetrically-initialized FC networks if and only if there is a constant C > 0 such that for each d 2 N there is Sd ✓ [d] with |Sd| C or |Sd| d C, and |f̂d(Sd)| ⌦(d C).
The algorithmic result can be achieved by two-layer FC networks, and relies on random features analysis where each network weight is initialized to 0 with probability 1 p, and +1 or 1 with equal probability p/2.10 Therefore, for weak learning on the hypercube, two-layer networks are as good as networks of any depth. For the converse impossibility result, we apply Theorem 3.1, recalling that GD is Gsign,perm-equivariant by Proposition 2.5, and noting that Gsign,perm-alignment is:
Lemma 3.5. Let f : Hd ! R. Then C((f,Hd);Gsign,perm) = maxk2[d] d k 1P S✓[d] |S|=k f̂(S)2.
Proof. In the following, let s ⇠ Hd and ⇠ Gperm, so that g = (s, ) ⇠ Gsign,perm. Also let x,x0 ⇠ Hd be independent. For any h : Hd ! R, by (a) tensorizing, (b) expanding f in the Fourier basis, (c) the orthogonality relation Es[ S(s) S0(s)] = S,S0 , and (d) tensorizing,
Eg[Ex[f(g(x))h(x)]2] = E ,s[Ex[f(s (x))h(x)]2] (a) = E ,s,x,x0 [f(s (x))f(s (x0))h(x)h(x0)] (b) = Ex,x0, [ X
S,S0✓[d]
f̂(S)f̂(S0)h(x)h(x0) S( (x)) S0( (x 0))Es[ S(s) S0(s)]]
(c) = Ex,x0, [
X
S✓[d]
f̂(S)2h(x)h(x0) S( (x)) S( (x 0))]
(d) = E [
X
S✓[d]
f̂(S)2 Ex[h(x) S( (x))]2]
= X
S✓[d]
f̂(S)2 E [ĥ( 1(S))2]
= X
S✓[d]
f̂(S)2 ✓ d
|S|
◆ 1 X
S0,|S0|=|S|
ĥ(S0)2.
And since P
S0,|S0|=|S| ĥ(S 0)2 khk2L2(Hd), the supremum over h such that khkL2(Hd) = 1 is
achieved by taking h(x) = S(x) for some S.
10Surprisingly, this means that the full parity function f⇤(x) = Qd
i=1 xi can be efficiently learned with such initializations. See Appendix B.
So if the Fourier coefficients of f are negligible for all S s.t. min(|S|, d |S|) O(1), then the Gsign,perm-alignment of f is negligible. By Theorem 3.1, this means f cannot be learned efficiently. In Appendix B.1.2 we give a concrete example of a hard function, that was not previously known.
3.1.2 Functions on sphere, FC networks with i.i.d. Gaussian initialization
We now study learning a target function on the unit sphere, f 2 L2(Sd 1), where we take the standard Lebesgue measure on Sd 1. A key fact in harmonic analysis is that L2(Sd 1) can be written as the direct sum of subspaces spanned by spherical harmonics of each degree (see, e.g., [Hoc12]).
L 2(Sd 1) =
1M
l=0
Vd,l,
where Vd,l ✓ L2(Sd 1) is the space of degree-l spherical harmonics, which is of dimension
dim(Vd,l) = 2l + d 2
l
✓ l + d 3
l 1
◆ .
Let ⇧Vd,l : L2(Sd 1) ! Vd,l be the projection operator to the space of degree-l spherical harmonics. In Appendix B.2, we prove this characterization of weak-learnability for functions on the sphere: Theorem 3.6. Let {fd}d2N be a family of functions fd : Sd 1 ! R with kfdkL2(Sd 1) 1. Then {fd, Sd 1} is efficiently weak-learnable by GD-trained Gaussian-initialized FC networks if and only if there is a constant C > 0 such that PC l=0 k⇧Vd,lfdk 2 d C .
The algorithmic result can again be achieved by two-layer FC networks, and is a consequence of the analysis of the random feature kernel in [GMMM21], which shows that the projection of fd onto the low-degree spherical harmonics can be efficiently learned. For the impossibility result, we apply Theorem 3.1, noting that GD is Grot-equivariant by Proposition 2.5, and the Grot-alignment is: Lemma 3.7. Let f 2 L2(Sd 1). Then C((f, Sd 1);Grot) = maxl2Z 0 k⇧Vd,lfk2/ dim(Vd,l).
Proof. The Grot-alignment is computed using the representation theory of Grot, specifically the Schur orthogonality theorem (see, e.g., [Ser77, Kna96]). For any l, the subspace Vd,l is invariant to action by Grot, meaning that we may define the representation l of Grot, which for any g 2 Grot, f 2 Vd,l is given by l(g) : Vd,l ! Vd,l and l(g)f = f g 1. Furthermore, l is a unitary, irreducible representation, and l is not equivalent to l0 , for any l 6= l0 (see e.g., [Sta90, Theorem 1]). Therefore, by the Schur orthogonality relations [Kna96, Corollary 4.10], for any v1, w1 2 Vd,l1 and v2, w2 2 Vd,l2 , we have
Eg⇠Grot [h l1(g)v1, w1iL2(Sd 1)h l2(g)v2, w2iL2(Sd 1)] = l1l2hv1, v2iL2(Sd 1)hw1, w2iL2(Sd 1)/ dim(Vd,l1). (3)
Let g ⇠ Grot, drawn from the Haar probability measure. For any h 2 L2(Sd 1) such that khk
2 L2(Sd 1) = 1, by (a) the decomposition of L 2(Sd 1) into subspaces of spherical harmonics, (b) the Grot-invariance of each subspace Vd,l, and (c) the Schur orthogonality relations in (3),
Eg[hf g, hi2L2(Sd 1)] (a) =
1X
l1,l2=0
Eg[h⇧Vd,l1 (f g),⇧Vd,l1hiL2(Sd 1)h⇧Vd,l2 (f g),⇧Vd,l2hiL2(Sd 1)]
(b) =
1X
l1,l2=0
Eg[h(⇧Vd,l1 f) g,⇧Vd,l1hiL2(Sd 1)h(⇧Vd,l2 f) g,⇧Vd,l2hiL2(Sd 1)]
(c) =
1X
l=0
1
dim(Vd,l) k⇧Vd,lfk
2 L2(Sd 1)k⇧Vd,lhk 2 L2(Sd 1)
1X
l=0
k⇧Vd,lhk 2 L2(Sd 1) ! max l2Z 0
1
dim(Vd,l) k⇧Vd,lfk
2 L2(Sd 1)
= max l2Z 0
1
dim(Vd,l) k⇧Vd,lfk
2 L2(Sd 1).
Let l⇤ be the optimal value of l in the last line, which is known to exist by the fact that k⇧Vd,lfk2 kfk
2 and dim(Vd,l) ! 1 as l ! 1. The inequality is achieved by h = ⇧Vd,l⇤ f/k⇧Vd,l⇤ fk.
This implies that the Grot-alignment of f is negligible if and only if its projection to the low-order spherical harmonics is negligible. By Theorem 3.1, this implies the necessity result of Theorem 3.6.
3.2 Application: Extending the merged-staircase property necessity result
In our second application, we study the setting of learning a sparse function on the binary hypercube (a.k.a. a junta) that depends on only P d coordinates of the input x, i.e.,
f⇤(x) = h⇤(x1, . . . , xP ),
where h⇤ : HP ! R. The regime of interest to us is when h⇤ is fixed and d ! 1, representing a hidden signal in a high-dimensional dataset. This setting was studied by [ABM22], who identified the “merged-staircase property” (MSP) as an extension of [ABB+21]. We generalize the MSP below. Definition 3.8 (l-MSP). For l 2 Z+ and h⇤ : HP ! R, we say that h⇤ satisfies the merged staircase property with leap l (i.e., l-MSP) if its set of nonzero Fourier coefficients S = {S : ĥ⇤(S) 6= ;} can be ordered as S = {S1, . . . , Sm} such that for all i 2 [m], |Si \ [j<iSj | l.
For example, h⇤(x) = x1 + x1x2 + x1x2x3 satisfies 1-MSP; h⇤(x) = x1x2 + x1x2x3 satisfies 2-MSP, but not 1-MSP because of the leap required to learn x1x2; similarly h⇤(x) = x1x2x3 + x4 satisfies 3-MSP but not 2-MSP. If h⇤ satisfies l-MSP for some small l, then the function f⇤ can be learned greedily in an efficient manner, by iteratively discovering the coordinates on which it depends. In [ABM22] it was proved that the 1-MSP property nearly characterized which sparse functions could be ✏-learned in O✏,h⇤(d) samples by one-pass SGD training in the mean-field regime.
We prove the MSP necessity result for GD training. On the one hand, our necessity result is for a different training algorithm, GD, which injects noise during training. On the other, our result is much more general since it applies whenever GD is permutation-equivariant, which includes training of FC networks and ResNets of any depth (whereas the necessity result of [ABM22] applies only to two-layer architectures in the mean-field regime). We also generalize the result to any leap l. Theorem 3.9 (l-MSP necessity). Let fNN(·;✓) : Hd ! R be an architecture and µ✓ 2 P(Rp) be an initialization such that GD is Gperm-equivariant. Let ✓k be the random weights after k steps of GD training with noise parameter ⌧ > 0, step size ⌘, and clipping radius R on the distribution D = D(f⇤,Hd). Suppose that f⇤(x) = h⇤(z) where h⇤ : HP ! R does not satisfy l-MSP for some l 2 Z+. Then there are constants C, ✏0 > 0 depending on h⇤ such that
P✓k [`D(✓k) ✏0] C⌘R
2⌧
r k
dl+1 +
C
dl+1 .
The interpretation is that if h⇤ does not satisfy l-MSP, then to learn f⇤ to better than ✏0 error with constant probability, we need at least ⌦h⇤,✏(dl+1) steps of (GD) on a network with step size ⌘ = Oh⇤,✏(1), clipping radius R = Oh⇤,✏(1), and noise level ⌧ = ⌦h⇤,✏(1). The proof is deferred to Appendix C. It proceeds by first isolating the “easily-reachable” coordinates T ✓ [P ], and subtracting their contribution from f⇤. We then bound G-alignment of the resulting function, where G is the permutation group on [d] \ T .
4 Hardness for learning with SGD
In this section, for > 0, we let D(f, µX , ) 2 P(X ⇥ R) denote the distribution of (x, f(x) + ⇠) where x ⇠ µX and ⇠ ⇠ N (0, 2) is independent noise.
We show that the equivariance of SGD on certain architectures implies that the function fmod8 : Hd ! {0, . . . , 7} given by
fmod8(x) ⌘ X
i
xi (mod 8) (4)
is hard for SGD-trained, i.i.d. symmetrically-initialized FC networks. Our hardness result relies on a cryptographic assumption to prove superpolynomial lower bounds for SGD learning. For any S ✓ [d], let S : Hd ! {+1, 1} be the parity function S(x) = Q i2S xi.
Definition 4.1. The learning parities with Gaussian noise, (d, n, )-LPGN, problem is parametrized by d, n 2 Z>0 and 2 R>0. An instance (S, q, (xi, yi)i2[n]) consists of (i) an unknown subset S ✓ [d] of size |S| = bd/2c, and (ii) a known query vector q ⇠ Hd, and i.i.d. samples (xi, yi)i2[n] ⇠ D( S ,Hd, ). The task is to return S(q) 2 {+1, 1}.11
Our cryptographic assumption is that poly(d)-size circuits cannot succeed on LPGN. Definition 4.2. Let > 0. We say -LPGN is poly(d)-time solvable if there is a sequence of sample sizes {nd}d2N and circuits {Ad}d2N such that nd, size(Ad) poly(d), and Ad solves (d, nd, )-LPGN with success probability at least 9/10, when inputs are rounded to poly(d) bits. Assumption 4.3. Fix . The -LPGN-hardness assumption is: -LPGN is not poly(d)-time solvable.
The LPGN problem is the simply standard Learning Parities with Noise problem (LPN) [BKW03], except with Gaussian noise instead of binary classification noise, and we are also promised that |S| = bd/2c. In Appendix D.3, we derive Assumption 4.3 from the standard hardness of LPN. We now state our SGD hardness result. Theorem 4.4. Let {fNN,d, µ✓,d}d2N be a family of networks and initializations satisfying Assumption 2.4 (fully-connected) with i.i.d. symmetric initialization. Let > 0, and let {nd} be sample sizes such that (fNN,d, µ✓,d)-SGD training on nd samples from D(fmod8,Hd, ) rounded to poly(d) bits yields parameters ✓d with
E✓d [kfmod8 fNN(·;✓d)k2] 0.0001.
Then, under ( /2)-LPGN hardness, (fNN,d, µ✓,d)-SGD on nd samples cannot run in poly(d) time.
In order to prove Theorem 4.4, we use the sign-flip equivariance of gradient descent guaranteed by the symmetry in the initialization. A sign-flip equivariant network that learns fmod8(x) from -noisy samples, is capable of solving the harder problem of learning fmod8(x s) from -noisy samples, where s 2 Hd is an unknown sign-flip vector. However, through an average-case reduction we show that this problem is ( /2)-LPGN-hard. Therefore the theorem follows by contradiction.
5 Discussion
The general GD lower bound in Theorem 3.1 and the approach for basing hardness of SGD training on cryptographic assumptions in Theorem 4.4 could be extended to other settings.
The results have limitations to address in future work. First, the GD lower bound requires adding noise to the gradients, which can hinder training. Second, real-world data distributions are typically not invariant to a group of transformations, so the results obtained in this work may not apply directly. It remains open to develop results for distributions that are approximately invariant.
Finally, it is open whether computational lower bounds for SGD/GD training can be shown beyond those implied by equivariance. For example, consider the function f : H^d → {+1, −1} that computes the “full parity”, i.e., the parity of all of the inputs, f(x) = ∏_{i=1}^d x_i. Past work has empirically shown that SGD on FC networks with Gaussian initialization [SSS17, AS20, NY21] fails to learn this function. Proving this would represent a significant advance, since there is no obvious equivariance that implies that the full parity is hard to learn; in fact we have shown weak-learnability with symmetric Rad(1/2) initialization, in which case training is G_sign,perm-equivariant.
Acknowledgements
We thank Jason Altschuler, Guy Bresler, Elisabetta Cornacchia, Sonia Hashim, Jan Hazla, Hannah Lawrence, Theodor Misiakiewicz, Dheeraj Nagaraj, and Philippe Rigollet for stimulating discussions. We thank the Simons Foundation and the NSF for supporting us through the Collaboration on the Theoretical Foundations of Deep Learning (deepfoundations.ai). This work was done in part while E.B. was visiting the Simons Institute for the Theory of Computing and the Bernoulli Center at EPFL, and was generously supported by Apple with an AI/ML fellowship.
¹¹ More formally, one would express this as a probabilistic promise problem [Ale03]. | 1. What is the focus and contribution of the paper regarding the learnability of functions by a standard MLP with gradient descent?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and novelty?
3. What are the weaknesses of the paper, especially regarding the significance of some of its results and comparisons with other works?
4. Do you have any concerns about the relevance of the results with noise to the standard noiseless training process?
5. Can you provide examples of functions that are not weak learnable? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper tries to analyze which functions can and cannot be learned by a standard MLP with gradient descent. This is analyzed by looking at the symmetry of the initialization and of GD, so the intuition is that functions that do not follow this symmetry are hard to learn.
Strengths And Weaknesses
I would first want to state that I am not a theoretician so I do not think I am the right person to review this paper, but I did the best I could to evaluate it.
Strength:
The paper is clearly written
The proofs seem correct to me
The question of what functions can be learned is an important one so the work is important and novel.
Weaknesses:
I mainly question the significance of some of the results, especially since the definition of weak learnability is very weak (as the "non-negligible" edge can be quite negligible with a large value of C). For example, in Thm. 3.6 the condition is ∑_{ℓ=0}^{C} ‖P_{V_{d,ℓ}} f_d‖² ≥ d^{−C}: the l.h.s. increases (or at least does not decrease) with C while the r.h.s. decreases rapidly. It makes the condition very weak (as weak learnability is very weak), so I don't think it adds any value to our understanding.
Missing related work: Shai Shalev-Shwartz, Ohad Shamir and Shaked Shammah, "Failures of Gradient-Based Deep Learning".
You do not show results for GD/SGD but for GD/SGD with noise. This is equivalent to what is done in SGLD or differentially private learning which significantly hampers performance. It is not clear to me at all that the results with noise are relevant to the standard noiseless training due to the major impact the noise makes on the optimization process.
Questions
Can you give an example (that wasn't shown previously) of something that isn't weak learnable?
Limitations
Not discussed |
NIPS | Title
Learning Disentangled Representations of Videos with Missing Data
Abstract
Missing data poses significant challenges while learning representations of video sequences. We present Disentangled Imputed Video autoEncoder (DIVE), a deep generative model that imputes and predicts future video frames in the presence of missing data. Specifically, DIVE introduces a missingness latent variable and disentangles the hidden video representations into static and dynamic appearance, pose, and missingness factors for each object. DIVE imputes each object’s trajectory where the data is missing. On a moving MNIST dataset with various missing scenarios, DIVE outperforms the state-of-the-art baselines by a substantial margin. We also present comparisons on a real-world MOTSChallenge pedestrian dataset, which demonstrates the practical value of our method in a more realistic setting. Our code and data can be found at https://github.com/Rose-STL-Lab/DIVE.
1 Introduction
Videos contain rich structured information about our physical world. Learning representations from video enables intelligent machines to reason about the surroundings and it is essential to a range of tasks in machine learning and computer vision, including activity recognition [1], video prediction [2] and spatiotemporal reasoning [3]. One of the fundamental challenges in video representation learning is the high-dimensional, dynamic, multi-modal distribution of pixels. Recent research in deep generative models [4, 5, 6, 7] tackles the challenge by exploiting inductive biases of videos and projecting the high-dimensional data into substantially lower dimensional space. These methods search for disentangled representations by decomposing the latent representation of video frames into semantically meaningful factors [8].
Unfortunately, existing methods cannot reason about the objects when they are missing in videos. In contrast, a five-month-old child can understand that objects continue to exist even when they are unseen, a phenomenon known as “object permanence” [9]. Towards making intelligent machines, we study learning disentangled representations of videos with missing data. We consider a variety of missing scenarios that might occur in natural videos: objects can be partially occluded; objects can disappear in a scene and reappear; objects can also become missing while changing their size, shape, color and brightness. The ability to disentangle these factors and learn appropriate representations is an important step toward spatiotemporal decision making in complex environments.
In this work, we build on the deep generative model of DDPAE [5] which integrates structured graphical models into deep neural networks. Our model, which we call Disentangled-Imputed-VideoautoEncoder (DIVE), (i) learns representations that factorize into appearance, pose and missingness
∗1College of Electrical and Computer Engineering, 2 Khoury College of Computer Sciences, Northeastern University, MA, USA, 3Computer Science & Engineering, University of California San Diego, CA, USA.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
latent variables; (ii) imputes missing data by sampling from the learned latent variables; and (iii) performs unsupervised stochastic video prediction using the imputed hidden representation. Besides imputation, another salient feature of our model is (iv) its ability to robustly generate objects even when their appearances are changing, by modeling the static and dynamic appearances separately. This makes our technique more applicable to real-world problems.
We demonstrate the effectiveness of our method on a moving MNIST dataset with a variety of missing data scenarios, including partial occlusions, out-of-scene objects, and missing frames with varying appearances. We further evaluate on the Multi-Object Tracking and Segmentation (MOTSChallenge) dataset. We show that DIVE is able to accurately infer missing data, perform video imputation, reconstruct input frames and generate future predictions. Compared with baselines, our approach is robust to missing data and achieves significant improvements in video prediction performance.
2 Related Work
Disentangled Representation. Unsupervised learning of disentangled representation for sequences generally falls into three categories: VAE-based [10, 6, 5, 7, 11, 12], GAN-like models [13, 14, 4, 15] and Sum-Product networks [11, 16]. For video data, a common practice is to encode a video frame into latent variables and disentangle the latent representation into content and dynamics factors. For example, [5] assumes the content (objects, background) of a video is fixed across frames, while the position of the content can change over time. In most cases, models can only handle complete video sequences without missing data. One exception is SQAIR [6], a generalization of AIR [17], which makes use of a latent variable to explicitly encode the presence of the respective object. SQAIR is further extended with an accelerated training scheme [16] or to better encode relational inductive biases [11, 12]. However, SQAIR and its extensions have no mechanism to recall an object. As a result, an object that reappears in the scene is discovered as a new one.
Video Prediction. Conditioning on the past frames, video prediction models are trained to reconstruct the input sequence and predict future frames. Many video prediction methods use dynamical modeling [18] or deep neural networks to learn a deterministic transformation from input to output, including LSTM [19], Convolutional LSTM [20] and PredRNN [21]. These methods often suffer from blurry predictions and cannot properly model the inherently uncertain future [22]. In contrast to deterministic prediction, we prefer stochastic video prediction [2, 23, 22, 24, 14, 25], which is more suitable for capturing the stochastic dynamics of the environment. For instance, [22] proposes an auto-regressive model to generate pixels sequentially. [14] generalizes VAE to video data with a learned prior. [26] develops a normalizing flow video prediction model. [25] proposes a Bayesian Predictive Network to learn the prior distribution from noisy videos but without disentangled representations. Our main goal is to learn disentangled latent representations from video that are both interpretable and robust to missing data.
Missing Value Imputation. Missing value imputation is the process of replacing the missing data in a sequence by an estimate of its true missing value. It is a central challenge of sequence modeling. Statistical methods often impose strong assumptions on the missing patterns. For example, mean/median averaging [27] and MICE [28] can only handle data missing at random. Latent variable models with the EM algorithm [29] can impute data missing not-at-random but are restricted to certain parametric models. Deep generative models offer a flexible framework for missing data imputation. For instance, [30, 31, 32] develop variants of recurrent neural networks to impute time series. [33, 34, 35] propose GAN-like models to learn missing patterns in multivariate time series. Unfortunately, to the best of our knowledge, all recent developments in generative modeling for missing value imputation have focused on low-dimensional time series, which are not directly applicable to high-dimensional video with complex scene dynamics.
3 Disentangled-Imputed-Video-autoEncoder (DIVE)
Videos often capture multiple objects moving with complex dynamics. For this work, we assume that each video contains at most N objects, that we observe a video sequence of up to K time steps, and that we aim to predict T − K + 1 time steps ahead. The key component of DIVE is based on the decomposition and disentangling of the objects' representations within a VAE framework, with similar recursive modules as in [5]. Specifically, we decompose the objects in a video and assign three sets of latent variables to each object: appearance, pose and missingness, representing distinct attributes. During inference, DIVE encodes the input video into latent representations, performs sequence imputation in the latent space and updates the hidden representations. The generation model then samples from the latent variables to reconstruct and generate future predictions. Figure 1 depicts the overall pipeline of our model.
Denote a video sequence with missing data as (y^1, · · · , y^T), where each y^t ∈ R^d is a frame. We assume an object in a video consists of appearance, pose (position and scale), and missingness. For each object i in frame t, we aim to learn the latent representation z_i^t and disentangle it into three latent variables:

z_i^t = [z_{i,a}^t, z_{i,p}^t, z_{i,m}^t],   z_{i,a}^t ∈ R^h,   z_{i,p}^t ∈ R^3,   z_{i,m}^t ∈ Z    (1)

where z_{i,a}^t is the appearance vector with dimension h, z_{i,p}^t is the pose vector with x, y coordinates and scale, and z_{i,m}^t is the binary missingness label; z_{i,m}^t = 1 if the object is occluded or missing.
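To make the decomposition concrete, the three latent heads might look as follows (a PyTorch-style illustration, not the released implementation; dimensions and layer names are assumptions):

```python
import torch.nn as nn

class ObjectLatentHeads(nn.Module):
    """Map a per-object hidden state to the three latent factors of Eq. (1)."""
    def __init__(self, hidden_dim=128, appearance_dim=64):
        super().__init__()
        self.appearance = nn.Linear(hidden_dim, 2 * appearance_dim)  # mean / log-variance of z_a
        self.pose = nn.Linear(hidden_dim, 2 * 3)                     # mean / log-variance of (x, y, scale)
        self.missing = nn.Linear(hidden_dim, 2)                      # mean / log-variance behind Eq. (4)

    def forward(self, h):
        mu_a, logvar_a = self.appearance(h).chunk(2, dim=-1)
        mu_p, logvar_p = self.pose(h).chunk(2, dim=-1)
        mu_m, logvar_m = self.missing(h).chunk(2, dim=-1)
        return (mu_a, logvar_a), (mu_p, logvar_p), (mu_m, logvar_m)
```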
3.1 Imputation Model
The imputation model leverages the missingness variable z_{i,m}^t to update the hidden states. When there is no missing data, the encoded hidden state given the input frame is h_{i,y}^t = f_enc(h_{i,y}^{t−1}, h_{i,y}^{t+1}, [y^t, h_{i−1,y}^t]), where we enforce separate representations for each object. We implement the encoding function f_enc with a bidirectional LSTM to propagate the hidden state over time. However, in the presence of missing data, such a hidden state is unreliable and needs imputation. Denote the imputed hidden state as ĥ_{i,y}^t, which will be discussed shortly. We update a latent space vector u_i^t to select the corresponding hidden state, given the sampled missingness variable:
u_i^t = ĥ_{i,y}^t if z_{i,m}^t = 1,   and   u_i^t = γ h_{i,y}^t + (1 − γ) ĥ_{i,y}^t if z_{i,m}^t = 0,   with γ ∼ Bernoulli(p)    (2)
Note that we apply a mixture of the input hidden state h_{i,y}^t and the imputed hidden state ĥ_{i,y}^t with probability p. In our experiments, we found this mixed strategy to be helpful in mitigating covariate shift [36]. It forces the model to learn the correct imputation with self-supervision, which is reminiscent of the scheduled sampling [37] technique for sequence prediction.
The pose hidden states h_{i,p}^t are obtained by propagating the updated latent representation through an LSTM network, h_{i,p}^t = LSTM(h_{i,p}^{t−1}, u_i^t). For prediction we use an LSTM network with only h_{i,p}^{t−1} as input at time t. We obtain the imputed hidden state by means of auto-regression. This is based on the assumption that a video sequence is locally stationary and the most recent history is predictive of the future. Given the updated latent representation at time t, the imputed hidden state at the next time step is:

ĥ_{i,y}^t = FC(h_{i,p}^{t−1})    (3)

where FC(·) is a fully connected layer. This approach is similar in spirit to the time series imputation method in [32]. However, instead of imputing in the observation space, we perform imputation in the space of latent representations.
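A hedged sketch of the imputation step in Eqs. (2)-(3) (PyTorch-style pseudocode; names, shapes, and the evaluation-time behavior are assumptions rather than the authors' released code):

```python
import torch

def impute_hidden(h_enc, h_hat, z_miss, p=0.5, training=True):
    """Select between the encoded state h_enc and the imputed state h_hat, Eq. (2).

    h_enc, h_hat: (batch, hidden_dim) per-object hidden states
    z_miss:       (batch, 1) binary missingness indicator (1 = missing)
    """
    gamma = torch.bernoulli(torch.full_like(z_miss, p)) if training else torch.ones_like(z_miss)
    mixed = gamma * h_enc + (1.0 - gamma) * h_hat        # self-supervised mixing when observed
    return z_miss * h_hat + (1.0 - z_miss) * mixed       # fall back to the imputed state when missing

def predict_imputed_state(impute_fc, h_pose_prev):
    """Eq. (3): predict the imputed hidden state from the previous pose hidden state."""
    return impute_fc(h_pose_prev)                        # impute_fc is assumed to be an nn.Linear
```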
3.2 Inference Model
Missingness Inference. For the missingness variable z_{i,m}^t, we also leverage the input encoding. We use a Heaviside step function to make it binary:

z_{i,m}^t = H(x),   x ∼ N(µ_m, σ_m²),   [µ_m, σ_m²] = FC(h_{i,y}^t),   H(x) = 1 if x ≥ 0 and 0 if x < 0    (4)

where σ_m is the standard deviation of the noise, which is obtained from the hidden representation.
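One possible realization of Eq. (4) is sketched below (with our assumptions made explicit; in particular, the straight-through trick used to keep the step function trainable is not spelled out in the paper):

```python
import torch

def sample_missingness(h_y, fc_miss):
    """Sample the binary missingness variable of Eq. (4) from the encoded hidden state h_y.

    fc_miss is assumed to output [mu_m, log sigma_m^2] per object.
    """
    mu, logvar = fc_miss(h_y).chunk(2, dim=-1)
    x = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterized Gaussian sample
    z_hard = (x >= 0).float()                                  # Heaviside step H(x)
    # Straight-through estimator (an assumption): the forward value is the hard label,
    # while gradients flow through the soft sample x.
    return z_hard + x - x.detach()
```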
Pose Inference. The pose variable (position and scale) encodes the spatiotemporal dynamics of the video. We follow the variational inference technique for state-space representation of sequences [38]. That is, instead of directly inferring z_{i,p}^{1:K} for K input frames, we use a stochastic variable β_i^t to reparameterize the state transition probability:

q(z_{i,p}^{1:T} | y^{1:K}) = ∏_{t=1}^{K} q(z_{i,p}^t | z_{i,p}^{1:t−1}),   z_{i,p}^t = f_tran(z_{i,p}^{t−1}, β_i^t),   β_i^t ∼ N(µ_p, σ_p²)    (5)

where the state transition f_tran is a deterministic mapping from the previous state to the next time step. The stochastic transition variable β_i^t is sampled from a Gaussian distribution parameterized by a mean µ_p and variance σ_p² with [µ_p, σ_p²] = FC(h_{i,p}^t).
Dynamic Appearance. Another novel feature of our approach is its ability to robustly generate objects even when their appearances are changing across frames. z_{i,a}^t is the time-varying appearance. In particular, we decompose the appearance latent variable into a static component a_{i,s} and a dynamic component a_{i,d}, which we model separately. The static component captures the inherent semantics of the object while the dynamic component models the nuanced variations in shape.
For the static component, we follow the procedure in [5] to perform an inverse affine spatial transformation T^{−1}(·; ·), given the pose of the object, to center it in the frame and rectify the images with a selected crop size. Future prediction is done in an autoregressive fashion:

a_{i,s} = FC(h_{i,a}^K),   h_{i,a}^{t+1} = LSTM_1(h_{i,a}^t, T^{−1}(y^t; z_{i,p}^t)) for t < K,   h_{i,a}^{t+1} = LSTM_2(h_{i,a}^t) for K ≤ t < T    (6)
Here the appearance hidden state h_{i,a}^t is propagated through an LSTM, whose last output is used to infer the static appearance. Similar to poses, we use a state-space representation for the dynamic component, but directly model the difference in appearances, which helps stabilize training:

a_{i,d}^1 = FC([a_{i,s}, T^{−1}(y^1; z_{i,p}^1)]),   a_{i,d}^{t+1} = a_{i,d}^t + δ_{i,d}^t,   δ_{i,d}^t = FC([h_{i,a}^t, a_{i,s}])    (7)
The final appearance variable is sampled from a Gaussian distribution parametrized by the concatenation of the static and dynamic components, which are randomly mixed with a probability p:

q(z_{i,a} | y^{1:K}) = ∏_t N(µ_a, σ_a²),   [µ_a, σ_a²] = FC([a_{i,s}, γ a_{i,d}^t]),   γ ∼ Bernoulli(p)    (8)
The mixing strategy helps to mitigate covariate shift and enforces the static component to learn the inherent semantics of the objects across frames.
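A sketch of the static/dynamic appearance split of Eqs. (6)-(8) (illustrative pseudocode; the initialization of the dynamic code and the test-time mixing are assumptions):

```python
import torch

def dynamic_appearance(a_static, h_appearance, delta_fc, p=0.5, training=True):
    """Accumulate appearance deltas over time (Eq. 7) and mix with the static code (Eq. 8).

    a_static:     (batch, h) static appearance code a_{i,s}
    h_appearance: (T, batch, h) appearance hidden states h_{i,a}^t
    delta_fc:     module mapping [h_{i,a}^t, a_{i,s}] to the delta of Eq. (7)
    """
    a_t = torch.zeros_like(a_static)  # placeholder for a_{i,d}^1, which Eq. (7) infers from frame 1
    mixed = []
    for h_t in h_appearance:
        a_t = a_t + delta_fc(torch.cat([h_t, a_static], dim=-1))
        gamma = float(torch.bernoulli(torch.tensor(p))) if training else 1.0
        mixed.append(torch.cat([a_static, gamma * a_t], dim=-1))  # input to the Gaussian head of Eq. (8)
    return torch.stack(mixed)
```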
3.3 Generative Model and Learning
Given a video with missing data (y^1, · · · , y^T), denote the underlying complete video as (x^1, · · · , x^T). Then, the generative distribution of the video sequence is given by:

p(y^{1:K}, x^{K+1:T} | z^{1:T}) = ∏_{i=1}^{N} p(y_i^{1:K} | z_i^{1:K}) p(x_i^{K+1:T} | z_i^{K+1:T})    (9)
In unsupervised learning of video representations, we simultaneously reconstruct the input video and predict future frames. Given the inferred latent variables, we generate y_i^t and predict x_i^t for each object sequentially. In particular, we first generate the rectified object in the center, given the appearance z_{i,a}^t. The decoder is parameterized by a deconvolutional layer. After that, we apply a spatial transformer T to rescale and place the object according to the pose z_{i,p}^t. For each object, the generative model is:

p(y_i^t | z_{i,a}^t) = T(f_dec(z_{i,a}^t); z_{i,p}^t) ◦ (1 − z_{i,m}^t),   p(x_i^t | z_{i,a}^t) = T(f_dec(z_{i,a}^t); z_{i,p}^t)    (10)
Future prediction is similar to reconstruction, except we assume the video is always complete. The generated frame y^t is the summation over y_i^t for all objects. Following the VAE framework, we train the model by maximizing the evidence lower bound (ELBO). Please see details in Appendix D.
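A hedged sketch of the per-object decoding in Eq. (10) (the decoder architecture and the exact spatial-transformer parameterization below are assumptions, not the released implementation):

```python
import torch
import torch.nn.functional as F

def render_object(decoder, z_a, z_p, z_m, out_size=(1, 64, 64)):
    """Decode the appearance, place the patch with a spatial transformer, and mask if missing.

    z_a: (batch, h) appearance latent; z_p: (batch, 3) pose as (x, y, scale); z_m: (batch, 1).
    """
    patch = decoder(z_a)                                  # (batch, 1, 28, 28) rectified object
    b = patch.shape[0]
    theta = torch.zeros(b, 2, 3, device=patch.device)     # assumed affine placement from the pose
    theta[:, 0, 0] = z_p[:, 2]
    theta[:, 1, 1] = z_p[:, 2]
    theta[:, 0, 2] = z_p[:, 0]
    theta[:, 1, 2] = z_p[:, 1]
    grid = F.affine_grid(theta, [b, *out_size], align_corners=False)
    frame = F.grid_sample(patch, grid, align_corners=False)   # object placed in the full frame
    return frame * (1.0 - z_m.view(b, 1, 1, 1))               # the (1 - z_m) masking of Eq. (10)
```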
4 Experiments
4.1 Experimental Setup
We evaluate our method on variations of moving MNIST and MOTSChallenge multi-object tracking datasets. The prediction task is to generate 10 future frames, given an input of 10 frames. The baselines include the established state-of-the-art video prediction methods based on disentangled representation learning: DRNET [4], DDPAE [5] and SQAIR [24].
Evaluation Metrics. We use common evaluation metrics for video quality on the visible pixels, which include pixel-level Binary Cross entropy (BCE) per frame, Mean Square Error (MSE) per
frame, Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM). Additionally, DIVE is a probabilistic model, hence we also report Negative Evidence Lower Bound (NELBO).
As our DIVE model simultaneously imputes missing data and generates improved predictions, we report reconstruction and prediction performances separately. For implementation details for the experiments, please see Appendix A.
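For reference, the per-frame metrics can be computed as in the following helper (a sketch only; the paper does not prescribe an implementation, and taking SSIM from scikit-image is an assumption):

```python
import numpy as np
from skimage.metrics import structural_similarity

def frame_metrics(pred, target):
    """Per-frame MSE, PSNR and SSIM for frames with pixel values in [0, 1]."""
    mse = float(np.mean((pred - target) ** 2))
    psnr = 10.0 * np.log10(1.0 / mse) if mse > 0 else float("inf")
    ssim = structural_similarity(target, pred, data_range=1.0)
    return mse, psnr, ssim
```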
4.2 Moving MNIST Experiments
Data Description. Moving MNIST [19] is a synthetic dataset consisting of two digits with size 28×28 moving independently in a 64×64 frame. Each sequence is generated on-the-fly by sampling MNIST digits and synthesizing trajectories with fixed velocity and randomly sampled angle and initial position. We train the model for 300 epochs in scenarios 1 and 2, and 600 epochs in scenario 3. For each epoch we generate 10k sequences. The test set contains 1,024 fixed sequences. We simulate a variety of missing data scenarios, including:
• Partial Occlusion: we occlude the upper 32 rows of the 64× 64 pixel frame to simulate the effect of objects being partially occluded at the boundaries of the frame. • Out of Scene: we randomly select an initial time step t′ = [3, 9] and remove the object from the frame in steps t′ and t′ + 1 to simulate the out of scene phenomena for two consecutive steps. • Missing with Varying Appearance: we apply an elastic transformation [39] to change the appearance of the objects individually. The transformation grid is chosen randomly for each sequence, and the parameter α of the deformation filter is set to α = 100 and reduced linearly to 0 (no transformation) along the steps of the sequence. We remove each object for one time-step following the same logic as in scenario 2.
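The first two corruptions can be simulated along the following lines (a sketch only; frame sizes follow the description above, while details such as the mask bookkeeping are assumptions):

```python
import numpy as np

def apply_partial_occlusion(video):
    """Scenario 1: occlude the upper 32 rows of every 64x64 frame."""
    corrupted = video.copy()
    corrupted[:, :32, :] = 0.0
    return corrupted

def apply_out_of_scene(video, object_masks, rng):
    """Scenario 2: remove one object for two consecutive steps starting at t' in [3, 9]."""
    corrupted = video.copy()
    t0 = rng.integers(3, 10)
    for t in (t0, t0 + 1):
        corrupted[t] = np.clip(corrupted[t] - object_masks[t], 0.0, 1.0)
    return corrupted

rng = np.random.default_rng(0)
video = np.zeros((20, 64, 64), dtype=np.float32)   # placeholder (T, H, W) sequence
masks = np.zeros_like(video)                        # per-step mask of the object to remove
occluded = apply_partial_occlusion(video)
removed = apply_out_of_scene(video, masks, rng)
```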
Scenario 1: Partial occlusion. The top portion of Table 1 shows the quantitative performance comparison for all methods for the partial occlusion scenario. Our model outperforms all baseline models, except for the BCE in prediction. This is because DIVE generates sharper shapes which, in case of misalignment with the ground truth, have a larger effect on the pixel-level BCE. For reconstruction, our method often outperforms the baselines by a large margin, which highlights the significance of missing data imputation. Note that SQAIR performs well in reconstruction but fails in prediction. Prolonged full occlusions cause SQAIR to lose track of the object and re-identify it as a new one when it reappears. Figure 3 shows a visualization of the predictions from DIVE and the baseline models. The bottom three rows show the decomposed representations from DIVE for each object and the missingness labels for objects in the corresponding order. We observe that DRNET and SQAIR fail to predict the objects' position in the frame and appearance, while DDPAE generates blurry predictions with the correct pose. These failure cases rarely occur for DIVE. Scenario 2: Out of Scene. The middle portion of Table 1 illustrates the quantitative performance of all methods for scenario 2. We observe that our method achieves significant improvement across all metrics. This implies that our imputation of missing data is accurate and can drastically improve the predictions. Figure 4 shows the prediction results of all methods evaluated for the out of scene case. We observe that DRNET and SQAIR fail to predict the future pose, and the quality of the
generated object appearance is poor. The qualitative comparison with DDPAE reveals that the objects generated by our model have higher brightness and sharpness. As the baselines cannot infer the object missingness, they may misidentify the missing object as any other object that is present. This would lead to confusion for modeling the pose and appearance. The figure also reveals how DIVE is able to predict the missing labels and hallucinate the pose of the objects when missing, allowing for accurate predictions.
Scenario 3: Missing with Varying Appearance. Quantitative results for 1 time step complete missingness with varying appearance are shown in the bottom portion of Table 1. Our method again achieves the best performance for all metrics. The difference between our models and baselines is quite significant given the difficulty of the task. Besides the complete missing frame, the varying appearances of the objects introduce an additional layer of complexity which can misguide the inference. Despite these challenges, DIVE can learn the appearance variation and successfully recognize the correct object in most cases. Figure 5 visualizes the model predictions, a tough case where two seemingly different digits (“2” and “6”) are progressively transformed into the same digit (“6”). SQAIR and DRNET have the ability to model varying appearance, but fail to generate
reasonable predictions due to similar reasons as before. DDPAE correctly predicts the pose after the missing step, but misidentifies the objects appearance before that. Also, DDPAE simply cannot model appearance variation. DIVE correctly estimates the pose and appearance variation of each object, while maintaining their identity throughout the sequence.
4.3 Pedestrian Experiments
The Multi-Object Tracking and Segmentation (MOTS) Challenge [40] dataset consists of real-world video sequences of pedestrians and cars. We use 2 ground-truth sequences in which pedestrians have been fully segmented and annotated [41]. The annotated sequences are further processed into shorter 20-frame sub-sequences, binarized and with at most 3 unique pedestrians. The smallest objects are scaled and the sequences are augmented by simulating constant camera motion and a 1-time-step complete camera occlusion, with further details deferred to Appendix B.
Table 2 shows the quantitative metrics compared with the best-performing baseline, DDPAE. This dataset mimics the missing scenarios 1 (partial occlusion) and 3 (missing with varying appearance) because the appearance of walking pedestrians is constantly changing. DIVE outperforms
DDPAE across all evaluation metrics. Figure 6 shows the outputs from both models as well as the decomposed objects and missingness labels from DIVE. Our method can accurately recognize 3 objects (pedestrians), infer their missingness and estimate their varying appearance. DDPAE fails to
decompose them due to its rigid assumption of fixed appearances and the inherent complexity of the scenario. In Appendix C, we perform two ablation studies: one on the significance of dynamic appearance modeling, and the other on the importance of estimating missingness and performing imputation.
5 Conclusion and Discussion
We propose a novel deep generative model that can simultaneously perform object decomposition, latent space disentangling, missing data imputation, and video forecasting. The key novelty of our method includes missing data detection and imputation in the hidden representations, as well as a robust way of dealing with dynamic appearances. Extensive experiments on moving MNIST demonstrate that DIVE can impute missing data without supervision and generate videos of significantly higher quality. Future work will focus on improving our model so that it is able to handle the complexity and dynamics in real world videos with unknown object number and colored scenes.
Broader Impact
Videos provide a window into the physics of the world we live in. They contain abundant visual information of what objects are, how they move, and what happens when cameras move against the scene. Being able to learn a representation that disentangles these factors is fundamental to AI that can understand and act in spatiotemporal environment. Despite the wealth of methods for video prediction, state-of-the-art approaches are sensitive to missing data, which are very common in realworld videos. Our proposed model significantly improves the robustness of video prediction methods against missing data, and thereby increasing the practical values of video prediction techniques and our trust in AI. Video surveillance systems can be potentially abused for discriminatory targeting, and we remained cognizant of the bias in our training data. To reduce the potential risk of this, we pre-processed the MOTSChallenge videos to greyscale.
Acknowledgments and Disclosure of Funding
This work was supported in part by NSF under Grants IIS#1850349, IIS#1814631, ECCS#1808381 and CMMI#1638234, the U. S. Army Research Office under Grant W911NF-20-1-0334 and the Alert DHS Center of Excellence under Award Number 2013-ST-061-ED0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security. We thank Dr. Adam Kosiorek for helpful discussions. Additional revenues related to this work: ONR # N68335-19-C-0310, Google Faculty Research Award, Adobe Data Science Research Awards, GPUs donated by NVIDIA, and computing allocation awarded by DOE. | 1. What is the primary contribution of the paper in the field of video generation?
2. What are the strengths of the proposed model, particularly in its ability to handle missing data?
3. What are the weaknesses of the paper regarding its experiments and comparisons with other works?
4. How can the authors improve their model's performance on more complex datasets?
5. What additional ablation studies or analyses should the authors consider to further validate their approach? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper proposes a novel VAE-based stochastic video generation model that decomposes the latent space into object-specific latent variables that encode appearance, pose, and missingness (i.e., whether the object is present in the frame). The authors compare their approach with a variety of other methods on the moving MNIST dataset and a grayscale/binarized version of the MOTS dataset. They demonstrate that their method accurately infers the binary missingness variable and effectively predicts future frames under different settings.
Strengths
This paper is well written and well motivated, for the most part. The model is a sensible extension of previous approaches and performs adequately on simple datasets.
Weaknesses
This paper would significantly benefit from greater experimental evaluation and analysis. There are some standard video prediction datasets missing from comparison, and there are many ablation studies / analysis of the different model components that I'd encourage the authors to consider. (More details on my recs below.) |
NIPS | Title
Learning Disentangled Representations of Videos with Missing Data
Abstract
Missing data poses significant challenges while learning representations of video sequences. We present Disentangled Imputed Video autoEncoder (DIVE), a deep generative model that imputes and predicts future video frames in the presence of missing data. Specifically, DIVE introduces a missingness latent variable, disentangles the hidden video representations into static and dynamic appearance, pose, and missingness factors for each object. DIVE imputes each object’s trajectory where the data is missing. On a moving MNIST dataset with various missing scenarios, DIVE outperforms the state of the art baselines by a substantial margin. We also present comparisons on a real-world MOTSChallenge pedestrian dataset, which demonstrates the practical value of our method in a more realistic setting. Our code and data can be found at https://github.com/Rose-STL-Lab/DIVE.
1 Introduction
Videos contain rich structured information about our physical world. Learning representations from video enables intelligent machines to reason about the surroundings and it is essential to a range of tasks in machine learning and computer vision, including activity recognition [1], video prediction [2] and spatiotemporal reasoning [3]. One of the fundamental challenges in video representation learning is the high-dimensional, dynamic, multi-modal distribution of pixels. Recent research in deep generative models [4, 5, 6, 7] tackles the challenge by exploiting inductive biases of videos and projecting the high-dimensional data into substantially lower dimensional space. These methods search for disentangled representations by decomposing the latent representation of video frames into semantically meaningful factors [8].
Unfortunately, existing methods cannot reason about the objects when they are missing in videos. In contrast, a five month-old child can understand that objects continue to exist even when they are unseen, a phenomena known as “object permanence” [9]. Towards making intelligent machines, we study learning disentangled representations of videos with missing data. We consider a variety of missing scenarios that might occur in natural videos: objects can be partially occluded; objects can disappear in a scene and reappear; objects can also become missing while changing their size, shape, color and brightness. The ability to disentangle these factors and learn appropriate representations is an important step toward spatiotemporal decision making in complex environments.
In this work, we build on the deep generative model of DDPAE [5] which integrates structured graphical models into deep neural networks. Our model, which we call Disentangled-Imputed-VideoautoEncoder (DIVE), (i) learns representations that factorize into appearance, pose and missingness
∗1College of Electrical and Computer Engineering, 2 Khoury College of Computer Sciences, Northeastern University, MA, USA, 3Computer Science & Engineering, University of California San Diego, CA, USA.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
latent variables; (ii) imputes missing data by sampling from the learned latent variables; and (iii) performs unsupervised stochastic video prediction using the imputed hidden representation. Besides imputation, another salient feature of our model is (iv) its ability to robustly generate objects even when their appearances are changing by modeling the static and dynamic appearances separately. Thismakes our technique more applicable to real-world problems.
We demonstrate the effectiveness of our method on a moving MNIST dataset with a variety of missing data scenarios including partial occlusions, out of scene, and missing frames with varying appearances. We further evaluate on the Multi-Object Tracking and Segmentation (MOTSChallenge) object tracking and segmentation challenge dataset. We show that DIVE is able to accurately infer missing data, perform video imputation and reconstruct input frames and generate future predictions. Compared with baselines, our approach is robust to missing data and achieves significant improvements in video prediction performances.
2 Related Work
Disentangled Representation. Unsupervised learning of disentangled representation for sequences generally falls into three categories: VAE-based [10, 6, 5, 7, 11, 12], GAN-like models [13, 14, 4, 15] and Sum-Product networks [11, 16]. For video data, a common practice is to encode a video frame into latent variables and disentangle the latent representation into content and dynamics factors. For example, [5] assumes the content (objects, background) of a video is fixed across frames, while the position of the content can change over time. In most cases, models can only handle complete video sequences without missing data. One exception is SQAIR [6], an generalization of AIR [17], which makes use of a latent variable to explicitly encode the presence of the respective object. SQAIR is further extended to an accelerated training scheme [16] or to better encode relational inductive biases [11, 12]. However, SQAIR and its extensions have no mechanism to recall an object. This leads to discovering an object as new when it reappears in the scene.
Video Prediction. Conditioning on the past frames, video prediction models are trained to reconstruct the input sequence and predict future frames. Many video prediction methods use dynamical modeling [18] or deep neural networks to learn a deterministic transformation from input to output, including LSTM [19], Convolutional LSTM [20] and PredRNN [21]. These methods often suffer from blurry predictions and cannot properly model the inherently uncertain future [22]. In contrast to deterministic prediction, we prefer stochastic video prediction [2, 23, 22, 24, 14, 25], which is more suitable for capturing the stochastic dynamics of the environment. For instance, [22] proposes an auto-regressive model to generate pixels sequentially. [14] generalizes VAE to video data with a learned prior. [26] develops a normalizing flow video prediction model. [25] proposes a Bayesian Predictive Network to learn the prior distribution from noisy videos but without disentangled representations. Our main goal is to learn disentangled latent representations from video that are both interpretable and robust to missing data.
Missing Value Imputation. Missing value imputation is the process of replacing the missing data in a sequence by an estimate of its true missing value. It is a central challenge of sequence modeling. Statistical methods often impose strong assumptions on the missing patterns. For example, mean/median averaging [27] and MICE [28], can only handle data missing at random. Latent variables models with the EM algorithm [29] can impute data missing not-at-random but are restricted to certain parametric models. Deep generative models offer a flexible framework of missing data imputation. For instance, [30, 31, 32] develop variants of recurrent neural networks to impute time series. [33, 34, 35] propose GAN-like models to learn missing patterns in multivariate time series. Unfortunately, to the best of our knowledge, all recent developments in generative modeling for missing value imputation have focused on low-dimensional time series, which are not directly applicable to high-dimensional video with complex scene dynamics.
3 Disentangled-Imputed-Video-autoEncoder (DIVE)
Videos often capture multiple objects moving with complex dynamics. For this work, we assume that each video has a maximum number of N objects, we observe a video sequence up to K time steps and aim to predict T − K + 1 time steps ahead. The key component of DIVE is based on
the decomposition and disentangling of the objects representations within a VAE framework, with similar recursive modules as in [5]. Specifically, we decompose the objects in a video and assign three sets of latent variables to each object: appearance, pose and missingness, representing distinct attributes. During inference, DIVE encodes the input video into latent representations, performs sequence imputation in the latent space and updates the hidden representations. The generation model then samples from the latent variables to reconstruct and generate future predictions. Figure 1 depicts the overall pipeline of our model.
Denote a video sequence with missing data as (y1, · · · ,yt) where each yt ∈ Rd is a frame. We assume an object in a video consists of appearance, pose (position and scale), and missingness. For each object i in frame t, we aim to learn the latent representation zti and disentangle it into three latent variables:
zti = [z t i,a, z t i,p, z t i,m], z t i,a ∈ Rh, zti,p ∈ R3, zti,m ∈ Z (1)
where zti,a is the appearance vector with dimension h, z t i,p is the pose vector with x, y coordinates and scale and zti,m is the binary missingness label. z t i,m = 1 if the object is occluded or missing.
3.1 Imputation Model
The imputation model leverages the missingness variable zti,m to update the hidden states. When there is no missing data, the encoded hidden state, given the input frame, is hti,y = fenc(h t−1 i,y ,h t+1 i,y , [y
t,hti−1,y]), where we enforce separate representations for each object. We implement the encoding function fenc with a bidirectional LSTM to propagate the hidden state over time. However, in the presence of missing data, such hidden state is unreliable and needs imputation. Denote the imputed hidden state as ĥti,y which will be discussed shortly. We update a latent space vector uti to select the corresponding hidden state, given the sampled missingness variable:
uti =
{ ĥti,y z t i,m = 1
γhti,y + (1− γ)ĥti,y zti,m = 0 , γ ∼ Bernoulli(p) (2)
Note that we apply a mixture of input hidden state hti,y and imputed hidden state ĥ t i,y with probability p. In our experiments, we found this mixed strategy to be helpful in mitigating covariate shift [36]. It forces the model to learn the correct imputation with self-supervision, which is reminiscent of the scheduled sampling [37] technique for sequence prediction.
The pose hidden states hti,p are obtained by propagating the updated latent representation through an LSTM network hti,p = LSTM(h t−1 i,p ,u t i). For prediction we use an LSTM network, with only h t−1 i,p as input in time t. We obtain the imputed hidden state by means of auto-regression. This is based on the assumption that a video sequence is locally stationary and the most recent history is predictive of the
future. Given the updated latent representation at time t, the imputed hidden state at the next time step is:
ĥti,y = FC(h t−1 i,p ) (3)
where FC(·) is a fully connected layer. This approach is similar in spirit to the time series imputation method in [32]. However, instead of imputing in the observation space, we perform imputation in the space of latent representations.
3.2 Inference Model
Missingness Inference. For the missingness variable zti,m, we also leverage the input encoding. We use a heaviside step function to make it binary:
zti,m = H(x), x ∼ N (µm, σ2m), [µm, σ2m] = FC(hti,y), H(x) = { 1 x ≥ 0 0 x < 0
(4)
where σ is the standard deviation of the noise, which is obtained from the hidden representation.
Pose Inference. The pose variable (position and scale) encodes the spatiotemporal dynamics of the video. We follow the variational inference technique for state-space representation of sequences [38]. That is, instead of directly inferring z1:Ki,p for K input frames, we use a stochastic variable β t i to reparameterize the state transition probability:
q(z1:Ti,p |y1:K) = K∏ t=1 q(zti,p|z1:t−1i,p ), z t i,p = ftran(z t−1 i,p , β t i ), β t i ∼ N (µp, σ2p) (5)
where the state transition ftran is a deterministic mapping from the previous state to the next time step. The stochastic transition variable βti is sampled from a Gaussian distribution parameterized by a mean µp and variance σ2p with [µp, σ 2 p] = FC(h t i,p).
Dynamic Appearance. Another novel feature of our approach is its ability to robustly generate objects even when their appearances are changing across frames. zti,a is the time-varying appearance. In particular, we decompose the appearance latent variable into a static component ai,s and a dynamic component ai,d which we model separately. The static component captures the inherent semantics of the object while the dynamic component models the nuanced variations in shape.
For the static component, we follow the procedure in [5] to perform inverse affine spatial transformation T −1(·; ·), given the pose of the object to center in the frame and rectify the images with a selected crop size. Future prediction is done in an autoregressive fashion:
ai,s = FC(hKi,a), h t+1 i,a = { LSTM1(hti,a, T −1(yt; zti,p)) t < K LSTM2(hti,a) K ≤ t < T
(6)
Here the appearance hidden state hti,a is propagated through an LSTM, whose last output is used to infer the static appearance. Similar to poses, we use a state-space representation for the dynamic component, but directly model the difference in appearances, which helps stabilizing training:
a1i,d = FC([ai,s, T −1(y1; z1i,p)]), at+1i,d = a t i,d + δ t i,d, δ t i,d = FC([h t i,a,ai,s]) (7)
The final appearance variable is sampled from a Gaussian distribution parametrized by the concatenation of static and dynamic components, which are randomly mixed with a probability p:
q(zi,a|y1:K) = ∏ t N (µa, σ2a), [µa, σ2a] = FC([ai,s, γati,d]), γ ∼ Bernoulli(p) (8)
The mixing strategy helps to mitigate covariate shift and enforces the static component to learn the inherent semantics of the objects across frames.
<latexit sha1_base64="zYko6nGTBvQl8k+Y8nvDo3F4fOM=">AAACDXicbVC9TsMwGHTKXyl/BUYWiwiJqUoQCMYKFsYikbZSE1WO47RWbSeyHVAV5Rm6woOwIVaegefgBXDTDNBykqXT3Xf25wtTRpV2nC+rtra+sblV327s7O7tHzQPj7oqySQmHk5YIvshUoRRQTxNNSP9VBLEQ0Z64eRu7veeiFQ0EY96mpKAo5GgMcVIG8nzo0SrYdN2Wk4JuErcitigQmfY/DY5nHEiNGZIqYHrpDrIkdQUM1I0/EyRFOEJGpGBoQJxooK8XLaAZ0aJYJxIc4SGpfo7kSOu1JSHZpIjPVbL3lz8zxtkOr4JcirSTBOBFw/FGYM6gfOfw4hKgjWbGoKwpGZXiMdIIqxNPw1fkGeccI5ElPvm8mLgBiUZh3Fuu0VhWnKXO1kl3YuWe9m6eri027dVX3VwAk7BOXDBNWiDe9ABHsCAghl4Aa/WzHqz3q2PxWjNqjLH4A+szx++fpym</latexit>
<latexit sha1_base64="zYko6nGTBvQl8k+Y8nvDo3F4fOM=">AAACDXicbVC9TsMwGHTKXyl/BUYWiwiJqUoQCMYKFsYikbZSE1WO47RWbSeyHVAV5Rm6woOwIVaegefgBXDTDNBykqXT3Xf25wtTRpV2nC+rtra+sblV327s7O7tHzQPj7oqySQmHk5YIvshUoRRQTxNNSP9VBLEQ0Z64eRu7veeiFQ0EY96mpKAo5GgMcVIG8nzo0SrYdN2Wk4JuErcitigQmfY/DY5nHEiNGZIqYHrpDrIkdQUM1I0/EyRFOEJGpGBoQJxooK8XLaAZ0aJYJxIc4SGpfo7kSOu1JSHZpIjPVbL3lz8zxtkOr4JcirSTBOBFw/FGYM6gfOfw4hKgjWbGoKwpGZXiMdIIqxNPw1fkGeccI5ElPvm8mLgBiUZh3Fuu0VhWnKXO1kl3YuWe9m6eri027dVX3VwAk7BOXDBNWiDe9ABHsCAghl4Aa/WzHqz3q2PxWjNqjLH4A+szx++fpym</latexit>
<latexit sha1_base64="zYko6nGTBvQl8k+Y8nvDo3F4fOM=">AAACDXicbVC9TsMwGHTKXyl/BUYWiwiJqUoQCMYKFsYikbZSE1WO47RWbSeyHVAV5Rm6woOwIVaegefgBXDTDNBykqXT3Xf25wtTRpV2nC+rtra+sblV327s7O7tHzQPj7oqySQmHk5YIvshUoRRQTxNNSP9VBLEQ0Z64eRu7veeiFQ0EY96mpKAo5GgMcVIG8nzo0SrYdN2Wk4JuErcitigQmfY/DY5nHEiNGZIqYHrpDrIkdQUM1I0/EyRFOEJGpGBoQJxooK8XLaAZ0aJYJxIc4SGpfo7kSOu1JSHZpIjPVbL3lz8zxtkOr4JcirSTBOBFw/FGYM6gfOfw4hKgjWbGoKwpGZXiMdIIqxNPw1fkGeccI5ElPvm8mLgBiUZh3Fuu0VhWnKXO1kl3YuWe9m6eri027dVX3VwAk7BOXDBNWiDe9ABHsCAghl4Aa/WzHqz3q2PxWjNqjLH4A+szx++fpym</latexit>
<latexit sha1_base64="zYko6nGTBvQl8k+Y8nvDo3F4fOM=">AAACDXicbVC9TsMwGHTKXyl/BUYWiwiJqUoQCMYKFsYikbZSE1WO47RWbSeyHVAV5Rm6woOwIVaegefgBXDTDNBykqXT3Xf25wtTRpV2nC+rtra+sblV327s7O7tHzQPj7oqySQmHk5YIvshUoRRQTxNNSP9VBLEQ0Z64eRu7veeiFQ0EY96mpKAo5GgMcVIG8nzo0SrYdN2Wk4JuErcitigQmfY/DY5nHEiNGZIqYHrpDrIkdQUM1I0/EyRFOEJGpGBoQJxooK8XLaAZ0aJYJxIc4SGpfo7kSOu1JSHZpIjPVbL3lz8zxtkOr4JcirSTBOBFw/FGYM6gfOfw4hKgjWbGoKwpGZXiMdIIqxNPw1fkGeccI5ElPvm8mLgBiUZh3Fuu0VhWnKXO1kl3YuWe9m6eri027dVX3VwAk7BOXDBNWiDe9ABHsCAghl4Aa/WzHqz3q2PxWjNqjLH4A+szx++fpym</latexit>
3.3 Generative Model and Learning
Given a video with missing data (y1, · · · ,yt), denote the underlying complete video as (x1, · · ·xt). Then, the generative distribution of the video sequence is given by:
p(y1:K ,xK+1:T |z1:T ) = N∏ i=1 p(y1:Ki |z1:Ki )p(xK+1:Ti |z K+1:T i ) (9)
In unsupervised learning of video representations, we simultaneously reconstruct the input video and predict future frames. Given the inferred latent variables, we generate yti and predict x t i for each object sequentially. In particular, we first generate the rectified object in the center, given the appearance zti,a. The decoder is parameterized by a deconvolutional layer. After that, we apply an spatial transformer T to rescale and place the object according to the pose zti,p. For each object, the generative model is:
p(yti |zti,a) = T (fdec(zti,a); zti,p) ◦ (1− zti,m), p(xti|zti,a) = T (fdec(zti,a), zti,p) (10)
Future prediction is similar to reconstruction, except we assume the video is always complete. The generated frame yt is the summation over yti for all objects. Following the VAE framework, we train the model by maximizing the evidence lower bound (ELBO). Please see details in Appendix D .
4 Experiments
4.1 Experimental Setup
We evaluate our method on variations of moving MNIST and MOTSChallenge multi-object tracking datasets. The prediction task is to generate 10 future frames, given an input of 10 frames. The baselines include the established state-of-the-art video prediction methods based on disentangled representation learning: DRNET [4], DDPAE [5] and SQAIR [24].
Evaluation Metrics. We use common evaluation metrics for video quality on the visible pixels, which include pixel-level Binary Cross entropy (BCE) per frame, Mean Square Error (MSE) per
frame, Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM). Additionally, DIVE is a probabilistic model, hence we also report Negative Evidence Lower Bound (NELBO).
As our DIVE model simultaneously imputes missing data and generates improved predictions, we report reconstruction and prediction performances separately. For implementation details for the experiments, please see Appendix A.
4.2 Moving MNIST Experiments
Data Description. Moving MNIST [19] is a synthetic dataset consisting of two digits with size 28×28 moving independently in a 64×64 frame. Each sequence is generated on-the-fly by sampling MNIST digits and synthesizing trajectories with fixed velocity with randomly sampled angle and initial position. We train the model for 300 epochs in scenarios 1 and 2, and 600 epochs in scenario 3. For each epoch we generate 10k sequences. The test set contains 1,024 fixed sequences. We simulate a variety of missing data scenarios including:
• Partial Occlusion: we occlude the upper 32 rows of the 64× 64 pixel frame to simulate the effect of objects being partially occluded at the boundaries of the frame. • Out of Scene: we randomly select an initial time step t′ = [3, 9] and remove the object from the frame in steps t′ and t′ + 1 to simulate the out of scene phenomena for two consecutive steps. • Missing with Varying Appearance: we apply an elastic transformation [39] to change the appearance of the objects individually. The transformation grid is chosen randomly for each sequence, and the parameter α of the deformation filter is set to α = 100 and reduced linearly to 0 (no transformation) along the steps of the sequence. We remove each object for one time-step following the same logic as in scenario 2.
Scenario 1: Partial occlusion. The top portion of Table 1 shows the quantitative performance comparison for all methods for the partial occlusion scenario. Our model outperforms all baseline models, except for the BCE in prediction. This is because DIVE generates sharper shapes which, in case of misalignment with the ground truth, have a larger effect on the pixel-level BCE. For reconstruction, our method often outperforms the baselines by a large margin, which highlights the significance of missing data imputation. Note that SQAIR performs well in reconstruction but fails in prediction. Prolonged full occlusions cause SQAIR to lose track of the object and re-identifying it as a new one when it reappears. Figure 3 shows a visualization of the predictions from DIVE and the baseline models. The bottom three rows show the decomposed representations from DIVE for each object and the missingness labels for objects in the corresponding order. We observe that DRNET and SQAIR fail to predict the objects position in the frame and appearance while DDPAE generates blurry predictions with the correct pose. These failure cases rarely occur for DIVE. Scenario 2: Out of Scene. The middle portion of Table 1 illustrates the quantitative performance of all methods for scenario 2. We observe that our method achieves significant improvement across all metrics. This implies that our imputation of missing data is accurate and can drastically improve the predictions. Figure 4 shows the prediction results of all methods evaluated for the out of scene case. We observe that DRNET and SQAIR fail to predict the future pose, and the quality of the
generated object appearance is poor. The qualitative comparison with DDPAE reveals that the objects generated by our model have higher brightness and sharpness. As the baselines cannot infer the object missingness, they may misidentify the missing object as any other object that is present. This would lead to confusion for modeling the pose and appearance. The figure also reveals how DIVE is able to predict the missing labels and hallucinate the pose of the objects when missing, allowing for accurate predictions.
Scenario 3: Missing with Varying Appearance. Quantitative results for 1 time step complete missingness with varying appearance are shown in the bottom portion of Table 1. Our method again achieves the best performance for all metrics. The difference between our models and baselines is quite significant given the difficulty of the task. Besides the complete missing frame, the varying appearances of the objects introduce an additional layer of complexity which can misguide the inference. Despite these challenges, DIVE can learn the appearance variation and successfully recognize the correct object in most cases. Figure 5 visualizes the model predictions, a tough case where two seemingly different digits (“2” and “6”) are progressively transformed into the same digit (“6”). SQAIR and DRNET have the ability to model varying appearance, but fail to generate
reasonable predictions due to similar reasons as before. DDPAE correctly predicts the pose after the missing step, but misidentifies the objects appearance before that. Also, DDPAE simply cannot model appearance variation. DIVE correctly estimates the pose and appearance variation of each object, while maintaining their identity throughout the sequence.
4.3 Pedestrian Experiments
The Multi-Object Tracking and Segmentation (MOTS) Challenge [40] dataset consists of real world video sequences of pedestrians and cars. We use 2 ground truth sequences in which pedestrians have been fully segmented and annotated [41]. The annotated sequences are further processed into shorter 20 frame sub-sequences, binarized and with at most 3 unique pedestrians. The smallest objects are scaled and the sequences are augmented by simulating constant camera motion and 1 time step complete camera occlusion, further details deferred to Appendix B.
Table 2 shows the quantitative metrics compared with the best performing baseline DDPAE. This dataset mimics the missing scenarios 1 (partial occlusion) and 3 (missing with varying appearance) because the appearance walking pedestrians is constantly changing. DIVE outperforms
DDPAE across all evaluation metrics. Figure 6 shows the outputs from both models as well as the decomposed objects and missingness labels from DIVE. Our method can accurately recognize 3 objects (pedestrians), infer their missingness and estimate their varying appearance. DDPAE fails to
decompose them due to its rigid assumption of fixed appearances and the inherent complexity of the scenario. In Appendix C, we perform two ablation studies. One on the significance of dynamic appearance modeling, and the other on the importance of estimating missingness and performing imputation.
5 Conclusion and Discussion
We propose a novel deep generative model that can simultaneously perform object decomposition, latent space disentangling, missing data imputation, and video forecasting. The key novelty of our method includes missing data detection and imputation in the hidden representations, as well as a robust way of dealing with dynamic appearances. Extensive experiments on moving MNIST demonstrate that DIVE can impute missing data without supervision and generate videos of significantly higher quality. Future work will focus on improving our model so that it is able to handle the complexity and dynamics in real world videos with unknown object number and colored scenes.
Broader Impact
Videos provide a window into the physics of the world we live in. They contain abundant visual information about what objects are, how they move, and what happens when cameras move against the scene. Being able to learn a representation that disentangles these factors is fundamental to AI that can understand and act in spatiotemporal environments. Despite the wealth of methods for video prediction, state-of-the-art approaches are sensitive to missing data, which are very common in real-world videos. Our proposed model significantly improves the robustness of video prediction methods against missing data, thereby increasing the practical value of video prediction techniques and our trust in AI. Video surveillance systems can potentially be abused for discriminatory targeting, and we remain cognizant of the bias in our training data. To reduce this potential risk, we pre-processed the MOTSChallenge videos to greyscale.
Acknowledgments and Disclosure of Funding
This work was supported in part by NSF under Grants IIS#1850349, IIS#1814631, ECCS#1808381 and CMMI#1638234, the U. S. Army Research Office under Grant W911NF-20-1-0334 and the Alert DHS Center of Excellence under Award Number 2013-ST-061-ED0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security. We thank Dr. Adam Kosiorek for helpful discussions. Additional revenues related to this work: ONR # N68335-19-C-0310, Google Faculty Research Award, Adobe Data Science Research Awards, GPUs donated by NVIDIA, and computing allocation awarded by DOE. | 1. What is the main contribution of the paper regarding deep generative models?
2. What are the strengths of the proposed DIVE model, particularly in its ability to reason about missing objects?
3. Do you have any concerns or questions regarding the training process and hyperparameters of the model? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper presents a deep generative model of video sequences that decomposes the latent representations of videos into factors while accounting for the ability to reason about objects that are missing or occluded in videos. The key ideas of the DIVE model proposed herein are to: 1) factorize the representation (of each frame) into appearance, pose and missingness, 2) impute data when missing, and 3) use the model for video prediction by modeling the static and dynamic appearance components separately. A bidirectional LSTM is used to encode each frame into a representation. A separate representation is inferred for each of the different objects. Then, a univariate Normal distribution is used to decide whether or not the object is missing or present. A sample from this distribution is passed through a step function to obtain a binary representation of whether or not the object is missing. If the object is missing, its hidden state is set based either on the current representation or the previous representation of the object, according to a hyperparameter. This "missingness-corrected" representation is passed through an LSTM to obtain the pose representation. A time-varying dynamic and a static representation for the appearance are also inferred. Using these disentangled representations of each object in each frame, two different LSTMs are used to parameterize the reconstruction and future prediction of frames. The decoding process uses a spatial transformer and the inferred pose variable to rescale and place the object into the frame.
Strengths
I think the idea of explicitly modeling the missingness process is an important one, which this work makes use of to good effect. The neural architecture here is designed to make use of fine-grained knowledge of video semantics and, consequently, the model compares well against several baselines with good experimental results (particularly those in Figures 3/4/5).
Weaknesses
I have a lot of unanswered questions about how this model was trained, as well as the kinds of hyperparameters the learning algorithm was sensitive to.
NIPS | Title
Learning Disentangled Representations of Videos with Missing Data
Abstract
Missing data poses significant challenges for learning representations of video sequences. We present the Disentangled Imputed Video autoEncoder (DIVE), a deep generative model that imputes and predicts future video frames in the presence of missing data. Specifically, DIVE introduces a missingness latent variable and disentangles the hidden video representations into static and dynamic appearance, pose, and missingness factors for each object. DIVE imputes each object’s trajectory where the data are missing. On a Moving MNIST dataset with various missing scenarios, DIVE outperforms state-of-the-art baselines by a substantial margin. We also present comparisons on a real-world MOTSChallenge pedestrian dataset, which demonstrates the practical value of our method in a more realistic setting. Our code and data can be found at https://github.com/Rose-STL-Lab/DIVE.
1 Introduction
Videos contain rich structured information about our physical world. Learning representations from video enables intelligent machines to reason about the surroundings and it is essential to a range of tasks in machine learning and computer vision, including activity recognition [1], video prediction [2] and spatiotemporal reasoning [3]. One of the fundamental challenges in video representation learning is the high-dimensional, dynamic, multi-modal distribution of pixels. Recent research in deep generative models [4, 5, 6, 7] tackles the challenge by exploiting inductive biases of videos and projecting the high-dimensional data into substantially lower dimensional space. These methods search for disentangled representations by decomposing the latent representation of video frames into semantically meaningful factors [8].
Unfortunately, existing methods cannot reason about objects when they are missing from videos. In contrast, a five-month-old child can understand that objects continue to exist even when they are unseen, a phenomenon known as “object permanence” [9]. Towards making intelligent machines, we study learning disentangled representations of videos with missing data. We consider a variety of missing scenarios that might occur in natural videos: objects can be partially occluded; objects can disappear from a scene and reappear; objects can also become missing while changing their size, shape, color and brightness. The ability to disentangle these factors and learn appropriate representations is an important step toward spatiotemporal decision making in complex environments.
In this work, we build on the deep generative model DDPAE [5], which integrates structured graphical models into deep neural networks. Our model, which we call the Disentangled-Imputed-Video-autoEncoder (DIVE), (i) learns representations that factorize into appearance, pose and missingness
∗1College of Electrical and Computer Engineering, 2 Khoury College of Computer Sciences, Northeastern University, MA, USA, 3Computer Science & Engineering, University of California San Diego, CA, USA.
latent variables; (ii) imputes missing data by sampling from the learned latent variables; and (iii) performs unsupervised stochastic video prediction using the imputed hidden representation. Besides imputation, another salient feature of our model is (iv) its ability to robustly generate objects even when their appearances are changing, by modeling the static and dynamic appearances separately. This makes our technique more applicable to real-world problems.
We demonstrate the effectiveness of our method on a Moving MNIST dataset with a variety of missing data scenarios, including partial occlusions, out-of-scene objects, and missing frames with varying appearances. We further evaluate on the Multi-Object Tracking and Segmentation (MOTSChallenge) dataset. We show that DIVE is able to accurately infer missing data, perform video imputation, reconstruct input frames, and generate future predictions. Compared with the baselines, our approach is robust to missing data and achieves significant improvements in video prediction performance.
2 Related Work
Disentangled Representation. Unsupervised learning of disentangled representations for sequences generally falls into three categories: VAE-based models [10, 6, 5, 7, 11, 12], GAN-like models [13, 14, 4, 15] and Sum-Product networks [11, 16]. For video data, a common practice is to encode a video frame into latent variables and disentangle the latent representation into content and dynamics factors. For example, [5] assumes the content (objects, background) of a video is fixed across frames, while the position of the content can change over time. In most cases, models can only handle complete video sequences without missing data. One exception is SQAIR [6], a generalization of AIR [17], which makes use of a latent variable to explicitly encode the presence of the respective object. SQAIR has been further extended with an accelerated training scheme [16] and to better encode relational inductive biases [11, 12]. However, SQAIR and its extensions have no mechanism to recall an object, so an object is re-discovered as a new one when it reappears in the scene.
Video Prediction. Conditioning on the past frames, video prediction models are trained to reconstruct the input sequence and predict future frames. Many video prediction methods use dynamical modeling [18] or deep neural networks to learn a deterministic transformation from input to output, including LSTM [19], Convolutional LSTM [20] and PredRNN [21]. These methods often suffer from blurry predictions and cannot properly model the inherently uncertain future [22]. In contrast to deterministic prediction, we prefer stochastic video prediction [2, 23, 22, 24, 14, 25], which is more suitable for capturing the stochastic dynamics of the environment. For instance, [22] proposes an auto-regressive model to generate pixels sequentially. [14] generalizes VAE to video data with a learned prior. [26] develops a normalizing flow video prediction model. [25] proposes a Bayesian Predictive Network to learn the prior distribution from noisy videos but without disentangled representations. Our main goal is to learn disentangled latent representations from video that are both interpretable and robust to missing data.
Missing Value Imputation. Missing value imputation is the process of replacing the missing data in a sequence by an estimate of its true value. It is a central challenge of sequence modeling. Statistical methods often impose strong assumptions on the missing patterns. For example, mean/median averaging [27] and MICE [28] can only handle data missing at random. Latent variable models with the EM algorithm [29] can impute data missing not-at-random but are restricted to certain parametric models. Deep generative models offer a flexible framework for missing data imputation. For instance, [30, 31, 32] develop variants of recurrent neural networks to impute time series. [33, 34, 35] propose GAN-like models to learn missing patterns in multivariate time series. Unfortunately, to the best of our knowledge, all recent developments in generative modeling for missing value imputation have focused on low-dimensional time series, which are not directly applicable to high-dimensional video with complex scene dynamics.
3 Disentangled-Imputed-Video-autoEncoder (DIVE)
Videos often capture multiple objects moving with complex dynamics. For this work, we assume that each video contains at most N objects; we observe a video sequence for up to K time steps and aim to predict T − K + 1 time steps ahead. The key component of DIVE is based on
the decomposition and disentangling of the objects' representations within a VAE framework, with recursive modules similar to those in [5]. Specifically, we decompose the objects in a video and assign three sets of latent variables to each object: appearance, pose and missingness, representing distinct attributes. During inference, DIVE encodes the input video into latent representations, performs sequence imputation in the latent space and updates the hidden representations. The generative model then samples from the latent variables to reconstruct frames and generate future predictions. Figure 1 depicts the overall pipeline of our model.
Denote a video sequence with missing data as $(y^1, \cdots, y^T)$, where each $y^t \in \mathbb{R}^d$ is a frame. We assume an object in a video consists of appearance, pose (position and scale), and missingness. For each object $i$ in frame $t$, we aim to learn the latent representation $z_i^t$ and disentangle it into three latent variables:
$$z_i^t = [z_{i,a}^t,\, z_{i,p}^t,\, z_{i,m}^t], \qquad z_{i,a}^t \in \mathbb{R}^h,\; z_{i,p}^t \in \mathbb{R}^3,\; z_{i,m}^t \in \mathbb{Z}, \tag{1}$$
where $z_{i,a}^t$ is the appearance vector of dimension $h$, $z_{i,p}^t$ is the pose vector with $x, y$ coordinates and scale, and $z_{i,m}^t$ is the binary missingness label, with $z_{i,m}^t = 1$ if the object is occluded or missing.
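As a concrete illustration of Eq. (1), the per-object latent state can be organized as a small container; the appearance dimension h = 64 below is only an illustrative choice, not the value used in our experiments.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ObjectLatents:
    appearance: np.ndarray  # z_{i,a}^t in R^h
    pose: np.ndarray        # z_{i,p}^t in R^3: (x, y, scale)
    missing: int            # z_{i,m}^t: 1 if the object is occluded/missing

# example with h = 64 (illustrative)
z = ObjectLatents(appearance=np.zeros(64),
                  pose=np.array([0.0, 0.0, 1.0]),
                  missing=0)
```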
3.1 Imputation Model
The imputation model leverages the missingness variable $z_{i,m}^t$ to update the hidden states. When there is no missing data, the encoded hidden state given the input frame is $h_{i,y}^t = f_{\mathrm{enc}}(h_{i,y}^{t-1}, h_{i,y}^{t+1}, [y^t, h_{i-1,y}^t])$, where we enforce separate representations for each object. We implement the encoding function $f_{\mathrm{enc}}$ with a bidirectional LSTM to propagate the hidden state over time. However, in the presence of missing data, such a hidden state is unreliable and needs imputation. Denote the imputed hidden state as $\hat{h}_{i,y}^t$, which will be discussed shortly. We update a latent space vector $u_i^t$ to select the corresponding hidden state, given the sampled missingness variable:
$$u_i^t = \begin{cases} \hat{h}_{i,y}^t & z_{i,m}^t = 1 \\ \gamma h_{i,y}^t + (1-\gamma)\,\hat{h}_{i,y}^t & z_{i,m}^t = 0 \end{cases}, \qquad \gamma \sim \mathrm{Bernoulli}(p). \tag{2}$$
Note that we apply a mixture of the input hidden state $h_{i,y}^t$ and the imputed hidden state $\hat{h}_{i,y}^t$ with probability $p$. In our experiments, we found this mixed strategy to be helpful in mitigating covariate shift [36]. It forces the model to learn the correct imputation with self-supervision, which is reminiscent of the scheduled sampling [37] technique for sequence prediction.
The pose hidden states $h_{i,p}^t$ are obtained by propagating the updated latent representation through an LSTM network, $h_{i,p}^t = \mathrm{LSTM}(h_{i,p}^{t-1}, u_i^t)$. For prediction we use an LSTM network with only $h_{i,p}^{t-1}$ as input at time $t$. We obtain the imputed hidden state by means of auto-regression. This is based on the assumption that a video sequence is locally stationary and the most recent history is predictive of the future. Given the updated latent representation at time $t$, the imputed hidden state at the next time step is
$$\hat{h}_{i,y}^t = \mathrm{FC}(h_{i,p}^{t-1}), \tag{3}$$
where FC(·) is a fully connected layer. This approach is similar in spirit to the time series imputation method in [32]. However, instead of imputing in the observation space, we perform imputation in the space of latent representations.
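The PyTorch-style sketch below illustrates the selection and imputation logic of Eqs. (2)-(3) for a single time step. It is a simplified version: the layer sizes, the mixing probability, and the handling of batch dimensions are assumptions for illustration rather than the exact implementation.

```python
import torch
import torch.nn as nn

class LatentImputer(nn.Module):
    """Sketch of Eqs. (2)-(3): when an object is flagged missing, replace its
    encoder state with an auto-regressive guess predicted from the previous
    pose state; otherwise randomly mix the encoder and imputed states."""
    def __init__(self, hidden_dim, pose_hidden_dim, p_mix=0.5):
        super().__init__()
        self.impute_fc = nn.Linear(pose_hidden_dim, hidden_dim)  # Eq. (3)
        self.p_mix = p_mix

    def forward(self, h_enc, h_pose_prev, z_missing):
        h_hat = self.impute_fc(h_pose_prev)               # imputed hidden state
        if self.training:
            gamma = torch.bernoulli(torch.full_like(h_enc[..., :1], self.p_mix))
        else:
            gamma = torch.ones_like(h_enc[..., :1])       # no mixing at test time (assumption)
        mixed = gamma * h_enc + (1.0 - gamma) * h_hat     # z_m = 0 branch of Eq. (2)
        m = z_missing.view(-1, 1).float()
        return m * h_hat + (1.0 - m) * mixed              # select per Eq. (2)
```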
3.2 Inference Model
Missingness Inference. For the missingness variable $z_{i,m}^t$, we also leverage the input encoding. We use a Heaviside step function to make it binary:
$$z_{i,m}^t = H(x), \quad x \sim \mathcal{N}(\mu_m, \sigma_m^2), \quad [\mu_m, \sigma_m^2] = \mathrm{FC}(h_{i,y}^t), \quad H(x) = \begin{cases} 1 & x \geq 0 \\ 0 & x < 0 \end{cases}, \tag{4}$$
where $\sigma_m$ is the standard deviation of the noise, which is obtained from the hidden representation.
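A minimal sketch of this inference head is given below. Eq. (4) only specifies the forward computation; the straight-through trick used here to pass gradients through the Heaviside step is our assumption for the sketch, not a detail stated above.

```python
import torch
import torch.nn as nn

class MissingnessHead(nn.Module):
    """Eq. (4): predict (mu_m, log sigma_m^2) from the encoder state, draw a
    Gaussian sample, and binarize it with a Heaviside step."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, 2)

    def forward(self, h_enc):
        mu, log_var = self.fc(h_enc).chunk(2, dim=-1)
        x = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterized sample
        z_hard = (x >= 0).float()                                  # Heaviside H(x)
        # straight-through estimator (assumption): forward uses the hard label,
        # backward passes gradients through the continuous sample x
        return (z_hard - x).detach() + x
```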
Pose Inference. The pose variable (position and scale) encodes the spatiotemporal dynamics of the video. We follow the variational inference technique for state-space representations of sequences [38]. That is, instead of directly inferring $z_{i,p}^{1:K}$ for the $K$ input frames, we use a stochastic variable $\beta_i^t$ to reparameterize the state transition probability:
$$q(z_{i,p}^{1:T} \mid y^{1:K}) = \prod_{t=1}^{K} q(z_{i,p}^t \mid z_{i,p}^{1:t-1}), \qquad z_{i,p}^t = f_{\mathrm{tran}}(z_{i,p}^{t-1}, \beta_i^t), \qquad \beta_i^t \sim \mathcal{N}(\mu_p, \sigma_p^2), \tag{5}$$
where the state transition $f_{\mathrm{tran}}$ is a deterministic mapping from the previous state to the next time step. The stochastic transition variable $\beta_i^t$ is sampled from a Gaussian distribution parameterized by a mean $\mu_p$ and variance $\sigma_p^2$ with $[\mu_p, \sigma_p^2] = \mathrm{FC}(h_{i,p}^t)$.
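The sketch below illustrates one step of this stochastic state transition. Modeling $f_{\mathrm{tran}}$ as a small MLP and the specific layer widths are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PoseTransition(nn.Module):
    """Eq. (5): a deterministic transition of the pose state driven by a
    stochastic variable beta sampled from a Gaussian whose parameters are
    predicted from the pose hidden state."""
    def __init__(self, pose_dim=3, hidden_dim=64):
        super().__init__()
        self.to_beta = nn.Linear(hidden_dim, 2 * pose_dim)        # [mu_p, log var_p]
        self.f_tran = nn.Sequential(nn.Linear(2 * pose_dim, 32), nn.ReLU(),
                                    nn.Linear(32, pose_dim))      # assumed MLP form

    def forward(self, z_pose_prev, h_pose):
        mu, log_var = self.to_beta(h_pose).chunk(2, dim=-1)
        beta = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return self.f_tran(torch.cat([z_pose_prev, beta], dim=-1))
```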
Dynamic Appearance. Another novel feature of our approach is its ability to robustly generate objects even when their appearances are changing across frames; $z_{i,a}^t$ denotes the time-varying appearance. In particular, we decompose the appearance latent variable into a static component $a_{i,s}$ and a dynamic component $a_{i,d}$, which we model separately. The static component captures the inherent semantics of the object while the dynamic component models the nuanced variations in shape.
For the static component, we follow the procedure in [5] to perform an inverse affine spatial transformation $\mathcal{T}^{-1}(\cdot\,;\cdot)$, given the pose of the object, to center the object in the frame and rectify the images with a selected crop size. Future prediction is done in an autoregressive fashion:
$$a_{i,s} = \mathrm{FC}(h_{i,a}^{K}), \qquad h_{i,a}^{t+1} = \begin{cases} \mathrm{LSTM}_1(h_{i,a}^t, \mathcal{T}^{-1}(y^t; z_{i,p}^t)) & t < K \\ \mathrm{LSTM}_2(h_{i,a}^t) & K \le t < T \end{cases} \tag{6}$$
Here the appearance hidden state $h_{i,a}^t$ is propagated through an LSTM, whose last output is used to infer the static appearance. Similar to the poses, we use a state-space representation for the dynamic component, but directly model the difference in appearances, which helps stabilize training:
$$a_{i,d}^1 = \mathrm{FC}([a_{i,s}, \mathcal{T}^{-1}(y^1; z_{i,p}^1)]), \qquad a_{i,d}^{t+1} = a_{i,d}^t + \delta_{i,d}^t, \qquad \delta_{i,d}^t = \mathrm{FC}([h_{i,a}^t, a_{i,s}]). \tag{7}$$
The final appearance variable is sampled from a Gaussian distribution parameterized by the concatenation of the static and dynamic components, which are randomly mixed with probability $p$:
$$q(z_{i,a} \mid y^{1:K}) = \prod_t \mathcal{N}(\mu_a, \sigma_a^2), \qquad [\mu_a, \sigma_a^2] = \mathrm{FC}([a_{i,s}, \gamma a_{i,d}^t]), \qquad \gamma \sim \mathrm{Bernoulli}(p). \tag{8}$$
The mixing strategy helps to mitigate covariate shift and enforces the static component to learn the inherent semantics of the objects across frames.
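The following sketch summarizes one step of the dynamic-appearance update and the mixed posterior parameterization of Eqs. (7)-(8); the layer sizes and the exact placement of the Bernoulli mixing are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DynamicAppearance(nn.Module):
    """Eqs. (7)-(8): accumulate small appearance deltas on top of a static
    code, then parameterize q(z_a) from the (randomly mixed) concatenation."""
    def __init__(self, app_dim, hidden_dim, p_mix=0.5):
        super().__init__()
        self.delta_fc = nn.Linear(hidden_dim + app_dim, app_dim)   # delta_{i,d}^t
        self.post_fc = nn.Linear(2 * app_dim, 2 * app_dim)         # [mu_a, log var_a]
        self.p_mix = p_mix

    def forward(self, a_static, a_dyn_prev, h_app):
        a_dyn = a_dyn_prev + self.delta_fc(torch.cat([h_app, a_static], dim=-1))
        gamma = torch.bernoulli(torch.full_like(a_dyn[..., :1], self.p_mix))
        mu, log_var = self.post_fc(
            torch.cat([a_static, gamma * a_dyn], dim=-1)).chunk(2, dim=-1)
        z_app = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return z_app, a_dyn
```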
3.3 Generative Model and Learning
Given a video with missing data $(y^1, \cdots, y^T)$, denote the underlying complete video as $(x^1, \cdots, x^T)$. Then, the generative distribution of the video sequence is given by
$$p(y^{1:K}, x^{K+1:T} \mid z^{1:T}) = \prod_{i=1}^{N} p(y_i^{1:K} \mid z_i^{1:K})\, p(x_i^{K+1:T} \mid z_i^{K+1:T}). \tag{9}$$
In unsupervised learning of video representations, we simultaneously reconstruct the input video and predict future frames. Given the inferred latent variables, we generate $y_i^t$ and predict $x_i^t$ for each object sequentially. In particular, we first generate the rectified object in the center, given the appearance $z_{i,a}^t$. The decoder is parameterized by a deconvolutional layer. After that, we apply a spatial transformer $\mathcal{T}$ to rescale and place the object according to the pose $z_{i,p}^t$. For each object, the generative model is
$$p(y_i^t \mid z_{i,a}^t) = \mathcal{T}(f_{\mathrm{dec}}(z_{i,a}^t); z_{i,p}^t) \circ (1 - z_{i,m}^t), \qquad p(x_i^t \mid z_{i,a}^t) = \mathcal{T}(f_{\mathrm{dec}}(z_{i,a}^t); z_{i,p}^t). \tag{10}$$
Future prediction is similar to reconstruction, except that we assume the video is always complete. The generated frame $y^t$ is the summation of $y_i^t$ over all objects. Following the VAE framework, we train the model by maximizing the evidence lower bound (ELBO). Please see the details in Appendix D.
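To illustrate how a decoded glimpse is rescaled and placed into the frame (the transformation $\mathcal{T}$ in Eq. (10)), here is a sketch based on an affine sampling grid; the exact pose parameterization and normalization conventions are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def place_object(glimpse, pose, out_hw=(64, 64)):
    """Rescale and translate a decoded object glimpse (B, 1, h, w) into the
    full frame. `pose` is (B, 3) = (x, y, scale) in normalized coordinates."""
    B = glimpse.size(0)
    x, y, s = pose[:, 0], pose[:, 1], pose[:, 2]
    theta = torch.zeros(B, 2, 3, device=glimpse.device)
    theta[:, 0, 0] = 1.0 / s      # frame coords -> glimpse coords (inverse scale)
    theta[:, 1, 1] = 1.0 / s
    theta[:, 0, 2] = -x / s
    theta[:, 1, 2] = -y / s
    grid = F.affine_grid(theta, [B, 1, *out_hw], align_corners=False)
    return F.grid_sample(glimpse, grid, align_corners=False)

# A frame is the sum of the placed objects; the reconstruction branch is
# additionally masked by (1 - z_m), as in Eq. (10).
```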
4 Experiments
4.1 Experimental Setup
We evaluate our method on variations of moving MNIST and MOTSChallenge multi-object tracking datasets. The prediction task is to generate 10 future frames, given an input of 10 frames. The baselines include the established state-of-the-art video prediction methods based on disentangled representation learning: DRNET [4], DDPAE [5] and SQAIR [24].
Evaluation Metrics. We use common evaluation metrics for video quality on the visible pixels, which include pixel-level Binary Cross-Entropy (BCE) per frame, Mean Square Error (MSE) per frame, Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM). Additionally, since DIVE is a probabilistic model, we also report the Negative Evidence Lower Bound (NELBO).
As our DIVE model simultaneously imputes missing data and generates improved predictions, we report reconstruction and prediction performances separately. For implementation details of the experiments, please see Appendix A.
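For reference, the per-frame pixel metrics can be computed as in the NumPy sketch below; SSIM is typically obtained from an off-the-shelf implementation such as skimage.metrics.structural_similarity, and the clipping constant here is an illustrative choice rather than the one used in our evaluation code.

```python
import numpy as np

def per_frame_metrics(pred, target, eps=1e-8):
    """Per-frame MSE, pixel-level BCE, and PSNR for predictions and targets
    in [0, 1] with shape (T, H, W)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    mse = ((pred - target) ** 2).reshape(len(pred), -1).mean(axis=1)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    bce = bce.reshape(len(pred), -1).mean(axis=1)
    psnr = 10.0 * np.log10(1.0 / np.maximum(mse, eps))
    return {"mse": mse, "bce": bce, "psnr": psnr}
```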
4.2 Moving MNIST Experiments
Data Description. Moving MNIST [19] is a synthetic dataset consisting of two digits of size 28×28 moving independently in a 64×64 frame. Each sequence is generated on-the-fly by sampling MNIST digits and synthesizing trajectories with a fixed velocity and a randomly sampled angle and initial position. We train the model for 300 epochs in scenarios 1 and 2, and for 600 epochs in scenario 3. For each epoch we generate 10k sequences. The test set contains 1,024 fixed sequences. We simulate a variety of missing data scenarios, including:
• Partial Occlusion: we occlude the upper 32 rows of the 64×64 pixel frame to simulate the effect of objects being partially occluded at the boundaries of the frame.
• Out of Scene: we randomly select an initial time step t′ ∈ [3, 9] and remove the object from the frame in steps t′ and t′ + 1 to simulate the out-of-scene phenomenon for two consecutive steps (a generation sketch for this scenario follows the list).
• Missing with Varying Appearance: we apply an elastic transformation [39] to change the appearance of the objects individually. The transformation grid is chosen randomly for each sequence, and the parameter α of the deformation filter is set to α = 100 and reduced linearly to 0 (no transformation) along the steps of the sequence. We remove each object for one time step following the same logic as in scenario 2.
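As an illustration of how the second scenario can be simulated, the sketch below removes each object for two consecutive steps and composites the remaining objects into one frame; the compositing-by-maximum and the array layout are assumptions of this sketch rather than the exact generation code.

```python
import numpy as np

def render_with_out_of_scene(digit_frames, rng=None):
    """Scenario 2 sketch: given per-object rendered frames of shape
    (N, T, H, W), drop each object for two consecutive steps starting at a
    random t' in [3, 9], then composite the frame as the per-pixel maximum.
    Returns the composited video and ground-truth missingness labels."""
    rng = np.random.default_rng() if rng is None else rng
    N, T = digit_frames.shape[:2]
    missing = np.zeros((N, T), dtype=bool)
    for i in range(N):
        t0 = int(rng.integers(3, 10))       # t' in [3, 9]
        missing[i, t0:t0 + 2] = True
    visible = digit_frames * (~missing)[:, :, None, None]
    video = visible.max(axis=0)             # composite objects into one frame
    return video, missing
```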
Scenario 1: Partial Occlusion. The top portion of Table 1 shows the quantitative performance comparison of all methods for the partial occlusion scenario. Our model outperforms all baseline models, except on the BCE for prediction. This is because DIVE generates sharper shapes which, in case of misalignment with the ground truth, have a larger effect on the pixel-level BCE. For reconstruction, our method often outperforms the baselines by a large margin, which highlights the significance of missing data imputation. Note that SQAIR performs well in reconstruction but fails in prediction: prolonged full occlusions cause SQAIR to lose track of the object and re-identify it as a new one when it reappears. Figure 3 shows a visualization of the predictions from DIVE and the baseline models. The bottom three rows show the decomposed representations from DIVE for each object and the missingness labels for the objects in the corresponding order. We observe that DRNET and SQAIR fail to predict the objects' position in the frame and their appearance, while DDPAE generates blurry predictions with the correct pose. These failure cases rarely occur for DIVE.
Scenario 2: Out of Scene. The middle portion of Table 1 shows the quantitative performance of all methods for scenario 2. We observe that our method achieves significant improvements across all metrics. This implies that our imputation of missing data is accurate and can drastically improve the predictions. Figure 4 shows the prediction results of all methods evaluated for the out-of-scene case. We observe that DRNET and SQAIR fail to predict the future pose, and the quality of the
generated object appearance is poor. The qualitative comparison with DDPAE reveals that the objects generated by our model have higher brightness and sharpness. As the baselines cannot infer object missingness, they may misidentify the missing object as any other object that is present, which leads to confusion when modeling the pose and appearance. The figure also shows how DIVE predicts the missingness labels and hallucinates the pose of the objects while they are missing, allowing for accurate predictions.
Scenario 3: Missing with Varying Appearance. Quantitative results for one-time-step complete missingness with varying appearance are shown in the bottom portion of Table 1. Our method again achieves the best performance on all metrics. The difference between our model and the baselines is substantial given the difficulty of the task. Besides the completely missing frame, the varying appearances of the objects introduce an additional layer of complexity that can misguide the inference. Despite these challenges, DIVE learns the appearance variation and successfully recognizes the correct object in most cases. Figure 5 visualizes the model predictions for a tough case where two seemingly different digits (“2” and “6”) are progressively transformed into the same digit (“6”). SQAIR and DRNET have the ability to model varying appearance, but fail to generate reasonable predictions for reasons similar to those above. DDPAE correctly predicts the pose after the missing step, but misidentifies the objects' appearance before that; moreover, DDPAE cannot model appearance variation at all. DIVE correctly estimates the pose and appearance variation of each object while maintaining their identities throughout the sequence.
4.3 Pedestrian Experiments
The Multi-Object Tracking and Segmentation (MOTS) Challenge [40] dataset consists of real-world video sequences of pedestrians and cars. We use two ground-truth sequences in which pedestrians have been fully segmented and annotated [41]. The annotated sequences are further processed into shorter 20-frame sub-sequences, binarized, and limited to at most 3 unique pedestrians. The smallest objects are scaled, and the sequences are augmented by simulating constant camera motion and a one-time-step complete camera occlusion; further details are deferred to Appendix B.
Table 2 shows the quantitative metrics compared with the best-performing baseline, DDPAE. This dataset mimics missing scenarios 1 (partial occlusion) and 3 (missing with varying appearance) because the appearance of walking pedestrians is constantly changing. DIVE outperforms DDPAE across all evaluation metrics. Figure 6 shows the outputs from both models as well as the decomposed objects and missingness labels from DIVE. Our method can accurately recognize the 3 objects (pedestrians), infer their missingness, and estimate their varying appearance. DDPAE fails to decompose them due to its rigid assumption of fixed appearances and the inherent complexity of the scenario. In Appendix C, we perform two ablation studies: one on the significance of dynamic appearance modeling, and the other on the importance of estimating missingness and performing imputation.
5 Conclusion and Discussion
We propose a novel deep generative model that can simultaneously perform object decomposition, latent space disentangling, missing data imputation, and video forecasting. The key novelty of our method includes missing data detection and imputation in the hidden representations, as well as a robust way of dealing with dynamic appearances. Extensive experiments on Moving MNIST demonstrate that DIVE can impute missing data without supervision and generate videos of significantly higher quality. Future work will focus on improving our model so that it can handle the complexity and dynamics of real-world videos with an unknown number of objects and colored scenes.
Broader Impact
Videos provide a window into the physics of the world we live in. They contain abundant visual information about what objects are, how they move, and what happens when cameras move against the scene. Being able to learn a representation that disentangles these factors is fundamental to AI that can understand and act in spatiotemporal environments. Despite the wealth of methods for video prediction, state-of-the-art approaches are sensitive to missing data, which are very common in real-world videos. Our proposed model significantly improves the robustness of video prediction methods against missing data, thereby increasing the practical value of video prediction techniques and our trust in AI. Video surveillance systems can potentially be abused for discriminatory targeting, and we remain cognizant of the bias in our training data. To reduce this potential risk, we pre-processed the MOTSChallenge videos to greyscale.
Acknowledgments and Disclosure of Funding
This work was supported in part by NSF under Grants IIS#1850349, IIS#1814631, ECCS#1808381 and CMMI#1638234, the U. S. Army Research Office under Grant W911NF-20-1-0334 and the Alert DHS Center of Excellence under Award Number 2013-ST-061-ED0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security. We thank Dr. Adam Kosiorek for helpful discussions. Additional revenues related to this work: ONR # N68335-19-C-0310, Google Faculty Research Award, Adobe Data Science Research Awards, GPUs donated by NVIDIA, and computing allocation awarded by DOE. | 1. What is the focus and contribution of the paper on video representation learning?
2. What are the strengths of the proposed approach, particularly in terms of its mathematical soundness and experimental performance?
3. What are the weaknesses of the paper, especially regarding the evaluation and comparison with prior works?
4. Do you have any concerns or suggestions regarding the proposed method, such as the impact of dynamic appearance or the role of the mixture in equation 2?
5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper introduces a novel method for a model to learn video representations that disentangle pose, missingness, and appearance. The novelty lies in the missingness latent variable that is used to potentially impute the pose and appearance variables. Dynamic appearance modeling is also introduced.
Strengths
The method is mathematically sound and the experimental results seem to show that the proposed approach outperforms previous works.
Weaknesses
The main weakness lies in the evaluation. There is no ablation study. The model is more complicated than DDPAE and, without an ablation study, it's hard to tell if the method is better because of the better disentanglement or just because the architecture is bigger. I would recommend at least the following experiments:
- measure the impact of dynamic appearance vs. static: with an LSTM, with a short time window, using a constant appearance
- the overall same model but without imputing as described in 3.1
- what is the impact of the mixture in Eq. 2? what happens with different values of gamma?
In Figs. 4 and 5, if we compare the results of DDPAE with the results reported in the DDPAE paper, the images are much worse. Why?
NIPS | Title
Learning Disentangled Representations of Videos with Missing Data
Abstract
Missing data poses significant challenges while learning representations of video sequences. We present Disentangled Imputed Video autoEncoder (DIVE), a deep generative model that imputes and predicts future video frames in the presence of missing data. Specifically, DIVE introduces a missingness latent variable, disentangles the hidden video representations into static and dynamic appearance, pose, and missingness factors for each object. DIVE imputes each object’s trajectory where the data is missing. On a moving MNIST dataset with various missing scenarios, DIVE outperforms the state of the art baselines by a substantial margin. We also present comparisons on a real-world MOTSChallenge pedestrian dataset, which demonstrates the practical value of our method in a more realistic setting. Our code and data can be found at https://github.com/Rose-STL-Lab/DIVE.
1 Introduction
Videos contain rich structured information about our physical world. Learning representations from video enables intelligent machines to reason about the surroundings and it is essential to a range of tasks in machine learning and computer vision, including activity recognition [1], video prediction [2] and spatiotemporal reasoning [3]. One of the fundamental challenges in video representation learning is the high-dimensional, dynamic, multi-modal distribution of pixels. Recent research in deep generative models [4, 5, 6, 7] tackles the challenge by exploiting inductive biases of videos and projecting the high-dimensional data into substantially lower dimensional space. These methods search for disentangled representations by decomposing the latent representation of video frames into semantically meaningful factors [8].
Unfortunately, existing methods cannot reason about the objects when they are missing in videos. In contrast, a five month-old child can understand that objects continue to exist even when they are unseen, a phenomena known as “object permanence” [9]. Towards making intelligent machines, we study learning disentangled representations of videos with missing data. We consider a variety of missing scenarios that might occur in natural videos: objects can be partially occluded; objects can disappear in a scene and reappear; objects can also become missing while changing their size, shape, color and brightness. The ability to disentangle these factors and learn appropriate representations is an important step toward spatiotemporal decision making in complex environments.
In this work, we build on the deep generative model of DDPAE [5] which integrates structured graphical models into deep neural networks. Our model, which we call Disentangled-Imputed-VideoautoEncoder (DIVE), (i) learns representations that factorize into appearance, pose and missingness
∗1College of Electrical and Computer Engineering, 2 Khoury College of Computer Sciences, Northeastern University, MA, USA, 3Computer Science & Engineering, University of California San Diego, CA, USA.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
latent variables; (ii) imputes missing data by sampling from the learned latent variables; and (iii) performs unsupervised stochastic video prediction using the imputed hidden representation. Besides imputation, another salient feature of our model is (iv) its ability to robustly generate objects even when their appearances are changing by modeling the static and dynamic appearances separately. Thismakes our technique more applicable to real-world problems.
We demonstrate the effectiveness of our method on a moving MNIST dataset with a variety of missing data scenarios including partial occlusions, out of scene, and missing frames with varying appearances. We further evaluate on the Multi-Object Tracking and Segmentation (MOTSChallenge) object tracking and segmentation challenge dataset. We show that DIVE is able to accurately infer missing data, perform video imputation and reconstruct input frames and generate future predictions. Compared with baselines, our approach is robust to missing data and achieves significant improvements in video prediction performances.
2 Related Work
Disentangled Representation. Unsupervised learning of disentangled representation for sequences generally falls into three categories: VAE-based [10, 6, 5, 7, 11, 12], GAN-like models [13, 14, 4, 15] and Sum-Product networks [11, 16]. For video data, a common practice is to encode a video frame into latent variables and disentangle the latent representation into content and dynamics factors. For example, [5] assumes the content (objects, background) of a video is fixed across frames, while the position of the content can change over time. In most cases, models can only handle complete video sequences without missing data. One exception is SQAIR [6], an generalization of AIR [17], which makes use of a latent variable to explicitly encode the presence of the respective object. SQAIR is further extended to an accelerated training scheme [16] or to better encode relational inductive biases [11, 12]. However, SQAIR and its extensions have no mechanism to recall an object. This leads to discovering an object as new when it reappears in the scene.
Video Prediction. Conditioning on the past frames, video prediction models are trained to reconstruct the input sequence and predict future frames. Many video prediction methods use dynamical modeling [18] or deep neural networks to learn a deterministic transformation from input to output, including LSTM [19], Convolutional LSTM [20] and PredRNN [21]. These methods often suffer from blurry predictions and cannot properly model the inherently uncertain future [22]. In contrast to deterministic prediction, we prefer stochastic video prediction [2, 23, 22, 24, 14, 25], which is more suitable for capturing the stochastic dynamics of the environment. For instance, [22] proposes an auto-regressive model to generate pixels sequentially. [14] generalizes VAE to video data with a learned prior. [26] develops a normalizing flow video prediction model. [25] proposes a Bayesian Predictive Network to learn the prior distribution from noisy videos but without disentangled representations. Our main goal is to learn disentangled latent representations from video that are both interpretable and robust to missing data.
Missing Value Imputation. Missing value imputation is the process of replacing the missing data in a sequence by an estimate of its true missing value. It is a central challenge of sequence modeling. Statistical methods often impose strong assumptions on the missing patterns. For example, mean/median averaging [27] and MICE [28], can only handle data missing at random. Latent variables models with the EM algorithm [29] can impute data missing not-at-random but are restricted to certain parametric models. Deep generative models offer a flexible framework of missing data imputation. For instance, [30, 31, 32] develop variants of recurrent neural networks to impute time series. [33, 34, 35] propose GAN-like models to learn missing patterns in multivariate time series. Unfortunately, to the best of our knowledge, all recent developments in generative modeling for missing value imputation have focused on low-dimensional time series, which are not directly applicable to high-dimensional video with complex scene dynamics.
3 Disentangled-Imputed-Video-autoEncoder (DIVE)
Videos often capture multiple objects moving with complex dynamics. For this work, we assume that each video has a maximum number of N objects, we observe a video sequence up to K time steps and aim to predict T − K + 1 time steps ahead. The key component of DIVE is based on
the decomposition and disentangling of the objects representations within a VAE framework, with similar recursive modules as in [5]. Specifically, we decompose the objects in a video and assign three sets of latent variables to each object: appearance, pose and missingness, representing distinct attributes. During inference, DIVE encodes the input video into latent representations, performs sequence imputation in the latent space and updates the hidden representations. The generation model then samples from the latent variables to reconstruct and generate future predictions. Figure 1 depicts the overall pipeline of our model.
Denote a video sequence with missing data as (y1, · · · ,yt) where each yt ∈ Rd is a frame. We assume an object in a video consists of appearance, pose (position and scale), and missingness. For each object i in frame t, we aim to learn the latent representation zti and disentangle it into three latent variables:
zti = [z t i,a, z t i,p, z t i,m], z t i,a ∈ Rh, zti,p ∈ R3, zti,m ∈ Z (1)
where zti,a is the appearance vector with dimension h, z t i,p is the pose vector with x, y coordinates and scale and zti,m is the binary missingness label. z t i,m = 1 if the object is occluded or missing.
3.1 Imputation Model
The imputation model leverages the missingness variable zti,m to update the hidden states. When there is no missing data, the encoded hidden state, given the input frame, is hti,y = fenc(h t−1 i,y ,h t+1 i,y , [y
t,hti−1,y]), where we enforce separate representations for each object. We implement the encoding function fenc with a bidirectional LSTM to propagate the hidden state over time. However, in the presence of missing data, such hidden state is unreliable and needs imputation. Denote the imputed hidden state as ĥti,y which will be discussed shortly. We update a latent space vector uti to select the corresponding hidden state, given the sampled missingness variable:
uti =
{ ĥti,y z t i,m = 1
γhti,y + (1− γ)ĥti,y zti,m = 0 , γ ∼ Bernoulli(p) (2)
Note that we apply a mixture of input hidden state hti,y and imputed hidden state ĥ t i,y with probability p. In our experiments, we found this mixed strategy to be helpful in mitigating covariate shift [36]. It forces the model to learn the correct imputation with self-supervision, which is reminiscent of the scheduled sampling [37] technique for sequence prediction.
The pose hidden states hti,p are obtained by propagating the updated latent representation through an LSTM network hti,p = LSTM(h t−1 i,p ,u t i). For prediction we use an LSTM network, with only h t−1 i,p as input in time t. We obtain the imputed hidden state by means of auto-regression. This is based on the assumption that a video sequence is locally stationary and the most recent history is predictive of the
future. Given the updated latent representation at time t, the imputed hidden state at the next time step is:
ĥti,y = FC(h t−1 i,p ) (3)
where FC(·) is a fully connected layer. This approach is similar in spirit to the time series imputation method in [32]. However, instead of imputing in the observation space, we perform imputation in the space of latent representations.
3.2 Inference Model
Missingness Inference. For the missingness variable zti,m, we also leverage the input encoding. We use a heaviside step function to make it binary:
zti,m = H(x), x ∼ N (µm, σ2m), [µm, σ2m] = FC(hti,y), H(x) = { 1 x ≥ 0 0 x < 0
(4)
where σ is the standard deviation of the noise, which is obtained from the hidden representation.
Pose Inference. The pose variable (position and scale) encodes the spatiotemporal dynamics of the video. We follow the variational inference technique for state-space representation of sequences [38]. That is, instead of directly inferring z1:Ki,p for K input frames, we use a stochastic variable β t i to reparameterize the state transition probability:
q(z1:Ti,p |y1:K) = K∏ t=1 q(zti,p|z1:t−1i,p ), z t i,p = ftran(z t−1 i,p , β t i ), β t i ∼ N (µp, σ2p) (5)
where the state transition ftran is a deterministic mapping from the previous state to the next time step. The stochastic transition variable βti is sampled from a Gaussian distribution parameterized by a mean µp and variance σ2p with [µp, σ 2 p] = FC(h t i,p).
Dynamic Appearance. Another novel feature of our approach is its ability to robustly generate objects even when their appearances are changing across frames. zti,a is the time-varying appearance. In particular, we decompose the appearance latent variable into a static component ai,s and a dynamic component ai,d which we model separately. The static component captures the inherent semantics of the object while the dynamic component models the nuanced variations in shape.
For the static component, we follow the procedure in [5] to perform inverse affine spatial transformation T −1(·; ·), given the pose of the object to center in the frame and rectify the images with a selected crop size. Future prediction is done in an autoregressive fashion:
ai,s = FC(hKi,a), h t+1 i,a = { LSTM1(hti,a, T −1(yt; zti,p)) t < K LSTM2(hti,a) K ≤ t < T
(6)
Here the appearance hidden state hti,a is propagated through an LSTM, whose last output is used to infer the static appearance. Similar to poses, we use a state-space representation for the dynamic component, but directly model the difference in appearances, which helps stabilizing training:
a1i,d = FC([ai,s, T −1(y1; z1i,p)]), at+1i,d = a t i,d + δ t i,d, δ t i,d = FC([h t i,a,ai,s]) (7)
The final appearance variable is sampled from a Gaussian distribution parametrized by the concatenation of static and dynamic components, which are randomly mixed with a probability p:
q(zi,a|y1:K) = ∏ t N (µa, σ2a), [µa, σ2a] = FC([ai,s, γati,d]), γ ∼ Bernoulli(p) (8)
The mixing strategy helps to mitigate covariate shift and enforces the static component to learn the inherent semantics of the objects across frames.
<latexit sha1_base64="zYko6nGTBvQl8k+Y8nvDo3F4fOM=">AAACDXicbVC9TsMwGHTKXyl/BUYWiwiJqUoQCMYKFsYikbZSE1WO47RWbSeyHVAV5Rm6woOwIVaegefgBXDTDNBykqXT3Xf25wtTRpV2nC+rtra+sblV327s7O7tHzQPj7oqySQmHk5YIvshUoRRQTxNNSP9VBLEQ0Z64eRu7veeiFQ0EY96mpKAo5GgMcVIG8nzo0SrYdN2Wk4JuErcitigQmfY/DY5nHEiNGZIqYHrpDrIkdQUM1I0/EyRFOEJGpGBoQJxooK8XLaAZ0aJYJxIc4SGpfo7kSOu1JSHZpIjPVbL3lz8zxtkOr4JcirSTBOBFw/FGYM6gfOfw4hKgjWbGoKwpGZXiMdIIqxNPw1fkGeccI5ElPvm8mLgBiUZh3Fuu0VhWnKXO1kl3YuWe9m6eri027dVX3VwAk7BOXDBNWiDe9ABHsCAghl4Aa/WzHqz3q2PxWjNqjLH4A+szx++fpym</latexit>
<latexit sha1_base64="zYko6nGTBvQl8k+Y8nvDo3F4fOM=">AAACDXicbVC9TsMwGHTKXyl/BUYWiwiJqUoQCMYKFsYikbZSE1WO47RWbSeyHVAV5Rm6woOwIVaegefgBXDTDNBykqXT3Xf25wtTRpV2nC+rtra+sblV327s7O7tHzQPj7oqySQmHk5YIvshUoRRQTxNNSP9VBLEQ0Z64eRu7veeiFQ0EY96mpKAo5GgMcVIG8nzo0SrYdN2Wk4JuErcitigQmfY/DY5nHEiNGZIqYHrpDrIkdQUM1I0/EyRFOEJGpGBoQJxooK8XLaAZ0aJYJxIc4SGpfo7kSOu1JSHZpIjPVbL3lz8zxtkOr4JcirSTBOBFw/FGYM6gfOfw4hKgjWbGoKwpGZXiMdIIqxNPw1fkGeccI5ElPvm8mLgBiUZh3Fuu0VhWnKXO1kl3YuWe9m6eri027dVX3VwAk7BOXDBNWiDe9ABHsCAghl4Aa/WzHqz3q2PxWjNqjLH4A+szx++fpym</latexit>
<latexit sha1_base64="zYko6nGTBvQl8k+Y8nvDo3F4fOM=">AAACDXicbVC9TsMwGHTKXyl/BUYWiwiJqUoQCMYKFsYikbZSE1WO47RWbSeyHVAV5Rm6woOwIVaegefgBXDTDNBykqXT3Xf25wtTRpV2nC+rtra+sblV327s7O7tHzQPj7oqySQmHk5YIvshUoRRQTxNNSP9VBLEQ0Z64eRu7veeiFQ0EY96mpKAo5GgMcVIG8nzo0SrYdN2Wk4JuErcitigQmfY/DY5nHEiNGZIqYHrpDrIkdQUM1I0/EyRFOEJGpGBoQJxooK8XLaAZ0aJYJxIc4SGpfo7kSOu1JSHZpIjPVbL3lz8zxtkOr4JcirSTBOBFw/FGYM6gfOfw4hKgjWbGoKwpGZXiMdIIqxNPw1fkGeccI5ElPvm8mLgBiUZh3Fuu0VhWnKXO1kl3YuWe9m6eri027dVX3VwAk7BOXDBNWiDe9ABHsCAghl4Aa/WzHqz3q2PxWjNqjLH4A+szx++fpym</latexit>
<latexit sha1_base64="zYko6nGTBvQl8k+Y8nvDo3F4fOM=">AAACDXicbVC9TsMwGHTKXyl/BUYWiwiJqUoQCMYKFsYikbZSE1WO47RWbSeyHVAV5Rm6woOwIVaegefgBXDTDNBykqXT3Xf25wtTRpV2nC+rtra+sblV327s7O7tHzQPj7oqySQmHk5YIvshUoRRQTxNNSP9VBLEQ0Z64eRu7veeiFQ0EY96mpKAo5GgMcVIG8nzo0SrYdN2Wk4JuErcitigQmfY/DY5nHEiNGZIqYHrpDrIkdQUM1I0/EyRFOEJGpGBoQJxooK8XLaAZ0aJYJxIc4SGpfo7kSOu1JSHZpIjPVbL3lz8zxtkOr4JcirSTBOBFw/FGYM6gfOfw4hKgjWbGoKwpGZXiMdIIqxNPw1fkGeccI5ElPvm8mLgBiUZh3Fuu0VhWnKXO1kl3YuWe9m6eri027dVX3VwAk7BOXDBNWiDe9ABHsCAghl4Aa/WzHqz3q2PxWjNqjLH4A+szx++fpym</latexit>
3.3 Generative Model and Learning
Given a video with missing data (y1, · · · ,yt), denote the underlying complete video as (x1, · · ·xt). Then, the generative distribution of the video sequence is given by:
p(y1:K ,xK+1:T |z1:T ) = N∏ i=1 p(y1:Ki |z1:Ki )p(xK+1:Ti |z K+1:T i ) (9)
In unsupervised learning of video representations, we simultaneously reconstruct the input video and predict future frames. Given the inferred latent variables, we generate yti and predict x t i for each object sequentially. In particular, we first generate the rectified object in the center, given the appearance zti,a. The decoder is parameterized by a deconvolutional layer. After that, we apply an spatial transformer T to rescale and place the object according to the pose zti,p. For each object, the generative model is:
p(yti |zti,a) = T (fdec(zti,a); zti,p) ◦ (1− zti,m), p(xti|zti,a) = T (fdec(zti,a), zti,p) (10)
Future prediction is similar to reconstruction, except we assume the video is always complete. The generated frame yt is the summation over yti for all objects. Following the VAE framework, we train the model by maximizing the evidence lower bound (ELBO). Please see details in Appendix D .
4 Experiments
4.1 Experimental Setup
We evaluate our method on variations of moving MNIST and MOTSChallenge multi-object tracking datasets. The prediction task is to generate 10 future frames, given an input of 10 frames. The baselines include the established state-of-the-art video prediction methods based on disentangled representation learning: DRNET [4], DDPAE [5] and SQAIR [24].
Evaluation Metrics. We use common evaluation metrics for video quality on the visible pixels, which include pixel-level Binary Cross entropy (BCE) per frame, Mean Square Error (MSE) per
frame, Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM). Additionally, DIVE is a probabilistic model, hence we also report Negative Evidence Lower Bound (NELBO).
As our DIVE model simultaneously imputes missing data and generates improved predictions, we report reconstruction and prediction performances separately. For implementation details for the experiments, please see Appendix A.
4.2 Moving MNIST Experiments
Data Description. Moving MNIST [19] is a synthetic dataset consisting of two digits with size 28×28 moving independently in a 64×64 frame. Each sequence is generated on-the-fly by sampling MNIST digits and synthesizing trajectories with fixed velocity with randomly sampled angle and initial position. We train the model for 300 epochs in scenarios 1 and 2, and 600 epochs in scenario 3. For each epoch we generate 10k sequences. The test set contains 1,024 fixed sequences. We simulate a variety of missing data scenarios including:
• Partial Occlusion: we occlude the upper 32 rows of the 64× 64 pixel frame to simulate the effect of objects being partially occluded at the boundaries of the frame. • Out of Scene: we randomly select an initial time step t′ = [3, 9] and remove the object from the frame in steps t′ and t′ + 1 to simulate the out of scene phenomena for two consecutive steps. • Missing with Varying Appearance: we apply an elastic transformation [39] to change the appearance of the objects individually. The transformation grid is chosen randomly for each sequence, and the parameter α of the deformation filter is set to α = 100 and reduced linearly to 0 (no transformation) along the steps of the sequence. We remove each object for one time-step following the same logic as in scenario 2.
Scenario 1: Partial occlusion. The top portion of Table 1 shows the quantitative performance comparison for all methods for the partial occlusion scenario. Our model outperforms all baseline models, except for the BCE in prediction. This is because DIVE generates sharper shapes which, in case of misalignment with the ground truth, have a larger effect on the pixel-level BCE. For reconstruction, our method often outperforms the baselines by a large margin, which highlights the significance of missing data imputation. Note that SQAIR performs well in reconstruction but fails in prediction. Prolonged full occlusions cause SQAIR to lose track of the object and re-identifying it as a new one when it reappears. Figure 3 shows a visualization of the predictions from DIVE and the baseline models. The bottom three rows show the decomposed representations from DIVE for each object and the missingness labels for objects in the corresponding order. We observe that DRNET and SQAIR fail to predict the objects position in the frame and appearance while DDPAE generates blurry predictions with the correct pose. These failure cases rarely occur for DIVE. Scenario 2: Out of Scene. The middle portion of Table 1 illustrates the quantitative performance of all methods for scenario 2. We observe that our method achieves significant improvement across all metrics. This implies that our imputation of missing data is accurate and can drastically improve the predictions. Figure 4 shows the prediction results of all methods evaluated for the out of scene case. We observe that DRNET and SQAIR fail to predict the future pose, and the quality of the
generated object appearance is poor. The qualitative comparison with DDPAE reveals that the objects generated by our model have higher brightness and sharpness. As the baselines cannot infer the object missingness, they may misidentify the missing object as any other object that is present. This would lead to confusion for modeling the pose and appearance. The figure also reveals how DIVE is able to predict the missing labels and hallucinate the pose of the objects when missing, allowing for accurate predictions.
Scenario 3: Missing with Varying Appearance. Quantitative results for 1 time step complete missingness with varying appearance are shown in the bottom portion of Table 1. Our method again achieves the best performance for all metrics. The difference between our models and baselines is quite significant given the difficulty of the task. Besides the complete missing frame, the varying appearances of the objects introduce an additional layer of complexity which can misguide the inference. Despite these challenges, DIVE can learn the appearance variation and successfully recognize the correct object in most cases. Figure 5 visualizes the model predictions, a tough case where two seemingly different digits (“2” and “6”) are progressively transformed into the same digit (“6”). SQAIR and DRNET have the ability to model varying appearance, but fail to generate
reasonable predictions due to similar reasons as before. DDPAE correctly predicts the pose after the missing step, but misidentifies the objects appearance before that. Also, DDPAE simply cannot model appearance variation. DIVE correctly estimates the pose and appearance variation of each object, while maintaining their identity throughout the sequence.
4.3 Pedestrian Experiments
The Multi-Object Tracking and Segmentation (MOTS) Challenge [40] dataset consists of real world video sequences of pedestrians and cars. We use 2 ground truth sequences in which pedestrians have been fully segmented and annotated [41]. The annotated sequences are further processed into shorter 20 frame sub-sequences, binarized and with at most 3 unique pedestrians. The smallest objects are scaled and the sequences are augmented by simulating constant camera motion and 1 time step complete camera occlusion, further details deferred to Appendix B.
Table 2 shows the quantitative metrics compared with the best performing baseline DDPAE. This dataset mimics the missing scenarios 1 (partial occlusion) and 3 (missing with varying appearance) because the appearance walking pedestrians is constantly changing. DIVE outperforms
DDPAE across all evaluation metrics. Figure 6 shows the outputs from both models as well as the decomposed objects and missingness labels from DIVE. Our method can accurately recognize 3 objects (pedestrians), infer their missingness and estimate their varying appearance. DDPAE fails to
decompose them due to its rigid assumption of fixed appearances and the inherent complexity of the scenario. In Appendix C, we perform two ablation studies. One on the significance of dynamic appearance modeling, and the other on the importance of estimating missingness and performing imputation.
5 Conclusion and Discussion
We propose a novel deep generative model that can simultaneously perform object decomposition, latent space disentangling, missing data imputation, and video forecasting. The key novelty of our method includes missing data detection and imputation in the hidden representations, as well as a robust way of dealing with dynamic appearances. Extensive experiments on moving MNIST demonstrate that DIVE can impute missing data without supervision and generate videos of significantly higher quality. Future work will focus on improving our model so that it is able to handle the complexity and dynamics in real world videos with unknown object number and colored scenes.
Broader Impact
Videos provide a window into the physics of the world we live in. They contain abundant visual information about what objects are, how they move, and what happens when cameras move against the scene. Being able to learn a representation that disentangles these factors is fundamental to AI that can understand and act in a spatiotemporal environment. Despite the wealth of methods for video prediction, state-of-the-art approaches are sensitive to missing data, which are very common in real-world videos. Our proposed model significantly improves the robustness of video prediction methods against missing data, thereby increasing the practical value of video prediction techniques and our trust in AI. Video surveillance systems can potentially be abused for discriminatory targeting, and we remained cognizant of the bias in our training data. To reduce this potential risk, we pre-processed the MOTSChallenge videos to greyscale.
Acknowledgments and Disclosure of Funding
This work was supported in part by NSF under Grants IIS#1850349, IIS#1814631, ECCS#1808381 and CMMI#1638234, the U. S. Army Research Office under Grant W911NF-20-1-0334 and the Alert DHS Center of Excellence under Award Number 2013-ST-061-ED0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security. We thank Dr. Adam Kosiorek for helpful discussions. Additional revenues related to this work: ONR # N68335-19-C-0310, Google Faculty Research Award, Adobe Data Science Research Awards, GPUs donated by NVIDIA, and computing allocation awarded by DOE. | 1. What is the main contribution of the paper regarding video prediction?
2. How does the proposed approach handle missing data, and what are the strengths of this method?
3. Are there any concerns or weaknesses regarding the applicability of the model, particularly in more complex datasets?
4. Is there any questionable aspect of the method, such as the use of a normal distribution in Missingness Inference? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper deals with video prediction (on 2-digit Moving MNIST and MOTS data) in scenarios with "missing" data - 1) occluded pixels, 2) missing digits, 3) missed frames and deformed objects. It builds on top of the model presented in DDPAE [14] by including additional latent variables to account for and compensate for missing data. It shows the effectiveness of its model compared to previous variants in the above missing-data scenarios.
Strengths
The paper presents an effective way of tackling missing information, referencing previous works that have done the same but perhaps not in the context of video prediction. The overall model is soundly designed to handle missing information, by both reconstructing the missing information in the input video and predicting future frames. It is well justified that the handling of missing data happens in the latent space rather than pixel space, and for the case of video most of the proposed ideas (distributions of latent variables, their connections, etc.) seem appropriate. The explanations of the method are quite clear and easy enough to understand.
Weaknesses
The proposed model was built on top of a model (DDPAE [14]) that was designed for and tested on simplistic datasets such as Moving MNIST and Bouncing Balls. Hence, it is very effective in the simpler case of well-defined individual components on a dark background. While it is encouraging that the model was able to achieve good results in this setting, it remains to be seen how well it can perform on more complex datasets, such as those with natural images. The paper presents some motivating results on the MOTS dataset to address this very concern; however, the method has the potential to be applied to even more complex scenarios. In 3.2 Missingness Inference, the use of a normal distribution for sampling before the Heaviside step function is not quite justified.
NIPS | Title
Differentially Private Testing of Identity and Closeness of Discrete Distributions
Abstract
We study the fundamental problems of identity testing (goodness of fit) and closeness testing (two-sample test) of distributions over k elements, under differential privacy. While the problems have a long history in statistics, finite sample bounds for these problems have only been established recently. In this work, we derive upper and lower bounds on the sample complexity of both problems under (ε, δ)-differential privacy. We provide sample optimal algorithms for the identity testing problem for all parameter ranges, and the first results for closeness testing. Our closeness testing bounds are optimal in the sparse regime where the number of samples is at most k. Our upper bounds are obtained by privatizing non-private estimators for these problems. The non-private estimators are chosen to have small sensitivity. We propose a general framework to establish lower bounds on the sample complexity of statistical tasks under differential privacy. We show a bound on differentially private algorithms in terms of a coupling between the two hypothesis classes we aim to test. By carefully constructing chosen priors over the hypothesis classes, and using Le Cam's two point theorem, we provide a general mechanism for proving lower bounds. We believe that the framework can be used to obtain strong lower bounds for other statistical tasks under privacy.
1 Introduction
Testing whether observed data conforms to an underlying model is a fundamental scientific problem. In a statistical framework, given samples from an unknown probabilistic model, the goal is to determine whether the underlying model has a property of interest. This question has received great attention in statistics as hypothesis testing [1, 2], where it was mostly studied in the asymptotic regime when the number of samples m → ∞. In the past two decades there has been a lot of work from the computer science, information theory, and statistics community on various distribution testing problems in the non-asymptotic (small-sample) regime, where the domain size k could be potentially larger than m (See [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], references therein, and [16] for a recent survey). Here the goal is to characterize the minimum number of samples necessary (sample complexity) as a function of the domain size k, and the other parameters. At the same time, preserving the privacy of individuals who contribute to the data samples has emerged as one of the key challenges in designing statistical mechanisms over the last few years. For example, the privacy of individuals participating in surveys on sensitive subjects
*The authors are listed in alphabetical order. This research was supported by NSF-CCF-CRII 1657471, and a grant from Cornell University.
is of utmost importance. Without a properly designed mechanism, statistical processing might divulge sensitive information about the data. There have been many publicized instances of individual data being de-anonymized, including the de-anonymization of the Netflix database [17], and individual information from census-related data [18]. Protecting privacy for the purposes of data release, or even computation on data, has been studied extensively across several fields, including statistics, machine learning, database theory, algorithm design, and cryptography (see e.g., [19, 20, 21, 22, 23, 24, 25]). While the motivation is clear, even a formal notion of privacy is not straightforward. We use differential privacy [26], a notion which arose from the database and cryptography literature, and has emerged as one of the most popular privacy measures (see [26, 27, 22, 28, 29, 30, 31, 32], references therein, and the recent book [33]). Roughly speaking, it requires that the output of the algorithm should be statistically close on two neighboring datasets. For a formal definition of differential privacy, see Section 2. A natural question when designing a differentially private algorithm is to understand how the data requirement grows in order to ensure privacy along with the same accuracy. In this paper, we study the sample size requirements for differentially private discrete distribution testing.
1.1 Results and Techniques
We consider two fundamental statistical tasks for testing distributions over [k]: (i) identity testing, where given sample access to an unknown distribution p, and a known distribution q, the goal is to decide whether p = q, or d_TV(p, q) ≥ α, and (ii) closeness testing, where given sample access to unknown distributions p and q, the goal is to decide whether p = q, or d_TV(p, q) ≥ α. (See Section 2 for precise statements of these problems.) Given differential privacy constraints (ε, δ), we provide (ε, δ)-differentially private algorithms for both these tasks. For identity testing, our bounds are optimal up to constant factors for all ranges of k, α, ε, δ, and for closeness testing the results are tight in the small sample regime where m = O(k). Our upper bounds are based on various methods to privatize the previously known tests. A critical component is to design and analyze test statistics that have low sensitivity (see Definition 4), in order to preserve privacy. We first state that any (ε + δ, 0)-DP algorithm is also an (ε, δ)-DP algorithm. [34] showed that for testing problems, any (ε, δ)-DP algorithm will also imply an (ε + cδ, 0)-DP algorithm. Please refer to Lemma 2 and Lemma 3 for more detail. Therefore, for all the problems, we simply consider (ε, 0)-DP algorithms (ε-DP), and we can replace ε with (ε + δ) in both the upper and lower bounds without loss of generality. One of the main contributions of our work is to propose a general framework for establishing lower bounds for the sample complexity of statistical problems such as property estimation and hypothesis testing under privacy constraints. We describe this, and the other results, below. A summary of the results is presented in Table 1, which we now describe in detail.

1. DP Lower Bounds via Coupling. We establish a general method to prove lower bounds for distribution testing problems. Suppose X_1^m and Y_1^m are generated by two statistical sources. Further suppose there is a coupling between the two sources such that the expected Hamming distance between the coupled samples is at most D; then if ε + δ = o(1/D), there is no (ε, δ)-differentially private algorithm to distinguish between the two sources. This result is stated precisely in Theorem 1. By using carefully designed coupling schemes, we provide lower bounds for identity testing and closeness testing.

2. Reduction from identity to uniformity. We reduce the problem of ε-DP identity testing of distributions over [k] to ε-DP uniformity testing of distributions over [6k]. Such a reduction, without privacy constraints, was shown in [35], and we use their result to obtain a reduction that also preserves privacy, with at most a constant factor blow-up in the sample complexity. This result is given in Theorem 3.

3. Identity Testing. It was recently shown that O(√k/α²) [7, 36, 11, 37] samples are necessary and sufficient for identity testing without privacy constraints. The statistics used in these papers are variants of chi-squared tests, which could have a high global sensitivity. Given the reduction from identity to uniformity, it suffices to consider uniformity testing. We consider the test statistic studied by [38], which is simply the distance of the empirical distribution to the uniform distribution. This statistic also has a low sensitivity, and furthermore has the optimal sample complexity in all parameter ranges, without privacy constraints. In Theorem 2, we state the optimal sample complexity of identity testing. The upper bounds are derived by privatizing the statistic in [38]. For the lower bound, we use our technique in Theorem 1. We design a coupling between the uniform distribution u[k] and a mixture of distributions, which are all at distance α from u[k] in total variation distance. In particular, we consider the mixture distribution used in [7]. Much of the technical work goes into proving the existence of couplings with small expected Hamming distance. [34] studied identity testing under pure differential privacy, and obtained an algorithm with complexity O( √k/α² + √(k log k)/(α^{3/2} ε) + (k log k)^{1/3}/(α^{5/3} ε^{2/3}) ). Our results improve their bounds significantly.

4. Closeness Testing. The closeness testing problem was proposed by [3], and the optimal bound of Θ( max{ k^{2/3}/α^{4/3}, √k/α² } ) was shown in [10]. They proposed a chi-square based statistic, which we show has a small sensitivity. We privatize their algorithm to obtain the sample complexity bounds. In the sparse regime we prove a sample complexity bound of Θ( k^{2/3}/α^{4/3} + √k/(α√ε) ), and in the dense regime, we obtain a bound of O( √k/α² + 1/(α²ε) ). These results are stated in Theorem 4. Since closeness testing is a harder problem than identity testing, all the lower bounds from identity testing port over to closeness testing. The closeness testing lower bounds are given in Theorem 4.
1.2 Related Work
A number of papers have recently studied hypothesis testing problems under differential privacy guarantees [39, 40, 41]. Some works analyze the distribution of the test statistic in the asymptotic regime. The work most closely related to ours is [34], which studied identity testing in the finite sample regime. We mentioned their guarantees along with our results on identity testing in the previous section. There has been a line of research on statistical testing and estimation problems under the notion of local differential privacy [24, 23, 42, 43, 44, 45, 46, 47, 48, 49]. These papers study some basic statistical problems and provide minimax lower bounds using Fano's inequality. [50] studies structured distribution estimation under differential privacy. Information theoretic approaches to data privacy have been studied recently using quantities like mutual information and guessing probability to quantify privacy [51, 52, 53, 54, 55]. [56, 57] provide methods to prove lower bounds on DP algorithms via packing. Recently, [58] used coupling to prove lower bounds on the sample complexity of differentially private confidence intervals. Our results are more general, in that we can handle mixtures of distributions, which can provide optimal lower bounds on identity testing. [59, 60] characterize differential privacy through a coupling argument. [61] also uses the idea of coupling implicitly when designing differentially private partition algorithms. [62] uses our coupling argument to prove lower bounds for differentially private property estimation problems. In a contemporaneous and independent work, [63], the authors study the same problems that we consider, and obtain the same upper bounds for the sparse case, when m ≤ k. They also provide experimental results to show the performance of the privatized algorithms. However, their results are sub-optimal for m = Ω(k) for identity testing, and they do not provide any lower bounds for the problems. Both [34] and [63] consider only pure differential privacy, which is a special case of our results.
Organization of the paper. In Section 2, we discuss the definitions and notation. A general technique for proving lower bounds for differentially private algorithms is described in Section 3. Section 4 gives upper and lower bounds for identity testing, and closeness testing is studied in Section 5.
2 Preliminaries
Let Δ_k be the class of all discrete distributions over a domain of size k, which w.l.o.g. is assumed to be [k] := {1, . . . , k}. We denote length-m samples X_1, . . . , X_m by X_1^m. For x ∈ [k], let p_x be the probability of x under p. Let M_x(X_1^m) be the number of times x appears in X_1^m. For A ⊆ [k], let p(A) = Σ_{x ∈ A} p_x. Let X ∼ p denote that the random variable X has distribution p. Let u[k] be the uniform distribution over [k], and B(b) be the Bernoulli distribution with bias b. The total variation distance between distributions p and q over [k] is d_TV(p, q) := sup_{A ⊆ [k]} {p(A) − q(A)} = (1/2) ‖p − q‖_1.
Definition 1. Let p and q be distributions over X and Y respectively. A coupling between p and q is a distribution over X × Y whose marginals are p and q respectively.

Definition 2. The Hamming distance between two sequences X_1^m and Y_1^m is d_H(X_1^m, Y_1^m) := Σ_{i=1}^m 𝟙{X_i ≠ Y_i}, the number of positions where X_1^m and Y_1^m differ.

Definition 3. A randomized algorithm A : X^m → S is said to be (ε, δ)-differentially private if for any S ⊆ range(A), and all pairs X_1^m and Y_1^m with d_H(X_1^m, Y_1^m) ≤ 1, we have Pr(A(X_1^m) ∈ S) ≤ e^ε · Pr(A(Y_1^m) ∈ S) + δ.

The case when δ = 0 is called pure differential privacy. For simplicity, we denote pure differential privacy as ε-differential privacy (ε-DP). Next we state the group property of differential privacy. We give a proof in Appendix A.1.

Lemma 1. Let A be an (ε, δ)-DP algorithm. Then for sequences x_1^m and y_1^m with d_H(x_1^m, y_1^m) ≤ t, and for all S ⊆ range(A), Pr(A(x_1^m) ∈ S) ≤ e^{tε} · Pr(A(y_1^m) ∈ S) + δ t e^{ε(t−1)}.

The next two lemmas state a relationship between (ε, δ)- and ε-differential privacy. We give a proof of Lemma 2 in Appendix A.2, and Lemma 3 follows from [34].

Lemma 2. Any (ε + δ, 0)-differentially private algorithm is also (ε, δ)-differentially private.

Lemma 3. An (ε, δ)-DP algorithm for a testing problem can be converted to an (ε + cδ, 0)-DP algorithm for some constant c > 0.
Combining these two results, it suffices to prove bounds for (ε, 0)-DP, and plug in ε with (ε + δ) to obtain bounds that are tight up to constant factors for (ε, δ)-DP. The notion of sensitivity is useful in establishing bounds under differential privacy.

Definition 4. The sensitivity of f : [k]^m → ℝ is

Δ(f) := max_{d_H(X_1^m, Y_1^m) ≤ 1} |f(X_1^m) − f(Y_1^m)|.

For x ∈ ℝ, σ(x) := 1/(1 + exp(−x)) = exp(x)/(1 + exp(x)) is the sigmoid function. The following properties follow from the definition of σ.

Lemma 4. 1. For all x, γ ∈ ℝ, exp(−|γ|) ≤ σ(x + γ)/σ(x) ≤ exp(|γ|).
2. Let 0 < η < 1/2. Suppose x ≥ log(1/η). Then σ(x) > 1 − η.
Identity Testing (IT). Given the description of q ∈ Δ_k over [k], a parameter α, and m independent samples X_1^m from an unknown p ∈ Δ_k, A is a (k, α)-identity testing algorithm for q if, when p = q, A outputs "p = q" with probability at least 0.9, and when d_TV(p, q) ≥ α, A outputs "p ≠ q" with probability at least 0.9.

Definition 5. The sample complexity of DP-identity testing, denoted S(IT, k, α, ε), is the smallest m for which there exists an ε-DP algorithm A that uses m samples to achieve (k, α)-identity testing. Without privacy concerns, S(IT, k, α) denotes the sample complexity. When q = u[k], the problem reduces to uniformity testing, and the sample complexity is denoted as S(UT, k, α, ε).

Closeness Testing (CT). Given m independent samples X_1^m and Y_1^m from unknown distributions p and q, an algorithm A is a (k, α)-closeness testing algorithm if, when p = q, A outputs "p = q" with probability at least 0.9, and when d_TV(p, q) ≥ α, A outputs "p ≠ q" with probability at least 0.9.

Definition 6. The sample complexity of DP-closeness testing, denoted S(CT, k, α, ε), is the smallest m for which there exists an ε-DP algorithm A that uses m samples to achieve (k, α)-closeness testing. When privacy is not a concern, we denote the sample complexity of closeness testing as S(CT, k, α).

Hypothesis Testing (HT). Suppose we have distributions p and q over X^m, and X_1^m ∼ p, Y_1^m ∼ q. We say an algorithm A : X^m → {p, q} can distinguish between p and q if Pr(A(X_1^m) = q) < 0.1 and Pr(A(Y_1^m) = p) < 0.1.
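To make the notation above concrete, here is a minimal Python sketch (ours, not from the paper) of the basic quantities used throughout: the empirical counts M_x, the total variation distance, and the Hamming distance. The function names are illustrative only.

```python
import numpy as np

def counts(x, k):
    # M_x(X_1^m): number of occurrences of each symbol of [k] = {0, ..., k-1}
    return np.bincount(x, minlength=k)

def tv_distance(p, q):
    # d_TV(p, q) = (1/2) * ||p - q||_1
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def hamming_distance(x, y):
    # d_H(x, y): number of positions where the two sequences differ
    return int(np.sum(np.asarray(x) != np.asarray(y)))

# Example: TV distance between a slightly perturbed distribution and uniform
k = 4
u = np.full(k, 1.0 / k)
p = np.array([0.3, 0.2, 0.3, 0.2])
print(tv_distance(p, u))  # 0.1
```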
3 Privacy Bounds Via Coupling
Recall that a coupling between distributions p and q over X and Y is a distribution over X × Y whose marginal distributions are p and q (Definition 1). For simplicity, we treat a coupling as a randomized function f : X → Y such that if X ∼ p, then Y = f(X) ∼ q. Note that X and Y are not necessarily independent.

Example 1. Let B(b1) and B(b2) be Bernoulli distributions with biases b1 and b2 such that b1 < b2. Let p and q be the distributions over {0, 1}^m obtained by m i.i.d. samples from B(b1) and B(b2) respectively. Let X_1^m be distributed according to p. Generate a sequence Y_1^m as follows: If X_i = 1, then Y_i = 1. If X_i = 0, we flip another coin with bias (b2 − b1)/(1 − b1), and let Y_i be the output of this coin. Repeat the process independently for each i, so that the Y_i's are all independent of each other. Then Pr(Y_i = 1) = b1 + (1 − b1)(b2 − b1)/(1 − b1) = b2, and Y_1^m is distributed according to q.
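The following Python sketch (ours, for illustration) simulates the coupling of Example 1 and checks its two defining properties: the marginals are B(b1) and B(b2), and the expected Hamming distance is (b2 − b1)·m, since X_i ≠ Y_i exactly when X_i = 0 and the extra coin lands 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_bernoulli_samples(m, b1, b2):
    # Coupling of Example 1: X ~ B(b1)^m, Y ~ B(b2)^m, coupled coordinatewise.
    x = (rng.random(m) < b1).astype(int)
    extra = (rng.random(m) < (b2 - b1) / (1 - b1)).astype(int)
    y = np.where(x == 1, 1, extra)  # Y_i = 1 if X_i = 1, else flip the extra coin
    return x, y

m, b1, b2 = 100_000, 0.5, 0.6
x, y = coupled_bernoulli_samples(m, b1, b2)
print(x.mean(), y.mean())   # ~0.5 and ~0.6: marginals are B(b1) and B(b2)
print(np.mean(x != y))      # ~b2 - b1 = 0.1: expected Hamming distance is (b2 - b1) * m
```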
We would like to use coupling to prove lower bounds on differentially private algorithms for testing problems. Let p and q be distributions over X^m. If there is a coupling between p and q with a small expected Hamming distance, we might expect that the algorithm cannot have strong privacy guarantees. The following theorem formalizes this intuition:

Theorem 1. Suppose there is a coupling between p and q over X^m such that E[d_H(X_1^m, Y_1^m)] ≤ D, where X_1^m ∼ p, Y_1^m ∼ q. Then, any (ε, δ)-differentially private hypothesis testing algorithm A : X^m → {p, q} on p and q must satisfy ε + δ = Ω(1/D).
Proof. Let (X_1^m, Y_1^m) be distributed according to a coupling of p and q with E[d_H(X_1^m, Y_1^m)] ≤ D. By Markov's inequality, Pr(d_H(X_1^m, Y_1^m) > 10D) ≤ Pr(d_H(X_1^m, Y_1^m) > 10 · E[d_H(X_1^m, Y_1^m)]) < 0.1. Let x_1^m and y_1^m be realizations of X_1^m and Y_1^m, and let W = {(x_1^m, y_1^m) : d_H(x_1^m, y_1^m) ≤ 10D}. Then we have

0.1 ≥ Pr(A(X_1^m) = q) ≥ Σ_{(x_1^m, y_1^m) ∈ W} Pr(X_1^m = x_1^m, Y_1^m = y_1^m) · Pr(A(x_1^m) = q).

By Lemma 1, together with Pr(d_H(X_1^m, Y_1^m) > 10D) < 0.1 and Pr(A(y_1^m) = q) ≤ 1,

Pr(A(Y_1^m) = q) ≤ Σ_{(x_1^m, y_1^m) ∈ W} Pr(x_1^m, y_1^m) · Pr(A(y_1^m) = q) + Σ_{(x_1^m, y_1^m) ∉ W} Pr(x_1^m, y_1^m) · 1
≤ Σ_{(x_1^m, y_1^m) ∈ W} Pr(x_1^m, y_1^m) · ( e^{10εD} Pr(A(x_1^m) = q) + 10Dδ · e^{ε(10D−1)} ) + 0.1
≤ 0.1 · e^{10εD} + 10Dδ · e^{10εD} + 0.1.

Since we know Pr(A(Y_1^m) = q) > 0.9, this gives 0.9 < 0.1 · e^{10εD} + 10Dδ · e^{10εD} + 0.1. Hence, either εD = Ω(1) or Dδ = Ω(1), which implies that D = Ω(min{1/ε, 1/δ}) = Ω(1/(ε + δ)), proving the theorem.
Setting δ = 0, we obtain the bound for pure differential privacy. In the next few sections, we use this theorem to get sample complexity bounds for differentially private testing problems.
4 Identity Testing
In this section, we prove the bounds for identity testing. Our main result is the following. Theorem 2.
S(IT, k, α, ε) = Θ( √k/α² + max{ √k/(α√ε), k^{1/3}/(α^{4/3} ε^{2/3}), 1/(αε) } ).
Or, written according to the parameter range,

S(IT, k, α, ε) =
Θ( √k/α² + √k/(α√ε) ),                   when k = Ω(1/α⁴) and k = Ω(1/(α²ε)),
Θ( √k/α² + k^{1/3}/(α^{4/3} ε^{2/3}) ),  when k = Ω(α/ε) and k = O(1/α⁴ + 1/(α²ε)),
Θ( √k/α² + 1/(αε) ),                     when k = O(α/ε).
Our bounds are tight up to constant factors in all parameters. To get the sample complexity for (ε, δ)-differential privacy, we can simply replace ε by (ε + δ). In Theorem 3 we will show a reduction from identity to uniformity testing under pure differential privacy. Using this, it will be enough to design algorithms for uniformity testing, which is done in Section 4.2. Moreover, since uniformity testing is a special case of identity testing, any lower bound for uniformity will port over to identity, and we give such bounds in Section 4.3.
4.1 Uniformity Testing implies Identity Testing
The sample complexity of testing identity of any distribution is O(√k/α²), a bound that is tight for the uniform distribution. Recently, [35] proposed a scheme to reduce the problem of testing identity of distributions over [k] for total variation distance α to the problem of testing uniformity over [6k] with total variation parameter α/3. In other words, they show that S(IT, k, α) ≤ S(UT, 6k, α/3). Building on [35], we prove that a similar bound also holds for differentially private algorithms. The proof is in Appendix B.

Theorem 3. S(IT, k, α, ε) ≤ S(UT, 6k, α/3, ε).
4.2 Identity Testing – Upper Bounds
In this section, we will show that by privatizing the statistic proposed in [38] we can achieve the sample complexity in Theorem 2 for all parameter ranges. The procedure is described in Algorithm 1.
Recall that M_x(X_1^m) is the number of appearances of x in X_1^m. Let

S(X_1^m) := (1/2) · Σ_{x ∈ [k]} | M_x(X_1^m)/m − 1/k |,    (1)

be the TV distance from the empirical distribution to the uniform distribution. Let μ(p) = E[S(X_1^m)] when the samples are drawn from distribution p. They show the following separation result on the expected value of S(X_1^m).

Lemma 5 ([38]). Let p be a distribution over [k] with d_TV(p, u[k]) ≥ α. Then there is a constant c such that

μ(p) − μ(u[k]) ≥ c α² min{ m²/k², m/k, 1/α }.
[38] used this result to show that appropriately thresholding S(X_1^m) is an optimal algorithm for identity testing. We first normalize the statistic to simplify the presentation of our DP algorithm. Let

Z(X_1^m) :=
k · ( S(X_1^m) − μ(u[k]) − (1/2) c α² · m²/k² ),   when m ≤ k,
m · ( S(X_1^m) − μ(u[k]) − (1/2) c α² · m/k ),     when k < m ≤ k/α²,
m · ( S(X_1^m) − μ(u[k]) − (1/2) c α ),            when m ≥ k/α²,    (2)

where c is the constant in Lemma 5, and μ(u[k]) is the expected value of S(X_1^m) when X_1^m are drawn from the uniform distribution.
Algorithm 1 Uniformity testing
Input: ε, α, i.i.d. samples X_1^m from p
1: Let Z(X_1^m) be evaluated from (1) and (2).
2: Generate Y ∼ B(σ(ε · Z)), where σ is the sigmoid function.
3: if Y = 0, return p = u[k]; else, return p ≠ u[k].
We now prove that this algorithm is ε-DP. We need the following sensitivity result.

Lemma 6. Δ(Z) ≤ 1 for all values of m and k.

Proof. Recall that S(X_1^m) = (1/2) · Σ_{x ∈ [k]} | M_x(X_1^m)/m − 1/k |. Changing any one symbol changes at most two of the M_x(X_1^m)'s, and therefore at most two of the terms, each by at most 1/m. Therefore, Δ(S(X_1^m)) ≤ 1/m for any m. When m ≤ k, this can be strengthened with the observation that M_x(X_1^m)/m ≥ 1/k for all x with M_x(X_1^m) ≥ 1. Therefore,

S(X_1^m) = (1/2) · ( Σ_{x : M_x(X_1^m) ≥ 1} ( M_x(X_1^m)/m − 1/k ) + Σ_{x : M_x(X_1^m) = 0} 1/k ) = Φ_0(X_1^m)/k,

where Φ_0(X_1^m) is the number of symbols not appearing in X_1^m. This changes by at most one when one symbol is changed, proving the result.
Using this lemma, ε · Z(X_1^m) changes by at most ε when X_1^m is changed at one location. Invoking Lemma 4, the probability of any output changes by at most a multiplicative factor of exp(ε), and the algorithm is ε-differentially private. To prove the sample complexity bound, we first show that the mean of the test statistic is well separated, using Lemma 5. Then we use the concentration bound of the test statistic from [38] to get the final complexity. Due to lack of space, the detailed proof of the sample complexity bound is given in Appendix C.
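Below is a minimal Python sketch of Algorithm 1 (ours, not the authors' code). The constant c from Lemma 5 and the mean μ(u[k]) are treated as given inputs; in the usage example we plug in an arbitrary c and estimate μ(u[k]) by simulation purely for illustration (the simulation itself is not part of the private mechanism).

```python
import numpy as np

rng = np.random.default_rng(0)

def S_stat(x, k):
    # S(X_1^m) = (1/2) * sum_x |M_x/m - 1/k|: TV distance of the empirical distribution to uniform
    m = len(x)
    emp = np.bincount(x, minlength=k) / m
    return 0.5 * np.abs(emp - 1.0 / k).sum()

def private_uniformity_test(x, k, alpha, eps, c, mu_uniform):
    # Normalize the statistic as in Eq. (2) so its sensitivity is at most 1,
    # then release a Bernoulli draw with bias sigmoid(eps * Z).
    m = len(x)
    s = S_stat(x, k)
    if m <= k:
        z = k * (s - mu_uniform - 0.5 * c * alpha**2 * m**2 / k**2)
    elif m <= k / alpha**2:
        z = m * (s - mu_uniform - 0.5 * c * alpha**2 * m / k)
    else:
        z = m * (s - mu_uniform - 0.5 * c * alpha)
    y = rng.random() < 1.0 / (1.0 + np.exp(-eps * z))  # Y ~ B(sigmoid(eps * Z))
    return "p != u[k]" if y else "p = u[k]"

# Hypothetical usage: c is illustrative; mu_uniform is estimated by simulating uniform samples.
k, m, alpha, eps, c = 100, 2000, 0.3, 1.0, 0.5
mu_uniform = np.mean([S_stat(rng.integers(0, k, m), k) for _ in range(200)])
print(private_uniformity_test(rng.integers(0, k, m), k, alpha, eps, c, mu_uniform))
```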
4.3 Sample Complexity Lower bounds for Uniformity Testing
In this section, we show the lower bound part of Theorem 2. The first term is the lower bound without privacy constraints, proved in [7]; here we prove the terms associated with privacy.

The simplest argument is for m ≥ k/α², and it gives a sense of how the coupling argument works. We consider the case of binary identity testing, where the goal is to test whether the bias of a coin is 1/2 or α-far from 1/2. This is a special case of identity testing for distributions over [k] (when k − 2 symbols have probability zero). This is strictly harder than the problem of distinguishing between B(1/2) and B(1/2 + α). The coupling given in Example 1 has expected Hamming distance αm. Hence, combining with Theorem 1, we get a lower bound of Ω(1/(αε)).

We now consider the cases m ≤ k and k < m ≤ k/α².
To this end, we invoke Le Cam's two point theorem, and design a hypothesis testing problem that will imply a lower bound on uniformity testing. The testing problem is to distinguish between the following two cases. Case 1: We are given m independent samples from the uniform distribution u[k]. Case 2: Generate a distribution p with d_TV(p, u[k]) ≥ α according to some prior over all such distributions; we are then given m independent samples from this distribution p. Le Cam's two point theorem [64] states that any lower bound for distinguishing between these two cases is a lower bound on the identity testing problem. We now describe the prior construction for Case 2, which is the same as considered by [7] for lower bounds on identity testing without privacy considerations. For each z ∈ {±1}^{k/2}, define a distribution p_z over [k] such that

p_z(2i − 1) = (1 + z_i · 2α)/k, and p_z(2i) = (1 − z_i · 2α)/k.

Then for any z, d_TV(p_z, u[k]) = α. For Case 2, choose p uniformly from these 2^{k/2} distributions. Let Q2 denote the distribution on [k]^m induced by this process. In other words, Q2 is a mixture of product distributions over [k]. In Case 1, let Q1 be the distribution of m i.i.d. samples from u[k]. To obtain a sample complexity lower bound for distinguishing the two cases, we design a coupling between Q1 and Q2, and bound its expected Hamming distance. While it can be shown that the Hamming distance of a coupling between the uniform distribution and any single one of the 2^{k/2} distributions grows as αm, it can be significantly smaller when we consider the mixture. In particular, the following lemma shows that there exist couplings with bounded Hamming distance.

Lemma 7. There is a coupling between X_1^m generated by Q1 and Y_1^m generated by Q2 such that

E[d_H(X_1^m, Y_1^m)] ≤ C · α² min{ m²/k, m^{3/2}/√k }.

The lemma is proved in Appendix D. Now applying Theorem 1, we get the bound in Theorem 2.
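A short Python sketch (ours, for illustration only) of the lower-bound construction: it builds the perturbed distribution p_z, checks that d_TV(p_z, u[k]) = α, and draws samples from the mixture Q2.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_distribution(k, alpha, z):
    # p_z(2i-1) = (1 + 2*alpha*z_i)/k, p_z(2i) = (1 - 2*alpha*z_i)/k, for z in {-1,+1}^{k/2}
    p = np.empty(k)
    p[0::2] = (1 + 2 * alpha * z) / k
    p[1::2] = (1 - 2 * alpha * z) / k
    return p

def sample_Q2(k, alpha, m):
    # Mixture Q2: first draw z uniformly at random, then draw m i.i.d. samples from p_z
    z = rng.choice([-1, 1], size=k // 2)
    p = perturbed_distribution(k, alpha, z)
    return rng.choice(k, size=m, p=p)

k, alpha, m = 10, 0.3, 1000
z = rng.choice([-1, 1], size=k // 2)
p = perturbed_distribution(k, alpha, z)
print(p.sum())                          # 1.0: p_z is a valid distribution
print(0.5 * np.abs(p - 1.0 / k).sum())  # 0.3: d_TV(p_z, u[k]) = alpha
print(sample_Q2(k, alpha, m)[:10])      # a few samples from the mixture Q2
```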
5 Closeness Testing
Recall the closeness testing problem from Section 2, and the tight non-private bounds from Table 1. Our main result in this section is the following theorem characterizing the sample complexity of differentially private algorithms for closeness testing.

Theorem 4. If α > 1/k^{1/4} and εα² > 1/k, then

S(CT, k, α, ε) = Θ( k^{2/3}/α^{4/3} + √k/(α√ε) );

otherwise,

Ω( √k/α² + √k/(α√ε) + 1/(αε) ) ≤ S(CT, k, α, ε) ≤ O( √k/α² + 1/(α²ε) ).
This theorem shows that in the sparse regime, when m = O(k), our bounds are tight up to constant factors in all parameters. To prove the upper bounds, we only consider the case when δ = 0, which suffices by Lemma 2. We privatize the closeness testing algorithm of [10]. To lighten the notation, we drop the explicit sequence arguments and let

μ_i := M_i(X_1^m), and ν_i := M_i(Y_1^m).

The statistic used by [10] is

Z(X_1^m, Y_1^m) := Σ_{i ∈ [k]} ( (μ_i − ν_i)² − μ_i − ν_i ) / (μ_i + ν_i),

where we assume that ( (μ_i − ν_i)² − μ_i − ν_i ) / (μ_i + ν_i) = 0 when μ_i + ν_i = 0. It turns out that this statistic has constant sensitivity, as shown in Lemma 8.

Lemma 8. Δ( Z(X_1^m, Y_1^m) ) ≤ 14.
Proof. Since Z(X_1^m, Y_1^m) is symmetric, without loss of generality assume that one of the symbols is changed in Y_1^m. This causes at most two of the ν_i's to change. Suppose ν_i ≥ 1, and it changes to ν_i − 1. When μ_i + ν_i > 1, the absolute change in the i-th term of the statistic is

| (μ_i − ν_i)²/(μ_i + ν_i) − (μ_i − ν_i + 1)²/(μ_i + ν_i − 1) | = | ( (μ_i + ν_i)(2μ_i − 2ν_i + 1) + (μ_i − ν_i)² ) / ( (μ_i + ν_i)(μ_i + ν_i − 1) ) |
≤ | (2μ_i − 2ν_i + 1)/(μ_i + ν_i − 1) | + | (μ_i − ν_i)/(μ_i + ν_i − 1) |
≤ 3 (|μ_i − ν_i| + 1)/(μ_i + ν_i − 1) ≤ 3 + 4/(μ_i + ν_i − 1) ≤ 7.

When μ_i + ν_i = 1, the change can again be bounded by 7. Since at most two of the ν_i's change, we obtain the desired bound.
We use the same approach with this test statistic as with uniformity testing to obtain a differentially private closeness testing method, described in Algorithm 2. Since the sensitivity of the statistic is at most 14, the input to the sigmoid changes by at most ε when any input sample is changed. Invoking Lemma 4, the probability of any output changes by at most a multiplicative factor of exp(ε), and the algorithm is ε-differentially private.
Algorithm 2 Closeness testing
Input: ε, α, sample access to distributions p and q
1: Z′ ← ( Z(X_1^m, Y_1^m) − (1/2) · m²α²/(4k + 2m) ) / 14
2: Generate Y ∼ B(σ(ε · Z′)), where σ is the sigmoid function.
3: if Y = 0, return p = q
4: else, return p ≠ q
It remains to show that Algorithm 2 satisfies the sample complexity upper bounds described in Theorem 4. We give the details in Appendix E, where the analysis of the lower bound is also given.
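A minimal Python sketch of the private closeness test (Algorithm 2), assuming the samples X_1^m and Y_1^m from p and q are already given; this is our illustrative sketch, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def closeness_stat(x, y, k):
    # Z(X, Y) = sum_i ((mu_i - nu_i)^2 - mu_i - nu_i) / (mu_i + nu_i), with 0/0 := 0
    mu = np.bincount(x, minlength=k)
    nu = np.bincount(y, minlength=k)
    tot = mu + nu
    num = (mu - nu) ** 2 - mu - nu
    return float(np.sum(np.divide(num, tot, out=np.zeros_like(num, dtype=float), where=tot > 0)))

def private_closeness_test(x, y, k, alpha, eps):
    m = len(x)
    z = closeness_stat(x, y, k)
    z_prime = (z - 0.5 * m**2 * alpha**2 / (4 * k + 2 * m)) / 14.0  # sensitivity-1 rescaling
    yes = rng.random() < 1.0 / (1.0 + np.exp(-eps * z_prime))       # Y ~ B(sigmoid(eps * Z'))
    return "p != q" if yes else "p = q"

# Hypothetical usage with two samples drawn from the same (uniform) distribution.
k, m, alpha, eps = 200, 5000, 0.3, 1.0
x = rng.integers(0, k, m)
y = rng.integers(0, k, m)
print(private_closeness_test(x, y, k, alpha, eps))  # typically "p = q"
```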
Acknowledgement
The authors thank Gautam Kamath for some very helpful suggestions about this work. | 1. What is the focus and contribution of the paper regarding differential privacy?
2. What are the strengths of the proposed technique using probabilistic coupling?
3. Are there any weaknesses or limitations in the paper, particularly regarding its novelty compared to prior works?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content? | Review | Review
This paper studies the finite sample complexity of identity testing and closeness testing under the constraints of differential privacy. The paper introduces a technique based on probabilistic coupling useful to prove lower bounds on the sample complexity of some statistical tasks. It uses this technique to show lower bounds on uniformity testing, which implies a lower bound on identity testing, and on closeness testing. The paper also provides upper bounds. Pros: +the general technique that the paper proposes, using probabilistic coupling to prove lower bounds, is neat and interesting and it could be applied to many other statistical tasks beyond the scope of this paper, +the paper's finite sample complexity results clarify the picture both with respect to the non-private setting, and with respect to previous works in the private setting, +the result showing that the reduction from identity testing to uniform testing in the non-private setting extends to the private setting is an interesting observation, +the paper is well written and gives enough details to understand the main results. Cons: -this is not the first paper using probabilistic coupling arguments in the setting of differential privacy and a few related works are missing: [1] gives a lower bound proof for confidence intervals using a coupling argument, [2,3] characterize differential privacy through a coupling argument and use this characterization in program verification, [4] uses an implicit coupling argument in one of their main results. [1] Vishesh Karwa, Salil P. Vadhan: Finite Sample Differentially Private Confidence Intervals. ITCS 2018. [2] Gilles Barthe, Noémie Fong, Marco Gaboardi, Benjamin Grégoire, Justin Hsu, Pierre-Yves Strub: Advanced Probabilistic Couplings for Differential Privacy. CCS 2016. [3] Gilles Barthe, Marco Gaboardi, Benjamin Grégoire, Justin Hsu, Pierre-Yves Strub: Proving Differential Privacy via Probabilistic Couplings. LICS 2016. [4] Cynthia Dwork, Moni Naor, Omer Reingold, Guy N. Rothblum: Pure Differential Privacy for Rectangle Queries via Private Partitions. ASIACRYPT (2) 2015. # After author feedback: Thanks for your answer. I agree with Reviewer #2, it would be great to have a description of the tests in the main part of the paper. I updated the list of references above on coupling.
NIPS | Title
Differentially Private Testing of Identity and Closeness of Discrete Distributions
Abstract
We study the fundamental problems of identity testing (goodness of fit), and closeness testing (two sample test) of distributions over k elements, under di erential privacy. While the problems have a long history in statistics, finite sample bounds for these problems have only been established recently. In this work, we derive upper and lower bounds on the sample complexity of both the problems under (Á, ”)-di erential privacy. We provide sample optimal algorithms for identity testing problem for all parameter ranges, and the first results for closeness testing. Our closeness testing bounds are optimal in the sparse regime where the number of samples is at most k. Our upper bounds are obtained by privatizing non-private estimators for these problems. The non-private estimators are chosen to have small sensitivity. We propose a general framework to establish lower bounds on the sample complexity of statistical tasks under di erential privacy. We show a bound on di erentially private algorithms in terms of a coupling between the two hypothesis classes we aim to test. By carefully constructing chosen priors over the hypothesis classes, and using Le Cam’s two point theorem we provide a general mechanism for proving lower bounds. We believe that the framework can be used to obtain strong lower bounds for other statistical tasks under privacy.
1 Introduction
Testing whether observed data conforms to an underlying model is a fundamental scientific problem. In a statistical framework, given samples from an unknown probabilistic model, the goal is to determine whether the underlying model has a property of interest. This question has received great attention in statistics as hypothesis testing [1, 2], where it was mostly studied in the asymptotic regime when the number of samples m æ Œ. In the past two decades there has been a lot of work from the computer science, information theory, and statistics community on various distribution testing problems in the non-asymptotic (small-sample) regime, where the domain size k could be potentially larger than m (See [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], references therein, and [16] for a recent survey). Here the goal is to characterize the minimum number of samples necessary (sample complexity) as a function of the domain size k, and the other parameters. At the same time, preserving the privacy of individuals who contribute to the data samples has emerged as one of the key challenges in designing statistical mechanisms over the last few years. For example, the privacy of individuals participating in surveys on sensitive subjects
úThe authors are listed in alphabetical order. This research was supported by NSF-CCF-CRII 1657471, and a grant from Cornell University.
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
is of utmost importance. Without a properly designed mechanism, statistical processing might divulge the sensitive information about the data. There have been many publicized instances of individual data being de-anonymized, including the deanonymization of Netflix database [17], and individual information from census-related data [18]. Protecting privacy for the purposes of data release, or even computation on data has been studied extensively across several fields, including statistics, machine learning, database theory, algorithm design, and cryptography (See e.g., [19, 20, 21, 22, 23, 24, 25]). While the motivation is clear, even a formal notion of privacy is not straight forward. We use di erential privacy [26], a notion which rose from database and cryptography literature, and has emerged as one of the most popular privacy measures (See [26, 27, 22, 28, 29, 30, 31, 32], references therein, and the recent book [33]). Roughly speaking, it requires that the output of the algorithm should be statistically close on two neighboring datasets. For a formal definition of di erential privacy, see Section 2. A natural question when designing a di erentially private algorithm is to understand how the data requirement grows to ensure privacy, along with the same accuracy. In this paper, we study the sample size requirements for di erentially private discrete distribution testing.
1.1 Results and Techniques
We consider two fundamental statistical tasks for testing distributions over [k]: (i) identity testing, where given sample access to an unknown distribution p, and a known distribution q, the goal is to decide whether p = q, or dT V (p, q) Ø –, and (ii) closeness testing, where given sample access to unknown distributions p, and q, the goal is to decide whether p = q, or dT V (p, q) Ø –. (See Section 2 for precise statements of these problems). Given di erential privacy constraints (Á, ”), we provide (Á, ”)-di erentially private algorithms for both these tasks. For identity testing, our bounds are optimal up to constant factors for all ranges of k, –, Á, ”, and for closeness testing the results are tight in the small sample regime where m = O(k). Our upper bounds are based on various methods to privatize the previously known tests. A critical component is to design and analyze test statistic that have low sensitivity (see Definition 4), in order to preserve privacy. We first state that any (Á + ”, 0)-DP algorithm is also an (Á, ”) algorithm. [34] showed that for testing problems, any (Á, ”) algorithm will also imply a (Á + c”, 0)-DP algorithm. Please refer to Lemma 2 and Lemma 3 for more detail. Therefore, for all the problems, we simply consider (Á, 0)-DP algorithms (Á-DP), and we can replace Á with (Á + ”) in both the upper and lower bounds without loss of generality. One of the main contributions of our work is to propose a general framework for establishing lower bounds for the sample complexity of statistical problems such as property estimation and hypothesis testing under privacy constraints. We describe this, and the other results below. A summary of the results is presented in Table 1, which we now describe in detail. 1. DP Lower Bounds via Coupling. We establish a general method to prove lower
bounds for distribution testing problems. Suppose Xm1 , and Y m1 are generated by two statistical sources. Further suppose there is a coupling between the two sources such that the expected hamming distance between the coupled samples is at most D, then if Á + ” = o(1/D), there is no (Á, ”)-di erentially private algorithm to distinguish between the two sources. This result is stated precisely in Theorem 1. By carefully using designed coupling schemes, we provide lower bounds for identity testing, and closeness testing. 2. Reduction from identity to uniformity. We reduce the problem of Á-DP identity testing of distributions over [k] to Á-DP uniformity testing over distributions over [6k]. Such a reduction, without privacy constraints was shown in [35], and we use their result to obtain a reduction that also preserves privacy, with at most a constant factor blow-up in the sample complexity. This result is given in Theorem 3. 3. Identity Testing. It was recently shown that O( Ô k
–2 ) [7, 36, 11, 37] samples are necessary and su cient for identity testing without privacy constraints. The statistic used in these papers are variants of chi-squared tests, which could have a high global sensitivity. Given the reduction from identity to uniformity, it su ces to consider uniformity testing. We consider the test statistic studied by [38] which is simply the distance of the empirical distribution to the uniform distribution. This statistic also has a low sensitivity, and
futhermore has the optimal sample complexity in all parameter ranges, without privacy constraints. In Theorem 2, we state the optimal sample complexity of identity testing. The upper bounds are derived by privatizing the statistic in [38]. For lower bound, we use our technique in Theorem 1. We design a coupling between the uniform distribution u[k], and a mixture of distributions, which are all at distance – from u[k] in total variation distance. In particular, we consider the mixture distribution used in [7]. Much of the technical details go into proving the existence of couplings with small expected Hamming distance. [34] studied identity testing under pure di erential privacy, and obtained an algorithm with complexity O 3 Ô k –2 + Ô k log k –3/2Á + (k log k) 1/3 –5/3Á2/3 4 . Our results improve their
bounds significantly. 4. Closeness Testing. Closeness testing problem was proposed by [3], and optimal bound
of 1 max{ k 2/3
–4/3 ,
Ô k –2 } 2
was shown in [10]. They proposed a chi-square based statistic, which we show has a small sensitivity. We privatize their algorithm to obtain the sample complexity bounds. In the sparse regime we prove a sample complexity bound of 1 k 2/3
–4/3 +
Ô k
– Ô Á
2 , and in the dense regime, we obtain a bound of O 1 Ô k
–2 + 1 –2Á
2 . These
results are stated in Theorem 4. Since closeness testing is a harder problem than identity testing, all the lower bounds from identity testing port over to closeness testing. The closeness testing lower bounds are given in Theorem 4.
1.2 Related Work
A number of papers have recently studied hypothesis testing problems under di erential privacy guarantees [39, 40, 41]. Some works analyze the distribution of the test statistic in the asymptotic regime. The work most closely related to ours is [34], which studied identity testing in the finite sample regime. We mentioned their guarantees along with our results on identity testing in the previous section. There has been a line of research for statistical testing and estimation problems under the notion of local di erential privacy [24, 23, 42, 43, 44, 45, 46, 47, 48, 49]. These papers study some basic statistical problems and provide minimax lower bounds using Fano’s inequality. [50] studies structured distribution estimation under di erential privacy. Information theoretic approaches to data privacy have been studied recently using quantities like mutual information, and guessing probability to quantify privacy [51, 52, 53, 54, 55]. [56, 57] provide methods to prove lower bounds on DP algorithms via packing. Recently, [58] use coupling to prove lower bounds on the sample complexity for di erentially private confidence intervals. Our results are more general, in that, we can handle mixtures of distributions, which can provide optimal lower bounds on identity testing. [59, 60] characterize
di erential privacy through a coupling argument. [61] also uses the idea of coupling implicitly when designing di erentially private partition algorithms. [62] uses our coupling argument to prove lower bounds for di erentially private property estimation problems. In a contemporaneous and independent work, [63], the authors study the same problems that we consider, and obtain the same upper bounds for the sparse case, when m Æ k. They also provide experimental results to show the performance of the privatized algorithms. However, their results are sub-optimal for m = (k) for identity testing, and they do not provide any lower bounds for the problems. Both [34], and [63] consider only pure-di erential privacy, which are a special case of our results.
Organization of the paper. In Section 2, we discuss the definitions and notations. A general technique for proving lower bounds for di erentially private algorithms is described in Section 3. Section 4 gives upper and lower bounds for identity testing, and closeness testing is studied in Section 5.
2 Preliminaries
Let k be the class of all discrete distributions over a domain of size k, which wlog is assumed to be [k] := {1, . . . ,k}. We denote length-m samples X1, . . . ,Xm by Xm1 . For x œ [k], let px be the probability of x under p. Let Mx(Xm1 ) be the number of times x appears in X m 1 . For A ™ [k], let p(A) = q
xœA px. Let X ≥ p denote that the random variable X has distribution p. Let u[k] be the uniform distribution over [k], and B(b) be the Bernoulli distribution with bias b. The total variation distance between distributions p, and q over [k] is dT V (p, q) := supAµ[k]{p(A) ≠ q(A)} = 12 Îp ≠ qÎ1.
Definition 1. Let p, and q be distributions over X , and Y respectively. A coupling between p and q is a distribution over X ◊ Y whose marginals are p and q respectively.
Definition 2. The Hamming distance between two sequences Xm1 and Y m1 is dH(Xm1 , Y m1 ) :=q m
i=1 I{Xi ”= Yi}, the number of positions where Xm1 , and Y m1 di er.
Definition 3. A randomized algorithm A on a set X m æ S is said to be (Á, ”)-di erentially private if for any S µ range(A), and all pairs of Xm1 , and Y m1 with dH(Xm1 , Y m1 ) Æ 1 such that Pr (A(Xm1 ) œ S) Æ eÁ · Pr (A(Y m1 ) œ S) + ”.
The case when ” = 0 is called pure di erential privacy. For simplicity, we denote pure di erential privacy as Á-di erential privacy (Á-DP). Next we state the group property of di erential privacy. We give a proof in Appendix A.1. Lemma 1. Let A be a (Á, ”)-DP algorithm, then for sequences xm1 , and ym1 with dH(xm1 , ym1 ) Æ t, and ’S µ range(A), Pr (A(xm1 ) œ S) Æ etÁ · Pr (A(ym1 ) œ S) + ”teÁ(t≠1).
The next two lemmas state a relationship between (Á, ”) and Á-di erential privacy. We give a proof of Lemma 2 in Appendix A.2. And Lemma 3 follows from [34]. Lemma 2. Any (Á + ”, 0)- di erentially private algorithm is also (Á, ”)-di erentially private.
Lemma 3. An (Á, ”)-DP algorithm for a testing problem can be converted to an (Á + c”, 0) algorithm for some constant c > 0.
Combining these two results, it su ces to prove bounds for (Á, 0)-DP, and plug in Á with (Á + ”) to obtain bounds that are tight up to constant factors for (Á, ”)-DP. The notion of sensitivity is useful in establishing bounds under di erential privacy. Definition 4. The sensitivity of f : [k]m æ R is
(f) := maxdH (Xm1 ,Y m1 )Æ1 |f(X m 1 ) ≠ f(Y m1 )| .
For x œ R, ‡(x) := 11+exp(≠x) = exp(x)
1+exp(x) is the sigmoid function. The following properties follow from the definition of ‡.
Lemma 4. 1. For all x, “ œ R, exp(≠ |“|) Æ ‡(x+“) ‡(x) Æ exp(|“|).
2. Let 0 < ÷ < 12 . Suppose x Ø log 1 ÷ . Then ‡(x) > 1 ≠ ÷.
Identity Testing (IT). Given description of q œ k over [k], parameters –, and m independent samples Xm1 from unknown p œ k. A is an (k, –)-identity testing algorithm for q, if when p = q, A outputs “p = q” with probability at least 0.9, and when dT V (p, q) Ø –, A outputs “p ”= q” with probability at least 0.9. Definition 5. The sample complexity of DP-identity testing, denoted S(IT, k, –, Á), is the smallest m for which there exists an Á-DP algorithm A that uses m samples to achieve (k, –)-identity testing. Without privacy concerns, S(IT, k, –) denotes the sample complexity. When q = u[k], the problem reduces to uniformity testing, and the sample complexity is denoted as S(UT, k, –, Á).
Closeness Testing (CT). Given m independent samples Xm1 , and Y m1 from unknown distributions p, and q. An algorithm A is an (k, –)-closeness testing algorithm if when p = q, A outputs p = q with probability at least 0.9, and when dT V (p, q) Ø –, A outputs p ”= q with probability at least 0.9. Definition 6. The sample complexity of DP-closeness testing, denoted S(CT, k, –, Á), is the smallest m for which there exists an Á-DP algorithm A that uses m samples to achieve (k, –)-closeness testing. When privacy is not a concern, we denote the sample complexity of closeness testing as S(CT, k, –).
Hypothesis Testing (HT). Suppose we have distributions p and q over X m, and Xm1 ≥ p, Y m
1 ≥ q, we say an algorithm A : X m æ {p, q} can distinguish between p and q if Pr (A(Xm1 ) = q) < 0.1 and Pr (A(Y m1 ) = p) < 0.1.
3 Privacy Bounds Via Coupling
Recall that coupling between distributions p and q over X , and Y, is a distribution over X ◊ Y whose marginal distributions are p and q (Definition 1). For simplicity, we treat coupling as a randomized function f : X æ Y such that if X ≥ p, then Y = f(X) ≥ q. Note that X, and Y are not necessarily independent. Example 1. Let B(b1), and B(b2) be Bernoulli distributions with bias b1, and b2 such that b1 < b2. Let p, and q be distributions over {0, 1}m obtained by m i.i.d. samples from B(b1), and B(b2) respectively. Let Xm1 be distributed according to p. Generate a sequence Y m1 as follows: If Xi = 1, then Yi = 1. If Xi = 0, we flip another coin with bias (b2 ≠b1)/(1≠b1), and let Yi be the output of this coin. Repeat the process independently for each i, such that the Yi’s are all independent of each other. Then Pr (Yi = 1) = b1 +(1≠ b1)(b2 ≠ b1)/(1≠ b1) = b2, and Y m1 is distributed according to q.
We would like to use coupling to prove lower bounds on di erentially private algorithms for testing problems. Let p and q be distributions over X m. If there is a coupling between p and q with a small expected Hamming distance, we might expect that the algorithm cannot have strong privacy guarantees. The following theorem formalizes this intuition: Theorem 1. Suppose there is a coupling between p and q over X m, such that E [dH(Xm1 , Y m1 )] Æ D where Xm1 ≥ p, Y m1 ≥ q. Then, any (Á, ”)-di erentially private hypothesis testing algorithm A : X m æ {p, q} on p and q must satisfy Á + ” = ! 1 D "
Proof. Let (Xm1 , Y m1 ) be distributed according to a coupling of p, and q with E [dH(Xm1 , Y m1 )] Æ D. By Markov’s inequality, Pr (dH(Xm1 , Y m1 ) > 10D) < Pr (dH(Xm1 , Y m1 ) > 10 · E [dH(Xm1 , Y m1 )]) < 0.1. Let xm1 and ym1 be the realization of Xm1 and Y m1 . Let W = {(xm1 , ym1 )|dH(xm1 , ym1 ) Æ 10D}. Then we have
0.1 Ø Pr (A(Xm1 ) = q) Ø ÿ
(xm1 ,ym1 )œW Pr (Xm1 = xm1 , Y m1 = ym1 ) · Pr (A(xm1 ) = q).
By Lemma 1, and Pr (dH(Xm1 , Y m1 ) > 10D) < 0.1, and Pr (A(ym1 ) = q) Æ 1,
Pr (A(Y m1 ) = q) Æ ÿ
(xm1 ,ym1 )œW Pr (xm1 , ym1 ) · Pr (A(ym1 ) = q) +
ÿ
(xm1 ,ym1 )/œW Pr (xm1 , ym1 ) · 1
Æ ÿ
(xm1 ,ym1 )œW Pr (xm1 , ym1 ) · (eÁ·10D Pr (A(xm1 ) = q) + 10D” · eÁ·10(D≠1)) + 0.1
Æ 0.1eÁ·10D + 10D” · eÁ·10D + 0.1.
Since we know Pr (A(Y m1 ) = q) > 0.9, then 0.9 < Pr (A(Y m1 ) = q) < 0.1eÁ·10D + 10D” · e Á·10D + 0.1. Hence, either eÁ·10D = (1) or 10D” = (1), which implies that D =
! min ) 1 Á , 1 ” *" =
1 1
Á+”
2 , proving the theorem.
Set ” = 0, we obtain the bound for pure di erential privacy. In the next few sections, we use this theorem to get sample complexity bounds for di erentially private testing problems.
4 Identity Testing
In this section, we prove the bounds for identity testing. Our main result is the following. Theorem 2.
S(IT, k, –, Á) = 1 k 1/2 –2 + max Ó k 1/2 –Á1/2 , k 1/3 –4/3Á2/3 , 1 –Á Ô2 .
Or we can write it according to the parameter range,
S(IT, k, –, Á) =
Y ___]
___[
1 Ô
k –2 + k
1/2
–Á1/2
2 , when k = ! 1 –4 " and k = ! 1 –2Á " ,
1 Ô
k –2 + k
1/3
–4/3Á2/3
2 , when k = ! –
Á
" and k = O ! 1 –4 + 1 –2Á " ,
1 Ô
k –2 + 1 –Á
2 , when k = O ! –
Á
" .
Our bounds are tight up to constant factors in all parameters. To get the sample complexity for (Á, ”)-di erential privacy, we can simply replace Á by (Á + ”). In Theorem 3 we will show a reduction from identity to uniformity testing under pure di erential privacy. Using this, it will be enough to design algorithms for uniformity testing, which is done in Section 4.2. Moreover since uniformity testing is a special case of identity testing, any lower bound for uniformity will port over to identity, and we give such bounds in Section 4.3.
4.1 Uniformity Testing implies Identity Testing
The sample complexity of testing identity of any distribution is O( Ô k
–2 ), a bound that is tight for the uniform distribution. Recently [35] proposed a scheme to reduce the problem of testing identity of distributions over [k] for total variation distance – to the problem of testing uniformity over [6k] with total variation parameter –/3. In other words, they show that S(IT, k, –) Æ S(UT, 6k, –/3). Building on [35], we prove that a similar bound also holds for di erentially private algorithms. The proof is in Appendix B. Theorem 3. S(IT, k, –, Á) Æ S(UT, 6k, –/3, Á).
4.2 Identity Testing – Upper Bounds
In this section, we will show that by privatizing the statistic proposed in [38] we can achieve the sample complexity in Theorem 2 for all parameter ranges. The procedure is described in Algorithm 1.
Recall that Mx(Xm1 ) is the number of appearances of x in Xm1 . Let
S(Xm1 ) := 1 2 ·
nÿ
x=1
---- Mx(Xm1 ) m ≠ 1 k ---- , (1)
be the TV distance from the empirical distribution to the uniform distribution. Let µ(p) = E [S(Xm1 )] when the samples are drawn from distribution p. They show the following separation result on the expected value of S(Xm1 ). Lemma 5 ([38]). Let p be a distribution over [k] and dT V (p, u[k]) Ø –, then there is a constant c such that
µ(p) ≠ µ(u[k]) Ø c–2 min Ó m 2 k2 , m k , 1 – Ô .
[38] used this result to show that thresholding S(Xm1 ) at 0 is an optimal algorithm for identity testing. We first normalize the statistic to simplify the presentation of our DP algorithm. Let
Z(Xm1 ) :=
Y _]
_[
k 1 S(Xm1 ) ≠ µ(u[k]) ≠ 12 c– 2 · m 2 k2 2 , when m Æ k, m ! S(Xm1 ) ≠ µ(u[k]) ≠ 12 c– 2 · m k " , when k < m Æ k
–2 , m ! S(Xm1 ) ≠ µ(u[k]) ≠ 12 c– " , when m Ø k –2 . (2)
where c is the constant in Lemma 5, and µ(u[k]) is the expected value of S(Xm1 ) when Xm1 are drawn from uniform distribution.
Algorithm 1 Uniformity testing Input: Á, –, i.i.d. samples Xm1 from p
1: Let Z(Xm1 ) be evaluated from (1), and (2). 2: Generate Y ≥ B(‡(Á · Z)), ‡ is the sigmoid function. 3: if Y = 0, return p = u[k], else, return p ”= u[k].
We now prove that this algorithm is ε-DP. We need the following sensitivity result.

Lemma 6. Δ(Z) ≤ 1 for all values of m and k.

Proof. Recall that S(X_1^m) = (1/2) · Σ_{x∈[k]} | M_x(X_1^m)/m − 1/k |. Changing any one symbol changes at most two of the M_x(X_1^m)'s. Therefore at most two of the terms change, each by at most 1/m, and hence Δ(S(X_1^m)) ≤ 1/m for any m. When m ≤ k, this can be strengthened with the observation that M_x(X_1^m)/m ≥ 1/k for all x with M_x(X_1^m) ≥ 1. Therefore, S(X_1^m) = (1/2) · ( Σ_{x : M_x(X_1^m) ≥ 1} ( M_x(X_1^m)/m − 1/k ) + Σ_{x : M_x(X_1^m) = 0} 1/k ) = Φ_0(X_1^m)/k, where Φ_0(X_1^m) is the number of symbols not appearing in X_1^m. This quantity changes by at most one when one symbol is changed, proving the result.
Using this lemma, ε · Z(X_1^m) changes by at most ε when X_1^m is changed at one location. Invoking Lemma 4, the probability of any output changes by a multiplicative factor of at most exp(ε), and the algorithm is ε-differentially private. To prove the sample complexity bound, we first show that the mean of the test statistic is well separated using Lemma 5. Then we use the concentration bound of the test statistic from [38] to get the final complexity. Due to lack of space, the detailed proof of the sample complexity bound is given in Appendix C.
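As a concrete illustration, the following is a minimal Python sketch of Algorithm 1 (not the authors' released implementation). The separation constant c from Lemma 5 and the uniform-case mean μ(u_[k]) are treated as given inputs here, since their exact values come from Lemma 5 and the analysis in Appendix C; samples are assumed to be integers in {0, ..., k−1}.

```python
import numpy as np

def statistic_S(samples, k):
    """Empirical TV distance to uniform, eq. (1): S = 0.5 * sum_x |M_x/m - 1/k|."""
    m = len(samples)
    counts = np.bincount(samples, minlength=k)
    return 0.5 * np.abs(counts / m - 1.0 / k).sum()

def normalized_Z(samples, k, alpha, c, mu_uniform):
    """Normalization of eq. (2); c and mu_uniform = E_uniform[S] are assumed given."""
    m = len(samples)
    S = statistic_S(samples, k)
    if m <= k:
        return k * (S - mu_uniform - 0.5 * c * alpha**2 * m**2 / k**2)
    elif m <= k / alpha**2:
        return m * (S - mu_uniform - 0.5 * c * alpha**2 * m / k)
    else:
        return m * (S - mu_uniform - 0.5 * c * alpha)

def private_uniformity_test(samples, k, alpha, eps, c, mu_uniform, rng=np.random):
    """Algorithm 1: randomized decision through a sigmoid of eps * Z (sensitivity of Z is <= 1)."""
    Z = normalized_Z(samples, k, alpha, c, mu_uniform)
    p_reject = 1.0 / (1.0 + np.exp(-eps * Z))   # sigmoid(eps * Z)
    return "p != uniform" if rng.random() < p_reject else "p = uniform"
```

Because changing one sample moves ε·Z by at most ε (Lemma 6), the acceptance probability changes by at most a factor exp(ε), mirroring the privacy argument above.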
4.3 Sample Complexity Lower bounds for Uniformity Testing
In this section, we show the lower bound part of Theorem 2. The first term is the lower bound without privacy constraints, proved in [7]. Here we prove the terms associated with privacy.
The simplest argument is for m ≥ k/α², and it illustrates how the coupling argument works. We consider the case of binary identity testing, where the goal is to test whether the bias of a coin is 1/2 or α-far from 1/2. This is a special case of identity testing for distributions over [k] (when k − 2 symbols have probability zero). This is strictly harder than the problem of distinguishing between B(1/2) and B(1/2 + α). The coupling given in Example 1 has expected Hamming distance αm. Hence, combining with Theorem 1, we get a lower bound of Ω(1/(αε)).
We now consider the cases m ≤ k and k < m ≤ k/α².
To this end, we invoke Le Cam's two-point theorem and design a hypothesis testing problem that implies a lower bound on uniformity testing. The testing problem is to distinguish between the following two cases. Case 1: We are given m independent samples from the uniform distribution u_[k]. Case 2: Generate a distribution p with d_TV(p, u_[k]) ≥ α according to some prior over all such distributions; we are then given m independent samples from this distribution p. Le Cam's two-point theorem [64] states that any lower bound for distinguishing between these two cases is a lower bound for the identity testing problem. We now describe the prior construction for Case 2, which is the same as considered by [7] for lower bounds on identity testing without privacy considerations. For each z ∈ {±1}^{k/2}, define a distribution p_z over [k] such that

p_z(2i − 1) = (1 + z_i · 2α)/k, and p_z(2i) = (1 − z_i · 2α)/k.

Then for any z, d_TV(p_z, u_[k]) = α. For Case 2, choose p uniformly from these 2^{k/2} distributions. Let Q2 denote the resulting distribution on [k]^m. In other words, Q2 is a mixture of product distributions over [k]. In Case 1, let Q1 be the distribution of m i.i.d. samples from u_[k]. To obtain a sample complexity lower bound for distinguishing the two cases, we design a coupling between Q1 and Q2 and bound its expected Hamming distance. While the Hamming distance of a coupling between the uniform distribution and any single one of the 2^{k/2} distributions grows as αm, it can be significantly smaller when we consider the mixture. In particular, the following lemma shows that there exist couplings with bounded Hamming distance.

Lemma 7. There is a coupling between X_1^m generated by Q1 and Y_1^m generated by Q2 such that

E[d_H(X_1^m, Y_1^m)] ≤ C · α² min{ m²/k, m^{3/2}/k^{1/2} }.
The lemma is proved in Appendix D. Now applying Theorem 1, we get the bound in Theorem 2.
5 Closeness Testing
Recall the closeness testing problem from Section 2, and the tight non-private bounds from Table 1. Our main result in this section is the following theorem characterizing the sample complexity of differentially private algorithms for closeness testing.

Theorem 4. If α > 1/k^{1/4} and εα² > 1/k,

S(CT, k, α, ε) = Θ( k^{2/3}/α^{4/3} + k^{1/2}/(α√ε) );

otherwise,

Ω( k^{1/2}/α² + k^{1/2}/(α√ε) + 1/(αε) ) ≤ S(CT, k, α, ε) ≤ O( k^{1/2}/α² + 1/(α²ε) ).
This theorem shows that in the sparse regime, when m = O(k), our bounds are tight up to constant factors in all parameters. To prove the upper bounds, it suffices by Lemma 2 to consider only the case δ = 0. We privatize the closeness testing algorithm of [10]. To simplify notation, we drop the explicit sequence arguments and let

μ_i := M_i(X_1^m), and ν_i := M_i(Y_1^m).

The statistic used by [10] is

Z(X_1^m, Y_1^m) := Σ_{i∈[k]} [ (μ_i − ν_i)² − μ_i − ν_i ] / (μ_i + ν_i),
where we take ((μ_i − ν_i)² − μ_i − ν_i)/(μ_i + ν_i) = 0 when μ_i + ν_i = 0. It turns out that this statistic has constant sensitivity, as shown in Lemma 8.

Lemma 8. Δ(Z(X_1^m, Y_1^m)) ≤ 14.
Proof. Since Z(X_1^m, Y_1^m) is symmetric, assume without loss of generality that one of the symbols is changed in Y_1^m. This causes at most two of the ν_i's to change. Suppose ν_i ≥ 1 and it changes to ν_i − 1, and suppose μ_i + ν_i > 1. The absolute change in the i-th term of the statistic is

| (μ_i − ν_i)²/(μ_i + ν_i) − (μ_i − ν_i + 1)²/(μ_i + ν_i − 1) |
 = | [ (μ_i + ν_i)(2μ_i − 2ν_i + 1) + (μ_i − ν_i)² ] / [ (μ_i + ν_i)(μ_i + ν_i − 1) ] |
 ≤ | (2μ_i − 2ν_i + 1)/(μ_i + ν_i − 1) | + | (μ_i − ν_i)/(μ_i + ν_i − 1) |
 ≤ ( 3|μ_i − ν_i| + 1 )/(μ_i + ν_i − 1) ≤ 3 + 4/(μ_i + ν_i − 1) ≤ 7.

When μ_i + ν_i = 1, the change can again be bounded by 7. Since at most two of the ν_i's change, we obtain the desired bound.
We use the same approach with the test statistic as for uniformity testing to obtain a differentially private closeness testing method, described in Algorithm 2. Since the sensitivity of the statistic is at most 14, the input to the sigmoid changes by at most ε when any input sample is changed. Invoking Lemma 4, the probability of any output changes by a multiplicative factor of at most exp(ε), and the algorithm is ε-differentially private.
Algorithm 2 Closeness testing
Input: ε, α, sample access to distributions p and q
1: Z′ ← ( Z(X_1^m, Y_1^m) − (1/2) · m²α²/(4k + 2m) ) / 14
2: Generate Y ∼ B(σ(ε · Z′))
3: if Y = 0, return p = q
4: else, return p ≠ q
The remaining part is to show that Algorithm 2 satisfies the sample complexity upper bounds described in Theorem 4. We give the details in Appendix E, where the analysis of the lower bound is also given.
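For illustration, a minimal Python sketch of Algorithm 2 follows (not the authors' implementation). Samples are assumed to be integers in {0, ..., k−1}, and the constant 14 is the sensitivity bound of Lemma 8.

```python
import numpy as np

def closeness_statistic(x_samples, y_samples, k):
    """Chi-square-style statistic of [10]: sum_i ((mu_i - nu_i)^2 - mu_i - nu_i)/(mu_i + nu_i), with 0/0 := 0."""
    mu = np.bincount(x_samples, minlength=k).astype(float)
    nu = np.bincount(y_samples, minlength=k).astype(float)
    num = (mu - nu) ** 2 - mu - nu
    den = mu + nu
    terms = np.where(den > 0, num / np.maximum(den, 1.0), 0.0)
    return terms.sum()

def private_closeness_test(x_samples, y_samples, k, alpha, eps, rng=np.random):
    """Algorithm 2: center Z, rescale by the sensitivity bound 14, then randomize via a sigmoid."""
    m = len(x_samples)
    Z = closeness_statistic(x_samples, y_samples, k)
    Z_prime = (Z - 0.5 * m**2 * alpha**2 / (4 * k + 2 * m)) / 14.0
    p_reject = 1.0 / (1.0 + np.exp(-eps * Z_prime))
    return "p != q" if rng.random() < p_reject else "p = q"
```

Since Δ(Z) ≤ 14 (Lemma 8), the rescaled statistic Z′ has sensitivity at most 1, so the input ε·Z′ to the sigmoid changes by at most ε between neighboring datasets, exactly as in the uniformity tester.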
Acknowledgement
The authors thank Gautam Kamath for some very helpful suggestions about this work. | 1. What are the main contributions and findings of the paper regarding sample complexities of private versions of identity and closeness testing problems?
2. How do the upper and lower bounds for identity testing and closeness testing compare, particularly when the number of samples is large?
3. What are the privatized test statistics used in the paper, and how do they relate to the original test statistics?
4. Can you provide additional discussion on the significance and impact of the results, as well as potential future research directions?
5. Are there any areas where the paper could be improved or expanded upon, such as including more details on the test statistics or providing a concluding section summarizing the main findings and future research directions? | Review | Review
This paper presents upper and lower bounds on the sample complexities of private versions of the identity and closeness testing problems. The upper and lower bounds for identity testing match for all parameter ranges. However, for the closeness testing problem, there's a gap between the upper and lower bounds when the number of samples is larger than "k" (size of the support of the distributions). The results of this paper are interesting and fundamental. The authors privatize recent test statistics: DGPP17 (for identity testing) and CDVV14 (for closeness testing) to get their upper bounds. The lower bound for identity testing is based on Le Cam's method and the same lower bound is used for closeness testing. The paper is well written and organized. Comments: 1. In lines 129 and 130: "Both [CDK17], and [ADR17] consider only pure-differential privacy, which are a special case of our results." But isn't that okay since you argue that pure-DP is enough? 2. It would be nice if the test statistics (and their private versions) are discussed in the main writeup. Perhaps you can move the proof of Theorem 1 (which seems very basic) to the appendix to make some space? 3. It would be nice if you can end with a concluding section that not only recaps the main results but also discusses some interesting extensions and future research directions. Overall, I think this is a good paper and recommend it for publication. # After author feedback Thanks for your detailed response. I am glad to see that you have addressed my points. Looking forward to reading your final version :)
NIPS | Title
Differentially Private Testing of Identity and Closeness of Discrete Distributions
Abstract
We study the fundamental problems of identity testing (goodness of fit) and closeness testing (two sample test) of distributions over k elements, under differential privacy. While the problems have a long history in statistics, finite sample bounds for these problems have only been established recently. In this work, we derive upper and lower bounds on the sample complexity of both problems under (ε, δ)-differential privacy. We provide sample optimal algorithms for the identity testing problem for all parameter ranges, and the first results for closeness testing. Our closeness testing bounds are optimal in the sparse regime where the number of samples is at most k. Our upper bounds are obtained by privatizing non-private estimators for these problems. The non-private estimators are chosen to have small sensitivity. We propose a general framework to establish lower bounds on the sample complexity of statistical tasks under differential privacy. We show a bound on differentially private algorithms in terms of a coupling between the two hypothesis classes we aim to test. By carefully constructing priors over the hypothesis classes, and using Le Cam's two-point theorem, we provide a general mechanism for proving lower bounds. We believe that the framework can be used to obtain strong lower bounds for other statistical tasks under privacy.
1 Introduction
Testing whether observed data conforms to an underlying model is a fundamental scientific problem. In a statistical framework, given samples from an unknown probabilistic model, the goal is to determine whether the underlying model has a property of interest. This question has received great attention in statistics as hypothesis testing [1, 2], where it was mostly studied in the asymptotic regime when the number of samples m → ∞. In the past two decades there has been a lot of work from the computer science, information theory, and statistics community on various distribution testing problems in the non-asymptotic (small-sample) regime, where the domain size k could be potentially larger than m (See [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], references therein, and [16] for a recent survey). Here the goal is to characterize the minimum number of samples necessary (sample complexity) as a function of the domain size k, and the other parameters. At the same time, preserving the privacy of individuals who contribute to the data samples has emerged as one of the key challenges in designing statistical mechanisms over the last few years. For example, the privacy of individuals participating in surveys on sensitive subjects
*The authors are listed in alphabetical order. This research was supported by NSF-CCF-CRII 1657471, and a grant from Cornell University.
is of utmost importance. Without a properly designed mechanism, statistical processing might divulge the sensitive information about the data. There have been many publicized instances of individual data being de-anonymized, including the de-anonymization of the Netflix database [17], and individual information from census-related data [18]. Protecting privacy for the purposes of data release, or even computation on data, has been studied extensively across several fields, including statistics, machine learning, database theory, algorithm design, and cryptography (See e.g., [19, 20, 21, 22, 23, 24, 25]). While the motivation is clear, even a formal notion of privacy is not straightforward. We use differential privacy [26], a notion which rose from the database and cryptography literature, and has emerged as one of the most popular privacy measures (See [26, 27, 22, 28, 29, 30, 31, 32], references therein, and the recent book [33]). Roughly speaking, it requires that the output of the algorithm should be statistically close on two neighboring datasets. For a formal definition of differential privacy, see Section 2. A natural question when designing a differentially private algorithm is to understand how the data requirement grows to ensure privacy, along with the same accuracy. In this paper, we study the sample size requirements for differentially private discrete distribution testing.
1.1 Results and Techniques
We consider two fundamental statistical tasks for testing distributions over [k]: (i) identity testing, where given sample access to an unknown distribution p and a known distribution q, the goal is to decide whether p = q or d_TV(p, q) ≥ α, and (ii) closeness testing, where given sample access to unknown distributions p and q, the goal is to decide whether p = q or d_TV(p, q) ≥ α. (See Section 2 for precise statements of these problems.) Given differential privacy constraints (ε, δ), we provide (ε, δ)-differentially private algorithms for both tasks. For identity testing, our bounds are optimal up to constant factors for all ranges of k, α, ε, δ, and for closeness testing the results are tight in the small sample regime where m = O(k). Our upper bounds are based on various methods to privatize the previously known tests. A critical component is to design and analyze test statistics that have low sensitivity (see Definition 4), in order to preserve privacy. We first state that any (ε + δ, 0)-DP algorithm is also an (ε, δ)-DP algorithm. [34] showed that for testing problems, any (ε, δ)-DP algorithm also implies an (ε + cδ, 0)-DP algorithm; see Lemma 2 and Lemma 3 for details. Therefore, for all the problems, we simply consider (ε, 0)-DP algorithms (ε-DP), and we can replace ε with (ε + δ) in both the upper and lower bounds without loss of generality. One of the main contributions of our work is to propose a general framework for establishing lower bounds on the sample complexity of statistical problems such as property estimation and hypothesis testing under privacy constraints. We describe this and the other results below. A summary of the results is presented in Table 1, which we now describe in detail.

1. DP Lower Bounds via Coupling. We establish a general method to prove lower bounds for distribution testing problems. Suppose X_1^m and Y_1^m are generated by two statistical sources. Further suppose there is a coupling between the two sources such that the expected Hamming distance between the coupled samples is at most D. Then, if ε + δ = o(1/D), there is no (ε, δ)-differentially private algorithm that distinguishes between the two sources. This result is stated precisely in Theorem 1. By carefully designed coupling schemes, we provide lower bounds for identity testing and closeness testing.

2. Reduction from identity to uniformity. We reduce the problem of ε-DP identity testing of distributions over [k] to ε-DP uniformity testing of distributions over [6k]. Such a reduction, without privacy constraints, was shown in [35], and we use their result to obtain a reduction that also preserves privacy, with at most a constant factor blow-up in the sample complexity. This result is given in Theorem 3.

3. Identity Testing. It was recently shown that O(√k/α²) samples [7, 36, 11, 37] are necessary and sufficient for identity testing without privacy constraints. The statistics used in these papers are variants of chi-squared tests, which can have high global sensitivity. Given the reduction from identity to uniformity, it suffices to consider uniformity testing. We consider the test statistic studied by [38], which is simply the distance of the empirical distribution to the uniform distribution. This statistic also has low sensitivity, and furthermore has the optimal sample complexity in all parameter ranges without privacy constraints. In Theorem 2, we state the optimal sample complexity of identity testing. The upper bounds are derived by privatizing the statistic in [38]. For the lower bound, we use our technique from Theorem 1. We design a coupling between the uniform distribution u_[k] and a mixture of distributions that are all at distance α from u_[k] in total variation distance; in particular, we consider the mixture distribution used in [7]. Much of the technical work goes into proving the existence of couplings with small expected Hamming distance. [34] studied identity testing under pure differential privacy and obtained an algorithm with complexity O( √k/α² + √(k log k)/(α^{3/2}ε) + (k log k)^{1/3}/(α^{5/3}ε^{2/3}) ). Our results improve their bounds significantly.

4. Closeness Testing. The closeness testing problem was proposed by [3], and the optimal bound of Θ( max{ k^{2/3}/α^{4/3}, √k/α² } ) was shown in [10]. They proposed a chi-square based statistic, which we show has small sensitivity. We privatize their algorithm to obtain the sample complexity bounds. In the sparse regime we prove a sample complexity bound of Θ( k^{2/3}/α^{4/3} + √k/(α√ε) ), and in the dense regime we obtain a bound of O( √k/α² + 1/(α²ε) ). These results are stated in Theorem 4. Since closeness testing is a harder problem than identity testing, all the lower bounds for identity testing port over to closeness testing. The closeness testing lower bounds are given in Theorem 4.
1.2 Related Work
A number of papers have recently studied hypothesis testing problems under differential privacy guarantees [39, 40, 41]. Some works analyze the distribution of the test statistic in the asymptotic regime. The work most closely related to ours is [34], which studied identity testing in the finite sample regime. We mentioned their guarantees along with our results on identity testing in the previous section. There has been a line of research for statistical testing and estimation problems under the notion of local differential privacy [24, 23, 42, 43, 44, 45, 46, 47, 48, 49]. These papers study some basic statistical problems and provide minimax lower bounds using Fano's inequality. [50] studies structured distribution estimation under differential privacy. Information theoretic approaches to data privacy have been studied recently using quantities like mutual information and guessing probability to quantify privacy [51, 52, 53, 54, 55]. [56, 57] provide methods to prove lower bounds on DP algorithms via packing. Recently, [58] use coupling to prove lower bounds on the sample complexity of differentially private confidence intervals. Our results are more general, in that we can handle mixtures of distributions, which can provide optimal lower bounds on identity testing. [59, 60] characterize differential privacy through a coupling argument. [61] also uses the idea of coupling implicitly when designing differentially private partition algorithms. [62] uses our coupling argument to prove lower bounds for differentially private property estimation problems. In a contemporaneous and independent work, [63], the authors study the same problems that we consider, and obtain the same upper bounds for the sparse case, when m ≤ k. They also provide experimental results to show the performance of the privatized algorithms. However, their results are sub-optimal for m = Ω(k) for identity testing, and they do not provide any lower bounds for the problems. Both [34] and [63] consider only pure differential privacy, which is a special case of our results.
Organization of the paper. In Section 2, we discuss the definitions and notation. A general technique for proving lower bounds for differentially private algorithms is described in Section 3. Section 4 gives upper and lower bounds for identity testing, and closeness testing is studied in Section 5.
2 Preliminaries
Let Δ_k be the class of all discrete distributions over a domain of size k, which without loss of generality is assumed to be [k] := {1, . . . , k}. We denote length-m samples X_1, . . . , X_m by X_1^m. For x ∈ [k], let p_x be the probability of x under p. Let M_x(X_1^m) be the number of times x appears in X_1^m. For A ⊆ [k], let p(A) = Σ_{x∈A} p_x. Let X ∼ p denote that the random variable X has distribution p. Let u_[k] be the uniform distribution over [k], and B(b) be the Bernoulli distribution with bias b. The total variation distance between distributions p and q over [k] is d_TV(p, q) := sup_{A⊆[k]} {p(A) − q(A)} = (1/2) ‖p − q‖_1.
Definition 1. Let p and q be distributions over X and Y respectively. A coupling between p and q is a distribution over X × Y whose marginals are p and q respectively.
Definition 2. The Hamming distance between two sequences X_1^m and Y_1^m is d_H(X_1^m, Y_1^m) := Σ_{i=1}^m I{X_i ≠ Y_i}, the number of positions where X_1^m and Y_1^m differ.
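As a small illustration (not part of the paper), the two distances above can be written as short helper functions:

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two distributions on [k]: 0.5 * ||p - q||_1."""
    return 0.5 * np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)).sum()

def hamming_distance(x, y):
    """Number of positions where two equal-length sequences differ."""
    return int(np.sum(np.asarray(x) != np.asarray(y)))
```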
Definition 3. A randomized algorithm A : X^m → S is said to be (ε, δ)-differentially private if for any S ⊆ range(A) and all pairs X_1^m, Y_1^m with d_H(X_1^m, Y_1^m) ≤ 1, we have Pr(A(X_1^m) ∈ S) ≤ e^ε · Pr(A(Y_1^m) ∈ S) + δ.
The case when δ = 0 is called pure differential privacy. For simplicity, we denote pure differential privacy as ε-differential privacy (ε-DP). Next we state the group property of differential privacy. We give a proof in Appendix A.1. Lemma 1. Let A be an (ε, δ)-DP algorithm. Then for sequences x_1^m and y_1^m with d_H(x_1^m, y_1^m) ≤ t, and for all S ⊆ range(A), Pr(A(x_1^m) ∈ S) ≤ e^{tε} · Pr(A(y_1^m) ∈ S) + δ t e^{ε(t−1)}.
The next two lemmas state a relationship between (ε, δ)- and ε-differential privacy. We give a proof of Lemma 2 in Appendix A.2; Lemma 3 follows from [34]. Lemma 2. Any (ε + δ, 0)-differentially private algorithm is also (ε, δ)-differentially private.
Lemma 3. An (ε, δ)-DP algorithm for a testing problem can be converted to an (ε + cδ, 0)-DP algorithm for some constant c > 0.
Combining these two results, it suffices to prove bounds for (ε, 0)-DP and substitute ε with (ε + δ) to obtain bounds that are tight up to constant factors for (ε, δ)-DP. The notion of sensitivity is useful in establishing bounds under differential privacy. Definition 4. The sensitivity of f : [k]^m → R is

Δ(f) := max_{d_H(X_1^m, Y_1^m) ≤ 1} | f(X_1^m) − f(Y_1^m) |.
For x ∈ R, σ(x) := 1/(1 + exp(−x)) = exp(x)/(1 + exp(x)) is the sigmoid function. The following properties follow from the definition of σ.

Lemma 4. 1. For all x, γ ∈ R, exp(−|γ|) ≤ σ(x + γ)/σ(x) ≤ exp(|γ|).
2. Let 0 < η < 1/2. Suppose x ≥ log(1/η). Then σ(x) > 1 − η.
Identity Testing (IT). Given a description of q ∈ Δ_k over [k], a parameter α, and m independent samples X_1^m from an unknown p ∈ Δ_k, A is a (k, α)-identity testing algorithm for q if, when p = q, A outputs "p = q" with probability at least 0.9, and when d_TV(p, q) ≥ α, A outputs "p ≠ q" with probability at least 0.9. Definition 5. The sample complexity of DP-identity testing, denoted S(IT, k, α, ε), is the smallest m for which there exists an ε-DP algorithm A that uses m samples to achieve (k, α)-identity testing. Without privacy concerns, S(IT, k, α) denotes the sample complexity. When q = u_[k], the problem reduces to uniformity testing, and the sample complexity is denoted as S(UT, k, α, ε).
Closeness Testing (CT). Given m independent samples X_1^m and Y_1^m from unknown distributions p and q, an algorithm A is a (k, α)-closeness testing algorithm if, when p = q, A outputs p = q with probability at least 0.9, and when d_TV(p, q) ≥ α, A outputs p ≠ q with probability at least 0.9. Definition 6. The sample complexity of DP-closeness testing, denoted S(CT, k, α, ε), is the smallest m for which there exists an ε-DP algorithm A that uses m samples to achieve (k, α)-closeness testing. When privacy is not a concern, we denote the sample complexity of closeness testing as S(CT, k, α).
Hypothesis Testing (HT). Suppose we have distributions p and q over X^m, and X_1^m ∼ p, Y_1^m ∼ q. We say an algorithm A : X^m → {p, q} can distinguish between p and q if Pr(A(X_1^m) = q) < 0.1 and Pr(A(Y_1^m) = p) < 0.1.
3 Privacy Bounds Via Coupling
Recall that a coupling between distributions p and q over X and Y is a distribution over X × Y whose marginals are p and q (Definition 1). For simplicity, we treat a coupling as a randomized function f : X → Y such that if X ∼ p, then Y = f(X) ∼ q. Note that X and Y are not necessarily independent. Example 1. Let B(b1) and B(b2) be Bernoulli distributions with biases b1 and b2 such that b1 < b2. Let p and q be distributions over {0, 1}^m obtained by m i.i.d. samples from B(b1) and B(b2) respectively. Let X_1^m be distributed according to p. Generate a sequence Y_1^m as follows: if X_i = 1, then Y_i = 1; if X_i = 0, we flip another coin with bias (b2 − b1)/(1 − b1) and let Y_i be the output of this coin. Repeat the process independently for each i, so that the Y_i's are all independent of each other. Then Pr(Y_i = 1) = b1 + (1 − b1)(b2 − b1)/(1 − b1) = b2, and Y_1^m is distributed according to q.
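A quick numerical sanity check of Example 1 (a sketch with illustrative values b1 = 0.5, b2 = 0.6): sampling through the coupling reproduces the B(b2) marginal for Y, while the fraction of coordinates on which the coupled sequences disagree concentrates around b2 − b1.

```python
import numpy as np

def coupled_pair(m, b1, b2, rng):
    """Couple X ~ B(b1)^m with Y ~ B(b2)^m as in Example 1 (requires b1 < b2)."""
    x = (rng.random(m) < b1).astype(int)
    extra = (rng.random(m) < (b2 - b1) / (1 - b1)).astype(int)
    y = np.where(x == 1, 1, extra)       # Y_i = 1 if X_i = 1, else flip the extra coin
    return x, y

rng = np.random.default_rng(0)
m, b1, b2 = 10_000, 0.5, 0.6
x, y = coupled_pair(m, b1, b2, rng)
print(y.mean())                # close to b2 = 0.6, so the marginal of Y is B(b2)
print(np.sum(x != y) / m)      # close to b2 - b1 = 0.1, the per-coordinate Hamming rate
```

In particular, the expected Hamming distance of this coupling is (b2 − b1)·m, which is what Theorem 1 below turns into a privacy lower bound.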
We would like to use coupling to prove lower bounds on differentially private algorithms for testing problems. Let p and q be distributions over X^m. If there is a coupling between p and q with a small expected Hamming distance, we expect that no algorithm distinguishing them can have strong privacy guarantees. The following theorem formalizes this intuition. Theorem 1. Suppose there is a coupling between p and q over X^m such that E[d_H(X_1^m, Y_1^m)] ≤ D, where X_1^m ∼ p and Y_1^m ∼ q. Then any (ε, δ)-differentially private hypothesis testing algorithm A : X^m → {p, q} for p and q must satisfy ε + δ = Ω(1/D).
Proof. Let (X_1^m, Y_1^m) be distributed according to a coupling of p and q with E[d_H(X_1^m, Y_1^m)] ≤ D. By Markov's inequality, Pr(d_H(X_1^m, Y_1^m) > 10D) ≤ Pr(d_H(X_1^m, Y_1^m) > 10 · E[d_H(X_1^m, Y_1^m)]) < 0.1. Let x_1^m and y_1^m be realizations of X_1^m and Y_1^m, and let W = {(x_1^m, y_1^m) | d_H(x_1^m, y_1^m) ≤ 10D}. Then we have

0.1 ≥ Pr(A(X_1^m) = q) ≥ Σ_{(x_1^m, y_1^m)∈W} Pr(X_1^m = x_1^m, Y_1^m = y_1^m) · Pr(A(x_1^m) = q).

By Lemma 1, Pr(d_H(X_1^m, Y_1^m) > 10D) < 0.1, and Pr(A(y_1^m) = q) ≤ 1,

Pr(A(Y_1^m) = q) ≤ Σ_{(x_1^m, y_1^m)∈W} Pr(x_1^m, y_1^m) · Pr(A(y_1^m) = q) + Σ_{(x_1^m, y_1^m)∉W} Pr(x_1^m, y_1^m) · 1
 ≤ Σ_{(x_1^m, y_1^m)∈W} Pr(x_1^m, y_1^m) · ( e^{ε·10D} Pr(A(x_1^m) = q) + 10Dδ · e^{ε·10(D−1)} ) + 0.1
 ≤ 0.1 e^{ε·10D} + 10Dδ · e^{ε·10D} + 0.1.

Since we know Pr(A(Y_1^m) = q) > 0.9, we get 0.9 < 0.1 e^{ε·10D} + 10Dδ · e^{ε·10D} + 0.1. Hence, either e^{ε·10D} = Ω(1) or 10Dδ = Ω(1), which implies that D = Ω( min{ 1/ε, 1/δ } ) = Ω( 1/(ε + δ) ), proving the theorem.
Setting δ = 0, we obtain the bound for pure differential privacy. In the next few sections, we use this theorem to derive sample complexity bounds for differentially private testing problems.
4 Identity Testing
In this section, we prove the bounds for identity testing. Our main result is the following.

Theorem 2.
S(IT, k, α, ε) = Θ( k^{1/2}/α² + max{ k^{1/2}/(α ε^{1/2}), k^{1/3}/(α^{4/3} ε^{2/3}), 1/(αε) } ).

Equivalently, written out according to the parameter range,

S(IT, k, α, ε) =
  Θ( √k/α² + k^{1/2}/(α ε^{1/2}) ),        when k = Ω(1/α⁴) and k = Ω(1/(α²ε)),
  Θ( √k/α² + k^{1/3}/(α^{4/3} ε^{2/3}) ),  when k = Ω(α/ε) and k = O(1/α⁴ + 1/(α²ε)),
  Θ( √k/α² + 1/(αε) ),                     when k = O(α/ε).
Our bounds are tight up to constant factors in all parameters. To get the sample complexity for (ε, δ)-differential privacy, we can simply replace ε by (ε + δ). In Theorem 3 we will show a reduction from identity to uniformity testing under pure differential privacy. Using this, it will be enough to design algorithms for uniformity testing, which is done in Section 4.2. Moreover, since uniformity testing is a special case of identity testing, any lower bound for uniformity will port over to identity, and we give such bounds in Section 4.3.
4.1 Uniformity Testing implies Identity Testing
The sample complexity of testing identity of any distribution is O(√k/α²), a bound that is tight for the uniform distribution. Recently [35] proposed a scheme to reduce the problem of testing identity of distributions over [k] for total variation distance α to the problem of testing uniformity over [6k] with total variation parameter α/3. In other words, they show that S(IT, k, α) ≤ S(UT, 6k, α/3). Building on [35], we prove that a similar bound also holds for differentially private algorithms. The proof is in Appendix B.

Theorem 3. S(IT, k, α, ε) ≤ S(UT, 6k, α/3, ε).
4.2 Identity Testing – Upper Bounds
In this section, we will show that by privatizing the statistic proposed in [38] we can achieve the sample complexity in Theorem 2 for all parameter ranges. The procedure is described in Algorithm 1.
Recall that M_x(X_1^m) is the number of appearances of x in X_1^m. Let

S(X_1^m) := (1/2) · Σ_{x∈[k]} | M_x(X_1^m)/m − 1/k |,   (1)

be the TV distance from the empirical distribution to the uniform distribution. Let μ(p) = E[S(X_1^m)] when the samples are drawn from distribution p. [38] shows the following separation result on the expected value of S(X_1^m).

Lemma 5 ([38]). Let p be a distribution over [k] with d_TV(p, u_[k]) ≥ α. Then there is a constant c such that

μ(p) − μ(u_[k]) ≥ c α² min{ m²/k², m/k, 1/α }.
[38] used this result to show that thresholding S(X_1^m) at 0 is an optimal algorithm for identity testing. We first normalize the statistic to simplify the presentation of our DP algorithm. Let

Z(X_1^m) :=
  k · ( S(X_1^m) − μ(u_[k]) − (1/2) c α² · m²/k² ),   when m ≤ k,
  m · ( S(X_1^m) − μ(u_[k]) − (1/2) c α² · m/k ),     when k < m ≤ k/α²,
  m · ( S(X_1^m) − μ(u_[k]) − (1/2) c α ),            when m ≥ k/α²,   (2)

where c is the constant in Lemma 5, and μ(u_[k]) is the expected value of S(X_1^m) when X_1^m is drawn from the uniform distribution.
Algorithm 1 Uniformity testing
Input: ε, α, i.i.d. samples X_1^m from p
1: Let Z(X_1^m) be evaluated from (1) and (2).
2: Generate Y ∼ B(σ(ε · Z)), where σ is the sigmoid function.
3: if Y = 0, return p = u_[k]; else, return p ≠ u_[k].
We now prove that this algorithm is ε-DP. We need the following sensitivity result.

Lemma 6. Δ(Z) ≤ 1 for all values of m and k.

Proof. Recall that S(X_1^m) = (1/2) · Σ_{x∈[k]} | M_x(X_1^m)/m − 1/k |. Changing any one symbol changes at most two of the M_x(X_1^m)'s. Therefore at most two of the terms change, each by at most 1/m, and hence Δ(S(X_1^m)) ≤ 1/m for any m. When m ≤ k, this can be strengthened with the observation that M_x(X_1^m)/m ≥ 1/k for all x with M_x(X_1^m) ≥ 1. Therefore, S(X_1^m) = (1/2) · ( Σ_{x : M_x(X_1^m) ≥ 1} ( M_x(X_1^m)/m − 1/k ) + Σ_{x : M_x(X_1^m) = 0} 1/k ) = Φ_0(X_1^m)/k, where Φ_0(X_1^m) is the number of symbols not appearing in X_1^m. This quantity changes by at most one when one symbol is changed, proving the result.
Using this lemma, ε · Z(X_1^m) changes by at most ε when X_1^m is changed at one location. Invoking Lemma 4, the probability of any output changes by a multiplicative factor of at most exp(ε), and the algorithm is ε-differentially private. To prove the sample complexity bound, we first show that the mean of the test statistic is well separated using Lemma 5. Then we use the concentration bound of the test statistic from [38] to get the final complexity. Due to lack of space, the detailed proof of the sample complexity bound is given in Appendix C.
4.3 Sample Complexity Lower bounds for Uniformity Testing
In this section, we show the lower bound part of Theorem 2. The first term is the lower bound without privacy constraints, proved in [7]. Here we prove the terms associated with privacy.
The simplest argument is for m ≥ k/α², and it illustrates how the coupling argument works. We consider the case of binary identity testing, where the goal is to test whether the bias of a coin is 1/2 or α-far from 1/2. This is a special case of identity testing for distributions over [k] (when k − 2 symbols have probability zero). This is strictly harder than the problem of distinguishing between B(1/2) and B(1/2 + α). The coupling given in Example 1 has expected Hamming distance αm. Hence, combining with Theorem 1, we get a lower bound of Ω(1/(αε)).
We now consider the cases m ≤ k and k < m ≤ k/α².
To this end, we invoke Le Cam's two-point theorem and design a hypothesis testing problem that implies a lower bound on uniformity testing. The testing problem is to distinguish between the following two cases. Case 1: We are given m independent samples from the uniform distribution u_[k]. Case 2: Generate a distribution p with d_TV(p, u_[k]) ≥ α according to some prior over all such distributions; we are then given m independent samples from this distribution p. Le Cam's two-point theorem [64] states that any lower bound for distinguishing between these two cases is a lower bound for the identity testing problem. We now describe the prior construction for Case 2, which is the same as considered by [7] for lower bounds on identity testing without privacy considerations. For each z ∈ {±1}^{k/2}, define a distribution p_z over [k] such that

p_z(2i − 1) = (1 + z_i · 2α)/k, and p_z(2i) = (1 − z_i · 2α)/k.

Then for any z, d_TV(p_z, u_[k]) = α. For Case 2, choose p uniformly from these 2^{k/2} distributions. Let Q2 denote the resulting distribution on [k]^m. In other words, Q2 is a mixture of product distributions over [k]. In Case 1, let Q1 be the distribution of m i.i.d. samples from u_[k]. To obtain a sample complexity lower bound for distinguishing the two cases, we design a coupling between Q1 and Q2 and bound its expected Hamming distance. While the Hamming distance of a coupling between the uniform distribution and any single one of the 2^{k/2} distributions grows as αm, it can be significantly smaller when we consider the mixture. In particular, the following lemma shows that there exist couplings with bounded Hamming distance.

Lemma 7. There is a coupling between X_1^m generated by Q1 and Y_1^m generated by Q2 such that

E[d_H(X_1^m, Y_1^m)] ≤ C · α² min{ m²/k, m^{3/2}/k^{1/2} }.
The lemma is proved in Appendix D. Now applying Theorem 1, we get the bound in Theorem 2.
5 Closeness Testing
Recall the closeness testing problem from Section 2, and the tight non-private bounds from Table 1. Our main result in this section is the following theorem characterizing the sample complexity of differentially private algorithms for closeness testing.

Theorem 4. If α > 1/k^{1/4} and εα² > 1/k,

S(CT, k, α, ε) = Θ( k^{2/3}/α^{4/3} + k^{1/2}/(α√ε) );

otherwise,

Ω( k^{1/2}/α² + k^{1/2}/(α√ε) + 1/(αε) ) ≤ S(CT, k, α, ε) ≤ O( k^{1/2}/α² + 1/(α²ε) ).
This theorem shows that in the sparse regime, when m = O(k), our bounds are tight up to constant factors in all parameters. To prove the upper bounds, it suffices by Lemma 2 to consider only the case δ = 0. We privatize the closeness testing algorithm of [10]. To simplify notation, we drop the explicit sequence arguments and let

μ_i := M_i(X_1^m), and ν_i := M_i(Y_1^m).

The statistic used by [10] is

Z(X_1^m, Y_1^m) := Σ_{i∈[k]} [ (μ_i − ν_i)² − μ_i − ν_i ] / (μ_i + ν_i),
where we take ((μ_i − ν_i)² − μ_i − ν_i)/(μ_i + ν_i) = 0 when μ_i + ν_i = 0. It turns out that this statistic has constant sensitivity, as shown in Lemma 8.

Lemma 8. Δ(Z(X_1^m, Y_1^m)) ≤ 14.
Proof. Since Z(X_1^m, Y_1^m) is symmetric, assume without loss of generality that one of the symbols is changed in Y_1^m. This causes at most two of the ν_i's to change. Suppose ν_i ≥ 1 and it changes to ν_i − 1, and suppose μ_i + ν_i > 1. The absolute change in the i-th term of the statistic is

| (μ_i − ν_i)²/(μ_i + ν_i) − (μ_i − ν_i + 1)²/(μ_i + ν_i − 1) |
 = | [ (μ_i + ν_i)(2μ_i − 2ν_i + 1) + (μ_i − ν_i)² ] / [ (μ_i + ν_i)(μ_i + ν_i − 1) ] |
 ≤ | (2μ_i − 2ν_i + 1)/(μ_i + ν_i − 1) | + | (μ_i − ν_i)/(μ_i + ν_i − 1) |
 ≤ ( 3|μ_i − ν_i| + 1 )/(μ_i + ν_i − 1) ≤ 3 + 4/(μ_i + ν_i − 1) ≤ 7.

When μ_i + ν_i = 1, the change can again be bounded by 7. Since at most two of the ν_i's change, we obtain the desired bound.
We use the same approach with the test statistic as for uniformity testing to obtain a differentially private closeness testing method, described in Algorithm 2. Since the sensitivity of the statistic is at most 14, the input to the sigmoid changes by at most ε when any input sample is changed. Invoking Lemma 4, the probability of any output changes by a multiplicative factor of at most exp(ε), and the algorithm is ε-differentially private.
Algorithm 2 Closeness testing
Input: ε, α, sample access to distributions p and q
1: Z′ ← ( Z(X_1^m, Y_1^m) − (1/2) · m²α²/(4k + 2m) ) / 14
2: Generate Y ∼ B(σ(ε · Z′))
3: if Y = 0, return p = q
4: else, return p ≠ q
The remaining part is to show that Algorithm 2 satisfies the sample complexity upper bounds described in Theorem 4. We give the details in Appendix E, where the analysis of the lower bound is also given.
Acknowledgement
The authors thank Gautam Kamath for some very helpful suggestions about this work. | 1. What are the main contributions and novel aspects introduced by the paper regarding testing problems for distributions over finite domains?
2. What are the strengths of the paper, particularly in terms of sample complexity upper and lower bounds, and privatizing reductions from identity testing to uniformity testing?
3. Do you have any questions or concerns about the paper's approach to private closeness testing, experimental results, or its extension to (ε, δ)-DP? | Review | Review
This paper studies two basic testing problems for distributions over finite domains. The first is the identity testing problem, where given a known distribution q and i.i.d. samples from p, the goal is to distinguish between the case where p = q and the case where the distributions are far in total variation distance. A second, closely related problem, is the closeness testing problem where q is also taken to be an unknown distribution accessed via i.i.d. samples. This submission studies the sample complexity of solving these problems subject to differential privacy. For identity testing, it gives sample complexity upper and lower bounds which match up to constant factors. For closeness testing, it gives bounds which match in the "sparse" regime where the data domain is much larger than the accuracy/privacy parameters, and which match up to polynomial factors otherwise. The private identity testing problem that the authors consider had been studied in prior work of Cai, Diakonikolas, and Kamath, and this submission gives an improved (and simplified) upper bound. Private closeness testing appears not to have been studied before. (Independent work of Aliakbarpour, Diakonikolas, and Rubinfeld gives similar upper bounds which are complemented by experimental results, but does not study lower bounds.) The upper bounds are obtained by a) privatizing a reduction from identity testing to uniformity testing, due to Goldreich, and b) modifying recent uniformity and closeness testers to guarantee differential privacy. These modifications turn out to be relatively straightforward, as the test statistics introduced in these prior works have low sensitivity. Hence, private testers can be obtained by making randomized decisions based on these test statistics, instead of deterministically thresholding them. Lower bounds follow from a new coupling technique. The idea is to construct two distributions on finite samples that an accurate tester must be able to distinguish. However, if there is a coupling between these distributions where the samples have low expected Hamming distance, then differentially private algorithms will have a hard time telling them apart. This is a nice trick which is powerful enough to prove tight lower bounds, and extends generically to give lower bounds for (eps, delta)-DP. To summarize, the results are fairly interesting, deftly leverage recent advances in distribution testing, and introduce some nice new techniques. I think this is a solid paper for NIPS. After reading author feedback: I still believe this is a strong paper and should be accepted. |
NIPS | Title
Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
Abstract
Identifying unfamiliar inputs, also known as out-of-distribution (OOD) detection, is a crucial property of any decision making process. A simple and empirically validated technique is based on deep ensembles where the variance of predictions over different neural networks acts as a substitute for input uncertainty. Nevertheless, a theoretical understanding of the inductive biases leading to the performance of deep ensemble’s uncertainty estimation is missing. To improve our description of their behavior, we study deep ensembles with large layer widths operating in simplified linear training regimes, in which the functions trained with gradient descent can be described by the neural tangent kernel. We identify two sources of noise, each inducing a distinct inductive bias in the predictive variance at initialization. We further show theoretically and empirically that both noise sources affect the predictive variance of non-linear deep ensembles in toy models and realistic settings after training. Finally, we propose practical ways to eliminate part of these noise sources leading to significant changes and improved OOD detection in trained deep ensembles.
1 Introduction
Modern artificial intelligence uses intricate deep neural networks to process data, make predictions and take actions. One of the crucial steps toward allowing these agents to act in the real world is to incorporate a reliable mechanism for estimating uncertainty – in particular when human lives are at risk [1, 2]. Although the ongoing success of deep learning is remarkable, the increasing data, model and training algorithm complexity make a thorough understanding of their inner workings increasingly difficult. This applies when trying to understand when and why a system is certain or uncertain about a given output and is therefore the topic of numerous publications [3–10].
Principled mechanisms for uncertainty quantification would rely on Bayesian inference with an appropriate prior. This has led to the development of (approximate) Bayesian inference methods for deep neural networks [11–15]. Simply aggregating an ensemble of models [16] and using the disagreement of their predictions as a substitute for uncertainty has gained popularity. However, the theoretical justification of deep ensembles remains a matter of debate, see Wilson and Izmailov [17]. Although a link between Bayesian inference and deep ensembles can be obtained, see [18, 19], an understanding of the widely adopted standard deep ensemble and it’s predictive distribution is still missing [20, 21]. Note that even for principled Bayesian approaches there is no valid theoretical or practical OOD guarantee without a proper definition of out-of-distribution data [22].
One avenue to simplify the analyses of deep neural networks that gained a lot of attention in recent years is to increase the layer width to infinity [23, 24] or to very large values [25, 26]. In the former regime, an intriguing equivalence of infinitely wide deep networks at initialization and Gaussian processes allows for exact Bayesian inference and therefore principled uncertainty estimation. Although it is not possible to generally derive a Bayesian posterior for trained infinite or finite layer width networks, the resulting model predictions can be expressed analytically by kernels. Given this favorable mathematical description, the question of how powerful and similar these models are compared to their arguably black-box counterparts arises, with e.g. moderate width, complex optimizers and training stochasticity [25, 27–32].
In this paper, we leverage this tractable description of trained neural networks and take a first step towards understanding the predictive distribution of neural networks ensembles with large but finite width. Building on top of the various studies mentioned, we do so by studying the case where these networks can be described by a kernel and study the effect of two distinct noise sources stemming from the network initialization: The noise in the functional initialization of the network and the initialization noise of the gradient, which affects the training and therefore the kernel. As we will show, these noise sources will affect the predictive distributions differently and influence the network’s generalization on in- and out-of-distribution data.
Our contributions are the following:
• We provide a first order approximation of the predictive variance of an ensemble of linearly trained, finite-width neural networks. We identify interpretable terms in the refined variance description, originating from 2 distinct noise sources, and further provide their analytical expression for single layer neural networks with ReLU non-linearities.
• We show theoretically that under mild assumptions these refined variance terms survive nonlinear training for sufficiently large width, and therefore contribute to the predictive variance of non-linearly trained deep ensembles. Crucially, our result suggests that any finer description of the predictive variance of a linearized ensemble can be erased by nonlinear training.
• We conduct empirical studies validating our theoretical results, and investigate how the different variance terms influence generalization on in - and out-of-distribution. We highlight the practical implications of our theory by proposing simple methods to isolate noise sources in realistic settings which can lead to improved OOD detection.1
2 Neural network ensembles and their relations to kernels
Let fθ = f(·, θ) : Rh0 → RhL denote a neural network parameterized by the weights θ ∈ Rn. The weights consist of weight matrices and bias vectors {(Wl, bl)}Ll=1 describing the following feed-forward computation beginning with the input data x0:
z^{l+1} = (σ_w/√h_l) W^{l+1} x^l + b^{l+1} with x^{l+1} = ϕ(z^{l+1}).   (1)
Here hl is the dimension of the vector xl and ϕ is a pointwise non-linearity such as the softplus log(1 + ex) or Rectified Linear Unit i.e. max(0, x) (ReLU) [33]. We follow Jacot et al. [24] and use σw = √ 2 to control the standard deviation of the initialised weights W lij , b l i ∼ N (0, 1).
Given a set of N datapoints X = (xi)0≤i≤N ∈ RN×h0 and targets Y = (yi)0≤i≤N ∈ RN×hL , we consider regression problems with the goal of finding θ∗ which minimizes the mean squared error (MSE) loss L(θ) = 12 ∑N i=0 ∥f(xi, θ)− yi∥22. For ease of notation, we denote by f(X , θ) ∈ RN ·hL the vectorized evaluation of f on each datapoint and Y ∈ RN ·hL the target vector for the entire dataset. As the widths of the hidden layers grow towards infinity, the distribution of outputs at initialization f(x, θ0) converges to a multivariate gaussian distribution due to the Central Limit Theorem [23]. The resulting function can then accurately be described as a zero-mean Gaussian process, coined Neural Network Gaussian Process (NNGP), where the covariance of a pair of output neurons i, j for data x and x′ is given by the kernel
1Source code for all experiments: github.com/seijin-kobayashi/disentangle-predvar
K(x, x′)i,j = lim h→∞ E[f i(x, θ0)f j(x′, θ0)] (2)
with h = min(h1, ..., hL−1). This equivalence can be used to analytically compute the Bayesian posterior of infinitely wide Bayesian neural networks [34].
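To make this concrete, here is a minimal NumPy sketch (not from the paper's repository) that draws finite-width networks of the form in equation (1) at initialization and Monte-Carlo estimates the NNGP covariance of equation (2) for a scalar output; the layer widths, ReLU choice and ensemble size are illustrative assumptions.

```python
import numpy as np

def init_params(widths, rng):
    """widths = [h0, ..., hL]; every entry of W^l and b^l is drawn from N(0, 1)."""
    return [(rng.standard_normal((h_out, h_in)), rng.standard_normal(h_out))
            for h_in, h_out in zip(widths[:-1], widths[1:])]

def forward(params, x, sigma_w=np.sqrt(2.0)):
    """Feed-forward computation of eq. (1) with ReLU between layers."""
    for i, (W, b) in enumerate(params):
        x = sigma_w / np.sqrt(W.shape[1]) * W @ x + b
        if i < len(params) - 1:          # no non-linearity after the last layer
            x = np.maximum(x, 0.0)
    return x

# Monte-Carlo estimate of the NNGP covariance on two inputs {x, x'}.
rng = np.random.default_rng(0)
x, x_prime = rng.standard_normal(10), rng.standard_normal(10)
outs = []
for _ in range(1000):
    params = init_params([10, 512, 1], rng)          # one hidden layer of width 512
    outs.append([forward(params, x)[0], forward(params, x_prime)[0]])
outs = np.array(outs)
print(np.cov(outs.T))   # approximates the 2x2 NNGP kernel matrix K on {x, x'}
```

As the hidden width grows, the empirical covariance of the sampled outputs approaches the deterministic NNGP kernel of equation (2).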
On the other hand infinite width models trained via gradient descent (GD) can be described by the Neural Tangent Kernel (NTK). Given θ, the NTK Θθ of fθ is a matrix in RN ·hL × RN ·hL with the (i, j)-entry given as the following dot product
⟨∇θf(xi, θ),∇θf(xj , θ)⟩ (3)
where we consider without loss of generality the output dimension of f to be h_L = 1 for ease of notation. Furthermore, we denote by Θ_θ(X, X) := ∇_θ f(X, θ) ∇_θ f(X, θ)^T the matrix and by Θ_θ(x′, X) := ∇_θ f(x′, θ) ∇_θ f(X, θ)^T the vector form of the NTK, highlighting the dependencies on different datapoints.
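For a one-hidden-layer ReLU network, the parameter gradients in equation (3) can be written out by hand, which gives a self-contained sketch of the empirical NTK (a simplified illustration, not the paper's code; in practice automatic differentiation would be used):

```python
import numpy as np

def grad_params(x, W1, b1, W2, b2, sigma_w=np.sqrt(2.0)):
    """Flattened gradient of the scalar output of a 1-hidden-layer ReLU net w.r.t. all parameters."""
    d, h = W1.shape[1], W1.shape[0]
    z1 = sigma_w / np.sqrt(d) * W1 @ x + b1             # pre-activations
    a1 = np.maximum(z1, 0.0)                             # ReLU activations
    dz1 = sigma_w / np.sqrt(h) * W2.ravel() * (z1 > 0)   # df/dz1
    gW1 = np.outer(dz1, sigma_w / np.sqrt(d) * x)        # df/dW1
    gb1 = dz1                                            # df/db1
    gW2 = sigma_w / np.sqrt(h) * a1                      # df/dW2
    gb2 = np.ones(1)                                     # df/db2
    return np.concatenate([gW1.ravel(), gb1, gW2, gb2])

def empirical_ntk(X, params):
    """NTK matrix of eq. (3): Theta[i, j] = <grad f(x_i), grad f(x_j)>."""
    G = np.stack([grad_params(x, *params) for x in X])
    return G @ G.T
```

Here params is the tuple (W1, b1, W2, b2) of a single initialization θ0; averaging empirical_ntk over many initializations gives a plug-in estimate of the mean kernel used later.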
Lee et al. [25] showed that for sufficiently wide networks under common parametrizations, the gradient descent dynamics of the model with a sufficiently small learning rate behaves closely to its linearly trained counterpart, i.e. its first-order Taylor expansion in parameter space. In this gradient flow regime, after training on the mean squared error converges, we can rewrite the predictions of the linearly trained models in the following closed-form:
f^lin(x) = f(x, θ_0) + Q_{θ_0}(x, X)(Y − f(X, θ_0))   (4)

where Q_{θ_0}(x, X) := Θ_{θ_0}(x, X) Θ_{θ_0}(X, X)^{−1}, with Θ_{θ_0} the NTK at initialization, i.e. of f(·, θ_0). The linearization error throughout training, sup_{t≥0} ‖f^lin_t(x) − f_t(x)‖, is further shown to decrease with the width of the network, bounded by O(h^{−1/2}). Note that one can also linearize the dynamics without increasing the width of a neural network but by simply changing its output scaling [26].
When moving from finite to the infinite width limit the training of a multilayer perceptron (MLP) can again be described with the NTK, which now converges to a deterministic kernel Θ∞ [24], a result which extends to convolutional neural networks [27] and other common architectures [35, 36]. A fully trained neural network model can then be expressed as
f^∞(x) = f(x, θ_0) + Θ_∞(x, X) Θ_∞(X, X)^{−1}(Y − f(X, θ_0)).   (5)
where f({X , x}, θ0) ∼ N (0,K({X , x}, {X , x})).
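Given callables for the initial function and a kernel, the closed forms in equations (4) and (5) reduce to a single linear solve; the following is a minimal sketch (parameter names are illustrative, not the paper's API):

```python
import numpy as np

def linearized_prediction(x_test, X_train, Y_train, f0, kernel):
    """Closed-form gradient-flow solution of eq. (4)/(5) for MSE regression.

    f0(X): model outputs at initialization; kernel(A, B): NTK block Theta(A, B).
    With the empirical NTK of one initialization this is eq. (4); with the
    limiting kernel Theta_inf it is eq. (5).
    """
    K_tt = kernel(X_train, X_train)              # Theta(X, X)
    K_st = kernel(x_test, X_train)               # Theta(x, X)
    residual = Y_train - f0(X_train)             # Y - f(X, theta_0)
    return f0(x_test) + K_st @ np.linalg.solve(K_tt, residual)
```

The only stochastic ingredients are f0 (the functional noise at initialization) and, in the finite-width case, the kernel itself, which is exactly the decomposition analysed in the next subsection.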
2.1 Predictive distribution of linearly trained deep ensembles
In this Section, we study in detail the predictive distribution of ensembles of linearly trained models, i.e. the distribution of f^lin(x) given x over random initializations θ_0. In particular, for a given input x, we are interested in the mean E[f(x)] and variance V[f(x)] of trained models over random initialization. The former is typically used for the prediction of a deep ensemble, while the latter is used for estimating model or epistemic uncertainty, utilized e.g. for OOD detection or exploration. To start, we describe the simpler case of the infinite width limit and a deterministic NTK, which allows us to compute the mean and variance of the solutions found by training easily:

E[f^∞(x)] = Q_∞(x, X) Y,
V[f^∞(x)] = K(x, x) + Q_∞(x, X) K(X, X) Q_∞(x, X)^T − 2 Q_∞(x, X) K(X, x),   (6)

where we introduced Q_∞(x, X) = Θ_∞(x, X) Θ_∞(X, X)^{−1}. For finite width linearly trained networks, the kernel is no longer deterministic, and its stochasticity influences the predictive distribution. Because there is probability mass assigned to the neighborhood of rare events where the NTK kernel matrix is not invertible, the expectation and variance over parameter initialization of the expression in equation 4 diverge to infinity.
Fortunately, due to the convergence in probability of the empirical NTK to the infinite width counterpart [24], we know these singularities become rarer and ultimately vanish as the width increases to infinity. Intuitively, we should therefore be able to assign meaningful, finite values to these undefined
quantities, which ignores these rare singularities. The delta method [37] in statistics formalizes this intuition, by using Taylor approximation to smooth out the singularities before computing the mean or variance. When the probability mass of the empirical NTK is highly concentrated in a small radius around the limiting NTK, the expression 4 is roughly linear w.r.t the NTK entries. Given this observation, we prove (see Appendix A.2) the following result, and justify that the obtained expression is informative of the empirical predictive mean and variance of deep ensembles. Rewriting equation 4 into
f lin(x) =f(x, θ0) + Q̄(x,X )(Y − f(X , θ0)) + [Qθ0(x,X )− Q̄(x,X )](Y − f(X , θ0))
(7)
where Q̄(x,X ) = Θ̄(x,X )Θ̄(X ,X )−1 and Θ̄ = E(Θθ0), we state: Proposition 2.1. For one hidden layer networks parametrized as in equation 1, given an input x and training data (X ,Y), when increasing the hidden layer width h, we have the following convergence in distribution over random initialization θ0:
√ h[Qθ0(x,X )− Q̄(x,X )](Y − f(X , θ0)) dist.→ Z(x)
where Z(x) is the linear combination of 2 Chi-Square distributions, such that
V(Z(x)) = lim h→∞ (hVc(x) + hVi(x))
where
Vc(x) =V[Θθ0(x,X )Θ̄(X ,X )−1Y] + V[Q̄(x,X )Θθ0(X ,X )Θ̄(X ,X )−1Y] − 2Cov[Q̄(x,X )Θθ0(X ,X )Θ̄(X ,X )−1Y,Θθ0(x,X )Θ̄(X ,X )−1Y], Vi(x) =V[Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0)] + V[Q̄(x,X )Θθ0(X ,X )Θ̄(X ,X )−1f(X , θ0)] − 2Cov[Q̄(x,X )Θθ0(X ,X )Θ̄(X ,X )−1f(X , θ0),Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0)].
We omit the dependence of θ0 on the width h for notational simplicity. While the expectation or variance of equation 4 for any finite width is undefined, their empirical mean and variance are with high probability indistinguishable from that of the above limiting distribution (see Lemma A.1). Note that the above proposition assumes the noise in Θθ0 to be decorrelated from f(x, θ0), which can hold true under specific constructions of the network that are of practical interest as we will see in the following (c.f. Appendix A.3.2).
Given Proposition 2.1, we now describe the approximate variance of f lin(x) for L = 2, which we can extend to the general L > 2 case using an informal argument (see A.2.2): Proposition 2.2. Let f be a neural network with identical width of all hidden layers, h1 = h2 = ... = hL−1 = h. We assume ∥Θθ0 − Θ̄∥2F = Op( 1h ). Then,
V[f lin(x)] ≈ Va(x) + Vc(x) + Vi(x) + Vcor(x) + Vres(x)
where
Va(x) =K̄(x, x) + Q̄(x,X )K̄(X ,X )Q̄(x,X )T − 2Q̄(x,X )K̄(X , x), Vcor(x) =2E [ [Θθ0(x,X )− Q̄(x,X )Θθ0(X ,X )][Θ̄(X ,X )−1Θθ0(X ,X )Θ̄(X ,X )−1] ] · [K̄(X , x)− K̄(X ,X )Q̄(x,X )T ]
and Vres(x) = O(h−2) as well as K̄ the expectation over initializations of the finite width counterpart of the NNGP kernel.
Several observations can be made: First, the above expression only involves the first and second moments of the empirical, finite width NTK, as well as the first moment of the NNGP kernel. These terms can be analytically computed in some settings. We provide in Appendix A.4.3 some of the moments for the special case of a 1-hidden layer ReLU network, and show the analytical expression correspond to empirical findings.
Second, the decomposition demonstrates the interplay of 2 distinct noise sources in the predictive variance:
• Va is the variance associated to the expression in the first line of equation 7. Intuitively, it is the finite width counterpart of the predictive variance of the infinite width model (equation 6), as it assumes the NTK is deterministic. The variance stems entirely from the functional noise at initialization and converges to the infinite width predictive variance as the width increases.
• Vc and Vi stem from the second line of equation 7. Vc is a first-order approximation of the predictive variance of a linearly trained network with pure kernel noise, without functional noise i.e. Vc ≈ V[Qθ0(x,X )Y]. On the other hand, Vi depends on the interplay between the 2 noises, and can be identified as the predictive variance of a deep ensemble with a deterministic NTK Θ̄ and a new functional prior g(x) = Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0). Intuitively, this new functional prior can be seen as a data-specific inductive bias on the NTK formulation of the predictive variance (see Appendix A.3.1 for more details).
• Vcor is a covariance term between the 2 terms in equation 7 and also contains the correlation terms between Θθ0 and f(x, θ0). In general, its analytical expression is challenging to obtain as it requires the 4th moments of the finite width NNGP kernel fluctuation. Here, we provide its expression under the same simplifying assumption that the noise in Θθ0 is decorrelated from f(x, θ0). We therefore do not attempt to describe it in general, and focus in our empirical Section on the terms that are tractable and can be easily isolated for practical purposes.
Each of V_c, V_i, V_cor decays as O(h^{−1}); together with V_a, these terms provide a first-order approximation of the predictive variance of f^lin(x). Note that V_a and V_c are of particular interest, as removing either the kernel or the functional noise at initialization will collapse the predictive variance of the trained ensemble to either one of these 2 terms.
2.2 Predictive distribution of standard deep ensemble of large width
An important question at this point is to which extent our analysis for linearly trained models applies to a fully and non-linearly trained deep ensemble. Indeed, if the discrepancy between the predictive variance of a linearly trained ensemble and its non-linear counterpart is of a larger order of magnitude than the higher-order correction in the variance term, the latter can be ’erased’ by training. Building on top of previous work, we show that, under the assumption of an empirically supported conjecture [38], for one hidden layer networks trained on the Mean Squared Error (MSE) loss, this discrepancy is asymptotically dominated by the refined predictive variance terms of the linearly trained ensemble we described in Section 2.1.
Proposition 2.3. Let f be a neural network with identical width of all hidden layers, h1 = h2 = ... = hL−1 = h, and such that the derivative of the non-linearity ϕ′ is bounded and Lipschitz continuous on R. Let the training data (X ,Y) be contained in some compact set, such that the NTK of f on X is invertible. Let f_t (resp. f^lin_t) be the model (resp. linearized model) trained on the MSE loss with gradient flow at timestep t with some learning rate. Assuming
\sup_t \|\Theta_{\theta_0} - \Theta_{\theta_t}\|_F = O\!\left(\frac{1}{h}\right) \qquad (8)
Then, ∀x, ∀δ > 0,∃C,H : ∀h > H ,
P\!\left[\sup_t \|f^{lin}_t(x) - f_t(x)\|_2 \le \frac{C}{h}\right] \ge 1 - \delta. \qquad (9)
In particular, for one hidden layer networks, after training,
\big|\hat{V}(f(x)) - \hat{V}(f^{lin}(x))\big| = O_p\!\Big(\hat{V}\big[\,[Q_{\theta_0}(x,\mathcal{X}) - \bar{Q}(x,\mathcal{X})](\mathcal{Y} - f(\mathcal{X}, \theta_0))\,\big]\Big) \qquad (10)
where V̂ denotes the empirical variance with some fixed sample size.
The proof can be found in Appendix A.1.1. While only the bound sup_t ∥Θθ0 − Θθt∥_F = O(1/√h) has been proven in previous works [25], many empirical studies including those in the present work (see Appendix Fig. 5, Table 3) have shown that the bound decreases faster in practice, on the order of O(h^{-1}) [25, 38]. Note that this result suggests the approximation provided in Proposition 2.2 is as good as it gets for describing the predictive variance of non-linearly trained ensembles: the higher order terms would be of a smaller order of magnitude than the non-linear correction to the training, rendering any finer approximation pointless.
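A lightweight way to probe assumption (8) is to track the Frobenius drift of the empirical NTK of a small one-hidden-layer ReLU network during full-batch gradient descent and watch it shrink with width. The sketch below is only an illustration with placeholder data, learning rate, and widths; it is not the experiment behind Appendix Fig. 5:

import numpy as np

rng = np.random.default_rng(0)
d, N, sw, lr, steps = 3, 16, np.sqrt(2.0), 5e-3, 1500
X = rng.normal(size=(N, d))
Y = np.sin(X[:, 0])                               # arbitrary smooth targets

def init(h):
    return [rng.normal(size=(h, d)), rng.normal(size=h),
            rng.normal(size=h), rng.normal(size=1)]

def forward(theta):
    W1, b1, W2, b2 = theta
    h = W2.shape[0]
    Z = sw / np.sqrt(d) * X @ W1.T + b1
    A = np.maximum(Z, 0.0)
    return sw / np.sqrt(h) * A @ W2 + b2, Z, A

def jacobian(theta):
    # rows are d f(x_i)/d theta for every training point (analytic, ReLU derivative)
    W1, b1, W2, _ = theta
    h = W2.shape[0]
    _, Z, A = forward(theta)
    dA = (Z > 0).astype(float)
    db1 = sw / np.sqrt(h) * dA * W2                               # (N, h)
    dW1 = db1[:, :, None] * (sw / np.sqrt(d) * X[:, None, :])     # (N, h, d)
    dW2 = sw / np.sqrt(h) * A                                     # (N, h)
    return np.concatenate([dW1.reshape(N, -1), db1, dW2, np.ones((N, 1))], axis=1)

for h in [64, 256, 1024]:
    theta = init(h)
    K0 = jacobian(theta) @ jacobian(theta).T                      # empirical NTK at init
    drift = 0.0
    for _ in range(steps):                                        # full-batch gradient descent
        pred, _, _ = forward(theta)
        J = jacobian(theta)
        drift = max(drift, np.linalg.norm(J @ J.T - K0))
        g = J.T @ (pred - Y)                                      # grad of 0.5 * ||f - Y||^2
        flat = np.concatenate([theta[0].ravel(), theta[1], theta[2], theta[3]]) - lr * g
        theta = [flat[:h * d].reshape(h, d), flat[h * d:h * d + h],
                 flat[h * d + h:h * d + 2 * h], flat[-1:]]
    print(f"width {h:5d}: sup_t ||Theta_0 - Theta_t||_F ~ {drift:.4f}")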
3 Disentangling deep ensemble variance in practice
The goal of this Section is to validate our theoretical findings in experiments. First, we aim to show qualitatively and quantitatively that the variance of linearly trained neural networks is well approximated by the decomposition introduced in Proposition 2.2. To do so, we investigate ensembles of linearly trained models and analyze their behavior in toy models and on common computer vision classification datasets. We then extend our analyses to fully-trained non-linear deep neural networks optimized with (stochastic) gradient descent in parameter space. Here, we confirm empirically that the variance description of linearly trained models strongly influences these less restrictive settings, even when the networks are trained to very low training loss. We thereby showcase the improved understanding of deep ensembles through their linearly trained counterparts and highlight the practical relevance of our study by observing significant OOD detection performance differences when noise sources are removed in various settings.
3.1 Disentangling noise sources in kernel models
To isolate the different terms in Proposition 2.2, we construct, from a given initialization θ0 with the associated linearized model f lin, three additional linearly trained models:
f^{lin\text{-}c}(x) = Q_{\theta_0}(x,\mathcal{X})\,\mathcal{Y}
f^{lin\text{-}a}(x) = f(x, \theta_0) + \bar{Q}(x,\mathcal{X})\,(\mathcal{Y} - f(\mathcal{X}, \theta_0))
f^{lin\text{-}i}(x) = g(x, \theta_0) + \bar{Q}(x,\mathcal{X})\,(\mathcal{Y} - g(\mathcal{X}, \theta_0))
where g(x, θ0) = Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0). Note that the predictive variance over random initialization of these functions corresponds to Vc, Va, and Vi, respectively, as defined in Section 2.1. As one can see, we can simply remove the initialization noise from f lin by subtracting the initial (noisy) function f(x, θ0) before training, resulting in a centered model f lin-c. Equivalently, we can remove noise that originates from the kernel by using the empirical average over kernels, resulting in the model f lin-a. Finally, we can isolate f lin-i by the same averaging trick as in f lin-a but using as functional noise g(x, θ0), which can be precomputed and added to f lin-c before training. Note that we neglect the covariance terms and focus on the parts which are easy to isolate, for linearly trained as well as for standard models. This will later allow us to study practical ways to subtract important parts of the predictive distribution of neural networks, leading, for example, to significant OOD detection performance differences. Now we explore the differences and similarities of these disentangled functions and their respective predictive distributions.
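The following numpy sketch makes this construction concrete for a toy one-hidden-layer ReLU ensemble: each member contributes its function at initialization and its empirical NTK, from which f lin, f lin-c, and f lin-a are assembled and their predictive variances over the ensemble compared. All sizes and the toy targets are placeholders and the output bias is omitted for brevity; this is an illustrative sketch, not the paper's experimental code:

import numpy as np

rng = np.random.default_rng(1)
d, h, N, M, sw = 2, 256, 30, 200, np.sqrt(2.0)    # M = ensemble size
X = rng.normal(size=(N, d)); Y = np.sign(X[:, 0])  # toy +-1 regression targets
xq = rng.normal(size=(5, d))                       # query points
XA = np.vstack([X, xq])                            # NTK evaluated on train + query points

def member():
    # one ensemble member: function at init and its empirical NTK on XA
    W1, b1, W2 = rng.normal(size=(h, d)), rng.normal(size=h), rng.normal(size=h)
    Z = sw / np.sqrt(d) * XA @ W1.T + b1
    A, dA = np.maximum(Z, 0.0), (Z > 0).astype(float)
    f0 = sw / np.sqrt(h) * A @ W2
    db1 = sw / np.sqrt(h) * dA * W2
    dW1 = db1[:, :, None] * (sw / np.sqrt(d) * XA[:, None, :])
    J = np.concatenate([dW1.reshape(len(XA), -1), db1, sw / np.sqrt(h) * A], axis=1)
    return f0, J @ J.T

members = [member() for _ in range(M)]
Theta_bar = np.mean([T for _, T in members], axis=0)
jitter = 1e-8 * np.eye(N)                          # small ridge for numerical stability
Qbar = Theta_bar[N:, :N] @ np.linalg.inv(Theta_bar[:N, :N] + jitter)

f_lin, f_c, f_a = [], [], []
for f0, T in members:
    Q = T[N:, :N] @ np.linalg.inv(T[:N, :N] + jitter)
    f_lin.append(f0[N:] + Q @ (Y - f0[:N]))        # full linearized predictor
    f_c.append(Q @ Y)                              # f lin-c: kernel noise only
    f_a.append(f0[N:] + Qbar @ (Y - f0[:N]))       # f lin-a: functional noise only
for name, fs in [("f_lin  ", f_lin), ("f_lin-c", f_c), ("f_lin-a", f_a)]:
    print(name, "predictive variance at query points:", np.var(np.array(fs), axis=0))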
3.1.1 Visualizations on a star-shaped toy dataset
To qualitatively visualize the different terms, we construct a two-way star-shaped regression problem on a 2d-plane, depicted in Figure 1. After training an ensemble, we visualize its predictive variance on the input space. Our first goal is to visualize qualitative differences in the predictive variance of ensembles consisting of f lin and the 3 disentangled models from above. We train a large ensemble of size 300, where each model is a ReLU neural network with a single hidden layer of width 512. As suggested analytically for one hidden layer ReLU networks (see Appendix A.4.3), for example V[f lin-c(x)] depends on the angle of the datapoints while V[f lin(x)] depicts a superposition of the 3 isolated variances. While the ReLU activation does not satisfy the Lipschitz-continuity assumption of Proposition 2.3, we use it to illustrate and validate our analytical description of the inductive biases induced by the different variance terms. We use the Softplus activation, which behaved similarly to ReLU, in the experiments of the next Section.
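The exact toy dataset behind Figure 1 is not fully specified in the text; one plausible "star-shaped" construction (rays from the origin with alternating ±1 targets) together with the evaluation grid on which an ensemble's predictive variance could be visualized might look as follows, purely for illustration:

import numpy as np

n_rays, per_ray = 8, 12
angles = np.arange(n_rays) * 2 * np.pi / n_rays
r = np.linspace(0.5, 2.0, per_ray)
X = np.concatenate([np.stack([r * np.cos(a), r * np.sin(a)], axis=1) for a in angles])
Y = np.concatenate([np.full(per_ray, 1.0 if k % 2 == 0 else -1.0) for k in range(n_rays)])

# grid over the 2d input plane; an ensemble's per-point predictive variance
# would be reshaped to gx.shape and plotted as a heatmap
gx, gy = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
print(X.shape, Y.shape, grid.shape)   # (96, 2) (96,) (10000, 2)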
3.1.2 Disentangling linearly trained / kernel ensembles for MNIST and CIFAR10
Next, we move to a quantitative analysis of the asymptotic behavior of the various variance terms, as we increase the hidden layer size. In Figure 2, we analyze the predictive variance of the kernel models based on MLPs and Convolutional Neural Networks (CNN) for various depths and widths and on subsets of MNIST [39] and CIFAR10. As before, we construct a binary classification task through an MSE loss with a dataset size of N = 100 and confirm, shown in Figure 2, that Vc and Vi decay as 1/h over all of our experiments. Crucially, we see that they contribute to the overall variance V even for relatively large widths. We further observe a 1/h^2 decay of the residual term as predicted by Proposition 2.2. As in all of our experiments, the variance magnitude and therefore the influence on f lin of the disentangled parts is highly architecture and dataset-dependent. Note that the small size of the datasets comes from the necessity to compute the inverse of the kernels for every ensemble member; see Appendix B for an additional analysis on larger datasets and scaling plots of Vcor. In Table 1, we quantify the previously observed qualitative difference of the various predictive variances by evaluating their performance on out-of-distribution detection tasks, where high predictive variance is used as a proxy for detecting out-of-distribution data. We focus our attention on analysing V[f lin-c(x)] and V[f lin-a(x)], as they are the variance terms containing purely the functional and kernel noise, respectively. As an evaluation metric, we follow numerous studies and compute the area under the receiver operating characteristics curve (AUROC, cf. Appendix B). We fit a linearized ensemble on a larger subset of the standard 10-way classification MNIST and CIFAR10 datasets using MSE loss. When training our ensembles on MNIST, we test and average the OOD detection performance on FashionMNIST (FM) [40], E-MNIST (EM) [41] and K-MNIST (KM) [42]. When training our ensembles on CIFAR10, we compute the AUROC for SVHN [43], LSUN [44], TinyImageNet (TIN)
and CIFAR100 (C100), see Appendix Table 4 for the variance magnitude and AUROC values for all datasets.
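For reference, the AUROC evaluation used throughout (higher predictive variance should flag OOD inputs) boils down to a few lines; the per-example variances below are synthetic placeholders standing in for an ensemble's outputs:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
var_id = rng.gamma(shape=2.0, scale=0.5, size=1000)    # predictive variance on the test set
var_ood = rng.gamma(shape=2.0, scale=1.0, size=1000)   # predictive variance on an OOD set

scores = np.concatenate([var_id, var_ood])
labels = np.concatenate([np.zeros_like(var_id), np.ones_like(var_ood)])  # 1 = OOD
print("AUROC with variance as OOD score:", roc_auc_score(labels, scores))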
The results show significant differences in variance magnitude and AUROC values. While we do not claim competitive OOD performance, we aim to highlight the differences in behavior of the isolated functions developed above: we see for instance that for (MLP, MNIST, N=1000), f lin-a generally performs better than f lin in OOD detection. Indeed, the overall worse performance of V[f lin-c(x)] seems to be affecting that of V[f lin(x)] which contains both terms. On the other hand, we see that for the setup (CNN, CIFAR10, N=1000) V[f lin(x)] is not well described by this interpolation argument, which highlights the influence of the other variance terms described in Proposition 2.2. Furthermore, the OOD detection capabilities of each function seem to be highly dependent on the particular data considered: Ensembles of f lin-c are relatively good at identifying SVHN data as OOD, while being poor at identifying LSUN and iSUN data. These observations highlight the particular inductive bias of each variance term for OOD detection on different datasets.
We further report the test set generalization of the ensemble mean of different functions, highlighting the diversity in the predictive mean of these models as well. Note that for N >= 1000 we trained the ensembles in linear fashion with gradient flow (which coincides with the kernel expression) up until the MSE training error was smaller than 0.01.
3.2 Does the refined variance description generalize to standard gradient descent in practice?
In this Section, we start with an empirical verification of Proposition 2.3 and show that the bound in equation 10 holds in practice. Given this verification, we then propose disentangled models equivalent to those previously defined but in the non-linear setting, and 1) show significant differences in their predictive distribution but also 2) investigate the extent to which improvements in OOD detection translate from kernel / linearly trained to fully non-linearly trained models. We stress that we do not consider early stopped models and aim to connect the kernel models with the gradient descent models faithfully.
3.2.1 Survival of the kernel noise after training
To validate Proposition 2.3, we first introduce f gd(x) = f(x, θ_t), a model trained with standard gradient descent for t steps, i.e. θ_t = θ_0 − Σ_{i=0}^{t−1} η ∇θf(X , θ_i)(Y − f(X , θ_i)). To empirically verify
Proposition 2.3, we introduce the following ratio
R(f) = \exp\left( \mathbb{E}_{x\sim\mathcal{X}'}\left( \log \frac{\|\hat{V}[f^{lin}(x)] - \hat{V}[f^{gd}(x)]\|}{\|\hat{V}_c(x) + \hat{V}_i(x)\|} \right) \right) \qquad (11)
where the empirical variances are computed over random initialization, and the expectation over some data distribution which we choose to be the union of the test-set and the various OOD datasets. Given a datapoint x, the term inside the log measures the ratio of the discrepancy in variance between the linearized and non-linear ensembles to the refined variance terms. R(f) is then the geometric mean of this ratio over the whole dataset. Proposition 2.3 predicts that the ratio remains bounded as the width increases, suggesting that the refined terms contribute to the final predictive variance of the non-linear model in a non-negligible manner. We empirically verify this prediction for various depths in Fig. 3 and Appendix Figure 6, for functions trained on subsets of MNIST and CIFAR10. Note that for all our experiments we also empirically verify the assumption from Proposition 2.3 (see Appendix Figure 5, Table 3).
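A direct transcription of equation 11, with per-datapoint empirical variances as inputs, is short; a small constant is added inside the logarithm purely for numerical stability (our addition, not part of the original definition), and the variances below are placeholders:

import numpy as np

def ratio_R(var_lin, var_gd, var_c, var_i, eps=1e-12):
    # geometric mean over datapoints of |V[f_lin(x)] - V[f_gd(x)]| / (V_c(x) + V_i(x))
    log_ratio = np.log(np.abs(var_lin - var_gd) + eps) - np.log(var_c + var_i + eps)
    return np.exp(np.mean(log_ratio))

rng = np.random.default_rng(4)                         # placeholder variances
v_lin, v_gd = rng.uniform(0.5, 1.0, 200), rng.uniform(0.4, 0.9, 200)
v_c, v_i = rng.uniform(0.05, 0.1, 200), rng.uniform(0.05, 0.1, 200)
print("R(f) =", ratio_R(v_lin, v_gd, v_c, v_i))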
3.2.2 Disentangling noise sources in gradient descent non-linear models
Motivated by the empirical verification of Proposition 2.3, we now aim to isolate different noise sources in non-linear models trained with gradient descent. Starting from a non-linear network f gd, we follow the same strategy as before and silence the functional initialization noise by centering the network (referred as f gd-c(x)) by simply subtracting the function at initialization. On the other hand, we remove the kernel noise with a simple trick: We first sample a random weight θc0 once, and use it as the weight initialization for all ensemble members. A function noise is added by first removing the function initialization from θc0, and adding that of a second random network which is not trained. The
resulting functions (referred to as f gd-a(x)) will induce an ensemble whose members differ only in their functional initialization while having the same Jacobian
f^{gd\text{-}c}(x) = f(x, \theta_t) - f(x, \theta_0), \qquad f^{gd\text{-}a}(x) = f(x, \theta^{c}_{t}) - f(x, \theta^{c}_{0}) + f(x, \theta_0).
We furthermore introduce f gd-i(x), the non-linear counterpart to f lin-i(x), which we construct similarly to f gd-a(x) but using g(x, θ0, θc0) = Θθ0(x,X )Θθc(X ,X )−1f(X , θ0) as the function initialization instead of f(x, θ0) (see Section 2.1 and Appendix A.3.1 for the justification). Unlike f gd-a and f gd-c, constructing f gd-i requires the inversion of large matrices due to the way g is defined, a challenging task for realistic settings. While its practical use is thus limited, we introduce it to illustrate the correspondence between the predictive variance of linearized and non-linear deep ensembles.
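A small numpy sketch of the two easily isolated non-linear ensembles follows: f gd-c subtracts the function at initialization from a normally trained member, while for f gd-a every member starts from the same shared weights and a per-member functional offset is added back. Since the text does not spell out how the shared-initialization members are trained against the added function noise, the sketch trains them on correspondingly adjusted targets so that the combined function fits Y; this is one reading of the construction, not necessarily the authors' exact procedure. Sizes, learning rate, and targets are placeholders, and the output bias is omitted:

import numpy as np

rng = np.random.default_rng(5)
d, h, N, M, sw, lr, steps = 2, 256, 30, 50, np.sqrt(2.0), 5e-3, 2000
X = rng.normal(size=(N, d)); Y = np.sign(X[:, 0])
xq = rng.normal(size=(5, d))                        # query points

def new_params():
    return [rng.normal(size=(h, d)), rng.normal(size=h), rng.normal(size=h)]

def f(theta, A):
    W1, b1, W2 = theta
    Z = sw / np.sqrt(d) * A @ W1.T + b1
    return sw / np.sqrt(h) * np.maximum(Z, 0.0) @ W2

def train(theta, targets):
    # full-batch gradient descent on the MSE loss, analytic gradients
    W1, b1, W2 = [p.copy() for p in theta]
    for _ in range(steps):
        Z = sw / np.sqrt(d) * X @ W1.T + b1
        A, dA = np.maximum(Z, 0.0), (Z > 0).astype(float)
        err = sw / np.sqrt(h) * A @ W2 - targets
        gW2 = sw / np.sqrt(h) * A.T @ err
        gpre = sw / np.sqrt(h) * np.outer(err, W2) * dA
        W1 = W1 - lr * (sw / np.sqrt(d) * gpre.T @ X)
        b1 = b1 - lr * gpre.sum(0)
        W2 = W2 - lr * gW2
    return [W1, b1, W2]

theta_c0 = new_params()                              # shared initialization for f gd-a members
gd, gd_c, gd_a = [], [], []
for _ in range(M):
    theta0 = new_params()
    theta_t = train(theta0, Y)
    gd.append(f(theta_t, xq))                        # standard member
    gd_c.append(f(theta_t, xq) - f(theta0, xq))      # centered: functional init noise removed
    adj = Y + f(theta_c0, X) - f(theta0, X)          # assumption: adjusted targets (see text)
    theta_a = train(theta_c0, adj)
    gd_a.append(f(theta_a, xq) - f(theta_c0, xq) + f(theta0, xq))
for name, fs in [("f_gd  ", gd), ("f_gd-c", gd_c), ("f_gd-a", gd_a)]:
    print(name, "variance at query points:", np.var(np.array(fs), axis=0))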
Given these simple modifications of f gd, we rerun the experiments conducted for the linearly trained models for moderate dataset sizes (N=1000). We observe close similarities in the OOD detection capabilities as well as predictive variance between the introduced non-linearly trained ensembles and their linearly trained counterparts. We further train these models on the full MNIST dataset (N=50000), for which we show the same trend in Appendix Table 5. We also include the ensembles' performance when trained on the full CIFAR10 dataset. Intriguingly, the relative performance of the ensemble is somewhat preserved in both settings between N=1000 and N=50000, even when training with SGD, promoting the use of quick, linear training on a subset of data as a proxy for the OOD performance of a fully trained deep ensemble.
Similar to the case of (MLP, MNIST, N=1000/50000), we observe that the f gd ensemble performance is an interpolation of f gd-c and f gd-a, which interestingly often performs favorably on different OOD data. To understand whether the noise introduced by SGD impacts the predictive distribution of our disentangled ensembles, we compared the behavior of f gd and f sgd in the lower data regime of N = 1000. Intriguingly, we show in Appendix Table 6 that no significant empirical difference between GD and SGD models can be observed and hypothesize that the noise sources discussed in this study are more important in our approximately linear training regimes. To speed up experiments, we used (S)GD with momentum (0.9) in all experiments of this subsection.
3.2.3 Removing noise of models possibly far away from the linear regime
Finally, we investigate the OOD performance of the previously introduced model variants f sgd, f sgd-c and f sgd-a in more realistic settings. To do so we train the commonly used WideResNet 28-10 [45] on CIFAR10 with BatchNorm [46] layers and cross-entropy (CE) loss with a batch size of 128, without data augmentation (see Table 3.2.3). These network and training algorithm choices are considered crucial to achieving state-of-the-art performance, superior to that of their linearly trained counterparts. Strikingly, we notice that our model variants, which each isolate a different initial noise source, significantly affect the OOD capabilities of the final models even when the training loss is virtually 0, as in all of our experiments. This indicates that the discussed noise sources influence the ensemble's predictive variance long throughout training. We provide similar results for CIFAR100 and FashionMNIST in Appendix B. We stress that we do not claim that our theoretical assumptions hold in this setup.
4 Conclusion
The generalization on in- and out-of-distribution data of deep neural network ensembles is poorly understood. This is particularly worrying since deep ensembles are widely used in practice when trying to assess whether data is out-of-distribution. In this study, we try to provide insights into the sources of noise stemming from initialization that influence the predictive distribution of trained deep ensembles. By focusing on large-width models we are able to characterize two distinct sources of noise and describe an analytical approximation of the predictive variance in some restricted settings. We then show theoretically and empirically how parts of this refined predictive variance description in the linear training regime survive and impact the predictive distribution of non-linearly trained deep ensembles. This allows us to extrapolate insights from the tractable linearly trained deep ensembles into the non-linear regime, which can lead to improved out-of-distribution detection of deep ensembles by eliminating potentially unfavorable noise sources. Although our theoretical analysis relies on closeness to linear gradient descent, which has been shown to result in less powerful models in practice, we hope that our surprising empirical success of noise disentanglement sparks further research into using the lens of linear gradient descent to understand the mysteries of deep learning.
Acknowledgments and Disclosure of Funding
Seijin Kobayashi was supported by the Swiss National Science Foundation (SNF) grant CRSII5_173721. Pau Vilimelis Aceituno was supported by the ETH Postdoctoral Fellowship program (007113). Johannes von Oswald was funded by the Swiss Data Science Center (J.v.O. P18-03). We thank Christian Henning, Frederik Benzing and Yassir Akram for helpful discussions. Seijin Kobayashi and Johannes von Oswald are grateful for Angelika Steger’s and João Sacramento’s overall support and guidance. | 1. What is the main contribution of the paper regarding the variance of a one-layer neural network?
2. What are the strengths and weaknesses of the proposed decomposition of the variance?
3. How would one extend the decomposition in the presence of additional sources of noise?
4. How are the AUROC values computed?
5. Why is the superposition of the three isolated variances surprising?
6. Does V_i factor into defining favorable/unfavorable sources of noise?
7. Are there any difficulties in obtaining high-quality estimates of the different variance terms?
8. What are the major limitations of the work according to the reviewer?
9. How could the conclusions of the empirical analysis be improved?
10. What is the significance of understanding the degree to which the variance of a deep ensemble is indicative of an ensemble's certainty in making a prediction? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper provides a first order approximation of the variance of a one-layer neural network. This approximation is composed of 5 terms:
V_a is the finite-width equivalent of the predictive variance of an infinite width NTK, and depends on functional noise at initialization
V_c is an approximation of the predictive variance of a linearly-trained network, and depends only on the kernel noise.
V_i is the predictive variance of an ensemble with a NTK kernel and a data-specific prior
V_cor and V_res are two terms that are negligible in empirical experiments.
Based on this decomposition, the authors define ensembles that depend only on one or another source of noise (f_c and f_a), and show that isolating these sources of noise can improve OOD detection.
Strengths And Weaknesses
Strengths
Understanding the degree to which the variance of a deep ensemble is indicative of an ensemble's certainty in making a prediction is a crucial question as deep ensembles are deployed on real-world applications.
The authors push through a detailed decomposition of the variance of linearly trained neural networks, then generalize this decomposition under several assumptions to neural networks trained with gradient descent and SGD.
The authors analyze the terms in the proposed decomposition empirically across a variety of experimental settings (different training modalities, datasets, and dataset samples).
Weaknesses
I found the empirical section of this paper difficult to put into context with the previous decomposition (cf. "Questions" section below).
I am not sure I follow the author's claim that isolating unfavorable noise improves OOD detection. This appears to be the case in certain experiments but not all (e.g., Table 1 for the CNN on Cifar-10), and it also seems like the "unfavorable" noise depends on the experiment setting (Table 1, linear models on CNN: f lin-c does better for SVHN but worse for LSUN).
I would've liked to see V_i included in the empirical analysis, since it is the third major linear term in the approximation to the full variance.
Questions
How would one extend the decomposition of Prop. 2.2 in the presence of additional sources of noise? Would an approach such as [1] be sufficient?
How are the AUROC values computed? Are they computed based on the (pointwise) predictive variances of the different ensembles?
Line 219: "Interestingly,
V
[
f
l
i
n
(
x
)
]
depicts a superposition of the three isolated variances." Is this not a validation of the decomposition of Prop 2.2? Why is this surprising?
In cases where we have
V
[
f
l
i
n
]
≤
V
[
f
l
i
n
−
c
]
+
V
[
f
l
i
n
−
a
]
, is this due to
V
i
? Does
V
i
factor into defining favorable/unfavorable sources of noise?
Are there any difficulties in obtaining high-quality estimates of the different variance terms? Are the empirical estimates unbiased?
[1] Understanding Double Descent Requires A Fine-Grained Bias-Variance Decomposition, Adlam & Pennington, 2020
Limitations
The authors discuss different limitations of their work, mostly based around assumptions made about the behavior of NNs (e.g., 2.3). These assumptions are discussed in detail, evaluated empirically, and put in the general context of this work.
In my mind, the two major limitations of this work are the following:
The interplay between the two sources of noise (initialization & kernel noise) is not analyzed empirically, despite appearing with equal importance in the decomposition of Prop. 2.2.
I found the conclusions of the empirical analysis to be somewhat unclear. I believe the discussion could be improved by the authors stating early on which hypothesis they seek to verify, and how that hypothesis is or isn't validated by the experimental results. In the paper's current state, I find the conclusions to be somewhat unclear (which source of noise is conducive to OOD detection? Is a single source of noise sufficient in general?). |
NIPS | Title
Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
Abstract
Identifying unfamiliar inputs, also known as out-of-distribution (OOD) detection, is a crucial property of any decision making process. A simple and empirically validated technique is based on deep ensembles where the variance of predictions over different neural networks acts as a substitute for input uncertainty. Nevertheless, a theoretical understanding of the inductive biases leading to the performance of deep ensemble’s uncertainty estimation is missing. To improve our description of their behavior, we study deep ensembles with large layer widths operating in simplified linear training regimes, in which the functions trained with gradient descent can be described by the neural tangent kernel. We identify two sources of noise, each inducing a distinct inductive bias in the predictive variance at initialization. We further show theoretically and empirically that both noise sources affect the predictive variance of non-linear deep ensembles in toy models and realistic settings after training. Finally, we propose practical ways to eliminate part of these noise sources leading to significant changes and improved OOD detection in trained deep ensembles.
1 Introduction
Modern artificial intelligence uses intricate deep neural networks to process data, make predictions and take actions. One of the crucial steps toward allowing these agents to act in the real world is to incorporate a reliable mechanism for estimating uncertainty – in particular when human lives are at risk [1, 2]. Although the ongoing success of deep learning is remarkable, the increasing data, model and training algorithm complexity make a thorough understanding of their inner workings increasingly difficult. This applies when trying to understand when and why a system is certain or uncertain about a given output and is therefore the topic of numerous publications [3–10].
Principled mechanisms for uncertainty quantification would rely on Bayesian inference with an appropriate prior. This has led to the development of (approximate) Bayesian inference methods for deep neural networks [11–15]. Simply aggregating an ensemble of models [16] and using the disagreement of their predictions as a substitute for uncertainty has gained popularity. However, the theoretical justification of deep ensembles remains a matter of debate, see Wilson and Izmailov [17]. Although a link between Bayesian inference and deep ensembles can be obtained, see [18, 19], an understanding of the widely adopted standard deep ensemble and it’s predictive distribution is still missing [20, 21]. Note that even for principled Bayesian approaches there is no valid theoretical or practical OOD guarantee without a proper definition of out-of-distribution data [22].
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
One avenue to simplify the analyses of deep neural networks that gained a lot of attention in recent years is to increase the layer width to infinity [23, 24] or to very large values [25, 26]. In the former regime, an intriguing equivalence of infinitely wide deep networks at initialization and Gaussian processes allows for exact Bayesian inference and therefore principled uncertainty estimation. Although it is not possible to generally derive a Bayesian posterior for trained infinite or finite layer width networks, the resulting model predictions can be expressed analytically by kernels. Given this favorable mathematical description, the question of how powerful and similar these models are compared to their arguably black-box counterparts arises, with e.g. moderate width, complex optimizers and training stochasticity [25, 27–32].
In this paper, we leverage this tractable description of trained neural networks and take a first step towards understanding the predictive distribution of neural networks ensembles with large but finite width. Building on top of the various studies mentioned, we do so by studying the case where these networks can be described by a kernel and study the effect of two distinct noise sources stemming from the network initialization: The noise in the functional initialization of the network and the initialization noise of the gradient, which affects the training and therefore the kernel. As we will show, these noise sources will affect the predictive distributions differently and influence the network’s generalization on in- and out-of-distribution data.
Our contributions are the following:
• We provide a first order approximation of the predictive variance of an ensemble of linearly trained, finite-width neural networks. We identify interpretable terms in the refined variance description, originating from 2 distinct noise sources, and further provide their analytical expression for single layer neural networks with ReLU non-linearities.
• We show theoretically that under mild assumptions these refined variance terms survive nonlinear training for sufficiently large width, and therefore contribute to the predictive variance of non-linearly trained deep ensembles. Crucially, our result suggests that any finer description of the predictive variance of a linearized ensemble can be erased by nonlinear training.
• We conduct empirical studies validating our theoretical results, and investigate how the different variance terms influence generalization on in - and out-of-distribution. We highlight the practical implications of our theory by proposing simple methods to isolate noise sources in realistic settings which can lead to improved OOD detection.1
2 Neural network ensembles and their relations to kernels
Let fθ = f(·, θ) : Rh0 → RhL denote a neural network parameterized by the weights θ ∈ Rn. The weights consist of weight matrices and bias vectors {(Wl, bl)}Ll=1 describing the following feed-forward computation beginning with the input data x0:
z^{l+1} = \frac{\sigma_w}{\sqrt{h_l}} W^{l+1} x^{l} + b^{l+1} \quad \text{with} \quad x^{l+1} = \phi(z^{l+1}). \qquad (1)
Here h_l is the dimension of the vector x^l and ϕ is a pointwise non-linearity such as the softplus log(1 + e^x) or the Rectified Linear Unit max(0, x) (ReLU) [33]. We follow Jacot et al. [24] and use σw = √2 to control the standard deviation of the initialised weights W^l_{ij}, b^l_i ∼ N(0, 1).
Given a set of N datapoints X = (x_i)_{0≤i≤N} ∈ R^{N×h_0} and targets Y = (y_i)_{0≤i≤N} ∈ R^{N×h_L}, we consider regression problems with the goal of finding θ∗ which minimizes the mean squared error (MSE) loss L(θ) = (1/2) Σ_{i=0}^{N} ∥f(x_i, θ) − y_i∥_2^2. For ease of notation, we denote by f(X , θ) ∈ R^{N·h_L} the vectorized evaluation of f on each datapoint and Y ∈ R^{N·h_L} the target vector for the entire dataset. As the widths of the hidden layers grow towards infinity, the distribution of outputs at initialization f(x, θ0) converges to a multivariate Gaussian distribution due to the Central Limit Theorem [23]. The resulting function can then accurately be described as a zero-mean Gaussian process, coined Neural Network Gaussian Process (NNGP), where the covariance of a pair of output neurons i, j for data x and x′ is given by the kernel
1Source code for all experiments: github.com/seijin-kobayashi/disentangle-predvar
K(x, x')_{i,j} = \lim_{h\to\infty} \mathbb{E}\big[f^{i}(x, \theta_0)\, f^{j}(x', \theta_0)\big] \qquad (2)
with h = min(h1, ..., hL−1). This equivalence can be used to analytically compute the Bayesian posterior of infinitely wide Bayesian neural networks [34].
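The NNGP covariance in equation 2 can also be estimated directly by Monte Carlo over initializations; the minimal sketch below does this for the scalar output of a one-hidden-layer ReLU network at a few widths (all inputs and sample counts are placeholders):

import numpy as np

rng = np.random.default_rng(6)
d, sw = 3, np.sqrt(2.0)
x, xp = rng.normal(size=d), rng.normal(size=d)
A = np.stack([x, xp])                               # evaluate the network on both inputs

def output_at_init(h):
    # scalar output of a randomly initialized one-hidden-layer ReLU network (eq. 1)
    W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)
    W2, b2 = rng.normal(size=h), rng.normal()
    Z = sw / np.sqrt(d) * A @ W1.T + b1
    return sw / np.sqrt(h) * np.maximum(Z, 0.0) @ W2 + b2

for h in [16, 128, 1024]:
    outs = np.array([output_at_init(h) for _ in range(5000)])
    print(f"width {h:5d}: empirical E[f(x) f(x')] = {np.mean(outs[:, 0] * outs[:, 1]):.3f}")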
On the other hand infinite width models trained via gradient descent (GD) can be described by the Neural Tangent Kernel (NTK). Given θ, the NTK Θθ of fθ is a matrix in RN ·hL × RN ·hL with the (i, j)-entry given as the following dot product
⟨∇θf(xi, θ),∇θf(xj , θ)⟩ (3)
where we consider without loss of generality the output dimension of f to be h_L = 1 for ease of notation. Furthermore, we denote by Θθ(X ,X ) := ∇θf(X , θ)∇θf(X , θ)^T the matrix form and by Θθ(x′,X ) := ∇θf(x′, θ)∇θf(X , θ)^T the vector form of the NTK, highlighting the dependencies on different datapoints.
Lee et al. [25] showed that for sufficiently wide networks under common parametrizations, the gradient descent dynamics of the model with a sufficiently small learning rate behaves closely to its linearly trained counterpart, i.e. its first-order Taylor expansion in parameter space. In this gradient flow regime, after training on the mean squared error converges, we can rewrite the predictions of the linearly trained models in the following closed-form:
f^{lin}(x) = f(x, \theta_0) + Q_{\theta_0}(x,\mathcal{X})\,(\mathcal{Y} - f(\mathcal{X}, \theta_0)) \qquad (4)
where Qθ0(x,X ) := Θθ0(x,X )Θθ0(X ,X )−1 with Θθ0 the NTK at initialization, i.e. of f(., θ0). The linearization error throughout training, sup_{t≥0} ∥f^{lin}_t(x) − f_t(x)∥, is further shown to decrease with the width of the network, bounded by O(h^{−1/2}). Note that one can also linearize the dynamics without increasing the width of a neural network but by simply changing its output scaling [26].
When moving from finite to the infinite width limit the training of a multilayer perceptron (MLP) can again be described with the NTK, which now converges to a deterministic kernel Θ∞ [24], a result which extends to convolutional neural networks [27] and other common architectures [35, 36]. A fully trained neural network model can then be expressed as
f∞(x) = f(x, θ0) + Θ∞(x,X )Θ∞(X ,X )−1(Y − f(X , θ0)). (5)
where f({X , x}, θ0) ∼ N (0,K({X , x}, {X , x})).
2.1 Predictive distribution of linearly trained deep ensembles
In this Section, we study in detail the predictive distribution of ensembles of linearly trained models, i.e. the distribution of f lin(x) given x over random initializations θ0. In particular, for a given data x, we are interested in the mean E[f(x)] and variance V[f(x)] of trained models over random initialization. The former is typically used for the prediction of a deep ensemble, while the latter is used for estimating model or epistemic uncertainty utilized e.g. for OOD detection or exploration. To start, we describe the simpler case of the infinite width limit and a deterministic NTK, which allows us to compute the mean and variance of the solutions found by training easily:
\mathbb{E}[f^{\infty}(x)] = Q_{\infty}(x,\mathcal{X})\,\mathcal{Y}, \qquad \mathbb{V}[f^{\infty}(x)] = K(x, x) + Q_{\infty}(x,\mathcal{X})\,K(\mathcal{X},\mathcal{X})\,Q_{\infty}(x,\mathcal{X})^{T} - 2\,Q_{\infty}(x,\mathcal{X})\,K(\mathcal{X}, x) \qquad (6)
where we introduced Q∞(x,X ) = Θ∞(x,X )Θ∞(X ,X )−1. For finite width linearly trained networks, the kernel is no longer deterministic, and its stochasticity influences the predictive distribution. Because there is probability mass assigned to the neighborhood of rare events where the NTK kernel matrix is not invertible, the expectation and variance over parameter initialization of the expression in equation 4 diverges to infinity.
Fortunately, due to the convergence in probability of the empirical NTK to the infinite width counterpart [24], we know these singularities become rarer and ultimately vanish as the width increases to infinity. Intuitively, we should therefore be able to assign meaningful, finite values to these undefined
quantities, which ignores these rare singularities. The delta method [37] in statistics formalizes this intuition, by using Taylor approximation to smooth out the singularities before computing the mean or variance. When the probability mass of the empirical NTK is highly concentrated in a small radius around the limiting NTK, the expression 4 is roughly linear w.r.t the NTK entries. Given this observation, we prove (see Appendix A.2) the following result, and justify that the obtained expression is informative of the empirical predictive mean and variance of deep ensembles. Rewriting equation 4 into
f^{lin}(x) = f(x, \theta_0) + \bar{Q}(x,\mathcal{X})\,(\mathcal{Y} - f(\mathcal{X}, \theta_0)) + [Q_{\theta_0}(x,\mathcal{X}) - \bar{Q}(x,\mathcal{X})]\,(\mathcal{Y} - f(\mathcal{X}, \theta_0)) \qquad (7)
where Q̄(x,X ) = Θ̄(x,X )Θ̄(X ,X )−1 and Θ̄ = E(Θθ0), we state: Proposition 2.1. For one hidden layer networks parametrized as in equation 1, given an input x and training data (X ,Y), when increasing the hidden layer width h, we have the following convergence in distribution over random initialization θ0:
\sqrt{h}\,[Q_{\theta_0}(x,\mathcal{X}) - \bar{Q}(x,\mathcal{X})]\,(\mathcal{Y} - f(\mathcal{X}, \theta_0)) \;\xrightarrow{\;dist.\;}\; Z(x)
where Z(x) is the linear combination of 2 Chi-Square distributions, such that
\mathbb{V}(Z(x)) = \lim_{h\to\infty} \big(h\,V_c(x) + h\,V_i(x)\big)
where
V_c(x) = \mathbb{V}[\Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y}] + \mathbb{V}[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y}] - 2\,\mathrm{Cov}[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y},\ \Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y}],
V_i(x) = \mathbb{V}[\Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X}, \theta_0)] + \mathbb{V}[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X}, \theta_0)] - 2\,\mathrm{Cov}[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X}, \theta_0),\ \Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X}, \theta_0)].
We omit the dependence of θ0 on the width h for notational simplicity. While the expectation or variance of equation 4 for any finite width is undefined, their empirical mean and variance are with high probability indistinguishable from that of the above limiting distribution (see Lemma A.1). Note that the above proposition assumes the noise in Θθ0 to be decorrelated from f(x, θ0), which can hold true under specific constructions of the network that are of practical interest as we will see in the following (c.f. Appendix A.3.2).
Given Proposition 2.1, we now describe the approximate variance of f lin(x) for L = 2, which we can extend to the general L > 2 case using an informal argument (see A.2.2): Proposition 2.2. Let f be a neural network with identical width of all hidden layers, h1 = h2 = ... = hL−1 = h. We assume ∥Θθ0 − Θ̄∥_F^2 = O_p(1/h). Then,
V[f lin(x)] ≈ Va(x) + Vc(x) + Vi(x) + Vcor(x) + Vres(x)
where
V_a(x) = \bar{K}(x, x) + \bar{Q}(x,\mathcal{X})\,\bar{K}(\mathcal{X},\mathcal{X})\,\bar{Q}(x,\mathcal{X})^{T} - 2\,\bar{Q}(x,\mathcal{X})\,\bar{K}(\mathcal{X}, x),
V_{cor}(x) = 2\,\mathbb{E}\big[[\Theta_{\theta_0}(x,\mathcal{X}) - \bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})]\,[\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}]\big] \cdot [\bar{K}(\mathcal{X}, x) - \bar{K}(\mathcal{X},\mathcal{X})\bar{Q}(x,\mathcal{X})^{T}]
and Vres(x) = O(h^{-2}), with K̄ the expectation over initializations of the finite-width counterpart of the NNGP kernel.
Several observations can be made. First, the above expression involves only the first and second moments of the empirical, finite-width NTK, as well as the first moment of the NNGP kernel. These terms can be computed analytically in some settings: in Appendix A.4.3 we provide some of the moments for the special case of a one-hidden-layer ReLU network and show that the analytical expressions correspond to the empirical findings.
Second, the decomposition demonstrates the interplay of 2 distinct noise sources in the predictive variance:
• Va is the variance associated to the expression in the first line of equation 7. Intuitively, it is the finite width counterpart of the predictive variance of the infinite width model (equation 6), as it assumes the NTK is deterministic. The variance stems entirely from the functional noise at initialization and converges to the infinite width predictive variance as the width increases.
• Vc and Vi stem from the second line of equation 7. Vc is a first-order approximation of the predictive variance of a linearly trained network with pure kernel noise, without functional noise i.e. Vc ≈ V[Qθ0(x,X )Y]. On the other hand, Vi depends on the interplay between the 2 noises, and can be identified as the predictive variance of a deep ensemble with a deterministic NTK Θ̄ and a new functional prior g(x) = Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0). Intuitively, this new functional prior can be seen as a data-specific inductive bias on the NTK formulation of the predictive variance (see Appendix A.3.1 for more details).
• Vcor is a covariance term between the 2 terms in equation 7 and also contains the correlation terms between Θθ0 and f(x, θ0). In general, its analytical expression is challenging to obtain as it requires the 4th moments of the finite width NNGP kernel fluctuation. Here, we provide its expression under the same simplifying assumption that the noise in Θθ0 is decorrelated from f(x, θ0). We therefore do not attempt to describe it in general, and focus in our empirical Section on the terms that are tractable and can be easily isolated for practical purposes.
Each of Vc,Vi,Vcor decay in O(h−1), which, together with Va, provide a first-order approximation of the predictive variance of f lin(x). Note that Va and Vc are of particular interest, as removing either the kernel or the functional noise at initialization will collapse the predictive variance of the trained ensemble to either one of these 2 terms.
2.2 Predictive distribution of standard deep ensemble of large width
An important question at this point is to which extent our analysis for linearly trained models applies to a fully and non-linearly trained deep ensemble. Indeed, if the discrepancy between the predictive variance of a linearly trained ensemble and its non-linear counterpart is of a larger order of magnitude than the higher-order correction in the variance term, the latter can be ’erased’ by training. Building on top of previous work, we show that, under the assumption of an empirically supported conjecture [38], for one hidden layer networks trained on the Mean Squared Error (MSE) loss, this discrepancy is asymptotically dominated by the refined predictive variance terms of the linearly trained ensemble we described in Section 2.1.
Proposition 2.3. Let f be a neural network with identical width of all hidden layers, h1 = h2 = ... = hL−1 = h, and such that the derivative of the non-linearity ϕ′ is bounded and Lipschitz continuous on R. Let the training data (X ,Y) be contained in some compact set, such that the NTK of f on X is invertible. Let f_t (resp. f^lin_t) be the model (resp. linearized model) trained on the MSE loss with gradient flow at timestep t with some learning rate. Assuming
\sup_t \|\Theta_{\theta_0} - \Theta_{\theta_t}\|_F = O\!\left(\frac{1}{h}\right) \qquad (8)
Then, ∀x, ∀δ > 0,∃C,H : ∀h > H ,
P\!\left[\sup_t \|f^{lin}_t(x) - f_t(x)\|_2 \le \frac{C}{h}\right] \ge 1 - \delta. \qquad (9)
In particular, for one hidden layer networks, after training,
\big|\hat{V}(f(x)) - \hat{V}(f^{lin}(x))\big| = O_p\!\Big(\hat{V}\big[\,[Q_{\theta_0}(x,\mathcal{X}) - \bar{Q}(x,\mathcal{X})](\mathcal{Y} - f(\mathcal{X}, \theta_0))\,\big]\Big) \qquad (10)
where V̂ denotes the empirical variance with some fixed sample size.
The proof can be found in Appendix A.1.1. While only the bound sup_t ∥Θθ0 − Θθt∥_F = O(1/√h) has been proven in previous works [25], many empirical studies including those in the present work (see Appendix Fig. 5, Table 3) have shown that the bound decreases faster in practice, on the order of O(h^{-1}) [25, 38]. Note that this result suggests the approximation provided in Proposition 2.2 is as good as it gets for describing the predictive variance of non-linearly trained ensembles: the higher order terms would be of a smaller order of magnitude than the non-linear correction to the training, rendering any finer approximation pointless.
3 Disentangling deep ensemble variance in practice
The goal of this Section is to validate our theoretical findings in experiments. First, we aim to show qualitatively and quantitatively that the variance of linearly trained neural networks is well approximated by the decomposition introduced in Proposition 2.2. To do so, we investigate ensembles of linearly trained models and analyze their behavior in toy models and on common computer vision classification datasets. We then extend our analyses to fully-trained non-linear deep neural networks optimized with (stochastic) gradient descent in parameter space. Here, we confirm empirically the strong influence of the variance description of linearly trained models in these less restrictive settings while being trained to very low training loss. Therefore we showcase the improved understanding of deep ensembles through their linearly trained counterpart and highlight the practical relevance of our study by observing significant OOD detection performance differences of models when removing noise sources in various settings.
3.1 Disentangling noise sources in kernel models
To isolate the different terms in Proposition 2.2, we construct, from a given initialization θ0 with the associated linearized model f lin, three additional linearly trained models:
f^{lin\text{-}c}(x) = Q_{\theta_0}(x,\mathcal{X})\,\mathcal{Y}
f^{lin\text{-}a}(x) = f(x, \theta_0) + \bar{Q}(x,\mathcal{X})\,(\mathcal{Y} - f(\mathcal{X}, \theta_0))
f^{lin\text{-}i}(x) = g(x, \theta_0) + \bar{Q}(x,\mathcal{X})\,(\mathcal{Y} - g(\mathcal{X}, \theta_0))
where g(x, θ0) = Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0). Note that the predictive variance over random initialization of these functions corresponds to respectively Vc,Va,Vi as defined in Section 2.1. As one can see, we can simply remove the initialization noise from f lin by subtracting the initial (noisy) function f(x, θ0) before training resulting in a centered model f lin-c. Equivalently, we can remove noise that originates from the kernel by using the empirical average over kernels resulting in model f lin-a. Finally, we can isolate f lin-i by the same averaging trick as in f lin-a but use as functional noise g(x, θ0) which can be precomputed and added to f lin-c before training. Note that we neglect the terms involving covariance terms and focus on the parts which are easy to isolate, for linearly trained as well as for standard models. This will later allow us to study practical ways to subtract important parts of the predictive distribution for neural networks leading for example to significant OOD detection performance differences. Now we explore the differences and similarities of these disentangled functions and their respective predictive distributions.
3.1.1 Visualizations on a star-shaped toy dataset
To qualitatively visualize the different terms, we construct a two-way star-shaped regression problem on a 2d-plane depicted in Figure 1. After training an ensemble we visualize its predictive variance on the input space. Our first goal is to visualize qualitative differences in the predictive variance of ensembles consisting of f lin and the 3 disentangled models from above. We train a large ensemble of size 300 where each model is a one-layer ReLU neural network with hidden dimension 512 and 1 hidden layer. As suggested analytically for one hidden layer ReLU networks (see Appendix A.4.3), for example V[f lin-c(x)] depends on the angle of the datapoints while V[f lin(x)] depicts a superposition of the 3 isolated variances. While the ReLU activation does not satisfy the Lipschitzcontinuity assumption of Proposition 2.3, we use it to illustrate and validate our analytical description of the inductive biases induced by the different variance terms. We use the Softplus activation which behaved similarly to ReLU in the experiments in the next Section.
3.1.2 Disentangling linearly trained / kernel ensembles for MNIST and CIFAR10
Next, we move to a quantitative analysis of the asymptotic behavior of the various variance terms, as we increase the hidden layer size. In Figure 2, we analyze the predictive variance of the kernel models based on MLPs and Convolutional Neural Networks (CNN) for various depths and widths and on subsets of MNIST [39] and CIFAR10. As before, we construct a binary classification task through a MSE loss with dataset size of N = 100 and confirm, shown in Figure 2, that Vc, Vi decay by 1/h over all of our experiments. Crucially, we see that they contribute to the overall variance V even for relatively large widths. We further observe a decay in 1/h2 of the residual term as predicted by Proposition 2.2. As in all of our experiments, the variance magnitude and therefore the influence on f lin of the disentangled parts is highly architecture and dataset-dependent. Note that the small size of the datasets comes from the necessity to compute the inverse of the kernels for every ensemble member, see Appendix B for a additional analysis on larger datasets and scaling plots of Vcor. In Table 1, we quantify the previously observed qualitative difference of the various predictive variances by evaluating their performance on out-of-distribution detection tasks, where high predictive variance is used as a proxy for detecting out-of-distribution data. We focus our attention on analysing V[f lin-c(x)] and V[f lin-a(x)], as they are the variance terms containing purely the functional and kernel noise, respectively. As an evaluation metric, we follow numerous studies and compute the area under the receiver operating characteristics curve (AUROC, c.f. Appendix B). We fit a linearized ensemble on a larger subset of the standard 10-way classification MNIST and CIFAR10 datasets using MSE loss. When training our ensembles on MNIST, we test and average the OOD detection performance on FashionMNIST (FM) [40], E-MNIST (EM) [41] and K-MNIST (KM) [42]. When training our ensembles on CIFAR10, we compute the AUROC for SVHN [43], LSUN [44], TinyImageNet (TIN)
and CIFAR100 (C100), see Appendix Table 4 for the variance magnitude and AUROC values for all datasets.
The results show significant differences in variance magnitude and AUROC values. While we do not claim competitive OOD performance, we aim to highlight the differences in behavior of the isolated functions developed above: we see for instance that for (MLP, MNIST, N=1000), f lin-a generally performs better than f lin in OOD detection. Indeed, the overall worse performance of V[f lin-c(x)] seems to be affecting that of V[f lin(x)] which contains both terms. On the other hand, we see that for the setup (CNN, CIFAR10, N=1000) V[f lin(x)] is not well described by this interpolation argument, which highlights the influence of the other variance terms described in Proposition 2.2. Furthermore, the OOD detection capabilities of each function seem to be highly dependent on the particular data considered: Ensembles of f lin-c are relatively good at identifying SVHN data as OOD, while being poor at identifying LSUN and iSUN data. These observations highlight the particular inductive bias of each variance term for OOD detection on different datasets.
We further report the test set generalization of the ensemble mean of different functions, highlighting the diversity in the predictive mean of these models as well. Note that for N >= 1000 we trained the ensembles in linear fashion with gradient flow (which coincides with the kernel expression) up until the MSE training error was smaller than 0.01.
3.2 Does the refined variance description generalize to standard gradient descent in practice?
In this Section, we start with empirical verification of Proposition 2.3 and show that the bound in equation 10 holds in practice. Given this verification, we then propose equivalent disentangled models as those previously defined but in the non-linear setting, and 1) show significant differences in their predictive distribution but also 2) investigate to which extent improvements in OOD detection translate from kernel / linearly to fully non-linearly trained models. We stress that we do not consider early stopped models and aim to connect the kernel with the gradient descent models faithfully.
3.2.1 Survival of the kernel noise after training
To validate Proposition 2.3, we first introduce f gd(x) = f(x, θ_t), a model trained with standard gradient descent for t steps, i.e. θ_t = θ_0 − Σ_{i=0}^{t−1} η ∇θf(X , θ_i)(Y − f(X , θ_i)). To empirically verify
Proposition 2.3, we introduce the following ratio
R(f) = \exp\left( \mathbb{E}_{x\sim\mathcal{X}'}\left( \log \frac{\|\hat{V}[f^{lin}(x)] - \hat{V}[f^{gd}(x)]\|}{\|\hat{V}_c(x) + \hat{V}_i(x)\|} \right) \right) \qquad (11)
where the empirical variances are computed over random initialization, and the expectation over some data distribution which we choose to be the union of the test-set and the various OOD datasets. Given a datapoint x, the term inside the log measures the ratio between the discrepancy of the variance between the linearized and non-linear ensemble, against the refined variance terms. R(f) is then the geometric mean of this ratio over the whole dataset. Proposition 2.3 predicts that the ratio remains bounded as the width increases, suggesting that the refined terms contribute to the final predictive variance of the non-linear model in a non negligible manner. We empirically verify this prediction for various depths in Fig. 3 and Appendix Figure 6, for functions trained on subsets MNIST and CIFAR10. Note that for all our experiments we also empirically verify the assumption from Proposition 2.3 (see Appendix Figure 5, Table 3).
3.2.2 Disentangling noise sources in gradient descent non-linear models
Motivated by the empirical verification of Proposition 2.3, we now aim to isolate different noise sources in non-linear models trained with gradient descent. Starting from a non-linear network f gd, we follow the same strategy as before and silence the functional initialization noise by centering the network (referred as f gd-c(x)) by simply subtracting the function at initialization. On the other hand, we remove the kernel noise with a simple trick: We first sample a random weight θc0 once, and use it as the weight initialization for all ensemble members. A function noise is added by first removing the function initialization from θc0, and adding that of a second random network which is not trained. The
resulting functions (referred to as f gd-a(x)) will induce an ensemble whose members differ only in their functional initialization while having the same Jacobian
f^{gd\text{-}c}(x) = f(x, \theta_t) - f(x, \theta_0), \qquad f^{gd\text{-}a}(x) = f(x, \theta^{c}_{t}) - f(x, \theta^{c}_{0}) + f(x, \theta_0).
We furthermore introduce f gd-i(x), the non-linear counterpart to f lin-i(x), which we construct similarly to f gd-a(x) but using g(x, θ0, θc0) = Θθ0(x,X )Θθc(X ,X )−1f(X , θ0) as the function initialization instead of f(x, θ0) (see Section 2.1 and Appendix A.3.1 for the justification). Unlike f gd-a and f gd-c, constructing f gd-i requires the inversion of large matrices due to the way g is defined, a challenging task for realistic settings. While its practical use is thus limited, we introduce it to illustrate the correspondence between the predictive variance of linearized and non-linear deep ensembles.
Given these simple modifications of f gd, we rerun the experiments conducted for the linearly trained models for moderate dataset sizes (N=1000). We observe close similarities in the OOD detection capabilities as well as predictive variance between the introduced non-linearly trained ensembles and their linearly trained counterparts. We further train these models on the full MNIST dataset (N=50000) for which we show the same trend in Appendix Table 5. We also include the ensemble’ performance when trained on the full CIFAR10 dataset. Intriguingly, the relative performance of the ensemble is somewhat preserved in both settings between N=1000 and N=50000, even when training with SGD, promoting the use of quick, linear training on subset of data as a proxy for the OOD performance of a fully trained deep ensemble.
Similar to the case of (MLP, MNIST, N=1000/50000), we observe that f gd ensemble performance is an interpolation of f gd-c and f gd-a which interestingly performs often favorably, on different OOD data. To understand if the noise introduced by SGD impacts the predictive distribution of our disentangled ensembles, we compared the behavior of f gd and f sgd in the lower data regime
of N = 1000. Intriguingly, we show in Appendix Table 6 that no significant empirical difference between GD and SGD models can be observed and hypothesize that noise sources discussed in this study are more important in our approximately linear training regimes. To speed up experiments we used (S)GD with momentum (0.9) in all experiments of this subsection.
3.2.3 Removing noise of models possibly far away from the linear regime
Finally, we investigate the OOD performance of the previously introduced model variants f sgd, f sgd-c and f sgd-a in more realistic settings. To do so we train the commonly used WideResNet 28-10 [45] on CIFAR10 with BatchNorm [46] layers and cross-entropy (CE) loss with a batch size of 128, without data augmentation (see Table 3.2.3). These network and training algorithm choices are considered crucial to achieving state-of-the-art performance, superior to that of their linearly trained counterparts. Strikingly, we notice that our model variants, which each isolate a different initial noise source, significantly affect the OOD capabilities of the final models even when the training loss is virtually 0, as in all of our experiments. This indicates that the discussed noise sources influence the ensemble's predictive variance long throughout training. We provide similar results for CIFAR100 and FashionMNIST in Appendix B. We stress that we do not claim that our theoretical assumptions hold in this setup.
4 Conclusion
The generalization on in- and out-of-distribution data of deep neural network ensembles is poorly understood. This is particularly worrying since deep ensembles are widely used in practice when trying to assess whether data is out-of-distribution. In this study, we try to provide insights into the sources of noise stemming from initialization that influence the predictive distribution of trained deep ensembles. By focusing on large-width models we are able to characterize two distinct sources of noise and describe an analytical approximation of the predictive variance in some restricted settings. We then show theoretically and empirically how parts of this refined predictive variance description in the linear training regime survive and impact the predictive distribution of non-linearly trained deep ensembles. This allows us to extrapolate insights from the tractable linearly trained deep ensembles into the non-linear regime, which can lead to improved out-of-distribution detection of deep ensembles by eliminating potentially unfavorable noise sources. Although our theoretical analysis relies on closeness to linear gradient descent, which has been shown to result in less powerful models in practice, we hope that our surprising empirical success of noise disentanglement sparks further research into using the lens of linear gradient descent to understand the mysteries of deep learning.
Acknowledgments and Disclosure of Funding
Seijin Kobayashi was supported by the Swiss National Science Foundation (SNF) grant CRSII5_173721. Pau Vilimelis Aceituno was supported by the ETH Postdoctoral Fellowship program (007113). Johannes von Oswald was funded by the Swiss Data Science Center (J.v.O. P18-03). We thank Christian Henning, Frederik Benzing and Yassir Akram for helpful discussions. Seijin Kobayashi and Johannes von Oswald are grateful for Angelika Steger’s and João Sacramento’s overall support and guidance. | 1. What is the focus and contribution of the paper regarding predictive variance in neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis and empirical validation?
3. Do you have any questions regarding the paper's experimental design, results, or conclusions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential biases in the paper's approach or findings that should be acknowledged or addressed? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This work is focused on decomposing the sources of variance in neural network predictions - the predictive variance of neural networks is an important quantity that allows for neural ensembles (and other approximate bayesian models) to detect OOD samples without seeing OOD data. However, this quantity is poorly understood - this work proposes to take a close, principled look at neural network predictive variance by studying linear neural networks. Linear neural network predictions can be represented via a kernel expression, allowing a theoretical decoupling of model predictions and (noisy) gradient descent.
The work largely comprises 2 main sections. The first section (section 2) contains the main theoretical results of the work, and begins by introducing linear neural networks and infinitely wide neural networks. The work then builds up a theoretical decomposition of variance terms in the predictive distribution of an ensemble of neural networks. To do this, the authors begin by analyzing an ensemble of infinitely wide neural networks, whose kernels are deterministic, demonstrating that its predictive variance can be explained by the functional noise over different initializations. The authors then extend this analysis to finite width linear networks, for one and more layers - under some assumptions, the authors show that the predictive variance of finite-width linear networks can be described largely by two noise sources: the functional noise as described above, and the kernel noise stemming from the variance over different kernels in the finite-width regime. Finally, the authors provide evidence that the discrepancy between a linearly trained model and its non-linear, gradient descent counterpart can be bounded on the order of its layer width, justifying the approximation of non-linear ensembles with linear models.
The second section (section 3) validates the authors' hypothesis via a set of empirical experiments. The first set of experiments utilizes tractable linear models for which distinct components of predictive variance can be explicitly removed. Here the authors use a toy dataset to provide an intuitive visualization of how each isolated component of the variance behaves. Next the authors study 2 real-world tasks, CIFAR-10 and MNIST, using a more complex linear model (both a CNN and MLP model), examining both ensemble performance and ensemble OOD detection rates. They find that different noise components have different levels of OOD detection - in some cases, the model with only kernel noise can achieve comparable or better performance in OOD detection than the full model. They then extend their empirical analysis to non-linear models - using a clever trick around model initialization, the authors derive two variants of gradient-descent models which isolate either functional initialization noise or initial kernel noise. They find that the isolated components of predictive variance in neural ensembles trained with (S)GD mirror those of the linearly trained models on OOD detection performance. Finally, the authors train fully modern neural networks, using residual connections, batch-norm, and cross-entropy loss on CIFAR-10, leveraging the same techniques to isolate each noise source. They show that, in this setting, isolating noise sources can have a significant effect on OOD performance - in particular, centered models, which remove initialization noise, outperform standard models.
Strengths And Weaknesses
Strengths:
The predictive variance of neural networks, and its utility for OOD detection, is of broad interest to the field.
Leveraging linear networks (and their kernels) to get a theoretical intuition about the different sources of variance in neural ensembles is a clever and novel approach to understanding the sources of variance, and their effects.
The theoretical analysis provided in this work is very interesting, and provides the foundation for future work aiming to understand predictive variance in neural models.
The final empirical result is relatively strong - namely, they show that a particular source of noise (kernel noise) is more effective at identifying OOD examples than standard neural models in modern neural architectures trained with SGD.
Weaknesses:
The biggest weakness of this work lies in the empirical analysis and the takeaways from this analysis:
Overall, the benefits of each isolated noise source are not consistent. It is not clear what effect each noise source has on OOD detection - in some cases isolated kernel noise outperforms isolated functional noise, and in other cases the opposite occurs.
The "big takeaway" of the role of each noise source in OOD detection largely seems to rely on the single result of the fully fleshed out neural network on CIFAR-10. Only in this model is there a clear story on the importance of isolated kernel noise.
In other words, the paper cannot empirically show us why the proposed decomposition is practically useful other than that it has some effect on OOD detection.
Lastly, I think the structure of the paper could be improved:
For instance, much of the theoretical analysis lacks intuition around key quantities (e.g. the matrix Q is referenced all over the paper, and it would help the readability of the formulas to provide some intuition as to what this matrix represents).
Figures and tables are placed distant from their corresponding sections - additionally, sometimes the corresponding figure or table is not referenced in the main text (e.g. Table 1 is not referenced in section 3.2.2).
Questions
The experiment in Figure 1 is a nice visualization, but the intuition it provides is not tied into later results - why is the distance or angle between datapoints a useful interpretation of functional or kernel noise? If it is not relevant to the next set of experiments due to data dependency then what is the toy experiment showing us?
Limitations
yes |
NIPS | Title
Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
Abstract
Identifying unfamiliar inputs, also known as out-of-distribution (OOD) detection, is a crucial property of any decision making process. A simple and empirically validated technique is based on deep ensembles where the variance of predictions over different neural networks acts as a substitute for input uncertainty. Nevertheless, a theoretical understanding of the inductive biases leading to the performance of deep ensemble’s uncertainty estimation is missing. To improve our description of their behavior, we study deep ensembles with large layer widths operating in simplified linear training regimes, in which the functions trained with gradient descent can be described by the neural tangent kernel. We identify two sources of noise, each inducing a distinct inductive bias in the predictive variance at initialization. We further show theoretically and empirically that both noise sources affect the predictive variance of non-linear deep ensembles in toy models and realistic settings after training. Finally, we propose practical ways to eliminate part of these noise sources leading to significant changes and improved OOD detection in trained deep ensembles.
1 Introduction
Modern artificial intelligence uses intricate deep neural networks to process data, make predictions and take actions. One of the crucial steps toward allowing these agents to act in the real world is to incorporate a reliable mechanism for estimating uncertainty – in particular when human lives are at risk [1, 2]. Although the ongoing success of deep learning is remarkable, the increasing data, model and training algorithm complexity make a thorough understanding of their inner workings increasingly difficult. This applies when trying to understand when and why a system is certain or uncertain about a given output and is therefore the topic of numerous publications [3–10].
Principled mechanisms for uncertainty quantification would rely on Bayesian inference with an appropriate prior. This has led to the development of (approximate) Bayesian inference methods for deep neural networks [11–15]. Simply aggregating an ensemble of models [16] and using the disagreement of their predictions as a substitute for uncertainty has gained popularity. However, the theoretical justification of deep ensembles remains a matter of debate, see Wilson and Izmailov [17]. Although a link between Bayesian inference and deep ensembles can be obtained, see [18, 19], an understanding of the widely adopted standard deep ensemble and its predictive distribution is still missing [20, 21]. Note that even for principled Bayesian approaches there is no valid theoretical or practical OOD guarantee without a proper definition of out-of-distribution data [22].
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
One avenue to simplify the analyses of deep neural networks that gained a lot of attention in recent years is to increase the layer width to infinity [23, 24] or to very large values [25, 26]. In the former regime, an intriguing equivalence of infinitely wide deep networks at initialization and Gaussian processes allows for exact Bayesian inference and therefore principled uncertainty estimation. Although it is not possible to generally derive a Bayesian posterior for trained infinite or finite layer width networks, the resulting model predictions can be expressed analytically by kernels. Given this favorable mathematical description, the question of how powerful and similar these models are compared to their arguably black-box counterparts arises, with e.g. moderate width, complex optimizers and training stochasticity [25, 27–32].
In this paper, we leverage this tractable description of trained neural networks and take a first step towards understanding the predictive distribution of neural networks ensembles with large but finite width. Building on top of the various studies mentioned, we do so by studying the case where these networks can be described by a kernel and study the effect of two distinct noise sources stemming from the network initialization: The noise in the functional initialization of the network and the initialization noise of the gradient, which affects the training and therefore the kernel. As we will show, these noise sources will affect the predictive distributions differently and influence the network’s generalization on in- and out-of-distribution data.
Our contributions are the following:
• We provide a first order approximation of the predictive variance of an ensemble of linearly trained, finite-width neural networks. We identify interpretable terms in the refined variance description, originating from 2 distinct noise sources, and further provide their analytical expression for single layer neural networks with ReLU non-linearities.
• We show theoretically that under mild assumptions these refined variance terms survive nonlinear training for sufficiently large width, and therefore contribute to the predictive variance of non-linearly trained deep ensembles. Crucially, our result suggests that any finer description of the predictive variance of a linearized ensemble can be erased by nonlinear training.
• We conduct empirical studies validating our theoretical results, and investigate how the different variance terms influence generalization on in - and out-of-distribution. We highlight the practical implications of our theory by proposing simple methods to isolate noise sources in realistic settings which can lead to improved OOD detection.1
2 Neural network ensembles and their relations to kernels
Let $f_\theta = f(\cdot, \theta) : \mathbb{R}^{h_0} \to \mathbb{R}^{h_L}$ denote a neural network parameterized by the weights $\theta \in \mathbb{R}^n$. The weights consist of weight matrices and bias vectors $\{(W^l, b^l)\}_{l=1}^{L}$ describing the following feed-forward computation beginning with the input data $x^0$:
$$z^{l+1} = \frac{\sigma_w}{\sqrt{h_l}} W^{l+1} x^l + b^{l+1} \quad \text{with} \quad x^{l+1} = \phi(z^{l+1}). \tag{1}$$
Here $h_l$ is the dimension of the vector $x^l$ and $\phi$ is a pointwise non-linearity such as the softplus $\log(1 + e^x)$ or the Rectified Linear Unit $\max(0, x)$ (ReLU) [33]. We follow Jacot et al. [24] and use $\sigma_w = \sqrt{2}$ to control the standard deviation of the initialised weights $W^l_{ij}, b^l_i \sim \mathcal{N}(0, 1)$.
Given a set of $N$ datapoints $\mathcal{X} = (x_i)_{0 \le i \le N} \in \mathbb{R}^{N \times h_0}$ and targets $\mathcal{Y} = (y_i)_{0 \le i \le N} \in \mathbb{R}^{N \times h_L}$, we consider regression problems with the goal of finding $\theta^*$ which minimizes the mean squared error (MSE) loss $\mathcal{L}(\theta) = \frac{1}{2}\sum_{i=0}^{N} \|f(x_i, \theta) - y_i\|_2^2$. For ease of notation, we denote by $f(\mathcal{X}, \theta) \in \mathbb{R}^{N \cdot h_L}$ the vectorized evaluation of $f$ on each datapoint and $\mathcal{Y} \in \mathbb{R}^{N \cdot h_L}$ the target vector for the entire dataset. As the widths of the hidden layers grow towards infinity, the distribution of outputs at initialization $f(x, \theta_0)$ converges to a multivariate Gaussian distribution due to the Central Limit Theorem [23]. The resulting function can then accurately be described as a zero-mean Gaussian process, coined the Neural Network Gaussian Process (NNGP), where the covariance of a pair of output neurons $i, j$ for data $x$ and $x'$ is given by the kernel
1Source code for all experiments: github.com/seijin-kobayashi/disentangle-predvar
$$K(x, x')_{i,j} = \lim_{h \to \infty} \mathbb{E}\big[f^i(x, \theta_0)\, f^j(x', \theta_0)\big] \tag{2}$$
with $h = \min(h_1, \ldots, h_{L-1})$. This equivalence can be used to analytically compute the Bayesian posterior of infinitely wide Bayesian neural networks [34].
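For intuition, the kernel in equation 2 can be estimated at finite width by a simple Monte Carlo average over random initializations. The sketch below (our own helper names, a one-hidden-layer ReLU network under the parametrization of equation 1) is meant only as an illustration, not as the paper's implementation.

```python
# Minimal sketch: Monte Carlo estimate of the NNGP kernel K(x, x') for a
# one-hidden-layer ReLU network, averaging f(x, theta_0) f(x', theta_0)
# over many random initializations.
import numpy as np

def init_params(h0, h, rng, sigma_w=np.sqrt(2.0)):
    """Random parameters following the parametrization of equation 1."""
    return {
        "W1": rng.standard_normal((h, h0)), "b1": rng.standard_normal(h),
        "W2": rng.standard_normal((1, h)), "b2": rng.standard_normal(1),
        "sigma_w": sigma_w,
    }

def forward(x, p, h0, h):
    """Scalar output of a one-hidden-layer ReLU network on input x."""
    z1 = p["sigma_w"] / np.sqrt(h0) * p["W1"] @ x + p["b1"]
    x1 = np.maximum(z1, 0.0)  # ReLU
    z2 = p["sigma_w"] / np.sqrt(h) * p["W2"] @ x1 + p["b2"]
    return z2[0]

def mc_nngp_kernel(x, x_prime, h0=2, h=512, n_samples=2000, seed=0):
    """Empirical estimate of E[f(x, theta_0) f(x', theta_0)] over initializations."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_samples):
        p = init_params(h0, h, rng)
        acc += forward(x, p, h0, h) * forward(x_prime, p, h0, h)
    return acc / n_samples

x, x_prime = np.array([1.0, 0.0]), np.array([0.5, 0.5])
print(mc_nngp_kernel(x, x_prime))  # approaches the limiting NNGP value as h and n_samples grow
```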
On the other hand, infinite-width models trained via gradient descent (GD) can be described by the Neural Tangent Kernel (NTK). Given $\theta$, the NTK $\Theta_\theta$ of $f_\theta$ is a matrix in $\mathbb{R}^{N \cdot h_L} \times \mathbb{R}^{N \cdot h_L}$ with the $(i, j)$-entry given as the following dot product
$$\langle \nabla_\theta f(x_i, \theta), \nabla_\theta f(x_j, \theta) \rangle \tag{3}$$
where we consider without loss of generality the output dimension of $f$ to be $h_L = 1$ for ease of notation. Furthermore, we denote by $\Theta_\theta(\mathcal{X}, \mathcal{X}) := \nabla_\theta f(\mathcal{X}, \theta)\, \nabla_\theta f(\mathcal{X}, \theta)^T$ the matrix and by $\Theta_\theta(x', \mathcal{X}) := \nabla_\theta f(x', \theta)\, \nabla_\theta f(\mathcal{X}, \theta)^T$ the vector form of the NTK, highlighting the dependencies on different datapoints.
Lee et al. [25] showed that for sufficiently wide networks under common parametrizations, the gradient descent dynamics of the model with a sufficiently small learning rate behaves closely to its linearly trained counterpart, i.e. its first-order Taylor expansion in parameter space. In this gradient flow regime, after training on the mean squared error converges, we can rewrite the predictions of the linearly trained models in the following closed-form:
$$f^{\text{lin}}(x) = f(x, \theta_0) + Q_{\theta_0}(x, \mathcal{X})\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) \tag{4}$$
where $Q_{\theta_0}(x, \mathcal{X}) := \Theta_{\theta_0}(x, \mathcal{X})\, \Theta_{\theta_0}(\mathcal{X}, \mathcal{X})^{-1}$ with $\Theta_{\theta_0}$ the NTK at initialization, i.e. of $f(\cdot, \theta_0)$. The linearization error throughout training, $\sup_{t \ge 0} \|f^{\text{lin}}_t(x) - f_t(x)\|$, is further shown to decrease with the width of the network, bounded by $O(h^{-\frac{1}{2}})$. Note that one can also linearize the dynamics without increasing the width of a neural network but by simply changing its output scaling [26].
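The following sketch (our own helper names; Jacobians via finite differences purely for illustration, an autodiff framework would be used in practice) shows how the empirical NTK of equation 3 and the closed-form linearized prediction of equation 4 fit together for a small one-hidden-layer network.

```python
# Hedged sketch: empirical NTK via numerical Jacobians w.r.t. a flat parameter
# vector, and the closed-form linearized prediction of equation 4.
import numpy as np

def net(x, theta, h0=2, h=16, sigma_w=np.sqrt(2.0)):
    """One-hidden-layer ReLU network with parameters packed into the flat vector theta."""
    W1 = theta[:h * h0].reshape(h, h0); b1 = theta[h * h0:h * h0 + h]
    W2 = theta[h * h0 + h:h * h0 + 2 * h]; b2 = theta[-1]
    x1 = np.maximum(sigma_w / np.sqrt(h0) * W1 @ x + b1, 0.0)
    return sigma_w / np.sqrt(h) * W2 @ x1 + b2

def jacobian(x, theta, eps=1e-5):
    """Numerical gradient of the scalar output w.r.t. theta (one row of the Jacobian)."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta); d[i] = eps
        grad[i] = (net(x, theta + d) - net(x, theta - d)) / (2 * eps)
    return grad

def linear_prediction(x_test, X, Y, theta0):
    """f_lin(x) = f(x, theta0) + Theta(x, X) Theta(X, X)^{-1} (Y - f(X, theta0))."""
    J_train = np.stack([jacobian(xi, theta0) for xi in X])   # (N, n_params)
    J_test = jacobian(x_test, theta0)                        # (n_params,)
    ntk_train = J_train @ J_train.T                          # Theta(X, X)
    ntk_test = J_test @ J_train.T                            # Theta(x, X)
    f_train = np.array([net(xi, theta0) for xi in X])
    return net(x_test, theta0) + ntk_test @ np.linalg.solve(ntk_train, Y - f_train)

rng = np.random.default_rng(0)
h0, h = 2, 16
theta0 = rng.standard_normal(h * h0 + h + h + 1)
X = rng.standard_normal((5, h0)); Y = rng.standard_normal(5)
print(linear_prediction(np.array([0.3, -0.7]), X, Y, theta0))
```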
When moving from finite to the infinite width limit the training of a multilayer perceptron (MLP) can again be described with the NTK, which now converges to a deterministic kernel Θ∞ [24], a result which extends to convolutional neural networks [27] and other common architectures [35, 36]. A fully trained neural network model can then be expressed as
$$f^{\infty}(x) = f(x, \theta_0) + \Theta_\infty(x, \mathcal{X})\, \Theta_\infty(\mathcal{X}, \mathcal{X})^{-1}\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) \tag{5}$$
where $f(\{\mathcal{X}, x\}, \theta_0) \sim \mathcal{N}\big(0, K(\{\mathcal{X}, x\}, \{\mathcal{X}, x\})\big)$.
2.1 Predictive distribution of linearly trained deep ensembles
In this Section, we study in detail the predictive distribution of ensembles of linearly trained models, i.e. the distribution of f lin(x) given x over random initializations θ0. In particular, for a given data x, we are interested in the mean E[f(x)] and variance V[f(x)] of trained models over random initialization. The former is typically used for the prediction of a deep ensemble, while the latter is used for estimating model or epistemic uncertainty utilized e.g. for OOD detection or exploration. To start, we describe the simpler case of the infinite width limit and a deterministic NTK, which allows us to compute the mean and variance of the solutions found by training easily:
$$\mathbb{E}[f^{\infty}(x)] = Q_\infty(x, \mathcal{X})\, \mathcal{Y},$$
$$\mathbb{V}[f^{\infty}(x)] = K(x, x) + Q_\infty(x, \mathcal{X})\, K(\mathcal{X}, \mathcal{X})\, Q_\infty(x, \mathcal{X})^T - 2\, Q_\infty(x, \mathcal{X})\, K(\mathcal{X}, x) \tag{6}$$
where we introduced $Q_\infty(x, \mathcal{X}) = \Theta_\infty(x, \mathcal{X})\, \Theta_\infty(\mathcal{X}, \mathcal{X})^{-1}$. For finite-width linearly trained networks, the kernel is no longer deterministic, and its stochasticity influences the predictive distribution. Because there is probability mass assigned to the neighborhood of rare events where the NTK kernel matrix is not invertible, the expectation and variance over parameter initialization of the expression in equation 4 diverge to infinity.
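As a small illustration of equation 6 (names and call signature are ours, not taken from the paper's code), the mean and variance of the infinitely wide ensemble can be computed directly from the limiting kernels.

```python
import numpy as np

def infinite_width_mean_var(ntk_xX, ntk_XX, nngp_xx, nngp_xX, nngp_XX, Y):
    """Equation 6 for a single test point: ntk_* are blocks of the limiting NTK,
    nngp_* are blocks of the NNGP kernel K, Y is the (N,) training target vector."""
    Q = np.linalg.solve(ntk_XX, ntk_xX)          # Q_inf(x, X); symmetric NTK assumed
    mean = Q @ Y
    var = nngp_xx + Q @ nngp_XX @ Q - 2.0 * Q @ nngp_xX
    return mean, var
```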
Fortunately, due to the convergence in probability of the empirical NTK to the infinite width counterpart [24], we know these singularities become rarer and ultimately vanish as the width increases to infinity. Intuitively, we should therefore be able to assign meaningful, finite values to these undefined
quantities, which ignores these rare singularities. The delta method [37] in statistics formalizes this intuition, by using Taylor approximation to smooth out the singularities before computing the mean or variance. When the probability mass of the empirical NTK is highly concentrated in a small radius around the limiting NTK, the expression 4 is roughly linear w.r.t the NTK entries. Given this observation, we prove (see Appendix A.2) the following result, and justify that the obtained expression is informative of the empirical predictive mean and variance of deep ensembles. Rewriting equation 4 into
$$f^{\text{lin}}(x) = f(x, \theta_0) + \bar{Q}(x, \mathcal{X})\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) + \big[Q_{\theta_0}(x, \mathcal{X}) - \bar{Q}(x, \mathcal{X})\big]\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) \tag{7}$$
where $\bar{Q}(x, \mathcal{X}) = \bar{\Theta}(x, \mathcal{X})\, \bar{\Theta}(\mathcal{X}, \mathcal{X})^{-1}$ and $\bar{\Theta} = \mathbb{E}(\Theta_{\theta_0})$, we state:

Proposition 2.1. For one-hidden-layer networks parametrized as in equation 1, given an input $x$ and training data $(\mathcal{X}, \mathcal{Y})$, when increasing the hidden layer width $h$, we have the following convergence in distribution over random initialization $\theta_0$:
$$\sqrt{h}\,\big[Q_{\theta_0}(x, \mathcal{X}) - \bar{Q}(x, \mathcal{X})\big]\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) \;\xrightarrow{\text{dist.}}\; Z(x)$$
where $Z(x)$ is a linear combination of 2 Chi-Square distributions, such that
$$\mathbb{V}(Z(x)) = \lim_{h \to \infty} \big(h\,\mathcal{V}_c(x) + h\,\mathcal{V}_i(x)\big)$$
where
$$\begin{aligned}
\mathcal{V}_c(x) ={}& \mathbb{V}\big[\Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y}\big] + \mathbb{V}\big[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y}\big] \\
&- 2\,\mathrm{Cov}\big[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y},\; \Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y}\big], \\
\mathcal{V}_i(x) ={}& \mathbb{V}\big[\Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X},\theta_0)\big] + \mathbb{V}\big[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X},\theta_0)\big] \\
&- 2\,\mathrm{Cov}\big[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X},\theta_0),\; \Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X},\theta_0)\big].
\end{aligned}$$
We omit the dependence of θ0 on the width h for notational simplicity. While the expectation or variance of equation 4 for any finite width is undefined, their empirical mean and variance are with high probability indistinguishable from that of the above limiting distribution (see Lemma A.1). Note that the above proposition assumes the noise in Θθ0 to be decorrelated from f(x, θ0), which can hold true under specific constructions of the network that are of practical interest as we will see in the following (c.f. Appendix A.3.2).
Given Proposition 2.1, we now describe the approximate variance of $f^{\text{lin}}(x)$ for $L = 2$, which we can extend to the general $L > 2$ case using an informal argument (see A.2.2):

Proposition 2.2. Let $f$ be a neural network with identical width of all hidden layers, $h_1 = h_2 = \ldots = h_{L-1} = h$. We assume $\|\Theta_{\theta_0} - \bar{\Theta}\|_F^2 = O_p(\frac{1}{h})$. Then,
$$\mathbb{V}[f^{\text{lin}}(x)] \approx \mathcal{V}_a(x) + \mathcal{V}_c(x) + \mathcal{V}_i(x) + \mathcal{V}_{\text{cor}}(x) + \mathcal{V}_{\text{res}}(x)$$
where
$$\begin{aligned}
\mathcal{V}_a(x) ={}& \bar{K}(x,x) + \bar{Q}(x,\mathcal{X})\bar{K}(\mathcal{X},\mathcal{X})\bar{Q}(x,\mathcal{X})^T - 2\,\bar{Q}(x,\mathcal{X})\bar{K}(\mathcal{X},x), \\
\mathcal{V}_{\text{cor}}(x) ={}& 2\,\mathbb{E}\Big[\big[\Theta_{\theta_0}(x,\mathcal{X}) - \bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\big]\big[\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\big]\Big] \\
&\cdot \big[\bar{K}(\mathcal{X},x) - \bar{K}(\mathcal{X},\mathcal{X})\bar{Q}(x,\mathcal{X})^T\big]
\end{aligned}$$
and $\mathcal{V}_{\text{res}}(x) = O(h^{-2})$, with $\bar{K}$ the expectation over initializations of the finite-width counterpart of the NNGP kernel.
Several observations can be made: First, the above expression only involves the first and second moments of the empirical, finite width NTK, as well as the first moment of the NNGP kernel. These terms can be analytically computed in some settings. We provide in Appendix A.4.3 some of the moments for the special case of a 1-hidden layer ReLU network, and show the analytical expression correspond to empirical findings.
Second, the decomposition demonstrates the interplay of 2 distinct noise sources in the predictive variance:
• Va is the variance associated to the expression in the first line of equation 7. Intuitively, it is the finite width counterpart of the predictive variance of the infinite width model (equation 6), as it assumes the NTK is deterministic. The variance stems entirely from the functional noise at initialization and converges to the infinite width predictive variance as the width increases.
• Vc and Vi stem from the second line of equation 7. Vc is a first-order approximation of the predictive variance of a linearly trained network with pure kernel noise, without functional noise i.e. Vc ≈ V[Qθ0(x,X )Y]. On the other hand, Vi depends on the interplay between the 2 noises, and can be identified as the predictive variance of a deep ensemble with a deterministic NTK Θ̄ and a new functional prior g(x) = Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0). Intuitively, this new functional prior can be seen as a data-specific inductive bias on the NTK formulation of the predictive variance (see Appendix A.3.1 for more details).
• Vcor is a covariance term between the 2 terms in equation 7 and also contains the correlation terms between Θθ0 and f(x, θ0). In general, its analytical expression is challenging to obtain as it requires the 4th moments of the finite width NNGP kernel fluctuation. Here, we provide its expression under the same simplifying assumption that the noise in Θθ0 is decorrelated from f(x, θ0). We therefore do not attempt to describe it in general, and focus in our empirical Section on the terms that are tractable and can be easily isolated for practical purposes.
Each of $\mathcal{V}_c, \mathcal{V}_i, \mathcal{V}_{\text{cor}}$ decays as $O(h^{-1})$, which, together with $\mathcal{V}_a$, provides a first-order approximation of the predictive variance of $f^{\text{lin}}(x)$. Note that $\mathcal{V}_a$ and $\mathcal{V}_c$ are of particular interest, as removing either the kernel or the functional noise at initialization collapses the predictive variance of the trained ensemble to one of these two terms.
2.2 Predictive distribution of standard deep ensemble of large width
An important question at this point is to which extent our analysis for linearly trained models applies to a fully and non-linearly trained deep ensemble. Indeed, if the discrepancy between the predictive variance of a linearly trained ensemble and its non-linear counterpart is of a larger order of magnitude than the higher-order correction in the variance term, the latter can be ’erased’ by training. Building on top of previous work, we show that, under the assumption of an empirically supported conjecture [38], for one hidden layer networks trained on the Mean Squared Error (MSE) loss, this discrepancy is asymptotically dominated by the refined predictive variance terms of the linearly trained ensemble we described in Section 2.1.
Proposition 2.3. Let $f$ be a neural network with identical width of all hidden layers, $h_1 = h_2 = \ldots = h_{L-1} = h$, and such that the derivative of the non-linearity $\phi'$ is bounded and Lipschitz continuous on $\mathbb{R}$. Let the training data $(\mathcal{X}, \mathcal{Y})$ be contained in some compact set, such that the NTK of $f$ on $\mathcal{X}$ is invertible. Let $f_t$ (resp. $f^{\text{lin}}_t$) be the model (resp. linearized model) trained on the MSE loss with gradient flow at timestep $t$ with some learning rate. Assuming
$$\sup_t \|\Theta_{\theta_0} - \Theta_{\theta_t}\|_F = O\Big(\frac{1}{h}\Big) \tag{8}$$
Then, $\forall x, \forall \delta > 0, \exists C, H : \forall h > H$,
$$\mathbb{P}\Big[\sup_t \|f^{\text{lin}}_t(x) - f_t(x)\|_2 \le \frac{C}{h}\Big] \ge 1 - \delta. \tag{9}$$
In particular, for one-hidden-layer networks, after training,
$$|\hat{\mathbb{V}}(f(x)) - \hat{\mathbb{V}}(f^{\text{lin}}(x))| = O_p\Big(\hat{\mathbb{V}}\big[[Q_{\theta_0}(x,\mathcal{X}) - \bar{Q}(x,\mathcal{X})](\mathcal{Y} - f(\mathcal{X},\theta_0))\big]\Big) \tag{10}$$
where V̂ denotes the empirical variance with some fixed sample size.
The proof can be found in Appendix A.1.1. While only the bound $\sup_t \|\Theta_{\theta_0} - \Theta_{\theta_t}\|_F = O(\frac{1}{\sqrt{h}})$ has been proven in previous works [25], many empirical studies, including those in the present work (see Appendix Fig. 5, Table 3), have shown that the bound decreases faster in practice, on the order of $O(h^{-1})$ [25, 38]. Note that this result suggests the approximation provided in Proposition 2.2 is as good as it gets for describing the predictive variance of non-linearly trained ensembles: the higher-order terms would be of a smaller order of magnitude than the non-linear correction to the training, rendering any finer approximation pointless.
3 Disentangling deep ensemble variance in practice
The goal of this Section is to validate our theoretical findings in experiments. First, we aim to show qualitatively and quantitatively that the variance of linearly trained neural networks is well approximated by the decomposition introduced in Proposition 2.2. To do so, we investigate ensembles of linearly trained models and analyze their behavior in toy models and on common computer vision classification datasets. We then extend our analyses to fully-trained non-linear deep neural networks optimized with (stochastic) gradient descent in parameter space. Here, we confirm empirically the strong influence of the variance description of linearly trained models in these less restrictive settings while being trained to very low training loss. Therefore we showcase the improved understanding of deep ensembles through their linearly trained counterpart and highlight the practical relevance of our study by observing significant OOD detection performance differences of models when removing noise sources in various settings.
3.1 Disentangling noise sources in kernel models
To isolate the different terms in Proposition 2.2, we construct, from a given initialization θ0 with the associated linearized model f lin, three additional linearly trained models:
$$f^{\text{lin-c}}(x) = Q_{\theta_0}(x, \mathcal{X})\,\mathcal{Y}$$
$$f^{\text{lin-a}}(x) = f(x, \theta_0) + \bar{Q}(x, \mathcal{X})\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big)$$
$$f^{\text{lin-i}}(x) = g(x, \theta_0) + \bar{Q}(x, \mathcal{X})\big(\mathcal{Y} - g(\mathcal{X}, \theta_0)\big)$$
where $g(x, \theta_0) = \Theta_{\theta_0}(x, \mathcal{X})\,\bar{\Theta}(\mathcal{X}, \mathcal{X})^{-1} f(\mathcal{X}, \theta_0)$. Note that the predictive variance over random initialization of these functions corresponds, respectively, to $\mathcal{V}_c, \mathcal{V}_a, \mathcal{V}_i$ as defined in Section 2.1. As one can see, we can simply remove the initialization noise from $f^{\text{lin}}$ by subtracting the initial (noisy) function $f(x, \theta_0)$ before training, resulting in a centered model $f^{\text{lin-c}}$. Equivalently, we can remove noise that originates from the kernel by using the empirical average over kernels, resulting in the model $f^{\text{lin-a}}$. Finally, we can isolate $f^{\text{lin-i}}$ by the same averaging trick as in $f^{\text{lin-a}}$, but using as functional noise $g(x, \theta_0)$, which can be precomputed and added to $f^{\text{lin-c}}$ before training. Note that we neglect the covariance terms and focus on the parts that are easy to isolate, for linearly trained as well as for standard models. This will later allow us to study practical ways to subtract important parts of the predictive distribution of neural networks, leading for example to significant OOD detection performance differences. We now explore the differences and similarities of these disentangled functions and their respective predictive distributions.
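To make the construction concrete, the following NumPy sketch (our own function and key names, not the paper's code) assembles the four linearized predictions and their empirical variances at a single test point from per-member kernel quantities; the NTK matrices are assumed symmetric.

```python
import numpy as np

def disentangled_variances(members, Y):
    """members: list of dicts per ensemble member with keys
         'ntk_xX' (N,)   empirical NTK between the test point and the train set,
         'ntk_XX' (N, N) empirical NTK on the train set,
         'f0_x'   scalar f(x, theta_0), 'f0_X' (N,) f(X, theta_0).
       Returns the empirical predictive variance of f_lin, f_lin-c, f_lin-a, f_lin-i."""
    theta_bar_XX = np.mean([m["ntk_XX"] for m in members], axis=0)
    theta_bar_xX = np.mean([m["ntk_xX"] for m in members], axis=0)
    Q_bar = np.linalg.solve(theta_bar_XX, theta_bar_xX)        # \bar{Q}(x, X)
    preds = {"lin": [], "lin_c": [], "lin_a": [], "lin_i": []}
    for m in members:
        Q = np.linalg.solve(m["ntk_XX"], m["ntk_xX"])          # Q_{theta_0}(x, X)
        g_x = m["ntk_xX"] @ np.linalg.solve(theta_bar_XX, m["f0_X"])   # g(x, theta_0)
        g_X = m["ntk_XX"] @ np.linalg.solve(theta_bar_XX, m["f0_X"])   # g(X, theta_0)
        preds["lin"].append(m["f0_x"] + Q @ (Y - m["f0_X"]))
        preds["lin_c"].append(Q @ Y)                           # kernel noise only
        preds["lin_a"].append(m["f0_x"] + Q_bar @ (Y - m["f0_X"]))  # functional noise only
        preds["lin_i"].append(g_x + Q_bar @ (Y - g_X))
    return {name: float(np.var(vals)) for name, vals in preds.items()}
```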
3.1.1 Visualizations on a star-shaped toy dataset
To qualitatively visualize the different terms, we construct a two-way star-shaped regression problem on a 2d plane, depicted in Figure 1. After training an ensemble, we visualize its predictive variance over the input space. Our first goal is to visualize qualitative differences in the predictive variance of ensembles consisting of $f^{\text{lin}}$ and the 3 disentangled models from above. We train a large ensemble of size 300 where each model is a ReLU neural network with a single hidden layer of dimension 512. As suggested analytically for one-hidden-layer ReLU networks (see Appendix A.4.3), $\mathbb{V}[f^{\text{lin-c}}(x)]$, for example, depends on the angle of the datapoints, while $\mathbb{V}[f^{\text{lin}}(x)]$ depicts a superposition of the 3 isolated variances. While the ReLU activation does not satisfy the Lipschitz-continuity assumption of Proposition 2.3, we use it to illustrate and validate our analytical description of the inductive biases induced by the different variance terms. We use the Softplus activation, which behaved similarly to ReLU, in the experiments in the next Section.
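The exact star-shaped construction behind Figure 1 lives in the released code; the following is only a plausible stand-in that produces a two-way star-shaped 2D regression problem for similar visualizations.

```python
import numpy as np

def star_dataset(n_per_arm=20, n_arms=8, radius=1.0, seed=0):
    """Hypothetical star-shaped 2D regression data: alternating arms get targets +1 / -1."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for k in range(n_arms):
        angle = 2 * np.pi * k / n_arms
        r = radius * rng.uniform(0.2, 1.0, size=n_per_arm)     # points along the arm
        xs.append(np.stack([r * np.cos(angle), r * np.sin(angle)], axis=1))
        ys.append(np.full(n_per_arm, 1.0 if k % 2 == 0 else -1.0))
    return np.concatenate(xs), np.concatenate(ys)

X, Y = star_dataset()
```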
3.1.2 Disentangling linearly trained / kernel ensembles for MNIST and CIFAR10
Next, we move to a quantitative analysis of the asymptotic behavior of the various variance terms as we increase the hidden layer size. In Figure 2, we analyze the predictive variance of the kernel models based on MLPs and Convolutional Neural Networks (CNNs) for various depths and widths and on subsets of MNIST [39] and CIFAR10. As before, we construct a binary classification task through an MSE loss with a dataset size of N = 100 and confirm, as shown in Figure 2, that $\mathcal{V}_c$ and $\mathcal{V}_i$ decay as 1/h over all of our experiments. Crucially, we see that they contribute to the overall variance even for relatively large widths. We further observe a decay in $1/h^2$ of the residual term, as predicted by Proposition 2.2. As in all of our experiments, the variance magnitude, and therefore the influence of the disentangled parts on $f^{\text{lin}}$, is highly architecture- and dataset-dependent. Note that the small size of the datasets comes from the necessity to compute the inverse of the kernels for every ensemble member; see Appendix B for an additional analysis on larger datasets and scaling plots of $\mathcal{V}_{\text{cor}}$. In Table 1, we quantify the previously observed qualitative difference of the various predictive variances by evaluating their performance on out-of-distribution detection tasks, where high predictive variance is used as a proxy for detecting out-of-distribution data. We focus our attention on analysing $\mathbb{V}[f^{\text{lin-c}}(x)]$ and $\mathbb{V}[f^{\text{lin-a}}(x)]$, as they are the variance terms containing purely the kernel and the functional noise, respectively. As an evaluation metric, we follow numerous studies and compute the area under the receiver operating characteristic curve (AUROC, cf. Appendix B). We fit a linearized ensemble on a larger subset of the standard 10-way classification MNIST and CIFAR10 datasets using the MSE loss. When training our ensembles on MNIST, we test and average the OOD detection performance on FashionMNIST (FM) [40], E-MNIST (EM) [41] and K-MNIST (KM) [42]. When training our ensembles on CIFAR10, we compute the AUROC for SVHN [43], LSUN [44], TinyImageNet (TIN)
and CIFAR100 (C100), see Appendix Table 4 for the variance magnitude and AUROC values for all datasets.
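The AUROC evaluation itself follows the standard recipe; the helper below is a hedged sketch (not the paper's exact script) in which each input's ensemble predictive variance serves as the OOD score, with in-distribution test data labeled 0 and OOD data labeled 1.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(var_in_distribution, var_out_of_distribution):
    """AUROC of predictive variance as an OOD score (higher variance = more OOD)."""
    scores = np.concatenate([var_in_distribution, var_out_of_distribution])
    labels = np.concatenate([np.zeros(len(var_in_distribution)),
                             np.ones(len(var_out_of_distribution))])
    return roc_auc_score(labels, scores)
```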
The results show significant differences in variance magnitude and AUROC values. While we do not claim competitive OOD performance, we aim to highlight the differences in behavior of the isolated functions developed above: we see for instance that for (MLP, MNIST, N=1000), f lin-a generally performs better than f lin in OOD detection. Indeed, the overall worse performance of V[f lin-c(x)] seems to be affecting that of V[f lin(x)] which contains both terms. On the other hand, we see that for the setup (CNN, CIFAR10, N=1000) V[f lin(x)] is not well described by this interpolation argument, which highlights the influence of the other variance terms described in Proposition 2.2. Furthermore, the OOD detection capabilities of each function seem to be highly dependent on the particular data considered: Ensembles of f lin-c are relatively good at identifying SVHN data as OOD, while being poor at identifying LSUN and iSUN data. These observations highlight the particular inductive bias of each variance term for OOD detection on different datasets.
We further report the test set generalization of the ensemble mean of different functions, highlighting the diversity in the predictive mean of these models as well. Note that for N >= 1000 we trained the ensembles in linear fashion with gradient flow (which coincides with the kernel expression) up until the MSE training error was smaller than 0.01.
3.2 Does the refined variance description generalize to standard gradient descent in practice?
In this Section, we start with empirical verification of Proposition 2.3 and show that the bound in equation 10 holds in practice. Given this verification, we then propose equivalent disentangled models as those previously defined but in the non-linear setting, and 1) show significant differences in their predictive distribution but also 2) investigate to which extent improvements in OOD detection translate from kernel / linearly to fully non-linearly trained models. We stress that we do not consider early stopped models and aim to connect the kernel with the gradient descent models faithfully.
3.2.1 Survival of the kernel noise after training
To validate Proposition 2.3, we first introduce $f^{\text{gd}}(x) = f(x, \theta_t)$, a model trained with standard gradient descent for $t$ steps, i.e. $\theta_t = \theta_0 - \sum_{i=0}^{t-1} \eta\, \nabla_\theta f(\mathcal{X}, \theta_i)^T \big(f(\mathcal{X}, \theta_i) - \mathcal{Y}\big)$. To empirically verify
Proposition 2.3, we introduce the following ratio
$$R(f) = \exp\left( \mathbb{E}_{x \sim \mathcal{X}'} \left( \log\left[ \frac{\|\hat{\mathbb{V}}[f^{\text{lin}}(x)] - \hat{\mathbb{V}}[f^{\text{gd}}(x)]\|}{\|\hat{\mathcal{V}}_c(x) + \hat{\mathcal{V}}_i(x)\|} \right] \right) \right) \tag{11}$$
where the empirical variances are computed over random initialization, and the expectation over some data distribution which we choose to be the union of the test-set and the various OOD datasets. Given a datapoint x, the term inside the log measures the ratio between the discrepancy of the variance between the linearized and non-linear ensemble, against the refined variance terms. R(f) is then the geometric mean of this ratio over the whole dataset. Proposition 2.3 predicts that the ratio remains bounded as the width increases, suggesting that the refined terms contribute to the final predictive variance of the non-linear model in a non negligible manner. We empirically verify this prediction for various depths in Fig. 3 and Appendix Figure 6, for functions trained on subsets MNIST and CIFAR10. Note that for all our experiments we also empirically verify the assumption from Proposition 2.3 (see Appendix Figure 5, Table 3).
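A compact sketch of equation 11 follows (our own helper; a small epsilon is added only for numerical safety and is not part of the paper's definition).

```python
import numpy as np

def ratio_R(var_lin, var_gd, var_c, var_i, eps=1e-12):
    """All inputs: arrays of per-datapoint empirical variances over the union of
    the test set and the OOD datasets. Returns the geometric mean of
    |V[f_lin] - V[f_gd]| / (V_c + V_i) over datapoints."""
    log_ratio = np.log(np.abs(var_lin - var_gd) + eps) - np.log(var_c + var_i + eps)
    return np.exp(np.mean(log_ratio))
```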
3.2.2 Disentangling noise sources in gradient descent non-linear models
Motivated by the empirical verification of Proposition 2.3, we now aim to isolate different noise sources in non-linear models trained with gradient descent. Starting from a non-linear network $f^{\text{gd}}$, we follow the same strategy as before and silence the functional initialization noise by centering the network (referred to as $f^{\text{gd-c}}(x)$), i.e. by simply subtracting the function at initialization. On the other hand, we remove the kernel noise with a simple trick: we first sample a random weight $\theta^c_0$ once and use it as the weight initialization for all ensemble members. Functional noise is added by first removing the function initialization from $\theta^c_0$ and adding that of a second random network which is not trained. The resulting functions (referred to as $f^{\text{gd-a}}(x)$) induce an ensemble whose members differ only in their functional initialization while having the same Jacobian
$$f^{\text{gd-c}}(x) = f(x, \theta_t) - f(x, \theta_0), \qquad f^{\text{gd-a}}(x) = f(x, \theta^c_t) - f(x, \theta^c_0) + f(x, \theta_0).$$
We furthermore introduce $f^{\text{gd-i}}(x)$, the non-linear counterpart to $f^{\text{lin-i}}(x)$, which we construct similarly to $f^{\text{gd-a}}(x)$ but using $g(x, \theta_0, \theta^c_0) = \Theta_{\theta_0}(x, \mathcal{X})\,\Theta_{\theta^c_0}(\mathcal{X}, \mathcal{X})^{-1} f(\mathcal{X}, \theta_0)$ as the function initialization instead of $f(x, \theta_0)$ (see Section 2.1 and Appendix A.3.1 for the justification). Unlike $f^{\text{gd-a}}$ and $f^{\text{gd-c}}$, constructing $f^{\text{gd-i}}$ requires the inversion of large matrices due to the way $g$ is defined, a challenging task in realistic settings. While its practical use is thus limited, we introduce it to illustrate the correspondence between the predictive variance of linearized and non-linearly trained deep ensembles.
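A minimal PyTorch sketch of how the centered and shared-initialization variants can be set up is given below. The wrappers and names are our own, not the released implementation, and we assume here that the composite output is also the training objective (matching the linearized counterparts); the paper's repository should be consulted for the exact recipe.

```python
import copy
import torch

class GdCVariant(torch.nn.Module):
    """f_gd-c: a standard member with its (frozen) initial function subtracted,
    silencing the functional initialization noise."""
    def __init__(self, init_model):
        super().__init__()
        self.net = copy.deepcopy(init_model)      # trainable, starts at theta_0
        self.net0 = copy.deepcopy(init_model)     # frozen f(., theta_0)
        for p in self.net0.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        return self.net(x) - self.net0(x)

class GdAVariant(torch.nn.Module):
    """f_gd-a: all members share the trainable initialization theta_0^c, its initial
    function is subtracted, and a fresh frozen random function f(., theta_0) is added,
    so members differ only in the added functional noise (same Jacobian at init)."""
    def __init__(self, shared_init, fresh_init):
        super().__init__()
        self.net = copy.deepcopy(shared_init)     # trainable, starts at theta_0^c
        self.net_c0 = copy.deepcopy(shared_init)  # frozen f(., theta_0^c)
        self.net0 = copy.deepcopy(fresh_init)     # frozen, member-specific f(., theta_0)
        for p in list(self.net_c0.parameters()) + list(self.net0.parameters()):
            p.requires_grad_(False)

    def forward(self, x):
        return self.net(x) - self.net_c0(x) + self.net0(x)
```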
Given these simple modifications of $f^{\text{gd}}$, we rerun the experiments conducted for the linearly trained models for moderate dataset sizes (N=1000). We observe close similarities in the OOD detection capabilities as well as the predictive variance between the introduced non-linearly trained ensembles and their linearly trained counterparts. We further train these models on the full MNIST dataset (N=50000), for which we show the same trend in Appendix Table 5. We also include the ensembles' performance when trained on the full CIFAR10 dataset. Intriguingly, the relative performance of the ensembles is somewhat preserved between N=1000 and N=50000, even when training with SGD, promoting the use of quick, linear training on subsets of data as a proxy for the OOD performance of a fully trained deep ensemble.
Similar to the case of (MLP, MNIST, N=1000/50000), we observe that the $f^{\text{gd}}$ ensemble's performance is an interpolation of $f^{\text{gd-c}}$ and $f^{\text{gd-a}}$, which interestingly often perform favorably on different OOD data. To understand if the noise introduced by SGD impacts the predictive distribution of our disentangled ensembles, we compared the behavior of $f^{\text{gd}}$ and $f^{\text{sgd}}$ in the lower data regime
of N = 1000. Intriguingly, we show in Appendix Table 6 that no significant empirical difference between GD and SGD models can be observed and hypothesize that noise sources discussed in this study are more important in our approximately linear training regimes. To speed up experiments we used (S)GD with momentum (0.9) in all experiments of this subsection.
3.2.3 Removing noise of models possibly far away from the linear regime
Finally, we investigate the OOD performance of the previously introduced model variants f sgd, f sgd-c and f sgd-a in more realistic settings. To do so, we train the commonly used WideResNet 28-10 [45] on CIFAR10 with BatchNorm [46] layers and cross-entropy (CE) loss with a batch size of 128, without data augmentation (see Table 3.2.3). These network and training-algorithm choices are considered crucial for achieving state-of-the-art performance, superior to that of their linearly trained counterparts. Strikingly, we notice that our model variants, each of which isolates a different initial noise source, significantly affect the OOD capabilities of the final models even when the training loss is virtually 0, as in all of our experiments. This indicates that the discussed noise sources influence the ensemble's predictive variance long throughout training. We provide similar results for CIFAR100 and FashionMNIST in the corresponding tables of Appendix B. We stress that we do not claim that our theoretical assumptions hold in this setup.
4 Conclusion
The generalization of deep neural network ensembles on in- and out-of-distribution data is poorly understood. This is particularly worrying since deep ensembles are widely used in practice when trying to assess whether data is out-of-distribution. In this study, we try to provide insights into the sources of noise stemming from initialization that influence the predictive distribution of trained deep ensembles. By focusing on large-width models, we are able to characterize two distinct sources of noise and describe an analytical approximation of the predictive variance in some restricted settings. We then show theoretically and empirically how parts of this refined predictive variance description in the linear training regime survive and impact the predictive distribution of non-linearly trained deep ensembles. This allows us to extrapolate insights from the tractable linearly trained deep ensembles into the non-linear regime, which can lead to improved out-of-distribution detection of deep ensembles by eliminating potentially unfavorable noise sources. Although our theoretical analysis relies on closeness to linear gradient descent, which has been shown to result in less powerful models in practice, we hope that our surprising empirical success of noise disentanglement sparks further research into using the lens of linear gradient descent to understand the mysteries of deep learning.
Acknowledgments and Disclosure of Funding
Seijin Kobayashi was supported by the Swiss National Science Foundation (SNF) grant CRSII5_173721. Pau Vilimelis Aceituno was supported by the ETH Postdoctoral Fellowship program (007113). Johannes von Oswald was funded by the Swiss Data Science Center (J.v.O. P18-03). We thank Christian Henning, Frederik Benzing and Yassir Akram for helpful discussions. Seijin Kobayashi and Johannes von Oswald are grateful for Angelika Steger’s and João Sacramento’s overall support and guidance. | 1. What is the focus of the paper regarding deep learning ensembles?
2. What are the strengths and weaknesses of the proposed approach?
3. Do you have any questions or concerns regarding the experiments and their interpretation?
4. How does the reviewer assess the clarity, quality, significance, and originality of the paper's content?
5. Are there any limitations or potential negative societal impacts associated with the study? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies ensembles of linearly trained deep networks, for which a decomposition of the variance is proposed into two main components:
V_a results from the variance of the initial value f_theta_0
V_c results from the variance of the initial kernel Theta_0 and other components that decay faster as the width of the network is grown.
It is also shown that this decomposition still holds when training one-hidden-layer networks non-linearly, under a stability assumption on the NTK during training. It is conjectured (and empirically somehow observed) that the difference in prediction between linear and non-linear training is small enough for the above decomposition to still be relevant in deeper networks.
Experiments on toy datasets enable visualizing the different sources of variance. Experiments on more difficult (MNIST, CIFAR10) datasets show that conclusions vary across setups. A method is proposed to isolate sources of variance in actual setups.
Strengths And Weaknesses
Originality: To the best of my knowledge (though I am not so familiar with recent literature), the idea of studying ensembles of linearly trained networks is new and sound. The results linking infinite-width models to linearly-trained finite networks are new, in the line of works around the NTK formalism.
Quality: The paper is well organised, the experiments and figures are relevant to the discussion. I had some comments about the experiments (see questions below)
Clarity: The material is well presented. The theoretical results are clearly discussed, as well as the assumptions. I however was not sure what is the main message to get from the experiments on large networks/datasets. This is probably related to the latter, but I did not understand what are the "practical ways to eliminate noisy sources" claimed to be proposed in the abstract.
Significance: The study of ensembles of neural nets is of particular relevance as many recent papers use ensembles of deep nets. The decomposition between variance of the initial function and variance of the initial kernel is of interest for understanding the training dynamics of neural networks.
Questions
it is not clear that the y scale in figure 2 and 5 is a log scale from just looking at the figure (and not mentioned in the text).
in figure 2, contrary to what is mentioned in the text, it looks that the decay of V_c is slightly slower than 1/h (thus suggesting that V_c does not decay in O(1/h)). It is difficult to see because of the log-log scale but it looks to be closer to 1/h^{3/4}
in figure 5 in appendix, I did not understand how many steps of GD were run for this experiment. I am guessing that it uses very few steps since the dataset is very small (N=100), and I am expecting the results to be different on actual setups. What do you think?
Limitations
I did not find a discussion about potential negative societal impact, nor do I think that such a paper should have one.
The limitations of the setups (linearly trained networks, small datasets) are discussed. |
NIPS | Title
Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
Abstract
Identifying unfamiliar inputs, also known as out-of-distribution (OOD) detection, is a crucial property of any decision making process. A simple and empirically validated technique is based on deep ensembles where the variance of predictions over different neural networks acts as a substitute for input uncertainty. Nevertheless, a theoretical understanding of the inductive biases leading to the performance of deep ensemble’s uncertainty estimation is missing. To improve our description of their behavior, we study deep ensembles with large layer widths operating in simplified linear training regimes, in which the functions trained with gradient descent can be described by the neural tangent kernel. We identify two sources of noise, each inducing a distinct inductive bias in the predictive variance at initialization. We further show theoretically and empirically that both noise sources affect the predictive variance of non-linear deep ensembles in toy models and realistic settings after training. Finally, we propose practical ways to eliminate part of these noise sources leading to significant changes and improved OOD detection in trained deep ensembles.
1 Introduction
Modern artificial intelligence uses intricate deep neural networks to process data, make predictions and take actions. One of the crucial steps toward allowing these agents to act in the real world is to incorporate a reliable mechanism for estimating uncertainty – in particular when human lives are at risk [1, 2]. Although the ongoing success of deep learning is remarkable, the increasing data, model and training algorithm complexity make a thorough understanding of their inner workings increasingly difficult. This applies when trying to understand when and why a system is certain or uncertain about a given output and is therefore the topic of numerous publications [3–10].
Principled mechanisms for uncertainty quantification would rely on Bayesian inference with an appropriate prior. This has led to the development of (approximate) Bayesian inference methods for deep neural networks [11–15]. Simply aggregating an ensemble of models [16] and using the disagreement of their predictions as a substitute for uncertainty has gained popularity. However, the theoretical justification of deep ensembles remains a matter of debate, see Wilson and Izmailov [17]. Although a link between Bayesian inference and deep ensembles can be obtained, see [18, 19], an understanding of the widely adopted standard deep ensemble and its predictive distribution is still missing [20, 21]. Note that even for principled Bayesian approaches there is no valid theoretical or practical OOD guarantee without a proper definition of out-of-distribution data [22].
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
One avenue to simplify the analyses of deep neural networks that gained a lot of attention in recent years is to increase the layer width to infinity [23, 24] or to very large values [25, 26]. In the former regime, an intriguing equivalence of infinitely wide deep networks at initialization and Gaussian processes allows for exact Bayesian inference and therefore principled uncertainty estimation. Although it is not possible to generally derive a Bayesian posterior for trained infinite or finite layer width networks, the resulting model predictions can be expressed analytically by kernels. Given this favorable mathematical description, the question of how powerful and similar these models are compared to their arguably black-box counterparts arises, with e.g. moderate width, complex optimizers and training stochasticity [25, 27–32].
In this paper, we leverage this tractable description of trained neural networks and take a first step towards understanding the predictive distribution of neural networks ensembles with large but finite width. Building on top of the various studies mentioned, we do so by studying the case where these networks can be described by a kernel and study the effect of two distinct noise sources stemming from the network initialization: The noise in the functional initialization of the network and the initialization noise of the gradient, which affects the training and therefore the kernel. As we will show, these noise sources will affect the predictive distributions differently and influence the network’s generalization on in- and out-of-distribution data.
Our contributions are the following:
• We provide a first order approximation of the predictive variance of an ensemble of linearly trained, finite-width neural networks. We identify interpretable terms in the refined variance description, originating from 2 distinct noise sources, and further provide their analytical expression for single layer neural networks with ReLU non-linearities.
• We show theoretically that under mild assumptions these refined variance terms survive nonlinear training for sufficiently large width, and therefore contribute to the predictive variance of non-linearly trained deep ensembles. Crucially, our result suggests that any finer description of the predictive variance of a linearized ensemble can be erased by nonlinear training.
• We conduct empirical studies validating our theoretical results, and investigate how the different variance terms influence generalization on in - and out-of-distribution. We highlight the practical implications of our theory by proposing simple methods to isolate noise sources in realistic settings which can lead to improved OOD detection.1
2 Neural network ensembles and their relations to kernels
Let $f_\theta = f(\cdot, \theta) : \mathbb{R}^{h_0} \to \mathbb{R}^{h_L}$ denote a neural network parameterized by the weights $\theta \in \mathbb{R}^n$. The weights consist of weight matrices and bias vectors $\{(W^l, b^l)\}_{l=1}^{L}$ describing the following feed-forward computation beginning with the input data $x^0$:
$$z^{l+1} = \frac{\sigma_w}{\sqrt{h_l}} W^{l+1} x^l + b^{l+1} \quad \text{with} \quad x^{l+1} = \phi(z^{l+1}). \tag{1}$$
Here $h_l$ is the dimension of the vector $x^l$ and $\phi$ is a pointwise non-linearity such as the softplus $\log(1 + e^x)$ or the Rectified Linear Unit $\max(0, x)$ (ReLU) [33]. We follow Jacot et al. [24] and use $\sigma_w = \sqrt{2}$ to control the standard deviation of the initialised weights $W^l_{ij}, b^l_i \sim \mathcal{N}(0, 1)$.
Given a set of $N$ datapoints $\mathcal{X} = (x_i)_{0 \le i \le N} \in \mathbb{R}^{N \times h_0}$ and targets $\mathcal{Y} = (y_i)_{0 \le i \le N} \in \mathbb{R}^{N \times h_L}$, we consider regression problems with the goal of finding $\theta^*$ which minimizes the mean squared error (MSE) loss $\mathcal{L}(\theta) = \frac{1}{2}\sum_{i=0}^{N} \|f(x_i, \theta) - y_i\|_2^2$. For ease of notation, we denote by $f(\mathcal{X}, \theta) \in \mathbb{R}^{N \cdot h_L}$ the vectorized evaluation of $f$ on each datapoint and $\mathcal{Y} \in \mathbb{R}^{N \cdot h_L}$ the target vector for the entire dataset. As the widths of the hidden layers grow towards infinity, the distribution of outputs at initialization $f(x, \theta_0)$ converges to a multivariate Gaussian distribution due to the Central Limit Theorem [23]. The resulting function can then accurately be described as a zero-mean Gaussian process, coined the Neural Network Gaussian Process (NNGP), where the covariance of a pair of output neurons $i, j$ for data $x$ and $x'$ is given by the kernel
1Source code for all experiments: github.com/seijin-kobayashi/disentangle-predvar
$$K(x, x')_{i,j} = \lim_{h \to \infty} \mathbb{E}\big[f^i(x, \theta_0)\, f^j(x', \theta_0)\big] \tag{2}$$
with $h = \min(h_1, \ldots, h_{L-1})$. This equivalence can be used to analytically compute the Bayesian posterior of infinitely wide Bayesian neural networks [34].
On the other hand, infinite-width models trained via gradient descent (GD) can be described by the Neural Tangent Kernel (NTK). Given $\theta$, the NTK $\Theta_\theta$ of $f_\theta$ is a matrix in $\mathbb{R}^{N \cdot h_L} \times \mathbb{R}^{N \cdot h_L}$ with the $(i, j)$-entry given as the following dot product
$$\langle \nabla_\theta f(x_i, \theta), \nabla_\theta f(x_j, \theta) \rangle \tag{3}$$
where we consider without loss of generality the output dimension of $f$ to be $h_L = 1$ for ease of notation. Furthermore, we denote by $\Theta_\theta(\mathcal{X}, \mathcal{X}) := \nabla_\theta f(\mathcal{X}, \theta)\, \nabla_\theta f(\mathcal{X}, \theta)^T$ the matrix and by $\Theta_\theta(x', \mathcal{X}) := \nabla_\theta f(x', \theta)\, \nabla_\theta f(\mathcal{X}, \theta)^T$ the vector form of the NTK, highlighting the dependencies on different datapoints.
Lee et al. [25] showed that for sufficiently wide networks under common parametrizations, the gradient descent dynamics of the model with a sufficiently small learning rate behaves closely to its linearly trained counterpart, i.e. its first-order Taylor expansion in parameter space. In this gradient flow regime, after training on the mean squared error converges, we can rewrite the predictions of the linearly trained models in the following closed-form:
$$f^{\text{lin}}(x) = f(x, \theta_0) + Q_{\theta_0}(x, \mathcal{X})\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) \tag{4}$$
where $Q_{\theta_0}(x, \mathcal{X}) := \Theta_{\theta_0}(x, \mathcal{X})\, \Theta_{\theta_0}(\mathcal{X}, \mathcal{X})^{-1}$ with $\Theta_{\theta_0}$ the NTK at initialization, i.e. of $f(\cdot, \theta_0)$. The linearization error throughout training, $\sup_{t \ge 0} \|f^{\text{lin}}_t(x) - f_t(x)\|$, is further shown to decrease with the width of the network, bounded by $O(h^{-\frac{1}{2}})$. Note that one can also linearize the dynamics without increasing the width of a neural network but by simply changing its output scaling [26].
When moving from finite to the infinite width limit the training of a multilayer perceptron (MLP) can again be described with the NTK, which now converges to a deterministic kernel Θ∞ [24], a result which extends to convolutional neural networks [27] and other common architectures [35, 36]. A fully trained neural network model can then be expressed as
$$f^{\infty}(x) = f(x, \theta_0) + \Theta_\infty(x, \mathcal{X})\, \Theta_\infty(\mathcal{X}, \mathcal{X})^{-1}\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) \tag{5}$$
where $f(\{\mathcal{X}, x\}, \theta_0) \sim \mathcal{N}\big(0, K(\{\mathcal{X}, x\}, \{\mathcal{X}, x\})\big)$.
2.1 Predictive distribution of linearly trained deep ensembles
In this Section, we study in detail the predictive distribution of ensembles of linearly trained models, i.e. the distribution of f lin(x) given x over random initializations θ0. In particular, for a given data x, we are interested in the mean E[f(x)] and variance V[f(x)] of trained models over random initialization. The former is typically used for the prediction of a deep ensemble, while the latter is used for estimating model or epistemic uncertainty utilized e.g. for OOD detection or exploration. To start, we describe the simpler case of the infinite width limit and a deterministic NTK, which allows us to compute the mean and variance of the solutions found by training easily:
$$\mathbb{E}[f^{\infty}(x)] = Q_\infty(x, \mathcal{X})\, \mathcal{Y},$$
$$\mathbb{V}[f^{\infty}(x)] = K(x, x) + Q_\infty(x, \mathcal{X})\, K(\mathcal{X}, \mathcal{X})\, Q_\infty(x, \mathcal{X})^T - 2\, Q_\infty(x, \mathcal{X})\, K(\mathcal{X}, x) \tag{6}$$
where we introduced $Q_\infty(x, \mathcal{X}) = \Theta_\infty(x, \mathcal{X})\, \Theta_\infty(\mathcal{X}, \mathcal{X})^{-1}$. For finite-width linearly trained networks, the kernel is no longer deterministic, and its stochasticity influences the predictive distribution. Because there is probability mass assigned to the neighborhood of rare events where the NTK kernel matrix is not invertible, the expectation and variance over parameter initialization of the expression in equation 4 diverge to infinity.
Fortunately, due to the convergence in probability of the empirical NTK to the infinite width counterpart [24], we know these singularities become rarer and ultimately vanish as the width increases to infinity. Intuitively, we should therefore be able to assign meaningful, finite values to these undefined
quantities, which ignores these rare singularities. The delta method [37] in statistics formalizes this intuition, by using Taylor approximation to smooth out the singularities before computing the mean or variance. When the probability mass of the empirical NTK is highly concentrated in a small radius around the limiting NTK, the expression 4 is roughly linear w.r.t the NTK entries. Given this observation, we prove (see Appendix A.2) the following result, and justify that the obtained expression is informative of the empirical predictive mean and variance of deep ensembles. Rewriting equation 4 into
$$f^{\text{lin}}(x) = f(x, \theta_0) + \bar{Q}(x, \mathcal{X})\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) + \big[Q_{\theta_0}(x, \mathcal{X}) - \bar{Q}(x, \mathcal{X})\big]\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) \tag{7}$$
where $\bar{Q}(x, \mathcal{X}) = \bar{\Theta}(x, \mathcal{X})\, \bar{\Theta}(\mathcal{X}, \mathcal{X})^{-1}$ and $\bar{\Theta} = \mathbb{E}(\Theta_{\theta_0})$, we state:

Proposition 2.1. For one-hidden-layer networks parametrized as in equation 1, given an input $x$ and training data $(\mathcal{X}, \mathcal{Y})$, when increasing the hidden layer width $h$, we have the following convergence in distribution over random initialization $\theta_0$:
$$\sqrt{h}\,\big[Q_{\theta_0}(x, \mathcal{X}) - \bar{Q}(x, \mathcal{X})\big]\big(\mathcal{Y} - f(\mathcal{X}, \theta_0)\big) \;\xrightarrow{\text{dist.}}\; Z(x)$$
where $Z(x)$ is a linear combination of 2 Chi-Square distributions, such that
$$\mathbb{V}(Z(x)) = \lim_{h \to \infty} \big(h\,\mathcal{V}_c(x) + h\,\mathcal{V}_i(x)\big)$$
where
$$\begin{aligned}
\mathcal{V}_c(x) ={}& \mathbb{V}\big[\Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y}\big] + \mathbb{V}\big[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y}\big] \\
&- 2\,\mathrm{Cov}\big[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y},\; \Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\mathcal{Y}\big], \\
\mathcal{V}_i(x) ={}& \mathbb{V}\big[\Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X},\theta_0)\big] + \mathbb{V}\big[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X},\theta_0)\big] \\
&- 2\,\mathrm{Cov}\big[\bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X},\theta_0),\; \Theta_{\theta_0}(x,\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}f(\mathcal{X},\theta_0)\big].
\end{aligned}$$
We omit the dependence of θ0 on the width h for notational simplicity. While the expectation or variance of equation 4 for any finite width is undefined, their empirical mean and variance are with high probability indistinguishable from that of the above limiting distribution (see Lemma A.1). Note that the above proposition assumes the noise in Θθ0 to be decorrelated from f(x, θ0), which can hold true under specific constructions of the network that are of practical interest as we will see in the following (c.f. Appendix A.3.2).
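To make the scaling statement of Proposition 2.1 concrete, the decay of $\mathcal{V}_c$ with width can be probed empirically along the lines of the sketch below; `sample_member` is a hypothetical stand-in (not part of the paper's code) that returns the row vector $Q_{\theta_0}(x, \mathcal{X})$ for one random initialization of the given width.

```python
import numpy as np

def vc_scaling(widths, n_members, Y, sample_member):
    """Empirical V_c(x): variance over members of Q_{theta_0}(x, X) @ Y, per width.
    `sample_member(width, seed)` (hypothetical helper) -> (N,) array Q_{theta_0}(x, X)."""
    variances = []
    for h in widths:
        preds = [sample_member(h, seed) @ Y for seed in range(n_members)]
        variances.append(np.var(preds))
    slope = np.polyfit(np.log(widths), np.log(variances), 1)[0]  # expect roughly -1
    return variances, slope
```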
Given Proposition 2.1, we now describe the approximate variance of $f^{\text{lin}}(x)$ for $L = 2$, which we can extend to the general $L > 2$ case using an informal argument (see A.2.2):

Proposition 2.2. Let $f$ be a neural network with identical width of all hidden layers, $h_1 = h_2 = \ldots = h_{L-1} = h$. We assume $\|\Theta_{\theta_0} - \bar{\Theta}\|_F^2 = O_p(\frac{1}{h})$. Then,
$$\mathbb{V}[f^{\text{lin}}(x)] \approx \mathcal{V}_a(x) + \mathcal{V}_c(x) + \mathcal{V}_i(x) + \mathcal{V}_{\text{cor}}(x) + \mathcal{V}_{\text{res}}(x)$$
where
$$\begin{aligned}
\mathcal{V}_a(x) ={}& \bar{K}(x,x) + \bar{Q}(x,\mathcal{X})\bar{K}(\mathcal{X},\mathcal{X})\bar{Q}(x,\mathcal{X})^T - 2\,\bar{Q}(x,\mathcal{X})\bar{K}(\mathcal{X},x), \\
\mathcal{V}_{\text{cor}}(x) ={}& 2\,\mathbb{E}\Big[\big[\Theta_{\theta_0}(x,\mathcal{X}) - \bar{Q}(x,\mathcal{X})\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\big]\big[\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\Theta_{\theta_0}(\mathcal{X},\mathcal{X})\bar{\Theta}(\mathcal{X},\mathcal{X})^{-1}\big]\Big] \\
&\cdot \big[\bar{K}(\mathcal{X},x) - \bar{K}(\mathcal{X},\mathcal{X})\bar{Q}(x,\mathcal{X})^T\big]
\end{aligned}$$
and $\mathcal{V}_{\text{res}}(x) = O(h^{-2})$, with $\bar{K}$ the expectation over initializations of the finite-width counterpart of the NNGP kernel.
Several observations can be made: First, the above expression only involves the first and second moments of the empirical, finite width NTK, as well as the first moment of the NNGP kernel. These terms can be analytically computed in some settings. We provide in Appendix A.4.3 some of the moments for the special case of a 1-hidden layer ReLU network, and show the analytical expression correspond to empirical findings.
Second, the decomposition demonstrates the interplay of 2 distinct noise sources in the predictive variance:
• Va is the variance associated with the expression in the first line of equation 7. Intuitively, it is the finite width counterpart of the predictive variance of the infinite width model (equation 6), as it assumes the NTK is deterministic. The variance stems entirely from the functional noise at initialization and converges to the infinite width predictive variance as the width increases.
• Vc and Vi stem from the second line of equation 7. Vc is a first-order approximation of the predictive variance of a linearly trained network with pure kernel noise, without functional noise i.e. Vc ≈ V[Qθ0(x,X )Y]. On the other hand, Vi depends on the interplay between the 2 noises, and can be identified as the predictive variance of a deep ensemble with a deterministic NTK Θ̄ and a new functional prior g(x) = Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0). Intuitively, this new functional prior can be seen as a data-specific inductive bias on the NTK formulation of the predictive variance (see Appendix A.3.1 for more details).
• Vcor is a covariance term between the 2 terms in equation 7 and also contains the correlation terms between Θθ0 and f(x, θ0). In general, its analytical expression is challenging to obtain as it requires the 4th moments of the finite width NNGP kernel fluctuation. Here, we provide its expression under the same simplifying assumption that the noise in Θθ0 is decorrelated from f(x, θ0). We therefore do not attempt to describe it in general, and focus in our empirical Section on the terms that are tractable and can be easily isolated for practical purposes.
Each of Vc, Vi, and Vcor decays as O(h−1), which, together with Va, provides a first-order approximation of the predictive variance of f lin(x). Note that Va and Vc are of particular interest, as removing either the kernel or the functional noise at initialization will collapse the predictive variance of the trained ensemble to one of these two terms.
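As a sanity check, the scaling behavior behind Proposition 2.2 can be probed numerically for a one-hidden-layer ReLU network, whose per-parameter gradients are available in closed form. The following is a minimal NumPy Monte Carlo sketch; the widths, data, sample counts and ReLU choice are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

SW = np.sqrt(2.0)

def net_and_grad(x, W, b, v, c):
    """One-hidden-layer ReLU net f(x, θ) and its gradient w.r.t. all parameters."""
    h, d = W.shape
    z = SW / np.sqrt(d) * W @ x + b
    a, act = np.maximum(z, 0.0), (z > 0).astype(float)
    f = SW / np.sqrt(h) * v @ a + c
    dW = np.outer(SW / np.sqrt(h) * v * act, SW / np.sqrt(d) * x)
    grad = np.concatenate([dW.ravel(), SW / np.sqrt(h) * v * act, SW / np.sqrt(h) * a, [1.0]])
    return f, grad

def sample_member(points, width, rng):
    """Initial outputs and empirical NTK for one random initialization."""
    d = points.shape[1]
    W, b = rng.standard_normal((width, d)), rng.standard_normal(width)
    v, c = rng.standard_normal(width), rng.standard_normal()
    outs, grads = zip(*[net_and_grad(p, W, b, v, c) for p in points])
    J = np.vstack(grads)
    return np.array(outs), J @ J.T

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2)); Y = rng.standard_normal(6); x_test = rng.standard_normal(2)
points = np.vstack([X, x_test])                   # last row is the test input
for width in (64, 256, 1024):
    f0s, thetas = zip(*[sample_member(points, width, rng) for _ in range(200)])
    f0s, thetas = np.stack(f0s), np.stack(thetas)
    theta_bar = thetas.mean(0)
    conc = np.mean([np.linalg.norm(T[:6, :6] - theta_bar[:6, :6]) ** 2 for T in thetas])
    q_bar = np.linalg.solve(theta_bar[:6, :6], theta_bar[:6, 6])   # Q̄(x, X) via kernel symmetry
    fluct = [np.linalg.solve(T[:6, :6], T[:6, 6]) @ (Y - f[:6]) - q_bar @ (Y - f[:6])
             for T, f in zip(thetas, f0s)]
    print(width, conc, np.var(fluct))
```

Both printed quantities, the kernel concentration ∥Θθ0 − Θ̄∥²F on the training block and the empirical variance of the fluctuation term in equation 7, should shrink roughly like 1/width, in line with the assumed Op(1/h) concentration and the O(h−1) decay of Vc + Vi.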
2.2 Predictive distribution of standard deep ensemble of large width
An important question at this point is to which extent our analysis for linearly trained models applies to a fully and non-linearly trained deep ensemble. Indeed, if the discrepancy between the predictive variance of a linearly trained ensemble and its non-linear counterpart is of a larger order of magnitude than the higher-order correction in the variance term, the latter can be ’erased’ by training. Building on top of previous work, we show that, under the assumption of an empirically supported conjecture [38], for one hidden layer networks trained on the Mean Squared Error (MSE) loss, this discrepancy is asymptotically dominated by the refined predictive variance terms of the linearly trained ensemble we described in Section 2.1.
Proposition 2.3. Let f be a neural network with identical width of all hidden layers, h1 = h2 = ... = hL−1 = h, and such that the derivative of the non-linearity ϕ′ is bounded and Lipschitz continuous on R. Let the training data (X ,Y) be contained in some compact set, such that the NTK of f on X is invertible. Let ft (resp. f lin_t) be the model (resp. linearized model) trained on the MSE loss with gradient flow at timestep t with some learning rate. Assuming
sup_t ∥Θθ0 − Θθt∥F = O(1/h)   (8)
Then, ∀x, ∀δ > 0,∃C,H : ∀h > H ,
P[ sup_t ∥f lin_t(x) − ft(x)∥2 ≤ C/h ] ≥ 1 − δ.   (9)
In particular, for one hidden layer networks, after training,
|V̂(f(x))− V̂(f lin(x))| = Op(V̂ [ [Qθ0(x,X )− Q̄(x,X )](Y − f(X , θ0)) ] ) (10)
where V̂ denotes the empirical variance with some fixed sample size.
The proof can be found in Appendix A.1.1. While only the bound sup_t ∥Θθ0 − Θθt∥F = O(1/√h) has been proven in previous works [25], many empirical studies including those in the present work (see Appendix Fig. 5, Table 3) have shown that the bound decreases faster in practice, on the order of O(h−1) [25, 38]. Note that this result suggests the approximation provided in Proposition 2.2 is as good as it gets for describing the predictive variance of non-linearly trained ensembles: the higher order terms would be of a smaller order of magnitude than the non-linear correction to the training, rendering any finer approximation pointless.
3 Disentangling deep ensemble variance in practice
The goal of this Section is to validate our theoretical findings in experiments. First, we aim to show qualitatively and quantitatively that the variance of linearly trained neural networks is well approximated by the decomposition introduced in Proposition 2.2. To do so, we investigate ensembles of linearly trained models and analyze their behavior in toy models and on common computer vision classification datasets. We then extend our analyses to fully-trained non-linear deep neural networks optimized with (stochastic) gradient descent in parameter space. Here, we confirm empirically the strong influence of the variance description of linearly trained models in these less restrictive settings, even when the models are trained to very low training loss. We thereby showcase the improved understanding of deep ensembles through their linearly trained counterpart and highlight the practical relevance of our study by observing significant OOD detection performance differences when removing noise sources in various settings.
3.1 Disentangling noise sources in kernel models
To isolate the different terms in Proposition 2.2, we construct, from a given initialization θ0 with the associated linearized model f lin, three additional linearly trained models:
f lin-c(x) = Qθ0(x,X )Y
f lin-a(x) = f(x, θ0) + Q̄(x,X )(Y − f(X , θ0))
f lin-i(x) = g(x, θ0) + Q̄(x,X )(Y − g(X , θ0))
where g(x, θ0) = Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0). Note that the predictive variances over random initialization of these functions correspond to Vc, Va, and Vi respectively, as defined in Section 2.1. As one can see, we can simply remove the initialization noise from f lin by subtracting the initial (noisy) function f(x, θ0) before training, resulting in the centered model f lin-c. Equivalently, we can remove the noise that originates from the kernel by using the empirical average over kernels, resulting in the model f lin-a. Finally, we can isolate f lin-i by the same averaging trick as in f lin-a, but using as functional noise g(x, θ0), which can be precomputed and added to f lin-c before training. Note that we neglect the covariance terms and focus on the parts which are easy to isolate, for linearly trained as well as for standard models. This will later allow us to study practical ways to subtract important parts of the predictive distribution of neural networks, leading, for example, to significant differences in OOD detection performance. Now we explore the differences and similarities of these disentangled functions and their respective predictive distributions.
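For intuition, the three disentangled predictors map onto a few linear solves once a per-member NTK and initial function are available. The sketch below uses synthetic random-feature kernels and Gaussian initial functions as stand-ins for quantities that would, in the paper, come from actual network initializations; it is a schematic illustration, not the paper's implementation.

```python
import numpy as np

def disentangled_preds(theta, theta_bar, f0, Y, n):
    """Per-member test-point predictions of f_lin, f_lin-c, f_lin-a, f_lin-i.

    theta     : (n+1, n+1) empirical NTK of one member (last row/column: test point).
    theta_bar : (n+1, n+1) NTK averaged over members.
    f0        : (n+1,) untrained-member outputs on the train inputs and the test point.
    """
    tXX, tXx = theta[:n, :n], theta[:n, n]
    bXX, bXx = theta_bar[:n, :n], theta_bar[:n, n]
    q = np.linalg.solve(tXX, tXx)          # Q_θ0(x, X) via kernel symmetry
    q_bar = np.linalg.solve(bXX, bXx)      # Q̄(x, X)
    w = np.linalg.solve(bXX, f0[:n])       # Θ̄(X, X)^{-1} f(X, θ0)
    g_x, g_X = tXx @ w, tXX @ w            # g(x, θ0) and g evaluated on the training set
    f_lin = f0[-1] + q @ (Y - f0[:n])
    f_lin_c = q @ Y
    f_lin_a = f0[-1] + q_bar @ (Y - f0[:n])
    f_lin_i = g_x + q_bar @ (Y - g_X)
    return f_lin, f_lin_c, f_lin_a, f_lin_i

# synthetic stand-ins: random-feature kernels and Gaussian initial functions
rng = np.random.default_rng(3)
N, members = 6, 300
Y = rng.standard_normal(N)
thetas = [(lambda A: A @ A.T)(rng.standard_normal((N + 1, 64))) for _ in range(members)]
f0s = [rng.standard_normal(N + 1) for _ in range(members)]
theta_bar = np.mean(thetas, axis=0)
preds = np.array([disentangled_preds(t, theta_bar, f, Y, N) for t, f in zip(thetas, f0s)])
print("V[f_lin], V_c, V_a, V_i ≈", preds.var(axis=0))   # ensemble variances of the four models
```

The ensemble variances of the second, third and fourth outputs play the roles of Vc, Va and Vi in the decomposition above.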
3.1.1 Visualizations on a star-shaped toy dataset
To qualitatively visualize the different terms, we construct a two-way star-shaped regression problem on a 2d-plane, depicted in Figure 1. After training an ensemble, we visualize its predictive variance on the input space. Our first goal is to visualize qualitative differences in the predictive variance of ensembles consisting of f lin and the 3 disentangled models from above. We train a large ensemble of size 300 where each model is a ReLU neural network with a single hidden layer of width 512. As suggested analytically for one hidden layer ReLU networks (see Appendix A.4.3), V[f lin-c(x)], for example, depends on the angle of the datapoints, while V[f lin(x)] depicts a superposition of the 3 isolated variances. While the ReLU activation does not satisfy the Lipschitz-continuity assumption of Proposition 2.3, we use it to illustrate and validate our analytical description of the inductive biases induced by the different variance terms. We use the Softplus activation, which behaved similarly to ReLU, in the experiments of the next Section.
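The exact construction of the star-shaped data is specified by the paper's figure rather than the text; the following is one plausible reconstruction, in which the number of rays, the radii, and the alternating ±1 targets are assumptions made purely for illustration.

```python
import numpy as np

def star_dataset(n_rays=8, n_per_ray=10, r_max=1.0, seed=0):
    """Points on rays from the origin; targets alternate +1 / -1 between neighboring rays."""
    rng = np.random.default_rng(seed)
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    X, Y = [], []
    for k, a in enumerate(angles):
        r = rng.uniform(0.2, r_max, size=n_per_ray)
        X.append(np.stack([r * np.cos(a), r * np.sin(a)], axis=1))
        Y.append(np.full(n_per_ray, 1.0 if k % 2 == 0 else -1.0))
    return np.concatenate(X), np.concatenate(Y)

X, Y = star_dataset()
print(X.shape, Y.shape)   # (80, 2) (80,)
```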
3.1.2 Disentangling linearly trained / kernel ensembles for MNIST and CIFAR10
Next, we move to a quantitative analysis of the asymptotic behavior of the various variance terms as we increase the hidden layer size. In Figure 2, we analyze the predictive variance of the kernel models based on MLPs and Convolutional Neural Networks (CNN) for various depths and widths and on subsets of MNIST [39] and CIFAR10. As before, we construct a binary classification task through an MSE loss with a dataset size of N = 100 and confirm, as shown in Figure 2, that Vc and Vi decay as 1/h over all of our experiments. Crucially, we see that they contribute to the overall variance V even for relatively large widths. We further observe a 1/h² decay of the residual term, as predicted by Proposition 2.2. As in all of our experiments, the variance magnitude, and therefore the influence of the disentangled parts on f lin, is highly architecture- and dataset-dependent. Note that the small size of the datasets comes from the necessity to compute the inverse of the kernels for every ensemble member; see Appendix B for an additional analysis on larger datasets and scaling plots of Vcor. In Table 1, we quantify the previously observed qualitative difference of the various predictive variances by evaluating their performance on out-of-distribution detection tasks, where high predictive variance is used as a proxy for detecting out-of-distribution data. We focus our attention on analysing V[f lin-c(x)] and V[f lin-a(x)], as they are the variance terms containing purely the functional and kernel noise, respectively. As an evaluation metric, we follow numerous studies and compute the area under the receiver operating characteristics curve (AUROC, cf. Appendix B). We fit a linearized ensemble on a larger subset of the standard 10-way classification MNIST and CIFAR10 datasets using the MSE loss. When training our ensembles on MNIST, we test and average the OOD detection performance on FashionMNIST (FM) [40], E-MNIST (EM) [41] and K-MNIST (KM) [42]. When training our ensembles on CIFAR10, we compute the AUROC for SVHN [43], LSUN [44], TinyImageNet (TIN)
and CIFAR100 (C100), see Appendix Table 4 for the variance magnitude and AUROC values for all datasets.
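For completeness, the AUROC computation itself is a one-liner once in-distribution and OOD predictive variances are available; the sketch below assumes scikit-learn is installed and uses synthetic variances purely for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(var_in, var_out):
    """AUROC for separating OOD inputs (label 1) from in-distribution inputs (label 0),
    using the ensemble's predictive variance as the score."""
    labels = np.concatenate([np.zeros(len(var_in)), np.ones(len(var_out))])
    scores = np.concatenate([var_in, var_out])
    return roc_auc_score(labels, scores)

# toy illustration in which OOD variances are larger on average
rng = np.random.default_rng(4)
print(ood_auroc(rng.gamma(2.0, 1.0, size=500), rng.gamma(4.0, 1.0, size=500)))
```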
The results show significant differences in variance magnitude and AUROC values. While we do not claim competitive OOD performance, we aim to highlight the differences in behavior of the isolated functions developed above: we see for instance that for (MLP, MNIST, N=1000), f lin-a generally performs better than f lin in OOD detection. Indeed, the overall worse performance of V[f lin-c(x)] seems to be affecting that of V[f lin(x)] which contains both terms. On the other hand, we see that for the setup (CNN, CIFAR10, N=1000) V[f lin(x)] is not well described by this interpolation argument, which highlights the influence of the other variance terms described in Proposition 2.2. Furthermore, the OOD detection capabilities of each function seem to be highly dependent on the particular data considered: Ensembles of f lin-c are relatively good at identifying SVHN data as OOD, while being poor at identifying LSUN and iSUN data. These observations highlight the particular inductive bias of each variance term for OOD detection on different datasets.
We further report the test set generalization of the ensemble mean of different functions, highlighting the diversity in the predictive mean of these models as well. Note that for N >= 1000 we trained the ensembles in linear fashion with gradient flow (which coincides with the kernel expression) up until the MSE training error was smaller than 0.01.
3.2 Does the refined variance description generalize to standard gradient descent in practice?
In this Section, we start with empirical verification of Proposition 2.3 and show that the bound in equation 10 holds in practice. Given this verification, we then propose equivalent disentangled models as those previously defined but in the non-linear setting, and 1) show significant differences in their predictive distribution but also 2) investigate to which extent improvements in OOD detection translate from kernel / linearly to fully non-linearly trained models. We stress that we do not consider early stopped models and aim to connect the kernel with the gradient descent models faithfully.
3.2.1 Survival of the kernel noise after training
To validate Proposition 2.3, we first introduce f gd(x) = f(x,θt), a model trained with standard gradient descent of t steps i.e. θt = θ0 − ∑t−1 i=0 η∇θf(X , θi)(Y − f(X , θi)). To empirically verify
Proposition 2.3, we introduce the following ratio
R(f) = exp( E_{x∼X ′} [ log( ∥V̂[f lin(x)] − V̂[f gd(x)]∥ / ∥V̂c(x) + V̂i(x)∥ ) ] )   (11)
where the empirical variances are computed over random initialization, and the expectation over some data distribution, which we choose to be the union of the test-set and the various OOD datasets. Given a datapoint x, the term inside the log measures the ratio between the discrepancy in variance between the linearized and non-linear ensembles and the refined variance terms. R(f) is then the geometric mean of this ratio over the whole dataset. Proposition 2.3 predicts that the ratio remains bounded as the width increases, suggesting that the refined terms contribute to the final predictive variance of the non-linear model in a non-negligible manner. We empirically verify this prediction for various depths in Fig. 3 and Appendix Figure 6, for functions trained on subsets of MNIST and CIFAR10. Note that for all our experiments we also empirically verify the assumption from Proposition 2.3 (see Appendix Figure 5, Table 3).
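Concretely, equation 11 reduces to a geometric mean of per-input ratios; a minimal sketch with stand-in variance arrays in place of the empirical quantities:

```python
import numpy as np

def r_ratio(var_lin, var_gd, var_c, var_i, eps=1e-12):
    """Equation 11: geometric mean over inputs of
    ||V[f_lin(x)] - V[f_gd(x)]|| / ||V_c(x) + V_i(x)||."""
    ratio = np.abs(var_lin - var_gd) / (var_c + var_i + eps)
    return np.exp(np.mean(np.log(ratio + eps)))

rng = np.random.default_rng(5)
v_lin, v_gd, v_c, v_i = rng.gamma(2.0, 1.0, size=(4, 1000))   # stand-in variance arrays
print(r_ratio(v_lin, v_gd, v_c, v_i))
```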
3.2.2 Disentangling noise sources in gradient descent non-linear models
Motivated by the empirical verification of Proposition 2.3, we now aim to isolate different noise sources in non-linear models trained with gradient descent. Starting from a non-linear network f gd, we follow the same strategy as before and silence the functional initialization noise by centering the network (referred to as f gd-c(x)), i.e. by simply subtracting the function at initialization. On the other hand, we remove the kernel noise with a simple trick: we first sample a random weight initialization θc0 once and use it as the weight initialization for all ensemble members. Functional noise is then added by first removing the function initialization of θc0 and adding that of a second random network which is not trained. The resulting functions (referred to as f gd-a(x)) will induce an ensemble whose members only differ in their functional initialization while having the same Jacobian:
f gd-c(x) = f(x, θt) − f(x, θ0),
f gd-a(x) = f(x, θct ) − f(x, θc0) + f(x, θ0).
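A minimal NumPy sketch of how the two variants just defined can be produced in practice: f gd-c subtracts each member's initial function after training, while f gd-a trains a single shared initialization θc0 once and re-adds a fresh, untrained member-specific function. The network size, learning rate, and step count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

SW = np.sqrt(2.0)

def init(width, d, rng):
    return [rng.standard_normal((width, d)), rng.standard_normal(width),
            rng.standard_normal(width), float(rng.standard_normal())]

def f(theta, X):
    W, b, v, c = theta
    Z = SW / np.sqrt(X.shape[1]) * X @ W.T + b
    return SW / np.sqrt(W.shape[0]) * np.maximum(Z, 0.0) @ v + c

def train(theta, X, Y, lr=0.01, steps=3000):
    """Full-batch gradient descent on the MSE loss with analytic gradients."""
    W, b, v, c = theta[0].copy(), theta[1].copy(), theta[2].copy(), theta[3]
    for _ in range(steps):
        Z = SW / np.sqrt(X.shape[1]) * X @ W.T + b
        A, act = np.maximum(Z, 0.0), (Z > 0).astype(float)
        err = SW / np.sqrt(W.shape[0]) * A @ v + c - Y            # f(X, θ) − Y
        gv = SW / np.sqrt(W.shape[0]) * A.T @ err
        dZ = SW / np.sqrt(W.shape[0]) * (err[:, None] * act) * v  # backprop through the ReLU layer
        gW = SW / np.sqrt(X.shape[1]) * dZ.T @ X
        W -= lr * gW; b -= lr * dZ.sum(0); v -= lr * gv; c -= lr * err.sum()
    return [W, b, v, c]

rng = np.random.default_rng(6)
X = rng.standard_normal((20, 2)); Y = np.sign(X[:, 0]); x_test = rng.standard_normal((5, 2))
theta_c0 = init(256, 2, rng)                     # shared initialization for f_gd-a
theta_c_t = train(theta_c0, X, Y)                # trained once, reused by every member
f_gd_c, f_gd_a = [], []
for _ in range(30):                              # ensemble members
    theta0 = init(256, 2, rng)
    theta_t = train(theta0, X, Y)
    f_gd_c.append(f(theta_t, x_test) - f(theta0, x_test))                          # centered member
    f_gd_a.append(f(theta_c_t, x_test) - f(theta_c0, x_test) + f(theta0, x_test))  # shared-kernel member
print(np.var(f_gd_c, axis=0), np.var(f_gd_a, axis=0))   # predictive variances at the test points
```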
We furthermore introduce f gd-i(x), the non-linear counterpart to f lin-i(x), which we construct similarly to f gd-a(x) but using g(x, θ0, θc0) = Θθ0(x,X )Θθc(X ,X )−1f(X , θ0) as the function initialization instead of f(x, θ0) (see Section 2.1 and Appendix A.3.1 for the justification). Unlike f gd-a and f gd-c, constructing f gd-i requires the inversion of large matrices due to the way g is defined, a challenging task for realistic settings. While its practical use is thus limited, we introduce it to illustrate the correspondence between the predictive variance of linearized and non-linear deep ensembles.
Given these simple modifications of f gd, we rerun the experiments conducted for the linearly trained models for moderate dataset sizes (N=1000). We observe close similarities in the OOD detection capabilities as well as predictive variance between the introduced non-linearly trained ensembles and their linearly trained counterparts. We further train these models on the full MNIST dataset (N=50000), for which we show the same trend in Appendix Table 5. We also include the ensembles' performance when trained on the full CIFAR10 dataset. Intriguingly, the relative performance of the ensemble is somewhat preserved in both settings between N=1000 and N=50000, even when training with SGD, promoting the use of quick, linear training on subsets of data as a proxy for the OOD performance of a fully trained deep ensemble.
Similar to the case of (MLP, MNIST, N=1000/50000), we observe that the f gd ensemble performance is an interpolation of f gd-c and f gd-a, which interestingly often performs favorably on different OOD data. To understand if the noise introduced by SGD impacts the predictive distribution of our disentangled ensembles, we compared the behavior of f gd and f sgd in the lower data regime of N = 1000. Intriguingly, we show in Appendix Table 6 that no significant empirical difference between GD and SGD models can be observed and hypothesize that the noise sources discussed in this study are more important in our approximately linear training regimes. To speed up experiments, we used (S)GD with momentum (0.9) in all experiments of this subsection.
3.2.3 Removing noise of models possibly far away from the linear regime
Finally, we investigate the OOD performance of the previously introduced model variants f sgd, f sgd-c and f sgd-a in more realistic settings. To do so, we train the commonly used WideResNet 28-10 [45] on CIFAR10 with BatchNorm [46] layers and cross-entropy (CE) loss with a batch size of 128, without data augmentation (see Table 3.2.3). These network and training algorithm choices are considered crucial to achieving state-of-the-art and superior performance compared to their linearly trained counterparts. Strikingly, we notice that our model variants, which each isolate a different initial noise source, significantly affect the OOD capabilities of the final models when the training loss is virtually 0, as in all of our experiments. This indicates that the discussed noise sources influence the ensemble's predictive variance throughout training. We provide similar results for CIFAR100 and FashionMNIST in Table B.1 and B.1 of Appendix B. We stress that we do not claim that our theoretical assumptions hold in this setup.
4 Conclusion
The generalization on in- and out-of-distribution data of deep neural network ensembles is poorly understood. This is particularly worrying since deep ensembles are widely used in practice when trying to assess if data is out-of-distribution. In this study, we try to provide insights into the sources of noise stemming from initialization that influence the predictive distribution of trained deep ensembles. By focusing on large-width models we are able to characterize two distinct sources of noise and describe an analytical approximation of the predictive variance in some restricted settings. We then show theoretically and empirically how parts of this refined predictive variance description in the linear training regime survive and impact the predictive distribution of non-linearly trained deep ensembles. This allows us to extrapolate insights from the tractable linearly trained deep ensembles into the non-linear regime, which can lead to improved out-of-distribution detection of deep ensembles by eliminating potentially unfavorable noise sources. Although our theoretical analysis relies on the closeness to linear gradient descent, which has been shown to result in less powerful models in practice, we hope that our surprising empirical success of noise disentanglement sparks further research into using the lens of linear gradient descent to understand the mysteries of deep learning.
Acknowledgments and Disclosure of Funding
Seijin Kobayashi was supported by the Swiss National Science Foundation (SNF) grant CRSII5_173721. Pau Vilimelis Aceituno was supported by the ETH Postdoctoral Fellowship program (007113). Johannes von Oswald was funded by the Swiss Data Science Center (J.v.O. P18-03). We thank Christian Henning, Frederik Benzing and Yassir Akram for helpful discussions. Seijin Kobayashi and Johannes von Oswald are grateful for Angelika Steger’s and João Sacramento’s overall support and guidance. | 1. What is the main contribution of the paper regarding understanding finite-width neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to decompose the variance of ensembles of linearly trained, finite width neural networks into different sources of noise?
3. Do you have any questions or concerns about the assumptions made during the derivation, such as the independence assumptions that do not hold in practice?
4. How does the reviewer assess the clarity and quality of the paper's content, particularly in the description of the variance components and the implications of the results?
5. Are there any limitations to the paper's approach or propositions, such as the assumption that the functional initialization noise is decorrelated from the f(x)? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes to understand finite-width neural networks in terms of linearly trained finite width networks. The authors propose to decompose the variance of ensembles of linearly trained, finite width neural networks into different sources (of noise), and study empirically the implications of this decomposition for understanding actual finite-width networks trained with gradient descent and stochastic gradient descent. The work builds from [1][2], which propose to approximate a sufficiently wide network f_θ with its linearly trained counterpart f^lin, and in the infinite width limit with the fully trained network f^inf. The paper decomposes the variance of f^inf and f^lin, and interprets the different variance terms arising in ensembles of linear predictors in terms of kernel and functional noise (ln 140), which affect the predictive variance. The derivation relies on certain independence assumptions that do not hold in practice. The paper additionally claims ways to eliminate sources of noise which improve OOD detection in ensembles. The results comparing the networks which only include disentangled noise sources show no consensus in how/if the proposed sources of noise affect the total variance. The claims of improving OOD detection after reducing sources of noise do not take into account that the results lie within the confidence interval (cf. Table 2).
Strengths And Weaknesses
Strengths: This paper proposes to understand finite-width neural networks in terms of linearly trained finite width networks. The paper decomposes the variance of ensembles of linearly trained, finite width neural networks into different sources (of noise). The work builds from [1][2], which propose to approximate a sufficiently wide network f_θ with its linearly trained counterpart f^lin, and in the infinite width limit with the fully trained network f^inf. This is an interesting perspective for analyzing deep ensembles.
Weaknesses:
The paper decomposes the variance of ensembles of f^inf and f^lin. The paper claims that the variance for the ensemble of f^lin includes two components, one corresponding to the functional initialization noise and the other to the kernel noise (ln 140), and that this noise affects the predictive variance. During the derivation it is assumed that the functional initialization noise is decorrelated from f(x) (ln 133). The authors mentioned that this is future work, but this is a central assumption in this work.
The results comparing the networks which only include disentangled noise sources show no consensus in how/if the proposed sources of noise affect the total variance. Could we add a Fig 2 which includes the baseline, total V?
The claim of improving OOD detection after reducing sources of noise comes from the results in Table 2. There, we can see that the comparisons lie within each other's confidence intervals, so it is unclear why we can claim that one model is better than another.
Some sections need additional clarification, e.g. the description of the variance components in the paragraph from ln 147 to 160 is not clear to me. Given that this paper is offering a new perspective on the variance decomposition, a clear description of each component would be greatly beneficial. I had to come back to this twice. The same holds for the implications of the results in Fig 1.
The assumptions and implications of the propositions, in particular 2.1 and 2.3, could be clearer. Proposition 2.3 can be true for a single network, is this correct? This proposition is in the deep ensemble section 2.2.
Questions
What is the implication of assuming that the functional initialization noise is decorrelated from f(x) in Proposition 2.1?
In line 140, why is it obvious that Va corresponds to the inf width model?
What are the implications of Fig 1, if we cannot draw any conclusions from the shape/location of the data points, as the authors mentioned this changes per model/dataset?
Could we add a Fig 2 which includes the baseline, total V? this would be useful to better understand the scale of different V.
Limitations
N/A |
NIPS | Title
Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
Abstract
Identifying unfamiliar inputs, also known as out-of-distribution (OOD) detection, is a crucial property of any decision making process. A simple and empirically validated technique is based on deep ensembles where the variance of predictions over different neural networks acts as a substitute for input uncertainty. Nevertheless, a theoretical understanding of the inductive biases leading to the performance of deep ensemble’s uncertainty estimation is missing. To improve our description of their behavior, we study deep ensembles with large layer widths operating in simplified linear training regimes, in which the functions trained with gradient descent can be described by the neural tangent kernel. We identify two sources of noise, each inducing a distinct inductive bias in the predictive variance at initialization. We further show theoretically and empirically that both noise sources affect the predictive variance of non-linear deep ensembles in toy models and realistic settings after training. Finally, we propose practical ways to eliminate part of these noise sources leading to significant changes and improved OOD detection in trained deep ensembles.
1 Introduction
Modern artificial intelligence uses intricate deep neural networks to process data, make predictions and take actions. One of the crucial steps toward allowing these agents to act in the real world is to incorporate a reliable mechanism for estimating uncertainty – in particular when human lives are at risk [1, 2]. Although the ongoing success of deep learning is remarkable, the increasing data, model and training algorithm complexity make a thorough understanding of their inner workings increasingly difficult. This applies when trying to understand when and why a system is certain or uncertain about a given output and is therefore the topic of numerous publications [3–10].
Principled mechanisms for uncertainty quantification would rely on Bayesian inference with an appropriate prior. This has led to the development of (approximate) Bayesian inference methods for deep neural networks [11–15]. Simply aggregating an ensemble of models [16] and using the disagreement of their predictions as a substitute for uncertainty has gained popularity. However, the theoretical justification of deep ensembles remains a matter of debate, see Wilson and Izmailov [17]. Although a link between Bayesian inference and deep ensembles can be obtained, see [18, 19], an understanding of the widely adopted standard deep ensemble and its predictive distribution is still missing [20, 21]. Note that even for principled Bayesian approaches there is no valid theoretical or practical OOD guarantee without a proper definition of out-of-distribution data [22].
One avenue to simplify the analyses of deep neural networks that gained a lot of attention in recent years is to increase the layer width to infinity [23, 24] or to very large values [25, 26]. In the former regime, an intriguing equivalence of infinitely wide deep networks at initialization and Gaussian processes allows for exact Bayesian inference and therefore principled uncertainty estimation. Although it is not possible to generally derive a Bayesian posterior for trained infinite or finite layer width networks, the resulting model predictions can be expressed analytically by kernels. Given this favorable mathematical description, the question of how powerful and similar these models are compared to their arguably black-box counterparts arises, with e.g. moderate width, complex optimizers and training stochasticity [25, 27–32].
In this paper, we leverage this tractable description of trained neural networks and take a first step towards understanding the predictive distribution of neural networks ensembles with large but finite width. Building on top of the various studies mentioned, we do so by studying the case where these networks can be described by a kernel and study the effect of two distinct noise sources stemming from the network initialization: The noise in the functional initialization of the network and the initialization noise of the gradient, which affects the training and therefore the kernel. As we will show, these noise sources will affect the predictive distributions differently and influence the network’s generalization on in- and out-of-distribution data.
Our contributions are the following:
• We provide a first order approximation of the predictive variance of an ensemble of linearly trained, finite-width neural networks. We identify interpretable terms in the refined variance description, originating from 2 distinct noise sources, and further provide their analytical expression for single layer neural networks with ReLU non-linearities.
• We show theoretically that under mild assumptions these refined variance terms survive nonlinear training for sufficiently large width, and therefore contribute to the predictive variance of non-linearly trained deep ensembles. Crucially, our result suggests that any finer description of the predictive variance of a linearized ensemble can be erased by nonlinear training.
• We conduct empirical studies validating our theoretical results, and investigate how the different variance terms influence generalization on in - and out-of-distribution. We highlight the practical implications of our theory by proposing simple methods to isolate noise sources in realistic settings which can lead to improved OOD detection.1
2 Neural network ensembles and their relations to kernels
Let fθ = f(·, θ) : Rh0 → RhL denote a neural network parameterized by the weights θ ∈ Rn. The weights consist of weight matrices and bias vectors {(Wl, bl)}Ll=1 describing the following feed-forward computation beginning with the input data x0:
zl+1 = (σw/√hl) W l+1 xl + bl+1, with xl+1 = ϕ(zl+1).   (1)
Here hl is the dimension of the vector xl and ϕ is a pointwise non-linearity such as the softplus log(1 + ex) or Rectified Linear Unit i.e. max(0, x) (ReLU) [33]. We follow Jacot et al. [24] and use σw = √ 2 to control the standard deviation of the initialised weights W lij , b l i ∼ N (0, 1).
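To make the parametrization concrete, the following minimal NumPy sketch implements the forward pass of equation 1; the layer widths, the ReLU choice, and the convention of applying no non-linearity to the output layer are illustrative assumptions.

```python
import numpy as np

def init_params(widths, rng):
    """One (W, b) pair per layer, entries drawn from N(0, 1) as in equation 1."""
    return [(rng.standard_normal((widths[l + 1], widths[l])),
             rng.standard_normal(widths[l + 1])) for l in range(len(widths) - 1)]

def forward(params, x, sigma_w=np.sqrt(2.0), phi=lambda z: np.maximum(z, 0.0)):
    """z^{l+1} = sigma_w / sqrt(h_l) * W^{l+1} x^l + b^{l+1}, x^{l+1} = phi(z^{l+1})."""
    h = x
    for l, (W, b) in enumerate(params):
        z = sigma_w / np.sqrt(W.shape[1]) * W @ h + b
        h = phi(z) if l < len(params) - 1 else z  # keep the output layer linear
    return h

rng = np.random.default_rng(0)
params = init_params([3, 512, 1], rng)            # h0 = 3 inputs, one hidden layer, scalar output
print(forward(params, rng.standard_normal(3)))    # f(x, theta_0) at a random input
```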
Given a set of N datapoints X = (xi)0≤i≤N ∈ RN×h0 and targets Y = (yi)0≤i≤N ∈ RN×hL , we consider regression problems with the goal of finding θ∗ which minimizes the mean squared error (MSE) loss L(θ) = 12 ∑N i=0 ∥f(xi, θ)− yi∥22. For ease of notation, we denote by f(X , θ) ∈ RN ·hL the vectorized evaluation of f on each datapoint and Y ∈ RN ·hL the target vector for the entire dataset. As the widths of the hidden layers grow towards infinity, the distribution of outputs at initialization f(x, θ0) converges to a multivariate gaussian distribution due to the Central Limit Theorem [23]. The resulting function can then accurately be described as a zero-mean Gaussian process, coined Neural Network Gaussian Process (NNGP), where the covariance of a pair of output neurons i, j for data x and x′ is given by the kernel
1Source code for all experiments: github.com/seijin-kobayashi/disentangle-predvar
K(x, x′)i,j = lim_{h→∞} E[f i(x, θ0) f j(x′, θ0)]   (2)
with h = min(h1, ..., hL−1). This equivalence can be used to analytically compute the Bayesian posterior of infinitely wide Bayesian neural networks [34].
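Equation 2 can also be probed numerically with a simple Monte Carlo estimate over random initializations; the width and sample count below are illustrative, and the one-hidden-layer parametrization mirrors equation 1.

```python
import numpy as np

def f_one_hidden(x, W, b, v, c, sigma_w=np.sqrt(2.0)):
    """Scalar one-hidden-layer ReLU network in the parametrization of equation 1."""
    z = sigma_w / np.sqrt(x.shape[0]) * W @ x + b
    return sigma_w / np.sqrt(W.shape[0]) * v @ np.maximum(z, 0.0) + c

def nngp_mc(x, x_prime, width=1024, n_samples=2000, seed=0):
    """Monte Carlo estimate of K(x, x') = E[f(x, theta_0) f(x', theta_0)]."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    vals = np.empty(n_samples)
    for s in range(n_samples):
        W, b = rng.standard_normal((width, d)), rng.standard_normal(width)
        v, c = rng.standard_normal(width), rng.standard_normal()
        vals[s] = f_one_hidden(x, W, b, v, c) * f_one_hidden(x_prime, W, b, v, c)
    return vals.mean()

x, xp = np.array([1.0, 0.0]), np.array([0.6, 0.8])
print(nngp_mc(x, xp))   # finite-width, finite-sample estimate of one NNGP kernel entry
```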
On the other hand infinite width models trained via gradient descent (GD) can be described by the Neural Tangent Kernel (NTK). Given θ, the NTK Θθ of fθ is a matrix in RN ·hL × RN ·hL with the (i, j)-entry given as the following dot product
⟨∇θf(xi, θ),∇θf(xj , θ)⟩ (3)
where we consider without loss of generality the output dimension of f to be hL = 1 for ease of notation. Furthermore, we denote Θθ(X ,X ) := ∇θf(X , θ)∇θf(X , θ)T the matrix and Θθ(x′,X ) := ∇θf(x′, θ)∇θf(X , θ)T the vector form of the NTK while highlighting the dependencies on different datapoints.
Lee et al. [25] showed that for sufficiently wide networks under common parametrizations, the gradient descent dynamics of the model with a sufficiently small learning rate behaves closely to its linearly trained counterpart, i.e. its first-order Taylor expansion in parameter space. In this gradient flow regime, after training on the mean squared error converges, we can rewrite the predictions of the linearly trained models in the following closed-form:
f lin(x) =f(x, θ0) +Qθ0(x,X )(Y − f(X , θ0)) (4)
where Qθ0(x,X ) := Θθ0(x,X )Θθ0(X ,X )−1 with Θθ0 the NTK at initialization, i.e. of f(., θ0). The linearization error throughout training supt≥0 ∥f lint (x) − ft(x)∥ is further shown to decrease with the width of the network, bounded by O(h− 12 ). Note that one can also linearize the dynamics without increasing the width of a neural network but by simply changing its output scaling [26].
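Given the Jacobians of the network on the training set and at a test point, equation 4 is a plain linear solve. The sketch below assumes the Jacobians have already been computed (analytically or with autodiff), uses random matrices as stand-ins, and adds a small jitter to guard against near-singular kernel matrices.

```python
import numpy as np

def f_lin(f0_x, f0_X, Y, J_x, J_X, jitter=1e-8):
    """Equation 4: f_lin(x) = f(x, θ0) + Θ(x, X) Θ(X, X)^{-1} (Y − f(X, θ0)).

    f0_x : prediction of the untrained network at the test point (scalar).
    f0_X : (N,) predictions of the untrained network on the training inputs.
    J_x  : (P,) gradient of f at the test point w.r.t. all P parameters.
    J_X  : (N, P) Jacobian of f on the training set.
    """
    theta_Xx = J_X @ J_x                              # Θ_{θ0}(X, x), shape (N,)
    theta_XX = J_X @ J_X.T + jitter * np.eye(len(Y))  # Θ_{θ0}(X, X) with a small jitter
    q = np.linalg.solve(theta_XX, Y - f0_X)           # Θ(X, X)^{-1} (Y − f(X, θ0))
    return f0_x + theta_Xx @ q

# toy check with random matrices standing in for real network Jacobians
rng = np.random.default_rng(1)
N, P = 10, 50
J_X, J_x = rng.standard_normal((N, P)), rng.standard_normal(P)
Y, f0_X, f0_x = rng.standard_normal(N), rng.standard_normal(N), 0.3
print(f_lin(f0_x, f0_X, Y, J_x, J_X))
```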
When moving from finite to the infinite width limit the training of a multilayer perceptron (MLP) can again be described with the NTK, which now converges to a deterministic kernel Θ∞ [24], a result which extends to convolutional neural networks [27] and other common architectures [35, 36]. A fully trained neural network model can then be expressed as
f∞(x) = f(x, θ0) + Θ∞(x,X )Θ∞(X ,X )−1(Y − f(X , θ0)). (5)
where f({X , x}, θ0) ∼ N (0,K({X , x}, {X , x})).
2.1 Predictive distribution of linearly trained deep ensembles
In this Section, we study in detail the predictive distribution of ensembles of linearly trained models, i.e. the distribution of f lin(x) given x over random initializations θ0. In particular, for a given data x, we are interested in the mean E[f(x)] and variance V[f(x)] of trained models over random initialization. The former is typically used for the prediction of a deep ensemble, while the latter is used for estimating model or epistemic uncertainty utilized e.g. for OOD detection or exploration. To start, we describe the simpler case of the infinite width limit and a deterministic NTK, which allows us to compute the mean and variance of the solutions found by training easily:
E[f∞(x)] = Q∞(x,X )Y,
V[f∞(x)] = K(x, x) + Q∞(x,X )K(X ,X )Q∞(x,X )T − 2 Q∞(x,X )K(X , x)   (6)
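Given the limiting kernels Θ∞ and K evaluated on the training set and a test point, the mean and variance in equation 6 (with Q∞(x,X ) = Θ∞(x,X )Θ∞(X ,X )−1) are a direct computation; the kernels below are synthetic positive-definite stand-ins used purely for illustration.

```python
import numpy as np

def infinite_width_posterior(theta_Xx, theta_XX, K_xx, K_Xx, K_XX, Y):
    """Mean and variance of f_inf(x) over initializations, i.e. equation 6,
    with Q_inf(x, X) = Theta_inf(x, X) Theta_inf(X, X)^{-1}."""
    q = np.linalg.solve(theta_XX, theta_Xx)   # Q_inf(x, X) as a vector (kernels are symmetric)
    mean = q @ Y
    var = K_xx + q @ K_XX @ q - 2.0 * q @ K_Xx
    return mean, var

# illustrative kernels built from random features; index N is the test point
rng = np.random.default_rng(2)
N = 8
A, B = rng.standard_normal((N + 1, 20)), rng.standard_normal((N + 1, 20))
Theta, K = A @ A.T, B @ B.T
Y = rng.standard_normal(N)
print(infinite_width_posterior(Theta[:N, N], Theta[:N, :N], K[N, N], K[:N, N], K[:N, :N], Y))
```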
where we introduced Q∞(x,X ) = Θ∞(x,X )Θ∞(X ,X )−1. For finite width linearly trained networks, the kernel is no longer deterministic, and its stochasticity influences the predictive distribution. Because there is probability mass assigned to the neighborhood of rare events where the NTK kernel matrix is not invertible, the expectation and variance over parameter initialization of the expression in equation 4 diverges to infinity.
Fortunately, due to the convergence in probability of the empirical NTK to the infinite width counterpart [24], we know these singularities become rarer and ultimately vanish as the width increases to infinity. Intuitively, we should therefore be able to assign meaningful, finite values to these undefined
quantities, which ignores these rare singularities. The delta method [37] in statistics formalizes this intuition, by using Taylor approximation to smooth out the singularities before computing the mean or variance. When the probability mass of the empirical NTK is highly concentrated in a small radius around the limiting NTK, the expression 4 is roughly linear w.r.t the NTK entries. Given this observation, we prove (see Appendix A.2) the following result, and justify that the obtained expression is informative of the empirical predictive mean and variance of deep ensembles. Rewriting equation 4 into
f lin(x) =f(x, θ0) + Q̄(x,X )(Y − f(X , θ0)) + [Qθ0(x,X )− Q̄(x,X )](Y − f(X , θ0))
(7)
where Q̄(x,X ) = Θ̄(x,X )Θ̄(X ,X )−1 and Θ̄ = E(Θθ0), we state: Proposition 2.1. For one hidden layer networks parametrized as in equation 1, given an input x and training data (X ,Y), when increasing the hidden layer width h, we have the following convergence in distribution over random initialization θ0:
√ h[Qθ0(x,X )− Q̄(x,X )](Y − f(X , θ0)) dist.→ Z(x)
where Z(x) is the linear combination of 2 Chi-Square distributions, such that
V(Z(x)) = lim h→∞ (hVc(x) + hVi(x))
where
Vc(x) =V[Θθ0(x,X )Θ̄(X ,X )−1Y] + V[Q̄(x,X )Θθ0(X ,X )Θ̄(X ,X )−1Y] − 2Cov[Q̄(x,X )Θθ0(X ,X )Θ̄(X ,X )−1Y,Θθ0(x,X )Θ̄(X ,X )−1Y], Vi(x) =V[Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0)] + V[Q̄(x,X )Θθ0(X ,X )Θ̄(X ,X )−1f(X , θ0)] − 2Cov[Q̄(x,X )Θθ0(X ,X )Θ̄(X ,X )−1f(X , θ0),Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0)].
We omit the dependence of θ0 on the width h for notational simplicity. While the expectation or variance of equation 4 for any finite width is undefined, their empirical mean and variance are with high probability indistinguishable from that of the above limiting distribution (see Lemma A.1). Note that the above proposition assumes the noise in Θθ0 to be decorrelated from f(x, θ0), which can hold true under specific constructions of the network that are of practical interest as we will see in the following (c.f. Appendix A.3.2).
Given Proposition 2.1, we now describe the approximate variance of f lin(x) for L = 2, which we can extend to the general L > 2 case using an informal argument (see A.2.2): Proposition 2.2. Let f be a neural network with identical width of all hidden layers, h1 = h2 = ... = hL−1 = h. We assume ∥Θθ0 − Θ̄∥2F = Op( 1h ). Then,
V[f lin(x)] ≈ Va(x) + Vc(x) + Vi(x) + Vcor(x) + Vres(x)
where
Va(x) =K̄(x, x) + Q̄(x,X )K̄(X ,X )Q̄(x,X )T − 2Q̄(x,X )K̄(X , x), Vcor(x) =2E [ [Θθ0(x,X )− Q̄(x,X )Θθ0(X ,X )][Θ̄(X ,X )−1Θθ0(X ,X )Θ̄(X ,X )−1] ] · [K̄(X , x)− K̄(X ,X )Q̄(x,X )T ]
and Vres(x) = O(h−2) as well as K̄ the expectation over initializations of the finite width counterpart of the NNGP kernel.
Several observations can be made: First, the above expression only involves the first and second moments of the empirical, finite width NTK, as well as the first moment of the NNGP kernel. These terms can be analytically computed in some settings. We provide in Appendix A.4.3 some of the moments for the special case of a 1-hidden layer ReLU network, and show the analytical expression correspond to empirical findings.
Second, the decomposition demonstrates the interplay of 2 distinct noise sources in the predictive variance:
• Va is the variance associated to the expression in the first line of equation 7. Intuitively, it is the finite width counterpart of the predictive variance of the infinite width model (equation 6), as it assumes the NTK is deterministic. The variance stems entirely from the functional noise at initialization and converges to the infinite width predictive variance as the width increases.
• Vc and Vi stem from the second line of equation 7. Vc is a first-order approximation of the predictive variance of a linearly trained network with pure kernel noise, without functional noise i.e. Vc ≈ V[Qθ0(x,X )Y]. On the other hand, Vi depends on the interplay between the 2 noises, and can be identified as the predictive variance of a deep ensemble with a deterministic NTK Θ̄ and a new functional prior g(x) = Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0). Intuitively, this new functional prior can be seen as a data-specific inductive bias on the NTK formulation of the predictive variance (see Appendix A.3.1 for more details).
• Vcor is a covariance term between the 2 terms in equation 7 and also contains the correlation terms between Θθ0 and f(x, θ0). In general, its analytical expression is challenging to obtain as it requires the 4th moments of the finite width NNGP kernel fluctuation. Here, we provide its expression under the same simplifying assumption that the noise in Θθ0 is decorrelated from f(x, θ0). We therefore do not attempt to describe it in general, and focus in our empirical Section on the terms that are tractable and can be easily isolated for practical purposes.
Each of Vc,Vi,Vcor decay in O(h−1), which, together with Va, provide a first-order approximation of the predictive variance of f lin(x). Note that Va and Vc are of particular interest, as removing either the kernel or the functional noise at initialization will collapse the predictive variance of the trained ensemble to either one of these 2 terms.
2.2 Predictive distribution of standard deep ensemble of large width
An important question at this point is to which extent our analysis for linearly trained models applies to a fully and non-linearly trained deep ensemble. Indeed, if the discrepancy between the predictive variance of a linearly trained ensemble and its non-linear counterpart is of a larger order of magnitude than the higher-order correction in the variance term, the latter can be ’erased’ by training. Building on top of previous work, we show that, under the assumption of an empirically supported conjecture [38], for one hidden layer networks trained on the Mean Squared Error (MSE) loss, this discrepancy is asymptotically dominated by the refined predictive variance terms of the linearly trained ensemble we described in Section 2.1.
Proposition 2.3. Let f be a neural network with identical width of all hidden layers, h1 = h2 = ... = hL−1 = h, and such that the derivative of the non-linearity ϕ′ is bounded and Lipschitz continuous on R. Let the training data (X ,Y) contained in some compact set, such that the NTK of f on X is invertible. Let ft (resp. f lint ) be the model (resp. linearized model) trained on the MSE loss with gradient flow at timestep t with some learning rate. Assuming
sup t ∥Θθ0 −Θθt∥F = O(
1 h ) (8)
Then, ∀x, ∀δ > 0,∃C,H : ∀h > H ,
P [ sup t ∥f lint (x)− ft(x)∥2 ≤ C h ] ≥ 1− δ. (9)
In particular, for one hidden layer networks, after training,
|V̂(f(x))− V̂(f lin(x))| = Op(V̂ [ [Qθ0(x,X )− Q̄(x,X )](Y − f(X , θ0)) ] ) (10)
where V̂ denotes the empirical variance with some fixed sample size.
The proof can be found in Appendix A.1.1. While only the bound supt∥Θθ0 −Θθt∥F = O( 1√h ) has been proven in previous works [25], many empirical studies including those in the present work (see Appendix Fig. 5, Table 3) have shown that the bound decreases faster in practice, on the order of O(h−1) [25, 38]. Note that this result suggests the approximation provided in Proposition 2.2 is as good as it gets for describing the predictive variance of non-linearly trained ensembles: the higher order terms would be of a smaller order of magnitude than the non-linear correction to the training, rendering any finer approximation pointless.
3 Disentangling deep ensemble variance in practice
The goal of this Section is to validate our theoretical findings in experiments. First, we aim to show qualitatively and quantitatively that the variance of linearly trained neural networks is well approximated by the decomposition introduced in Proposition 2.2. To do so, we investigate ensembles of linearly trained models and analyze their behavior in toy models and on common computer vision classification datasets. We then extend our analyses to fully-trained non-linear deep neural networks optimized with (stochastic) gradient descent in parameter space. Here, we confirm empirically the strong influence of the variance description of linearly trained models in these less restrictive settings while being trained to very low training loss. Therefore we showcase the improved understanding of deep ensembles through their linearly trained counterpart and highlight the practical relevance of our study by observing significant OOD detection performance differences of models when removing noise sources in various settings.
3.1 Disentangling noise sources in kernel models
To isolate the different terms in Proposition 2.2, we construct, from a given initialization θ0 with the associated linearized model f lin, three additional linearly trained models:
f lin-c(x) = Qθ0(x,X )Y f lin-a(x) = f(x, θ0) + Q̄(x,X )(Y − f(X , θ0)) f lin-i(x) = g(x, θ0) + Q̄(x,X )(Y − g(X , θ0))
where g(x, θ0) = Θθ0(x,X )Θ̄(X ,X )−1f(X , θ0). Note that the predictive variance over random initialization of these functions corresponds to respectively Vc,Va,Vi as defined in Section 2.1. As one can see, we can simply remove the initialization noise from f lin by subtracting the initial (noisy) function f(x, θ0) before training resulting in a centered model f lin-c. Equivalently, we can remove noise that originates from the kernel by using the empirical average over kernels resulting in model f lin-a. Finally, we can isolate f lin-i by the same averaging trick as in f lin-a but use as functional noise g(x, θ0) which can be precomputed and added to f lin-c before training. Note that we neglect the terms involving covariance terms and focus on the parts which are easy to isolate, for linearly trained as well as for standard models. This will later allow us to study practical ways to subtract important parts of the predictive distribution for neural networks leading for example to significant OOD detection performance differences. Now we explore the differences and similarities of these disentangled functions and their respective predictive distributions.
3.1.1 Visualizations on a star-shaped toy dataset
To qualitatively visualize the different terms, we construct a two-way star-shaped regression problem on a 2d-plane depicted in Figure 1. After training an ensemble we visualize its predictive variance on the input space. Our first goal is to visualize qualitative differences in the predictive variance of ensembles consisting of f lin and the 3 disentangled models from above. We train a large ensemble of size 300 where each model is a one-layer ReLU neural network with hidden dimension 512 and 1 hidden layer. As suggested analytically for one hidden layer ReLU networks (see Appendix A.4.3), for example V[f lin-c(x)] depends on the angle of the datapoints while V[f lin(x)] depicts a superposition of the 3 isolated variances. While the ReLU activation does not satisfy the Lipschitzcontinuity assumption of Proposition 2.3, we use it to illustrate and validate our analytical description of the inductive biases induced by the different variance terms. We use the Softplus activation which behaved similarly to ReLU in the experiments in the next Section.
3.1.2 Disentangling linearly trained / kernel ensembles for MNIST and CIFAR10
Next, we move to a quantitative analysis of the asymptotic behavior of the various variance terms, as we increase the hidden layer size. In Figure 2, we analyze the predictive variance of the kernel models based on MLPs and Convolutional Neural Networks (CNN) for various depths and widths and on subsets of MNIST [39] and CIFAR10. As before, we construct a binary classification task through a MSE loss with dataset size of N = 100 and confirm, shown in Figure 2, that Vc, Vi decay by 1/h over all of our experiments. Crucially, we see that they contribute to the overall variance V even for relatively large widths. We further observe a decay in 1/h2 of the residual term as predicted by Proposition 2.2. As in all of our experiments, the variance magnitude and therefore the influence on f lin of the disentangled parts is highly architecture and dataset-dependent. Note that the small size of the datasets comes from the necessity to compute the inverse of the kernels for every ensemble member, see Appendix B for a additional analysis on larger datasets and scaling plots of Vcor. In Table 1, we quantify the previously observed qualitative difference of the various predictive variances by evaluating their performance on out-of-distribution detection tasks, where high predictive variance is used as a proxy for detecting out-of-distribution data. We focus our attention on analysing V[f lin-c(x)] and V[f lin-a(x)], as they are the variance terms containing purely the functional and kernel noise, respectively. As an evaluation metric, we follow numerous studies and compute the area under the receiver operating characteristics curve (AUROC, c.f. Appendix B). We fit a linearized ensemble on a larger subset of the standard 10-way classification MNIST and CIFAR10 datasets using MSE loss. When training our ensembles on MNIST, we test and average the OOD detection performance on FashionMNIST (FM) [40], E-MNIST (EM) [41] and K-MNIST (KM) [42]. When training our ensembles on CIFAR10, we compute the AUROC for SVHN [43], LSUN [44], TinyImageNet (TIN)
and CIFAR100 (C100), see Appendix Table 4 for the variance magnitude and AUROC values for all datasets.
The results show significant differences in variance magnitude and AUROC values. While we do not claim competitive OOD performance, we aim to highlight the differences in behavior of the isolated functions developed above: we see for instance that for (MLP, MNIST, N=1000), f lin-a generally performs better than f lin in OOD detection. Indeed, the overall worse performance of V[f lin-c(x)] seems to be affecting that of V[f lin(x)] which contains both terms. On the other hand, we see that for the setup (CNN, CIFAR10, N=1000) V[f lin(x)] is not well described by this interpolation argument, which highlights the influence of the other variance terms described in Proposition 2.2. Furthermore, the OOD detection capabilities of each function seem to be highly dependent on the particular data considered: Ensembles of f lin-c are relatively good at identifying SVHN data as OOD, while being poor at identifying LSUN and iSUN data. These observations highlight the particular inductive bias of each variance term for OOD detection on different datasets.
We further report the test set generalization of the ensemble mean of different functions, highlighting the diversity in the predictive mean of these models as well. Note that for N >= 1000 we trained the ensembles in linear fashion with gradient flow (which coincides with the kernel expression) up until the MSE training error was smaller than 0.01.
3.2 Does the refined variance description generalize to standard gradient descent in practice?
In this Section, we start with empirical verification of Proposition 2.3 and show that the bound in equation 10 holds in practice. Given this verification, we then propose equivalent disentangled models as those previously defined but in the non-linear setting, and 1) show significant differences in their predictive distribution but also 2) investigate to which extent improvements in OOD detection translate from kernel / linearly to fully non-linearly trained models. We stress that we do not consider early stopped models and aim to connect the kernel with the gradient descent models faithfully.
3.2.1 Survival of the kernel noise after training
To validate Proposition 2.3, we first introduce f gd(x) = f(x,θt), a model trained with standard gradient descent of t steps i.e. θt = θ0 − ∑t−1 i=0 η∇θf(X , θi)(Y − f(X , θi)). To empirically verify
Proposition 2.3, we introduce the following ratio
R(f) = exp ( Ex∼X ′ ( log[
∥V̂[f lin(x)]− V̂[f gd(x)]∥ ∥V̂c(x) + V̂i(x)∥
] ))
(11)
where the empirical variances are computed over random initialization, and the expectation over some data distribution which we choose to be the union of the test-set and the various OOD datasets. Given a datapoint x, the term inside the log measures the ratio between the discrepancy of the variance between the linearized and non-linear ensemble, against the refined variance terms. R(f) is then the geometric mean of this ratio over the whole dataset. Proposition 2.3 predicts that the ratio remains bounded as the width increases, suggesting that the refined terms contribute to the final predictive variance of the non-linear model in a non negligible manner. We empirically verify this prediction for various depths in Fig. 3 and Appendix Figure 6, for functions trained on subsets MNIST and CIFAR10. Note that for all our experiments we also empirically verify the assumption from Proposition 2.3 (see Appendix Figure 5, Table 3).
3.2.2 Disentangling noise sources in gradient descent non-linear models
Motivated by the empirical verification of Proposition 2.3, we now aim to isolate different noise sources in non-linear models trained with gradient descent. Starting from a non-linear network f gd, we follow the same strategy as before and silence the functional initialization noise by centering the network (referred as f gd-c(x)) by simply subtracting the function at initialization. On the other hand, we remove the kernel noise with a simple trick: We first sample a random weight θc0 once, and use it as the weight initialization for all ensemble members. A function noise is added by first removing the function initialization from θc0, and adding that of a second random network which is not trained. The
resulting functions (referred as f gd-a(x)) will induce and ensemble which will only differ in their functional initialization while having the same Jacobian
f gd-c(x) = f(x, θt)− f(x, θ0), f gd-a(x) = f(x, θct )− f(x, θc0) + f(x, θ0).
We furthermore introduce f gd-i(x), the non linear counterpart to f lin-i(x), which we construct similarly to f gd-a(x) but using g(x, θ0, θc0) = Θθ0(x,X )Θθc(X ,X )−1f(X , θ0) as the function initialization instead of f(x, θ0) (see Section 2.1 and Appendix A.3.1 for the justification). Unlike f gd-a and f gd-c, constructing f gd-i requires the inversion of large matrices due to the way g is defined, a challenging task for realistic settings. While its practical use is thus limited, we introduce it to illustrate the correspondence of correspondence of the predictive variance of linearized vs non-linear deep ensemble.
Given these simple modifications of f gd, we rerun the experiments conducted for the linearly trained models for moderate dataset sizes (N=1000). We observe close similarities in the OOD detection capabilities as well as predictive variance between the introduced non-linearly trained ensembles and their linearly trained counterparts. We further train these models on the full MNIST dataset (N=50000) for which we show the same trend in Appendix Table 5. We also include the ensemble’ performance when trained on the full CIFAR10 dataset. Intriguingly, the relative performance of the ensemble is somewhat preserved in both settings between N=1000 and N=50000, even when training with SGD, promoting the use of quick, linear training on subset of data as a proxy for the OOD performance of a fully trained deep ensemble.
Similar to the case of (MLP, MNIST, N=1000/50000), we observe that f gd ensemble performance is an interpolation of f gd-c and f gd-a which interestingly performs often favorably, on different OOD data. To understand if the noise introduced by SGD impacts the predictive distribution of our disentangled ensembles, we compared the behavior of f gd and f sgd in the lower data regime
of N = 1000. Intriguingly, we show in Appendix Table 6 that no significant empirical difference between GD and SGD models can be observed and hypothesize that noise sources discussed in this study are more important in our approximately linear training regimes. To speed up experiments we used (S)GD with momentum (0.9) in all experiments of this subsection.
3.2.3 Removing noise of models possibly far away from the linear regime
Finally, we investigate the OOD performance of the previously introduced model variants f sgd, f sgd-c and f sgd-d in more realistic settings. To do so we train the commonly used WideResNet 28-10 [45] on CIFAR10 with BatchNorm [46] Layers and cross-entropy (CE) loss with batchsize of 128, without data augmentation (see Table 3.2.3). These network and training algorithm choices are considered crucial to achieving state-of-the-art and superior performance compared to their linearly trained counterparts. Strikingly, we notice that our model variants, which each isolate a different initial noise source, significantly affect the OOD capabilities of the final models when the training loss is virtually 0 - as in all of our experiments. This indicates that the discussed noise sources influence the ensemble’s predictive variance long throughout training. We provide similar results for CIFAR100 and FashionMNIST in Table B.1 and B.1 of the Appendix B. We stress that we do not claim that our theoretical assumptions hold in this setup.
4 Conclusion
The generalization of deep neural network ensembles on in- and out-of-distribution data is poorly understood. This is particularly worrying since deep ensembles are widely used in practice when trying to assess whether data is out-of-distribution. In this study, we try to provide insights into the sources of noise stemming from initialization that influence the predictive distribution of trained deep ensembles. By focusing on large-width models, we are able to characterize two distinct sources of noise and describe an analytical approximation of the predictive variance in some restricted settings. We then show theoretically and empirically how parts of this refined predictive variance description in the linear training regime survive and impact the predictive distribution of non-linearly trained deep ensembles. This allows us to extrapolate insights from the tractable linearly trained deep ensembles into the non-linear regime, which can lead to improved out-of-distribution detection of deep ensembles by eliminating potentially unfavorable noise sources. Although our theoretical analysis relies on closeness to linear gradient descent, which has been shown to result in less powerful models in practice, we hope that our surprising empirical success of noise disentanglement sparks further research into using the lens of linear gradient descent to understand the mysteries of deep learning.
Acknowledgments and Disclosure of Funding
Seijin Kobayashi was supported by the Swiss National Science Foundation (SNF) grant CRSII5_173721. Pau Vilimelis Aceituno was supported by the ETH Postdoctoral Fellowship program (007113). Johannes von Oswald was funded by the Swiss Data Science Center (J.v.O. P18-03). We thank Christian Henning, Frederik Benzing and Yassir Akram for helpful discussions. Seijin Kobayashi and Johannes von Oswald are grateful for Angelika Steger’s and João Sacramento’s overall support and guidance. | 1. What is the main contribution of the paper regarding neural networks and prediction variance?
2. What are the strengths of the proposed decomposition approach and its practical consequences?
3. Do you have any concerns or questions regarding the empirical studies and their explanations?
4. How does the reviewer assess the quality, significance, and originality of the manuscript's content?
5. Are there any limitations or potential areas for improvement in the study? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This manuscript uses the concepts of Neural Network Gaussian Processes and Neural Tangent Kernels to analyze the sources of prediction variance from neural networks. By making several reasonable assumptions, they can break the predictive variance into the superposition of several distinct terms. Through this mathematical analysis and accompanying empirical studies, they demonstrate the practical consequences of this decomposition on several different neural networks. Finally, they demonstrate that these decompositions can be used to build a greater intuitive understanding on out-of-distribution data.
Strengths And Weaknesses
This manuscript clearly presents its theoretical arguments and accompanies them with empirical studies that confirm the findings. Given the large focus on robustness to out-of-distribution data in recent years, this type of decomposition can help practitioners think about potential ways to understand and improve their models. I believe that this decomposition is original, and has high quality and significance.
The final empirical studies, though, are underexplained and a bit confusing. It is not clearly described enough what prediction tasks are being performed (simply “out-of-distribution” tasks) and what approach is being used to determine out-of-distribution samples. This needs to be much more clearly described in a revised manuscript. The claims about performance are not refined enough. They claim that “These observations highlight the particular inductive bias of each variance term for OOD detection.” However, it is not clearly described enough how these inductive biases relate to the particular predictive performance. The analysis would be much enhanced with clearer statements about exactly how these inductive biases relate to performance.
Questions
How exactly were OOD tasks performed? Update: This has been largely clarified after the rebuttal.
I do not fully understand the claims about the relationship between each variance term and OOD on data-specific performance. Can you please elaborate on your claims of how the results highlight and explain these relationships? Update: The response really highlights the challenges of interpretation. Further claims and exploration are beyond the scope of this work, but I would encourage the authors to continue examining this question.
Limitations
I believe that the authors have been forthright and fair in their descriptions of the limitations of their theoretical analysis. |
NIPS | Title
Minimax Bounds for Generalized Linear Models
Abstract
We establish a new class of minimax prediction error bounds for generalized linear models. Our bounds significantly improve previous results when the design matrix is poorly structured, including natural cases where the matrix is wide or does not have full column rank. Apart from the typical L2 risks, we study a class of entropic risks which recovers the usual L2 prediction and estimation risks, and demonstrate that a tight analysis of Fisher information can uncover underlying structural dependency in terms of the spectrum of the design matrix. The minimax approach we take differs from the traditional metric entropy approach, and can be applied to many other settings.
1 Introduction
Throughout, we consider a parametric framework where observations X ∈ Rn are generated according to X ∼ Pθ, where Pθ denotes a probability measure on a measurable space (X ⊆ Rn,F) indexed by an underlying parameter θ ∈ Θ ⊂ Rd. For each Pθ, we associate a density f(·; θ) with respect to an underlying measure λ on (X ,F) according to
dPθ(x) = f(x; θ)dλ(x).
This setup contains a vast array of fundamental applications in machine learning, engineering, neuroscience, finance, statistics and information theory [1–10]. As examples, mean estimation [1], covariance and precision matrix estimation [2], phase retrieval [3,4], group or membership testing [5], pairwise ranking [10], can all be modeled in terms of parametric statistics. The central question to address in all of these problems is essentially the same: how accurately can we infer the parameter θ given the observation X?
One of the most popular parametric families is the exponential family, which captures a rich variety of parametric models such as the binomial, Gaussian, and Poisson. Given a parameter η ∈ R, a density f(·; η) is said to belong to the exponential family if it can be written as
f(x; η) = g(x) exp( (ηx − Φ(η)) / s(σ) ).   (1)
Here, the parameter η is the natural parameter, g : X ⊆ R→ [0,∞) is the base measure, Φ : R→ R is the cumulant function, and s(σ) > 0 is a variance parameter. The density f(·; η) is understood to be on a probability space (X ⊆ R,F) with respect to a dominating σ-finite measure λ. In this work, we are interested in the following generalized linear model (GLM), where observation X ∈ Rn is generated according to an exponential family with natural parameter equal to a linear transformation of the underlying parameter θ. In other words,
f(x; θ) = ∏_{i=1}^{n} g(x_i) exp( (x_i⟨m_i, θ⟩ − Φ(⟨m_i, θ⟩)) / s(σ) ),   (2)
for a real parameter θ := (θ_1, θ_2, . . . , θ_d) ∈ R^d and a fixed design matrix M ∈ R^{n×d}, with rows given by the vectors {m_i}_{i=1}^{n} ⊂ R^d. The above model assumes each X_i is drawn from its own exponential family, with respective natural parameters ⟨m_i, θ⟩, i = 1, 2, . . . , n. Evidently, this captures the classical (Gaussian) linear model X = Mθ + Z, where f(·; θ) is taken to be the usual Gaussian density, and it also captures a much broader class of problems including phase retrieval, matrix recovery, and logistic regression. See [11–13] for the history and theory of the generalized linear model.
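As a small illustration of the model (not from the paper), observations can be drawn from (2) as follows for two common members of the family; the function names and problem sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_glm(M, theta, family="gaussian", sigma2=1.0):
    """Draw X as in (2): each X_i follows a one-parameter exponential family
    with natural parameter <m_i, theta>."""
    eta = M @ theta                                    # natural parameters <m_i, theta>
    if family == "gaussian":                           # Phi(eta) = eta^2/2, s(sigma) = sigma^2
        return eta + np.sqrt(sigma2) * rng.standard_normal(eta.shape)
    if family == "bernoulli":                          # logistic model: mean is sigmoid(eta), Phi'' <= 1/4
        return rng.binomial(1, 1.0 / (1.0 + np.exp(-eta))).astype(float)
    raise ValueError(family)

n, d = 50, 10
M = rng.standard_normal((n, d))
theta = rng.standard_normal(d)
X_gauss = sample_glm(M, theta, "gaussian")
X_logit = sample_glm(M, theta, "bernoulli")
```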
In order to evaluate the performance of an estimator θ̂ (i.e., a measurable function of X), it is common to define a loss function L(·, ·) : R^d × R^d → R and analyze the loss L(θ, θ̂). A typical figure of merit is the constrained minimax risk R(M,Θ), defined as
R(M,Θ) := inf θ̂ sup θ∈Θ L(θ, θ̂).
In words, the minimax risk characterizes the worst-case risk under the specified loss L(·, ·) achieved by the best estimator, with a constraint that θ belongs to a specified parameter space Θ.
Two choices of the loss function L(·, ·) give rise to the usual variants of L2 loss:
1. Estimation loss, where the loss function L(·, ·) is defined as
L1(θ, θ̂) = E‖θ − θ̂‖2 for all θ, θ̂ ∈ Rd. (3)
2. Prediction loss, where the loss function L(·, ·) is defined as
L2(θ, θ̂) = (1/n) E‖Mθ − Mθ̂‖^2 for all θ, θ̂ ∈ R^d.   (4)
In this work, we shall approach things from an information theoretic viewpoint. In particular, we will bound minimax risk under entropic loss (closely connected to logarithmic loss in the statistical learning and information literature, see, e.g., [14–16]), from which L2 estimates will follow. To start, let us review some of the key definitions in information theory. Suppose the parameter θ ∈ Rd follows a prior π, a probability measure on Rd having density ψ with respect to Lebesgue measure. The differential entropy h(θ) corresponding to random variable θ is defined as
h(θ) := − ∫ Rd ψ(u) logψ(u)du.
Here and throughout, we will take logarithms with respect to the natural base, and assume all entropies exist (i.e., their defining integrals exist in the Lebesgue sense). The mutual information I(θ;X) between parameter θ ∼ π and observation X ∼ Pθ is defined as
I(θ;X) := ∫_{R^d} ∫_X f(x; θ) log[ f(x; θ) / ∫_{R^d} f(x; θ′) dπ(θ′) ] dλ(x) dπ(θ).
The conditional entropy is defined as h(θ|X) := h(θ)− I(θ;X). The entropy power of a random variable U is defined as exp(2h(U)), and for any two random variables U and V with well-defined conditional entropy, the conditional entropy power is defined similarly as exp(2h(U |V )). Lower bounds on conditional entropy power can be translated into lower bounds of other losses, via tools in rate distortion theory [17]. To illustrate this, let’s consider the following two Bayes risks, with suprema taken over all priors π on the parameter space Θ ⊆ Rd, and infima taken over all valid estimators θ̂ (i.e., measurable functions of X).
1. Entropic estimation loss, where the Bayes risk is defined as
Re(M,Θ) := inf_{θ̂} sup_π Σ_{i=1}^{n} exp( 2h(θ_i | θ̂_i) ).   (5)
2. Entropic prediction loss, where the Bayes risk is defined as
Rp(M,Θ) := inf_{θ̂} sup_π (1/n) Σ_{i=1}^{n} exp( 2h(m_i^⊤θ | m_i^⊤θ̂) ).   (6)
The following simple observation shows that any lower bound derived for the entropic Bayes risks implies a lower bound on the minimax L2 risks.
Lemma 1. We have inf θ̂ supθ∈Θ L1(θ, θ̂) & Re(M,Θ) and inf θ̂ supθ∈Θ L2(θ, θ̂) & Rp(M,Θ).
Proof. This follows since Gaussians maximize entropy subject to second moment constraints and conditioning reduces entropy: E(θi−θ̂i)2 ≥ Var(θi−θ̂i) & exp(2h(θi−θ̂i)) & exp(2h(θi|θ̂i)).
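Spelling out the constants hidden in this chain (a standard expansion, added here for completeness): the Gaussian maximum-entropy bound gives Var(U) ≥ (2πe)^{-1} exp(2h(U)), and conditioning only reduces entropy, so

```latex
\mathbb{E}(\theta_i-\hat\theta_i)^2
  \;\ge\; \operatorname{Var}(\theta_i-\hat\theta_i)
  \;\ge\; \tfrac{1}{2\pi e}\, e^{2h(\theta_i-\hat\theta_i)}
  \;\ge\; \tfrac{1}{2\pi e}\, e^{2h(\theta_i-\hat\theta_i \mid \hat\theta_i)}
  \;=\; \tfrac{1}{2\pi e}\, e^{2h(\theta_i \mid \hat\theta_i)} ,
```

where the last equality holds because, given θ̂_i, subtracting θ̂_i is a deterministic shift and does not change differential entropy. Summing over i and taking the appropriate infima and suprema yields the stated comparisons up to the constant 1/(2πe); the same argument applied to m_i^⊤(θ − θ̂) gives the prediction-loss statement.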
Here and onwards, we use “&” (also “.” and “ ”) to refer to “≥” (and “≤”, “=”, respectively) up to constants that do not depend on parameters.
Although we focus on L2 loss in the present work, we remark that minimax bounds on entropic loss directly yield corresponding estimates on Lp loss using standard arguments involving covering and packing numbers of Lp spaces. See, for example, the work by Raskutti et al. [18]. Despite its universal nature, there is relatively limited work on deriving minimax bounds for the entropic loss. This is the focus of the present work, and as a consequence, we obtain bounds on L2 loss that significantly improve on prior results when the matrix M is poorly structured.
1.1 Contributions
In this paper, we make three main contributions.
1. First, we establish L2 minimax risk and entropic Bayes risk bounds for the generalized linear model (2). The generality of the GLM allows us to extend our results to specific instances of the GLM such as the Gaussian linear model, phase retrieval and matrix recovery.
2. Second, we establish L2 minimax risk and entropic Bayes risk bounds for the Gaussian linear model. In particular, our bounds are nontrivial for many instances where previous results fail (for example when M ∈ Rn×d does not have full column rank, including cases with d > n), and can be naturally applied to the sparse problem where ‖θ‖0 ≤ k. Further, we show that both our minimax risk and entropic Bayes risk bounds are tight up to constants and log factors when M is sampled from a Gaussian ensemble.
3. Third, we investigate the L2 minimax risk via the lens of the entropic Bayes risk, and provide evidence that information theoretic minimax methods can naturally extract dependencies on the structure of design matrix M via analysis of Fisher information. The techniques we develop are general and can be used to establish minimax results for other problems.
2 Main Results and Discussion
The following notation is used throughout: upper-case letters (e.g., X , Y ) denote random variables or matrices, and lower-case letters (e.g., x, y) denote realizations of random variables or vectors. We use subscript notation vi to denote the i-th component of a vector v = (v1, v2, . . . , vd). We let [k] denote the set {1, 2, . . . , k}. We will be making the following assumption.
Assumption: The second derivative of the cumulant function Φ is bounded uniformly by a constant L > 0: Φ′′(·) ≤ L. The following lemma characterizes the mean and variance of densities in the exponential family.
Lemma 2 (Page 29, [11]). Any observation X generated according to the exponential family (1) has mean Φ′(η) and variance s(σ) · Φ′′(η).
In other words, our assumption is equivalent to saying that the variance of each observation X1, . . . , Xn is bounded. This is a common assumption made in the literature; See, for example, [19–22].
Our first main result establishes a minimax prediction lower bound corresponding to the generalized linear model (2). Let us first make a few definitions. For an n × k matrix A, we define the vector ΛA := (λ1, . . . , λk) ∈ Rk, where the λi’s denote the eigenvalues of the k × k symmetric matrix
A^⊤A in descending order. ‖Λ_A‖_p denotes the usual L_p norm of the vector Λ_A for p ≥ 1. Finally, we define
Γ(A) := max( ‖Λ_A‖_1^2 / ‖Λ_A‖_2^2 ,  λ_min(A^⊤A) · ‖Λ_A^{-1}‖_1 ),   (7)
where Λ_A^{-1} := (λ_1^{-1}, . . . , λ_k^{-1}), with the convention that λ_min(A^⊤A) · ‖Λ_A^{-1}‖_1 = 0 when λ_min(A^⊤A) = 0.
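For concreteness, Γ(A) can be evaluated directly from the spectrum of A^⊤A; the short routine below is illustrative only and is not part of the paper.

```python
import numpy as np

def gamma_of(A):
    """Gamma(A) of (7), computed from the eigenvalues of A^T A."""
    lam = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # spectrum Lambda_A in descending order
    lam = np.clip(lam, 0.0, None)                      # guard against tiny negative round-off
    term1 = lam.sum() ** 2 / (lam ** 2).sum()          # ||Lambda_A||_1^2 / ||Lambda_A||_2^2
    lam_min = lam[-1]
    term2 = 0.0 if np.isclose(lam_min, 0.0) else lam_min * (1.0 / lam).sum()
    return max(term1, term2)
```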
Theorem 3. For observations X ∈ Rn generated via the generalized linear model (2) with a fixed design matrix M ∈ Rn×d, the minimax L2 prediction risk and the entropic Bayes prediction risk are lower bounded by
(1/n) inf_{θ̂} sup_{θ∈R^d} E‖Mθ̂ − Mθ‖^2  &  (1/n) · (s(σ)/L) · Γ(M),
(1/n) inf_{θ̂} sup_π Σ_{i=1}^{n} exp( 2h(m_i^⊤θ | m_i^⊤θ̂) )  &  (1/n) · (s(σ)/L) · ‖Λ_M‖_1^2 / ‖Λ_M‖_2^2 .
Bounds on minimax risk under an additional sparsity constraint ‖θ‖0 ≤ k (i.e., the true parameter θ has at most k non-zero entries) can be derived as a corollary.
Corollary 4 (Sparse Version of Theorem 3). For observations X ∈ Rn generated via the generalized linear model (2), with the additional constraint that ‖θ‖0 ≤ k (i.e., Θ := {θ ∈ Rd : ‖θ‖0 ≤ k}), the minimax prediction error is lower bounded by
(1/n) inf_{θ̂} sup_{θ∈Θ} E‖Mθ̂ − Mθ‖^2  &  (1/n) · (s(σ)/L) · max_{Q∈M_k} Γ(Q),
(1/n) inf_{θ̂} sup_π Σ_{i=1}^{n} exp( 2h(m_i^⊤θ | m_i^⊤θ̂) )  &  (1/n) · (s(σ)/L) · max_{Q∈M_k} ‖Λ_Q‖_1^2 / ‖Λ_Q‖_2^2 .
Here, the maximum is taken over M_k, the set of all n × k′ submatrices of M, with k′ ≤ k.
We now note an important specialization of Corollary 4. In particular, consider the Gaussian linear model with observations X ∈ Rn generated according to
X = Mθ + Z, (8)
with Z ∼ N(0, σ^2 I_n) a Gaussian noise vector. This corresponds to the GLM of (2) when the functions are taken to be g(x) = e^{−x^2/(2σ^2)}, s(σ) = σ^2, and Φ(t) = t^2/2 (hence, L = 1). This is a particularly important instance worth highlighting because of the ubiquity of the Gaussian linear model in applications.
Theorem 5. For observations X ∈ Rn generated via the Gaussian linear model (8), with the sparsity constraint ‖θ‖0 ≤ k (i.e., Θ := {θ ∈ Rd : ‖θ‖0 ≤ k}), the minimax prediction error is lower bounded by
(1/n) inf_{θ̂} sup_{θ∈Θ} E‖Mθ̂ − Mθ‖^2  &  (σ^2/n) · max_{Q∈M_k} Γ(Q),
(1/n) inf_{θ̂} sup_π Σ_{i=1}^{n} exp( 2h(m_i^⊤θ | m_i^⊤θ̂) )  &  (σ^2/n) · max_{Q∈M_k} ‖Λ_Q‖_1^2 / ‖Λ_Q‖_2^2 .
Here, the maximum is taken over M_k, the set of all n × k′ submatrices of M, with k′ ≤ k. Remark 6. In the above results, the function Γ(·) can in fact be replaced with
Γ̃(M) := max( Σ_{i=1}^{n} ‖m_i‖_2^4 / ‖Mm_i‖_2^2 ,  λ_min(M^⊤M) · ‖Λ_M^{-1}‖_1 ),
which is stronger than the original statements. However, the chosen statements above highlight the simple dependence on the spectrum Λ_M.
2.1 Related Work
Most relevant to our results is the following lower bound on the minimax L2 estimation risk and the entropic Bayes estimation risk, developed in a recent work by Lee and Courtade [23]. We note that [23] does not bound the prediction loss (which is often of primary interest), as we have done in the present paper. Theorem 7 (Theorem 3, [23]). Let the observation X be generated via the generalized linear model defined in (2), with the additional structural constraint Θ = B_2^d(R) := {v : ‖v‖_2^2 ≤ R^2}. Suppose the cumulant function Φ satisfies Φ′′ ≤ L for some constant L. Then, the minimax estimation error is lower bounded by
inf_{θ̂} sup_{θ∈Θ} E‖θ̂ − θ‖^2  &  inf_{θ̂} sup_π Σ_{i=1}^{n} exp( 2h(θ_i | θ̂_i) )  &  min( R^2, (s(σ)/L) · Tr((M^⊤M)^{-1}) ).   (9)
The bound of (9) is tight when X is generated by the Gaussian linear model, showing that (Gaussian) linear models are the most favorable, in the sense of minimax estimation error, amongst the class of GLMs considered here. Lee and Courtade extracted the dependence on the Tr((M^⊤M)^{-1}) term by analyzing a Fisher information term in the class of Bayesian Cramér–Rao-type bounds from [24]. Earlier work (see, e.g., [25]) yielded bounds on the order of d/λ_max(M^⊤M), which is loose compared to (9).
There is a large body of work establishing minimax lower bounds on prediction error for specific models of the generalized linear model. Typically, these analyses depend on methods involving metric entropy (see, for example, [4, 18, 19, 26–28]). A popular minimax result is due to Raskutti et al. [18], who consider the sparse Gaussian linear model, where for a fixed design matrix M with an additional sparsity constraint ‖θ‖_0 ≤ k,
σ^2 · (Φ_{2k,−}(M)/Φ_{2k,+}(M)) · (k/n) log(ed/k)  .  inf_{θ̂} sup_{‖θ‖_0≤k} (1/n) E‖Mθ̂ − Mθ‖_2^2  .  σ^2 min( (k/n) log(ed/k), 1 ).   (10)
Here the terms Φr,−(M) and Φr,+(M) correspond to the constrained eigenvalues,
Φ_{r,−}(M) := inf_{θ≠0, ‖θ‖_0≤r} ‖Mθ‖_2 / ‖θ‖_2 ,   Φ_{r,+}(M) := sup_{θ≠0, ‖θ‖_0≤r} ‖Mθ‖_2 / ‖θ‖_2 .   (11)
The upper bound of (10) is achieved by classical methods such as aggregation [29–32].
One can readily observe that the lower bound of (10) becomes degenerate for even mildly ill-structured design matrices M. For example, in the case where M has repeating columns, the above result gives a lower bound of 0, which is not very informative. This suggests that the metric entropy approach does not easily capture the dependence on the structure of the design matrix M at the resolution of the complete spectrum of M^⊤M, as our results do. In fact, it can be shown that Corollary 4 uniformly improves upon (10) up to logarithmic factors; see Section 4.1 of the supplementary. Further, the lower bound of Raskutti et al. does not hold for k > n, which is a disadvantage for high-dimensional problems where d ≫ n. Verzelen [30] discusses the regime where (k/n) log(ed/k) ≥ 1/2 and k ≤ max(d^{1/3}, n/5) and provides bounds for the worst-case matrix M, which is a different setting from ours.
There are also lines of work on specific settings of the generalized linear model. For example, Candes et al. [28] discuss low-rank matrix recovery, and Cai et al. [4] consider phase retrieval. There are, however, fewer results that directly address the generalized linear model in our setting. The most closely related work is that of Abramovich and Grinshtein [19], who consider estimating the entire vector Mθ, as opposed to our setting where we first estimate θ with θ̂ and then evaluate Mθ̂. Their result also depends on the ratio between (constrained) minimum and maximum eigenvalues as in (10), and hence fails when M is not full rank or otherwise has divergent maximum and minimum (constrained) eigenvalues.
Comparing Theorems 3 and 5 with the results surveyed above raises several points (illustrated in Table 1):
• Nontrivialness when M is not full rank. Unlike the lower bound in (10), the ratio ‖Λ_M‖_1^2/‖Λ_M‖_2^2 does not vanish when M is not full rank; see Case (d) in Table 1. This is particularly important when the dimension of the parameter is large relative to the number of observed samples.
Remark 8. In some cases, (10) can be improved by ignoring certain components of θ ∈ Rd via dimensionality reduction. For example, if the first two columns of M are the same, then it is possible to ignore the first component of θ and simply look at the remaining d− 1 components. We remark that even with this reduction, (10) still depends on the ratio between minimum and maximum constrained eigenvalues of the new “effective” matrix, and leads to a poor lower bound when the minimum and maximum constrained eigenvalues are of a different order. We remark that other dimensionality reduction methods (such as rotations) may be limited by the sparsity constraint ‖θ‖0 ≤ k. Moreover, in general when the spectrum of M is all positive (with divergent large/small eigenvalues), one cannot use dimensionality reduction to improve the result of (10).
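A quick numerical illustration of this point (illustrative sizes, not an experiment from the paper): with a repeated column, the constrained eigenvalue ratio driving (10) is exactly zero (take θ = e_1 − e_2), while the spectral ratio driving Theorem 3 remains on the order of the rank of M.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 6
M = rng.standard_normal((n, d))
M[:, 1] = M[:, 0]                                      # repeated column: M @ (e1 - e2) = 0

lam = np.sort(np.linalg.eigvalsh(M.T @ M))[::-1]
print(lam.sum() ** 2 / (lam ** 2).sum())               # ||Lambda_M||_1^2 / ||Lambda_M||_2^2, about rank(M)
print(max(lam[-1], 0.0) / lam[0])                      # lambda_min / lambda_max: numerically 0
```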
2.2 Application to Gaussian Designs
Gaussian designs are frequently adopted in machine learning and compressed sensing (see, for example, [18, 33–35]). The following proposition provides a concentration bound for the ratio ‖Λ_M‖_1^2/‖Λ_M‖_2^2 when M is sampled from the standard Gaussian ensemble (i.e., where each component of M is sampled i.i.d. according to a standard Gaussian). Proposition 9. Let the design matrix M ∈ R^{n×k} be sampled from the Gaussian ensemble. There exist universal constants c_1, c_2, c_3 > 0 such that ‖Λ_M‖_1^2/‖Λ_M‖_2^2 ≥ c_1 min(n, k) with probability at least 1 − c_2 exp(−c_3 min(n, k)).
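Proposition 9 is easy to probe empirically; the following Monte Carlo sketch (arbitrary sizes and trial counts, not from the paper) checks that the ratio stays within a constant factor of min(n, k) for Gaussian designs.

```python
import numpy as np

rng = np.random.default_rng(2)

def spectral_ratio(n, k, trials=200):
    vals = []
    for _ in range(trials):
        M = rng.standard_normal((n, k))
        lam = np.linalg.eigvalsh(M.T @ M)
        vals.append(lam.sum() ** 2 / (lam ** 2).sum())  # ||Lambda_M||_1^2 / ||Lambda_M||_2^2
    return np.array(vals)

for n, k in [(200, 50), (200, 200), (50, 200)]:
    r = spectral_ratio(n, k)
    print(n, k, r.min() / min(n, k))                    # bounded away from 0, as the proposition predicts
```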
Proposition 9 implies that, with high probability, the lower bound of Theorem 5 (and therefore the corresponding estimate in Theorem 3) is sharp up to a logarithmic term that is negligible when d ≍ k. In particular, under the assumptions of Theorem 5, we obtain with the help of (10) that
σ^2 min( k/n, 1 )  .  inf_{θ̂} sup_{‖θ‖_0≤k} (1/n) E‖Mθ̂ − Mθ‖_2^2  .  σ^2 min( (k/n) log(ed/k), 1 ),   (12)
with the lower bound holding with high probability in min(n, k). This can significantly improve on the lower bound (10); consider, for example, the case where s := min(2k, d) = αn for
some fixed α < 1. Note that any n × s submatrix M′ of M satisfies Φ_{2k,−}(M)/Φ_{2k,+}(M) ≤ λ_min(M′^⊤M′)/λ_max(M′^⊤M′). An asymptotic result by Bai and Yin [36] implies that if α is fixed, then this latter ratio converges to (1 − √α)^2 / (1 + √α)^2 almost surely as n, k, d → ∞. Hence, asymptotically speaking, the result of (10) is tight at most up to constants depending on α, while our result of Corollary 4 is tight (up to log factors) without any dependence on α.
Interestingly, Proposition 9 also holds for square matrices, where the minimum eigenvalue is close to zero (more precisely, for a square Gaussian matrix M ∈ R^{n×n}, λ_min(M^⊤M) is of the order n^{-1}, as shown in the work of Rudelson and Vershynin [37]). Proposition 9 follows from Szarek's work [38] on concentration of the largest n/2 singular values for a square Gaussian matrix M ∈ R^{n×n}, concentration of singular values of rectangular subgaussian matrices [26], and an application of interlacing inequalities for singular values of submatrices [39]. Similar results can be shown for subgaussian matrices under additional assumptions using tools from [40].
3 Key Points of Proofs of Main Theorems
In our approach, we will be using classical information theory tools inspired by the techniques developed by Lee and Courtade [23].
3.1 Preliminaries
We say that a measure µ is log-concave if dµ(x) = e^{−V(x)} dx for some convex function V(·). The Fisher information I_X(θ) given θ ∈ R^d, corresponding to the map θ ↦ P_θ, is defined as
I_X(θ) = E_X ‖∇_θ log f(X; θ)‖_2^2 ,
where the gradient is taken with respect to θ, and the expectation is taken with respect to X ∼ Pθ. If the parameter θ has a prior π that is log-concave, the following lemma gives an upper bound on the mutual information I(θ;X), which depends on the covariance matrix of θ, defined as Cov(θ).
Lemma 10 (Theorem 2, [24]). Suppose the prior π of θ ∈ R^d is log-concave. Then, under mild regularity conditions on the map θ ↦ P_θ, we have
I(θ;X) ≤ d · φ( Tr(Cov(θ)) · E I_X(θ) / d^2 ),   (13)
where the function φ(·) is defined as φ(x) := √x if 0 ≤ x < 1, and φ(x) := 1 + (1/2) log x if x ≥ 1.
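As a one-dimensional sanity check of (13) (not from the paper), consider the Gaussian location model θ ∼ N(0, β²), X = θ + Z with Z ∼ N(0, σ²), where both the exact mutual information and the bound are available in closed form.

```python
import numpy as np

def phi(x):
    return np.sqrt(x) if x < 1 else 1 + 0.5 * np.log(x)

beta2, sigma2 = 25.0, 1.0
exact_mi = 0.5 * np.log(1 + beta2 / sigma2)            # I(theta; X) for the Gaussian location model
bound = phi(beta2 / sigma2)                            # (13) with d = 1, Tr(Cov(theta)) = beta^2, E I_X = 1/sigma^2
print(exact_mi, bound)                                 # the bound dominates the exact value, as it should
```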
We note that the regularity condition in Lemma 10 requires that each member of the parametric family P_θ has density f(·; θ) smooth enough to permit the following interchange of integration and differentiation,
∫_X ∇_θ f(x; θ) dλ(x) = 0,   µ-a.e. θ.   (14)
In our case, since we are working with the GLM of (2), the regularity condition is automatically satisfied.
When θ is a one-dimensional (i.e., d = 1) log-concave random variable, the bound of (13) is sharp up to a (modest) multiplicative constant when Var(θ)E IX(θ) is bounded away from zero. There exists a tighter version of Lemma 10 when π is uniformly log-concave, however Lemma 10 is enough for our purposes. We direct the interested reader to the paper [24].
3.2 Proof Sketch of Theorem 3
We start off by noting that we can lower bound the entropic Bayes risk of (6) by taking a specific prior π. For our purposes, we will let θ have a multivariate Gaussian prior π = N ( 0, β2 Id ) .
We continue with a bound on the sum of conditional entropy powers:
Σ_{i=1}^{n} exp( 2h(m_i^⊤θ | m_i^⊤θ̂) ) ≥ Σ_{i=1}^{n} exp( 2h(m_i^⊤θ) − 2I(m_i^⊤θ; X) ),   (15)
which follows from the data-processing inequality I(m_i^⊤θ; m_i^⊤θ̂) ≤ I(m_i^⊤θ; X), since m_i^⊤θ → X → m_i^⊤θ̂ forms a Markov chain.
When m_i ∈ R^d is a zero vector, exp( 2h(m_i^⊤θ | m_i^⊤θ̂) ) = exp( 2h(m_i^⊤θ) − 2I(m_i^⊤θ; X) ) = 0, and hence such a term does not contribute to the summations within (15). This implies that removing zero-vector rows from M does not affect the proof following (15). Hence, in the following we will assume that the matrix M does not have rows that are zero vectors.
By our choice of the prior π, the density of m_i^⊤θ is Gaussian and hence log-concave, which allows us to invoke Lemma 10, implying
Σ_{i=1}^{n} exp( 2h(m_i^⊤θ | m_i^⊤θ̂) ) ≥ Σ_{i=1}^{n} exp( 2h(m_i^⊤θ) − 2φ( Var(m_i^⊤θ) · E I_X(m_i^⊤θ) ) ).   (16)
Here, the expectation is taken with respect to the marginal density of m_i^⊤θ. The primary task is now to obtain a reasonable bound on the expected Fisher information term E I_X(m_i^⊤θ). To do this, we introduce the following lemma, which provides an upper bound on this quantity. Lemma 11. Fix M ∈ R^{n×d}. If the parameter θ has prior π = N(0, β^2 I_d) and X ∈ R^n is sampled according to the generalized linear model defined in (2), then
E I_X(m_i^⊤θ) ≤ (L/s(σ)) · ‖Mm_i‖_2^2 / ‖m_i‖_2^4 + (1/β^2) · Ψ_i(M)   for all i = 1, 2, . . . , n.   (17)
The functions Ψ_i(M) depend only on M and are finite. The expectation is taken with respect to the marginal density of m_i^⊤θ.
The functions Ψ_i(·) are not explicitly stated here because later we will take β large enough that the Ψ_i(·)/β^2 term in (17) can be ignored. A proof of Lemma 11 and more details about the functions Ψ_i(·) are included in the supplementary. We can continue from (16) and see that
Σ_{i=1}^{n} exp( 2h(m_i^⊤θ | m_i^⊤θ̂) )
  &  Σ_{i=1}^{n} β^2 ‖m_i‖_2^2 exp( −2φ[ β^2 ‖m_i‖_2^2 ( (L/s(σ)) · ‖Mm_i‖_2^2/‖m_i‖_2^4 + Ψ_i(M)/β^2 ) ] )
  ≍  Σ_{i=1}^{n} [ (L/s(σ)) · ‖Mm_i‖_2^2/‖m_i‖_2^4 + Ψ_i(M)/β^2 ]^{-1}        (a)
  =  (1 − ε) · (s(σ)/L) · Σ_{i=1}^{n} ‖m_i‖_2^4/‖Mm_i‖_2^2 .                  (b)   (18)
In the above, both (a) and (b) require β^2 to be selected large enough. In particular, in (a), β^2 ≥ s(σ)/L guarantees that the function φ behaves logarithmically (recall from Lemma 10 that φ(t) behaves logarithmically if t ≥ 1). In (b), the variable ε depends on the selection of β. Since the function Ψ_i(M) is finite for all i = 1, . . . , n, by taking β^2 to be a large enough constant we can force ε to be as close to zero as possible. Hence, we can say that the inequality holds with ε = 0. A direct application of the Cauchy–Schwarz inequality then yields
Σ_{i=1}^{n} exp( 2h(m_i^⊤θ | m_i^⊤θ̂) )  ≥  (s(σ)/L) · ( Σ_{i=1}^{n} ‖m_i‖_2^2 )^2 / ( Σ_{i=1}^{n} ‖Mm_i‖_2^2 )  =  (s(σ)/L) · ‖Λ_M‖_1^2 / ‖Λ_M‖_2^2 .   (19)
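The final equality in (19) uses two trace identities; since the m_i are the rows of M, Σ_i m_i m_i^⊤ = M^⊤M, and therefore

```latex
\sum_{i=1}^{n}\|m_i\|_2^2 = \operatorname{Tr}(M^\top M) = \|\Lambda_M\|_1,
\qquad
\sum_{i=1}^{n}\|M m_i\|_2^2 = \sum_{i=1}^{n} m_i^\top (M^\top M)\, m_i
  = \operatorname{Tr}\!\big((M^\top M)^2\big) = \|\Lambda_M\|_2^2 .
```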
On the other hand, from Theorem 7 and the matrix identity ‖Mv‖22 ≥ λmin(M>M)‖v‖22,
inf_{θ̂} sup_{θ∈R^d} E‖Mθ̂ − Mθ‖_2^2  ≥  λ_min(M^⊤M) · Tr( (M^⊤M)^{-1} )  =  λ_d · ‖Λ_M^{-1}‖_1 .   (20)
Combining (19) and (20) with Lemma 1 finishes the proof.
3.3 An Alternative Proof of Theorem 5
For the Gaussian linear model, we have the following tighter version of Lemma 11.
Lemma 12. Fix M ∈ R^{n×d}. If θ ∼ N(0, β^2 I_d) and X ∈ R^n is sampled according to the Gaussian linear model defined in (8), then
E I_X(m_i^⊤θ) ≤ (1/σ^2) · ‖Mm_i‖_2^2 / ‖m_i‖_2^4   for 1 ≤ i ≤ n.   (21)
By taking any β^2 ≥ σ^2 max_i( ‖m_i‖_2^2/‖Mm_i‖_2^2 ), the function φ(·) in (16) will again behave logarithmically, directly implying (18) with ε = 0. The remaining proof follows similarly as before. Remark 13. The functions Ψ_i(·) can be difficult to bound directly (see the supplementary for more details). Hence, the improved tightness and simplicity of Lemma 12 over Lemma 11 for the Gaussian linear model provides more flexibility in the selection of β. This can be helpful when dealing with problem settings where there are other constraints on the parameter space Θ. Remark 14. There is a subtle but crucial difference in the proof techniques employed here compared to those in [23]. The key step in [23] requires bounding the Fisher information I_X(θ_i) with diagonal terms of the Fisher information matrix I_X(θ), i.e., Lemma 9 of [23]. In our case, we need to bound the Fisher information I_X(m_i^⊤θ) (e.g., Lemma 11), and here the terms m_i^⊤θ are not necessarily mutually independent, as required in Lemma 9 of [23], which prevents a direct application. Instead, we choose θ to have a Gaussian prior and try to bound I_X(θ_i) directly. This is facilitated by properties of the Gaussian distribution; see Section 4.3 in the appendix for more details.
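As a rough numerical illustration of the tightness discussion for the Gaussian linear model (not an experiment from the paper), one can compare the prediction risk of ordinary least squares against the lower bound (σ²/n)·Γ(M) on a well-conditioned Gaussian design with no sparsity constraint (k = d); the sizes and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, sigma = 2000, 20, 1.0
M = rng.standard_normal((n, d))
theta = rng.standard_normal(d)

risks = []
for _ in range(200):                                    # empirical prediction risk of least squares
    X = M @ theta + sigma * rng.standard_normal(n)
    theta_hat = np.linalg.lstsq(M, X, rcond=None)[0]
    risks.append(np.mean((M @ (theta_hat - theta)) ** 2))
ols_risk = np.mean(risks)                               # close to sigma^2 * d / n

lam = np.linalg.eigvalsh(M.T @ M)
lower = sigma ** 2 / n * max(lam.sum() ** 2 / (lam ** 2).sum(),
                             lam.min() * (1.0 / lam).sum())
print(ols_risk, lower)                                  # the lower bound is within a constant of the OLS risk
```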
Broader Impact
The generalized linear model (GLM) is a broad class of statistical models that have extensive applications in machine learning, electrical engineering, finance, biology, and many areas not stated here. Many algorithms have been proposed for inference, prediction and classification tasks under the umbrella of the GLM, such as the Lasso algorithm, the EM algorithm, Dantzig selectors, etc., but often it is hard to confidently assess optimality. Lower bounds for minimax and Bayes risks play a key role here by providing theoretical benchmarks with which one can evaluate the performance of algorithms. While many previous approaches have focused on the Gaussian linear model, in this paper we establish minimax and Bayes risk lower bounds that hold uniformly over all statistical models within the GLM. Our arguments demonstrate a set of information-theoretic techniques that are general and applicable to setups other than the GLM. As a result, many applications stand to potentially benefit from our work.
Acknowledgments
This work was supported in part by NSF grants CCF-1704967, CCF-1750430, CCF-0939370. | 1. What are the contributions and key findings of the paper in terms of minimax risk bounds for generalized linear models and Gaussian linear models?
2. How do the results of the paper compare to previous works, specifically Lee-Courtade?
3. Can you provide more details about the tools developed by Y. Wu that were used in the paper?
4. How does the paper demonstrate the effectiveness of information theoretic methods in extracting dependencies on the structure of the design matrix?
5. What are some potential applications or future directions for research related to the paper's findings? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
Minimax risk bounds for Generalized Linear Models and Gaussian Linear models for L_2 and entropic risks. The results go beyond Lee-Courtade. using tools developed by Y. Wu. Gives interesting and improved results even when design matrix is not well structured (fat, rank deficient). Works for some random designs. The paper provides evidence that information theoretic methods can extract (sense?) dependencies on the structure of design matrix.
Strengths
Well-written. Timely, interesting and explainable results. Of interest to any theoretically inclined reader. I find the connexion between risks and ratio of Schatten 1-2 norms of design matrix particularly appealing.
Weaknesses
Nothing special |
NIPS | Title
Minimax Bounds for Generalized Linear Models
Abstract
We establish a new class of minimax prediction error bounds for generalized linear models. Our bounds significantly improve previous results when the design matrix is poorly structured, including natural cases where the matrix is wide or does not have full column rank. Apart from the typical L2 risks, we study a class of entropic risks which recovers the usual L2 prediction and estimation risks, and demonstrate that a tight analysis of Fisher information can uncover underlying structural dependency in terms of the spectrum of the design matrix. The minimax approach we take differs from the traditional metric entropy approach, and can be applied to many other settings.
1 Introduction
Throughout, we consider a parametric framework where observations X ∈ Rn are generated according to X ∼ Pθ, where Pθ denotes a probability measure on a measurable space (X ⊆ Rn,F) indexed by an underlying parameter θ ∈ Θ ⊂ Rd. For each Pθ, we associate a density f(·; θ) with respect to an underlying measure λ on (X ,F) according to
dPθ(x) = f(x; θ)dλ(x).
This setup contains a vast array of fundamental applications in machine learning, engineering, neuroscience, finance, statistics and information theory [1–10]. As examples, mean estimation [1], covariance and precision matrix estimation [2], phase retrieval [3,4], group or membership testing [5], pairwise ranking [10], can all be modeled in terms of parametric statistics. The central question to address in all of these problems is essentially the same: how accurately can we infer the parameter θ given the observation X?
One of the most popular parameteric families is the exponential family, which captures a rich variety of parametric models such as binomial, Gaussian, Poisson, etc. Given a parameter η ∈ R, a density f(·; η) is said to belong to the exponential family if it can be written as
f(x; η) = g(x) exp ( ηx− Φ(η) s(σ) ) . (1)
Here, the parameter η is the natural parameter, g : X ⊆ R→ [0,∞) is the base measure, Φ : R→ R is the cumulant function, and s(σ) > 0 is a variance parameter. The density f(·; η) is understood to be on a probability space (X ⊆ R,F) with respect to a dominating σ-finite measure λ. In this work, we are interested in the following generalized linear model (GLM), where observation X ∈ Rn is generated according to an exponential family with natural parameter equal to a linear transformation of the underlying parameter θ. In other words,
f(x; θ) = n∏ i=1 { g(xi) exp ( xi〈mi, θ〉 − Φ(〈mi, θ〉) s(σ) )} , (2)
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
for a real parameter θ := (θ1, θ2, . . . , θd) ∈ Rd and a fixed design matrix M ∈ Rn×d, with rows given by the vectors {mi}ni=1 ⊂ Rd. The above model assumes each Xi is drawn from its own exponential family, with respective natural parameters 〈mi, θ〉, i = 1, 2, . . . , n. Evidently, this captures the classical (Gaussian) linear model X = Mθ + Z, where f(·; θ) is taken to be the usual Gaussian density, and also captures a much broader class of problems including phase retrieval, matrix recovery and logistic regression. See [11–13] for history and theory of the generalized linear model.
In order to evaluate the performance of an estimator θ̂ (i.e., a measurable function ofX), it is common to define a loss function L(·, ·) : Rd × Rd 7−→ R and analyze the loss L(θ, θ̂). A typical figure of merit is the constrained minimax risk R(M,Θ), defined as
R(M,Θ) := inf θ̂ sup θ∈Θ L(θ, θ̂).
In words, the minimax risk characterizes the worst-case risk under the specified loss L(·, ·) achieved by the best estimator, with a constraint that θ belongs to a specified parameter space Θ.
Two choices of the loss function L(·, ·) give rise to the usual variants of L2 loss:
1. Estimation loss, where the loss function L(·, ·) is defined as
L1(θ, θ̂) = E‖θ − θ̂‖2 for all θ, θ̂ ∈ Rd. (3)
2. Prediction loss, where the loss function L(·, ·) is defined as
L2(θ, θ̂) = 1
n E‖Mθ −Mθ̂‖2 for all θ, θ̂ ∈ Rd. (4)
In this work, we shall approach things from an information theoretic viewpoint. In particular, we will bound minimax risk under entropic loss (closely connected to logarithmic loss in the statistical learning and information literature, see, e.g., [14–16]), from which L2 estimates will follow. To start, let us review some of the key definitions in information theory. Suppose the parameter θ ∈ Rd follows a prior π, a probability measure on Rd having density ψ with respect to Lebesgue measure. The differential entropy h(θ) corresponding to random variable θ is defined as
h(θ) := − ∫ Rd ψ(u) logψ(u)du.
Here and throughout, we will take logarithms with respect to the natural base, and assume all entropies exist (i.e., their defining integrals exist in the Lebesgue sense). The mutual information I(θ;X) between parameter θ ∼ π and observation X ∼ Pθ is defined as
I(θ;X) := ∫ Rd ∫ X f(x; θ) log f(x; θ)∫ Rd f(x; θ ′)dπ(θ′) dλ(x)dπ(θ).
The conditional entropy is defined as h(θ|X) := h(θ)− I(θ;X). The entropy power of a random variable U is defined as exp(2h(U)), and for any two random variables U and V with well-defined conditional entropy, the conditional entropy power is defined similarly as exp(2h(U |V )). Lower bounds on conditional entropy power can be translated into lower bounds of other losses, via tools in rate distortion theory [17]. To illustrate this, let’s consider the following two Bayes risks, with suprema taken over all priors π on the parameter space Θ ⊆ Rd, and infima taken over all valid estimators θ̂ (i.e., measurable functions of X).
1. Entropic estimation loss, where the Bayes risk is defined as
Re(M,Θ) := inf θ̂ sup π n∑ i=1 exp ( 2h(θi|θ̂i) ) . (5)
2. Entropic prediction loss, where the Bayes risk is defined as
Rp(M,Θ) := inf θ̂ sup π
1
n n∑ i=1 exp ( 2h(m>i θ|m>i θ̂) ) . (6)
The following simple observation shows that any lower bound derived for the entropic Bayes risks implies a lower bound on the minimax L2 risks.
Lemma 1. We have inf θ̂ supθ∈Θ L1(θ, θ̂) & Re(M,Θ) and inf θ̂ supθ∈Θ L2(θ, θ̂) & Rp(M,Θ).
Proof. This follows since Gaussians maximize entropy subject to second moment constraints and conditioning reduces entropy: E(θi−θ̂i)2 ≥ Var(θi−θ̂i) & exp(2h(θi−θ̂i)) & exp(2h(θi|θ̂i)).
Here and onwards, we use “&” (also “.” and “ ”) to refer to “≥” (and “≤”, “=”, respectively) up to constants that do not depend on parameters.
Although we focus on L2 loss in the present work, we remark that minimax bounds on entropic loss directly yield corresponding estimates on Lp loss using standard arguments involving covering and packing numbers of Lp spaces. See, for example, the work by Raskutti et al. [18]. Despite its universal nature, there is relatively limited work on deriving minimax bounds for the entropic loss. This is the focus of the present work, and as a consequence, we obtain bounds on L2 loss that significantly improve on prior results when the matrix M is poorly structured.
1.1 Contributions
In this paper, we make three main contributions.
1. First, we establish L2 minimax risk and entropic Bayes risk bounds for the generalized linear model (2). The generality of the GLM allows us to extend our results to specific instances of the GLM such as the Gaussian linear model, phase retrieval and matrix recovery.
2. Second, we establish L2 minimax risk and entropic Bayes risk bounds for the Gaussian linear model. In particular, our bounds are nontrivial for many instances where previous results fail (for example when M ∈ Rn×d does not have full column rank, including cases with d > n), and can be naturally applied to the sparse problem where ‖θ‖0 ≤ k. Further, we show that both our minimax risk and entropic Bayes risk bounds are tight up to constants and log factors when M is sampled from a Gaussian ensemble.
3. Third, we investigate the L2 minimax risk via the lens of the entropic Bayes risk, and provide evidence that information theoretic minimax methods can naturally extract dependencies on the structure of design matrix M via analysis of Fisher information. The techniques we develop are general and can be used to establish minimax results for other problems.
2 Main Results and Discussion
The following notation is used throughout: upper-case letters (e.g., X , Y ) denote random variables or matrices, and lower-case letters (e.g., x, y) denote realizations of random variables or vectors. We use subscript notation vi to denote the i-th component of a vector v = (v1, v2, . . . , vd). We let [k] denote the set {1, 2, . . . , k}. We will be making the following assumption.
Assumption: The second derivative of the cumulant function Φ is bounded uniformly by a constant L > 0: Φ′′(·) ≤ L. The following lemma characterizes the mean and variance of densities in the exponential family.
Lemma 2 (Page 29, [11]). Any observation X generated according to the exponential family (1) has mean Φ′(η) and variance s(σ) · Φ′′(η).
In other words, our assumption is equivalent to saying that the variance of each observation X1, . . . , Xn is bounded. This is a common assumption made in the literature; See, for example, [19–22].
Our first main result establishes a minimax prediction lower bound corresponding to the generalized linear model (2). Let us first make a few definitions. For an n × k matrix A, we define the vector ΛA := (λ1, . . . , λk) ∈ Rk, where the λi’s denote the eigenvalues of the k × k symmetric matrix
A>A in descending order. ‖ΛA‖p denotes the usual Lp norm of the vector ΛA for p ≥ 1. Finally, we define
Γ(A) := max ( ‖ΛA‖21 ‖ΛA‖22 , λmin(A >A) ‖Λ−1A ‖1 ) , (7)
where Λ−1A := (λ −1 1 , . . . , λ −1 k ), with the convention that λmin(A >A)‖Λ−1A ‖1 = 0 when λmin(A >A) = 0.
Theorem 3. For observations X ∈ Rn generated via the generalized linear model (2) with a fixed design matrix M ∈ Rn×d, the minimax L2 prediction risk and the entropic Bayes prediction risk are lower bounded by
1 n inf θ̂ sup θ∈Rd E‖Mθ̂ −Mθ‖2 & 1 n s(σ) L Γ(M).
1 n inf θ̂ sup π n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) ) & 1 n s(σ) L ‖ΛM‖21 ‖ΛM‖22 .
Bounds on minimax risk under an additional sparsity constraint ‖θ‖0 ≤ k (i.e., the true parameter θ has at most k non-zero entries) can be derived as a corollary.
Corollary 4 (Sparse Version of Theorem 3). For observations X ∈ Rn generated via the generalized linear model (2), with the additional constraint that ‖θ‖0 ≤ k (i.e., Θ := {θ ∈ Rd : ‖θ‖0 ≤ k}), the minimax prediction error is lower bounded by
1 n inf θ̂ sup θ∈Θ E‖Mθ̂ −Mθ‖2 & 1 n s(σ) L max Q∈Mk Γ(Q).
1 n inf θ̂ sup π n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) ) & 1 n s(σ) L max Q∈Mk ‖ΛQ‖21 ‖ΛQ‖22 .
Here, the maximum is taken overMk, the set of all n× k′ submatrices of M , with k′ ≤ k.
We now note an important specialization of Corollary 4. In particular, consider the Gaussian linear model with observations X ∈ Rn generated according to
X = Mθ + Z, (8)
with Z ∼ N (0, σ2 In) the standard Gaussian vector. This corresponds to the GLM of (2) when the functions are taken to be h(x) = e−x
2/(2σ2), s(σ) = σ2, and Φ(t) = t2/2 (hence, L = 1). This is a particularly important instance worth highlighting because of the ubiquity of the Gaussian linear model in applications.
Theorem 5. For observations X ∈ Rn generated via the Gaussian linear model (8), with the sparsity constraint ‖θ‖0 ≤ k (i.e., Θ := {θ ∈ Rd : ‖θ‖0 ≤ k}), the minimax prediction error is lower bounded by
1 n inf θ̂ sup θ∈Θ E‖Mθ̂ −Mθ‖2 & σ 2 n max Q∈Mk Γ(Q).
1 n inf θ̂ sup π n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) ) & σ2 n max Q∈Mk ‖ΛQ‖21 ‖ΛQ‖22 .
Here, the maximum is taken overMk, the set of all n× k′ submatrices of M , with k′ ≤ k. Remark 6. In the above results, the function Γ(·) can in fact be replaced with
Γ̃(M) := max ( n∑ i=1 ‖mi‖42 ‖Mmi‖2 , λmin(M >M)‖Λ−1M ‖1 ) ,
which is stronger than the original statements. However, the chosen statements above highlight the simple dependence on the spectrum of ΛM .
2.1 Related Work
Most relevant to our results is the following lower bound on minimax L2 estimation risk and entropic Bayes estimation risk, developed in a recent work by Lee and Courtade [23]. We note that [23] does not bound prediction loss (which is often of primary interest), as we have done in the present paper. Theorem 7 (Theorem 3, [23]). Let observation X be generated via the generalized linear model defined in (2), with the additional structural constraint Θ = Bd2(R) := {v : ‖v‖22 ≤ R2}. Suppose the cumulant function Φ satisfies Φ′′ ≤ L for some constant L. Then, the minimax estimation error is lower bounded by
inf θ̂ sup θ∈Θ E‖θ̂ − θ‖2 & inf θ̂ sup π n∑ i=1 exp(2h(θi|θ̂i)) & min ( R2, s(σ) L Tr((M>M)−1) ) . (9)
The bound of (9) is tight when X is generated by the Gaussian linear model, showing that (Gaussian) linear models are most favorable in the sense of minimax estimation error amongst the class of GLMs considered here. Lee and Courtade extracted the dependence on the Tr(M>M) term by analyzing a Fisher information term in the class of Bayesian Cramér-Rao-type bounds from [24]. Earlier work (see, e.g., [25]) yielded bounds on the order of d/λmax(M>M), which is loose compared to (9).
There is a large body of work that establish minimax lower bounds on prediction error for specific models of the generalized linear model. Typically, these analyses depend on methods involving metric entropy (see, for example, [4, 18, 19, 26–28]). A popular minimax result is due to Raskutti et al. [18], who consider the sparse Gaussian linear model, where for a fixed design matrix M with an additional sparsity constraint ‖θ‖0 ≤ k,
σ2 Φ2k,−(M)
Φ2k,+(M)
k n log
( ed
k
) . inf
θ̂ sup ‖θ‖0≤k
1 n E‖Mθ̂ −Mθ‖22 . σ2 min
( k
n log
( ed
k
) , 1 ) . (10)
Here the terms Φr,−(M) and Φr,+(M) correspond to the constrained eigenvalues,
Φr,−(M) := inf 06=‖θ‖0≤r
‖Mθ‖2
‖θ‖2 , Φr,+(M) := sup
06=‖θ‖0≤r
‖Mθ‖2
‖θ‖2 . (11)
The upper bound of (10) is achieved by classical methods such as aggregation [29–32].
One can readily observe that the lower bound of (10) becomes degenerate for even mildly ill-structured design matrices M . For example, in the case where M has repeating columns, the above result gives a lower bound of 0, which is not very interesting. This suggests that the metric entropy approach does not easily capture the dependence of the structure of design matrixM at the resolution of the complete spectrum of M>M as our results do. In fact, it can be shown that Corollary 4 uniformly improves upon (10) up to logarithmic factors; see Section 4.1 of the supplementary. Further, the lower bound of Raskutti et al. does not hold for k > n, which is a disadvantage for high dimensional problems where d n. Verzelen [30] discusses the regime where kn log ( ed k ) ≥ 12 and k ≤ max(d
1/3, n/5) and provide bounds for the worst-case matrix M , which is a different setting from ours.
There are also lines of work on specific settings of the generalized linear model. For example, Candes et al. [28] discusses low-rank matrix recovery, and Cai et al. [4] considers phase retrieval. There are, however, fewer results that directly look at the generalized linear model of our setting. The closest work related is that of Abramovich and Grinshtein [19], where they consider estimating the entire vector Mθ, as opposed to our setting where we estimate θ first with θ̂, then evaluate Mθ̂. Their result also depends on the ratio between (constrained) minimum and maximum eigenvalues as in (10), and hence fails when M is not full rank or otherwise has divergent maximum and minimum (constrained) eigenvalues.
Comparing Theorems 3 and 5 with the results surveyed above raises several points (illustrated in Table 1):
• Nontrivialness when M is not full rank. Unlike the lower bound in (10), the ratio ‖ΛM‖21/‖ΛM‖22 does not vanish when M is not full rank; see Case (d) in Table 1. This is particularly important when the dimension of the parameter is large relative to the number of observed samples.
Remark 8. In some cases, (10) can be improved by ignoring certain components of θ ∈ Rd via dimensionality reduction. For example, if the first two columns of M are the same, then it is possible to ignore the first component of θ and simply look at the remaining d− 1 components. We remark that even with this reduction, (10) still depends on the ratio between minimum and maximum constrained eigenvalues of the new “effective” matrix, and leads to a poor lower bound when the minimum and maximum constrained eigenvalues are of a different order. We remark that other dimensionality reduction methods (such as rotations) may be limited by the sparsity constraint ‖θ‖0 ≤ k. Moreover, in general when the spectrum of M is all positive (with divergent large/small eigenvalues), one cannot use dimensionality reduction to improve the result of (10).
2.2 Application to Gaussian Designs
Gaussian designs are frequently adopted in machine learning and compressed sensing (see, for example, [18, 33–35]). The following proposition provides a concentration bound for the ratio ‖ΛM‖21/‖ΛM‖22 when M is sampled from the standard Gaussian ensemble (i.e., where each component of M is sampled i.i.d. according to a standard Gaussian). Proposition 9. Let the design matrix M ∈ Rn×k be sampled from the Gaussian ensemble. There exist universal constants c1, c2, c3 > 0 such that ‖ΛM‖21/‖ΛM‖22 ≥ c1 min(n, k) with probability at least 1− c2 exp(−c3 min(n, k)).
Proposition 9 implies that, with high probability, the lower bound of Theorem 5 (and therefore the corresponding estimate in Theorem 3) is sharp up to a logarithmic term that is negligible when d k. In particular, under the assumptions of Theorem 5, we obtain with the help of (10) that
σ2 min
( k
n , 1
) . inf
θ̂ sup ‖θ‖0≤k
1 n E‖Mθ̂ −Mθ‖22 . σ2 min
( k
n log
( ed
k
) , 1 ) , (12)
with the lower bound holding with high probability in min(n, k). This can significantly improve on the lower bound (10); consider, for example, the case where s := min(2k, d) = αn for
some fixed α < 1. Note that any n × s submatrix M ′ of M satisfies Φ2k,−(M)/Φ2k,+(M) ≤ λmin(M ′>M ′)/λmax(M ′>M ′). An asymptotic result by Bai and Yin [36] implies that if α is fixed then this latter ratio converges to (1− √ α) 2 / (1 + √ α)
2 almost surely as n, k, d → ∞. Hence, asymptotically speaking, the result of (10) is tight at most up to constants depending on α while our results of Corollary 4 is tight (up to log factors) without dependency of α.
Interestingly, Proposition 9 also holds for square matrices, where the minimum eigenvalue is close to zero (more precisely, for a square Gaussian matrix M ∈ Rn×n, λmin(M>M) is of the order n−1, as shown in the work of Rudelson and Vershynin [37]). Proposition 9 follows from Szarek’s work [38] on concentration of the largest n/2 singular values for a square Gaussian matrix M ∈ Rn×n, concentration of singular values of rectangular subgaussian matrices [26], and an application of interlacing inequalities for singular values of submatrices [39]. Similar results can be shown for subgaussian matrices under additional assumptions using tools from [40].
3 Key Points of Proofs of Main Theorems
In our approach, we will be using classical information theory tools inspired by the techniques developed by Lee and Courtade [23].
3.1 Preliminaries
We say that a measure µ is log-concave if dµ(x) = e−V (x)dx for some convex function V (·). The Fisher information IX(θ) given θ ∈ Rd corresponding to the map θ 7−→ Pθ is defined as
IX(θ) = EX ‖∇θ log f(X; θ)‖22 ,
where the gradient is taken with respect to θ, and the expectation is taken with respect to X ∼ Pθ. If the parameter θ has a prior π that is log-concave, the following lemma gives an upper bound on the mutual information I(θ;X), which depends on the covariance matrix of θ, defined as Cov(θ).
Lemma 10 (Theorem 2, [24]). Suppose the prior π of θ ∈ Rd is log-concave. Then, under mild regularity conditions on the map θ 7−→ Pθ, we have
I(θ;X) ≤ d · φ (
Tr(Cov(θ)) · E IX(θ) d2
) , (13)
where the function φ(·) is defined as φ(x) := {√
x if 0 ≤ x < 1, 1 + 12 log x if x ≥ 1.
We note that the regularity condition in Lemma 10 requires that each member of the parametric family Pθ has density f(·; θ) smooth enough to permit the following change of integral and differentiation,∫
X ∇θf(x; θ)dλ(x) = 0, µ− a.e. θ. (14)
In our case, since we are working with the GLM of (2), the regularity condition is automatically satisfied.
When θ is a one-dimensional (i.e., d = 1) log-concave random variable, the bound of (13) is sharp up to a (modest) multiplicative constant when Var(θ)E IX(θ) is bounded away from zero. There exists a tighter version of Lemma 10 when π is uniformly log-concave, however Lemma 10 is enough for our purposes. We direct the interested reader to the paper [24].
3.2 Proof Sketch of Theorem 3
We start off by noting that we can lower bound the entropic Bayes risk of (6) by taking a specific prior π. For our purposes, we will let θ have a multivariate Gaussian prior π = N ( 0, β2 Id ) .
We continue with a bound on the sum of conditional entropy powers n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) ) ≥ n∑ i=1 exp ( 2h(m>i θ)− 2I(m>i θ;X) ) , (15)
which follows from the data-processing inequality I(m>i θ;m > i θ̂) ≤ I(m>i θ;X), since m>i θ → X → m>i θ̂ forms a Markov chain.
When mi ∈ Rd is a zero-vector, exp ( 2h(m>i θ|m>i θ) ) = exp ( 2h(m>i θ)− 2I(m>i θ;X) ) = 0 and hence does not contribute to the summations within (15). This implies that removing zero vector rows from M does not affect the proof following (15). Hence, in the following we will assume that the matrix M does not have rows that are zero vectors.
By our choice of the prior π, the density of m>i θ is Gaussian and hence log-concave, which allows us to invoke Lemma 10, implying
n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) ) ≥ n∑ i=1 exp ( 2h(m>i θ)− 2φ(Var(m>i θ) · E IX(m>i θ)) ) . (16)
Here, the expectation is taken with respect to the marginal density of m>i θ. The primary task is now to obtain a reasonable bound on the expected Fisher information term E IX(m>i θ). To do this, we introduce the following lemma, which provides an upper bound for the expected Fisher information E IX(m>i θ). Lemma 11. Fix M ∈ Rn×d. If parameter θ has a prior π = N (0, β2 Id) and X ∈ Rn is sampled according to the generalized linear model defined as (2), then
E IX(m>i θ) ≤ L s(σ) · ‖Mmi‖ 2 2 ‖mi‖42 + 1 β2 ·Ψi(M) for all i = 1, 2, . . . , n. (17)
The function Ψi(M) depends only on M and is finite. The expectation is taken with respect to the marginal density of m>i θ.
The functions Ψi(·) are not explictly stated here because later we will be taking β large enough so that Ψi(·)/β2 in (17) can be ignored. A proof of Lemma 11 and more details about the functions Ψi(·) are included in the supplementary. We can continue from (16) and see that
n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) )
& n∑ i=1 β2‖mi‖22 exp ( −2φ [ β2‖mi‖22 ( L s(σ) · ‖Mmi‖ 2 2 ‖mi‖42 + 1 β2 Ψi(M) )]) (a)
n∑ i=1
1
L s(σ) · ‖Mmi‖22 ‖mi‖42 + 1β2 Ψi(M)
(b) = (1− )s(σ)
L n∑ i=1 ‖mi‖42 ‖Mmi‖22 . (18)
In the above, both (a) and (b) require a selection of β2 to be large enough. In particular, in (a), β2 ≥ s(σ)/L would guarantee that the function φ behaves logarithmically (recall from Lemma 10 that φ(t) behaves logarithmically if t ≥ 1). In (b), the variable depends on the selection of β. Since the function Ψi(M) is finite for all i = 1, . . . , n, by taking β2 a constant large enough, we can force to be as close to zero as possible. Hence, we can say that the inequality holds with = 0. A direct application of the Cauchy-Schwarz inequality then yields
n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) ) ≥ s(σ) L (∑n i=1 ‖mi‖22 )2∑n i=1 ‖Mmi‖22 = s(σ) L ‖ΛM‖21 ‖ΛM‖22 . (19)
On the other hand, from Theorem 7 and the matrix identity ‖Mv‖22 ≥ λmin(M>M)‖v‖22,
inf θ̂ sup θ∈Rd
E‖Mθ̂ −Mθ‖22 ≥ λmin(M>M) · Tr ( (M>M)−1 ) = λd‖Λ−1M ‖1. (20)
Combining (19) and (20) with Lemma 1 finishes the proof.
3.3 An Alternative Proof of Theorem 5
For the Gaussian linear model, we have the following tighter version of Lemma 11.
Lemma 12. Fix M ∈ Rn×d. If θ ∼ N (0, β2 Id) and X ∈ Rn is sampled according to the Gaussian linear model defined as (8). Then,
\[
\mathbb{E}\, I_X(m_i^\top\theta) \;\le\; \frac{1}{\sigma^2} \cdot \frac{\|M m_i\|_2^2}{\|m_i\|_2^4} \qquad \text{for } 1 \le i \le n. \tag{21}
\]
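The proof of Lemma 12 is deferred to the supplementary; the following is only a plausible sketch of one route (our own reconstruction under the stated Gaussian assumptions, not quoted from the paper). Condition on m_i^\top\theta = t. Since θ is Gaussian, θ given {m_i^\top\theta = t} is Gaussian with mean (t/\|m_i\|_2^2)\,m_i and covariance β^2 P_\perp, where P_\perp = I_d - m_i m_i^\top/\|m_i\|_2^2. Hence, under the model X = Mθ + Z,
\[
X \mid \{m_i^\top\theta = t\} \;\sim\; \mathcal{N}\!\Big(\tfrac{t}{\|m_i\|_2^2}\,M m_i,\; \beta^2 M P_\perp M^\top + \sigma^2 I_n\Big) =: \mathcal{N}(\mu(t), \Sigma).
\]
For a Gaussian family whose mean is linear in t with constant covariance,
\[
I_X(t) \;=\; \mu'(t)^\top \Sigma^{-1} \mu'(t) \;=\; \frac{(M m_i)^\top \Sigma^{-1} (M m_i)}{\|m_i\|_2^4} \;\le\; \frac{\|M m_i\|_2^2}{\sigma^2 \|m_i\|_2^4},
\]
where the last step uses Σ ⪰ σ^2 I_n, so Σ^{-1} ⪯ σ^{-2} I_n. Since the bound is uniform in t, it also holds in expectation, which is the content of (21).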
By taking any β^2 ≥ σ^2 max_i (\|m_i\|_2^2 / \|M m_i\|_2^2), the function φ(·) in (16) again behaves logarithmically, directly implying (18) with ε = 0. The remaining proof follows as before.

Remark 13. The functions Ψ_i(·) can be difficult to bound directly (see the supplementary for more details). Hence, the improved tightness and simplicity of Lemma 12 over Lemma 11 for the Gaussian linear model provides more flexibility in the selection of β. This can be helpful in problem settings where there are additional constraints on the parameter space Θ.

Remark 14. There is a subtle but crucial difference between the proof techniques employed here and those in [23]. The key step in [23] bounds the Fisher information I_X(θ_i) by diagonal terms of the Fisher information matrix I_X(θ), i.e., Lemma 9 of [23]. In our case, we need to bound the Fisher information I_X(m_i^\top θ) (e.g., Lemma 11), and the terms m_i^\top θ are not necessarily mutually independent as required by Lemma 9 of [23], which prevents a direct application. Instead, we choose θ to have a Gaussian prior and bound I_X(m_i^\top θ) directly. This is facilitated by properties of the Gaussian distribution; see Section 4.3 in the appendix for more details.
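As a rough empirical check of Theorem 5 in the non-sparse case (k = d), the following simulation (our own illustration; the Gaussian design, dimensions, and noise level are arbitrary choices) compares the prediction risk of ordinary least squares with the spectral quantity in the lower bound. Theorem 5 asserts that the minimax risk is at least a constant multiple of this quantity.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, sigma, trials = 50, 10, 1.0, 2000

M = rng.standard_normal((n, d))
lam = np.linalg.eigvalsh(M.T @ M)
# Spectral lower-bound quantity from Theorem 5 (up to an absolute constant).
lower_bound = (sigma**2 / n) * lam.sum()**2 / np.sum(lam**2)

# Empirical prediction risk of least squares: (1/n) E ||M(theta_hat - theta)||_2^2.
pinv = np.linalg.pinv(M)
risk = 0.0
for _ in range(trials):
    theta = rng.standard_normal(d)
    X = M @ theta + sigma * rng.standard_normal(n)
    theta_hat = pinv @ X
    risk += np.sum((M @ (theta_hat - theta))**2) / n
risk /= trials

print(f"OLS prediction risk ~ {risk:.3f}; spectral lower-bound quantity ~ {lower_bound:.3f}")
```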
Broader Impact
The generalized linear model (GLM) is a broad class of statistical models that have extensive applications in machine learning, electrical engineering, finance, biology, and many areas not stated here. Many algorithms have been proposed for inference, prediction and classification tasks under the umbrella of the GLM, such as the Lasso algorithm, the EM algorithm, Dantzig selectors, etc., but often it is hard to confidently assess optimality. Lower bounds for minimax and Bayes risks play a key role here by providing theoretical benchmarks with which one can evaluate the performance of algorithms. While many previous approaches have focused on the Gaussian linear model, in this paper we establish minimax and Bayes risk lower bounds that hold uniformly over all statistical models within the GLM. Our arguments demonstrate a set of information-theoretic techniques that are general and applicable to setups other than the GLM. As a result, many applications stand to potentially benefit from our work.
Acknowledgments
This work was supported in part by NSF grants CCF-1704967, CCF-1750430, and CCF-0939370.

1. What is the focus and contribution of the paper on generalized linear models?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and comprehensiveness?
3. Do you have any concerns regarding the assumptions made in the paper, such as the Gaussian ensemble assumption?
4. How do the proposed minimax bounds compare to prior works in the field?
5. Are there any potential applications or future directions for this research that could be explored?
Summary and Contributions
The paper considers generalized linear models and derives new minimax lower bounds that significantly improve on the prior art. The authors also treat, as special cases, Gaussian linear models and the setting where the design matrix is sampled from a Gaussian ensemble. I am not active in this area of research; nonetheless, I found the results striking and engaging.
Strengths
The analysis is impressively novel and comprehensive. The paper is primarily a theoretical exposition that improves the minimax lower bounds, which in and of itself is a worthy contribution. The ideas have been delineated logically and clearly.
Weaknesses
In general, the paper is sound. I would be interested to see what happens if we relax the Gaussian ensemble assumption of Section 2.2: what if there is a mixture of Gaussians or heavy-tailed sampling instead of Gaussian sampling? It would also be interesting to see some intuitive justification of the losses considered in equations (5) and (6) and behind Lemma 1. A minor point: the subscript notation for θ should be introduced before stating equation (5). Post-rebuttal: I am satisfied with the responses and would like to keep the score as is.
1. What is the main contribution of the paper regarding minimax lower bounds for prediction errors in generalized linear models?
2. What are the strengths of the proposed approach compared to previous works, particularly in terms of assumptions and technical steps?
3. How does the reviewer assess the novelty of the paper, especially in relation to a similar work mentioned in the review?
4. What are the weaknesses of the paper, specifically regarding the treatment of certain assumptions and the tightness of the result?
5. Are there any missing factors or improvements that could be added to the current lower bound proposed in the paper?
Summary and Contributions
This paper is devoted to establishing tight minimax lower bounds on the prediction error in generalized linear models. Crucially, the lower bounds established in this paper require weaker spectral properties of the design matrix, i.e., they are robust to both near-zero and extreme values in the spectrum. The main idea behind the proof is similar to [23], where the authors first reduce to a Bayesian entropic loss and then apply the general relationship between mutual information and Fisher information in [24] to lower bound the Bayesian entropic loss. The main contributions of this paper include: 1. A general minimax lower bound on the prediction error for a large class of generalized linear models; 2. A minimax lower bound for sparse (Gaussian) estimation which requires weaker spectral properties of the design matrix; 3. A direct Bayesian lower bound instead of going through multiple hypothesis testing and metric entropies.
Strengths
The assumptions in this paper are general, the claims are sound, and the results are interesting. It is particularly interesting to see an explicit Bayesian lower bound based on a natural Gaussian prior, whereas most prior work showed only a weaker bound under a carefully constructed discrete prior. Also see the listed contributions above.
Weaknesses
1. The novelty of this paper seems questionable, mainly in view of [23]. Specifically, [23] studied a similar problem for generalized linear models where the only difference seems to be that the estimation error was considered instead of the prediction error. The technical steps are also very close to each other: both works reduced to a Bayesian entropic loss, then the result of [24] was invoked to show that an upper bound on the Fisher information is sufficient, and finally the authors provided upper bounds on the Fisher information. Of course the last step is different; however, this difference does not seem to add too much novelty.

2. Specializing to sparse models, the contribution that weaker spectral properties are now sufficient seems to be outweighed. First, some problems suffered by the previous approaches can be easily fixed. For example, the authors commented that when the design matrix M has two repeated columns, the previous lower bound becomes trivial. However, this can be easily fixed as follows. Assume wlog that the first two columns of M are the same; then we may simply fix \theta_1 = 0 and allow the others to vary arbitrarily. In this way we effectively remove the first column of M, keep the same sparsity property, and only reduce the parameter dimension from d to d-1. Now applying the previous lower bound to the new and well-conditioned matrix gives the desired minimax rate. Also note that similar approaches can be taken even when half of the columns of M are repeated (and we reduce the dimension from d to d/2, which does not affect the rate analysis). Therefore, this comparison may seem slightly unfair and does not make too strong a case to me. Second, I do not fully understand why the authors treat the assumption k < n in previous work as a "crucial disadvantage". Note that for a typical constant noise level \sigma, the previous lower bound already shows that a sample complexity of n > k*log(ed/k) is necessary to achieve a constant statistical error. Also, for any \sigma, Eqn. (12) in this paper shows that the case k > n gives a trivial error \sigma^2. This seems to suggest that one may wlog restrict to n > k for showing minimax lower bounds.

3. The tightness of the result is not sufficiently discussed. The authors claimed that their Theorem 3 is tight if either the largest or the smallest eigenvalues are of the same order. However, it seems that in those scenarios the previous lower bounds also give the tight answer. In other words, the authors did not explicitly construct an example such that the previous lower bound is not tight but the current bound becomes tight. Moreover, compared with the lower bound in (10), the authors did not show that the new bound provides a uniform improvement over it. Finally, and most importantly, there is a missing logarithmic factor in the current lower bound, which is known to be necessary and important in sparse estimation. So missing the log factor makes the bound not very desirable in my opinion.

Post-rebuttal: Points #2 and #3 are satisfactorily addressed in the rebuttal; please add these discussions to the final paper. However, my novelty concern over [23] is not adequately addressed. I took a closer look at both papers, and the only difference is in the upper bounds on the Fisher information, while the other steps (Bayes entropic loss, generalized van Trees inequality) are essentially the same.
I agree that the current paper uses a different approach to upper bound the Fisher information: [23] used Jensen's inequality (or a data-processing property of Fisher information) to relate the individual Fisher information to the trace of the entire Fisher information matrix (p.s. I do not understand why the authors call this a "single-letterization"); in the current paper, the individual Fisher information is studied directly by assuming a Gaussian prior and using the rotational invariance of the Gaussian distribution. However, I would prefer to treat this step as a direct and relatively straightforward computation of the Fisher information, and I still do not think there is much technical innovation here. Given that this novelty concern remains, I have decided to increase my score only from 5 to 6.
NIPS | Title
Minimax Bounds for Generalized Linear Models
Abstract
We establish a new class of minimax prediction error bounds for generalized linear models. Our bounds significantly improve previous results when the design matrix is poorly structured, including natural cases where the matrix is wide or does not have full column rank. Apart from the typical L2 risks, we study a class of entropic risks which recovers the usual L2 prediction and estimation risks, and demonstrate that a tight analysis of Fisher information can uncover underlying structural dependency in terms of the spectrum of the design matrix. The minimax approach we take differs from the traditional metric entropy approach, and can be applied to many other settings.
1 Introduction
Throughout, we consider a parametric framework where observations X ∈ Rn are generated according to X ∼ Pθ, where Pθ denotes a probability measure on a measurable space (X ⊆ Rn,F) indexed by an underlying parameter θ ∈ Θ ⊂ Rd. For each Pθ, we associate a density f(·; θ) with respect to an underlying measure λ on (X ,F) according to
dPθ(x) = f(x; θ)dλ(x).
This setup contains a vast array of fundamental applications in machine learning, engineering, neuroscience, finance, statistics and information theory [1–10]. As examples, mean estimation [1], covariance and precision matrix estimation [2], phase retrieval [3,4], group or membership testing [5], pairwise ranking [10], can all be modeled in terms of parametric statistics. The central question to address in all of these problems is essentially the same: how accurately can we infer the parameter θ given the observation X?
One of the most popular parameteric families is the exponential family, which captures a rich variety of parametric models such as binomial, Gaussian, Poisson, etc. Given a parameter η ∈ R, a density f(·; η) is said to belong to the exponential family if it can be written as
f(x; η) = g(x) exp ( ηx− Φ(η) s(σ) ) . (1)
Here, the parameter η is the natural parameter, g : X ⊆ R→ [0,∞) is the base measure, Φ : R→ R is the cumulant function, and s(σ) > 0 is a variance parameter. The density f(·; η) is understood to be on a probability space (X ⊆ R,F) with respect to a dominating σ-finite measure λ. In this work, we are interested in the following generalized linear model (GLM), where observation X ∈ Rn is generated according to an exponential family with natural parameter equal to a linear transformation of the underlying parameter θ. In other words,
f(x; θ) = n∏ i=1 { g(xi) exp ( xi〈mi, θ〉 − Φ(〈mi, θ〉) s(σ) )} , (2)
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
for a real parameter θ := (θ1, θ2, . . . , θd) ∈ Rd and a fixed design matrix M ∈ Rn×d, with rows given by the vectors {mi}ni=1 ⊂ Rd. The above model assumes each Xi is drawn from its own exponential family, with respective natural parameters 〈mi, θ〉, i = 1, 2, . . . , n. Evidently, this captures the classical (Gaussian) linear model X = Mθ + Z, where f(·; θ) is taken to be the usual Gaussian density, and also captures a much broader class of problems including phase retrieval, matrix recovery and logistic regression. See [11–13] for history and theory of the generalized linear model.
In order to evaluate the performance of an estimator θ̂ (i.e., a measurable function ofX), it is common to define a loss function L(·, ·) : Rd × Rd 7−→ R and analyze the loss L(θ, θ̂). A typical figure of merit is the constrained minimax risk R(M,Θ), defined as
R(M,Θ) := inf θ̂ sup θ∈Θ L(θ, θ̂).
In words, the minimax risk characterizes the worst-case risk under the specified loss L(·, ·) achieved by the best estimator, with a constraint that θ belongs to a specified parameter space Θ.
Two choices of the loss function L(·, ·) give rise to the usual variants of L2 loss:
1. Estimation loss, where the loss function L(·, ·) is defined as
L1(θ, θ̂) = E‖θ − θ̂‖2 for all θ, θ̂ ∈ Rd. (3)
2. Prediction loss, where the loss function L(·, ·) is defined as
L2(θ, θ̂) = 1
n E‖Mθ −Mθ̂‖2 for all θ, θ̂ ∈ Rd. (4)
In this work, we shall approach things from an information theoretic viewpoint. In particular, we will bound minimax risk under entropic loss (closely connected to logarithmic loss in the statistical learning and information literature, see, e.g., [14–16]), from which L2 estimates will follow. To start, let us review some of the key definitions in information theory. Suppose the parameter θ ∈ Rd follows a prior π, a probability measure on Rd having density ψ with respect to Lebesgue measure. The differential entropy h(θ) corresponding to random variable θ is defined as
h(θ) := − ∫ Rd ψ(u) logψ(u)du.
Here and throughout, we will take logarithms with respect to the natural base, and assume all entropies exist (i.e., their defining integrals exist in the Lebesgue sense). The mutual information I(θ;X) between parameter θ ∼ π and observation X ∼ Pθ is defined as
I(θ;X) := ∫ Rd ∫ X f(x; θ) log f(x; θ)∫ Rd f(x; θ ′)dπ(θ′) dλ(x)dπ(θ).
The conditional entropy is defined as h(θ|X) := h(θ)− I(θ;X). The entropy power of a random variable U is defined as exp(2h(U)), and for any two random variables U and V with well-defined conditional entropy, the conditional entropy power is defined similarly as exp(2h(U |V )). Lower bounds on conditional entropy power can be translated into lower bounds of other losses, via tools in rate distortion theory [17]. To illustrate this, let’s consider the following two Bayes risks, with suprema taken over all priors π on the parameter space Θ ⊆ Rd, and infima taken over all valid estimators θ̂ (i.e., measurable functions of X).
1. Entropic estimation loss, where the Bayes risk is defined as
Re(M,Θ) := inf θ̂ sup π n∑ i=1 exp ( 2h(θi|θ̂i) ) . (5)
2. Entropic prediction loss, where the Bayes risk is defined as
Rp(M,Θ) := inf θ̂ sup π
1
n n∑ i=1 exp ( 2h(m>i θ|m>i θ̂) ) . (6)
The following simple observation shows that any lower bound derived for the entropic Bayes risks implies a lower bound on the minimax L2 risks.
Lemma 1. We have inf θ̂ supθ∈Θ L1(θ, θ̂) & Re(M,Θ) and inf θ̂ supθ∈Θ L2(θ, θ̂) & Rp(M,Θ).
Proof. This follows since Gaussians maximize entropy subject to second moment constraints and conditioning reduces entropy: E(θi−θ̂i)2 ≥ Var(θi−θ̂i) & exp(2h(θi−θ̂i)) & exp(2h(θi|θ̂i)).
Here and onwards, we use “&” (also “.” and “ ”) to refer to “≥” (and “≤”, “=”, respectively) up to constants that do not depend on parameters.
Although we focus on L2 loss in the present work, we remark that minimax bounds on entropic loss directly yield corresponding estimates on Lp loss using standard arguments involving covering and packing numbers of Lp spaces. See, for example, the work by Raskutti et al. [18]. Despite its universal nature, there is relatively limited work on deriving minimax bounds for the entropic loss. This is the focus of the present work, and as a consequence, we obtain bounds on L2 loss that significantly improve on prior results when the matrix M is poorly structured.
1.1 Contributions
In this paper, we make three main contributions.
1. First, we establish L2 minimax risk and entropic Bayes risk bounds for the generalized linear model (2). The generality of the GLM allows us to extend our results to specific instances of the GLM such as the Gaussian linear model, phase retrieval and matrix recovery.
2. Second, we establish L2 minimax risk and entropic Bayes risk bounds for the Gaussian linear model. In particular, our bounds are nontrivial for many instances where previous results fail (for example when M ∈ Rn×d does not have full column rank, including cases with d > n), and can be naturally applied to the sparse problem where ‖θ‖0 ≤ k. Further, we show that both our minimax risk and entropic Bayes risk bounds are tight up to constants and log factors when M is sampled from a Gaussian ensemble.
3. Third, we investigate the L2 minimax risk via the lens of the entropic Bayes risk, and provide evidence that information theoretic minimax methods can naturally extract dependencies on the structure of design matrix M via analysis of Fisher information. The techniques we develop are general and can be used to establish minimax results for other problems.
2 Main Results and Discussion
The following notation is used throughout: upper-case letters (e.g., X , Y ) denote random variables or matrices, and lower-case letters (e.g., x, y) denote realizations of random variables or vectors. We use subscript notation vi to denote the i-th component of a vector v = (v1, v2, . . . , vd). We let [k] denote the set {1, 2, . . . , k}. We will be making the following assumption.
Assumption: The second derivative of the cumulant function Φ is bounded uniformly by a constant L > 0: Φ′′(·) ≤ L. The following lemma characterizes the mean and variance of densities in the exponential family.
Lemma 2 (Page 29, [11]). Any observation X generated according to the exponential family (1) has mean Φ′(η) and variance s(σ) · Φ′′(η).
In other words, our assumption is equivalent to saying that the variance of each observation X1, . . . , Xn is bounded. This is a common assumption made in the literature; See, for example, [19–22].
Our first main result establishes a minimax prediction lower bound corresponding to the generalized linear model (2). Let us first make a few definitions. For an n × k matrix A, we define the vector ΛA := (λ1, . . . , λk) ∈ Rk, where the λi’s denote the eigenvalues of the k × k symmetric matrix
A>A in descending order. ‖ΛA‖p denotes the usual Lp norm of the vector ΛA for p ≥ 1. Finally, we define
Γ(A) := max ( ‖ΛA‖21 ‖ΛA‖22 , λmin(A >A) ‖Λ−1A ‖1 ) , (7)
where Λ−1A := (λ −1 1 , . . . , λ −1 k ), with the convention that λmin(A >A)‖Λ−1A ‖1 = 0 when λmin(A >A) = 0.
Theorem 3. For observations X ∈ Rn generated via the generalized linear model (2) with a fixed design matrix M ∈ Rn×d, the minimax L2 prediction risk and the entropic Bayes prediction risk are lower bounded by
1 n inf θ̂ sup θ∈Rd E‖Mθ̂ −Mθ‖2 & 1 n s(σ) L Γ(M).
1 n inf θ̂ sup π n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) ) & 1 n s(σ) L ‖ΛM‖21 ‖ΛM‖22 .
Bounds on minimax risk under an additional sparsity constraint ‖θ‖0 ≤ k (i.e., the true parameter θ has at most k non-zero entries) can be derived as a corollary.
Corollary 4 (Sparse Version of Theorem 3). For observations X ∈ Rn generated via the generalized linear model (2), with the additional constraint that ‖θ‖0 ≤ k (i.e., Θ := {θ ∈ Rd : ‖θ‖0 ≤ k}), the minimax prediction error is lower bounded by
1 n inf θ̂ sup θ∈Θ E‖Mθ̂ −Mθ‖2 & 1 n s(σ) L max Q∈Mk Γ(Q).
1 n inf θ̂ sup π n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) ) & 1 n s(σ) L max Q∈Mk ‖ΛQ‖21 ‖ΛQ‖22 .
Here, the maximum is taken overMk, the set of all n× k′ submatrices of M , with k′ ≤ k.
We now note an important specialization of Corollary 4. In particular, consider the Gaussian linear model with observations X ∈ Rn generated according to
X = Mθ + Z, (8)
with Z ∼ N (0, σ2 In) the standard Gaussian vector. This corresponds to the GLM of (2) when the functions are taken to be h(x) = e−x
2/(2σ2), s(σ) = σ2, and Φ(t) = t2/2 (hence, L = 1). This is a particularly important instance worth highlighting because of the ubiquity of the Gaussian linear model in applications.
Theorem 5. For observations X ∈ Rn generated via the Gaussian linear model (8), with the sparsity constraint ‖θ‖0 ≤ k (i.e., Θ := {θ ∈ Rd : ‖θ‖0 ≤ k}), the minimax prediction error is lower bounded by
1 n inf θ̂ sup θ∈Θ E‖Mθ̂ −Mθ‖2 & σ 2 n max Q∈Mk Γ(Q).
1 n inf θ̂ sup π n∑ i=1 exp ( 2h(m>i θ |m>i θ̂) ) & σ2 n max Q∈Mk ‖ΛQ‖21 ‖ΛQ‖22 .
Here, the maximum is taken overMk, the set of all n× k′ submatrices of M , with k′ ≤ k. Remark 6. In the above results, the function Γ(·) can in fact be replaced with
Γ̃(M) := max ( n∑ i=1 ‖mi‖42 ‖Mmi‖2 , λmin(M >M)‖Λ−1M ‖1 ) ,
which is stronger than the original statements. However, the chosen statements above highlight the simple dependence on the spectrum of ΛM .
2.1 Related Work
Most relevant to our results is the following lower bound on minimax L2 estimation risk and entropic Bayes estimation risk, developed in a recent work by Lee and Courtade [23]. We note that [23] does not bound prediction loss (which is often of primary interest), as we have done in the present paper. Theorem 7 (Theorem 3, [23]). Let observation X be generated via the generalized linear model defined in (2), with the additional structural constraint Θ = Bd2(R) := {v : ‖v‖22 ≤ R2}. Suppose the cumulant function Φ satisfies Φ′′ ≤ L for some constant L. Then, the minimax estimation error is lower bounded by
inf θ̂ sup θ∈Θ E‖θ̂ − θ‖2 & inf θ̂ sup π n∑ i=1 exp(2h(θi|θ̂i)) & min ( R2, s(σ) L Tr((M>M)−1) ) . (9)
The bound of (9) is tight when X is generated by the Gaussian linear model, showing that (Gaussian) linear models are most favorable in the sense of minimax estimation error amongst the class of GLMs considered here. Lee and Courtade extracted the dependence on the Tr(M>M) term by analyzing a Fisher information term in the class of Bayesian Cramér-Rao-type bounds from [24]. Earlier work (see, e.g., [25]) yielded bounds on the order of d/λmax(M>M), which is loose compared to (9).
There is a large body of work that establish minimax lower bounds on prediction error for specific models of the generalized linear model. Typically, these analyses depend on methods involving metric entropy (see, for example, [4, 18, 19, 26–28]). A popular minimax result is due to Raskutti et al. [18], who consider the sparse Gaussian linear model, where for a fixed design matrix M with an additional sparsity constraint ‖θ‖0 ≤ k,
σ2 Φ2k,−(M)
Φ2k,+(M)
k n log
( ed
k
) . inf
θ̂ sup ‖θ‖0≤k
1 n E‖Mθ̂ −Mθ‖22 . σ2 min
( k
n log
( ed
k
) , 1 ) . (10)
Here the terms Φr,−(M) and Φr,+(M) correspond to the constrained eigenvalues,
Φr,−(M) := inf 06=‖θ‖0≤r
‖Mθ‖2
‖θ‖2 , Φr,+(M) := sup
06=‖θ‖0≤r
‖Mθ‖2
‖θ‖2 . (11)
The upper bound of (10) is achieved by classical methods such as aggregation [29–32].
One can readily observe that the lower bound of (10) becomes degenerate for even mildly ill-structured design matrices M . For example, in the case where M has repeating columns, the above result gives a lower bound of 0, which is not very interesting. This suggests that the metric entropy approach does not easily capture the dependence of the structure of design matrixM at the resolution of the complete spectrum of M>M as our results do. In fact, it can be shown that Corollary 4 uniformly improves upon (10) up to logarithmic factors; see Section 4.1 of the supplementary. Further, the lower bound of Raskutti et al. does not hold for k > n, which is a disadvantage for high dimensional problems where d n. Verzelen [30] discusses the regime where kn log ( ed k ) ≥ 12 and k ≤ max(d
1/3, n/5) and provide bounds for the worst-case matrix M , which is a different setting from ours.
There are also lines of work on specific settings of the generalized linear model. For example, Candes et al. [28] discusses low-rank matrix recovery, and Cai et al. [4] considers phase retrieval. There are, however, fewer results that directly look at the generalized linear model of our setting. The closest work related is that of Abramovich and Grinshtein [19], where they consider estimating the entire vector Mθ, as opposed to our setting where we estimate θ first with θ̂, then evaluate Mθ̂. Their result also depends on the ratio between (constrained) minimum and maximum eigenvalues as in (10), and hence fails when M is not full rank or otherwise has divergent maximum and minimum (constrained) eigenvalues.
Comparing Theorems 3 and 5 with the results surveyed above raises several points (illustrated in Table 1):
• Nontrivialness when M is not full rank. Unlike the lower bound in (10), the ratio ‖ΛM‖21/‖ΛM‖22 does not vanish when M is not full rank; see Case (d) in Table 1. This is particularly important when the dimension of the parameter is large relative to the number of observed samples.
Remark 8. In some cases, (10) can be improved by ignoring certain components of θ ∈ Rd via dimensionality reduction. For example, if the first two columns of M are the same, then it is possible to ignore the first component of θ and simply look at the remaining d− 1 components. We remark that even with this reduction, (10) still depends on the ratio between minimum and maximum constrained eigenvalues of the new “effective” matrix, and leads to a poor lower bound when the minimum and maximum constrained eigenvalues are of a different order. We remark that other dimensionality reduction methods (such as rotations) may be limited by the sparsity constraint ‖θ‖0 ≤ k. Moreover, in general when the spectrum of M is all positive (with divergent large/small eigenvalues), one cannot use dimensionality reduction to improve the result of (10).
2.2 Application to Gaussian Designs
Gaussian designs are frequently adopted in machine learning and compressed sensing (see, for example, [18, 33–35]). The following proposition provides a concentration bound for the ratio ‖ΛM‖21/‖ΛM‖22 when M is sampled from the standard Gaussian ensemble (i.e., where each component of M is sampled i.i.d. according to a standard Gaussian). Proposition 9. Let the design matrix M ∈ Rn×k be sampled from the Gaussian ensemble. There exist universal constants c1, c2, c3 > 0 such that ‖ΛM‖21/‖ΛM‖22 ≥ c1 min(n, k) with probability at least 1− c2 exp(−c3 min(n, k)).
Proposition 9 implies that, with high probability, the lower bound of Theorem 5 (and therefore the corresponding estimate in Theorem 3) is sharp up to a logarithmic term that is negligible when d k. In particular, under the assumptions of Theorem 5, we obtain with the help of (10) that
σ2 min
( k
n , 1
) . inf
θ̂ sup ‖θ‖0≤k
1 n E‖Mθ̂ −Mθ‖22 . σ2 min
( k
n log
( ed
k
) , 1 ) , (12)
with the lower bound holding with high probability in min(n, k). This can significantly improve on the lower bound (10); consider, for example, the case where s := min(2k, d) = αn for
some fixed α < 1. Note that any n × s submatrix M ′ of M satisfies Φ2k,−(M)/Φ2k,+(M) ≤ λmin(M ′>M ′)/λmax(M ′>M ′). An asymptotic result by Bai and Yin [36] implies that if α is fixed then this latter ratio converges to (1− √ α) 2 / (1 + √ α)
2 almost surely as n, k, d → ∞. Hence, asymptotically speaking, the result of (10) is tight at most up to constants depending on α while our results of Corollary 4 is tight (up to log factors) without dependency of α.
Interestingly, Proposition 9 also holds for square matrices, where the minimum eigenvalue is close to zero (more precisely, for a square Gaussian matrix M ∈ Rn×n, λmin(M>M) is of the order n−1, as shown in the work of Rudelson and Vershynin [37]). Proposition 9 follows from Szarek’s work [38] on concentration of the largest n/2 singular values for a square Gaussian matrix M ∈ Rn×n, concentration of singular values of rectangular subgaussian matrices [26], and an application of interlacing inequalities for singular values of submatrices [39]. Similar results can be shown for subgaussian matrices under additional assumptions using tools from [40].
3 Key Points of Proofs of Main Theorems
In our approach, we will be using classical information theory tools inspired by the techniques developed by Lee and Courtade [23].
3.1 Preliminaries
We say that a measure µ is log-concave if dµ(x) = e−V (x)dx for some convex function V (·). The Fisher information IX(θ) given θ ∈ Rd corresponding to the map θ 7−→ Pθ is defined as
IX(θ) = EX ‖∇θ log f(X; θ)‖22 ,
where the gradient is taken with respect to θ, and the expectation is taken with respect to X ∼ Pθ. If the parameter θ has a prior π that is log-concave, the following lemma gives an upper bound on the mutual information I(θ;X), which depends on the covariance matrix of θ, defined as Cov(θ).
Lemma 10 (Theorem 2, [24]). Suppose the prior π of θ ∈ ℝ^d is log-concave. Then, under mild regularity conditions on the map θ ↦ P_θ, we have
\[
I(\theta; X) \;\le\; d \cdot \phi\!\left(\frac{\operatorname{Tr}(\operatorname{Cov}(\theta)) \cdot \mathbb{E}\, I_X(\theta)}{d^2}\right), \tag{13}
\]
where the function φ(·) is defined as
\[
\phi(x) := \begin{cases} \sqrt{x} & \text{if } 0 \le x < 1, \\ 1 + \tfrac{1}{2}\log x & \text{if } x \ge 1. \end{cases}
\]
We note that the regularity condition in Lemma 10 requires that each member of the parametric family P_θ has a density f(·; θ) smooth enough to permit the following interchange of integration and differentiation:
\[
\int_{\mathcal{X}} \nabla_\theta f(x;\theta)\, d\lambda(x) = 0, \qquad \mu\text{-a.e. } \theta. \tag{14}
\]
In our case, since we are working with the GLM of (2), the regularity condition is automatically satisfied.
When θ is a one-dimensional (i.e., d = 1) log-concave random variable, the bound of (13) is sharp up to a (modest) multiplicative constant whenever Var(θ)·𝔼 I_X(θ) is bounded away from zero. There exists a tighter version of Lemma 10 when π is uniformly log-concave; however, Lemma 10 suffices for our purposes. We direct the interested reader to [24].
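As an illustration of Lemma 10 (outside the proof), the Gaussian linear model with a Gaussian prior is a convenient test case, because the mutual information on the left-hand side of (13) has a closed form there. The sketch below compares the two sides numerically; the sizes n, d and the values of σ, β are arbitrary choices for the illustration.

```python
import numpy as np

def phi(x):
    return np.sqrt(x) if x < 1 else 1.0 + 0.5 * np.log(x)

rng = np.random.default_rng(0)
n, d, sigma, beta = 50, 20, 1.0, 2.0
M = rng.standard_normal((n, d))

# Exact value for X = M theta + eps with theta ~ N(0, beta^2 I), eps ~ N(0, sigma^2 I):
# I(theta; X) = 1/2 * logdet(I_n + (beta/sigma)^2 * M M^T)  (in nats).
exact = 0.5 * np.linalg.slogdet(np.eye(n) + (beta / sigma) ** 2 * M @ M.T)[1]

# Bound (13): Tr(Cov(theta)) = d * beta^2 and E I_X(theta) = Tr(M^T M) / sigma^2.
bound = d * phi(beta ** 2 * np.trace(M.T @ M) / (sigma ** 2 * d))
print(exact, bound)   # the bound dominates the exact mutual information
```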
3.2 Proof Sketch of Theorem 3
We start by noting that we can lower bound the entropic Bayes risk of (6) by taking a specific prior π. For our purposes, we let θ have the multivariate Gaussian prior π = N(0, β² I_d).
We continue with a bound on the sum of conditional entropy powers,
\[
\sum_{i=1}^n \exp\!\big(2h(m_i^\top\theta \mid m_i^\top\hat\theta)\big) \;\ge\; \sum_{i=1}^n \exp\!\big(2h(m_i^\top\theta) - 2I(m_i^\top\theta; X)\big), \tag{15}
\]
which follows from the data-processing inequality I(m_i^⊤θ; m_i^⊤θ̂) ≤ I(m_i^⊤θ; X), since m_i^⊤θ → X → m_i^⊤θ̂ forms a Markov chain.
When m_i ∈ ℝ^d is the zero vector, exp(2h(m_i^⊤θ | m_i^⊤θ̂)) = exp(2h(m_i^⊤θ) − 2I(m_i^⊤θ; X)) = 0, and hence the i-th term does not contribute to either summation in (15). This implies that removing zero rows from M does not affect the argument following (15). Hence, in what follows we assume that M has no zero rows.
By our choice of the prior π, the density of m_i^⊤θ is Gaussian and hence log-concave, which allows us to invoke Lemma 10, implying
\[
\sum_{i=1}^n \exp\!\big(2h(m_i^\top\theta \mid m_i^\top\hat\theta)\big) \;\ge\; \sum_{i=1}^n \exp\!\Big(2h(m_i^\top\theta) - 2\phi\big(\operatorname{Var}(m_i^\top\theta)\cdot \mathbb{E}\, I_X(m_i^\top\theta)\big)\Big). \tag{16}
\]
Here, the expectation is taken with respect to the marginal density of m_i^⊤θ. The primary task is now to obtain a reasonable bound on the expected Fisher information term 𝔼 I_X(m_i^⊤θ). To do this, we introduce the following lemma.
Lemma 11. Fix M ∈ ℝ^{n×d}. If the parameter θ has the prior π = N(0, β² I_d) and X ∈ ℝ^n is sampled according to the generalized linear model defined in (2), then
\[
\mathbb{E}\, I_X(m_i^\top\theta) \;\le\; \frac{L}{s(\sigma)} \cdot \frac{\|Mm_i\|_2^2}{\|m_i\|_2^4} + \frac{1}{\beta^2}\,\Psi_i(M) \qquad \text{for all } i = 1, 2, \ldots, n. \tag{17}
\]
The function Ψ_i(M) depends only on M and is finite. The expectation is taken with respect to the marginal density of m_i^⊤θ.
The functions Ψ_i(·) are not stated explicitly here because we will later take β large enough that the term Ψ_i(·)/β² in (17) can be ignored. A proof of Lemma 11 and further details about the functions Ψ_i(·) are included in the supplementary material. Continuing from (16), we see that
\[
\begin{aligned}
\sum_{i=1}^n \exp\!\big(2h(m_i^\top\theta \mid m_i^\top\hat\theta)\big)
&\gtrsim \sum_{i=1}^n \beta^2\|m_i\|_2^2 \exp\!\left(-2\phi\!\left[\beta^2\|m_i\|_2^2\left(\frac{L}{s(\sigma)}\cdot\frac{\|Mm_i\|_2^2}{\|m_i\|_2^4} + \frac{1}{\beta^2}\,\Psi_i(M)\right)\right]\right) \\
&\overset{(a)}{\gtrsim} \sum_{i=1}^n \frac{1}{\dfrac{L}{s(\sigma)}\cdot\dfrac{\|Mm_i\|_2^2}{\|m_i\|_2^4} + \dfrac{1}{\beta^2}\,\Psi_i(M)} \\
&\overset{(b)}{=} \frac{(1-\epsilon)\,s(\sigma)}{L}\sum_{i=1}^n \frac{\|m_i\|_2^4}{\|Mm_i\|_2^2}. \qquad (18)
\end{aligned}
\]
In the above, both (a) and (b) require β² to be chosen sufficiently large. In particular, in (a), taking β² ≥ s(σ)/L guarantees that the function φ behaves logarithmically (recall from Lemma 10 that φ(t) behaves logarithmically if t ≥ 1). In (b), the variable ε depends on the selection of β. Since the function Ψ_i(M) is finite for all i = 1, …, n, by taking β² to be a sufficiently large constant, we can force ε to be as close to zero as desired. Hence, we may take the inequality to hold with ε = 0. A direct application of the Cauchy–Schwarz inequality then yields
\[
\sum_{i=1}^n \exp\!\big(2h(m_i^\top\theta \mid m_i^\top\hat\theta)\big) \;\ge\; \frac{s(\sigma)}{L}\cdot\frac{\big(\sum_{i=1}^n \|m_i\|_2^2\big)^2}{\sum_{i=1}^n \|Mm_i\|_2^2} \;=\; \frac{s(\sigma)}{L}\cdot\frac{\|\Lambda_M\|_1^2}{\|\Lambda_M\|_2^2}. \tag{19}
\]
On the other hand, from Theorem 7 and the matrix inequality ‖Mv‖₂² ≥ λ_min(M^⊤M)‖v‖₂²,
\[
\inf_{\hat\theta}\; \sup_{\theta\in\mathbb{R}^d}\; \mathbb{E}\,\|M\hat\theta - M\theta\|_2^2 \;\ge\; \lambda_{\min}(M^\top M)\cdot \operatorname{Tr}\big((M^\top M)^{-1}\big) \;=\; \lambda_d\,\|\Lambda_M^{-1}\|_1. \tag{20}
\]
Combining (19) and (20) with Lemma 1 finishes the proof.
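The spectral identities used in passing between row norms of M and norms of Λ_M in (19) and (20), namely Σ_i ‖m_i‖₂² = ‖Λ_M‖₁, Σ_i ‖Mm_i‖₂² = ‖Λ_M‖₂², and Tr((M^⊤M)^{-1}) = ‖Λ_M^{-1}‖₁, are easy to verify numerically; the sketch below is an illustration with an arbitrary matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 5))                      # rows m_1, ..., m_n
lam = np.linalg.eigvalsh(M.T @ M)                    # spectrum Lambda_M

sum_norms = sum(np.linalg.norm(m) ** 2 for m in M)   # sum_i ||m_i||_2^2
sum_Mm = sum(np.linalg.norm(M @ m) ** 2 for m in M)  # sum_i ||M m_i||_2^2

assert np.isclose(sum_norms, lam.sum())              # = ||Lambda_M||_1
assert np.isclose(sum_Mm, (lam ** 2).sum())          # = ||Lambda_M||_2^2
assert np.isclose(np.trace(np.linalg.inv(M.T @ M)), (1 / lam).sum())
```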
3.3 An Alternative Proof of Theorem 5
For the Gaussian linear model, we have the following tighter version of Lemma 11.
Lemma 12. Fix M ∈ ℝ^{n×d}. If θ ∼ N(0, β² I_d) and X ∈ ℝ^n is sampled according to the Gaussian linear model defined in (8), then
\[
\mathbb{E}\, I_X(m_i^\top\theta) \;\le\; \frac{1}{\sigma^2}\cdot\frac{\|Mm_i\|_2^2}{\|m_i\|_2^4} \qquad \text{for } 1 \le i \le n. \tag{21}
\]
By taking any β² ≥ σ² max_i(‖m_i‖₂²/‖Mm_i‖₂²), the function φ(·) in (16) again behaves logarithmically, directly implying (18) with ε = 0. The rest of the proof proceeds as before.
Remark 13. The functions Ψ_i(·) can be difficult to bound directly (see the supplementary material for more details). Hence, the improved tightness and simplicity of Lemma 12 over Lemma 11 for the Gaussian linear model provides more flexibility in the selection of β. This can be helpful in problem settings where there are additional constraints on the parameter space Θ.
Remark 14. There is a subtle but crucial difference between the proof techniques employed here and those in [23]. The key step in [23] requires bounding the Fisher information I_X(θ_i) by the diagonal terms of the Fisher information matrix I_X(θ), i.e., Lemma 9 of [23]. In our case, we need to bound the Fisher information I_X(m_i^⊤θ) (e.g., Lemma 11), and here the terms m_i^⊤θ are not necessarily mutually independent as required by Lemma 9 of [23], which prevents a direct application. Instead, we choose θ to have a Gaussian prior and bound I_X(m_i^⊤θ) directly. This is facilitated by properties of the Gaussian distribution; see Section 4.3 in the appendix for more details.
Broader Impact
The generalized linear model (GLM) is a broad class of statistical models that have extensive applications in machine learning, electrical engineering, finance, biology, and many areas not stated here. Many algorithms have been proposed for inference, prediction and classification tasks under the umbrella of the GLM, such as the Lasso algorithm, the EM algorithm, Dantzig selectors, etc., but often it is hard to confidently assess optimality. Lower bounds for minimax and Bayes risks play a key role here by providing theoretical benchmarks with which one can evaluate the performance of algorithms. While many previous approaches have focused on the Gaussian linear model, in this paper we establish minimax and Bayes risk lower bounds that hold uniformly over all statistical models within the GLM. Our arguments demonstrate a set of information-theoretic techniques that are general and applicable to setups other than the GLM. As a result, many applications stand to potentially benefit from our work.
Acknowledgments
This work was supported in part by NSF grants CCF-1704967, CCF-1750430, CCF-0939370. | 1. What is the focus of the paper in terms of the lower bound for the minimax prediction loss?
2. What are the improvements introduced by the authors in the context of generalized linear models?
3. How does the paper compare to related works, particularly in terms of contributions and novelty? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The authors establish a lower bound for the minimax prediction loss of the generalized linear model through an entropic loss.
Strengths
It improves on the current literature when the design matrix is ill-posed. They then discuss the situation for the Gaussian linear model and Gaussian design matrix.
Weaknesses
The contribution of the paper compared to its closely related work [19] seems to be incremental. -- Update: the authors' explanation partially addressed my concern.
NIPS | Title
GILBO: One Metric to Measure Them All
Abstract
We propose a simple, tractable lower bound on the mutual information contained in the joint generative density of any latent variable generative model: the GILBO (Generative Information Lower BOund). It offers a data-independent measure of the complexity of the learned latent variable description, giving the log of the effective description length. It is well-defined for both VAEs and GANs. We compute the GILBO for 800 GANs and VAEs each trained on four datasets (MNIST, FashionMNIST, CIFAR-10 and CelebA) and discuss the results.
1 Introduction
GANs (Goodfellow et al., 2014) and VAEs (Kingma & Welling, 2014) are the most popular latent variable generative models because of their relative ease of training and high expressivity. However, quantitative comparison across different algorithms and architectures remains a challenge. VAEs are generally measured using the ELBO, which measures their fit to data. Many metrics have been proposed for GANs, including the INCEPTION score (Gao et al., 2017), the FID score (Heusel et al., 2017), independent Wasserstein critics (Danihelka et al., 2017), birthday paradox testing (Arora & Zhang, 2017), and using Annealed Importance Sampling to evaluate log-likelihoods (Wu et al., 2017), among others.
Instead of focusing on metrics tied to the data distribution, we believe a useful additional independent metric worth consideration is the complexity of the trained generative model. Such a metric would help answer questions related to overfitting and memorization, and may also correlate well with sample quality. To work with both GANs and VAEs our metric should not require a tractable joint density p(x, z). To address these desiderata, we propose the GILBO.
2 GILBO: Generative Information Lower BOund
A symmetric, non-negative, reparameterization independent measure of the information shared between two random variables is given by the mutual information:
\[
I(X;Z) = \iint dx\, dz\; p(x,z)\,\log\frac{p(x,z)}{p(x)\,p(z)} = \int dz\, p(z)\int dx\, p(x|z)\,\log\frac{p(z|x)}{p(z)} \;\ge\; 0. \tag{1}
\]
I(X;Z) measures how much information (in nats) is learned about one variable given the other. As such it is a measure of the complexity of the generative model. It can be interpreted (when converted to bits) as the reduction in the number of yes-no questions needed to guess X = x if you observe Z = z and know p(x), or vice-versa. It gives the log of the effective description length of the generative model. This is roughly the log of the number of distinct sample pairs (Tishby & Zaslavsky, 2015). I(X;Z) is well-defined even for continuous distributions. This contrasts with the continuous entropy H(X) of the marginal distribution, which is not reparameterization independent (Marsh,
2013). I(X;Z) is intractable due to the presence of p(x) = ∫ dz p(z)p(x|z), but we can derive a tractable variational lower bound (Agakov, 2006):
\[
\begin{aligned}
I(X;Z) &= \iint dx\, dz\; p(x,z)\,\log\frac{p(x,z)}{p(x)\,p(z)} && (2)\\
&= \iint dx\, dz\; p(x,z)\,\log\frac{p(z|x)}{p(z)} && (3)\\
&\ge \iint dx\, dz\; p(x,z)\,\log p(z|x) - \int dz\, p(z)\,\log p(z) - \mathrm{KL}\big[p(z|x)\,\|\,e(z|x)\big] && (4)\\
&= \int dz\, p(z)\int dx\, p(x|z)\,\log\frac{e(z|x)}{p(z)} = \mathbb{E}_{p(x,z)}\!\left[\log\frac{e(z|x)}{p(z)}\right] \equiv \mathrm{GILBO} \;\le\; I(X;Z) && (5)
\end{aligned}
\]
We call this bound the GILBO for Generative Information Lower BOund. It requires learning a tractable variational approximation to the intractable posterior p(z|x) = p(x, z)/p(x), termed e(z|x) since it acts as an encoder mapping from data to a prediction of its associated latent variables.2 As a variational approximation, e(z|x) depends on some parameters, θ, which we elide in the notation. The encoder e(z|x) performs a regression for the inverse of the GAN or VAE generative model, approximating the latents that gave rise to an observed sample. This encoder should be a tractable distribution, and must respect the domain of the latent variables, but does not need to be reparameterizable as no sampling from e(z|x) is needed during training. We suggest the use of (−1, 1) remapped Beta distributions in the case of uniform latents, and Gaussians in the case of Gaussian latents. In either case, training the variational encoder consists of simply generating pairs of (x, z) from the trained generative model and maximizing the likelihood of the encoder to generate the observed z, conditioned on its paired x, divided by the likelihood of the observed z under the generative model’s prior, p(z). For the GANs in this study, the prior was a fixed uniform distribution, so the log p(z) term contributes a constant offset to the variational encoder’s likelihood. Optimizing the GILBO for the parameters of the encoder gives a lower bound on the true generative mutual information in the GAN or VAE. Any failure to converge or for the approximate encoder to match the true distribution does not invalidate the bound, it simply makes the bound looser.
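A minimal sketch of this training loop is given below, using PyTorch and a diagonal Gaussian encoder for concreteness; the generator, prior, and network sizes are placeholders rather than the architecture used in the experiments (which uses (−1, 1)-remapped Beta encoders for the uniform-prior GANs). Any such encoder yields a valid, if possibly looser, lower bound.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Variational encoder e(z|x): a diagonal Gaussian whose parameters are produced
    by a small network. The architecture here is a placeholder."""
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))

    def forward(self, x):
        mu, log_sigma = self.net(x).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_sigma.exp())

def gilbo_step(generator, encoder, prior, opt, batch_size=64):
    """One optimization step of the GILBO estimate (returned in nats).
    `generator` maps latents to images; `prior` is a factorized torch distribution."""
    with torch.no_grad():
        z = prior.sample((batch_size,))   # z ~ p(z)
        x = generator(z)                  # x ~ p(x|z): a generated (x, z) pair
    e = encoder(x)
    # GILBO = E_{p(x,z)}[log e(z|x) - log p(z)], maximized over encoder parameters.
    gilbo = (e.log_prob(z) - prior.log_prob(z)).sum(dim=-1).mean()
    opt.zero_grad()
    (-gilbo).backward()
    opt.step()
    return gilbo.item()
```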
The GILBO contrasts with the representational mutual information of VAEs defined by the data and encoder, which motivates VAE objectives (Alemi et al., 2017). For VAEs, both lower and upper variational bounds can be defined on the representational joint distribution (p(x)e(z|x)). These have demonstrated their utility for cross-model comparisons. However, they require a tractable posterior, preventing their use with most GANs. The GILBO provides a theoretically-justified and dataset-independent metric that allows direct comparison of VAEs and GANs.
The GILBO is entirely independent of the true data, being purely a function of the generative joint distribution. This makes it distinct from other proposed metrics like estimated marginal log likelihoods (often reported for VAEs and very expensive to estimate for GANs) (Wu et al., 2017)3, an independent Wasserstein critic (Danihelka et al., 2017), or the common INCEPTION (Gao et al., 2017) and FID (Heusel et al., 2017) scores which attempt to measure how well the generated samples match the observed true data samples. Being independent of data, the GILBO does not directly measure sample quality, but extreme values (either low or high) correlate with poor sample quality, as demonstrated in the experiments below.
Similarly, in Im et al. (2018), the authors propose using various GAN training objectives to quantitatively measure the performance of GANs on their own generated data. Interestingly, they find that evaluating GANs on the same metric they were trained on gives paradoxically weaker performance – an LS-GAN appears to perform worse than a Wasserstein GAN when evaluated with the least-squares metric, for example, even though the LS-GAN otherwise outperforms the WGAN. If this result holds in general, it would indicate that using the GILBO during training might result in less-interpretable evaluation GILBOs. We do not investigate this hypothesis here.
2Note that a new e(z|x) is trained for both GANs and VAEs. VAEs do not use their own e(z|x), which would also give a valid lower bound. In this work, we train a new e(z|x) for both to treat both model classes uniformly. We don’t know if using a new e(z|x) or the original would tend to result in a tighter bound.
3Note that Wu et al. (2017) is complementary to our work, providing both upper and lower bounds on the log-likelihood. It is our opinion that their estimates should also become standard practice when measuring GANs and VAEs.
Although the GILBO doesn't directly reference the dataset, the dataset provides useful signposts. The first is at log C, the number of distinguishable classes in the data. If the GILBO is lower than that, the model has almost certainly failed to learn a reasonable model of the data. Another is at log N, the number of training points. A GILBO near this value may indicate that the model has largely memorized the training set, or that the model's capacity happens to be constrained near the size of the training set. At the other end is the entropy of the data itself, H(X), taken either from a rough estimate or from the best achieved data log-likelihood of any known generative model on the data. Any reasonable generative model should have a GILBO no higher than this value.
Unlike other metrics, GILBO does not monotonically map to quality of the generated output. Both extremes indicate failures. A vanishing GILBO denotes a generative model with vanishing complexity, either due to independence of the latents and samples, or a collapse to a small number of possible outputs. A diverging GILBO suggests over-sensitivity to the latent variables.
In this work, we focus on variational approximations to the generative information. However, other means of estimating the GILBO are also valid. In Section 4.3 we explore a computationallyexpensive method to find a very tight bound. Other possibilities exist as well, including the recently proposed Mutual Information Neural Estimation (Belghazi et al., 2018) and Contrastive Predictive Coding (Oord et al., 2018). We do not explore these possibilities here, but any valid estimator of the mutual information can be used for the same purpose.
3 Experiments
We computed the GILBO for each of the 700 GANs and 100 VAEs tested in Lucic et al. (2017) on the MNIST, FashionMNIST, CIFAR and CelebA datasets in their wide range hyperparameter search. This allowed us to compare FID scores and GILBO scores for a large set of different GAN objectives on the same architecture. For our encoder network, we duplicated the discriminator, but adjusted the final output to be a linear layer predicting the 64× 2 = 128 parameters defining a (−1, 1) remapped Beta distribution (or Gaussian in the case of the VAE) over the latent space. We used a Beta since all of the GANs were trained with a (−1, 1) 64-dimensional uniform distribution. The parameters of the encoder were optimized for up to 500k steps with ADAM (Kingma & Ba, 2015) using a scheduled multiplicative learning rate decay. We used the same batch size (64) as in the original training. Training time for estimating GILBO is comparable to doing FID evaluations (a few minutes) on the small datasets (MNIST, FashionMNIST, CIFAR), or over 10 minutes for larger datasets and models (CelebA).
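For reference, the (−1, 1)-remapped Beta head mentioned above can be built from standard distribution utilities. The sketch below shows one plausible parameterization of the 128 raw outputs; the softplus and the small positivity offset are our assumptions rather than details taken from the experiments, and the resulting distribution's log_prob can be used directly in the GILBO objective.

```python
import torch
from torch.distributions import Beta, TransformedDistribution
from torch.distributions.transforms import AffineTransform

def remapped_beta(raw):
    """Turn 128 raw encoder outputs into 64 Beta distributions rescaled to (-1, 1).

    `raw` has shape [batch, 128]; the first 64 entries parameterize concentration1
    and the last 64 parameterize concentration0 of each latent dimension.
    """
    c1, c0 = torch.nn.functional.softplus(raw).chunk(2, dim=-1)
    base = Beta(c1 + 1e-3, c0 + 1e-3)                          # support (0, 1)
    return TransformedDistribution(base, AffineTransform(loc=-1.0, scale=2.0))
```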
In Figure 1 we show the distributions of FID and GILBO scores for all 800 models as well as their scatter plot for MNIST. We can immediately see that each of the GAN objectives collapses to GILBO ∼ 0 for some hyperparameter settings, but none of the VAEs do. In Figure 2 we show generated samples from all of the models, split into relevant regions. A GILBO near zero signals a failure of the model to make any use of its latent space (Figure 2a).
The best performing models by FID all sit at a GILBO ∼ 11 nats. An MNIST model that simply memorized the training set and partitioned the latent space into 50,000 unique outputs would have a GILBO of log 50,000 = 10.8 nats, so the cluster around 11 nats is suspicious. Since mutual information is invariant to any invertible transformation, a model that partitioned the latent space into 50,000 bins, associated each with a training point and then performed some random elastic transformation but with a magnitude low enough to not turn one training point into another would still have a generative mutual information of 10.8 nats. Larger elastic transformations that could confuse one training point for another would only act to lower the generative information. Among a large set of hyperparameters and across 7 different GAN objectives, we notice a conspicuous increase in FID score as GILBO moves away from ∼ 11 nats to either side. This demonstrates the failure of these GANs to achieve a meaningful range of complexities while maintaining visual quality. Most striking is the distinct separation in GILBOs between GANs and VAEs. These GANs learn less complex joint densities than a vanilla VAE on MNIST at the same FID score.
Figures 3 to 5 show the same plots as in Figure 1 but for the FashionMNIST, CIFAR-10 and CelebA datasets respectively. The best-performing models as measured by FID on FashionMNIST continue to have GILBOs near log N. However, on the more complex CIFAR-10 and CelebA datasets we see nontrivial variation in the complexities of the trained GANs with competitive FID. On these more complex datasets, the visual performance (e.g. Figure 8) of the models leaves much to be desired. We speculate that the models' inability to achieve high visual quality is due to insufficient model capacity for the dataset.
4 Discussion
4.1 Reproducibility
While the GILBO is a valid lower bound regardless of the accuracy of the learned encoder, its utility as a metric naturally requires it to be comparable across models. The first worry is whether its values are reproducible. To address this, in Figure 6 we show the result of 128 different training runs to independently compute the GILBO for three models on CelebA. In each case the error in the measurement was below 2% of the mean GILBO, and the run-to-run variation was much smaller than the variation between models, suggesting that comparisons between models are valid if we use the same encoder architecture e(z|x) for each.
4.2 Tightness
Another concern is whether the learned variational encoder is a good match to the true posterior of the generative model (e(z|x) ≈ p(z|x)). Perhaps the model with a measured GILBO of 41 nats simply had a harder-to-capture p(z|x) than the GILBO ∼ 104 nat model. Even if the values were reproducible between runs, there might be a systematic bias in the approximation that differs between models.
To test this, we used the Simulation-Based Calibration (SBC) technique of Talts et al. (2018). If one were to implement a cycle, wherein a single draw from the prior z′ ∼ p(z) is decoded into an image x′ ∼ p(x|z′) and then inverted back to its corresponding latent z_i ∼ p(z|x′), the rank statistic $\sum_i \mathbb{I}[z_i < z']$ should be uniformly distributed. Replacing the true p(z|x′) with the approximate e(z|x) gives a visual test for the accuracy of the approximation. Figure 7 shows a histogram of the rank statistic for 128 draws from e(z|x) for each of 1270 batches of 64 elements each, drawn from the 64-dimensional prior p(z), for the same three GANs as in Figure 6. The red line denotes the 99% confidence interval for the corresponding uniform distribution. All three GANs show a systematic ∩-shaped distribution, denoting overdispersion in e(z|x) relative to the true p(z|x). This is to be expected from a variational approximation, but importantly the degree of mismatch seems to correlate with the scores, not anticorrelate. It is likely that the 41-nat GILBO is a more accurate lower bound than the 104-nat GILBO. This further reinforces the utility of the GILBO for cross-model comparisons.
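A compact sketch of this SBC loop is shown below, with the prior sampler, generator, and encoder sampler as assumed callables (all names are placeholders).

```python
import numpy as np

def sbc_ranks(sample_prior, generator, sample_encoder, n_draws=128, n_trials=1270):
    """Rank of the true latent z' among draws from e(z|x'); uniform iff e matches p(z|x')."""
    ranks = []
    for _ in range(n_trials):
        z_true = sample_prior()                        # z' ~ p(z), shape [z_dim]
        x = generator(z_true)                          # x' ~ p(x|z')
        draws = np.stack([sample_encoder(x) for _ in range(n_draws)])
        ranks.append((draws < z_true).sum(axis=0))     # one rank per latent dimension
    return np.concatenate(ranks)                       # histogram against U{0,...,n_draws}
```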
4.3 Precision of the GILBO
While comparisons between models seem well-motivated, the SBC results in Section 4.2 highlight some mismatch in the variational approximation. How well can we trust the absolute numbers computed by the GILBO? While they are guaranteed to be valid lower bounds, how tight are those bounds?
To answer these questions, note that the GILBO is a valid lower bound even if we learn separate per-instance variational encoders. Here we replicate the results of Lipton & Tripathi (2017) and attempt to learn the precise z that gave rise to an image by minimizing the L2 distance between the produced image and the target, ‖x − g(z)‖₂². We can then define a distribution centered on z and adjust the magnitude of the variance to get the best GILBO possible. In other words, by minimizing the L2 distance between an image x sampled from the generative model and some other x′ sampled from the same model, we can directly recover some z′ equivalent to the z that generated x. We can then do a simple optimization to find the variance that maximizes the GILBO, allowing us to compute a very tight GILBO in a very computationally expensive manner.
Doing this procedure on the same three models as in Figures 6 and 7 gives (87, 111, 155) nats respectively for the (41, 70, 104) GILBO models, when trained for 150k steps to minimize the L2 distance. These approximations are also valid lower bounds, and demonstrate that our amortized GILBO calculations above might be off by as much as a factor of 2 from the true generative information, but they again highlight that the comparisons between different models appear to be real. Also note that these per-image bounds are finite. We discuss the finiteness of the generative information in more detail in Section 4.6.
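A minimal sketch of the per-image inversion step, assuming a differentiable generator g and a single target image x (all names and step counts are placeholders), is:

```python
import torch

def invert_image(g, x, z_dim, steps=1000, lr=1e-2):
    """Recover a latent for x by gradient descent on ||x - g(z)||_2^2."""
    z = torch.zeros(z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((g(z) - x) ** 2).sum()
        loss.backward()
        opt.step()
    # A per-image Gaussian e(z|x) can then be centered at the recovered z, with its
    # scale tuned in a second, one-dimensional optimization to maximize the GILBO.
    return z.detach()
```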
Naturally, learning a single parametric amortized variational encoder is much less computationally expensive than doing an independent optimization for each image, and still seems to allow for comparative measurements. However, we caution against directly comparing GILBO scores from different encoder architectures or optimization procedures. Fair comparison between models requires holding the encoder architecture and training procedure fixed.
4.4 Consistency
The GILBO offers a signal distinct from data-based metrics like FID. In Figure 8, we visually demonstrate the nature of the retained information for the same three models as above in Figures 6 and 7. All three checkpoints for CelebA have the same FID score of 49, making them each competitive amongst the GANs studied; however, they have GILBO values that span a range of 63 nats (91 bits), which indicates a massive difference in model complexity. In each figure, the left-most column shows a set of independent generated samples from the GAN. Each of these generated images are then sent through the variational encoder e(z|x) from which 15 independent samples of the corresponding z are drawn. These latent codes are then sent back through the GAN’s generator to form the remaining 15 columns.
The images in Figure 8 show the type of information that is retained in the mapping from image to latent and back to image space. On the right in Figure 8c with a GILBO of 104 nats, practically all of the human-perceptible information is retained by doing this cycle. In contrast, on the left in Figure 8a with a GILBO of only 41 nats, there is a good degree of variation in the synthesized images, although they generally retain the overall gross attributes of the faces. In the middle, at 70 nats, the variation in the synthesized images is small, but noticeable, such as the sunglasses that appear and disappear 6 rows from the top.
4.5 Overfitting of the GILBO Encoder
Since the GILBO is trained on generated samples, the dataset is limited only by the number of unique samples the generative model can produce. Consequently, it should not be possible for the encoder, e(z|x), to overfit to the training data. Regardless, when we actually evaluate the GILBO, it is always on newly generated data.
Likewise, given that the GILBO is trained on the “true” generative model p(z)p(x|z), we do not expect regularization to be necessary. The encoders we trained are unregularized. However, we note that any regularization procedure on the encoder could be thought of as a modification of the variational family used in the variational approximation.
The same argument is true about architectural choices. We used a convolutional encoder, as we expect it to be a good match with the deconvolutional generative models under study, but the GILBO would still be valid if we used an MLP or any other architecture. The computed GILBO may be more or less tight depending on such choices, though – the architectural choices for the encoder are a form of
inductive bias and should be made in a problem-dependent manner just like any other architectural choice.
4.6 Finiteness of the Generative Information
The generative mutual information is only infinite if the generator network is not only deterministic but also invertible. Deterministic many-to-one functions can have finite mutual information between their inputs and outputs. Take for instance the following: let p(z) = U[−1, 1] be a uniform prior on [−1, 1], and let the generator be the sign function x = G(z) = sign(z) (which is C^∞ almost everywhere), so that the conditional distribution of x given z is the delta function concentrated on the sign of z, p(x|z) = δ(x − sign(z)).
\[
p(x,z) = p(x|z)\,p(z) = \tfrac{1}{2}\,\delta\big(x-\operatorname{sign}(z)\big), \qquad p(x) = \int_{-1}^{1} dz\; p(x,z) = \tfrac{1}{2}\,\delta(x-1) + \tfrac{1}{2}\,\delta(x+1) \qquad (6)
\]
\[
\begin{aligned}
I(X;Z) &= \iint dx\, dz\; p(x,z)\,\log\frac{p(x,z)}{p(x)\,p(z)} && (7)\\
&= \int_{-1}^{1} dx \int_{-1}^{1} dz\; \tfrac{1}{2}\,\delta\big(x-\operatorname{sign}(z)\big)\,\log\frac{\delta\big(x-\operatorname{sign}(z)\big)}{\tfrac{1}{2}\delta(x-1)+\tfrac{1}{2}\delta(x+1)} && (8)\\
&= \Big[\tfrac{1}{2}\log 2\Big]_{x=-1} + \Big[\tfrac{1}{2}\log 2\Big]_{x=1} = \log 2 = 1\ \text{bit} && (9)
\end{aligned}
\]
In other words, the deterministic function x = sign(z) induces a mutual information of 1 bit between X and Z. This makes sense when interpreting the mutual information as the reduction in the number of yes-no questions needed to specify the value. It takes an infinite number of yes-no questions to precisely determine a real number in the range [−1, 1], but if we observe the sign of the value, it takes one fewer question (while still infinitely many) to determine it.
Even if we take Z to be a continuous real-valued random variable on the range [−1, 1], consider the function x = float(z) which casts that number to a 32-bit float: the resulting mutual information is I(X;Z) = 26 bits (we verified this numerically). In any chain Z → float(Z) → X, the data processing inequality limits the mutual information I(X;Z) to I(Z; float(Z)) = 26 bits (per dimension). Given that we train neural networks with limited-precision arithmetic, there is always some finite mutual information in the representations, since our random variables are actually discrete, albeit discretized on a very fine grid.
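The float32 computation above depends on the uneven spacing of floating-point values; the same qualitative point is easier to see with a uniform B-bit quantizer, for which I(Z; Q(Z)) = H(Q(Z)) = B bits exactly. The snippet below is a small numerical illustration of that simpler case, not the float32 calculation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
B = 8                                               # bits of precision
z = rng.uniform(-1.0, 1.0, size=2_000_000)          # Z ~ U[-1, 1]
q = np.floor((z + 1.0) / 2.0 * 2 ** B).astype(int)  # B-bit uniform quantizer Q(Z)
p = np.bincount(q, minlength=2 ** B) / len(q)
entropy_bits = -(p[p > 0] * np.log2(p[p > 0])).sum()
print(entropy_bits)   # ≈ B, since I(Z; Q(Z)) = H(Q(Z)) for a deterministic Q
```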
5 Conclusion
We’ve defined a new metric for evaluating generative models, the GILBO, and measured its value on over 3200 models. We’ve investigated and discussed strengths and potential limitations of the metric. We’ve observed that GILBO gives us different information than is currently available in sample-quality based metrics like FID, both signifying a qualitative difference in the performance of most GANs on MNIST versus richer datasets, as well as being able to distinguish between GANs with qualitatively different latent representations even if they have the same FID score.
On simple datasets, in an information-theoretic sense we cannot distinguish what GANs with the best FIDs are doing from models that are limited to making some local deformations of the training set. On more complicated datasets, GANs show a wider array of complexities in their trained generative models. These complexities cannot be discerned by existing sample-quality based metrics, but would have important implications for any use of the trained generative models for auxiliary tasks, such as compression or representation learning.
A truly invertible continuous map from the latent space to the image space would have a divergent mutual information. Since GANs are implemented as feed-forward neural networks, the fact that we can measure finite and distinct values of the GILBO for different architectures suggests not only that they are fundamentally not perfectly invertible, but also that the degree of invertibility is an interesting signal of the complexity of the learned generative model. Given that GANs are implemented as deterministic feed-forward maps, they naturally want to live at high generative mutual information.
Humans seem to extract only roughly a dozen bits (∼ 8 nats) from natural images into long term memory (Landauer, 1986). This calls into question the utility of the usual qualitative visual comparisons of highly complex generative models. We might also be interested in trying to train models that learn much more compressed representations. VAEs can naturally target a wide range of mutual informations (Alemi et al., 2017). GANs are harder to steer. One approach to make GANs steerable is to modify the GAN objective and specifically designate a subset of the full latent space as the informative subspace, as in Chen et al. (2016), where the maximum complexity can be controlled for by limiting the dimensionality of a discrete categorical latent. The remaining stochasticity in the latent can be used for novelty in the conditional generations. Alternatively one could imagine adding the GILBO as an auxiliary objective to ordinary GAN training, though as a lower bound, it may not prove useful for helping to keep the generative information low. Regardless, we believe it is important to consider the complexity in information-theoretic terms of the generative models we train, and the GILBO offers a relatively cheap comparative measure.
We believe using GILBO for further comparisons across architectures, datasets, and GAN and VAE variants will illuminate the strengths and weaknesses of each. The GILBO should be measured and reported when evaluating any latent variable model. To that end, our implementation is available at https://github.com/google/compare_gan.
Acknowledgements
We would like to thank Mario Lucic, Karol Kurach, and Marcin Michalski for the use of their 3200 previously-trained GANs and VAEs and their codebase (described in Lucic et al. (2017)), without which this paper would have had much weaker experiments, as well as for their help adding our GILBO code to their public repository. We would also like to thank our anonymous reviewers for substantial helpful feedback. | 1. What is the main contribution of the paper, and how does it address the problem of evaluating GANs?
2. What are some strengths and weaknesses of the proposed approach compared to other methods, such as FID and MINE?
3. How does the reviewer assess the significance of the paper's findings, particularly in shedding light on the situation of evaluating GANs?
4. What are some concerns regarding the encoder's potential for overfitting and the consequences of doing so?
5. How might the choice of a convolutional neural network as the encoder impact the results, and what are some potential effects of this inductive bias?
6. Are there any issues with the boundedness of the mutual information, especially when the true mutual information between the generator input and output is high?
7. How would regularization techniques, such as those mentioned by Roth et al. (ICML 2017) or Mescheder et al. (ICML 2018), affect the quality of the estimator? | Review | Review
Overall I think this is a very good paper and it is one of the better papers I've seen addressing evaluating GANs. I myself am fairly skeptical of FID and have seen other works criticizing that approach, and this work sheds some light on the situation. I think anyone who follows this work would be better informed about how to evaluate GANs than by the work that introduced the Inception or FID scores. That said, there is some missing discussion or comparison to related work (notably mutual information neural estimation (MINE) by Belghazi et al, 2018) as well as some discussion related to the inductive bias and boundedness of their estimator. I'd like to see a discussion of these things. Comments: Don't forget "Quantitatively Evaluating GANs With Divergences Proposed for Training" (Im et al, ICLR 2018) as another method for evaluating GANs. Since you are estimating mutual information, it would be worth comparing to MINE (Belghazi et al, ICML 2018), or at least a good discussion relating to this model. Is there any way to tell if your encoder is overfitting? What would be the consequence of overfitting in your model? For instance, could your encoder prefer to learn to map trivial noisy information from x to z, ignoring or overlooking important but more difficult to encode structure? Your generator and encoder are deterministic and the variables continuous, are there issues with the boundedness of the mutual information? If so, how well do we expect this estimator to do as the true mutual information between the generator input and output is high? In addition, how can regularization of your encoder (a la Roth et al ICML, 2017 or Mescheder et al ICML 2018) change the quality of the estimator (is it better or worse)? Furthermore, what is the potential effect of the inductive bias from the choice of convolutional neural network as your encoder? -------------- I have read the response, and I am happy with the response. However, I encourage the authors to think more about the potential unboundedness of the KL mutual information and the boundedness of their family of functions / regularization.
NIPS | Title
GILBO: One Metric to Measure Them All
Abstract
We propose a simple, tractable lower bound on the mutual information contained in the joint generative density of any latent variable generative model: the GILBO (Generative Information Lower BOund). It offers a data-independent measure of the complexity of the learned latent variable description, giving the log of the effective description length. It is well-defined for both VAEs and GANs. We compute the GILBO for 800 GANs and VAEs each trained on four datasets (MNIST, FashionMNIST, CIFAR-10 and CelebA) and discuss the results.
1 Introduction
GANs (Goodfellow et al., 2014) and VAEs (Kingma & Welling, 2014) are the most popular latent variable generative models because of their relative ease of training and high expressivity. However quantitative comparisons across different algorithms and architectures remains a challenge. VAEs are generally measured using the ELBO, which measures their fit to data. Many metrics have been proposed for GANs, including the INCEPTION score (Gao et al., 2017), the FID score (Heusel et al., 2017), independent Wasserstein critics (Danihelka et al., 2017), birthday paradox testing (Arora & Zhang, 2017), and using Annealed Importance Sampling to evaluate log-likelihoods (Wu et al., 2017), among others.
Instead of focusing on metrics tied to the data distribution, we believe a useful additional independent metric worth consideration is the complexity of the trained generative model. Such a metric would help answer questions related to overfitting and memorization, and may also correlate well with sample quality. To work with both GANs and VAEs our metric should not require a tractable joint density p(x, z). To address these desiderata, we propose the GILBO.
2 GILBO: Generative Information Lower BOund
A symmetric, non-negative, reparameterization independent measure of the information shared between two random variables is given by the mutual information:
I(X;Z) = ∫∫ dx dz p(x, z) log p(x, z)
p(x)p(z) =
∫ dz p(z) ∫ dx p(x|z) log p(z|x)
p(z) ≥ 0. (1)
I(X;Z) measures how much information (in nats) is learned about one variable given the other. As such it is a measure of the complexity of the generative model. It can be interpreted (when converted to bits) as the reduction in the number of yes-no questions needed to guess X = x if you observe Z = z and know p(x), or vice-versa. It gives the log of the effective description length of the generative model. This is roughly the log of the number of distinct sample pairs (Tishby & Zaslavsky, 2015). I(X;Z) is well-defined even for continuous distributions. This contrasts with the continuous entropy H(X) of the marginal distribution, which is not reparameterization independent (Marsh,
∗Authors contributed equally.
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
2013). I(X;Z) is intractable due to the presence of p(x) = ∫ dz p(z)p(x|z), but we can derive a tractable variational lower bound (Agakov, 2006):
I(X;Z) = ∫∫ dx dz p(x, z) log p(x, z)
p(x)p(z) (2)
= ∫∫ dx dz p(x, z) log
p(z|x) p(z)
(3) ≥ ∫∫ dx dz p(x, z) log p(z|x)− ∫ dz p(z) log p(z)−KL[p(z|x)||e(z|x)] (4)
= ∫ dz p(z) ∫ dx p(x|z) log e(z|x)
p(z) = Ep(x,z)
[ log
e(z|x) p(z)
] ≡ GILBO ≤ I(X;Z) (5)
We call this bound the GILBO for Generative Information Lower BOund. It requires learning a tractable variational approximation to the intractable posterior p(z|x) = p(x, z)/p(x), termed e(z|x) since it acts as an encoder mapping from data to a prediction of its associated latent variables.2 As a variational approximation, e(z|x) depends on some parameters, θ, which we elide in the notation. The encoder e(z|x) performs a regression for the inverse of the GAN or VAE generative model, approximating the latents that gave rise to an observed sample. This encoder should be a tractable distribution, and must respect the domain of the latent variables, but does not need to be reparameterizable as no sampling from e(z|x) is needed during training. We suggest the use of (−1, 1) remapped Beta distributions in the case of uniform latents, and Gaussians in the case of Gaussian latents. In either case, training the variational encoder consists of simply generating pairs of (x, z) from the trained generative model and maximizing the likelihood of the encoder to generate the observed z, conditioned on its paired x, divided by the likelihood of the observed z under the generative model’s prior, p(z). For the GANs in this study, the prior was a fixed uniform distribution, so the log p(z) term contributes a constant offset to the variational encoder’s likelihood. Optimizing the GILBO for the parameters of the encoder gives a lower bound on the true generative mutual information in the GAN or VAE. Any failure to converge or for the approximate encoder to match the true distribution does not invalidate the bound, it simply makes the bound looser.
The GILBO contrasts with the representational mutual information of VAEs defined by the data and encoder, which motivates VAE objectives (Alemi et al., 2017). For VAEs, both lower and upper variational bounds can be defined on the representational joint distribution (p(x)e(z|x)). These have demonstrated their utility for cross-model comparisons. However, they require a tractable posterior, preventing their use with most GANs. The GILBO provides a theoretically-justified and dataset-independent metric that allows direct comparison of VAEs and GANs.
The GILBO is entirely independent of the true data, being purely a function of the generative joint distribution. This makes it distinct from other proposed metrics like estimated marginal log likelihoods (often reported for VAEs and very expensive to estimate for GANs) (Wu et al., 2017)3, an independent Wasserstein critic (Danihelka et al., 2017), or the common INCEPTION (Gao et al., 2017) and FID (Heusel et al., 2017) scores which attempt to measure how well the generated samples match the observed true data samples. Being independent of data, the GILBO does not directly measure sample quality, but extreme values (either low or high) correlate with poor sample quality, as demonstrated in the experiments below.
Similarly, in Im et al. (2018), the authors propose using various GAN training objectives to quantitatively measure the performance of GANs on their own generated data. Interestingly, they find that evaluating GANs on the same metric they were trained on gives paradoxically weaker performance – an LS-GAN appears to perform worse than a Wasserstein GAN when evaluated with the least-squares metric, for example, even though the LS-GAN otherwise outperforms the WGAN. If this result holds in general, it would indicate that using the GILBO during training might result in less-interpretable evaluation GILBOs. We do not investigate this hypothesis here.
2Note that a new e(z|x) is trained for both GANs and VAEs. VAEs do not use their own e(z|x), which would also give a valid lower bound. In this work, we train a new e(z|x) for both to treat both model classes uniformly. We don’t know if using a new e(z|x) or the original would tend to result in a tighter bound.
3Note that Wu et al. (2017) is complementary to our work, providing both upper and lower bounds on the log-likelihood. It is our opinion that their estimates should also become standard practice when measuring GANs and VAEs.
Although the GILBO doesn’t directly reference the dataset, the dataset provides useful signposts. First is at logC, the number of distinguishable classes in the data. If the GILBO is lower than that, the model has almost certainly failed to learn a reasonable model of the data. Another is at logN , the number of training points. A GILBO near this value may indicate that the model has largely memorized the training set, or that the model’s capacity happens to be constrained near the size of the training set. At the other end is the entropy of the data itself (H(X)) taken either from a rough estimate, or from the best achieved data log likelihood of any known generative model on the data. Any reasonable generative model should have a GILBO no higher than this value.
Unlike other metrics, GILBO does not monotonically map to quality of the generated output. Both extremes indicate failures. A vanishing GILBO denotes a generative model with vanishing complexity, either due to independence of the latents and samples, or a collapse to a small number of possible outputs. A diverging GILBO suggests over-sensitivity to the latent variables.
In this work, we focus on variational approximations to the generative information. However, other means of estimating the GILBO are also valid. In Section 4.3 we explore a computationallyexpensive method to find a very tight bound. Other possibilities exist as well, including the recently proposed Mutual Information Neural Estimation (Belghazi et al., 2018) and Contrastive Predictive Coding (Oord et al., 2018). We do not explore these possibilities here, but any valid estimator of the mutual information can be used for the same purpose.
3 Experiments
We computed the GILBO for each of the 700 GANs and 100 VAEs tested in Lucic et al. (2017) on the MNIST, FashionMNIST, CIFAR and CelebA datasets in their wide range hyperparameter search. This allowed us to compare FID scores and GILBO scores for a large set of different GAN objectives on the same architecture. For our encoder network, we duplicated the discriminator, but adjusted the final output to be a linear layer predicting the 64× 2 = 128 parameters defining a (−1, 1) remapped Beta distribution (or Gaussian in the case of the VAE) over the latent space. We used a Beta since all of the GANs were trained with a (−1, 1) 64-dimensional uniform distribution. The parameters of the encoder were optimized for up to 500k steps with ADAM (Kingma & Ba, 2015) using a scheduled multiplicative learning rate decay. We used the same batch size (64) as in the original training. Training time for estimating GILBO is comparable to doing FID evaluations (a few minutes) on the small datasets (MNIST, FashionMNIST, CIFAR), or over 10 minutes for larger datasets and models (CelebA).
In Figure 1 we show the distributions of FID and GILBO scores for all 800 models as well as their scatter plot for MNIST. We can immediately see that each of the GAN objectives collapse to GILBO ∼ 0 for some hyperparameter settings, but none of the VAEs do. In Figure 2 we show generated samples from all of the models, split into relevant regions. A GILBO near zero signals a failure of the model to make any use of its latent space (Figure 2a).
The best performing models by FID all sit at a GILBO ∼ 11 nats. An MNIST model that simply memorized the training set and partitioned the latent space into 50,000 unique outputs would have a GILBO of log 50,000 = 10.8 nats, so the cluster around 11 nats is suspicious. Since mutual information is invariant to any invertible transformation, a model that partitioned the latent space into 50,000 bins, associated each with a training point and then performed some random elastic transformation but with a magnitude low enough to not turn one training point into another would still have a generative mutual information of 10.8 nats. Larger elastic transformations that could confuse one training point for another would only act to lower the generative information. Among a large set of hyperparameters and across 7 different GAN objectives, we notice a conspicuous increase in FID score as GILBO moves away from ∼ 11 nats to either side. This demonstrates the failure of these GANs to achieve a meaningful range of complexities while maintaining visual quality. Most striking is the distinct separation in GILBOs between GANs and VAEs. These GANs learn less complex joint densities than a vanilla VAE on MNIST at the same FID score.
Figures 3 to 5 show the same plots as in Figure 1 but for the FashionMNIST, CIFAR-10 and CelebA datasets respectively. The best performing models as measured by FID on FashionMNIST continue to have GILBOs near logN . However, on the more complex CIFAR-10 and CelebA datasets we see nontrivial variation in the complexities of the trained GANs with competitive FID. On these more complex datasets, the visual performance (e.g. Figure 8) of the models leaves much to be desired. We speculate that the models’ inability to acheive high visual quality is due to insufficient model capacity for the dataset.
4 Discussion
4.1 Reproducibility
While the GILBO is a valid lower bound regardless of the accuracy of the learned encoder, its utility as a metric naturally requires it to be comparable across models. The first worry is whether it is reproducible in its values. To address this, in Figure 6 we show the result of 128 different training runs to independently compute the GILBO for three models on CelebA. In each case the error in the measurement was below 2% of the mean GILBO and much smaller in variation than the variations between models, suggesting comparisons between models are valid if we use the same encoder architecture (e(z|x)) for each.
4.2 Tightness
Another concern would be whether the learned variational encoder was a good match to the true posterior of the generative model (e(z|x) ∼ p(z|x)). Perhaps the model with a measured GILBO of 41 nats simply had a harder to capture p(z|x) than the GILBO ∼ 104 nat model. Even if the values were reproducible between runs, maybe there is a systemic bias in the approximation that differs between different models.
To test this, we used the Simulation-Based Calibration (SBC) technique of Talts et al. (2018). If one were to implement a cycle, wherein a single draw from the prior z′ ∼ p(z) is decoded into an image x′ ∼ p(x|z′) and then inverted back to its corresponding latent zi ∼ p(z|x′), the rank statistic∑
i I [zi < z′] should be uniformly distributed. Replacing the true p(z|x′) with the approximate e(z|x) gives a visual test for the accuracy of the approximation. Figure 7 shows a histogram of the rank statistic for 128 draws from e(z|x) for each of 1270 batches of 64 elements each drawn from the 64 dimensional prior p(z) for the same three GANs as in Figure 6. The red line denotes the 99% confidence interval for the corresponding uniform distribution. All three GANs show a systematic ∩-shaped distribution denoting overdispersion in e(z|x) relative to the true p(z|x). This is to be expected from a variational approximation, but importantly the degree of mismatch seems to correlate with the scores, not anticorrelate. It is likely that the 41 nat GILBO is a more accurate lower bound than the 103 nat GILBO. This further reinforces the utility of the GILBO for cross-model comparisons.
4.3 Precision of the GILBO
While comparisons between models seem well-motivated, the SBC results in Section 4.2 highlight some mismatch in the variational approximation. How well can we trust the absolute numbers computed by the GILBO? While they are guaranteed to be valid lower bounds, how tight are those bounds?
To answer these questions, note that the GILBO is a valid lower bound even if we learn separate per-instance variational encoders. Here we replicate the results of Lipton & Tripathi (2017) and attempt to learn the precise z that gave rise to an image by minimizing the L2 distance between the produced image and the target (|x− g(z)|2). We can then define a distribution centered on z and adjust the magnitude of the variance to get the best GILBO possible. In other words, by minimizing the L2 distance between an image x sampled from the generative model and some other x′ sampled from the same model, we can directly recover some z′ equivalent to the z that generated x. We can then do a simple optimization to find the variance that maximizes the GILBO, allowing us to compute a very tight GILBO in a very computationally-expensive manner.
Doing this procedure on the same three models as in Figures 6 and 7 gives (87, 111, 155) nats respectfully for the (41, 70, 104) GILBO models, when trained for 150k steps to minimize the L2 distance. These approximations are also valid lower bounds, and demonstrate that our amortized GILBO calculations above might be off by as much as a factor of 2 in their values from the true generative information, but again highlights that the comparisons between different models appear to be real. Also note that these per-image bounds are finite. We discuss the finiteness of the generative information in more detail in Section 4.6.
Naturally, learning a single parametric amortized variational encoder is much less computationally expensive than doing an independent optimization for each image, and still seems to allow for comparative measurements. However, we caution against directly comparing GILBO scores from different encoder architectures or optimization procedures. Fair comparison between models requires holding the encoder architecture and training procedure fixed.
4.4 Consistency
The GILBO offers a signal distinct from data-based metrics like FID. In Figure 8, we visually demonstrate the nature of the retained information for the same three models as above in Figures 6 and 7. All three checkpoints for CelebA have the same FID score of 49, making them each competitive amongst the GANs studied; however, they have GILBO values that span a range of 63 nats (91 bits), which indicates a massive difference in model complexity. In each figure, the left-most column shows a set of independent generated samples from the GAN. Each of these generated images are then sent through the variational encoder e(z|x) from which 15 independent samples of the corresponding z are drawn. These latent codes are then sent back through the GAN’s generator to form the remaining 15 columns.
The images in Figure 8 show the type of information that is retained in the mapping from image to latent and back to image space. On the right in Figure 8c with a GILBO of 104 nats, practically all of the human-perceptible information is retained by doing this cycle. In contrast, on the left in Figure 8a with a GILBO of only 41 nats, there is a good degree of variation in the synthesized images, although they generally retain the overall gross attributes of the faces. In the middle, at 70 nats, the variation in the synthesized images is small, but noticeable, such as the sunglasses that appear and disappear 6 rows from the top.
4.5 Overfitting of the GILBO Encoder
Since the GILBO is trained on generated samples, the dataset is limited only by the number of unique samples the generative model can produce. Consequently, it should not be possible for the encoder, e(z|x), to overfit to the training data. Regardless, when we actually evaluate the GILBO, it is always on newly generated data.
Likewise, given that the GILBO is trained on the “true” generative model p(z)p(x|z), we do not expect regularization to be necessary. The encoders we trained are unregularized. However, we note that any regularization procedure on the encoder could be thought of as a modification of the variational family used in the variational approximation.
The same argument is true about architectural choices. We used a convolutional encoder, as we expect it to be a good match with the deconvolutional generative models under study, but the GILBO would still be valid if we used an MLP or any other architecture. The computed GILBO may be more or less tight depending on such choices, though – the architectural choices for the encoder are a form of
inductive bias and should be made in a problem-dependent manner just like any other architectural choice.
4.6 Finiteness of the Generative Information
The generative mutual information is only infinite if the generator network is not only deterministic, but is also invertible. Deterministic many-to-one functions can have finite mutual informations between their inputs and outputs. Take for instance the following: p(z) = U [−1, 1], the prior being uniform from -1 to 1, and a generator x = G(z) = sign(z) being the sign function (which is C∞ almost everywhere), for which p(x|z) = δ(x− sign(z)) the conditional distribution of x given z is the delta function concentrated on the sign of z.
p(x, z) = p(x|z)p(z) = 1 2 δ(x−sign(z)) p(z) = ∫ 1 −1 dx p(x, z) = 1 2 δ(z−1)+ 1 2 δ(z+1) (6)
I(X;Z) = ∫ dx dz p(x, z) log p(x, z)
p(x)p(z) (7)
= ∫ 1 −1 dx ∫ 1 −1 dz 1 2 δ(x− sign(z)) log δ(x− sign(z))1 2δ(z − 1) + 1 2δ(z + 1)
(8)
=
[ 1
2 log 2 ] z=−1 + [ 1 2 log 2 ] z=1 = log 2 = 1bit (9)
In other words, the deterministic function x = sign(z) induces a mutual information of 1 bit between X and Z. This makes sense when interpreting the mutual information as the reduction in the number of yes-no questions needed to specify the value. It takes an infinite number of yes-no questions to precisely determine a real number in the range [−1, 1], but if we observe the sign of the value, it takes one fewer question (while still being infinite) to determine.
Even if we take Z to be a continuous real-valued random variable on the range [−1, 1], if we consider a function x = float(z) which casts that number to a float, for a 32-bit float on the range [−1, 1] the mutual information that results is I(X;Z) = 26 bits (we verified this numerically). In any chain Z → float(Z) → X by the data processing inequality, the mutual information I(X;Z) is limited by I(Z; float(Z)) = 26 bits (per dimension). Given that we train neural networks with limited precision arithmetic, this ensures that there is always some finite mutual information in the representations, since our random variables are actually discrete, albeit discretized on a very fine grid.
5 Conclusion
We’ve defined a new metric for evaluating generative models, the GILBO, and measured its value on over 3200 models. We’ve investigated and discussed strengths and potential limitations of the metric. We’ve observed that GILBO gives us different information than is currently available in sample-quality based metrics like FID, both signifying a qualitative difference in the performance of most GANs on MNIST versus richer datasets, as well as being able to distinguish between GANs with qualitatively different latent representations even if they have the same FID score.
On simple datasets, in an information-theoretic sense we cannot distinguish what GANs with the best FIDs are doing from models that are limited to making some local deformations of the training set. On more complicated datasets, GANs show a wider array of complexities in their trained generative models. These complexities cannot be discerned by existing sample-quality based metrics, but would have important implications for any use of the trained generative models for auxiliary tasks, such as compression or representation learning.
A truly invertible continuous map from the latent space to the image space would have a divergent mutual information. Since GANs are implemented as feed-forward neural networks, the fact that we can measure finite and distinct values for the GILBO for different architectures suggests that not only are they fundamentally not perfectly invertible, but the degree of invertibility is an interesting signal of the complexity of the learned generative model. Given that GANs are implemented as deterministic feed forward maps, they naturally want to live at high generative mutual information.
Humans seem to extract only roughly a dozen bits (∼ 8 nats) from natural images into long term memory (Landauer, 1986). This calls into question the utility of the usual qualitative visual comparisons of highly complex generative models. We might also be interested in trying to train models that learn much more compressed representations. VAEs can naturally target a wide range of mutual informations (Alemi et al., 2017). GANs are harder to steer. One approach to make GANs steerable is to modify the GAN objective and specifically designate a subset of the full latent space as the informative subspace, as in Chen et al. (2016), where the maximum complexity can be controlled for by limiting the dimensionality of a discrete categorical latent. The remaining stochasticity in the latent can be used for novelty in the conditional generations. Alternatively one could imagine adding the GILBO as an auxiliary objective to ordinary GAN training, though as a lower bound, it may not prove useful for helping to keep the generative information low. Regardless, we believe it is important to consider the complexity in information-theoretic terms of the generative models we train, and the GILBO offers a relatively cheap comparative measure.
We believe using GILBO for further comparisons across architectures, datasets, and GAN and VAE variants will illuminate the strengths and weaknesses of each. The GILBO should be measured and reported when evaluating any latent variable model. To that end, our implementation is available at https://github.com/google/compare_gan.
Acknowledgements
We would like to thank Mario Lucic, Karol Kurach, and Marcin Michalski for the use of their 3200 previously-trained GANs and VAEs and their codebase (described in Lucic et al. (2017)), without which this paper would have had much weaker experiments, as well as for their help adding our GILBO code to their public repository. We would also like to thank our anonymous reviewers for substantial helpful feedback. | 1. What is the focus of the paper regarding generative models?
2. What are the strengths and weaknesses of the proposed metric for evaluating these models?
3. How does the reviewer interpret the results of the paper, particularly regarding memorization?
4. What concerns does the reviewer have regarding the definition and interpretation of the mutual information in the context of GANs?
5. How might the choice of noise level impact the utility of the proposed metric?
6. Does the reviewer think there are other issues with how the paper addresses the goal of generative modeling? | Review | Review
Summary
=======
This paper proposes a metric to evaluate generative models. It suggests to use a variational lower bound on the mutual information between the latent variables and the observed variables under the distribution of the model.

Good
====
The most interesting finding of this paper is that the proposed metric sits close to ln(training_set_size) nats for models with low FID on MNIST and FashionMNIST (although not for CelebA or CIFAR), which the authors suggest might be because the models memorized the training set. If the authors understood this result better and provided additional evidence that GANs are memorizing the training set, it might make for an interesting paper.

Bad
===
The proposed metric is in general not well defined for GANs mapping z to x via a deterministic function. The mutual information is expressed in terms of a differential entropy using densities, but those densities might not exist. Even if it was properly defined, the mutual information is likely to diverge to infinity for GANs. I suspect that the estimated finite values are an artefact of the approximation error to the posterior p(z | x), and it is therefore not clear how to interpret these values. The authors claim "any failure to converge for the approximate encoder to match the true distribution does not invalidate the bound, it simply makes the bound looser." Yet if the mutual information is infinite, the bound is not just loose but may be completely meaningless, which seems like a big problem that should be addressed in the paper. The authors suggest that the proposed metric can be used as a measure of "complexity of the generative model". Yet simple linear models (PCA) would have infinite mutual information. I would be inclined to change my score if the authors can provide an example of a deterministic differentiable function g(z) where I(z, g(z)) isn't either 0 or infinity. Of course, adding a bit of noise would be one way to limit the mutual information, but then I'd argue we need to understand the dependence on the noise and a principled way of choosing the noise level before the metric becomes useful. At a higher level, the authors seem to (incorrectly) assume that the goal of generative modeling is to generate visually pleasing images. Yet this is probably the least interesting application of generative models. If generating realistic images was the goal, a large database of images would do such a good job that it would be hard to justify any work on generative models. For more well-defined tasks, measuring generalization performance is usually not an issue (e.g., evaluating unsupervisedly learned representations in classification tasks), diminishing the value of the proposed metric.
NIPS | Title
GILBO: One Metric to Measure Them All
Abstract
We propose a simple, tractable lower bound on the mutual information contained in the joint generative density of any latent variable generative model: the GILBO (Generative Information Lower BOund). It offers a data-independent measure of the complexity of the learned latent variable description, giving the log of the effective description length. It is well-defined for both VAEs and GANs. We compute the GILBO for 800 GANs and VAEs each trained on four datasets (MNIST, FashionMNIST, CIFAR-10 and CelebA) and discuss the results.
1 Introduction
GANs (Goodfellow et al., 2014) and VAEs (Kingma & Welling, 2014) are the most popular latent variable generative models because of their relative ease of training and high expressivity. However quantitative comparisons across different algorithms and architectures remains a challenge. VAEs are generally measured using the ELBO, which measures their fit to data. Many metrics have been proposed for GANs, including the INCEPTION score (Gao et al., 2017), the FID score (Heusel et al., 2017), independent Wasserstein critics (Danihelka et al., 2017), birthday paradox testing (Arora & Zhang, 2017), and using Annealed Importance Sampling to evaluate log-likelihoods (Wu et al., 2017), among others.
Instead of focusing on metrics tied to the data distribution, we believe a useful additional independent metric worth consideration is the complexity of the trained generative model. Such a metric would help answer questions related to overfitting and memorization, and may also correlate well with sample quality. To work with both GANs and VAEs our metric should not require a tractable joint density p(x, z). To address these desiderata, we propose the GILBO.
2 GILBO: Generative Information Lower BOund
A symmetric, non-negative, reparameterization independent measure of the information shared between two random variables is given by the mutual information:
I(X;Z) = \iint dx\, dz\; p(x, z) \log \frac{p(x, z)}{p(x)\,p(z)} = \int dz\, p(z) \int dx\, p(x|z) \log \frac{p(z|x)}{p(z)} \ge 0.   (1)
I(X;Z) measures how much information (in nats) is learned about one variable given the other. As such it is a measure of the complexity of the generative model. It can be interpreted (when converted to bits) as the reduction in the number of yes-no questions needed to guess X = x if you observe Z = z and know p(x), or vice-versa. It gives the log of the effective description length of the generative model. This is roughly the log of the number of distinct sample pairs (Tishby & Zaslavsky, 2015). I(X;Z) is well-defined even for continuous distributions. This contrasts with the continuous entropy H(X) of the marginal distribution, which is not reparameterization independent (Marsh,
2013). I(X;Z) is intractable due to the presence of p(x) = ∫ dz p(z)p(x|z), but we can derive a tractable variational lower bound (Agakov, 2006):
I(X;Z) = \iint dx\, dz\; p(x, z) \log \frac{p(x, z)}{p(x)\,p(z)}   (2)

= \iint dx\, dz\; p(x, z) \log \frac{p(z|x)}{p(z)}   (3)

\ge \iint dx\, dz\; p(x, z) \log p(z|x) - \int dz\, p(z) \log p(z) - \mathrm{KL}[p(z|x)\,\|\,e(z|x)]   (4)

= \int dz\, p(z) \int dx\, p(x|z) \log \frac{e(z|x)}{p(z)} = \mathbb{E}_{p(x,z)}\!\left[\log \frac{e(z|x)}{p(z)}\right] \equiv \mathrm{GILBO} \le I(X;Z)   (5)
We call this bound the GILBO for Generative Information Lower BOund. It requires learning a tractable variational approximation to the intractable posterior p(z|x) = p(x, z)/p(x), termed e(z|x) since it acts as an encoder mapping from data to a prediction of its associated latent variables.2 As a variational approximation, e(z|x) depends on some parameters, θ, which we elide in the notation. The encoder e(z|x) performs a regression for the inverse of the GAN or VAE generative model, approximating the latents that gave rise to an observed sample. This encoder should be a tractable distribution, and must respect the domain of the latent variables, but does not need to be reparameterizable as no sampling from e(z|x) is needed during training. We suggest the use of (−1, 1) remapped Beta distributions in the case of uniform latents, and Gaussians in the case of Gaussian latents. In either case, training the variational encoder consists of simply generating pairs of (x, z) from the trained generative model and maximizing the likelihood of the encoder to generate the observed z, conditioned on its paired x, divided by the likelihood of the observed z under the generative model’s prior, p(z). For the GANs in this study, the prior was a fixed uniform distribution, so the log p(z) term contributes a constant offset to the variational encoder’s likelihood. Optimizing the GILBO for the parameters of the encoder gives a lower bound on the true generative mutual information in the GAN or VAE. Any failure to converge or for the approximate encoder to match the true distribution does not invalidate the bound, it simply makes the bound looser.
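To make the training procedure concrete, here is a minimal sketch of the GILBO estimate in Eq. (5). It assumes a pre-trained generator `G`, a standard-normal prior, and a small fully connected Gaussian encoder; all three are simplifying assumptions (the experiments below use uniform priors, remapped Beta encoders, and a convolutional architecture):

```python
import torch
import torch.nn as nn

def estimate_gilbo(G, latent_dim, x_dim, steps=10000, batch=64, lr=1e-3):
    # e(z|x): a Gaussian whose mean and log-std are predicted from the image.
    encoder = nn.Sequential(nn.Linear(x_dim, 512), nn.ReLU(),
                            nn.Linear(512, 2 * latent_dim))
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    prior = torch.distributions.Normal(0.0, 1.0)
    for _ in range(steps):
        with torch.no_grad():
            z = prior.sample((batch, latent_dim))  # z ~ p(z)
            x = G(z)                               # x ~ p(x|z)
        mu, log_std = encoder(x.flatten(1)).chunk(2, dim=-1)
        e = torch.distributions.Normal(mu, log_std.exp())
        # Maximize E[log e(z|x) - log p(z)] over the encoder parameters.
        bound = (e.log_prob(z) - prior.log_prob(z)).sum(-1).mean()
        opt.zero_grad()
        (-bound).backward()
        opt.step()
    return bound.item()  # lower bound on I(X; Z), in nats
```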
The GILBO contrasts with the representational mutual information of VAEs defined by the data and encoder, which motivates VAE objectives (Alemi et al., 2017). For VAEs, both lower and upper variational bounds can be defined on the representational joint distribution (p(x)e(z|x)). These have demonstrated their utility for cross-model comparisons. However, they require a tractable posterior, preventing their use with most GANs. The GILBO provides a theoretically-justified and dataset-independent metric that allows direct comparison of VAEs and GANs.
The GILBO is entirely independent of the true data, being purely a function of the generative joint distribution. This makes it distinct from other proposed metrics like estimated marginal log likelihoods (often reported for VAEs and very expensive to estimate for GANs) (Wu et al., 2017)3, an independent Wasserstein critic (Danihelka et al., 2017), or the common INCEPTION (Gao et al., 2017) and FID (Heusel et al., 2017) scores which attempt to measure how well the generated samples match the observed true data samples. Being independent of data, the GILBO does not directly measure sample quality, but extreme values (either low or high) correlate with poor sample quality, as demonstrated in the experiments below.
Similarly, in Im et al. (2018), the authors propose using various GAN training objectives to quantitatively measure the performance of GANs on their own generated data. Interestingly, they find that evaluating GANs on the same metric they were trained on gives paradoxically weaker performance – an LS-GAN appears to perform worse than a Wasserstein GAN when evaluated with the least-squares metric, for example, even though the LS-GAN otherwise outperforms the WGAN. If this result holds in general, it would indicate that using the GILBO during training might result in less-interpretable evaluation GILBOs. We do not investigate this hypothesis here.
2Note that a new e(z|x) is trained for both GANs and VAEs. VAEs do not use their own e(z|x), which would also give a valid lower bound. In this work, we train a new e(z|x) for both to treat both model classes uniformly. We don’t know if using a new e(z|x) or the original would tend to result in a tighter bound.
3Note that Wu et al. (2017) is complementary to our work, providing both upper and lower bounds on the log-likelihood. It is our opinion that their estimates should also become standard practice when measuring GANs and VAEs.
Although the GILBO doesn’t directly reference the dataset, the dataset provides useful signposts. First is at logC, the number of distinguishable classes in the data. If the GILBO is lower than that, the model has almost certainly failed to learn a reasonable model of the data. Another is at logN , the number of training points. A GILBO near this value may indicate that the model has largely memorized the training set, or that the model’s capacity happens to be constrained near the size of the training set. At the other end is the entropy of the data itself (H(X)) taken either from a rough estimate, or from the best achieved data log likelihood of any known generative model on the data. Any reasonable generative model should have a GILBO no higher than this value.
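As a quick worked instance of these signposts (our arithmetic, using the 10 MNIST classes and the 50,000 training images referenced in Section 3):

```latex
\log C = \ln 10 \approx 2.3 \text{ nats}, \qquad \log N = \ln 50{,}000 \approx 10.8 \text{ nats}.
```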
Unlike other metrics, GILBO does not monotonically map to quality of the generated output. Both extremes indicate failures. A vanishing GILBO denotes a generative model with vanishing complexity, either due to independence of the latents and samples, or a collapse to a small number of possible outputs. A diverging GILBO suggests over-sensitivity to the latent variables.
In this work, we focus on variational approximations to the generative information. However, other means of estimating the GILBO are also valid. In Section 4.3 we explore a computationally expensive method to find a very tight bound. Other possibilities exist as well, including the recently proposed Mutual Information Neural Estimation (Belghazi et al., 2018) and Contrastive Predictive Coding (Oord et al., 2018). We do not explore these possibilities here, but any valid estimator of the mutual information can be used for the same purpose.
3 Experiments
We computed the GILBO for each of the 700 GANs and 100 VAEs tested in Lucic et al. (2017) on the MNIST, FashionMNIST, CIFAR and CelebA datasets in their wide range hyperparameter search. This allowed us to compare FID scores and GILBO scores for a large set of different GAN objectives on the same architecture. For our encoder network, we duplicated the discriminator, but adjusted the final output to be a linear layer predicting the 64× 2 = 128 parameters defining a (−1, 1) remapped Beta distribution (or Gaussian in the case of the VAE) over the latent space. We used a Beta since all of the GANs were trained with a (−1, 1) 64-dimensional uniform distribution. The parameters of the encoder were optimized for up to 500k steps with ADAM (Kingma & Ba, 2015) using a scheduled multiplicative learning rate decay. We used the same batch size (64) as in the original training. Training time for estimating GILBO is comparable to doing FID evaluations (a few minutes) on the small datasets (MNIST, FashionMNIST, CIFAR), or over 10 minutes for larger datasets and models (CelebA).
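For reference, the (−1, 1)-remapped Beta head can be written in a few lines. This sketch is ours; the helper name and the softplus parameterization are assumptions, not the authors' code:

```python
import torch
from torch.distributions import Beta, TransformedDistribution
from torch.distributions.transforms import AffineTransform

def remapped_beta(raw):
    # raw: unconstrained encoder outputs of shape (batch, 2 * latent_dim).
    a, b = torch.nn.functional.softplus(raw).chunk(2, dim=-1)
    # Beta lives on (0, 1); the affine map u -> 2u - 1 moves it onto (-1, 1),
    # the support of the uniform prior, and log_prob picks up the Jacobian.
    return TransformedDistribution(Beta(a, b), [AffineTransform(loc=-1.0, scale=2.0)])
```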
In Figure 1 we show the distributions of FID and GILBO scores for all 800 models as well as their scatter plot for MNIST. We can immediately see that each of the GAN objectives collapse to GILBO ∼ 0 for some hyperparameter settings, but none of the VAEs do. In Figure 2 we show generated samples from all of the models, split into relevant regions. A GILBO near zero signals a failure of the model to make any use of its latent space (Figure 2a).
The best performing models by FID all sit at a GILBO ∼ 11 nats. An MNIST model that simply memorized the training set and partitioned the latent space into 50,000 unique outputs would have a GILBO of log 50,000 = 10.8 nats, so the cluster around 11 nats is suspicious. Since mutual information is invariant to any invertible transformation, a model that partitioned the latent space into 50,000 bins, associated each with a training point and then performed some random elastic transformation but with a magnitude low enough to not turn one training point into another would still have a generative mutual information of 10.8 nats. Larger elastic transformations that could confuse one training point for another would only act to lower the generative information. Among a large set of hyperparameters and across 7 different GAN objectives, we notice a conspicuous increase in FID score as GILBO moves away from ∼ 11 nats to either side. This demonstrates the failure of these GANs to achieve a meaningful range of complexities while maintaining visual quality. Most striking is the distinct separation in GILBOs between GANs and VAEs. These GANs learn less complex joint densities than a vanilla VAE on MNIST at the same FID score.
Figures 3 to 5 show the same plots as in Figure 1 but for the FashionMNIST, CIFAR-10 and CelebA datasets respectively. The best performing models as measured by FID on FashionMNIST continue to have GILBOs near log N. However, on the more complex CIFAR-10 and CelebA datasets we see nontrivial variation in the complexities of the trained GANs with competitive FID. On these more complex datasets, the visual performance (e.g. Figure 8) of the models leaves much to be desired. We speculate that the models’ inability to achieve high visual quality is due to insufficient model capacity for the dataset.
4 Discussion
4.1 Reproducibility
While the GILBO is a valid lower bound regardless of the accuracy of the learned encoder, its utility as a metric naturally requires it to be comparable across models. The first worry is whether it is reproducible in its values. To address this, in Figure 6 we show the result of 128 different training runs to independently compute the GILBO for three models on CelebA. In each case the error in the measurement was below 2% of the mean GILBO and much smaller in variation than the variations between models, suggesting comparisons between models are valid if we use the same encoder architecture (e(z|x)) for each.
4.2 Tightness
Another concern would be whether the learned variational encoder was a good match to the true posterior of the generative model (e(z|x) ∼ p(z|x)). Perhaps the model with a measured GILBO of 41 nats simply had a harder to capture p(z|x) than the GILBO ∼ 104 nat model. Even if the values were reproducible between runs, maybe there is a systemic bias in the approximation that differs between different models.
To test this, we used the Simulation-Based Calibration (SBC) technique of Talts et al. (2018). If one were to implement a cycle, wherein a single draw from the prior z′ ∼ p(z) is decoded into an image x′ ∼ p(x|z′) and then inverted back to its corresponding latent zi ∼ p(z|x′), the rank statistic ∑_i 1[z_i < z′] should be uniformly distributed. Replacing the true p(z|x′) with the approximate e(z|x) gives a visual test for the accuracy of the approximation. Figure 7 shows a histogram of the rank statistic for 128 draws from e(z|x) for each of 1270 batches of 64 elements each drawn from the 64 dimensional prior p(z) for the same three GANs as in Figure 6. The red line denotes the 99% confidence interval for the corresponding uniform distribution. All three GANs show a systematic ∩-shaped distribution denoting overdispersion in e(z|x) relative to the true p(z|x). This is to be expected from a variational approximation, but importantly the degree of mismatch seems to correlate with the scores, not anticorrelate. It is likely that the 41 nat GILBO is a more accurate lower bound than the 103 nat GILBO. This further reinforces the utility of the GILBO for cross-model comparisons.
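A minimal sketch of that check follows; the generator `G` and a sampler `encoder_sample` for the learned e(z|x) are assumed helpers, not part of any released code:

```python
import numpy as np

def sbc_ranks(G, encoder_sample, latent_dim=64, n_draws=1270 * 64, n_post=128):
    ranks = []
    for _ in range(n_draws):
        z_prior = np.random.uniform(-1.0, 1.0, size=latent_dim)  # z' ~ p(z)
        x = G(z_prior)                                           # x' ~ p(x|z')
        z_post = encoder_sample(x, n_post)                       # (n_post, latent_dim) draws from e(z|x')
        ranks.append((z_post < z_prior).sum(axis=0))             # rank statistic per dimension
    # If e(z|x) matched p(z|x), each rank would be uniform on {0, ..., n_post}.
    return np.asarray(ranks)
```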
4.3 Precision of the GILBO
While comparisons between models seem well-motivated, the SBC results in Section 4.2 highlight some mismatch in the variational approximation. How well can we trust the absolute numbers computed by the GILBO? While they are guaranteed to be valid lower bounds, how tight are those bounds?
To answer these questions, note that the GILBO is a valid lower bound even if we learn separate per-instance variational encoders. Here we replicate the results of Lipton & Tripathi (2017) and attempt to learn the precise z that gave rise to an image by minimizing the L2 distance between the produced image and the target (|x− g(z)|2). We can then define a distribution centered on z and adjust the magnitude of the variance to get the best GILBO possible. In other words, by minimizing the L2 distance between an image x sampled from the generative model and some other x′ sampled from the same model, we can directly recover some z′ equivalent to the z that generated x. We can then do a simple optimization to find the variance that maximizes the GILBO, allowing us to compute a very tight GILBO in a very computationally-expensive manner.
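One plausible reading of that procedure, as a rough sketch (the generator `G` is an assumed handle, and `z_true` is the known prior draw that produced the generated image `x`):

```python
import math
import torch

def per_image_gilbo(G, z_true, x, steps=1000, lr=1e-2):
    d = z_true.numel()
    # Step 1: recover a latent z_hat by minimizing the L2 reconstruction error.
    z_hat = torch.zeros_like(z_true, requires_grad=True)
    opt = torch.optim.Adam([z_hat], lr=lr)
    for _ in range(steps):
        loss = ((G(z_hat) - x) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Step 2: sweep the std of e(z|x) = N(z_hat, sigma^2 I) and keep the best
    # value of log e(z_true|x) - log p(z_true), with p(z) uniform on (-1, 1)^d.
    best = -float("inf")
    for sigma in torch.logspace(-4, 0, 200):
        log_e = torch.distributions.Normal(z_hat.detach(), sigma).log_prob(z_true).sum()
        best = max(best, log_e.item() + d * math.log(2.0))
    return best  # per-image contribution to the bound, in nats
```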
Doing this procedure on the same three models as in Figures 6 and 7 gives (87, 111, 155) nats respectively for the (41, 70, 104) GILBO models, when trained for 150k steps to minimize the L2 distance. These approximations are also valid lower bounds, and demonstrate that our amortized GILBO calculations above might be off by as much as a factor of 2 in their values from the true generative information, but again highlight that the comparisons between different models appear to be real. Also note that these per-image bounds are finite. We discuss the finiteness of the generative information in more detail in Section 4.6.
Naturally, learning a single parametric amortized variational encoder is much less computationally expensive than doing an independent optimization for each image, and still seems to allow for comparative measurements. However, we caution against directly comparing GILBO scores from different encoder architectures or optimization procedures. Fair comparison between models requires holding the encoder architecture and training procedure fixed.
4.4 Consistency
The GILBO offers a signal distinct from data-based metrics like FID. In Figure 8, we visually demonstrate the nature of the retained information for the same three models as above in Figures 6 and 7. All three checkpoints for CelebA have the same FID score of 49, making them each competitive amongst the GANs studied; however, they have GILBO values that span a range of 63 nats (91 bits), which indicates a massive difference in model complexity. In each figure, the left-most column shows a set of independent generated samples from the GAN. Each of these generated images are then sent through the variational encoder e(z|x) from which 15 independent samples of the corresponding z are drawn. These latent codes are then sent back through the GAN’s generator to form the remaining 15 columns.
The images in Figure 8 show the type of information that is retained in the mapping from image to latent and back to image space. On the right in Figure 8c with a GILBO of 104 nats, practically all of the human-perceptible information is retained by doing this cycle. In contrast, on the left in Figure 8a with a GILBO of only 41 nats, there is a good degree of variation in the synthesized images, although they generally retain the overall gross attributes of the faces. In the middle, at 70 nats, the variation in the synthesized images is small, but noticeable, such as the sunglasses that appear and disappear 6 rows from the top.
4.5 Overfitting of the GILBO Encoder
Since the GILBO is trained on generated samples, the dataset is limited only by the number of unique samples the generative model can produce. Consequently, it should not be possible for the encoder, e(z|x), to overfit to the training data. Regardless, when we actually evaluate the GILBO, it is always on newly generated data.
Likewise, given that the GILBO is trained on the “true” generative model p(z)p(x|z), we do not expect regularization to be necessary. The encoders we trained are unregularized. However, we note that any regularization procedure on the encoder could be thought of as a modification of the variational family used in the variational approximation.
The same argument is true about architectural choices. We used a convolutional encoder, as we expect it to be a good match with the deconvolutional generative models under study, but the GILBO would still be valid if we used an MLP or any other architecture. The computed GILBO may be more or less tight depending on such choices, though – the architectural choices for the encoder are a form of
inductive bias and should be made in a problem-dependent manner just like any other architectural choice.
4.6 Finiteness of the Generative Information
The generative mutual information is only infinite if the generator network is not only deterministic, but is also invertible. Deterministic many-to-one functions can have finite mutual informations between their inputs and outputs. Take for instance the following: p(z) = U [−1, 1], the prior being uniform from -1 to 1, and a generator x = G(z) = sign(z) being the sign function (which is C∞ almost everywhere), for which p(x|z) = δ(x− sign(z)) the conditional distribution of x given z is the delta function concentrated on the sign of z.
p(x, z) = p(x|z)\,p(z) = \tfrac{1}{2}\,\delta(x - \mathrm{sign}(z)), \qquad p(x) = \int_{-1}^{1} dz\; p(x, z) = \tfrac{1}{2}\,\delta(x - 1) + \tfrac{1}{2}\,\delta(x + 1)   (6)

I(X;Z) = \int dx\, dz\; p(x, z) \log \frac{p(x, z)}{p(x)\,p(z)}   (7)

= \int_{-1}^{1} dx \int_{-1}^{1} dz\; \tfrac{1}{2}\,\delta(x - \mathrm{sign}(z)) \log \frac{\delta(x - \mathrm{sign}(z))}{\tfrac{1}{2}\,\delta(x - 1) + \tfrac{1}{2}\,\delta(x + 1)}   (8)

= \left[\tfrac{1}{2} \log 2\right]_{z=-1} + \left[\tfrac{1}{2} \log 2\right]_{z=1} = \log 2 = 1\ \text{bit}   (9)
In other words, the deterministic function x = sign(z) induces a mutual information of 1 bit between X and Z. This makes sense when interpreting the mutual information as the reduction in the number of yes-no questions needed to specify the value. It takes an infinite number of yes-no questions to precisely determine a real number in the range [−1, 1], but if we observe the sign of the value, it takes one fewer question (while still being infinite) to determine.
Even if we take Z to be a continuous real-valued random variable on the range [−1, 1], if we consider a function x = float(z) which casts that number to a float, for a 32-bit float on the range [−1, 1] the mutual information that results is I(X;Z) = 26 bits (we verified this numerically). In any chain Z → float(Z) → X by the data processing inequality, the mutual information I(X;Z) is limited by I(Z; float(Z)) = 26 bits (per dimension). Given that we train neural networks with limited precision arithmetic, this ensures that there is always some finite mutual information in the representations, since our random variables are actually discrete, albeit discretized on a very fine grid.
5 Conclusion
We’ve defined a new metric for evaluating generative models, the GILBO, and measured its value on over 3200 models. We’ve investigated and discussed strengths and potential limitations of the metric. We’ve observed that GILBO gives us different information than is currently available in sample-quality based metrics like FID, both signifying a qualitative difference in the performance of most GANs on MNIST versus richer datasets, as well as being able to distinguish between GANs with qualitatively different latent representations even if they have the same FID score.
On simple datasets, in an information-theoretic sense we cannot distinguish what GANs with the best FIDs are doing from models that are limited to making some local deformations of the training set. On more complicated datasets, GANs show a wider array of complexities in their trained generative models. These complexities cannot be discerned by existing sample-quality based metrics, but would have important implications for any use of the trained generative models for auxiliary tasks, such as compression or representation learning.
A truly invertible continuous map from the latent space to the image space would have a divergent mutual information. Since GANs are implemented as feed-forward neural networks, the fact that we can measure finite and distinct values for the GILBO for different architectures suggests that not only are they fundamentally not perfectly invertible, but the degree of invertibility is an interesting signal of the complexity of the learned generative model. Given that GANs are implemented as deterministic feed forward maps, they naturally want to live at high generative mutual information.
Humans seem to extract only roughly a dozen bits (∼ 8 nats) from natural images into long term memory (Landauer, 1986). This calls into question the utility of the usual qualitative visual comparisons of highly complex generative models. We might also be interested in trying to train models that learn much more compressed representations. VAEs can naturally target a wide range of mutual informations (Alemi et al., 2017). GANs are harder to steer. One approach to make GANs steerable is to modify the GAN objective and specifically designate a subset of the full latent space as the informative subspace, as in Chen et al. (2016), where the maximum complexity can be controlled for by limiting the dimensionality of a discrete categorical latent. The remaining stochasticity in the latent can be used for novelty in the conditional generations. Alternatively one could imagine adding the GILBO as an auxiliary objective to ordinary GAN training, though as a lower bound, it may not prove useful for helping to keep the generative information low. Regardless, we believe it is important to consider the complexity in information-theoretic terms of the generative models we train, and the GILBO offers a relatively cheap comparative measure.
We believe using GILBO for further comparisons across architectures, datasets, and GAN and VAE variants will illuminate the strengths and weaknesses of each. The GILBO should be measured and reported when evaluating any latent variable model. To that end, our implementation is available at https://github.com/google/compare_gan.
Acknowledgements
We would like to thank Mario Lucic, Karol Kurach, and Marcin Michalski for the use of their 3200 previously-trained GANs and VAEs and their codebase (described in Lucic et al. (2017)), without which this paper would have had much weaker experiments, as well as for their help adding our GILBO code to their public repository. We would also like to thank our anonymous reviewers for substantial helpful feedback. | 1. What is the focus of the reviewed paper, and what are the strengths and weaknesses of the proposed approach?
2. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
3. What are some potential shortcomings of the method addressed by the reviewer, and how might they be resolved?
4. Are there any suggestions for improvements or additions to the paper that the reviewer has provided? | Review | Review
[Response to rebuttal: I am happy and thoroughly satisfied with the author's response to the reviews, which is why I have increased my rating. I only want to touch upon one aspect once more: Reviewer 2 has raised an interesting issue (and in my view the authors have responded well and convincingly), namely the problem that the continuous MI might diverge for a differentiable deterministic model. The paper might benefit from explicitly including and addressing this issue, at least in the appendix. The issue is trivially resolved by acknowledging the fact that we perform computation with limited bit-precision, so the authors might even choose to make this explicit by replacing integrals with sums in their equations. For the continuous case, the problem remains when using a deterministic, reversible generator and I think it is interesting to comment on this in the paper and discuss how (and potentially under what circumstances) typical Relu networks tend to produce non-reversible functions ("many-to-one" mapping) while still being (mostly) differentiable. A full investigation of this is beyond the scope of the paper, but it might spark interest to look into this in some future work and thoroughly investigate what kind of compression GAN decoders perform (in the continuous limit).]

The paper introduces a new metric to quantitatively characterize the complexity of a generative latent-variable model with the ultimate goal of providing a performance indicator to quantitatively characterize generative models obtained from both GANs and VAEs. The aim is to quantify the mutual information between the latent variable z and the generated data x. This quantity is interesting since it provides a training-data independent measure, is invariant under re-parametrization and has interesting and well-known information-theoretic properties and interpretations. Since the computation of the mutual information involves the intractable posterior p(x|z), the paper proposes to instead consider a lower bound (GILBO) based on a variational approximation of the posterior instead. The measure is demonstrated in an exhaustive experimental study, where 800 models (VAEs and GANs) are compared on four different datasets. The authors find that the proposed measure provides important insight into the complexity of the generative model which is not captured by previously proposed data-dependent and sample-quality focused measures such as the FID score. The GILBO allows to qualitatively divide trained models into different regimes such as "data memorization" or "failure to capture all classes in latent space". Additionally the GILBO highlights some qualitative differences between GANs and VAEs. The paper concludes by addressing some potential shortcomings of the method: reproducibility of quantitative values, tightness of lower bound, quality of variational approximation and consistency with qualitative "behavior" of the generative model. The recent literature reports several attempts at objectively comparing the quality of complex generative models obtained from GANs or VAEs. While many methods so far focused on capturing the match between the true and the generated data-distribution or quantitative measures that correlate well with visual quality of generated samples, these measures seem to be insufficient to fully capture all important aspects of a generative model. Reporting the GILBO as an additional quantity that characterizes complexity of the generative model is a novel and original proposal.
The paper is generally well written (except for some hiccups at the beginning of section 2) and the experimental section is impressive (certainly in terms of invested computation). Main points of criticism regarding the method are already addressed in the paper. For these reasons, I argue for accepting the paper. My comments below are meant as suggestions to authors for further increasing the quality of the paper.

(1 Intro of the GILBO in Sec. 2) Section 2 (lines 23 to 48) could use some polishing and additional details, particularly for readers unfamiliar with standard notation in generative latent-variable models. In particular:
-) please explain what the variables X and Z correspond to in the context of generative models, ideally even mention that such a generative model consists of a prior over the latent variable and a (deterministic) generator p(x|z) which is a deep neural network in GAN and VAE.
-) line(27): I guess to be precise, it is the reduction in the number of questions to guess X if you observe Z and know p(X).
-) line (32): I think it would not hurt if you (in-line) add the equation for marginalization to make it obvious why p(x) is intractable. Also consider giving the equation for the posterior p(z|x) as you have it in line 34 before mentioning p(x), since p(x) does not appear in Eq. (1) but p(z|x) does. This might help readers follow the reasoning more easily.
-) Eq (2): perhaps consider being explicit about the fact that GILBO is a function which depends on the parameters of e(z|x), for instance by writing GILBO(\theta) = ... and then using e_\theta(z|x) or similar.
-) line (42 - 44): "maximizing the likelihood..." - it could be helpful to write down the same as an equation, just to make it precise for readers.

(2 Proofs) There are two arguments that you might want to back up with a proof (e.g. in the supplementary material). Though the proofs should not be too difficult, some readers might appreciate them and they are actually central statements for the appealing properties of the GILBO. Additionally it might be beneficial because it would require being more specific about the technical conditions that need to be fulfilled to get the appealing properties of GILBO.
-) line 46: show that the GILBO is actually a lower bound to the mutual information.
-) line 48: show that failure to converge, etc. simply makes the bound looser, i.e. show that Eq. (2) cannot diverge due to some ill-parameterized e(z|x).

(3 Discussion) Three more issues that you might want to include in your discussion.
-) Do you think it would be possible/computationally feasible to replace the variational approximation of the posterior with some sampling-approximation, which would in principle allow for arbitrary precision and thus guarantee convergence to the actual mutual information (while computationally expensive, this could be used to compute a gold-standard compared to the variational approximation)?
-) How sensitive is the GILBO to different settings of the optimization process (optimizer, hyper-parameters)? Ideally, this answer is backed up by some simulations. This would be of practical importance, since comparing GILBOs of different models could in principle require following very closely the same procedure (and of course encoder architecture) across papers.
-) Besides qualitative "sanity-checks" (as in Fig. 2: which regime does my generative model fall into), how do you propose to use the GILBO to quantitatively compare models.
Say I have two models with different GILBO (but in the same qualitative regime) and different FID score - which model is favorable, higher GILBO, higher FID, ...?

Misc:
-) Footnote 1, last sentence: ... it could also achieve a looser bound
-) Figure 1a: Any specific reason for a different color-map here, otherwise please use the same color-map as in the other plots.
-) Fig. 7 caption vs. line 133: did you use 128 or 127 samples?
-) Fig. 8 caption vs. line 165/166: did you use 16 or 15 samples?
-) citations: I quite like the author, year citations (even though they "cost" extra space in a conference paper) - please check that all your citations have the year properly set (see line 194, where it is missing)
-) line 207/208: Reporting the GILBO as an additional measure could be a good idea, please be a bit more specific that this would require fixing the encoder architecture and optimization procedure as well and comment on practical aspects such as whether you would use a different encoder architecture per data-set or the same across many data-sets, etc. |
NIPS | Title
The Nearest Neighbor Information Estimator is Adaptively Near Minimax Rate-Optimal
Abstract
We analyze the Kozachenko–Leonenko (KL) fixed k-nearest neighbor estimator for the differential entropy. We obtain the first uniform upper bound on its performance for any fixed k over Hölder balls on a torus without assuming any conditions on how close the density could be from zero. Accompanying a recent minimax lower bound over the Hölder ball, we show that the KL estimator for any fixed k is achieving the minimax rates up to logarithmic factors without cognizance of the smoothness parameter s of the Hölder ball for s ∈ (0, 2] and arbitrary dimension d, rendering it the first estimator that provably satisfies this property.
1 Introduction
Information theoretic measures such as entropy, Kullback-Leibler divergence and mutual information quantify the amount of information among random variables. They have many applications in modern machine learning tasks, such as classification [48], clustering [46, 58, 10, 41] and feature selection [1, 17]. Information theoretic measures and their variants can also be applied in several data science domains such as causal inference [18], sociology [49] and computational biology [36]. Estimating information theoretic measures from data is a crucial sub-routine in the aforementioned applications and has attracted much interest in statistics community. In this paper, we study the problem of estimating Shannon differential entropy, which is the basis of estimating other information theoretic measures for continuous random variables.
Suppose we observe n independent identically distributed random vectors X = {X1, . . . , Xn} drawn from density function f where Xi ∈ Rd. We consider the problem of estimating the differential entropy
h(f) = − ∫ f(x) ln f(x)dx , (1)
from the empirical observations X. The fundamental limit of estimating the differential entropy is given by the minimax risk
\inf_{\hat h} \sup_{f \in \mathcal{F}} \left( \mathbb{E}\big(\hat h(\mathbf{X}) - h(f)\big)^2 \right)^{1/2},   (2)
where the infimum is taken over all estimators ĥ that are functions of the empirical data X. Here F denotes a (nonparametric) class of density functions.
The problem of differential entropy estimation has been investigated extensively in the literature. As discussed in [2], there exist two main approaches, where one is based on kernel density estimators [30], and the other is based on the nearest neighbor methods [56, 53, 52, 11, 3], which is pioneered by the work of [33].
The problem of differential entropy estimation lies in the general problem of estimating nonparametric functionals. Unlike the parametric counterparts, the problem of estimating nonparametric functionals is challenging even for smooth functionals. Initial efforts have focused on inference of linear, quadratic, and cubic functionals in Gaussian white noise and density models and have laid the foundation for the ensuing research. We do not attempt to survey the extensive literature in this area, but instead refer the interested reader to, e.g., [24, 5, 12, 16, 6, 32, 37, 47, 8, 9, 54] and the references therein. For non-smooth functionals such as entropy, there is some recent progress [38, 26, 27] on designing theoretically minimax optimal estimators, while these estimators typically require the knowledge of the smoothness parameters, and the practical performances of these estimators are not yet known.
The k-nearest neighbor differential entropy estimator, or Kozachenko-Leonenko (KL) estimator, is computed in the following way. Let R_{i,k} be the distance between X_i and its k-nearest neighbor among {X_1, . . . , X_{i−1}, X_{i+1}, . . . , X_n}. Precisely, R_{i,k} equals the k-th smallest number in the list {‖X_i − X_j‖ : j ≠ i, j ∈ [n]}, where [n] = {1, 2, . . . , n}. Let B(x, ρ) denote the closed ℓ_2 ball centered at x of radius ρ and λ be the Lebesgue measure on R^d. The KL differential entropy estimator is defined as
\hat{h}_{n,k}(\mathbf{X}) = \ln k - \psi(k) + \frac{1}{n} \sum_{i=1}^{n} \ln\!\left( \frac{n\,\lambda(B(X_i, R_{i,k}))}{k} \right),   (3)

where ψ(x) is the digamma function with ψ(1) = −γ, and γ = -\int_0^\infty e^{-t} \ln t \, dt = 0.5772156… is the Euler–Mascheroni constant.
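A minimal implementation of (3) might look as follows (our sketch; it uses plain Euclidean distances, whereas the analysis below works with distances on the torus):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gamma

def kl_entropy(X, k=1):
    n, d = X.shape
    # Distance to the k-th nearest neighbour; query returns the point itself at index 0.
    R = cKDTree(X).query(X, k=k + 1)[0][:, k]
    V_d = np.pi ** (d / 2) / gamma(1 + d / 2)     # volume of the unit ball
    ball_vol = V_d * R ** d                       # lambda(B(X_i, R_{i,k}))
    return np.log(k) - digamma(k) + np.mean(np.log(n * ball_vol / k))

# Example: Uniform([0,1]^2) has differential entropy 0, so the estimate should be near 0.
print(kl_entropy(np.random.rand(100000, 2), k=1))
```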
There exists an intuitive explanation behind the construction of the KL differential entropy estimator. Writing informally, we have
h(f) = \mathbb{E}_f[-\ln f(X)] \approx \frac{1}{n}\sum_{i=1}^{n} -\ln f(X_i) \approx \frac{1}{n}\sum_{i=1}^{n} -\ln \hat{f}(X_i),   (4)
where the first approximation is based on the law of large numbers, and in the second approximation we have replaced f by a nearest neighbor density estimator f̂. The nearest neighbor density estimator f̂(X_i) follows from the “intuition”¹ that
\hat{f}(X_i)\,\lambda(B(X_i, R_{i,k})) \approx \frac{k}{n}.   (5)
Here the final additive bias correction term ln k − ψ(k) follows from a detailed analysis of the bias of the KL estimator, which will become apparent later.
We focus on the regime where k is fixed: in other words, it does not grow as the number of samples n increases. The fixed-k version of the KL estimator is widely applied in practice and enjoys smaller computational complexity; see [52].
There exists extensive literature on the analysis of the KL differential entropy estimator, which we refer to [4] for a recent survey. One of the major difficulties in analyzing the KL estimator is that the nearest neighbor density estimator exhibits a huge bias when the density is small. Indeed, it was shown in [42] that the bias of the nearest neighbor density estimator in fact does not vanish even
¹Precisely, we have \int_{B(X_i, R_{i,k})} f(u)\,du ∼ Beta(k, n − k) [4, Chap. 1.2]. A Beta(k, n − k) distributed random variable has mean k/n.
when n→∞ and deteriorates as f(x) gets close to zero. In the literature, a large collection of works assumes that the density is uniformly bounded away from zero [23, 29, 57, 30, 53], while others put various assumptions quantifying on average how close the density is to zero [25, 40, 56, 14, 20, 52, 11]. In this paper, we focus on removing assumptions on how close the density is to zero.
1.1 Main Contribution
Let H_d^s(L; [0, 1]^d) be the Hölder ball in the unit cube (torus) (formally defined later in Definition 2 in Appendix A), where s ∈ (0, 2] is the Hölder smoothness parameter. Then, the worst case risk of the fixed k-nearest neighbor differential entropy estimator over H_d^s(L; [0, 1]^d) is controlled by the following theorem.
Theorem 1 Let X = {X_1, . . . , X_n} be i.i.d. samples from density function f. Then, for 0 < s ≤ 2, the fixed k-nearest neighbor KL differential entropy estimator \hat{h}_{n,k} in (3) satisfies

\left( \sup_{f \in \mathcal{H}_d^s(L;[0,1]^d)} \mathbb{E}_f\big( \hat{h}_{n,k}(\mathbf{X}) - h(f) \big)^2 \right)^{1/2} \le C \left( n^{-\frac{s}{s+d}} \ln(n+1) + n^{-\frac{1}{2}} \right),   (6)
where C is a constant that depends only on s, L, k and d.
The KL estimator is in fact nearly minimax up to logarithmic factors, as shown in the following result from [26].
Theorem 2 [26] Let X = {X_1, . . . , X_n} be i.i.d. samples from density function f. Then, there exists a constant L_0 depending on s, d only such that for all L ≥ L_0, s > 0,

\left( \inf_{\hat h} \sup_{f \in \mathcal{H}_d^s(L;[0,1]^d)} \mathbb{E}_f\big( \hat{h}(\mathbf{X}) - h(f) \big)^2 \right)^{1/2} \ge c \left( n^{-\frac{s}{s+d}} (\ln(n+1))^{-\frac{s+2d}{s+d}} + n^{-\frac{1}{2}} \right),   (7)
where c is a constant that depends only on s, L and d.
Remark 1 We emphasize that one cannot remove the condition L ≥ L_0 in Theorem 2. Indeed, if the Hölder ball has too small a width, then the density itself is bounded away from zero, which makes the differential entropy a smooth functional, with minimax rates n^{-\frac{4s}{4s+d}} + n^{-1/2} [51, 50, 43].
Theorem 1 and 2 imply that for any fixed k, the KL estimator achieves the minimax rates up to logarithmic factors without knowing s for all s ∈ (0, 2], which implies that it is near minimax rate-optimal (within logarithmic factors) when the dimension d ≤ 2. We cannot expect the vanilla version of the KL estimator to adapt to higher order of smoothness since the nearest neighbor density estimator can be viewed as a variable width kernel density estimator with the box kernel, and it is well known in the literature (see, e.g., [55, Chapter 1]) that any positive kernel cannot exploit the smoothness s > 2. We refer to [26] for a more detailed discussion on this difficulty and potential solutions. The Jackknife idea, such as the one presented in [11, 3] might be useful for adapting to s > 2.
The significance of our work is multi-folded:
• We obtain the first uniform upper bound on the performance of the fixed k-nearest neighbor KL differential entropy estimator over Hölder balls without assuming how close the density could be from zero. We emphasize that assuming conditions of this type, such as the density being bounded away from zero, could make the problem significantly easier. For example, if the density f is assumed to satisfy f(x) ≥ c for some constant c > 0, then the differential entropy becomes a smooth functional and consequently, the general technique for estimating smooth nonparametric functionals [51, 50, 43] can be directly applied here to achieve the minimax rates n^{-\frac{4s}{4s+d}} + n^{-1/2}. The main technical tools that enabled us to remove the conditions on how close the density could be from zero are the Besicovitch covering lemma (Lemma 4) and the generalized Hardy–Littlewood maximal inequality.
• We show that, for any fixed k, the k-nearest neighbor KL entropy estimator nearly achieves the minimax rates without knowing the smoothness parameter s. In the functional estimation literature, designing estimators that can be theoretically proved to adapt to unknown
levels of smoothness is usually achieved using the Lepski method [39, 22, 45, 44, 27], which is not known to perform well in general in practice. On the other hand, a simple plug-in approach can achieve the rate of n^{-s/(s+d)}, but only when s is known [26]. The KL estimator is well known to exhibit excellent empirical performance, but existing theory has not yet demonstrated its near-“optimality” when the smoothness parameter s is not known. Recent works [3, 52, 11] analyzed the performance of the KL estimator under various assumptions on how close the density could be to zero, with no matching lower bound up to logarithmic factors in general. Our work makes a step towards closing this gap and provides a theoretical explanation for the wide usage of the KL estimator in practice.
The rest of the paper is organized as follows. Section 2 is dedicated to the proof of Theorem 1. We discuss some future directions in Section 3.
1.2 Notations
For positive sequences a_γ, b_γ, we use the notation a_γ ≲_α b_γ to denote that there exists a constant C that only depends on α such that sup_γ (a_γ / b_γ) ≤ C, and a_γ ≳_α b_γ is equivalent to b_γ ≲_α a_γ. The notation a_γ ≍_α b_γ means that both a_γ ≲_α b_γ and b_γ ≲_α a_γ hold. We write a_γ ≲ b_γ if the constant is universal and does not depend on any parameters. The notation a_γ ≫ b_γ means that lim inf_γ (a_γ / b_γ) = ∞, and a_γ ≪ b_γ is equivalent to b_γ ≫ a_γ. We write a ∧ b = min{a, b} and a ∨ b = max{a, b}.
2 Proof of Theorem 1
In this section, we will prove that

\left( \mathbb{E}\big( \hat{h}_{n,k}(\mathbf{X}) - h(f) \big)^2 \right)^{1/2} \lesssim_{s,L,d,k} n^{-\frac{s}{s+d}} \ln(n+1) + n^{-\frac{1}{2}},   (8)

for any f ∈ H_d^s(L; [0, 1]^d) and s ∈ (0, 2]. The proof consists of two parts: (i) the upper bound on the bias is of the form O_{s,L,d,k}(n^{-s/(s+d)} \ln(n+1)); (ii) the upper bound on the variance is O_{s,L,d,k}(n^{-1}). Below we show the bias proof and relegate the variance proof to Appendix B.
First, we introduce the following notation
f_t(x) = \frac{\mu(B(x,t))}{\lambda(B(x,t))} = \frac{1}{V_d t^d} \int_{u : |u - x| \le t} f(u)\, du.   (9)
Here µ is the probability measure specified by density function f on the torus, λ is the Lebesgue measure on Rd, and Vd = πd/2/Γ(1+d/2) is the Lebesgue measure of the unit ball in d-dimensional Euclidean space. Hence ft(x) is the average density of a neighborhood near x. We first state two main lemmas about ft(x) which will be used later in the proof.
Lemma 1 If f ∈ Hsd(L; [0, 1]d) for some 0 < s ≤ 2, then for any x ∈ [0, 1]d and t > 0, we have
| f_t(x) - f(x) | \le \frac{d L t^s}{s + d},   (10)
Lemma 2 If f ∈ Hsd(L; [0, 1]d) for some 0 < s ≤ 2 and f(x) ≥ 0 for all x ∈ [0, 1]d, then for any x and any t > 0, we have
f(x) \lesssim_{s,L,d} \max\left\{ f_t(x),\ \big( f_t(x) V_d t^d \big)^{s/(s+d)} \right\},   (11)
Furthermore, f(x) \lesssim_{s,L,d} 1.
We relegate the proof of Lemma 1 and Lemma 2 to Appendix C. Now we investigate the bias of ĥn,k(X). The following argument reduces the bias analysis of ĥn,k(X) to a function analytic problem. For notation simplicity, we introduce a new random variable X ∼ f independent of
{X_1, . . . , X_n} and study \hat{h}_{n+1,k}(\{X_1, . . . , X_n, X\}). For every x ∈ R^d, denote by R_k(x) the k-nearest neighbor distance from x to {X_1, X_2, . . . , X_n} under the distance d(x, y) = \min_{m \in \mathbb{Z}^d} ‖m + x − y‖, i.e., the k-nearest neighbor distance on the torus. Then,
\mathbb{E}[\hat{h}_{n+1,k}(\{X_1, \dots, X_n, X\})] - h(f)   (12)

= -\psi(k) + \mathbb{E}\big[ \ln\big( (n+1)\lambda(B(X, R_k(X))) \big) \big] + \mathbb{E}[\ln f(X)]   (13)

= \mathbb{E}\left[ \ln\left( \frac{f(X)\,\lambda(B(X, R_k(X)))}{\mu(B(X, R_k(X)))} \right) \right] + \mathbb{E}\big[ \ln\big( (n+1)\mu(B(X, R_k(X))) \big) \big] - \psi(k)   (14)

= \mathbb{E}\left[ \ln \frac{f(X)}{f_{R_k(X)}(X)} \right] + \Big( \mathbb{E}\big[ \ln\big( (n+1)\mu(B(X, R_k(X))) \big) \big] - \psi(k) \Big).   (15)
We first show that the second term \mathbb{E}[\ln((n+1)\mu(B(X, R_k(X))))] - \psi(k) can be universally controlled regardless of the smoothness of f. Indeed, the random variable \mu(B(X, R_k(X))) \sim \mathrm{Beta}(k, n+1-k) [4, Chap. 1.2], and it was shown in [4, Theorem 7.2] that there exists a universal constant C > 0 such that

\Big| \mathbb{E}\big[ \ln\big( (n+1)\mu(B(X, R_k(X))) \big) \big] - \psi(k) \Big| \le \frac{C}{n}.   (16)
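For intuition, here is a sketch of why a bound of this form is plausible (the precise constant is the content of [4, Theorem 7.2]): for Y ∼ Beta(k, n+1−k),

```latex
\mathbb{E}[\ln Y] = \psi(k) - \psi(n+1), \qquad
\mathbb{E}\big[\ln\big((n+1)Y\big)\big] - \psi(k) = \ln(n+1) - \psi(n+1) = O(1/n),
```

since \psi(m) = \ln m - \tfrac{1}{2m} + O(m^{-2}).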
Hence, it suffices to show that for 0 < s ≤ 2,

\left| \mathbb{E}\left[ \ln \frac{f(X)}{f_{R_k(X)}(X)} \right] \right| \lesssim_{s,L,d,k} n^{-\frac{s}{s+d}} \ln(n+1).   (17)
We split our analysis into two parts. Section 2.1 shows that \mathbb{E}[\ln(f_{R_k(X)}(X)/f(X))] \lesssim_{s,L,d,k} n^{-s/(s+d)}, and Section 2.2 shows that \mathbb{E}[\ln(f(X)/f_{R_k(X)}(X))] \lesssim_{s,L,d,k} n^{-s/(s+d)} \ln(n+1), which completes the proof.
2.1 Upper bound on E[ln(f_{R_k(X)}(X)/f(X))]
By the fact that ln y ≤ y − 1 for any y > 0, we have
\mathbb{E}\left[ \ln \frac{f_{R_k(X)}(X)}{f(X)} \right] \le \mathbb{E}\left[ \frac{f_{R_k(X)}(X) - f(X)}{f(X)} \right]   (18)

= \int_{[0,1]^d \cap \{x : f(x) \neq 0\}} \big( \mathbb{E}[f_{R_k(x)}(x)] - f(x) \big)\, dx.   (19)
Here the expectation is taken with respect to the randomness in R_k(x), the k-th smallest value of \min_{m \in \mathbb{Z}^d} ‖m + X_i − x‖ over 1 ≤ i ≤ n, for x ∈ R^d. Define the function g(x; f, n) as

g(x; f, n) = \sup\left\{ u \ge 0 : V_d u^d f_u(x) \le \frac{1}{n} \right\},   (20)
g(x; f, n) intuitively means the distance R such that the probability mass µ(B(x,R)) within R is 1/n. Then for any x ∈ [0, 1]d, we can split E[fRk(x)(x)]− f(x) into three terms as
\mathbb{E}[f_{R_k(x)}(x)] - f(x) = \mathbb{E}\big[ (f_{R_k(x)}(x) - f(x)) \mathbb{1}(R_k(x) \le n^{-1/(s+d)}) \big]   (21)

+ \mathbb{E}\big[ (f_{R_k(x)}(x) - f(x)) \mathbb{1}(n^{-1/(s+d)} < R_k(x) \le g(x; f, n)) \big]   (22)

+ \mathbb{E}\big[ (f_{R_k(x)}(x) - f(x)) \mathbb{1}(R_k(x) > g(x; f, n) \vee n^{-1/(s+d)}) \big]   (23)

= C_1 + C_2 + C_3.   (24)
Now we handle the three terms separately. Our goal is to show that for every x ∈ [0, 1]^d, C_i \lesssim_{s,L,d} n^{-s/(s+d)} for i ∈ {1, 2, 3}. Then, taking the integral with respect to x leads to the desired bound.
1. Term C_1: whenever R_k(x) ≤ n^{-1/(s+d)}, by Lemma 1, we have

|f_{R_k(x)}(x) - f(x)| \le \frac{d L\, R_k(x)^s}{s+d} \lesssim_{s,L,d} n^{-s/(s+d)},   (25)

which implies that

C_1 \le \mathbb{E}\big[ |f_{R_k(x)}(x) - f(x)| \,\mathbb{1}(R_k(x) \le n^{-1/(s+d)}) \big] \lesssim_{s,L,d} n^{-s/(s+d)}.   (26)
2. Term C_2: whenever R_k(x) satisfies n^{-1/(s+d)} < R_k(x) ≤ g(x; f, n), by the definition of g(x; f, n) we have V_d R_k(x)^d f_{R_k(x)}(x) \le \frac{1}{n}, which implies that

f_{R_k(x)}(x) \le \frac{1}{n V_d R_k(x)^d} \le \frac{1}{n V_d n^{-d/(s+d)}} \lesssim_{s,L,d} n^{-s/(s+d)}.   (27)

It follows from Lemma 2 that in this case

f(x) \lesssim_{s,L,d} f_{R_k(x)}(x) \vee \big( f_{R_k(x)}(x) V_d R_k(x)^d \big)^{s/(s+d)}   (28)

\le n^{-s/(s+d)} \vee n^{-s/(s+d)} = n^{-s/(s+d)}.   (29)

Hence, we have

C_2 = \mathbb{E}\big[ (f_{R_k(x)}(x) - f(x)) \mathbb{1}( n^{-1/(s+d)} < R_k(x) \le g(x; f, n) ) \big]   (30)

\le \mathbb{E}\big[ (f_{R_k(x)}(x) + f(x)) \mathbb{1}( n^{-1/(s+d)} < R_k(x) \le g(x; f, n) ) \big]   (31)

\lesssim_{s,L,d} n^{-s/(s+d)}.   (32)
3. Term C_3: we have

C_3 \le \mathbb{E}\big[ (f_{R_k(x)}(x) + f(x)) \mathbb{1}( R_k(x) > g(x; f, n) \vee n^{-1/(s+d)} ) \big].   (33)

For any x such that R_k(x) > n^{-1/(s+d)}, we have

f_{R_k(x)}(x) \lesssim_{s,L,d} V_d R_k(x)^d f_{R_k(x)}(x)\, n^{d/(s+d)},   (34)

and by Lemma 2,

f(x) \lesssim_{s,L,d} f_{R_k(x)}(x) \vee \big( V_d R_k(x)^d f_{R_k(x)}(x) \big)^{s/(s+d)}   (35)

\le f_{R_k(x)}(x) + \big( V_d R_k(x)^d f_{R_k(x)}(x) \big)^{s/(s+d)}.   (36)

Hence,

f(x) + f_{R_k(x)}(x) \lesssim_{s,L,d} 2 f_{R_k(x)}(x) + \big( V_d R_k(x)^d f_{R_k(x)}(x) \big)^{s/(s+d)}   (37)

\lesssim_{s,L,d} V_d R_k(x)^d f_{R_k(x)}(x)\, n^{d/(s+d)} + \big( V_d R_k(x)^d f_{R_k(x)}(x) \big)^{s/(s+d)}   (38)

\lesssim_{s,L,d} V_d R_k(x)^d f_{R_k(x)}(x)\, n^{d/(s+d)},   (39)
where in the last step we have used the fact that V_d R_k(x)^d f_{R_k(x)}(x) > n^{-1} since R_k(x) > g(x; f, n). Finally, we have

C_3 \lesssim_{s,L,d} n^{d/(s+d)} \mathbb{E}\big[ (V_d R_k(x)^d f_{R_k(x)}(x)) \mathbb{1}( R_k(x) > g(x; f, n) ) \big]   (40)

= n^{d/(s+d)} \mathbb{E}\big[ (V_d R_k(x)^d f_{R_k(x)}(x)) \mathbb{1}( V_d R_k(x)^d f_{R_k(x)}(x) > 1/n ) \big].   (41)
Note that V_d R_k(x)^d f_{R_k(x)}(x) ∼ Beta(k, n+1−k), and if Y ∼ Beta(k, n+1−k), we have
$$E[Y^2] = \left(\frac{k}{n+1}\right)^2 + \frac{k(n+1-k)}{(n+1)^2(n+2)} \lesssim_k \frac{1}{n^2}. \tag{42}$$
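For completeness (a step left implicit above), (42) is the usual mean–variance decomposition of the second moment of a Beta(k, n+1−k) variable:
$$E[Y^2] = (E[Y])^2 + \operatorname{Var}(Y), \qquad E[Y] = \frac{k}{n+1}, \qquad \operatorname{Var}(Y) = \frac{k(n+1-k)}{(n+1)^2(n+2)}.$$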
Notice that E[Y 1(Y > 1/n)] ≤ nE[Y²], since 1(Y > 1/n) ≤ nY. Hence, we have
$$C_3 \lesssim_{s,L,d} n^{d/(s+d)} \cdot n\, E\big[(V_d R_k(x)^d f_{R_k(x)}(x))^2\big] \tag{43}$$
$$\lesssim_{s,L,d,k} n^{d/(s+d)} \cdot \frac{n}{n^2} = n^{-s/(s+d)}. \tag{44}$$
2.2 Upper bound on $E\big[\ln\frac{f(X)}{f_{R_k(X)}(X)}\big]$

By splitting the term into two parts, we have
$$
\begin{aligned}
E\left[\ln\frac{f(X)}{f_{R_k(X)}(X)}\right] &= E\left[\int_{[0,1]^d \cap \{x : f(x) \neq 0\}} f(x) \ln\frac{f(x)}{f_{R_k(x)}(x)}\, dx\right] && (45)\\
&= E\left[\int_A f(x) \ln\frac{f(x)}{f_{R_k(x)}(x)}\, \mathbb{1}(f_{R_k(x)}(x) > n^{-s/(s+d)})\, dx\right] && (46)\\
&\quad + E\left[\int_A f(x) \ln\frac{f(x)}{f_{R_k(x)}(x)}\, \mathbb{1}(f_{R_k(x)}(x) \le n^{-s/(s+d)})\, dx\right] && (47)\\
&= C_4 + C_5. && (48)
\end{aligned}
$$
Here we denote A = [0, 1]^d ∩ {x : f(x) ≠ 0} for simplicity of notation. For the term C4, we have
$$
\begin{aligned}
C_4 &\le E\left[\int_A f(x) \left(\frac{f(x) - f_{R_k(x)}(x)}{f_{R_k(x)}(x)}\right) \mathbb{1}(f_{R_k(x)}(x) > n^{-s/(s+d)})\, dx\right] && (49)\\
&= E\left[\int_A \frac{(f(x) - f_{R_k(x)}(x))^2}{f_{R_k(x)}(x)}\, \mathbb{1}(f_{R_k(x)}(x) > n^{-s/(s+d)})\, dx\right] && (50)\\
&\quad + E\left[\int_A \big(f(x) - f_{R_k(x)}(x)\big)\, \mathbb{1}(f_{R_k(x)}(x) > n^{-s/(s+d)})\, dx\right] && (51)\\
&\le n^{s/(s+d)}\, E\left[\int_A \big(f(x) - f_{R_k(x)}(x)\big)^2 dx\right] + E\left[\int_A \big(f(x) - f_{R_k(x)}(x)\big)\, dx\right]. && (52)
\end{aligned}
$$
In the proof of the upper bound on $E\big[\ln\frac{f_{R_k(X)}(X)}{f(X)}\big]$, we have shown that E[f_{R_k(x)}(x) − f(x)] ≲_{s,L,d,k} n^{-s/(s+d)} for any x ∈ A. Similarly as in that proof, we have E[(f_{R_k(x)}(x) − f(x))²] ≲_{s,L,d,k} n^{-2s/(s+d)} for every x ∈ A. Therefore, we have
$$C_4 \lesssim_{s,L,d,k} n^{s/(s+d)} n^{-2s/(s+d)} + n^{-s/(s+d)} \lesssim_{s,L,d,k} n^{-s/(s+d)}. \tag{53}$$
Now we consider C5. We conjecture that C5 ≲_{s,L,d,k} n^{-s/(s+d)} in this case, but we were not able to prove it. Below we prove that C5 ≲_{s,L,d,k} n^{-s/(s+d)} ln(n+1). Define the function
$$M(x) = \sup_{t > 0} \frac{1}{f_t(x)}. \tag{54}$$
Since f_{R_k(x)}(x) ≤ n^{-s/(s+d)}, we have M(x) = sup_{t>0}(1/f_t(x)) ≥ 1/f_{R_k(x)}(x) ≥ n^{s/(s+d)}. Denote ln+(y) = max{ln(y), 0} for any y > 0. Therefore, we have
$$
\begin{aligned}
C_5 &\le E\left[\int_A f(x) \ln_+\!\left(\frac{f(x)}{f_{R_k(x)}(x)}\right) \mathbb{1}(f_{R_k(x)}(x) \le n^{-s/(s+d)})\, dx\right] && (55)\\
&\le E\left[\int_A f(x) \ln_+\!\left(\frac{f(x)}{f_{R_k(x)}(x)}\right) \mathbb{1}(M(x) \ge n^{s/(s+d)})\, dx\right] && (56)\\
&\le \int_A f(x)\, E\left[\ln_+\!\left(\frac{1}{(n+1) V_d R_k(x)^d f_{R_k(x)}(x)}\right)\right] \mathbb{1}(M(x) \ge n^{s/(s+d)})\, dx && (57)\\
&\quad + \int_A f(x)\, E\left[\ln_+\!\left((n+1) V_d R_k(x)^d f(x)\right)\right] \mathbb{1}(M(x) \ge n^{s/(s+d)})\, dx && (58)\\
&= C_{51} + C_{52}, && (59)
\end{aligned}
$$
where the last inequality uses the fact ln+(xy) ≤ ln+ x + ln+ y for all x, y > 0. As for C51, since V_d R_k(x)^d f_{R_k(x)}(x) ∼ Beta(k, n+1−k), and for Y ∼ Beta(k, n+1−k), we have
$$
\begin{aligned}
E\left[\ln_+\!\left(\frac{1}{(n+1)Y}\right)\right] &= \int_0^{\frac{1}{n+1}} \ln\!\left(\frac{1}{(n+1)x}\right) p_Y(x)\, dx && (60)\\
&= E\left[\ln\!\left(\frac{1}{(n+1)Y}\right)\right] + \int_{\frac{1}{n+1}}^1 \ln\big((n+1)x\big)\, p_Y(x)\, dx && (61)\\
&\le E\left[\ln\!\left(\frac{1}{(n+1)Y}\right)\right] + \ln(n+1) \int_{\frac{1}{n+1}}^1 p_Y(x)\, dx && (62)\\
&\le E\left[\ln\!\left(\frac{1}{(n+1)Y}\right)\right] + \ln(n+1) && (63)\\
&\le \ln(n+1), && (64)
\end{aligned}
$$
where in the last inequality we used the fact that $E\big[\ln\frac{1}{(n+1)Y}\big] = \psi(n+1) - \psi(k) - \ln(n+1) \le 0$ for any k ≥ 1. Hence,
$$C_{51} \lesssim_{s,L,d} \ln(n+1) \int_A f(x)\, \mathbb{1}(M(x) \ge n^{s/(s+d)})\, dx. \tag{65}$$
Now we introduce the following lemma, which is proved in Appendix C.
Lemma 3 Let µ1, µ2 be two Borel measures that are finite on the bounded Borel sets of R^d. Then, for all t > 0 and any Borel set A ⊂ R^d,
$$\mu_1\left(\left\{ x \in A : \sup_{0 < \rho \le D} \left(\frac{\mu_2(B(x,\rho))}{\mu_1(B(x,\rho))}\right) > t \right\}\right) \le \frac{C_d}{t}\, \mu_2(A^D). \tag{66}$$
Here C_d > 0 is a constant that depends only on the dimension d and
$$A^D = \{x : \exists y \in A,\ |y - x| \le D\}. \tag{67}$$
Applying the second part of Lemma 3 with µ2 being the Lebesgue measure and µ1 being the measure specified by f(x) on the torus, we can view the function M(x) as
$$M(x) = \sup_{0 < \rho \le 1/2} \frac{\mu_2(B(x,\rho))}{\mu_1(B(x,\rho))}. \tag{68}$$
Taking A = [0, 1]^d ∩ {x : f(x) ≠ 0} and t = n^{s/(s+d)}, then µ2(A^{1/2}) ≤ 2^d, so we know that
$$
\begin{aligned}
C_{51} &\lesssim_{s,L,d} \ln(n+1) \cdot \int_A f(x)\, \mathbb{1}(M(x) \ge n^{s/(s+d)})\, dx && (69)\\
&= \ln(n+1) \cdot \mu_1\big(x \in [0,1]^d,\ f(x) \neq 0,\ M(x) \ge n^{s/(s+d)}\big) && (70)\\
&\le \ln(n+1) \cdot C_d\, n^{-s/(s+d)} \mu_2(A^{1/2}) \lesssim_{s,L,d} n^{-s/(s+d)} \ln(n+1). && (71)
\end{aligned}
$$
Now we deal with C52. Recall from Lemma 2 that f(x) ≲_{s,L,d} 1 for any x, and R_k(x) ≤ 1, so ln+((n+1) V_d R_k(x)^d f(x)) ≲_{s,L,d} ln(n+1). Therefore,
$$C_{52} \lesssim_{s,L,d} \ln(n+1) \cdot \int_A f(x)\, \mathbb{1}(M(x) \ge n^{s/(s+d)})\, dx \tag{72}$$
$$\lesssim_{s,L,d} n^{-s/(s+d)} \ln(n+1). \tag{73}$$
Therefore, we have proved that C5 ≤ C51 + C52 ≲_{s,L,d} n^{-s/(s+d)} ln(n+1), which completes the proof of the upper bound on $E\big[\ln\frac{f(X)}{f_{R_k(X)}(X)}\big]$.
3 Future directions
It is a tempting question to ask whether one can close the logarithmic gap between Theorems 1 and 2. We believe that neither the upper bound nor the lower bound is tight. In fact, we conjecture that the upper bound in Theorem 1 could be improved to n^{-s/(s+d)} + n^{-1/2} through a more careful analysis of the bias, since Hardy–Littlewood maximal inequalities apply to arbitrary measurable functions but we have assumed regularity properties of the underlying density. We conjecture that the minimax lower bound could be improved to (n ln n)^{-s/(s+d)} + n^{-1/2}, since a kernel density estimator based differential entropy estimator was constructed in [26] which achieves the upper bound (n ln n)^{-s/(s+d)} + n^{-1/2} over H^s_d(L; [0,1]^d) with the knowledge of s. It would be interesting to extend our analysis to that of the k-nearest neighbor based Kullback–Leibler divergence estimator [59]. The discrete case has been studied recently [28, 7].
It is also interesting to analyze k-nearest neighbor based mutual information estimators, such as the KSG estimator [34], and show that they are “near”-optimal and adaptive to both the smoothness and the dimension of the distributions. There exists some analysis of the KSG estimator [21] but we suspect the upper bound is not tight. Moreover, a slightly revised version of KSG estimator is proved to be consistent even if the underlying distribution is not purely continuous nor purely discrete [19], but the optimality properties are not yet well understood. | 1. What is the focus of the paper regarding the nearest neighbor information estimator?
2. What are the significant theoretical contributions of the paper, particularly in handling density near zero?
3. How does the KL estimator compare with previous works, such as Han et al. (2017), in terms of technique and assumption?
4. What are the minor comments or suggestions for improvement in the review? | Review | Review
This paper studies the nearest neighbor information estimator, aka the Kozachenko-Leonenko (KL) estimator for the differential entropy. Matching upper and lower bounds (up to log factor) are proven under a H\"older ball condition. The paper is of high quality and clarity. The introductory and the technical parts are smoothly written, although I could not verify all the details of the proofs. References to the literature look great. Intuitions are given along the way of the presentation. There are two significant theoretical contributions. First, estimating the entropy becomes significantly harder when the density is allowed to be close to zero. A previous work I am aware of that handles this situation is [Han, Jiao, Weissman, Wu 2017] that uses sophisticated kernel and polynomial approximation techniques, while this works shows that the simple KL estimator achieves similar effect (albeit under other different assumptions). Second, the KL estimator does not use the smoothness parameter $s$, so it is naturally adaptive and achieves bounds depending on $s$. A couple of minor comments are in the sequel. When introducing the quantity $R_{i,k}$, it should be clearly stated what ``distance between $X_i$ and its $k$-nearest neighbor" means here. This might be too much beyond the scope of the paper, but do you have results for the KL estimator under the more general Lipschitz ball condition [HJWW17]? |
NIPS | Title
The Nearest Neighbor Information Estimator is Adaptively Near Minimax Rate-Optimal
Abstract
We analyze the Kozachenko–Leonenko (KL) fixed k-nearest neighbor estimator for the differential entropy. We obtain the first uniform upper bound on its performance for any fixed k over Hölder balls on a torus without assuming any conditions on how close the density could be from zero. Accompanying a recent minimax lower bound over the Hölder ball, we show that the KL estimator for any fixed k is achieving the minimax rates up to logarithmic factors without cognizance of the smoothness parameter s of the Hölder ball for s ∈ (0, 2] and arbitrary dimension d, rendering it the first estimator that provably satisfies this property.
1 Introduction
Information theoretic measures such as entropy, Kullback-Leibler divergence and mutual information quantify the amount of information among random variables. They have many applications in modern machine learning tasks, such as classification [48], clustering [46, 58, 10, 41] and feature selection [1, 17]. Information theoretic measures and their variants can also be applied in several data science domains such as causal inference [18], sociology [49] and computational biology [36]. Estimating information theoretic measures from data is a crucial sub-routine in the aforementioned applications and has attracted much interest in statistics community. In this paper, we study the problem of estimating Shannon differential entropy, which is the basis of estimating other information theoretic measures for continuous random variables.
Suppose we observe n independent identically distributed random vectors X = {X1, . . . , Xn} drawn from density function f where Xi ∈ Rd. We consider the problem of estimating the differential entropy
h(f) = − ∫ f(x) ln f(x)dx , (1)
from the empirical observations X. The fundamental limit of estimating the differential entropy is given by the minimax risk
$$\inf_{\hat h} \sup_{f \in \mathcal F} \left( E\big(\hat h(X) - h(f)\big)^2 \right)^{1/2}, \tag{2}$$
where the infimum is taken over all estimators ĥ that are functions of the empirical data X. Here F denotes a (nonparametric) class of density functions.
The problem of differential entropy estimation has been investigated extensively in the literature. As discussed in [2], there exist two main approaches, where one is based on kernel density estimators [30], and the other is based on the nearest neighbor methods [56, 53, 52, 11, 3], which is pioneered by the work of [33].
The problem of differential entropy estimation lies in the general problem of estimating nonparametric functionals. Unlike the parametric counterparts, the problem of estimating nonparametric functionals is challenging even for smooth functionals. Initial efforts have focused on inference of linear, quadratic, and cubic functionals in Gaussian white noise and density models and have laid the foundation for the ensuing research. We do not attempt to survey the extensive literature in this area, but instead refer to the interested reader to, e.g., [24, 5, 12, 16, 6, 32, 37, 47, 8, 9, 54] and the references therein. For non-smooth functionals such as entropy, there is some recent progress [38, 26, 27] on designing theoretically minimax optimal estimators, while these estimators typically require the knowledge of the smoothness parameters, and the practical performances of these estimators are not yet known.
The k-nearest neighbor differential entropy estimator, or Kozachenko-Leonenko (KL) estimator, is computed in the following way. Let R_{i,k} be the distance between X_i and its k-nearest neighbor among {X1, . . . , X_{i−1}, X_{i+1}, . . . , Xn}. Precisely, R_{i,k} equals the k-th smallest number in the list {‖X_i − X_j‖ : j ≠ i, j ∈ [n]}, where [n] = {1, 2, . . . , n}. Let B(x, ρ) denote the closed ℓ2 ball centered at x of radius ρ and λ be the Lebesgue measure on R^d. The KL differential entropy estimator is defined as
$$\hat h_{n,k}(X) = \ln k - \psi(k) + \frac{1}{n} \sum_{i=1}^n \ln\left(\frac{n}{k}\, \lambda(B(X_i, R_{i,k}))\right), \tag{3}$$
where ψ(x) is the digamma function with ψ(1) = −γ, and γ = −∫_0^∞ e^{−t} ln t dt = 0.5772156… is the Euler–Mascheroni constant.
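To make (3) concrete, the following is a minimal NumPy/SciPy sketch (ours, not from the paper); it uses brute-force nearest-neighbor search, so it is only meant for small n, and the helper name `kl_entropy` is our own.

```python
import numpy as np
from scipy.special import digamma, gamma

def kl_entropy(X, k=1):
    """Kozachenko-Leonenko estimate (3) of the differential entropy in nats (illustrative sketch).

    X : (n, d) array of i.i.d. samples; k : fixed nearest-neighbor index.
    """
    n, d = X.shape
    # R_{i,k}: distance from X_i to its k-th nearest neighbor among the other points
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    r_k = np.sort(dists, axis=1)[:, k - 1]
    # lambda(B(X_i, R_{i,k})): Lebesgue measure of the closed l2 ball of radius R_{i,k}
    unit_ball_vol = np.pi ** (d / 2) / gamma(1 + d / 2)
    ball_vol = unit_ball_vol * r_k ** d
    return np.log(k) - digamma(k) + np.mean(np.log(n / k * ball_vol))

# sanity check: for N(0, I_d), h(f) = (d/2) * ln(2*pi*e)
rng = np.random.default_rng(0)
d = 2
print(kl_entropy(rng.standard_normal((2000, d)), k=1), d / 2 * np.log(2 * np.pi * np.e))
```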
There exists an intuitive explanation behind the construction of the KL differential entropy estimator. Writing informally, we have
$$h(f) = E_f[-\ln f(X)] \approx \frac{1}{n}\sum_{i=1}^n -\ln f(X_i) \approx \frac{1}{n}\sum_{i=1}^n -\ln \hat f(X_i), \tag{4}$$
where the first approximation is based on the law of large numbers, and in the second approximation we have replaced f by a nearest neighbor density estimator f̂. The nearest neighbor density estimator f̂(X_i) follows from the “intuition”¹ that
$$\hat f(X_i)\, \lambda(B(X_i, R_{i,k})) \approx \frac{k}{n}. \tag{5}$$
Here the final additive bias correction term ln k − ψ(k) follows from a detailed analysis of the bias of the KL estimator, which will become apparent later.
We focus on the regime where k is fixed: in other words, it does not grow as the number of samples n increases. The fixed-k version of the KL estimator is widely applied in practice and enjoys a smaller computational complexity; see [52].
There exists extensive literature on the analysis of the KL differential entropy estimator, which we refer to [4] for a recent survey. One of the major difficulties in analyzing the KL estimator is that the nearest neighbor density estimator exhibits a huge bias when the density is small. Indeed, it was shown in [42] that the bias of the nearest neighbor density estimator in fact does not vanish even
¹Precisely, we have $\int_{B(X_i, R_{i,k})} f(u)\, du \sim \mathrm{Beta}(k, n-k)$ [4, Chap. 1.2]. A Beta(k, n − k) distributed random variable has mean k/n.
when n→∞ and deteriorates as f(x) gets close to zero. In the literature, a large collection of work assume that the density is uniformly bounded away from zero [23, 29, 57, 30, 53], while others put various assumptions quantifying on average how close the density is to zero [25, 40, 56, 14, 20, 52, 11]. In this paper, we focus on removing assumptions on how close the density is to zero.
1.1 Main Contribution
Let H^s_d(L; [0,1]^d) be the Hölder ball in the unit cube (torus), formally defined later in Definition 2 in Appendix A, where s ∈ (0, 2] is the Hölder smoothness parameter. Then, the worst case risk of the fixed k-nearest neighbor differential entropy estimator over H^s_d(L; [0,1]^d) is controlled by the following theorem.
Theorem 1 Let X = {X1, . . . , Xn} be i.i.d. samples from density function f. Then, for 0 < s ≤ 2, the fixed k-nearest neighbor KL differential entropy estimator ĥ_{n,k} in (3) satisfies
$$\left( \sup_{f \in \mathcal H^s_d(L;[0,1]^d)} E_f\big(\hat h_{n,k}(X) - h(f)\big)^2 \right)^{\frac12} \le C\left( n^{-\frac{s}{s+d}} \ln(n+1) + n^{-\frac12} \right), \tag{6}$$
where C is a constant that depends only on s, L, k and d.
The KL estimator is in fact nearly minimax up to logarithmic factors, as shown in the following result from [26].
Theorem 2 [26] Let X = {X1, . . . , Xn} be i.i.d. samples from density function f. Then, there exists a constant L0 depending on s, d only such that for all L ≥ L0, s > 0,
$$\left( \inf_{\hat h} \sup_{f \in \mathcal H^s_d(L;[0,1]^d)} E_f\big(\hat h(X) - h(f)\big)^2 \right)^{\frac12} \ge c\left( n^{-\frac{s}{s+d}} (\ln(n+1))^{-\frac{s+2d}{s+d}} + n^{-\frac12} \right), \tag{7}$$
where c is a constant that depends only on s, L and d.
Remark 1 We emphasize that one cannot remove the condition L ≥ L0 in Theorem 2. Indeed, if the Hölder ball has too small a width, then the density itself is bounded away from zero, which makes the differential entropy a smooth functional, with minimax rates n^{-4s/(4s+d)} + n^{-1/2} [51, 50, 43].
Theorem 1 and 2 imply that for any fixed k, the KL estimator achieves the minimax rates up to logarithmic factors without knowing s for all s ∈ (0, 2], which implies that it is near minimax rate-optimal (within logarithmic factors) when the dimension d ≤ 2. We cannot expect the vanilla version of the KL estimator to adapt to higher order of smoothness since the nearest neighbor density estimator can be viewed as a variable width kernel density estimator with the box kernel, and it is well known in the literature (see, e.g., [55, Chapter 1]) that any positive kernel cannot exploit the smoothness s > 2. We refer to [26] for a more detailed discussion on this difficulty and potential solutions. The Jackknife idea, such as the one presented in [11, 3] might be useful for adapting to s > 2.
The significance of our work is twofold:
• We obtain the first uniform upper bound on the performance of the fixed k-nearest neighbor KL differential entropy estimator over Hölder balls without assuming how close the density could be from zero. We emphasize that assuming conditions of this type, such as the density being bounded away from zero, could make the problem significantly easier. For example, if the density f is assumed to satisfy f(x) ≥ c for some constant c > 0, then the differential entropy becomes a smooth functional and consequently, the general technique for estimating smooth nonparametric functionals [51, 50, 43] can be directly applied here to achieve the minimax rates n^{-4s/(4s+d)} + n^{-1/2}. The main technical tools that enabled us to remove the conditions on how close the density could be from zero are the Besicovitch covering lemma (Lemma 4) and the generalized Hardy–Littlewood maximal inequality.
• We show that, for any fixed k, the k-nearest neighbor KL entropy estimator nearly achieves the minimax rates without knowing the smoothness parameter s. In the functional estimation literature, designing estimators that can be theoretically proved to adapt to unknown
levels of smoothness is usually achieved using the Lepski method [39, 22, 45, 44, 27], which is not known to be performing well in general in practice. On the other hand, a simple plug-in approach can achieves the rate of n−s/(s+d), but only when s is known [26]. The KL estimator is well known to exhibit excellent empirical performance, but existing theory has not yet demonstrated its near-“optimality” when the smoothness parameter s is not known. Recent works [3, 52, 11] analyzed the performance of the KL estimator under various assumptions on how close the density could be to zero, with no matching lower bound up to logarithmic factors in general. Our work makes a step towards closing this gap and provides a theoretical explanation for the wide usage of the KL estimator in practice.
The rest of the paper is organized as follows. Section 2 is dedicated to the proof of Theorem 1. We discuss some future directions in Section 3.
1.2 Notations
For positive sequences aγ, bγ, we use the notation aγ ≲α bγ to denote that there exists a universal constant C that only depends on α such that sup_γ (aγ/bγ) ≤ C, and aγ ≳α bγ is equivalent to bγ ≲α aγ. Notation aγ ≍α bγ is equivalent to aγ ≲α bγ and bγ ≲α aγ. We write aγ ≲ bγ if the constant is universal and does not depend on any parameters. Notation aγ ≫ bγ means that lim inf_γ (aγ/bγ) = ∞, and aγ ≪ bγ is equivalent to bγ ≫ aγ. We write a ∧ b = min{a, b} and a ∨ b = max{a, b}.
2 Proof of Theorem 1
In this section, we will prove that
$$\left( E\big(\hat h_{n,k}(X) - h(f)\big)^2 \right)^{\frac12} \lesssim_{s,L,d,k} n^{-\frac{s}{s+d}} \ln(n+1) + n^{-\frac12}, \tag{8}$$
for any f ∈ H^s_d(L; [0,1]^d) and s ∈ (0, 2]. The proof consists of two parts: (i) the upper bound on the bias is of the form O_{s,L,d,k}(n^{-s/(s+d)} ln(n+1)); (ii) the upper bound on the variance is O_{s,L,d,k}(n^{-1}). Below we show the bias proof and relegate the variance proof to Appendix B.
First, we introduce the following notation
$$f_t(x) = \frac{\mu(B(x,t))}{\lambda(B(x,t))} = \frac{1}{V_d t^d} \int_{u : |u-x| \le t} f(u)\, du. \tag{9}$$
Here µ is the probability measure specified by density function f on the torus, λ is the Lebesgue measure on Rd, and Vd = πd/2/Γ(1+d/2) is the Lebesgue measure of the unit ball in d-dimensional Euclidean space. Hence ft(x) is the average density of a neighborhood near x. We first state two main lemmas about ft(x) which will be used later in the proof.
Lemma 1 If f ∈ Hsd(L; [0, 1]d) for some 0 < s ≤ 2, then for any x ∈ [0, 1]d and t > 0, we have
$$| f_t(x) - f(x) | \le \frac{dL\, t^s}{s+d}. \tag{10}$$
Lemma 2 If f ∈ Hsd(L; [0, 1]d) for some 0 < s ≤ 2 and f(x) ≥ 0 for all x ∈ [0, 1]d, then for any x and any t > 0, we have
$$f(x) \lesssim_{s,L,d} \max\left\{ f_t(x),\ \big(f_t(x)\, V_d t^d\big)^{s/(s+d)} \right\}. \tag{11}$$
Furthermore, f(x) ≲_{s,L,d} 1.
We relegate the proof of Lemma 1 and Lemma 2 to Appendix C. Now we investigate the bias of ĥn,k(X). The following argument reduces the bias analysis of ĥn,k(X) to a function analytic problem. For notation simplicity, we introduce a new random variable X ∼ f independent of
{X1, . . . , Xn} and study ĥn+1,k({X1, . . . , Xn, X}). For every x ∈ Rd, denote Rk(x) by the knearest neighbor distance from x to {X1, X2, . . . , Xn} under distance d(x, y) = minm∈Zd ‖m + x− y‖, i.e., the k-nearest neighbor distance on the torus. Then,
E[ĥn+1,k({X1, . . . , Xn, X})]− h(f) (12) = −ψ(k) + E [ ln ( (n+ 1)λ(B(X,Rk(X))) )] + E [ln f(X)] (13)
= E [ ln ( f(X)λ(B(X,Rk(X)))
µ(B(X,Rk(X)))
)] + E [ ln ((n+ 1)µ(B(X,Rk(X))) ) ]− ψ(k) (14)
= E [ ln f(X)
fRk(X)(X)
] + (E [ ln ((n+ 1)µ(B(X,Rk(X))) ) ]− ψ(k) ) . (15)
We first show that the second term E [ln ((n+ 1)µ(B(X,Rk(X))))] − ψ(k) can be universally controlled regardless of the smoothness of f . Indeed, the random variable µ(B(X,Rk(X))) ∼ Beta(k, n+ 1− k) [4, Chap. 1.2] and it was shown in [4, Theorem 7.2] that there exists a universal constant C > 0 such that∣∣∣E [ln ((n+ 1)µ(B(X,Rk(X))))]− ψ(k) ∣∣∣ ≤ C
n . (16)
Hence, it suffices to show that for 0 < s ≤ 2,∣∣∣∣E [ln f(X)fRk(X)(X) ]∣∣∣∣ .s,L,d,k n− ss+d ln(n+ 1). (17)
We split our analysis into two parts. Section 2.1 shows that E [ ln fRk(X)(X)
f(X)
] .s,L,d,k n − ss+d and
Section 2.2 shows that E [ ln f(X)fRk(X)(X) ] .s,L,d,k n − ss+d ln(n+ 1), which completes the proof.
2.1 Upper bound on E [ ln fRk(X)(X)
f(X) ] By the fact that ln y ≤ y − 1 for any y > 0, we have
E [ ln fRk(X)(X)
f(X)
] ≤ E [ fRk(X)(X)− f(X)
f(X)
] (18)
= ∫ [0,1]d∩{x:f(x)6=0} ( E[fRk(x)(x)]− f(x) ) dx. (19)
Here the expectation is taken with respect to the randomness in Rk(x) = min1≤i≤n,m∈Zd ‖m + Xi − x‖, x ∈ Rd. Define function g(x; f, n) as
g(x; f, n) = sup { u ≥ 0 : Vdudfu(x) ≤ 1
n
} , (20)
g(x; f, n) intuitively means the distance R such that the probability mass µ(B(x,R)) within R is 1/n. Then for any x ∈ [0, 1]d, we can split E[fRk(x)(x)]− f(x) into three terms as
E[fRk(x)(x)]− f(x) = E[(fRk(x)(x)− f(x))1(Rk(x) ≤ n −1/(s+d))] (21)
+ E[(fRk(x)(x)− f(x))1(n −1/(s+d) < Rk(x) ≤ g(x; f, n))] (22) + E[(fRk(x)(x)− f(x))1(Rk(x) > g(x; f, n) ∨ n −1/(s+d))] (23) = C1 + C2 + C3. (24)
Now we handle three terms separately. Our goal is to show that for every x ∈ [0, 1], Ci .s,L,d n−s/(s+d) for i ∈ {1, 2, 3}. Then, taking the integral with respect to x leads to the desired bound.
1. Term C1: whenever Rk(x) ≤ n−1/(s+d), by Lemma 1, we have
|fRk(x)(x)− f(x)| ≤ dLRk(x)
s
s+ d .s,L,d n
−s/(s+d), (25)
which implies that C1 ≤ E [∣∣fRk(x)(x)− f(x)∣∣1(Rk(x) ≤ n−1/(s+d))] .s,L,d n−s/(s+d). (26)
2. Term C2: whenever Rk(x) satisfies that n−1/(s+d) < Rk(x) ≤ g(x; f, n), by definition of g(x; f, n), we have VdRk(x)dfRk(x)(x) ≤ 1n , which implies that
fRk(x)(x) ≤ 1 nVdRk(x)d ≤ 1 nVdn−d/(s+d) .s,L,d n −s/(s+d). (27)
It follows from Lemma 2 that in this case
f(x) .s,L,d fRk(x)(x) ∨ ( fRk(x)(x)VdRk(x) d )s/(s+d)
(28)
≤ n−s/(s+d) ∨ n−s/(s+d) = n−s/(s+d). (29)
Hence, we have C2 = E [ (fRk(x)(x)− f(x))1 ( n−1/(s+d) < Rk(x) ≤ g(x; f, n) )] (30)
≤ E [ (fRk(x)(x) + f(x))1 ( n−1/(s+d) < Rk(x) ≤ g(x; f, n) )] (31)
.s,L,d n −s/(s+d). (32)
3. Term C3: we have C3 ≤ E [ (fRk(x)(x) + f(x))1 ( Rk(x) > g(x; f, n) ∨ n−1/(s+d) )] . (33)
For any x such that Rk(x) > n−1/(s+d), we have
fRk(x)(x) .s,L,d VdRk(x) dfRk(x)(x)n d/(s+d), (34)
and by Lemma 2,
f(x) .s,L,d fRk(x)(x) ∨ (VdRk(x) dfRk(x)(x)) s/(s+d) (35)
≤ fRk(x)(x) + (VdRk(x) dfRk(x)(x)) s/(s+d). (36)
Hence,
f(x) + fRk(x)(x) .s,L,d 2fRk(x)(x) + (VdRk(x) dfRk(x)(x)) s/(s+d) (37)
.s,L,d VdRk(x) dfRk(x)(x)n d/(s+d) + (VdRk(x) dfRk(x)(x)) s/(s+d)
(38)
.s,L,d VdRk(x) dfRk(x)(x)n d/(s+d), (39)
where in the last step we have used the fact that VdRk(x)dfRk(x)(x) > n −1 sinceRk(x) > g(x; f, n). Finally, we have
C3 .s,L,d n d/(s+d)E[(VdRk(x)dfRk(x)(x))1(Rk(x) > g(x; f, n))] (40) = nd/(s+d)E [ (VdRk(x) dfRk(x)(x))1 ( VdRk(x) dfRk(x)(x) > 1/n )] .(41)
Note that VdRk(x)dfRk(x)(x) ∼ Beta(k, n+ 1− k), and if Y ∼ Beta(k, n+ 1− k), we have
E[Y 2] = ( k
n+ 1
)2 +
k(n+ 1− k) (n+ 1)2(n+ 2) .k 1 n2 . (42)
Notice that E[Y 1 (Y > 1/n)] ≤ nE[Y 2]. Hence, we have
C3 .s,L,d n d/(s+d) nE [ (VdRk(x) dfRk(x)(x)) 2 ]
(43)
.s,L,d,k nd/(s+d)n
n2 = n−s/(s+d). (44)
2.2 Upper bound on E [ ln f(X)fRk(X)(X) ] By splitting the term into two parts, we have
E [ ln f(X)
fRk(X)(X)
] = E [∫ [0,1]d∩{x:f(x) 6=0} f(x) ln f(x) fRk(x)(x) dx ] (45)
= E [∫
A
f(x) ln f(x)
fRk(x)(x) 1(fRk(x)(x) > n
−s/(s+d))dx ] (46)
+ E [∫
A
f(x) ln f(x)
fRk(x)(x) 1(fRk(x)(x) ≤ n
−s/(s+d))dx ] (47)
= C4 + C5. (48)
here we denote A = [0, 1]d ∩ {x : f(x) 6= 0} for simplicity of notation. For the term C4, we have
C4 ≤ E [∫
A
f(x)
( f(x)− fRk(x)(x)
fRk(x)(x)
) 1(fRk(x)(x) > n −s/(s+d))dx ] (49)
= E [∫
A
(f(x)− fRk(x)(x))2
fRk(x)(x) 1(fRk(x)(x) > n
−s/(s+d))dx ] (50)
+ E [∫
A
( f(x)− fRk(x)(x) ) 1(fRk(x)(x) > n −s/(s+d))dx ] (51)
≤ ns/(s+d)E [∫
A
( f(x)− fRk(x)(x) )2 dx ] + E [∫ A ( f(x)− fRk(x)(x) ) dx ] . (52)
In the proof of upper bound of E [ ln fRk(X)(X)
f(X) ] , we have shown that E[fRk(x)(x)− f(x)] .s,L,d,k
n−s/(s+d) for any x ∈ A. Similarly as in the proof of upper bound of E [ ln fRk(X)(X)
f(X)
] , we have
E [ (fRk(x)(x)− f(x))2 ] .s,L,d,k n−2s/(s+d) for every x ∈ A. Therefore, we have
C4 .s,L,d,k n s/(s+d)n−2s/(s+d) + n−s/(s+d) .s,L,d,k n −s/(s+d). (53)
Now we consider C5. We conjecture that C5 .s,L,d,k n−s/(s+d) in this case, but we were not able to prove it. Below we prove that C5 .s,L,d,k n−s/(s+d) ln(n+ 1). Define the function
M(x) = sup t>0
1
ft(x) . (54)
Since fRk(x)(x) ≤ n−s/(s+d), we have M(x) = supt>0(1/ft(x)) ≥ 1/fRk(x)(x) ≥ ns/(s+d). Denote ln+(y) = max{ln(y), 0} for any y > 0, therefore, we have that
C5 ≤ E [∫
A
f(x) ln+ ( f(x)
fRk(x)(x)
) 1(fRk(x)(x) ≤ n −s/(s+d))dx ] (55)
≤ E [∫
A
f(x) ln+ ( f(x)
fRk(x)(x)
) 1(M(x) ≥ ns/(s+d))dx ] (56)
≤ ∫ A f(x)E [ ln+ ( 1 (n+ 1)VdRk(x)dfRk(x)(x) )] 1(M(x) ≥ ns/(s+d))dx (57)
+ ∫ A f(x)E [ ln+ ( (n+ 1)VdRk(x) df(x) )] 1(M(x) ≥ ns/(s+d))dx (58) = C51 + C52, (59)
where the last inequality uses the fact ln+(xy) ≤ ln+ x + ln+ y for all x, y > 0. As for C51, since VdRk(x) dfRk(x)(x) ∼ Beta(k, n+ 1− k), and for Y ∼ Beta(k, n+ 1− k), we have
E [ ln+ ( 1
(n+ 1)Y
)] = ∫ 1 n+1
0
ln
( 1
(n+ 1)x
) pY (x)dx (60)
= E [ ln ( 1
(n+ 1)Y
)] + ∫ 1 1
n+1
ln ((n+ 1)x) pY (x)dx (61)
≤ E [ ln ( 1
(n+ 1)Y
)] + ln(n+ 1) ∫ 1 1
n+1
pY (x)dx (62)
≤ E [ ln ( 1
(n+ 1)Y
)] + ln(n+ 1) (63)
≤ ln(n+ 1) (64) where in the last inequality we used the fact that E [ ln (
1 (n+1)Y
)] = ψ(n+1)−ψ(k)−ln(n+1) ≤ 0
for any k ≥ 1. Hence, C51 .s,L,d ln(n+ 1) ∫ A f(x)1(M(x) ≥ ns/(s+d))dx. (65)
Now we introduce the following lemma, which is proved in Appendix C.
Lemma 3 Let µ1, µ2 be two Borel measures that are finite on the bounded Borel sets of Rd. Then, for all t > 0 and any Borel set A ⊂ Rd,
µ1
({ x ∈ A : sup
0<ρ≤D
( µ2(B(x, ρ))
µ1(B(x, ρ))
) > t }) ≤ Cd
t µ2(AD). (66)
Here Cd > 0 is a constant that depends only on the dimension d and
AD = {x : ∃y ∈ A, |y − x| ≤ D}. (67)
Applying the second part of Lemma 3 with µ2 being the Lebesgue measure and µ1 being the measure specified by f(x) on the torus, we can view the function M(x) as
M(x) = sup 0<ρ≤1/2
µ2(B(x, ρ)) µ1(B(x, ρ)) . (68)
Taking A = [0, 1]d ∩ {x : f(x) 6= 0}, t = ns/(s+d), then µ2(A 1 2 ) ≤ 2d, so we know that C51 .s,L,d ln(n+ 1) · ∫ A f(x)1(M(x) ≥ ns/(s+d))dx (69)
= ln(n+ 1) · µ1 ( x ∈ [0, 1]d, f(x) 6= 0,M(x) ≥ ns/(s+d) ) (70)
≤ ln(n+ 1) · Cdn−s/(s+d)µ2(A 1 2 ) .s,L,d n −s/(s+d) ln(n+ 1). (71)
Now we deal with C52. Recall that in Lemma 2, we know that f(x) .s,L,d 1 for any x, and Rk(x) ≤ 1, so ln+((n+ 1)VdRk(x)df(x)) .s,L,d ln(n+ 1). Therefore,
C52 .s,L,d ln(n+ 1) · ∫ A f(x)1(M(x) ≥ ns/(s+d))dx (72)
.s,L,d n −s/(s+d) ln(n+ 1). (73)
Therefore, we have proved that C5 ≤ C51 + C52 .s,L,d n−s/(s+d) ln(n+ 1), which completes the proof of the upper bound on E [ ln f(X)fRk(X)(X) ] .
3 Future directions
It is a tempting question to ask whether one can close the logarithmic gap between Theorems 1 and 2. We believe that neither the upper bound nor the lower bound is tight. In fact, we conjecture that the upper bound in Theorem 1 could be improved to n^{-s/(s+d)} + n^{-1/2} through a more careful analysis of the bias, since Hardy–Littlewood maximal inequalities apply to arbitrary measurable functions but we have assumed regularity properties of the underlying density. We conjecture that the minimax lower bound could be improved to (n ln n)^{-s/(s+d)} + n^{-1/2}, since a kernel density estimator based differential entropy estimator was constructed in [26] which achieves the upper bound (n ln n)^{-s/(s+d)} + n^{-1/2} over H^s_d(L; [0,1]^d) with the knowledge of s. It would be interesting to extend our analysis to that of the k-nearest neighbor based Kullback–Leibler divergence estimator [59]. The discrete case has been studied recently [28, 7].
It is also interesting to analyze k-nearest neighbor based mutual information estimators, such as the KSG estimator [34], and show that they are “near”-optimal and adaptive to both the smoothness and the dimension of the distributions. There exists some analysis of the KSG estimator [21] but we suspect the upper bound is not tight. Moreover, a slightly revised version of KSG estimator is proved to be consistent even if the underlying distribution is not purely continuous nor purely discrete [19], but the optimality properties are not yet well understood. | 1. What is the focus of the paper, and what are the significant contributions regarding the Kozachenko-Leonenko estimator?
2. What are the strengths of the paper, particularly in terms of the proof and the results shown?
3. Do you have any concerns or questions about the assumptions and conditions in the paper, such as the periodic boundary condition?
4. How does the reviewer assess the clarity and quality of the writing in the paper?
5. Are there any comparisons or discussions of related works in the field, such as the weighted KL estimator proposed in [1], that could enhance the paper's content? | Review | Review
Paper 1614 This paper studies the Kozachenko-Leonenko estimator for the differential entropy of a multivariate smooth density that satisfy a periodic boundary condition; an equivalent way to state the condition is to let the density be defined on the [0,1]^d-torus. The authors show that the K-L estimator achieves a rate of convergence that is optimal up to poly-log factors. The result is interesting and the paper is well-written. I could not check the entirety of the proof but the parts I checked are correct. I recommend that the paper be accepted. Some questions and remarks: * The periodic boundary condition is unnatural, though one can imagine that the problem is much harder without it. Can the authors comment on whether anything can be shown for densities over R^d? * Can the weighted KL estimator proposed in [1] be used to control the bias and handle cases where s > 2 ? * It seems that the Besicovitch (typo on line 99) covering lemma implies the Hardy-Littlewood maximal inequality. I only see the variance bound use the former so perhaps the latter result need not be mentioned. On an unrelated note, I am curious whether the authors know what the dependence of the C_d constant on dimensionality d is in the Besicovitch covering lemma. * Can similar estimator and analysis be used for estimating the Renyi entropy? [1] T. Berrett, R. Samworth, and M. Yuan, Efficient multivariate entropy estimation via nearest k-neighbor estimators. |
1. What is the focus of the paper regarding differential entropy estimation?
2. What are the strengths of the proposed approach, particularly in comparison to prior works?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. What is the significance of the main result of the paper in practical applications?
5. Are there any concerns or limitations regarding the Kozachenko–Leonenko estimator? | Review | Review
This paper shows that the Kozachenko–Leonenko estimator for differential entropy is near optimal considering the minimax risk of differential entropy estimation. In particular, the assumption for density functions to be away from zero is not used, while previous methods use the assumption for the convergence proof. The paper shows a thorough literature review and explains what the contribution of this paper is very clearly. Many parts of the paper are based on the book of Biau and Devroye [4], and the proof explanation in the Appendix covers the fundamental parts of the derivation in the main paper very well. The main result of this paper is also interesting. The Kozachenko–Leonenko estimator and the Kullback-Leibler divergence estimator based on this estimator are known to work well in practice compared with other methods even though nearest neighbor methods are usually considered as a simple but poor method. The derived bound of the error interestingly has a similar form to the minimum error of such estimators (minimax bound), and the bound is very close to the minimum error as well. This makes reading the paper enjoyable.
NIPS | Title
A Geometric Perspective on Optimal Representations for Reinforcement Learning
Abstract
We propose a new perspective on representation learning in reinforcement learning based on geometric properties of the space of value functions. We leverage this perspective to provide formal evidence regarding the usefulness of value functions as auxiliary tasks. Our formulation considers adapting the representation to minimize the (linear) approximation of the value function of all stationary policies for a given environment. We show that this optimization reduces to making accurate predictions regarding a special class of value functions which we call adversarial value functions (AVFs). We demonstrate that using value functions as auxiliary tasks corresponds to an expected-error relaxation of our formulation, with AVFs a natural candidate, and identify a close relationship with proto-value functions (Mahadevan, 2005). We highlight characteristics of AVFs and their usefulness as auxiliary tasks in a series of experiments on the four-room domain.
1 Introduction
A good representation of state is key to practical success in reinforcement learning. While early applications used hand-engineered features (e.g. Samuel, 1959), these have proven onerous to generate and difficult to scale. As a result, methods in representation learning have flourished, ranging from basis adaptation (Menache et al., 2005; Keller et al., 2006), gradient-based learning (Yu and Bertsekas, 2009), proto-value functions (Mahadevan and Maggioni, 2007), feature generation schemes such as tile coding (Sutton, 1996) and the domain-independent features used in some Atari 2600 game-playing agents (Bellemare et al., 2013; Liang et al., 2016), and nonparametric methods (Ernst et al., 2005; Farahmand et al., 2016; Tosatto et al., 2017). Today, the method of choice is deep learning. Deep learning has made its mark by showing it can learn complex representations of relatively unprocessed inputs using gradient-based optimization (Tesauro, 1995; Mnih et al., 2015; Silver et al., 2016).
Most current deep reinforcement learning methods augment their main objective with additional losses called auxiliary tasks, typically with the aim of facilitating and regularizing the representation learning process. The UNREAL algorithm, for example, makes predictions about future pixel values (Jaderberg et al., 2017); recent work approximates a one-step transition model to achieve a similar effect (François-Lavet et al., 2018; Gelada et al., 2019). The good empirical performance of distributional reinforcement learning (Bellemare et al., 2017) has also been attributed to representation learning effects, with recent visualizations supporting this claim (Such et al., 2019). However, while there is now conclusive empirical evidence of the usefulness of auxiliary tasks, their design and justification remain on the whole ad-hoc. One of our main contributions is to provides a formal framework in which to reason about auxiliary tasks in reinforcement learning.
We begin by formulating an optimization problem whose solution is a form of optimal representation. Specifically, we seek a state representation from which we can best approximate the value function of any stationary policy for a given Markov Decision Process. Simultaneously, the largest approximation
error in that class serves as a measure of the quality of the representation. While our approach may appear naive – in real settings, most policies are uninteresting and hence may distract the representation learning process – we show that our representation learning problem can in fact be restricted to a special subset of value functions which we call adversarial value functions (AVFs). We then characterize these adversarial value functions and show they correspond to deterministic policies that either minimize or maximize the expected return at each state, based on the solution of a network-flow optimization derived from an interest function δ.
A consequence of our work is to formalize why predicting value function-like objects is helpful in learning representations, as has been argued in the past (Sutton et al., 2011, 2016). We show how using these predictions as auxiliary tasks can be interpreted as a relaxation of our optimization problem. From our analysis, we hypothesize that auxiliary tasks that resemble adversarial value functions should give rise to good representations in practice. We complement our theoretical results with an empirical study in a simple grid world environment, focusing on the use of deep learning techniques to learn representations. We find that predicting adversarial value functions as auxiliary tasks leads to rich representations.
2 Setting
We consider an environment described by a Markov Decision Process 〈X ,A, r, P, γ〉 (Puterman, 1994); X and A are finite state and action spaces, P : X × A → P(X ) is the transition function, γ the discount factor, and r : X → R the reward function. For a finite set S, write P(S) for the probability simplex over S . A (stationary) policy π is a mapping X →P(A), also denoted π(a |x). We denote the set of policies by P = P(A)X . We combine a policy π with the transition function P to obtain the state-to-state transition function Pπ(x′ |x) := ∑a∈A π(a |x)P (x′ |x, a). The value function V π describes the expected discounted sum of rewards obtained by following π:
$$V^\pi(x) = \mathbb{E}\Big[\,\sum_{t=0}^{\infty} \gamma^t r(x_t) \;\Big|\; x_0 = x,\; x_{t+1} \sim P^\pi(\cdot \,|\, x_t)\Big].$$
The value function satisfies Bellman’s equation (Bellman, 1957): $V^\pi(x) = r(x) + \gamma\, \mathbb{E}_{P^\pi} V^\pi(x')$. Assuming there are $n = |\mathcal{X}|$ states, we view $r$ and $V^\pi$ as vectors in $\mathbb{R}^n$ and $P^\pi \in \mathbb{R}^{n\times n}$, such that
$$V^\pi = r + \gamma P^\pi V^\pi = (I - \gamma P^\pi)^{-1} r.$$
A d-dimensional representation is a mapping φ : X → Rd; φ(x) is the feature vector for state x. We write Φ ∈ Rn×d to denote the matrix whose rows are φ(X ), and with some abuse of notation denote the set of d-dimensional representations by R ≡ Rn×d. For a given representation and weight vector θ ∈ Rd, the linear approximation for a value function is
$$\hat{V}_{\phi,\theta}(x) := \phi(x)^\top \theta. \qquad (1)$$
We consider the approximation minimizing the uniformly weighted squared error
$$\big\|\hat{V}_{\phi,\theta} - V^\pi\big\|_2^2 = \sum_{x\in\mathcal{X}} \big(\phi(x)^\top\theta - V^\pi(x)\big)^2.$$
We denote by $\hat{V}^\pi_\phi$ the projection of $V^\pi$ onto the linear subspace $H = \{\Phi\theta : \theta \in \mathbb{R}^d\}$.
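As a concrete illustration of these definitions, the following minimal NumPy sketch computes $V^\pi = (I-\gamma P^\pi)^{-1} r$ for a small MDP and projects it onto the span of a given representation matrix $\Phi$. The tiny MDP and random features are made-up placeholders, not taken from the paper.

```python
import numpy as np

def policy_transition(P, pi):
    """State-to-state transition P^pi(x'|x) = sum_a pi(a|x) P(x'|x,a).
    P has shape (n, A, n); pi has shape (n, A)."""
    return np.einsum("xa,xay->xy", pi, P)

def value_function(P, r, pi, gamma):
    """Exact V^pi = (I - gamma P^pi)^{-1} r, with r a state-based reward vector."""
    Ppi = policy_transition(P, pi)
    n = len(r)
    return np.linalg.solve(np.eye(n) - gamma * Ppi, r)

def project(Phi, V):
    """Orthogonal projection of V onto the subspace {Phi @ theta}."""
    theta, *_ = np.linalg.lstsq(Phi, V, rcond=None)
    return Phi @ theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, A, d, gamma = 5, 2, 2, 0.9                 # toy sizes, chosen arbitrarily
    P = rng.dirichlet(np.ones(n), size=(n, A))    # P[x, a] is a distribution over next states
    r = rng.standard_normal(n)
    pi = np.full((n, A), 1.0 / A)                 # uniformly random policy
    Phi = rng.standard_normal((n, d))             # an arbitrary d-dimensional representation
    V = value_function(P, r, pi, gamma)
    V_hat = project(Phi, V)
    print("approximation error:", np.sum((V_hat - V) ** 2))
```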
2.1 Two-Part Networks
Most deep networks used in value-based reinforcement learning can be modelled as two interacting parts φ and θ which give rise to a linear approximation (Figure 1, left). Here, the representation φ can also be adjusted and is almost always nonlinear in x. Two-part networks are a simple framework in which to study the behaviour of representation learning in deep reinforcement learning. We will especially consider the use of φ(x) to make additional predictions, called auxiliary tasks following common usage, and whose purpose is to improve or stabilize the representation.
We study two-part networks in an idealized setting where the length d of φ(x) is fixed and smaller than n, but the mapping is otherwise unconstrained. Even this idealized design offers interesting
problems to study. We might be interested in sharing a representation across problems, as is often done in transfer or continual learning. In this context, auxiliary tasks may inform how the value function should generalize to these new problems. In many problems of interest, the weights θ can also be optimized more efficiently than the representation itself, warranting the view that the representation should be adapted using a different process (Levine et al., 2017; Chung et al., 2019).
Note that a trivial “value-as-feature” representation exists for the single-policy optimization problem
$$\text{minimize } \big\|\hat{V}^\pi_\phi - V^\pi\big\|_2^2$$
w.r.t. φ ∈ R; this approximation sets φ(x) = V π(x) and θ = 1. In this paper we take the stance that this is not a satisfying representation, and that a good representation should be in the service of a broader goal (e.g. control, transfer, or fairness).
3 Representation Learning by Approximating Value Functions
We measure the quality of a representation φ in terms of how well it can approximate all possible value functions, formalized as the representation error
$$L(\phi) := \max_{\pi\in\mathcal{P}} L(\phi;\pi), \qquad L(\phi;\pi) := \big\|\hat{V}^\pi_\phi - V^\pi\big\|_2^2.$$
We consider the problem of finding the representation φ ∈ R minimizing L(φ):
$$\text{minimize } \max_{\pi\in\mathcal{P}} \big\|\hat{V}^\pi_\phi - V^\pi\big\|_2^2 \quad \text{w.r.t. } \phi \in \mathcal{R}. \qquad (2)$$
In the context of our work, we call this the representation learning problem (RLP) and say that a representation φ∗ is optimal when it minimizes the error in (2). Note that L(φ) (and hence φ∗) depends on characteristics of the environment, in particular on both reward and transition functions.
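Concretely, the per-policy error $L(\phi;\pi)$ is just the squared residual of a least-squares fit of $V^\pi$ onto the columns of $\Phi$. A minimal NumPy sketch, with made-up placeholder inputs rather than anything from the paper:

```python
import numpy as np

def representation_error(Phi, V):
    """L(phi; pi) = ||V_hat - V||_2^2, where V_hat is the least-squares
    projection of the value vector V onto the columns of Phi."""
    theta, *_ = np.linalg.lstsq(Phi, V, rcond=None)
    return float(np.sum((Phi @ theta - V) ** 2))

# Example with arbitrary placeholder data (n = 6 states, d = 3 features).
rng = np.random.default_rng(1)
Phi = rng.standard_normal((6, 3))
V_pi = rng.standard_normal(6)          # value vector of some policy pi
print(representation_error(Phi, V_pi))
```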
We consider the RLP from a geometric perspective (Figure 1, right). Dadashi et al. (2019) showed that the set of value functions achieved by the set of policies P , denoted
V := {V π ∈ Rn : π ∈ P}, forms a (possibly nonconvex) polytope. As previously noted, a representation φ defines a subspace H of possible value approximations. The maximal error is achieved by the value function in V which is furthest along the subspace normal to H , since V̂ πφ is the orthogonal projection of V π .
We say that V ∈ V is an extremal vertex if it is a vertex of the convex hull of V. We will make use of the relationship between directions δ ∈ Rn, the set of extremal vertices, and the set of deterministic policies. The following lemma, based on a well-known notion of duality from convex analysis (Boyd and Vandenberghe, 2004), states this relationship formally. Lemma 1. Let δ ∈ Rn and define the functional fδ(V) := δ⊤V, with domain V. Then fδ is maximized by an extremal vertex U ∈ V, and there is a deterministic policy π for which V π = U. Furthermore, the set of directions δ ∈ Rn for which the maximum of fδ is achieved by multiple extremal vertices has Lebesgue measure zero in Rn.
Denote by Pv the set of policies corresponding to extremal vertices of V . We next derive an equivalence between the RLP and an optimization problem which only considers policies in Pv .
Theorem 1. For any representation φ ∈ R, the maximal approximation error measured over all value functions is the same as the error measured over the set of extremal vertices:
$$\max_{\pi\in\mathcal{P}} \big\|\hat{V}^\pi_\phi - V^\pi\big\|_2^2 = \max_{\pi\in\mathcal{P}_v} \big\|\hat{V}^\pi_\phi - V^\pi\big\|_2^2.$$
Theorem 1 indicates that we can find an optimal representation by considering a finite (albeit exponential) number of value functions: Each extremal vertex corresponds to the value function of some deterministic policy, of which there are at most an exponential number. We will call these adversarial value functions (AVFs), because of the minimax flavour of the RLP.
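For an MDP small enough to enumerate, Theorem 1 licenses computing $L(\phi)$ exactly by sweeping over all $|\mathcal{A}|^n$ deterministic policies. The sketch below does this with NumPy on an arbitrary toy MDP; all inputs are placeholders, not the paper's domain.

```python
import itertools
import numpy as np

def value_of_deterministic(P, r, actions, gamma):
    """V^pi for the deterministic policy x -> actions[x]; P has shape (n, A, n)."""
    n = len(r)
    Ppi = P[np.arange(n), actions]                  # (n, n) state-to-state transitions
    return np.linalg.solve(np.eye(n) - gamma * Ppi, r)

def representation_error(Phi, V):
    theta, *_ = np.linalg.lstsq(Phi, V, rcond=None)
    return float(np.sum((Phi @ theta - V) ** 2))

def L(Phi, P, r, gamma):
    """L(phi) = max over deterministic policies of the projection error (Theorem 1)."""
    n, A, _ = P.shape
    return max(
        representation_error(Phi, value_of_deterministic(P, r, np.array(a), gamma))
        for a in itertools.product(range(A), repeat=n)
    )

rng = np.random.default_rng(2)
n, A, d, gamma = 4, 2, 2, 0.9                       # tiny so that A**n is manageable
P = rng.dirichlet(np.ones(n), size=(n, A))
r = rng.standard_normal(n)
Phi = rng.standard_normal((n, d))
print("L(phi) =", L(Phi, P, r, gamma))
```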
Solving the RLP allows us to provide quantifiable guarantees on the performance of certain value-based learning algorithms. For example, in the context of least-squares policy iteration (LSPI; Lagoudakis and Parr, 2003), minimizing the representation error L directly improves the performance bound. By contrast, we cannot have the same guarantee if φ is learned by minimizing the approximation error for a single value function. Corollary 1. Let φ∗ be an optimal representation in the RLP. Consider the sequence of policies π0, π1, . . . derived from LSPI using φ∗ to approximate V π0 , V π1 , . . . under a uniform sampling of the state-space. Then there exists an MDP-dependent constant C ∈ R such that
$$\limsup_{k\to\infty} \big\|V^* - V^{\pi_k}\big\|_2^2 \le C\, L(\phi^*).$$
This result is a direct application of the quadratic norm bounds given by Munos (2003), in whose work the constant is made explicit. We emphasize that the result is illustrative; our approach should enable similar guarantees in other contexts (e.g. Munos, 2007; Petrik and Zilberstein, 2011).
3.1 The Structure of Adversarial Value Functions
The RLP suggests that an agent trained to predict various value functions should develop a good state representation. Intuitively, one may worry that there are simply too many “uninteresting” policies, and that a representation learned from their value functions emphasizes the wrong quantities. However, the search for an optimal representation φ∗ is closely tied to the much smaller set of adversarial value functions (AVFs). The aim of this section is to characterize the structure of AVFs and show that they form an interesting subset of all value functions. From this, we argue that their use as auxiliary tasks should also produce structured representations.
From Lemma 1, recall that an AVF is geometrically defined using a vector $\delta \in \mathbb{R}^n$ and the functional $f_\delta(V) := \delta^\top V$, which the AVF maximizes. Since $f_\delta$ is restricted to the value polytope, we can consider the equivalent policy-space functional $g_\delta : \pi \mapsto \delta^\top V^\pi$. Observe that
$$\max_{\pi\in\mathcal{P}} g_\delta(\pi) = \max_{\pi\in\mathcal{P}} \delta^\top V^\pi = \max_{\pi\in\mathcal{P}} \sum_{x\in\mathcal{X}} \delta(x) V^\pi(x). \qquad (3)$$
In this optimization problem, the vector δ defines a weighting over the state space X; for this reason, we call δ an interest function in the context of AVFs. Whenever δ ≥ 0 componentwise, we recover the optimal value function, irrespective of the exact magnitude of δ (Bertsekas, 2012). If δ(x) < 0 for some x, however, the maximization becomes a minimization. As the next result shows, the policy maximizing gδ(π) depends on a network flow dπ derived from δ and the transition function P. Theorem 2. Maximizing the functional gδ is equivalent to finding a network flow dπ that satisfies a reverse Bellman equation:
$$\max_{\pi\in\mathcal{P}} \delta^\top V^\pi = \max_{\pi\in\mathcal{P}} d_\pi^\top r, \qquad d_\pi = \delta + \gamma P^{\pi\top} d_\pi.$$
For a policy π̃ maximizing the above we have
$$V^{\tilde\pi}(x) = r(x) + \gamma \begin{cases} \max_{a\in\mathcal{A}} \mathbb{E}_{x'\sim P(\cdot|x,a)} V^{\tilde\pi}(x') & \text{if } d_{\tilde\pi}(x) > 0, \\ \min_{a\in\mathcal{A}} \mathbb{E}_{x'\sim P(\cdot|x,a)} V^{\tilde\pi}(x') & \text{if } d_{\tilde\pi}(x) < 0. \end{cases}$$
Corollary 2. There are at most $2^n$ distinct adversarial value functions.
The vector dπ corresponds to the sum of discounted interest weights flowing through a state x, similar to the dual variables in the theory of linear programming for MDPs (Puterman, 1994). Theorem 2, by way of the corollary, implies that there are fewer AVFs ($\le 2^n$) than deterministic policies ($= |\mathcal{A}|^n$). It also implies that AVFs relate to a reward-driven purpose, similar to how the optimal value function describes the goal of maximizing return. We will illustrate this point empirically in Section 4.1.
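For a fixed policy, the flow in Theorem 2 has the closed form $d_\pi = (I - \gamma P^{\pi\top})^{-1}\delta$, so it is easy to compute and inspect. The NumPy sketch below (toy MDP and interest function are arbitrary placeholders) computes $d_\pi$ and checks that $\delta^\top V^\pi = d_\pi^\top r$.

```python
import numpy as np

def policy_transition(P, pi):
    # P^pi(x'|x) = sum_a pi(a|x) P(x'|x,a); P has shape (n, A, n), pi has shape (n, A).
    return np.einsum("xa,xay->xy", pi, P)

def flow_and_objective(P, r, pi, delta, gamma):
    """Returns d_pi = (I - gamma P^pi^T)^{-1} delta and the value delta^T V^pi = d_pi^T r."""
    n = len(r)
    Ppi = policy_transition(P, pi)
    d = np.linalg.solve(np.eye(n) - gamma * Ppi.T, delta)
    V = np.linalg.solve(np.eye(n) - gamma * Ppi, r)
    assert np.isclose(d @ r, delta @ V)            # the two sides of Theorem 2 agree
    return d, d @ r

rng = np.random.default_rng(3)
n, A, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(n), size=(n, A))
r = rng.standard_normal(n)
pi = np.full((n, A), 0.5)                          # uniformly random policy
delta = rng.choice([-1.0, 0.0, 1.0], size=n)       # a random interest function
d, obj = flow_and_objective(P, r, pi, delta, gamma)
print("flow d_pi:", d, " objective:", obj)
```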
3.2 Relationship to Auxiliary Tasks
So far we have argued that solving the RLP leads to a representation which is optimal in a meaningful sense. However, solving the RLP seems computationally intractable: there are an exponential number of deterministic policies to consider (Prop. 1 in the appendix gives a quadratic formulation with quadratic constraints). Using interest functions does not mitigate this difficulty: the computational problem of finding the AVF for a single interest function is NP-hard, even when restricted to deterministic MDPs (Prop. 2 in the appendix).
Instead, in this section we consider a relaxation of the RLP and show that this relaxation describes existing representation learning methods, in particular those that use auxiliary tasks. Let ξ be some distribution over Rn. We begin by replacing the maximum in (2) by an expectation:
$$\text{minimize } \mathop{\mathbb{E}}_{V\sim\xi} \big\|\hat{V}_\phi - V\big\|_2^2 \quad \text{w.r.t. } \phi \in \mathcal{R}. \qquad (4)$$
The use of the expectation offers three practical advantages over the use of the maximum. First, this leads to a differentiable objective which can be minimized using deep learning techniques. Second, the choice of ξ gives us an additional degree of freedom; in particular, ξ needs not be restricted to the value polytope. Third, the minimizer in (4) is easily characterized, as the following theorem shows. Theorem 3. Let u∗1, . . . , u∗d ∈ Rn be the principal components of the distribution ξ, in the sense that
$$u^*_i := \arg\max_{u \in B_i} \mathop{\mathbb{E}}_{V\sim\xi} (u^\top V)^2, \quad \text{where } B_i := \{u \in \mathbb{R}^n : \|u\|_2^2 = 1,\; u^\top u^*_j = 0 \;\; \forall j < i\}.$$
Equivalently, $u^*_1, \dots, u^*_d$ are the eigenvectors of $\mathbb{E}_\xi\, V V^\top \in \mathbb{R}^{n\times n}$ with the $d$ largest eigenvalues. Then the matrix $[u^*_1, \dots, u^*_d] \in \mathbb{R}^{n\times d}$, viewed as a map $\mathcal{X} \to \mathbb{R}^d$, is a solution to (4). When the principal components are uniquely defined, any minimizer of (4) spans the same subspace as $u^*_1, \dots, u^*_d$.
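In practice one would estimate these principal components from a finite sample of value functions. A minimal NumPy sketch, assuming the sampled value vectors are stacked as the columns of a matrix `V` (placeholder data, not from the paper):

```python
import numpy as np

def principal_representation(V, d):
    """Top-d eigenvectors of the (uncentered) second-moment matrix (1/k) V V^T,
    where V is an (n, k) matrix whose columns are sampled value functions.
    Returns an (n, d) matrix whose columns solve the relaxed problem (4)."""
    second_moment = V @ V.T / V.shape[1]
    eigvals, eigvecs = np.linalg.eigh(second_moment)   # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :d]                     # keep the d largest

# Placeholder example: k = 50 random "value functions" over n = 10 states.
rng = np.random.default_rng(4)
V = rng.standard_normal((10, 50))
Phi = principal_representation(V, d=4)
print(Phi.shape)                                       # (10, 4)
```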
One may expect the quality of the learned representation to depend on how closely the distribution ξ relates to the RLP. From an auxiliary tasks perspective, this corresponds to choosing tasks that are in some sense useful. For example, generating value functions from the uniform distribution over the set of policies P , while a natural choice, may put too much weight on “uninteresting” value functions. In practice, we may further restrict ξ to a finite set V . Under a uniform weighting, this leads to a representation loss
$$L(\phi; \mathcal{V}) := \sum_{V\in\mathcal{V}} \big\|\hat{V}_\phi - V\big\|_2^2 \qquad (5)$$
which corresponds to the typical formulation of an auxiliary-task loss (e.g. Jaderberg et al., 2017). In a deep reinforcement learning setting, one typically minimizes (5) using stochastic gradient descent methods, which scale better than batch methods such as singular value decomposition (but see Wu et al. (2019) for further discussion).
Our analysis leads us to conclude that, in many cases of interest, the use of auxiliary tasks produces representations that are close to the principal components of the set of tasks under consideration. If V is well-aligned with the RLP, minimizing L(φ;V ) should give rise to a reasonable representation. To demonstrate the power of this approach, in Section 4 we will study the case when the set V is constructed by sampling AVFs – emphasizing the policies that support the solution to the RLP.
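To make the connection to deep learning concrete, here is a hedged PyTorch sketch of minimizing the auxiliary-task loss (5) with a two-part network: a nonlinear representation φ followed by one linear prediction per target value function. The one-hot state encoding, network sizes, and optimizer settings are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

n, d, k = 104, 16, 1000          # states, features, auxiliary value functions (illustrative)
targets = torch.randn(n, k)      # placeholder: column j holds the j-th target value function

phi = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, d))   # representation phi(x)
heads = nn.Linear(d, k)          # one linear head theta_j per auxiliary task
states = torch.eye(n)            # one-hot encoding of the n states

opt = torch.optim.RMSprop(list(phi.parameters()) + list(heads.parameters()), lr=1e-3)
for step in range(5000):
    preds = heads(phi(states))                   # (n, k) predicted values
    loss = ((preds - targets) ** 2).sum()        # representation loss (5)
    opt.zero_grad()
    loss.backward()
    opt.step()
```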
3.3 Relationship to Proto-Value Functions
Proto-value functions (Mahadevan and Maggioni, 2007, PVF) are a family of representations which vary smoothly across the state space. Although the original formulation defines this representation as the largest-eigenvalue eigenvectors of the Laplacian of the transition function’s graphical structure, recent formulations use the top singular vectors of (I − γPπ)−1, where π is the uniformly random policy (Stachenfeld et al., 2014; Machado et al., 2017; Behzadian and Petrik, 2018).
In line with the analysis of the previous section, proto-value functions can also be interpreted as defining a set of value-based auxiliary tasks. Specifically, if we define an indicator reward function $r_y(x) := \mathbb{I}[x = y]$ and a set of value functions $\mathcal{V} = \{(I - \gamma P^\pi)^{-1} r_y\}_{y\in\mathcal{X}}$ with π the uniformly random policy, then any d-dimensional representation that minimizes (5) spans the same basis as the d-dimensional PVF (up to the bias term). This suggests a connection with hindsight experience replay (Andrychowicz et al., 2017), whose auxiliary tasks consist of reaching previously experienced states.
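Under this reading, a PVF-style basis can be recovered from the top singular vectors of the successor matrix $(I-\gamma P^\pi)^{-1}$ for the uniformly random policy, which is exactly the matrix whose columns are the value functions of the indicator rewards. A hedged NumPy sketch with toy placeholder inputs:

```python
import numpy as np

def pvf_basis(P, gamma, d):
    """Top-d left singular vectors of the successor matrix (I - gamma P^pi)^{-1}
    under the uniformly random policy; P has shape (n, A, n)."""
    n, A, _ = P.shape
    Ppi = P.mean(axis=1)                              # uniformly random policy
    SR = np.linalg.inv(np.eye(n) - gamma * Ppi)       # column y is V for the reward r_y = 1[x = y]
    U, _, _ = np.linalg.svd(SR)
    return U[:, :d]

rng = np.random.default_rng(5)
P = rng.dirichlet(np.ones(6), size=(6, 3))            # placeholder 6-state, 3-action MDP
print(pvf_basis(P, gamma=0.9, d=4).shape)             # (6, 4)
```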
4 Empirical Studies
In this section we complement our theoretical analysis with an experimental study. In turn, we take a closer look at 1) the structure of adversarial value functions, 2) the shape of representations learned using AVFs, and 3) the performance profile of these representations in a control setting. Our eventual goal is to demonstrate that the RLP, which is based on approximating value functions, gives rise to representations that are both interesting and comparable to previously proposed schemes. Our concrete instantiation (Algorithm 1) uses the representation loss (5). As-is, this algorithm is of limited practical relevance (our AVFs are learned using a tabular representation) but we believe it provides an inspirational basis for further developments.
Algorithm 1 Representation learning using AVFs
Input: k – desired number of AVFs, d – desired number of features.
1. Sample δ1, . . . , δk ∼ [−1, 1]^n.
2. Compute µi = arg max_π δi^⊤ V^π using a policy gradient method.
3. Find φ∗ = arg min_φ L(φ; {V^{µ1}, . . . , V^{µk}}) (Equation 5).
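A hedged Python sketch of Algorithm 1 is given below. For step 2 it uses an exact, model-based policy gradient: for a tabular softmax policy, the gradient of δ⊤V^π can be computed in closed form from the flow w = (I − γP^{π⊤})^{-1}δ of Theorem 2 as grad[x, a] = w(x) π(a|x) (Q^π(x, a) − V^π(x)). Step 3 uses the principal-component solution of Theorem 3. All hyperparameters and the toy MDP are placeholders, not the paper's settings.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def avf_policy(P, r, delta, gamma, lr=0.5, steps=2000):
    """Approximately solve max_pi delta^T V^pi with an exact model-based policy gradient
    on a tabular softmax policy. Returns V^pi for the resulting policy."""
    n, A, _ = P.shape
    theta = np.zeros((n, A))
    for _ in range(steps):
        pi = softmax(theta)
        Ppi = np.einsum("xa,xay->xy", pi, P)
        V = np.linalg.solve(np.eye(n) - gamma * Ppi, r)
        Q = r[:, None] + gamma * P @ V                          # Q(x,a) = r(x) + gamma E_{x'} V(x')
        w = np.linalg.solve(np.eye(n) - gamma * Ppi.T, delta)   # flow of Theorem 2
        grad = w[:, None] * pi * (Q - V[:, None])               # d(delta^T V^pi) / d theta
        theta += lr * grad
    pi = softmax(theta)
    Ppi = np.einsum("xa,xay->xy", pi, P)
    return np.linalg.solve(np.eye(n) - gamma * Ppi, r)

def avf_representation(P, r, gamma, k, d, seed=0):
    """Algorithm 1: sample k interest functions, compute their AVFs, and return
    the d-dimensional principal-component representation (Theorem 3)."""
    rng = np.random.default_rng(seed)
    n = len(r)
    avfs = np.stack(
        [avf_policy(P, r, rng.uniform(-1.0, 1.0, size=n), gamma) for _ in range(k)],
        axis=1,
    )                                                           # (n, k) matrix of AVFs
    _, eigvecs = np.linalg.eigh(avfs @ avfs.T / k)
    return eigvecs[:, ::-1][:, :d]

# Placeholder toy MDP (not the four-room domain).
rng = np.random.default_rng(6)
P = rng.dirichlet(np.ones(8), size=(8, 3))
r = rng.standard_normal(8)
Phi = avf_representation(P, r, gamma=0.9, k=20, d=4)
print(Phi.shape)                                                # (8, 4)
```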
We perform all of our experiments within the four-room domain (Sutton et al., 1999; Solway et al., 2014; Machado et al., 2017, Figure 2, see also Appendix H.1).
We consider a two-part network where we pretrain φ end-to-end to predict a set of value functions. Our aim here is to compare the effects of using different sets of value functions, including AVFs, on the learned representation. As our focus is on the efficient use of a d-dimensional representation (with d < n, the number of states), we encode individual states as one-hot vectors and map them into φ(x) without capacity constraints. Additional details may be found in Appendix H.
4.1 Adversarial Value Functions
Our first set of results studies the structure of adversarial value functions in the four-room domain. We generated interest functions by assigning a value δ(x) ∈ {−1, 0, 1} uniformly at random to each state x (Figure 2, left). We restricted δ to these discrete choices for illustrative purposes.
We then used model-based policy gradient (Sutton et al., 2000) to find the policy maximizing $\sum_{x\in\mathcal{X}} \delta(x) V^\pi(x)$. We observed some local minima or accumulation points but as a whole reasonable solutions were found. The resulting network flow and AVF for a particular sample are shown in Figure 2. For most states, the signs of δ and dπ agree; however, this is not true of all states (larger version and more examples in appendix, Figures 6, 7). As expected, states for which dπ > 0 (respectively, dπ < 0) correspond to states maximizing (resp. minimizing) the value function. Finally, we remark on the “flow” nature of dπ: trajectories over minimizing states accumulate in corners or loops, while those over maximizing states flow to the goal. We conclude that AVFs exhibit interesting structure, and are generated by policies that are not random (Figure 2, right). As we will see next, this is a key differentiator in making AVFs good auxiliary tasks.
4.2 Representation Learning with AVFs
We next consider the representations that arise from training a deep network to predict AVFs (denoted AVF from here on). We sample k = 1000 interest functions and use Algorithm 1 to generate k AVFs.
We combine these AVFs into the representation loss (5) and adapt the parameters of the deep network using Rmsprop (Tieleman and Hinton, 2012).
We contrast the AVF-driven representation with one learned by predicting the value function of random deterministic policies (RP). Specifically, these policies are generated by assigning an action uniformly at random to each state. We also consider the value function of the uniformly random policy (VALUE). While we make these choices here for concreteness, other experiments yielded similar results (e.g. predicting the value of the optimal policy; appendix, Figure 8). In all cases, we learn a d = 16 dimensional representation, not including the bias unit.
Figure 3 shows the representations learned by the three methods. The features learned by VALUE resemble the value function itself (top left feature) or its negated image (bottom left feature). Coarsely speaking, these features capture the general distance to the goal but little else. The features learned by RP are of even worse quality. This is because almost all random deterministic policies cause the agent to avoid the goal (appendix, Figure 12). The representation learned by AVF, on the other hand, captures the structure of the domain, including paths between distal states and focal points corresponding to rooms or parts of rooms.
Although our focus is on the use of AVFs as auxiliary tasks to a deep network, we observe the same results when discovering a representation using singular value decomposition (Section 3.2), as described in Appendix I. All in all, our results illustrate that, among all value functions, AVFs are particularly useful auxiliary tasks for representation learning.
4.3 Learning the Optimal Policy
In a final set of experiments, we consider learning a reward-maximizing policy using a pretrained representation and a model-based version of the SARSA algorithm (Rummery and Niranjan, 1994; Sutton and Barto, 1998). We compare the value-based and AVF-based representations from the previous section (VALUE and AVF), and also proto-value functions (PVF; details in Appendix H.3).
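As a rough illustration of this control step (not the paper's exact model-based variant), the sketch below runs linear SARSA with ε-greedy exploration on top of a fixed, pretrained feature map Φ, sampling transitions from a known model. All environment and hyperparameter choices are placeholders.

```python
import numpy as np

def linear_sarsa(P, r, Phi, gamma, episodes=500, horizon=100, lr=0.1, eps=0.1, seed=0):
    """SARSA with Q(x, a) = Phi[x] @ W[:, a] over a fixed representation Phi (n, d).
    Transitions are sampled from the known model P (n, A, n); r is a state reward."""
    rng = np.random.default_rng(seed)
    n, A, _ = P.shape
    d = Phi.shape[1]
    W = np.zeros((d, A))

    def act(x):
        if rng.random() < eps:
            return int(rng.integers(A))
        return int(np.argmax(Phi[x] @ W))

    for _ in range(episodes):
        x = int(rng.integers(n))                 # placeholder: uniform start state
        a = act(x)
        for _ in range(horizon):
            x_next = int(rng.choice(n, p=P[x, a]))
            a_next = act(x_next)
            td_error = r[x] + gamma * Phi[x_next] @ W[:, a_next] - Phi[x] @ W[:, a]
            W[:, a] += lr * td_error * Phi[x]
            x, a = x_next, a_next
    return W

rng = np.random.default_rng(7)
P = rng.dirichlet(np.ones(8), size=(8, 3))
r = rng.standard_normal(8)
Phi = rng.standard_normal((8, 4))                # stand-in for a pretrained representation
W = linear_sarsa(P, r, Phi, gamma=0.9)
print("greedy actions:", np.argmax(Phi @ W, axis=1))
```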
We report the quality of the learned policies after training, as a function of d, the size of the representation. Our quality measure is the average return from the designated start state (bottom left). Results are provided in Figure 4 and Figure 13 (appendix). We observe a failure of the VALUE representation to provide a useful basis for learning a good policy, even as d increases;
while the representation is not rank-deficient, the features do not help reduce the approximation error.
In comparison, our AVF representations perform similarly to PVFs. Increasing the number of auxiliary tasks also leads to better representations; recall that PVF implicitly uses n = 104 auxiliary tasks.
5 Related Work
Our work takes inspiration from research in basis or feature construction for reinforcement learning. Ratitch and Precup (2004), Foster and Dayan (2002), Menache et al. (2005), Yu and Bertsekas (2009), Bhatnagar et al. (2013), and Song et al. (2016) consider methods for adapting parametrized basis functions using iterative schemes. Including Mahadevan and Maggioni (2007)’s proto-value functions, a number of works (we note Dayan, 1993; Petrik, 2007; Mahadevan and Liu, 2010; Ruan et al., 2015; Barreto et al., 2017) have used characteristics of the transition structure of the MDP to generate representations; these are the closest in spirit to our approach, although none use the reward or consider the geometry of the space of value functions. Parr et al. (2007) proposed constructing a representation from successive Bellman errors, Keller et al. (2006) used dimensionality reduction methods; finally Hutter (2009) proposes a universal scheme for selecting representations.
Deep reinforcement learning algorithms have made extensive use of auxiliary tasks to improve agent performance, beginning perhaps with universal value function approximators (Schaul et al., 2015) and the UNREAL architecture (Jaderberg et al., 2017); see also Dosovitskiy and Koltun (2017), François-Lavet et al. (2018) and, more tangentially, van den Oord et al. (2018). Levine et al. (2017) and Chung et al. (2019) make explicit use of two-part network to derive more sample efficient deep reinforcement learning algorithms. Veeriah et al. (2019) use a meta-gradient approach to generate auxiliary tasks. The notion of augmenting an agent with side predictions is not new, with roots in TD models (Sutton, 1995), predictive state representations (Littman et al., 2002), and the Horde architecture (Sutton et al., 2011), itself inspired by the work of Selfridge (1959).
A number of works quantify or explain the usefulness of a representation. Parr et al. (2008) demonstrated that a good representation should support a good approximation of both reward and expected next state. We conjecture that the relaxed problem (4) trades these two quantities off in a principled fashion. Li et al. (2006); Abel et al. (2016) consider the approximation error that arises from state abstraction. More recently, Nachum et al. (2019) provide some interesting guarantees in the context of hierarchical reinforcement learning, while Such et al. (2019) visualizes the representations learned by Atari-playing agents. Finally, Bertsekas (2018) remarks on the two-part network we study here.
6 Conclusion
In this paper we studied the notion of an adversarial value function, derived from a geometric perspective on representation learning in RL. Our work shows that adversarial value functions exhibit interesting structure, and are good auxiliary tasks when learning a representation of an environment. We believe our work to be the first to provide formal evidence as to the usefulness of predicting value functions for shaping an agent’s representation.
Our work opens up the possibility of automatically generating auxiliary tasks in deep reinforcement learning, analogous to how deep learning itself enabled a move away from hand-crafted features. To do so, we expect that a number of practical challenges will need to be overcome:
Off-policy learning. A practical implementation will require learning AVFs concurrently with the main task. Doing so results in off-policy learning, whose negative effects are well-documented even in recent applications (e.g. van Hasselt et al., 2018).
Policy parametrization. AVFs are the value function of deterministic policies. While a natural choice is to look for policies that maximize representation error, this poses the problem of how to parametrize the policies themselves. In particular, a policy parametrized using the representation φ may not provide a sufficient degree of “adversariality”.
Smoothness in the interest function. In continuous or large state spaces, it is desirable for interest functions to incorporate some degree of smoothness, rather than vary rapidly from state to state. It is not clear how to control this smoothness in a principled manner.
From a mathematical perspective, our formulation of the RLP was made with both convenience and geometry in mind. Conceptually, it may be interesting to consider our approach in other norms,
including the weighted norms used in approximation results. Practically, this would translate into an emphasis on “interesting” value functions, for example by giving additional weight to the optimal value function and its neighbouring AVFs.
7 Acknowledgements
The authors thank the many people who helped shape this project through discussions and feedback on early and late drafts: Lihong Li, George Tucker, Doina Precup, Ofir Nachum, Csaba Szepesvári, Georg Ostrovski, Marek Petrik, Marlos Machado, Tim Lillicrap, Danny Tarlow, Hugo Larochelle, Saurabh Kumar, Carles Gelada, Rémi Munos, David Silver, and André Barreto. Special thanks also to Philip Thomas and Scott Niekum, who gave this project its initial impetus.
8 Author Contributions
M.G.B., W.D., D.S., and N.L.R. conceptualized the representation learning problem. M.G.B., W.D., T.L., A.A.T., R.D., D.S., and N.L.R. contributed to the theoretical results. M.G.B., W.D., P.S.C., R.D., and C.L. performed experiments and collated results. All authors contributed to the writing. | 1. What is the focus of the paper, and how does it contribute to understanding the learning process?
2. What is the proposed method for representing learning, and how does it differ from previous approaches?
3. How effective are the auxiliary tasks designed using the proposed method, and what implications do they have for deep reinforcement learning?
4. What are the strengths and weaknesses of the paper regarding its innovation, clarity, and relevance to the field? | Review | Review
This paper is very intriguing. Although there is no conclusive empirical evidence of the usefulness of auxiliary tasks, their design and justification remain on the whole ad hoc. This paper describes a new method, based on geometric properties of the space of value functions, for representation learning. The results show that predicting adversarial value functions as auxiliary tasks leads to rich representations. Overall, this innovative perspective on representation learning is helpful for understanding the learning process, and the literature review shows that the authors are knowledgeable in this field. As the authors say, and I quote, their work may "open up the possibility of automatically generating auxiliary tasks in deep reinforcement learning". Here are my major concerns: the authors start describing their representation in part 2. A description of previous representations would make it easier for a reviewer to get a general idea before the new representation for RL is described. I would suggest the authors put "Related Work" in part 2 instead of part 5.
M.G.B., W.D., D.S., and N.L.R. conceptualized the representation learning problem. M.G.B., W.D., T.L., A.A.T., R.D., D.S., and N.L.R. contributed to the theoretical results. M.G.B., W.D., P.S.C., R.D., and C.L. performed experiments and collated results. All authors contributed to the writing. | 1. What is the main contribution of the paper regarding reinforcement learning?
2. What are the strengths of the paper, particularly in its theoretical analysis?
3. Do you have any questions or concerns regarding the paper's content?
4. How does the reviewer assess the paper's experimental results and their relation to the proposed approach?
5. Are there any potential connections between the paper's adversarial framework and other related works in model-based RL? | Review | Review
This paper studies the problem of learning useful representations for reinforcement learning through the lens of an adversarial framework. In particular, a good representation is identified as one that yields low linear value-function estimation error if an adversary is able to choose a value function (induced by a policy). The paper shows first that the only policies that should be considered are deterministic, and then identifies a narrower set of adversarial values, though the number is still exponential. I really liked the theoretical insights of this paper, and because of this I tend to vote for acceptance, though I claim that the experiments are too preliminary. Some more comments below:
1- In (1), highlight more clearly that \phi is the only optimization knob.
2- In terms of readability, it is unclear why Lemma 1 is useful until after I read the proof of Theorem 1 from the Appendix. Maybe consider saying why this Lemma is useful, or move things around.
3- Isn't the first half of Lemma 1 (solution lies in the set of extreme points) a very well-known result in linear programming? If yes, then be more clear that this is not new.
4- This adversarial framework reminds me a lot of the use of Wasserstein distance in model-based RL, whereby a good model is defined as one that yields low error in the context of adversarial choice of value functions (that are Lipschitz). Do you see a synergy here? Is there any deep connection? Also, can you clarify why you used model-based algorithms in experiments? There is no mention of model-based stuff until we get to experiments, so I am wondering if there is a connection.
5- For the proof of Theorem 2 in the appendix, maybe do define idempotent matrices and their properties. I checked the proofs of the first two theorems and otherwise they seem sound and clear.
6- The part where the paper falls short is the experiments. It could still be OK if the authors showed a clear path towards extending the idea to function approximation, but this is lacking. Plus, the method cannot really beat the baseline even in the toy domain. Any comment on challenges when going to function approximation?
---- Post rebuttal: I am happy to see that the authors are willing to add a section that more seriously tackles/starts to think about challenges when going to arbitrary function approximators in practice. As for the point about a potential model-based RL result, Farahmand and friends was indeed the paper that I had in mind. Also, because of the focus on linearity, Parr and friends 2008 on linear models shows a deeper connection/equivalence, and so could be useful. It would be very neat if there was a deeper connection. If one cannot be shown in this paper, can a conjecture still be made?
NIPS | Title
A Geometric Perspective on Optimal Representations for Reinforcement Learning
Abstract
We propose a new perspective on representation learning in reinforcement learning based on geometric properties of the space of value functions. We leverage this perspective to provide formal evidence regarding the usefulness of value functions as auxiliary tasks. Our formulation considers adapting the representation to minimize the (linear) approximation of the value function of all stationary policies for a given environment. We show that this optimization reduces to making accurate predictions regarding a special class of value functions which we call adversarial value functions (AVFs). We demonstrate that using value functions as auxiliary tasks corresponds to an expected-error relaxation of our formulation, with AVFs a natural candidate, and identify a close relationship with proto-value functions (Mahadevan, 2005). We highlight characteristics of AVFs and their usefulness as auxiliary tasks in a series of experiments on the four-room domain.
1 Introduction
A good representation of state is key to practical success in reinforcement learning. While early applications used hand-engineered features (e.g. Samuel, 1959), these have proven onerous to generate and difficult to scale. As a result, methods in representation learning have flourished, ranging from basis adaptation (Menache et al., 2005; Keller et al., 2006), gradient-based learning (Yu and Bertsekas, 2009), proto-value functions (Mahadevan and Maggioni, 2007), feature generation schemes such as tile coding (Sutton, 1996) and the domain-independent features used in some Atari 2600 game-playing agents (Bellemare et al., 2013; Liang et al., 2016), and nonparametric methods (Ernst et al., 2005; Farahmand et al., 2016; Tosatto et al., 2017). Today, the method of choice is deep learning. Deep learning has made its mark by showing it can learn complex representations of relatively unprocessed inputs using gradient-based optimization (Tesauro, 1995; Mnih et al., 2015; Silver et al., 2016).
Most current deep reinforcement learning methods augment their main objective with additional losses called auxiliary tasks, typically with the aim of facilitating and regularizing the representation learning process. The UNREAL algorithm, for example, makes predictions about future pixel values (Jaderberg et al., 2017); recent work approximates a one-step transition model to achieve a similar effect (François-Lavet et al., 2018; Gelada et al., 2019). The good empirical performance of distributional reinforcement learning (Bellemare et al., 2017) has also been attributed to representation learning effects, with recent visualizations supporting this claim (Such et al., 2019). However, while there is now conclusive empirical evidence of the usefulness of auxiliary tasks, their design and justification remain on the whole ad-hoc. One of our main contributions is to provide a formal framework in which to reason about auxiliary tasks in reinforcement learning.
We begin by formulating an optimization problem whose solution is a form of optimal representation. Specifically, we seek a state representation from which we can best approximate the value function of any stationary policy for a given Markov Decision Process. Simultaneously, the largest approximation
error in that class serves as a measure of the quality of the representation. While our approach may appear naive – in real settings, most policies are uninteresting and hence may distract the representation learning process – we show that our representation learning problem can in fact be restricted to a special subset of value functions which we call adversarial value functions (AVFs). We then characterize these adversarial value functions and show they correspond to deterministic policies that either minimize or maximize the expected return at each state, based on the solution of a network-flow optimization derived from an interest function δ.
A consequence of our work is to formalize why predicting value function-like objects is helpful in learning representations, as has been argued in the past (Sutton et al., 2011, 2016). We show how using these predictions as auxiliary tasks can be interpreted as a relaxation of our optimization problem. From our analysis, we hypothesize that auxiliary tasks that resemble adversarial value functions should give rise to good representations in practice. We complement our theoretical results with an empirical study in a simple grid world environment, focusing on the use of deep learning techniques to learn representations. We find that predicting adversarial value functions as auxiliary tasks leads to rich representations.
2 Setting
We consider an environment described by a Markov Decision Process 〈X, A, r, P, γ〉 (Puterman, 1994); X and A are finite state and action spaces, P : X × A → P(X) is the transition function, γ the discount factor, and r : X → R the reward function. For a finite set S, write P(S) for the probability simplex over S. A (stationary) policy π is a mapping X → P(A), also denoted π(a | x). We denote the set of policies by P = P(A)^X. We combine a policy π with the transition function P to obtain the state-to-state transition function P^π(x′ | x) := Σ_{a∈A} π(a | x) P(x′ | x, a). The value function V^π describes the expected discounted sum of rewards obtained by following π:
V^\pi(x) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t r(x_t) \,\Big|\, x_0 = x,\; x_{t+1} \sim P^\pi(\cdot \mid x_t)\Big].
The value function satisfies Bellman’s equation (Bellman, 1957): V^π(x) = r(x) + γ E_{P^π} V^π(x′). Assuming there are n = |X| states, we view r and V^π as vectors in R^n and P^π ∈ R^{n×n}, such that
V^\pi = r + \gamma P^\pi V^\pi = (I - \gamma P^\pi)^{-1} r.
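The matrix identity above can be checked directly on small MDPs. The following is a minimal numpy sketch (ours, not from the paper); the function names and the (n, A, n) transition-tensor layout are our own choices.

```python
import numpy as np

def policy_transition(P, pi):
    # P: (n, A, n) tensor with P[x, a, y] = P(y | x, a); pi: (n, A) policy.
    # Returns P^pi with entries sum_a pi(a|x) P(y|x, a).
    return np.einsum('xay,xa->xy', P, pi)

def value_function(P, r, pi, gamma=0.9):
    # Solve V^pi = (I - gamma * P^pi)^{-1} r exactly.
    P_pi = policy_transition(P, pi)
    return np.linalg.solve(np.eye(len(r)) - gamma * P_pi, r)
```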
A d-dimensional representation is a mapping φ : X → Rd; φ(x) is the feature vector for state x. We write Φ ∈ Rn×d to denote the matrix whose rows are φ(X ), and with some abuse of notation denote the set of d-dimensional representations by R ≡ Rn×d. For a given representation and weight vector θ ∈ Rd, the linear approximation for a value function is
\hat{V}_{\phi,\theta}(x) := \phi(x)^\top \theta. \quad (1)
We consider the approximation minimizing the uniformly weighted squared error
\big\| \hat{V}_{\phi,\theta} - V^\pi \big\|_2^2 = \sum_{x \in \mathcal{X}} \big(\phi(x)^\top \theta - V^\pi(x)\big)^2.
We denote by \hat{V}^\pi_\phi the projection of V^\pi onto the linear subspace H = \{ \Phi\theta : \theta \in \mathbb{R}^d \}.
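For concreteness, here is a small numpy sketch (ours, not from the paper) of that projection: given a feature matrix Phi and a target value vector V, the least-squares weights and the projected value function are computed as follows. The squared norm of the residual is then the approximation error used later as L(φ; π).

```python
import numpy as np

def project_value(Phi, V):
    # Phi: (n, d) feature matrix; V: (n,) value vector.
    # Returns the projection of V onto H = {Phi @ theta} and the weights theta.
    theta, *_ = np.linalg.lstsq(Phi, V, rcond=None)
    return Phi @ theta, theta
```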
2.1 Two-Part Networks
Most deep networks used in value-based reinforcement learning can be modelled as two interacting parts φ and θ which give rise to a linear approximation (Figure 1, left). Here, the representation φ can also be adjusted and is almost always nonlinear in x. Two-part networks are a simple framework in which to study the behaviour of representation learning in deep reinforcement learning. We will especially consider the use of φ(x) to make additional predictions, called auxiliary tasks following common usage, and whose purpose is to improve or stabilize the representation.
We study two-part networks in an idealized setting where the length d of φ(x) is fixed and smaller than n, but the mapping is otherwise unconstrained. Even this idealized design offers interesting
problems to study. We might be interested in sharing a representation across problems, as is often done in transfer or continual learning. In this context, auxiliary tasks may inform how the value function should generalize to these new problems. In many problems of interest, the weights θ can also be optimized more efficiently than the representation itself, warranting the view that the representation should be adapted using a different process (Levine et al., 2017; Chung et al., 2019).
Note that a trivial “value-as-feature” representation exists for the single-policy optimization problem
\text{minimize } \big\| \hat{V}^\pi_\phi - V^\pi \big\|_2^2 \quad \text{w.r.t. } \phi \in \mathcal{R};
this approximation sets φ(x) = V^π(x) and θ = 1. In this paper we take the stance that this is not a satisfying representation, and that a good representation should be in the service of a broader goal (e.g. control, transfer, or fairness).
3 Representation Learning by Approximating Value Functions
We measure the quality of a representation φ in terms of how well it can approximate all possible value functions, formalized as the representation error
L(\phi) := \max_{\pi \in \mathcal{P}} L(\phi; \pi), \qquad L(\phi; \pi) := \big\| \hat{V}^\pi_\phi - V^\pi \big\|_2^2.
We consider the problem of finding the representation φ ∈ R minimizing L(φ):
\text{minimize } \max_{\pi \in \mathcal{P}} \big\| \hat{V}^\pi_\phi - V^\pi \big\|_2^2 \quad \text{w.r.t. } \phi \in \mathcal{R}. \quad (2)
In the context of our work, we call this the representation learning problem (RLP) and say that a representation φ∗ is optimal when it minimizes the error in (2). Note that L(φ) (and hence φ∗) depends on characteristics of the environment, in particular on both reward and transition functions.
We consider the RLP from a geometric perspective (Figure 1, right). Dadashi et al. (2019) showed that the set of value functions achieved by the set of policies P , denoted
V := {V π ∈ Rn : π ∈ P}, forms a (possibly nonconvex) polytope. As previously noted, a representation φ defines a subspace H of possible value approximations. The maximal error is achieved by the value function in V which is furthest along the subspace normal to H , since V̂ πφ is the orthogonal projection of V π .
We say that V ∈ V is an extremal vertex if it is a vertex of the convex hull of V . We will make use of the relationship between directions δ ∈ Rd, the set of extremal vertices, and the set of deterministic policies. The following lemma, based on a well-known notion of duality from convex analysis (Boyd and Vandenberghe, 2004), states this relationship formally. Lemma 1. Let δ ∈ Rn and define the functional fδ(V ) := δ>V , with domain V . Then fδ is maximized by an extremal vertex U ∈ V , and there is a deterministic policy π for which V π = U . Furthermore, the set of directions δ ∈ Rn for which the maximum of fδ is achieved by multiple extremal vertices has Lebesgue measure zero in Rn.
Denote by Pv the set of policies corresponding to extremal vertices of V . We next derive an equivalence between the RLP and an optimization problem which only considers policies in Pv .
Theorem 1. For any representation φ ∈ R, the maximal approximation error measured over all value functions is the same as the error measured over the set of extremal vertices:
\max_{\pi \in \mathcal{P}} \big\| \hat{V}^\pi_\phi - V^\pi \big\|_2^2 = \max_{\pi \in \mathcal{P}_v} \big\| \hat{V}^\pi_\phi - V^\pi \big\|_2^2.
Theorem 1 indicates that we can find an optimal representation by considering a finite (albeit exponential) number of value functions: Each extremal vertex corresponds to the value function of some deterministic policy, of which there are at most an exponential number. We will call these adversarial value functions (AVFs), because of the minimax flavour of the RLP.
Solving the RLP allows us to provide quantifiable guarantees on the performance of certain value-based learning algorithms. For example, in the context of least-squares policy iteration (LSPI; Lagoudakis and Parr, 2003), minimizing the representation error L directly improves the performance bound. By contrast, we cannot have the same guarantee if φ is learned by minimizing the approximation error for a single value function. Corollary 1. Let φ∗ be an optimal representation in the RLP. Consider the sequence of policies π0, π1, . . . derived from LSPI using φ∗ to approximate V π0 , V π1 , . . . under a uniform sampling of the state-space. Then there exists an MDP-dependent constant C ∈ R such that
\limsup_{k \to \infty} \big\| V^* - V^{\pi_k} \big\|_2^2 \le C\, L(\phi^*).
This result is a direct application of the quadratic norm bounds given by Munos (2003), in whose work the constant is made explicit. We emphasize that the result is illustrative; our approach should enable similar guarantees in other contexts (e.g. Munos, 2007; Petrik and Zilberstein, 2011).
3.1 The Structure of Adversarial Value Functions
The RLP suggests that an agent trained to predict various value functions should develop a good state representation. Intuitively, one may worry that there are simply too many “uninteresting” policies, and that a representation learned from their value functions emphasizes the wrong quantities. However, the search for an optimal representation φ∗ is closely tied to the much smaller set of adversarial value functions (AVFs). The aim of this section is to characterize the structure of AVFs and show that they form an interesting subset of all value functions. From this, we argue that their use as auxiliary tasks should also produce structured representations.
From Lemma 1, recall that an AVF is geometrically defined using a vector δ ∈ R^n and the functional f_δ(V) := δ^⊤V, which the AVF maximizes. Since f_δ is restricted to the value polytope, we can consider the equivalent policy-space functional g_δ : π ↦ δ^⊤V^π. Observe that
\max_{\pi \in \mathcal{P}} g_\delta(\pi) = \max_{\pi \in \mathcal{P}} \delta^\top V^\pi = \max_{\pi \in \mathcal{P}} \sum_{x \in \mathcal{X}} \delta(x) V^\pi(x). \quad (3)
In this optimization problem, the vector δ defines a weighting over the state space X ; for this reason, we call δ an interest function in the context of AVFs. Whenever δ ≥ 0 componentwise, we recover the optimal value function, irrespective of the exact magnitude of δ (Bertsekas, 2012). If δ(x) < 0 for some x, however, the maximization becomes a minimization. As the next result shows, the policy maximizing fδ(π) depends on a network flow dπ derived from δ and the transition function P . Theorem 2. Maximizing the functional gδ is equivalent to finding a network flow dπ that satisfies a reverse Bellman equation:
\max_{\pi \in \mathcal{P}} \delta^\top V^\pi = \max_{\pi \in \mathcal{P}} d_\pi^\top r, \qquad d_\pi = \delta + \gamma (P^\pi)^\top d_\pi.
For a policy π̃ maximizing the above we have
V^{\tilde{\pi}}(x) = r(x) + \gamma \begin{cases} \max_{a \in \mathcal{A}} \mathbb{E}_{x' \sim P(\cdot \mid x, a)} V^{\tilde{\pi}}(x') & \text{if } d_{\tilde{\pi}}(x) > 0, \\ \min_{a \in \mathcal{A}} \mathbb{E}_{x' \sim P(\cdot \mid x, a)} V^{\tilde{\pi}}(x') & \text{if } d_{\tilde{\pi}}(x) < 0. \end{cases}
Corollary 2. There are at most 2^n distinct adversarial value functions.
The vector d_π corresponds to the sum of discounted interest weights flowing through a state x, similar to the dual variables in the theory of linear programming for MDPs (Puterman, 1994). Theorem 2, by way of the corollary, implies that there are fewer AVFs (≤ 2^n) than deterministic policies (= |A|^n). It also implies that AVFs relate to a reward-driven purpose, similar to how the optimal value function describes the goal of maximizing return. We will illustrate this point empirically in Section 4.1.
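The flow equation in Theorem 2 is linear in d_π, so for a fixed policy it can be solved directly. Below is a small numpy sketch (our own illustration, not the paper's code) that computes d_π and can be used to inspect the sign structure discussed above.

```python
import numpy as np

def interest_flow(P_pi, delta, gamma=0.9):
    # Solve d_pi = delta + gamma * P_pi^T d_pi, i.e. d_pi = (I - gamma * P_pi^T)^{-1} delta.
    n = len(delta)
    return np.linalg.solve(np.eye(n) - gamma * P_pi.T, delta)

# Per Theorem 2, a maximizing policy acts greedily (value-maximizing) at states
# where the flow is positive and anti-greedily where it is negative.
```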
3.2 Relationship to Auxiliary Tasks
So far we have argued that solving the RLP leads to a representation which is optimal in a meaningful sense. However, solving the RLP seems computationally intractable: there are an exponential number of deterministic policies to consider (Prop. 1 in the appendix gives a quadratic formulation with quadratic constraints). Using interest functions does not mitigate this difficulty: the computational problem of finding the AVF for a single interest function is NP-hard, even when restricted to deterministic MDPs (Prop. 2 in the appendix).
Instead, in this section we consider a relaxation of the RLP and show that this relaxation describes existing representation learning methods, in particular those that use auxiliary tasks. Let ξ be some distribution over Rn. We begin by replacing the maximum in (2) by an expectation:
\text{minimize } \mathbb{E}_{V \sim \xi} \big\| \hat{V}_\phi - V \big\|_2^2 \quad \text{w.r.t. } \phi \in \mathcal{R}. \quad (4)
The use of the expectation offers three practical advantages over the use of the maximum. First, this leads to a differentiable objective which can be minimized using deep learning techniques. Second, the choice of ξ gives us an additional degree of freedom; in particular, ξ needs not be restricted to the value polytope. Third, the minimizer in (4) is easily characterized, as the following theorem shows. Theorem 3. Let u∗1, . . . , u∗d ∈ Rn be the principal components of the distribution ξ, in the sense that
u_i^* := \arg\max_{u \in B_i} \mathbb{E}_{V \sim \xi} (u^\top V)^2, \quad \text{where } B_i := \{ u \in \mathbb{R}^n : \|u\|_2^2 = 1,\ u^\top u_j^* = 0 \ \forall j < i \}.
Equivalently, u_1^*, . . . , u_d^* are the eigenvectors of E_ξ[V V^⊤] ∈ R^{n×n} with the d largest eigenvalues. Then the matrix [u_1^*, . . . , u_d^*] ∈ R^{n×d}, viewed as a map X → R^d, is a solution to (4). When the principal components are uniquely defined, any minimizer of (4) spans the same subspace as u_1^*, . . . , u_d^*.
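Theorem 3 gives a closed-form solution when ξ is approximated by a finite sample of value functions. A minimal numpy sketch (ours; the sample matrix and dimensions are placeholders):

```python
import numpy as np

def representation_from_values(V_samples, d):
    # V_samples: (k, n) array whose rows are value functions drawn from xi.
    # Returns an (n, d) representation whose columns are the top-d eigenvectors
    # of the sample second-moment matrix E[V V^T], as in Theorem 3.
    second_moment = V_samples.T @ V_samples / V_samples.shape[0]
    eigvals, eigvecs = np.linalg.eigh(second_moment)
    top = np.argsort(eigvals)[::-1][:d]
    return eigvecs[:, top]
```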
One may expect the quality of the learned representation to depend on how closely the distribution ξ relates to the RLP. From an auxiliary tasks perspective, this corresponds to choosing tasks that are in some sense useful. For example, generating value functions from the uniform distribution over the set of policies P , while a natural choice, may put too much weight on “uninteresting” value functions. In practice, we may further restrict ξ to a finite set V . Under a uniform weighting, this leads to a representation loss
L(\phi; \mathcal{V}) := \sum_{V \in \mathcal{V}} \big\| \hat{V}_\phi - V \big\|_2^2 \quad (5)
which corresponds to the typical formulation of an auxiliary-task loss (e.g. Jaderberg et al., 2017). In a deep reinforcement learning setting, one typically minimizes (5) using stochastic gradient descent methods, which scale better than batch methods such as singular value decomposition (but see Wu et al. (2019) for further discussion).
Our analysis leads us to conclude that, in many cases of interest, the use of auxiliary tasks produces representations that are close to the principal components of the set of tasks under consideration. If V is well-aligned with the RLP, minimizing L(φ;V ) should give rise to a reasonable representation. To demonstrate the power of this approach, in Section 4 we will study the case when the set V is constructed by sampling AVFs – emphasizing the policies that support the solution to the RLP.
3.3 Relationship to Proto-Value Functions
Proto-value functions (Mahadevan and Maggioni, 2007, PVF) are a family of representations which vary smoothly across the state space. Although the original formulation defines this representation as the largest-eigenvalue eigenvectors of the Laplacian of the transition function’s graphical structure, recent formulations use the top singular vectors of (I − γPπ)−1, where π is the uniformly random policy (Stachenfeld et al., 2014; Machado et al., 2017; Behzadian and Petrik, 2018).
In line with the analysis of the previous section, proto-value functions can also be interpreted as defining a set of value-based auxiliary tasks. Specifically, if we define an indicator reward function r_y(x) := I[x = y] and a set of value functions V = {(I − γP^π)^{-1} r_y}_{y∈X} with π the uniformly random policy, then any d-dimensional representation that minimizes (5) spans the same basis as the d-dimensional PVF (up to the bias term). This suggests a connection with hindsight experience replay (Andrychowicz et al., 2017), whose auxiliary tasks consist in reaching previously experienced states.
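Under this reading, the PVF auxiliary targets are simply the columns of the matrix (I − γP^π)^{-1}. A small numpy sketch of that construction (ours, not from the paper), whose output can be fed to the eigendecomposition sketch above:

```python
import numpy as np

def pvf_auxiliary_values(P_uniform, gamma=0.9):
    # P_uniform: (n, n) transition matrix of the uniformly random policy.
    # Column y is the value function for the indicator reward r_y(x) = 1[x == y].
    n = P_uniform.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P_uniform)
```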
4 Empirical Studies
In this section we complement our theoretical analysis with an experimental study. In turn, we take a closer look at 1) the structure of adversarial value functions, 2) the shape of representations learned using AVFs, and 3) the performance profile of these representations in a control setting. Our eventual goal is to demonstrate that the RLP, which is based on approximating value functions, gives rise to representations that are both interesting and comparable to previously proposed schemes. Our concrete instantiation (Algorithm 1) uses the representation loss (5). As-is, this algorithm is of limited practical relevance (our AVFs are learned using a tabular representation) but we believe provides an inspirational basis for further developments.
Algorithm 1 Representation learning using AVFs
Input: k – desired number of AVFs, d – desired number of features.
1: Sample δ_1, . . . , δ_k ∼ [−1, 1]^n
2: Compute µ_i = argmax_π δ_i^⊤ V^π using a policy gradient method
3: Find φ* = argmin_φ L(φ; {V^{µ_1}, . . . , V^{µ_k}}) (Equation 5)
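A runnable end-to-end sketch of Algorithm 1 for very small MDPs is given below. Two substitutions are ours and are not what the paper does: the policy-gradient step is replaced by brute-force search over deterministic policies (feasible only for tiny state/action spaces), and the representation is fit in closed form via Theorem 3 rather than by gradient descent on the loss (5).

```python
import numpy as np
from itertools import product

def avf_brute_force(P, r, delta, gamma=0.9):
    # argmax over deterministic policies of delta^T V^pi; P is (n, A, n), r is (n,).
    n, A, _ = P.shape
    best_score, best_V = -np.inf, None
    for actions in product(range(A), repeat=n):
        P_pi = P[np.arange(n), np.array(actions)]      # (n, n) induced transition matrix
        V = np.linalg.solve(np.eye(n) - gamma * P_pi, r)
        score = delta @ V
        if score > best_score:
            best_score, best_V = score, V
    return best_V

def avf_representation(P, r, k, d, gamma=0.9, seed=0):
    # Algorithm 1 with our substitutions: sample interest functions, compute their
    # AVFs, then take the top-d principal directions of the resulting value matrix.
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    deltas = rng.uniform(-1.0, 1.0, size=(k, n))
    V_targets = np.stack([avf_brute_force(P, r, dlt, gamma) for dlt in deltas])
    moment = V_targets.T @ V_targets / k
    eigvals, eigvecs = np.linalg.eigh(moment)
    return eigvecs[:, np.argsort(eigvals)[::-1][:d]]   # (n, d) features
```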
We perform all of our experiments within the four-room domain (Sutton et al., 1999; Solway et al., 2014; Machado et al., 2017, Figure 2, see also Appendix H.1).
We consider a two-part network where we pretrain φ end-to-end to predict a set of value functions. Our aim here is to compare the effects of using different sets of value functions, including AVFs, on the learned representation. As our focus is on the efficient use of a d-dimensional representation (with d < n, the number of states), we encode individual states as one-hot vectors and map them into φ(x) without capacity constraints. Additional details may be found in Appendix H.
4.1 Adversarial Value Functions
Our first set of results studies the structure of adversarial value functions in the four-room domain. We generated interest functions by assigning a value δ(x) ∈ {−1, 0, 1} uniformly at random to each state x (Figure 2, left). We restricted δ to these discrete choices for illustrative purposes.
We then used model-based policy gradient (Sutton et al., 2000) to find the policy maximizing Σ_{x∈X} δ(x)V^π(x). We observed some local minima or accumulation points but as a whole reasonable solutions were found. The resulting network flow and AVF for a particular sample are shown in Figure 2. For most states, the signs of δ and d_π agree; however, this is not true of all states (larger version and more examples in appendix, Figures 6, 7). As expected, states for which d_π > 0 (respectively, d_π < 0) correspond to states maximizing (resp. minimizing) the value function. Finally, we remark on the “flow” nature of d_π: trajectories over minimizing states accumulate in corners or loops, while those over maximizing states flow to the goal. We conclude that AVFs exhibit interesting structure, and are generated by policies that are not random (Figure 2, right). As we will see next, this is a key differentiator in making AVFs good auxiliary tasks.
4.2 Representation Learning with AVFs
We next consider the representations that arise from training a deep network to predict AVFs (denoted AVF from here on). We sample k = 1000 interest functions and use Algorithm 1 to generate k AVFs.
We combine these AVFs into the representation loss (5) and adapt the parameters of the deep network using Rmsprop (Tieleman and Hinton, 2012).
We contrast the AVF-driven representation with one learned by predicting the value function of random deterministic policies (RP). Specifically, these policies are generated by assigning an action uniformly at random to each state. We also consider the value function of the uniformly random policy (VALUE). While we make these choices here for concreteness, other experiments yielded similar results (e.g. predicting the value of the optimal policy; appendix, Figure 8). In all cases, we learn a d = 16 dimensional representation, not including the bias unit.
Figure 3 shows the representations learned by the three methods. The features learned by VALUE resemble the value function itself (top left feature) or its negated image (bottom left feature). Coarsely speaking, these features capture the general distance to the goal but little else. The features learned by RP are of even worse quality. This is because almost all random deterministic policies cause the agent to avoid the goal (appendix, Figure 12). The representation learned by AVF, on the other hand, captures the structure of the domain, including paths between distal states and focal points corresponding to rooms or parts of rooms.
Although our focus is on the use of AVFs as auxiliary tasks to a deep network, we observe the same results when discovering a representation using singular value decomposition (Section 3.2), as described in Appendix I. All in all, our results illustrate that, among all value functions, AVFs are particularly useful auxiliary tasks for representation learning.
4.3 Learning the Optimal Policy
In a final set of experiments, we consider learning a reward-maximizing policy using a pretrained representation and a model-based version of the SARSA algorithm (Rummery and Niranjan, 1994; Sutton and Barto, 1998). We compare the value-based and AVF-based representations from the previous section (VALUE and AVF), and also proto-value functions (PVF; details in Appendix H.3).
We report the quality of the learned policies after training, as a function of d, the size of the representation. Our quality measure is the average return from the designated start state (bottom left). Results are provided in Figure 4 and Figure 13 (appendix). We observe a failure of the VALUE representation to provide a useful basis for learning a good policy, even as d increases;
while the representation is not rank-deficient, the features do not help reduce the approximation error.
In comparison, our AVF representations perform similarly to PVFs. Increasing the number of auxiliary tasks also leads to better representations; recall that PVF implicitly uses n = 104 auxiliary tasks.
5 Related Work
Our work takes inspiration from research in basis or feature construction for reinforcement learning. Ratitch and Precup (2004), Foster and Dayan (2002), Menache et al. (2005), Yu and Bertsekas (2009), Bhatnagar et al. (2013), and Song et al. (2016) consider methods for adapting parametrized basis functions using iterative schemes. Including Mahadevan and Maggioni (2007)’s proto-value functions, a number of works (we note Dayan, 1993; Petrik, 2007; Mahadevan and Liu, 2010; Ruan et al., 2015; Barreto et al., 2017) have used characteristics of the transition structure of the MDP to generate representations; these are the closest in spirit to our approach, although none use the reward or consider the geometry of the space of value functions. Parr et al. (2007) proposed constructing a representation from successive Bellman errors, Keller et al. (2006) used dimensionality reduction methods; finally Hutter (2009) proposes a universal scheme for selecting representations.
Deep reinforcement learning algorithms have made extensive use of auxiliary tasks to improve agent performance, beginning perhaps with universal value function approximators (Schaul et al., 2015) and the UNREAL architecture (Jaderberg et al., 2017); see also Dosovitskiy and Koltun (2017), François-Lavet et al. (2018) and, more tangentially, van den Oord et al. (2018). Levine et al. (2017) and Chung et al. (2019) make explicit use of two-part networks to derive more sample-efficient deep reinforcement learning algorithms. Veeriah et al. (2019) use a meta-gradient approach to generate auxiliary tasks. The notion of augmenting an agent with side predictions is not new, with roots in TD models (Sutton, 1995), predictive state representations (Littman et al., 2002), and the Horde architecture (Sutton et al., 2011), itself inspired by the work of Selfridge (1959).
A number of works quantify or explain the usefulness of a representation. Parr et al. (2008) demonstrated that a good representation should support a good approximation of both reward and expected next state. We conjecture that the relaxed problem (4) trades these two quantities off in a principled fashion. Li et al. (2006); Abel et al. (2016) consider the approximation error that arises from state abstraction. More recently, Nachum et al. (2019) provide some interesting guarantees in the context of hierarchical reinforcement learning, while Such et al. (2019) visualizes the representations learned by Atari-playing agents. Finally, Bertsekas (2018) remarks on the two-part network we study here.
6 Conclusion
In this paper we studied the notion of an adversarial value function, derived from a geometric perspective on representation learning in RL. Our work shows that adversarial value functions exhibit interesting structure, and are good auxiliary tasks when learning a representation of an environment. We believe our work to be the first to provide formal evidence as to the usefulness of predicting value functions for shaping an agent’s representation.
Our work opens up the possibility of automatically generating auxiliary tasks in deep reinforcement learning, analogous to how deep learning itself enabled a move away from hand-crafted features. To do so, we expect that a number of practical challenges will need to be overcome:
Off-policy learning. A practical implementation will require learning AVFs concurrently with the main task. Doing so results in off-policy learning, whose negative effects are well-documented even in recent applications (e.g. van Hasselt et al., 2018).
Policy parametrization. AVFs are the value function of deterministic policies. While a natural choice is to look for policies that maximize representation error, this poses the problem of how to parametrize the policies themselves. In particular, a policy parametrized using the representation φ may not provide a sufficient degree of “adversariality”.
Smoothness in the interest function. In continuous or large state spaces, it is desirable for interest functions to incorporate some degree of smoothness, rather than vary rapidly from state to state. It is not clear how to control this smoothness in a principled manner.
From a mathematical perspective, our formulation of the RLP was made with both convenience and geometry in mind. Conceptually, it may be interesting to consider our approach in other norms,
including the weighted norms used in approximation results. Practically, this would translate into an emphasis on “interesting” value functions, for example by giving additional weight to the optimal value function and its neighbouring AVFs.
7 Acknowledgements
The authors thank the many people who helped shape this project through discussions and feedback on early and late drafts: Lihong Li, George Tucker, Doina Precup, Ofir Nachum, Csaba Szepesvári, Georg Ostrovski, Marek Petrik, Marlos Machado, Tim Lillicrap, Danny Tarlow, Hugo Larochelle, Saurabh Kumar, Carles Gelada, Rémi Munos, David Silver, and André Barreto. Special thanks also to Philip Thomas and Scott Niekum, who gave this project its initial impetus.
8 Author Contributions
M.G.B., W.D., D.S., and N.L.R. conceptualized the representation learning problem. M.G.B., W.D., T.L., A.A.T., R.D., D.S., and N.L.R. contributed to the theoretical results. M.G.B., W.D., P.S.C., R.D., and C.L. performed experiments and collated results. All authors contributed to the writing. | 1. What is the focus of the paper regarding value functions and policy approximation?
2. What are the strengths of the proposed approach, particularly in reducing maximization over policies?
3. Do you have any concerns or suggestions regarding the presentation and clarity of the paper's content?
4. How does the reviewer assess the novelty and relevance of the paper's contributions?
5. Are there any typos or errors in the paper that need correction? | Review | Review
The paper claims that the best representation should minimize the maximum error in approximating all the possible value functions, and not a single one that pertains to a given policy. As such, the authors establish further results, which reduce the maximization over all policies to the maximization over the finite set of extremal vertices, which corresponds to (distinct in Lebesgue measure) deterministic policies. These value functions are termed adversarial (although they are adversarial only from an approximation point of view). I enjoyed reading the paper. Here are some comments, which may help improve the paper.
- The presented theory is indeed linear, but it is coated as being generic through some introduction on the representation mapping \phi and the so-called "two-part approximation". The distinction is that in a generic case, \phi becomes part of the computation through its gradient and such, which is not the case here. I am personally not favouring the presented perspective, yet it is not a real issue to argue against. Note that in all classic sources (e.g. Bertsekas and Tsitsiklis's numerous books and papers) V = \Phi \theta is the definition of linear ADP. There is no need for saying otherwise.
- Figure 1 (Right): I would add the axes. The current figure is hard to understand. Is it the value space for a 2-dimensional state space? i.e., V^{pi}(x(1)) vs V^{pi}(x(2))?
- L94 --> \mathcal{V} definition: looks like it should be "for all", not "for some". Additionally, mathematical statements should be succinct. I would remove it if you mean "for all".
- Supplementary material -> the first equation after equation 6: the subscripts of the v's should be superscripts.
- L107: \phi should be in R^d and not \mathcal{R}=R^{nxd}. In other words, \phi is a row of \Phi.
- \delta is called an interest function in Section 3.1, yet it was called a direction in Lemma 1.
NIPS | Title
On Regret with Multiple Best Arms
Abstract
We study a regret minimization problem with the existence of multiple best/near-optimal arms in the multi-armed bandit setting. We consider the case when the number of arms/actions is comparable or much larger than the time horizon, and make no assumptions about the structure of the bandit instance. Our goal is to design algorithms that can automatically adapt to the unknown hardness of the problem, i.e., the number of best arms. Our setting captures many modern applications of bandit algorithms where the action space is enormous and the information about the underlying instance/structure is unavailable. We first propose an adaptive algorithm that is agnostic to the hardness level and theoretically derive its regret bound. We then prove a lower bound for our problem setting, which indicates: (1) no algorithm can be minimax optimal simultaneously over all hardness levels; and (2) our algorithm achieves a rate function that is Pareto optimal. With additional knowledge of the expected reward of the best arm, we propose another adaptive algorithm that is minimax optimal, up to polylog factors, over all hardness levels. Experimental results confirm our theoretical guarantees and show advantages of our algorithms over the previous state-of-the-art.
1 Introduction
Multi-armed bandit problems describe exploration-exploitation trade-offs in sequential decision making. Most existing bandit algorithms tend to provide regret guarantees when the number of available arms/actions is smaller than the time horizon. In modern applications of bandit algorithms, however, the action space is usually comparable or even larger than the allowed time horizon, so that many existing bandit algorithms cannot even complete their initial exploration phases. Consider a problem of personalized recommendations, for example. For most users, the total number of movies, or even the number of sub-categories, far exceeds the number of times they visit a recommendation site. Similarly, the enormous amount of user-generated content on YouTube and Twitter makes it increasingly challenging to make optimal recommendations. The tension between a very large action space and a limited time horizon poses a realistic problem in which deploying algorithms that converge to an optimal solution over an asymptotically long time horizon does not give satisfying results. There is a need to design algorithms that can exploit the highest possible reward within a limited time horizon. Past work has partially addressed this challenge. The quantile regret proposed in [12] calculates regret with respect to a satisfactory action rather than the best one. The discounted regret analyzed in [25, 24] is used to emphasize short-time-horizon performance. Other existing works consider the extreme case when the number of actions is indeed infinite, and tackle such problems with one of two main assumptions: (1) the discovery of a near-optimal/best arm follows some probability measure with known parameters [6, 30, 4, 15]; (2) there exists a smooth function representing the mean-payoff over a continuous subset [1, 20, 19, 8, 23, 17]. However, in many situations, neither assumption may be realistic. We make minimal assumptions in this paper. We study the regret minimization problem over a time horizon T, which might be unknown, with respect
to a bandit instance with n total arms, out of which m are best/near-optimal arms. We emphasize that the allowed time horizon and the given bandit instance should be viewed as features of one problem and together they indicate an intrinsic hardness level. We consider the case when the number of arms n is comparable or larger than the time horizon T so that no standard algorithm provides satisfying result. Our goal is to design algorithms that could adapt to the unknown m and achieve optimal regret.
1.1 Contributions and paper organization
We make the following contributions. In Section 2, we formally define the regret minimization problem that represents the tension between a very large action space and a limited time horizon; and capture the hardness level in terms of the number of best arms. We provide an adaptive algorithm that is agnostic to the unknown number of best arms in Section 3, and theoretically derive its regret bound. In Section 4, we prove a lower bound for our problem setting that indicates that there is no algorithm that can be optimal simultaneously over all hardness levels. Our lower bound also shows that our algorithm provided in Section 3 is Pareto optimal. With additional knowledge of the expected reward of the best arm, in Section 5, we provide an algorithm that achieves the non-adaptive minimax optimal regret, up to polylog factors, without the knowledge of the number of best arms. Experiments conducted in Section 6 confirm our theoretical guarantees and show advantages of our algorithms over previous state-of-the-art. We conclude our paper in Section 7. Most of the proofs are deferred to the Appendix due to lack of space.
1.2 Related work
Time sensitivity and large action space. As bandit models are getting much more complex, usually with large or infinite action spaces, researchers have begun to pay attention to tradeoffs between regret and time horizons when deploying such models. [13] study a linear bandit problem with ultra-high dimension, and provide algorithms that, under various assumptions, can achieve good reward within a short time horizon. [24] also take time horizon into account and model time preference by analyzing a discounted regret. [12] consider a quantile regret minimization problem where they define their regret with respect to the expected reward ranked at the (1− ρ)-th quantile. One could easily transfer their problem to our setting; however, their regret guarantee is sub-optimal. [18, 4] also consider the problem with m best/near-optimal arms with no other assumptions, but they focus on the pure exploration setting; [4] additionally requires the knowledge of m. Another line of research considers the extreme case when the number of arms is infinite, but with some known regularities. [6] proposes an algorithm with a minimax optimality guarantee under the situation where the reward of each arm strictly follows a Bernoulli distribution; [27] provides an anytime algorithm that works under the same assumption. [30] relaxes the assumption on the Bernoulli reward distribution; however, some other parameters are assumed to be known in their setting.
Continuum-armed bandit. Many papers also study bandit problems with continuous action spaces, where they embed each arm x into a bounded subset X ⊆ R^d and assume there exists a smooth function f governing the mean-payoff for each arm. This setting was first introduced by [1]. When the smoothness parameters are known to the learner or under various assumptions, there exist algorithms [20, 19, 8] with near-optimal regret guarantees. When the smoothness parameters are unknown, however, [23] proves a lower bound indicating no strategy can be optimal simultaneously over all smoothness classes; under extra information, they provide adaptive algorithms with near-optimal regret guarantees. Although achieving optimal regret for all settings is impossible, [17] design adaptive algorithms and prove that they are Pareto optimal. Our algorithms are mainly inspired by the ones in [17, 23]. A closely related line of work [28, 16, 5, 26] aims at minimizing simple regret in the continuum-armed bandit setting.
Adaptivity to unknown parameters. [9] argues the awareness of regularity is flawed and one should design algorithms that can adapt to the unknown environment. In situations where the goal is pure exploration or simple regret minimization, [18, 28, 16, 5, 26] achieve near-optimal guarantees with unknown regularity because their objectives trade-off exploitation in favor of exploration. In the case of cumulative regret minimization, however, [23] shows no strategy can be optimal simultaneously over all smoothness classes. In special situations or under extra information, [9, 10, 23] provide algorithms that adapt in different ways. [17] borrows the concept of Pareto optimality from economics and provide algorithms with rate functions that are Pareto optimal. Adaptivity is studied in statistics
as well: in some cases, only additional logarithmic factors are required [22, 7]; in others, however, there exists an additional polynomial cost of adaptation [11].
2 Problem statement and notation
We consider the multi-armed bandit instance ν = (ν_1, . . . , ν_n) with n probability distributions with means µ_i = E_{X∼ν_i}[X] ∈ [0, 1]. Let µ⋆ = max_{i∈[n]} µ_i be the highest mean and S⋆ = {i ∈ [n] : µ_i = µ⋆} denote the subset of best arms.¹ The cardinality |S⋆| = m is unknown to the learner. We could also generalize our setting to S′⋆ = {i ∈ [n] : µ_i ≥ µ⋆ − ε(T)} with unknown |S′⋆| (i.e., situations where there is an unknown number of near-optimal arms). Setting ε to be dependent on T is to avoid an additive term linear in T, e.g., ε ≤ 1/√T ⇒ εT ≤ √T. All theoretical results and algorithms presented in this paper are applicable to this generalized setting with minor modifications. For ease of exposition, we focus on the case with multiple best arms throughout the paper. At each time step t ∈ [T], the algorithm/learner selects an action A_t ∈ [n] and receives an independent reward X_t ∼ ν_{A_t}. We assume that X_t − µ_{A_t} is (1/2)-sub-Gaussian conditioned on A_t.² We measure the success of an algorithm through the expected cumulative (pseudo) regret:
R_T = T \cdot \mu^\star - \mathbb{E}\Big[\sum_{t=1}^{T} \mu_{A_t}\Big].
We useR(T, n,m) to denote the set of regret minimization problems with allowed time horizon T and any bandit instance ν with n total arms and m best arms.3 We emphasize that T is part of the problem instance. We are particularly interested in the case when n is comparable or even larger than T , which captures many modern applications where the available action space far exceeds the allowed time horizon. Although learning algorithms may not be able to pull each arm once, one should notice that the true/intrinsic hardness level of the problem could be viewed as n/m: selecting a subset uniformly at random with cardinality Θ(n/m) guarantees, with constant probability, the access to at least one best arm; but of course it is impossible to do this without knowing m. We quantify the intrinsic hardness level over a set of regret minimization problemsR(T, n,m) as
ψ(R(T, n, m)) = inf{α ≥ 0 : n/m ≤ 2T^α},
where the constant 2 in front of T^α is added to avoid the otherwise trivial case with all best arms when the infimum is 0. ψ(R(T, n, m)) is used here as it captures the minimax optimal regret over the set of regret minimization problems R(T, n, m), as explained later in our review of the MOSS algorithm and the lower bound. As smaller ψ(R(T, n, m)) indicates easier problems, we then define the family of regret minimization problems with hardness level at most α as
H_T(α) = {∪ R(T, n, m) : ψ(R(T, n, m)) ≤ α},
with α ∈ [0, 1]. Although T is necessary to define a regret minimization problem, we actually encode the hardness level into a single parameter α, which captures the tension between the complexity of the bandit instance at hand and the allowed time horizon T: problems with different time horizons but the same α are equally difficult in terms of the achievable minimax regret (the exponent of T). We thus mainly study problems with T large enough so that we could mainly focus on the polynomial terms of T. We are interested in designing algorithms with minimax guarantees over H_T(α), but without the knowledge of α.
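As a quick illustration (ours, not from the paper), the hardness level above has the closed form ψ = max{0, log(n/(2m))/log T} for T ≥ 2:

```python
import numpy as np

def hardness_level(T, n, m):
    # psi(R(T, n, m)) = inf{alpha >= 0 : n/m <= 2 * T**alpha}.
    return max(0.0, np.log(n / (2.0 * m)) / np.log(T))

# e.g. hardness_level(10_000, 1_000_000, 100) ~= 0.92: a million arms with one
# hundred best arms over ten thousand rounds behaves like a T^0.92-armed problem.
```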
MOSS and upper bound. In the classical setting, MOSS, designed by [2] and further generalized to the sub-Gaussian case [21] and improved in terms of constant factors [14], achieves the minimax optimal regret. In this paper, we will use MOSS as a subroutine with regret upper bound O(√(nT)) when T ≥ n. For any problem in H_T(α) with known α, one could run MOSS on a subset selected uniformly at random with cardinality Õ(T^α) and achieve regret Õ(T^{(1+α)/2}).
¹ Throughout the paper, we denote by [K] the set {1, . . . , K} for any positive integer K.
² We say a random variable X is σ-sub-Gaussian if E[exp(λX)] ≤ exp(σ²λ²/2) for all λ ∈ R.
³ Our setting could be generalized to the case with infinite arms: one can consider embedding arms into an arm space X and let p be the probability that an arm sampled uniformly at random is (near-)optimal. 1/p will then serve a similar role as n/m does in the original definition.
Lower bound. The lower bound Ω(√(nT)) in the classical setting does not work for our setting as its proof heavily relies on the existence of a single best arm [21]. However, for problems in H_T(α), we do have a matching lower bound Ω(T^{(1+α)/2}) as one could always apply the standard lower bound on a bandit instance with n = ⌊T^α⌋ and m = 1. For a general value of m, a lower bound of the order Ω(√(T(n−m)/m)) = Ω(T^{(1+α)/2}) for the m-best-arms case could be obtained following a similar analysis in Chapter 15 of [21].
Although log T may appear in our bounds, throughout the paper, we focus on problems with T ≥ 2 as otherwise the bound is trivial.
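Since MOSS is used as a black-box subroutine throughout the paper, a self-contained sketch is useful for what follows. The index form below is one standard presentation of MOSS; the exact constants are an assumption rather than the paper's tuning.

```python
import numpy as np

def moss(pull, K, T):
    # Run MOSS on K arms for T rounds; `pull(i)` returns a reward sample from arm i.
    # Index: empirical mean + sqrt(max(0, log(T / (K * n_i))) / n_i).
    counts, sums, history = np.zeros(K), np.zeros(K), []
    for t in range(T):
        if t < K:
            arm = t                                   # pull each arm once first
        else:
            means = sums / counts
            bonus = np.sqrt(np.maximum(0.0, np.log(T / (K * counts))) / counts)
            arm = int(np.argmax(means + bonus))
        x = pull(arm)
        counts[arm] += 1
        sums[arm] += x
        history.append((arm, x))
    return history
```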
3 An adaptive algorithm
Algorithm 1 takes time horizon T and a user-specified β ∈ [1/2, 1] as input, and it is mainly inspired by [17]. Algorithm 1 operates in iterations with geometrically increasing length ∆T_i = 2^{p+i} with p = ⌈log₂ T^β⌉. At each iteration i, it restarts MOSS on a set S_i consisting of K_i = 2^{p+2−i} real arms selected uniformly at random plus a set of “virtual” mixture-arms (one from each of the 1 ≤ j < i previous iterations, none if i = 1). The mixture-arms are constructed as follows. After each iteration i, let p̂_i denote the vector of empirical sampling frequencies of the arms in that iteration (i.e., the k-th element of p̂_i is the number of times arm k, including all previously constructed mixture-arms, was sampled in iteration i divided by the total number of samples ∆T_i). The mixture-arm for iteration i is the p̂_i-mixture of the arms, denoted by ν̃_i. When MOSS samples from ν̃_i it first draws i_t ∼ p̂_i, then draws a sample from the corresponding arm ν_{i_t} (or ν̃_{i_t}). The mixture-arms provide a convenient summary of the information gained in the previous iterations, which is key to our theoretical analysis. Although our algorithm is working on fewer regular arms in later iterations, information summarized in mixture-arms is good enough to provide guarantees. We name our algorithm MOSS++ as it restarts MOSS at each iteration with past information summarized in mixture-arms. We provide an anytime version of Algorithm 1 in Appendix A.2 via the standard doubling trick.
Algorithm 1: MOSS++
Input: Time horizon T and user-specified parameter β ∈ [1/2, 1].
1: Set: p = ⌈log₂ T^β⌉, K_i = 2^{p+2−i} and ∆T_i = min{2^{p+i}, T}.
2: for i = 1, . . . , p do
3:   Run MOSS on a subset of arms S_i for ∆T_i rounds. S_i contains K_i real arms selected uniformly at random and the set of virtual mixture-arms from previous iterations, i.e., {ν̃_j}_{j<i}.
4:   Construct a virtual mixture-arm ν̃_i based on empirical sampling frequencies of MOSS above.
5: end for
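The following Python sketch mirrors the structure of MOSS++, reusing the `moss` helper from the earlier sketch. It is an illustration only: real arms are wrapped as index closures, mixture-arms replay a past iteration's empirical frequencies, and (unlike the algorithm statement above) each iteration's length is capped by the remaining budget so the sketch stops after T total pulls.

```python
import numpy as np
from math import ceil, log2

def moss_plus_plus(pull, n, T, beta, seed=0):
    # pull(i) samples real arm i in [n]; beta in [1/2, 1] is the user parameter.
    rng = np.random.default_rng(seed)
    p = ceil(log2(T ** beta))
    mixtures, spent, total_reward = [], 0, 0.0
    for i in range(1, p + 1):
        K_i = 2 ** (p + 2 - i)
        dT_i = min(2 ** (p + i), T - spent)        # paper: min{2^{p+i}, T}; we also cap at the budget
        if dT_i <= 0:
            break
        idx = rng.choice(n, size=min(K_i, n), replace=False)
        arms = [lambda j=j: pull(j) for j in idx] + list(mixtures)
        history = moss(lambda a: arms[a](), len(arms), dT_i)
        spent += dT_i
        total_reward += sum(x for _, x in history)
        freqs = np.bincount([a for a, _ in history], minlength=len(arms)) / len(history)
        frozen = list(arms)                         # freeze this iteration's arm set
        mixtures.append(lambda f=freqs, fa=frozen: fa[rng.choice(len(fa), p=f)]())
    return total_reward
```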
3.1 Analysis and discussion
We use µS = maxν∈S{EX∼ν [X]} to denote the highest expected reward over a set of distributions/arms S. For any algorithm that only works on S, we can decompose the regret into approximation error and learning error:
R_T = \underbrace{T \cdot (\mu^\star - \mu_S)}_{\text{approximation error due to the selection of } S} \;+\; \underbrace{T \cdot \mu_S - \mathbb{E}\Big[\sum_{t=1}^{T} \mu_{A_t}\Big]}_{\text{learning error due to the sampling rule } \{A_t\}_{t=1}^{T}}. \quad (1)
This type of regret decomposition was previously used in [20, 3, 17] to deal with the continuum-armed bandit problem. We consider here a probabilistic version, with randomness in the selection of S, for the classical setting.
The main idea behind providing guarantees for MOSS++ is to decompose its regret at each iteration, using Eq. (1), and then bound the expected approximation error and learning error separately. The expected learning error at each iteration could always be controlled as Õ(T β) thanks to regret guarantees for MOSS and specifically chosen parameters p, Ki, ∆Ti. Let i? be the largest integer such that Ki ≥ 2Tα log √ T still holds. The expected approximation error in iteration i ≤ i? could be
upper bounded by √T following an analysis of the hypergeometric distribution. As a result, the expected regret in iteration i ≤ i⋆ is Õ(T^β). Since the mixture-arm ν̃_{i⋆} is included in all following iterations, we could further bound the expected approximation error in iteration i > i⋆ by Õ(T^{1+α−β}) after a careful analysis of ∆T_i/∆T_{i⋆}. This intuition is formally stated and proved in Theorem 1.
Theorem 1. Running MOSS++ with time horizon T and a user-specified parameter β ∈ [1/2, 1] leads to the following regret upper bound:
\sup_{\omega \in H_T(\alpha)} R_T \le C (\log_2 T)^{5/2} \cdot T^{\min\{\max\{\beta,\, 1+\alpha-\beta\},\, 1\}},
where C is a universal constant.
where C is a universal constant. Remark 1. We primarily focus on the polynomial terms in T when deriving the bound, but put no effort in optimizing the polylog term. The 5/2 exponent of log2 T might be tightened as well.
The theoretical guarantee is closely related to the user-specified parameter β: when β > α, we suffer a multiplicative cost of adaptation Õ(T^{|(2β−α−1)/2|}), with β = (1 + α)/2 hitting the sweet spot, compared to the non-adaptive minimax regret; when β ≤ α, there are essentially no guarantees. One may hope to improve this result. However, our analysis in Section 4 indicates: (1) achieving minimax optimal regret for all settings simultaneously is impossible; and (2) the rate function achieved by MOSS++ is already Pareto optimal.
4 Lower bound and Pareto optimality
4.1 Lower bound
In this section, we show that designing algorithms with the non-adaptive minimax optimal guarantee over all values of α is impossible. We first state the result in the following general theorem.
Theorem 2. For any 0 ≤ α′ < α ≤ 1, assume T^α ≤ B and ⌊T^α⌋ − 1 ≥ max{T^α/4, 2}. If an algorithm is such that sup_{ω∈H_T(α′)} R_T ≤ B, then the regret of this algorithm is lower bounded on H_T(α):
\sup_{\omega \in H_T(\alpha)} R_T \ge 2^{-10}\, T^{1+\alpha} B^{-1}. \quad (2)
To give an interpretation of Theorem 2, we consider any algorithm/policy π together with regret minimization problems H_T(α′) and H_T(α) satisfying the corresponding requirements. On one hand, if algorithm π achieves a regret that is order-wise larger than Õ(T^{(1+α′)/2}) over H_T(α′), it is already not minimax optimal for H_T(α′). Now suppose π achieves a near-optimal regret, i.e., Õ(T^{(1+α′)/2}), over H_T(α′); then, according to Eq. (2), π must incur a regret of order at least Ω̃(T^{1/2+α−α′/2}) on one problem in H_T(α). This, on the other hand, makes algorithm π strictly sub-optimal over H_T(α).
4.2 Pareto optimality
We capture the performance of any algorithm by its dependence on polynomial terms of T in the asymptotic sense. Note that the hardness level of a problem is encoded in α. Definition 1. Let θ : [0, 1] → [0, 1] denote a non-decreasing function. An algorithm achieves the rate function θ if
\forall \varepsilon > 0,\ \forall \alpha \in [0, 1], \qquad \limsup_{T \to \infty} \frac{\sup_{\omega \in H_T(\alpha)} R_T}{T^{\theta(\alpha) + \varepsilon}} < +\infty.
Recall that a function θ′ is strictly smaller than another function θ in pointwise order if θ′(α) ≤ θ(α) for all α and θ′(α0) < θ(α0) for at least one value of α0. As there may not always exist a pointwise ordering over rate functions, following [17], we consider the notion of Pareto optimality over rate functions achieved by some algorithms. Definition 2. A rate function θ is Pareto optimal if it is achieved by an algorithm, and there is no other algorithm achieving a strictly smaller rate function θ′ in pointwise order. An algorithm is Pareto optimal if it achieves a Pareto optimal rate function.
Combining the results in Theorem 1 and Theorem 2 with the above definitions, we obtain the following result.
Theorem 3. The rate function achieved by MOSS++ with any β ∈ [1/2, 1], i.e.,
\[ \theta_\beta : \alpha \mapsto \min\{\max\{\beta,\; 1+\alpha-\beta\},\; 1\}, \tag{3} \]
is Pareto optimal.
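To get a concrete feel for the trade-off encoded by Eq. (3), the short script below (an illustration of ours, not part of the paper's experiments) tabulates θ_β(α) for a few choices of β; each β matches the non-adaptive benchmark (1 + α)/2 only at α = 2β − 1 and degrades away from it.

```python
import numpy as np

def theta(beta, alpha):
    """Rate function of MOSS++ from Eq. (3)."""
    return min(max(beta, 1 + alpha - beta), 1.0)

alphas = np.linspace(0, 1, 11)
for beta in (0.5, 0.6, 0.75, 0.9):
    rates = [theta(beta, a) for a in alphas]
    print(f"beta={beta:.2f}:", " ".join(f"{r:.2f}" for r in rates))
# The non-adaptive benchmark exponent is (1 + alpha) / 2.
```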
5 Learning with extra information
Although Section 4 gives negative results on designing algorithms that optimally adapt to all settings, one can actually design such an algorithm with extra information. In this section, we provide an algorithm that takes the expected reward of the best arm µ⋆ (or an estimate of it with error up to 1/√T) as extra information and achieves near minimax optimal regret over all settings simultaneously. Our algorithm is mainly inspired by [23].
5.1 Algorithm
We name our Algorithm 3 Parallel as it maintains ⌈log T⌉ instances of a subroutine, i.e., Algorithm 2, in parallel. Each subroutine SR_i is initialized with time horizon T and hardness level α_i = i/⌈log T⌉. We use T_{i,t} to denote the number of samples allocated to SR_i up to time t, and represent its empirical regret at time t as R̂_{i,t} = T_{i,t} · µ⋆ − Σ_{s=1}^{T_{i,t}} X_{i,s}, with X_{i,s} ∼ ν_{A_{i,s}} being the s-th empirical reward obtained by SR_i and A_{i,s} being the index of the s-th arm pulled by SR_i.
Algorithm 2: MOSS Subroutine (SR)
Input: Time horizon T and hardness level α.
1: Select a subset of arms S_α uniformly at random with |S_α| = ⌈2T^α log √T⌉ and run MOSS on S_α.
Parallel operates in iterations of length ⌈√T⌉. At the beginning of each iteration, i.e., at time t = i · ⌈√T⌉ for i ∈ {0} ∪ [⌈√T⌉ − 1], Parallel first selects the subroutine with the lowest (breaking ties arbitrarily) empirical regret so far, i.e., k = arg min_{i∈[⌈log T⌉]} R̂_{i,t}; it then resumes the learning process of SR_k, from where it halted, for another ⌈√T⌉ pulls. All the information is updated at the end of that iteration. An anytime version of Algorithm 3 is provided in Appendix C.3.
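For concreteness, the scheduling logic of Algorithms 2 and 3 can be sketched in a few dozen lines of Python. This is only an illustrative sketch, not the authors' implementation: the `pull` callback standing in for the bandit environment and the particular MOSS index formula used inside the subroutine are our own assumptions, and no constants are tuned.

```python
import numpy as np

class MOSSSubroutine:
    """Algorithm 2 (sketch): MOSS restricted to a random subset of about
    2 * T**alpha * log(sqrt(T)) arms.  `pull(arm)` is a hypothetical callback
    returning one stochastic reward in [0, 1] for the chosen arm."""

    def __init__(self, pull, n_arms, T, alpha, rng):
        k = int(np.ceil(2 * T**alpha * np.log(np.sqrt(T))))
        k = max(1, min(k, n_arms))
        self.arms = rng.choice(n_arms, size=k, replace=False)
        self.pull, self.T = pull, T
        self.means = np.zeros(k)     # empirical means of the sampled arms
        self.counts = np.zeros(k)    # number of pulls of each sampled arm
        self.reward_sum = 0.0        # total reward collected by this subroutine
        self.t = 0                   # total pulls made by this subroutine

    def _index(self, j):
        # MOSS-style index: empirical mean plus an exploration bonus.
        if self.counts[j] == 0:
            return np.inf
        bonus = np.sqrt(max(np.log(self.T / (len(self.arms) * self.counts[j])), 0.0)
                        / self.counts[j])
        return self.means[j] + bonus

    def run(self, budget):
        """Resume MOSS for `budget` more pulls, from where it halted."""
        for _ in range(budget):
            j = int(np.argmax([self._index(j) for j in range(len(self.arms))]))
            x = self.pull(self.arms[j])
            self.counts[j] += 1
            self.means[j] += (x - self.means[j]) / self.counts[j]
            self.reward_sum += x
            self.t += 1


def parallel(pull, n_arms, T, mu_star, seed=0):
    """Algorithm 3 (sketch): keep ceil(log T) subroutines and, every ceil(sqrt(T))
    rounds, resume the one with the lowest empirical regret T_i * mu_star - reward."""
    rng = np.random.default_rng(seed)
    p = int(np.ceil(np.log(T)))
    delta = int(np.ceil(np.sqrt(T)))
    subs = [MOSSSubroutine(pull, n_arms, T, (i + 1) / p, rng) for i in range(p)]
    t = 0
    while t < T:
        k = int(np.argmin([s.t * mu_star - s.reward_sum for s in subs]))
        subs[k].run(min(delta, T - t))
        t += delta
    return subs
```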
5.2 Analysis
As Parallel discretizes the hardness parameter over a grid with interval 1/⌈log T⌉, we first show that running the best subroutine alone leads to regret Õ(T^{(1+α)/2}).
Algorithm 3: Parallel
Input: Time horizon T and the optimal reward µ⋆.
1: Set p = ⌈log T⌉, ∆ = ⌈√T⌉ and t = 0.
2: for i = 1, . . . , p do
3:   Set α_i = i/p, initialize SR_i with α_i, T; set T_{i,t} = 0 and R̂_{i,t} = 0.
4: end for
5: for i = 1, . . . , ∆ − 1 do
6:   Select k = arg min_{i∈[p]} R̂_{i,t} and run SR_k for ∆ rounds.
7:   Update T_{k,t} = T_{k,t} + ∆, R̂_{k,t} = T_{k,t} · µ⋆ − Σ_{s=1}^{T_{k,t}} X_{k,s}, t = t + ∆.
8: end for
Lemma 1. Suppose α is the true hardness parameter and α_i − 1/⌈log T⌉ < α ≤ α_i; running Algorithm 2 with time horizon T and α_i leads to the following regret bound:
\[ \sup_{\omega\in\mathcal{H}_T(\alpha)} R_T \;\le\; C \log T \cdot T^{(1+\alpha)/2}, \]
where C is a universal constant.
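The discretization cost is only a constant factor; a rough calculation (ours, ignoring polylog terms and the approximation error from the random subset) shows why. Since α ≤ α_i < α + 1/⌈log T⌉, the subroutine explores only a constant factor more arms than an oracle that knows α would:
\[
T^{\alpha_i} \;\le\; T^{\alpha}\cdot T^{1/\lceil \log T\rceil} \;\le\; e\, T^{\alpha},
\qquad\text{so}\qquad
\sqrt{|S_{\alpha_i}|\, T} \;\lesssim\; \sqrt{2e\,T^{\alpha}\log\sqrt{T}\cdot T} \;=\; \tilde{O}\!\left(T^{(1+\alpha)/2}\right),
\]
where the last step uses the MOSS guarantee O(√(nT)) and the natural logarithm is assumed in ⌈log T⌉.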
Since Parallel always allocates new samples to the subroutine with the lowest empirical regret so far, the regret of every subroutine should be roughly of the same order at time T. In particular, all subroutines should achieve regret Õ(T^{(1+α)/2}), as the best subroutine does. Parallel then achieves the non-adaptive minimax optimal regret, up to polylog factors, without knowing the true hardness level α.
Theorem 4. For any α ∈ [0, 1] unknown to the learner, running Parallel with time horizon T and optimal expected reward µ⋆ leads to the following regret upper bound:
\[ \sup_{\omega\in\mathcal{H}_T(\alpha)} R_T \;\le\; C\,(\log T)^2\, T^{(1+\alpha)/2}, \]
where C is a universal constant.
6 Experiments
We conduct three experiments to compare our algorithms with baselines. In Section 6.1, we compare the performance of each algorithm on problems with varying hardness levels. We examine how the regret curve of each algorithm increases on synthetic and real-world datasets in Section 6.2 and Section 6.3, respectively.
We first introduce the nomenclature of the algorithms. We use MOSS to denote the standard MOSS algorithm, and MOSS Oracle to denote Algorithm 2 with known α. Quantile represents the algorithm (QRM2) proposed by [12] to minimize the regret with respect to the (1 − ρ)-th quantile of the arm means, without knowledge of ρ. One can easily transfer Quantile to our setting with the top-ρ fraction of arms treated as best arms. As suggested in [12], we reuse the statistics obtained in previous iterations of Quantile to improve its sample efficiency. We use MOSS++ to denote the vanilla version of Algorithm 1, and empMOSS++ to denote an empirical version such that: (1) empMOSS++ reuses statistics obtained in previous rounds, as in Quantile; and (2) instead of selecting K_i real arms uniformly at random at the i-th iteration, empMOSS++ selects the K_i arms with the highest empirical means for i > 1. We choose β = 0.5 for MOSS++ and empMOSS++ in all experiments.4 All results are averaged over 100 experiments. Shaded areas represent 0.5 standard deviation for each algorithm.
6.1 Adaptivity to hardness level
We compare our algorithms with baselines on regret minimization problems with different hardness levels. For this experiment, we generate best arms with expected reward 0.9 and sub-optimal arms
4Increasing β generally leads to worse performance on problems with small α but better performance on problems with large α.
with expected rewards evenly distributed among {0.1, 0.2, 0.3, 0.4, 0.5}. All arms follow Bernoulli distributions. We set the time horizon to T = 50000 and consider a total number of arms n = 20000. We vary α from 0.1 to 0.8 (with interval 0.1) to control the number of best arms m = ⌈n/(2T^α)⌉ and thus the hardness level. In Fig. 2(a), the regret of every algorithm grows as α increases, which is expected. MOSS does not provide satisfying performance due to the large action space and the relatively small time horizon. Although implemented in an anytime fashion, Quantile can roughly be viewed as an algorithm that runs MOSS on a subset selected uniformly at random with cardinality T^{0.347}. Quantile displays good performance when α = 0.1, but suffers much worse regret than MOSS++ and empMOSS++ when α gets larger. Note that it is expected that the regret curve of Quantile flattens at 20000: it simply learns the best sub-optimal arm and suffers a regret of 50000 × (0.9 − 0.5). Although Parallel enjoys near minimax optimal regret, the regret it suffers is the sum over 11 subroutines, which hurts its empirical performance. empMOSS++ achieves performance comparable to MOSS Oracle when α is small, and achieves the best empirical performance when α ≥ 0.3. When α ≥ 0.7, MOSS Oracle needs to explore most or all of the arms to statistically guarantee finding at least one best arm, which hurts its empirical performance.
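The synthetic instances used above can be generated as follows; this small helper is a sketch of ours (not the authors' code), matching the stated reward values and m = ⌈n/(2T^α)⌉.

```python
import numpy as np

def make_instance(T=50000, n=20000, alpha=0.3, seed=0):
    """Synthetic instance from Section 6.1 (sketch): m best arms with mean 0.9,
    remaining means spread evenly over {0.1, 0.2, 0.3, 0.4, 0.5}, Bernoulli rewards."""
    rng = np.random.default_rng(seed)
    m = int(np.ceil(n / (2 * T**alpha)))
    means = np.empty(n)
    means[:m] = 0.9
    means[m:] = np.tile([0.1, 0.2, 0.3, 0.4, 0.5], int(np.ceil((n - m) / 5)))[: n - m]
    rng.shuffle(means)
    pull = lambda arm: float(rng.random() < means[arm])  # one Bernoulli reward
    return means, pull
```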
6.2 Regret curve comparison
We compare how the regret curve of each algorithm grows in Fig. 2(b). We consider the same regret minimization configuration as described in Section 6.1 with α = 0.25. empMOSS++, MOSS++ and Parallel all outperform Quantile, with empMOSS++ achieving the performance closest to MOSS Oracle. MOSS Oracle, Parallel and empMOSS++ have flattened their regret curves, indicating that they could confidently recommend the best arm. The regret curves of MOSS++ and Quantile do not flatten, as the random-sampling component in each of their iterations encourages them to explore new arms. Compared to MOSS++, Quantile keeps increasing its regret at a much faster rate and with a much larger variance, which empirically confirms the sub-optimality of its regret guarantees.
6.3 Real-world dataset
We also compare all algorithms in a realistic setting of recommending funny captions to website visitors. We use a real-world dataset from the New Yorker Magazine Cartoon Caption Contest.5 The dataset of 1–3 star caption ratings/rewards for Contest 652 consists of n = 10025 captions.6 We use the ratings to compute Bernoulli reward distributions for each caption as follows. The mean of each caption/arm i is calculated as the percentage p_i of its ratings that were funny or somewhat funny (i.e., 2 or 3 stars). We normalize each p_i by the largest one and then threshold: if p_i ≥ 0.8, we set p_i = 1; otherwise we leave p_i unaltered. This produces a set of m = 54 best arms with reward 1 and all
5https://www.newyorker.com/cartoons/contest. 6Available online at https://nextml.github.io/caption-contest-data.
other 9971 arms with rewards in [0, 0.8]. We set T = 10^5, which results in a hardness level of around α ≈ 0.43.
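The preprocessing just described can be sketched as follows; the column names ('caption_id', 'rating') are our own assumptions about the released ratings file and may differ from the actual CSV.

```python
import numpy as np
import pandas as pd

def caption_means(ratings: pd.DataFrame) -> np.ndarray:
    """Reward construction for Contest 652 (sketch). `ratings` is assumed to have one
    row per (caption, rating) pair with a 'rating' column taking values in {1, 2, 3}."""
    funny = ratings.assign(funny=(ratings["rating"] >= 2).astype(float))
    p = funny.groupby("caption_id")["funny"].mean().to_numpy()  # fraction of 2-3 star ratings
    p = p / p.max()        # normalize by the best caption
    p[p >= 0.8] = 1.0      # threshold: near-best captions become best arms
    return p               # Bernoulli means for the bandit instance
```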
[Figure: expected regret versus time (0–100,000) on the caption-contest dataset for MOSS, MOSS Oracle, Quantile, Parallel (ours), MOSS++ (ours), and empMOSS++ (ours).]
The results on this dataset demonstrate the effectiveness of empMOSS++ and MOSS++ in modern applications of bandit algorithms with large action space and limited time horizon.
7 Conclusion
We study a regret minimization problem with a large action space but a limited time horizon, which captures many modern applications of bandit algorithms. Depending on the number of best/near-optimal arms, we encode the hardness level of the given regret minimization problem, in terms of the minimax regret achievable, into a single parameter α, and we design algorithms that can adapt to this unknown hardness level. Our first algorithm MOSS++ takes a user-specified parameter β as input and provides guarantees as long as α < β; our lower bound further indicates that the rate function achieved by MOSS++ is Pareto optimal. Although no algorithm can achieve near minimax optimal regret over all α simultaneously, as demonstrated by our lower bound, we overcome this limitation with (often) easily-obtained extra information and propose Parallel, which is near-optimal for all settings. Inspired by MOSS++, we also propose empMOSS++ with excellent empirical performance. Experiments on both synthetic and real-world datasets demonstrate the efficiency of our algorithms over the previous state-of-the-art.
Broader Impact
This paper provides efficient algorithms that work well in modern applications of bandit algorithms with large action space but limited time horizon. We make minimal assumptions about the setting, and our algorithms can automatically adapt to unknown hardness levels. Worst-case regret guarantees are provided for our algorithms; we also show that MOSS++ is Pareto optimal and that Parallel is minimax optimal, up to polylog factors. empMOSS++ is provided as a practical version of MOSS++ with excellent empirical performance. Our algorithms are particularly useful in areas such as e-commerce and movie/content recommendation, where the action space is enormous but possibly contains multiple best/satisfactory actions. If deployed, our algorithms could automatically adapt to the hardness level of the recommendation task and benefit both service providers and customers through efficiently delivering satisfactory content. One possible negative outcome is that items recommended to a specific user/customer might only come from a subset of the action space. However, this is unavoidable when the number of items/actions exceeds the allowed time horizon. In fact, one should notice that all items/actions will be selected with essentially the same probability, thanks to the incorporation of random selection processes in our algorithms. Our algorithms will not leverage/create biases for the same reason. Overall, we believe this paper's contribution will have a net positive impact.
Acknowledgments and Disclosure of Funding
The authors would like to thank anonymous reviewers for their comments and suggestions. This work was partially supported by NSF grant no. 1934612. | 1. What is the focus and contribution of the paper regarding cumulative regret?
2. What are the strengths of the proposed approach, particularly in terms of its ability to handle different levels of difficulty?
3. What are the weaknesses of the paper, especially regarding the practicality of the proposed algorithm?
4. Do you have any concerns about the assumption of knowing the optimal mean reward (mu_*) in advance?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper studies cumulative regret when there are multiple best arms and the number of best arms is unknown. It defines a new notion of difficulty for such problems, \psi. It provides a new algorithm for this problem that is Pareto optimal over problems with different hardness levels. It also proves that there is no algorithm that is simultaneously optimal across all problem difficulties. However, the authors show that if mu_* is known then simultaneous optimality is possible. Finally, they demonstrate the superior performance of their algorithms in experiments.
Strengths
The regime where n can be larger than T is important and well-motivated by many modern applications. Their observation that there are various levels of difficulty for this problem and that it is not possible to be simultaneously optimal for all of them is very insightful. It seems that Theorem 2 implies that no algorithm is within log(T) of the minimax rate for all problem difficulties, correct? If so, this polynomial gap seems quite convincing. Algorithm 1 is a non-trivial algorithm and leads to an algorithm with strong empirical performance. The experiments are thorough.
Weaknesses
The paper does not seem to develop fundamentally new algorithmic ideas, although they apply sophisticated algorithms from the literature to an important problem in an enlightening way. It would seem that mu_* is never known exactly in practice. How does misspecification of mu_* affect the performance of Algorithm 3? Is minimax optimality achievable even under misspecification? How would this affect empirical performance? It seems odd that Algorithm 3 does worse in the experiments even though it has extra information. Is there a way to get around the issue that you are running several versions of the algorithm? Does it remain an open question how to leverage this extra information into an empirically superior algorithm?
1. What is the focus of the paper regarding stochastic bandit settings?
2. What are the strengths of the proposed approach, particularly in terms of its algorithmic ideas and analysis?
3. Do you have any concerns about the optimization of the bound's hyperparameter?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any recent works related to this paper that should be mentioned? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The authors consider a stochastic bandit setting with n arms, where n is taken to be very large. Unlike other large-armed bandit work, there is no structure assumed between the payoff of the various arms. Their methods rely heavily on MOSS [Audibert et al.], which attains an optimal regret rate of \sqrt{nT} in the usual setting where n <= T. Letting m <= n denote the number of arms that are optimal (or near optimal), the authors begin by defining alpha satisfying n/m = T^{\alpha} as a key quantity characterizing the hardness. They observe that if alpha were known, one could sample a subset of the arms of cardinality T^alpha log T, which contains an optimal arm whp. Applying MOSS attains a regret bound of O(T^(1+alpha)/2). Meanwhile, there exists an instance with parameter alpha, and matching lower bound (when m = 1 and n = T^alpha). Thus, the key difficulty is attaining this same result when alpha is not known. They give an algorithm that runs multiple epochs of MOSS for exponentially increasing rounds on subsampled arms. In each epoch, the number of sampled arms decays by a factor of 2. However, the algorithm also includes a “virtual” arm that simulates the empirical distribution of arms played on each of the previous epochs. The algorithm’s analysis is very clean and follows by identifying an epoch i* where the number of samples arms falls below T^{alpha} \log T. Before i*, so many arms are sampled, that an optimal one is bound to be among them. After i*, the performance of the virtual arm becomes near-optimal. The ultimate bound, however, depends on a correct setting of a user-specified parameter beta in order to achieve the desired rate of O(T^(1+alpha)/2). The remainder of the theoretical content of the paper essentially defends this property of their algorithm in two ways. (1) They argue that no algorithm can be optimal for all levels of alpha, and that their algorithm sits on the Pareto frontier, while naive algorithms such as subsampling arms and running MOSS do not, and (2) with knowledge of the mean of the best arm, an algorithm achieving the lower bound is possible. Finally, they give experiments demonstrating the performance of their algorithms at at fixed beta, and varying alphas. They also give an algorithm inspired by the theory but that is more empirically robust (re-using statistics from previous epochs). The experiments demonstrate strong performance until alpha becomes larger than user-defined beta (at which point the theory predicts vacuous regret bounds).
Strengths
1) The paper does a very good job of analyzing a large MAB problem without a great deal of cumbersome assumptions. The only salient quantity is the fraction of arms that are optimal. 2) The paper is very clear and well written. 3) The algorithmic ideas and analysis are interesting, and could be of value beyond the setting considered.
Weaknesses
1) The optimality of the bound ultimately depends on a hyperparameter being set properly. However, the authors do a good job defending this, and one can imagine making reasonable choices for this hyperparameter in practice. |
NIPS | Title
On Regret with Multiple Best Arms
Abstract
We study a regret minimization problem with the existence of multiple best/nearoptimal arms in the multi-armed bandit setting. We consider the case when the number of arms/actions is comparable or much larger than the time horizon, and make no assumptions about the structure of the bandit instance. Our goal is to design algorithms that can automatically adapt to the unknown hardness of the problem, i.e., the number of best arms. Our setting captures many modern applications of bandit algorithms where the action space is enormous and the information about the underlying instance/structure is unavailable. We first propose an adaptive algorithm that is agnostic to the hardness level and theoretically derive its regret bound. We then prove a lower bound for our problem setting, which indicates: (1) no algorithm can be minimax optimal simultaneously over all hardness levels; and (2) our algorithm achieves a rate function that is Pareto optimal. With additional knowledge of the expected reward of the best arm, we propose another adaptive algorithm that is minimax optimal, up to polylog factors, over all hardness levels. Experimental results confirm our theoretical guarantees and show advantages of our algorithms over the previous state-of-the-art.
1 Introduction
Multi-armed bandit problems describe exploration-exploitation trade-offs in sequential decision making. Most existing bandit algorithms tend to provide regret guarantees when the number of available arms/actions is smaller than the time horizon. In modern applications of bandit algorithm, however, the action space is usually comparable or even much larger than the allowed time horizon so that many existing bandit algorithms cannot even complete their initial exploration phases. Consider a problem of personalized recommendations, for example. For most users, the total number of movies, or even the amount of sub-categories, far exceeds the number of times they visit a recommendation site. Similarly, the enormous amount of user-generated content on YouTube and Twitter makes it increasingly challenging to make optimal recommendations. The tension between a very large action space and a limited time horizon poses a realistic problem in which deploying algorithms that converge to an optimal solution over an asymptotically long time horizon do not give satisfying results. There is a need to design algorithms that can exploit the highest possible reward within a limited time horizon. Past work has partially addressed this challenge. The quantile regret proposed in [12] to calculate regret with respect to an satisfactory action rather than the best one. The discounted regret analyzed in [25, 24] is used to emphasize short time horizon performance. Other existing works consider the extreme case when the number of actions is indeed infinite, and tackle such problems with one of two main assumptions: (1) the discovery of a near-optimal/best arm follows some probability measure with known parameters [6, 30, 4, 15]; (2) the existence of a smooth function represents the mean-payoff over a continuous subset [1, 20, 19, 8, 23, 17]. However, in many situations, neither assumption may be realistic. We make minimal assumptions in this paper. We study the regret minimization problem over a time horizon T , which might be unknown, with respect
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
to a bandit instance with n total arms, out of which m are best/near-optimal arms. We emphasize that the allowed time horizon and the given bandit instance should be viewed as features of one problem and together they indicate an intrinsic hardness level. We consider the case when the number of arms n is comparable or larger than the time horizon T so that no standard algorithm provides satisfying result. Our goal is to design algorithms that could adapt to the unknown m and achieve optimal regret.
1.1 Contributions and paper organization
We make the following contributions. In Section 2, we formally define the regret minimization problem that represents the tension between a very large action space and a limited time horizon; and capture the hardness level in terms of the number of best arms. We provide an adaptive algorithm that is agnostic to the unknown number of best arms in Section 3, and theoretically derive its regret bound. In Section 4, we prove a lower bound for our problem setting that indicates that there is no algorithm that can be optimal simultaneously over all hardness levels. Our lower bound also shows that our algorithm provided in Section 3 is Pareto optimal. With additional knowledge of the expected reward of the best arm, in Section 5, we provide an algorithm that achieves the non-adaptive minimax optimal regret, up to polylog factors, without the knowledge of the number of best arms. Experiments conducted in Section 6 confirm our theoretical guarantees and show advantages of our algorithms over previous state-of-the-art. We conclude our paper in Section 7. Most of the proofs are deferred to the Appendix due to lack of space.
1.2 Related work
Time sensitivity and large action space. As bandit models are getting much more complex, usually with large or infinite action spaces, researchers have begun to pay attention to tradeoffs between regret and time horizons when deploying such models. [13] study a linear bandit problem with ultra-high dimension, and provide algorithms that, under various assumptions, can achieve good reward within short time horizon. [24] also take time horizon into account and model time preference by analyzing a discounted regret. [12] consider a quantile regret minimization problem where they define their regret with respect to expected reward ranked at (1− ρ)-th quantile. One could easily transfer their problem to our setting; however, their regret guarantee is sub-optimal. [18, 4] also consider the problem with m best/near-optimal arms with no other assumptions, but they focus on the pure exploration setting; [4] additionally requires the knowledge of m. Another line of research considers the extreme case when the number arms is infinite, but with some known regularities. [6] proposes an algorithm with a minimax optimality guarantee under the situation where the reward of each arm follows strictly Bernoulli distribution; [27] provides an anytime algorithm that works under the same assumption. [30] relaxes the assumption on Bernoulli reward distribution, however, some other parameters are assumed to be known in their setting.
Continuum-armed bandit. Many papers also study bandit problems with continuous action spaces, where they embed each arm x into a bounded subset X ⊆ Rd and assume there exists a smooth function f governing the mean-payoff for each arm. This setting is firstly introduced by [1]. When the smoothness parameters are known to the learner or under various assumptions, there exists algorithms [20, 19, 8] with near-optimal regret guarantees. When the smoothness parameters are unknown, however, [23] proves a lower bound indicating no strategy can be optimal simultaneously over all smoothness classes; under extra information, they provide adaptive algorithms with near-optimal regret guarantees. Although achieving optimal regret for all settings is impossible, [17] design adaptive algorithms and prove that they are Pareto optimal. Our algorithms are mainly inspired by the ones in [17, 23]. A closely related line of work [28, 16, 5, 26] aims at minimizing simple regret in the continuum-armed bandit setting.
Adaptivity to unknown parameters. [9] argues the awareness of regularity is flawed and one should design algorithms that can adapt to the unknown environment. In situations where the goal is pure exploration or simple regret minimization, [18, 28, 16, 5, 26] achieve near-optimal guarantees with unknown regularity because their objectives trade-off exploitation in favor of exploration. In the case of cumulative regret minimization, however, [23] shows no strategy can be optimal simultaneously over all smoothness classes. In special situations or under extra information, [9, 10, 23] provide algorithms that adapt in different ways. [17] borrows the concept of Pareto optimality from economics and provide algorithms with rate functions that are Pareto optimal. Adaptivity is studied in statistics
as well: in some cases, only additional logarithmic factors are required [22, 7]; in others, however, there exists an additional polynomial cost of adaptation [11].
2 Problem statement and notation
We consider the multi-armed bandit instance ν = (ν_1, . . . , ν_n) with n probability distributions with means µ_i = E_{X∼ν_i}[X] ∈ [0, 1]. Let µ⋆ = max_{i∈[n]} µ_i be the highest mean and S⋆ = {i ∈ [n] : µ_i = µ⋆} denote the subset of best arms.1 The cardinality |S⋆| = m is unknown to the learner. We could also generalize our setting to S′⋆ = {i ∈ [n] : µ_i ≥ µ⋆ − ε(T)} with unknown |S′⋆| (i.e., situations where there is an unknown number of near-optimal arms). Setting ε to be dependent on T avoids an additive term linear in T, e.g., ε ≤ 1/√T ⇒ εT ≤ √T. All theoretical results and algorithms presented in this paper are applicable to this generalized setting with minor modifications. For ease of exposition, we focus on the case with multiple best arms throughout the paper. At each time step t ∈ [T], the algorithm/learner selects an action A_t ∈ [n] and receives an independent reward X_t ∼ ν_{A_t}. We assume that X_t − µ_{A_t} is (1/2)-sub-Gaussian conditioned on A_t.2 We measure the success of an algorithm through the expected cumulative (pseudo) regret:
$$R_T \;=\; T\cdot\mu_\star \;-\; \mathbb{E}\Big[\sum_{t=1}^{T}\mu_{A_t}\Big].$$
We use R(T, n, m) to denote the set of regret minimization problems with allowed time horizon T and any bandit instance ν with n total arms and m best arms.3 We emphasize that T is part of the problem instance. We are particularly interested in the case when n is comparable to or even larger than T, which captures many modern applications where the available action space far exceeds the allowed time horizon. Although learning algorithms may not be able to pull each arm even once, one should notice that the true/intrinsic hardness level of the problem can be viewed as n/m: selecting a subset uniformly at random with cardinality Θ(n/m) guarantees, with constant probability, access to at least one best arm; but of course it is impossible to do this without knowing m. We quantify the intrinsic hardness level over a set of regret minimization problems R(T, n, m) as
ψ(R(T, n, m)) = inf{α ≥ 0 : n/m ≤ 2T^α},
where the constant 2 in front of T^α is added to avoid the otherwise trivial case, with all best arms, in which the infimum is 0. ψ(R(T, n, m)) is used here as it captures the minimax optimal regret over the set of regret minimization problems R(T, n, m), as explained later in our review of the MOSS algorithm and the lower bound. As a smaller ψ(R(T, n, m)) indicates an easier problem, we then define the family of regret minimization problems with hardness level at most α as
H_T(α) = {∪ R(T, n, m) : ψ(R(T, n, m)) ≤ α},
with α ∈ [0, 1]. Although T is necessary to define a regret minimization problem, we actually encode the hardness level into a single parameter α, which captures the tension between the complexity of the bandit instance at hand and the allowed time horizon T: problems with different time horizons but the same α are equally difficult in terms of the achievable minimax regret (the exponent of T). We thus mainly study problems with T large enough so that we can mainly focus on the polynomial terms of T. We are interested in designing algorithms with minimax guarantees over H_T(α), but without the knowledge of α.
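As a quick illustration (ours, not from the paper), the hardness level of a finite instance has the closed form ψ(R(T, n, m)) = max{0, log(n/(2m))/log T}; the minimal Python sketch below evaluates it, with instance sizes chosen purely for the example.

```python
import math

def hardness_level(T: int, n: int, m: int) -> float:
    """psi(R(T, n, m)) = inf{alpha >= 0 : n/m <= 2 * T**alpha}.

    Solving n/m <= 2 * T**alpha for alpha gives
    alpha >= log(n / (2 * m)) / log(T), clipped at 0.
    """
    return max(0.0, math.log(n / (2 * m)) / math.log(T))

# Illustrative (hypothetical) instance: T = 50000 rounds, n = 20000 arms,
# m = 32 best arms gives a hardness level of roughly 0.53.
print(hardness_level(T=50_000, n=20_000, m=32))
```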
MOSS and upper bound. In the classical setting, MOSS, designed by [2] and further generalized to the sub-Gaussian case [21] and improved in terms of constant factors [14], achieves the minimax optimal regret. In this paper, we will use MOSS as a subroutine with regret upper bound O(√(nT)) when T ≥ n. For any problem in H_T(α) with known α, one could run MOSS on a subset selected uniformly at random with cardinality Õ(T^α) and achieve regret Õ(T^{(1+α)/2}).
Footnotes: 1. Throughout the paper, we denote by [K] the set {1, . . . , K} for any positive integer K. 2. We say a random variable X is σ-sub-Gaussian if E[exp(λX)] ≤ exp(σ²λ²/2) for all λ ∈ R. 3. Our setting could be generalized to the case with infinitely many arms: one can consider embedding arms into an arm space X and let p be the probability that an arm sampled uniformly at random is (near-)optimal. 1/p then serves a similar role as n/m does in the original definition.
Lower bound. The lower bound Ω(√(nT)) in the classical setting does not apply to our setting, as its proof heavily relies on the existence of a single best arm [21]. However, for problems in H_T(α), we do have a matching lower bound Ω(T^{(1+α)/2}), as one could always apply the standard lower bound to a bandit instance with n = ⌊T^α⌋ and m = 1. For a general value of m, a lower bound of the order Ω(√(T(n−m)/m)) = Ω(T^{(1+α)/2}) for the m-best-arms case can be obtained following an analysis similar to that in Chapter 15 of [21].
Although log T may appear in our bounds, throughout the paper, we focus on problems with T ≥ 2 as otherwise the bound is trivial.
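Before turning to our algorithm, here is a minimal Python sketch (ours, not from the paper) of one common form of the MOSS index policy used as the subroutine above; `pull(i)` is an assumed callback returning a stochastic reward of arm i, and constants are not tuned.

```python
import math

def moss(pull, arm_ids, horizon):
    """One common form of the MOSS index: play the arm maximizing
    mean_i + sqrt(max(ln(horizon / (K * n_i)), 0) / n_i).

    Returns the list of (arm, reward) pairs collected over `horizon` rounds.
    """
    K = len(arm_ids)
    counts = {i: 0 for i in arm_ids}
    means = {i: 0.0 for i in arm_ids}
    history = []
    for _ in range(horizon):
        untried = [i for i in arm_ids if counts[i] == 0]
        if untried:                       # pull every arm once first
            arm = untried[0]
        else:
            arm = max(arm_ids, key=lambda i: means[i]
                      + math.sqrt(max(math.log(horizon / (K * counts[i])), 0.0) / counts[i]))
        x = pull(arm)
        counts[arm] += 1
        means[arm] += (x - means[arm]) / counts[arm]
        history.append((arm, x))
    return history
```

To obtain the Õ(T^{(1+α)/2}) rate mentioned above, one would call this routine on a uniformly random subset of roughly T^α arms.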
3 An adaptive algorithm
Algorithm 1 takes the time horizon T and a user-specified β ∈ [1/2, 1] as input, and is mainly inspired by [17]. Algorithm 1 operates in iterations with geometrically-increasing lengths ∆T_i = 2^{p+i}, where p = ⌈log₂ T^β⌉. At each iteration i, it restarts MOSS on a set S_i consisting of K_i = 2^{p+2−i} real arms selected uniformly at random plus a set of “virtual” mixture-arms (one from each of the 1 ≤ j < i previous iterations, none if i = 1). The mixture-arms are constructed as follows. After each iteration i, let p̂_i denote the vector of empirical sampling frequencies of the arms in that iteration (i.e., the k-th element of p̂_i is the number of times arm k, including all previously constructed mixture-arms, was sampled in iteration i, divided by the total number of samples ∆T_i). The mixture-arm for iteration i is the p̂_i-mixture of the arms, denoted by ν̃_i. When MOSS samples from ν̃_i, it first draws i_t ∼ p̂_i, then draws a sample from the corresponding arm ν_{i_t} (or ν̃_{i_t}). The mixture-arms provide a convenient summary of the information gained in the previous iterations, which is key to our theoretical analysis. Although our algorithm works on fewer regular arms in later iterations, the information summarized in the mixture-arms is good enough to provide guarantees. We name our algorithm MOSS++ as it restarts MOSS at each iteration with past information summarized in mixture-arms. We provide an anytime version of Algorithm 1 in Appendix A.2 via the standard doubling trick.
Algorithm 1: MOSS++
Input: time horizon T and user-specified parameter β ∈ [1/2, 1].
1: Set p = ⌈log₂ T^β⌉, K_i = 2^{p+2−i} and ∆T_i = min{2^{p+i}, T}.
2: for i = 1, . . . , p do
3:   Run MOSS on a subset of arms S_i for ∆T_i rounds. S_i contains K_i real arms selected uniformly at random and the set of virtual mixture-arms from previous iterations, i.e., {ν̃_j}_{j<i}.
4:   Construct a virtual mixture-arm ν̃_i based on the empirical sampling frequencies of MOSS above.
5: end for
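To make the construction concrete, here is a minimal Python sketch of MOSS++ (ours, for illustration only). `run_moss` stands for any implementation of the MOSS subroutine (e.g., the sketch in Section 2) adapted to take a per-arm sampling callback and to return how often each arm in its input set was played; `pull(i)` samples a reward of real arm i.

```python
import math
import random

def moss_plus_plus(pull, n, T, beta, run_moss):
    """Sketch of Algorithm 1. `run_moss(sample, arms, horizon)` plays MOSS on
    `arms` for `horizon` rounds and returns the per-arm play counts (same order)."""

    def sample(arm):
        # A real arm is an int; a mixture-arm is a list of (probability, arm) pairs,
        # where the inner arm may itself be an earlier mixture-arm.
        if isinstance(arm, int):
            return pull(arm)
        probs, arms = zip(*arm)
        return sample(random.choices(arms, weights=probs, k=1)[0])

    p = math.ceil(math.log2(T ** beta))
    mixture_arms = []
    for i in range(1, p + 1):
        K_i = 2 ** (p + 2 - i)
        dT_i = min(2 ** (p + i), T)
        S_i = random.sample(range(n), k=min(n, K_i)) + mixture_arms
        counts = run_moss(sample, S_i, dT_i)
        # Summarize this iteration as a new virtual mixture-arm.
        mixture_arms = mixture_arms + [[(c / dT_i, a) for c, a in zip(counts, S_i)]]
```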
3.1 Analysis and discussion
We use µ_S = max_{ν∈S} E_{X∼ν}[X] to denote the highest expected reward over a set of distributions/arms S. For any algorithm that only works on S, we can decompose the regret into approximation error and learning error:
$$R_T \;=\; \underbrace{T\cdot(\mu_\star-\mu_S)}_{\text{approximation error due to the selection of }S} \;+\; \underbrace{T\cdot\mu_S-\mathbb{E}\Big[\sum_{t=1}^{T}\mu_{A_t}\Big]}_{\text{learning error due to the sampling rule }\{A_t\}_{t=1}^{T}}. \tag{1}$$
This type of regret decomposition was previously used in [20, 3, 17] to deal with the continuum-armed bandit problem. We consider here a probabilistic version, with randomness in the selection of S, for the classical setting.
The main idea behind providing guarantees for MOSS++ is to decompose its regret at each iteration, using Eq. (1), and then bound the expected approximation error and learning error separately. The expected learning error at each iteration can always be controlled as Õ(T^β) thanks to the regret guarantees for MOSS and the specifically chosen parameters p, K_i, ∆T_i. Let i⋆ be the largest integer such that K_i ≥ 2T^α log √T still holds. The expected approximation error in iteration i ≤ i⋆ can be upper bounded by √T following an analysis of the hypergeometric distribution. As a result, the expected regret in iteration i ≤ i⋆ is Õ(T^β). Since the mixture-arm ν̃_{i⋆} is included in all following iterations, we can further bound the expected approximation error in iteration i > i⋆ by Õ(T^{1+α−β}) after a careful analysis of ∆T_i/∆T_{i⋆}. This intuition is formally stated and proved in Theorem 1.
Theorem 1. Running MOSS++ with time horizon T and a user-specified parameter β ∈ [1/2, 1] leads to the following regret upper bound:
$$\sup_{\omega\in\mathcal{H}_T(\alpha)} R_T \;\le\; C\,(\log_2 T)^{5/2}\cdot T^{\min\{\max\{\beta,\,1+\alpha-\beta\},\,1\}},$$
where C is a universal constant.
Remark 1. We primarily focus on the polynomial terms in T when deriving the bound, but put no effort into optimizing the polylog term. The 5/2 exponent of log₂ T might be tightened as well.
The theoretical guarantee is closely related to the user-specified parameter β: when β > α, we suffer a multiplicative cost of adaptation Õ(T^{|(2β−α−1)/2|}) compared to the non-adaptive minimax regret, with β = (1 + α)/2 hitting the sweet spot; when β ≤ α, there are essentially no guarantees. One may hope to improve this result. However, our analysis in Section 4 indicates: (1) achieving minimax optimal regret for all settings simultaneously is impossible; and (2) the rate function achieved by MOSS++ is already Pareto optimal.
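As a worked illustration (ours, not from the paper): with β = 1/2 the achieved exponent is θ_{1/2}(α) = min{max{1/2, 1/2 + α}, 1} = min{1/2 + α, 1}, so on a problem with α = 1/4 the bound is Õ(T^{3/4}), whereas the non-adaptive optimum is Õ(T^{5/8}); the gap T^{1/8} matches the cost of adaptation T^{|(2β−α−1)/2|} stated above.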
4 Lower bound and Pareto optimality
4.1 Lower bound
In this section, we show that designing algorithms with the non-adaptive minimax optimal guarantee over all values of α is impossible. We first state the result in the following general theorem.
Theorem 2. For any 0 ≤ α′ < α ≤ 1, assume T^α ≤ B and ⌊T^α⌋ − 1 ≥ max{T^α/4, 2}. If an algorithm is such that sup_{ω∈H_T(α′)} R_T ≤ B, then the regret of this algorithm is lower bounded on H_T(α):
$$\sup_{\omega\in\mathcal{H}_T(\alpha)} R_T \;\ge\; 2^{-10}\, T^{1+\alpha} B^{-1}. \tag{2}$$
To give an interpretation of Theorem 2, we consider any algorithm/policy π together with regret minimization problems H_T(α′) and H_T(α) satisfying the corresponding requirements. On one hand, if algorithm π achieves a regret that is order-wise larger than Õ(T^{(1+α′)/2}) over H_T(α′), it is already not minimax optimal for H_T(α′). Now suppose π achieves a near-optimal regret, i.e., Õ(T^{(1+α′)/2}), over H_T(α′); then, according to Eq. (2), π must incur a regret of order at least Ω̃(T^{1/2+α−α′/2}) on some problem in H_T(α). This, on the other hand, makes algorithm π strictly sub-optimal over H_T(α).
4.2 Pareto optimality
We capture the performance of any algorithm by its dependence on polynomial terms of T in the asymptotic sense. Note that the hardness level of a problem is encoded in α. Definition 1. Let θ : [0, 1] → [0, 1] denote a non-decreasing function. An algorithm achieves the rate function θ if
$$\forall\,\epsilon>0,\ \forall\,\alpha\in[0,1],\qquad \limsup_{T\to\infty}\ \frac{\sup_{\omega\in\mathcal{H}_T(\alpha)} R_T}{T^{\theta(\alpha)+\epsilon}}\;<\;+\infty.$$
Recall that a function θ′ is strictly smaller than another function θ in pointwise order if θ′(α) ≤ θ(α) for all α and θ′(α0) < θ(α0) for at least one value of α0. As there may not always exist a pointwise ordering over rate functions, following [17], we consider the notion of Pareto optimality over rate functions achieved by some algorithms. Definition 2. A rate function θ is Pareto optimal if it is achieved by an algorithm, and there is no other algorithm achieving a strictly smaller rate function θ′ in pointwise order. An algorithm is Pareto optimal if it achieves a Pareto optimal rate function.
Combining the results in Theorem 1 and Theorem 2 with the above definitions, we could further obtain the following result in Theorem 3. Theorem 3. The rate function achieved by MOSS++ with any β ∈ [1/2, 1], i.e.,
$$\theta_\beta : \alpha \mapsto \min\{\max\{\beta,\, 1+\alpha-\beta\},\, 1\}, \tag{3}$$
is Pareto optimal.
5 Learning with extra information
Although Section 4 gives negative results on designing algorithms that can optimally adapt to all settings, one can actually design such an algorithm with extra information. In this section, we provide an algorithm that takes the expected reward of the best arm µ⋆ (or an estimate with error up to 1/√T) as extra information, and achieves near minimax optimal regret over all settings simultaneously. Our algorithm is mainly inspired by [23].
5.1 Algorithm
We name our Algorithm 3 Parallel as it maintains ⌈log T⌉ instances of the subroutine, i.e., Algorithm 2, in parallel. Each subroutine SR_i is initialized with time horizon T and hardness level α_i = i/⌈log T⌉. We use T_{i,t} to denote the number of samples allocated to SR_i up to time t, and represent its empirical regret at time t as R̂_{i,t} = T_{i,t} · µ⋆ − Σ_{t=1}^{T_{i,t}} X_{i,t}, with X_{i,t} ∼ ν_{A_{i,t}} being the t-th empirical reward obtained by SR_i and A_{i,t} being the index of the t-th arm pulled by SR_i.
Algorithm 2: MOSS Subroutine (SR)
Input: time horizon T and hardness level α.
1: Select a subset of arms S_α uniformly at random with |S_α| = ⌈2T^α log √T⌉ and run MOSS on S_α.
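A minimal sketch of this subroutine (ours, assuming the `moss` routine sketched in Section 2):

```python
import math
import random

def moss_subroutine(pull, n, T, alpha):
    """Algorithm 2 (sketch): run MOSS on a uniformly random subset of
    ceil(2 * T**alpha * log(sqrt(T))) arms for the full horizon T."""
    size = min(n, math.ceil(2 * (T ** alpha) * math.log(math.sqrt(T))))
    subset = random.sample(range(n), k=size)
    return moss(pull, subset, T)  # `moss` as sketched in Section 2
```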
Parallel operates in iterations of length ⌈√T⌉. At the beginning of each iteration, i.e., at time t = i · ⌈√T⌉ for i ∈ {0} ∪ [⌈√T⌉ − 1], Parallel first selects the subroutine with the lowest (breaking ties arbitrarily) empirical regret so far, i.e., k = argmin_{i∈[⌈log T⌉]} R̂_{i,t}; it then resumes the learning process of SR_k, from where it halted, for another ⌈√T⌉ pulls. All the information is updated at the end of that iteration. An anytime version of Algorithm 3 is provided in Appendix C.3.
5.2 Analysis
As Parallel discretizes the hardness parameter over a grid with interval 1/⌈log T⌉, we first show that running the best subroutine alone leads to regret Õ(T^{(1+α)/2}).
Algorithm 3: Parallel
Input: time horizon T and the optimal reward µ⋆.
1: Set p = ⌈log T⌉, ∆ = ⌈√T⌉ and t = 0.
2: for i = 1, . . . , p do
3:   Set α_i = i/p, initialize SR_i with α_i, T; set T_{i,t} = 0 and R̂_{i,t} = 0.
4: end for
5: for i = 1, . . . , ∆ − 1 do
6:   Select k = argmin_{i∈[p]} R̂_{i,t} and run SR_k for ∆ rounds.
7:   Update T_{k,t} = T_{k,t} + ∆, R̂_{k,t} = T_{k,t} · µ⋆ − Σ_{t=1}^{T_{k,t}} X_{k,t}, t = t + ∆.
8: end for
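A minimal Python sketch of Parallel (ours, for illustration); `resume(k, delta)` is an assumed helper that keeps the state of the k-th MOSS subroutine and continues it for `delta` pulls, returning the rewards it collected.

```python
import math

def parallel(resume, T, mu_star):
    """Sketch of Algorithm 3. Subroutine SR_k runs MOSS on a random subset of
    ceil(2 * T**alpha_k * log(sqrt(T))) arms, with alpha_k = (k + 1) / ceil(log T)."""
    p = math.ceil(math.log(T))
    delta = math.ceil(math.sqrt(T))
    pulls = [0] * p          # T_{i,t}: samples allocated to subroutine i
    reward_sum = [0.0] * p   # cumulative reward collected by subroutine i
    for _ in range(delta - 1):
        # Empirical regret of each subroutine so far; resume the smallest one.
        emp_regret = [pulls[i] * mu_star - reward_sum[i] for i in range(p)]
        k = min(range(p), key=lambda i: emp_regret[i])
        rewards = resume(k, delta)
        pulls[k] += delta
        reward_sum[k] += sum(rewards)
```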
Lemma 1. Suppose α is the true hardness parameter and α_i − 1/⌈log T⌉ < α ≤ α_i. Running Algorithm 2 with time horizon T and α_i leads to the following regret bound:
$$\sup_{\omega\in\mathcal{H}_T(\alpha)} R_T \;\le\; C \log T \cdot T^{(1+\alpha)/2},$$
where C is a universal constant.
Since Parallel always allocates new samples to the subroutine with the lowest empirical regret so far, the regret of every subroutine should be roughly of the same order at time T. In particular, all subroutines should achieve regret Õ(T^{(1+α)/2}), as the best subroutine does. Parallel then achieves the non-adaptive minimax optimal regret, up to polylog factors, without knowing the true hardness level α. Theorem 4. For any α ∈ [0, 1] unknown to the learner, running Parallel with time horizon T and optimal expected reward µ⋆ leads to the following regret upper bound:
$$\sup_{\omega\in\mathcal{H}_T(\alpha)} R_T \;\le\; C (\log T)^2\, T^{(1+\alpha)/2},$$
where C is a universal constant.
6 Experiments
We conduct three experiments to compare our algorithms with baselines. In Section 6.1, we compare the performance of each algorithm on problems with varying hardness levels. We examine how the regret curve of each algorithm increases on synthetic and real-world datasets in Section 6.2 and Section 6.3, respectively.
We first introduce the nomenclature of the algorithms. We use MOSS to denote the standard MOSS algorithm, and MOSS Oracle to denote Algorithm 2 with known α. Quantile represents the algorithm (QRM2) proposed by [12] to minimize the regret with respect to the (1 − ρ)-th quantile of means among arms, without the knowledge of ρ. One could easily transfer Quantile to our setting with the top-ρ fraction of arms treated as best arms. As suggested in [12], we reuse the statistics obtained in previous iterations of Quantile to improve its sample efficiency. We use MOSS++ to represent the vanilla version of Algorithm 1, and empMOSS++ to represent an empirical version such that: (1) empMOSS++ reuses statistics obtained in previous rounds, as in Quantile; and (2) instead of selecting K_i real arms uniformly at random at the i-th iteration, empMOSS++ selects the K_i arms with the highest empirical means for i > 1. We choose β = 0.5 for MOSS++ and empMOSS++ in all experiments.4 All results are averaged over 100 experiments. The shaded area represents 0.5 standard deviation for each algorithm.
6.1 Adaptivity to hardness level
We compare our algorithms with baselines on regret minimization problems with different hardness levels. For this experiment, we generate best arms with expected reward 0.9 and sub-optimal arms with expected rewards evenly distributed among {0.1, 0.2, 0.3, 0.4, 0.5}. All arms follow Bernoulli distributions. We set the time horizon to T = 50000 and consider a total number of arms n = 20000. We vary α from 0.1 to 0.8 (with interval 0.1) to control the number of best arms m = ⌈n/(2T^α)⌉ and thus the hardness level; a sketch of this instance construction is given after this paragraph. In Fig. 2(a), the regret of every algorithm gets larger as α increases, which is expected. MOSS does not provide satisfying performance due to the large action space and the relatively small time horizon. Although implemented in an anytime fashion, Quantile can be roughly viewed as an algorithm that runs MOSS on a subset selected uniformly at random with cardinality T^{0.347}. Quantile displays good performance when α = 0.1, but suffers regret much worse than MOSS++ and empMOSS++ when α gets larger. Note that it is expected that the regret curve of Quantile flattens at 20000: it simply learns the best sub-optimal arm and suffers a regret of 50000 × (0.9 − 0.5). Although Parallel enjoys near minimax optimal regret, the regret it suffers is the summation over 11 subroutines, which hurts its empirical performance. empMOSS++ achieves performance comparable to MOSS Oracle when α is small, and achieves the best empirical performance when α ≥ 0.3. When α ≥ 0.7, MOSS Oracle needs to explore most/all of the arms to statistically guarantee finding at least one best arm, which hurts its empirical performance.
Footnote 4: Increasing β generally leads to worse performance on problems with small α but better performance on problems with large α.
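A sketch (ours) of the synthetic instance construction referenced above:

```python
import math
import random

def synthetic_instance(n=20_000, T=50_000, alpha=0.25):
    """Bernoulli instance of Section 6.1 (sketch): m best arms with mean 0.9,
    the remaining arms with means spread evenly over {0.1, ..., 0.5}."""
    m = math.ceil(n / (2 * T ** alpha))
    means = [0.9] * m + [0.1 * (1 + i % 5) for i in range(n - m)]
    random.shuffle(means)
    return means  # pull(i) then returns 1 with probability means[i], else 0
```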
6.2 Regret curve comparison
We compare how the regret curve of each algorithm increases in Fig. 2(b). We consider the same regret minimization configurations as described in Section 6.1 with α = 0.25. empMOSS++, MOSS++ and Parallel all outperform Quantile, with empMOSS++ achieving the performance closest to MOSS Oracle. MOSS Oracle, Parallel and empMOSS++ have flattened their regret curves, indicating that they could confidently recommend the best arm. The regret curves of MOSS++ and Quantile do not flatten, as the random-sampling component in each of their iterations encourages them to explore new arms. Compared to MOSS++, Quantile keeps increasing its regret at a much faster rate and with a much larger variance, which empirically confirms the sub-optimality of its regret guarantees.
6.3 Real-world dataset
We also compare all algorithms in a realistic setting of recommending funny captions to website visitors. We use a real-world dataset from the New Yorker Magazine Cartoon Caption Contest.5 The dataset of 1–3 star caption ratings/rewards for Contest 652 consists of n = 10025 captions.6 We use the ratings to compute Bernoulli reward distributions for each caption as follows. The mean of each caption/arm i is calculated as the percentage p_i of its ratings that were funny or somewhat funny (i.e., 2 or 3 stars). We normalize each p_i by the largest one and then threshold: if p_i ≥ 0.8, we set p_i = 1; otherwise we leave p_i unaltered. This produces a set of m = 54 best arms with reward 1 and all other 9971 arms with rewards in [0, 0.8]. We set T = 10^5, which results in a hardness level of around α ≈ 0.43.
Footnotes: 5. https://www.newyorker.com/cartoons/contest. 6. Available online at https://nextml.github.io/caption-contest-data.
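A sketch (ours) of this preprocessing step:

```python
def caption_rewards(ratings):
    """ratings[i]: list of 1-3 star ratings for caption i. Returns the
    Bernoulli means used in Section 6.3 (sketch of the preprocessing)."""
    p = [sum(r >= 2 for r in rs) / len(rs) for rs in ratings]  # fraction rated funny
    best = max(p)
    p = [x / best for x in p]                                  # normalize by the best caption
    return [1.0 if x >= 0.8 else x for x in p]                 # threshold near-best arms to 1
```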
[Figure: expected regret versus time on the caption-contest dataset; x-axis: Time (0–100000), y-axis: Expected regret (0–30000); curves: MOSS, MOSS Oracle, Quantile, Parallel (ours), MOSS++ (ours), empMOSS++ (ours).]
The results on this dataset, shown in the figure above, again demonstrate the effectiveness of empMOSS++ and MOSS++ in modern applications of bandit algorithms with large action spaces and limited time horizons.
7 Conclusion
We study a regret minimization problem with a large action space but limited time horizon, which captures many modern applications of bandit algorithms. Depending on the number of best/near-optimal arms, we encode the hardness level of the given regret minimization problem, in terms of the minimax regret achievable, into a single parameter α, and we design algorithms that can adapt to this unknown hardness level. Our first algorithm MOSS++ takes a user-specified parameter β as input and provides guarantees as long as α < β; our lower bound further indicates that the rate function achieved by MOSS++ is Pareto optimal. Although no algorithm can achieve near minimax optimal regret over all α simultaneously, as demonstrated by our lower bound, we overcome this limitation with (often) easily-obtained extra information and propose Parallel, which is near-optimal for all settings. Inspired by MOSS++, we also propose empMOSS++ with excellent empirical performance. Experiments on both synthetic and real-world datasets demonstrate the efficiency of our algorithms over the previous state-of-the-art.
Broader Impact
This paper provides efficient algorithms that work well in modern applications of bandit algorithms with large action spaces but limited time horizons. We make minimal assumptions about the setting, and our algorithms can automatically adapt to unknown hardness levels. Worst-case regret guarantees are provided for our algorithms; we also show that MOSS++ is Pareto optimal and Parallel is minimax optimal, up to polylog factors. empMOSS++ is provided as a practical version of MOSS++ with excellent empirical performance. Our algorithms are particularly useful in areas such as e-commerce and movie/content recommendation, where the action space is enormous but possibly contains multiple best/satisfactory actions. If deployed, our algorithms could automatically adapt to the hardness level of the recommendation task and benefit both service providers and customers through efficiently delivering satisfactory content. One possible negative outcome is that items recommended to a specific user/customer might only come from a subset of the action space. However, this is unavoidable when the number of items/actions exceeds the allowed time horizon. In fact, one should notice that all items/actions will be selected with essentially the same probability, thanks to the incorporation of random selection processes in our algorithms. Our algorithms will not leverage/create biases for the same reason. Overall, we believe this paper’s contribution will have a net positive impact.
Acknowledgments and Disclosure of Funding
The authors would like to thank anonymous reviewers for their comments and suggestions. This work was partially supported by NSF grant no. 1934612. | 1. What is the focus and contribution of the paper regarding regret minimization in multi-armed bandit settings?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and algorithmic solutions?
3. Do you have any concerns or questions regarding the paper's content, such as the proof details or the significance of certain results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The authors consider the regret minimization problem with the existence of multiple best arms in the multi-armed bandit setting. Precisely, given a bandit problem with n arms, m optimal arms and horizon T, they define the hardness of this problem as \alpha = \log(n/m)/\log(T). They show that the minimax rate over the class of bandit problems whose hardness is lower than a fixed \alpha is of order T^{(1+\alpha)/2}. Then they propose algorithm Restarting that, without the knowledge of \alpha, enjoys a regret of order T^{min(\max(\beta,1+\alpha-\beta),1)} for a bandit problem of hardness at most \alpha, where $\beta\in[1/2,1]$ is some parameter of the algorithm. They also prove that it is not possible to construct an algorithm that is simultaneously optimal for all the classes of bandit problems of hardness at most \alpha. Nevertheless, they show that Restarting is Pareto optimal. When the mean of an optimal arm is known, they propose algorithm Parallel which matches the minimax rate simultaneously for all \alpha. Finally, they compare empirically Restarting and Parallel with the MOSS algorithm, MOSS tuned knowing the hardness \alpha, and Quantile, an algorithm proposed by Chaudhuri and Kalyanakrishnan (2018).
Strengths
- Methodological: notion of hardness (significance: medium).
- Theoretical: minimax rate for the class of bandit problems with a common upper bound on the hardness (significance: low); impossibility to adapt to the hardness of a problem (Th 3) (significance: medium).
- Algorithmic: algorithm Restarting, which is Pareto optimal, and Parallel, with the minimax rate O(T^{(1+\alpha)/2}) for all \alpha (with the knowledge of \mu^\star) (significance: medium).
Weaknesses
The impossibility to adapt to the hardness is interesting. For Th 4, the extra information about the optimal mean probably also changes the lower bound (see Th 3); thus it is difficult to see how sharp this bound is. From a technical point of view, as acknowledged by the authors, algorithm Restarting borrows ideas from [17] and Parallel from [23]. I could increase my score if the few doubts on the proofs are cleared up.
1. What is the focus and contribution of the paper regarding optimal strategies for arm selection?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical foundation and novelty compared to other works?
3. What are the weaknesses of the paper, especially regarding its practicality and self-containment? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper studies optimal strategies when the number of arms is larger than the time horizon. The paper provides an algorithm that is able to pull a large number of arms given the time horizon, with an assumed density (beta). Whether the algorithm yields any optimal arms depends on the true density (alpha) of the optimal arms. The paper also argues that the provided algorithm is minimax optimal if alpha < beta, yet fundamental hardness exists if alpha is large.
Strengths
* Theoretical grounding: The paper studies a basic question and provides some sound arguments. I did not follow everything, but the cases I spot-checked made sense. * Novelty of contribution: While there is prior work on pulling from an infinite pool of arms (QRM), this work is unique in providing an adaptive minimax-optimal rate under fewer assumptions.
Weaknesses
* Practicality: the algorithmic improvements seem trivial. It is just a MOSS algorithm with some micro improvements based on the law of the iterated logarithm. While there is an EMP version to fine-tune the parameters, I am not seeing how it introduces practical changes if we similarly fine-tune MOSS. * Self-containment: The paper omitted the explanation of MOSS (through the reference, I guess this method is equivalent to EXP4, which is better known).
NIPS | Title
On Regret with Multiple Best Arms
Abstract
We study a regret minimization problem with the existence of multiple best/near-optimal arms in the multi-armed bandit setting. We consider the case when the number of arms/actions is comparable to or much larger than the time horizon, and make no assumptions about the structure of the bandit instance. Our goal is to design algorithms that can automatically adapt to the unknown hardness of the problem, i.e., the number of best arms. Our setting captures many modern applications of bandit algorithms where the action space is enormous and the information about the underlying instance/structure is unavailable. We first propose an adaptive algorithm that is agnostic to the hardness level and theoretically derive its regret bound. We then prove a lower bound for our problem setting, which indicates: (1) no algorithm can be minimax optimal simultaneously over all hardness levels; and (2) our algorithm achieves a rate function that is Pareto optimal. With additional knowledge of the expected reward of the best arm, we propose another adaptive algorithm that is minimax optimal, up to polylog factors, over all hardness levels. Experimental results confirm our theoretical guarantees and show advantages of our algorithms over the previous state-of-the-art.
1 Introduction
Multi-armed bandit problems describe exploration-exploitation trade-offs in sequential decision making. Most existing bandit algorithms tend to provide regret guarantees when the number of available arms/actions is smaller than the time horizon. In modern applications of bandit algorithms, however, the action space is usually comparable to or even much larger than the allowed time horizon, so that many existing bandit algorithms cannot even complete their initial exploration phases. Consider a problem of personalized recommendations, for example. For most users, the total number of movies, or even the number of sub-categories, far exceeds the number of times they visit a recommendation site. Similarly, the enormous amount of user-generated content on YouTube and Twitter makes it increasingly challenging to make optimal recommendations. The tension between a very large action space and a limited time horizon poses a realistic problem in which deploying algorithms that converge to an optimal solution over an asymptotically long time horizon does not give satisfying results. There is a need to design algorithms that can exploit the highest possible reward within a limited time horizon. Past work has partially addressed this challenge. The quantile regret proposed in [12] calculates regret with respect to a satisfactory action rather than the best one. The discounted regret analyzed in [25, 24] is used to emphasize short-time-horizon performance. Other existing works consider the extreme case when the number of actions is indeed infinite, and tackle such problems with one of two main assumptions: (1) the discovery of a near-optimal/best arm follows some probability measure with known parameters [6, 30, 4, 15]; (2) the existence of a smooth function represents the mean-payoff over a continuous subset [1, 20, 19, 8, 23, 17]. However, in many situations, neither assumption may be realistic. We make minimal assumptions in this paper. We study the regret minimization problem over a time horizon T, which might be unknown, with respect
to a bandit instance with n total arms, out of which m are best/near-optimal arms. We emphasize that the allowed time horizon and the given bandit instance should be viewed as features of one problem, and together they indicate an intrinsic hardness level. We consider the case when the number of arms n is comparable to or larger than the time horizon T, so that no standard algorithm provides a satisfying result. Our goal is to design algorithms that can adapt to the unknown m and achieve optimal regret.
1.1 Contributions and paper organization
We make the following contributions. In Section 2, we formally define the regret minimization problem that represents the tension between a very large action space and a limited time horizon; and capture the hardness level in terms of the number of best arms. We provide an adaptive algorithm that is agnostic to the unknown number of best arms in Section 3, and theoretically derive its regret bound. In Section 4, we prove a lower bound for our problem setting that indicates that there is no algorithm that can be optimal simultaneously over all hardness levels. Our lower bound also shows that our algorithm provided in Section 3 is Pareto optimal. With additional knowledge of the expected reward of the best arm, in Section 5, we provide an algorithm that achieves the non-adaptive minimax optimal regret, up to polylog factors, without the knowledge of the number of best arms. Experiments conducted in Section 6 confirm our theoretical guarantees and show advantages of our algorithms over previous state-of-the-art. We conclude our paper in Section 7. Most of the proofs are deferred to the Appendix due to lack of space.
1.2 Related work
Time sensitivity and large action space. As bandit models are getting much more complex, usually with large or infinite action spaces, researchers have begun to pay attention to trade-offs between regret and time horizons when deploying such models. [13] study a linear bandit problem with ultra-high dimension, and provide algorithms that, under various assumptions, can achieve good reward within a short time horizon. [24] also take time horizon into account and model time preference by analyzing a discounted regret. [12] consider a quantile regret minimization problem where they define their regret with respect to the expected reward ranked at the (1 − ρ)-th quantile. One could easily transfer their problem to our setting; however, their regret guarantee is sub-optimal. [18, 4] also consider the problem with m best/near-optimal arms with no other assumptions, but they focus on the pure exploration setting; [4] additionally requires the knowledge of m. Another line of research considers the extreme case when the number of arms is infinite, but with some known regularities. [6] proposes an algorithm with a minimax optimality guarantee under the situation where the reward of each arm strictly follows a Bernoulli distribution; [27] provides an anytime algorithm that works under the same assumption. [30] relaxes the assumption on the Bernoulli reward distribution; however, some other parameters are assumed to be known in their setting.
Continuum-armed bandit. Many papers also study bandit problems with continuous action spaces, where they embed each arm x into a bounded subset X ⊆ R^d and assume there exists a smooth function f governing the mean-payoff for each arm. This setting was first introduced by [1]. When the smoothness parameters are known to the learner or under various assumptions, there exist algorithms [20, 19, 8] with near-optimal regret guarantees. When the smoothness parameters are unknown, however, [23] proves a lower bound indicating no strategy can be optimal simultaneously over all smoothness classes; under extra information, they provide adaptive algorithms with near-optimal regret guarantees. Although achieving optimal regret for all settings is impossible, [17] design adaptive algorithms and prove that they are Pareto optimal. Our algorithms are mainly inspired by the ones in [17, 23]. A closely related line of work [28, 16, 5, 26] aims at minimizing simple regret in the continuum-armed bandit setting.
Adaptivity to unknown parameters. [9] argues that assuming awareness of the regularity is flawed and that one should design algorithms that can adapt to the unknown environment. In situations where the goal is pure exploration or simple regret minimization, [18, 28, 16, 5, 26] achieve near-optimal guarantees with unknown regularity because their objectives trade off exploitation in favor of exploration. In the case of cumulative regret minimization, however, [23] shows no strategy can be optimal simultaneously over all smoothness classes. In special situations or under extra information, [9, 10, 23] provide algorithms that adapt in different ways. [17] borrows the concept of Pareto optimality from economics and provides algorithms with rate functions that are Pareto optimal. Adaptivity is studied in statistics
as well: in some cases, only additional logarithmic factors are required [22, 7]; in others, however, there exists an additional polynomial cost of adaptation [11].
2 Problem statement and notation
We consider the multi-armed bandit instance ν = (ν_1, . . . , ν_n) with n probability distributions with means µ_i = E_{X∼ν_i}[X] ∈ [0, 1]. Let µ⋆ = max_{i∈[n]} µ_i be the highest mean and S⋆ = {i ∈ [n] : µ_i = µ⋆} denote the subset of best arms.1 The cardinality |S⋆| = m is unknown to the learner. We could also generalize our setting to S′⋆ = {i ∈ [n] : µ_i ≥ µ⋆ − ϵ(T)} with unknown |S′⋆| (i.e., situations where there is an unknown number of near-optimal arms). Setting ϵ to be dependent on T is to avoid an additive term linear in T, e.g., ϵ ≤ 1/√T ⇒ ϵT ≤ √T. All theoretical results and algorithms presented in this paper are applicable to this generalized setting with minor modifications. For ease of exposition, we focus on the case with multiple best arms throughout the paper. At each time step t ∈ [T], the algorithm/learner selects an action A_t ∈ [n] and receives an independent reward X_t ∼ ν_{A_t}. We assume that X_t − µ_{A_t} is (1/2)-sub-Gaussian conditioned on A_t.2 We measure the success of an algorithm through the expected cumulative (pseudo) regret:
R_T = T · µ⋆ − E[ Σ_{t=1}^{T} µ_{A_t} ].
We use R(T, n, m) to denote the set of regret minimization problems with allowed time horizon T and any bandit instance ν with n total arms and m best arms.3 We emphasize that T is part of the problem instance. We are particularly interested in the case when n is comparable to or even larger than T, which captures many modern applications where the available action space far exceeds the allowed time horizon. Although learning algorithms may not be able to pull each arm once, one should notice that the true/intrinsic hardness level of the problem could be viewed as n/m: selecting a subset uniformly at random with cardinality Θ(n/m) guarantees, with constant probability, access to at least one best arm; but of course it is impossible to do this without knowing m. We quantify the intrinsic hardness level over a set of regret minimization problems R(T, n, m) as
ψ(R(T, n, m)) = inf{α ≥ 0 : n/m ≤ 2T^α}, where the constant 2 in front of T^α is added to avoid the otherwise trivial case with all best arms when the infimum is 0. ψ(R(T, n, m)) is used here as it captures the minimax optimal regret over the set of regret minimization problems R(T, n, m), as explained later in our review of the MOSS algorithm and the lower bound. As a smaller ψ(R(T, n, m)) indicates easier problems, we then define the family of regret minimization problems with hardness level at most α as
H_T(α) = {∪R(T, n, m) : ψ(R(T, n, m)) ≤ α}, with α ∈ [0, 1]. Although T is necessary to define a regret minimization problem, we actually encode the hardness level into a single parameter α, which captures the tension between the complexity of the bandit instance at hand and the allowed time horizon T: problems with different time horizons but the same α are equally difficult in terms of the achievable minimax regret (the exponent of T). We thus mainly study problems with T large enough so that we can focus on the polynomial terms of T. We are interested in designing algorithms with minimax guarantees over H_T(α), but without the knowledge of α.
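For illustration, the hardness level implied by a concrete instance can be computed directly from this definition; the short helper below and the example numbers are our own, not part of the paper.

```python
import math

def hardness_level(T: int, n: int, m: int) -> float:
    """Smallest alpha >= 0 with n / m <= 2 * T**alpha (capped at 1), per the
    definition of psi(R(T, n, m)) above. A sketch for illustration only."""
    ratio = n / m
    if ratio <= 2.0:          # trivial case: the constant 2 absorbs it
        return 0.0
    return min(1.0, math.log(ratio / 2.0) / math.log(T))

# Example mirroring the synthetic setup of Section 6.1: T = 50000, n = 20000
# arms, and m chosen as ceil(n / (2 * T**0.25)) best arms recovers alpha ~ 0.25.
m = math.ceil(20000 / (2 * 50000 ** 0.25))
print(m, round(hardness_level(50000, 20000, m), 3))   # -> 669 0.25
```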
MOSS and upper bound. In the classical setting, MOSS, designed by [2] and further generalized to the sub-Gaussian case [21] and improved in terms of constant factors [14], achieves the minimax optimal regret. In this paper, we will use MOSS as a subroutine with regret upper bound O(√(nT)) when T ≥ n. For any problem in H_T(α) with known α, one could run MOSS on a subset selected uniformly at random with cardinality Õ(T^α) and achieve regret Õ(T^{(1+α)/2}).
1 Throughout the paper, we denote by [K] the set {1, . . . , K} for any positive integer K. 2 We say a random variable X is σ-sub-Gaussian if E[exp(λX)] ≤ exp(σ²λ²/2) for all λ ∈ R. 3 Our setting could be generalized to the case with infinite arms: one can consider embedding arms into an arm space X and let p be the probability that an arm sampled uniformly at random is (near-)optimal. 1/p will then serve a similar role as n/m does in the original definition.
Lower bound. The lower bound Ω(√(nT)) in the classical setting does not work for our setting, as its proof heavily relies on the existence of a single best arm [21]. However, for problems in H_T(α), we do have a matching lower bound Ω(T^{(1+α)/2}), as one could always apply the standard lower bound on a bandit instance with n = ⌊T^α⌋ and m = 1. For general values of m, a lower bound of the order Ω(√(T(n−m)/m)) = Ω(T^{(1+α)/2}) for the m-best-arms case could be obtained following a similar analysis in Chapter 15 of [21].
Although log T may appear in our bounds, throughout the paper, we focus on problems with T ≥ 2 as otherwise the bound is trivial.
3 An adaptive algorithm
Algorithm 1 takes the time horizon T and a user-specified β ∈ [1/2, 1] as input, and it is mainly inspired by [17]. Algorithm 1 operates in iterations with geometrically-increasing length ΔT_i = 2^{p+i} with p = ⌈log_2 T^β⌉. At each iteration i, it restarts MOSS on a set S_i consisting of K_i = 2^{p+2−i} real arms selected uniformly at random plus a set of "virtual" mixture-arms (one from each of the 1 ≤ j < i previous iterations, none if i = 1). The mixture-arms are constructed as follows. After each iteration i, let p̂_i denote the vector of empirical sampling frequencies of the arms in that iteration (i.e., the k-th element of p̂_i is the number of times arm k, including all previously constructed mixture-arms, was sampled in iteration i divided by the total number of samples ΔT_i). The mixture-arm for iteration i is the p̂_i-mixture of the arms, denoted by ν̃_i. When MOSS samples from ν̃_i, it first draws an index i_t ∼ p̂_i, then draws a sample from the corresponding arm ν_{i_t} (or ν̃_{i_t}). The mixture-arms provide a convenient summary of the information gained in the previous iterations, which is key to our theoretical analysis. Although our algorithm works on fewer regular arms in later iterations, the information summarized in the mixture-arms is good enough to provide guarantees. We name our algorithm MOSS++ as it restarts MOSS at each iteration with past information summarized in mixture-arms. We provide an anytime version of Algorithm 1 in Appendix A.2 via the standard doubling trick.
Algorithm 1: MOSS++
Input: Time horizon T and user-specified parameter β ∈ [1/2, 1].
1: Set p = ⌈log_2 T^β⌉, K_i = 2^{p+2−i}, and ΔT_i = min{2^{p+i}, T}.
2: for i = 1, . . . , p do
3:   Run MOSS on a subset of arms S_i for ΔT_i rounds. S_i contains K_i real arms selected uniformly at random and the set of virtual mixture-arms from previous iterations, i.e., {ν̃_j}_{j<i}.
4:   Construct a virtual mixture-arm ν̃_i based on the empirical sampling frequencies of MOSS above.
5: end for
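To make the schedule concrete, here is a minimal runnable sketch of the MOSS++ loop under our own simplifications (it is not the authors' code): arms are Bernoulli samplers, the inner MOSS uses one standard form of the MOSS index, each iteration is additionally capped by the remaining budget, and a mixture-arm is simulated by resampling from the recorded empirical frequencies.

```python
import math
import random

def moss(arms, horizon):
    """Run MOSS for `horizon` pulls on a list of callable arms; return the
    empirical pull frequencies. The index below is one standard MOSS form
    (an assumption of this sketch, not quoted from the paper)."""
    K = len(arms)
    counts, means = [0] * K, [0.0] * K
    for t in range(horizon):
        if t < K:
            k = t  # pull every arm once first
        else:
            k = max(range(K), key=lambda i: means[i]
                    + math.sqrt(max(math.log(horizon / (K * counts[i])), 0.0) / counts[i]))
        x = arms[k]()
        counts[k] += 1
        means[k] += (x - means[k]) / counts[k]
    return [c / horizon for c in counts]

def moss_plus_plus(all_arms, T, beta=0.5):
    """Sketch of Algorithm 1: restart MOSS on shrinking random subsets plus
    mixture-arms built from previous iterations' empirical frequencies."""
    p = math.ceil(math.log2(T ** beta))
    mixture_arms, spent = [], 0
    for i in range(1, p + 1):
        K_i = 2 ** (p + 2 - i)
        dT_i = min(2 ** (p + i), T - spent)   # cap by remaining budget (our simplification)
        if dT_i <= 0:
            break
        S_i = random.sample(all_arms, min(K_i, len(all_arms))) + mixture_arms
        freqs = moss(S_i, dT_i)
        spent += dT_i
        # mixture-arm: first draw an arm index from freqs, then sample that arm
        mixture_arms.append(lambda S=S_i, f=freqs: random.choices(S, weights=f)[0]())
    return spent

# toy Bernoulli instance with 5 best arms among 1000
arms = [lambda mu=mu: float(random.random() < mu) for mu in [0.9] * 5 + [0.3] * 995]
moss_plus_plus(arms, T=10000)
```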
3.1 Analysis and discussion
We use µ_S = max_{ν∈S} E_{X∼ν}[X] to denote the highest expected reward over a set of distributions/arms S. For any algorithm that only works on S, we can decompose the regret into approximation error and learning error:
R_T = T · (µ⋆ − µ_S) + ( T · µ_S − E[ Σ_{t=1}^{T} µ_{A_t} ] ), (1)
where the first term is the approximation error due to the selection of S, and the second term is the learning error due to the sampling rule {A_t}_{t=1}^{T}.
This type of regret decomposition was previously used in [20, 3, 17] to deal with the continuum-armed bandit problem. We consider here a probabilistic version, with randomness in the selection of S, for the classical setting.
The main idea behind providing guarantees for MOSS++ is to decompose its regret at each iteration, using Eq. (1), and then bound the expected approximation error and learning error separately. The expected learning error at each iteration can always be controlled as Õ(T^β) thanks to the regret guarantees for MOSS and the specifically chosen parameters p, K_i, ΔT_i. Let i⋆ be the largest integer such that K_i ≥ 2T^α log√T still holds. The expected approximation error in iteration i ≤ i⋆ can be upper bounded by √T following an analysis of the hypergeometric distribution. As a result, the expected regret in iteration i ≤ i⋆ is Õ(T^β). Since the mixture-arm ν̃_{i⋆} is included in all following iterations, we can further bound the expected approximation error in iteration i > i⋆ by Õ(T^{1+α−β}) after a careful analysis of ΔT_i/ΔT_{i⋆}. This intuition is formally stated and proved in Theorem 1. Theorem 1. Running MOSS++ with time horizon T and a user-specified parameter β ∈ [1/2, 1] leads to the following regret upper bound:
sup_{ω∈H_T(α)} R_T ≤ C (log_2 T)^{5/2} · T^{min{max{β, 1+α−β}, 1}},
where C is a universal constant. Remark 1. We primarily focus on the polynomial terms in T when deriving the bound, but put no effort into optimizing the polylog term. The 5/2 exponent of log_2 T might be tightened as well.
The theoretical guarantee is closely related to the user-specified parameter β: when β > α, we suffer a multiplicative cost of adaptation Õ(T^{|(2β−α−1)/2|}), with β = (1 + α)/2 hitting the sweet spot, compared to the non-adaptive minimax regret; when β ≤ α, there are essentially no guarantees. One may hope to improve this result. However, our analysis in Section 4 indicates: (1) achieving minimax optimal regret for all settings simultaneously is impossible; and (2) the rate function achieved by MOSS++ is already Pareto optimal.
4 Lower bound and Pareto optimality
4.1 Lower bound
In this section, we show that designing algorithms with the non-adaptive minimax optimal guarantee over all values of α is impossible. We first state the result in the following general theorem. Theorem 2. For any 0 ≤ α′ < α ≤ 1, assume T^α ≤ B and ⌊T^α⌋ − 1 ≥ max{T^α/4, 2}. If an algorithm is such that sup_{ω∈H_T(α′)} R_T ≤ B, then the regret of this algorithm is lower bounded on H_T(α):
sup_{ω∈H_T(α)} R_T ≥ 2^{−10} T^{1+α} B^{−1}. (2)
To give an interpretation of Theorem 2, we consider any algorithm/policy π together with regret minimization problems H_T(α′) and H_T(α) satisfying the corresponding requirements. On one hand, if algorithm π achieves a regret that is order-wise larger than Õ(T^{(1+α′)/2}) over H_T(α′), it is already not minimax optimal for H_T(α′). Now suppose π achieves a near-optimal regret, i.e., Õ(T^{(1+α′)/2}), over H_T(α′); then, according to Eq. (2), π must incur a regret of order at least Ω̃(T^{1/2+α−α′/2}) on one problem in H_T(α′). This, on the other hand, makes algorithm π strictly sub-optimal over H_T(α).
4.2 Pareto optimality
We capture the performance of any algorithm by its dependence on polynomial terms of T in the asymptotic sense. Note that the hardness level of a problem is encoded in α. Definition 1. Let θ : [0, 1] → [0, 1] denote a non-decreasing function. An algorithm achieves the rate function θ if
∀ϵ > 0, ∀α ∈ [0, 1],  limsup_{T→∞}  sup_{ω∈H_T(α)} R_T / T^{θ(α)+ϵ} < +∞.
Recall that a function θ′ is strictly smaller than another function θ in pointwise order if θ′(α) ≤ θ(α) for all α and θ′(α0) < θ(α0) for at least one value of α0. As there may not always exist a pointwise ordering over rate functions, following [17], we consider the notion of Pareto optimality over rate functions achieved by some algorithms. Definition 2. A rate function θ is Pareto optimal if it is achieved by an algorithm, and there is no other algorithm achieving a strictly smaller rate function θ′ in pointwise order. An algorithm is Pareto optimal if it achieves a Pareto optimal rate function.
Combining the results in Theorem 1 and Theorem 2 with the above definitions, we can further obtain the following result in Theorem 3. Theorem 3. The rate function achieved by MOSS++ with any β ∈ [1/2, 1], i.e.,
θ_β : α ↦ min{max{β, 1 + α − β}, 1}, (3)
is Pareto optimal.
5 Learning with extra information
Although the previous section gives negative results on designing algorithms that can optimally adapt to all settings, one can actually design such an algorithm with extra information. In this section, we provide an algorithm that takes the expected reward of the best arm µ⋆ (or an estimate with error up to 1/√T) as extra information, and achieves near minimax optimal regret over all settings simultaneously. Our algorithm is mainly inspired by [23].
5.1 Algorithm
We name our Algorithm 3 Parallel as it maintains ⌈log T⌉ instances of the subroutine, i.e., Algorithm 2, in parallel. Each subroutine SR_i is initialized with time horizon T and hardness level α_i = i/⌈log T⌉. We use T_{i,t} to denote the number of samples allocated to SR_i up to time t, and represent its empirical regret at time t as R̂_{i,t} = T_{i,t} · µ⋆ − Σ_{t=1}^{T_{i,t}} X_{i,t}, with X_{i,t} ∼ ν_{A_{i,t}} being the t-th empirical reward obtained by SR_i and A_{i,t} being the index of the t-th arm pulled by SR_i.
Algorithm 2: MOSS Subroutine (SR)
Input: Time horizon T and hardness level α.
1: Select a subset of arms S_α uniformly at random with |S_α| = ⌈2T^α log√T⌉ and run MOSS on S_α.
Parallel operates in iterations of length ⌈√T⌉. At the beginning of each iteration, i.e., at time t = i · ⌈√T⌉ for i ∈ {0} ∪ [⌈√T⌉ − 1], Parallel first selects the subroutine with the lowest (breaking ties arbitrarily) empirical regret so far, i.e., k = arg min_{i∈[⌈log T⌉]} R̂_{i,t}; it then resumes the learning process of SR_k, from where it halted, for another ⌈√T⌉ pulls. All the information is updated at the end of that iteration. An anytime version of Algorithm 3 is provided in Appendix C.3.
5.2 Analysis
As Parallel discretizes the hardness parameter over a grid with interval 1/⌈log T⌉, we first show that running the best subroutine alone leads to regret Õ(T^{(1+α)/2}).
Algorithm 3: Parallel
Input: Time horizon T and the optimal reward µ⋆.
1: Set p = ⌈log T⌉, Δ = ⌈√T⌉ and t = 0.
2: for i = 1, . . . , p do
3:   Set α_i = i/p, initialize SR_i with α_i, T; set T_{i,t} = 0 and R̂_{i,t} = 0.
4: end for
5: for i = 1, . . . , Δ − 1 do
6:   Select k = arg min_{i∈[p]} R̂_{i,t} and run SR_k for Δ rounds.
7:   Update T_{k,t} = T_{k,t} + Δ, R̂_{k,t} = T_{k,t} · µ⋆ − Σ_{t=1}^{T_{k,t}} X_{k,t}, t = t + Δ.
8: end for
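The following is a minimal sketch of this scheduler under the same toy Bernoulli-arm model used above (our own illustration, not the authors' code); the subroutine re-implements Algorithm 2 with a standard MOSS index, and the base of log T is assumed to be natural.

```python
import math
import random

class Subroutine:
    """SR_i of Algorithm 2: MOSS restricted to a random subset of size
    ceil(2 * T**alpha * log(sqrt(T))), made resumable one pull at a time."""
    def __init__(self, all_arms, T, alpha):
        k = min(len(all_arms), math.ceil(2 * T ** alpha * math.log(math.sqrt(T))))
        self.arms = random.sample(all_arms, k)
        self.T, self.counts = T, [0] * k
        self.means, self.pulls, self.reward = [0.0] * k, 0, 0.0

    def pull_once(self):
        K = len(self.arms)
        if self.pulls < K:
            i = self.pulls                      # initial round-robin
        else:
            i = max(range(K), key=lambda j: self.means[j]
                    + math.sqrt(max(math.log(self.T / (K * self.counts[j])), 0.0) / self.counts[j]))
        x = self.arms[i]()
        self.counts[i] += 1
        self.means[i] += (x - self.means[i]) / self.counts[i]
        self.pulls += 1
        self.reward += x

def parallel(all_arms, T, mu_star):
    p, delta = math.ceil(math.log(T)), math.ceil(math.sqrt(T))
    subs = [Subroutine(all_arms, T, (i + 1) / p) for i in range(p)]
    for _ in range(delta - 1):
        # resume the subroutine with the lowest empirical regret so far
        k = min(range(p), key=lambda i: subs[i].pulls * mu_star - subs[i].reward)
        for _ in range(delta):
            subs[k].pull_once()
    return sum(s.reward for s in subs)

arms = [lambda mu=mu: float(random.random() < mu) for mu in [0.9] * 3 + [0.4] * 497]
parallel(arms, T=4000, mu_star=0.9)
```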
Lemma 1. Suppose α is the true hardness parameter and α_i − 1/⌈log T⌉ < α ≤ α_i; running Algorithm 2 with time horizon T and α_i leads to the following regret bound:
sup_{ω∈H_T(α)} R_T ≤ C log T · T^{(1+α)/2},
where C is a universal constant.
Since Parallel always allocates new samples to the subroutine with the lowest empirical regret so far, we know that the regret of every subroutine should be roughly of the same order at time T. In particular, all subroutines should achieve regret Õ(T^{(1+α)/2}), as the best subroutine does. Parallel then achieves the non-adaptive minimax optimal regret, up to polylog factors, without knowing the true hardness level α. Theorem 4. For any α ∈ [0, 1] unknown to the learner, running Parallel with time horizon T and the optimal expected reward µ⋆ leads to the following regret upper bound:
sup_{ω∈H_T(α)} R_T ≤ C (log T)^2 T^{(1+α)/2},
where C is a universal constant.
6 Experiments
We conduct three experiments to compare our algorithms with baselines. In Section 6.1, we compare the performance of each algorithm on problems with varying hardness levels. We examine how the regret curve of each algorithm increases on synthetic and real-world datasets in Section 6.2 and Section 6.3, respectively.
We first introduce the nomenclature of the algorithms. We use MOSS to denote the standard MOSS algorithm, and MOSS Oracle to denote Algorithm 2 with known α. Quantile represents the algorithm (QRM2) proposed by [12] to minimize the regret with respect to the (1 − ρ)-th quantile of means among arms, without the knowledge of ρ. One could easily transfer Quantile to our setting with the top-ρ fraction of arms treated as best arms. As suggested in [12], we reuse the statistics obtained in previous iterations of Quantile to improve its sample efficiency. We use MOSS++ to represent the vanilla version of Algorithm 1, and use empMOSS++ to represent an empirical version such that: (1) empMOSS++ reuses statistics obtained in previous rounds, as is done in Quantile; and (2) instead of selecting K_i real arms uniformly at random at the i-th iteration, empMOSS++ selects the K_i arms with the highest empirical means for i > 1. We choose β = 0.5 for MOSS++ and empMOSS++ in all experiments.4 All results are averaged over 100 experiments. The shaded area represents 0.5 standard deviation for each algorithm.
6.1 Adaptivity to hardness level
We compare our algorithms with baselines on regret minimization problems with different hardness levels. For this experiment, we generate best arms with expected reward 0.9 and sub-optimal arms
4Increasing β generally leads to worse performance on problems with small α but better performance on problems with large α.
with expected reward evenly distributed among {0.1, 0.2, 0.3, 0.4, 0.5}. All arms follow Bernoulli distributions. We set the time horizon to T = 50000 and consider the total number of arms n = 20000. We vary α from 0.1 to 0.8 (with interval 0.1) to control the number of best arms m = ⌈n/(2T^α)⌉ and thus the hardness level. In Fig. 2(a), the regret of any algorithm gets larger as α increases, which is expected. MOSS does not provide satisfying performance due to the large action space and the relatively small time horizon. Although implemented in an anytime fashion, Quantile can be roughly viewed as an algorithm that runs MOSS on a subset selected uniformly at random with cardinality T^{0.347}. Quantile displays good performance when α = 0.1, but suffers regret much worse than MOSS++ and empMOSS++ when α gets larger. Note that the regret curve of Quantile flattening at 20000 is expected: it simply learns the best sub-optimal arm and suffers a regret of 50000 × (0.9 − 0.5). Although Parallel enjoys near minimax optimal regret, the regret it suffers is the summation over 11 subroutines, which hurts its empirical performance. empMOSS++ achieves performance comparable to MOSS Oracle when α is small, and achieves the best empirical performance when α ≥ 0.3. When α ≥ 0.7, MOSS Oracle needs to explore most/all of the arms to statistically guarantee the finding of at least one best arm, which hurts its empirical performance.
6.2 Regret curve comparison
We compare how the regret curve of each algorithm increases in Fig. 2(b). We consider the same regret minimization configurations as described in Section 6.1 with α = 0.25. empMOSS++, MOSS++ and Parallel all outperform Quantile, with empMOSS++ achieving the performance closest to MOSS Oracle. MOSS Oracle, Parallel and empMOSS++ have flattened their regret curves, indicating they can confidently recommend the best arm. The regret curves of MOSS++ and Quantile do not flatten, as the random-sampling component in each of their iterations encourages them to explore new arms. Compared to MOSS++, Quantile keeps increasing its regret at a much faster rate and with a much larger variance, which empirically confirms the sub-optimality of its regret guarantees.
6.3 Real-world dataset
We also compare all algorithms in a realistic setting of recommending funny captions to website visitors. We use a real-world dataset from the New Yorker Magazine Cartoon Caption Contest5. The dataset of 1-3 star caption ratings/rewards for Contest 652 consists of n = 10025 captions6. We use the ratings to compute Bernoulli reward distributions for each caption as follows. The mean of each caption/arm i is calculated as the percentage p_i of its ratings that were funny or somewhat funny (i.e., 2 or 3 stars). We normalize each p_i by the best one and then threshold each: if p_i ≥ 0.8, we set p_i = 1; otherwise we leave p_i unaltered. This produces a set of m = 54 best arms with rewards 1 and all
5https://www.newyorker.com/cartoons/contest. 6Available online at https://nextml.github.io/caption-contest-data.
other 9971 arms with rewards in [0, 0.8]. We set T = 10^5, and this results in a hardness level around α ≈ 0.43.
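A small sketch of this preprocessing under an assumed (n, 3) table of 1/2/3-star vote counts; the input format and the toy numbers are ours, not the dataset's actual schema.

```python
import numpy as np

def caption_means(ratings):
    """Turn per-caption star counts into Bernoulli means as described above.
    `ratings` is an (n, 3) array of counts for 1/2/3-star votes (our own
    illustrative layout)."""
    ratings = np.asarray(ratings, dtype=float)
    funny = ratings[:, 1:].sum(axis=1)           # 2- or 3-star votes
    p = funny / ratings.sum(axis=1)              # fraction rated funny
    p = p / p.max()                              # normalize by the best caption
    p[p >= 0.8] = 1.0                            # threshold: these become best arms
    return p

toy = [[10, 5, 5], [2, 9, 9], [18, 1, 1]]        # three toy captions
print(caption_means(toy))                        # -> [0.555..., 1.0, 0.111...]
```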
[Figure: expected regret over time (T = 10^5) on the New Yorker caption contest dataset for MOSS, MOSS Oracle, Quantile, Parallel (ours), MOSS++ (ours), and empMOSS++ (ours), demonstrating the effectiveness of empMOSS++ and MOSS++ in modern applications of bandit algorithms with large action space and limited time horizon.]
7 Conclusion
We study a regret minimization problem with a large action space but limited time horizon, which captures many modern applications of bandit algorithms. Depending on the number of best/near-optimal arms, we encode the hardness level of the given regret minimization problem, in terms of the minimax regret achievable, into a single parameter α, and we design algorithms that can adapt to this unknown hardness level. Our first algorithm MOSS++ takes a user-specified parameter β as input and provides guarantees as long as α < β; our lower bound further indicates that the rate function achieved by MOSS++ is Pareto optimal. Although no algorithm can achieve near minimax optimal regret over all α simultaneously, as demonstrated by our lower bound, we overcome this limitation with (often) easily-obtained extra information and propose Parallel, which is near-optimal for all settings. Inspired by MOSS++, we also propose empMOSS++ with excellent empirical performance. Experiments on both synthetic and real-world datasets demonstrate the efficiency of our algorithms over the previous state-of-the-art.
Broader Impact
This paper provides efficient algorithms that work well in modern applications of bandit algorithms with large action space but limited time horizon. We make minimal assumptions about the setting, and our algorithms can automatically adapt to unknown hardness levels. Worst-case regret guarantees are provided for our algorithms; we also show MOSS++ is Pareto optimal and Parallel is minimax optimal, up to polylog factors. empMOSS++ is provided as a practical version of MOSS++ with excellent empirical performance. Our algorithms are particularly useful in areas such as e-commerce and movie/content recommendation, where the action space is enormous but possibly contains multiple best/satisfactory actions. If deployed, our algorithms could automatically adapt to the hardness level of the recommendation task and benefit both service providers and customers through efficiently delivering satisfactory content. One possible negative outcome is that items recommended to a specific user/customer might only come from a subset of the action space. However, this is unavoidable when the number of items/actions exceeds the allowed time horizon. In fact, one should notice that all items/actions will be selected with essentially the same probability, thanks to the incorporation of random selection processes in our algorithms. Our algorithms will not leverage/create biases due to the same reason. Overall, we believe this paper's contribution will have a net positive impact.
Acknowledgments and Disclosure of Funding
The authors would like to thank anonymous reviewers for their comments and suggestions. This work was partially supported by NSF grant no. 1934612. | 1. What is the main contribution of the paper regarding bandit instances?
2. What are the strengths of the proposed algorithm, particularly in adaptiveness and Pareto optimality?
3. What are the weaknesses of the paper, such as questionable assumptions and limitations in experimental designs?
4. How does the reviewer assess the novelty and originality of the paper's content?
5. Are there any concerns regarding the applicability and practicality of the proposed approach? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper studies bandit instances where the number of arms is larger than the horizon. Usual asymptotic analyses of the regret are typically of no use in this case. The authors focus on cases where there are multiple optimal arms and define a complexity measure that relies on the proportion of optimal arms and on the horizon. Namely, the complexity of a bandit instance with $n$ arms, $m$ best arms and a horizon $T$ is defined as $\inf_{\alpha \in [0, 1]} \{n/m \leq T^{\alpha}\}$. They propose an algorithm that is adaptive to the complexity defined like so. They provide an upper bound on the regret, scaling like a power of $T$, with an exponent being a function of the parameter of the algorithm. The optimal choice of the parameter yields a regret of the order of $T^{(1+\alpha)/2}$, which is the non-adaptive-minimax-optimal rate. They derive a lower bound and prove that the algorithm is Pareto optimal. Next, they provide another algorithm for the case when the reward of the best arms is known in advance. The analysis of the regret shows that the regret is non-adaptive-minimax-optimal. Some experiments illustrate these findings but fail to show the superiority of the second algorithm (which has access to additional information) over the first one in practice.
Strengths
I appreciate the originality of the paper, that comes from the fact that it introduces a measure of complexity, which I believe is new, and which relies on the proportion of optimal arms and on the horizon, while the usual measures of complexity usually omit to take the horizon into account. The results include a nice lower bound that shows that it is impossible to construct an algorithm that achieves minimax optimality for all complexities simultaneously. Despite this impossibility, the authors build an adaptive algorithm that is Pareto optimal. The presence of experiments, which show the behaviour of the regret with respect to the complexity and the time is also a positive element.
Weaknesses
- The fact that the measure of the complexity relies on the number of optimal arms is rather questionable, since there are not many applications where there can be a large set of optimal arms. I think the paper would benefit from a longer discussion of the generalization to near-optimal arms. Also, there does not seem to be any result of this kind in the Appendix, which is regrettable. - The lower bound in section 2, on which the hardness measure is based, relies on an example where there is a single best arm. At this point in the paper, it would be interesting to know of a lower bound for a case with more arms, in order to know if the hardness classes are uniform in this regard (i.e. if the claim that “problems with different time horizons but the same $\alpha$ are equally difficult in terms of the achievable minimax regret (the exponent of $T$) is true). Later on, we learn that a regret of the order of $T^{1+\alpha}$ can be achieved over the whole class, thanks to the restart algorithm, but it does not fully answer the question. - In Section 3, the authors rightly point out that the provided upper bound on the regret does not give any guarantees when $\beta<\alpha$ since it boils down to saying that the regret is bounded by $T$. The impossibility to achieve the non-adaptive minimax regret bound for every hardness class simultaneously does not mean that algorithms are bound to be this inefficient on a large range of $alpha$s. Furthermore, the choice of $\beta = 0.5$ in the experiments puts us exactly in the situation where $\beta<\alpha$ for half of the choices of $\alpha$. This raises two questions : -Can the bound be improved (the experiments seem to indicate that it can)? -What choice of $\beta$ should a user make when agnostic of $\alpha$ ? An answer or a discussion about these questions would have been appreciated. Even a graph showing the influence of the parameter on the regret would have been useful. - Although I understand that the theoretical result that additional information about the value of the best arm allows to achieve minimax optimality is satisfactory, I wonder whether this case is really relevant. I do not know of any practical case where the value of the optimal arm would be known in advance. - Another slightly weak point of the paper is that the experiments have been made with only 100 Monte Carlo trials for a horizon of 50,000 time steps. - Lastly, the paper seems to have been hastily written, which makes it difficult to read due to the large number of typos. I have read the authors' response and agree on their comment on the first and second points. I think that the answer to point 2 should figure in the paper, for it to be complete. On the role of $\beta$, I can only believe the authors, since I can not see the results of the experiments. |
NIPS | Title
Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
Abstract
We propose Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera. In NDR, we adopt the neural implicit function for surface representation and rendering such that the captured color and depth can be fully utilized to jointly optimize the surface and deformations. To represent and constrain the non-rigid deformations, we propose a novel neural invertible deforming network such that the cycle consistency between any two frames is automatically satisfied. Considering that the surface topology of the dynamic scene might change over time, we employ a topology-aware strategy to construct the topology-variant correspondence for the fused frames. NDR also further refines the camera poses in a global optimization manner. Experiments on public datasets and our collected dataset demonstrate that NDR outperforms existing monocular dynamic reconstruction methods.
1 Introduction
Reconstructing the 3D geometry, texture and motions of a dynamic scene from a monocular video is a classical and challenging problem in computer vision. It has broad applications in many areas like virtual and augmented reality. Although existing methods [63, 65] have demonstrated impressive reconstruction results for dynamic scenes with only 2D images, it is still difficult for them to recover high-fidelity geometry, especially for some casually captured data, as abundant potential solutions exist without depth constraints. With only 2D measurements, dynamic reconstruction methods require that the motions of the interested object stay in a nearby z-plane. Meanwhile, it is difficult to construct reliable correspondences in areas with weak texture, which causes error accumulation in the canonical space.
To solve this under-constrained problem, some methods propose to utilize shape priors for some special object types. For example, category-specific parametric shape models like 3DMM [6], SMPL [41] and SMAL [72] are first constructed and then used to help the reconstruction. However, template-based methods cannot generalize to unknown object types. On the other hand, some methods utilize annotations, like keypoints and optical flow, obtained from manual annotators or off-the-shelf tools [31, 33, 63, 65]. The motion trajectories of sparse or dense 2D points can effectively help recover the exact motion of the whole structure. However, this needs human labeling for supervision or depends heavily on the quality of priors learned from a large-scale dataset.
One straightforward solution to this under-constrained problem is to reconstruct the interested object based on observations from RGB-D cameras like Microsoft Kinect [69] and Apple iPhone X. Existing fusion-based methods [44, 27, 54] utilize a dense non-rigid warp field and a canonical truncated signed
distance (TSDF) volume to represent motion and shape, respectively. However, these fusion-based methods might fail due to accumulated tracking errors, especially for long sequences. To alleviate this problem, some learning-based methods [9, 8, 39] utilize more accurate correspondences, annotated or learned from synthetic datasets, to guide the dynamic fusion process. However, the captured color and depth information is not represented together within one differentiable framework in these methods. Recently, a neural implicit representation based method [3] has been proposed to reconstruct a room-scale scene from RGB-D inputs, but it is only designed for static scenes and cannot be directly applied to dynamic scenes.
Figure 1: Examples of reconstructed (right) and rendered (left) results by NDR. Given a monocular RGB-D video sequence, NDR recovers high-fidelity geometry and motions of a dynamic scene.
In this paper, we present Neural-DynamicReconstruction (NDR), a neural dynamic reconstruction method from a monocular RGB-D camera (Fig. 1). To represent the high-fidelity geometry and texture of the deformable object, NDR maintains a neural implicit field as the canonical space. Even with the extra depth constraint, there still exist multiple potential solutions, since the correspondences between different frames are still unknown. In this paper, we propose the following strategies to constrain and regularize the solution space: (1) integrating all RGB-D frames to a high-fidelity textured shape in the canonical space; (2) maintaining cycle consistency between any two frames; (3) a surface representation which can handle topological changes.
Specifically, we adopt the neural SDF and radiance field to represent the high-fidelity geometry and appearance in the canonical space, respectively, instead of the TSDF volume frequently used in fusion-based methods [28, 44, 54, 9, 8, 39]. In our framework, each RGB-D frame can be integrated into the canonical representation. We propose a novel neural deformation representation that implies a continuous bijective map between the observation and canonical space. The designed invertible module applies a cycle consistency constraint through the whole RGB-D video; meanwhile, it fits the natural properties of non-rigid motion well. To support topology changes of the dynamic scene, we adopt the topology-aware network in HyperNeRF [47]. Thanks to modeling topology-variant correspondences, our framework can handle topology changes while existing deformation graph based methods [44, 39, 65] cannot. NDR also further refines camera intrinsic parameters and poses during training. Extensive experimental results demonstrate that NDR can recover high-fidelity geometry and photorealistic texture for monocular category-agnostic RGB-D videos.
2 Related Works
RGB based dynamic reconstruction. Dynamic reconstruction approaches can be divided into template-based and template-free types. Templates [6, 41, 50, 72] are category-specific statistical models constructed from large-scale datasets. With the help of pre-constructed 3D morphable models [6, 12, 36], some works [5, 11, 26, 57, 21, 25, 19] reconstruct faces or heads from RGB inputs. Most of them need 2D keypoints as extra supervisory information to guide dynamic tracking [71, 17, 19]. With the aid of human parametric models [1, 41], some works [7, 62, 23, 24, 70, 29] recover digital avatars based on monocular image or video cues. However, it is impractical to extend templates to general objects with limited 3D scanned priors, such as articulated objects, clothed humans and animals. Non-rigid structure from motion (NR-SFM) algorithms [10, 51, 15, 34, 53] aim to reconstruct category-agnostic objects from 2D observations. Although NR-SFM can reconstruct reasonable results for general dynamic scenes, it heavily depends on reliable point trajectories throughout observed sequences [52, 56]. Recently, some methods [63, 64, 65] obtain promising results from a long monocular video or several short videos of a category. LASR [63] and ViSER [64] recover articulated shapes via a differentiable rendering manner [40], while BANMo [65] models them with the help of Neural Radiance Fields (NeRF) [43]. However, due to the depth ambiguity of input 2D images, the reconstruction might fail for some challenging inputs.
RGB-D based dynamic reconstruction. Recovering 3D deforming shapes from a monocular RGB video is a highly under-constrained problem. On the other hand, the progress in consumer-grade RGB-D sensors has made depth map capture from a single camera more convenient. Therefore, it is quite natural to reconstruct the target objects based on RGB-D sequences. DynamicFusion [44], the seminal work of RGB-D camera based dynamic object reconstruction, proposes to estimate a template-free 6D motion field to warp live frames into a TSDF surface. The surface representation strategy has also been used in KinectFusion [28]. VolumeDeform [27] represents motion in a grid and incorporates global sparse SIFT [42] features during alignment. Guo et al. [20] couple albedo, geometry and motion estimation in an optimization pipeline. KillingFusion [54] and SobolevFusion [55] are proposed to deal with topology changes. In the deep learning era, DeepDeform [9] and Bozic et al. [8] aim to learn more accurate correspondences to improve the tracking of faster and more complex motions. OcclusionFusion [39] probes and handles the occlusion problem via an LSTM-involved graph neural network but fails when topology changes. Although these methods obtain promising reconstruction results with the additional depth cues, their reconstructed shapes mainly depend on the captured depths, while the RGB images are not fully utilized to further improve the results.
Dynamic NeRF. Given a range of image cues, prior works on NeRF [43] optimize an underlying continuous scene function for novel view synthesis. Some NeRF-like methods [37, 49, 58, 18, 46, 47] achieve promising results on dynamic scenes without prior templates. Nerfies [46] and AD-NeRF [22] reconstruct free-viewpoint selfies from monocular videos. HyperNeRF [47] models an ambient slicing surface to express topologically varying regions. Recent approaches [60, 3] introduce neural representations for static object/scene reconstruction, but they cannot be used for non-rigid scenes.
Cycle consistency constraint. Maintaining cycle consistency between deformed frames is an important regularization in perceiving and modeling dynamic scenes [61]. However, recent methods [64, 37, 65] leverage a loss term to constrain estimated surface features or scene flow, which enforces the property weakly rather than strictly. Therefore, constructing an invertible representation of the deformation field is a reasonable design. Several invertible networks have been proposed to represent deformation, such as Real-NVP [16], Neural-ODE [13] and I-ResNet [4]. Based on these designs, there exist some methods modeling deformation in the space [30, 66, 48] or time [45, 35] domain. CaDeX [35] is a novel dynamic surface representation method using a real-valued non-volume preserving module [16]. Different from these strategies, we propose a novel scale-invariant bijective map between the observation space and the 3D canonical space to process RGB-D sequences, which is more suitable for modeling non-rigid motion.
3 Method
The input of NDR is an RGB-D sequence {(I_i, D_i), i = 1, · · · , N} captured by a monocular RGB-D camera (e.g., Kinect and iPhone X), where I_i ∈ R^{H×W×3} is the i-th RGB frame and D_i ∈ R^{H×W×1} is the corresponding aligned depth map. To optimize a canonical textured shape and motion through the sequence, we leverage all N color frames I_i as well as the corresponding depth frames D_i. Specifically, we first adopt video segmentation methods [14, 38] to obtain the mask M_i of the interested object. Then, we integrate the RGB-D video sequence into a canonical hyper-space composed of a 3D canonical space and a topology space. We propose a continuous bijective representation between the 3D canonical and observation space such that the cycle consistency can be strictly satisfied. The implicit surface is represented by a neural SDF and volume rendering field, as a function of the input hyper-coordinate and camera view. The geometry, appearance, and motions of the dynamic object are optimized without any template or structured priors, like optical flow [65], 2D annotations [9] and estimated normal maps [29]. The pipeline of NDR is shown in Fig. 2(a).
3.1 Bijective Map in Space-time Synthesis
Invertible representation. Given a 3D point sampled in the space of the i-th frame, recent methods [44, 65, 47] model its motion as a 6D transformation in SE(3) space. Nerfies [46] and HyperNeRF [47] construct a continuous dense field to estimate the motion. To reduce the complexity, DynamicFusion [44] and BANMo [65] define warp functions based on several control points. The latter designs both 2D and 3D cycle consistency loss terms to apply bijective constraints to the deformation representation, but these are just guides for learning instead of a rigorous inference module. Similar to the previous works, we also construct the deformation between each current frame and the 3D canonical space. Further, we employ a strictly invertible bijective mapping, which is naturally compatible with the cycle consistency strategy. Specifically, we decompose the non-rigid deformation into several reversible bijective blocks, where each block represents the transformation along and around a certain axis. In this manner, our deformation representation is strictly invertible and fits the natural properties of non-rigid motion well, which is helpful for the reconstruction quality.
We denote p_i = [x_i, y_i, z_i] ∈ R^3 as a position in the observation space at time t_i, in which a deformed surface U_i is embedded. Note that p_i can represent any position, i.e., both surface and free-space points. A continuous homeomorphic mapping H_i : R^3 → R^3 maps p_i back to the 3D canonical position p = [x, y, z]. Suppose that there exists a canonical shape U of the interested object, which is independent of time and is shared across the video sequence. Note that the map H_i is invertible, and thus we can directly obtain the deformed surface at time t_i:
U_i = {H_i^{−1}([x, y, z]) | [x, y, z] ∈ U}. (1)
Then, the correspondence of pi can be expressed by the bijective map, factorized as:
[x_j, y_j, z_j] = G_{ij}([x_i, y_i, z_i]) = H_j^{−1} ∘ H_i([x_i, y_i, z_i]). (2)
The deformation representation G is strictly cycle consistent, since it is invariant to the deforming path (G_{jk} ∘ G_{ij} = G_{ik}). As a composite function of two bijective maps (Eq. 2), it is a topology-invariant function between arbitrary pairs of time stamps.
Implementation. Based on these observations, we implement the bijective map H by a novel invertible network h. While Real-NVP [16] seems to be a suitable network structure, its scale-variant property limits its usage in our object reconstruction task. Inspired by the idea of Real-NVP to split the coordinates, we decompose our scale-invariant deformation into several blocks. In each block, we set an axis and represent the motion steps as simple axis-related rotations and translations, which are totally shared by the forward and backward deformations. In this manner, the inverse deformation H^{−1} can be viewed as the composite of the inverses of these simple rotations and translations in H. On the other hand, this map also regularizes the freedom of deformation.
Fig. 2(b) shows the detailed structure of each block. Given a latent deformation code φ bound to time, we first consider the forward deformation, where the 3D position [u, v, w] ∈ R^3 of the observation space is the input, and the position [u′, v′, w′] ∈ R^3 of the 3D canonical space is the output. The cause of the invertible property is that, after specifying a certain coordinate axis, each block predicts the movement along and rotation around the axis in turn, and the process of predicting the deformation is reversible, owing to the coordinate split. In the inverse process, each block can infer the rotation around and movement along the axis from [u′, v′, w′] and invert them in turn to recover the original [u, v, w].
Without loss of generality, let the w-axis be the chosen axis. With [u, v] fixed, we compute a displacement δ_w and update w′ as w + δ_w. With [w′] fixed, we then compute the rotation R_uv and translation δ_uv for [u, v] and update them as [u′, v′]. Conversely, for the backward deformation, we apply −δ_uv, R_uv^{−1}, and −δ_w in turn to recover [u′, v′, w′] back to [u, v, w]. We refer the reader to the supplementary material for the inverse process. Therefore, if the network h consists of these invertible blocks, it can represent a bijective map as well. At time t_i, h(·|φ_i) : R^3 → R^3 maps 3D positions p_i of the observation space back to 3D canonical correspondences p, where φ_i denotes the deformation code of the i-th frame. In our experiment, we use a Multi-Layer Perceptron (MLP) as the implementation of h, so we design a continuous bijective map F_h for space-time synthesis.
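The following PyTorch sketch illustrates one such block under our reading of this description (it is not the authors' implementation); the MLP widths and depths and the way the code φ is concatenated are our own assumptions.

```python
import torch
import torch.nn as nn

class AxisBlock(nn.Module):
    """One invertible deformation block with the w-axis as the chosen axis.
    `mlp_w` predicts a translation along w from [u, v] and the code phi;
    `mlp_uv` predicts an in-plane rotation angle and 2D offset from the updated
    w and phi. Both steps are trivially invertible, so the block is a bijection."""

    def __init__(self, code_dim, hidden=128):
        super().__init__()
        self.mlp_w = nn.Sequential(nn.Linear(2 + code_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.mlp_uv = nn.Sequential(nn.Linear(1 + code_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 3))   # angle + 2D offset

    def _rot(self, uv, theta, inverse=False):
        c, s = torch.cos(theta), torch.sin(theta)
        if inverse:
            s = -s
        return torch.cat([c * uv[..., :1] - s * uv[..., 1:],
                          s * uv[..., :1] + c * uv[..., 1:]], dim=-1)

    def forward(self, p, phi):                       # observation -> canonical
        uv, w = p[..., :2], p[..., 2:]
        w = w + self.mlp_w(torch.cat([uv, phi], dim=-1))          # delta_w
        a = self.mlp_uv(torch.cat([w, phi], dim=-1))              # R_uv, delta_uv
        uv = self._rot(uv, a[..., :1]) + a[..., 1:]
        return torch.cat([uv, w], dim=-1)

    def inverse(self, p, phi):                       # canonical -> observation
        uv, w = p[..., :2], p[..., 2:]
        a = self.mlp_uv(torch.cat([w, phi], dim=-1))
        uv = self._rot(uv - a[..., 1:], a[..., :1], inverse=True)  # -delta_uv, R_uv^{-1}
        w = w - self.mlp_w(torch.cat([uv, phi], dim=-1))           # -delta_w
        return torch.cat([uv, w], dim=-1)
```

Stacking several such blocks, each with a (possibly different) chosen axis, gives the invertible network h; the inverse map is obtained by calling the blocks' inverse methods in reverse order.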
3.2 Deformation Field
While the proposed deformation representation is a continuous homeomorphic mapping that satisfies the cycle consistency between different frames, it also preserves the surface topology. However, several dynamic scenes (e.g., varying body motion and facial expression) may undergo topology changes. Therefore, we combine a topology-aware design [47] into our deformation field. 3D positions p_i observed at time t_i are mapped to topology coordinates q(p_i) through a network q : R^3 → R^m. We regress topology coordinates from an MLP F_q. Then the corresponding coordinate of p_i in the canonical hyper-space is represented as:
x = [p, q(p_i)] = [F_h(p_i, φ_i), F_q(p_i, φ_i)] ∈ R^{3+m}, (3)
conditioned on the time-varying deformation code φ_i.
3.3 Implicit Canonical Geometry and Appearance
Inspired by NeRF [43], we consider that a sample point x ∈ R^{3+m} in the canonical hyper-space is associated with two properties: density σ and color c ∈ R^3.
Neural SDF. Note that the object is embedded in the (3+m)-D canonical hyper-space. In this work, we represent its geometry as the zero-level set of an SDF:
S = {x ∈ R^{3+m} | d(x) = 0}. (4)
Following NeuS [60], we utilize a probability function to calculate the density value σ(x) based on the estimated signed distance value, which is an unbiased and occlusion-aware approximation. We refer the reader to their paper for more details.
Implicit rendering network. We utilize a neural renderer F_c as the implicit appearance network. At time t_i, it takes in a 3D canonical coordinate p, its corresponding normal, a canonical view direction as well as a geometry feature vector, and then outputs the color of the point, conditioned on a time-varying appearance code ψ_i. Specifically, we first compute its normal n_p = ∇_p d(x) by gradient calculation. Then, the view direction v_p in the 3D canonical space can be obtained by transforming the view direction v_i in the observation space with the Jacobian matrix J_p(p_i) = ∂p/∂p_i of the 3D canonical map p w.r.t. p_i: v_p = J_p(p_i) v_i. Besides the SDF value, we adopt a larger MLP F_d(x) = (d(x), z(x)) to compute the embedded geometry feature z_x = z(x) to help the prediction of global shadow [67]. Finally, noticing that p_i is the correspondence of x at time t_i, we can formulate its color c_i as:
c_i = F_c(p, n_p, v_p, z_x, ψ_i) = F_c(p, ∇_p d(x), J_p(p_i) v_i, z(x), ψ_i). (5)
It can be seen that the color of point p_i viewed from direction v_i depends on the deformation field, the canonical representation, a deformation code, as well as an appearance code associated with time.
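As a sketch of how these quantities can be obtained with automatic differentiation (our own illustration; `sdf` and `deform` are placeholders for F_d and F_h, and the exact module interfaces are assumptions):

```python
import torch

def canonical_normal_and_view(sdf, deform, p_obs, v_obs, phi):
    """Compute n_p = grad_p d(x) and v_p = J_p(p_i) v_i of Eq. (5) via autograd.
    `deform` maps observed points p_obs (with code phi) to canonical positions,
    `sdf` maps canonical positions to signed distance values."""
    p_obs = p_obs.requires_grad_(True)
    p_can = deform(p_obs, phi)                       # (N, 3) canonical positions
    d = sdf(p_can)                                   # (N, 1) signed distances
    # normal: gradient of the SDF with respect to the canonical position
    n_p = torch.autograd.grad(d.sum(), p_can, create_graph=True)[0]
    # view direction: Jacobian-vector product J_p(p_i) v_i along the ray direction
    _, v_p = torch.autograd.functional.jvp(lambda x: deform(x, phi),
                                           (p_obs,), (v_obs,), create_graph=True)
    return n_p, v_p
```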
3.4 Optimization
Given an RGB-D sequence with the masks of the interested object {(I_i, D_i, M_i), i = 1, 2, · · · , N}, the optimizable parameters include the MLPs {F_h, F_q, F_d, F_c}, the learnable codes {φ_i, ψ_i}, the RGB and depth camera intrinsics {K_rgb, K_depth}, as well as the SE(3) camera pose T_i at each time t_i. Our target is to design the loss terms to match the input masks, color images and depth images. Since we leverage neural implicit functions for representing the geometry, appearance and motion of the dynamic object, we divide all constraints into two parts, on free-space points and on surface points:
L = (λ1 Lmask + λ2 Lcolor + λ3 Ldepth + λ4 Lreg) + (λ5 Lsdf + λ6 Lvisible), (6)
where the first group collects the free-space terms, the second group collects the surface terms, and λj (j = 1, 2, · · · , 6) are balancing weights.
Constraints on free-space. Given a ray parameterized as r(s) = o + sv (passing through a pixel), we sample the implicit radiance field at points lying along this ray to approximate its color and depth:
Ĉ(r) = ∫_{sn}^{sf} T(s) σ(s) c(s) ds,  D̂(r) = ∫_{sn}^{sf} T(s) σ(s) s ds, (7)
where sn and sf represent the near and far bounds, and T(s) = exp(−∫_{sn}^{s} σ(u) du) denotes the accumulated transmittance along the ray. The density and color calculations are described in Sec. 3.3. The color and depth reconstruction losses are then defined as:
Lcolor = Σ_{r∈R(Krgb,Ti)} ∥M(r)(Ĉ(r) − C(r))∥1, (8)
Ldepth = Σ_{r∈R(Kdepth,Ti)} ∥M(r)(D̂(r) − D(r))∥1, (9)
where R(Krgb, Ti) and R(Kdepth, Ti) represent the sets of rays cast from the RGB and depth cameras, respectively. M(r) ∈ {0, 1} is the object mask value, while C(r) and D(r) are the observed color and depth values. To focus on the dynamic object reconstruction, we also define a mask loss as
Lmask = BCE(M̂(r), M(r)), (10)
where M̂(r) = ∫_{sn}^{sf} T(s) σ(s) ds is the accumulated density along the ray, and BCE is the binary cross-entropy loss.
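A discretized sketch of Eqs. (7)–(10), using the standard quadrature of NeRF-style renderers; the exact discretization and variable names are our assumptions rather than the authors' code (the density itself comes from the NeuS-style SDF-to-density conversion of Sec. 3.3).

```python
import torch
import torch.nn.functional as F

def render_ray(sigma, color, s):
    """Quadrature for Eqs. (7) and (10). sigma, s: (R, S); color: (R, S, 3),
    with S samples ordered from near to far along each of R rays."""
    delta = torch.cat([s[:, 1:] - s[:, :-1],
                       torch.full_like(s[:, :1], 1e10)], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * delta)                      # opacity of each segment
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha[:, :-1] + 1e-10], dim=-1), dim=-1)
    w = trans * alpha                                            # discrete T(s) sigma(s) ds
    c_hat = (w.unsqueeze(-1) * color).sum(dim=1)                 # Eq. (7), color
    d_hat = (w * s).sum(dim=1)                                   # Eq. (7), depth
    m_hat = w.sum(dim=1)                                         # Eq. (10), accumulated density
    return c_hat, d_hat, m_hat

def reconstruction_losses(c_hat, d_hat, m_hat, c_gt, d_gt, m_gt):
    """Eqs. (8)-(10): masked L1 color/depth losses and a BCE mask loss;
    m_gt is the {0, 1} object mask as a float tensor."""
    l_color = (m_gt.unsqueeze(-1) * (c_hat - c_gt)).abs().sum()
    l_depth = (m_gt * (d_hat - d_gt)).abs().sum()
    l_mask = F.binary_cross_entropy(m_hat.clamp(1e-4, 1 - 1e-4), m_gt, reduction="sum")
    return l_color, l_depth, l_mask
```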
An Eikonal loss is introduced to regularize d(x) to be a signed distance function of p, and it has the following form:
Lreg = Σ_{x∈X} (∥∇p d(x)∥2 − 1)², (11)
where x are points sampled in the canonical hyper-space X. In our implementation, to obtain x, we first sample points pi in the observed free-space and then deform the sampled points back to X using Eq. 3. The points are drawn with a combination of uniform and importance sampling.
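A minimal sketch of Eq. (11), assuming the networks are called as in the earlier sketches; here the gradient is taken with respect to the 3D canonical position only, which is one possible reading of ∇p d(x).

```python
import torch

def eikonal_loss(p_i, phi_i, F_h, F_q, F_d):
    """Eq. (11): encourage unit-norm SDF gradients w.r.t. the canonical position p.
    The uniform + importance sampling of p_i is assumed to happen outside."""
    p = F_h(p_i, phi_i).detach().requires_grad_(True)     # differentiate w.r.t. p only
    x = torch.cat([p, F_q(torch.cat([p_i, phi_i], dim=-1))], dim=-1)
    sdf, _ = F_d(x)
    grad = torch.autograd.grad(sdf.sum(), p, create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).sum()
```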
Constraints on surface. In addition to the losses on free-space points, we also constrain the properties of points lying on the depth images Di. We add an SDF loss term:
Lsdf = Σ_{pi∈Di} ∥d(x)∥1. (12)
To prevent the deformed surfaces at different times from fusing incorrectly into the canonical space, which would cause a multi-surface artifact, we design a visibility loss term to constrain the surface:
Lvisible = Σ_{pi∈Di} max(⟨np/∥np∥2, vp/∥vp∥2⟩, 0), (13)
where ⟨·, ·⟩ denotes the inner product. The visibility loss constrains the angle between the normal of each sampled point on the depth map and the view direction to be larger than 90 degrees, which guides depth points to be surface points visible from the RGB-D camera view.
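A short sketch of the two surface terms, Eqs. (12)–(13); the tensor shapes are our assumptions, with the normals and canonical view directions computed as in Sec. 3.3.

```python
import torch

def surface_losses(sdf, n_p, v_p):
    """Eqs. (12)-(13) for points back-projected from the depth map.
    sdf: (N,) signed distances of their hyper-space coordinates;
    n_p, v_p: (N, 3) canonical normals and view directions."""
    l_sdf = sdf.abs().sum()                                          # Eq. (12)
    cos = torch.nn.functional.cosine_similarity(n_p, v_p, dim=-1)    # <n/|n|, v/|v|>
    l_visible = torch.clamp(cos, min=0.0).sum()                      # Eq. (13)
    return l_sdf, l_visible
```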
4 Experiments
4.1 Experimental Settings
Implementation details. We initialize d(x) such that it approximates a unit sphere [2]. We train our neural networks using the ADAM optimizer [32] with a learning rate of 5 × 10−4. We run most of our experiments for 6 × 104 iterations, taking 12 hours on a single NVIDIA A100 40GB GPU. For free-space supervision, we sample 2,048 rays per batch (128 points along each ray). Following NeuS [60], we first uniformly sample 64 points and then apply importance sampling iteratively 4 times (16 points per iteration). On the depth map, we uniformly sample 2,048 points per batch. For coarse-to-fine training, we utilize an incremental positional encoding strategy on the sampled points, similar to Nerfies [46]. The weights in Eq. 6 are set as: λ1 = 0.1, λ2 = 1.0, λ3 = 0.5, λ4 = 0.1, λ5 = 0.5, λ6 = 0.1.
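For reference, Eq. (6) assembled with the weights reported above; the individual loss terms are assumed to be computed as in the earlier sketches.

```python
def total_loss(terms):
    """Eq. (6) with the reported weights; `terms` maps names to the individual losses."""
    weights = {"mask": 0.1, "color": 1.0, "depth": 0.5, "reg": 0.1, "sdf": 0.5, "visible": 0.1}
    return sum(weights[k] * terms[k] for k in weights)
```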
For non-rigid object segmentation, we leverage off-the-shelf methods: RVM [38] for humans and MiVOS [14] for other objects. Since we assume the object region lies inside a unit sphere, we first normalize the points back-projected from the depth maps. If the collected sequence involves large global rotations, we use the Robust ICP method [68] for per-frame initialization of the poses Ti.
Datasets. To evaluate our NDR and baseline approaches, we use 6 scenes from DeepDeform [9] dataset, 7 scenes from KillingFusion [54] dataset, 1 scene from AMA [59] dataset and 11 scenes captured by ourselves. The evaluation data contains 6 classes: human faces, human bodies, domestic animals, plants, toys, and clothes. It includes challenging cases, such as rapid movement, self-rotation motion, topology change and complex shape. DeepDeform [9] dataset is captured by an iPad. Its RGB-D streams are recorded and aligned at a resolution of 640 × 480 and 30 frames per second. Since our NDR does not need any annotated or estimated correspondences, we only leverage RGB-D sequences and camera intrinsics as initialization when evaluating NDR, without scene flow or optical flow data. We choose 6 scenes from the whole dataset, including human bodies, dogs, and clothes. All sequences in KillingFusion [54] dataset were recorded with a Kinect v1, also aligned to 640×480 resolution. We choose all scenes from it, which contain toys and human motions. For evaluation on synthetic data, we use AMA [59] dataset, which contains reconstructed mesh corresponding to each video frame. To construct synthetic depth data, we render meshes to a chosen camera view. In the experiment, we do not utilize any multi-view messages but only monocular RGB-D frames. To increase the data diversity, especially for adding more challenging but routine conditions (e.g., topology change and complex details), we capture some human head and plant videos with iPhone X (resolution 480× 640 at 30 fps). When capturing head data, we ask the person to rotate the face while freely varying expressions. When capturing plant data, we record the states of leaf swings.
Comparison methods. (1) A widely-used classical fusion-based method, DynamicFusion [44]: it is the pioneering work that estimates and utilizes the motion of a hierarchical node graph for deformation guidance, and it represents the shape inside a canonical TSDF volume. (2) Two recent fusion-based methods, DeepDeform [9] and Bozic et al. [8]: these methods utilize learning-based correspondences to help handle challenging motions. (3) A state-of-the-art fusion-based method, OcclusionFusion [39]: it computes occlusion-aware 3D motion through a neural network to guide the modeling. (4) A state-of-the-art RGB reconstruction method from monocular video, BANMo [65]: it models articulated 3D shapes in a neural blend skinning and differentiable rendering framework. For
comparison with RGB-D based methods, we use our re-implementation of DynamicFusion [44] and the results provided by the authors of OcclusionFusion [39].
4.2 Comparisons
RGB-D based methods. For qualitative evaluation, we show comparisons with DynamicFusion [44] and OcclusionFusion [39] in Fig. 3, and with DeepDeform [9] and Bozic et al. [8] in Fig. 4. In particular, the detailed reconstructions verify that the bijective deformation mapping helps match photometric correspondences between observed frames. As shown in Fig. 3, our NDR models geometry details well, while the fusion-based methods [44, 39] tend to produce artifacts on the reconstructed surfaces. NDR also achieves considerable reconstruction accuracy when handling rapid movement (Fig. 4).
For quantitative evaluation, we calculate geometry errors on several testing sequences, following previous works [9, 8, 39]. The geometry metric compares the depth values inside the object mask to the reconstructed geometry. The sequences cover various object classes and cases, including a domestic animal (seq. Dog from DeepDeform [9]), a rotated body, human-object interaction, and a general object (seq. Alex, Hat, and Frog from KillingFusion [54], respectively), and human heads (seq. Human1 and Human2 from our collected dataset). The quantitative results are shown in Tab. 1. We can see that our NDR outperforms previous works [44, 39], owing to jointly optimizing geometry, appearance, and motion over the entire video. On seq. Alex, the geometry error of OcclusionFusion [39] is lower than ours. However, NDR handles topology changes well, as shown in the corresponding qualitative results on the right of Fig. 3.
RGB based method. Fig. 5 shows several comparisons with a recent RGB-based method, BANMo [65]. BANMo takes an RGB sequence as input and optimizes the geometry, appearance, and motion based on precomputed annotations, including camera poses and optical flow. For a fair comparison, we also compare BANMo [65] with our NDR using only RGB supervision, where we provide both with the same camera initialization and frame-wise masks. In the RGB-only setting, both our method and BANMo may make some structural mistakes, such as the human arm in ours and Snoopy's ears in BANMo. Moreover, compared to our RGB-only results, BANMo suffers more from local geometry noise, which is likely due to errors in the precomputed annotations. Meanwhile, our method does not rely on any precomputed annotations and achieves smoother results. With the RGB-D sequence as input, our full NDR model performs robustly and models geometry details and rapid motions well.
4.3 Robustness on Camera Initialization
In order to systematically analyze our camera pose optimization, we add an experiment to test the robustness under various degrees of noise on both real and synthetic data. We choose 2 sequences with small rigid motion, one from the DeepDeform [9] dataset (a body with moving joints, 200 frames) and one from the AMA [59] dataset (a Samba dancer, 175 monocular frames). As shown in Tab. 2, we add Gaussian noise with 5, 10, 20, 40, and 60 degrees of standard deviation to the initial Euler angles and calculate the mean geometry errors (0 denotes no added noise). The results show that NDR is robust against noisy camera poses to a certain extent, owing to its neural implicit representation and joint optimization over the RGB-D observations. If the standard deviation of the Gaussian noise exceeds 20 degrees, the reconstruction quality is noticeably affected (the geometry error exceeds 1 cm). We refer the reader to the supplementary material for qualitative results.
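A minimal sketch of the pose perturbation used in this experiment, under the assumption that the noise is applied to the Euler-angle decomposition of the initial rotation; library calls are standard SciPy/NumPy, and the axis convention is our assumption.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def perturb_pose(T, std_deg, rng=np.random.default_rng()):
    """Perturb the rotation of a 4x4 camera pose T by Gaussian noise on its
    Euler angles (Sec. 4.3); std_deg is the standard deviation in degrees."""
    noise = rng.normal(scale=std_deg, size=3)
    euler = R.from_matrix(T[:3, :3]).as_euler("xyz", degrees=True) + noise
    T_noisy = T.copy()
    T_noisy[:3, :3] = R.from_euler("xyz", euler, degrees=True).as_matrix()
    return T_noisy
```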
Figure: qualitative ablation comparison. From left to right: Input RGB, Ours (only depth), Ours (6D motion), Ours (full).
4.4 Ablation Studies
We evaluate 3 components of our NDR regarding their effects on the final reconstruction result.
Depth cues. We evaluate the reconstruction results with only RGB supervision, i.e., removing the depth images and supervising only with the loss terms Lmask, Lcolor, and Lreg. As shown in Fig. 5, the reconstruction results with only RGB information are not accurate (especially when seen from a novel view), since monocular views suffer from depth ambiguity.
RGB cues. We also evaluate the reconstruction results with only depth supervision, i.e., removing the RGB images and the color loss term Lcolor. As shown in Fig. 6, the reconstructed shapes lack geometric details since the color information is not used.
Bijective map Fh. To verify the effect of our proposed bijective map Fh (Sec. 3.1), we replace it with a 6D motion representation in SE(3) space. As shown in Fig. 6, since Fh satisfies cycle consistency strictly, it is less prone to accumulating artifacts and thus performs better in local geometry. In comparison, the irreversible transformation often fails to preserve high-quality surfaces.
4.5 Evaluation of Cycle Consistency
We perform a numerical experiment to evaluate the cycle consistency of the whole deformation field. In the experiment, we randomly select 3 frames (indexed by i, j, k) as a group in a video sequence. Given points on one frame, we calculate the corresponding coordinates on another frame and record this scene flow as f. This yields two deformation paths from frame i to frame k, based on either the direct flow fik or the composite flow fij + fjk. To evaluate the cycle consistency, we calculate the Euclidean norm of fij + fjk − fik as the error. The smaller the error, the better the cycle consistency (invariance to the deformation path) is maintained. We conduct experiments on a human body rotating 360 degrees (200 frames) from the KillingFusion [54] dataset and a talking head (300 frames) from our captured dataset. In the experiment, we randomly select 1,000 groups of frames and calculate the mean error over the depth points on the object surface. Since the topology-aware network is irreversible, we optimize the corresponding positions with fixed network parameters and the ADAM optimizer [32]. As a comparison, we also evaluate our framework with 6D motion. As shown in Tab. 3, the cycle consistency of the whole deformation field among frames is maintained quite well by the bijective map Fh, although it can be affected by the irreversible topology-aware network.
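A short sketch of this path-consistency error; `warp(p, a, b)` is assumed to map points from frame a to frame b (e.g., by composing H_b^{-1} with H_a), and p is a torch tensor of shape (N, 3).

```python
def cycle_error(p, warp, i, j, k):
    """Mean path-consistency error of Sec. 4.5 for points p observed at frame i."""
    p_j = warp(p, i, j)
    f_ij = p_j - p                      # direct flow i -> j
    f_jk = warp(p_j, j, k) - p_j        # flow j -> k along the composite path
    f_ik = warp(p, i, k) - p            # direct flow i -> k
    return (f_ij + f_jk - f_ik).norm(dim=-1).mean()
```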
5 Conclusion
We have presented NDR, a new approach for reconstructing high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D video without any template priors. Unlike previous works, NDR integrates the observed color and depth into a canonical SDF and radiance field for joint optimization of surface and deformation. To maintain cycle consistency throughout the whole video, we propose an invertible bijective mapping between the observation space and the canonical space, which fits well with non-rigid motions. To handle topology changes, we employ a topology-aware network to model topology-variant correspondences. On public datasets and our collected dataset, NDR shows strong empirical performance in modeling objects of different classes and handling various challenging cases. Negative societal impact and limitation: like many other works with neural implicit representations, our method needs substantial computational resources and optimization time, which can be a concern for energy consumption. We will explore alleviating these issues in future work.
Acknowledgements. This research was partially supported by the National Natural Science Foundation of China (No.62122071, No.62272433), the Fundamental Research Funds for the Central Universities (No. WK3470000021), and Alibaba Group through the Alibaba Innovation Research Program (AIR). The opinions, findings, conclusions, and recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies or the government. We thank the authors of OcclusionFusion for sharing the fusion results of several RGB-D sequences. We also thank the authors of BANMo for their suggestions on experimental parameter settings. Special thanks to Prof. Weiwei Xu for providing some help. | 1. What is the main contribution of the paper regarding 3D reconstruction from a single RGB-D camera?
2. What are the strengths of the proposed approach, particularly in preserving cycle consistency and modeling geometry and motion?
3. What are the weaknesses of the paper, such as limited ablation studies and lack of quantitative experimental results?
4. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Do you have any concerns or suggestions regarding the proposed method, such as handling noisy depth information or using synthetic datasets for ablation studies? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper introduces a template-free method to reconstruct high-quality geometry and motion of a dynamic scene from a single RGB-D camera. It proposes a bijective deformation map to preserve the cycle consistency between two frames, so it does not require any scene flow or optical flow map. To handle topology changes, the deformation network is combined with a topology-aware network. Experimental results show that the proposed method outperforms state-of-the-art RGB-D methods, such as DynamicFusion and OcclusionFusion, and RGB methods, such as BANMo.
Strengths And Weaknesses
Strengths
A novel 3D reconstruction network for dynamic scenes from a single RGB-D camera. The combination of the topology-aware network and the deformation network enables the model to capture the geometry and motion of a dynamic object.
The proposed bijective map is able to preserve cycle consistency because it maps points in the 3D observation space to points in the 3D canonical space.
Weaknesses
There is only a qualitative ablation study. The overall framework is like an extended version of HyperNeRF for RGB-D videos. Thus, it is essential to perform a quantitative ablation study, especially compared to HyperNeRF.
Lack of quantitative experimental results. The proposed method only performs a qualitative evaluation against various comparison methods. Since qualitative evaluation can be subjective, it is essential to perform a quantitative evaluation, especially with BANMo, HyperNeRF, VolumeDeform, etc.
The cycle-consistency performance of the proposed bijective map is unclear. The evaluation method only focuses on a single frame. There should be a way to evaluate the consistency between frames because it is also part of the proposed contribution.
It is recommended to follow HyperNeRF Fig. 8 to show the performance of the proposed method. It is unclear how the topology-aware and bijective-map-based deformation networks affect the overall performance.
Questions
While the proposed method utilizes depth from a Kinect as the ground truth, is there any effect if the captured depth information is noisy? Note that an active depth camera may not be reliable as ground truth due to its noisy characteristics.
Why not use a synthetic dataset for the ablation study to validate the proposed idea?
Limitations
The authors have addressed the limitations in the conclusion. |
NIPS | Title
Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
Abstract
We propose Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera. In NDR, we adopt the neural implicit function for surface representation and rendering such that the captured color and depth can be fully utilized to jointly optimize the surface and deformations. To represent and constrain the non-rigid deformations, we propose a novel neural invertible deforming network such that the cycle consistency between arbitrary two frames is automatically satisfied. Considering that the surface topology of dynamic scene might change over time, we employ a topology-aware strategy to construct the topology-variant correspondence for the fused frames. NDR also further refines the camera poses in a global optimization manner. Experiments on public datasets and our collected dataset demonstrate that NDR outperforms existing monocular dynamic reconstruction methods.
1 Introduction
Reconstructing 3D geometry shape, texture and motions of the dynamic scene from a monocular video is a classical and challenging problem in computer vision. It has broad applications in many areas like virtual and augmented reality. Although existing methods [63, 65] have demonstrated impressive reconstruction results for dynamic scenes only with 2D images, they are still difficult to recover high-fidelity geometry shapes, especially for some casually captured data as abundant potential solutions exist without depth constraints. Only with 2D measurements, dynamic reconstruction methods require that motions of interested object hold in a nearby z-plane. Meanwhile, it is difficult to construct reliable correspondences in areas with weak texture, which causes error accumulation in the canonical space.
To solve this under-constrained problem, some methods propose to utilize shape priors for some special object types. For example, category-specific parametric shape models like 3DMM [6], SMPL [41] and SMAL [72] are first constructed and then used to help the reconstruction. However, templated-based methods could not generalize to unknown object types. On the other hand, some methods utilize annotations, like keypoints and optical flow, obtained from manual annotators or off-the-shelf tools [31, 33, 63, 65]. The motion trajectories of sparse or dense 2D points can effectively help recover the exact motion of the whole structure. However, it needs human labeling for supervision or highly depends on the quality of learned priors from a large-scale dataset.
One straightforward solution to this under-constrained problem is to reconstruct the interested object based on observations from RGB-D cameras like Microsoft Kinect [69] and Apple iPhone X. Existing fusion-based methods [44, 27, 54] utilize a dense non-rigid warp field and a canonical truncated signed
distance (TSDF) volume to represent motion and shape, respectively. However, these fusion-based methods might fail due to accumulated tracking errors, especially for long sequences. To alleviate this problem, some learning-based methods [9, 8, 39] utilize more accurate correspondences which are annotated or learned from synthesis datasets to guide the dynamic fusion process. However, the captured color and depth information is not represented together within one differentiable framework in these methods. Recently, a neural implicit representation based method [3] has been proposed to reconstruct a room-scale scene from RGB-D inputs, but it is only designed for static scenes and can not be directly applied to dynamic scenes.
Figure 1: Examples of reconstructed (right) and rendered (left) results by NDR. Given a monocular RGB-D video sequence, NDR recovers high-fidelity geometry and motions of a dynamic scene.
In this paper, we present Neural-DynamicReconstruction (NDR), a neural dynamic reconstruction method from a monocular RGB-D camera (Fig. 1). To represent the high-fidelity geometry and texture of deformable object, NDR maintains a neural implicit field as the canonical space. With extra depth constraint, there still exist multiple potential solutions since the correspondences between different frames are still unknown. In this paper, we propose the following strategies to constrain and regularize the solution space: (1) integrating all RGB-D frames to a high-fidelity textured shape in the canonical space; (2) maintaining cycle consistency between arbitrary two frames; (3) a surface representation which can handle topological changes.
Specifically, we adopt a neural SDF and radiance field to respectively represent the high-fidelity geometry and appearance in the canonical space, instead of the TSDF volume frequently used in fusion-based methods [28, 44, 54, 9, 8, 39]. In our framework, each RGB-D frame can be integrated into the canonical representation. We propose a novel neural deformation representation that implies a continuous bijective map between the observation and canonical spaces. The designed invertible module applies a cycle consistency constraint throughout the whole RGB-D video; meanwhile, it fits the natural properties of non-rigid motion well. To support topology changes of the dynamic scene, we adopt the topology-aware network of HyperNeRF [47]. Thanks to modeling topology-variant correspondences, our framework can handle topology changes, while existing deformation-graph-based methods [44, 39, 65] cannot. NDR also further refines camera intrinsic parameters and poses during training. Extensive experimental results demonstrate that NDR can recover high-fidelity geometry and photorealistic texture for monocular category-agnostic RGB-D videos.
2 Related Works
RGB based dynamic reconstruction. Dynamic reconstruction approach can be divided into template-based and template-free types. Templates [6, 41, 50, 72] are category-specific statistical models constructed from large-scale datasets. With the help of pre-constructed 3D morphable models [6, 12, 36], some researches [5, 11, 26, 57, 21, 25, 19] reconstruct faces or heads from RGB inputs. Most of them need 2D keypoints as extra supervisory information to guide dynamic tracking [71, 17, 19]. With the aid of human parametric models [1, 41], some works [7, 62, 23, 24, 70, 29] recover digital avatars based on monocular image or video cues. However, it is unpractical to extend templates to general objects with limited 3D scanned priors, such as articulated objects, clothed human and animals. Non-rigid structure from motion (NR-SFM) algorithms [10, 51, 15, 34, 53] are to reconstruct category-agnostic object from 2D observations. Although NR-SFM can reconstruct reasonable result for general dynamic scenes, it heavily depends on reliable point trajectories throughout observed sequences [52, 56]. Recently, some methods [63, 64, 65] obtain promising results from a long monocular video or several short videos of a category. LASR [63] and ViSER [64] recover articulated shapes via a differentiable rendering manner [40], while BANMo [65] models them with the help of Neural Radiance Fields (NeRF) [43]. However, due to the depth ambiguity of input 2D images, the reconstruction might fails for some challenging inputs.
RGB-D based dynamic reconstruction. Recovering 3D deforming shapes from a monocular RGB video is a highly under-constrained problem. On the other hand, the progress in consumer-grade RGB-D sensors has made depth map capture from a single camera more convenient. Therefore, it is quite natural to reconstruct the target objects based on RGB-D sequences. DynamicFusion [44], the seminal work of RGB-D camera based dynamic object reconstruction, proposes to estimate a template-free 6D motion field to warp live frames into a TSDF surface. The surface representation strategy has also been used in KinectFusion [28]. VolumeDeform [27] represents motion in a grid and incorporates global sparse SIFT [42] features during alignment. Guo et al. [20] combine albedo, geometry, and motion estimation in an optimization pipeline. KillingFusion [54] and SobolevFusion [55] are proposed to deal with topology changes. In the deep learning era, DeepDeform [9] and Bozic et al. [8] aim to learn more accurate correspondences to improve the tracking of faster and more complex motions. OcclusionFusion [39] probes and handles the occlusion problem via an LSTM-based graph neural network but fails when topology changes. Although these methods obtain promising reconstruction results with the additional depth cues, their reconstructed shapes mainly depend on the captured depths, while the RGB images are not fully utilized to further improve the results.
Dynamic NeRF. Given a range of image cues, prior works on NeRF [43] optimize an underlying continuous scene function for novel view synthesis. Some NeRF-like methods [37, 49, 58, 18, 46, 47] achieve promising results on dynamic scenes without prior templates. Nerfies [46] and AD-NeRF [22] reconstruct free-viewpoint selfies from monocular videos. HyperNeRF [47] models an ambient slicing surface to express topologically varying regions. Recent approaches [60, 3] introduce neural representation for static object/scene reconstruction, but theirs can not be used for non-rigid scenes.
Cycle consistency constraint. Maintaining cycle consistency between deformed frames is an important regularization in perceiving and modeling dynamic scenes [61]. However, recent methods [64, 37, 65] leverage a loss term to constrain estimated surface features or scene flow, which enforces the property only weakly rather than strictly. Therefore, constructing an invertible representation for the deformation field is a reasonable design. Several invertible networks have been proposed to represent deformation, such as Real-NVP [16], Neural-ODE [13], and I-ResNet [4]. Based on these techniques, some methods model deformation in the space [30, 66, 48] or time [45, 35] domain. CaDeX [35] is a novel dynamic surface representation method using a real-valued non-volume preserving module [16]. Different from these strategies, we propose a novel scale-invariant bijective map between the observation space and the 3D canonical space to process RGB-D sequences, which is more suitable for modeling non-rigid motion.
3 Method
The input of NDR is an RGB-D sequence {(Ii,Di), i = 1, · · · , N} captured by a monocular RGB-D camera (e.g., Kinect and iPhone X), where Ii ∈ RH×W×3 is the i-th RGB frame and
Di ∈ RH×W×1 is the corresponding aligned depth map. To optimize a canonical textured shape and motion through the sequence, we leverage full N color frames Ii as well as corresponding depth frames Di. Specifically, we first adopt video segmentation methods [14, 38] to obtain the mask Mi of interested object. Then, we integrate RGB-D video sequence into a canonical hyper-space composed of a 3D canonical space and a topology space. We propose a continuous bijective representation between the 3D canonical and observation space such that the cycle consistency can be strictly satisfied. The implicit surface is represented by a neural SDF and volume rendering field, as a function of input hyper-coordinate and camera view. The geometry, appearance, and motions of dynamic object are optimized without any template or structured priors, like optical flow [65], 2D annotations [9] and estimated normal map [29]. The pipeline of NDR is shown in Fig. 2(a).
3.1 Bijective Map in Space-time Synthesis
Invertible representation. Given a 3D point sampled in the space of i-th frame, recent methods [44, 65, 47] model its motion as a 6D transformation in SE(3) space. Nerfies [46] and HyperNeRF [47] construct a continuous dense field to estimate the motion. To reduce the complexity, DynamicFusion [44] and BANMo [65] define warp functions based on several control points. The latter designs both 2D and 3D cycle consistency loss terms to apply bijective constraints to deformation representation, but it is just a guide for learning instead of a rigorous inference module. Similar to the previous works, we also construct the deformation between each current frame and the 3D canonical space. Further, we employ a strictly invertible bijective mapping, which is naturally compatible with the cycle consistency strategy. Specifically, we decompose the non-rigid deformation into several reversible bijective blocks, where each block represents the transformation along and around a certain axis. In this manner, our deformation representation is strictly invertible and fits the natural properties of non-rigid motion well, which is helpful for the reconstruction effect.
We denote pi = [xi, yi, zi] ∈ R3 as a position in the observation space at time ti, in which a deformed surface Ui is embedded. Note that pi represents any position, i.e., both surface and free-space points. A continuous homeomorphic mapping Hi : R3 → R3 maps pi back to the 3D canonical position p = [x, y, z]. Suppose that there exists a canonical shape U of the object of interest, which is independent of time and is shared across the video sequence. Note that the map Hi is invertible, and thus we can directly obtain the deformed surface at time ti:
Ui = {H−1i ([x, y, z])|∀[x, y, z] ∈ U}. (1)
Then, the correspondence of pi can be expressed by the bijective map, factorized as:
[xj , yj , zj ] = Gij([xi, yi, zi]) = H−1j ◦ Hi([xi, yi, zi]). (2)
The deformation representation G is strictly cycle consistent, since it is invariant to the deformation path (Gjk ◦ Gij = Gik). As a composition of two bijective maps (Eq. 2), it is a topology-invariant function between arbitrary pairs of time stamps.
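A minimal sketch of Eq. (2), assuming `H` exposes forward and inverse passes as in the invertible blocks described below; the interface is illustrative, not the released code.

```python
def warp_between_frames(p_i, phi_i, phi_j, H):
    """G_ij: map points observed at time t_i to their correspondences at time t_j."""
    p_canonical = H.forward(p_i, phi_i)       # H_i: observation at t_i -> canonical space
    p_j = H.inverse(p_canonical, phi_j)       # H_j^{-1}: canonical space -> observation at t_j
    return p_j
```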
Implementation. Based on these observations, we implement the bijective map H with a novel invertible network h. While Real-NVP [16] seems like a suitable network structure, its scale-variant property limits its usage in our object reconstruction task. Inspired by the idea of Real-NVP of splitting the coordinates, we decompose our scale-invariant deformation into several blocks. In each block, we choose an axis and represent the motion steps as simple axis-related rotations and translations, which are fully shared between the forward and backward deformations. In this manner, the inverse deformation H−1 can be viewed as the composition of the inverses of these simple rotations and translations in H. On the other hand, this map also regularizes the degrees of freedom of the deformation.
Fig. 2(b) shows the detailed structure of each block. Given a latent deformation code φ bound to time, we first consider the forward deformation, where the 3D position [u, v, w] ∈ R3 in the observation space is the input and the position [u′, v′, w′] ∈ R3 in the 3D canonical space is the output. The invertibility comes from the fact that, after specifying a certain coordinate axis, each block predicts the movement along and the rotation around that axis in turn, and this prediction process is reversible owing to the coordinate split. In the inverse process, each block can infer the rotation around and the movement along the axis from [u′, v′, w′] and invert them in turn to recover the original [u, v, w].
Without loss of generality, let the w-axis to be the chosen axis. With [u, v] fixed, we compute a displacement δw and update w′ as w + δw. With [w′] fixed, we then compute the rotation Ruv and translation δuv for [u, v] and update them as [u′, v′]. Oppositely, for the backward deformation, we apply −δuv, R−1uv , and −δw in turn to recover [u′, v′, w′] back to [u, v, w]. We refer the reader to supplementary material for the inverse process. Therefore, if the network h consists of these invertible blocks, it can represent a bijective map as well. At time ti, h(·|φi) : R3 → R3 maps 3D positions pi of observation space back to 3D canonical correspondences p, where φi denotes the deformation code of i-th frame. In our experiment, we use a Multi-Layer Perceptron (MLP) as the implementation of h, so we design a continuous bijective map Fh for space-time synthesis.
3.2 Deformation Field
Although the proposed deformation representation is a continuous homeomorphic mapping that satisfies the cycle consistency between different frames, it also preserves the surface topology. However, several dynamic scenes (e.g., varying body motion and facial expression) may undergo topology changes. Therefore, we combine a topology-aware design [47] into our deformation field. 3D positions pi observed at time ti are mapped to topology coordinates q(pi) through a network q : R3 → Rm. We regress topology coordinates from an MLP Fq. Then the corresponding coordinate of pi in the canonical hyper-space is represented as:
x = [p,q(pi)] = [Fh(pi,φi), Fq(pi,φi)] ∈ R3+m, (3)
conditioned on time-varying deformation φi.
3.3 Implicit Canonical Geometry and Appearance
Inspired by NeRF [43], we consider that a sample point x ∈ R3+m in the canonical hyper-space is associated with two properties: density σ and color c ∈ R3.
Neural SDF. Notes that the object embeds in the (3 +m)-D canonical hyper-space. In this work, we represent its geometry as the zero-level set of an SDF:
S = {x ∈ R3+m|d(x) = 0}. (4)
Following NeuS [60], we utilize a probability function to calculate the density value σ(x) based on the estimated signed distance value, which is an unbiased and occlusion-aware approximation. We refer the reader to their paper for more details.
Implicit rendering network. We utilize a neural renderer Fc as the implicit appearance network. At time ti, it takes in a 3D canonical coordinate p, its corresponding normal, a canonical view direction as well as a geometry feature vector, then outputs the color of the point, conditioned on a time-varying appearance code ψi. Specifically, we first compute its normal np = ∇pd(x) by gradient calculation. Then, the view direction vp in 3D canonical space can be obtained by transforming the view direction vi in observation space with the Jacobian matrix Jp(pi) = ∂p/∂pi of the 3D canonical map p w.r.t pi: vp = Jp(pi)vi. Except the SDF value, we adopt a larger MLP Fd(x) = (d(x), z(x)) to compute the embedded geometry feature zx = z(x) to help the prediction of global shadow [67]. Finally, noticing pi is the correspondence of x at time ti, we can formulate its color ci as:
ci = Fc(p,np,vp, zx,ψi) = Fc(p,∇pd(x), Jp(pi)vi, z(x),ψi). (5)
It can be seen that the color of point pi viewed from direction vi depends on the deformation field, canonical representation, a deformation code as well an appearance code combined with time.
3.4 Optimization
Given an RGB-D sequence with the masks of interested object {(Ii,Di,Mi), i = 1, 2, · · · , N}, the optimizable parameters include MLPs {Fh, Fq, Fd, Fc}, learnable codes {φi,ψi}, RGB and depth camera intrinsics {Krgb,Kdepth}, as well as SE(3) camera pose Ti at each time ti. Our target is to design the loss terms to match input masks, color images and depth images. Since we leverage neural implicit functions for representing the geometry, appearance and motion of dynamic object, we divide all constraints into two parts, on free-space points and on surface points:
L = (λ1 Lmask + λ2 Lcolor + λ3 Ldepth + λ4 Lreg) + (λ5 Lsdf + λ6 Lvisible), (6)
where the first group collects the free-space terms, the second group collects the surface terms, and λj (j = 1, 2, · · · , 6) are balancing weights.
Constraints on free-space. Given a ray parameterized as r(s) = o + sv (passing through a pixel), we sample the implicit radiance field at points lying along this ray to approximate its color and depth:
Ĉ(r) = ∫_{sn}^{sf} T(s) σ(s) c(s) ds,  D̂(r) = ∫_{sn}^{sf} T(s) σ(s) s ds, (7)
where sn and sf represent the near and far bounds, and T(s) = exp(−∫_{sn}^{s} σ(u) du) denotes the accumulated transmittance along the ray. The density and color calculations are described in Sec. 3.3. The color and depth reconstruction losses are then defined as:
Lcolor = Σ_{r∈R(Krgb,Ti)} ∥M(r)(Ĉ(r) − C(r))∥1, (8)
Ldepth = Σ_{r∈R(Kdepth,Ti)} ∥M(r)(D̂(r) − D(r))∥1, (9)
where R(Krgb, Ti) and R(Kdepth, Ti) represent the set of rays to RGB and depth camera, respectively. M(r) ∈ {0, 1} is the object mask value, while C(r) and D(r) are the observed color and depth value. To focus on dynamic object reconstruction, we also define a mask loss as
Lmask = BCE(M̂(r), M(r)), (10)
where M̂(r) = ∫_{sn}^{sf} T(s) σ(s) ds is the accumulated density along the ray, and BCE is the binary cross-entropy loss.
An Eikonal loss is introduced to regularize d(x) to be a signed distance function of p, and it has the following form:
Lreg = Σ_{x∈X} (∥∇p d(x)∥2 − 1)², (11)
where x are points sampled in the canonical hyper-space X. In our implementation, to obtain x, we first sample points pi in the observed free-space and then deform the sampled points back to X using Eq. 3. The points are drawn with a combination of uniform and importance sampling.
Constraints on surface. In addition to the losses on free-space points, we also constrain the properties of points lying on the depth images Di. We add an SDF loss term:
Lsdf = Σ_{pi∈Di} ∥d(x)∥1. (12)
To prevent the deformed surfaces at different times from fusing incorrectly into the canonical space, which would cause a multi-surface artifact, we design a visibility loss term to constrain the surface:
Lvisible = Σ_{pi∈Di} max(⟨np/∥np∥2, vp/∥vp∥2⟩, 0), (13)
where ⟨·, ·⟩ denotes the inner product. The visibility loss constrains the angle between the normal of each sampled point on the depth map and the view direction to be larger than 90 degrees, which guides depth points to be surface points visible from the RGB-D camera view.
4 Experiments
4.1 Experimental Settings
Implementation details. We initialize d(x) such that it approximates a unit sphere [2]. We train our neural networks using the ADAM optimizer [32] with a learning rate 5× 10−4. We run most of our experiments with 6×104 iterations for 12 hours on a single NVIDIA A100 40GB GPU. On free-space, we sample 2, 048 rays per batch (128 points along each ray). Following NeuS [60], we first uniformly sample 64 points, and then adopt importance sampling iteratively for 4 times (16 points each iteration). On depth map, we uniformly sample 2, 048 points per batch. For coarse-to-fine training, we utilize an incremental positional encoding strategy on sampled points, similar with Nerfies [46]. The weights in Eq. 6 are set as: λ1 = 0.1, λ2 = 1.0, λ3 = 0.5, λ4 = 0.1, λ5 = 0.5, λ6 = 0.1.
For non-rigid object segmentation, we leverage off-the-shelf methods, RVM [38] for human and MiVOS [14] for other objects. Since we assume the region of object is inside a unit sphere, we normalize the points back-projected from depth maps first. If the collected sequence implies larger global rotation, we leverage Robust ICP method [68] for per-frame initialization of poses Ti.
Datasets. To evaluate our NDR and baseline approaches, we use 6 scenes from DeepDeform [9] dataset, 7 scenes from KillingFusion [54] dataset, 1 scene from AMA [59] dataset and 11 scenes captured by ourselves. The evaluation data contains 6 classes: human faces, human bodies, domestic animals, plants, toys, and clothes. It includes challenging cases, such as rapid movement, self-rotation motion, topology change and complex shape. DeepDeform [9] dataset is captured by an iPad. Its RGB-D streams are recorded and aligned at a resolution of 640 × 480 and 30 frames per second. Since our NDR does not need any annotated or estimated correspondences, we only leverage RGB-D sequences and camera intrinsics as initialization when evaluating NDR, without scene flow or optical flow data. We choose 6 scenes from the whole dataset, including human bodies, dogs, and clothes. All sequences in KillingFusion [54] dataset were recorded with a Kinect v1, also aligned to 640×480 resolution. We choose all scenes from it, which contain toys and human motions. For evaluation on synthetic data, we use AMA [59] dataset, which contains reconstructed mesh corresponding to each video frame. To construct synthetic depth data, we render meshes to a chosen camera view. In the experiment, we do not utilize any multi-view messages but only monocular RGB-D frames. To increase the data diversity, especially for adding more challenging but routine conditions (e.g., topology change and complex details), we capture some human head and plant videos with iPhone X (resolution 480× 640 at 30 fps). When capturing head data, we ask the person to rotate the face while freely varying expressions. When capturing plant data, we record the states of leaf swings.
Comparison methods. (1) A widely-used classical fusion-based method, DynamicFusion [44]: It is the pioneering work that estimates and utilizes the motion of hierarchical node graph for deforming guidance, and it assumes the shape inside a canonical TSDF volume. (2) Two recent fusion-based methods, DeepDeform [9] and Bozic et al. [8]: These methods utilize the learning-based correspondences to help handle challenging motions. (3) A state-of-the-art fusion-based method, OcclusionFusion [39]: It computes occlusion-aware 3D motion through a neural network for modeling guidance. (4) A state-of-the-art RGB reconstruction method from monocular video, BANMo [65]: It models articulated 3D shapes in a neural blend skinning and differentiable rendering framework. For
comparison with RGB-D based methods, we use our re-implementation of DynamicFusion [44] and the results provided by the authors of OcclusionFusion [39].
4.2 Comparisons
RGB-D based methods. For qualitative evaluation, we exhibit some comparisons with DynamicFusion [44] and OcclusionFusion [39] in Fig. 3, also with DeepDeform [9] and Bozic et al. [8] in Fig. 4. Specifically, results of detailed modeling verify that bijective deformation mapping help match photometric correspondences between observed frames. As Fig. 3 shown, our NDR models geometry details while fusion-based methods [44, 39] are easy to form artifacts on the reconstructed surfaces. NDR also achieves considerable reconstruction accuracy on handling rapid movement (Fig. 4).
For quantitative evaluation, we calculate geometry errors on some testing sequences, following previous works [9, 8, 39]. The geometry metric is to compare depth values inside the object mask to the reconstructed geometry. The sequences are on behalf of various class objects and cases, including domestic animal (seq. Dog from DeepDeform [9]), rotated body,
human-object interaction, general object (seq. Alex, Hat, Frog from KillingFusion [54], separately), and human heads (seq. Human1, Human2 from our collected dataset). The quantitative results are shown in Tab. 1. We can see that our NDR outperforms previous works [44, 39], owing to jointly optimizing geometry, appearance and motion on a total video. On seq. Alex, the geometry error of OccluionFusion [39] is lower than that of ours. However, NDR can handle topology varying well, as shown in the corresponding qualitative results on the right of Fig. 3.
RGB based method. Fig. 5 exhibits several comparisons with a recent RGB based method - BANMo [65]. BANMo takes the RGB sequence as input and optimizes the geometry, appearance and motion based on the precomputed annotations, including the camera pose and optical flow. For a fair comparison, we also compare BANMo [65] with our NDR with only RGB supervision, where we provide them with the same camera initialization and frame-wise mask. For the RGB-only situation, both our method and BANMo may make some structural mistakes, such as the human arm in ours and the Snoopy’s ears in BANMo. Moreover, compared to our RGB-only results, BANMo suffers more from the local geometry noise, which should be due to the error caused by incorrect precomputed annotations. Meanwhile, our method does not rely on any precomputed annotations and achieves flat results. With the RGB-D sequence as input, our NDR full model performs robust and well in modeling geometry details and rapid motions.
4.3 Robustness on Camera Initialization
In order to systematically analyze the performance of our camera pose optimization ability, we add an experiment to test the robustness under various degrees of noise on both real and synthetic data. We choose 2 sequences of small rigid motion separately from DeepDeform [9] dataset (a body with moving joints, 200 frames) and AMA [59] dataset (a Samba dancer, 175
monocular frames). As Tab. 2, we add Gaussian noises with 5, 10, 20, 40, 60 degrees of standard deviation to initial Euler angles and calculate mean geometry errors (0 denotes without adding noises). The results show that NDR is robust against noisy camera poses to a certain extent, owing to its neural implicit representation and abundant optimization with RGB-D messages. If the standard deviation of Gaussian Noises is over 20 degrees, the reconstruction quality will be obviously affected (geometry error is over 1 cm). We refer the reader to supplementary material for qualitative results.
Figure: qualitative ablation comparison. From left to right: Input RGB, Ours (only depth), Ours (6D motion), Ours (full).
4.4 Ablation Studies
We evaluate 3 components of our NDR regarding their effects on the final reconstruction result.
Depth cues. We evaluate the reconstruction results with only RGB supervision, i.e. removing depth images and only supervised with loss terms Lmask,Lcolor,Lreg. As shown in Fig. 5, the reconstruction results with only RGB information are not correct (especially seen from a novel view) since monocular camera scenes exist the ambiguity of depth.
RGB cues. We also evaluate the reconstruction results with only depth supervision, i.e. removing RGB images and color loss term Lcolor. As shown in Fig. 6, the reconstructed shapes lack geometrical details as color messages are not used.
Bijective map Fh. To verify the effect of our proposed bijective map Fh (Sec. 3.1), we change it to 6D motion representation in SE(3) space. As shown in Fig. 6, since Fh can satisfy the cycle consistency strictly, it is less prone to accumulate artifacts and thus performs better in local geometry. In comparison, the irreversible transformation is easy to fail in preserving high-quality surfaces.
4.5 Evaluation of Cycle Consistency
We perform a numerical experiment for cycle consistency evaluation on the whole deformation field. In the experiment, we randomly select 3 frames (indexed by i, j, k) as a group in a video sequence. Given points on one frame, we calculate the corresponding coordinates on another frame and record this scene flow as f . Then it includes 2 deforming paths from frame i to k, based on the direct flow fik, or the composite flow fij + fjk. To evaluate the
cycle consistency, we calculate the Euclidean norm of fij + fjk − fik as the error. The error smaller, the cycle consistency (invariant on deforming path) maintains better. We conduct experiments on a human body rotated in 360 degrees (200 frames) from KillingFusion [54] dataset and a talking head (300 frames) from our captured dataset. In the experiment, we randomly select 1, 000 groups of frames and calculate the mean error on depth points of object surface. Since the topology-aware network is irreversible, we optimize the corresponding positions with fixed network parameters and ADAM optimizer [32]. As a comparison, we also evaluate them on our framework with 6D motion. As Tab. 3 shown, cycle consistency of the whole deformation field among frames is maintained by bijective map Fh quite well, although it might be affected by irreversible topology-aware network.
5 Conclusion
We have presented NDR, a new approach for reconstructing the high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D video without any template priors. Other than previous works, NDR integrates observed color and depth into a canonical SDF and radiance field for joint optimization of surface and deformation. For maintaining cycle consistency throughout the whole video, we propose an invertible bijective mapping between observation space and canonical space, which fits perfectly with non-rigid motions. To handle topology change, we employ a topology-aware network to model topology-variant correspondence. On public datasets and our collected dataset, NDR shows a strong empirical performance in modeling different class objects and handling various challenging cases. Negative societal impact and limitation: like many other works with neural implicit representation, our method needs plenty of computation resources and optimization time, which can be a concern for energy resource consumption. We will explore alleviating these in future work.
Acknowledgements. This research was partially supported by the National Natural Science Foundation of China (No.62122071, No.62272433), the Fundamental Research Funds for the Central Universities (No. WK3470000021), and Alibaba Group through Alibaba Innovation Research Program (AIR). The opinions, findings, conclusions, and recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies or the government. We thank the authors of OcclusionFussion for sharing the fusion results of several RGB-D sequences. We also thank the authors of BANMo for their suggestions on experimental parameter settings. Special thanks to Prof. Weiwei Xu for providing some help. | 1. How does Neural Dynamic Reconstruction (NDR) handle non-rigid changes with deformation in dynamic scenes?
2. What are the strengths and weaknesses of NDR compared to other monocular dynamic scene reconstruction methods?
3. How does the proposed method utilize a bijective map and a canonical space to constrain non-rigid deformation?
4. What is the significance of the topology-aware network in tackling challenges of dynamic scene reconstruction?
5. Can you provide more details on the evaluation of the bijective map and its dependency on the actual topology-aware network?
6. How does the method handle wrongly handled topology, and what affects the bijective map quality?
7. Are there any potential ideas or open discussions to reduce computational expenses without compromising accuracy?
8. What is the main source of stripe artifacts in some results from DynamicFusion?
9. How can we improve the scalability and usability of the method for real-world applications?
10. Can you provide additional discussion and detailed evaluation of the bijective map and topology-aware network? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes a template-free RGB-D based 3D scene reconstruction method that handles non-rigid deformation in dynamic scenes. The proposed method, Neural Dynamic Reconstruction (NDR), follows similar steps to classic dynamic scene reconstruction methods. It uses a neural implicit function for surface representation, i.e., a neural signed distance field (SDF), and proposes a novel neural invertible deformation network that utilizes a bijective map between frames and a canonical space in order to constrain the non-rigid deformation of observed surfaces. The paper also adds a topology-aware network that tackles the well-known challenge of dynamic scene reconstruction under free-form deformation, where dramatic changes of topology (or assumptions about topology) can make handling deformation/motion constraints hard. The experiments show that the proposed method outperforms (in terms of the accuracy of surface reconstruction over time) the other existing monocular dynamic scene reconstruction methods.
Strengths And Weaknesses
I appreciate the research effort from the authors. The results look very impressive and the contributions look very clear. Here are the +/- of the proposed method and the submitted article.
+Convincing results compared to existing methods
+Provides solutions of each challenging limitation of classic methods (and other latest methods).
+Great idea on the use of bijective map together with topology aware network
-Need more detailed discussion and the evaluation of the bijective map and topology aware network
-Need more detailed discussion of computational expenses and how to handle them.
Questions
To make the paper more solid and to help the readers understand the article more clearly, I enumerated several questions below. Some questions may focus on how far the use of the proposed method is from the real world applications.
-How much offset can the method handle when refining camera poses, and how much error or residual from a wrongly initialized camera pose can be handled?
-How do the segmentation results affect the overall results? How do the residuals at the boundary of target surfaces affect the quality of the reconstruction results?
-What is the total inference time (including optimization steps) for each example? In particular, providing some numbers and settings (pose accuracy, # samples) for each example demonstrated in the results section would be very helpful to understand the correlation with the complexity of the scene/topology/motion.
-Is there any number or visualization that shows how accurately the bijective map is constructed?
-How does wrongly handled topology affect the bijective map quality?
Regarding the importance of the major contribution of this paper, rather than the comparison to the 6D motion, extra discussion on the bijective map and its dependency on the actual topology aware network would make the paper more solid.
-The method is obviously very expensive. Is there any potential idea or open discussion to reduce the computational expenses? For example, not completely evaluate cycle consistency all over the frames, making some steps sequentially updatable. etc.
-In Figure 3, the results from DynamicFusion do not look right. What is the main source of the stripe artifacts in most of the results?
Limitations
As addressed in the conclusion, the major limitation of the method is probably the computational expense. At least adding a small section about the discussion of how to make the method scalable (even with the trade off between quality) or how to make it easier to use the method in the general use cases (end to end scenario) would make the paper more solid. Finally, as the major contribution also lies in the use of bijective map and topology aware network, providing more discussion and detailed evaluation (in addition to ablation test) would make the paper even more solid. |
NIPS | Title
Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
Abstract
We propose Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera. In NDR, we adopt the neural implicit function for surface representation and rendering such that the captured color and depth can be fully utilized to jointly optimize the surface and deformations. To represent and constrain the non-rigid deformations, we propose a novel neural invertible deforming network such that the cycle consistency between arbitrary two frames is automatically satisfied. Considering that the surface topology of dynamic scene might change over time, we employ a topology-aware strategy to construct the topology-variant correspondence for the fused frames. NDR also further refines the camera poses in a global optimization manner. Experiments on public datasets and our collected dataset demonstrate that NDR outperforms existing monocular dynamic reconstruction methods.
1 Introduction
Reconstructing 3D geometry shape, texture and motions of the dynamic scene from a monocular video is a classical and challenging problem in computer vision. It has broad applications in many areas like virtual and augmented reality. Although existing methods [63, 65] have demonstrated impressive reconstruction results for dynamic scenes only with 2D images, they are still difficult to recover high-fidelity geometry shapes, especially for some casually captured data as abundant potential solutions exist without depth constraints. Only with 2D measurements, dynamic reconstruction methods require that motions of interested object hold in a nearby z-plane. Meanwhile, it is difficult to construct reliable correspondences in areas with weak texture, which causes error accumulation in the canonical space.
To solve this under-constrained problem, some methods propose to utilize shape priors for some special object types. For example, category-specific parametric shape models like 3DMM [6], SMPL [41] and SMAL [72] are first constructed and then used to help the reconstruction. However, templated-based methods could not generalize to unknown object types. On the other hand, some methods utilize annotations, like keypoints and optical flow, obtained from manual annotators or off-the-shelf tools [31, 33, 63, 65]. The motion trajectories of sparse or dense 2D points can effectively help recover the exact motion of the whole structure. However, it needs human labeling for supervision or highly depends on the quality of learned priors from a large-scale dataset.
One straightforward solution to this under-constrained problem is to reconstruct the interested object based on observations from RGB-D cameras like Microsoft Kinect [69] and Apple iPhone X. Existing fusion-based methods [44, 27, 54] utilize a dense non-rigid warp field and a canonical truncated signed
∗Corresponding author.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
distance (TSDF) volume to represent motion and shape, respectively. However, these fusion-based methods might fail due to accumulated tracking errors, especially for long sequences. To alleviate this problem, some learning-based methods [9, 8, 39] utilize more accurate correspondences which are annotated or learned from synthesis datasets to guide the dynamic fusion process. However, the captured color and depth information is not represented together within one differentiable framework in these methods. Recently, a neural implicit representation based method [3] has been proposed to reconstruct a room-scale scene from RGB-D inputs, but it is only designed for static scenes and can not be directly applied to dynamic scenes.
Figure 1: Examples of reconstructed (right) and rendered (left) results by NDR. Given a monocular RGB-D video sequence, NDR recovers high-fidelity geometry and motions of a dynamic scene.
In this paper, we present Neural-DynamicReconstruction (NDR), a neural dynamic reconstruction method from a monocular RGB-D camera (Fig. 1). To represent the high-fidelity geometry and texture of deformable object, NDR maintains a neural implicit field as the canonical space. With extra depth constraint, there still exist multiple potential solutions since the correspondences between different frames are still unknown. In this paper, we propose the following strategies to constrain and regularize the solution space: (1) integrating all RGB-D frames to a high-fidelity textured shape in the canonical space; (2) maintaining cycle consistency between arbitrary two frames; (3) a surface representation which can handle topological changes.
Specifically, we adopt the neural SDF and radiance field to respectively represent the high-fidelity geometry and appearance in the canonical space instead of the TSDF volume frequently used in fusion-based methods [28, 44, 54, 9, 8, 39]. In our framework, each RGB-D frame can be integrated into the canonical representation. We propose a novel neural deformation representation that implies a continuous bijective map between observation and canonical space. The designed invertible module applies a cycle consistency constraint through the whole RGB-D video; meanwhile, it fits the natural properties of non-rigid motion well. To support topology changes of the dynamic scene, we adopt the topology-aware network in HyperNeRF [47]. Thanks to the modeling of topology-variant correspondence, our framework can handle topology changes while existing deformation-graph-based methods [44, 39, 65] could not. NDR also further refines camera intrinsic parameters and poses during training. Extensive experimental results demonstrate that NDR can recover high-fidelity geometry and photorealistic texture for monocular category-agnostic RGB-D videos.
2 Related Works
RGB based dynamic reconstruction. Dynamic reconstruction approach can be divided into template-based and template-free types. Templates [6, 41, 50, 72] are category-specific statistical models constructed from large-scale datasets. With the help of pre-constructed 3D morphable models [6, 12, 36], some researches [5, 11, 26, 57, 21, 25, 19] reconstruct faces or heads from RGB inputs. Most of them need 2D keypoints as extra supervisory information to guide dynamic tracking [71, 17, 19]. With the aid of human parametric models [1, 41], some works [7, 62, 23, 24, 70, 29] recover digital avatars based on monocular image or video cues. However, it is unpractical to extend templates to general objects with limited 3D scanned priors, such as articulated objects, clothed human and animals. Non-rigid structure from motion (NR-SFM) algorithms [10, 51, 15, 34, 53] are to reconstruct category-agnostic object from 2D observations. Although NR-SFM can reconstruct reasonable result for general dynamic scenes, it heavily depends on reliable point trajectories throughout observed sequences [52, 56]. Recently, some methods [63, 64, 65] obtain promising results from a long monocular video or several short videos of a category. LASR [63] and ViSER [64] recover articulated shapes via a differentiable rendering manner [40], while BANMo [65] models them with the help of Neural Radiance Fields (NeRF) [43]. However, due to the depth ambiguity of input 2D images, the reconstruction might fails for some challenging inputs.
RGB-D based dynamic reconstruction. Recovering 3D deforming shapes from a monocular RGB video is a highly under-constrained problem. On the other hand, The progress in consumer-grade RGB-D sensors has made depth map capture from a single camera more convenient. Therefore, it is quite natural to reconstruct the target objects based on RGB-D sequences. DynamicFusion [44], the seminar work of RGB-D camera based dynamic object reconstruction, proposes to estimate a templatefree 6D motion field to warp live frames into a TSDF surface. The surface representation strategy has also been used in KinectFusion [28]. VolumeDeform [27] represents motion in a grid and incorporates global sparse SIFT [42] features during alignment. Guo et al. [20] coheres albedo, geometry and motion estimation in an optimization pipeline. KillingFusion [54] and SobolevFusion [55] are proposed to deal with topology changes. During deep learning era, DeepDeform [9] and Bozic et al. [8] aim to learn more accurate correspondences for tracking improvement of faster and more complex motions. OcclusionFusion [39] probes and handles the occlusion problem via an LSTMinvolved graph neural network but fails when topology changes. Although these methods obtain promising reconstruction results with the additional depth cues, their reconstructed shapes mainly depend on the captured depths, while the RGB images are not fully utilized to further improve the results.
Dynamic NeRF. Given a range of image cues, prior works on NeRF [43] optimize an underlying continuous scene function for novel view synthesis. Some NeRF-like methods [37, 49, 58, 18, 46, 47] achieve promising results on dynamic scenes without prior templates. Nerfies [46] and AD-NeRF [22] reconstruct free-viewpoint selfies from monocular videos. HyperNeRF [47] models an ambient slicing surface to express topologically varying regions. Recent approaches [60, 3] introduce neural representation for static object/scene reconstruction, but theirs can not be used for non-rigid scenes.
Cycle consistency constraint. Maintaining cycle consistency between deformed frames is an important regularization in perceiving and modeling dynamic scenes [61]. However, recent methods [64, 37, 65] try to leverage a loss term to constrain estimated surface features or scene flow, which is a weak rather than strict property. Therefore, constructing an invertible representation for the deformation field is a reasonable design. Several invertible networks have been proposed to represent deformation, such as Real-NVP [16], Neural-ODE [13], and I-ResNet [4]. Building on these designs, there exist some methods modeling deformation in the space [30, 66, 48] or time [45, 35] domain. CaDeX [35] is a novel dynamic surface representation method using a real-valued non-volume preserving module [16]. Different from these strategies, we propose a novel scale-invariant bijective map between observation space and 3D canonical space to process RGB-D sequences, which is more suitable for modeling non-rigid motion.
3 Method
The input of NDR is an RGB-D sequence {(Ii,Di), i = 1, · · · , N} captured by a monocular RGB-D camera (e.g., Kinect and iPhone X), where Ii ∈ RH×W×3 is the i-th RGB frame and
Di ∈ RH×W×1 is the corresponding aligned depth map. To optimize a canonical textured shape and motion through the sequence, we leverage full N color frames Ii as well as corresponding depth frames Di. Specifically, we first adopt video segmentation methods [14, 38] to obtain the mask Mi of interested object. Then, we integrate RGB-D video sequence into a canonical hyper-space composed of a 3D canonical space and a topology space. We propose a continuous bijective representation between the 3D canonical and observation space such that the cycle consistency can be strictly satisfied. The implicit surface is represented by a neural SDF and volume rendering field, as a function of input hyper-coordinate and camera view. The geometry, appearance, and motions of dynamic object are optimized without any template or structured priors, like optical flow [65], 2D annotations [9] and estimated normal map [29]. The pipeline of NDR is shown in Fig. 2(a).
3.1 Bijective Map in Space-time Synthesis
Invertible representation. Given a 3D point sampled in the space of i-th frame, recent methods [44, 65, 47] model its motion as a 6D transformation in SE(3) space. Nerfies [46] and HyperNeRF [47] construct a continuous dense field to estimate the motion. To reduce the complexity, DynamicFusion [44] and BANMo [65] define warp functions based on several control points. The latter designs both 2D and 3D cycle consistency loss terms to apply bijective constraints to deformation representation, but it is just a guide for learning instead of a rigorous inference module. Similar to the previous works, we also construct the deformation between each current frame and the 3D canonical space. Further, we employ a strictly invertible bijective mapping, which is naturally compatible with the cycle consistency strategy. Specifically, we decompose the non-rigid deformation into several reversible bijective blocks, where each block represents the transformation along and around a certain axis. In this manner, our deformation representation is strictly invertible and fits the natural properties of non-rigid motion well, which is helpful for the reconstruction effect.
We denote pi = [xi, yi, zi] ∈ R3 as a position of the observation space at time ti, in which a deformed surface Ui is embedded. Note that pi can be any position, covering both surface and free-space points. A continuous homeomorphic mapping Hi : R3 → R3 maps pi back to the 3D canonical position p = [x, y, z]. Suppose that there exists a canonical shape U of the interested object, which is independent of time and is shared across the video sequence. Note that the map Hi is invertible, and thus we can directly obtain the deformed surface at time ti:
Ui = {H−1i ([x, y, z])|∀[x, y, z] ∈ U}. (1)
Then, the correspondence of pi can be expressed by the bijective map, factorized as:
[xj , yj , zj ] = Gij([xi, yi, zi]) = H−1j ◦ Hi([xi, yi, zi]). (2)
The deformation representation G is strictly cycle consistent, since it is invariant to the deforming path (Gjk ◦ Gij = Gik). As a composite of two bijective maps (Eq. 2), it is a topology-invariant function between any two time stamps.
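As a minimal illustration of Eq. (2), the sketch below composes a forward warp into the canonical space with the inverse warp of another frame; the object name bijective_map and its inverse method are assumptions made for this sketch, not the authors' released interface:

def correspondence(p_i, code_i, code_j, bijective_map):
    """Warp points observed at time t_i to their correspondences at time t_j (Eq. 2).
    Cycle consistency holds by construction because both steps go through the shared
    canonical space: G_ij = H_j^{-1} composed with H_i."""
    p_canonical = bijective_map(p_i, code_i)            # H_i: observation -> canonical
    return bijective_map.inverse(p_canonical, code_j)   # H_j^{-1}: canonical -> frame j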
Implementation. Based on these observations, we implement the bijective map H by a novel invertible network h. While Real-NVP [16] seems a suitable network structure, its scale-variant property limits its usage in our object reconstruction task. Inspired by the idea of Real-NVP to split the coordinates, we decompose our scale-invariant deformation into several blocks. In each block, we set an axis and represent the motion steps as simple axis-related rotations and translations, which are totally shared by the forward and backward deformations. In this manner, the inverse deformation H−1 can be viewed as the composite of the inverse of these simple rotations and translations in H. On the other hand, this map also regularizes the freedom of deformation.
Fig. 2(b) shows the detailed structure of each block. Given a latent deformation code φ binding with time, we firstly consider the forward deformation, where the 3D positions [u, v, w] ∈ R3 of observation space is input, and the positions [u′, v′, w′] ∈ R3 of 3D canonical space is output. The cause of the invertible property is that after specifying a certain coordinate axis, each block predicts the movement along and rotation around the axis in turn, and the process of predicting the deformation is reversible, owing to coordinate split. In the inverse process, each block can infer the rotation around and movement along the axis from [u′, v′, w′] and invert them in turn to recover the original [u, v, w].
Without loss of generality, let the w-axis to be the chosen axis. With [u, v] fixed, we compute a displacement δw and update w′ as w + δw. With [w′] fixed, we then compute the rotation Ruv and translation δuv for [u, v] and update them as [u′, v′]. Oppositely, for the backward deformation, we apply −δuv, R−1uv , and −δw in turn to recover [u′, v′, w′] back to [u, v, w]. We refer the reader to supplementary material for the inverse process. Therefore, if the network h consists of these invertible blocks, it can represent a bijective map as well. At time ti, h(·|φi) : R3 → R3 maps 3D positions pi of observation space back to 3D canonical correspondences p, where φi denotes the deformation code of i-th frame. In our experiment, we use a Multi-Layer Perceptron (MLP) as the implementation of h, so we design a continuous bijective map Fh for space-time synthesis.
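To make the forward/backward procedure concrete, below is a minimal PyTorch-style sketch of one such invertible block under our reading of the text; the class name, layer sizes and the exact parameterization of the axis-related rotation and translation are illustrative assumptions, not the authors' implementation:

import torch
import torch.nn as nn

class InvertibleBlock(nn.Module):
    """One axis-conditioned invertible block (illustrative sketch). The w-axis is the
    chosen axis and `code` is the per-frame deformation code."""
    def __init__(self, code_dim, hidden=128):
        super().__init__()
        # predicts the displacement along w from the fixed [u, v]
        self.shift_net = nn.Sequential(nn.Linear(2 + code_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # predicts an in-plane rotation angle and translation of [u, v] from the fixed w'
        self.planar_net = nn.Sequential(nn.Linear(1 + code_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def _planar(self, w, code):
        out = self.planar_net(torch.cat([w, code], dim=-1))
        return out[..., 0:1], out[..., 1:3]              # rotation angle, 2-D translation

    def forward(self, p, code):                          # observation -> canonical
        u, v, w = p[..., 0:1], p[..., 1:2], p[..., 2:3]
        w = w + self.shift_net(torch.cat([u, v, code], dim=-1))            # move along w
        theta, t = self._planar(w, code)                                    # rotate/translate [u, v]
        c, s = torch.cos(theta), torch.sin(theta)
        u, v = c * u - s * v + t[..., 0:1], s * u + c * v + t[..., 1:2]
        return torch.cat([u, v, w], dim=-1)

    def inverse(self, p, code):                          # canonical -> observation
        u, v, w = p[..., 0:1], p[..., 1:2], p[..., 2:3]
        theta, t = self._planar(w, code)                                    # same prediction as forward
        u, v = u - t[..., 0:1], v - t[..., 1:2]
        c, s = torch.cos(theta), torch.sin(theta)
        u, v = c * u + s * v, -s * u + c * v                                # apply the inverse rotation
        w = w - self.shift_net(torch.cat([u, v, code], dim=-1))             # undo the w-displacement
        return torch.cat([u, v, w], dim=-1)

Stacking several such blocks (with different chosen axes) yields the full map Fh; both directions reuse the same predicted quantities, which is exactly what makes the composition exactly invertible.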
3.2 Deformation Field
The proposed deformation representation is a continuous homeomorphic mapping that satisfies the cycle consistency between different frames, but it also preserves the surface topology. However, many dynamic scenes (e.g., varying body motion and facial expression) undergo topology changes. Therefore, we combine a topology-aware design [47] into our deformation field. 3D positions pi observed at time ti are mapped to topology coordinates q(pi) through a network q : R3 → Rm, which we regress with an MLP Fq. Then the corresponding coordinate of pi in the canonical hyper-space is represented as:
x = [p,q(pi)] = [Fh(pi,φi), Fq(pi,φi)] ∈ R3+m, (3)
conditioned on time-varying deformation φi.
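For concreteness, assembling the hyper-coordinate of Eq. (3) can be sketched as follows; bijective_map plays the role of Fh and topo_net the role of Fq, and both names are assumptions of this sketch:

import torch

def to_hyper_coordinate(p_obs, deform_code, bijective_map, topo_net):
    """Map observed 3-D points to the (3+m)-D canonical hyper-coordinate of Eq. (3)."""
    p_canonical = bijective_map(p_obs, deform_code)               # 3-D canonical position Fh(pi, phi_i)
    q = topo_net(torch.cat([p_obs, deform_code], dim=-1))         # m-D topology coordinates Fq(pi, phi_i)
    return torch.cat([p_canonical, q], dim=-1)                    # x in R^{3+m}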
3.3 Implicit Canonical Geometry and Appearance
Inspired by NeRF [43], we consider that a sample point x ∈ R3+m in the canonical hyper-space is associated with two properties: density σ and color c ∈ R3.
Neural SDF. Note that the object is embedded in the (3+m)-D canonical hyper-space. In this work, we represent its geometry as the zero-level set of an SDF:
S = {x ∈ R3+m|d(x) = 0}. (4)
Following NeuS [60], we utilize a probability function to calculate the density value σ(x) based on the estimated signed distance value, which is an unbiased and occlusion-aware approximation. We refer the reader to their paper for more details.
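For reference, the sigmoid-based conversion from SDF samples to per-interval opacities used in NeuS can be sketched as below; the variable names are ours and the formulation follows the NeuS paper rather than NDR's released code:

import torch

def neus_alpha(sdf_vals, inv_s):
    """NeuS-style discrete opacity from consecutive SDF samples along a ray.
    sdf_vals: (n_rays, n_samples) SDF values ordered from near to far;
    inv_s: learned sharpness of the logistic CDF (a scalar)."""
    cdf = torch.sigmoid(sdf_vals * inv_s)                               # logistic CDF of the SDF
    alpha = (cdf[..., :-1] - cdf[..., 1:]) / (cdf[..., :-1] + 1e-6)     # unbiased, occlusion-aware approx.
    return alpha.clamp(min=0.0)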
Implicit rendering network. We utilize a neural renderer Fc as the implicit appearance network. At time ti, it takes in a 3D canonical coordinate p, its corresponding normal, a canonical view direction as well as a geometry feature vector, then outputs the color of the point, conditioned on a time-varying appearance code ψi. Specifically, we first compute its normal np = ∇pd(x) by gradient calculation. Then, the view direction vp in 3D canonical space can be obtained by transforming the view direction vi in observation space with the Jacobian matrix Jp(pi) = ∂p/∂pi of the 3D canonical map p w.r.t pi: vp = Jp(pi)vi. Except the SDF value, we adopt a larger MLP Fd(x) = (d(x), z(x)) to compute the embedded geometry feature zx = z(x) to help the prediction of global shadow [67]. Finally, noticing pi is the correspondence of x at time ti, we can formulate its color ci as:
ci = Fc(p,np,vp, zx,ψi) = Fc(p,∇pd(x), Jp(pi)vi, z(x),ψi). (5)
It can be seen that the color of point pi viewed from direction vi depends on the deformation field, the canonical representation, and a deformation code as well as an appearance code bound to time.
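The two canonical inputs of Eq. (5) can be obtained with automatic differentiation; the sketch below shows one way to do so (topology coordinates are omitted for brevity, and the function and argument names are illustrative assumptions):

import torch

def canonical_normal_and_view(p_obs, v_obs, deform_code, bijective_map, sdf_net):
    """Compute the canonical normal n_p = grad_p d(x) and the transformed view direction
    v_p = J_p(p_i) v_i used in Eq. (5)."""
    warp = lambda p: bijective_map(p, deform_code)
    # Jacobian-vector product gives J_p(p_i) v_i without forming the full Jacobian
    _, v_canon = torch.autograd.functional.jvp(warp, (p_obs,), (v_obs,))
    # normal as the gradient of the SDF w.r.t. the canonical position
    p_canon = warp(p_obs).detach().requires_grad_(True)
    d = sdf_net(p_canon)
    normal = torch.autograd.grad(d.sum(), p_canon, create_graph=True)[0]
    return normal, v_canon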
3.4 Optimization
Given an RGB-D sequence with the masks of interested object {(Ii,Di,Mi), i = 1, 2, · · · , N}, the optimizable parameters include MLPs {Fh, Fq, Fd, Fc}, learnable codes {φi,ψi}, RGB and depth camera intrinsics {Krgb,Kdepth}, as well as SE(3) camera pose Ti at each time ti. Our target is to design the loss terms to match input masks, color images and depth images. Since we leverage neural implicit functions for representing the geometry, appearance and motion of dynamic object, we divide all constraints into two parts, on free-space points and on surface points:
L = (λ1 Lmask + λ2 Lcolor + λ3 Ldepth + λ4 Lreg)_{free-space} + (λ5 Lsdf + λ6 Lvisible)_{surface}, (6)
where λj(j = 1, 2, · · · , 6) are balancing weights.
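For concreteness, the weighted sum of Eq. (6) with the balancing weights reported later in Sec. 4.1 can be sketched as follows; `terms` is assumed to map each loss name to its already-computed scalar value:

def total_loss(terms):
    """Eq. (6) with the paper's reported weights (Sec. 4.1); a sketch, not the released code."""
    weights = {'mask': 0.1, 'color': 1.0, 'depth': 0.5, 'reg': 0.1, 'sdf': 0.5, 'visible': 0.1}
    return sum(weights[k] * terms[k] for k in weights)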
Constraints on free-space. Given a ray parameterized as r(s) = o+ sv (pass through a pixel), we sample the implicit radiance field at points lying along this ray to approximate its color and depth:
Ĉ(r) = ∫_{sn}^{sf} T(s) σ(s) c(s) ds,  D̂(r) = ∫_{sn}^{sf} T(s) σ(s) s ds, (7)
where sn and sf represent the near and far bounds, and T(s) = exp(−∫_{sn}^{s} σ(u) du) denotes the accumulated transmittance along the ray. The density and color calculation are described in Sec. 3.3. Then the color and depth reconstruction losses are defined as:
Lcolor = ∑_{r∈R(Krgb,Ti)} ∥M(r)(Ĉ(r) − C(r))∥1, (8)
Ldepth = ∑_{r∈R(Kdepth,Ti)} ∥M(r)(D̂(r) − D(r))∥1, (9)
where R(Krgb, Ti) and R(Kdepth, Ti) represent the set of rays to RGB and depth camera, respectively. M(r) ∈ {0, 1} is the object mask value, while C(r) and D(r) are the observed color and depth value. To focus on dynamic object reconstruction, we also define a mask loss as
Lmask = BCE(M̂(r), M(r)), (10)
where M̂(r) = ∫_{sn}^{sf} T(s) σ(s) ds is the density accumulation along the ray, and BCE is the binary cross entropy loss.
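In practice the integrals in Eqs. (7) and (10) are evaluated by a discrete quadrature over the sampled points; a hedged sketch using standard volume-rendering weights (not the authors' exact code) is:

import torch

def render_ray(alpha, colors, depths):
    """Discrete quadrature for Eq. (7) and the mask accumulation M_hat of Eq. (10).
    alpha: (n_rays, n_samples) per-sample opacities (e.g., from neus_alpha above);
    colors: (n_rays, n_samples, 3); depths: (n_rays, n_samples) ray depths s."""
    ones = torch.ones_like(alpha[:, :1])
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-7], dim=-1), dim=-1)[:, :-1]  # T_i
    weights = alpha * trans                                   # w_i = T_i * alpha_i
    color = (weights[..., None] * colors).sum(dim=1)          # C_hat(r)
    depth = (weights * depths).sum(dim=1)                     # D_hat(r)
    mask = weights.sum(dim=1)                                 # M_hat(r)
    return color, depth, mask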
An Eikonal loss is introduced to regularize d(x) to be a signed distance function of p, and it has the following form: Lreg = ∑_{x∈X} (∥∇p d(x)∥2 − 1)^2, (11)
where x are points sampled in the canonical hyper-space X . In our implementation, to obtain x, we first sample some points pi on the observed free-space and then deform sampled points back to X using Eq. 3. We constrain points sampled by a uniform and importance sampling strategy.
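A minimal sketch of this regularizer, taking the gradient only with respect to the 3-D canonical part p of the sampled hyper-coordinates and averaging over samples, is given below (an illustrative assumption, not the released implementation):

import torch

def eikonal_loss(sdf_net, x_samples):
    """Eikonal regularizer of Eq. (11) on points x sampled in the canonical hyper-space."""
    p = x_samples[..., :3].detach().requires_grad_(True)
    x = torch.cat([p, x_samples[..., 3:]], dim=-1)
    grad = torch.autograd.grad(sdf_net(x).sum(), p, create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()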
Constraints on surface. Besides the losses on the free-space, we also constrain the property of points lying on the depth images Di. We add an SDF loss term:
Lsdf = ∑_{pi∈Di} ∥d(x)∥1. (12)
To avoid the deformed surfaces at different times being fused into the canonical space, which causes a multi-surface phenomenon, we design a visibility loss term to constrain the surface:
Lvisible = ∑_{pi∈Di} max(⟨np/∥np∥2, vp/∥vp∥2⟩, 0), (13)
where ⟨·, ·⟩ denotes the inner product. The visibility loss constrains the angle between the normal vector of a sampled point on the depth map and the view direction to be larger than 90 degrees, which guides depth points to be visible surface points under the RGB-D camera view.
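A short sketch of Eq. (13), assuming the canonical normals and view directions have already been computed for the depth points:

import torch
import torch.nn.functional as F

def visible_loss(normals, view_dirs):
    """Visibility term of Eq. (13): penalize depth points whose canonical normals
    do not face the camera (inner product with the view direction clamped at zero)."""
    n = F.normalize(normals, dim=-1)
    v = F.normalize(view_dirs, dim=-1)
    return torch.clamp((n * v).sum(dim=-1), min=0.0).sum()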
4 Experiments
4.1 Experimental Settings
Implementation details. We initialize d(x) such that it approximates a unit sphere [2]. We train our neural networks using the ADAM optimizer [32] with a learning rate of 5 × 10^-4. We run most of our experiments with 6 × 10^4 iterations for 12 hours on a single NVIDIA A100 40GB GPU. On free-space, we sample 2,048 rays per batch (128 points along each ray). Following NeuS [60], we first uniformly sample 64 points, and then adopt importance sampling iteratively for 4 times (16 points each iteration). On the depth map, we uniformly sample 2,048 points per batch. For coarse-to-fine training, we utilize an incremental positional encoding strategy on sampled points, similar to Nerfies [46]. The weights in Eq. 6 are set as: λ1 = 0.1, λ2 = 1.0, λ3 = 0.5, λ4 = 0.1, λ5 = 0.5, λ6 = 0.1.
For non-rigid object segmentation, we leverage off-the-shelf methods, RVM [38] for human and MiVOS [14] for other objects. Since we assume the region of object is inside a unit sphere, we normalize the points back-projected from depth maps first. If the collected sequence implies larger global rotation, we leverage Robust ICP method [68] for per-frame initialization of poses Ti.
Datasets. To evaluate our NDR and baseline approaches, we use 6 scenes from DeepDeform [9] dataset, 7 scenes from KillingFusion [54] dataset, 1 scene from AMA [59] dataset and 11 scenes captured by ourselves. The evaluation data contains 6 classes: human faces, human bodies, domestic animals, plants, toys, and clothes. It includes challenging cases, such as rapid movement, self-rotation motion, topology change and complex shape. DeepDeform [9] dataset is captured by an iPad. Its RGB-D streams are recorded and aligned at a resolution of 640 × 480 and 30 frames per second. Since our NDR does not need any annotated or estimated correspondences, we only leverage RGB-D sequences and camera intrinsics as initialization when evaluating NDR, without scene flow or optical flow data. We choose 6 scenes from the whole dataset, including human bodies, dogs, and clothes. All sequences in KillingFusion [54] dataset were recorded with a Kinect v1, also aligned to 640×480 resolution. We choose all scenes from it, which contain toys and human motions. For evaluation on synthetic data, we use AMA [59] dataset, which contains reconstructed mesh corresponding to each video frame. To construct synthetic depth data, we render meshes to a chosen camera view. In the experiment, we do not utilize any multi-view messages but only monocular RGB-D frames. To increase the data diversity, especially for adding more challenging but routine conditions (e.g., topology change and complex details), we capture some human head and plant videos with iPhone X (resolution 480× 640 at 30 fps). When capturing head data, we ask the person to rotate the face while freely varying expressions. When capturing plant data, we record the states of leaf swings.
Comparison methods. (1) A widely-used classical fusion-based method, DynamicFusion [44]: It is the pioneering work that estimates and utilizes the motion of hierarchical node graph for deforming guidance, and it assumes the shape inside a canonical TSDF volume. (2) Two recent fusion-based methods, DeepDeform [9] and Bozic et al. [8]: These methods utilize the learning-based correspondences to help handle challenging motions. (3) A state-of-the-art fusion-based method, OcclusionFusion [39]: It computes occlusion-aware 3D motion through a neural network for modeling guidance. (4) A state-of-the-art RGB reconstruction method from monocular video, BANMo [65]: It models articulated 3D shapes in a neural blend skinning and differentiable rendering framework. For
comparison with RGB-D based methods, we use our re-implementation of DynamicFusion [44] and the results provided by the authors of OcclusionFusion [39].
4.2 Comparisons
RGB-D based methods. For qualitative evaluation, we exhibit some comparisons with DynamicFusion [44] and OcclusionFusion [39] in Fig. 3, and with DeepDeform [9] and Bozic et al. [8] in Fig. 4. Specifically, the results of detailed modeling verify that the bijective deformation mapping helps match photometric correspondences between observed frames. As shown in Fig. 3, our NDR models geometry details, while fusion-based methods [44, 39] tend to form artifacts on the reconstructed surfaces. NDR also achieves considerable reconstruction accuracy in handling rapid movement (Fig. 4).
For quantitative evaluation, we calculate geometry errors on some testing sequences, following previous works [9, 8, 39]. The geometry metric compares depth values inside the object mask to the reconstructed geometry. The sequences are representative of various object classes and cases, including a domestic animal (seq. Dog from DeepDeform [9]), a rotated body,
human-object interaction, general objects (seq. Alex, Hat, Frog from KillingFusion [54], respectively), and human heads (seq. Human1, Human2 from our collected dataset). The quantitative results are shown in Tab. 1. We can see that our NDR outperforms previous works [44, 39], owing to jointly optimizing geometry, appearance and motion over the whole video. On seq. Alex, the geometry error of OcclusionFusion [39] is lower than that of ours. However, NDR can handle topology variation well, as shown in the corresponding qualitative results on the right of Fig. 3.
RGB based method. Fig. 5 exhibits several comparisons with a recent RGB based method - BANMo [65]. BANMo takes the RGB sequence as input and optimizes the geometry, appearance and motion based on the precomputed annotations, including the camera pose and optical flow. For a fair comparison, we also compare BANMo [65] with our NDR with only RGB supervision, where we provide them with the same camera initialization and frame-wise mask. For the RGB-only situation, both our method and BANMo may make some structural mistakes, such as the human arm in ours and the Snoopy’s ears in BANMo. Moreover, compared to our RGB-only results, BANMo suffers more from the local geometry noise, which should be due to the error caused by incorrect precomputed annotations. Meanwhile, our method does not rely on any precomputed annotations and achieves flat results. With the RGB-D sequence as input, our NDR full model performs robust and well in modeling geometry details and rapid motions.
4.3 Robustness on Camera Initialization
In order to systematically analyze the performance of our camera pose optimization ability, we add an experiment to test the robustness under various degrees of noise on both real and synthetic data. We choose 2 sequences of small rigid motion separately from DeepDeform [9] dataset (a body with moving joints, 200 frames) and AMA [59] dataset (a Samba dancer, 175
monocular frames). As shown in Tab. 2, we add Gaussian noise with 5, 10, 20, 40, 60 degrees of standard deviation to the initial Euler angles and calculate mean geometry errors (0 denotes no added noise). The results show that NDR is robust against noisy camera poses to a certain extent, owing to its neural implicit representation and abundant optimization with RGB-D information. If the standard deviation of the Gaussian noise is over 20 degrees, the reconstruction quality is obviously affected (the geometry error exceeds 1 cm). We refer the reader to the supplementary material for qualitative results.
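A sketch of this perturbation protocol, as we read it, is given below; it assumes the pose is a 4x4 numpy array and uses SciPy's rotation utilities, which may differ from the paper's exact implementation:

import numpy as np
from scipy.spatial.transform import Rotation as R

def perturb_pose(T, std_deg):
    """Add zero-mean Gaussian noise with standard deviation std_deg (degrees) to the
    Euler angles of a 4x4 camera pose T (Sec. 4.3 robustness test, illustrative sketch)."""
    noise = R.from_euler('xyz', np.random.normal(0.0, std_deg, size=3), degrees=True)
    T_noisy = T.copy()
    T_noisy[:3, :3] = noise.as_matrix() @ T[:3, :3]
    return T_noisy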
Figure 6: Qualitative ablation comparison. Columns: Input RGB, Ours (only depth), Ours (6D motion), Ours (full).
4.4 Ablation Studies
We evaluate 3 components of our NDR regarding their effects on the final reconstruction result.
Depth cues. We evaluate the reconstruction results with only RGB supervision, i.e., removing depth images and supervising only with the loss terms Lmask, Lcolor, Lreg. As shown in Fig. 5, the reconstruction results with only RGB information are not correct (especially when seen from a novel view), since monocular scenes suffer from depth ambiguity.
RGB cues. We also evaluate the reconstruction results with only depth supervision, i.e., removing RGB images and the color loss term Lcolor. As shown in Fig. 6, the reconstructed shapes lack geometric details since color information is not used.
Bijective map Fh. To verify the effect of our proposed bijective map Fh (Sec. 3.1), we change it to a 6D motion representation in SE(3) space. As shown in Fig. 6, since Fh satisfies the cycle consistency strictly, it is less prone to accumulating artifacts and thus performs better in local geometry. In comparison, the irreversible transformation easily fails to preserve high-quality surfaces.
4.5 Evaluation of Cycle Consistency
We perform a numerical experiment for cycle consistency evaluation on the whole deformation field. In the experiment, we randomly select 3 frames (indexed by i, j, k) as a group in a video sequence. Given points on one frame, we calculate the corresponding coordinates on another frame and record this scene flow as f . Then it includes 2 deforming paths from frame i to k, based on the direct flow fik, or the composite flow fij + fjk. To evaluate the
cycle consistency, we calculate the Euclidean norm of fij + fjk − fik as the error. The smaller the error, the better the cycle consistency (invariance to the deforming path) is maintained. We conduct experiments on a human body rotated by 360 degrees (200 frames) from the KillingFusion [54] dataset and a talking head (300 frames) from our captured dataset. In the experiment, we randomly select 1,000 groups of frames and calculate the mean error on the depth points of the object surface. Since the topology-aware network is irreversible, we optimize the corresponding positions with fixed network parameters and the ADAM optimizer [32]. As a comparison, we also evaluate our framework with 6D motion. As shown in Tab. 3, the cycle consistency of the whole deformation field among frames is maintained quite well by the bijective map Fh, although it might be affected by the irreversible topology-aware network.
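A minimal sketch of this metric for one frame triple is shown below; `warp(p, a, b)` is assumed to return the correspondences of points p from frame a to frame b (e.g., via Eq. 2 plus the optimized topology coordinates), which is an assumption of this sketch:

import torch

def cycle_error(points_i, warp, i, j, k):
    """Cycle-consistency error of Sec. 4.5 for a frame triple (i, j, k): mean Euclidean
    norm of f_ij + f_jk - f_ik over the given surface points."""
    p_j = warp(points_i, i, j)
    p_k_direct = warp(points_i, i, k)          # endpoint of the direct flow f_ik
    p_k_composite = warp(p_j, j, k)            # endpoint of the composite flow f_ij + f_jk
    return (p_k_composite - p_k_direct).norm(dim=-1).mean()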
5 Conclusion
We have presented NDR, a new approach for reconstructing the high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D video without any template priors. Other than previous works, NDR integrates observed color and depth into a canonical SDF and radiance field for joint optimization of surface and deformation. For maintaining cycle consistency throughout the whole video, we propose an invertible bijective mapping between observation space and canonical space, which fits perfectly with non-rigid motions. To handle topology change, we employ a topology-aware network to model topology-variant correspondence. On public datasets and our collected dataset, NDR shows a strong empirical performance in modeling different class objects and handling various challenging cases. Negative societal impact and limitation: like many other works with neural implicit representation, our method needs plenty of computation resources and optimization time, which can be a concern for energy resource consumption. We will explore alleviating these in future work.
Acknowledgements. This research was partially supported by the National Natural Science Foundation of China (No.62122071, No.62272433), the Fundamental Research Funds for the Central Universities (No. WK3470000021), and Alibaba Group through Alibaba Innovation Research Program (AIR). The opinions, findings, conclusions, and recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies or the government. We thank the authors of OcclusionFusion for sharing the fusion results of several RGB-D sequences. We also thank the authors of BANMo for their suggestions on experimental parameter settings. Special thanks to Prof. Weiwei Xu for providing some help. | 1. What is the focus and contribution of the paper on surface reconstruction?
2. What are the strengths of the proposed approach, particularly in terms of cycle consistency and topology awareness?
3. What are the weaknesses of the paper, especially regarding the definition and choice of the canonical space?
4. Do you have any concerns about the method's ability to handle large motions or long sequences?
5. What are the limitations of the proposed approach, and how do they impact its applicability in certain scenarios? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposed a method for surface reconstruction from a sequential RGBD input. The main strategy of this method is the cycle consistency between canonical and observation space. This approach seems to represent non-rigid deformation. Moreover, to be topology-aware, this paper employed [45]. The results show some comparisons with DynamicFusion[42], OcclusionFusion[58], and so on. Both qualitative and quantitative results show that the method proposed in this paper is better than the compared methods in some aspects. The ablation study is also conducted, but it provides qualitative results only.
Strengths And Weaknesses
Strength
The idea of cycle consistency between observation and canonical space is reasonable.
Topology awareness improves the results, but it is basically a previously proposed method.
The organization and writing are good.
Weakness
As written in the questions and limitations of this review, I cannot find how the canonical space is defined. Also, I have some concerns about the limitations of this method.
Questions
As for the canonical space, how is it decided? Moreover, does the choice, definition, or learning of the canonical space affect the performance? In the case of a long sequence with large motion, like a 360-degree rotation, does the cycle consistency work correctly?
Limitations
If there are some motions to which this method cannot be applied, I think they should be mentioned as a limitation of the method.
NIPS | Title
Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
Abstract
We propose Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera. In NDR, we adopt the neural implicit function for surface representation and rendering such that the captured color and depth can be fully utilized to jointly optimize the surface and deformations. To represent and constrain the non-rigid deformations, we propose a novel neural invertible deforming network such that the cycle consistency between arbitrary two frames is automatically satisfied. Considering that the surface topology of dynamic scene might change over time, we employ a topology-aware strategy to construct the topology-variant correspondence for the fused frames. NDR also further refines the camera poses in a global optimization manner. Experiments on public datasets and our collected dataset demonstrate that NDR outperforms existing monocular dynamic reconstruction methods.
1 Introduction
Reconstructing 3D geometry shape, texture and motions of the dynamic scene from a monocular video is a classical and challenging problem in computer vision. It has broad applications in many areas like virtual and augmented reality. Although existing methods [63, 65] have demonstrated impressive reconstruction results for dynamic scenes only with 2D images, they are still difficult to recover high-fidelity geometry shapes, especially for some casually captured data as abundant potential solutions exist without depth constraints. Only with 2D measurements, dynamic reconstruction methods require that motions of interested object hold in a nearby z-plane. Meanwhile, it is difficult to construct reliable correspondences in areas with weak texture, which causes error accumulation in the canonical space.
To solve this under-constrained problem, some methods propose to utilize shape priors for some special object types. For example, category-specific parametric shape models like 3DMM [6], SMPL [41] and SMAL [72] are first constructed and then used to help the reconstruction. However, templated-based methods could not generalize to unknown object types. On the other hand, some methods utilize annotations, like keypoints and optical flow, obtained from manual annotators or off-the-shelf tools [31, 33, 63, 65]. The motion trajectories of sparse or dense 2D points can effectively help recover the exact motion of the whole structure. However, it needs human labeling for supervision or highly depends on the quality of learned priors from a large-scale dataset.
One straightforward solution to this under-constrained problem is to reconstruct the interested object based on observations from RGB-D cameras like Microsoft Kinect [69] and Apple iPhone X. Existing fusion-based methods [44, 27, 54] utilize a dense non-rigid warp field and a canonical truncated signed
∗Corresponding author.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
distance (TSDF) volume to represent motion and shape, respectively. However, these fusion-based methods might fail due to accumulated tracking errors, especially for long sequences. To alleviate this problem, some learning-based methods [9, 8, 39] utilize more accurate correspondences which are annotated or learned from synthesis datasets to guide the dynamic fusion process. However, the captured color and depth information is not represented together within one differentiable framework in these methods. Recently, a neural implicit representation based method [3] has been proposed to reconstruct a room-scale scene from RGB-D inputs, but it is only designed for static scenes and can not be directly applied to dynamic scenes.
Time
Figure 1: Examples of reconstructed (right) and rendered (left) results by NDR. Given a monocular RGB-D video sequence, NDR recovers high-fidelity geometry and motions of a dynamic scene.
In this paper, we present Neural-DynamicReconstruction (NDR), a neural dynamic reconstruction method from a monocular RGB-D camera (Fig. 1). To represent the high-fidelity geometry and texture of deformable object, NDR maintains a neural implicit field as the canonical space. With extra depth constraint, there still exist multiple potential solutions since the correspondences between different frames are still unknown. In this paper, we propose the following strategies to constrain and regularize the solution space: (1) integrating all RGB-D frames to a high-fidelity textured shape in the canonical space; (2) maintaining cycle consistency between arbitrary two frames; (3) a surface representation which can handle topological changes.
Specifically, we adopt the neural SDF and radiance field to respectively represent the high-fidelity geometry and appearance in the canonical space instead of the TSDF volume frequently used in fusion-based methods [28, 44, 54, 9, 8, 39]. In our framework, each RGB-D frame can be integrated into the canonical representation. We propose a novel neural deformation representation that implies a continuous bijective map between observation and canonical space. The designed invertible module applies a cycle consistency constraint through the whole RGB-D video; meanwhile, it fits the natural properties of non-rigid motion well. To support topology changes of dynamic scene, we adopt the topology-aware network in HyperNeRF [47]. Thanks for modeling topology-variant correspondence, our framework can handle topology changes while existing deformation graph based methods [44, 39, 65] could not. NDR also further refines camera intrinsic parameters and poses during training. Extensive experimental results demonstrate that NDR can recover high-fidelity geometry and photorealistic texture for monocular category-agnostic RGB-D videos.
2 Related Works
RGB based dynamic reconstruction. Dynamic reconstruction approach can be divided into template-based and template-free types. Templates [6, 41, 50, 72] are category-specific statistical models constructed from large-scale datasets. With the help of pre-constructed 3D morphable models [6, 12, 36], some researches [5, 11, 26, 57, 21, 25, 19] reconstruct faces or heads from RGB inputs. Most of them need 2D keypoints as extra supervisory information to guide dynamic tracking [71, 17, 19]. With the aid of human parametric models [1, 41], some works [7, 62, 23, 24, 70, 29] recover digital avatars based on monocular image or video cues. However, it is unpractical to extend templates to general objects with limited 3D scanned priors, such as articulated objects, clothed human and animals. Non-rigid structure from motion (NR-SFM) algorithms [10, 51, 15, 34, 53] are to reconstruct category-agnostic object from 2D observations. Although NR-SFM can reconstruct reasonable result for general dynamic scenes, it heavily depends on reliable point trajectories throughout observed sequences [52, 56]. Recently, some methods [63, 64, 65] obtain promising results from a long monocular video or several short videos of a category. LASR [63] and ViSER [64] recover articulated shapes via a differentiable rendering manner [40], while BANMo [65] models them with the help of Neural Radiance Fields (NeRF) [43]. However, due to the depth ambiguity of input 2D images, the reconstruction might fails for some challenging inputs.
RGB-D based dynamic reconstruction. Recovering 3D deforming shapes from a monocular RGB video is a highly under-constrained problem. On the other hand, The progress in consumer-grade RGB-D sensors has made depth map capture from a single camera more convenient. Therefore, it is quite natural to reconstruct the target objects based on RGB-D sequences. DynamicFusion [44], the seminar work of RGB-D camera based dynamic object reconstruction, proposes to estimate a templatefree 6D motion field to warp live frames into a TSDF surface. The surface representation strategy has also been used in KinectFusion [28]. VolumeDeform [27] represents motion in a grid and incorporates global sparse SIFT [42] features during alignment. Guo et al. [20] coheres albedo, geometry and motion estimation in an optimization pipeline. KillingFusion [54] and SobolevFusion [55] are proposed to deal with topology changes. During deep learning era, DeepDeform [9] and Bozic et al. [8] aim to learn more accurate correspondences for tracking improvement of faster and more complex motions. OcclusionFusion [39] probes and handles the occlusion problem via an LSTMinvolved graph neural network but fails when topology changes. Although these methods obtain promising reconstruction results with the additional depth cues, their reconstructed shapes mainly depend on the captured depths, while the RGB images are not fully utilized to further improve the results.
Dynamic NeRF. Given a range of image cues, prior works on NeRF [43] optimize an underlying continuous scene function for novel view synthesis. Some NeRF-like methods [37, 49, 58, 18, 46, 47] achieve promising results on dynamic scenes without prior templates. Nerfies [46] and AD-NeRF [22] reconstruct free-viewpoint selfies from monocular videos. HyperNeRF [47] models an ambient slicing surface to express topologically varying regions. Recent approaches [60, 3] introduce neural representation for static object/scene reconstruction, but theirs can not be used for non-rigid scenes.
Cycle consistency constraint. To maintain cycle consistency between deformed frames is an important regularization in perceiving and modeling dynamic scenes [61]. However, recent methods [64, 37, 65] try to leverage a loss term to constrain estimated surface features or scene flow, which is a weak but not strict property. Therefore, constructing an invertible representation for deformation field is a reasonable design. Several invertible networks are proposed to represent deformation, such as Real-NVP [16], Neural-ODE [13], I-ResNet [4]. Based on these manners, there exist some methods modeling deformation in space [30, 66, 48] or time [45, 35] domain. CaDeX [35] is a novel dynamic surface representation method using a real-valued non-volume preserving module [16]. Different from these strategies, we propose a novel scale-invariant binary map between observation space and 3D canonical space to process RGB-D sequences, which is more suitable for modeling non-rigid motion.
3 Method
The input of NDR is an RGB-D sequence {(Ii,Di), i = 1, · · · , N} captured by a monocular RGB-D camera (e.g., Kinect and iPhone X), where Ii ∈ RH×W×3 is the i-th RGB frame and
Di ∈ RH×W×1 is the corresponding aligned depth map. To optimize a canonical textured shape and motion through the sequence, we leverage full N color frames Ii as well as corresponding depth frames Di. Specifically, we first adopt video segmentation methods [14, 38] to obtain the mask Mi of interested object. Then, we integrate RGB-D video sequence into a canonical hyper-space composed of a 3D canonical space and a topology space. We propose a continuous bijective representation between the 3D canonical and observation space such that the cycle consistency can be strictly satisfied. The implicit surface is represented by a neural SDF and volume rendering field, as a function of input hyper-coordinate and camera view. The geometry, appearance, and motions of dynamic object are optimized without any template or structured priors, like optical flow [65], 2D annotations [9] and estimated normal map [29]. The pipeline of NDR is shown in Fig. 2(a).
3.1 Bijective Map in Space-time Synthesis
Invertible representation. Given a 3D point sampled in the space of i-th frame, recent methods [44, 65, 47] model its motion as a 6D transformation in SE(3) space. Nerfies [46] and HyperNeRF [47] construct a continuous dense field to estimate the motion. To reduce the complexity, DynamicFusion [44] and BANMo [65] define warp functions based on several control points. The latter designs both 2D and 3D cycle consistency loss terms to apply bijective constraints to deformation representation, but it is just a guide for learning instead of a rigorous inference module. Similar to the previous works, we also construct the deformation between each current frame and the 3D canonical space. Further, we employ a strictly invertible bijective mapping, which is naturally compatible with the cycle consistency strategy. Specifically, we decompose the non-rigid deformation into several reversible bijective blocks, where each block represents the transformation along and around a certain axis. In this manner, our deformation representation is strictly invertible and fits the natural properties of non-rigid motion well, which is helpful for the reconstruction effect.
We denote pi = [xi, yi, zi] ∈ R3 as a position of the observation space at time ti, in which a deformed surface Ui is embedded. It is noticeable that pi represents any position, both surface and free-space points. A continuous homeomorphic mapping Hi : R3 → R3 maps pi back to the 3D canonical position p = [x, y, z]. Supposes that there exists a canonical shape U of the interested object, which is independent of time and is shared across the video sequence. Notes that map Hi is invertible, and thus we can directly obtain the deformed surface at time ti:
Ui = {H−1i ([x, y, z])|∀[x, y, z] ∈ U}. (1)
Then, the correspondence of pi can be expressed by the bijective map, factorized as:
[xj , yj , zj ] = Gij([xi, yi, zi]) = H−1j ◦ Hi([xi, yi, zi]). (2)
The deformation representation G is cycle consistent strictly, since it is invariant on deforming path (Gjk ◦ Gij = Gik). As a composite function of two bijective maps (Eq. 2), it is a topology-invariant function between arbitrary double time stamps.
Implementation. Based on these observations, we implement the bijective map H by a novel invertible network h. While Real-NVP [16] seems a suitable network structure, its scale-variant property limits its usage in our object reconstruction task. Inspired by the idea of Real-NVP to split the coordinates, we decompose our scale-invariant deformation into several blocks. In each block, we set an axis and represent the motion steps as simple axis-related rotations and translations, which are totally shared by the forward and backward deformations. In this manner, the inverse deformation H−1 can be viewed as the composite of the inverse of these simple rotations and translations in H. On the other hand, this map also regularizes the freedom of deformation.
Fig. 2(b) shows the detailed structure of each block. Given a latent deformation code φ binding with time, we firstly consider the forward deformation, where the 3D positions [u, v, w] ∈ R3 of observation space is input, and the positions [u′, v′, w′] ∈ R3 of 3D canonical space is output. The cause of the invertible property is that after specifying a certain coordinate axis, each block predicts the movement along and rotation around the axis in turn, and the process of predicting the deformation is reversible, owing to coordinate split. In the inverse process, each block can infer the rotation around and movement along the axis from [u′, v′, w′] and invert them in turn to recover the original [u, v, w].
Without loss of generality, let the w-axis to be the chosen axis. With [u, v] fixed, we compute a displacement δw and update w′ as w + δw. With [w′] fixed, we then compute the rotation Ruv and translation δuv for [u, v] and update them as [u′, v′]. Oppositely, for the backward deformation, we apply −δuv, R−1uv , and −δw in turn to recover [u′, v′, w′] back to [u, v, w]. We refer the reader to supplementary material for the inverse process. Therefore, if the network h consists of these invertible blocks, it can represent a bijective map as well. At time ti, h(·|φi) : R3 → R3 maps 3D positions pi of observation space back to 3D canonical correspondences p, where φi denotes the deformation code of i-th frame. In our experiment, we use a Multi-Layer Perceptron (MLP) as the implementation of h, so we design a continuous bijective map Fh for space-time synthesis.
3.2 Deformation Field
Although the proposed deformation representation is a continuous homeomorphic mapping that satisfies the cycle consistency between different frames, it also preserves the surface topology. However, several dynamic scenes (e.g., varying body motion and facial expression) may undergo topology changes. Therefore, we combine a topology-aware design [47] into our deformation field. 3D positions pi observed at time ti are mapped to topology coordinates q(pi) through a network q : R3 → Rm. We regress topology coordinates from an MLP Fq. Then the corresponding coordinate of pi in the canonical hyper-space is represented as:
x = [p,q(pi)] = [Fh(pi,φi), Fq(pi,φi)] ∈ R3+m, (3)
conditioned on time-varying deformation φi.
3.3 Implicit Canonical Geometry and Appearance
Inspired by NeRF [43], we consider that a sample point x ∈ R3+m in the canonical hyper-space is associated with two properties: density σ and color c ∈ R3.
Neural SDF. Notes that the object embeds in the (3 +m)-D canonical hyper-space. In this work, we represent its geometry as the zero-level set of an SDF:
S = {x ∈ R3+m|d(x) = 0}. (4)
Following NeuS [60], we utilize a probability function to calculate the density value σ(x) based on the estimated signed distance value, which is an unbiased and occlusion-aware approximation. We refer the reader to their paper for more details.
Implicit rendering network. We utilize a neural renderer Fc as the implicit appearance network. At time ti, it takes in a 3D canonical coordinate p, its corresponding normal, a canonical view direction as well as a geometry feature vector, then outputs the color of the point, conditioned on a time-varying appearance code ψi. Specifically, we first compute its normal np = ∇pd(x) by gradient calculation. Then, the view direction vp in 3D canonical space can be obtained by transforming the view direction vi in observation space with the Jacobian matrix Jp(pi) = ∂p/∂pi of the 3D canonical map p w.r.t pi: vp = Jp(pi)vi. Except the SDF value, we adopt a larger MLP Fd(x) = (d(x), z(x)) to compute the embedded geometry feature zx = z(x) to help the prediction of global shadow [67]. Finally, noticing pi is the correspondence of x at time ti, we can formulate its color ci as:
ci = Fc(p,np,vp, zx,ψi) = Fc(p,∇pd(x), Jp(pi)vi, z(x),ψi). (5)
It can be seen that the color of point pi viewed from direction vi depends on the deformation field, canonical representation, a deformation code as well an appearance code combined with time.
3.4 Optimization
Given an RGB-D sequence with the masks of interested object {(Ii,Di,Mi), i = 1, 2, · · · , N}, the optimizable parameters include MLPs {Fh, Fq, Fd, Fc}, learnable codes {φi,ψi}, RGB and depth camera intrinsics {Krgb,Kdepth}, as well as SE(3) camera pose Ti at each time ti. Our target is to design the loss terms to match input masks, color images and depth images. Since we leverage neural implicit functions for representing the geometry, appearance and motion of dynamic object, we divide all constraints into two parts, on free-space points and on surface points:
L = ( λ1Lmask + λ2Lcolor + λ3Ldepth + λ4Lreg ) ︸ ︷︷ ︸
free-space
+ ( λ5Lsdf + λ6Lvisible ) ︸ ︷︷ ︸
surface
, (6)
where λj(j = 1, 2, · · · , 6) are balancing weights.
Constraints on free-space. Given a ray parameterized as r(s) = o+ sv (pass through a pixel), we sample the implicit radiance field at points lying along this ray to approximate its color and depth:
Ĉ(r) = ∫ sf sn T (s)σ(s)c(s) ds, D̂(r) = ∫ sf sn T (s)σ(s)sds, (7)
where sn and sf represent near and far bounds, and T (s) = exp(− ∫ s sn
σ(u) du) denotes the accumulated transmittance along the ray. The density and color calculation are described in Sec. 3.3. Then the color and depth reconstruction loss are defined as:
Lcolor = ∑
r∈R(Krgb,Ti)
∥M(r)(Ĉ(r)−C(r))∥1, (8)
Ldepth = ∑
r∈R(Kdepth,Ti)
∥M(r)(D̂(r)−D(r))∥1, (9)
where R(Krgb, Ti) and R(Kdepth, Ti) represent the set of rays to RGB and depth camera, respectively. M(r) ∈ {0, 1} is the object mask value, while C(r) and D(r) are the observed color and depth value. To focus on dynamic object reconstruction, we also define a mask loss as
Lmask = BCE(M̂(r),M(r)), (10) where M̂(r) = ∫ sf sn
T (s)σ(s) ds is the density accumulation along the ray, and BCE is the binary cross entropy loss.
An Eikonal loss is introduced to regularize d(x) to be a signed distance function of p, and it has the following form: Lreg = ∑ x∈X (∥∇pd(x)∥2 − 1)2, (11)
where x are points sampled in the canonical hyper-space X . In our implementation, to obtain x, we first sample some points pi on the observed free-space and then deform sampled points back to X using Eq. 3. We constrain points sampled by a uniform and importance sampling strategy.
Constraints on surface. Except for the losses on the free-space, we also constrain the property of points lying on the depth images Di. We add an SDF loss term:
Lsdf = ∑
pi∈Di
∥d(x)∥1. (12)
To avoid the deformed surface at each time fuses into the canonical space which causes multi-surfaces phenomenon, we design a visible loss term to constrain surface:
Lvisible = ∑
pi∈Di
max(⟨ np ∥np∥2 , vp ∥vp∥2 ⟩, 0), (13)
where ⟨·, ·⟩ denotes the inner product. The visible loss term is to constrain the angle between the normal vector of the sampled point on depth map and the view direction to be larger than 90 degrees, which aims to guide depth points to be visible surface points under the RGB-D camera view.
4 Experiments
4.1 Experimental Settings
Implementation details. We initialize d(x) such that it approximates a unit sphere [2]. We train our neural networks using the ADAM optimizer [32] with a learning rate 5× 10−4. We run most of our experiments with 6×104 iterations for 12 hours on a single NVIDIA A100 40GB GPU. On free-space, we sample 2, 048 rays per batch (128 points along each ray). Following NeuS [60], we first uniformly sample 64 points, and then adopt importance sampling iteratively for 4 times (16 points each iteration). On depth map, we uniformly sample 2, 048 points per batch. For coarse-to-fine training, we utilize an incremental positional encoding strategy on sampled points, similar with Nerfies [46]. The weights in Eq. 6 are set as: λ1 = 0.1, λ2 = 1.0, λ3 = 0.5, λ4 = 0.1, λ5 = 0.5, λ6 = 0.1.
For non-rigid object segmentation, we leverage off-the-shelf methods, RVM [38] for human and MiVOS [14] for other objects. Since we assume the region of object is inside a unit sphere, we normalize the points back-projected from depth maps first. If the collected sequence implies larger global rotation, we leverage Robust ICP method [68] for per-frame initialization of poses Ti.
Datasets. To evaluate our NDR and baseline approaches, we use 6 scenes from DeepDeform [9] dataset, 7 scenes from KillingFusion [54] dataset, 1 scene from AMA [59] dataset and 11 scenes captured by ourselves. The evaluation data contains 6 classes: human faces, human bodies, domestic animals, plants, toys, and clothes. It includes challenging cases, such as rapid movement, self-rotation motion, topology change and complex shape. DeepDeform [9] dataset is captured by an iPad. Its RGB-D streams are recorded and aligned at a resolution of 640 × 480 and 30 frames per second. Since our NDR does not need any annotated or estimated correspondences, we only leverage RGB-D sequences and camera intrinsics as initialization when evaluating NDR, without scene flow or optical flow data. We choose 6 scenes from the whole dataset, including human bodies, dogs, and clothes. All sequences in KillingFusion [54] dataset were recorded with a Kinect v1, also aligned to 640×480 resolution. We choose all scenes from it, which contain toys and human motions. For evaluation on synthetic data, we use AMA [59] dataset, which contains reconstructed mesh corresponding to each video frame. To construct synthetic depth data, we render meshes to a chosen camera view. In the experiment, we do not utilize any multi-view messages but only monocular RGB-D frames. To increase the data diversity, especially for adding more challenging but routine conditions (e.g., topology change and complex details), we capture some human head and plant videos with iPhone X (resolution 480× 640 at 30 fps). When capturing head data, we ask the person to rotate the face while freely varying expressions. When capturing plant data, we record the states of leaf swings.
Comparison methods. (1) A widely used classical fusion-based method, DynamicFusion [44]: it is the pioneering work that estimates and utilizes the motion of a hierarchical node graph as deformation guidance, and it represents the shape in a canonical TSDF volume. (2) Two recent fusion-based methods, DeepDeform [9] and Bozic et al. [8]: these methods utilize learning-based correspondences to help handle challenging motions. (3) A state-of-the-art fusion-based method, OcclusionFusion [39]: it computes occlusion-aware 3D motion through a neural network as modeling guidance. (4) A state-of-the-art RGB reconstruction method from monocular video, BANMo [65]: it models articulated 3D shapes in a neural blend skinning and differentiable rendering framework. For
comparison with RGB-D based methods, we use our re-implementation of DynamicFusion [44] and the results provided by the authors of OcclusionFusion [39].
4.2 Comparisons
RGB-D based methods. For qualitative evaluation, we show comparisons with DynamicFusion [44] and OcclusionFusion [39] in Fig. 3, and with DeepDeform [9] and Bozic et al. [8] in Fig. 4. The detailed reconstructions verify that the bijective deformation mapping helps match photometric correspondences across observed frames. As shown in Fig. 3, our NDR models geometry details, while the fusion-based methods [44, 39] tend to form artifacts on the reconstructed surfaces. NDR also achieves considerable reconstruction accuracy when handling rapid movement (Fig. 4).
For quantitative evaluation, we calculate geometry errors on several testing sequences, following previous works [9, 8, 39]. The geometry metric compares depth values inside the object mask to the reconstructed geometry. The sequences are representative of various object classes and cases, including a domestic animal (seq. Dog from DeepDeform [9]), a rotating body, human-object interaction, general objects (seq. Alex, Hat, Frog from KillingFusion [54], respectively), and human heads (seq. Human1, Human2 from our collected dataset). The quantitative results are shown in Tab. 1. Our NDR outperforms previous works [44, 39], owing to jointly optimizing geometry, appearance and motion over the entire video. On seq. Alex, the geometry error of OcclusionFusion [39] is lower than ours; however, NDR handles topology variation well, as shown in the corresponding qualitative results on the right of Fig. 3.
RGB based method. Fig. 5 shows several comparisons with a recent RGB based method, BANMo [65]. BANMo takes an RGB sequence as input and optimizes geometry, appearance and motion based on precomputed annotations, including camera poses and optical flow. For a fair comparison, we also compare BANMo [65] with our NDR under RGB-only supervision, where both are given the same camera initialization and frame-wise masks. In the RGB-only setting, both our method and BANMo make some structural mistakes, such as the human arm in ours and Snoopy's ears in BANMo. Moreover, compared to our RGB-only results, BANMo suffers more from local geometry noise, which is likely due to errors in the precomputed annotations. In contrast, our method does not rely on any precomputed annotations and produces smoother results. With the RGB-D sequence as input, our full NDR model performs robustly and models geometry details and rapid motions well.
4.3 Robustness on Camera Initialization
To systematically analyze the robustness of our camera pose optimization, we add an experiment that tests various degrees of noise on both real and synthetic data. We choose two sequences with small rigid motion, one from the DeepDeform [9] dataset (a body with moving joints, 200 frames) and one from the AMA [59] dataset (a Samba dancer, 175 monocular frames). As shown in Tab. 2, we add Gaussian noise with a standard deviation of 5, 10, 20, 40, and 60 degrees to the initial Euler angles and report the mean geometry errors (0 denotes no added noise). The results show that NDR is robust against noisy camera poses to a certain extent, owing to its neural implicit representation and extensive optimization with RGB-D observations. If the standard deviation of the Gaussian noise exceeds 20 degrees, the reconstruction quality is noticeably affected (geometry error over 1 cm). We refer the reader to the supplementary material for qualitative results.
[Fig. 6 panel labels: Input RGB, Ours (only depth), Ours (6D motion), Ours (full).]
4.4 Ablation Studies
We evaluate 3 components of our NDR regarding their effects on the final reconstruction result.
Depth cues. We evaluate the reconstruction results with only RGB supervision, i.e., removing the depth images and supervising only with the loss terms Lmask, Lcolor, Lreg. As shown in Fig. 5, the reconstruction results with only RGB information are not correct (especially when seen from a novel view), since monocular RGB sequences suffer from depth ambiguity.
RGB cues. We also evaluate the reconstruction results with only depth supervision, i.e., removing the RGB images and the color loss term Lcolor. As shown in Fig. 6, the reconstructed shapes lack geometric details as color information is not used.
Bijective map Fh. To verify the effect of our proposed bijective map Fh (Sec. 3.1), we replace it with a 6D motion representation in SE(3) space. As shown in Fig. 6, since Fh satisfies cycle consistency strictly, it is less prone to accumulating artifacts and thus performs better in local geometry. In comparison, the irreversible transformation often fails to preserve high-quality surfaces.
4.5 Evaluation of Cycle Consistency
We perform a numerical experiment to evaluate cycle consistency of the whole deformation field. In the experiment, we randomly select 3 frames (indexed by i, j, k) as a group in a video sequence. Given points on one frame, we calculate the corresponding coordinates on another frame and record this scene flow as f. This yields two deformation paths from frame i to frame k: the direct flow fik and the composite flow fij + fjk. To evaluate the cycle consistency, we calculate the Euclidean norm of fij + fjk − fik as the error; the smaller the error, the better cycle consistency (invariance to the deformation path) is maintained. We conduct experiments on a human body rotating 360 degrees (200 frames) from the KillingFusion [54] dataset and a talking head (300 frames) from our captured dataset. In each experiment, we randomly select 1,000 groups of frames and calculate the mean error over depth points on the object surface. Since the topology-aware network is irreversible, we optimize the corresponding positions with fixed network parameters and the ADAM optimizer [32]. As a comparison, we also evaluate our framework with the 6D motion representation. As shown in Tab. 3, cycle consistency of the whole deformation field across frames is maintained quite well by the bijective map Fh, although it can be affected by the irreversible topology-aware network.
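The error computation can be sketched as follows, assuming a function deform(p, i, j) that maps points p from frame i to their corresponding coordinates in frame j through the learned deformation field (an illustrative interface, not the authors' code).

import torch

def cycle_error(deform, points_i, i, j, k):
    # Direct flow i -> k and composite flow i -> j -> k for the same points.
    f_ik = deform(points_i, i, k) - points_i
    p_j = deform(points_i, i, j)
    f_ij = p_j - points_i
    f_jk = deform(p_j, j, k) - p_j
    # Euclidean norm of fij + fjk - fik, averaged over surface points.
    return torch.linalg.norm(f_ij + f_jk - f_ik, dim=-1).mean()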
5 Conclusion
We have presented NDR, a new approach for reconstructing the high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D video without any template priors. Unlike previous works, NDR integrates observed color and depth into a canonical SDF and radiance field for joint optimization of surface and deformation. To maintain cycle consistency throughout the whole video, we propose an invertible bijective mapping between observation space and canonical space, which naturally fits non-rigid motions. To handle topology changes, we employ a topology-aware network to model topology-variant correspondence. On public datasets and our collected dataset, NDR shows strong empirical performance in modeling objects of different classes and handling various challenging cases. Negative societal impact and limitation: like many other works with neural implicit representations, our method requires substantial computational resources and optimization time, which can be a concern for energy consumption. We will explore alleviating these issues in future work.
Acknowledgements. This research was partially supported by the National Natural Science Foundation of China (No.62122071, No.62272433), the Fundamental Research Funds for the Central Universities (No. WK3470000021), and Alibaba Group through Alibaba Innovation Research Program (AIR). The opinions, findings, conclusions, and recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies or the government. We thank the authors of OcclusionFusion for sharing the fusion results of several RGB-D sequences. We also thank the authors of BANMo for their suggestions on experimental parameter settings. Special thanks to Prof. Weiwei Xu for providing some help. | 1. What is the focus and contribution of the paper regarding dynamic object reconstruction?
2. What are the strengths of the proposed approach, particularly in integrating existing solutions?
3. What are the weaknesses of the paper, especially regarding camera pose initialization and writing clarity?
4. Do you have any questions regarding the implementation of invertible warping fields or equation explanations?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper tackles the problem of dynamic object reconstruction from a single RGBD video. It solves the problem in a differentiable rendering pipeline and designs a novel 3D warping function that is guaranteed to be invertible. It achieves high-quality reconstruction results on KillingFusion, DeepDeform and iPhone videos.
Strengths And Weaknesses
Strengths
The method is conceptually simple and effective. The authors did well in integrating the best existing solutions (neural surface, canonical hyper-space, and invertible NNs) into a system that not only works well but also stays clean.
The bijective warping field is an interesting technical improvement over SE(3) fields. it ensures 3D warping functions to be invertible by design.
The results are high-quality and state-of-the-art.
The paper is well-written with adequate amount of details. Design choices are also well-motivated.
Weakness
Some details can be clarified. As there is genuine ambiguity between camera motion and object motion, it is worth explaining the camera pose initialization in more detail and analyze the failure modes. When does Robust ICP fail? For instance, does it fail when the object exhibits rotational motion? How robust is the method to inaccurate camera pose initialization?
Some writing could be improved (also see questions). The technical part of sec. 3.1 is not easy to follow, possibly due to a lack of concise equations in l148-165. It is also not obvious what design choice made it invertible. The key idea of coordinate splitting is only mentioned in l150 and the high-level intuition is not conveyed.
Questions
For the implementation of invertible warping fields in 155-165, if points in the canonical space happen to have the same w′ coordinate, their predicted R_uv is constrained to be the same. Similarly, if points in the observation space have the same (u, v) coordinates, their predicted δw is constrained to be the same. Does this cause undesirable artifacts, for instance, when dealing with an object containing a flat surface with the same w′ coordinates?
Eq. (13) is not clearly explained. What is v_p? What does it mean to force points on depth images to be visible from the camera?
Fig. 6: the difference between 6D motion fields and full is not obvious. Consider adding more descriptive captions to highlight the difference or choose the example more carefully.
Limitations
Yes |
NIPS | Title
A Loss Function for Generative Neural Networks Based on Watson’s Perceptual Model
Abstract
To train Variational Autoencoders (VAEs) to generate realistic imagery requires a loss function that reflects human perception of image similarity. We propose such a loss function based on Watson’s perceptual model, which computes a weighted distance in frequency space and accounts for luminance and contrast masking. We extend the model to color images, increase its robustness to translation by using the Fourier Transform, remove artifacts due to splitting the image into blocks, and make it differentiable. In experiments, VAEs trained with the new loss function generated realistic, high-quality image samples. Compared to using the Euclidean distance and the Structural Similarity Index, the images were less blurry; compared to deep neural network based losses, the new approach required fewer computational resources and generated images with fewer artifacts.
1 Introduction
Variational Autoencoders (VAEs) [11] are generative neural networks that learn a probability distribution over X from training data D = {x0, ..., xn} ⊂ X. New samples are generated by drawing a latent variable z ∈ Z from a distribution p(z) and using z to sample x ∈ X from a conditional decoder distribution p(x|z). The distribution p(x|z) induces a similarity measure on X. A generic choice is a normal distribution p(x|z) = N(µx(z), σ²) with a fixed variance σ². In this case the underlying energy function is L(x, x′) = (1/2σ²) ‖x − x′‖². Thus, the model assumes that for two samples which are sufficiently close to each other (as measured by σ²), the similarity measure can be well approximated by the squared loss. The choice of L is crucial for the generative model. For image generation, traditional pixel-by-pixel loss metrics such as the squared loss are popular because of their simplicity, ease of use and efficiency [5]. However, they perform poorly at modeling the human perception of image similarity [30]. Most VAEs trained with such losses produce images that look blurred [3, 5]. Accordingly, perceptual loss functions for VAEs are an active research area. These loss functions fall into two broad categories, namely explicit models, as exemplified by the Structural Similarity Index Model (SSIM) [25], and learned models. The latter include models based on deep feature embeddings extracted from image classification networks [5, 30, 8] as well as combinations of VAEs with discriminator networks of Generative Adversarial Networks (GANs) [4, 13, 18].
Perceptual loss functions based on deep neural networks have produced promising results. However, features optimized for one task need not be a good choice for a different task. Our experimental results suggest that powerful metrics optimized on specific datasets may not generalize to broader categories of images. We argue that using features from networks pre-trained for image classification in loss functions for training VAEs for image generation may be problematic, because invariance properties beneficial for classification make it difficult to capture details required to generate realistic images.
Code and experiments are available at github.com/SteffenCzolbe/PerceptualSimilarity
[Figure 1: example similarity judgements by humans and by the Watson-DFT metric.]
In this work, we introduce a loss function based on Watson’s visual perception model [27], an explicit perceptual model used in image compression and digital watermarking [15]. The model accounts for the perceptual phenomena of sensitivity, luminance masking, and contrast masking. It computes the loss as a weighted distance in frequency space based on a Discrete Cosine Transform (DCT). We optimize the Watson model for image generation by (i) replacing the DCT with the discrete Fourier Transform (DFT) to improve robustness against translational shifts, (ii) extending the model to color images, (iii) replacing the fixed grid in the block-wise computations by a randomized grid to avoid artifacts, and (iv) replacing the max operator to make the loss function differentiable. We trained the free parameters of our model and several competitors using human similarity judgement data ([30], see Figure 1 for examples). We applied the trained similarity measures to image generation of numerals and celebrity faces. The modified Watson model generalized well to the different image domains and resulted in imagery exhibiting less blur and far fewer artifacts compared to alternative approaches.
2 Background
In this section we briefly review variational autoencoders and Watson’s perceptual model.
Variational Autoencoders Samples from VAEs [11] are drawn from p(x) = ∫ p(x|z) p(z) dz, where p(z) is a prior distribution that can be freely chosen and p(x|z) is typically modeled by a deep neural network. The model is trained using a variational lower bound on the likelihood
log p(x) ≥ Eq(z|x) {log p(x|z)} − βKL(q(z|x)‖p(z)) , (1)
where q(z|x) is an encoder function designed to approximate p(z|x) and β is a scaling factor. We choose p(z) = N (0, I) and q(z|x) = N (µz(x),Σz(x)), where the covariance matrix Σz(x) is restricted to be diagonal and both µz and Σz(x) are modelled by deep neural networks.
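For concreteness, a minimal sketch of the resulting training objective for one sample is shown below, using a single-sample Monte Carlo estimate of the expectation and a generic reconstruction loss L as in Eq. (2); all names are illustrative.

import torch

def beta_vae_loss(x, mu_z, logvar_z, decode, recon_loss, beta):
    # Reparameterized sample from q(z|x) = N(mu_z, diag(exp(logvar_z))).
    z = mu_z + torch.randn_like(mu_z) * torch.exp(0.5 * logvar_z)
    # Reconstruction term E_q[L(x, mu_x(z))], approximated with one sample.
    rec = recon_loss(x, decode(z))
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = -0.5 * torch.sum(1 + logvar_z - mu_z.pow(2) - logvar_z.exp(), dim=-1)
    return rec + beta * kl.mean()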
Loss functions for VAEs It is possible to incorporate a wide range of loss functions into VAE training. If we choose p(x|z) ∝ exp(−L(x, µx(z))), where µx is a neural network and we ensure that L leads to a proper probability function, the first term of (1) becomes
Eq(z|x) {log p(x|z)} = −Eq(z|x) {L(x, µx(z))}+ const . (2)
Choosing L freely comes at the price that we typically lose the ability to sample from p(x) directly. If the loss is a valid unnormalized log-probability, Markov Chain Monte Carlo methods can be applied. In most applications, however, it is assumed that µx(z), z ∼ p(z) is a good approximation of p(x) and most articles present means instead of samples. Typical choices for L are the squared loss L2(x,x ′) = ‖x−x′‖2 and p-norms Lp(x,x ′) = ‖x−x′‖p. A generalization of p-norm based losses is the “General and Adaptive Robust Loss Function” [1], which we refer to as Adaptive-Loss. When used to train VAEs for image generation, the Adaptive-Loss is applied to 2D DCT transformations of entire images. Roughly speaking, it then adapts one shape parameter (similar to a p-value) and one scaling parameter per frequency during training, simultaneously learning a loss function and a
generative model. A common visual similarity metric based on image fidelity is given by Structured Similarity (SSIM) [25], which bases its calculation on the covariance of patches. We refer to section A in the supplementary material for a description of SSIM.
Another approach to define loss functions is to extract features using a deep neural network and to measure the differences between the features from original and reconstructed images [5]. In [5], it is proposed to consider the first five layers L = {1, . . . , 5} of VGGNet [21]. In [30], different feature extraction networks, including AlexNet [12] and SqueezeNet [6], are tested. Furthermore, the metrics are improved by weighting each feature based on data from human perception experiments (see Section 4.1). With adaptive weights ωlc ≥ 0 for each feature map, the resulting loss function reads
Lfcw(x, x′) = ∑_{l∈L} (1/(Hl·Wl)) ∑_{h,w,c=1}^{Hl,Wl,Cl} ωlc (y^l_hwc − ŷ^l_hwc)² ,  (3)
where Hl, Wl and Cl are the height, width and number of channels (feature maps) in layer l. The normalized Cl-dimensional feature vectors are denoted by y^l_hw = F^l_hw(x)/‖F^l_hw(x)‖ and ŷ^l_hw = F^l_hw(x′)/‖F^l_hw(x′)‖, where F^l_hw(x) ∈ R^{Cl} contains the features of image x in layer l at spatial coordinates h, w (see [30] for details).
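A sketch of this weighted feature loss, assuming features(x) returns a list of feature maps F^l(x) of shape (C_l, H_l, W_l) for the selected layers and weights is a matching list of non-negative per-channel weights (illustrative code; see [30] for the reference implementation):

import torch
import torch.nn.functional as F

def deep_feature_loss(features, weights, x, x_prime):
    loss = 0.0
    for feat_x, feat_y, w_l in zip(features(x), features(x_prime), weights):
        # Unit-normalize the feature vectors along the channel dimension.
        y = F.normalize(feat_x, dim=0)
        y_hat = F.normalize(feat_y, dim=0)
        _, H, W = y.shape
        # Channel-weighted squared differences, averaged over spatial positions.
        loss = loss + (w_l.view(-1, 1, 1) * (y - y_hat) ** 2).sum() / (H * W)
    return loss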
Watson’s Perceptual Model Watson’s perceptual model of the human visual system [27] describes an image as a composition of base images of different frequencies. It accounts for the perceptual impact of luminance masking, contrast masking, and sensitivity. Input images are first divided into K disjoint blocks of B ×B pixels, where B = 8. Each block is then transformed into frequency-space using the DCT. We denote the DCT coefficient (i, j) of the k-th block by Cijk for 1 ≤ i, j ≤ B and 1 ≤ k ≤ K.
The Watson model computes the loss as weighted p-norm (typically p = 4) in frequency-space
DWatson(C, C′) = ( ∑_{i,j,k=1}^{B,B,K} | (Cijk − C′ijk) / Sijk |^p )^{1/p} ,  (4)
where S ∈ RK×B×B is derived from the DCT coefficients C. The loss is not symmetric as C′ does not influence S. To compute S, an image-independent sensitivity table T ∈ RB×B is defined. It stores the sensitivity of the image to changes in its individual DCT components. The table is a function of a number of parameters, including the image resolution and the distance of an observer to the image. It can be chosen freely dependent on the application, a popular choice is given in [2]. Watson’s model adjusts T for each block according to the block’s luminance. The luminance-masked threshold TLijk is given by
TLijk = Tij (C00k / C̄00)^α ,  (5)
where α is a constant with a suggested value of 0.649, C00k is the d.c. coefficient (average brightness) of the k-th block in the original image, and C̄00 is the average luminance of the entire image. As a result, brighter regions of an image are less sensitive to changes.
Contrast masking accounts for the reduction in visibility of one image component by the presence of another. If a DCT frequency is strongly present, an absolute change in its coefficient is less perceptible compared to when the frequency is less pronounced. Contrast masking gives
Sijk = max(TLijk , |Cijk|^r · TLijk^{1−r}) ,  (6)
where the constant r ∈ [0, 1] has a suggested value of 0.7.
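The two masking steps can be sketched as follows for DCT coefficients C of shape (K, B, B) and a sensitivity table T of shape (B, B); parameter values follow the suggestions above, and the code is only an illustration of Eqs. (5)-(6), not the authors' implementation:

import torch

def watson_thresholds(C, T, alpha=0.649, r=0.7):
    # Eq. (5): luminance masking, scaling T by the relative block brightness.
    C00 = C[:, 0, 0]                      # d.c. coefficient of each block
    T_L = T.unsqueeze(0) * (C00 / C00.mean()).view(-1, 1, 1) ** alpha
    # Eq. (6): contrast masking.
    S = torch.maximum(T_L, C.abs() ** r * T_L ** (1.0 - r))
    return S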
3 Modified Watson’s Perceptual Model
A differentiable model To make the loss function differentiable we replace the maximization in the computation of S by a smooth-maximum function smax(x1, x2, . . .) = (∑_i xi e^{xi}) / (∑_j e^{xj}), and the equation for S becomes
S̃ijk = smax(TLijk , |Cijk|^r · TLijk^{1−r}) .  (7)
For numerical stability, we introduce a small constant ε = 10⁻¹⁰ and arrive at the trainable Watson loss for the coefficients of a single channel
LWatson(C, C′) = ( ε + ∑_{i,j,k=1}^{B,B,K} | (Cijk − C′ijk) / S̃ijk |^p )^{1/p} .  (8)
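A sketch of the resulting trainable single-channel loss, combining the luminance masking of Eq. (5) with the smooth maximum of Eq. (7) and the stabilized p-norm of Eq. (8); in practice T, alpha, r and p can all be registered as learnable parameters (illustrative code, not the released implementation):

import torch

def smooth_max(a, b):
    # Eq. (7): smax(x1, x2, ...) = sum_i x_i e^{x_i} / sum_j e^{x_j}.
    x = torch.stack([a, b], dim=0)
    return (torch.softmax(x, dim=0) * x).sum(dim=0)

def watson_loss(C, C_prime, T, alpha=0.649, r=0.7, p=4.0, eps=1e-10):
    C00 = C[:, 0, 0]
    T_L = T.unsqueeze(0) * (C00 / C00.mean()).view(-1, 1, 1) ** alpha
    S_tilde = smooth_max(T_L, C.abs() ** r * T_L ** (1.0 - r))
    # Eq. (8): eps-stabilized weighted p-norm in frequency space.
    return (eps + (((C - C_prime) / S_tilde).abs() ** p).sum()) ** (1.0 / p)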
Extension to color images Watson’s perceptual model is defined for a single channel (i.e., greyscale). To make the model applicable to color images, we aggregate the loss calculated on multiple separate channels to a single loss value.1 We represent color images in the YCbCr format, consisting of the luminance channel Y and chroma channels Cb and Cr. We calculate the single-channel losses separately and weight the results. Let LY, LCb, LCr be the loss values in the luminance, blue-difference and red-difference components for any greyscale loss function. Then the corresponding multi-channel loss L is calculated as
L = λY LY + λCb LCb + λCr LCr ,  (9)
where the weighting coefficients are learned from data, see below.
Fourier transform In order to be less sensitive to small translational shifts, we replace the DCT with a discrete Fourier Transform (DFT), which is in accordance with Watson’s original work (e.g., [29, 26]). The later use of the DCT was most likely motivated by its application within JPEG [24, 28]. The DFT separates a signal into amplitude and phase information. Translation of an image affects phase, but not amplitude. We apply Watson’s model on the amplitudes while we use the cosine-distance for changes in phase information. Let A ∈ RB×B be the amplitudes of the DFT and let Φ ∈ RB×B be the phase-information. We then obtain
LWatson-DFT(A, Φ, A′, Φ′) = LWatson(A, A′) + ∑_{i,j,k=1}^{B,B,K} wij · arccos[ cos(Φijk − Φ′ijk) ] ,  (10)
where wij > 0 are individual weights of the phase-distances that can be learned (see below).
The change of representation going from DCT to DFT disentangles amplitude and phase information, but does not increase the number of parameters as the DFT of real images results in a Hermitian complex coefficient matrix (i.e., the element in row i and column j is the complex conjugate of the element in row j and column i) .
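A sketch of the DFT variant in Eq. (10), assuming both images have already been split into blocks of shape (K, B, B), w is a (B, B) tensor of learnable positive phase weights, and watson_loss is the amplitude loss sketched above (illustrative code):

import torch

def watson_dft_loss(blocks, blocks_prime, T, w, **kw):
    # Per-block 2D DFT; amplitudes are insensitive to translation, phases are not.
    X, Xp = torch.fft.fft2(blocks), torch.fft.fft2(blocks_prime)
    A, Ap = X.abs(), Xp.abs()
    Phi, Phip = torch.angle(X), torch.angle(Xp)
    # Eq. (10): Watson loss on amplitudes plus weighted cosine phase distance.
    amp_term = watson_loss(A, Ap, T, **kw)
    phase_dist = torch.arccos(torch.cos(Phi - Phip).clamp(-1.0, 1.0))
    return amp_term + (w.unsqueeze(0) * phase_dist).sum()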
Grid translation Computing the loss from disjoint blocks works for the original application of Watson’s perceptual model, lossy compression. However, a powerful generative model can take advantage of the static blocks, leading to noticeable artifacts at block boundaries. We solve this problem by randomly shifting the block-grid in the loss-computation during training. The offsets are drawn uniformly from the integer interval {−4, . . . , 4} in both dimensions. In expectation, this is equivalent to computing the loss via a sliding window as in SSIM.
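The random grid shift can be implemented by drawing one offset per spatial dimension and applying it to both images before splitting them into B × B blocks, for example (a sketch for a single-channel image):

import torch

def shifted_blocks(img, img_prime, B=8, max_shift=4):
    # One offset in {-4, ..., 4} per dimension, shared by both images so that
    # the loss still compares corresponding blocks.
    dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    oy, ox = dy + max_shift, dx + max_shift   # non-negative crop offsets
    def to_blocks(x):                         # x: (H, W) single-channel image
        x = x[oy:, ox:]
        H, W = x.shape[0] // B * B, x.shape[1] // B * B
        x = x[:H, :W]
        return x.unfold(0, B, B).unfold(1, B, B).reshape(-1, B, B)
    return to_blocks(img), to_blocks(img_prime)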
Free parameters When benchmarking Watson’s perceptual model with the suggested parameters on data from a Two-Alternative Forced-Choice (2AFC) task measuring human perception of image similarity, see Subsection 4.1, we found that the model underestimated differences in images with strong high-frequency components. This allows compression algorithms to improve compression ratios by omitting noisy image patterns, but does not model the full range of human perception and can be detrimental in image generation tasks, where the underestimation of errors in these frequencies might lead to the generation of an unnatural amount of noise. We solve this problem by training all parameters of all loss variants, including p,T, α, r, wij and for color images λY, λCb and λCr, on the 2AFC dataset (see Section 4.1).
1Many perceptually oriented image processing domains choose color representations that separate luminance from chroma. For example, the HSV color model distinguishes between hue, saturation, and value, and formats such as Lab or YCbCr distinguish between a luminance value and two color planes [22]. The separation of brightness from color information is motivated by a difference in perception. The luminance of an image has a larger influence on human perception than chromatic components [20]. Perceptual image processing standards such as JPEG compression utilize this by encoding chroma at a lower resolution than luminance [24].
4 Experiments
We empirically compared our loss function to traditional baselines and the recently proposed Adaptive-Loss [1] as well as deep neural network based approaches [30]. First, we trained the free parameters of the proposed Watson model as well as of loss functions based on VGGNet [21] and SqueezeNet [6] to mimic human perception on data from human perceptual judgements. Next, we applied the similarity metrics as loss functions of VAEs in two image generation tasks. Finally, we evaluated the perceptual performance and investigated individual error cases.
4.1 Training on data from human perceptual experiments
The modified Watson model, referred to as Watson-DFT, as well as LPIPS-VGG and LPIPS-Squeeze have tunable parameters, which have to be chosen before use as a loss function. We train these parameters using the same data. For LPIPS-VGG and LPIPS-Squeeze, we followed the methodology called LPIPS (linear) in [30] and trained feature weights according to (3) for the first 5 or 7 layers, respectively.
We trained on the Two-Alternative Forced-Choice (2AFC) dataset of perceptual judgements published as part of the Berkeley-Adobe Perceptual Patch Similarity (BAPPS) dataset [30]. Participants were asked which of two distortions x1, x2 of a 64 × 64 color image x0 is more similar to the reference x0. A human reference judgement p ∈ [0, 1] is provided indicating whether the human judges on average deemed x1 (p < 0.5) or x2 (p > 0.5) more similar to x0.²
The dataset is based on a total of 20 different distortions, with the strength of each distortion randomized per sample. Some distortions can be combined, giving 308 combinations. Figure 1 and Fig. B.7 in the supplementary material show examples.
To train a loss function L on the 2AFC dataset, we follow the schema outlined in Figure 2. We first compute the perceptual distances d0 = L(x0,x1) and d1 = L(x0,x2). Then these distances are converted into a probability to determine whether (x0,x1) is perceptually more similar than (x0,x2). To calculate the probability based on distance measures, we use
G(d0, d1) = 1/2 if d0 = d1 = 0, and G(d0, d1) = σ( γ (d1 − d0) / (|d1| + |d0|) ) otherwise ,  (11)
where σ(x) is the sigmoid function with learned weight γ > 0 modelling the steepness of the slope. This computation is invariant to linear transformations of the loss functions.
The training loss between the predicted judgment G(d0, d1) and the human judgment p is calculated by the binary cross-entropy:
L2AFC(d0, d1) = −[ p log(G(d0, d1)) + (1 − p) log(1 − G(d0, d1)) ]  (12)
This objective function was used to adapt the parameters of all considered metrics (used as loss functions in the VAE experiments). We trained the DCT based loss Watson-DCT and the DFT based loss Watson-DFT, see (8) and (10), respectively, both for single-channel greyscale input as well as for color images with the multi-channel aggregator (9). We compared our results to the linearly weighted deep loss functions from [30], which we reproduced using the original methodology, which differs from (3) only in modeling G as a shallow neural network with all positive weights.
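A sketch of one 2AFC training step for a perceptual metric with trainable parameters, following Eqs. (11)-(12); gamma is the learned slope, p the human judgement, and a small constant is added to the denominator for numerical safety (illustrative code):

import torch

def judgement_prob(d0, d1, gamma):
    # Eq. (11): map the two perceptual distances to a probability.
    g = torch.sigmoid(gamma * (d1 - d0) / (d1.abs() + d0.abs() + 1e-12))
    both_zero = (d0 == 0) & (d1 == 0)
    return torch.where(both_zero, torch.full_like(g, 0.5), g)

def two_afc_loss(metric, x0, x1, x2, p, gamma):
    d0, d1 = metric(x0, x1), metric(x0, x2)
    g = judgement_prob(d0, d1, gamma)
    # Eq. (12): binary cross-entropy against the human judgement p.
    return -(p * torch.log(g) + (1 - p) * torch.log(1 - g))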
2The three image patches x0,x1,x2 and label p form a record. The dataset contains a total of 151,400 training records and 36,500 test records. Each training record was judged by 2, each test record by 5 humans.
4.2 Application to VAEs
We evaluated VAEs trained with our pre-trained modified Watson model, the pre-trained deep-learning based LPIPS-VGG and LPIPS-Squeeze, and the baselines SSIM and Adaptive-Loss, which are not pre-trained. The latter adapted the parameters of the loss function during VAE training. We used the implementations provided by the original authors when available. Since quantitative evaluation of generative models is challenging [23], we qualitatively assessed the generation, reconstruction and latent-value interpolation of each model on two independent datasets.3 We considered the gray-scale MNIST dataset [14] and the celebA dataset [16] of celebrity faces. The images of the celebA dataset are of higher resolution and visual complexity compared to MNIST. The feature space dimensionalities for the two models, MNIST-VAE and celebA-VAE, were 2 and 256, respectively.4
Results of reconstructed samples from models trained on celebA are given in Fig. 4. Generated images of all models are given in Fig. 5 and Supplement D. For the two-dimensional feature space of the MNIST model, Fig. 3 shows reconstructions from z-values that lie on a grid over z ∈ [−1.5, 1.5]². Additional results showing interpolations and reconstructions of the models are given in Supplement D.
Handwritten digits The VAE trained with the Watson-DFT captured the MNIST dataset well (see Fig. 3 and supplementary Fig. D.8). The visualization of the latent space shows natural-looking handwritten digits. All generated samples are clearly identifiable as numbers. The models trained with SSIM and Adaptive-Loss produced similar results, but edges are slightly less sharp (Fig. D.8). The VAE trained with the LPIPS-VGG metric produced unnatural-looking samples, very distinct from the original dataset. Samples generated by VAEs trained with LPIPS-Squeeze were not recognizable as digits. Both deep feature based metrics performed badly on this simple task; they did not generalize to this domain of images, which differs from the 2AFC images used to tune the learned similarity metrics.
Celebrity photos The model trained with the Watson-DFT metric generated samples of high visual fidelity. Background patterns and haircuts were defined and recognizable, and even strands of hair were partially visible. The images showed no blurring and few artifacts. However, objects lacked fine details like skin imperfections, leading to a smooth appearance. Samples from this generative model overall looked very good and covered the full range of diversity of the original dataset.
The VAE trained with SSIM showed the typical problems of training with traditional losses. Well-aligned components of the images, such as eyes and mouth, were realistically generated. More specific features such as the background and glasses, or features with a greater amount of spatial
3We provide the source code for our methods and the experiments, including the scripts that randomly sampled from the models to generate the plots in this article. We encourage to run the code and generate more samples to verify that the presented results are representative.
4The full architectures are given in supplementary material Appendix C. The optimization algorithm was Adam [10]. The initial learning rate was 10⁻⁴ and decreased exponentially throughout training by a factor of 2 every 100 epochs for the MNIST-VAE, and every 20 epochs for the celebA-VAE. For all models, we first performed a hyper-parameter search over the regularization parameter β in (1). We tested β = e^λ for λ ∈ Z for 50 epochs on the MNIST set and 10 epochs on the celebA set, then selected the best performing hyper-parameter by visual inspection of generated samples. Values selected for training the full model are shown in Table C.4 in the supplement. For each loss function, we trained the MNIST-VAE for 250 epochs and the celebA-VAE for 100 epochs.
uncertainty, such as hair, were very blurry or not generated at all. The model trained with Adaptive-Loss improves on color accuracy, but blurring is still an issue. The VAE trained with the LPIPS-VGG metric generated samples and visual patterns of the original dataset very well. Minor details such as strands of hair, skin imperfections, and reflections were generated very accurately. However, very strong artifacts were present (e.g., in the form of grid-like patterns, see Fig. 5 (c)). The Adaptive-Loss gave similar results to SSIM, see supplementary Fig. D.11 (a). The VAE trained with LPIPS-Squeeze showed very strong artifacts in reconstructed images as well as generated images, see supplementary
Fig. D.11 (b)).
4.3 Perceptual score
We used the validation part of the 2AFC dataset to compute perceptual scores and investigated similarity judgements on individual samples of the set. The agreement with human judgements is measured by p·p̂ + (1 − p)(1 − p̂) as in [30].5 A human reference score was calculated using p = p̂. The results are summarized in Figure 6. Overall, the scores were similar to the results in [30], which verifies our methodology. We can see that the explicit approaches (L2 and SSIM) performed similarly. Adaptive-Loss, despite the ability to adapt to the dataset, offers no improvement over the baselines. Watson-DFT performed considerably better, but not as well as LPIPS-VGG or LPIPS-Squeeze. We observe that the ability of metrics to learn perceptual judgement grows with the degrees of freedom (>1000 parameters for deep models, <150 for Watson-based metrics).
Inspecting the errors revealed qualitative differences between the metrics; some representative examples are shown in Fig. 1. We observed that the deep networks are good at semantic matching (see the biker in Fig. 1), but underestimate the perceptual impact of graphical artifacts such as
noise (see treeline) and blur. We argue that this is because the features were originally optimized for object recognition, where invariance against distortions and spatial shifts is beneficial. In contrast, the Watson-based metric is sensitive to changes in frequency (noise, blur) and large translations.
4.4 Resource requirements
During training, computing and back-propagating the loss requires computational resources, which are then unavailable for the VAE model and data. We measure the resource requirements in a typical learning scenario. Mini-batches of 128 images of size 64× 64 with either one (greyscale) or three channels (color) were forward-fed through the tested loss functions. The loss with regard to one input image was back-propagated, and the image was updated accordingly using stochastic gradient descent. We measured the time for 500 iterations and the maximum GPU memory allocated. Results
5For example, when 80% of humans judged x1 to be more similar to the reference we have p = 0.2. If the metric predicted x1 to be closer, p̂ = 0, and we grant it 80% score for this judgement.
are averaged over three runs of the experiment. Implementation in PyTorch [19], 32-bit precision, executed on a Nvidia Quadro P6000 GPU. The results are shown in Table 1. We observe that deep model based loss functions require considerably more computation time and GPU memory. For example, evaluation of Watson-DFT was 6 times faster than LPIPS-VGG and required only a few megabytes of GPU memory instead of two gigabytes.
5 Discussion and conclusions
Discussion The 2AFC dataset is suitable to evaluate and tune perceptual similarity measures. But it considers a special, limited, partially artificial set of images and transformations. On the 2AFC task our metric based on Watson’s perceptual model outperformed the simple L1 and L2 metrics as well as the popular structural similarity SSIM [25] and the Adaptive-Loss [1].
Learning a metric using deep neural networks on the 2AFC data gave better results on the corresponding test data. This does not come as a surprise given the high flexibility of this purely data-driven approach. However, the resulting neural networks did not work well when used as a loss function for training VAEs, indicating weak generalization beyond the images and transformations in the training data. This is in accordance with (1) the fact that the higher flexibility of LPIPS-Squeeze compared to LPIPS-VGG yields a better fit in the 2AFC task (see also [30]) but even worse results in the VAE experiments; (2) that deep model based approaches profit from extensive regularization, especially by including the squared error in the loss function (e.g., [8]). In contrast, our approach based on Watson’s Perceptual Model is not very complex (in terms of degrees of freedom) and it has a strong inductive bias to match human perception. Therefore it extrapolates much better in a way expected from a perceptual metric/loss.
Deep neural networks for object recognition are trained to be invariant against translation, noise and blur, distortions, and other visual artifacts. We observed the invariance against noise and artifacts even after tuning on the data from human experiments, see Fig. 1. While these properties are important to perform well in many computer vision tasks, they are not desirable for image generation. The generator/decoder can exploit these areas of ‘blindness’ of the similarity metric, leading to significantly more visual artifacts in generated samples, as we observed in the image generation experiments.
Furthermore, the computational and memory requirements of neural network based loss functions are much higher compared to SSIM or Watson’s model, to an extent that limits their applicability in generative neural network training.
In our experiments, the Adaptive-Loss, which is constructed of many similar components to Watson’s perceptual model, did not perform much better than SSIM and considerably worse than Watson’s model. This shows that our approach goes beyond computing a general weighted distance measure between images transformed to frequency space.
Conclusion We introduced a novel image similarity metric and corresponding loss function based on Watson’s perceptual model, which we transformed into a trainable model and extended to color images. We replaced the underlying DCT by a DFT to disentangle amplitude and phase information in order to increase robustness against small shifts.
The novel loss function optimized on data from human experiments can be used to train deep generative neural networks to produce realistic looking, high-quality samples. It is fast to compute and requires little memory. The new perceptual loss function does not suffer from the blurring effects of traditional similarity metrics like Euclidean distance or SSIM, and generates less visual artifacts than current state-of-the-art losses based on deep neural networks.
Acknowledgments and Disclosure of Funding
CI acknowledges support by the Villum Foundation through the project Deep Learning and Remote Sensing for Unlocking Global Ecosystem Resource Dynamics (DeReEco).
Broader impact
The broader impact of our work is defined by the numerous applications of generative deep neural networks, for example the generation of realistic photographs and human faces, image-to-image translation with the special case of semantic-image-to-photo translation; face frontal view generation; generation of human poses; photograph editing, restoration and inpainting; and generation of super resolution images.
A risk of realistic image generation is of course the ability to produce “deepfakes”. Generative neural networks can be used to replace a person in an existing image or video by someone else. While this technology has positive applications (e.g., in the movie industry and entertainment in general), it can be abused. We refer to a recent article by Kietzmann et al. for an overview discussing positive and negative aspects, including potential misuse that can affect almost anybody: “With such a powerful technology and the increasing number of images and videos of all of us on social media, anyone can become a target for online harassment, defamation, revenge porn, identity theft, and bullying — all through the use of deepfakes” [9].
We also refer to [9] for existing and potential commercial applications of deepfakes, such as software that allows consumers to “try on cosmetics, eyeglasses, hairstyles, or clothes virtually” and video game players to “insert their faces onto their favorite characters”.
Our interest in generative neural networks, in particular variational autoencoders, is partially motivated by concrete applications in the analysis of remote sensing data. In a just started project, we will employ deep generative neural networks to the generation of geospatial data, which enables us to simulate the effect of human interaction w.r.t. ecosystems. The goal is to improve our understanding of these interactions, for example to analyse the influence of countermeasures such as afforestation in the context of climate change mitigation. | 1. What is the main contribution of the paper, and how does it differ from other approaches?
2. What are the strengths of the proposed approach, particularly in its adaptation of Watson's Perceptual Model?
3. What are the weaknesses of the paper, especially regarding its experimental evaluation and comparison with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any suggestions or recommendations for improving the paper, such as including a quantitative evaluation or discussing relevant metrics? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper proposes to use an adapted version of Watson's Perceptual Model to train a VAE for higher perceptual quality than e.g. SSIM or a deep-feature based loss.
Strengths
Interesting approach, thorough description and motivation for the adaptions to Watson's model. I found the description of the components of Watson's model interesting.
Weaknesses
W1) The paper (rightfully) dismisses MSE in the introduction, but I would have liked to see a VAE trained with MSE as a comparison. It seems at least for MNIST, this should give somewhat reasonable models? If not, please elaborate why.
W2) What's the difference between Deeploss-VGG/-Squeeze and the loss proposed in [29] (LPIPS)? As far as I know, they also use VGG and Squeeze. If it's the same, it would help understanding to call it the same. If not, a short note on what is different somewhere in L140-145 would be nice.
W3) The paper only qualitatively evaluates the proposed method and "encourages [the reader] to run the code and generate more samples" (footnote 3), and on the 2AFC dataset (Fig 6). While I agree that there is no state-of-the-art perceptual quality metric, I would still have appreciated a discussion of candidates (e.g. FID / VMAF / NIQE / NIMA), and a plot.
Minor:
- L on L58 not defined, consider adding it to the previous sentence (...loss functions $L$ into...)
- on L79, it would have helped to add a link to the further text, as I was unsure what the terms meant, but they are described (e.g. "luminance masking, contrast masking, and sensitivity *(described below)*") |
NIPS | Title
A Loss Function for Generative Neural Networks Based on Watson’s Perceptual Model
Abstract
To train Variational Autoencoders (VAEs) to generate realistic imagery requires a loss function that reflects human perception of image similarity. We propose such a loss function based on Watson’s perceptual model, which computes a weighted distance in frequency space and accounts for luminance and contrast masking. We extend the model to color images, increase its robustness to translation by using the Fourier Transform, remove artifacts due to splitting the image into blocks, and make it differentiable. In experiments, VAEs trained with the new loss function generated realistic, high-quality image samples. Compared to using the Euclidean distance and the Structural Similarity Index, the images were less blurry; compared to deep neural network based losses, the new approach required less computational resources and generated images with less artifacts.
1 Introduction
Variational Autoencoders (VAEs) [11] are generative neural networks that learn a probability distribution over X from training data D = {x0, ...,xn} ⊂ X . New samples are generated by drawing a latent variable z ∈ Z from a distribution p(z) and using z to sample x ∈ X from a conditional decoder distribution p(x|z). The distribution of p(x|z) induces a similarity measure on X . A generic choice is a normal distribution p(x|z) = N (µx(z), σ
2) with a fixed variance σ2. In this case the underlying energy-function is L(x,x′) = 12σ2 ‖x − x
′‖2. Thus, the model assumes that for two samples which are sufficiently close to each other (as measured by σ2), the similarity measure can be well approximated by the squared loss. The choice of L is crucial for the generative model. For image generation, traditional pixel-by-pixel loss metrics such as the squared loss are popular because of their simplicity, ease of use and efficiency [5]. However, they perform poorly at modeling the human perception of image similarity [30]. Most VAEs trained with such losses produce images that look blurred [3, 5]. Accordingly, perceptual loss functions for VAEs are an active research area. These loss functions fall into two broad categories, namely explicit models, as exemplified by the Structural Similarity Index Model (SSIM) [25], and learned models. The latter include models based on deep feature embeddings extracted from image classification networks [5, 30, 8] as well as combinations of VAEs with discriminator networks of Generative Adversarial Networks (GANs) [4, 13, 18].
Perceptual loss functions based on deep neural networks have produced promising results. However, features optimized for one task need not be a good choice for a different task. Our experimental results suggest that powerful metrics optimized on specific datasets may not generalize to broader categories of images. We argue that using features from networks pre-trained for image classification in loss functions for training VAEs for image generation may be problematic, because invariance properties beneficial for classification make it difficult to capture details required to generate realistic images.
Code and experiments are available at github.com/SteffenCzolbe/PerceptualSimilarity
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Human
Watson-DFT
In this work, we introduce a loss function based on Watson’s visual perception model [27], an explicit perceptual model used in image compression and digital watermarking [15]. The model accounts for the perceptual phenomena of sensitivity, luminance masking, and contrast masking. It computes the loss as a weighted distance in frequency space based on a Discrete Cosine Transform (DCT). We optimize the Watson model for image generation by (i) replacing the DCT with the discrete Fourier Transform (DFT) to improve robustness against translational shifts, (ii) extending the model to color images, (iii) replacing the fixed grid in the block-wise computations by a randomized grid to avoid artifacts, and (iv) replacing the max operator to make the loss function differentiable. We trained the free parameters of our model and several competitors using human similarity judgement data ([30], see Figure 1 for examples). We applied the trained similarity measures to image generation of numerals and celebrity faces. The modified Watson model generalized well to the different image domains and resulted in imagery exhibiting less blur and far fewer artifacts compared to alternative approaches.
2 Background
In this section we briefly review variational autoencoders and Watson’s perceptual model.
Variational Autoencoders Samples from VAEs [11] are drawn from p(x) = ∫
p(x|z)p(z) dz, where p(z) is a prior distribution that can be freely chosen and p(x|z) is typically modeled by a deep neural network. The model is trained using a variational lower bound on the likelihood
log p(x) ≤ Eq(z|x) {log p(x|z)} − βKL(q(z|x)‖p(z)) , (1)
where q(z|x) is an encoder function designed to approximate p(z|x) and β is a scaling factor. We choose p(z) = N (0, I) and q(z|x) = N (µz(x),Σz(x)), where the covariance matrix Σz(x) is restricted to be diagonal and both µz and Σz(x) are modelled by deep neural networks.
Loss functions for VAEs It is possible to incorporate a wide range of loss functions into VAEtraining. If we choose p(x|z) ∝ exp(−L(x, µx(z)), where µx is a neural network and we ensure that L leads to a proper probability function, the first term of (1) becomes
Eq(z|x) {log p(x|z)} = −Eq(z|x) {L(x, µx(z))}+ const . (2)
Choosing L freely comes at the price that we typically lose the ability to sample from p(x) directly. If the loss is a valid unnormalized log-probability, Markov Chain Monte Carlo methods can be applied. In most applications, however, it is assumed that µx(z), z ∼ p(z) is a good approximation of p(x) and most articles present means instead of samples. Typical choices for L are the squared loss L2(x,x ′) = ‖x−x′‖2 and p-norms Lp(x,x ′) = ‖x−x′‖p. A generalization of p-norm based losses is the “General and Adaptive Robust Loss Function” [1], which we refer to as Adaptive-Loss. When used to train VAEs for image generation, the Adaptive-Loss is applied to 2D DCT transformations of entire images. Roughly speaking, it then adapts one shape parameter (similar to a p-value) and one scaling parameter per frequency during training, simultaneously learning a loss function and a
generative model. A common visual similarity metric based on image fidelity is given by Structured Similarity (SSIM) [25], which bases its calculation on the covariance of patches. We refer to section A in the supplementary material for a description of SSIM.
Another approach to define loss functions is to extract features using a deep neural network and to measure the differences between the features from original and reconstructed images [5]. In [5], it is proposed to consider the first five layers L = {1, . . . , 5} of VGGNet [21]. In [30], different feature extraction networks, including AlexNet [12] and SqeezeNet [6], are tested. Furthermore, the metrics are improved by weighting each feature based on data from human perception experiments (see Section 4.1). With adaptive weights ωlc ≥ 0 for each feature map, the resulting loss function reads
Lfcw(x,x ′) =
∑
l∈L
1
HlWl
Hl,Wl,Cl ∑
h,w,c=1
ωlc(y l hwc − ŷ l hwc) 2 , (3)
where Hl, Wl and Cl are the height, width and number of channels (feature maps) in layer l. The normalized Cl-dimensional feature vectors are denoted by y l hw = F l hw(x)/‖F l hw(x)‖ and ŷlhw = F l hw(x ′)/‖F lhw(x ′)‖, where F lhw(x) ∈ R
Cl contains the features of image x in layer l at spatial coordinates h,w (see [30] for details).
Watson’s Perceptual Model Watson’s perceptual model of the human visual system [27] describes an image as a composition of base images of different frequencies. It accounts for the perceptual impact of luminance masking, contrast masking, and sensitivity. Input images are first divided into K disjoint blocks of B ×B pixels, where B = 8. Each block is then transformed into frequency-space using the DCT. We denote the DCT coefficient (i, j) of the k-th block by Cijk for 1 ≤ i, j ≤ B and 1 ≤ k ≤ K.
The Watson model computes the loss as weighted p-norm (typically p = 4) in frequency-space
DWatson(C,C ′) = p
√ √ √ √ B,B,K ∑
i,j,k=1
∣ ∣ ∣ ∣ Cijk −C′ijk Sijk ∣ ∣ ∣ ∣ p , (4)
where S ∈ RK×B×B is derived from the DCT coefficients C. The loss is not symmetric as C′ does not influence S. To compute S, an image-independent sensitivity table T ∈ RB×B is defined. It stores the sensitivity of the image to changes in its individual DCT components. The table is a function of a number of parameters, including the image resolution and the distance of an observer to the image. It can be chosen freely dependent on the application, a popular choice is given in [2]. Watson’s model adjusts T for each block according to the block’s luminance. The luminance-masked threshold TLijk is given by
TLijk = Tij
(
C00k
C̄00
)α
, (5)
where α is a constant with a suggested value of 0.649, C00k is the d.c. coefficient (average brightness) of the k-th block in the original image, and C̄00 is the average luminance of the entire image. As a result, brighter regions of an image are less sensitive to changes.
Contrast masking accounts for the reduction in visibility of one image component by the presence of another. If a DCT frequency is strongly present, an absolute change in its coefficient is less perceptible compared to when the frequency is less pronounced. Contrast masking gives
Sijk = max(TLijk , |Cijk| r T (1−r) Lijk ) , (6)
where the constant r ∈ [0, 1] has a suggested value of 0.7.
3 Modified Watson’s Perceptual Model
A differentiable model To make the loss function differentiable we replace the maximization in the computation of S by a smooth-maximum function smax(x1, x2, . . . ) = ∑ i xie xi ∑
j e xj and the equation
for S becomes S̃ijk = smax(TLijk , |Cijk| r T (1−r) Lijk ) . (7)
For numerical stability, we introduce a small constant ǫ = 10−10 and arrive at the trainable Watsonloss for the coefficients of a single channel
LWatson(C,C ′) = p
√ √ √ √ǫ+ B,B,K ∑
i,j,k=1
∣ ∣ ∣ ∣ Cijk −C′ijk
S̃ijk
∣ ∣ ∣ ∣ p . (8)
Extension to color images Watson’s perceptual model is defined for a single channel (i.e., greyscale). To make the model applicable to color images, we aggregate the loss calculated on multiple separate channels to a single loss value.1 We represent color images in the YCbCr format, consisting of the luminance channel Y and chroma channels Cb and Cr. We calculate the single-channel losses separately and weight the results. Let LY, LCb, LCr be the loss values in the luminance, blue-difference and red-difference components for any greyscale loss function. Then the corresponding multi-channel loss L is calculated as
L = λYLY + λCbLCb + λCrLCr , (9)
where the weighting coefficients are learned from data, see below.
Fourier transform In order to be less sensitive to small translational shifts, we replace the DCT with a discrete Fourier Transform (DFT), which is in accordance with Watson’s original work (e.g., [29, 26]). The later use of the DCT was most likely motivated by its application within JPEG [24, 28]. The DFT separates a signal into amplitude and phase information. Translation of an image affects phase, but not amplitude. We apply Watson’s model on the amplitudes while we use the cosine-distance for changes in phase information. Let A ∈ RB×B be the amplitudes of the DFT and let Φ ∈ RB×B be the phase-information. We then obtain
LWatson-DFT(A,Φ,A ′,Φ′) = LWatson(A,A ′) +
B,B,K ∑
i,j,k=1
wij arccos [ cos(Φijk − Φ ′ ijk) ] , (10)
where wij > 0 are individual weights of the phase-distances that can be learned (see below).
The change of representation going from DCT to DFT disentangles amplitude and phase information, but does not increase the number of parameters as the DFT of real images results in a Hermitian complex coefficient matrix (i.e., the element in row i and column j is the complex conjugate of the element in row j and column i) .
Grid translation Computing the loss from disjoint blocks works for the original application of Watson’s perceptual model, lossy compression. However, a powerful generative model can take advantage of the static blocks, leading to noticeable artifacts at block boundaries. We solve this problem by randomly shifting the block-grid in the loss-computation during training. The offsets are drawn uniformly in the interval J−4, 4K in both dimensions. In expectation, this is equivalent to computing the loss via a sliding window as in SSIM.
Free parameters When benchmarking Watson’s perceptual model with the suggested parameters on data from a Two-Alternative Forced-Choice (2AFC) task measuring human perception of image similarity, see Subsection 4.1, we found that the model underestimated differences in images with strong high-frequency components. This allows compression algorithms to improve compression ratios by omitting noisy image patterns, but does not model the full range of human perception and can be detrimental in image generation tasks, where the underestimation of errors in these frequencies might lead to the generation of an unnatural amount of noise. We solve this problem by training all parameters of all loss variants, including p,T, α, r, wij and for color images λY, λCb and λCr, on the 2AFC dataset (see Section 4.1).
1Many perceptually oriented image processing domains choose color representations that separate luminance from chroma. For example, the HSV color model distinguishes between hue, saturation, and color, and formats such as Lab or YCbCr distinguish between a luminance value and two color planes [22]. The separation of brightness from color information is motivated by a difference in perception. The luminance of an image has a larger influence on human perception than chromatic components [20]. Perceptual image processing standards such as JPEG compression utilize this by encoding chroma at a lower resolution than luminance [24].
4 Experiments
We empirically compared our loss function to traditional baselines and the recently proposed Adaptive-Loss [1] as well as deep neural network based approaches [30]. First, we trained the free parameters of the proposed Watson model as well as of loss functions based on VGGNet [21] and SqueezeNet [6] to mimic human perception on data of human perceptual judgements. Next, we applied the similarity metrics as loss functions of VAEs in two image generation tasks. Finally, we evaluated the perceptual performance and investigated individual error cases.
4.1 Training on data from human perceptual experiments
The modified Watson model, referred to as Watson-DFT, as well as LPIPS-VGG and LPIPS-Squeeze have tune-able parameters, which have to be chosen before use as a loss function. We train the parameters using the same data. For LPIPS-VGG and LPIPS-Squeeze, we followed the methodology called LPIPS (linear) in [30] and trained feature weights according to (3) for the first 5 or 7 layers, respectively.
We trained on the Two-Alternative ForcedChoice (2AFC) dataset of perceptual judgements published as part of the Berkeley-Adobe Perceptual Patch Similarity (BAPPS) dataset [30]. Participants were asked which of two distortions x1,x2 of an 64 × 64 color image x0 is more similar to the reference x0. A human reference judgement p ∈ [0, 1] is provided indicating whether the human judges on average deemed x1 (p < 0.5) or x2 (p > 0.5) more similar to x0.
The dataset is based on a total of 20 different distortions, with the strength of each distortion randomized per sample. Some distortions can be combined, giving 308 combinations. Figure 1 and Fig. B.7 in the supplementary material show examples.
To train a loss function L on the 2AFC dataset, we follow the schema outlined in Figure 2. We first compute the perceptual distances d0 = L(x0,x1) and d1 = L(x0,x2). Then these distances are converted into a probability to determine whether (x0,x1) is perceptually more similar than (x0,x2). To calculate the probability based on distance measures, we use
G(d_0, d_1) = \begin{cases} \tfrac{1}{2} & \text{if } d_0 = d_1 = 0 \\ \sigma\!\left(\gamma\, \frac{d_1 - d_0}{|d_1| + |d_0|}\right) & \text{otherwise,} \end{cases} \quad (11)
where σ(x) is the sigmoid function with learned weight γ > 0 modelling the steepness of the slope. This computation is invariant to linear transformations of the loss functions.
The training loss between the predicted judgment G(d0, d1) and the human judgment p is calculated by the binary cross-entropy:
L2AFC(d0, d1) = p log(G(d0, d1)) + (1− p) log(1−G(d0, d1)) (12)
This objective function was used to adapt the parameters of all considered metrics (used as loss functions in the VAE experiments). We trained the DCT based loss Watson-DCT and the DFT based loss Watson-DFT, see (8) and (10), respectively, both for single-channel greyscale input as well as for color images with the multi-channel aggregator (9). We compared our results to the linearly weighted deep loss functions from [30], which we reproduced using the original methodology, which differs from (3) only in modeling G as a shallow neural network with all positive weights.
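For illustration, the mapping in Eq. (11) and the objective in Eq. (12) can be combined as in the sketch below; we write the cross-entropy with the conventional leading minus sign so that the objective is minimized, and the tensor-based interface is an assumption.

```python
import torch

def two_afc_loss(d0, d1, p_human, gamma):
    """Eqs. 11-12: map two perceptual distances to a judgement probability
    and compare it with the human label via binary cross-entropy."""
    g = torch.sigmoid(gamma * (d1 - d0) / (d1.abs() + d0.abs() + 1e-12))
    g = torch.where((d0 == 0) & (d1 == 0), torch.full_like(g, 0.5), g)
    return -(p_human * torch.log(g) + (1 - p_human) * torch.log(1 - g))
```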
2The three image patches x0,x1,x2 and label p form a record. The dataset contains a total of 151,400 training records and 36,500 test records. Each training record was judged by 2, each test record by 5 humans.
4.2 Application to VAEs
We evaluated VAEs trained with our pre-trained modified Watson model, pre-trained deep-learning based LPIPS-VGG and LPIPS-Squeeze, and not pre-trained baselines SSIM and Adaptive-Loss. The latter adapted the parameters of the loss function during VAE training. We used the implementations provided by the original authors when available. Since quantitative evaluation of generative models is challenging [23], we qualitatively assessed the generation, reconstruction and latent-value interpolation of each model on two independent datasets.3 We considered the gray-scale MNIST dataset [14] and the celebA dataset [16] of celebrity faces. The images of the celebA dataset are of higher resolution and visual complexity compared to MNIST. The feature space dimensionalities for the two models, MNIST-VAE and celebA-VAE, were 2 and 256, respectively.4
Results of reconstructed samples from models trained on celebA are given in Fig. 4. Generated images of all models are given in Fig. 5 and Supplement D. For the two-dimensional featurespace of the MNIST model, Fig. 3 shows reconstructions from z-values that lie on a grid over z ∈ [−1.5, 1.5]2. Additional results showing interpolations and reconstructions of the models are given in Supplement D.
Handwritten digits The VAE trained with the Watson-DFT captured the MNIST dataset well (see Fig. 3 and supplementary Fig. D.8). The visualization of the latent-space shows natural-looking handwritten digits. All generated samples are clearly identifiable as numbers. The models trained with SSIM and Adaptive-Loss produced similar results, but edges are slightly less sharp (Fig. D.8). The VAE trained with the LPIPS-VGG metric produced unnatural looking samples, very distinct from the original dataset. Samples generated by VAEs trained with LPIPS-Squeeze were not recognizable as digits. Both deep feature based metrics performed badly on this simple task; they did not generalize to this domain of images, which differs from the 2AFC images used to tune the learned similarity metrics.
Celebrity photos The model trained with the Watson-DFT metric generated samples of high visual fidelity. Background patterns and haircuts were defined and recognizable, and even strands of hair were partially visible. The images showed no blurring and few artifacts. However, objects lacked fine details like skin imperfections, leading to a smooth appearance. Samples from this generative model overall looked very good and covered the full range of diversity of the original dataset.
The VAE trained with SSIM showed the typical problems of training with traditional losses. Well-aligned components of the images, such as eyes and mouth, were realistically generated. More specific features such as the background and glasses, or features with a greater amount of spatial
3We provide the source code for our methods and the experiments, including the scripts that randomly sampled from the models to generate the plots in this article. We encourage to run the code and generate more samples to verify that the presented results are representative.
4The full architectures are given in supplementary material Appendix C. The optimization algorithm was Adam [10]. The initial learning rate was 10−4 and decreased exponentially throughout training by a factor of 2 every 100 epochs for the MNIST-VAE, and every 20 epochs for the celebA-VAE. For all models, we first performed a hyper-parameter search over the regularization parameter β in (1). We tested β = eλ for λ ∈ Z for 50 epochs on the MNIST set and 10 epochs on the celebA set, then selected the best performing hyper-parameter by visual inspection of generated samples. Values selected for training the full model are shown in Table C.4 in the supplement. For each loss function, we trained the MNIST-VAE for 250 epochs and the celebA-VAE for 100 epochs.
uncertainty, such as hair, were very blurry or not generated at all. The model trained with Adaptive-Loss improved on color accuracy, but blurring was still an issue. The VAE trained with the LPIPS-VGG metric generated samples and visual patterns of the original dataset very well. Minor details such as strands of hair, skin imperfections, and reflections were generated very accurately. However, very strong artifacts were present (e.g., in the form of grid-like patterns, see Fig. 5 (c)). The Adaptive-Loss gave similar results as SSIM, see supplementary Fig. D.11 (a). The VAE trained with LPIPS-Squeeze showed very strong artifacts in reconstructed images as well as generated images, see supplementary Fig. D.11 (b).
4.3 Perceptual score
We used the validation part of the 2AFC dataset to compute perceptual scores and investigated similarity judgements on individual samples of the set. The agreement with human judgements is measured by p p̂ + (1 − p)(1 − p̂) as in [30].5 A human reference score was calculated using p = p̂. The results are summarized in Figure 6. Overall, the scores were similar to the results in [30], which verifies our methodology. We can see that the explicit approaches (L2 and SSIM) performed similarly. Adaptive-Loss, despite the ability to adapt to the dataset, offers no improvement over the baselines. Watson-DFT performed considerably better, but not as well as LPIPS-VGG or LPIPS-Squeeze. We observe that the ability of metrics to learn perceptual judgement grows with the degrees of freedom (>1000 parameters for deep models, <150 for Watson-based metrics).
Inspecting the errors revealed qualitative differences between the metrics; some representative examples are shown in Fig. 1. We observed that the deep networks are good at semantic matching (see biker in Fig. 1), but under-estimate the perceptual impact of graphical artifacts such as
noise (see treeline) and blur. We argue that this is because the features were originally optimized for object recognition, where invariance against distortions and spatial shifts is beneficial. In contrast, the Watson-based metric is sensitive to changes in frequency (noise, blur) and large translations.
4.4 Resource requirements
During training, computing and back-propagating the loss requires computational resources, which are then unavailable for the VAE model and data. We measure the resource requirements in a typical learning scenario. Mini-batches of 128 images of size 64× 64 with either one (greyscale) or three channels (color) were forward-fed through the tested loss functions. The loss with regard to one input image was back-propagated, and the image was updated accordingly using stochastic gradient descent. We measured the time for 500 iterations and the maximum GPU memory allocated. Results
5For example, when 80% of humans judged x1 to be more similar to the reference we have p = 0.2. If the metric predicted x1 to be closer, p̂ = 0, and we grant it 80% score for this judgement.
are averaged over three runs of the experiment. The implementation used PyTorch [19] with 32-bit precision and was executed on an Nvidia Quadro P6000 GPU. The results are shown in Table 1. We observe that deep model based loss functions require considerably more computation time and GPU memory. For example, evaluation of Watson-DFT was 6 times faster than LPIPS-VGG and required only a few megabytes of GPU memory instead of two gigabytes.
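The measurement procedure is not spelled out in code in the text, but a harness along the following lines would reproduce the described setup; the SGD step size and the synchronisation points are our assumptions.

```python
import time
import torch

def benchmark(loss_fn, device="cuda", iters=500, batch=128, channels=3, size=64):
    x = torch.rand(batch, channels, size, size, device=device)
    y = torch.rand_like(x).requires_grad_(True)   # the image that is updated
    torch.cuda.reset_peak_memory_stats(device)
    torch.cuda.synchronize(device)
    start = time.time()
    for _ in range(iters):
        loss = loss_fn(x, y)
        loss.backward()
        with torch.no_grad():
            y -= 1e-3 * y.grad                    # plain SGD step on the image
        y.grad = None
    torch.cuda.synchronize(device)
    return time.time() - start, torch.cuda.max_memory_allocated(device)
```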
5 Discussion and conclusions
Discussion The 2AFC dataset is suitable to evaluate and tune perceptual similarity measures. But it considers a special, limited, partially artificial set of images and transformations. On the 2AFC task our metric based on Watson’s perceptual model outperformed the simple L1 and L2 metrics as well as the popular structural similarity SSIM [25] and the Adaptive-Loss [1].
Learning a metric using deep neural networks on the 2AFC data gave better results on the corresponding test data. This does not come as a surprise given the high flexibility of this purely data-driven approach. However, the resulting neural networks did not work well when used as a loss function for training VAEs, indicating weak generalization beyond the images and transformations in the training data. This is in accordance with (1) the fact that the higher flexibility of LPIPS-Squeeze compared to LPIPS-VGG yields a better fit in the 2AFC task (see also [30]) but even worse results in the VAE experiments; (2) that deep model based approaches profit from extensive regularization, especially by including the squared error in the loss function (e.g., [8]). In contrast, our approach based on Watson’s Perceptual Model is not very complex (in terms of degrees of freedom) and it has a strong inductive bias to match human perception. Therefore it extrapolates much better in a way expected from a perceptual metric/loss.
Deep neural networks for object recognition are trained to be invariant against translation, noise and blur, distortions, and other visual artifacts. We observed the invariance against noise and artifacts even after tuning on the data from human experiments, see Fig. 1. While these properties are important to perform well in many computer vision tasks, they are not desirable for image generation. The generator/decoder can exploit these areas of ‘blindness’ of the similarity metric, leading to significantly more visual artifacts in generated samples, as we observed in the image generation experiments.
Furthermore, the computational and memory requirements of neural network based loss functions are much higher compared to SSIM or Watson’s model, to an extent that limits their applicability in generative neural network training.
In our experiments, the Adaptive-Loss, which is constructed of many similar components to Watson’s perceptual model, did not perform much better than SSIM and considerably worse than Watson’s model. This shows that our approach goes beyond computing a general weighted distance measure between images transformed to frequency space.
Conclusion We introduced a novel image similarity metric and corresponding loss function based on Watson’s perceptual model, which we transformed to a trainable model and extended to color images. We replaced the underlying DCT by a DFT to disentangle amplitude and phase information in order to increase robustness against small shifts.
The novel loss function optimized on data from human experiments can be used to train deep generative neural networks to produce realistic looking, high-quality samples. It is fast to compute and requires little memory. The new perceptual loss function does not suffer from the blurring effects of traditional similarity metrics like Euclidean distance or SSIM, and generates less visual artifacts than current state-of-the-art losses based on deep neural networks.
Acknowledgments and Disclosure of Funding
CI acknowledges support by the Villum Foundation through the project Deep Learning and Remote Sensing for Unlocking Global Ecosystem Resource Dynamics (DeReEco).
Broader impact
The broader impact of our work is defined by the numerous applications of generative deep neural networks, for example the generation of realistic photographs and human faces, image-to-image translation with the special case of semantic-image-to-photo translation; face frontal view generation; generation of human poses; photograph editing, restoration and inpainting; and generation of super resolution images.
A risk of realistic image generation is of course the ability to produce “deepfakes”. Generative neural networks can be used to replace a person in an existing image or video by someone else. While this technology has positive applications (e.g., in the movie industry and entertainment in general), it can be abused. We refer to a recent article by Kietzmann et al. for an overview discussing positive and negative aspects, including potential misuse that can affect almost anybody: “With such a powerful technology and the increasing number of images and videos of all of us on social media, anyone can become a target for online harassment, defamation, revenge porn, identity theft, and bullying — all through the use of deepfakes” [9].
We also refer to [9] for existing and potential commercial applications of deepfakes, such as software that allows consumers to “try on cosmetics, eyeglasses, hairstyles, or clothes virtually” and video game players to “insert their faces onto their favorite characters”.
Our interest in generative neural networks, in particular variational autoencoders, is partially motivated by concrete applications in the analysis of remote sensing data. In a just started project, we will employ deep generative neural networks to the generation of geospatial data, which enables us to simulate the effect of human interaction w.r.t. ecosystems. The goal is to improve our understanding of these interactions, for example to analyse the influence of countermeasures such as afforestation in the context of climate change mitigation. | 1. What is the focus and contribution of the paper regarding generative modeling?
2. What are the strengths of the proposed approach, particularly in its use of an appropriate image loss function?
3. What are the weaknesses of the paper, especially regarding its lack of evidence in guiding generative models towards better image generations?
4. How does the reviewer assess the utility and effectiveness of the proposed method? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper proposes a reconstruction loss function that is based on a human perceptual model that is computed in the frequency space. The authors show that this model matches human judgements on different synthetic distortions. The authors also show that this loss can be used to train a VAE.
Strengths
Use of an appropriate image loss function is an important design choice in many generative modelling task, and the authors provide an explicitly defined (not learned) choice based on human perception. Such an approach may be particularly valuable in the design of generative models robust to adversarial noise. Additionally, finding good ways of evaluating generative models is important, as KID, FID and Inception Scores are not interpretable and are biased toward features relevant for classification. Having good image distances are important, and perhaps there is more inspiration to be found in a true perceptual approach.
Weaknesses
Unfortunately, the evidence does not show that this loss metric actually guides generative models towards better image generations, so the utility is limited. [Update after rebuttal: upon re-considering the results and reading the rebuttal, I retract my previous assessment, I do see that there are reasons for why warping artifacts could happen in some images. I agree that the image generations are good, and that the methodology is sound. However, I am not sure that the quantitative evaluation bears out that the proposed approach is superior, as the 2AFC derived metric is the only quantitative evaluation shown.] |
NIPS | Title
A Loss Function for Generative Neural Networks Based on Watson’s Perceptual Model
Abstract
To train Variational Autoencoders (VAEs) to generate realistic imagery requires a loss function that reflects human perception of image similarity. We propose such a loss function based on Watson’s perceptual model, which computes a weighted distance in frequency space and accounts for luminance and contrast masking. We extend the model to color images, increase its robustness to translation by using the Fourier Transform, remove artifacts due to splitting the image into blocks, and make it differentiable. In experiments, VAEs trained with the new loss function generated realistic, high-quality image samples. Compared to using the Euclidean distance and the Structural Similarity Index, the images were less blurry; compared to deep neural network based losses, the new approach required less computational resources and generated images with less artifacts.
1 Introduction
Variational Autoencoders (VAEs) [11] are generative neural networks that learn a probability distribution over X from training data D = {x_0, ..., x_n} ⊂ X. New samples are generated by drawing a latent variable z ∈ Z from a distribution p(z) and using z to sample x ∈ X from a conditional decoder distribution p(x|z). The distribution of p(x|z) induces a similarity measure on X. A generic choice is a normal distribution p(x|z) = N(µ_x(z), σ²) with a fixed variance σ². In this case the underlying energy-function is L(x, x') = ‖x − x'‖² / (2σ²). Thus, the model assumes that for two samples which are sufficiently close to each other (as measured by σ²), the similarity measure can be well approximated by the squared loss. The choice of L is crucial for the generative model. For image generation, traditional pixel-by-pixel loss metrics such as the squared loss are popular because of their simplicity, ease of use and efficiency [5]. However, they perform poorly at modeling the human perception of image similarity [30]. Most VAEs trained with such losses produce images that look blurred [3, 5]. Accordingly, perceptual loss functions for VAEs are an active research area. These loss functions fall into two broad categories, namely explicit models, as exemplified by the Structural Similarity Index Model (SSIM) [25], and learned models. The latter include models based on deep feature embeddings extracted from image classification networks [5, 30, 8] as well as combinations of VAEs with discriminator networks of Generative Adversarial Networks (GANs) [4, 13, 18].
Perceptual loss functions based on deep neural networks have produced promising results. However, features optimized for one task need not be a good choice for a different task. Our experimental results suggest that powerful metrics optimized on specific datasets may not generalize to broader categories of images. We argue that using features from networks pre-trained for image classification in loss functions for training VAEs for image generation may be problematic, because invariance properties beneficial for classification make it difficult to capture details required to generate realistic images.
Code and experiments are available at github.com/SteffenCzolbe/PerceptualSimilarity
[Figure 1: example image comparisons with similarity judgements labeled "Human" and "Watson-DFT"]
In this work, we introduce a loss function based on Watson’s visual perception model [27], an explicit perceptual model used in image compression and digital watermarking [15]. The model accounts for the perceptual phenomena of sensitivity, luminance masking, and contrast masking. It computes the loss as a weighted distance in frequency space based on a Discrete Cosine Transform (DCT). We optimize the Watson model for image generation by (i) replacing the DCT with the discrete Fourier Transform (DFT) to improve robustness against translational shifts, (ii) extending the model to color images, (iii) replacing the fixed grid in the block-wise computations by a randomized grid to avoid artifacts, and (iv) replacing the max operator to make the loss function differentiable. We trained the free parameters of our model and several competitors using human similarity judgement data ([30], see Figure 1 for examples). We applied the trained similarity measures to image generation of numerals and celebrity faces. The modified Watson model generalized well to the different image domains and resulted in imagery exhibiting less blur and far fewer artifacts compared to alternative approaches.
2 Background
In this section we briefly review variational autoencoders and Watson’s perceptual model.
Variational Autoencoders Samples from VAEs [11] are drawn from p(x) = \int p(x|z)\, p(z)\, dz, where p(z) is a prior distribution that can be freely chosen and p(x|z) is typically modeled by a deep neural network. The model is trained using a variational lower bound on the likelihood
\log p(x) \geq \mathbb{E}_{q(z|x)} \{\log p(x|z)\} - \beta\, \mathrm{KL}(q(z|x)\,\|\,p(z)) , (1)
where q(z|x) is an encoder function designed to approximate p(z|x) and β is a scaling factor. We choose p(z) = N (0, I) and q(z|x) = N (µz(x),Σz(x)), where the covariance matrix Σz(x) is restricted to be diagonal and both µz and Σz(x) are modelled by deep neural networks.
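As a sketch, the (negated) objective (1) with a pluggable reconstruction loss as in Eq. (2) could be implemented as follows; encoder, decoder and recon_loss are placeholders, and recon_loss is assumed to return one value per sample.

```python
import torch

def beta_vae_objective(x, encoder, decoder, recon_loss, beta):
    # reparameterisation with a diagonal Gaussian encoder and standard-normal prior
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    x_rec = decoder(z)
    # closed-form KL(q(z|x) || N(0, I)) per sample
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(dim=1)
    return (recon_loss(x, x_rec) + beta * kl).mean()
```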
Loss functions for VAEs It is possible to incorporate a wide range of loss functions into VAE training. If we choose p(x|z) ∝ exp(−L(x, µ_x(z))), where µ_x is a neural network and we ensure that L leads to a proper probability function, the first term of (1) becomes
\mathbb{E}_{q(z|x)} \{\log p(x|z)\} = -\mathbb{E}_{q(z|x)} \{L(x, \mu_x(z))\} + \text{const} . (2)
Choosing L freely comes at the price that we typically lose the ability to sample from p(x) directly. If the loss is a valid unnormalized log-probability, Markov Chain Monte Carlo methods can be applied. In most applications, however, it is assumed that µx(z), z ∼ p(z) is a good approximation of p(x) and most articles present means instead of samples. Typical choices for L are the squared loss L2(x,x ′) = ‖x−x′‖2 and p-norms Lp(x,x ′) = ‖x−x′‖p. A generalization of p-norm based losses is the “General and Adaptive Robust Loss Function” [1], which we refer to as Adaptive-Loss. When used to train VAEs for image generation, the Adaptive-Loss is applied to 2D DCT transformations of entire images. Roughly speaking, it then adapts one shape parameter (similar to a p-value) and one scaling parameter per frequency during training, simultaneously learning a loss function and a
generative model. A common visual similarity metric based on image fidelity is given by Structured Similarity (SSIM) [25], which bases its calculation on the covariance of patches. We refer to section A in the supplementary material for a description of SSIM.
Another approach to define loss functions is to extract features using a deep neural network and to measure the differences between the features from original and reconstructed images [5]. In [5], it is proposed to consider the first five layers L = {1, . . . , 5} of VGGNet [21]. In [30], different feature extraction networks, including AlexNet [12] and SqueezeNet [6], are tested. Furthermore, the metrics are improved by weighting each feature based on data from human perception experiments (see Section 4.1). With adaptive weights ω_{lc} ≥ 0 for each feature map, the resulting loss function reads
L_{\text{fcw}}(x, x') = \sum_{l \in \mathcal{L}} \frac{1}{H_l W_l} \sum_{h,w,c=1}^{H_l, W_l, C_l} \omega_{lc}\, \bigl(y^{l}_{hwc} - \hat{y}^{l}_{hwc}\bigr)^{2} , (3)
where H_l, W_l and C_l are the height, width and number of channels (feature maps) in layer l. The normalized C_l-dimensional feature vectors are denoted by y^{l}_{hw} = F^{l}_{hw}(x) / \|F^{l}_{hw}(x)\| and \hat{y}^{l}_{hw} = F^{l}_{hw}(x') / \|F^{l}_{hw}(x')\|, where F^{l}_{hw}(x) \in \mathbb{R}^{C_l} contains the features of image x in layer l at spatial coordinates h, w (see [30] for details).
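A compact sketch of the weighted feature distance in Eq. (3); the list-of-feature-maps interface and the small normalisation constant are our own choices.

```python
import torch

def weighted_feature_distance(feats_x, feats_y, weights):
    """Eq. 3: squared distance between unit-normalised deep features,
    weighted per channel and averaged over the spatial dimensions.

    feats_x, feats_y : lists of (N, C_l, H_l, W_l) feature maps
    weights          : list of (C_l,) non-negative channel weights
    """
    total = 0.0
    for fx, fy, w in zip(feats_x, feats_y, weights):
        fx = fx / (fx.norm(dim=1, keepdim=True) + 1e-10)
        fy = fy / (fy.norm(dim=1, keepdim=True) + 1e-10)
        d = (w.view(1, -1, 1, 1) * (fx - fy) ** 2).sum(dim=1)
        total = total + d.mean(dim=(1, 2))
    return total
```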
Watson’s Perceptual Model Watson’s perceptual model of the human visual system [27] describes an image as a composition of base images of different frequencies. It accounts for the perceptual impact of luminance masking, contrast masking, and sensitivity. Input images are first divided into K disjoint blocks of B ×B pixels, where B = 8. Each block is then transformed into frequency-space using the DCT. We denote the DCT coefficient (i, j) of the k-th block by Cijk for 1 ≤ i, j ≤ B and 1 ≤ k ≤ K.
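The block splitting and frequency transform described here could look as follows (a NumPy/SciPy sketch; the orthonormal type-II DCT and the cropping of incomplete border blocks are assumptions).

```python
import numpy as np
from scipy.fft import dct

def blockwise_dct(image, B=8):
    """Split a greyscale image into disjoint BxB blocks and apply a 2-D DCT
    to each block, returning coefficients of shape (K, B, B)."""
    H, W = image.shape
    blocks = (image[:H - H % B, :W - W % B]
              .reshape(H // B, B, W // B, B)
              .transpose(0, 2, 1, 3)
              .reshape(-1, B, B))
    return dct(dct(blocks, axis=-1, norm="ortho"), axis=-2, norm="ortho")
```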
The Watson model computes the loss as weighted p-norm (typically p = 4) in frequency-space
D_{\text{Watson}}(C, C') = \sqrt[p]{\,\sum_{i,j,k=1}^{B,B,K} \left| \frac{C_{ijk} - C'_{ijk}}{S_{ijk}} \right|^{p}} , (4)
where S ∈ RK×B×B is derived from the DCT coefficients C. The loss is not symmetric as C′ does not influence S. To compute S, an image-independent sensitivity table T ∈ RB×B is defined. It stores the sensitivity of the image to changes in its individual DCT components. The table is a function of a number of parameters, including the image resolution and the distance of an observer to the image. It can be chosen freely dependent on the application, a popular choice is given in [2]. Watson’s model adjusts T for each block according to the block’s luminance. The luminance-masked threshold TLijk is given by
T_{L,ijk} = T_{ij} \left( \frac{C_{00k}}{\bar{C}_{00}} \right)^{\alpha} , (5)
where α is a constant with a suggested value of 0.649, C00k is the d.c. coefficient (average brightness) of the k-th block in the original image, and C̄00 is the average luminance of the entire image. As a result, brighter regions of an image are less sensitive to changes.
Contrast masking accounts for the reduction in visibility of one image component by the presence of another. If a DCT frequency is strongly present, an absolute change in its coefficient is less perceptible compared to when the frequency is less pronounced. Contrast masking gives
S_{ijk} = \max\bigl(T_{L,ijk},\; |C_{ijk}|^{r}\, T_{L,ijk}^{1-r}\bigr) , (6)
where the constant r ∈ [0, 1] has a suggested value of 0.7.
3 Modified Watson’s Perceptual Model
A differentiable model To make the loss function differentiable we replace the maximization in the computation of S by a smooth-maximum function \mathrm{smax}(x_1, x_2, \dots) = \frac{\sum_i x_i e^{x_i}}{\sum_j e^{x_j}}, and the equation for S becomes
\tilde{S}_{ijk} = \mathrm{smax}\bigl(T_{L,ijk},\; |C_{ijk}|^{r}\, T_{L,ijk}^{1-r}\bigr) . (7)
For numerical stability, we introduce a small constant ε = 10^{-10} and arrive at the trainable Watson loss for the coefficients of a single channel
L_{\text{Watson}}(C, C') = \sqrt[p]{\,\epsilon + \sum_{i,j,k=1}^{B,B,K} \left| \frac{C_{ijk} - C'_{ijk}}{\tilde{S}_{ijk}} \right|^{p}} . (8)
Extension to color images Watson’s perceptual model is defined for a single channel (i.e., greyscale). To make the model applicable to color images, we aggregate the loss calculated on multiple separate channels to a single loss value.1 We represent color images in the YCbCr format, consisting of the luminance channel Y and chroma channels Cb and Cr. We calculate the single-channel losses separately and weight the results. Let LY, LCb, LCr be the loss values in the luminance, blue-difference and red-difference components for any greyscale loss function. Then the corresponding multi-channel loss L is calculated as
L = λYLY + λCbLCb + λCrLCr , (9)
where the weighting coefficients are learned from data, see below.
Fourier transform In order to be less sensitive to small translational shifts, we replace the DCT with a discrete Fourier Transform (DFT), which is in accordance with Watson’s original work (e.g., [29, 26]). The later use of the DCT was most likely motivated by its application within JPEG [24, 28]. The DFT separates a signal into amplitude and phase information. Translation of an image affects phase, but not amplitude. We apply Watson’s model on the amplitudes while we use the cosine-distance for changes in phase information. Let A ∈ RB×B be the amplitudes of the DFT and let Φ ∈ RB×B be the phase-information. We then obtain
L_{\text{Watson-DFT}}(A, \Phi, A', \Phi') = L_{\text{Watson}}(A, A') + \sum_{i,j,k=1}^{B,B,K} w_{ij} \arccos\bigl[\cos(\Phi_{ijk} - \Phi'_{ijk})\bigr] , (10)
where wij > 0 are individual weights of the phase-distances that can be learned (see below).
The change of representation going from DCT to DFT disentangles amplitude and phase information, but does not increase the number of parameters as the DFT of real images results in a Hermitian complex coefficient matrix (i.e., the element in row i and column j is the complex conjugate of the element in row j and column i) .
Grid translation Computing the loss from disjoint blocks works for the original application of Watson’s perceptual model, lossy compression. However, a powerful generative model can take advantage of the static blocks, leading to noticeable artifacts at block boundaries. We solve this problem by randomly shifting the block-grid in the loss-computation during training. The offsets are drawn uniformly from the integer interval [−4, 4] in both dimensions. In expectation, this is equivalent to computing the loss via a sliding window as in SSIM.
Free parameters When benchmarking Watson’s perceptual model with the suggested parameters on data from a Two-Alternative Forced-Choice (2AFC) task measuring human perception of image similarity, see Subsection 4.1, we found that the model underestimated differences in images with strong high-frequency components. This allows compression algorithms to improve compression ratios by omitting noisy image patterns, but does not model the full range of human perception and can be detrimental in image generation tasks, where the underestimation of errors in these frequencies might lead to the generation of an unnatural amount of noise. We solve this problem by training all parameters of all loss variants, including p,T, α, r, wij and for color images λY, λCb and λCr, on the 2AFC dataset (see Section 4.1).
1Many perceptually oriented image processing domains choose color representations that separate luminance from chroma. For example, the HSV color model distinguishes between hue, saturation, and color, and formats such as Lab or YCbCr distinguish between a luminance value and two color planes [22]. The separation of brightness from color information is motivated by a difference in perception. The luminance of an image has a larger influence on human perception than chromatic components [20]. Perceptual image processing standards such as JPEG compression utilize this by encoding chroma at a lower resolution than luminance [24].
4 Experiments
We empirically compared our loss function to traditional baselines and the recently proposed Adaptive-Loss [1] as well as deep neural network based approaches [30]. First, we trained the free parameters of the proposed Watson model as well as of loss functions based on VGGNet [21] and SqueezeNet [6] to mimic human perception on data of human perceptual judgements. Next, we applied the similarity metrics as loss functions of VAEs in two image generation tasks. Finally, we evaluated the perceptual performance and investigated individual error cases.
4.1 Training on data from human perceptual experiments
The modified Watson model, referred to as Watson-DFT, as well as LPIPS-VGG and LPIPS-Squeeze have tune-able parameters, which have to be chosen before use as a loss function. We train the parameters using the same data. For LPIPS-VGG and LPIPS-Squeeze, we followed the methodology called LPIPS (linear) in [30] and trained feature weights according to (3) for the first 5 or 7 layers, respectively.
We trained on the Two-Alternative ForcedChoice (2AFC) dataset of perceptual judgements published as part of the Berkeley-Adobe Perceptual Patch Similarity (BAPPS) dataset [30]. Participants were asked which of two distortions x1,x2 of an 64 × 64 color image x0 is more similar to the reference x0. A human reference judgement p ∈ [0, 1] is provided indicating whether the human judges on average deemed x1 (p < 0.5) or x2 (p > 0.5) more similar to x0.
The dataset is based on a total of 20 different distortions, with the strength of each distortion randomized per sample. Some distortions can be combined, giving 308 combinations. Figure 1 and Fig. B.7 in the supplementary material show examples.
To train a loss function L on the 2AFC dataset, we follow the schema outlined in Figure 2. We first compute the perceptual distances d0 = L(x0,x1) and d1 = L(x0,x2). Then these distances are converted into a probability to determine whether (x0,x1) is perceptually more similar than (x0,x2). To calculate the probability based on distance measures, we use
G(d_0, d_1) = \begin{cases} \tfrac{1}{2} & \text{if } d_0 = d_1 = 0 \\ \sigma\!\left(\gamma\, \frac{d_1 - d_0}{|d_1| + |d_0|}\right) & \text{otherwise,} \end{cases} \quad (11)
where σ(x) is the sigmoid function with learned weight γ > 0 modelling the steepness of the slope. This computation is invariant to linear transformations of the loss functions.
The training loss between the predicted judgment G(d0, d1) and the human judgment p is calculated by the binary cross-entropy:
L2AFC(d0, d1) = p log(G(d0, d1)) + (1− p) log(1−G(d0, d1)) (12)
This objective function was used to adapt the parameters of all considered metrics (used as loss functions in the VAE experiments). We trained the DCT based loss Watson-DCT and the DFT based loss Watson-DFT, see (8) and (10), respectively, both for single-channel greyscale input as well as for color images with the multi-channel aggregator (9). We compared our results to the linearly weighted deep loss functions from [30], which we reproduced using the original methodology, which differs from (3) only in modeling G as a shallow neural network with all positive weights.
2The three image patches x0,x1,x2 and label p form a record. The dataset contains a total of 151,400 training records and 36,500 test records. Each training record was judged by 2, each test record by 5 humans.
4.2 Application to VAEs
We evaluated VAEs trained with our pre-trained modified Watson model, pre-trained deep-learning based LPIPS-VGG and LPIPS-Squeeze, and not pre-trained baselines SSIM and Adaptive-Loss. The latter adapted the parameters of the loss function during VAE training. We used the implementations provided by the original authors when available. Since quantitative evaluation of generative models is challenging [23], we qualitatively assessed the generation, reconstruction and latent-value interpolation of each model on two independent datasets.3 We considered the gray-scale MNIST dataset [14] and the celebA dataset [16] of celebrity faces. The images of the celebA dataset are of higher resolution and visual complexity compared to MNIST. The feature space dimensionalities for the two models, MNIST-VAE and celebA-VAE, were 2 and 256, respectively.4
Results of reconstructed samples from models trained on celebA are given in Fig. 4. Generated images of all models are given in Fig. 5 and Supplement D. For the two-dimensional featurespace of the MNIST model, Fig. 3 shows reconstructions from z-values that lie on a grid over z ∈ [−1.5, 1.5]2. Additional results showing interpolations and reconstructions of the models are given in Supplement D.
Handwritten digits The VAE trained with the Watson-DFT captured the MNIST dataset well (see Fig. 3 and supplementary Fig. D.8). The visualization of the latent-space shows natural-looking handwritten digits. All generated samples are clearly identifiable as numbers. The models trained with SSIM and Adaptive-Loss produced similar results, but edges are slightly less sharp (Fig. D.8). The VAE trained with the LPIPS-VGG metric produced unnatural looking samples, very distinct from the original dataset. Samples generated by VAEs trained with LPIPS-Squeeze were not recognizable as digits. Both deep feature based metrics performed badly on this simple task; they did not generalize to this domain of images, which differs from the 2AFC images used to tune the learned similarity metrics.
Celebrity photos The model trained with the Watson-DFT metric generated samples of high visual fidelity. Background patterns and haircuts were defined and recognizable, and even strands of hair were partially visible. The images showed no blurring and few artifacts. However, objects lacked fine details like skin imperfections, leading to a smooth appearance. Samples from this generative model overall looked very good and covered the full range of diversity of the original dataset.
The VAE trained with SSIM showed the typical problems of training with traditional losses. Well-aligned components of the images, such as eyes and mouth, were realistically generated. More specific features such as the background and glasses, or features with a greater amount of spatial
3We provide the source code for our methods and the experiments, including the scripts that randomly sampled from the models to generate the plots in this article. We encourage to run the code and generate more samples to verify that the presented results are representative.
4The full architectures are given in supplementary material Appendix C. The optimization algorithm was Adam [10]. The initial learning rate was 10−4 and decreased exponentially throughout training by a factor of 2 every 100 epochs for the MNIST-VAE, and every 20 epochs for the celebA-VAE. For all models, we first performed a hyper-parameter search over the regularization parameter β in (1). We tested β = eλ for λ ∈ Z for 50 epochs on the MNIST set and 10 epochs on the celebA set, then selected the best performing hyper-parameter by visual inspection of generated samples. Values selected for training the full model are shown in Table C.4 in the supplement. For each loss function, we trained the MNIST-VAE for 250 epochs and the celebA-VAE for 100 epochs.
uncertainty, such as hair, were very blurry or not generated at all. The model trained with Adaptive-Loss improved on color accuracy, but blurring was still an issue. The VAE trained with the LPIPS-VGG metric generated samples and visual patterns of the original dataset very well. Minor details such as strands of hair, skin imperfections, and reflections were generated very accurately. However, very strong artifacts were present (e.g., in the form of grid-like patterns, see Fig. 5 (c)). The Adaptive-Loss gave similar results as SSIM, see supplementary Fig. D.11 (a). The VAE trained with LPIPS-Squeeze showed very strong artifacts in reconstructed images as well as generated images, see supplementary Fig. D.11 (b).
4.3 Perceptual score
We used the validation part of the 2AFC dataset to compute perceptual scores and investigated similarity judgements on individual samples of the set. The agreement with human judgements is measured by p p̂ + (1 − p)(1 − p̂) as in [30].5 A human reference score was calculated using p = p̂. The results are summarized in Figure 6. Overall, the scores were similar to the results in [30], which verifies our methodology. We can see that the explicit approaches (L2 and SSIM) performed similarly. Adaptive-Loss, despite the ability to adapt to the dataset, offers no improvement over the baselines. Watson-DFT performed considerably better, but not as well as LPIPS-VGG or LPIPS-Squeeze. We observe that the ability of metrics to learn perceptual judgement grows with the degrees of freedom (>1000 parameters for deep models, <150 for Watson-based metrics).
Inspecting the errors revealed qualitative differences between the metrics; some representative examples are shown in Fig. 1. We observed that the deep networks are good at semantic matching (see biker in Fig. 1), but under-estimate the perceptual impact of graphical artifacts such as
noise (see treeline) and blur. We argue that this is because the features were originally optimized for object recognition, where invariance against distortions and spatial shifts is beneficial. In contrast, the Watson-based metric is sensitive to changes in frequency (noise, blur) and large translations.
4.4 Resource requirements
During training, computing and back-propagating the loss requires computational resources, which are then unavailable for the VAE model and data. We measure the resource requirements in a typical learning scenario. Mini-batches of 128 images of size 64× 64 with either one (greyscale) or three channels (color) were forward-fed through the tested loss functions. The loss with regard to one input image was back-propagated, and the image was updated accordingly using stochastic gradient descent. We measured the time for 500 iterations and the maximum GPU memory allocated. Results
5For example, when 80% of humans judged x1 to be more similar to the reference we have p = 0.2. If the metric predicted x1 to be closer, p̂ = 0, and we grant it 80% score for this judgement.
are averaged over three runs of the experiment. The implementation used PyTorch [19] with 32-bit precision and was executed on an Nvidia Quadro P6000 GPU. The results are shown in Table 1. We observe that deep model based loss functions require considerably more computation time and GPU memory. For example, evaluation of Watson-DFT was 6 times faster than LPIPS-VGG and required only a few megabytes of GPU memory instead of two gigabytes.
5 Discussion and conclusions
Discussion The 2AFC dataset is suitable to evaluate and tune perceptual similarity measures. But it considers a special, limited, partially artificial set of images and transformations. On the 2AFC task our metric based on Watson’s perceptual model outperformed the simple L1 and L2 metrics as well as the popular structural similarity SSIM [25] and the Adaptive-Loss [1].
Learning a metric using deep neural networks on the 2AFC data gave better results on the corresponding test data. This does not come as a surprise given the high flexibility of this purely data-driven approach. However, the resulting neural networks did not work well when used as a loss function for training VAEs, indicating weak generalization beyond the images and transformations in the training data. This is in accordance with (1) the fact that the higher flexibility of LPIPS-Squeeze compared to LPIPS-VGG yields a better fit in the 2AFC task (see also [30]) but even worse results in the VAE experiments; (2) that deep model based approaches profit from extensive regularization, especially by including the squared error in the loss function (e.g., [8]). In contrast, our approach based on Watson’s Perceptual Model is not very complex (in terms of degrees of freedom) and it has a strong inductive bias to match human perception. Therefore it extrapolates much better in a way expected from a perceptual metric/loss.
Deep neural networks for object recognition are trained to be invariant against translation, noise and blur, distortions, and other visual artifacts. We observed the invariance against noise and artifacts even after tuning on the data from human experiments, see Fig. 1. While these properties are important to perform well in many computer vision tasks, they are not desirable for image generation. The generator/decoder can exploit these areas of ‘blindness’ of the similarity metric, leading to significantly more visual artifacts in generated samples, as we observed in the image generation experiments.
Furthermore, the computational and memory requirements of neural network based loss functions are much higher compared to SSIM or Watson’s model, to an extent that limits their applicability in generative neural network training.
In our experiments, the Adaptive-Loss, which is constructed of many similar components to Watson’s perceptual model, did not perform much better than SSIM and considerably worse than Watson’s model. This shows that our approach goes beyond computing a general weighted distance measure between images transformed to frequency space.
Conclusion We introduced a novel image similarity metric and corresponding loss function based on Watson’s perceptual model, which we transformed to a trainable model and extended to color images. We replaced the underlying DCT by a DFT to disentangle amplitude and phase information in order to increase robustness against small shifts.
The novel loss function optimized on data from human experiments can be used to train deep generative neural networks to produce realistic looking, high-quality samples. It is fast to compute and requires little memory. The new perceptual loss function does not suffer from the blurring effects of traditional similarity metrics like Euclidean distance or SSIM, and generates less visual artifacts than current state-of-the-art losses based on deep neural networks.
Acknowledgments and Disclosure of Funding
CI acknowledges support by the Villum Foundation through the project Deep Learning and Remote Sensing for Unlocking Global Ecosystem Resource Dynamics (DeReEco).
Broader impact
The broader impact of our work is defined by the numerous applications of generative deep neural networks, for example the generation of realistic photographs and human faces, image-to-image translation with the special case of semantic-image-to-photo translation; face frontal view generation; generation of human poses; photograph editing, restoration and inpainting; and generation of super resolution images.
A risk of realistic image generation is of course the ability to produce “deepfakes”. Generative neural networks can be used to replace a person in an existing image or video by someone else. While this technology has positive applications (e.g., in the movie industry and entertainment in general), it can be abused. We refer to a recent article by Kietzmann et al. for an overview discussing positive and negative aspects, including potential misuse that can affect almost anybody: “With such a powerful technology and the increasing number of images and videos of all of us on social media, anyone can become a target for online harassment, defamation, revenge porn, identity theft, and bullying — all through the use of deepfakes” [9].
We also refer to [9] for existing and potential commercial applications of deepfakes, such as software that allows consumers to “try on cosmetics, eyeglasses, hairstyles, or clothes virtually” and video game players to “insert their faces onto their favorite characters”.
Our interest in generative neural networks, in particular variational autoencoders, is partially motivated by concrete applications in the analysis of remote sensing data. In a just started project, we will employ deep generative neural networks to the generation of geospatial data, which enables us to simulate the effect of human interaction w.r.t. ecosystems. The goal is to improve our understanding of these interactions, for example to analyse the influence of countermeasures such as afforestation in the context of climate change mitigation. | 1. What is the main contribution of the paper, and how does it differ from previous works in the field?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its simplicity and effectiveness compared to other perceptual losses?
3. How does the reviewer assess the significance of the paper's findings, especially considering the similarities between the proposed method and a prior work?
4. What additional experiments or evaluations should be conducted to further support the paper's claims and address the reviewer's concerns? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper presents a simple perceptual loss function motivated by the same image processing basics as JPEG: a loss on (scaled) frequency-space representations of YCbCr image patches. This produces significantly better VAE reconstructions/samples than L2/SSIM loss on pixels, and also better results than "perceptual" losses based on deep features (LPIPS etc).
Strengths
I generally like this paper. Grounding the contribution in classic image processing and perceptual studies is satisfying, and the proposed model is well-motivated and simple. I think this direction of research is promising, as it seems that a lot of "deep" perceptual losses are needlessly complicated when the problem they address (that measuring the difference between images in terms of raw RGB pixel differences is problematic) is straightforward enough that a straightforward analytical alternative likely exists. The paper is well-written, and the claims in the paper appear to be well-validated empirically.
Weaknesses
I have one critical concern with this paper, which is that the proposed model presented here is extremely similar to one result from “A General and Adaptive Robust Loss Function”, Jonathan T. Barron, CVPR, 2019. Section 3.1 of that paper (going from the arxiv version) has results on improving reconstruction/sampling quality from VAEs by using a loss on DCT coefficients of YUV images, very similar to what is done here. They also propose a loss with a heavy-tailed distribution that looks a lot like Equation 8 of this submission, and present a method where they optimize over the scale of the loss being imposed on each coefficient of the DCT (similar to this submission). And the improvement in sample/reconstruction quality they demonstrate looks a lot like what is shown in this submission. Given these overwhelming similarities, I'm unable to support the acceptance of this paper without a comparison to the approach presented in that work. Another (less pressing) concern I had for this submission: I’m surprised and confused that the experiment in Figure 6 suggests that the Deeploss-* techniques are preferable to the proposed Watson-DFT technique, which (to my eye) seems to produce much better reconstructions and samples in Figures 4 & 5. What is the source of this mismatch? I trust my eyes more than I trust this benchmark, but I am reluctant to champion a paper that only has one empirical evaluation where the proposed technique is outperformed by such a significant margin. I agree with the claims in the text that there is value to the proposed model being simple and compact, but it is unfortunate that the only empirical result in the paper requires this defending. A user study (perhaps in the same format as the 2AFC/BAPPS dataset) run on the Celeb-A reconstructions or samples would be extremely helpful here. An ablation study of the proposed model components would also be helpful for understanding what aspects of the loss are contributing the most to its performance. This ties into my concerns about the lack of evaluation against the Baron CVPR 2019 paper, which seems like a strongly-ablated version of this proposed method. |
NIPS | Title
Are Disentangled Representations Helpful for Abstract Visual Reasoning?
Abstract
A disentangled representation encodes information about the salient factors of variation in the data independently. Although it is often argued that this representational format is useful in learning to solve many real-world down-stream tasks, there is little empirical evidence that supports this claim. In this paper, we conduct a large-scale study that investigates whether disentangled representations are more suitable for abstract reasoning tasks. Using two new tasks similar to Raven’s Progressive Matrices, we evaluate the usefulness of the representations learned by 360 state-of-the-art unsupervised disentanglement models. Based on these representations, we train 3600 abstract reasoning models and observe that disentangled representations do in fact lead to better down-stream performance. In particular, they enable quicker learning using fewer samples.
1 Introduction
Learning good representations of high-dimensional sensory data is of fundamental importance to Artificial Intelligence [4, 3, 6, 49, 7, 69, 67, 50, 59, 73]. In the supervised case, the quality of a representation is often expressed through the ability to solve the corresponding down-stream task. However, in order to leverage vast amounts of unlabeled data, we require a set of desiderata that apply to more general real-world settings.
Following the successes in learning distributed representations that efficiently encode the content of high-dimensional sensory data [45, 56, 76], recent work has focused on learning representations that are disentangled [6, 69, 68, 73, 71, 26, 27, 42, 10, 63, 16, 52, 53, 48, 9, 51]. A disentangled representation captures information about the salient (or explanatory) factors of variation in the data, isolating information about each specific factor in only a few dimensions. Although the precise circumstances that give rise to disentanglement are still being debated, the core concept of a local correspondence between data-generative factors and learned latent codes is generally agreed upon [16, 26, 52, 63, 71].
Disentanglement is mostly about how information is encoded in the representation, and it is often argued that a representation that is disentangled is desirable in learning to solve challenging real-world down-stream tasks [6, 73, 59, 7, 26, 68]. Indeed, in a disentangled representation, information about an individual factor value can be readily accessed and is robust to changes in the input that do not affect this factor. Hence, learning to solve a down-stream task from a disentangled representation is expected to require fewer samples and be easier in general [68, 6, 28, 29, 59]. Real-world generative processes are also often based on latent spaces that factorize. In this case, a disentangled
representation that captures this product space is expected to help in generalizing systematically in this regard [18, 22, 59].
Several of these purported benefits can be traced back to empirical evidence presented in the recent literature. Disentangled representations have been found to be more sample-efficient [29], less sensitive to nuisance variables [55], and better in terms of (systematic) generalization [1, 16, 28, 35, 70]. However, in other cases it is less clear whether the observed benefits are actually due to disentanglement [48]. Indeed, while these results are generally encouraging, a systematic evaluation on a complex down-stream task of a wide variety of disentangled representations obtained by training different models, using different hyper-parameters and data sets, appears to be lacking.
Contributions In this work, we conduct a large-scale evaluation1 of disentangled representations to systematically evaluate some of these purported benefits. Rather than focusing on a simple single factor classification task, we evaluate the usefulness of disentangled representations on abstract visual reasoning tasks that challenge the current capabilities of state-of-the-art deep neural networks [30, 65]. Our key contributions include:
• We create two new visual abstract reasoning tasks similar to Raven’s Progressive Matrices [61] based on two disentanglement data sets: dSprites [27], and 3dshapes [42]. A key design property of these tasks is that they are hard to solve based on statistical co-occurrences and require reasoning about the relations between different objects.
• We train 360 unsupervised disentanglement models spanning four different disentanglement approaches on the individual images of these two data sets and extract their representations. We then train 3600 Wild Relation Networks [65] that use these disentangled representations to perform abstract reasoning and measure their accuracy at various stages of training.
• We evaluate the usefulness of disentangled representations by comparing the accuracy of these abstract reasoning models to the degree of disentanglement of the representations (measured using five different disentanglement metrics). We observe compelling evidence that more disentangled representations yield better sample-efficiency in learning to solve the considered abstract visual reasoning tasks. In this regard our results are complementary to a recent prior study of disentangled representations that did not find evidence of increased sample efficiency on a much simpler down-stream task [52].
2 Background and Related Work on Learning Disentangled Representations
Despite an increasing interest in learning disentangled representations, a precise definition is still a topic of debate [16, 26, 52, 63]. In recent work, Eastwood et al. [16] and Ridgeway et al. [63] put forth three criteria of disentangled representations: modularity, compactness, and explicitness. Modularity implies that each code in a learned representation is associated with only one factor of variation in the environment, while compactness ensures that information regarding a single factor is represented using only one or few codes. Combined, modularity and compactness suggest that a disentangled representation implements a one-to-one mapping between salient factors of variation in the environment and the learned codes. Finally, a disentangled representation is often assumed to be explicit, in that the mapping between factors and learned codes can be implemented with a simple (i.e. linear) model. While modularity is commonly agreed upon, compactness is a point of contention. Ridgeway et al. [63] argue that some features (e.g., the rotation of an object) are best described with multiple codes, although this is essentially not compact. The recent work by Higgins et al. [26] suggests an alternative view that may resolve these different perspectives in the future.
Metrics Multiple metrics have been proposed that leverage the ground-truth generative factors of variation in the data to measure disentanglement in learned representations. In recent work, Locatello et al. [52] studied several of these metrics, which we will adopt for our purposes in this work: the BetaVAE score [27], the FactorVAE score [42], the Mutual Information Gap (MIG) [10], the disentanglement score from Eastwood et al. [16] referred to as the DCI Disentanglement score, and the Separated Attribute Predictability (SAP) score [48].
1Reproducing these experiments requires approximately 2.73 GPU years (NVIDIA P100).
The BetaVAE score, FactorVAE score, and DCI Disentanglement score focus primarily on modularity. The first two assess this property through interventions, i.e. by keeping one factor fixed and varying all others, while the DCI Disentanglement score estimates this property from the relative importance assigned to each feature by a random forest regressor in predicting the factor values. The SAP score and MIG are mostly focused on compactness. The SAP score reports the difference between the top two most predictive latent codes of a given factor, while MIG reports the difference between the top two latent variables with the highest mutual information with a given factor.
The degree of explicitness captured by any of the disentanglement metrics remains unclear. In prior work it was found that there is a positive correlation between disentanglement metrics and down-stream performance on single factor classification [52]. However, it is not obvious whether disentangled representations are useful for down-stream performance per se, or if the correlation is driven by the explicitness captured in the scores. In particular, the DCI Disentanglement score and the SAP score compute disentanglement by training a classifier on the representation. The former uses a random forest regressor to determine the relative importance of each feature, and the latter considers the gap in prediction accuracy of a support vector machine trained on each feature in the representation. MIG is based on the matrix of pairwise mutual information between factors of variation and dimensions of the representation, which also relates to the explicitness of the representation. On the other hand, the BetaVAE and FactorVAE scores predict the index of a fixed factor of variation and not the exact value.
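To make the compactness-style metrics concrete, the following minimal sketch shows how a MIG-like score can be computed once a matrix of mutual information between latent codes and ground-truth factors has been estimated. The helper and the toy inputs are illustrative assumptions, not the evaluation code used in this study.

```python
import numpy as np

def mutual_info_gap(mi_matrix, factor_entropies):
    """mi_matrix: (num_factors, num_codes) estimated mutual information.
    factor_entropies: (num_factors,) entropy of each ground-truth factor."""
    sorted_mi = np.sort(mi_matrix, axis=1)[:, ::-1]  # descending per factor
    # Normalized gap between the two most informative latent codes, averaged over factors.
    gaps = (sorted_mi[:, 0] - sorted_mi[:, 1]) / factor_entropies
    return float(np.mean(gaps))

# Toy example: 3 factors, 4 latent codes.
rng = np.random.default_rng(0)
mi = rng.uniform(0.0, 1.0, size=(3, 4))
print(mutual_info_gap(mi, factor_entropies=np.ones(3)))
```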
We note that current disentanglement metrics each require access to the ground-truth factors of variation, which may hinder the practical feasibility of learning disentangled representations. Here our goal is to assess the usefulness of disentangled representations more generally (i.e. assuming it is possible to obtain them), which can be verified independently.
Methods Several methods have been proposed to learn disentangled representations. Here we are interested in evaluating the benefits of disentangled representations that have been learned through unsupervised learning. In order to control for potential confounding factors that may arise in using a single model, we use the representations learned from four state-of-the-art approaches from the literature: β-VAE [27], FactorVAE [42], β-TCVAE [10], and DIP-VAE [48]. A similar choice of models was used in a recent study by Locatello et al. [52].
Using notation from Tschannen et al. [73], we can view all of these models as Auto-Encoders that are trained with the regularized variational objective of the form:
\[
\mathbb{E}_{p(x)}\big[\mathbb{E}_{q_\phi(z|x)}[-\log p_\theta(x|z)]\big] \;+\; \lambda_1\,\mathbb{E}_{p(x)}\big[R_1(q_\phi(z|x))\big] \;+\; \lambda_2\,R_2(q_\phi(z)). \quad (1)
\]
The output of the encoder that parametrizes qφ(z|x) yields the representation. Regularization serves to control the information flow through the bottleneck induced by the encoder, while different regularizers primarily vary in the notion of disentanglement that they induce. β-VAE restricts the capacity of the information bottleneck by penalizing the KL-divergence, using β = λ1 > 1 with R1(qφ(z|x)) := DKL[qφ(z|x)||p(z)], and λ2 = 0; FactorVAE penalizes the Total Correlation [77] of the latent variables via adversarial training, using λ1 = 0 and λ2 = 1 with R2(qφ(z)) := TC(qφ(z)); β-TCVAE also penalizes the Total Correlation but estimates its value via a biased Monte Carlo estimator; and finally DIP-VAE penalizes a mismatch in moments between the aggregated posterior and a factorized prior, using λ1 = 0 and λ2 ≥ 1 with R2(qφ(z)) := ||Covqφ(z) − I||2F .
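As a concrete illustration of Eq. (1), the numpy sketch below evaluates the β-VAE instantiation (reconstruction term plus β-weighted KL to a standard normal prior), assuming a diagonal-Gaussian encoder and Bernoulli decoder likelihoods. It is a simplified stand-in with toy inputs, not the disentanglement_lib implementation.

```python
import numpy as np

def beta_vae_loss(x, x_logits, mu, log_var, beta=4.0):
    """x, x_logits: (batch, num_pixels) targets in [0, 1] and decoder logits.
    mu, log_var: (batch, latent_dim) parameters of q_phi(z|x)."""
    # Bernoulli negative log-likelihood, summed over pixels: softplus(l) - x * l.
    recon = np.sum(np.logaddexp(0.0, x_logits) - x * x_logits, axis=1)
    # Analytic KL between N(mu, diag(exp(log_var))) and the standard normal prior.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1)
    return float(np.mean(recon + beta * kl))

rng = np.random.default_rng(0)
x = rng.uniform(size=(8, 64 * 64))
x_logits = rng.normal(size=(8, 64 * 64))
mu, log_var = rng.normal(size=(8, 10)), rng.normal(size=(8, 10))
print(beta_vae_loss(x, x_logits, mu, log_var))
```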
Other Related Works Learning disentangled representations is similar in spirit to non-linear ICA, although it relies primarily on (architectural) inductive biases and different degrees of supervision [13, 2, 39, 36, 37, 38, 25, 33, 32]. Due to the initial poor performance of purely unsupervised methods, the field initially focused on semi-supervised [62, 11, 57, 58, 44, 46] and weakly supervised approaches [31, 12, 40, 21, 78, 20, 15, 35, 80, 54, 47, 64, 8]. In this paper, we consider the setup of the recent unsupervised methods [27, 26, 48, 42, 9, 52, 71, 10]. Finally, while this paper focuses on evaluating the benefits of disentangled features, these are complementary to recent work that focuses on the unsupervised “disentangling” of images into compositional primitives given by object-like representations [17, 23, 24, 22, 60, 74, 75]. Disentangling pose, style, or motion from content are classical vision tasks that has been studied with different degrees of supervision [72, 79, 80, 34, 19, 14, 21, 36].
3 Abstract Visual Reasoning Tasks for Disentangled Representations
In this work we evaluate the purported benefits of disentangled representations on abstract visual reasoning tasks. Abstract reasoning tasks require a learner to infer abstract relationships between multiple entities (i.e. objects in images) and re-apply this knowledge in newly encountered settings [41]. Humans are known to excel at this task, as is evident from experiments with simple visual IQ tests such as Raven’s Progressive Matrices (RPMs) [61]. An RPM consists of several context panels organized in multiple sequences, with one sequence being incomplete. The task consists of completing the final sequence by choosing from a given set of answer panels. Choosing the correct answer panel requires one to infer the relationships between the panels in the complete context sequences, and apply this knowledge to the remaining partial sequence.
In recent work, Santoro et al. [65] evaluated the abstract reasoning capabilities of deep neural networks on this task. Using a data set of RPM-like matrices they found that standard deep neural network architectures struggle at abstract visual reasoning under different training and generalization regimes. Their results indicate that these tasks are difficult to solve by relying purely on superficial image statistics and can only be solved efficiently through abstract visual reasoning. This makes this setting particularly appealing for investigating the benefits of disentangled representations.
Generating RPM-like Matrices Rather than evaluating disentangled representations on the Procedurally Generated Matrices (PGM) dataset from Barrett et al. [65], we construct two new abstract RPM-like visual reasoning datasets based on two existing datasets for disentangled representation learning. Our motivation for this is twofold: first, it is not clear what a ground-truth disentangled representation should look like for the PGM dataset, whereas the two existing disentanglement data sets include the ground-truth factors of variation; second, in using established data sets for disentanglement, we can reuse hyper-parameter ranges that have proven successful. We note that our study is substantially different from recent work by Steenbrugge et al. [70], who evaluate the representation of a single trained β-VAE [27] on the original PGM data set.
To construct the abstract reasoning tasks, we use the ground-truth generative model of the dSprites [27] and 3dshapes [42] data sets with the following changes2: For dSprites, we ignore the orientation feature for the abstract reasoning tasks as certain objects such as squares and ellipses exhibit rotational symmetries. To compensate, we add background color (5 different shades of gray linearly spaced between white and black) and object color (6 different colors linearly spaced in HUSL hue space) as two new factors of variation. Similarly, for the abstract reasoning tasks (but not when learning representations), we only consider three different values for the scale of the object (instead of 6) and only four values for the x and y position (instead of 32). For 3dshapes, we retain all of the original factors but only consider four different values for scale and azimuth (out of 8 and 16) for the abstract reasoning tasks. We refer to Figure 7 in Appendix B for samples from these data sets.
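For reference, a hypothetical summary of the factor values used when sampling reasoning tasks from the modified dSprites data set might look as follows; the counts follow the description above, while the dictionary keys are informal labels rather than identifiers from the released code.

```python
# Factor value counts for the modified dSprites reasoning tasks (illustrative only).
dsprites_reasoning_factors = {
    "background_color": 5,  # shades of gray, added factor
    "object_color": 6,      # HUSL hues, added factor
    "shape": 3,             # square, ellipse, heart
    "scale": 3,             # reduced from 6
    "x_position": 4,        # reduced from 32
    "y_position": 4,        # reduced from 32
}
```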
For the modified dSprites and 3dshapes, we now create corresponding abstract reasoning tasks. The key idea is that one is given a 3 × 3 matrix of context image panels with the bottom right image panel missing, as well as a set of six potential answer panels (see Figure 1 for an example). One then has to infer which of the answers fits in the missing panel of the 3 × 3 matrix based on relations between image panels in the rows of the 3 × 3 matrices. Due to the categorical nature of ground-truth factors in the underlying data sets, we focus on the AND relationship in which one or more factor values are equal across a sequence of context panels [65].
2These were implemented to ensure that humans can visually distinguish between the different values of each factor of variation.
We generate instances of the abstract reasoning tasks in the following way: First, we uniformly sample whether 1, 2, or 3 ground-truth factors are fixed across rows in the instance to be generated. Second, we uniformly sample without replacement the set of underlying factors in the underlying generative model that should be kept constant. Third, we uniformly sample a factor value from the ground-truth model for each of the three rows and for each of the fixed factors3. Fourth, for all other ground-truth factors we also sample 3× 3 matrices of factor values from the ground-truth model with the single constraint that the factor values are not allowed to be constant across the first two rows (in that case we sample a new set of values). After this we have ground-truth factor values for each of the 9 panels in the correct solution to the abstract reasoning task, and we can sample corresponding images from the ground-truth model. To generate difficult alternative answers, we take the factor values of the correct answer panel and randomly resample the non-fixed factors as well as a random fixed factor until the factor values no longer satisfy the relations in the original abstract reasoning task. We repeat this process to obtain five incorrect answers and finally insert the correct answer in a random position. Examples of the resulting abstract reasoning tasks can be seen in Figure 1 as well as in Figures 18 and 19 in Appendix C.
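A simplified sketch of this sampling procedure is given below. It only produces the ground-truth factor grid of the correct solution (the images would then be rendered from these factors), the factor sizes follow the modified dSprites description above, and several details of the actual generator (e.g. how the hard alternative answers are resampled) are omitted.

```python
import numpy as np

def sample_instance(factor_sizes, rng):
    """Sample the 3 x 3 grid of ground-truth factor values for one correct solution."""
    num_factors = len(factor_sizes)
    num_fixed = rng.integers(1, 4)  # 1, 2, or 3 factors are fixed within each row
    fixed = rng.choice(num_factors, size=num_fixed, replace=False)
    grid = rng.integers(0, factor_sizes, size=(3, 3, num_factors))
    for f in fixed:  # fixed factors take a single (possibly row-specific) value per row
        for row in range(3):
            grid[row, :, f] = rng.integers(0, factor_sizes[f])
    for f in set(range(num_factors)) - set(fixed.tolist()):
        for row in range(2):  # non-fixed factors must not be constant in the first two rows
            while len(set(grid[row, :, f].tolist())) == 1:
                grid[row, :, f] = rng.integers(0, factor_sizes[f], size=3)
    return grid, fixed

rng = np.random.default_rng(0)
factor_sizes = np.array([5, 6, 3, 3, 4, 4])  # counts for the modified dSprites factors
grid, fixed = sample_instance(factor_sizes, rng)
print("fixed factors:", fixed)
print("factors of the correct answer panel:", grid[2, 2])
```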
Models We will make use of the Wild Relation Network (WReN) to solve the abstract visual reasoning tasks [65]. It incorporates relational structure, and was introduced in prior work specifically for such tasks. The WReN is evaluated for each answer panel a ∈ A = {a1, ..., a6} in relation to all the context-panels C = {c1, ..., c8} as follows:
\[
\mathrm{WReN}(a, C) = f_\phi\Big(\textstyle\sum_{e_1, e_2 \in E} g_\theta(e_1, e_2)\Big), \qquad E = \{\mathrm{CNN}(c_1), \dots, \mathrm{CNN}(c_8)\} \cup \{\mathrm{CNN}(a)\} \quad (2)
\]
First, an embedding is computed for each panel using a deep Convolutional Neural Network (CNN); these embeddings serve as input to a Relation Network (RN) module [66]. The Relation Network reasons about the different relationships between the context and answer panels, and outputs a score. The answer panel a ∈ A with the highest score is chosen as the final output. The Relation Network implements a suitable inductive bias for (relational) reasoning [5]. It separates the reasoning process into two stages. First, gθ is applied to all pairs of panel embeddings to consider relations between the answer panel and each of the context panels, and relations among the context panels. Weight-sharing of gθ between the panel-embedding pairs makes it difficult to overfit to the image statistics of the individual panels. Finally, fφ produces a score for the given answer panel in relation to the context panels by globally considering the different relations between the panels as a whole. Note that in using the same WReN for different answer panels it is ensured that each answer panel is subject to the same reasoning process.
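The sketch below spells out the scoring rule of Eq. (2) for pre-computed panel embeddings. The embeddings are random placeholders and gθ and fφ are stand-in single-layer maps, so this is a schematic of the computation rather than the trained WReN.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, hidden_dim = 16, 32
W_g = rng.normal(size=(2 * emb_dim, hidden_dim))  # stand-in for the learned MLP g_theta
W_f = rng.normal(size=(hidden_dim, 1))            # stand-in for the learned MLP f_phi

def g(e1, e2):
    return np.tanh(np.concatenate([e1, e2]) @ W_g)

def wren_score(answer_emb, context_embs):
    panels = list(context_embs) + [answer_emb]    # E = {CNN(c_1), ..., CNN(c_8)} u {CNN(a)}
    pair_sum = sum(g(e1, e2) for e1 in panels for e2 in panels)
    return float((pair_sum @ W_f)[0])

context_embs = rng.normal(size=(8, emb_dim))      # 8 context panels
answer_embs = rng.normal(size=(6, emb_dim))       # 6 candidate answer panels
scores = [wren_score(a, context_embs) for a in answer_embs]
print("chosen answer panel:", int(np.argmax(scores)))
```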
4 Experiments
4.1 Learning Disentangled Representations
We train β-VAE [27], FactorVAE [42], β-TCVAE [10], and DIP-VAE [48] on the panels from the modified dSprites and 3dshapes data sets4. For β-VAE we consider two variations: the standard version using a fixed β, and a version trained with the controlled capacity increase presented by Burgess et al. [9]. Similarly for DIP-VAE we consider both the DIP-VAE-I and DIP-VAE-II variations of the proposed regularizer [48]. For each of these methods, we considered six different values for their (main) hyper-parameter and five different random seeds. The remaining experimental details are presented in Appendix A.
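The 360 encoders follow directly from this sweep, as the short enumeration below illustrates; the labels are informal names for the six method variants, not the exact configuration keys used in the released code.

```python
from itertools import product

method_variants = ["beta_vae", "annealed_vae", "factor_vae",
                   "beta_tcvae", "dip_vae_i", "dip_vae_ii"]
hyperparameter_values = range(6)  # six values of the main regularization strength
random_seeds = range(5)
data_sets = ["dsprites_modified", "3dshapes_modified"]

configs = list(product(data_sets, method_variants, hyperparameter_values, random_seeds))
print(len(configs))  # 2 * 6 * 6 * 5 = 360
```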
After training, we end up with 360 encoders, whose outputs are expected to cover a wide variety of representational formats with which to encode information in the images. Figures 9 and 10 in the Appendix show histograms of the reconstruction errors obtained after training, and the scores that various disentanglement metrics assigned to the corresponding representations. The reconstructions are mostly good (see also Figure 7), which confirms that the learned representations tend to accurately capture the image content. Correspondingly, we expect any observed difference in down-stream performance when using these representations to be primarily the result of how information is encoded. In terms of the scores of the various disentanglement metrics, we observe a wide range of values. This suggests that, under different definitions of disentanglement, there are large differences in the quality of the learned representations.
3Note that different rows may have different values. 4Code is made available as part of disentanglement_lib at https://git.io/JelEv.
4.2 Abstract Visual Reasoning
We train different WReN models where we control for two potential confounding factors: the representation produced by a specific model used to embed the input images, as well as the hyper-parameters of the WReN model. For hyper-parameters, we use a random search space as specified in Appendix A. We use the following training protocol: we train each of these models using a batch size of 32 for 100K iterations, where each mini-batch consists of newly generated random instances of the abstract reasoning tasks. In addition, every 1000 iterations, we evaluate the accuracy on 100 mini-batches of fresh samples. We note that this corresponds to the statistical optimization setting, sidestepping the need to investigate the impact of empirical risk minimization and overfitting5.
4.2.1 Initial Study
First, we trained a set of baseline models to assess the overall complexity of the abstract reasoning task. We consider three types of representations: (i) CNN representations which are learned from scratch (with the same architecture as in the disentanglement models) yielding standard WReN, (ii) pre-trained frozen representations based on a random selection of the pre-trained disentanglement models, and (iii) directly using the ground-truth factors of variation (both one-hot encoded and integer encoded). We train 30 different models for each of these approaches and data sets with different random seeds and different draws from the search space over hyper-parameter values.
An overview of the training behaviour and the accuracies achieved can be seen in Figures 2 and 11 (Appendix B). We observe that the standard WReN model struggles to obtain good results on average, even after having seen many different samples at 100K steps. This is due to the fact that training from scratch is hard and runs may get stuck in local minima where they predict each of the answers with equal probabilities. Given the pre-training and the exposure to additional unsupervised samples, it is not surprising that the learned representations from the disentanglement models perform better. The WReN models that are given the true factors also perform well, already after only a few steps of training. We also observe that different runs exhibit a significant spread, which is why we analyze the average accuracy across many runs in the next section.
It appears that dSprites is the harder task, with models reaching an average score of 80%, while reaching an average of 90% on 3dshapes. Finally, we note that most learning progress takes place in the first 20K steps, and thus expect the benefits of disentangled representations to be most clear in this regime.
4.2.2 Evaluating Disentangled Representations
Based on the results from the initial study, we train a full set of WReN models in the following manner: We first sample a set of 10 hyper-parameter configurations from our search space and then train WReN models using these configurations for each of the 360 representations from the disentanglement models. We then compare the average down-stream training accuracy of WReN with the BetaVAE score, the FactorVAE score, MIG, the DCI Disentanglement score, and the Reconstruction error obtained by the decoder on the unsupervised learning task. As a sanity check, we also compare with the accuracy of a Gradient Boosted Tree (GBT10000) ensemble and a Logistic Regressor (LR10000) on single factor classification (averaged across factors) as measured on 10K samples. As expected, we observe a positive correlation between the performance of the WReN and the classifiers (see Figure 3).
5Note that the state space of the data generating distribution is very large: 10^6 factor combinations per panel and 14 panels for each instance yield more than 10^144 potential instances (minus invalid configurations).
Differences in Disentanglement Metrics Figure 3 displays the rank correlation (Spearman) between these metrics and the down-stream classification accuracy, evaluated after training for 1K, 2K, 5K, 10K, 20K, 50K, and 100K steps. If we focus on the disentanglement metrics, several interesting observations can be made. In the few-sample regime (up to 20K steps) and across both data sets it can be seen that both the BetaVAE score, and the FactorVAE score are highly correlated with down-stream accuracy. The DCI Disentanglement score is correlated slightly less, while the MIG and SAP score exhibit a relatively weak correlation.
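The correlation analysis itself is straightforward; the snippet below shows the kind of computation involved, using synthetic placeholder values rather than the actual scores and accuracies from the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Placeholder data: one disentanglement score and one down-stream accuracy per model.
factorvae_score = rng.uniform(0.4, 1.0, size=360)
accuracy_10k_steps = 0.5 * factorvae_score + rng.normal(0.0, 0.1, size=360)

rho, p_value = spearmanr(factorvae_score, accuracy_10k_steps)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.1e})")
```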
These differences between the disentanglement metrics are perhaps not surprising, as they are also reflected in their overall correlation (see Figure 8 in Appendix B). Note that the BetaVAE score and the FactorVAE score directly measure the effect of intervention, i.e. what happens to the representation if all factors but one are varied, which is expected to be beneficial in efficiently comparing the content of two representations as required for this task. Similarly, it may be that the MIG and SAP scores have a more difficult time in differentiating representations that are only partially disentangled. Finally, we note that the best performing metrics on this task mostly measure modularity, as opposed to compactness. A more detailed overview of the correlation between the various metrics and down-stream accuracy can be seen in Figures 12 and 13 in Appendix B.
Disentangled Representations in the Few-Sample Regime If we compare the correlation of the disentanglement metric with the highest correlation (FactorVAE) to that of the Reconstruction error in the few-sample regime, then we find that disentanglement correlates much better with down-stream accuracy. Indeed, while low Reconstruction error indicates that all information is available in the representation (to reconstruct the image) it makes no assumptions about how this information is encoded. We observe strong evidence that disentangled representations yield better down-stream accuracy using relatively few samples, and we therefore conclude that they are indeed more sample efficient compared to entangled representations in this regard.
Figure 4 demonstrates the down-stream accuracy of the WReNs throughout training, binned into quartiles according to their degree of disentanglement as measured by the FactorVAE score (left), and in terms of Reconstruction error (right). It can be seen that representations that are more disentangled give rise to better relative performance consistently throughout all phases of training. If we group models according to their Reconstruction error, then we find that this (reversed) ordering is much less pronounced. An overview for all other metrics can be seen in Figures 14 and 15.
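A minimal version of this quartile analysis is sketched below on synthetic placeholder data; the actual figures are produced from the 3600 trained reasoning models.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder data standing in for (metric score, down-stream accuracy) per model.
metric = rng.uniform(size=360)
accuracy = 0.6 + 0.3 * metric + rng.normal(0.0, 0.05, size=360)

# Assign each model to a quartile of the metric and average accuracy per bin.
edges = np.quantile(metric, [0.25, 0.5, 0.75])
quartile = np.digitize(metric, edges)
for q in range(4):
    print(f"quartile {q + 1}: mean accuracy = {accuracy[quartile == q].mean():.3f}")
```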
Disentangled Representations in the Many-Sample Regime In the many-sample regime (i.e. when training for 100K steps on batches of randomly drawn instances in Figure 3) we find that there is no longer a strong correlation between the scores assigned by the various disentanglement metrics and down-stream performance. This is perhaps not surprising as neural networks are general function approximators that, given access to enough labeled samples, are expected to overcome potential difficulties in using entangled representations. The observation that Reconstruction error correlates much more strongly with down-stream accuracy in this regime further confirms that this is the case.
A similar observation can be made if we look at the difference in down-stream accuracy between the top and bottom half of the models according to each metric in Figures 5 and 16 (Appendix B). For all disentanglement metrics, large positive differences are observed in the few-sample regime, which gradually shrink as more samples are observed. Meanwhile, the gap for Reconstruction error gradually increases upon seeing additional samples.
Differences in terms of Final Accuracy In our final analysis we consider the rank correlation between down-stream accuracy and the various metrics, split according to their final accuracy. Figure 6 shows the rank correlation for the worst performing fifty percent of the models after 100K steps (top), and for the best performing fifty percent (bottom). While these results should be interpreted with care as the split depends on the final accuracy, we still observe interesting results: It can be seen that disentanglement (i.e. FactorVAE score) remains strongly correlated with down-stream performance for both splits in the few-sample regime. At the same time, the benefit of lower Reconstruction error appears to be limited to the worst 50% of models. This is intuitive: when the Reconstruction error is too high, there may not be enough information present to solve the down-stream tasks. However, for the top performing models (best 50%), further reducing the Reconstruction error appears to yield only limited gains.
5 Conclusion
In this work we investigated whether disentangled representations allow one to learn good models for non-trivial down-stream tasks with fewer samples. We created two abstract visual reasoning tasks based on existing data sets for which the ground truth factors of variation are known. We trained a diverse set of 360 disentanglement models based on four state-of-the-art disentanglement approaches and evaluated their representations using 3600 abstract reasoning models. We observed compelling evidence that more disentangled representations are more sample-efficient in the considered downstream learning task. We draw three main conclusions from these results: First, these results provide concrete motivation why one might want to pursue disentanglement as a property of learned representations in the unsupervised case. Second, we still observed differences between disentanglement metrics, which should motivate further work in understanding what different properties they capture. None of the metrics achieved perfect correlation in the few-sample regime, which also suggests that it is not yet fully understood what makes one representation better than another in terms of learning. Third, it might be useful to extend the methodology in this study to other complex down-stream tasks, or include an investigation of other purported benefits of disentangled representations.
Acknowledgments
The authors thank Adam Santoro, Josip Djolonga, Paulo Rauber and the anonymous reviewers for helpful discussions and comments. This research was partially supported by the Max Planck ETH Center for Learning Systems, a Google Ph.D. Fellowship (to Francesco Locatello), and the Swiss National Science Foundation (grant 200021_165675/1 to Jürgen Schmidhuber). This work was partially done while Francesco Locatello was at Google Research.
1. What is the main contribution of the paper regarding disentangling representations?
2. What are the strengths of the proposed approach, particularly in its ability to perform well on abstract visual reasoning tasks?
3. What are the weaknesses of the paper, especially regarding the experimental setup and the choice of tasks?
4. How does the reviewer assess the significance and relevance of the study's findings, especially in relation to prior work in disentanglement?
5. Are there any concerns about the generalizability of the results due to the specific choice of tasks and datasets used in the study?
Review
Thanks to the authors for the response. I thought the paper would be really cool and instructive IF the abstract tasks made sense. Based on the authors' response and R3's review, these are pretty standard tasks I guess. I don't know much about these RPM tasks, so I will downgrade my confidence to a 2. The example tasks are still a bit confusing to me (compared to if you just look at an RPM example on Wikipedia), but I guess once you see the answer to a few, you get the gist of them. Moreover, intuitively it seems that you do need to represent the factors of variation to be good at these tasks. My other concern is the one R2 raised, which is that if you need ground truth to pick out a good disentangling model, and if ground truth helps a lot for directly solving the disentangling tasks, then 1) why do we need disentangling for AVR? and 2) how would an AVR practitioner without ground-truth label information benefit from these results? I think that despite the issues with this approach, I agree with the authors' point that for the sake of the study we can suspend some skepticism about how we get to these representations. Also, the most important point the authors raised was that it is of high relevance to validate the motivation of the 20+ preceding disentangling papers by actually measuring how well disentangled representations do on downstream tasks. I wholeheartedly agree (and think more people should be doing this), so I will upgrade to a 7 despite the flaws in the experimental setup.
-----------------------------------------------------------
They formulate two abstract visual reasoning tasks based on dSprites and 3dshapes. They then see how well different disentangled representations do when transferred to these tasks by training a relational model on top of these representations. They find that disentangled representations result in more sample-efficient transfer to these abstract visual reasoning tasks, whereas in high-sample regimes disentangling does not correlate much with upstream accuracy.
Strengths:
* Well-written. Background and related work are really well explained.
* Really helpful to show reconstruction error's correlation to the upstream tasks, as it is often used as a proxy for good performance.
* A cool, instructive result that disentangling is sample efficient.
* An unsurprising, but also instructive result that any entangled representation that captures most of the data (low reconstruction error) does well on the upstream task when one has a lot of labelled data.
Weaknesses:
* Really only one real takeaway/useful experiment from the paper, which is that disentangling is sample efficient for this strange set of upstream tasks.
* I have a lot of problems with these abstract visual reasoning tasks. They seem a bit unintuitive and overly difficult (I have a lot of trouble solving them). Having multiple rows and having multiple and different factors changing between each frame is very confusing, and it seems like it would be hard to interpret how much these models actually learn the pattern or just exploit some artifacts. Do we have any proof that simpler visual reasoning tasks wouldn't do and this formulation in the paper is the way to go?
* It seems weird the authors didn't just consider a task with one row and one panel missing and the same one factor changing between panels. Is there any empirical evidence that this is too easy or uninformative? Why not a row where there are a few panels of the ellipse getting bigger, and then for the missing frame the model chooses between a smaller ellipse, same size ellipse, *bigger ellipse*, bigger ellipse but at the wrong angle, bigger ellipse but translated, bigger ellipse but different color, etc., or at least some progression of difficulty starting from the easiest and working up to the tasks in the paper?
NIPS | Title
Are Disentangled Representations Helpful for Abstract Visual Reasoning?
Abstract
A disentangled representation encodes information about the salient factors of variation in the data independently. Although it is often argued that this representational format is useful in learning to solve many real-world down-stream tasks, there is little empirical evidence that supports this claim. In this paper, we conduct a large-scale study that investigates whether disentangled representations are more suitable for abstract reasoning tasks. Using two new tasks similar to Raven’s Progressive Matrices, we evaluate the usefulness of the representations learned by 360 state-of-the-art unsupervised disentanglement models. Based on these representations, we train 3600 abstract reasoning models and observe that disentangled representations do in fact lead to better down-stream performance. In particular, they enable quicker learning using fewer samples.
1 Introduction
Learning good representations of high-dimensional sensory data is of fundamental importance to Artificial Intelligence [4, 3, 6, 49, 7, 69, 67, 50, 59, 73]. In the supervised case, the quality of a representation is often expressed through the ability to solve the corresponding down-stream task. However, in order to leverage vasts amounts of unlabeled data, we require a set of desiderata that apply to more general real-world settings.
Following the successes in learning distributed representations that efficiently encode the content of high-dimensional sensory data [45, 56, 76], recent work has focused on learning representations that are disentangled [6, 69, 68, 73, 71, 26, 27, 42, 10, 63, 16, 52, 53, 48, 9, 51]. A disentangled representation captures information about the salient (or explanatory) factors of variation in the data, isolating information about each specific factor in only a few dimensions. Although the precise circumstances that give rise to disentanglement are still being debated, the core concept of a local correspondence between data-generative factors and learned latent codes is generally agreed upon [16, 26, 52, 63, 71].
Disentanglement is mostly about how information is encoded in the representation, and it is often argued that a representation that is disentangled is desirable in learning to solve challenging real-world down-stream tasks [6, 73, 59, 7, 26, 68]. Indeed, in a disentangled representation, information about an individual factor value can be readily accessed and is robust to changes in the input that do not affect this factor. Hence, learning to solve a down-stream task from a disentangled representation is expected to require fewer samples and be easier in general [68, 6, 28, 29, 59]. Real-world generative processes are also often based on latent spaces that factorize. In this case, a disentangled
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
representation that captures this product space is expected to help in generalizing systematically in this regard [18, 22, 59].
Several of these purported benefits can be traced back to empirical evidence presented in the recent literature. Disentangled representations have been found to be more sample-efficient [29], less sensitive to nuisance variables [55], and better in terms of (systematic) generalization [1, 16, 28, 35, 70]. However, in other cases it is less clear whether the observed benefits are actually due to disentanglement [48]. Indeed, while these results are generally encouraging, a systematic evaluation on a complex down-stream task of a wide variety of disentangled representations obtained by training different models, using different hyper-parameters and data sets, appears to be lacking.
Contributions In this work, we conduct a large-scale evaluation1 of disentangled representations to systematically evaluate some of these purported benefits. Rather than focusing on a simple single factor classification task, we evaluate the usefulness of disentangled representations on abstract visual reasoning tasks that challenge the current capabilities of state-of-the-art deep neural networks [30, 65]. Our key contributions include:
• We create two new visual abstract reasoning tasks similar to Raven’s Progressive Matrices [61] based on two disentanglement data sets: dSprites [27], and 3dshapes [42]. A key design property of these tasks is that they are hard to solve based on statistical co-occurrences and require reasoning about the relations between different objects.
• We train 360 unsupervised disentanglement models spanning four different disentanglement approaches on the individual images of these two data sets and extract their representations. We then train 3600 Wild Relation Networks [65] that use these disentangled representations to perform abstract reasoning and measure their accuracy at various stages of training.
• We evaluate the usefulness of disentangled representations by comparing the accuracy of these abstract reasoning models to the degree of disentanglement of the representations (measured using five different disentanglement metrics). We observe compelling evidence that more disentangled representations yield better sample-efficiency in learning to solve the considered abstract visual reasoning tasks. In this regard our results are complementary to a recent prior study of disentangled representations that did not find evidence of increased sample efficiency on a much simpler down-stream task [52].
2 Background and Related Work on Learning Disentangled Representations
Despite an increasing interest in learning disentangled representations, a precise definition is still a topic of debate [16, 26, 52, 63]. In recent work, Eastwood et al. [16] and Ridgeway et al. [63] put forth three criteria of disentangled representations: modularity, compactness, and explicitness. Modularity implies that each code in a learned representation is associated with only one factor of variation in the environment, while compactness ensures that information regarding a single factor is represented using only one or few codes. Combined, modularity and compactness suggest that a disentangled representation implements a one-to-one mapping between salient factors of variation in the environment and the learned codes. Finally, a disentangled representation is often assumed to be explicit, in that the mapping between factors and learned codes can be implemented with a simple (i.e. linear) model. While modularity is commonly agreed upon, compactness is a point of contention. Ridgeway et al. [63] argue that some features (eg. the rotation of an object) are best described with multiple codes although this is essentially not compact. The recent work by Higgins et al. [26] suggests an alternative view that may resolve these different perspectives in the future.
Metrics Multiple metrics have been proposed that leverage the ground-truth generative factors of variation in the data to measure disentanglement in learned representations. In recent work, Locatello et al. [52] studied several of these metrics, which we will adopt for our purposes in this work: the BetaVAE score [27], the FactorVAE score [42], the Mutual Information Gap (MIG) [10], the disentanglement score from Eastwood et al. [16] referred to as the DCI Disentanglement score, and the Separated Attribute Predictability (SAP) score [48].
1Reproducing these experiments requires approximately 2.73 GPU years (NVIDIA P100).
The BetaVAE score, FactorVAE score, and DCI Disentanglement score focus primarily on modularity. The former assess this property through interventions, i.e. by keeping one factor fixed and varying all others, while the DCI Disentanglement score estimates this property from the relative importance assigned to each feature by a random forest regressor in predicting the factor values. The SAP score and MIG are mostly focused on compactness. The SAP score reports the difference between the top two most predictive latent codes of a given factor, while MIG reports the difference between the top two latent variables with highest mutual information to a certain factor.
The degree of explicitness captured by any of the disentanglement metrics remain unclear. In prior work it was found that there is a positive correlation between disentanglement metrics and down-stream performance on single factor classification [52]. However, it is not obvious whether disentangled representations are useful for down-stream performance per se, or if the correlation is driven by the explicitness captured in the scores. In particular, the DCI Disentanglement score and the SAP score compute disentanglement by training a classifier on the representation. The former uses a random forest regressor to determine the relative importance of each feature, and the latter considers the gap in prediction accuracy of a support vector machine trained on each feature in the representation. MIG is based on the matrix of pairwise mutual information between factors of variations and dimensions of the representation, which also relates to the explicitness of the representation. On the other hand, the BetaVAE and FactorVAE scores predict the index of a fixed factor of variation and not the exact value.
We note that current disentanglement metrics each require access to the ground-truth factors of variation, which may hinder the practical feasibility of learning disentangled representations. Here our goal is to assess the usefulness of disentangled representations more generally (i.e. assuming it is possible to obtain them), which can be verified independently.
Methods Several methods have been proposed to learn disentangled representations. Here we are interested in evaluating the benefits of disentangled representations that have been learned through unsupervised learning. In order to control for potential confounding factors that may arise in using a single model, we use the representations learned from four state-of-the-art approaches from the literature: β-VAE [27], FactorVAE [42], β-TCVAE [10], and DIP-VAE [48]. A similar choice of models was used in a recent study by Locatello et al. [52].
Using notation from Tschannen et al. [73], we can view all of these models as Auto-Encoders that are trained with the regularized variational objective of the form:
Ep(x)[Eqφ(z|x)[− log pθ(x|z)]] + λ1Ep(x)[R1(qφ(z|x))] + λ2R2(qφ(z)). (1)
The output of the encoder that parametrizes qφ(z|x) yields the representation. Regularization serves to control the information flow through the bottleneck induced by the encoder, while different regularizers primarily vary in the notion of disentanglement that they induce. β-VAE restricts the capacity of the information bottleneck by penalizing the KL-divergence, using β = λ1 > 1 with R1(qφ(z|x)) := DKL[qφ(z|x)||p(z)], and λ2 = 0; FactorVAE penalizes the Total Correlation [77] of the latent variables via adversarial training, using λ1 = 0 and λ2 = 1 with R2(qφ(z)) := TC(qφ(z)); β-TCVAE also penalizes the Total Correlation but estimates its value via a biased Monte Carlo estimator; and finally DIP-VAE penalizes a mismatch in moments between the aggregated posterior and a factorized prior, using λ1 = 0 and λ2 ≥ 1 with R2(qφ(z)) := ||Covqφ(z) − I||2F .
Other Related Works Learning disentangled representations is similar in spirit to non-linear ICA, although it relies primarily on (architectural) inductive biases and different degrees of supervision [13, 2, 39, 36, 37, 38, 25, 33, 32]. Due to the initial poor performance of purely unsupervised methods, the field initially focused on semi-supervised [62, 11, 57, 58, 44, 46] and weakly supervised approaches [31, 12, 40, 21, 78, 20, 15, 35, 80, 54, 47, 64, 8]. In this paper, we consider the setup of the recent unsupervised methods [27, 26, 48, 42, 9, 52, 71, 10]. Finally, while this paper focuses on evaluating the benefits of disentangled features, these are complementary to recent work that focuses on the unsupervised “disentangling” of images into compositional primitives given by object-like representations [17, 23, 24, 22, 60, 74, 75]. Disentangling pose, style, or motion from content are classical vision tasks that has been studied with different degrees of supervision [72, 79, 80, 34, 19, 14, 21, 36].
3 Abstract Visual Reasoning Tasks for Disentangled Representations
In this work we evaluate the purported benefits of disentangled representations on abstract visual reasoning tasks. Abstract reasoning tasks require a learner to infer abstract relationships between multiple entities (i.e. objects in images) and re-apply this knowledge in newly encountered settings [41]. Humans are known to excel at this task, as is evident from experiments with simple visual IQ tests such as Raven’s Progressive Matrices (RPMs) [61]. An RPM consists of several context panels organized in multiple sequences, with one sequence being incomplete. The task consists of completing the final sequence by choosing from a given set of answer panels. Choosing the correct answer panel requires one to infer the relationships between the panels in the complete context sequences, and apply this knowledge to the remaining partial sequence.
In recent work, Santoro et al. [65] evaluated the abstract reasoning capabilities of deep neural networks on this task. Using a data set of RPM-like matrices they found that standard deep neural network architectures struggle at abstract visual reasoning under different training and generalization regimes. Their results indicate that it is difficult to solve these tasks by relying purely on superficial image statistics, and can only be solved efficiently through abstract visual reasoning. This makes this setting particularly appealing for investigating the benefits of disentangled representations.
Generating RPM-like Matrices Rather than evaluating disentangled representations on the Procedurally Generated Matrices (PGM) dataset from Barrett et al. [65] we construct two new abstract RPM-like visual reasoning datasets based on two existing datasets for disentangled representation learning. Our motivation for this is twofold: it is not clear what a ground-truth disentangled representation should look like for the PGM dataset, while the two existing disentanglement data sets include the ground-truth factors of variation. Secondly, in using established data sets for disentanglement, we can reuse hyper-parameter ranges that have proven successful. We note that our study is substantially different to recent work by Steenbrugge et al. [70] who evaluate the representation of a single trained β-VAE [27] on the original PGM data set.
To construct the abstract reasoning tasks, we use the ground-truth generative model of the dSprites [27] and 3dshapes [42] data sets with the following changes2: For dSprites, we ignore the orientation feature for the abstract reasoning tasks as certain objects such as squares and ellipses exhibit rotational symmetries. To compensate, we add background color (5 different shades of gray linearly spaced between white and black) and object color (6 different colors linearly spaced in HUSL hue space) as two new factors of variation. Similarly, for the abstract reasoning tasks (but not when learning representations), we only consider three different values for the scale of the object (instead of 6) and only four values for the x and y position (instead of 32). For 3dshapes, we retain all of the original factors but only consider four different values for scale and azimuth (out of 8 and 16) for the abstract reasoning tasks. We refer to Figure 7 in Appendix B for samples from these data sets.
For the modified dSprites and 3dshapes, we now create corresponding abstract reasoning tasks. The key idea is that one is given a 3× 3 matrix of context image panels with the bottom right image panel missing, as well as a set of six potential answer panels (see Figure 1 for an example). One then has to infer which of the answers fits in the missing panel of the 3× 3 matrix based on relations between
2These were implemented to ensure that humans can visually distinguish between the different values of each factor of variation.
image panels in the rows of the 3× 3 matrices. Due to the categorical nature of ground-truth factors in the underlying data sets, we focus on the AND relationship in which one or more factor values are equal across a sequence of context panels [65].
We generate instances of the abstract reasoning tasks in the following way: First, we uniformly sample whether 1, 2, or 3 ground-truth factors are fixed across rows in the instance to be generated. Second, we uniformly sample without replacement the set of underlying factors in the underlying generative model that should be kept constant. Third, we uniformly sample a factor value from the ground-truth model for each of the three rows and for each of the fixed factors3. Fourth, for all other ground-truth factors we also sample 3× 3 matrices of factor values from the ground-truth model with the single constraint that the factor values are not allowed to be constant across the first two rows (in that case we sample a new set of values). After this we have ground-truth factor values for each of the 9 panels in the correct solution to the abstract reasoning task, and we can sample corresponding images from the ground-truth model. To generate difficult alternative answers, we take the factor values of the correct answer panel and randomly resample the non-fixed factors as well as a random fixed factor until the factor values no longer satisfy the relations in the original abstract reasoning task. We repeat this process to obtain five incorrect answers and finally insert the correct answer in a random position. Examples of the resulting abstract reasoning tasks can be seen in Figure 1 as well as in Figures 18 and 19 in Appendix C.
Models We will make use of the Wild Relation Network (WReN) to solve the abstract visual reasoning tasks [65]. It incorporates relational structure, and was introduced in prior work specifically for such tasks. The WReN is evaluated for each answer panel a ∈ A = {a1, ..., a6} in relation to all the context-panels C = {c1, ..., c8} as follows:
WReN(a,C) = fφ( ∑
e1,e2∈E gθ(e1, e2)) , E = {CNN(c1), ...,CNN(c8)} ∪ {CNN(a)} (2)
First an embedding is computed for each panel using a deep Convolutional Neural Network (CNN), which serve as input to a Relation Network (RN) module [66]. The Relation Network reasons about the different relationships between the context and answer panels, and outputs a score. The answer panel a ∈ A with the highest score is chosen as the final output. The Relation Network implements a suitable inductive bias for (relational) reasoning [5]. It separates the reasoning process into two stages. First gθ is applied to all pairs of panel embeddings to consider relations between the answer panel and each of the context panels, and relations among the context panels. Weight-sharing of gθ between the panel-embedding pairs makes it difficult to overfit to the image statistics of the individual panels. Finally, fφ produces a score for the given answer panel in relation to the context panels by globally considering the different relations between the panels as a whole. Note that in using the same WReN for different answer panels it is ensured that each answer panel is subject to the same reasoning process.
4 Experiments
4.1 Learning Disentangled Representations
We train β-VAE [27], FactorVAE [42], β-TCVAE [10], and DIP-VAE [48] on the panels from the modified dSprites and 3dshapes data sets4. For β-VAE we consider two variations: the standard version using a fixed β, and a version trained with the controlled capacity increase presented by Burgess et al. [9]. Similarly for DIP-VAE we consider both the DIP-VAE-I and DIP-VAE-II variations of the proposed regularizer [48]. For each of these methods, we considered six different values for their (main) hyper-parameter and five different random seeds. The remaining experimental details are presented in Appendix A.
After training, we end up with 360 encoders, whose outputs are expected to cover a wide variation of different representational formats with which to encode information in the images. Figures 9 and 10 in the Appendix show histograms of the reconstruction errors obtained after training, and
3Note that different rows may have different values. 4Code is made available as part of disentanglement_lib at https://git.io/JelEv.
the scores that various disentanglement metrics assigned to the corresponding representations. The reconstructions are mostly good (see also Figure 7), which confirms that the learned representations tend to accurately capture the image content. Correspondingly, we expect any observed difference in down-stream performance when using these representations to be primarily the result of how information is encoded. In terms of the scores of the various disentanglement metrics, we observe a wide range of values. It suggests that in going by different definitions of disentanglement, there are large differences among the quality of the learned representations.
4.2 Abstract Visual Reasoning
We train different WReN models where we control for two potential confounding factors: the representation produced by a specific model used to embed the input images, as well as the hyperparameters of the WReN model. For hyper-parameters, we use a random search space as specified in Appendix A. We used the following training protocol: We train each of these models using a batch size of 32 for 100K iterations where each mini-batch consists of newly generated random instances of the abstract reasoning tasks. Similarly, every 1000 iterations, we evaluate the accuracy on 100 mini-batches of fresh samples. We note that this corresponds to the statistical optimization setting, sidestepping the need to investigate the impact of empirical risk minimization and overfitting5.
4.2.1 Initial Study
First, we trained a set of baseline models to assess the overall complexity of the abstract reasoning task. We consider three types of representations: (i) CNN representations which are learned from scratch (with the same architecture as in the disentanglement models) yielding standard WReN, (ii) pre-trained frozen representations based on a random selection of the pre-trained disentanglement models, and (iii) directly using the ground-truth factors of variation (both one-hot encoded and integer encoded). We train 30 different models for each of these approaches and data sets with different random seeds and different draws from the search space over hyper-parameter values.
An overview of the training behaviour and the accuracies achieved can be seen in Figures 2 and 11 (Appendix B). We observe that the standard WReN model struggles to obtain good results on average, even after having seen many different samples at 100K steps. This is due to the fact that training from scratch is hard and runs may get stuck in local minima where they predict each of the answers with equal probabilities. Given the pre-training and the exposure to additional unsupervised samples, it is not surprising that the learned representations from the disentanglement models perform better. The WReN models that are given the true factors also perform well, already after only few steps of training. We also observe that different runs exhibit a significant spread, which motivates why we analyze the average accuracy across many runs in the next section.
It appears that dSprites is the harder task, with models reaching an average score of 80%, while reaching an average of 90% on 3dshapes. Finally, we note that most learning progress takes place in the first 20K
steps, and thus expect the benefits of disentangled representations to be most clear in this regime.
4.2.2 Evaluating Disentangled Representations
Based on the results from the initial study, we train a full set of WReN models in the following manner: We first sample a set of 10 hyper-parameter configurations from our search space and then trained WReN models using these configurations for each of the 360 representations from the disentanglement
5Note that the state space of the data generating distribution is very large: 106 factor combinations per panel and 14 panels for each instance yield more than 10144 potential instances (minus invalid configurations).
models. We then compare the average down-stream training accuracy of WReN with the BetaVAE score, the FactorVAE score, MIG, the DCI Disentanglement score, and the Reconstruction error obtained by the decoder on the unsupervised learning task. As a sanity check, we also compare with the accuracy of a Gradient Boosted Tree (GBT10000) ensemble and a Logistic Regressor (LR10000) on single factor classification (averaged across factors) as measured on 10K samples. As expected, we observe a positive correlation between the performance of the WReN and the classifiers (see Figure 3).
Differences in Disentanglement Metrics Figure 3 displays the rank correlation (Spearman) between these metrics and the down-stream classification accuracy, evaluated after training for 1K, 2K, 5K, 10K, 20K, 50K, and 100K steps. If we focus on the disentanglement metrics, several interesting observations can be made. In the few-sample regime (up to 20K steps) and across both data sets it can be seen that both the BetaVAE score, and the FactorVAE score are highly correlated with down-stream accuracy. The DCI Disentanglement score is correlated slightly less, while the MIG and SAP score exhibit a relatively weak correlation.
These differences between the different disentanglement metrics are perhaps not surprising, as they are also reflected in their overall correlation (see Figure 8 in Appendix B). Note that the BetaVAE score, and the FactorVAE score directly measure the effect of intervention, i.e. what happens to the representation if all factors but one are varied, which is expected to be beneficial in efficiently comparing the content of two representations as required for this task. Similarly, it may be that MIG and SAP score have a more difficult time in differentiating representations that are only partially disentangled. Finally, we note that the best performing metrics on this task are mostly measuring modularity, as opposed to compactness. A more detailed overview of the correlation between the various metrics and down-stream accuracy can be seen in Figures 12 and 13 in Appendix B.
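As an illustration of how such a correlation table can be computed, a minimal sketch in Python is given below (variable names are hypothetical; it assumes the per-model metric scores and the per-checkpoint WReN accuracies have already been collected):

```python
import numpy as np
from scipy.stats import spearmanr

def rank_correlations(metric_scores, accuracy_at_step,
                      steps=(1000, 2000, 5000, 10000, 20000, 50000, 100000)):
    """Spearman rank correlation between each metric and down-stream accuracy per checkpoint.

    metric_scores: dict mapping metric name -> array of shape (n_models,).
    accuracy_at_step: dict mapping step -> array of shape (n_models,) with mean WReN accuracy.
    """
    table = {}
    for name, scores in metric_scores.items():
        table[name] = {step: spearmanr(scores, accuracy_at_step[step])[0] for step in steps}
    return table
```

Rank correlation is used here because it is invariant to monotone rescaling of the individual metrics, which are not directly comparable on an absolute scale.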
Disentangled Representations in the Few-Sample Regime If we compare the correlation of the disentanglement metric with the highest correlation (FactorVAE) to that of the Reconstruction error in the few-sample regime, then we find that disentanglement correlates much better with down-stream accuracy. Indeed, while low Reconstruction error indicates that all information is available in the representation (to reconstruct the image) it makes no assumptions about how this information is encoded. We observe strong evidence that disentangled representations yield better down-stream accuracy using relatively few samples, and we therefore conclude that they are indeed more sample efficient compared to entangled representations in this regard.
Figure 4 demonstrates the down-stream accuracy of the WReNs throughout training, binned into quartiles according to their degree of being disentangled as measured by the FactorVAE score (left), and in terms of Reconstruction error (right). It can be seen that representations that are more disentangled give rise to better relative performance consistently throughout all phases of training. If
we group models according to their Reconstruction error then we find that this (reversed) ordering is much less pronounced. An overview for all other metrics can be seen in Figures 14 and 15.
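A minimal sketch of the binning underlying this figure (assuming the metric scores and the per-checkpoint accuracy curves of all models are available as aligned arrays) could be:

```python
import numpy as np

def quartile_curves(scores, accuracy_curves):
    """Average down-stream accuracy curves per quartile of a metric score.

    scores: (n_models,) metric values, e.g. the FactorVAE score.
    accuracy_curves: (n_models, n_checkpoints) accuracy over training.
    Returns an array of shape (4, n_checkpoints), one mean curve per quartile.
    """
    edges = np.quantile(scores, [0.25, 0.5, 0.75])
    bins = np.digitize(scores, edges)  # 0 = lowest quartile, 3 = highest quartile
    return np.stack([accuracy_curves[bins == q].mean(axis=0) for q in range(4)])
```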
Disentangled Representations in the Many-Sample Regime In the many-sample regime (i.e. when training for 100K steps on batches of randomly drawn instances in Figure 3) we find that there is no longer a strong correlation between the scores assigned by the various disentanglement metrics and down-stream performance. This is perhaps not surprising as neural networks are general function approximators that, given access to enough labeled samples, are expected to overcome potential difficulties in using entangled representations. The observation that Reconstruction error correlates much more strongly with down-stream accuracy in this regime further confirms that this is the case.
A similar observation can be made if we look at the difference in down-stream accuracy between the top and bottom half of the models according to each metric in Figures 5 and 16 (Appendix B). For all disentanglement metrics, larger positive differences are observed in the few-sample regime that gradually reduce as more samples are observed. Meanwhile, the gap gradually increases for Reconstruction error upon seeing additional samples.
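The top-versus-bottom comparison can be computed with a short helper of the following form (a sketch; the score and accuracy arrays are assumed to be aligned across models):

```python
import numpy as np

def half_split_gap(scores, accuracy):
    """Difference in mean accuracy between the top and bottom half of models ranked by a metric."""
    order = np.argsort(scores)
    half = len(order) // 2
    return accuracy[order[-half:]].mean() - accuracy[order[:half]].mean()
```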
Differences in terms of Final Accuracy In our final analysis we consider the rank correlation between down-stream accuracy and the various metrics, split according to their final accuracy. Figure 6 shows the rank correlation for the worst performing fifty percent of the models after 100K steps (top), and for the best performing fifty percent (bottom). While these results should be interpreted with care as the split depends on the final accuracy, we still observe interesting results: It can be seen that disentanglement (i.e. FactorVAE score) remains strongly correlated with down-stream performance for both splits in the
few-sample regime. At the same time, the benefit of lower Reconstruction error appears to be limited to the worst 50% of models. This is intuitive, as when the Reconstruction error is too high there may not be enough information present to solve the down-stream tasks. However, regarding the top performing models (best 50%), it appears that the relative gains from further reducing reconstruction error are of limited use.
5 Conclusion
In this work we investigated whether disentangled representations allow one to learn good models for non-trivial down-stream tasks with fewer samples. We created two abstract visual reasoning tasks based on existing data sets for which the ground truth factors of variation are known. We trained a diverse set of 360 disentanglement models based on four state-of-the-art disentanglement approaches and evaluated their representations using 3600 abstract reasoning models. We observed compelling evidence that more disentangled representations are more sample-efficient in the considered downstream learning task. We draw three main conclusions from these results: First, these results provide concrete motivation why one might want to pursue disentanglement as a property of learned representations in the unsupervised case. Second, we still observed differences between disentanglement metrics, which should motivate further work in understanding what different properties they capture. None of the metrics achieved perfect correlation in the few-sample regime, which also suggests that it is not yet fully understood what makes one representation better than another in terms of learning. Third, it might be useful to extend the methodology in this study to other complex down-stream tasks, or include an investigation of other purported benefits of disentangled representations.
Acknowledgments
The authors thank Adam Santoro, Josip Djolonga, Paulo Rauber and the anonymous reviewers for helpful discussions and comments. This research was partially supported by the Max Planck ETH Center for Learning Systems, a Google Ph.D. Fellowship (to Francesco Locatello), and the Swiss National Science Foundation (grant 200021_165675/1 to Jürgen Schmidhuber). This work was partially done while Francesco Locatello was at Google Research. | 1. What is the focus of the paper in terms of its contribution?
2. What are the concerns regarding the methodology used in the paper?
3. How does the reviewer assess the quality of the paper's content?
4. What are the limitations of the proposed approach according to the reviewer?
5. How does the reviewer evaluate the clarity and significance of the paper's content? | Review | Review
Originality This paper does not focus on developing a novel method. All disentanglement methods have been previously proposed. The WReN that solves the abstract reasoning tasks is also an existing method. Simply combining these methods does not seem novel. Quality I have concerns about the methodology adopted in this paper. The paper focuses on discussing the relationship between the accuracy of the abstract reasoning tasks and the disentanglement score. However, disentanglement scores can only be computed when the ground-truth factors of variation are available. If ground-truth factors are available, then we can directly use the ground-truth factors to train WReN and achieve excellent performance, as shown in Figure 2, or we can train regressors/classifiers that predict the ground-truth factor before training WReN; but we do not need disentanglement learning. If ground-truth factors are not available, then we cannot compute disentanglement scores, and we are not able to utilize the results shown in Figures 3, 4 and 5 to select the best disentangled representation. Therefore, it looks to me that disentanglement learning is not very helpful in abstract reasoning tasks. Clarity This paper is well-organized and not difficult to follow. Significance The details are provided in Section 1. I think the contribution of this paper would be reasonable if the authors can address my concerns about the methodology. Minor issues It looks to me that the word "up-stream" in this paper should be changed to "down-stream"
NIPS | Title
Are Disentangled Representations Helpful for Abstract Visual Reasoning?
Abstract
A disentangled representation encodes information about the salient factors of variation in the data independently. Although it is often argued that this representational format is useful in learning to solve many real-world down-stream tasks, there is little empirical evidence that supports this claim. In this paper, we conduct a large-scale study that investigates whether disentangled representations are more suitable for abstract reasoning tasks. Using two new tasks similar to Raven’s Progressive Matrices, we evaluate the usefulness of the representations learned by 360 state-of-the-art unsupervised disentanglement models. Based on these representations, we train 3600 abstract reasoning models and observe that disentangled representations do in fact lead to better down-stream performance. In particular, they enable quicker learning using fewer samples.
1 Introduction
Learning good representations of high-dimensional sensory data is of fundamental importance to Artificial Intelligence [4, 3, 6, 49, 7, 69, 67, 50, 59, 73]. In the supervised case, the quality of a representation is often expressed through the ability to solve the corresponding down-stream task. However, in order to leverage vast amounts of unlabeled data, we require a set of desiderata that apply to more general real-world settings.
Following the successes in learning distributed representations that efficiently encode the content of high-dimensional sensory data [45, 56, 76], recent work has focused on learning representations that are disentangled [6, 69, 68, 73, 71, 26, 27, 42, 10, 63, 16, 52, 53, 48, 9, 51]. A disentangled representation captures information about the salient (or explanatory) factors of variation in the data, isolating information about each specific factor in only a few dimensions. Although the precise circumstances that give rise to disentanglement are still being debated, the core concept of a local correspondence between data-generative factors and learned latent codes is generally agreed upon [16, 26, 52, 63, 71].
Disentanglement is mostly about how information is encoded in the representation, and it is often argued that a representation that is disentangled is desirable in learning to solve challenging real-world down-stream tasks [6, 73, 59, 7, 26, 68]. Indeed, in a disentangled representation, information about an individual factor value can be readily accessed and is robust to changes in the input that do not affect this factor. Hence, learning to solve a down-stream task from a disentangled representation is expected to require fewer samples and be easier in general [68, 6, 28, 29, 59]. Real-world generative processes are also often based on latent spaces that factorize. In this case, a disentangled
representation that captures this product space is expected to help in generalizing systematically in this regard [18, 22, 59].
Several of these purported benefits can be traced back to empirical evidence presented in the recent literature. Disentangled representations have been found to be more sample-efficient [29], less sensitive to nuisance variables [55], and better in terms of (systematic) generalization [1, 16, 28, 35, 70]. However, in other cases it is less clear whether the observed benefits are actually due to disentanglement [48]. Indeed, while these results are generally encouraging, a systematic evaluation on a complex down-stream task of a wide variety of disentangled representations obtained by training different models, using different hyper-parameters and data sets, appears to be lacking.
Contributions In this work, we conduct a large-scale evaluation1 of disentangled representations to systematically evaluate some of these purported benefits. Rather than focusing on a simple single factor classification task, we evaluate the usefulness of disentangled representations on abstract visual reasoning tasks that challenge the current capabilities of state-of-the-art deep neural networks [30, 65]. Our key contributions include:
• We create two new visual abstract reasoning tasks similar to Raven’s Progressive Matrices [61] based on two disentanglement data sets: dSprites [27], and 3dshapes [42]. A key design property of these tasks is that they are hard to solve based on statistical co-occurrences and require reasoning about the relations between different objects.
• We train 360 unsupervised disentanglement models spanning four different disentanglement approaches on the individual images of these two data sets and extract their representations. We then train 3600 Wild Relation Networks [65] that use these disentangled representations to perform abstract reasoning and measure their accuracy at various stages of training.
• We evaluate the usefulness of disentangled representations by comparing the accuracy of these abstract reasoning models to the degree of disentanglement of the representations (measured using five different disentanglement metrics). We observe compelling evidence that more disentangled representations yield better sample-efficiency in learning to solve the considered abstract visual reasoning tasks. In this regard our results are complementary to a recent prior study of disentangled representations that did not find evidence of increased sample efficiency on a much simpler down-stream task [52].
2 Background and Related Work on Learning Disentangled Representations
Despite an increasing interest in learning disentangled representations, a precise definition is still a topic of debate [16, 26, 52, 63]. In recent work, Eastwood et al. [16] and Ridgeway et al. [63] put forth three criteria of disentangled representations: modularity, compactness, and explicitness. Modularity implies that each code in a learned representation is associated with only one factor of variation in the environment, while compactness ensures that information regarding a single factor is represented using only one or a few codes. Combined, modularity and compactness suggest that a disentangled representation implements a one-to-one mapping between salient factors of variation in the environment and the learned codes. Finally, a disentangled representation is often assumed to be explicit, in that the mapping between factors and learned codes can be implemented with a simple (i.e. linear) model. While modularity is commonly agreed upon, compactness is a point of contention. Ridgeway et al. [63] argue that some features (e.g., the rotation of an object) are best described with multiple codes although this is essentially not compact. The recent work by Higgins et al. [26] suggests an alternative view that may resolve these different perspectives in the future.
Metrics Multiple metrics have been proposed that leverage the ground-truth generative factors of variation in the data to measure disentanglement in learned representations. In recent work, Locatello et al. [52] studied several of these metrics, which we will adopt for our purposes in this work: the BetaVAE score [27], the FactorVAE score [42], the Mutual Information Gap (MIG) [10], the disentanglement score from Eastwood et al. [16] referred to as the DCI Disentanglement score, and the Separated Attribute Predictability (SAP) score [48].
1Reproducing these experiments requires approximately 2.73 GPU years (NVIDIA P100).
The BetaVAE score, FactorVAE score, and DCI Disentanglement score focus primarily on modularity. The first two assess this property through interventions, i.e. by keeping one factor fixed and varying all others, while the DCI Disentanglement score estimates this property from the relative importance assigned to each feature by a random forest regressor in predicting the factor values. The SAP score and MIG are mostly focused on compactness. The SAP score reports the difference between the top two most predictive latent codes of a given factor, while MIG reports the difference between the top two latent variables with highest mutual information to a certain factor.
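Since MIG is defined directly in terms of pairwise mutual information, a minimal sketch of one common way to estimate it is shown below (it assumes discrete ground-truth factors and discretizes each latent dimension into bins; this is an illustrative estimator, not necessarily the exact implementation used in the cited works):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mig(codes, factors, n_bins=20):
    """Mean (over factors) normalized gap between the two latent dimensions with the
    highest mutual information to each factor.

    codes: (n_samples, n_latents) continuous representation.
    factors: (n_samples, n_factors) discrete ground-truth factor values.
    """
    n_latents, n_factors = codes.shape[1], factors.shape[1]
    # Discretize each latent dimension so that discrete mutual information can be used.
    binned = np.column_stack([
        np.digitize(codes[:, j], np.histogram(codes[:, j], n_bins)[1][:-1])
        for j in range(n_latents)
    ])
    gaps = []
    for i in range(n_factors):
        mi = np.array([mutual_info_score(factors[:, i], binned[:, j]) for j in range(n_latents)])
        entropy = mutual_info_score(factors[:, i], factors[:, i])  # H(factor_i) in nats
        top_two = np.sort(mi)[-2:]
        gaps.append((top_two[1] - top_two[0]) / entropy)
    return float(np.mean(gaps))
```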
The degree of explicitness captured by any of the disentanglement metrics remains unclear. In prior work it was found that there is a positive correlation between disentanglement metrics and down-stream performance on single factor classification [52]. However, it is not obvious whether disentangled representations are useful for down-stream performance per se, or if the correlation is driven by the explicitness captured in the scores. In particular, the DCI Disentanglement score and the SAP score compute disentanglement by training a classifier on the representation. The former uses a random forest regressor to determine the relative importance of each feature, and the latter considers the gap in prediction accuracy of a support vector machine trained on each feature in the representation. MIG is based on the matrix of pairwise mutual information between factors of variation and dimensions of the representation, which also relates to the explicitness of the representation. On the other hand, the BetaVAE and FactorVAE scores predict the index of a fixed factor of variation and not the exact value.
We note that current disentanglement metrics each require access to the ground-truth factors of variation, which may hinder the practical feasibility of learning disentangled representations. Here our goal is to assess the usefulness of disentangled representations more generally (i.e. assuming it is possible to obtain them), which can be verified independently.
Methods Several methods have been proposed to learn disentangled representations. Here we are interested in evaluating the benefits of disentangled representations that have been learned through unsupervised learning. In order to control for potential confounding factors that may arise in using a single model, we use the representations learned from four state-of-the-art approaches from the literature: β-VAE [27], FactorVAE [42], β-TCVAE [10], and DIP-VAE [48]. A similar choice of models was used in a recent study by Locatello et al. [52].
Using notation from Tschannen et al. [73], we can view all of these models as Auto-Encoders that are trained with the regularized variational objective of the form:
$\mathbb{E}_{p(x)}\big[\mathbb{E}_{q_\phi(z|x)}[-\log p_\theta(x|z)]\big] + \lambda_1\, \mathbb{E}_{p(x)}\big[R_1(q_\phi(z|x))\big] + \lambda_2\, R_2(q_\phi(z)). \qquad (1)$
The output of the encoder that parametrizes qφ(z|x) yields the representation. Regularization serves to control the information flow through the bottleneck induced by the encoder, while different regularizers primarily vary in the notion of disentanglement that they induce. β-VAE restricts the capacity of the information bottleneck by penalizing the KL-divergence, using β = λ1 > 1 with R1(qφ(z|x)) := DKL[qφ(z|x)||p(z)], and λ2 = 0; FactorVAE penalizes the Total Correlation [77] of the latent variables via adversarial training, using λ1 = 0 and λ2 = 1 with R2(qφ(z)) := TC(qφ(z)); β-TCVAE also penalizes the Total Correlation but estimates its value via a biased Monte Carlo estimator; and finally DIP-VAE penalizes a mismatch in moments between the aggregated posterior and a factorized prior, using λ1 = 0 and λ2 ≥ 1 with R2(qφ(z)) := ||Covqφ(z) − I||2F .
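As an illustration of how the β-VAE special case of objective (1) is typically implemented (a minimal sketch assuming a Bernoulli decoder likelihood and a diagonal Gaussian posterior; the hyper-parameter value is a placeholder rather than the setting used in the experiments):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, recon_logits, mu, logvar, beta=4.0):
    """Reconstruction term plus beta-weighted KL between q(z|x) and a standard normal prior.

    x, recon_logits: (B, C, H, W) inputs and decoder outputs (as logits).
    mu, logvar: (B, latent_dim) parameters of the approximate posterior.
    """
    recon = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl
```

The other models differ only in the regularizer: FactorVAE and β-TCVAE penalize the Total Correlation of the aggregated posterior, and DIP-VAE penalizes a moment mismatch, as described above.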
Other Related Works Learning disentangled representations is similar in spirit to non-linear ICA, although it relies primarily on (architectural) inductive biases and different degrees of supervision [13, 2, 39, 36, 37, 38, 25, 33, 32]. Due to the initial poor performance of purely unsupervised methods, the field initially focused on semi-supervised [62, 11, 57, 58, 44, 46] and weakly supervised approaches [31, 12, 40, 21, 78, 20, 15, 35, 80, 54, 47, 64, 8]. In this paper, we consider the setup of the recent unsupervised methods [27, 26, 48, 42, 9, 52, 71, 10]. Finally, while this paper focuses on evaluating the benefits of disentangled features, these are complementary to recent work that focuses on the unsupervised “disentangling” of images into compositional primitives given by object-like representations [17, 23, 24, 22, 60, 74, 75]. Disentangling pose, style, or motion from content are classical vision tasks that have been studied with different degrees of supervision [72, 79, 80, 34, 19, 14, 21, 36].
3 Abstract Visual Reasoning Tasks for Disentangled Representations
In this work we evaluate the purported benefits of disentangled representations on abstract visual reasoning tasks. Abstract reasoning tasks require a learner to infer abstract relationships between multiple entities (i.e. objects in images) and re-apply this knowledge in newly encountered settings [41]. Humans are known to excel at this task, as is evident from experiments with simple visual IQ tests such as Raven’s Progressive Matrices (RPMs) [61]. An RPM consists of several context panels organized in multiple sequences, with one sequence being incomplete. The task consists of completing the final sequence by choosing from a given set of answer panels. Choosing the correct answer panel requires one to infer the relationships between the panels in the complete context sequences, and apply this knowledge to the remaining partial sequence.
In recent work, Santoro et al. [65] evaluated the abstract reasoning capabilities of deep neural networks on this task. Using a data set of RPM-like matrices they found that standard deep neural network architectures struggle at abstract visual reasoning under different training and generalization regimes. Their results indicate that it is difficult to solve these tasks by relying purely on superficial image statistics, and can only be solved efficiently through abstract visual reasoning. This makes this setting particularly appealing for investigating the benefits of disentangled representations.
Generating RPM-like Matrices Rather than evaluating disentangled representations on the Procedurally Generated Matrices (PGM) dataset from Barrett et al. [65] we construct two new abstract RPM-like visual reasoning datasets based on two existing datasets for disentangled representation learning. Our motivation for this is twofold: first, it is not clear what a ground-truth disentangled representation should look like for the PGM dataset, whereas the two existing disentanglement data sets include the ground-truth factors of variation. Second, by using established data sets for disentanglement, we can reuse hyper-parameter ranges that have proven successful. We note that our study is substantially different from recent work by Steenbrugge et al. [70] who evaluate the representation of a single trained β-VAE [27] on the original PGM data set.
To construct the abstract reasoning tasks, we use the ground-truth generative model of the dSprites [27] and 3dshapes [42] data sets with the following changes2: For dSprites, we ignore the orientation feature for the abstract reasoning tasks as certain objects such as squares and ellipses exhibit rotational symmetries. To compensate, we add background color (5 different shades of gray linearly spaced between white and black) and object color (6 different colors linearly spaced in HUSL hue space) as two new factors of variation. Similarly, for the abstract reasoning tasks (but not when learning representations), we only consider three different values for the scale of the object (instead of 6) and only four values for the x and y position (instead of 32). For 3dshapes, we retain all of the original factors but only consider four different values for scale and azimuth (out of 8 and 16) for the abstract reasoning tasks. We refer to Figure 7 in Appendix B for samples from these data sets.
For the modified dSprites and 3dshapes, we now create corresponding abstract reasoning tasks. The key idea is that one is given a 3× 3 matrix of context image panels with the bottom right image panel missing, as well as a set of six potential answer panels (see Figure 1 for an example). One then has to infer which of the answers fits in the missing panel of the 3× 3 matrix based on relations between
2These were implemented to ensure that humans can visually distinguish between the different values of each factor of variation.
image panels in the rows of the 3× 3 matrices. Due to the categorical nature of ground-truth factors in the underlying data sets, we focus on the AND relationship in which one or more factor values are equal across a sequence of context panels [65].
We generate instances of the abstract reasoning tasks in the following way: First, we uniformly sample whether 1, 2, or 3 ground-truth factors are fixed across rows in the instance to be generated. Second, we uniformly sample without replacement the set of underlying factors in the underlying generative model that should be kept constant. Third, we uniformly sample a factor value from the ground-truth model for each of the three rows and for each of the fixed factors3. Fourth, for all other ground-truth factors we also sample 3× 3 matrices of factor values from the ground-truth model with the single constraint that the factor values are not allowed to be constant across the first two rows (in that case we sample a new set of values). After this we have ground-truth factor values for each of the 9 panels in the correct solution to the abstract reasoning task, and we can sample corresponding images from the ground-truth model. To generate difficult alternative answers, we take the factor values of the correct answer panel and randomly resample the non-fixed factors as well as a random fixed factor until the factor values no longer satisfy the relations in the original abstract reasoning task. We repeat this process to obtain five incorrect answers and finally insert the correct answer in a random position. Examples of the resulting abstract reasoning tasks can be seen in Figure 1 as well as in Figures 18 and 19 in Appendix C.
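A simplified sketch of this sampling procedure is given below (sample_image is a hypothetical helper that renders an image from a dictionary of factor values; the constraint that free factors must not be constant across the first two rows and the construction of the five distractor answers are omitted for brevity):

```python
import random

def sample_instance(factor_sizes, sample_image):
    """Generate one simplified 3x3 abstract reasoning instance.

    factor_sizes: dict mapping factor name -> number of possible values.
    sample_image: function mapping a dict of factor values -> image.
    """
    names = list(factor_sizes)
    n_fixed = random.randint(1, 3)                 # step 1: number of factors constant per row
    fixed = random.sample(names, n_fixed)          # step 2: which factors are constant
    panels = []
    for _ in range(3):                             # one row at a time
        row_fixed = {f: random.randrange(factor_sizes[f]) for f in fixed}   # step 3: row value
        for _ in range(3):
            panel = dict(row_fixed)
            for f in names:                        # step 4: free factors vary per panel
                if f not in fixed:
                    panel[f] = random.randrange(factor_sizes[f])
            panels.append(panel)
    images = [sample_image(p) for p in panels]
    answer = images.pop()                          # bottom-right panel is the correct answer
    return images, answer                          # 8 context panels + ground-truth answer
```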
Models We will make use of the Wild Relation Network (WReN) to solve the abstract visual reasoning tasks [65]. It incorporates relational structure, and was introduced in prior work specifically for such tasks. The WReN is evaluated for each answer panel a ∈ A = {a1, ..., a6} in relation to all the context-panels C = {c1, ..., c8} as follows:
$\mathrm{WReN}(a, C) = f_\phi\Big(\sum_{e_1, e_2 \in E} g_\theta(e_1, e_2)\Big), \qquad E = \{\mathrm{CNN}(c_1), \ldots, \mathrm{CNN}(c_8)\} \cup \{\mathrm{CNN}(a)\} \qquad (2)$
First, an embedding is computed for each panel using a deep Convolutional Neural Network (CNN); these embeddings serve as input to a Relation Network (RN) module [66]. The Relation Network reasons about the different relationships between the context and answer panels, and outputs a score. The answer panel a ∈ A with the highest score is chosen as the final output. The Relation Network implements a suitable inductive bias for (relational) reasoning [5]. It separates the reasoning process into two stages. First, gθ is applied to all pairs of panel embeddings to consider relations between the answer panel and each of the context panels, and relations among the context panels. Weight-sharing of gθ between the panel-embedding pairs makes it difficult to overfit to the image statistics of the individual panels. Finally, fφ produces a score for the given answer panel in relation to the context panels by globally considering the different relations between the panels as a whole. Note that using the same WReN for all answer panels ensures that each answer panel is subject to the same reasoning process.
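A minimal sketch of this scoring function is shown below (the panel embeddings are assumed to be precomputed by the CNN, the batch dimension is omitted for clarity, and the layer sizes are placeholders rather than the configuration used in the experiments):

```python
import torch
import torch.nn as nn

class WReNScore(nn.Module):
    """Score one candidate answer against the eight context panels as in Eq. (2)."""
    def __init__(self, embed_dim=256, hidden=512):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * embed_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, context_emb, answer_emb):
        # context_emb: (8, D) CNN embeddings of the context panels; answer_emb: (D,).
        e = torch.cat([context_emb, answer_emb.unsqueeze(0)], dim=0)      # (9, D)
        pairs = torch.cat([e.unsqueeze(1).expand(-1, 9, -1),
                           e.unsqueeze(0).expand(9, -1, -1)], dim=-1)     # (9, 9, 2D)
        relations = self.g(pairs).sum(dim=(0, 1))                         # sum g over all pairs
        return self.f(relations)                                          # scalar score

# At test time the score is computed for each of the six answer panels and the
# panel with the highest score is chosen.
```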
4 Experiments
4.1 Learning Disentangled Representations
We train β-VAE [27], FactorVAE [42], β-TCVAE [10], and DIP-VAE [48] on the panels from the modified dSprites and 3dshapes data sets4. For β-VAE we consider two variations: the standard version using a fixed β, and a version trained with the controlled capacity increase presented by Burgess et al. [9]. Similarly for DIP-VAE we consider both the DIP-VAE-I and DIP-VAE-II variations of the proposed regularizer [48]. For each of these methods, we considered six different values for their (main) hyper-parameter and five different random seeds. The remaining experimental details are presented in Appendix A.
After training, we end up with 360 encoders, whose outputs are expected to cover a wide variation of different representational formats with which to encode information in the images. Figures 9 and 10 in the Appendix show histograms of the reconstruction errors obtained after training, and
3Note that different rows may have different values. 4Code is made available as part of disentanglement_lib at https://git.io/JelEv.
the scores that various disentanglement metrics assigned to the corresponding representations. The reconstructions are mostly good (see also Figure 7), which confirms that the learned representations tend to accurately capture the image content. Correspondingly, we expect any observed difference in down-stream performance when using these representations to be primarily the result of how information is encoded. In terms of the scores of the various disentanglement metrics, we observe a wide range of values. This suggests that, depending on the definition of disentanglement one adopts, there are large differences in the quality of the learned representations.
4.2 Abstract Visual Reasoning
We train different WReN models where we control for two potential confounding factors: the representation produced by a specific model used to embed the input images, as well as the hyper-parameters of the WReN model. For hyper-parameters, we use a random search space as specified in Appendix A. We use the following training protocol: We train each of these models using a batch size of 32 for 100K iterations where each mini-batch consists of newly generated random instances of the abstract reasoning tasks. Similarly, every 1000 iterations, we evaluate the accuracy on 100 mini-batches of fresh samples. We note that this corresponds to the statistical optimization setting, sidestepping the need to investigate the impact of empirical risk minimization and overfitting5.
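A hypothetical skeleton of this protocol is sketched below (generate_batch is a placeholder for the instance generator of Section 3 returning context panels, answer panels, and the index of the correct answer; the actual implementation may differ):

```python
import torch

def train_wren(model, optimizer, generate_batch, loss_fn, iters=100_000, batch_size=32):
    for step in range(1, iters + 1):
        context, answers, target = generate_batch(batch_size)   # fresh random instances each step
        loss = loss_fn(model(context, answers), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % 1000 == 0:                                     # evaluate on 100 fresh mini-batches
            with torch.no_grad():
                accs = []
                for _ in range(100):
                    c, a, t = generate_batch(batch_size)
                    accs.append((model(c, a).argmax(dim=1) == t).float().mean().item())
            print(step, sum(accs) / len(accs))
```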
4.2.1 Initial Study
First, we trained a set of baseline models to assess the overall complexity of the abstract reasoning task. We consider three types of representations: (i) CNN representations which are learned from scratch (with the same architecture as in the disentanglement models) yielding standard WReN, (ii) pre-trained frozen representations based on a random selection of the pre-trained disentanglement models, and (iii) directly using the ground-truth factors of variation (both one-hot encoded and integer encoded). We train 30 different models for each of these approaches and data sets with different random seeds and different draws from the search space over hyper-parameter values.
An overview of the training behaviour and the accuracies achieved can be seen in Figures 2 and 11 (Appendix B). We observe that the standard WReN model struggles to obtain good results on average, even after having seen many different samples at 100K steps. This is due to the fact that training from scratch is hard and runs may get stuck in local minima where they predict each of the answers with equal probabilities. Given the pre-training and the exposure to additional unsupervised samples, it is not surprising that the learned representations from the disentanglement models perform better. The WReN models that are given the true factors also perform well, already after only a few steps of training. We also observe that different runs exhibit a significant spread, which motivates our analysis of the average accuracy across many runs in the next section.
It appears that dSprites is the harder task, with models reaching an average score of 80%, compared to an average of 90% on 3dshapes. Finally, we note that most learning progress takes place in the first 20K steps, and we thus expect the benefits of disentangled representations to be most clear in this regime.
4.2.2 Evaluating Disentangled Representations
Based on the results from the initial study, we train a full set of WReN models in the following manner: We first sample a set of 10 hyper-parameter configurations from our search space and then train WReN models using these configurations for each of the 360 representations from the disentanglement
5Note that the state space of the data generating distribution is very large: 106 factor combinations per panel and 14 panels for each instance yield more than 10144 potential instances (minus invalid configurations).
models. We then compare the average down-stream training accuracy of WReN with the BetaVAE score, the FactorVAE score, MIG, the DCI Disentanglement score, and the Reconstruction error obtained by the decoder on the unsupervised learning task. As a sanity check, we also compare with the accuracy of a Gradient Boosted Tree (GBT10000) ensemble and a Logistic Regressor (LR10000) on single factor classification (averaged across factors) as measured on 10K samples. As expected, we observe a positive correlation between the performance of the WReN and the classifiers (see Figure 3).
Differences in Disentanglement Metrics Figure 3 displays the rank correlation (Spearman) between these metrics and the down-stream classification accuracy, evaluated after training for 1K, 2K, 5K, 10K, 20K, 50K, and 100K steps. If we focus on the disentanglement metrics, several interesting observations can be made. In the few-sample regime (up to 20K steps) and across both data sets it can be seen that both the BetaVAE score, and the FactorVAE score are highly correlated with down-stream accuracy. The DCI Disentanglement score is correlated slightly less, while the MIG and SAP score exhibit a relatively weak correlation.
These differences between the different disentanglement metrics are perhaps not surprising, as they are also reflected in their overall correlation (see Figure 8 in Appendix B). Note that the BetaVAE score, and the FactorVAE score directly measure the effect of intervention, i.e. what happens to the representation if all factors but one are varied, which is expected to be beneficial in efficiently comparing the content of two representations as required for this task. Similarly, it may be that MIG and SAP score have a more difficult time in differentiating representations that are only partially disentangled. Finally, we note that the best performing metrics on this task are mostly measuring modularity, as opposed to compactness. A more detailed overview of the correlation between the various metrics and down-stream accuracy can be seen in Figures 12 and 13 in Appendix B.
Disentangled Representations in the Few-Sample Regime If we compare the correlation of the disentanglement metric with the highest correlation (FactorVAE) to that of the Reconstruction error in the few-sample regime, then we find that disentanglement correlates much better with down-stream accuracy. Indeed, while low Reconstruction error indicates that all information is available in the representation (to reconstruct the image) it makes no assumptions about how this information is encoded. We observe strong evidence that disentangled representations yield better down-stream accuracy using relatively few samples, and we therefore conclude that they are indeed more sample efficient compared to entangled representations in this regard.
Figure 4 demonstrates the down-stream accuracy of the WReNs throughout training, binned into quartiles according to their degree of being disentangled as measured by the FactorVAE score (left), and in terms of Reconstruction error (right). It can be seen that representations that are more disentangled give rise to better relative performance consistently throughout all phases of training. If
we group models according to their Reconstruction error then we find that this (reversed) ordering is much less pronounced. An overview for all other metrics can be seen in Figures 14 and 15.
Disentangled Representations in the Many-Sample Regime In the many-sample regime (i.e. when training for 100K steps on batches of randomly drawn instances in Figure 3) we find that there is no longer a strong correlation between the scores assigned by the various disentanglement metrics and down-stream performance. This is perhaps not surprising as neural networks are general function approximators that, given access to enough labeled samples, are expected to overcome potential difficulties in using entangled representations. The observation that Reconstruction error correlates much more strongly with down-stream accuracy in this regime further confirms that this is the case.
A similar observation can be made if we look at the difference in down-stream accuracy between the top and bottom half of the models according to each metric in Figures 5 and 16 (Appendix B). For all disentanglement metrics, larger positive differences are observed in the few-sample regime that gradually reduce as more samples are observed. Meanwhile, the gap gradually increases for Reconstruction error upon seeing additional samples.
Differences in terms of Final Accuracy In our final analysis we consider the rank correlation between down-stream accuracy and the various metrics, split according to their final accuracy. Figure 6 shows the rank correlation for the worst performing fifty percent of the models after 100K steps (top), and for the best performing fifty percent (bottom). While these results should be interpreted with care as the split depends on the final accuracy, we still observe interesting results: It can be seen that disentanglement (i.e. FactorVAE score) remains strongly correlated with down-stream performance for both splits in the
few-sample regime. At the same time, the benefit of lower Reconstruction error appears to be limited to the worst 50% of models. This is intuitive, as when the Reconstruction error is too high there may not be enough information present to solve the down-stream tasks. However, regarding the top performing models (best 50%), it appears that the relative gains from further reducing reconstruction error are of limited use.
5 Conclusion
In this work we investigated whether disentangled representations allow one to learn good models for non-trivial down-stream tasks with fewer samples. We created two abstract visual reasoning tasks based on existing data sets for which the ground truth factors of variation are known. We trained a diverse set of 360 disentanglement models based on four state-of-the-art disentanglement approaches and evaluated their representations using 3600 abstract reasoning models. We observed compelling evidence that more disentangled representations are more sample-efficient in the considered downstream learning task. We draw three main conclusions from these results: First, these results provide concrete motivation why one might want to pursue disentanglement as a property of learned representations in the unsupervised case. Second, we still observed differences between disentanglement metrics, which should motivate further work in understanding what different properties they capture. None of the metrics achieved perfect correlation in the few-sample regime, which also suggests that it is not yet fully understood what makes one representation better than another in terms of learning. Third, it might be useful to extend the methodology in this study to other complex down-stream tasks, or include an investigation of other purported benefits of disentangled representations.
Acknowledgments
The authors thank Adam Santoro, Josip Djolonga, Paulo Rauber and the anonymous reviewers for helpful discussions and comments. This research was partially supported by the Max Planck ETH Center for Learning Systems, a Google Ph.D. Fellowship (to Francesco Locatello), and the Swiss National Science Foundation (grant 200021_165675/1 to Jürgen Schmidhuber). This work was partially done while Francesco Locatello was at Google Research. | 1. What is the focus of the paper regarding abstract reasoning tasks?
2. What are the strengths of the proposed approach, particularly in terms of disentangled representations?
3. What are the weaknesses of the paper, especially in terms of sample efficiency and comparison methods?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What are the limitations of the paper, and what aspects could be improved in future works? | Review | Review
The paper conducts a large-scale study of the performance of disentangled representations on upstream abstract reasoning tasks. The abstract reasoning tasks use the methodology of Raven's progressive matrices but use samples from dSprites and 3dshapes as the skin, with some modifications. Wild Relation Network is used as the upstream model, which would use representations learned by the models under comparison: beta-VAE, FactorVAE, beta-TCVAE, DIP-VAE, and variants of these which improve on it. There are many small bits of useful information in the paper, such as the fact that metrics which measure modularity as opposed to compactness perform better in the upstream task. However, the main conclusion of the paper is that disentangled representations, in general, do enable sample efficient learning in low-sample regimes as compared to learning from scratch. I wish the analysis could have been clearer and more space was dedicated to it. I don't fully understand how gradient-boosted trees or logistic regression were used as points of comparison. The first three pages are not very information-dense and perhaps should be compressed so that we get to the good stuff faster. Similarly, small details about the dataset generation could have been moved to the appendix. However, overall the paper is well-written and my criticism on clarity is minor. The paper tackles a very important question on representation learning and provides interesting new insights about it.
NIPS | Title
End-to-end Multi-modal Video Temporal Grounding
Abstract
We address the problem of text-guided video temporal grounding, which aims to identify the time interval of a certain event based on a natural language description. Different from most existing methods that only consider RGB images as visual features, we propose a multi-modal framework to extract complementary information from videos. Specifically, we adopt RGB images for appearance, optical flow for motion, and depth maps for image structure. While RGB images provide abundant visual cues of certain events, the performance may be affected by background clutters. Therefore, we use optical flow to focus on large motion and depth maps to infer the scene configuration when the action is related to objects recognizable with their shapes. To integrate the three modalities more effectively and enable inter-modal learning, we design a dynamic fusion scheme with transformers to model the interactions between modalities. Furthermore, we apply intra-modal self-supervised learning to enhance feature representations across videos for each modality, which also facilitates multi-modal learning. We conduct extensive experiments on the Charades-STA and ActivityNet Captions datasets, and show that the proposed method performs favorably against state-of-the-art approaches.
1 Introduction
With the rapid growth of video data in our daily lives, video understanding has become an increasingly important task in computer vision. Research involving other modalities such as text and speech has also drawn much attention in recent years, e.g., video captioning [17, 23], and video question answering [18, 16]. In this paper, we focus on text-guided video temporal grounding, which aims to localize the starting and ending time of a segment corresponding to a text query. It is one of the most effective approaches to understanding video content, and is applicable to numerous tasks, such as video retrieval, video editing and human-computer interaction. This problem is considerably challenging as it requires accurate recognition of objects, scenes and actions, as well as joint comprehension of video and language.
Existing methods [34, 26, 33, 22] usually consider only RGB images as visual cues, which are less effective for recognizing objects and actions in videos with complex backgrounds. To understand the video contents more holistically, we propose a multi-modal framework to learn complementary visual features from RGB images, optical flow and depth maps. RGB images provide abundant visual information, which is essential for visual recognition. However, existing methods based on appearance alone are likely to be less effective for complex scenes with cluttered backgrounds. For example, since the query text descriptions usually involve moving objects such as “Closing a door” or “Throwing a pillow”, using optical flow as input makes it possible to identify such actions with large motion. On the other hand, depth is another cue that is invariant to color and lighting, and is often used to complement the RGB input in object detection and semantic segmentation. In our task, depth information helps the proposed model recognize actions involving objects with distinct shapes as the context. For example, actions such as “Sitting in a bed” or “Working at a table” are not easily recognized by optical flow due to small motion, but depth can provide structural information to assist the learning process. We also note that our goal is to design an end-to-end multi-modal framework
for video grounding by directly utilizing low-level cues such as optical flow and depth, while alternatives based on object detection or semantic segmentation are beyond the scope of this work.
To leverage multi-modal cues, one straightforward way is to construct a multi-stream model that takes an individual modality as the input in each stream, and then averages the multi-stream output predictions to obtain final results. However, we find that this scheme is less effective due to the lack of communication across different modalities, e.g., using depth cues alone without considering RGB features is not sufficient to capture the semantic information that the appearance cue provides. To tackle this issue, we propose a multi-modal framework with 1) an inter-modal module that learns cross-modal features, and 2) an intra-modal module to self-learn feature representations across videos.
For inter-modal learning, we design a fusion scheme with co-attentional transformers [20] to dynamically fuse features from different modalities. One motivation is that different videos may require a different combination of modalities, e.g., “Working at a table” would require more appearance and depth information, while optical flow is more important for “Throwing a pillow”. To enhance feature representations for each modality and thereby improve multi-modal learning, we introduce an intra-modal module via self-supervised contrastive learning [7, 15]. The goal is to ensure feature consistency across video clips when they contain the same action. For example, the same action “Eating” may happen at different locations with completely different backgrounds and contexts, or be paired with different text descriptions in which different food is eaten. Our intra-modal learning enforces features to be close to each other when they describe the same action and learns representations that are invariant to other distracting factors across videos, which in turn improves our multi-modal learning paradigm.
We conduct extensive experiments on the Charades-STA [10] and ActivityNet Captions [17] datasets to demonstrate the effectiveness of our multi-modal learning framework for video temporal grounding using (D)epth, (R)GB, and optical (F)low with the (T)ext as the query, and name our method as DRFT. First, we present the complementary property of multi-modality and the improved performance over the single-modality models. Second, we validate the individual contributions of our proposed components, i.e., inter- and intra-modal modules, that facilitate multi-modal learning. Finally, we show state-of-the-art performance for video temporal grounding against existing methods.
The main contributions of this work are summarized as follows: 1) We propose a multi-modal framework for text-guided video temporal grounding by extracting complementary information from RGB, optical flow and depth features. 2) We design a dynamic fusion mechanism across modalities via co-attentional transformers to effectively learn inter-modal features. 3) We apply self-supervised contrastive learning across videos for each modality to enhance intra-modal feature representations that are invariant to distracting factors with respect to actions.
2 Related Work
Text-Guided Video Temporal Grounding. Given a video and a natural language query, text-guided video temporal grounding aims to predict the starting and ending time of the video clip that best matches the query sentence. Existing methods for this task can be categorized into two groups, i.e., two-stage and one-stage schemes (see Figure 1(a)(b)). Most two-stage approaches adopt a propose-and-rank pipeline, where they first generate clip proposals and then rank the proposals based on their similarities with the query sentence. Early two-stage methods [10, 14] obtain proposals by scanning the whole video with sliding windows. Since the sliding window mechanism is computationally expensive and usually produces many redundant proposals, numerous methods are subsequently proposed to improve the efficiency and effectiveness of proposal generation. The TGN model [2] performs frame-by-word interactions and localizes the proposals in a single pass. Other approaches focus on reducing redundant proposals by generating query-guided proposals [31] or semantic activity proposals [3]. The MAN method [34] models the temporal relationships between proposals using a graph architecture to improve the quality of proposals. To alleviate the computation of observing the whole video, reinforcement learning [13, 30] is utilized to guide the intelligent agent to glance over the video in a discontinuous way. While the two-stage methods achieve promising results, the computational cost is high for comparing all proposal-query pairs, and the performance is largely limited by the quality of proposal generation.
To overcome the issues of two-stage methods, some recent approaches adopt a one-stage pipeline to directly predict the temporal segment from the fusion of video and text features. Most of the one-stage approaches focus on the attention mechanisms or interaction between modalities. For
example, the ABLR method [32] predicts the temporal coordinates using a co-attention based location regression algorithm. The ExCL mechanism [11] exploits the cross-modal interactions between video and text, and the PfTML-GA model [26] improves the performance by introducing the query-guided dynamic filter. Moreover, the DRN scheme [33] leverages dense supervision from the sparse annotations to facilitate the training process. Recently, the LGI model [22] decomposes the query sentence into multiple semantic phrases and conducts local and global interactions between the video and text features. In our framework, we adopt LGI as the baseline that uses the hierarchical video-text interaction. However, different from LGI, which only considers RGB frames as input, we take RGB, optical flow and depth as input, and design the inter-modality learning technique to learn complementary information from the video. Furthermore, we apply contrastive learning across videos to enhance the feature representations in each modality, which helps the learning of the whole model (see Figure 1(c)).
Multi-Modal Learning. As typical events or actions can be described by signals from multiple modalities, understanding the correlation between different modalities is crucial to solve problems more comprehensively. Research on joint vision and language learning [6, 23, 16, 7] has gained much attention in recent years since natural language is an intuitive way for human communication. Recent studies [24, 8] based on the transformer [29] have shown great success in self-supervised learning and transfer learning for natural language tasks. The transformer-based BERT model [8] has also been widely used to learn joint representations for vision and language. These methods [20, 21, 35, 19, 5, 9] aim to learn generic representations from large amounts of image-text pairs in a self-supervised manner, and then fine-tune the model for downstream vision and language tasks. The ViLBERT scheme [20] extracts features from image and text using two parallel BERT-style models, and then connects the two streams with the co-attentional transformer layers. In this work, we focus on the video temporal grounding task guided by texts, while introducing multi-modality to improve model learning, which has not been studied before. For fusing the multi-modal information, we leverage the co-attentional transformer layers [20] in our framework and design an approach that fuses the RGB features with the optical flow and depth features, respectively.
3 Proposed Framework
In this work, we address the problem of text-guided video temporal grounding using a multi-modal framework. The pipeline of the proposed framework is illustrated in Figure 2. Given an input video V = {V1, ..., VT} with T frames and a query sentence Q = {Q1, ..., QN} with N words, we aim to localize
the starting and ending time [ts, te] of the event corresponding to the query. To this end, we design a multi-modal framework to learn complementary visual information from RGB images, optical flow and depth maps. From the input video, we first compute the depth map of each frame and the optical flow of each pair of consecutive frames. We then apply the visual encoders Ed, Er, Ef to extract features from the depth, RGB and flow inputs. A textual encoder Et is utilized to extract the feature of the query sentence Q. The local-global interaction modules (LGI) then incorporate the textual feature into each visual modality, and generate the multi-modal features Md, Mr and Mf for depth, RGB and flow respectively.
To effectively integrate the features from different modalities and enable inter-modal feature learning, we propose a dynamic fusion scheme with transformers to model the interaction between modalities. The feature after integration is then fed into a regression module (REG) to predict the starting and ending time [ts, te] of the target video segment. To enhance the feature representations in each modality, we introduce an intra-modal learning module that conducts self-supervised contrastive learning across videos. The intra-modal learning is applied on the multi-modal features Md, Mr and Mf separately to enforce features of video segments containing the same action to be close to each other, and those from different action categories to be far apart.
3.1 Inter-Modal Feature Learning
Videos contain rich information in both spatial and temporal dimensions. To learn information more comprehensively, in addition to the RGB modality, we also consider optical flow that captures motion, and depth features that represent image structure. An intuitive way to combine the three modalities is to utilize a multi-stream model and directly average the outputs of individual streams. However, since the importance of each modality is not the same in different situations, directly averaging them may down-weight the importance of a specific modality and degrade the performance. In Table 1, we present the results of two-stream (RGB and flow) and three-stream (RGB, flow and depth) baseline models, where the outputs from different modalities are averaged before the final output layer. Compared to the single-stream (RGB) baseline model, the multi-stream models do not improve the performance, which shows that it is not straightforward to learn complementary information from multi-modal features by simply averaging them.
Such cases may happen frequently in certain actions. For example, flow features would not help much for “Sitting in a bed” but would help more for “Closing a door”. Therefore, having a dynamic mechanism is critical for multi-modal fusion.
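For concreteness, the naive multi-stream baseline evaluated in Table 1 can be sketched as follows in PyTorch; the function name and tensor shapes are our own illustrative assumptions rather than the exact baseline code.

import torch

def naive_multi_stream_fusion(stream_feats):
    # stream_feats: list of (batch, T, d) features from the RGB / flow / depth streams.
    # The baseline in Table 1 simply averages the streams before the final output layer,
    # so every modality is weighted equally regardless of the input video.
    return torch.stack(stream_feats, dim=0).mean(dim=0)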
Co-attentional Feature Fusion. The ensuing question is how to learn effective features across modalities and also fuse them dynamically. First, we observe that, although the depth and flow modalities are effective in some situations, they alone are not able to capture the semantic information that is crucial for video-text understanding. Therefore, we design a co-attentional scheme to allow joint feature learning between RGB and another modality (either depth or flow).
Our design is inspired by the co-attentional transformer layer [20], which consists of multi-headed attention blocks: each block takes a pair of features as the input (e.g., M_d and M_r) and forms three matrices Q, K and V that represent queries, keys and values (also see Figure 2). In this way, the multi-headed attention block for each modality takes the keys and values from the other modality as the input, and thus outputs attention-pooled features conditioned on the other modality. For instance, we consider the pair of depth and RGB features M_d = {m_d^1, ..., m_d^T} and M_r = {m_r^1, ..., m_r^T}, where T is the number of frames in a video clip. The co-attentional transformer layer performs depth-conditioned RGB feature attention, as well as RGB-conditioned depth feature attention. Similarly, for the flow and RGB features M_f = {m_f^1, ..., m_f^T} and M_r = {m_r^1, ..., m_r^T}, we obtain another set of flow-conditioned RGB feature attention and RGB-conditioned flow feature attention. Note that, since the RGB feature generally contains the most abundant information in the video and is used for both depth and flow attention, we adopt a shared multi-headed attention block for RGB as shown in Figure 2.
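To make this concrete, the following PyTorch sketch shows one way such a cross-modal attention block could be written, with queries coming from one modality and keys and values from the other. The class name, dimensions and residual structure are illustrative assumptions rather than the exact DRFT implementation.

import torch
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    """Attend from modality A (queries) over modality B (keys and values)."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, T, d_model) frame-level features of two modalities.
        attended, _ = self.attn(query=feat_a, key=feat_b, value=feat_b)
        x = self.norm1(feat_a + attended)
        return self.norm2(x + self.ffn(x))

# Example usage: one block per stream, with the RGB-side block shared between the
# depth and flow pairs as described above.
# rgb_block, depth_block = CoAttentionBlock(), CoAttentionBlock()
# rgb_given_depth = rgb_block(M_r, M_d)
# depth_given_rgb = depth_block(M_d, M_r)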
Dynamic Feature Fusion. To effectively combine these output features from the co-attentional transformers and perform the final prediction, we dynamically learn a weight for each multi-modal feature and linearly combine the four features using the weights (see Figure 2). Since the importance of each modality depends on the input data, we generate the weights by feeding each feature into a fully-connected (FC) layer, and normalize the weights so that they sum to 1. By dynamically generating the weights from the features, we are able to adapt the multi-modal fusion process according to the input video and the query text. The fused feature then serves as the input to the regression module (REG) to predict the starting and ending time.
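A minimal sketch of this fusion step is given below; using a softmax for the normalization and pooling each feature over time before scoring are our own assumptions about one reasonable instantiation.

import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Input-dependent weighted sum of the four co-attended features."""
    def __init__(self, d_model=512, num_feats=4):
        super().__init__()
        # One FC scorer per feature stream (e.g., depth->RGB, RGB->depth, flow->RGB, RGB->flow).
        self.scorers = nn.ModuleList([nn.Linear(d_model, 1) for _ in range(num_feats)])

    def forward(self, feats):
        # feats: list of num_feats tensors of shape (batch, T, d_model).
        scores = torch.cat([s(f.mean(dim=1)) for s, f in zip(self.scorers, feats)], dim=-1)
        weights = torch.softmax(scores, dim=-1)          # (batch, num_feats), sums to 1
        stacked = torch.stack(feats, dim=-1)             # (batch, T, d_model, num_feats)
        fused = (stacked * weights[:, None, None, :]).sum(dim=-1)
        return fused, weights                            # fused feature and per-modality weights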
3.2 Intra-Modal Feature Learning
To facilitate the multi-modal training, we introduce an intra-modal feature learning module, which enhances feature representations within each modality by applying self-supervised contrastive learning. Our motivation is that features in the same action category should be similar even if they are from different videos. To this end, for each input video V, we randomly sample positive videos V+ that contain the same action category, and negative videos V− with different action categories. We perform contrastive learning on the multi-modal features M_d, M_r and M_f ∈ R^{c×T} separately for each modality, where c is the feature dimension and T is the number of frames. Since the multi-modal feature contains information from the whole video, we only consider features that contain the action by extracting the corresponding video segment. We then conduct average pooling in the temporal dimension and obtain a feature vector M ∈ R^c. The contrastive loss L_cl is formulated as:

\mathcal{L}_{cl} = -\log \frac{\sum_{M^+ \in Q^+} e^{h(M)^\top h(M^+)/\tau}}{\sum_{M^+ \in Q^+} e^{h(M)^\top h(M^+)/\tau} + \sum_{M^- \in Q^-} e^{h(M)^\top h(M^-)/\tau}},    (1)

where Q+ and Q− are the sets of positive and negative samples, and τ is the temperature parameter. Following the SimCLR approach [4], we use a linear layer h(·) to project the feature M to another embedding space where we apply the contrastive loss. We accumulate the loss from each modality to be the final contrastive loss, namely L_cl = L_cl^r(h_r(AvgPool(M_r))) + L_cl^d(h_d(AvgPool(M_d))) + L_cl^f(h_f(AvgPool(M_f))) for RGB, depth, and flow, respectively.
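A possible PyTorch sketch of the loss in (1) is shown below. The function name, the way positives and negatives are batched, and the projection head mentioned in the comment are illustrative assumptions; only the loss form follows the equation above.

import torch

def intra_modal_contrastive_loss(anchor, positives, negatives, proj, tau=0.1):
    # anchor: (c,) temporally pooled feature of the input clip for one modality.
    # positives: (P, c) pooled features of clips sharing the action category.
    # negatives: (N, c) pooled features of clips with different action categories.
    z, zp, zn = proj(anchor), proj(positives), proj(negatives)   # projection head h(.)
    pos = torch.exp(zp @ z / tau).sum()
    neg = torch.exp(zn @ z / tau).sum()
    return -torch.log(pos / (pos + neg))

# The final loss sums the RGB, depth and flow terms, each with its own projection head,
# applied to the temporally average-pooled segment features as described above.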
3.3 Model Training and Implementation Details
Overall Objective. The overall objective of the proposed method is composed of the supervised loss L_grn for predicting the temporal grounding that localizes the video segment and the self-supervised contrastive loss L_cl for intra-modal learning in (1): L = L_grn + L_cl. The supervised loss L_grn is the same as the loss defined in the LGI method [22], which includes:

1) Location regression loss L_reg = smoothL1(t̂_s − t_s) + smoothL1(t̂_e − t_e), which calculates the L1 distance between the normalized ground-truth time interval (t̂_s, t̂_e) ∈ [0, 1] and the predicted time interval (t_s, t_e), where smoothL1(x) is defined as 0.5x^2 if |x| < 1 and |x| − 0.5 otherwise.

2) Temporal attention guidance loss

\mathcal{L}_{tag} = -\frac{\sum_{i=1}^{T} \hat{o}_i \log(o_i)}{\sum_{i=1}^{T} \hat{o}_i}

for the temporal attention in the REG module, where ô_i is set to 1 if the i-th segment is located within the ground-truth time interval and 0 otherwise.

3) Distinct query attention loss L_dqa = ||A^⊤A − λI||_F^2, which enforces the query attention weights to be distinct along different steps in the LGI module, where A ∈ R^{N×S} is the concatenated query attention weights across S steps, ||·||_F denotes the Frobenius norm of a matrix, and λ ∈ [0, 1] controls the extent of overlap between query attention distributions. The supervised loss is the sum of the three loss terms, L_grn = L_reg + L_tag + L_dqa; we use the default setting in LGI [22] and refer readers to their paper for more details.
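The sketch below paraphrases these three terms in PyTorch; the tensor shapes, the λ value and the small epsilon for numerical stability are our own illustrative choices rather than the authors' released code.

import torch
import torch.nn.functional as F

def supervised_grounding_loss(pred_ts, pred_te, gt_ts, gt_te, attn, gt_mask, A, lam=0.3):
    # Location regression on times normalized to [0, 1].
    l_reg = F.smooth_l1_loss(pred_ts, gt_ts) + F.smooth_l1_loss(pred_te, gt_te)
    # Temporal attention guidance: attn is the (T,) predicted temporal attention,
    # gt_mask is the (T,) indicator of segments inside the ground-truth interval.
    l_tag = -(gt_mask * torch.log(attn + 1e-8)).sum() / gt_mask.sum()
    # Distinct query attention: A is the (N, S) query attention matrix over S steps.
    eye = torch.eye(A.size(1), device=A.device)
    l_dqa = torch.norm(A.t() @ A - lam * eye, p="fro") ** 2
    return l_reg + l_tag + l_dqa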
Implementation Details. We generate optical flow and depth maps using the RAFT [27] and MiDaS [25] methods respectively. For the visual encoders E_d, E_r and E_f, we employ the I3D [1] and C3D [28] networks for the Charades-STA and ActivityNet Captions datasets respectively. As for the textual encoder E_t, we adopt a bi-directional LSTM, where the sentence feature is obtained by concatenating the last hidden states in the forward and backward directions. The LGI module in our framework contains the sequential query attention and local-global video-text interactions as in the LGI model [22]. The REG module generates temporal attention weights to aggregate the features and performs regression via an MLP layer. The operations are defined similarly to LGI [22]. The feature dimension c is set to 512. In the contrastive loss (1), the temperature parameter τ is set to 0.1. The projection head h(·) is a 2-layer MLP that projects the feature to a 512-dimensional latent space. We implement the proposed model in PyTorch with the Adam optimizer and a fixed learning rate of 4 × 10^{-4}. The source code and models are available at https://github.com/wenz116/DRFT.
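As an illustration of the textual encoder described above, a minimal bi-directional LSTM that concatenates the last forward and backward hidden states could look as follows; the embedding layer and dimensions are assumptions made for the sketch.

import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, tokens):
        # tokens: (batch, N) word indices of the query sentence.
        word_feats, (h, _) = self.lstm(self.embed(tokens))
        # Concatenate the last hidden states of the forward and backward directions.
        sentence_feat = torch.cat([h[-2], h[-1]], dim=-1)   # (batch, 2 * hidden_dim)
        return word_feats, sentence_feat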
4 Experimental Results
4.1 Datasets and Evaluation Metric
We evaluate the proposed DRFT method against the state-of-the-art approaches on two benchmark datasets, i.e., Charades-STA [10] and ActivityNet Captions [17].
Charades-STA. It is built upon the Charades dataset for evaluating the video temporal grounding task. It contains 6,672 videos involving 16,128 video-query pairs, where 12,408 pairs are used for training and 3,720 pairs are for testing. The average length of the videos is 29.76 seconds. On average, each video contains 2.4 annotated moments with a duration of 8.2 seconds.
ActivityNet Captions. It is originally constructed for dense video captioning from the ActivityNet dataset. The captions are used as queries in the video temporal grounding task. It consists of 20k YouTube videos with an average duration of 120 seconds. The videos are annotated with 200 activity categories, which is more diverse than the Charades-STA dataset. Each video contains 3.65 queries on average, where each query has an average length of 13.48 words. The dataset is split into training, validation and testing sets with a ratio of 2:1:1, resulting in 37,421, 17,505 and 17,031 video-query
pairs respectively. Since the testing set is not publicly available, we follow previous methods to evaluate the performance on the combination of the two validation sets, which are denoted as val1 and val2.
Following the typical evaluation setups [10, 22], we employ two metrics to assess the performance of video temporal grounding: 1) Recall at various thresholds of temporal Intersection over Union (R@IoU). It measures the percentage of predictions that have IoU with the ground truth larger than the threshold. We adopt 3 values {0.3, 0.5, 0.7} for the IoU threshold. 2) mean tIoU (mIoU). It is the average IoU over all results.
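The two metrics can be computed as in the following sketch (the function names and batching are ours):

def temporal_iou(pred, gt):
    # pred, gt: (start, end) times of a predicted and a ground-truth segment.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def evaluate(preds, gts, thresholds=(0.3, 0.5, 0.7)):
    ious = [temporal_iou(p, g) for p, g in zip(preds, gts)]
    recall = {t: sum(iou >= t for iou in ious) / len(ious) for t in thresholds}  # R@IoU
    miou = sum(ious) / len(ious)                                                 # mean tIoU
    return recall, miou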
4.2 Overall Performance
In Table 2, we evaluate our framework against state-of-the-art approaches, including two-stage methods that rely on propose-and-rank schemes [10, 13, 34] and one-stage methods that only consider RGB videos as the input [12, 26, 33, 22]. First, compared to our baseline LGI [22], our results with single/two/three modalities are consistently better than theirs in all the evaluation metrics, which demonstrates the benefit of our intra-modal feature learning scheme and the inter-modal feature fusion mechanism. We also note that for our single-modal model, we use RGB as the input and only apply the intra-modal contrastive learning across videos, and it already performs favorably against existing algorithms. More results of the single-stream models using other modalities are provided in the supplementary material.
Second, we show that as more modalities are used in our model (bottom group in Table 2), the performance on both benchmarks is consistently improved, which demonstrates the complementary property of RGB, depth, and flow for video temporal grounding. Moreover, compared to the degraded baseline results when adding more modalities without our proposed modules in Table 1, we validate the importance of designing a proper scheme for exchanging and fusing the information across modalities. It is also worth mentioning that with more modalities involved in the model, our method achieves larger performance gains compared to the baseline, e.g., more than 5% improvement in all the metrics on both benchmarks.
4.3 Ablation Study
In Table 3, we present the ablation of individual components proposed in our framework.
Inter-modal Feature Fusion. To enhance the communication across modalities, we propose to first use co-attentional transformers to learn attentive features across RGB and another modality, and then use a dynamic feature fusing scheme with learned weights to combine different features. In the first four rows of the middle group in Table 3, we show the following properties in this work:
1) Using transformers is effective for multi-modal feature learning. As shown in the first row of the middle group, the performance drops without the co-attentional transformers for feature fusion.
2) RGB information is essential for the temporal grounding task, and thus we conduct co-attention between a) RGB-flow and b) RGB-depth. In the second row of the middle group, if using the flow modality as the common modality, i.e., flow-RGB and flow-depth, the performance is worse than our final model.
3) Since RGB features are used for both flow and depth attention, we adopt a shared co-attention block for RGB as shown in Figure 2, where it can take RGB together with either the flow or depth cue as the input, and further enriches the attention mechanism. This design has not been considered in the prior work. In the third row of the middle group, without sharing the co-attentional module, the performance is worse than our final model.
4) The proposed dynamic fusion scheme via learnable weights is important for fusing features from different modalities. As shown in the fourth row of the middle group, the performance drops significantly without learnable weights. Interestingly, learning dynamic weights to combine features is almost as important as feature learning via transformers. This indicates that even with a state-of-the-art feature attention module, it is still challenging to combine multi-modal features.
Intra-modal Feature Learning. In the last row of the middle group in Table 3, we show the benefit of the intra-modal cross-video feature learning. While multi-modal feature fusion already provides a strong baseline in our framework, improving feature representations in individual modalities is still critical for enhancing the entire multi-modal learning paradigm, an observation that has not been widely studied yet.
Qualitative Results. In Figure 3, we show sample results on the Charades-STA and ActivityNet Captions datasets, where the arrows indicate the starting and ending points of the corresponding grounded segment based on the query. Compared to the baseline method that only considers RGB features, the proposed DRFT approach is able to predict more accurate results by leveraging the multi-modal features from RGB, optical flow and depth. More results are presented in the supplementary material.
4.4 Analysis of Multi-modal Learning
To understand the complementary property of each modality, we analyze the video temporal grounding results of some example action categories. Figure 4 shows the performance of the single-stream baseline with RGB as input, single-stream DRFT models with RGB, flow or depth as input, and the three-stream DRFT model, respectively. The three plots contain categories where RGB, flow or depth performs better than the other two modalities. We first show that the single-stream DRFT model with contrastive learning improves over the single-stream baseline (red bars vs. orange bars). We then investigate the complementary property between the three modalities. For actions with smaller movement (e.g., "smiling") in the left group of Figure 4, models using RGB as input generally
perform better. For actions with larger motion (e.g., "closing a door" or "throwing a pillow") in the middle group, flow provides more useful information (denoted as green bars). As for actions with small motion that can be easily recognized by their structure (e.g., "sitting in a bed" or "working at a table") in the right group, depth is superior to the other two modalities (denoted as blue bars). With the complementary property between RGB, flow and depth, we can take advantage of each modality and further improve the performance in the three-stream DRFT model (denoted as purple bars).
To further analyze the impact of each modality, we provide the learned weights for dynamic fusion in the three-stream DRFT for these categories in Table 4, where the top, middle and bottom groups contain categories where RGB, flow and depth help the most, respectively. Flow → RGB means flow-conditioned RGB features, etc. We observe that for actions with smaller movement (top group), the weights for RGB features are larger. For actions with larger motion (middle group), the weights for optical flow are larger. Regarding actions with small motion that can be easily recognized by their structure (bottom group), the weights for depth are larger. This shows that the model can exploit each modality based on the complementary property between RGB, flow and depth.
5 Conclusions
In this paper, we focus on the task of text-guided video temporal grounding. In contrast to existing methods that consider only RGB images as visual features, we propose the DRFT model to learn complementary visual information from the RGB, optical flow and depth modalities. While RGB features provide abundant appearance information, we show that representation models based on these cues alone are not effective for temporal grounding in videos with cluttered backgrounds. We therefore adopt optical flow to capture motion cues, and depth maps to estimate image structure. To combine the three modalities more effectively, we propose an inter-modal feature learning module, which performs co-attention between modalities using transformers, and dynamically fuses the multi-modal features based on the input data. To further enhance the multi-modal training, we incorporate an intra-modal feature learning module that performs self-supervised contrastive learning within each modality. The contrastive loss enforces cross-video features to be close to each other when they contain the same action, and to be far apart otherwise. We conduct extensive experiments on two benchmark datasets, demonstrating the effectiveness of the proposed multi-modal framework with inter- and intra-modal feature learning.
Acknowledgements
This work is supported in part by NSF CAREER grant 1149783 and gifts from Snap as well as eBay. | 1. What is the focus of the paper regarding video grounding?
2. What are the strengths of the proposed approach, particularly in terms of modality fusion and feature representation?
3. What are the weaknesses of the paper, especially concerning novelty and inspiration?
4. How does the reviewer assess the clarity and technical presentation of the paper's content?
5. What are some concerns regarding the necessity of certain modalities for specific actions and judging video similarity? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies the problem of multi-modal video grounding. The authors exploit multiple modalities, including RGB, depth and optical flow, to extract complementary information for improving video grounding. A dynamic fusion scheme with transformers is proposed to better learn the interaction and integration of multiple modalities. Moreover, a self-supervised intra-modal module is proposed to obtain better feature representations. The extensive experiments support that the proposed method achieves new state-of-the-art performance on multiple benchmark datasets.
Review
Strengths:
Satisfactory paper writing. Clear technical presentation.
Good performances and thorough empirical studies.
Weaknesses:
My main concern is the novelty. All the proposed modules look natural but kind of straightforward to me. They indeed contribute to the performance, but there are no inspiring techniques or insightful technical conclusions. For example, co-attentional transformer layers are adopted from [17] to model the interaction among different modalities; contrastive learning is adopted for the intra-modal feature learning.
Questions:
Why do actions like "sitting in a bed" require depth information? Can they be recognized by RGB information?
In the intra-modal feature learning, given only the sentence query, how to judge whether two videos are from the same action category?
Rebuttal summary:
I agree with other reviewers that depth information is interesting and properly leveraged for the multi-modality task. From a non-technical aspect, it is a novel idea.
The technical novelty might be limited but acceptable. I increase my rating to "Marginally above acceptance threshold".
NIPS | Title
End-to-end Multi-modal Video Temporal Grounding
Abstract
We address the problem of text-guided video temporal grounding, which aims to identify the time interval of a certain event based on a natural language description. Different from most existing methods that only consider RGB images as visual features, we propose a multi-modal framework to extract complementary information from videos. Specifically, we adopt RGB images for appearance, optical flow for motion, and depth maps for image structure. While RGB images provide abundant visual cues of certain events, the performance may be affected by background clutters. Therefore, we use optical flow to focus on large motion and depth maps to infer the scene configuration when the action is related to objects recognizable with their shapes. To integrate the three modalities more effectively and enable inter-modal learning, we design a dynamic fusion scheme with transformers to model the interactions between modalities. Furthermore, we apply intra-modal self-supervised learning to enhance feature representations across videos for each modality, which also facilitates multi-modal learning. We conduct extensive experiments on the Charades-STA and ActivityNet Captions datasets, and show that the proposed method performs favorably against state-of-the-art approaches.
1 Introduction
With the rapid growth of video data in our daily lives, video understanding has become an increasingly important task in computer vision. Research involving other modalities such as text and speech has also drawn much attention in recent years, e.g., video captioning [17, 23] and video question answering [18, 16]. In this paper, we focus on text-guided video temporal grounding, which aims to localize the starting and ending time of a segment corresponding to a text query. It is one of the most effective approaches to understanding video contents, and is applicable to numerous tasks, such as video retrieval, video editing and human-computer interaction. This problem is considerably challenging as it requires accurate recognition of objects, scenes and actions, as well as joint comprehension of video and language.
Existing methods [34, 26, 33, 22] usually consider only RGB images as visual cues, which are less effective for recognizing objects and actions in videos with complex backgrounds. To understand the video contents more holistically, we propose a multi-modal framework to learn complementary visual features from RGB images, optical flow and depth maps. RGB images provide abundant visual information, which is essential for visual recognition. However, existing methods based on appearance alone are likely to be less effective for complex scenes with cluttered backgrounds. For example, since the query text descriptions usually involve moving objects such as “Closing a door” or “Throwing a pillow”, using optical flow as input is able to identify such actions with large motion. On the other hand, depth is another cue that is invariant to color and lighting, and is often used to complement the RGB input in object detection and semantic segmentation. In our task, depth information helps the proposed model recognize actions involving objects with distinct shapes as the context. For example, actions such as “Sitting in a bed” or “Working at a table” are not easily recognized by optical flow due to small motion, but depth can provide structural information to assist the learning process. We also note that, our goal is to design an end-to-end multi-modal framework
for video grounding by directly utilizing low-level cues such as optical flow and depth, while other alternatives based on object detectors or semantic segmentation are out of the scope of this work.
To leverage multi-modal cues, one straightforward way is to construct a multi-stream model that takes individual modality as the input in each stream, and then averages the multi-stream output predictions to obtain final results. However, we find that this scheme is less effective due to the lack of communication across different modalities, e.g., using depth cues alone without considering RGB features is not sufficient to learn the semantic information as the appearance cue does. To tackle this issue, we propose a multi-modal framework with 1) an inter-modal module that learns cross-modal features, and 2) an intra-modal module to self-learn feature representations across videos.
For inter-modal learning, we design a fusion scheme with co-attentional transformers [20] to dynamically fuse features from different modalities. One motivation is that different videos may require a different combination of modalities, e.g., "Working at a table" would require more appearance and depth information, while optical flow is more important for "Throwing a pillow". To enhance feature representations for each modality and thereby improve multi-modal learning, we introduce an intra-modal module via self-supervised contrastive learning [7, 15]. The goal is to ensure feature consistency across video clips when they contain the same action. For example, the same action "Eating" may happen at different locations with completely different backgrounds and contexts, or with different text descriptions that "eat" different food. Our intra-modal learning enforces features to be close to each other when they describe the same action and learns features that are invariant to other distracting factors across videos, and thus it can improve our multi-modal learning paradigm.
We conduct extensive experiments on the Charades-STA [10] and ActivityNet Captions [17] datasets to demonstrate the effectiveness of our multi-modal learning framework for video temporal grounding using (D)epth, (R)GB, and optical (F)low with the (T)ext as the query, and name our method as DRFT. First, we present the complementary property of multi-modality and the improved performance over the single-modality models. Second, we validate the individual contributions of our proposed components, i.e., inter- and intra-modal modules, that facilitate multi-modal learning. Finally, we show state-of-the-art performance for video temporal grounding against existing methods.
The main contributions of this work are summarized as follows: 1) We propose a multi-modal framework for text-guided video temporal grounding by extracting complementary information from RGB, optical flow and depth features. 2) We design a dynamic fusion mechanism across modalities via co-attentional transformers to effectively learn inter-modal features. 3) We apply self-supervised contrastive learning across videos for each modality to enhance intra-modal feature representations that are invariant to distracting factors with respect to actions.
2 Related Work
Text-Guided Video Temporal Grounding. Given a video and a natural language query, text-guided video temporal grounding aims to predict the starting and ending time of the video clip that best matches the query sentence. Existing methods for this task can be categorized into two groups, i.e., two-stage and one-stage schemes (see Figure 1(a)(b)). Most two-stage approaches adopt a propose-and-rank pipeline, where they first generate clip proposals and then rank the proposals based on their similarities with the query sentence. Early two-stage methods [10, 14] obtain proposals by scanning the whole video with sliding windows. Since the sliding window mechanism is computationally expensive and usually produces many redundant proposals, numerous methods are subsequently proposed to improve the efficiency and effectiveness of proposal generation. The TGN model [2] performs frame-by-word interactions and localizes the proposals in a single pass. Other approaches focus on reducing redundant proposals by generating query-guided proposals [31] or semantic activity proposals [3]. The MAN method [34] models the temporal relationships between proposals using a graph architecture to improve the quality of proposals. To alleviate the computation of observing the whole video, reinforcement learning [13, 30] is utilized to guide an intelligent agent to glance over the video in a discontinuous way. While the two-stage methods achieve promising results, the computational cost of comparing all proposal-query pairs is high, and the performance is largely limited by the quality of proposal generation.
To overcome the issues of two-stage methods, some recent approaches adopt a one-stage pipeline to directly predict the temporal segment from the fusion of video and text features. Most of the one-stage approaches focus on the attention mechanisms or interaction between modalities. For
example, the ABLR method [32] predicts the temporal coordinates using a co-attention based location regression algorithm. The ExCL mechanism [11] exploits the cross-modal interactions between video and text, and the PfTML-GA model [26] improves the performance by introducing the query-guided dynamic filter. Moreover, the DRN scheme [33] leverages dense supervision from the sparse annotations to facilitate the training process. Recently, the LGI model [22] decomposes the query sentence into multiple semantic phrases and conducts local and global interactions between the video and text features. In our framework, we adopt LGI as the baseline that uses the hierarchical video-text interaction. However, different from LGI, which only considers RGB frames as input, we take RGB, optical flow and depth as input, and design an inter-modality learning technique to learn complementary information from the video. Furthermore, we apply contrastive learning across videos to enhance the feature representations in each modality, which helps the learning of the whole model (see Figure 1(c)).
1. What is the main contribution of the paper in the field of text-guided video temporal grounding?
2. What are the strengths of the proposed approach, particularly in integrating multiple modalities?
3. Are there any weaknesses or areas for improvement regarding the technical novelty of the paper's contributions?
4. What are some minor comments or suggestions for improving the paper's clarity or performance analysis?
5. How does the reviewer assess the overall quality and impact of the paper, particularly regarding its suitability for a NeurIPS poster publication? | Summary Of The Paper
Review | Summary Of The Paper
The paper addresses the problem of text-guided video temporal grounding, which aims to localize the starting and ending time of a segment corresponding to a text query. The key contribution is the integration of three modalities of data in this context: video, motion (flow), and depth, along with the textual query. To integrate the three modalities, the paper proposes a dynamic fusion scheme with transformers, which takes the form of co-attention (similar to ViLBERT). Further, to improve the performance within each modality, contrastive learning is applied to enhance the feature discriminability. In this contrastive learning formulation, positive pairs come from instances of the same action class, while negative pairings are formed from videos coming from two different action classes. Competitive (state-of-the-art) performance is illustrated on the Charades-STA and ActivityNet Captions benchmark datasets.
Review
The paper is well written and the approach is intuitive and easy to understand and follow. Performance is also competitive and improves on the state of the art. The technical novelty is somewhat incremental, with components effectively borrowed from other recent works. That being said, the choices are well motivated and work well together for the task at hand. Overall, I feel the novelty is sufficient for a poster publication in NeurIPS.
A few minor comments
The supervised loss (L_{and}) is not defined. It should be defined in the paper, if for no other reason than completeness.
The extraction of video segment features (Lines 188-190) is unclear and should be clarified.
In addition, one of the more surprising aspects of the paper, for me, is that depth was helpful for the task. I am not aware of any other works that use estimated depth for video tasks. As such, I would be interested in seeing more analysis on the depth modality. For example, how well would depth features work on their own (as a one-stream DRFT)? How well would they perform in combination with RGB (as a two-stream DRFT)? etc.
NIPS | Title
End-to-end Multi-modal Video Temporal Grounding
Abstract
We address the problem of text-guided video temporal grounding, which aims to identify the time interval of a certain event based on a natural language description. Different from most existing methods that only consider RGB images as visual features, we propose a multi-modal framework to extract complementary information from videos. Specifically, we adopt RGB images for appearance, optical flow for motion, and depth maps for image structure. While RGB images provide abundant visual cues of certain events, the performance may be affected by background clutters. Therefore, we use optical flow to focus on large motion and depth maps to infer the scene configuration when the action is related to objects recognizable with their shapes. To integrate the three modalities more effectively and enable inter-modal learning, we design a dynamic fusion scheme with transformers to model the interactions between modalities. Furthermore, we apply intra-modal self-supervised learning to enhance feature representations across videos for each modality, which also facilitates multi-modal learning. We conduct extensive experiments on the Charades-STA and ActivityNet Captions datasets, and show that the proposed method performs favorably against state-of-the-art approaches.
1 Introduction
With the rapid growth of video data in our daily lives, video understanding has become an ever increasingly important task in computer vision. Research involving other modalities such as text and speech has also drawn much attention in recent years, e.g., video captioning [17, 23], and video question answering [18, 16]. In this paper, we focus on text-guided video temporal grounding, which aims to localize the starting and ending time of a segment corresponding to a text query. It is one of the most effective approaches to understand video contents, and applicable to numerous tasks, such as video retrieval, video editing and human-computer interaction. This problem is considerably challenging as it requires accurate recognition of objects, scenes and actions, as well as joint comprehension of video and language.
Existing methods [34, 26, 33, 22] usually consider only RGB images as visual cues, which are less effective for recognizing objects and actions in videos with complex backgrounds. To understand the video contents more holistically, we propose a multi-modal framework to learn complementary visual features from RGB images, optical flow and depth maps. RGB images provide abundant visual information, which is essential for visual recognition. However, existing methods based on appearance alone are likely to be less effective for complex scenes with cluttered backgrounds. For example, since the query text descriptions usually involve moving objects such as “Closing a door” or “Throwing a pillow”, using optical flow as input is able to identify such actions with large motion. On the other hand, depth is another cue that is invariant to color and lighting, and is often used to complement the RGB input in object detection and semantic segmentation. In our task, depth information helps the proposed model recognize actions involving objects with distinct shapes as the context. For example, actions such as “Sitting in a bed” or “Working at a table” are not easily recognized by optical flow due to small motion, but depth can provide structural information to assist the learning process. We also note that, our goal is to design an end-to-end multi-modal framework
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
for video grounding by directly utilizing low-level cues such as optical flow and depth, while other alternatives based on object detector or semantic segmentation is out of the scope of this work.
To leverage multi-modal cues, one straightforward way is to construct a multi-stream model that takes individual modality as the input in each stream, and then averages the multi-stream output predictions to obtain final results. However, we find that this scheme is less effective due to the lack of communication across different modalities, e.g., using depth cues alone without considering RGB features is not sufficient to learn the semantic information as the appearance cue does. To tackle this issue, we propose a multi-modal framework with 1) an inter-modal module that learns cross-modal features, and 2) an intra-modal module to self-learn feature representations across videos.
For inter-modal learning, we design a fusion scheme with co-attentional transformers [20] to dynamically fuse features from different modalities. One motivation is that, different videos may require to adopt a different combination of modalities, e.g., “Working at a table” would require more appearance and depth information, while optical flow is more important for “Throwing a pillow”. To enhance feature representations for each modality and thereby improve multi-modal learning, we introduce an intra-modal module via self-supervised contrastive learning [7, 15]. The goal is to ensure the feature consistency across video clips when they contain the same action. For example, with the same action “Eating”, it may happen at different locations with completely different backgrounds and contexts, or with different text descriptions that “eats” different food. With our intra-modal learning, it enforces features close to each other when they describe the same action and learn features that are invariant to other distracted factors across videos, and thus it can improve our multi-modal learning paradigm.
We conduct extensive experiments on the Charades-STA [10] and ActivityNet Captions [17] datasets to demonstrate the effectiveness of our multi-modal learning framework for video temporal grounding using (D)epth, (R)GB, and optical (F)low with the (T)ext as the query, and name our method DRFT. First, we present the complementary property of the multiple modalities and the improved performance over single-modality models. Second, we validate the individual contributions of our proposed components, i.e., the inter- and intra-modal modules, that facilitate multi-modal learning. Finally, we show state-of-the-art performance for video temporal grounding against existing methods.
The main contributions of this work are summarized as follows: 1) We propose a multi-modal framework for text-guided video temporal grounding by extracting complementary information from RGB, optical flow and depth features. 2) We design a dynamic fusion mechanism across modalities via co-attentional transformers to effectively learn inter-modal features. 3) We apply self-supervised contrastive learning across videos for each modality to enhance intra-modal feature representations that are invariant to distracting factors with respect to actions.
2 Related Work
Text-Guided Video Temporal Grounding. Given a video and a natural language query, text-guided video temporal grounding aims to predict the starting and ending time of the video clip that best matches the query sentence. Existing methods for this task can be categorized into two groups, i.e., two-stage and one-stage schemes (see Figure 1(a)(b)). Most two-stage approaches adopt a propose-and-rank pipeline, where they first generate clip proposals and then rank the proposals based on their similarities with the query sentence. Early two-stage methods [10, 14] obtain proposals by scanning the whole video with sliding windows. Since the sliding window mechanism is computationally expensive and usually produces many redundant proposals, numerous methods are subsequently proposed to improve the efficiency and effectiveness of proposal generation. The TGN model [2] performs frame-by-word interactions and localizes the proposals in a single pass. Other approaches focus on reducing redundant proposals by generating query-guided proposals [31] or semantic activity proposals [3]. The MAN method [34] models the temporal relationships between proposals using a graph architecture to improve the quality of proposals. To reduce the cost of observing the whole video, reinforcement learning [13, 30] is utilized to guide an agent to glance over the video in a discontinuous manner. While the two-stage methods achieve promising results, the computational cost is high for comparing all proposal-query pairs, and the performance is largely limited by the quality of proposal generation.
To overcome the issues of two-stage methods, some recent approaches adopt a one-stage pipeline to directly predict the temporal segment from the fusion of video and text features. Most of the one-stage approaches focus on the attention mechanisms or interaction between modalities. For
example, the ABLR method [32] predicts the temporal coordinates using a co-attention based location regression algorithm. The ExCL mechanism [11] exploits the cross-modal interactions between video and text, and the PfTML-GA model [26] improves the performance by introducing the query-guided dynamic filter. Moreover, the DRN scheme [33] leverages dense supervision from the sparse annotations to facilitate the training process. Recently, the LGI model [22] decomposes the query sentence into multiple semantic phrases and conducts local and global interactions between the video and text features. In our framework, we adopt LGI as the baseline that uses the hierarchical video-text interaction. However, different from LGI, which only considers RGB frames as input, we take RGB, optical flow and depth as input, and design the inter-modality learning technique to learn complementary information from the video. Furthermore, we apply contrastive learning across videos to enhance the feature representations in each modality, which helps the learning of the whole model (see Figure 1(c)).
Multi-Modal Learning. As typical events or actions can be described by signals from multiple modalities, understanding the correlation between different modalities is crucial to solving problems more comprehensively. Research on joint vision and language learning [6, 23, 16, 7] has gained much attention in recent years since natural language is an intuitive way for human communication. Recent studies [24, 8] based on the transformer [29] have shown great success in self-supervised learning and transfer learning for natural language tasks. The transformer-based BERT model [8] has also been widely used to learn joint representations for vision and language. These methods [20, 21, 35, 19, 5, 9] aim to learn generic representations from a large number of image-text pairs in a self-supervised manner, and then fine-tune the model for downstream vision and language tasks. The ViLBERT scheme [20] extracts features from image and text using two parallel BERT-style models, and then connects the two streams with co-attentional transformer layers. In this work, we focus on the video temporal grounding task guided by texts, while introducing multi-modality to improve model learning, which has not been studied before. For fusing the multi-modal information, we leverage the co-attentional transformer layers [20] in our framework and design an approach that fuses the RGB features with optical flow and depth features respectively.
3 Proposed Framework
In this work, we address the problem of text-guided video temporal grounding using a multi-modal framework. The pipeline of the proposed framework is illustrated in Figure 2. Given an input video $V = \{V_t\}_{t=1}^{T}$ with $T$ frames and a query sentence $Q = \{Q_i\}_{i=1}^{N}$ with $N$ words, we aim to localize
the starting and ending time [ts, te] of the event corresponding to the query. To this end, we design a multi-modal framework to learn complementary visual information from RGB images, optical flow and depth maps. From the input video, we first compute the depth map of each frame and the optical flow of each pair of consecutive frames. We then apply the visual encoders Ed, Er, Ef to extract features from the depth, RGB and flow inputs. A textual encoder Et is utilized to extract the feature of the query sentence Q. The local-global interaction modules (LGI) then incorporate the textual feature into each visual modality, and generate the multi-modal features Md, Mr and Mf for depth, RGB and flow respectively.
To effectively integrate the features from different modalities and enable inter-modal feature learning, we propose a dynamic fusion scheme with transformers to model the interaction between modalities. The feature after integration is then fed into a regression module (REG) to predict the starting and ending time [ts, te] of the target video segment. To enhance the feature representations in each modality, we introduce an intra-modal learning module that conducts self-supervised contrastive learning across videos. The intra-modal learning is applied on the multi-modal features Md, Mr and Mf separately to enforce features of video segments containing the same action to be close to each other, and those from different action categories to be far apart.
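To make the data flow concrete, the following PyTorch-style sketch traces the pipeline described above. The module names and interfaces are our own illustrative assumptions; the actual encoders, LGI, fusion, and REG modules follow Section 3 and the LGI method [22].

```python
import torch.nn as nn

class DRFTPipeline(nn.Module):
    """A minimal sketch of the overall pipeline; sub-modules are assumed to be given."""
    def __init__(self, visual_encoders, text_encoder, lgi_modules, fusion, reg):
        super().__init__()
        self.enc = nn.ModuleDict(visual_encoders)   # {"rgb": Er, "flow": Ef, "depth": Ed}
        self.lgi = nn.ModuleDict(lgi_modules)       # per-modality local-global interaction
        self.text_encoder, self.fusion, self.reg = text_encoder, fusion, reg

    def forward(self, video, query):
        # video: dict of per-modality inputs (RGB frames, flow, depth); query: tokenized sentence.
        q = self.text_encoder(query)
        # Per-modality video-text features Mr, Mf, Md from the LGI modules.
        m = {k: self.lgi[k](self.enc[k](video[k]), q) for k in self.enc}
        # Inter-modal co-attention and dynamic fusion (Section 3.1), then regression.
        fused = self.fusion(m["rgb"], m["flow"], m["depth"])
        t_start, t_end = self.reg(fused)
        return t_start, t_end, m    # m is reused by the intra-modal contrastive loss (Section 3.2)
```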
3.1 Inter-Modal Feature Learning
Videos contain rich information in both spatial and temporal dimensions. To learn this information more comprehensively, in addition to the RGB modality, we also consider optical flow, which captures motion, and depth features, which represent image structure. An intuitive way to combine the three modalities is to utilize a multi-stream model and directly average the outputs of individual streams. However, since the importance of each modality is not the same in different situations, directly averaging them may downweight the importance of a specific modality and degrade the performance. In Table 1, we present the results of two-stream (RGB and flow) and three-stream (RGB, flow and depth) baseline models, where the outputs from different modalities are averaged before the final output layer. Compared to the single-stream (RGB) baseline model, the multi-stream models do not improve the performance, which shows that learning complementary information from multi-modal features is not straightforward.
Such differences in modality importance happen frequently across actions. For example, flow features would not help much for “Sitting in a bed” but would help more for “Closing a door”. Therefore, having a dynamic mechanism is critical for multi-modal fusion.
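For reference, the multi-stream baseline in Table 1 can be sketched as follows; names and shapes are illustrative, and the regression module stands in for the final output layer.

```python
def averaged_multi_stream(m_rgb, m_flow, m_depth, reg_module):
    """Naive baseline: per-modality features are averaged before the final
    regression layer, without any cross-modal communication."""
    # m_*: (batch, T, c) multi-modal features Mr, Mf, Md from the LGI modules.
    avg = (m_rgb + m_flow + m_depth) / 3.0
    return reg_module(avg)   # predicts the normalized (t_s, t_e)
```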
Co-attentional Feature Fusion. The ensuing question is how to learn effective features across modalities and fuse them dynamically. First, we observe that, although the depth and flow modalities are effective in some situations, they alone are not able to capture the semantic information that is crucial for video-text understanding. Therefore, we design a co-attentional scheme to allow joint feature learning between RGB and another modality (either depth or flow).
We build on the co-attentional transformer layer [20], which consists of multi-headed attention blocks: it takes a pair of features as the input (e.g., $M_d$ and $M_r$) and forms three matrices Q, K, and V that represent queries, keys, and values (see Figure 2). In this way, the multi-headed attention block for each modality takes the keys and values from the other modality as the input, and thus outputs attention-pooled features conditioned on the other modality. For instance, we consider the pair of $M_d = \{m_d^1, \dots, m_d^T\}$ and $M_r = \{m_r^1, \dots, m_r^T\}$ features for depth and RGB, where $T$ is the number of frames in a video clip. The co-attentional transformer layer performs depth-conditioned RGB feature attention, as well as RGB-conditioned depth feature attention. Similarly, for the flow and RGB features, $M_f = \{m_f^1, \dots, m_f^T\}$ and $M_r = \{m_r^1, \dots, m_r^T\}$, we obtain another set of flow-conditioned RGB feature attention and RGB-conditioned flow feature attention. Note that, since the RGB feature generally contains the most abundant information in the video and is used for both depth and flow attention, we adopt a shared multi-headed attention block for RGB as shown in Figure 2.
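A minimal sketch of one such co-attentional block is shown below; the dimensions, head counts, and residual/normalization details are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    """ViLBERT-style co-attention: each stream queries the other stream's keys/values."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_b = nn.LayerNorm(dim)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, T, dim), e.g., RGB features and depth (or flow) features.
        a_cond_b, _ = self.attn_a(query=feat_a, key=feat_b, value=feat_b)  # A conditioned on B
        b_cond_a, _ = self.attn_b(query=feat_b, key=feat_a, value=feat_a)  # B conditioned on A
        return self.norm_a(feat_a + a_cond_b), self.norm_b(feat_b + b_cond_a)
```

Following the shared-block design above, the RGB-side attention module can be reused for both the RGB-depth and RGB-flow pairs.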
Dynamic Feature Fusion. To effectively combine these output features from the co-attentional transformers and perform the final prediction, we dynamically learn the weights for each multi-modal feature and linearly combine the four features using these weights (see Figure 2). Since the importance of each modality depends on the input data, we generate the weights by feeding each feature into a fully-connected (FC) layer, and normalize the weights so that they sum to 1. By dynamically generating weights from the features, we are able to adapt the multi-modal fusion process according to the input video and the query text. The fused feature then serves as the input to the regression module (REG) to predict the starting and ending time.
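The dynamic weighting can be sketched as follows. Softmax normalization and temporally pooled per-video weights are our assumptions; the paper only states that the FC-generated weights are normalized to sum to one.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Sketch: one FC scorer per co-attended feature; scores are normalized and
    used to linearly combine the four features before the REG module."""
    def __init__(self, dim=512, num_feats=4):
        super().__init__()
        self.scorers = nn.ModuleList([nn.Linear(dim, 1) for _ in range(num_feats)])

    def forward(self, feats):
        # feats: list of four (batch, T, dim) features, e.g., depth->RGB, RGB->depth,
        # flow->RGB, and RGB->flow outputs of the co-attentional transformers.
        scores = torch.cat([fc(f.mean(dim=1)) for fc, f in zip(self.scorers, feats)], dim=-1)
        weights = torch.softmax(scores, dim=-1)              # (batch, num_feats), sums to 1
        stacked = torch.stack(feats, dim=-1)                 # (batch, T, dim, num_feats)
        fused = (stacked * weights[:, None, None, :]).sum(dim=-1)
        return fused, weights                                # weights can be inspected as in Table 4
```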
3.2 Intra-Modal Feature Learning
To facilitate the multi-modal training, we introduce an intra-modal feature learning module, which enhances feature representations within each modality by applying self-supervised contrastive learning. Our motivation is that features in the same action category should be similar even if they are from different videos. To this end, for each input video $V$, we randomly sample positive videos $V_+$ that contain the same action category, and negative videos $V_-$ with different action categories. We perform contrastive learning on the multi-modal features $M_d$, $M_r$ and $M_f \in \mathbb{R}^{c \times T}$ separately for each modality, where $c$ is the feature dimension and $T$ is the number of frames. Since the multi-modal feature contains information from the whole video, we only consider features that contain the action by extracting the corresponding video segment. We then conduct average pooling in the temporal dimension and obtain a feature vector $M \in \mathbb{R}^{c}$. The contrastive loss $\mathcal{L}_{cl}$ is formulated as:

$$\mathcal{L}_{cl} = -\log \frac{\sum_{M_+ \in Q_+} e^{h(M)^\top h(M_+)/\tau}}{\sum_{M_+ \in Q_+} e^{h(M)^\top h(M_+)/\tau} + \sum_{M_- \in Q_-} e^{h(M)^\top h(M_-)/\tau}}, \qquad (1)$$
where $Q_+$ and $Q_-$ are the sets of positive and negative samples, and $\tau$ is the temperature parameter. Following the SimCLR approach [4], we use a projection head $h(\cdot)$ (a two-layer MLP, see Section 3.3) to map the feature $M$ to another embedding space where the contrastive loss is applied. We accumulate the loss from each modality to obtain the final contrastive loss, namely $\mathcal{L}_{cl} = \mathcal{L}_{cl}^{r}(h_r(\mathrm{AvgPool}(M_r))) + \mathcal{L}_{cl}^{d}(h_d(\mathrm{AvgPool}(M_d))) + \mathcal{L}_{cl}^{f}(h_f(\mathrm{AvgPool}(M_f)))$ for RGB, depth, and flow, respectively.
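A direct sketch of Eq. (1) for one modality is given below. Batching over anchors and the sampling of positive/negative videos are omitted; the variable names are illustrative.

```python
import torch

def intra_modal_contrastive_loss(m, m_pos, m_neg, h, tau=0.1):
    """Eq. (1) for a single anchor. m: (c,) temporally averaged segment feature;
    m_pos: (P, c) features of videos with the same action; m_neg: (N, c) features
    of videos with different actions; h: the projection head h(.)."""
    z, z_pos, z_neg = h(m), h(m_pos), h(m_neg)
    sim_pos = torch.exp(z_pos @ z / tau)   # (P,) similarities to positives
    sim_neg = torch.exp(z_neg @ z / tau)   # (N,) similarities to negatives
    return -torch.log(sim_pos.sum() / (sim_pos.sum() + sim_neg.sum()))

# The final loss accumulates the three modalities, e.g.:
# loss_cl = sum(intra_modal_contrastive_loss(M[k], M_pos[k], M_neg[k], heads[k])
#               for k in ("rgb", "depth", "flow"))
```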
3.3 Model Training and Implementation Details
Overall Objective. The overall objective of the proposed method is composed of the supervised loss $\mathcal{L}_{grn}$ for predicting the temporal grounding that localizes the video segment and the self-supervised contrastive loss $\mathcal{L}_{cl}$ for intra-modal learning in (1): $\mathcal{L} = \mathcal{L}_{grn} + \mathcal{L}_{cl}$. The supervised loss $\mathcal{L}_{grn}$ is the same as the loss defined in the LGI method [22], which includes:
1) Location regression loss $\mathcal{L}_{reg} = \mathrm{smooth}_{L1}(\hat{t}_s - t_s) + \mathrm{smooth}_{L1}(\hat{t}_e - t_e)$ that calculates the distance between the normalized ground truth time interval $(\hat{t}_s, \hat{t}_e) \in [0, 1]$ and the predicted time interval $(t_s, t_e)$, where $\mathrm{smooth}_{L1}(x)$ is defined as $0.5x^2$ if $|x| < 1$ and $|x| - 0.5$ otherwise.
2) Temporal attention guidance loss $\mathcal{L}_{tag} = -\frac{\sum_{i=1}^{T} \hat{o}_i \log(o_i)}{\sum_{i=1}^{T} \hat{o}_i}$ for the temporal attention in the REG module, where $\hat{o}_i$ is set to 1 if the $i$-th segment is located within the ground truth time interval and 0 otherwise.
3) Distinct query attention loss $\mathcal{L}_{dqa} = \|A^\top A - \lambda I\|_F^2$ to enforce the query attention weights to be distinct along different steps in the LGI module, where $A \in \mathbb{R}^{N \times S}$ is the concatenated query attention weights across $S$ steps, $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, and $\lambda \in [0, 1]$ controls the extent of overlap between query attention distributions. The supervised loss is the sum of the three terms, $\mathcal{L}_{grn} = \mathcal{L}_{reg} + \mathcal{L}_{tag} + \mathcal{L}_{dqa}$, and we use the default setting in LGI [22]; we refer readers to their paper for more details.
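The three supervised terms can be sketched in PyTorch as below. Tensor shapes and the handling of lambda are illustrative; the actual values follow the LGI defaults, which we do not restate here.

```python
import torch
import torch.nn.functional as F

def grounding_loss(pred_ts, pred_te, gt_ts, gt_te, temporal_attn, gt_mask, query_attn, lam):
    """Sketch of L_grn = L_reg + L_tag + L_dqa. Times are normalized to [0, 1];
    temporal_attn (batch, T) holds the REG attention weights o_i, gt_mask (batch, T)
    is the indicator o_hat_i, and query_attn (batch, N, S) stacks the query
    attention across S steps."""
    # 1) Smooth L1 location regression.
    l_reg = F.smooth_l1_loss(pred_ts, gt_ts) + F.smooth_l1_loss(pred_te, gt_te)
    # 2) Temporal attention guidance, normalized by the number of in-interval segments.
    l_tag = -(gt_mask * torch.log(temporal_attn + 1e-8)).sum(dim=1) / gt_mask.sum(dim=1).clamp(min=1)
    # 3) Distinct query attention: squared Frobenius norm of (A^T A - lambda * I).
    gram = query_attn.transpose(1, 2) @ query_attn            # (batch, S, S)
    eye = torch.eye(gram.size(-1), device=gram.device)
    l_dqa = ((gram - lam * eye) ** 2).sum(dim=(1, 2))
    return l_reg + l_tag.mean() + l_dqa.mean()
```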
Implementation Details. We generate optical flow and depth maps using the RAFT [27] and MiDaS [25] methods respectively. For the visual encoders $E_d$, $E_r$ and $E_f$, we employ the I3D [1] and C3D [28] networks for the Charades-STA and ActivityNet Captions datasets respectively. As for the textual encoder $E_t$, we adopt a bi-directional LSTM, where the feature is obtained by concatenating the last hidden states in the forward and backward directions. The LGI module in our framework contains the sequential query attention and local-global video-text interactions as in the LGI model [22]. The REG module generates temporal attention weights to aggregate the features and performs regression via an MLP layer. The operations are defined similarly to LGI [22]. The feature dimension $c$ is set to 512. In the contrastive loss (1), the temperature parameter $\tau$ is set to 0.1. The projection head $h(\cdot)$ is a 2-layer MLP that projects the feature to a 512-dimensional latent space. We implement the proposed model in PyTorch with the Adam optimizer and a fixed learning rate of $4 \times 10^{-4}$. The source code and models are available at https://github.com/wenz116/DRFT.
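As an illustration of the stated text-encoder design, a bi-directional LSTM whose forward and backward last hidden states are concatenated can be written as below; the embedding dimension and vocabulary handling are our assumptions.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Bi-directional LSTM query encoder; the query feature concatenates the
    last forward and backward hidden states (512-d when hidden_dim=256)."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, tokens):
        # tokens: (batch, N) word indices of the query sentence.
        x = self.embed(tokens)
        _, (h_n, _) = self.lstm(x)                      # h_n: (2, batch, hidden_dim)
        return torch.cat([h_n[0], h_n[1]], dim=-1)      # (batch, 2 * hidden_dim)
```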
4 Experimental Results
4.1 Datasets and Evaluation Metric
We evaluate the proposed DRFT method against the state-of-the-art approaches on two benchmark datasets, i.e., Charades-STA [10] and ActivityNet Captions [17].
Charades-STA. It is built upon the Charades dataset for evaluating the video temporal grounding task. It contains 6,672 videos involving 16,128 video-query pairs, where 12,408 pairs are used for training and 3,720 pairs are for testing. The average length of the videos is 29.76 seconds. On average, each video contains 2.4 annotated moments with a duration of 8.2 seconds.
ActivityNet Captions. It is originally constructed for dense video captioning from the ActivityNet dataset. The captions are used as queries in the video temporal grounding task. It consists of 20k YouTube videos with an average duration of 120 seconds. The videos are annotated with 200 activity categories, which is more diverse compared to the Charades-STA dataset. Each video contains 3.65 queries, where each query has an average length of 13.48 words. The dataset is split into training, validation and testing set with a ratio of 2:1:1, resulting in 37,421, 17,505 and 17,031 video-query
pairs respectively. Since the testing set is not publicly available, we follow previous methods to evaluate the performance on the combination of the two validation sets, which are denoted as val1 and val2.
Following the typical evaluation setups [10, 22], we employ two metrics to assess the performance of video temporal grounding: 1) Recall at various thresholds of temporal Intersection over Union (R@IoU). It measures the percentage of predictions that have IoU with the ground truth larger than the threshold. We adopt 3 values {0.3, 0.5, 0.7} for the IoU threshold. 2) mean tIoU (mIoU). It is the average IoU over all results.
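The two metrics can be computed directly from predicted and ground-truth intervals, as in the sketch below (function and variable names are ours).

```python
import torch

def temporal_iou(pred, gt):
    """Temporal IoU between intervals. pred, gt: (batch, 2) tensors of [t_s, t_e]."""
    inter = (torch.min(pred[:, 1], gt[:, 1]) - torch.max(pred[:, 0], gt[:, 0])).clamp(min=0)
    union = (pred[:, 1] - pred[:, 0]) + (gt[:, 1] - gt[:, 0]) - inter
    return inter / union.clamp(min=1e-8)

def recall_and_miou(pred, gt, thresholds=(0.3, 0.5, 0.7)):
    """R@IoU for each threshold and the mean tIoU (mIoU) over all results."""
    iou = temporal_iou(pred, gt)
    recalls = {t: (iou > t).float().mean().item() for t in thresholds}
    return recalls, iou.mean().item()
```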
4.2 Overall Performance
In Table 2, we evaluate our framework against state-of-the-art approaches, including two-stage methods that rely on propose-and-rank schemes [10, 13, 34] and one-stage methods that only consider RGB videos as the input [12, 26, 33, 22]. First, compared to our baseline LGI [22], our results with single/two/three modalities are consistently better in all the evaluation metrics, which demonstrates the benefit of our intra-modal feature learning scheme and the inter-modal feature fusion mechanism. We also note that our single-modal model uses RGB as the input and only applies the intra-modal contrastive learning across videos, yet it already performs favorably against existing algorithms. More results of the single-stream models using other modalities are provided in the supplementary material.
Second, we show that as more modalities are used in our model (bottom group in Table 2), the performance on both benchmarks is consistently improved, which demonstrates the complementary property of RGB, depth, and flow for video temporal grounding. Moreover, compared to the worse baseline results when adding more modalities without our proposed modules in Table 1, we validate the importance of designing a proper scheme for exchanging and fusing information across modalities. It is also worth mentioning that with more modalities involved in the model, our method achieves larger performance gains compared to the baseline, e.g., more than 5% improvement in all the metrics on the two benchmarks.
4.3 Ablation Study
In Table 3, we present the ablation of individual components proposed in our framework.
Inter-modal Feature Fusion. To enhance the communication across modalities, we propose to first use co-attentional transformers to learn attentive features across RGB and another modality, and then use a dynamic feature fusing scheme with learned weights to combine different features. In the first four rows of the middle group in Table 3, we show the following properties in this work:
1) Using transformers is effective for multi-modal feature learning. As shown in the first row of the middle group, the performance drops without the co-attentional transformers for feature fusion.
2) RGB information is essential for the temporal grounding task, and thus we conduct co-attention between a) RGB-flow and b) RGB-depth. In the second row of the middle group, if using the flow modality as the common modality, i.e., flow-RGB and flow-depth, the performance is worse than our final model.
3) Since RGB features are used for both flow and depth attention, we adopt a shared co-attention block for RGB as shown in Figure 2, where it can take RGB together with either the flow or depth cue as the input, and further enriches the attention mechanism. This design has not been considered in the prior work. In the third row of the middle group, without sharing the co-attentional module, the performance is worse than our final model.
4) The proposed dynamic fusion scheme via learnable weights is important for fusing features from different modalities. As shown in the fourth row of the middle group, the performance drops significantly without learnable weights. Interestingly, learning dynamic weights to combine features is almost equally important compared to feature learning via transformers. This indicates that even with the state-of-the-art feature attention module, it is still challenging to combine multi-modal features.
Intra-modal Feature Learning. In the last row of the middle group in Table 3, we show the benefit of having the intra-modal cross-video feature learning. While multi-modal feature fusion already provides a strong baseline in our framework, improving feature representations in individual modalities is still critical for enhancing the entire multi-modal learning paradigm, an aspect that has not been widely studied yet.
Qualitative Results. In Figure 3, we show sample results on the Charades-STA and ActivityNet Captions datasets, where the arrows indicate the starting and ending points of the corresponding grounded segment based on the query. Compared to the baseline method that only considers RGB features, the proposed DRFT approach is able to predict more accurate results by leveraging the multi-modal features from RGB, optical flow and depth. More results are presented in the supplementary material.
4.4 Analysis of Multi-modal Learning
To understand the complementary property of each modality, we analyze the video temporal grounding results of some example action categories. Figure 4 shows the performance of the single-stream baseline with RGB as input, single-stream DRFT models with RGB, flow or depth as input, and three-stream DRFT model respectively. The three plots contain categories where RGB, flow or depth performs better than the other two modalities. We first show that the single-stream DRFT model with contrastive learning improves from the single-stream baseline (red bars vs. orange bars). We then investigate the complementary property between the three modalities. For actions with smaller movement (e.g., “smiling”) in the left group of Figure 4, models using RGB as input generally
perform better. For actions with larger motion (e.g., “closing a door” or “throwing a pillow”) in the middle group, flow provides more useful information (denoted as green bars). As for actions with small motion that can be easily recognized by their structure (e.g., “sitting in a bed” or “working at a table”) in the right group, depth is superior to the other two modalities (denoted as blue bars). With the complementary property between RGB, flow and depth, we can take advantage of each modality and further improve the performance in the three-stream DRFT model (denoted as purple bars).
To further analyze the impact of each modality, we provide the learned weights for dynamic fusion in the three-stream DRFT for these categories in Table 4, where the top, middle and bottom groups contain categories for which RGB, flow and depth help the most respectively. Flow → RGB means flow-conditioned RGB features, etc. We observe that for actions with smaller movement (top group), the weights for RGB features are larger. For actions with larger motion (middle group), the weights for optical flow are larger. Regarding actions with small motion that can be easily recognized by their structure (bottom group), the weights for depth are larger. This shows that the model can exploit each modality based on the complementary property between RGB, flow and depth.
5 Conclusions
In this paper, we focus on the task of text-guided video temporal grounding. In contrast to existing methods that consider only RGB images as visual features, we propose the DRFT model to learn complementary visual information from the RGB, optical flow and depth modalities. While RGB features provide abundant appearance information, we show that representation models based on these cues alone are not effective for temporal grounding in videos with cluttered backgrounds. We therefore adopt optical flow to capture motion cues, and depth maps to estimate image structure. To combine the three modalities more effectively, we propose an inter-modal feature learning module, which performs co-attention between modalities using transformers, and dynamically fuses the multi-modal features based on the input data. To further enhance the multi-modal training, we incorporate an intra-modal feature learning module that performs self-supervised contrastive learning within each modality. The contrastive loss enforces cross-video features to be close to each other when they contain the same action, and to be far apart otherwise. We conduct extensive experiments on two benchmark datasets, demonstrating the effectiveness of the proposed multi-modal framework with inter- and intra-modal feature learning.
Acknowledgements
This work is supported in part by NSF CAREER grant 1149783 and gifts from Snap as well as eBay.

1. What is the focus of the paper regarding text-guided video grounding?
2. What are the strengths of the proposed method, particularly in combining multimodal information?
3. What are the weaknesses of the paper, especially regarding design choices and lack of clarity?
4. Do you have any concerns or questions regarding the necessity of certain techniques used in the proposed method?
5. How do the experimental results support the effectiveness of the proposed approach?
Summary Of The Paper
This paper addresses text-guided video grounding, which identifies the time interval of an event according to the query text. In contrast to prior work that only leverages RGB information, the authors propose DRFT to combine RGB, depth, and flow maps to boost the performance. To facilitate representation learning, the authors adopt a co-attentional transformer to fuse multi-modal information and contrastive learning to enhance the features across different videos. Experimental results show that the proposed DRFT outperforms the state of the art on Charades-STA and ActivityNet.
Review
Overall, the paper is well-written and easy to follow. Experimental results also justify the effectiveness of the proposed method. However, the authors are expected to justify their design choices in the proposed method rather than just giving a combination of existing techniques. It would help further extensions if the authors could elaborate more on the current design. More details can be found below.
In DRFT, the authors only consider two kinds of fusion, namely RGB-depth and RGB-flow. Is there any specific reason not to use other combinations, e.g., depth-flow?
The baseline models in Sec. 3.1 do not make sense to me. It seems to me that the most straightforward approach would be to concatenate different modalities and learn a 1x1 conv to fuse those features. Is there any reason that the authors adopt an averaging strategy? What’s the gap between the concatenation and the other models mentioned in the paper?
I am curious about the necessity of using a co-attentional transformer. There have been a few lightweight attention modules, such as CBAM [1]. Can the authors briefly elaborate on this design choice?
Although the experiments suggest multi-modality does help, it is unclear how the modalities help. Maybe the authors could provide visualizations showing what kind of information helps which activities more, by showing the attention weights and the corresponding activities.
The reason why intra-modal feature learning works remains unclear to me. It might be a novel point of view, but the authors are encouraged to provide more insights in the paper.
[1] Woo, Sanghyun, et al. "CBAM: Convolutional Block Attention Module." Proceedings of the European Conference on Computer Vision (ECCV), 2018.
NIPS | Title
End-to-end Multi-modal Video Temporal Grounding
Abstract
We address the problem of text-guided video temporal grounding, which aims to identify the time interval of a certain event based on a natural language description. Different from most existing methods that only consider RGB images as visual features, we propose a multi-modal framework to extract complementary information from videos. Specifically, we adopt RGB images for appearance, optical flow for motion, and depth maps for image structure. While RGB images provide abundant visual cues of certain events, the performance may be affected by background clutters. Therefore, we use optical flow to focus on large motion and depth maps to infer the scene configuration when the action is related to objects recognizable with their shapes. To integrate the three modalities more effectively and enable inter-modal learning, we design a dynamic fusion scheme with transformers to model the interactions between modalities. Furthermore, we apply intra-modal self-supervised learning to enhance feature representations across videos for each modality, which also facilitates multi-modal learning. We conduct extensive experiments on the Charades-STA and ActivityNet Captions datasets, and show that the proposed method performs favorably against state-of-the-art approaches.
1 Introduction
With the rapid growth of video data in our daily lives, video understanding has become an ever increasingly important task in computer vision. Research involving other modalities such as text and speech has also drawn much attention in recent years, e.g., video captioning [17, 23], and video question answering [18, 16]. In this paper, we focus on text-guided video temporal grounding, which aims to localize the starting and ending time of a segment corresponding to a text query. It is one of the most effective approaches to understand video contents, and applicable to numerous tasks, such as video retrieval, video editing and human-computer interaction. This problem is considerably challenging as it requires accurate recognition of objects, scenes and actions, as well as joint comprehension of video and language.
Existing methods [34, 26, 33, 22] usually consider only RGB images as visual cues, which are less effective for recognizing objects and actions in videos with complex backgrounds. To understand the video contents more holistically, we propose a multi-modal framework to learn complementary visual features from RGB images, optical flow and depth maps. RGB images provide abundant visual information, which is essential for visual recognition. However, existing methods based on appearance alone are likely to be less effective for complex scenes with cluttered backgrounds. For example, since the query text descriptions usually involve moving objects such as “Closing a door” or “Throwing a pillow”, using optical flow as input is able to identify such actions with large motion. On the other hand, depth is another cue that is invariant to color and lighting, and is often used to complement the RGB input in object detection and semantic segmentation. In our task, depth information helps the proposed model recognize actions involving objects with distinct shapes as the context. For example, actions such as “Sitting in a bed” or “Working at a table” are not easily recognized by optical flow due to small motion, but depth can provide structural information to assist the learning process. We also note that, our goal is to design an end-to-end multi-modal framework
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
for video grounding by directly utilizing low-level cues such as optical flow and depth, while other alternatives based on object detector or semantic segmentation is out of the scope of this work.
To leverage multi-modal cues, one straightforward way is to construct a multi-stream model that takes individual modality as the input in each stream, and then averages the multi-stream output predictions to obtain final results. However, we find that this scheme is less effective due to the lack of communication across different modalities, e.g., using depth cues alone without considering RGB features is not sufficient to learn the semantic information as the appearance cue does. To tackle this issue, we propose a multi-modal framework with 1) an inter-modal module that learns cross-modal features, and 2) an intra-modal module to self-learn feature representations across videos.
For inter-modal learning, we design a fusion scheme with co-attentional transformers [20] to dynamically fuse features from different modalities. One motivation is that, different videos may require to adopt a different combination of modalities, e.g., “Working at a table” would require more appearance and depth information, while optical flow is more important for “Throwing a pillow”. To enhance feature representations for each modality and thereby improve multi-modal learning, we introduce an intra-modal module via self-supervised contrastive learning [7, 15]. The goal is to ensure the feature consistency across video clips when they contain the same action. For example, with the same action “Eating”, it may happen at different locations with completely different backgrounds and contexts, or with different text descriptions that “eats” different food. With our intra-modal learning, it enforces features close to each other when they describe the same action and learn features that are invariant to other distracted factors across videos, and thus it can improve our multi-modal learning paradigm.
We conduct extensive experiments on the Charades-STA [10] and ActivityNet Captions [17] datasets to demonstrate the effectiveness of our multi-modal learning framework for video temporal grounding using (D)epth, (R)GB, and optical (F)low with the (T)ext as the query, and name our method as DRFT. First, we present the complementary property of multi-modality and the improved performance over the single-modality models. Second, we validate the individual contributions of our proposed components, i.e., inter- and intra-modal modules, that facilitate multi-modal learning. Finally, we show state-of-the-art performance for video temporal grounding against existing methods.
The main contributions of this work are summarized as follows: 1) We propose a multi-modal framework for text-guided video temporal grounding by extracting complementary information from RGB, optical flow and depth features. 2) We design a dynamic fusion mechanism across modalities via co-attentional transformer to effectively learn inter-modal features. 3) We apply self-supervised contrastive learning across videos for each modality to enhance intra-modal feature representations that are invariant to distracted factors with respect to actions.
2 Related Work
Text-Guided Video Temporal Grounding. Given a video and a natural language query, textguided video temporal grounding aims to predict the starting and ending time of the video clip that best matches the query sentence. Existing methods for this task can be categorized into two groups, i.e., two-stage and one-stage schemes (see Figure 1(a)(b)). Most two-stage approaches adopt a propose-and-rank pipeline, where they first generate clip proposals and then rank the proposals based on their similarities with the query sentence. Early two-stage methods [10, 14] obtain proposals by scanning the whole video with sliding windows. Since the sliding window mechanism is computationally expensive and usually produces many redundant proposals, numerous methods are subsequently proposed to improve the efficiency and effectiveness of proposal generation. The TGN model [2] performs frame-by-word interactions and localize the proposals in one single pass. Other approaches focus on reducing redundant proposals by generating query-guided proposals [31] or semantic activity proposals [3]. The MAN method [34] models the temporal relationships between proposals using a graph architecture to improve the quality of proposals. To alleviate the computation of observing the whole video, reinforcement learning [13, 30] is utilized to guide the intelligent agent to glance over the video in a discontinuous way. While the two-stage methods achieve promising results, the computational cost is high for comparing all proposal-query pairs, and the performance is largely limited by the quality of proposal generation.
To overcome the issues of two-stage methods, some recent approaches adopt a one-stage pipeline to directly predict the temporal segment from the fusion of video and text features. Most of the one-stage approaches focus on the attention mechanisms or interaction between modalities. For
example, the ABLR method [32] predicts the temporal coordinates using a co-attention based location regression algorithm. The ExCL mechanism [11] exploits the cross-modal interactions between video and text, and the PfTML-GA model [26] improves the performance by introducing the queryguided dynamic filter. Moreover, the DRN scheme [33] leverages dense supervision from the sparse annotations to facilitate the training process. Recently, the LGI model [22] decomposes the query sentence into multiple semantic phrases and conducts local and global interactions between the video and text features. In our framework, we adopt LGI as the baseline that uses the hierarchical video-text interaction. However, different from LGI that only considers RGB frames as input, we take RGB, optical flow and depth as input, and design the inter-modality learning technique to learn complementary information from the video. Furthermore, we apply contrastive learning across videos to enhance the feature representations in each modality, which helps the learning of the whole model (see Figure 1(c)).
Multi-Modal Learning. As typical event or actions can be described by signals from multiple modalities, understanding the correlation between different modalities is crucial to solve problems more comprehensively. Research on joint vision and language learning [6, 23, 16, 7] has gained much attention in recent years since natural language is an intuitive way for human communication. Recent studies [24, 8] based on the transformer [29] have shown great success in self-supervised learning and transfer learning for natural language tasks. The transformer-based BERT model [8] has also been widely used to learn joint representations for vision and language. These methods [20, 21, 35, 19, 5, 9] aim to learn generic representations from a large amount of image-text pairs in a self-supervised manner, and then fine-tune the model for downstream vision and language tasks. The ViLBERT scheme [20] extracts features from image and text using two parallel BERT-style models, and then connects the two streams with the co-attentional transformer layers. In this work, we focus on the video temporal grounding task guided by texts, while introducing multi-modality to improve model learning, which is not studied before. For fusing the multi-modal information, we leverage the co-attentional transformer layers [20] in our framework and design an approach by fusing the RGB features with optical flow and depth features respectively.
3 Proposed Framework
In this work, we address the problem of text-guided video temporal grounding using a multi-modal framework. The pipeline of the proposed framework is illustrated in Figure 2. Given an input video V = {Vt}Tt=1 with T frames and a query sentence Q = {Qi}Ni=1 with N words, we aim to localize
the starting and ending time [ts, te] of the event corresponding to the query. To this end, we design a multi-modal framework to learn complementary visual information from RGB images, optical flow and depth maps. From the input video, we first compute the depth map of each frame and the optical flow of each pair of consecutive frames. We then apply the visual encoders Ed, Er, Ef to extract features from the depth, RGB and flow inputs. A textual encoder Et is utilized to extract the feature of the query sentence Q. The local-global interaction modules (LGI) then incorporate the textual feature into each visual modality, and generate the multi-modal features Md, Mr and Mf for depth, RGB and flow respectively.
To effectively integrate the features from different modalities and enable inter-modal feature learning, we propose a dynamic fusion scheme with transformers to model the interaction between modalities. The feature after integration is then fed into a regression module (REG) to predict the starting and ending time [ts, te] of the target video segment. To enhance the feature representations in each modality, we introduce an intra-modal learning module that conducts self-supervised contrastive learning across videos. The intra-modal learning is applied on the multi-modal features Md, Mr and Mf separately to enforce features of video segments containing the same action to be close to each other, and those from different action categories to be far apart.
3.1 Inter-Modal Feature Learning
Videos contain rich information in both spatial and temporal dimensions. To learn information more comprehensively, in addition to the RGB modality, we also consider optical flow that captures motion, and depth feature that represents image structure. An intuitive way to combine the three modalities is to utilize a multi-stream model and directly average the outputs of individual streams. However, since the importance of each modality is not the same in different situations, directly averaging them may downweigh the importance of a specific modality and degrade the performance. In Table 1, we present the results of two-stream (RGB and flow) and three-stream (RGB, flow and depth) baseline models, where the outputs from different modalities are averaged before the final output layer. Compared to the single-stream (RGB) baseline model, the multi-stream models do not improve the performance, which shows that it is not intuitive to learn complementary information from multi-modal features.
Such cases may happen frequently in certain actions. For example, flow features would not help much for “Sitting in a bed” but would help more for “Closing a door”. Therefore, having a dynamic mechanism is critical for multi-modal fusion.
Co-attentional Feature Fusion. The ensuing question becomes how to learn effective features across modalities and also fuse them dynamically. First, we observe that, although depth and flow modalities are effective in some situations, they alone are not able to capture the semantic information, which is crucial for video-text understanding. Thereby, we design a co-attentional scheme to allow joint feature learning between RGB and another modality (either depth or flow).
Inspired by the co-attentional transformer layer [20] that consist of multi-headed attention blocks, where it takes a paired feature as the input (e.g., Md and Mr) and forms three matrices, Q, K, and V that represent queries, keys, and values (also see Figure 2). In this way, the multi-headed attention block for each modality takes the keys and values from the other modality as the input, and thus outputs the attention-pooled features conditioned on the other modality. For instance, we consider the pair of Md = {m1d, ...,mTd } and Mr = {m1r, ...,mTr } features for depth and RGB, where T is the number of frames in a video clip. The co-attentional transformer layer performs depth-conditioned RGB feature attention, as well as RGB-conditioned depth feature attention. Similarly, for flow and RGB features, Mf = {m1f , ...,mTf } and Mr = {m1r, ...,mTr }, we obtain another set of flowconditioned RGB feature attention and RGB-conditioned flow feature attention. Note that, since RGB feature generally contains the most abundant information in the video and is used for both depth and flow attention, we adopt a shared multi-headed attention block for RGB as shown in Figure 2.
Dynamic Feature Fusion. To effectively combine these output features from co-attentional transformers and perform the final prediction, we dynamically learn the weights for each multi-modal feature and linearly combine the four features using the weights (see Figure 2). Since the importance of each modality depends on the input data, we generate the weights by feeding each feature into a fully-connected (FC) layer, and normalize the weights to make the sum equal to 1. By dynamically generating weights from the features, we are able to adapt the multi-modal fusion process according to the input video and the query text. The fused feature is then served as input of the regression module (REG) to predict the starting and ending time.
3.2 Intra-Modal Feature Learning
To facilitate the multi-modal training, we introduce an intra-modal feature learning module, which enhances feature representations within each modality by applying self-supervised contrastive learning. Our motivation is that features in the same action category should be similar even if they are from different videos. To this end, for each input video V , we randomly sample positive videos V+ that contain the same action category, and negative videos V− with different action categories. We perform contrastive learning on the multi-modal features Md, Mr and Mf ∈ Rc×T separately for each modality, where c is the feature dimension and T is the number of frames. Since the multi-modal feature contains information from the whole video, we only consider features that contain the action by extracting the corresponding video segment. We then conduct average pooling in the temporal dimension and obtain a feature vector M ∈ Rc. The contrastive loss Lcl is formulated as:
Lcl = − log
∑ M+∈Q+
eh(M) ⊤h(M+)/τ∑
M+∈Q+ eh(M)⊤h(M+)/τ + ∑ M−∈Q− eh(M)⊤h(M−)/τ , (1)
where Q+ and Q− are the sets of positive and negative samples, and τ is the temperature parameter. Following the SimCLR approach [4], we use a linear layer h(·) to project the feature M to another embedding space where we apply contrastive loss. We accumulate the loss from each modality to be the final contrastive loss, namely Lcl = Lrcl(hr(AvgPool(Mr))) + L d cl(hd(AvgPool(Md))) + Lfcl(hf (AvgPool(Mf ))) for RGB, depth, and flow, respectively.
3.3 Model Training and Implementation Details
Overall Objective. The overall objective of the proposed method is composed of the supervised loss Lgrn for predicting temporal grounding that localizes the video segment and the self-supervised contrastive loss Lcl for intra-modal learning in (1): L = Lgrn + Lcl. The supervised loss Lgrn is the same as the loss defined in the LGI method [22], which includes:
1) Location regression loss Lreg = smoothL1(t̂s − ts) + smoothL1(t̂e − te) that calculates the L1 distance between the normalized ground truth time interval (t̂s, t̂e) ∈ [0, 1] and the predicted time interval (ts, te), where smoothL1 is defined as 0.5x2 if |x| < 1 and |x| − 0.5 otherwise. 2) Temporal attention guidance loss Ltag = − T∑ i=1 ôi log(oi) T∑
i=1 ôi
for the temporal attention in the REG
module, where ôi is set to 1 if the i-th segment is located within the ground truth time interval and 0 otherwise.
3) Distinct query attention loss Ldqa = ||(A⊤A) − λI||2F to enforce query attention weights to be distinct along different steps in the LGI module, where A ∈ RN×S is the concatenated query attention weights across S steps, || · ||F denotes Frobenius norm of a matrix, and λ ∈ [0, 1] controls the extent of overlap between query attention distributions. The supervised loss is the sum of the three loss terms Lgrn = Lreg +Ltag +Ldqa and we use the default setting in LGI [22], in which we refer readers to their paper for more details.
Implementation Details. We generate optical flow and depth maps using the RAFT [27] and MiDaS [25] method respectively. For the visual encoder Ed, Er and Ef , we employ the I3D [1] and C3D [28] networks for Charades-STA and ActivityNet Captions datasets respectively. As for the textual encoder Et, we adopt a bi-directional LSTM, where the feature is obtained by concatenating the last hidden states in forward and backward directions. The LGI module in our framework contains the sequential query attention and local-global video-text interactions as in the LGI model [22]. The REG module generates temporal attention weights to aggregate the features and performs regression via an MLP layer. The operations are defined similar to LGI [22]. The feature dimension c is set to 512. In the contrastive loss (1), the temperature parameter τ is set to 0.1. The projection head h(·) is a 2-layer MLP that project the feature to a 512-dimensional latent space. We implement the proposed model in PyTorch with the Adam optimizer and a fixed learning rate of 4× 10−4. The source code and models are available at https://github.com/wenz116/DRFT.
4 Experimental Results
4.1 Datasets and Evaluation Metric
We evaluate the proposed DRFT method against the state-of-the-art approaches on two benchmark datasets, i.e., Charades-STA [10] and ActivityNet Captions [17].
Charades-STA. It is built upon the Charades dataset for evaluating the video temporal grounding task. It contains 6,672 videos involving 16,128 video-query pairs, where 12,408 pairs are used for training and 3,720 pairs are for testing. The average length of the videos is 29.76 seconds. There are 2.4 annotated moments with duration 8.2 seconds in each video.
ActivityNet Captions. It is originally constructed for dense video captioning from the ActivityNet dataset. The captions are used as queries in the video temporal grounding task. It consists of 20k YouTube videos with an average duration of 120 seconds. The videos are annotated with 200 activity categories, which is more diverse compared to the Charades-STA dataset. Each video contains 3.65 queries, where each query has an average length of 13.48 words. The dataset is split into training, validation and testing set with a ratio of 2:1:1, resulting in 37,421, 17,505 and 17,031 video-query
pairs respectively. Since the testing set is not publicly available, we follow previous methods to evaluate the performance on the combination of the two validation sets, which are denoted as val1 and val2.
Following the typical evaluation setups [10, 22], we employ two metrics to assess the performance of video temporal grounding: 1) Recall at various thresholds of temporal Intersection over Union (R@IoU). It measures the percentage of predictions that have IoU with the ground truth larger than the threshold. We adopt 3 values {0.3, 0.5, 0.7} for the IoU threshold. 2) mean tIoU (mIoU). It is the average IoU over all results.
4.2 Overall Performance
In Table 2, we evaluate our framework against state-of-art approaches, including two-stage methods that rely on propose-and-rank schemes [10, 13, 34] and one-stage methods that only consider RGB videos as the input [12, 26, 33, 22]. First, compared to our baseline LGI [22], our results with single/two/three modalities are consistently better than theirs in all the evaluation metrics, which demonstrates the benefit of our intra-modal feature learning scheme and the inter-modal feature fusion mechanism. We also note that for our single-modal model, we use RGB as the input and only apply the intra-modal contrastive learning across videos, where it already performs favorably against existing algorithms. More results of the single-stream models using other modalities are provided in the supplementary material.
Second, we show that with the increased modality used in our model (bottom group in Table 2), the performance on two benchmarks are consistently improved, which demonstrates the complementary property of RGB, depth, and flow for video temporal grounding. Moreover, compared to the worse baseline results when adding more modalities without our proposed modules in Table 1, we validate the importance of designing a proper scheme of exchanging and fusing the information across modalities. It is also worth mentioning that with more modalities involved in the model, our method achieves larger performance gains compared to the baseline, e.g., more than 5% improvement in all the metrics on two benchmarks.
4.3 Ablation Study
In Table 3, we present the ablation of individual components proposed in our framework.
Inter-modal Feature Fusion. To enhance the communication across modalities, we propose to first use co-attentional transformers to learn attentive features across RGB and another modality, and then use a dynamic feature fusing scheme with learned weights to combine different features. In the first four rows of the middle group in Table 3, we show the following properties in this work:
1) Using transformers is effective for multi-modal feature learning. As shown in the first row of the middle group, the performance drops without the co-attentional transformers for feature fusion.
2) RGB information is essential for the temporal grounding task, and thus we conduct co-attention between a) RGB-flow and b) RGB-depth. In the second row of the middle group, if using the flow modality as the common modality, i.e., flow-RGB and flow-depth, the performance is worse than our final model.
3) Since RGB features are used for both flow and depth attention, we adopt a shared co-attention block for RGB as shown in Figure 2, where it can take RGB together with either the flow or depth cue as the input, and further enriches the attention mechanism. This design has not been considered in the prior work. In the third row of the middle group, without sharing the co-attentional module, the performance is worse than our final model.
4) The proposed dynamic fusion scheme via learnable weights is important to fuse features from different modalities. As shown the fourth row of the middle group, the performance drops significantly without learnable weights. Interestingly, learning dynamic weights to combine features is almost equally important compared to feature learning via transformers. This indicates that even with the state-of-the-art feature attention module, it is still challenging to combine multi-modal features.
Intra-modal Feature Learning. In the last row of the middle group in Table 3, we show the benefit of having the intra-modal cross-video feature learning. While multi-modal feature fusion already provides a strong baseline in our framework, improving feature representations in individual modalities is still critical for enhancing the entire multi-modal learning paradigm, in which such observations are not widely studied yet.
Qualitative Results. In Figure 3, we show sample results on the Charades-STA and ActivityNet Captions datasets, where the arrows indicate the starting and ending points of the corresponding grounded segment based on the query. Compared to the baseline method that only consider RGB features, the proposed DRFT approach is able to predict more accurate results by leveraging the multimodal features from RGB, optical flow and depth. More results are presented in the supplementary material.
4.4 Analysis of Multi-modal Learning
To understand the complementary property of each modality, we analyze the video temporal grounding results of some example action categories. Figure 4 shows the performance of the single-stream baseline with RGB as input, single-stream DRFT models with RGB, flow or depth as input, and three-stream DRFT model respectively. The three plots contain categories where RGB, flow or depth performs better than the other two modalities. We first show that the single-stream DRFT model with contrastive learning improves from the single-stream baseline (red bars vs. orange bars). We then investigate the complementary property between the three modalities. For actions with smaller movement (e.g., “smiling”) in the left group of Figure 4, models using RGB as input generally
perform better. For actions with larger motion (e.g., “closing a door” or “throwing a pillow”) in the middle group, flow provides more useful information (denoted as green bars). As for the actions with small motion but can be easily recognized by their structure (e.g., “sitting in a bed” or “working at a table”) in the right group, depth is superior to the other two modalities (denoted as blue bars). With the complementary property between RGB, flow and depth, we can take advantage of each modality and further improve the performance in the three-stream DRFT model (denoted as purple bars).
To further analyze the impact of each modality, we provide the learned weights for dynamic fusion in the three-stream DRFT for these categories in Table 4, where the top, middle and bottom groups contain categories that RGB, flow and depth help the most respectively. Flow → RGB means flowconditioned RGB features, etc. We observe that for actions with smaller movement (top group), the weights for RGB features are larger. For actions with larger motion (middle group), the weights for optical flow are larger. Regarding actions with small motion but can be easily recognized by their structure (bottom group), the weights for depth are larger. This shows that the model can exploit each modality based on the complementary property between RGB, flow and depth.
5 Conclusions
In this paper, we focus on the task of text-guided video temporal grounding. In contrast to existing methods that consider only RGB images as visual features, we propose the DRFT model to learn complementary visual information from the RGB, optical flow and depth modalities. While RGB features provide abundant appearance information, we show that representation models based on these cues alone are not effective for temporal grounding in videos with cluttered backgrounds. We therefore adopt optical flow to capture motion cues and depth maps to estimate image structure. To combine the three modalities more effectively, we propose an inter-modal feature learning module, which performs co-attention between modalities using transformers and dynamically fuses the multi-modal features based on the input data. To further enhance the multi-modal training, we incorporate an intra-modal feature learning module that performs self-supervised contrastive learning within each modality. The contrastive loss enforces cross-video features to be close to each other when they contain the same action, and far apart otherwise. We conduct extensive experiments on two benchmark datasets, demonstrating the effectiveness of the proposed multi-modal framework with inter- and intra-modal feature learning.
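As a purely illustrative sketch of the kind of cross-video contrastive objective described above (the exact loss used by DRFT is not specified in this excerpt; the function name, temperature value, and use of action labels as the positive-pair criterion are our own assumptions), one could write:

```python
import numpy as np

def cross_video_contrastive_loss(features, action_ids, temperature=0.1):
    """Pull together clip embeddings that share an action label and push
    apart all others. `features` is an (N, D) array of clip embeddings,
    `action_ids` an (N,) array of integer action labels."""
    action_ids = np.asarray(action_ids)
    # L2-normalize so dot products are cosine similarities.
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = feats @ feats.T / temperature          # (N, N) similarity logits
    np.fill_diagonal(sim, -np.inf)               # exclude self-pairs

    losses = []
    n = len(action_ids)
    for i in range(n):
        pos = (action_ids == action_ids[i]) & (np.arange(n) != i)
        if not pos.any():
            continue                             # no positive pair for this clip
        logits = sim[i] - np.max(sim[i][np.isfinite(sim[i])])   # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum())        # log-softmax over other clips
        losses.append(-log_prob[pos].mean())     # average over the positives
    return float(np.mean(losses)) if losses else 0.0
```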
Acknowledgements
This work is supported in part by NSF CAREER grant 1149783 and gifts from Snap as well as eBay. | 1. What is the focus of the paper regarding video temporal grounding?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of novelty and existing techniques?
3. Do you have any concerns regarding the definitions and descriptions provided in the paper?
4. How does the reviewer assess the clarity and quality of the content?
5. Are there any gaps in the analysis and studies conducted in the paper? | Summary Of The Paper
Review | Summary Of The Paper
A method for video temporal grounding in the multi-modal domain (RGB, flow, depth). To this end, transformer-based co-attention (+ adaptive fusion) and contrastive learning techniques are developed.
Review
The proposed modules are a straightforward use of existing techniques (transformers, contrastive learning). Although the co-attention scheme is developed by modifying the self-attention scheme of the transformer, it has already been tried in various multi-modal methods. Overall, the novelty is weak. Also, the analysis and studies are not sufficient, and several definitions/descriptions (the loss term and the compared method in the analysis) are unclear.
Not enough analysis and studies, and several unclear definitions/descriptions, even though there is room within the page limit.
In Table 1, does the single-stream DRFT mean replacing the inter-modal feature learning module with self-attention? If not, the single-stream DRFT is simply the method without any attention mechanism, and then it is not a proper baseline for verifying the effect of the proposed co-attention.
Unclear description of the REG module. For example, how are the start and end timings obtained if there are multiple start-end pairs in a video?
Why is the definition/formulation of L_grn not described?
In lines 205-206, the authors mention that their LGI module is similar to the LGI model [19]. Is the proposed method then built on top of [19] by adding the proposed modules?
There is no analysis or study of whether the dynamic fusion weights differ across videos or frames. Without showing this, the reviewer thinks that the gain of dynamic fusion may result merely from using more computational cost.
Are there no previous multi-modal methods for this task? As the multi-modal baseline, the authors use a simple combination of independently learned uni-modal models, but the lower performance of this baseline is not surprising at all. To show the effectiveness of the proposed multi-modal fusion, it should be compared with other (even basic) intermediate fusion schemes.
NIPS | Title
Input-Output Equivalence of Unitary and Contractive RNNs
Abstract
Unitary recurrent neural networks (URNNs) have been proposed as a method to overcome the vanishing and exploding gradient problem in modeling data with long-term dependencies. A basic question is how restrictive the unitary constraint is on the possible input-output mappings of such a network. This work shows that for any contractive RNN with ReLU activations, there is a URNN with at most twice the number of hidden states and the identical input-output mapping. Hence, with ReLU activations, URNNs are as expressive as general RNNs. In contrast, for certain smooth activations, it is shown that the input-output mapping of an RNN cannot be matched with a URNN, even with an arbitrary number of states. The theoretical results are supported by experiments on modeling of slowly-varying dynamical systems.
1 Introduction
Recurrent neural networks (RNNs) – originally proposed in the late 1980s [20, 6] – refer to a widely-used and powerful class of models for time series and sequential data. In recent years, RNNs have become particularly important in speech recognition [9, 10] and natural language processing [5, 2, 24] tasks.
A well-known challenge in training recurrent neural networks is the vanishing and exploding gradient problem [3, 18]. RNNs have a transition matrix that maps the hidden state at one time to the next time. When the transition matrix has an induced norm greater than one, the RNN may become unstable. In this case, small perturbations of the input at some time can result in a change in the output that grows exponentially over the subsequent time. This instability leads to a so-called exploding gradient. Conversely, when the norm is less than one, perturbations can decay exponentially so inputs at one time have negligible effect in the distant future. As a result, the loss surface associated with RNNs can have steep walls that may be difficult to minimize. Such problems are particularly acute in systems with long-term dependencies, where the output sequence can depend strongly on the input sequence many time steps in the past.
Unitary RNNs (URNNs) [1] are a simple and commonly-used approach to mitigate the vanishing and exploding gradient problem. The basic idea is to restrict the transition matrix to be unitary (an orthogonal matrix for the real-valued case). The unitary transition matrix is then combined with a non-expansive activation such as a ReLU or sigmoid. As a result, the overall transition mapping cannot amplify the hidden states, thereby eliminating the exploding gradient problem. In addition,
since all the singular values of a unitary matrix equal 1, the transition matrix does not attenuate the hidden state, potentially mitigating the vanishing gradient problem as well. (Due to activation, the hidden state may still be attenuated). Some early work in URNNs suggested that they could be more effective than other methods, such as long short-term memory (LSTM) architectures and standard RNNs, for certain learning tasks involving long-term dependencies [13, 1] – see a short summary below.
Although URNNs may improve the stability of the network for the purpose of optimization, a basic issue with URNNs is that the unitary constraint may potentially reduce the set of input-output mappings that the network can model. This paper seeks to rigorously characterize how restrictive the unitary constraint is on an RNN. We evaluate this restriction by comparing the set of input-output mappings achievable with URNNs with the set of mappings from all RNNs. As described below, we restrict our attention to RNNs that are contractive in order to avoid unstable systems.
We show three key results:
1. Given any contractive RNN with n hidden states and ReLU activations, there exists a URNN with at most 2n hidden states and the identical input-output mapping.
2. This result is tight in the sense that, given any n > 0, there exists at least one contractive RNN such that any URNN with the same input-output mapping must have at least 2n states.
3. The equivalence of URNNs and RNNs depends on the activation. For example, we show that there exists a contractive RNN with sigmoid activations such that there is no URNN with any finite number of states that exactly matches the input-output mapping.
The implication of this result is that, for RNNs with ReLU activations, there is no loss in the expressiveness of the model when imposing the unitary constraint. As we discuss below, the penalty is a two-fold increase in the number of parameters.
Of course, the expressiveness of a class of models is only one factor in their real performance. Based on these results alone, one cannot determine if URNNs will outperform RNNs in any particular task. Earlier works have found examples where URNNs offer some benefits over LSTMs and RNNs [1, 28]. But in the simulations below concerning modeling slowly-varying nonlinear dynamical systems, we see that URNNs with 2n states perform approximately equally to RNNs with n states.
Theoretical results on generalization error are an active subject area in deep neural networks. Some measures of model complexity such as [17] are related to the spectral norm of the transition matrices. For RNNs with non-contractive matrices, these complexity bounds will grow exponentially with the number of time steps. In contrast, since unitary matrices have spectral norm one, such bounds remain controlled, so this work also relates to generalization.
Prior work
The vanishing and exploding gradient problem in RNNs has been known almost as early as RNNs themselves [3, 18]. It is part of a larger problem of training models that can capture long-term dependencies, and several proposed methods address this issue. Most approaches use some form of gate vectors to control the information flow inside the hidden states, the most widely-used being LSTM networks [11]. Other gated models include Highway networks [21] and gated recurrent units (GRUs) [4]. L1/L2 penalization on gradient norms and gradient clipping were proposed to solve the exploding gradient problem in [18]. With L1/L2 penalization, capturing long-term dependencies is still challenging since the regularization term quickly kills the information in the model. A more recent work [19] has successfully trained very deep networks by carefully adjusting the initial conditions to impose an approximate unitary structure of many layers.
Unitary evolution RNNs (URNNs) are a more recent approach first proposed in [1]. Orthogonal constraints were also considered in the context of associative memories [27]. One of the technical difficulties is to efficiently parametrize the set of unitary matrices. The numerical simulations in this work focus on relatively small networks, where the parameterization is not a significant computational issue. Nevertheless, for larger numbers of hidden states, several approaches have been proposed. The model in [1] parametrizes the transition matrix as a product of reflection, diagonal, permutation, and Fourier transform matrices. This model spans a subspace of the whole unitary space, thereby limiting the expressive power of RNNs. The work [28] overcomes this issue by optimizing over
full-capacity unitary matrices. A key limitation of this work, however, is that the projection of weights onto the unitary space is not computationally efficient. A tunable, efficient parametrization of unitary matrices is proposed in [13]. This model provides a computational complexity of O(1) per parameter. The unitary matrix is represented as a product of rotation matrices and a diagonal matrix. By grouping specific rotation matrices, the model provides tunability of the span of the unitary space and enables using different capacities for different tasks. Combining the parametrization in [13] for unitary matrices and the “forget” ability of the GRU structure, [4, 12] presented an architecture that outperforms conventional models in several long-term dependency tasks. Other methods such as orthogonal RNNs proposed by [16] showed that the unitary constraint is a special case of the orthogonal constraint. By representing an orthogonal matrix as a product of Householder reflectors, one is able to span the entire space of orthogonal matrices. Imposing hard orthogonality constraints on the transition matrix limits the expressiveness of the model, and the speed of convergence and performance may degrade [26].
2 RNNs and Input-Output Equivalence
RNNs. We consider recurrent neural networks (RNNs) representing sequence-to-sequence mappings of the form
h(k) = φ(Wh(k−1) + Fx(k) + b), h(−1) = h−1, (1a)
y(k) = Ch(k), (1b)
parameterized by Θ = (W,F,b,C,h−1). The system is shown in Fig. 1. The system maps a sequence of inputs x(k) ∈ Rm, k = 0, 1, . . . , T − 1 to a sequence of outputs y(k) ∈ Rp. In equation (1), φ is the activation function (e.g. sigmoid or ReLU); h(k) ∈ Rn is an internal or hidden state; W ∈ Rn×n,F ∈ Rn×m, and C ∈ Rp×n are the hidden-to-hidden, input-to-hidden, and hidden-to-output weight matrices respectively; and b is the bias vector. We have considered the initial condition, h−1, as part of the parameters, although we will often take h−1 = 0. Given a set of parameters Θ, we will let
y = G(x,Θ) (2)
denote the resulting sequence-to-sequence mapping. Note that the number of time samples, T , is fixed throughout our discussion.
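For concreteness, a minimal NumPy sketch of the sequence-to-sequence map G(x, Θ) in (1)–(2) could look as follows (this is only an illustration of the notation, not code from the paper; the helper name and the default ReLU activation are our own choices):

```python
import numpy as np

def rnn_forward(x_seq, W, F, b, C, h_init, phi=lambda z: np.maximum(z, 0.0)):
    """Compute y = G(x, Theta) for the RNN of (1a)-(1b).

    x_seq: sequence of T input vectors x^(k); h_init: initial state h_{-1}.
    phi defaults to the ReLU activation; a sigmoid could be passed instead.
    """
    h = h_init
    outputs = []
    for x in x_seq:
        h = phi(W @ h + F @ x + b)      # state update (1a)
        outputs.append(C @ h)           # output map (1b)
    return outputs
```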
Recall [23] that a matrix W is unitary if W^H W = WW^H = I. When a unitary matrix is real-valued, it is also called orthogonal. In this work, we will restrict our attention to real-valued matrices, but still use the term unitary for consistency with the URNN literature. A Unitary RNN or URNN is simply an RNN (1) with a unitary state-to-state transition matrix W. A key property of unitary matrices is that they are norm-preserving, meaning that ‖Wh(k)‖2 = ‖h(k)‖2. In the context of (1a), the unitary constraint implies that the transition matrix does not amplify the state.
Equivalence of RNNs. Our goal is to understand the extent to which the unitary constraint in a URNN restricts the set of input-output mappings. To this end, we say that the RNNs for two parameters Θ1 and Θ2 are input-output equivalent if the sequence-to-sequence mappings are identical,
G(x,Θ1) = G(x,Θ2) for all x = (x(0), . . . ,x(T−1)). (3)
That is, for all input sequences x, the two systems have the same output sequence. Note that the hidden internal states h(k) in the two systems may be different. We will also say that two RNNs are equivalent on a set X of inputs if (3) holds for all x ∈ X . It is important to recognize that input-output equivalence does not imply that the parameters Θ1 and Θ2 are identical. For example, consider the case of linear RNNs where the activation in (1) is the identity, φ(z) = z. Then, for any invertible T, the transformation
W→ TWT−1, C→ CT−1, F→ TF, h−1 → Th−1, (4)
results in the same input-output mapping. However, the internal states h(k) will be mapped to Th(k). The fact that many parameters can lead to identical input-output mappings will be key to finding equivalent RNNs and URNNs.
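This invariance is easy to verify numerically for the linear case; the following sketch (our own illustration, with randomly chosen parameters and an arbitrary invertible T) checks that the transformed system produces the same outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, T_steps = 3, 2, 2, 50
W = 0.5 * rng.standard_normal((n, n)); F = rng.standard_normal((n, m))
C = rng.standard_normal((p, n)); h0 = rng.standard_normal(n)
Tmat = rng.standard_normal((n, n)) + 3 * np.eye(n)   # an invertible similarity transform
Tinv = np.linalg.inv(Tmat)

def linear_rnn(x_seq, W, F, C, h):
    ys = []
    for x in x_seq:
        h = W @ h + F @ x                # identity activation, zero bias
        ys.append(C @ h)
    return np.array(ys)

x_seq = rng.standard_normal((T_steps, m))
y1 = linear_rnn(x_seq, W, F, C, h0)
y2 = linear_rnn(x_seq, Tmat @ W @ Tinv, Tmat @ F, C @ Tinv, Tmat @ h0)
print(np.allclose(y1, y2))               # True: identical input-output mapping
```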
Contractive RNNs. The spectral norm [23] of a matrix W is the maximum gain of the matrix, ‖W‖ := max_{h ≠ 0} ‖Wh‖2/‖h‖2. In an RNN (1), the spectral norm ‖W‖ measures how much the transition matrix can amplify the hidden state. For URNNs, ‖W‖ = 1. We will say an RNN is contractive if ‖W‖ < 1, expansive if ‖W‖ > 1, and non-expansive if ‖W‖ ≤ 1. In the sequel, we will restrict our attention to contractive and non-expansive RNNs. In general, given an expansive RNN, we cannot expect to find an equivalent URNN. For example, suppose the hidden state h(k) is scalar. Then, the transition matrix W is also a scalar W = w, and the system is expansive if and only if |w| > 1. Now suppose the activation is a ReLU, φ(h) = max{0, h}. Then, it is possible that a constant input x(k) = x0 results in an output that grows exponentially with time: y(k) = const × w^k. Such an exponential increase is not possible with a URNN. We consider only non-expansive RNNs in the remainder of the paper. Some of our results will also need the assumption that the activation function φ(·) in (1) is non-expansive:
‖φ(x)− φ(y)‖2 ≤ ‖x− y‖2, for all x and y. This property is satisfied by the two most common activations, sigmoids and ReLUs.
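A tiny numerical illustration of the expansive scalar example above (our own sketch, with w = 1.1 chosen arbitrarily):

```python
w, x0, T_steps = 1.1, 1.0, 50           # expansive: |w| > 1
relu = lambda z: max(z, 0.0)

h = 0.0
for k in range(T_steps):
    h = relu(w * h + x0)                 # constant positive input
print(h)                                 # grows roughly like w^k; impossible for a URNN
```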
Equivalence of Linear RNNs. To get an intuition of equivalence, it is useful to briefly review the concept in the case of linear systems [14]. Linear systems are RNNs (1) in the special case where the activation function is identity, φ(z) = z; the initial condition is zero, h−1 = 0; and the bias is zero, b = 0. In this case, it is well-known that two systems are input-output equivalent if and only if they have the same transfer function,
H(s) := C(sI − W)^{-1} F. (5)
In the case of scalar inputs and outputs, H(s) is a rational function of the complex variable s with numerator and denominator degree of at most n, the dimension of the hidden state h(k). Any state-space system (1) that achieves a particular transfer function is called a realization of the transfer function. Hence two linear systems are equivalent if and only if they are realizations of the same transfer function.
A realization is called minimal if it is not equivalent to some linear system with fewer hidden states. A basic property of realizations of linear systems is that they are minimal if and only if they are controllable and observable. The formal definition is in any linear systems text, e.g. [14]. Loosely, controllable implies that all internal states can be reached with an appropriate input, and observable implies that all hidden states can be observed from the output. In the absence of controllability and observability, some hidden states can be removed while maintaining input-output equivalence.
3 Equivalence Results for RNNs with ReLU Activations
Our first results consider contractive RNNs with ReLU activations. For the remainder of the section, we will restrict our attention to the case of zero initial conditions, h(−1) = 0 in (1).
Theorem 3.1 Let y = G(x,Θc) be a contractive RNN with ReLU activation and states of dimension n. Fix M > 0 and let X be the set of all sequences such that ‖x(k)‖2 ≤ M < ∞ for all k. Then there exists a URNN with state dimension 2n and parameters Θu = (Wu,Fu,bu,Cu) such that for all x ∈ X , G(x,Θc) = G(x,Θu). Hence the input-output mapping is matched for bounded inputs.
Proof See Appendix A.
Theorem 3.1 shows that for any contractive RNN with ReLU activations, there exists a URNN with at most twice the number of hidden states and the identical input-output mapping. Thus, there is no loss in the set of input-output mappings with URNNs relative to general contractive RNNs on bounded inputs.
The penalty for using URNNs is the two-fold increase in state dimension, which in turn increases the number of parameters to be learned. We can estimate this increase in parameters as follows: The raw number of parameters for an RNN (1) with n hidden states, p outputs and m inputs is n^2 + (p + m + 1)n. However, for ReLU activations, the RNNs are equivalent under the transformations (4) using diagonal positive T. Hence, the number of degrees of freedom of a general RNN is at most d_rnn = n^2 + (p + m)n. We can compare this value to a URNN with 2n hidden states. The set of 2n × 2n unitary W has 2n(2n − 1)/2 degrees of freedom [22]. Hence, the total degrees of freedom in a URNN with 2n states is at most d_urnn = n(2n − 1) + 2n(p + m). We conclude that a URNN with 2n hidden states has slightly fewer than twice the number of parameters as an RNN with n hidden states.
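As a quick sanity check of this parameter counting, the following snippet (our own, using the simulation sizes of Section 5, n = 4 and m = p = 2) evaluates both expressions:

```python
def rnn_dof(n, m, p):
    """Degrees of freedom of a general RNN with n states, after removing the
    diagonal rescaling symmetry: n^2 + (p + m) * n."""
    return n * n + (p + m) * n

def urnn_dof(n, m, p):
    """Degrees of freedom of a URNN with 2n states: the 2n x 2n orthogonal
    matrix contributes 2n(2n - 1)/2, plus 2n(p + m) for the F and C maps."""
    return n * (2 * n - 1) + 2 * n * (p + m)

print(rnn_dof(4, 2, 2), urnn_dof(4, 2, 2))   # 32 and 60: slightly fewer than twice
```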
We note that there are cases in which the contractivity assumption is limiting; however, the limitations may not always be prohibitive. We will see in our experiments that imposing the contractivity constraint can improve learning for RNNs when models have sufficiently large numbers of time steps. Some related results where bounding the singular values helps with performance can be found in [26].
We next show a converse result.
Theorem 3.2 For every positive n, there exists a contractive RNN with ReLU nonlinearity and state dimension n such that every equivalent URNN has at least 2n states.
Proof See Appendix B.1 in the Supplementary Material.
The result shows that the 2n achievability bound in Theorem 3.1 is tight, at least in the worst case. In addition, the RNN constructed in the proof of Theorem 3.2 is not particularly pathological. We will show in our simulations in Section 5 that URNNs typically need twice the number of hidden states to achieve comparable modeling error as an RNN.
4 Equivalence Results for RNNs with Sigmoid Activations
Equivalence between RNNs and URNNs depends on the particular activation. Our next result shows that with sigmoid activations, URNNs are, in general, never exactly equivalent to RNNs, even with an arbitrary number of states.
We need the following technical definition: Consider an RNN (1) with a standard sigmoid activation φ(z) = 1/(1 + e−z). If W is non-expansive, then a simple application of the contraction mapping principle shows that for any constant input x(k) = x∗, there is a fixed point in the hidden state h∗ = φ(Wh∗ + Fx∗ + b). We will say that the RNN is controllable and observable at x∗ if the linearization of the RNN around (x∗,h∗) is controllable and observable.
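Such a fixed point can be found by simple fixed-point iteration, which converges because the update map is a contraction for non-expansive W combined with the standard sigmoid; a minimal NumPy sketch (our own illustration) is:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_fixed_point(W, F, b, x_star, iters=200):
    """Iterate h <- phi(W h + F x* + b). The map is a contraction when
    ||W|| <= 1, since the standard sigmoid has slope at most 1/4."""
    h = np.zeros(W.shape[0])
    for _ in range(iters):
        h = sigmoid(W @ h + F @ x_star + b)
    return h
```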
Theorem 4.1 There exists a contractive RNN with sigmoid activation function φ with the following property: If a URNN is controllable and observable at any point x∗, then the URNN cannot be equivalent to the RNN for inputs x in the neighborhood of x∗.
Proof See Appendix B.2 in the Supplementary Material.
The result provides a converse on equivalence: Contractive RNNs with sigmoid activations are not in general equivalent to URNNs, even if we allow the URNN to have an arbitrary number of hidden states. Of course, the approximation error between the URNN and RNN may go to zero as the URNN hidden dimension goes to infinity (e.g., similar to the approximation results in [8]). However, exact equivalence is not possible with sigmoid activations, unlike with ReLU activations. Thus, there is a fundamental difference in equivalence for smooth and non-smooth activations.
We note that the fundamental distinction between Theorem 3.1 and the opposite result in Theorem 4.1 is that the activation is smooth with a positive slope. With such activations, one can linearize the
system, and the eigenvalues of the transition matrix become visible in the input-output mapping. In contrast, ReLUs can zero out states and suppress these eigenvalues. This is a key insight of the paper and a further contribution in understanding nonlinear systems.
5 Numerical Simulations
In this section, we numerically compare the modeling ability of RNNs and URNNs where the true system is a contractive RNN with long-term dependencies. Specifically, we generate data from multiple instances of a synthetic RNN where the parameters in (1) are randomly generated. For the true system, we use m = 2 input units, p = 2 output units, and n = 4 hidden units at each time step. The matrices F and C and the bias b are generated as i.i.d. Gaussians. We use a random transition matrix,
W = I − ε AᵀA/‖A‖², (6)
where A is a Gaussian i.i.d. matrix and ε is a small value, taken here to be ε = 0.01. The matrix (6) will be contractive with singular values in (1 − ε, 1). By making ε small, the states of the system will vary slowly, hence creating long-term dependencies. In analogy with linear systems, the time constant will be approximately 1/ε = 100 time steps. We use ReLU activations. To avoid degenerate cases where the outputs are always zero, the biases b are adjusted so that each hidden state is active for a target fraction (60%) of the time, using a similar procedure as in [7].
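A short NumPy sketch of the construction in (6) (our own illustration, not the authors' code) together with a check of the resulting singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 4, 0.01
A = rng.standard_normal((n, n))
W = np.eye(n) - eps * (A.T @ A) / np.linalg.norm(A, 2) ** 2   # equation (6)

svals = np.linalg.svd(W, compute_uv=False)
print(svals.min(), svals.max())      # all singular values lie between 1 - eps and 1
```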
The trials have T = 1000 time steps, which corresponds to 10 times the time constant 1/ε = 100 of the system. We added noise to the output of this system such that the signal-to-noise ratio (SNR) is 15 dB or 20 dB. In each trial, we generate 700 training sequences and 300 test sequences from this system.
Given the input and output data of this contractive RNN, we attempt to learn the system with: (i) standard RNNs, (ii) URNNs, and (iii) LSTMs. The number of hidden states in the model is varied in the range n = [2, 4, 6, 8, 10, 12, 14], which includes values both above and below the true number of hidden states ntrue = 4. We used mean-squared error as the loss function. Optimization is performed using the Adam optimizer [15] with a batch size of 10 and a learning rate of 0.01. All models are implemented in the Keras package in Tensorflow. The experiments are done over 30 realizations of the original contractive system.
For URNN learning, among all the proposed algorithms for enforcing the unitary constraint on transition matrices during training [13, 28, 1, 16], we chose to project the transition matrix onto the full space of unitary matrices after each iteration using the singular value decomposition (SVD). Although SVD requires O(n³) computation for each projection, for our choices of hidden states it performed faster than the aforementioned methods.
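The projection step itself is simple; a minimal NumPy sketch (our own illustration of the SVD-based projection, not the authors' implementation) is:

```python
import numpy as np

def project_to_unitary(W):
    """Project a real square matrix onto the set of orthogonal (real unitary)
    matrices: replace all singular values by 1, i.e. W = U S V^T -> U V^T.
    This is the nearest orthogonal matrix in Frobenius norm."""
    U, _, Vt = np.linalg.svd(W)
    return U @ Vt

# Example: re-project the transition matrix after a gradient step.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W_proj = project_to_unitary(W)
print(np.allclose(W_proj.T @ W_proj, np.eye(8)))   # True
```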
Since we have training noise and since optimization algorithms can get stuck in local minima, we cannot expect "exact" equivalence between the learned model and the true system as in the theorems. So, instead, we look at the test error as a measure of the closeness of the learned model to the true system. Figure 2 on the left shows the test R2 for a Gaussian i.i.d. input and output with SNR = 20 dB for RNNs, URNNs, and LSTMs. The red dashed line corresponds to the optimal R2 achievable at the given noise level.
Note that even though the true RNN has ntrue = 4 hidden states, the RNN model does not obtain the optimal test R2 at n = 4. This is not due to training noise, since the RNN is able to capture the full dynamics when we over-parametrize the system to n ≈ 8 hidden states. The test error of the RNN at lower numbers of hidden states is likely due to the optimization being caught in a local minimum.
What is important for this work, though, is to compare the URNN test error with that of the RNN. We observe that the URNN requires approximately twice the number of hidden states to obtain the same test error as achieved by an RNN. To make this clear, the right plot shows the same performance data with the number of states adjusted for the URNN. Since our theory indicates that a URNN with 2n hidden states is as powerful as an RNN with n hidden states, we compare a URNN with 2n hidden units directly with an RNN with n hidden units. We call this the adjusted number of hidden units. We see that the URNN and RNN have similar test error when we appropriately scale the number of hidden units as predicted by the theory.
For completeness, the left plot in Figure 2 also shows the test error with an LSTM. It is important to note that the URNN has almost the same performance as an LSTM with a considerably smaller number of parameters.
Figure 3 shows similar results for the same task with SNR = 15 dB. For this task, the input is sparse Gaussian i.i.d., i.e. Gaussian with some probability p = 0.02 and 0 with probability 1− p. The left plot shows the R2 vs. the number of hidden units for RNNs and URNNs and the right plot shows the same results once the number of hidden units for URNN is adjusted.
We also compared the modeling ability of URNNs and RNNs using the Pixel-Permuted MNIST task. Each MNIST image is a 28 × 28 grayscale image with a label between 0 and 9. A fixed random permutation is applied to the pixels and each pixel is fed to the network in each time step as the input and the output is the predicted label for each image [1, 13, 26].
We evaluated various models on the Pixel-Permuted MNIST task using validation based early stopping. Without imposing a contractivity constraint during learning, the RNN is either unstable or requires a slow learning rate. Imposing a contractivity constraint improves the performance. Incidentally, using a URNN improves the performance further. Thus, contractivity can improve learning for RNNs when models have sufficiently large numbers of time steps.
6 Conclusion
Several works empirically show that using unitary recurrent neural networks improves the stability and performance of RNNs. In this work, we study how restrictive it is to use URNNs instead of RNNs. We show that URNNs are at least as powerful as contractive RNNs in modeling input-output mappings if enough hidden units are used. More specifically, for any contractive RNN we explicitly construct a URNN with twice the number of states of the RNN and an identical input-output mapping. We also provide converse results on the number of states and the activation function needed for exact matching. We emphasize that although it has been shown that URNNs outperform standard RNNs and LSTMs in many tasks that involve long-term dependencies, our main goal in this paper is to show that, from an approximation viewpoint, URNNs are as expressive as general contractive RNNs. At the cost of a two-fold increase in the number of parameters, we can use the stability benefits that the unitary constraint brings to the optimization of neural networks.
Acknowledgements
The work of M. Emami, M. Sahraee-Ardakan, A. K. Fletcher was supported in part by the National Science Foundation under Grants 1254204 and 1738286, and the Office of Naval Research under
Grant N00014-15-1-2677. S. Rangan was supported in part by the National Science Foundation under Grants 1116589, 1302336, and 1547332, NIST, the industrial affiliates of NYU WIRELESS, and the SRC.
A Proof of Theorem 3.1
The basic idea is to construct a URNN with 2n states such that the first n states match the states of the RNN and the last n states are always zero. To this end, consider any contractive RNN,
$$h_c^{(k)} = \phi(W_c h_c^{(k-1)} + F_c x^{(k)} + b_c), \qquad y^{(k)} = C_c h_c^{(k)},$$
where $h_c^{(k)} \in \mathbb{R}^n$. Since $W_c$ is contractive, we have $\|W_c\| \le \rho$ for some $\rho < 1$. Also, for a ReLU activation, $\|\phi(z)\| \le \|z\|$ for all pre-activation inputs $z$. Hence,
$$\|h_c^{(k)}\|_2 = \|\phi(W_c h_c^{(k-1)} + F_c x^{(k)} + b_c)\|_2 \le \|W_c h_c^{(k-1)} + F_c x^{(k)} + b_c\|_2 \le \rho\,\|h_c^{(k-1)}\|_2 + \|F_c\|\,\|x^{(k)}\|_2 + \|b_c\|_2.$$
Therefore, with bounded inputs, $\|x^{(k)}\|_2 \le M$, the state is bounded,
$$\|h_c^{(k)}\|_2 \le \frac{1}{1-\rho}\,\bigl[\|F_c\| M + \|b_c\|_2\bigr] =: M_h. \qquad (7)$$
We construct a URNN as
$$h_u^{(k)} = \phi(W_u h_u^{(k-1)} + F_u x^{(k)} + b_u), \qquad y^{(k)} = C_u h_u^{(k)},$$
where the parameters are of the form
$$h_u = \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} \in \mathbb{R}^{2n}, \quad W_u = \begin{bmatrix} W_1 & W_2 \\ W_3 & W_4 \end{bmatrix}, \quad F_u = \begin{bmatrix} F_c \\ 0 \end{bmatrix}, \quad b_u = \begin{bmatrix} b_c \\ b_2 \end{bmatrix}. \qquad (8)$$
Let $W_1 = W_c$. Since $\|W_c\| < 1$, we have $I - W_c^{\mathsf{T}} W_c \succ 0$. Therefore, there exists $W_3$ such that $W_3^{\mathsf{T}} W_3 = I - W_c^{\mathsf{T}} W_c$. With this choice of $W_3$, the first $n$ columns of $W_u$ are orthonormal. Let $\begin{bmatrix} W_2 \\ W_4 \end{bmatrix}$ extend these to an orthonormal basis for $\mathbb{R}^{2n}$. Then, the matrix $W_u$ will be orthonormal.
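A small NumPy sketch of this construction (our own illustration, not part of the paper) that builds $W_u$ from a given contractive $W_c$ and verifies orthogonality:

```python
import numpy as np

def lift_to_unitary(Wc):
    """Build the 2n x 2n orthogonal W_u of (8) from a contractive W_c:
    W_1 = W_c, W_3 a symmetric square root of I - W_c^T W_c, and the last
    n columns an orthonormal completion of the first n."""
    n = Wc.shape[0]
    evals, evecs = np.linalg.eigh(np.eye(n) - Wc.T @ Wc)
    W3 = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T  # W3^T W3 = I - Wc^T Wc
    first_cols = np.vstack([Wc, W3])                  # orthonormal columns by construction
    q, _ = np.linalg.qr(np.hstack([first_cols, np.random.randn(2 * n, n)]))
    return np.hstack([first_cols, q[:, n:]])          # append an orthonormal complement

Wc = 0.9 * np.linalg.qr(np.random.randn(4, 4))[0]     # an example contractive matrix
Wu = lift_to_unitary(Wc)
print(np.allclose(Wu.T @ Wu, np.eye(8)))              # True: Wu is orthogonal
```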
Next, let $b_2 = -M_h \mathbf{1}_{n \times 1}$, where $M_h$ is defined in (7). We show by induction that for all $k$,
$$h_1^{(k)} = h_c^{(k)}, \qquad h_2^{(k)} = 0. \qquad (9)$$
If both systems are initialized at zero, (9) is satisfied at $k = -1$. Now, suppose this holds up to time $k-1$. Then,
$$h_1^{(k)} = \phi(W_1 h_1^{(k-1)} + W_2 h_2^{(k-1)} + F_c x^{(k)} + b_c) = \phi(W_1 h_1^{(k-1)} + F_c x^{(k)} + b_c) = h_c^{(k)},$$
where we have used the induction hypothesis that $h_2^{(k-1)} = 0$. For $h_2^{(k)}$, note that
$$\|W_3 h_1^{(k-1)}\|_\infty \le \|W_3 h_1^{(k-1)}\|_2 \le \|h_1^{(k-1)}\|_2 \le M_h, \qquad (10)$$
where the last step follows from (7). Therefore,
$$W_3 h_1^{(k-1)} + W_4 h_2^{(k-1)} + b_2 = W_3 h_1^{(k-1)} - M_h \mathbf{1}_{n \times 1} \le 0. \qquad (11)$$
Hence, with the ReLU activation, $h_2^{(k)} = \phi(W_3 h_1^{(k-1)} + W_4 h_2^{(k-1)} + b_2) = 0$. By induction, (9) holds for all $k$. Then, if we define $C_u = [\,C_c \;\; 0\,]$, the outputs of the URNN and RNN systems are identical:
$$y_u^{(k)} = C_u h_u^{(k)} = C_c h_1^{(k)} = y_c^{(k)}.$$
This shows that the systems are equivalent. | 1. What is the focus and contribution of the paper regarding URNN and RNN?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis?
3. What are the weaknesses of the paper, especially regarding the limitation of the unitary constraint?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Do you have any questions regarding the proof construction, assumptions, and results? | Review | Review
UPDATE: I'm largely happy with how the authors addressed my points. I still think that the requirement for RNN to be non-expansive is quite restrictive per se, but this work may still be a good starting point for further theoretical discussion of such issues.
The authors provide a straightforward proof by construction that a URNN with two times the number of hidden states as the corresponding RNN is as expressive as the RNN, i.e. can be formulated such that it produces the same outputs for the same series of inputs. While this is true for RNN with ReLU activation, the authors further prove, by linearizing around fixed points, that this is generally not true for RNN/URNN with sigmoid activation.
Strengths:
- Given that URNN are an important technique for modeling long-term dependencies, while avoiding some of the complexities of LSTM/GRU, rigorous theoretical results on how restrictive the unitary constraint is are timely and important. As far as I'm aware, this is the first set of such results.
Weaknesses:
- The proof works only under the assumption that the corresponding RNN is contractive, i.e. has no diverging directions in its eigenspace. As the authors point out (line #127), for expansive RNN there will usually be no corresponding URNN. While this is true, I think it still imposes a strong limitation a priori on the classes of problems that could be computed by an URNN. For instance chaotic attractors with at least one diverging eigendirection are ruled out to begin with. I think this needs further discussion. For instance, could URNN/contractive RNN still *efficiently* solve some of the classical long-term RNN benchmarks, like the multiplication problem?
Minor stuff:
- Statement on line 134: Only true for standard sigmoid [1+exp(-x)]^-1, depends on max. slope
- Theorem 4.1: Would be useful to elaborate a bit more in the main text why this holds (intuitively, since the RNN unlike the URNN will converge to the nearest FP).
- line 199: The difference is not fundamental but only for the specific class of smooth (sigmoid) and non-smooth (ReLU) activation functions considered I think? Moreover: Is smoothness the crucial difference at all, or rather the fact that sigmoid is truly contractive while ReLU is just non-expansive?
- line 223-245: Are URNN at all practical given the costly requirement to enforce the unitary matrix after each iteration?
NIPS | Title
Input-Output Equivalence of Unitary and Contractive RNNs
Abstract
Unitary recurrent neural networks (URNNs) have been proposed as a method to overcome the vanishing and exploding gradient problem in modeling data with long-term dependencies. A basic question is how restrictive is the unitary constraint on the possible input-output mappings of such a network? This work shows that for any contractive RNN with ReLU activations, there is a URNN with at most twice the number of hidden states and the identical input-output mapping. Hence, with ReLU activations, URNNs are as expressive as general RNNs. In contrast, for certain smooth activations, it is shown that the input-output mapping of an RNN cannot be matched with a URNN, even with an arbitrary number of states. The theoretical results are supported by experiments on modeling of slowly-varying dynamical systems.
1 Introduction
Recurrent neural networks (RNNs) – originally proposed in the late 1980s [20, 6] – refer to a widelyused and powerful class of models for time series and sequential data. In recent years, RNNs have become particularly important in speech recognition [9, 10] and natural language processing [5, 2, 24] tasks.
A well-known challenge in training recurrent neural networks is the vanishing and exploding gradient problem [3, 18]. RNNs have a transition matrix that maps the hidden state at one time to the next time. When the transition matrix has an induced norm greater than one, the RNN may become unstable. In this case, small perturbations of the input at some time can result in a change in the output that grows exponentially over the subsequent time. This instability leads to a so-called exploding gradient. Conversely, when the norm is less than one, perturbations can decay exponentially so inputs at one time have negligible effect in the distant future. As a result, the loss surface associated with RNNs can have steep walls that may be difficult to minimize. Such problems are particularly acute in systems with long-term dependencies, where the output sequence can depend strongly on the input sequence many time steps in the past.
Unitary RNNs (URNNs) [1] is a simple and commonly-used approach to mitigate the vanishing and exploding gradient problem. The basic idea is to restrict the transition matrix to be unitary (an orthogonal matrix for the real-valued case). The unitary transitional matrix is then combined with a non-expansive activation such as a ReLU or sigmoid. As a result, the overall transition mapping cannot amplify the hidden states, thereby eliminating the exploding gradient problem. In addition,
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
since all the singular values of a unitary matrix equal 1, the transition matrix does not attenuate the hidden state, potentially mitigating the vanishing gradient problem as well. (Due to activation, the hidden state may still be attenuated). Some early work in URNNs suggested that they could be more effective than other methods, such as long short-term memory (LSTM) architectures and standard RNNs, for certain learning tasks involving long-term dependencies [13, 1] – see a short summary below.
Although URNNs may improve the stability of the network for the purpose of optimization, a basic issue with URNNs is that the unitary contraint may potentially reduce the set of input-output mappings that the network can model. This paper seeks to rigorously characterize how restrictive the unitary constraint is on an RNN. We evaluate this restriction by comparing the set of input-output mappings achievable with URNNs with the set of mappings from all RNNs. As described below, we restrict our attention to RNNs that are contractive in order to avoid unstable systems.
We show three key results:
1. Given any contractive RNN with n hidden states and ReLU activations, there exists a URNN with at most 2n hidden states and the identical input-ouput mapping.
2. This result is tight in the sense that, given any n > 0, there exists at least one contractive RNN such that any URNN with the same input-output mapping must have at least 2n states.
3. The equivalence of URNNs and RNNs depends on the activation. For example, we show that there exists a contractive RNN with sigmoid activations such that there is no URNN with any finite number of states that exactly matches the input-output mapping.
The implication of this result is that, for RNNs with ReLU activations, there is no loss in the expressiveness of model when imposing the unitary constraint. As we discuss below, the penalty is a two-fold increase in the number of parameters.
Of course, the expressiveness of a class of models is only one factor in their real performance. Based on these results alone, one cannot determine if URNNs will outperform RNNs in any particular task. Earlier works have found examples where URNNs offer some benefits over LSTMs and RNNs [1, 28]. But in the simulations below concerning modeling slowly-varying nonlinear dynamical systems, we see that URNNs with 2n states perform approximately equally to RNNs with n states.
Theoretical results on generalization error are an active subject area in deep neural networks. Some measures of model complexity such as [17] are related to the spectral norm of the transition matrices. For RNNs with non-contractive matrices, these complexity bounds will grow exponentially with the number of time steps. In contrast, since unitary matrices can bound the generalization error, this work can also relate to generalizability.
Prior work
The vanishing and exploding gradient problem in RNNs has been known almost as early as RNNs themselves [3, 18]. It is part of a larger problem of training models that can capture long-term dependencies, and several proposed methods address this issue. Most approaches use some form of gate vectors to control the information flow inside the hidden states, the most widely-used being LSTM networks [11]. Other gated models include Highway networks [21] and gated recurrent units (GRUs) [4]. L1/L2 penalization on gradient norms and gradient clipping were proposed to solve the exploding gradient problem in [18]. With L1/L2 penalization, capturing long-term dependencies is still challenging since the regularization term quickly kills the information in the model. A more recent work [19] has successfully trained very deep networks by carefully adjusting the initial conditions to impose an approximate unitary structure of many layers.
Unitary evolution RNNs (URNNs) are a more recent approach first proposed in [1]. Orthogonal constraints were also considered in the context of associative memories [27]. One of the technical difficulties is to efficiently parametrize the set of unitary matrices. The numerical simulations in this work focus on relatively small networks, where the parameterization is not a significant computational issue. Nevertheless, for larger numbers of hidden states, several approaches have been proposed. The model in [1] parametrizes the transition matrix as a product of reflection, diagonal, permutation, and Fourier transform matrices. This model spans a subspace of the whole unitary space, thereby limiting the expressive power of RNNs. The work [28] overcomes this issue by optimizing over
full-capacity unitary matrices. A key limitation in this work, however, is that the projection of weights on to the unitary space is not computationally efficient. A tunable, efficient parametrization of unitary matrices is proposed in [13]. This model provides the computational complexity of O(1) per parameter. The unitary matrix is represented as a product of rotation matrices and a diagonal matrix. By grouping specific rotation matrices, the model provides tunability of the span of the unitary space and enables using different capacities for different tasks. Combining the parametrization in [13] for unitary matrices and the “forget” ability of the GRU structure, [4, 12] presented an architecture that outperforms conventional models in several long-term dependency tasks. Other methods such as orthogonal RNNs proposed by [16] showed that the unitary constraint is a special case of the orthogonal constraint. By representing an orthogonal matrix as a product of Householder reflectors, we are able span the entire space of orthogonal matrices. Imposing hard orthogonality constraints on the transition matrix limits the expressiveness of the model and speed of convergence and performance may degrade [26].
2 RNNs and Input-Output Equivalence
RNNs. We consider recurrent neural networks (RNNs) representing sequence-to-sequence mappings of the form
h(k) = φ(Wh(k−1) + Fx(k) + b), h(−1) = h−1, (1a)
y(k) = Ch(k), (1b)
parameterized by Θ = (W,F,b,C,h−1). The system is shown in Fig. 1. The system maps a sequence of inputs x(k) ∈ Rm, k = 0, 1, . . . , T − 1 to a sequence of outputs y(k) ∈ Rp. In equation (1), φ is the activation function (e.g. sigmoid or ReLU); h(k) ∈ Rn is an internal or hidden state; W ∈ Rn×n,F ∈ Rn×m, and C ∈ Rp×n are the hidden-to-hidden, input-to-hidden, and hidden-to-output weight matrices respectively; and b is the bias vector. We have considered the initial condition, h−1, as part of the parameters, although we will often take h−1 = 0. Given a set of parameters Θ, we will let
y = G(x,Θ) (2)
denote the resulting sequence-to-sequence mapping. Note that the number of time samples, T , is fixed throughout our discussion.
Recall [23] that a matrix W is unitary if WHW = WWH = I. When a unitary matrix is realvalued, it is also called orthogonal. In this work, we will restrict our attention to real-valued matrices, but still use the term unitary for consistency with the URNN literature. A Unitary RNN or URNN is simply an RNN (1) with a unitary state-to-state transition matrix W. A key property of unitary matrices is that they are norm-preserving, meaning that ‖Wh(k)‖2 = ‖h(k)‖2. In the context of (1a), the unitary constraint implies that the transition matrix does not amplify the state.
Equivalence of RNNs. Our goal is to understand the extent to which the unitary constraint in a URNN restricts the set of input-output mappings. To this end, we say that the RNNs for two parameters Θ1 and Θ2 are input-output equivalent if the sequence-to-sequence mappings are identical,
G(x,Θ1) = G(x,Θ2) for all x = (x(0), . . . ,x(T−1)). (3)
That is, for all input sequences x, the two systems have the same output sequence. Note that the hidden internal states h(k) in the two systems may be different. We will also say that two RNNs are equivalent on a set of X of inputs if (3) holds for all x ∈ X . It is important to recognize that input-output equivalence does not imply that the parameters Θ1 and Θ2 are identical. For example, consider the case of linear RNNs where the activation in (1) is the identity, φ(z) = z. Then, for any invertible T, the transformation
W→ TWT−1, C→ CT−1, F→ TF, h−1 → Th−1, (4)
results in the same input-output mapping. However, the internal states h(k) will be mapped to Th(k). The fact that many parameters can lead to identical input-output mappings will be key to finding equivalent RNNs and URNNs.
Contractive RNNs. The spectral norm [23] of a matrix W is the maximum gain of the matrix ‖W‖ := maxh6=0 ‖Wh‖2‖h‖2 . In an RNN (1), the spectral norm ‖W‖ measures how much the transition matrix can amplify the hidden state. For URNNs, ‖W‖ = 1. We will say an RNN is contractive if ‖W‖ < 1, expansive if ‖W‖ > 1, and non-expansive if ‖W‖ ≤ 1. In the sequel, we will restrict our attention to contractive and non-expansive RNNs. In general, given an expansive RNN, we cannot expect to find an equivalent URNN. For example, suppose h(k) = h(k) is scalar. Then, the transition matrix W is also scalar W = w and w is expansive if and only if |w| > 1. Now suppose the activation is a ReLU φ(h) = max{0, h}. Then, it is possible that a constant input x(k) = x0 can result in an output that grows exponentially with time: y(k) = const × wk. Such an exponential increase is not possible with a URNN. We consider only non-expansive RNNs in the remainder of the paper. Some of our results will also need the assumption that the activation function φ(·) in (1) is non-expansive:
‖φ(x)− φ(y)‖2 ≤ ‖x− y‖2, for all x and y. This property is satisfied by the two most common activations, sigmoids and ReLUs.
Equivalence of Linear RNNs. To get an intuition of equivalence, it is useful to briefly review the concept in the case of linear systems [14]. Linear systems are RNNs (1) in the special case where the activation function is identity, φ(z) = z; the initial condition is zero, h−1 = 0; and the bias is zero, b = 0. In this case, it is well-known that two systems are input-output equivalent if and only if they have the same transfer function,
H(s) := C(sI−W)−1F. (5) In the case of scalar inputs and outputs, H(s) is a rational function of the complex variable s with numerator and denominator degree of at most n, the dimension of the hidden state h(k). Any statespace system (1) that achieves a particular transfer function is called a realization of the transfer function. Hence two linear systems are equivalent if and only if they are the realizations of the same transfer function.
A realization is called minimal if it is not equivalent some linear system with fewer hidden states. A basic property of realizations of linear systems is that they are minimal if and only if they are controllable and observable. The formal definition is in any linear systems text, e.g. [14]. Loosely, controllable implies that all internal states can be reached with an appropriate input and observable implies that all hidden states can be observed from the ouptut. In absence of controllability and observability, some hidden states can be removed while maintaining input-output equivalence.
3 Equivalence Results for RNNs with ReLU Activations
Our first results consider contractive RNNs with ReLU activations. For the remainder of the section, we will restrict our attention to the case of zero initial conditions, h(−1) = 0 in (1).
Theorem 3.1 Let y = G(x,Θc) be a contractive RNN with ReLU activation and states of dimension n. Fix M > 0 and let X be the set of all sequences such that ‖x(k)‖2 ≤ M < ∞ for all k. Then there exists a URNN with state dimension 2n and parameters Θu = (Wu,Fu,bu,Cu) such that for all x ∈ X , G(x,Θc) = G(x,Θu). Hence the input-output mapping is matched for bounded inputs.
Proof See Appendix A.
Theorem 3.1 shows that for any contractive RNN with ReLU activations, there exists a URNN with at most twice the number of hidden states and the identical input-output mapping. Thus, there is no loss in the set of input-output mappings with URNNs relative to general contractive RNNs on bounded inputs.
The penalty for using RNNs is the two-fold increase in state dimension, which in turn increases the number of parameters to be learned. We can estimate this increase in parameters as follows: The raw number of parameters for an RNN (1) with n hidden states, p outputs and m inputs is n2+(p+m+1)n. However, for ReLU activations, the RNNs are equivalent under the transformations (4) using diagonal positive T. Hence, the number of degrees of freedom of a general RNN is at most drnn = n
2 + (p + m)n. We can compare this value to a URNN with 2n hidden states. The set of 2n× 2n unitary W has 2n(2n− 1)/2 degrees of freedom [22]. Hence, the total degrees of freedom in a URNN with 2n states is at most durnn = n(2n− 1) + 2n(p+m). We conclude that a URNN with 2n hidden states has slightly fewer than twice the number of parameters as an RNN with n hidden states.
We note that there are cases that the contractivity assumption is limiting, however, the limitations may not always be prohibitive. We will see in our experiments that imposing the contractivity constraint can improve learning for RNNs when models have sufficiently large numbers of time steps. Some related results where bounding the singular values help with the performance can be found in [26].
We next show a converse result.
Theorem 3.2 For every positive n, there exists a contractive RNN with ReLU nonlinearity and state dimension n such that every equivalent URNN has at least 2n states.
Proof See Appendix B.1 in the Supplementary Material.
The result shows that the 2n achievability bound in Theorem 3.1 is tight, at least in the worst case. In addition, the RNN constructed in the proof of Theorem 3.2 is not particularly pathological. We will show in our simulations in Section 5 that URNNs typically need twice the number of hidden states to achieve comparable modeling error as an RNN.
4 Equivalence Results for RNNs with Sigmoid Activations
Equivalence between RNNs and URNNs depends on the particular activation. Our next result shows that with sigmoid activations, URNNs are, in general, never exactly equivalent to RNNs, even with an arbitrary number of states.
We need the following technical definition: Consider an RNN (1) with a standard sigmoid activation φ(z) = 1/(1 + e−z). If W is non-expansive, then a simple application of the contraction mapping principle shows that for any constant input x(k) = x∗, there is a fixed point in the hidden state h∗ = φ(Wh∗ + Fx∗ + b). We will say that the RNN is controllable and observable at x∗ if the linearization of the RNN around (x∗,h∗) is controllable and observable.
Theorem 4.1 There exists a contractive RNN with sigmoid activation function φ with the following property: If a URNN is controllable and observable at any point x∗, then the URNN cannot be equivalent to the RNN for inputs x in the neighborhood of x∗.
Proof See Appendix B.2 in the Supplementary Material.
The result provides a converse on equivalence: Contractive RNNs with sigmoid activations are not in general equivalent to URNNs, even if we allow the URNN to have an arbitrary number of hidden states. Of course, the approximation error between the URNN and RNN may go to zero as the URNN hidden dimension goes to infinity (e.g., similar to the approximation results in [8]). However, exact equivalence is not possible with sigmoid activations, unlike with ReLU activations. Thus, there is fundamental difference in equivalence for smooth and non-smooth activations.
We note that the fundamental distinction between Theorem 3.1 and the opposite result in Theorem 4.1 is that the activation is smooth with a positive slope. With such activations, you can linearize the
system, and the eigenvalues of the transition matrix become visible in the input-output mapping. In contrast, ReLUs can zero out states and suppress these eigenvalues. This is a key insight of the paper and a further contribution in understanding nonlinear systems.
5 Numerical Simulations
In this section, we numerically compare the modeling ability of RNNs and URNNs where the true system is a contractive RNN with long-term dependencies. Specifically, we generate data from multiple instances of a synthetic RNN where the parameters in (1) are randomly generated. For the true system, we use m = 2 input units, p = 2 output units, and n = 4 hidden units at each time step. The matrices F, C and b are generated as i.i.d. Gaussians. We use a random transition matrix,
W = I− ATA/‖A‖2, (6)
where A is Gaussian i.i.d. matrix and is a small value, taken here to be = 0.01. The matrix (6) will be contractive with singular values in (1 − , 1). By making small, the states of the system will vary slowly, hence creating long-term dependencies. In analogy with linear systems, the time constant will be approximately 1/ = 100 time steps. We use ReLU activations. To avoid degenerate cases where the outputs are always zero, the biases b are adjusted to ensure that the each hidden state is on some target 60% of the time using a similar procedure as in [7].
The trials have T = 1000 time steps, which corresponds to 10 times the time constant 1/ = 100 of the system. We added noise to the output of this system such that the signal-to-noise ratio (SNR) is 15 dB or 20 dB. In each trial, we generate 700 training samples and 300 test sequences from this system.
Given the input and the output data of this contractive RNN, we attempt to learn the system with: (i) standard RNNs, (ii) URNNs, and (iii) LSTMs. The hidden states in the model are varied in the range n = [2, 4, 6, 8, 10, 12, 14], which include values both above and below the true number of hidden states ntrue = 4. We used mean-squared error as the loss function. Optimization is performed using Adam [15] optimization with a batch size = 10 and learning rate = 0.01. All models are implemented in the Keras package in Tensorflow. The experiments are done over 30 realizations of the original contractive system.
For the URNN learning, of all the proposed algorithms for enforcing the unitary constraints on transition matrices during training [13, 28, 1, 16], we chose to project the transition matrix on the full space of unitary matrices after each iteration using singular value decomposition (SVD). Although SVD requires O(n3) computation for each projection, for our choices of hidden states it performed faster than the aforementioned methods.
Since we have training noise and since optimization algorithms can get stuck in local minima, we cannot expect “exact" equivalence between the learned model and true system as in the theorems. So, instead, we look at the test error as a measure of the closeness of the learned model to the true system. Figure 2 on the left shows the test R2 for a Gaussian i.i.d. input and output with SNR = 20 dB for RNNs, URNNs, and LSTMs. The red dashed line corresponds to the optimal R2 achievable at the given noise level.
Note that even though the true RNN has ntrue = 4 hidden states, the RNN model does not obtain the optimal test R2 at n = 4. This is not due to training noise, since the RNN is able to capture the full dynamics when we over-parametrize the system to n ≈ 8 hidden states. The test error in the RNN at lower numbers of hidden states is likely due to the optimization being caught in a local minima.
What is important for this work though is to compare the URNN test error with that of the RNN. We observe that URNN requires approximately twice the number of hidden states to obtain the same test error as achieved by an RNN. To make this clear, the right plot shows the same performance data with number of states adjusted for URNN. Since our theory indicates that a URNN with 2n hidden states is as powerful as an RNN with n hidden states, we compare a URNN with 2n hidden units directly with an RNN with n hidden units. We call this the adjusted hidden units. We see that the URNN and RNN have similar test error when we appropriately scale the number of hidden units as predicted by the theory.
For completeness, the left plot in Figure 2 also shows the test error with an LSTM. It is important to note that the URNN has almost the same performance as an LSTM with a considerably smaller number of parameters.
Figure 3 shows similar results for the same task with SNR = 15 dB. For this task, the input is sparse Gaussian i.i.d., i.e., Gaussian with probability p = 0.02 and 0 with probability 1 − p. The left plot shows R² vs. the number of hidden units for RNNs and URNNs, and the right plot shows the same results once the number of hidden units for the URNN is adjusted.
We also compared the modeling ability of URNNs and RNNs using the Pixel-Permuted MNIST task. Each MNIST image is a 28 × 28 grayscale image with a label between 0 and 9. A fixed random permutation is applied to the pixels, one pixel is fed to the network at each time step as the input, and the output is the predicted label for the image [1, 13, 26].
We evaluated various models on the Pixel-Permuted MNIST task using validation-based early stopping. Without imposing a contractivity constraint during learning, the RNN is either unstable or requires a slow learning rate. Imposing a contractivity constraint improves the performance. Incidentally, using a URNN improves the performance further. Thus, contractivity can improve learning for RNNs when models have sufficiently large numbers of time steps.
6 Conclusion
Several works empirically show that using unitary recurrent neural networks improves the stability and performance of RNNs. In this work, we study how restrictive it is to use URNNs instead of RNNs. We show that URNNs are at least as powerful as contractive RNNs in modeling input-output mappings if enough hidden units are used. More specifically, for any contractive RNN we explicitly construct a URNN with twice the number of states of the RNN and an identical input-output mapping. We also provide converse results for the number of states and the activation function needed for exact matching. We emphasize that although it has been shown that URNNs outperform standard RNNs and LSTMs in many tasks that involve long-term dependencies, our main goal in this paper is to show that, from an approximation viewpoint, URNNs are as expressive as general contractive RNNs. At the cost of a two-fold increase in the number of parameters, we can thus retain the stability benefits that URNNs bring to the optimization of recurrent networks.
Acknowledgements
The work of M. Emami, M. Sahraee-Ardakan, A. K. Fletcher was supported in part by the National Science Foundation under Grants 1254204 and 1738286, and the Office of Naval Research under
Grant N00014-15-1-2677. S. Rangan was supported in part by the National Science Foundation under Grants 1116589, 1302336, and 1547332, NIST, the industrial affiliates of NYU WIRELESS, and the SRC.
A Proof of Theorem 3.1
The basic idea is to construct a URNN with 2n states such that the first n states match the states of the RNN and the last n states are always zero. To this end, consider any contractive RNN,
h_c^(k) = φ(W_c h_c^(k−1) + F_c x^(k) + b_c),   y^(k) = C_c h_c^(k),
where h_c^(k) ∈ R^n. Since W_c is contractive, we have ‖W_c‖ ≤ ρ for some ρ < 1. Also, for a ReLU activation, ‖φ(z)‖ ≤ ‖z‖ for all pre-activation inputs z. Hence,
‖h_c^(k)‖_2 = ‖φ(W_c h_c^(k−1) + F_c x^(k) + b_c)‖_2 ≤ ‖W_c h_c^(k−1) + F_c x^(k) + b_c‖_2 ≤ ρ‖h_c^(k−1)‖_2 + ‖F_c‖‖x^(k)‖_2 + ‖b_c‖_2.
Therefore, with bounded inputs ‖x^(k)‖_2 ≤ M, the state is bounded:
‖h_c^(k)‖_2 ≤ [‖F_c‖M + ‖b_c‖_2]/(1 − ρ) =: M_h.   (7)
We construct a URNN as,
h_u^(k) = φ(W_u h_u^(k−1) + F_u x^(k) + b_u),   y^(k) = C_u h_u^(k),
where the parameters are of the form,
h_u = [h_1; h_2] ∈ R^{2n},   W_u = [W_1 W_2; W_3 W_4],   F_u = [F_c; 0],   b_u = [b_c; b_2].   (8)
Let W_1 = W_c. Since ‖W_c‖ < 1, we have I − W_c^T W_c ≻ 0. Therefore, there exists W_3 such that W_3^T W_3 = I − W_c^T W_c. With this choice of W_3, the first n columns of W_u are orthonormal. Let [W_2; W_4] extend these to an orthonormal basis for R^{2n}. Then, the matrix W_u will be orthonormal.
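This construction is easy to check numerically. The sketch below is our own: it takes W_3 to be the symmetric square root of I − W_c^T W_c and completes the basis with a full QR factorization, but any W_3 with W_3^T W_3 = I − W_c^T W_c and any orthonormal completion would do.

```python
import numpy as np

def embed_in_orthogonal(Wc):
    """Build an orthogonal Wu in R^{2n x 2n} whose first n columns are [Wc; W3]."""
    n = Wc.shape[0]
    evals, evecs = np.linalg.eigh(np.eye(n) - Wc.T @ Wc)          # PSD since ||Wc|| < 1
    W3 = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
    left = np.vstack([Wc, W3])                                    # 2n x n with orthonormal columns
    Q, _ = np.linalg.qr(left, mode="complete")
    return np.hstack([left, Q[:, n:]])                            # complete to a basis of R^{2n}

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Wc = 0.9 * A / np.linalg.norm(A, 2)                               # a contractive example
Wu = embed_in_orthogonal(Wc)
print(np.allclose(Wu.T @ Wu, np.eye(8)))                          # True
```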
Next, let b_2 = −M_h 1_{n×1}, where M_h is defined in (7). We show by induction that, for all k,
h_1^(k) = h_c^(k),   h_2^(k) = 0.   (9)
If both systems are initialized at zero, (9) is satisfied at k = −1. Now, suppose this holds up to time k − 1. Then,
h_1^(k) = φ(W_1 h_1^(k−1) + W_2 h_2^(k−1) + F_c x^(k) + b_c) = φ(W_1 h_1^(k−1) + F_c x^(k) + b_c) = h_c^(k),
where we have used the induction hypothesis that h_2^(k−1) = 0. For h_2^(k), note that
‖W_3 h_1^(k−1)‖_∞ ≤ ‖W_3 h_1^(k−1)‖_2 ≤ ‖h_1^(k−1)‖_2 ≤ M_h,   (10)
where the last step follows from (7). Therefore,
W_3 h_1^(k−1) + W_4 h_2^(k−1) + b_2 = W_3 h_1^(k−1) − M_h 1_{n×1} ≤ 0.   (11)
Hence, with ReLU activation, h_2^(k) = φ(W_3 h_1^(k−1) + W_4 h_2^(k−1) + b_2) = 0. By induction, (9) holds for all k. Then, if we define C_u = [C_c 0], the outputs of the URNN and RNN systems are identical:
y_u^(k) = C_u h_u^(k) = C_c h_1^(k) = y_c^(k).
This shows that the systems are equivalent. | 1. What is the main contribution of the paper regarding RNNs?
2. What are the limitations of the paper's analysis and scope?
3. How can the paper's analysis be extended to improve the generalization capacity of RNNs for predicting future signals? | Review | Review
Overall the paper focuses on theoretic analysis of the expressive powers of RNN's in terms of generating a desired sequence, but does not provide any implementable strategies to improve over existing algorithms in terms of avoiding vanishing or explosive gradient. Another concern is that ``generating desired output sequence'' may not be directly related to the generalization capacity of RNN in terms of predicting future signals, and so it would be more desirable if the analysis can bridge this gap. |
NIPS | Title
Input-Output Equivalence of Unitary and Contractive RNNs
Abstract
Unitary recurrent neural networks (URNNs) have been proposed as a method to overcome the vanishing and exploding gradient problem in modeling data with long-term dependencies. A basic question is how restrictive is the unitary constraint on the possible input-output mappings of such a network? This work shows that for any contractive RNN with ReLU activations, there is a URNN with at most twice the number of hidden states and the identical input-output mapping. Hence, with ReLU activations, URNNs are as expressive as general RNNs. In contrast, for certain smooth activations, it is shown that the input-output mapping of an RNN cannot be matched with a URNN, even with an arbitrary number of states. The theoretical results are supported by experiments on modeling of slowly-varying dynamical systems.
1 Introduction
Recurrent neural networks (RNNs) – originally proposed in the late 1980s [20, 6] – refer to a widelyused and powerful class of models for time series and sequential data. In recent years, RNNs have become particularly important in speech recognition [9, 10] and natural language processing [5, 2, 24] tasks.
A well-known challenge in training recurrent neural networks is the vanishing and exploding gradient problem [3, 18]. RNNs have a transition matrix that maps the hidden state at one time to the next time. When the transition matrix has an induced norm greater than one, the RNN may become unstable. In this case, small perturbations of the input at some time can result in a change in the output that grows exponentially over the subsequent time. This instability leads to a so-called exploding gradient. Conversely, when the norm is less than one, perturbations can decay exponentially so inputs at one time have negligible effect in the distant future. As a result, the loss surface associated with RNNs can have steep walls that may be difficult to minimize. Such problems are particularly acute in systems with long-term dependencies, where the output sequence can depend strongly on the input sequence many time steps in the past.
Unitary RNNs (URNNs) [1] are a simple and commonly-used approach to mitigate the vanishing and exploding gradient problem. The basic idea is to restrict the transition matrix to be unitary (an orthogonal matrix for the real-valued case). The unitary transition matrix is then combined with a non-expansive activation such as a ReLU or sigmoid. As a result, the overall transition mapping cannot amplify the hidden states, thereby eliminating the exploding gradient problem. In addition,
since all the singular values of a unitary matrix equal 1, the transition matrix does not attenuate the hidden state, potentially mitigating the vanishing gradient problem as well. (Due to activation, the hidden state may still be attenuated). Some early work in URNNs suggested that they could be more effective than other methods, such as long short-term memory (LSTM) architectures and standard RNNs, for certain learning tasks involving long-term dependencies [13, 1] – see a short summary below.
Although URNNs may improve the stability of the network for the purpose of optimization, a basic issue with URNNs is that the unitary constraint may potentially reduce the set of input-output mappings that the network can model. This paper seeks to rigorously characterize how restrictive the unitary constraint is on an RNN. We evaluate this restriction by comparing the set of input-output mappings achievable with URNNs with the set of mappings from all RNNs. As described below, we restrict our attention to RNNs that are contractive in order to avoid unstable systems.
We show three key results:
1. Given any contractive RNN with n hidden states and ReLU activations, there exists a URNN with at most 2n hidden states and the identical input-output mapping.
2. This result is tight in the sense that, given any n > 0, there exists at least one contractive RNN such that any URNN with the same input-output mapping must have at least 2n states.
3. The equivalence of URNNs and RNNs depends on the activation. For example, we show that there exists a contractive RNN with sigmoid activations such that there is no URNN with any finite number of states that exactly matches the input-output mapping.
The implication of this result is that, for RNNs with ReLU activations, there is no loss in the expressiveness of the model when imposing the unitary constraint. As we discuss below, the penalty is a two-fold increase in the number of parameters.
Of course, the expressiveness of a class of models is only one factor in their real performance. Based on these results alone, one cannot determine if URNNs will outperform RNNs in any particular task. Earlier works have found examples where URNNs offer some benefits over LSTMs and RNNs [1, 28]. But in the simulations below concerning modeling slowly-varying nonlinear dynamical systems, we see that URNNs with 2n states perform approximately equally to RNNs with n states.
Theoretical results on generalization error are an active subject area in deep neural networks. Some measures of model complexity such as [17] are related to the spectral norm of the transition matrices. For RNNs with non-contractive matrices, these complexity bounds will grow exponentially with the number of time steps. In contrast, since unitary matrices keep the spectral norm bounded, this work also relates to generalizability.
Prior work
The vanishing and exploding gradient problem in RNNs has been known almost as early as RNNs themselves [3, 18]. It is part of a larger problem of training models that can capture long-term dependencies, and several proposed methods address this issue. Most approaches use some form of gate vectors to control the information flow inside the hidden states, the most widely-used being LSTM networks [11]. Other gated models include Highway networks [21] and gated recurrent units (GRUs) [4]. L1/L2 penalization on gradient norms and gradient clipping were proposed to solve the exploding gradient problem in [18]. With L1/L2 penalization, capturing long-term dependencies is still challenging since the regularization term quickly kills the information in the model. A more recent work [19] has successfully trained very deep networks by carefully adjusting the initial conditions to impose an approximate unitary structure of many layers.
Unitary evolution RNNs (URNNs) are a more recent approach first proposed in [1]. Orthogonal constraints were also considered in the context of associative memories [27]. One of the technical difficulties is to efficiently parametrize the set of unitary matrices. The numerical simulations in this work focus on relatively small networks, where the parameterization is not a significant computational issue. Nevertheless, for larger numbers of hidden states, several approaches have been proposed. The model in [1] parametrizes the transition matrix as a product of reflection, diagonal, permutation, and Fourier transform matrices. This model spans a subspace of the whole unitary space, thereby limiting the expressive power of RNNs. The work [28] overcomes this issue by optimizing over
full-capacity unitary matrices. A key limitation in this work, however, is that the projection of weights onto the unitary space is not computationally efficient. A tunable, efficient parametrization of unitary matrices is proposed in [13]. This model provides a computational complexity of O(1) per parameter. The unitary matrix is represented as a product of rotation matrices and a diagonal matrix. By grouping specific rotation matrices, the model provides tunability of the span of the unitary space and enables using different capacities for different tasks. Combining the parametrization in [13] for unitary matrices and the “forget” ability of the GRU structure, [4, 12] presented an architecture that outperforms conventional models in several long-term dependency tasks. Other methods such as orthogonal RNNs proposed by [16] showed that the unitary constraint is a special case of the orthogonal constraint. By representing an orthogonal matrix as a product of Householder reflectors, one can span the entire space of orthogonal matrices. However, imposing hard orthogonality constraints on the transition matrix limits the expressiveness of the model, and the speed of convergence and performance may degrade [26].
2 RNNs and Input-Output Equivalence
RNNs. We consider recurrent neural networks (RNNs) representing sequence-to-sequence mappings of the form
h^(k) = φ(Wh^(k−1) + Fx^(k) + b),   h^(−1) = h_{−1},   (1a)
y^(k) = Ch^(k),   (1b)
parameterized by Θ = (W, F, b, C, h_{−1}). The system is shown in Fig. 1. The system maps a sequence of inputs x^(k) ∈ R^m, k = 0, 1, . . . , T − 1, to a sequence of outputs y^(k) ∈ R^p. In equation (1), φ is the activation function (e.g., sigmoid or ReLU); h^(k) ∈ R^n is an internal or hidden state; W ∈ R^{n×n}, F ∈ R^{n×m}, and C ∈ R^{p×n} are the hidden-to-hidden, input-to-hidden, and hidden-to-output weight matrices, respectively; and b is the bias vector. We have considered the initial condition, h_{−1}, as part of the parameters, although we will often take h_{−1} = 0. Given a set of parameters Θ, we will let
y = G(x,Θ) (2)
denote the resulting sequence-to-sequence mapping. Note that the number of time samples, T , is fixed throughout our discussion.
Recall [23] that a matrix W is unitary if W^H W = WW^H = I. When a unitary matrix is real-valued, it is also called orthogonal. In this work, we will restrict our attention to real-valued matrices, but still use the term unitary for consistency with the URNN literature. A Unitary RNN or URNN is simply an RNN (1) with a unitary state-to-state transition matrix W. A key property of unitary matrices is that they are norm-preserving, meaning that ‖Wh^(k)‖_2 = ‖h^(k)‖_2. In the context of (1a), the unitary constraint implies that the transition matrix does not amplify the state.
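For concreteness, the mapping y = G(x, Θ) in (1)–(2) is just the following loop; this sketch is a direct restatement of the equations, with ReLU as the example activation and our own variable names.

```python
import numpy as np

def rnn_forward(x, W, F, b, C, h_init=None, phi=lambda z: np.maximum(z, 0.0)):
    """y = G(x, Theta): x has shape (T, m), returns y with shape (T, p)."""
    h = np.zeros(W.shape[0]) if h_init is None else h_init
    ys = []
    for x_k in x:                        # k = 0, ..., T-1
        h = phi(W @ h + F @ x_k + b)     # state update (1a)
        ys.append(C @ h)                 # readout (1b)
    return np.stack(ys)
```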
Equivalence of RNNs. Our goal is to understand the extent to which the unitary constraint in a URNN restricts the set of input-output mappings. To this end, we say that the RNNs for two parameters Θ1 and Θ2 are input-output equivalent if the sequence-to-sequence mappings are identical,
G(x, Θ_1) = G(x, Θ_2) for all x = (x^(0), . . . , x^(T−1)).   (3)
That is, for all input sequences x, the two systems have the same output sequence. Note that the hidden internal states h^(k) in the two systems may be different. We will also say that two RNNs are equivalent on a set X of inputs if (3) holds for all x ∈ X. It is important to recognize that input-output equivalence does not imply that the parameters Θ_1 and Θ_2 are identical. For example, consider the case of linear RNNs where the activation in (1) is the identity, φ(z) = z. Then, for any invertible T, the transformation
W → TWT^{−1},   C → CT^{−1},   F → TF,   h_{−1} → Th_{−1},   (4)
results in the same input-output mapping. However, the internal states h^(k) will be mapped to Th^(k). The fact that many parameters can lead to identical input-output mappings will be key to finding equivalent RNNs and URNNs.
Contractive RNNs. The spectral norm [23] of a matrix W is the maximum gain of the matrix, ‖W‖ := max_{h≠0} ‖Wh‖_2/‖h‖_2. In an RNN (1), the spectral norm ‖W‖ measures how much the transition matrix can amplify the hidden state. For URNNs, ‖W‖ = 1. We will say an RNN is contractive if ‖W‖ < 1, expansive if ‖W‖ > 1, and non-expansive if ‖W‖ ≤ 1. In the sequel, we will restrict our attention to contractive and non-expansive RNNs. In general, given an expansive RNN, we cannot expect to find an equivalent URNN. For example, suppose the hidden state h^(k) is scalar. Then, the transition matrix W is also a scalar W = w, and w is expansive if and only if |w| > 1. Now suppose the activation is a ReLU, φ(h) = max{0, h}. Then, it is possible that a constant input x^(k) = x_0 can result in an output that grows exponentially with time: y^(k) = const × w^k. Such an exponential increase is not possible with a URNN. We consider only non-expansive RNNs in the remainder of the paper. Some of our results will also need the assumption that the activation function φ(·) in (1) is non-expansive:
‖φ(x)− φ(y)‖2 ≤ ‖x− y‖2, for all x and y. This property is satisfied by the two most common activations, sigmoids and ReLUs.
Equivalence of Linear RNNs. To get an intuition of equivalence, it is useful to briefly review the concept in the case of linear systems [14]. Linear systems are RNNs (1) in the special case where the activation function is identity, φ(z) = z; the initial condition is zero, h−1 = 0; and the bias is zero, b = 0. In this case, it is well-known that two systems are input-output equivalent if and only if they have the same transfer function,
H(s) := C(sI − W)^{−1}F.   (5)
In the case of scalar inputs and outputs, H(s) is a rational function of the complex variable s with numerator and denominator degree of at most n, the dimension of the hidden state h^(k). Any state-space system (1) that achieves a particular transfer function is called a realization of the transfer function. Hence two linear systems are equivalent if and only if they are realizations of the same transfer function.
A realization is called minimal if it is not equivalent to some linear system with fewer hidden states. A basic property of realizations of linear systems is that they are minimal if and only if they are controllable and observable. The formal definition is in any linear systems text, e.g., [14]. Loosely, controllable implies that all internal states can be reached with an appropriate input, and observable implies that all hidden states can be observed from the output. In the absence of controllability and observability, some hidden states can be removed while maintaining input-output equivalence.
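A quick numerical illustration of (4)–(5): a similarity transform leaves the Markov parameters C W^k F (and hence the transfer function) unchanged, even though the internal states differ. The snippet below is our own check, not from the paper.

```python
import numpy as np

def markov_params(W, F, C, T=20):
    """Impulse response C W^k F, k = 0..T-1, of the linear system (identity activation, b = 0)."""
    h, out = F.copy(), []
    for _ in range(T):
        out.append(C @ h)
        h = W @ h
    return np.stack(out)

rng = np.random.default_rng(0)
n, m, p = 4, 2, 2
W, F, C = 0.5 * rng.standard_normal((n, n)), rng.standard_normal((n, m)), rng.standard_normal((p, n))
T_mat = rng.standard_normal((n, n)) + 2 * np.eye(n)          # any invertible T in (4)
W2, F2, C2 = T_mat @ W @ np.linalg.inv(T_mat), T_mat @ F, C @ np.linalg.inv(T_mat)
print(np.allclose(markov_params(W, F, C), markov_params(W2, F2, C2)))   # True: same input-output map
```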
3 Equivalence Results for RNNs with ReLU Activations
Our first results consider contractive RNNs with ReLU activations. For the remainder of the section, we will restrict our attention to the case of zero initial conditions, h(−1) = 0 in (1).
Theorem 3.1 Let y = G(x,Θc) be a contractive RNN with ReLU activation and states of dimension n. Fix M > 0 and let X be the set of all sequences such that ‖x(k)‖2 ≤ M < ∞ for all k. Then there exists a URNN with state dimension 2n and parameters Θu = (Wu,Fu,bu,Cu) such that for all x ∈ X , G(x,Θc) = G(x,Θu). Hence the input-output mapping is matched for bounded inputs.
Proof See Appendix A.
Theorem 3.1 shows that for any contractive RNN with ReLU activations, there exists a URNN with at most twice the number of hidden states and the identical input-output mapping. Thus, there is no loss in the set of input-output mappings with URNNs relative to general contractive RNNs on bounded inputs.
The penalty for using URNNs is the two-fold increase in state dimension, which in turn increases the number of parameters to be learned. We can estimate this increase in parameters as follows: the raw number of parameters for an RNN (1) with n hidden states, p outputs and m inputs is n² + (p + m + 1)n. However, for ReLU activations, the RNNs are equivalent under the transformations (4) using diagonal positive T. Hence, the number of degrees of freedom of a general RNN is at most d_rnn = n² + (p + m)n. We can compare this value to a URNN with 2n hidden states. The set of 2n × 2n unitary W has 2n(2n − 1)/2 degrees of freedom [22]. Hence, the total degrees of freedom in a URNN with 2n states is at most d_urnn = n(2n − 1) + 2n(p + m). We conclude that a URNN with 2n hidden states has slightly fewer than twice the number of parameters as an RNN with n hidden states.
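The comparison is easy to tabulate; the short sketch below simply evaluates the two expressions above for the input/output sizes used in our experiments (m = p = 2).

```python
def d_rnn(n, p=2, m=2):
    return n * n + (p + m) * n                  # RNN with n hidden states

def d_urnn(n, p=2, m=2):
    return n * (2 * n - 1) + 2 * n * (p + m)    # URNN with 2n hidden states

for n in (4, 8, 16):
    print(n, d_rnn(n), d_urnn(n), d_urnn(n) / d_rnn(n))   # ratio stays below 2
```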
We note that there are cases where the contractivity assumption is limiting; however, the limitations may not always be prohibitive. We will see in our experiments that imposing the contractivity constraint can improve learning for RNNs when models have sufficiently large numbers of time steps. Some related results where bounding the singular values helps with performance can be found in [26].
We next show a converse result.
Theorem 3.2 For every positive n, there exists a contractive RNN with ReLU nonlinearity and state dimension n such that every equivalent URNN has at least 2n states.
Proof See Appendix B.1 in the Supplementary Material.
The result shows that the 2n achievability bound in Theorem 3.1 is tight, at least in the worst case. In addition, the RNN constructed in the proof of Theorem 3.2 is not particularly pathological. We will show in our simulations in Section 5 that URNNs typically need twice the number of hidden states to achieve comparable modeling error as an RNN.
4 Equivalence Results for RNNs with Sigmoid Activations
Equivalence between RNNs and URNNs depends on the particular activation. Our next result shows that with sigmoid activations, URNNs are, in general, never exactly equivalent to RNNs, even with an arbitrary number of states.
We need the following technical definition: Consider an RNN (1) with a standard sigmoid activation φ(z) = 1/(1 + e^{−z}). If W is non-expansive, then a simple application of the contraction mapping principle shows that for any constant input x^(k) = x*, there is a fixed point in the hidden state h* = φ(Wh* + Fx* + b). We will say that the RNN is controllable and observable at x* if the linearization of the RNN around (x*, h*) is controllable and observable.
Theorem 4.1 There exists a contractive RNN with sigmoid activation function φ with the following property: If a URNN is controllable and observable at any point x∗, then the URNN cannot be equivalent to the RNN for inputs x in the neighborhood of x∗.
Proof See Appendix B.2 in the Supplementary Material.
The result provides a converse on equivalence: Contractive RNNs with sigmoid activations are not in general equivalent to URNNs, even if we allow the URNN to have an arbitrary number of hidden states. Of course, the approximation error between the URNN and RNN may go to zero as the URNN hidden dimension goes to infinity (e.g., similar to the approximation results in [8]). However, exact equivalence is not possible with sigmoid activations, unlike with ReLU activations. Thus, there is fundamental difference in equivalence for smooth and non-smooth activations.
We note that the fundamental distinction between Theorem 3.1 and the opposite result in Theorem 4.1 is that the activation is smooth with a positive slope. With such activations, you can linearize the
system, and the eigenvalues of the transition matrix become visible in the input-output mapping. In contrast, ReLUs can zero out states and suppress these eigenvalues. This is a key insight of the paper and a further contribution in understanding nonlinear systems.
5 Numerical Simulations
In this section, we numerically compare the modeling ability of RNNs and URNNs where the true system is a contractive RNN with long-term dependencies. Specifically, we generate data from multiple instances of a synthetic RNN where the parameters in (1) are randomly generated. For the true system, we use m = 2 input units, p = 2 output units, and n = 4 hidden units at each time step. The matrices F, C and b are generated as i.i.d. Gaussians. We use a random transition matrix,
W = I − ε A^T A/‖A‖²,   (6)
where A is a Gaussian i.i.d. matrix and ε is a small value, taken here to be ε = 0.01. The matrix (6) will be contractive with singular values in (1 − ε, 1). By making ε small, the states of the system will vary slowly, hence creating long-term dependencies. In analogy with linear systems, the time constant will be approximately 1/ε = 100 time steps. We use ReLU activations. To avoid degenerate cases where the outputs are always zero, the biases b are adjusted to ensure that each hidden state is active some target 60% of the time, using a similar procedure as in [7].
The trials have T = 1000 time steps, which corresponds to 10 times the time constant 1/ε = 100 of the system. We added noise to the output of this system such that the signal-to-noise ratio (SNR) is 15 dB or 20 dB. In each trial, we generate 700 training sequences and 300 test sequences from this system.
Given the input and the output data of this contractive RNN, we attempt to learn the system with: (i) standard RNNs, (ii) URNNs, and (iii) LSTMs. The number of hidden states in the model is varied over n ∈ {2, 4, 6, 8, 10, 12, 14}, which includes values both above and below the true number of hidden states n_true = 4. We used mean-squared error as the loss function. Optimization is performed using the Adam optimizer [15] with a batch size of 10 and a learning rate of 0.01. All models are implemented in the Keras package in TensorFlow. The experiments are repeated over 30 realizations of the original contractive system.
For the URNN learning, of all the proposed algorithms for enforcing the unitary constraint on transition matrices during training [13, 28, 1, 16], we chose to project the transition matrix onto the full space of unitary matrices after each iteration using the singular value decomposition (SVD). Although SVD requires O(n³) computation for each projection, for our choices of hidden states it performed faster than the aforementioned methods.
Since we have training noise and since optimization algorithms can get stuck in local minima, we cannot expect "exact" equivalence between the learned model and the true system as in the theorems. So, instead, we look at the test error as a measure of the closeness of the learned model to the true system. Figure 2 on the left shows the test R² for a Gaussian i.i.d. input and output with SNR = 20 dB for RNNs, URNNs, and LSTMs. The red dashed line corresponds to the optimal R² achievable at the given noise level.
Note that even though the true RNN has n_true = 4 hidden states, the RNN model does not obtain the optimal test R² at n = 4. This is not due to training noise, since the RNN is able to capture the full dynamics when we over-parametrize the system to n ≈ 8 hidden states. The test error in the RNN at lower numbers of hidden states is likely due to the optimization being caught in a local minimum.
What is important for this work, though, is to compare the URNN test error with that of the RNN. We observe that the URNN requires approximately twice the number of hidden states to obtain the same test error as achieved by an RNN. To make this clear, the right plot shows the same performance data with the number of states adjusted for the URNN. Since our theory indicates that a URNN with 2n hidden states is as powerful as an RNN with n hidden states, we compare a URNN with 2n hidden units directly with an RNN with n hidden units. We call this the adjusted number of hidden units. We see that the URNN and RNN have similar test error when we appropriately scale the number of hidden units as predicted by the theory.
For completeness, the left plot in Figure 2 also shows the test error with an LSTM. It is important to note that the URNN has almost the same performance as an LSTM with a considerably smaller number of parameters.
Figure 3 shows similar results for the same task with SNR = 15 dB. For this task, the input is sparse Gaussian i.i.d., i.e., Gaussian with probability p = 0.02 and 0 with probability 1 − p. The left plot shows R² vs. the number of hidden units for RNNs and URNNs, and the right plot shows the same results once the number of hidden units for the URNN is adjusted.
We also compared the modeling ability of URNNs and RNNs using the Pixel-Permuted MNIST task. Each MNIST image is a 28 × 28 grayscale image with a label between 0 and 9. A fixed random permutation is applied to the pixels, one pixel is fed to the network at each time step as the input, and the output is the predicted label for the image [1, 13, 26].
We evaluated various models on the Pixel-Permuted MNIST task using validation-based early stopping. Without imposing a contractivity constraint during learning, the RNN is either unstable or requires a slow learning rate. Imposing a contractivity constraint improves the performance. Incidentally, using a URNN improves the performance further. Thus, contractivity can improve learning for RNNs when models have sufficiently large numbers of time steps.
6 Conclusion
Several works empirically show that using unitary recurrent neural networks improves the stability and performance of RNNs. In this work, we study how restrictive it is to use URNNs instead of RNNs. We show that URNNs are at least as powerful as contractive RNNs in modeling input-output mappings if enough hidden units are used. More specifically, for any contractive RNN we explicitly construct a URNN with twice the number of states of the RNN and an identical input-output mapping. We also provide converse results for the number of states and the activation function needed for exact matching. We emphasize that although it has been shown that URNNs outperform standard RNNs and LSTMs in many tasks that involve long-term dependencies, our main goal in this paper is to show that, from an approximation viewpoint, URNNs are as expressive as general contractive RNNs. At the cost of a two-fold increase in the number of parameters, we can thus retain the stability benefits that URNNs bring to the optimization of recurrent networks.
Acknowledgements
The work of M. Emami, M. Sahraee-Ardakan, A. K. Fletcher was supported in part by the National Science Foundation under Grants 1254204 and 1738286, and the Office of Naval Research under
Grant N00014-15-1-2677. S. Rangan was supported in part by the National Science Foundation under Grants 1116589, 1302336, and 1547332, NIST, the industrial affiliates of NYU WIRELESS, and the SRC.
A Proof of Theorem 3.1
The basic idea is to construct a URNN with 2n states such that the first n states match the states of the RNN and the last n states are always zero. To this end, consider any contractive RNN,
h_c^(k) = φ(W_c h_c^(k−1) + F_c x^(k) + b_c),   y^(k) = C_c h_c^(k),
where h_c^(k) ∈ R^n. Since W_c is contractive, we have ‖W_c‖ ≤ ρ for some ρ < 1. Also, for a ReLU activation, ‖φ(z)‖ ≤ ‖z‖ for all pre-activation inputs z. Hence,
‖h_c^(k)‖_2 = ‖φ(W_c h_c^(k−1) + F_c x^(k) + b_c)‖_2 ≤ ‖W_c h_c^(k−1) + F_c x^(k) + b_c‖_2 ≤ ρ‖h_c^(k−1)‖_2 + ‖F_c‖‖x^(k)‖_2 + ‖b_c‖_2.
Therefore, with bounded inputs ‖x^(k)‖_2 ≤ M, the state is bounded:
‖h_c^(k)‖_2 ≤ [‖F_c‖M + ‖b_c‖_2]/(1 − ρ) =: M_h.   (7)
We construct a URNN as,
h_u^(k) = φ(W_u h_u^(k−1) + F_u x^(k) + b_u),   y^(k) = C_u h_u^(k),
where the parameters are of the form,
h_u = [h_1; h_2] ∈ R^{2n},   W_u = [W_1 W_2; W_3 W_4],   F_u = [F_c; 0],   b_u = [b_c; b_2].   (8)
Let W_1 = W_c. Since ‖W_c‖ < 1, we have I − W_c^T W_c ≻ 0. Therefore, there exists W_3 such that W_3^T W_3 = I − W_c^T W_c. With this choice of W_3, the first n columns of W_u are orthonormal. Let [W_2; W_4] extend these to an orthonormal basis for R^{2n}. Then, the matrix W_u will be orthonormal.
Next, let b_2 = −M_h 1_{n×1}, where M_h is defined in (7). We show by induction that, for all k,
h_1^(k) = h_c^(k),   h_2^(k) = 0.   (9)
If both systems are initialized at zero, (9) is satisfied at k = −1. Now, suppose this holds up to time k − 1. Then,
h_1^(k) = φ(W_1 h_1^(k−1) + W_2 h_2^(k−1) + F_c x^(k) + b_c) = φ(W_1 h_1^(k−1) + F_c x^(k) + b_c) = h_c^(k),
where we have used the induction hypothesis that h_2^(k−1) = 0. For h_2^(k), note that
‖W_3 h_1^(k−1)‖_∞ ≤ ‖W_3 h_1^(k−1)‖_2 ≤ ‖h_1^(k−1)‖_2 ≤ M_h,   (10)
where the last step follows from (7). Therefore,
W_3 h_1^(k−1) + W_4 h_2^(k−1) + b_2 = W_3 h_1^(k−1) − M_h 1_{n×1} ≤ 0.   (11)
Hence, with ReLU activation, h_2^(k) = φ(W_3 h_1^(k−1) + W_4 h_2^(k−1) + b_2) = 0. By induction, (9) holds for all k. Then, if we define C_u = [C_c 0], the outputs of the URNN and RNN systems are identical:
y_u^(k) = C_u h_u^(k) = C_c h_1^(k) = y_c^(k).
This shows that the systems are equivalent. | 1. What is the originality of the paper's content, and how does it advance existing works?
2. What is the quality of the paper regarding its completeness, self-containment, and backing up claims with proofs or results?
3. How clear and well-organized is the paper in introducing the problem, and are all relevant terms properly introduced?
4. What is the significance of the paper's results for future research on RNN and their training methods?
5. How does the reviewer assess the expressiveness of the approaches presented in the paper? | Review | Review
Originality: To my knowledge the results in this work are clearly new and interesting. They build on and advance existing works. Quality: The paper appears to be a complete, and self-contained work that backs up claims with proofs or results. The paper states both what can be achieved but also what cannot. Clarity: The paper is well written and organised. It introduces the problem very well, and all relevant terms are well introduced. The supplementary material contains some of the proofs for theorems in the main paper. Significance: I believe the results of this work are important for future research of RNN and their training methods. While earlier work already looked into orthogonal networks (mostly for memory capacity, eg White, O, Lee, D, and Sompolinky, H. Short-term memory in orthogonal neural networks; also others Mikael Henaff et al Recurrent Orthogonal Networks and Long-Memory Tasks), expressiveness of the approaches has not been compared in this form, at least to my knowledge. |
NIPS | Title
Neural Production Systems
Abstract
Visual environments are structured, consisting of distinct objects or entities. These entities have properties—visible or latent—that determine the manner in which they interact with one another. To partition images into entities, deep-learning researchers have proposed structural inductive biases such as slot-based architectures. To model interactions among entities, equivariant graph neural nets (GNNs) are used, but these are not particularly well suited to the task for two reasons. First, GNNs do not predispose interactions to be sparse, as relationships among independent entities are likely to be. Second, GNNs do not factorize knowledge about interactions in an entity-conditional manner. As an alternative, we take inspiration from cognitive science and resurrect a classic approach, production systems, which consist of a set of rule templates that are applied by binding placeholder variables in the rules to specific entities. Rules are scored on their match to entities, and the best fitting rules are applied to update entity properties. In a series of experiments, we demonstrate that this architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information. This disentangling of knowledge achieves robust future-state prediction in rich visual environments, outperforming state-of-the-art methods using GNNs, and allows for the extrapolation from simple (few object) environments to more complex environments.
1 Introduction
Despite never having taken a physics course, every child beyond a young age appreciates that pushing a plate off the dining table will cause the plate to break. The laws of physics accurately characterize the dynamics of our natural world, and although explicit knowledge of these laws is not necessary to reason, we can reason explicitly about objects interacting through these laws. Humans can verbalize knowledge in propositional expressions such as “If a plate drops from table height, it will break,” and “If a video-game opponent approaches from behind and they are carrying a weapon, they are likely to attack you.” Expressing propositional knowledge is not a strength of current deep learning methods for several reasons. First, propositions are discrete and independent from one another. Second, propositions must be quantified in the manner of first-order logic; for example, the video-game proposition applies to any X for which X is an opponent and has a weapon. Incorporating the ability to express and reason about propositions should improve generalization in deep learning methods because this knowledge is modular— propositions can be formulated independently of each other— and can therefore be acquired incrementally. Propositions can also be composed with each other and applied consistently to all entities that match, yielding a powerful form of systematic generalization.
The classical AI literature from the 1980s can offer deep learning researchers a valuable perspective. In this era, reasoning, planning, and prediction were handled by architectures that performed propositional inference on symbolic knowledge representations. A simple example of such an architecture is
* Equal Contribution, ** Equal Advising 1 Mila, University of Montreal, 2 Google Deepmind, 3 Waverly, 4 Google Research, Brain Team. Corresponding authors: [email protected], [email protected]
the production system (Laird et al., 1986; Anderson, 1987), which expresses knowledge by condition-action rules. The rules operate on a working memory inspired by cognitive science: rule conditions are matched to entities in working memory, and such a match can trigger computational actions that update working memory or external actions that operate on the outside world.
Production systems were typically used to model high-level cognition, e.g., mathematical problem solving or procedure following; perception was not the focus of these models. It was assumed that the results of perception were placed into working memory in a symbolic form that could be operated on with the rules. In this article, we revisit production systems but from a deep learning perspective which naturally integrates perceptual processing and subsequent inference for visual reasoning problems. We describe an end-to-end deep learning model that constructs object-centric representations of entities in videos, and then operates on these entities with differentiable—and thus learnable—production rules. The essence of these rules, carried over from traditional symbolic systems, is that they operate on variables that are bound, or linked, to the entities in the world. In the deep learning implementation, each production rule is represented by a distinct MLP with query-key attention mechanisms to specify the rule-entity binding and to determine when the rule should be triggered for a given entity. We are not the first to propose a neural instantiation of a production system architecture. Touretzky & Hinton (1988) gave a proof of principle that neural net hardware could be hardwired to implement a production system for symbolic reasoning; our work fundamentally differs from theirs in that (1) we focus on perceptual inference problems and (2) we use the architecture as an inductive bias for learning.
1.1 Variables and entities
What makes a rule general-purpose is that it incorporates placeholder variables that can be bound to arbitrary values or—the term we prefer in this article—entities. This notion of binding is familiar in functional programming languages, where these variables are called arguments. Analogously, the use of variables in the production rules we describe enable a model to reason about any set of entities that satisfy the selection criteria of the rule.
Consider a simple function in C like int add(int a, int b). This function binds its two integer operands to variables a and b. The function does not apply if the operands are, say, character strings. The use of variables enables a programmer to reuse the same function to add any two integer values.
In order for rules to operate on entities, these entities must be represented explicitly. That is, the visual world needs to be parsed in a task-relevant manner, e.g., distinguishing the sprites in a video game or the vehicles and pedestrians approaching an autonomous vehicle. Only in the past few years have deep learning vision researchers developed methods for object-centric representation (Le Roux et al., 2011; Eslami et al., 2016; Greff et al., 2016; Raposo et al., 2017; Van Steenkiste et al., 2018; Kosiorek et al., 2018; Engelcke et al., 2019; Burgess et al., 2019; Greff et al., 2019; Locatello et al., 2020a; Ahmed et al., 2020; Goyal et al., 2019; Zablotskaia et al., 2020; Rahaman et al., 2020; Du et al., 2020; Ding et al., 2020; Goyal et al., 2020; Ke et al., 2021). These methods differ in details but share the notion of a fixed number of slots (see Figure 1 for example), also known as object files, each encapsulating information about a single object. Importantly, the slots are interchangeable, meaning that it doesn’t matter if a scene with an apple and an orange encodes the apple in slot 1 and orange in slot 2 or vice-versa.
A model of visual reasoning must not only be able to represent entities but must also express knowledge about entity dynamics and interactions. To ensure systematic predictions, a model must be capable of applying knowledge to an entity regardless of the slot it is in and must be capable of applying the same knowledge to multiple instances of an entity. Several distinct approaches exist in the literature. The predominant approach uses graph neural networks to model slot-to-slot interactions (Scarselli et al., 2008; Bronstein et al., 2017; Watters et al., 2017; Van Steenkiste et al., 2018; Kipf et al., 2018; Battaglia et al., 2018; Tacchetti et al., 2018). To ensure systematicity, the GNN must share parameters among the edges. In a recent article, Goyal et al. (2020) developed a more general framework in which parameters are shared but slots can dynamically select which parameters to use in a state-dependent manner. Each set of parameters is referred to as a schema, and slots use a query-key attention mechanism to select which schema to apply at each time step. Multiple slots can select the same schema. In both GNNs and SCOFF, modeling dynamics involves each slot interacting with each other slot. In the work we describe in this article, we replace the direct slot-to-slot interactions with rules, which mediate sparse interactions among slots (See arrows in Figure 1).
Thus our main contribution is that we introduce NPS, which offers a way to model dynamic and sparse interactions among the variables in a graph and also allows dynamic sharing of multiple sets of parameters among these interactions. Most architectures used for modelling interactions in the current literature use a statically instantiated graph which models all possible interactions for a given variable at each step, i.e., dense interactions. Also, such dense architectures share a single set of parameters across all interactions, which may be quite restrictive in terms of representational capacity. A visual comparison between these two kinds of architectures is shown in Figure 1. Through our experiments we show the advantage of modeling interactions in the proposed manner using NPS in visually rich physical environments. We also show that our method results in an intuitive factorization of rules and entities.
2 Production System
Formally, our notion of a production system consists of a set of entities and a set of rules, along with a mechanism for selecting rules to apply on subsets of the entities. Implicit in a rule is a specification of the properties of relevant entities, e.g., a rule might apply to one type of sprite in a video game but not another. The control flow of a production system dynamically selects rules as well as bindings between rules and entities, allowing different rules to be chosen and different entities to be manipulated at each point in time.
The neural production system we describe shares essential properties with traditional production systems, particularly with regard to the compositionality and generality of the knowledge they embody. Lovett & Anderson (2005) describe four desirable properties commonly attributed to symbolic systems that apply to our work as well.
Production rules are modular. Each production rule represents a unit of knowledge and is atomic, such that any production rule can be intervened on (added, modified, or deleted) independently of the other production rules in the system.
Production rules are abstract. Production rules allow for generalization because their conditions may be represented as high-level abstract knowledge that matches a wide range of patterns. These conditions specify the attributes of relationship(s) between entities without specifying the entities themselves. The ability to represent abstract knowledge allows for the transfer of learning across different environments as long as they fit within the conditions of the given production rule.
Production rules are sparse. In order that production rules have broad applicability, they involve only a subset of entities. This assumption imposes a strong prior that dependencies among entities are sparse. In the context of visual reasoning, we conjecture that this prior is superior to what has often been assumed in the past, particularly in the disentanglement literature, namely independence among entities (Higgins et al., 2016; Chen et al., 2018).
Production rules represent causal knowledge and are thus asymmetric. Each rule can be decomposed into a {condition, action} pair, where the action reflects a state change that is a causal consequence of the conditions being met.
These four properties are sufficient conditions for knowledge to be expressed in production rule form. These properties specify how knowledge is represented, but not what knowledge is represented. The
latter is inferred by learning mechanisms under the inductive bias provided by the form of production rules.
3 Neural Production System: Slots and Sparse Rules
The Neural Production System (NPS), illustrated in Figure 2, provides an architectural backbone that supports the detection and inference of entity (object) representations in an input sequence, and the underlying rules which govern the interactions between these entities in time and space. The input sequence indexed by time step t, {x_1, . . . , x_t, . . . , x_T}, for instance the frames in a video, is processed by a neural encoder (Burgess et al., 2019; Greff et al., 2019; Goyal et al., 2019, 2020) applied to each x_t, to obtain a set of M entity representations {V_1^t, . . . , V_M^t}, one for each of the M slots. These representations describe an entity and are updated based on both the previous state, V^{t−1}, and the current input, x_t.
NPS consists of N separately encoded rules, {R_1, R_2, . . . , R_N}. Each rule consists of two components, R_i = (R⃗_i, MLP_i), where R⃗_i is a learned rule embedding vector, which can be thought of as a template defining the condition for when a rule applies; and MLP_i, which determines the action taken by a rule. Both R⃗_i and the parameters of MLP_i are learned along with the other parameters of the model using back-propagation on an objective optimized end-to-end.
In the general form of the model, each slot selects a rule that will be applied to it to change its state. This can potentially be performed several times, with possibly different rules applied at each step. Rule selection is done using an attention mechanism described in detail below. Each rule specifies conditions and actions on a pair of slots. Therefore, while modifying the state of a slot using a rule, it can take the state of another slot into account. The slot which is being modified is called the primary slot and the other is called the contextual slot.
3.1 Computational Steps in NPS
In this section, we give a detailed description of the rule selection and application procedure for the slots. First, we formalize the definitions of a few terms that we will use to explain our method. We use the term primary slot to refer to the slot V_p whose state gets modified by a rule R_r. We use the term contextual slot to refer to the slot V_c that the rule R_r takes into account while modifying the state of the primary slot V_p.
Notation. We consider a set of N rules {R_1, R_2, . . . , R_N} and a set of T input frames {x_1, x_2, . . . , x_T}. Each frame x_t is encoded into a set of M slots {V_1^t, V_2^t, . . . , V_M^t}. In the following discussion, we omit the index over t for simplicity.
Step 1 is external to NPS and involves parsing an input image, x_t, into slot-based entities conditioned on the previous state of the slot-based entities. Any of the methods proposed in the literature to obtain a slot-wise representation of entities can be used (Burgess et al., 2019; Greff et al., 2019; Goyal et al., 2019, 2020). The next three steps constitute the rule selection and application procedure.
Step 2. For each primary slot V_p, we attend to a rule R_r to be applied. Here, the queries come from the primary slot: q_p = V_p W^q, and the keys come from the rules: k_i = R⃗_i W^k ∀ i ∈ {1, . . . , N}. The rule is selected using a straight-through Gumbel softmax (Jang et al., 2016) to achieve a learnable hard decision: r = argmax_i(q_p k_i + γ), where γ ∼ Gumbel(0, 1). This competition is a noisy version of rule matching and prioritization in traditional production systems.
Step 3. For a given primary slot V_p and selected rule R_r, a contextual slot V_c is selected using another attention mechanism. In this case the query comes from the primary slot: q_p = V_p W^q, and the keys from all the slots: k_j = V_j W^q ∀ j ∈ {1, . . . , M}. The selection takes place using a straight-through Gumbel softmax similar to step 2: c = argmax_j(q_p k_j + γ), where γ ∼ Gumbel(0, 1). Note that each rule application is sparse since it takes into account only one contextual slot for modifying
a primary slot, while other methods like GNNs take into account all slots for modifying a primary slot.
Step 4. Rule Application: the selected rule R_r is applied to the primary slot V_p based on the rule and the current contents of the primary and contextual slots. The rule-specific MLP_r takes as input the concatenated representation of the states of the primary and contextual slots, V_p and V_c, and produces an output, which is then used to change the state of the primary slot V_p by residual addition.
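A minimal PyTorch-style sketch of steps 2–4 in the sequential regime (one rule application to a chosen primary slot) is given below. The module structure, key dimension, and all variable names are our own assumptions rather than the authors' released code, and for brevity the gradient path through the discrete rule index is not handled beyond the straight-through estimator on the attention weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RuleApplication(nn.Module):
    """Sketch of NPS steps 2-4 for a single primary slot p."""

    def __init__(self, n_rules, slot_dim, rule_dim=32, key_dim=32):
        super().__init__()
        self.rule_emb = nn.Parameter(torch.randn(n_rules, rule_dim))   # learned rule embeddings
        self.q_rule = nn.Linear(slot_dim, key_dim)
        self.k_rule = nn.Linear(rule_dim, key_dim)
        self.q_slot = nn.Linear(slot_dim, key_dim)
        self.k_slot = nn.Linear(slot_dim, key_dim)
        self.rule_mlps = nn.ModuleList(
            [nn.Sequential(nn.Linear(2 * slot_dim, slot_dim), nn.ReLU(),
                           nn.Linear(slot_dim, slot_dim)) for _ in range(n_rules)])

    def forward(self, slots, p):
        """slots: (M, slot_dim); p: index of the primary slot. Returns the updated slots."""
        v_p = slots[p]
        # Step 2: select a rule with a straight-through Gumbel softmax over attention scores.
        rule_scores = self.q_rule(v_p) @ self.k_rule(self.rule_emb).t()     # (n_rules,)
        rule_onehot = F.gumbel_softmax(rule_scores, hard=True, dim=-1)
        r = int(rule_onehot.argmax())
        # Step 3: select a contextual slot with another straight-through Gumbel softmax.
        ctx_scores = self.q_slot(v_p) @ self.k_slot(slots).t()              # (M,)
        ctx_onehot = F.gumbel_softmax(ctx_scores, hard=True, dim=-1)
        v_c = ctx_onehot @ slots                                            # selected contextual slot
        # Step 4: the rule-specific MLP updates the primary slot by residual addition.
        update = self.rule_mlps[r](torch.cat([v_p, v_c], dim=-1))
        new_slots = slots.clone()
        new_slots[p] = v_p + update
        return new_slots

# slots = torch.randn(4, 64); updated = RuleApplication(n_rules=5, slot_dim=64)(slots, p=0)
```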
3.2 Rule Application: Sequential vs Parallel Rule Application
In the previous section, we described how each rule application only considers one other contextual slot for the given primary slot, i.e., contextual sparsity. We can also consider application sparsity, wherein we use the rules to update the states of only a subset of the slots. In this scenario, only the selected slots would be primary slots. This setting will be helpful when an entity in an environment is stationary, or is following its own default dynamics unaffected by other entities. Therefore, it does not need to consider other entities to update its state. We explore two scenarios for enabling application sparsity.
Parallel Rule Application. Each of the M slots selects a rule to potentially change its state. To enable sparse changes, we provide an extra Null Rule in addition to the available N rules. If a slot picks the null rule in step 2 of the above procedure, we do not update its state.
Sequential Rule Application. In this setting, only one slot gets updated in each rule application step. Therefore, only one slot is selected as the primary slot. This can be facilitated by modifying step 2 above to select one {primary slot, rule} pair among the NM possible {slot, rule} pairs. The queries come from each slot: q_j = V_j W^q ∀ j ∈ {1, . . . , M}, and the keys come from the rules: k_i = R_i W^k ∀ i ∈ {1, . . . , N}. The straight-through Gumbel softmax selects one (primary slot, rule) pair: p, r = argmax_{j,i}(q_j k_i + γ), where γ ∼ Gumbel(0, 1). In the sequential regime, we allow the rule application procedure (steps 2, 3, 4 above) to be performed multiple times iteratively, in K rule application stages for each time step t.
A pictorial demonstration of both rule application regimes can be found in Figure 3. We provide detailed algorithms for the sequential and parallel regimes in the Appendix.
4 Experiments
We demonstrate the effectiveness of NPS on multiple tasks and compare to a comprehensive set of baselines. To show that NPS can learn intuitive rules from the data generating distribution, we design a couple of simple toy experiments with well-defined discrete operations. Results show that NPS can accurately recover each operation defined by the data and learn to represent each operation using a separate rule. We then move to a much more complicated and visually rich setting with abstract physical rules and show that factorization of knowledge into rules as offered by NPS does scale up to such settings. We study and compare the parallel and sequential rule application procedures and try to understand the settings which favour each. We then
evaluate the benefits of reusable, dynamic and sparse interactions as offered by NPS in a wide variety of physical environments by comparing it against various baselines. We conduct ablation studies to assess the contribution of different components of NPS. Here we briefly outline the tasks considered and direct the reader to the Appendix for full details on each task and details on hyperparameter settings.
Discussion of baselines. NPS is an interaction network; therefore, we use other widely used interaction networks such as multihead attention and graph neural networks (Goyal et al. (2019), Goyal et al. (2020), Veerapaneni et al. (2019), Kipf et al. (2019)) for comparison. Goyal et al. (2019) and Goyal
et al. (2020) use an attention-based interaction network to capture interactions between the slots, while Veerapaneni et al. (2019) and Kipf et al. (2019) use a GNN-based interaction network. We also consider the recently introduced convolutional interaction network (CIN) (Qi et al., 2021), which captures dense pairwise interactions like a GNN but uses a convolutional network instead of MLPs to better utilize spatial information. The proposed method, similar to other interaction networks, is agnostic to the encoder backbone used to encode the input image into slots; therefore, we compare NPS to other interaction networks across a wide variety of encoder backbones.
4.1 Learning intuitive rules with NPS: Toy Simulations
We designed a couple of simple tasks with well-defined discrete rules to show that NPS can learn intuitive and interpretable rules. We also show the efficiency and effectiveness of the selection procedure (step 2 and step 3 in Section 3.1) by comparing against a baseline with many more parameters. Both tasks require a single modification of only one of the available entities; therefore, the use of sequential or parallel rule application would not make a difference here, since parallel rule application in which all-but-one slots select the null rule is similar to sequential rule application with one rule application step. To simplify the presentation, we describe the setup for both tasks using the sequential rule application procedure.
MNIST Transformation. We test whether NPS can learn simple rules for performing transformations on MNIST digits. We generate data with four transformations: {Translate Up, Translate Down, Rotate Right, Rotate Left}. We feed the input image (X) and the transformation (o) to be performed, as a one-hot vector, to the model. The detailed setup is described in the Appendix. For this task, we evaluate whether NPS can learn to use a unique rule for each transformation.
We use 4 rules corresponding to the 4 transformations with the hope that the correct transformations are recovered. Indeed, we observe that NPS successfully learns to represent each transformation using a separate rule as shown in Table 1. Our model achieves an MSE of 0.02. A visualization of the outputs from our model and further details can be found in Appendix C.
Coordinate Arithmetic Task. The model is tasked with performing arithmetic operations on 2D coordinates. Given (X0, Y0) and (X1, Y1), we can apply the following operations: {X Addition: (Xr, Yr) = (X0 + X1, Y0), X Subtraction: (Xr, Yr) = (X0 − X1, Y0), Y Addition: (Xr, Yr) = (X0, Y0 + Y1), Y Subtraction: (Xr, Yr) = (X0, Y0 − Y1)}, where (Xr, Yr) is the resultant coordinate.
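For concreteness, a minimal sketch of these four operations and of how one training example can be sampled; the variable names and the assumption that the non-primary coordinate is left unchanged in the target are ours:

```python
import random

# The four coordinate-arithmetic operations; (x0, y0) is the primary coordinate
# and (x1, y1) the contextual coordinate.
OPS = {
    "x_add": lambda x0, y0, x1, y1: (x0 + x1, y0),
    "x_sub": lambda x0, y0, x1, y1: (x0 - x1, y0),
    "y_add": lambda x0, y0, x1, y1: (x0, y0 + y1),
    "y_sub": lambda x0, y0, x1, y1: (x0, y0 - y1),
}

def sample_example():
    # Two random input coordinates; the target applies a random operation to a
    # randomly chosen primary coordinate, with the other coordinate as context.
    coords = [(random.random(), random.random()) for _ in range(2)]
    p = random.randrange(2)        # index of the primary coordinate
    c = 1 - p                      # index of the contextual coordinate
    op = random.choice(list(OPS))  # the operation is never shown to the model
    target = list(coords)
    target[p] = OPS[op](*coords[p], *coords[c])
    return coords, target
```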
In this task, the model is given 2 input coordinates X = [(xi, yi), (xj, yj)] and the expected output coordinates Y = [(x̂i, ŷi), (x̂j, ŷj)]. The model is supposed to infer the correct rule to produce the correct output coordinates. During data collection, the true output is obtained by performing a random transformation on a randomly selected coordinate in X (primary coordinate), taking another randomly selected coordinate from X (contextual coordinate) into account. The detailed setup is described in Appendix D. We use an NPS model with 4 rules for this task. We use the selection procedure in step 2 and step 3 of algorithm 1 to select the primary coordinate, contextual coordinate, and the rule. For the baseline we replace the selection procedure in NPS (i.e. step 2 and step 3 in
algorithm 1) with a routing MLP similar to Fedus et al. (2021).
This routing MLP has 3 heads (one each for selecting the primary coordinate, contextual coordinate, and the rule). The baseline has 4 times more parameters than NPS. The final output is produced by
the rule MLP which does not have access to the true output, hence the model cannot simply copy the true output to produce the actual output. Unlike the MNIST transformation task, we do not provide the operation to be performed as a one-hot vector input to the model, therefore it needs to infer the available operations from the data demonstrations.
We show the segregation of rules for NPS and the baseline in Figure 4. We can see that NPS learns to use a unique rule for each operation while the baseline struggles to disentangle the underlying operations properly. NPS also outperforms the baseline in terms of MSE achieving an MSE of 0.01±0.001 while the baseline achieves an MSE of 0.04±0.008. To further confirm that NPS learns all the available operations correctly from raw data demonstrations, we use an NPS model with 5 rules. We expect that in this case NPS should utilize only 4 rules since the data describes only 4 unique operations and indeed we observe that NPS ends up mostly utilizing 4 of the available 5 rules as shown in Table 2.
4.2 Parallel vs Sequential Rule Application
We compare the parallel and sequential rule application procedures, to understand the settings that favour one or the other, over two tasks: (1) Bouncing Balls, (2) Shapes Stack. We use the term PNPS to refer to parallel rule application and SNPS to refer to sequential rule application.
Shapes Stack. We use the shapes stack dataset introduced by Groth et al. (2018). This dataset consists of objects stacked on top of each other as shown in Figure 5. These objects fall under the influence of gravity. For our experiments, we follow the same setup as Qi et al. (2021). In this task, given the first frame, the model is tasked with predicting the object bounding boxes for the next t timesteps. The first frame is encoded using a convolutional network followed by RoIPooling (Girshick (2015)) to extract object-centric visual features. The object-centric features are then passed to the dynamics model to predict object bounding boxes of the next t steps. Qi et al. (2021) propose a Region Proposal Interaction Network (RPIN) to solve this task. The dynamics model in RPIN consists of an Interaction Network proposed in Battaglia et al. (2016).
To better utilize spatial information, Qi et al. (2021) propose an extension of the interaction operators in interaction net to operate on 3D tensors. This is achieved by replacing the MLP operations in the original interaction networks with convolutions. They call this new network Convolutional Interaction Network (CIN). For the proposed model, we replace this CIN in RPIN by NPS. To ensure a fair comparison to CIN, we use CNNs to represent rules in NPS instead of MLPs. CIN captures all pairwise interactions between objects using a convolutional network. In NPS, we capture sparse interactions (contextual sparsity) as compared to dense pairwise interactions captured by CIN. Also, in NPS we update only a subset of slots per step instead of all slots (application sparsity).
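As an illustration, a single rule represented by a small CNN might look like the sketch below; the channel counts, kernel sizes, and residual update are our own assumptions rather than the exact architecture used.

```python
import torch
import torch.nn as nn

class ConvRule(nn.Module):
    """One rule as a small CNN: it reads the primary and contextual slot feature
    maps (concatenated along channels) and produces a residual update for the
    primary slot. All sizes here are illustrative assumptions."""
    def __init__(self, slot_channels: int = 64, hidden_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * slot_channels, hidden_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden_channels, slot_channels, kernel_size=3, padding=1),
        )

    def forward(self, primary: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # primary, context: (B, C, H, W) slot feature maps.
        update = self.net(torch.cat([primary, context], dim=1))
        return primary + update  # residual update of the primary slot
```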
We consider two evaluation settings. (1) Test setting: The number of rollout timesteps is the same as that seen during training (i.e. t = 15); (2) Transfer setting: The number of rollout timesteps is higher than that seen during training (i.e. t = 30).
We present our results on the shapes stack dataset in Table 3. We can see that both PNPS and SNPS outperform the baseline RPIN in the transfer setting, while only PNPS outperforms the baseline in the test setting and SNPS fails to do so. We can see that PNPS outperforms SNPS. We attribute this to the reduced application sparsity with PNPS, i.e., it is more likely that the state of a slot gets updated in PNPS as compared to SNPS. For instance, consider an NPS model with N uniformly chosen rules and M slots. The probability that the state of a slot gets updated in PNPS is P_PNPS = (N − 1)/N (since 1 rule is the null rule), while the same probability for SNPS is P_SNPS = 1/M (since only 1 slot gets updated per rule application step).
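As a concrete illustration of this gap, take M = 3 slots and N = 3 rules (2 rules plus the null rule, chosen uniformly): P_PNPS = 2/3 per slot and per step, whereas P_SNPS = 1/3, and the gap only widens as rules are added or as more slots compete in the sequential regime.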
For this task, we run both PNPS and SNPS for N = {1, 2, 4, 6} rules and M = 3. For any given N, we observe that P_PNPS > P_SNPS. Even when we have multiple rule application steps in SNPS, it might end up selecting the same slot to be updated in more than one of these steps. We report the best performance obtained for PNPS and SNPS across all N, which is N = 2 + 1 null rule for PNPS and N = 4 for SNPS, in Table 3. Shapes stack is a dataset that favours a model with less application sparsity since all the objects are tightly bound to each other (objects are placed on top of each other); as a result, all objects spend the majority of their time interacting with the objects directly above or below them. We attribute the higher performance of PNPS compared to RPIN to the higher contextual sparsity of PNPS. Each example in the shapes stack task consists of 3 objects. Even though the blocks are tightly bound to each other, each block is only affected by the objects it is in direct contact with. For example, the top-most object is only affected by the object directly below it. The contextual sparsity offered by PNPS is a strong inductive bias to model such sparse interactions while RPIN models all pairwise interactions between the objects. Figure 5 shows an intuitive illustration of the PNPS model for the shapes stack dataset. In the figure, Rule 2 actually refers to the Null Rule, while Rule 1 refers to all the other non-null rules. The bottom-most block picks the Null Rule most times, as the bottom-most block generally does not move.
Bouncing Balls. We consider a bouncing-balls environment in which multiple balls move with billiard-ball dynamics. We validate our model on a colored version of this dataset. This is a next-step prediction task in which the model is tasked with predicting the final binary mask of each ball. We compare the following methods: (a) SCOFF (Goyal et al., 2020): factorization of knowledge in terms of slots (object properties) and schemata, the latter capturing object dynamics; (b) SCOFF++: we extend SCOFF by using the idea of iterative competition as proposed in slot attention (SA) (Locatello et al., 2020a); (c) SCOFF + PNPS/SNPS: we replace pairwise slot-to-slot interaction in SCOFF++ with parallel or sequential rule application. For comparing different methods, we use the Adjusted Rand Index or ARI (Rand, 1971). To investigate how the factorization in the form of rules allows for extrapolating knowledge from fewer to more objects, we increase the number of objects from 4 during training to 6-8 during testing.
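For reference, a minimal sketch of computing ARI from per-pixel object assignments with scikit-learn; whether background pixels are excluded is an evaluation-protocol detail that we do not fix here:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def frame_ari(true_ids: np.ndarray, pred_ids: np.ndarray) -> float:
    """Adjusted Rand Index between ground-truth and predicted per-pixel object
    assignments for a single frame; both arrays hold an integer object id per pixel."""
    return adjusted_rand_score(true_ids.ravel(), pred_ids.ravel())
```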
We present the results of our experiments in Table 4. Contrary to the shapes stack task, we
see that SNPS outperforms PNPS for the bouncing balls task. The balls are not tightly bound together into a single tower as in the shapes stack. Most of the time, a single ball follows its own dynamics, only occasionally interacting with another ball. Rules in NPS capture interaction dynamics between entities, hence they would only be required to change the state of an entity when it interacts with another entity. In the case of bouncing balls, this interaction takes place through a collision between multiple balls. Since for a single ball, such collisions are rare, SNPS, which has higher application sparsity (less probability of modifying the state of an entity), performs better as compared to PNPS (lower application sparsity). Also note that, SNPS has the ability to compose multiple rules together by virtue of having multiple rule application stages. A visualization of the rule and entity selections by the proposed algorithm can be found in Appendix Figure 9.
Given the analysis in this section, we can conclude that PNPS is expected to work better when interactions among entities are more frequent while SNPS is expected to work better when interactions are rare and most of the time, each entity follows its own dynamics. Note that, for both SNPS and PNPS, the rule application considers only 1 other entity as context. Therefore, both approaches have equal contextual sparsity while the baselines that we consider (SCOFF and RPIN) capture dense pairwise interactions. We discuss the benefits of contextual sparsity in more detail in the next section. More details regarding our setup for the above experiments can be found in Appendix.
4.3 Benefits of Sparse Interactions Offered by NPS
In NPS, one can view the computational graph as a dynamically constructed GNN resulting from applying dynamically selected rules, where the states of the slots are represented on the different nodes of the graph, and different rules dynamically instantiate a hyper-edge between a set of slots (the primary slot and the contextual slot). It is important to emphasize that the topology of the graph induced in NPS is dynamic and sparse (only a few nodes affected), while in most GNNs the topology is fixed and dense (all nodes affected). In this section, through a thorough set of experiments, we show that learning sparse and dynamic interactions using NPS indeed works better for the problems we consider than learning dense interactions using GNNs. We consider two types of tasks: (1) learning action-conditioned world models and (2) physical reasoning. We use SNPS for all these experiments since in the environments that we consider here, interactions among entities are rare.
Learning Action-Conditioned World Models. For learning action-conditioned world models, we follow the same experimental setup as Kipf et al. (2019). Therefore, all the tasks in this section are next-K step (K = {1, 5, 10}) prediction tasks, given the intermediate actions, and with the predictions being performed in the latent space. We use the Hits at Rank 1 (H@1) metrics described by Kipf et al. (2019) for evaluation. H@1 is 1 for a particular example if the predicted state representation is nearest to the encoded true observation and 0 otherwise. We report the average of this score over the test set (higher is better).
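As a sketch of how this metric can be computed, assuming Euclidean distance in latent space and using the encoded true observations of the evaluation batch as the candidate set:

```python
import torch

def hits_at_1(pred_states: torch.Tensor, true_states: torch.Tensor) -> float:
    """pred_states: (B, D) latent states predicted by the transition model.
    true_states:  (B, D) encoder outputs for the corresponding true observations.
    An example scores 1 if its own encoded true observation is the nearest
    candidate (Euclidean distance) to the predicted state, 0 otherwise."""
    dists = torch.cdist(pred_states, true_states, p=2)  # (B, B) pairwise distances
    nearest = dists.argmin(dim=1)                        # closest candidate per example
    hits = nearest == torch.arange(pred_states.shape[0], device=pred_states.device)
    return hits.float().mean().item()
```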
Physics Environment. The physics environment (Ke et al., 2021) simulates a simple physical world. It consists of blocks of unique but unknown weights. The dynamics for the interaction between blocks is that the movement of heavier blocks pushes lighter blocks on their path. This rule creates an acyclic causal graph between the blocks. For an accurate world model, the learner needs to infer the correct weights through demonstrations. Interactions in this environment are sparse and only involve two blocks at a time, therefore we expect NPS to outperform dense architectures like GNNs. This environment is demonstrated in Appendix Fig 11.
We follow the same setup as Kipf et al. (2019). We use their C-SWM model as baseline. For the proposed model, we only replace the GNN from C-SWM by NPS. GNNs generally share parameters across edges, but in NPS each rule has separate parameters. For a fair comparison to GNN, we use an NPS model with 1 rule. Note that this setting is still different from GNNs as in GNNs at each step every slot is updated by instantiating edges between all pairs of slots, while in NPS an edge is dynamically instantiated between a single pair of slots and only the state of the selected slot (i.e., primary slot) gets updated.
The results of our experiments are presented in Figure 6(a). We can see that NPS outperforms GNNs for all rollouts. Multi-step settings are more difficult to model as errors may get compounded over time steps. The sparsity of NPS (only a single slot affected per step) reduces compounding of errors and enhances symmetry-breaking in the assignment of transformations to rules, while in the
case of GNNs, since all entities are affected per step, there is a higher possibility of errors getting compounded. We can see that even with a single rule, we significantly outperform GNNs thus proving the effectiveness of dynamically instantiating edges between entities.
Atari Games. We also test the proposed model in the more complicated setting of Atari. Atari games also have sparse interactions between entities. For instance, in Pong, any interaction involves only 2 entities: (1) paddle and ball or (2) ball and the wall. Therefore, we expect sparse interactions captured by NPS to outperform GNNs here as well.
We follow the same setup as for the physics environment described in the previous section. We present the results for the Atari experiments in Figure 6(b), showing the average H@1 score across 5 games: Pong, Space Invaders, Freeway, Breakout, and QBert. As expected, we can see that the proposed model achieves a higher score than the GNN-based C-SWM. The results for the Atari experiments reinforce the claim that NPS is especially good at learning sparse interactions.
Learning Rules for Physical Reasoning. To show the effectiveness of the proposed approach for physical reasoning tasks, we evaluate NPS on another dataset: Sprites-MOT, introduced by He et al. (2018). The dataset contains a set of moving objects of various shapes. This dataset aims to test whether a model can handle occlusions correctly. Each frame has consistent bounding boxes which may cause the objects to appear or disappear from the scene. A model which performs well should be able to track the motion of all objects irrespective of whether they are occluded or not. We follow the same setup as Weis et al. (2020). We use the OP3 model (Veerapaneni et al., 2019) as our baseline. To test the proposed model, we replace the GNN-based transition model in OP3 with the proposed NPS.
We use the same evaluation protocol as followed by Weis et al. (2020) which is based on the MOT (Multi-object tracking) challenge (Milan et al., 2016). The results on the MOTA and MOTP metrics for this task are presented in Table 5. The results on the other metrics are presented in appendix Table 10. We ask the reader to refer to appendix F.1 for more details about these metrics. We can see that for almost all metrics, NPS outperforms the OP3 baseline. Although this dataset does not contain physical interactions between the objects, sparse rule application should still be useful in dealing with occlusions. At any time step, only a single object is affected by occlusions i.e., it may get
occluded due to another object or due to a prespecified bounding box, while the other objects follow their default dynamics. Therefore, a rule should be applied to only the object (or entity) affected (i.e., not visible) due to occlusion and may take into account any other object or entity that is responsible for the occlusion.
5 Discussion and Conclusion
For AI agents such as robots trying to make sense of their environment, the only observables are low-level variables like pixels in images. To generalize well, an agent must induce high-level entities as well as discover and disentangle the rules that govern how these entities actually interact with each other. Here we have focused on perceptual inference problems and proposed NPS, a neural instantiation of production systems by introducing an important inductive bias in the architecture following the proposals of Bengio (2017); Goyal & Bengio (2020); Ke et al. (2021).
Limitations & Looking Forward. Our experiments highlight the advantages brought by the factorization of knowledge into a small set of entities and sparse sequentially applied rules. Immediate future work would investigate how to take advantage of these inductive biases for more complex physical environments (Ahmed et al., 2020) and novel planning methods, which might be more sample efficient than standard ones (Schrittwieser et al., 2020).
We also find that Sequential and Parallel NPS have different properties suited towards different domains. Future work should explore how to effectively combine these two approaches. We discuss this in more detail in Appendix section E.3.
6 Acknowledgements
The authors would like to thank Matthew Botvinick for useful discussions. The authors would also like to thank Alex Lamb, Stefan Bauer, Nicolas Chapados, Danilo Rezende and Kelsey Allen for brainstorming sessions. We are also thankful to Dianbo Liu, Damjan Kalajdzievski and Osama Ahmed for proofreading. We would like to thank Samsung Electronics Co. Ltd. and CIFAR for funding this research. We would also like to thank Google for providing Google cloud credits used in this work. | 1. What is the focus and contribution of the paper regarding neural networks and visual reasoning?
2. What are the strengths of the proposed approach, particularly in its applicability to various tasks?
3. What are the weaknesses of the paper, especially in comparison to other methods such as GNNs?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What is the significance of the paper's potential impact on differentiable reasoning?
6. What are some minor comments and suggestions for improving the paper?
7. How would the authors contrast NPS with a transformer trained on the same tasks, but with a rule "memory store"?
8. What related works are of interest to the authors, and how do they compare to the proposed approach? | Summary Of The Paper
Review | Summary Of The Paper
The authors present a new neural network system that can reason over entities. The solution should be applicable to any tasks that require visual reasoning.
Specifically, the NPS algorithm is:
for every step in a sequence:
update the slots (entities)
compute attention between slot queries and rule keys to select a rule
compute attention between primary slot queries and all slot keys to get contextual slot
apply the rule MLP to the primary and contextual slot values
add the result to the primary slot
Slots can either be updated in parallel (i.e., multiple slots can be updated in a single step) or sequentially (i.e., only one slot is updated per rule-application step).
They compare sequential vs parallel rule application on 4 tasks:
given a transformation and an MNIST digit, predict the digit with the transformation applied
given two x,y coordinates, predict the result of applying an arithmetic operation (e.g., subtracting the y coordinates from one another)
for a stack of falling shapes, predict the bounding box of each object in future frames
predict the next frame of billiard balls bouncing around
They compare NPS to GNNs and related methods on four tasks. The goal of each task is to predict latent states in future frames:
physics env: blocks of unknown weight interacting with one another
Atari games (pong, space invaders, freeway, breakout, qBert)
sprites (moving objects of different shapes)
Review
Originality: NPS is a novel system for visual reasoning. The computational components borrow heavily from attention mechanisms but separately represent rules to apply.
Quality: The authors test a large number of total tasks, targeting different aspects of NPS. E.g., using the first set of experimental results they are able to provide the recommendation that sequential application is better for sparse interactions.
Clarity: I would have appreciated a more straightforward introduction that was heavier on visuals. E.g., the paragraph beginning at line 64 discusses going from raw observations to slots, which is not what NPS does. It takes up a lot of real estate in the introduction and gave me the impression that the paper focused on going from raw data to slots. It's accompanied by Figure 1, which provides a visualization of rules before they're introduced, which I found confusing. I think it would help with clarity to add more visualizations of the NPS system into the main text (or to at least pull the algorithm into the main text). I also raise some minor inconsistencies that caused confusion under minor comments.
Significance: The potential impact for differentiable reasoning is enormous. The aim of this paper is to take a step towards this goal by showing a reasoning system on a variety of restricted settings that are straightforward to analyze. On many tasks, NPS is compared to only one baseline and provides a small performance improvement. E.g., on shapes stack, bouncing balls (test), and sprites, NPS is within 1-2 points of the baseline (and well within the error bars). On physics env and Atari, NPS has a more sizable gain, but is still within the error bars.
Minor comments:
Algorithm 1 should specify that j indexes M (num slots)
The steps in algorithm 1 should be consistent with main text (it omits the primary slot selection)
L208: an -> a?
L402 figure cuts off the text
"Entity abstraction in visual model-based reinforcement learning." is cited twice?
Questions to authors:
how would you contrast NPS to a transformer trained on the same tasks, but with a rule "memory store"?
Related work of interest:
Visual Grounding of Learned Physical Models. Li et. al, 2020.
Learning visual predictive models of physics for playing billiards. Fragkiadaki et. al., 2017. |
NIPS | Title
Neural Production Systems
Abstract
Visual environments are structured, consisting of distinct objects or entities. These entities have properties—visible or latent—that determine the manner in which they interact with one another. To partition images into entities, deep-learning researchers have proposed structural inductive biases such as slot-based architectures. To model interactions among entities, equivariant graph neural nets (GNNs) are used, but these are not particularly well suited to the task for two reasons. First, GNNs do not predispose interactions to be sparse, as relationships among independent entities are likely to be. Second, GNNs do not factorize knowledge about interactions in an entity-conditional manner. As an alternative, we take inspiration from cognitive science and resurrect a classic approach, production systems, which consist of a set of rule templates that are applied by binding placeholder variables in the rules to specific entities. Rules are scored on their match to entities, and the best fitting rules are applied to update entity properties. In a series of experiments, we demonstrate that this architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information. This disentangling of knowledge achieves robust future-state prediction in rich visual environments, outperforming state-of-the-art methods using GNNs, and allows for the extrapolation from simple (few object) environments to more complex environments.
1 Introduction
Despite never having taken a physics course, every child beyond a young age appreciates that pushing a plate off the dining table will cause the plate to break. The laws of physics accurately characterize the dynamics of our natural world, and although explicit knowledge of these laws is not necessary to reason, we can reason explicitly about objects interacting through these laws. Humans can verbalize knowledge in propositional expressions such as “If a plate drops from table height, it will break,” and “If a video-game opponent approaches from behind and they are carrying a weapon, they are likely to attack you.” Expressing propositional knowledge is not a strength of current deep learning methods for several reasons. First, propositions are discrete and independent from one another. Second, propositions must be quantified in the manner of first-order logic; for example, the video-game proposition applies to any X for which X is an opponent and has a weapon. Incorporating the ability to express and reason about propositions should improve generalization in deep learning methods because this knowledge is modular— propositions can be formulated independently of each other— and can therefore be acquired incrementally. Propositions can also be composed with each other and applied consistently to all entities that match, yielding a powerful form of systematic generalization.
The classical AI literature from the 1980s can offer deep learning researchers a valuable perspective. In this era, reasoning, planning, and prediction were handled by architectures that performed propositional inference on symbolic knowledge representations. A simple example of such an architecture is
* Equal Contribution, ** Equal Advising 1 Mila, University of Montreal, 2 Google Deepmind, 3 Waverly, 4 Google Research, Brain Team. Corresponding authors: [email protected], [email protected]
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
the production system (Laird et al., 1986; Anderson, 1987), which expresses knowledge by condition-action rules. The rules operate on a working memory, a construct inspired by cognitive science: rule conditions are matched to entities in working memory, and such a match can trigger computational actions that update working memory or external actions that operate on the outside world.
Production systems were typically used to model high-level cognition, e.g., mathematical problem solving or procedure following; perception was not the focus of these models. It was assumed that the results of perception were placed into working memory in a symbolic form that could be operated on with the rules. In this article, we revisit production systems but from a deep learning perspective which naturally integrates perceptual processing and subsequent inference for visual reasoning problems. We describe an end-to-end deep learning model that constructs object-centric representations of entities in videos, and then operates on these entities with differentiable—and thus learnable—production rules. The essence of these rules, carried over from traditional symbolic systems, is that they operate on variables that are bound, or linked, to the entities in the world. In the deep learning implementation, each production rule is represented by a distinct MLP with query-key attention mechanisms to specify the rule-entity binding and to determine when the rule should be triggered for a given entity. We are not the first to propose a neural instantiation of a production system architecture. Touretzky & Hinton (1988) gave a proof of principle that neural net hardware could be hardwired to implement a production system for symbolic reasoning; our work fundamentally differs from theirs in that (1) we focus on perceptual inference problems and (2) we use the architecture as an inductive bias for learning.
1.1 Variables and entities
What makes a rule general-purpose is that it incorporates placeholder variables that can be bound to arbitrary values or—the term we prefer in this article—entities. This notion of binding is familiar in functional programming languages, where these variables are called arguments. Analogously, the use of variables in the production rules we describe enables a model to reason about any set of entities that satisfy the selection criteria of the rule.
Consider a simple function in C like int add(int a, int b). This function binds its two integer operands to variables a and b. The function does not apply if the operands are, say, character strings. The use of variables enables a programmer to reuse the same function to add any two integer values.
In order for rules to operate on entities, these entities must be represented explicitly. That is, the visual world needs to be parsed in a task-relevant manner, e.g., distinguishing the sprites in a video game or the vehicles and pedestrians approaching an autonomous vehicle. Only in the past few years have deep learning vision researchers developed methods for object-centric representation (Le Roux et al., 2011; Eslami et al., 2016; Greff et al., 2016; Raposo et al., 2017; Van Steenkiste et al., 2018; Kosiorek et al., 2018; Engelcke et al., 2019; Burgess et al., 2019; Greff et al., 2019; Locatello et al., 2020a; Ahmed et al., 2020; Goyal et al., 2019; Zablotskaia et al., 2020; Rahaman et al., 2020; Du et al., 2020; Ding et al., 2020; Goyal et al., 2020; Ke et al., 2021). These methods differ in details but share the notion of a fixed number of slots (see Figure 1 for example), also known as object files, each encapsulating information about a single object. Importantly, the slots are interchangeable, meaning that it doesn’t matter if a scene with an apple and an orange encodes the apple in slot 1 and orange in slot 2 or vice-versa.
A model of visual reasoning must not only be able to represent entities but must also express knowledge about entity dynamics and interactions. To ensure systematic predictions, a model must be capable of applying knowledge to an entity regardless of the slot it is in and must be capable of applying the same knowledge to multiple instances of an entity. Several distinct approaches exist in the literature. The predominant approach uses graph neural networks to model slot-to-slot interactions (Scarselli et al., 2008; Bronstein et al., 2017; Watters et al., 2017; Van Steenkiste et al., 2018; Kipf et al., 2018; Battaglia et al., 2018; Tacchetti et al., 2018). To ensure systematicity, the GNN must share parameters among the edges. In a recent article, Goyal et al. (2020) developed a more general framework in which parameters are shared but slots can dynamically select which parameters to use in a state-dependent manner. Each set of parameters is referred to as a schema, and slots use a query-key attention mechanism to select which schema to apply at each time step. Multiple slots can select the same schema. In both GNNs and SCOFF, modeling dynamics involves each slot interacting with each other slot. In the work we describe in this article, we replace the direct slot-to-slot interactions with rules, which mediate sparse interactions among slots (See arrows in Figure 1).
Thus our main contribution is that we introduce NPS, which offers a way to model dynamic and sparse interactions among the variables in a graph and also allows dynamic sharing of multiple sets of parameters among these interactions. Most architectures used for modelling interactions in the current literature use a statically instantiated graph which models all possible interactions for a given variable at each step, i.e., dense interactions. Such dense architectures also share a single set of parameters across all interactions, which may be quite restrictive in terms of representational capacity. A visual comparison between these two kinds of architectures is shown in Figure 1. Through our experiments we show the advantage of modeling interactions in the proposed manner using NPS in visually rich physical environments. We also show that our method results in an intuitive factorization of rules and entities.
2 Production System
Formally, our notion of a production system consists of a set of entities and a set of rules, along with a mechanism for selecting rules to apply on subsets of the entities. Implicit in a rule is a specification of the properties of relevant entities, e.g., a rule might apply to one type of sprite in a video game but not another. The control flow of a production system dynamically selects rules as well as bindings between rules and entities, allowing different rules to be chosen and different entities to be manipulated at each point in time.
The neural production system we describe shares essential properties with traditional production system, particularly with regard to the compositionality and generality of the knowledge they embody. Lovett & Anderson (2005) describe four desirable properties commonly attributed to symbolic systems that apply to our work as well.
Production rules are modular. Each production rule represents a unit of knowledge and is atomic, such that any production rule can be intervened on (added, modified, or deleted) independently of the other production rules in the system.
Production rules are abstract. Production rules allow for generalization because their conditions may be represented as high-level abstract knowledge that matches a wide range of patterns. These conditions specify the attributes of relationship(s) between entities without specifying the entities themselves. The ability to represent abstract knowledge allows for the transfer of learning across different environments as long as they fit within the conditions of the given production rule.
Production rules are sparse. In order that production rules have broad applicability, they involve only a subset of entities. This assumption imposes a strong prior that dependencies among entities are sparse. In the context of visual reasoning, we conjecture that this prior is superior to what has often been assumed in the past, particularly in the disentanglement literature—independence among entities (Higgins et al., 2016; Chen et al., 2018).
Production rules represent causal knowledge and are thus asymmetric. Each rule can be decomposed into a {condition, action} pair, where the action reflects a state change that is a causal consequence of the conditions being met.
These four properties are sufficient conditions for knowledge to be expressed in production rule form. These properties specify how knowledge is represented, but not what knowledge is represented. The
latter is inferred by learning mechanisms under the inductive bias provided by the form of production rules.
3 Neural Production System: Slots and Sparse Rules
The Neural Production System (NPS), illustrated in Figure 2, provides an architectural backbone that supports the detection and inference of entity (object) representations in an input sequence, and the underlying rules which govern the interactions between these entities in time and space. The input sequence indexed by time step t, {x_1, ..., x_t, ..., x_T}, for instance the frames in a video, is processed by a neural encoder (Burgess et al., 2019; Greff et al., 2019; Goyal et al., 2019, 2020) applied to each x_t, to obtain a set of M entity representations {V_1^t, ..., V_M^t}, one for each of the M slots. These representations describe an entity and are updated based on both the previous state, V^{t−1}, and the current input, x_t.
NPS consists of N separately encoded rules, {R_1, R_2, ..., R_N}. Each rule consists of two components, R_i = (R̃_i, MLP_i), where R̃_i is a learned rule embedding vector, which can be thought of as a template defining the condition for when a rule applies; and MLP_i, which determines the action taken by a rule. Both R̃_i and the parameters of MLP_i are learned along with the other parameters of the model using back-propagation on an objective optimized end-to-end.
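In code, the rule set can be parameterized roughly as follows; the embedding and slot dimensions are illustrative assumptions, not the values used in the paper.

```python
import torch
import torch.nn as nn

class RuleSet(nn.Module):
    """A sketch of N rules: rule i owns a learned embedding R_i (used only for
    rule selection) and an MLP_i (used only for rule application).
    All dimensions here are illustrative assumptions."""
    def __init__(self, num_rules: int = 4, rule_dim: int = 32, slot_dim: int = 64):
        super().__init__()
        # One learned embedding vector per rule, acting as the rule's "condition" template.
        self.embeddings = nn.Parameter(torch.randn(num_rules, rule_dim))
        # One MLP per rule; it maps (primary slot, contextual slot) to an update.
        self.mlps = nn.ModuleList([
            nn.Sequential(
                nn.Linear(2 * slot_dim, slot_dim),
                nn.ReLU(),
                nn.Linear(slot_dim, slot_dim),
            )
            for _ in range(num_rules)
        ])
```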
In the general form of the model, each slot selects a rule that will be applied to it to change its state. This can potentially be performed several times, with possibly different rules applied at each step. Rule selection is done using an attention mechanism described in detail below. Each rule specifies conditions and actions on a pair of slots. Therefore, while modifying the state of a slot using a rule, it can take the state of another slot into account. The slot which is being modified is called the primary slot and the other is called the contextual slot. The contextual slot is also selected using an attention mechanism described in detail below.
3.1 Computational Steps in NPS
In this section, we give a detailed description of the rule selection and application procedure for the slots. First, we will formalize the definitions of a few terms that we will use to explain our method. We use the term primary slot to refer to the slot V_p whose state gets modified by a rule R_r. We use the term contextual slot to refer to the slot V_c that the rule R_r takes into account while modifying the state of the primary slot V_p.
Notation. We consider a set of N rules {R_1, R_2, ..., R_N} and a set of T input frames {x_1, x_2, ..., x_T}. Each frame x_t is encoded into a set of M slots {V_1^t, V_2^t, ..., V_M^t}. In the following discussion, we omit the index over t for simplicity.
Step 1 is external to NPS and involves parsing an input image, x_t, into slot-based entities conditioned on the previous state of the slot-based entities. Any of the methods proposed in the literature to obtain a slot-wise representation of entities can be used (Burgess et al., 2019; Greff et al., 2019; Goyal et al., 2019, 2020). The next three steps constitute the rule selection and application procedure.
Step 2. For each primary slot V_p, we attend to a rule R_r to be applied. Here, the query comes from the primary slot: q_p = V_p W^q, and the keys come from the rules: k_i = R̃_i W^k ∀ i ∈ {1, ..., N}. The rule is selected using a straight-through Gumbel softmax (Jang et al., 2016) to achieve a learnable hard decision: r = argmax_i(q_p k_i + γ), where γ ∼ Gumbel(0, 1). This competition is a noisy version of rule matching and prioritization in traditional production systems.
Step 3. For a given primary slot V_p and selected rule R_r, a contextual slot V_c is selected using another attention mechanism. In this case the query comes from the primary slot: q_p = V_p W^q, and the keys come from all the slots: k_j = V_j W^q ∀ j ∈ {1, ..., M}. The selection takes place using a straight-through Gumbel softmax similar to step 2: c = argmax_j(q_p k_j + γ), where γ ∼ Gumbel(0, 1). Note that each rule application is sparse since it takes into account only 1 contextual slot for modifying a primary slot, while other methods like GNNs take into account all slots for modifying a primary slot.
Step 4. Rule Application: the selected rule R_r is applied to the primary slot V_p based on the rule and the current contents of the primary and contextual slots. The rule-specific MLP_r takes as input the concatenated representation of the state of the primary and contextual slots, V_p and V_c, and produces an output, which is then used to change the state of the primary slot V_p by residual addition.
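Putting steps 2, 3, and 4 together, the following is a minimal sketch of one rule application for a given primary slot, continuing the RuleSet sketch above. The separate query/key projections and the way the straight-through gradient is kept for the rule choice are our own illustrative choices, not necessarily the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RuleApplication(nn.Module):
    """One NPS rule-application step for a chosen primary slot: select a rule
    (step 2), select a contextual slot (step 3), then apply the selected rule's
    MLP and update the primary slot residually (step 4)."""
    def __init__(self, rules: RuleSet, slot_dim: int = 64, rule_dim: int = 32, key_dim: int = 32):
        super().__init__()
        self.rules = rules
        self.q_rule = nn.Linear(slot_dim, key_dim)  # query for rule selection
        self.k_rule = nn.Linear(rule_dim, key_dim)  # keys from rule embeddings
        self.q_slot = nn.Linear(slot_dim, key_dim)  # query for contextual-slot selection
        self.k_slot = nn.Linear(slot_dim, key_dim)  # keys from all slots

    def forward(self, slots: torch.Tensor, p: int) -> torch.Tensor:
        # slots: (M, slot_dim); p: index of the primary slot.
        v_p = slots[p]
        # Step 2: hard rule selection with a straight-through Gumbel softmax.
        rule_logits = self.q_rule(v_p) @ self.k_rule(self.rules.embeddings).t()  # (N,)
        rule_onehot = F.gumbel_softmax(rule_logits, tau=1.0, hard=True)
        r = int(rule_onehot.argmax())
        # Step 3: hard contextual-slot selection.
        slot_logits = self.q_slot(v_p) @ self.k_slot(slots).t()                  # (M,)
        slot_onehot = F.gumbel_softmax(slot_logits, tau=1.0, hard=True)
        v_c = slot_onehot @ slots  # selected contextual slot (keeps the ST gradient)
        # Step 4: apply the selected rule's MLP; multiplying by the (value-1) one-hot
        # entry keeps a straight-through gradient path to the rule-selection logits.
        update = rule_onehot[r] * self.rules.mlps[r](torch.cat([v_p, v_c], dim=-1))
        new_slots = slots.clone()
        new_slots[p] = v_p + update  # residual update of the primary slot
        return new_slots
```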
3.2 Rule Application: Sequential vs Parallel Rule Application
In the previous section, we described how each rule application considers only one other contextual slot for a given primary slot, i.e., contextual sparsity. We can also consider application sparsity, wherein we use the rules to update the states of only a subset of the slots. In this scenario, only the selected slots would be primary slots. This setting is helpful when an entity in an environment is stationary, or is following its own default dynamics unaffected by other entities, and therefore does not need to consider other entities to update its state. We explore two scenarios for enabling application sparsity.
Parallel Rule Application. Each of the M slots selects a rule to potentially change its state. To enable sparse changes, we provide an extra Null Rule in addition to the available N rules. If a slot picks the null rule in step 2 of the above procedure, we do not update its state.
Sequential Rule Application. In this setting, only one slot gets updated in each rule application step. Therefore, only one slot is selected as the primary slot. This can be facilitated by modifying step 2 above to select one {primary slot, rule} pair among the N · M {slot, rule} pairs. The queries come from each slot: q_j = V_j W^q ∀ j ∈ {1, ..., M}, and the keys come from the rules: k_i = R̃_i W^k ∀ i ∈ {1, ..., N}. The straight-through Gumbel softmax selects one (primary slot, rule) pair: (p, r) = argmax_{j,i}(q_j k_i + γ), where γ ∼ Gumbel(0, 1). In the sequential regime, we allow the rule application procedure (steps 2, 3, and 4 above) to be performed multiple times, in K rule application stages, for each time-step t.
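A sketch of this joint selection: score all N·M (slot, rule) pairs and pick one with a straight-through Gumbel softmax. The projection matrices are passed in explicitly for clarity; how the gradient is routed afterwards follows the same pattern as in the sketch above.

```python
import torch
import torch.nn.functional as F

def select_slot_and_rule(slots: torch.Tensor, rule_embeddings: torch.Tensor,
                         w_q: torch.Tensor, w_k: torch.Tensor):
    """Sequential NPS: jointly select one (primary slot, rule) pair.
    slots: (M, slot_dim), rule_embeddings: (N, rule_dim),
    w_q: (slot_dim, key_dim), w_k: (rule_dim, key_dim)."""
    queries = slots @ w_q                 # (M, key_dim), one query per slot
    keys = rule_embeddings @ w_k          # (N, key_dim), one key per rule
    scores = queries @ keys.t()           # (M, N) slot-rule compatibility scores
    onehot = F.gumbel_softmax(scores.flatten(), tau=1.0, hard=True)
    idx = int(onehot.argmax())
    p, r = divmod(idx, keys.shape[0])     # primary-slot index, rule index
    return p, r, onehot.view_as(scores)   # the one-hot map carries the ST gradient
```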
A pictorial demonstration of both rule application regimes can be found in Figure 3. We provide detailed algorithms for the sequential and parallel regimes in Appendix.
4 Experiments
We demonstrate the effectiveness of NPS on multiple tasks and compare to a comprehensive set of baselines. To show that NPS can learn intuitive rules from the data generating distribution, we design a couple of simple toy experiments with well-defined discrete operations. Results show that NPS can accurately recover each operation defined by the data and learn to represent each operation using a separate rule. We then move to a much more complicated and visually rich setting with abstract physical rules and show that factorization of knowledge into rules as offered by NPS does scale up to such settings. We study and compare the parallel and sequential rule application procedures and try to understand the settings which favour each. We then
evaluate the benefits of reusable, dynamic and sparse interactions as offered by NPS in a wide variety of physical environments by comparing it against various baselines. We conduct ablation studies to assess the contribution of different components of NPS. Here we briefly outline the tasks considered and direct the reader to the Appendix for full details on each task and details on hyperparameter settings.
Discussion of baselines. NPS is an interaction network, therefore we use other widely used interaction networks such as multihead attention and graph neural networks (Goyal et al. (2019), Goyal et al. (2020), Veerapaneni et al. (2019), Kipf et al. (2019)) for comparison. Goyal et al. (2019) and Goyal
et al. (2020) use an attention based interaction network to capture interactions between the slots, while Veerapaneni et al. (2019) and Kipf et al. (2019) use a GNN based interaction network. We also consider the recently introduced convolutional interaction network (CIN) (Qi et al., 2021) which captures dense pairwise interactions like GNN but uses a convolutional network instead of MLPs to better utilize spatial information. The proposed method, similar to other interaction networks, is agnostic to the encoder backbone used to encode the input image into slots, therefore we compare NPS to other interaction networks across a wide-variety of encoder backbones.
4.1 Learning intuitive rules with NPS: Toy Simulations
We designed a couple of simple tasks with well-defined discrete rules to show that NPS can learn intuitive and interpretable rules. We also show the efficiency and effectiveness of the selection procedure (step 2 and step 3 in section 3.1) by comparing against a baseline with many more parameters. Both tasks require a single modification of only one of the available entities, therefore the use of sequential or parallel rule application would not make a difference here since parallel rule application in which all-but-one slots select the null rule is similar to sequential rule application with 1 rule application step. To simplify the presentation, we describe the setup for both tasks using the sequential rule application procedure.
MNIST Transformation. We test whether NPS can learn simple rules for performing transformations on MNIST digits. We generate data with four transformations: {Translate Up, Translate Down, Rotate Right, Rotate Left}. We feed the input image (X) and the transformation (o) to be performed as a one-hot vector to the model. The detailed setup is described in Appendix. For this task, we evaluate whether NPS can learn to use a unique rule for each transformation.
We use 4 rules corresponding to the 4 transformations with the hope that the correct transformations are recovered. Indeed, we observe that NPS successfully learns to represent each transformation using a separate rule as shown in Table 1. Our model achieves an MSE of 0.02. A visualization of the outputs from our model and further details can be found in Appendix C.
Coordinate Arithmetic Task. The model is tasked with performing arithmetic operations on 2D coordinates. Given (X0, Y0) and (X1, Y1), we can apply the following operations: {X Addition: (Xr, Yr) = (X0 + X1, Y0), X Subtraction: (Xr, Yr) = (X0 − X1, Y0), Y Addition: (Xr, Yr) = (X0, Y0 + Y1), Y Subtraction: Xr, Yr = (X0, Y0−Y1)}, where (Xr, Yr) is the resultant coordinate.
In this task, the model is given 2 input coordinates X = [(xi, yi), (xj , yj)] and the expected output coordinates Y = [(x̂i, ŷi), (x̂j , ŷj)] . The model is supposed to infer the correct rule to produce the correct output coordinates. During data collection, the true output is obtained by performing a random transformation on a randomly selected coordinate in X (primary coordinate), taking another randomly selected coordinate from X (contextual coordinate) into account. The detailed setup is described in Appendix D. We use an NPS model with 4 rules for this task. We use the the selection procedure in step 2 and step 3 of algorithm 1 to select the primary coordinate, contextual coordinate, and the rule. For the baseline we replace the selection procedure in NPS (i.e. step 2 and step 3 in
algorithm 1) with a routing MLP similar to Fedus et al. (2021).
This routing MLP has 3 heads (one each for selecting the primary coordinate, contextual coordinate, and the rule). The baseline has 4 times more parameters than NPS. The final output is produced by
the rule MLP which does not have access to the true output, hence the model cannot simply copy the true output to produce the actual output. Unlike the MNIST transformation task, we do not provide the operation to be performed as a one-hot vector input to the model, therefore it needs to infer the available operations from the data demonstrations.
We show the segregation of rules for NPS and the baseline in Figure 4. We can see that NPS learns to use a unique rule for each operation while the baseline struggles to disentangle the underlying operations properly. NPS also outperforms the baseline in terms of MSE achieving an MSE of 0.01±0.001 while the baseline achieves an MSE of 0.04±0.008. To further confirm that NPS learns all the available operations correctly from raw data demonstrations, we use an NPS model with 5 rules. We expect that in this case NPS should utilize only 4 rules since the data describes only 4 unique operations and indeed we observe that NPS ends up mostly utilizing 4 of the available 5 rules as shown in Table 2.
4.2 Parallel vs Sequential Rule Application
We compare the parallel and sequential rule application procedures, to understand the settings that favour one or the other, over two tasks: (1) Bouncing Balls, (2) Shapes Stack. We use the term PNPS to refer to parallel rule application and SNPS to refer to sequential rule application.
Shapes Stack. We use the shapes stack dataset introduced by Groth et al. (2018). This dataset consists of objects stacked on top of each other as shown in Figure 5. These objects fall under the influence of gravity. For our experiments, We follow the same setup as Qi et al. (2021). In this task, given the first frame, the model is tasked with predicting the object bounding boxes for the next t timesteps. The first frame is encoded using a convolutional network followed by RoIPooling (Girshick (2015)) to extract object-centric visual features. The object-centric features are then passed to the dynamics model to predict object bounding boxes of the next t steps. Qi et al. (2021) propose a Region Proposal Interaction Network (RPIN) to solve this task. The dynamics model in RPIN consists of an Interaction Network proposed in Battaglia et al. (2016).
To better utilize spatial information, Qi et al. (2021) propose an extension of the interaction operators in interaction net to operate on 3D tensors. This is achieved by replacing the MLP operations in the original interaction networks with convolutions. They call this new network Convolutional Interaction Network (CIN). For the proposed model, we replace this CIN in RPIN by NPS. To ensure a fair comparison to CIN, we use CNNs to represent rules in NPS instead of MLPs. CIN captures all pairwise interactions between objects using a convolutional network. In NPS, we capture sparse interactions (contextual sparsity) as compared to dense pairwise interactions captured by CIN. Also, in NPS we update only a few subset of slots per step instead of all slots (application sparsity).
We consider two evaluation settings. (1) Test setting: The number of rollout timesteps is same as that seen during training (i.e. t = 15); (2) Transfer Setting: The number of rollout timesteps is higher than that seen during training (i.e. t = 30).
We present our results on the shapes stack dataset in Table 3. We can see that both PNPS and SNPS outperform the baseline RPIN in the transfer setting, while only PNPS outperforms the baseline in the test setting and SNPS fails to do so. We can see that PNPS outperforms SNPS. We attribute this to the reduced application sparsity with PNPS, i.e., it is more likely that the state of a slot gets updated in PNPS as compared to SNPS. For instance, consider an NPS model with N uniformly chosen rules and M slots. The probability that the state of a slot gets updated in PNPS is PPNPS = N − 1/N (since 1 rule is the null rule), while the same probability for SNPS is PSNPS = 1/M (since only 1 slot gets updated per rule application step).
For this task, we run both PNPS and SNPS for N = {1, 2, 4, 6} rules and M = 3. For any given N , we observe that PPNPS > PSNPS . Even when we have multiple rule application steps in SNPS, it might end up selecting the same slot to be updated in more than one of these steps. We report the best performance obtained for PNPS and SNPS across all N , which is N = {2 + 1 Null Rule} for PNPS and N = 4 for SNPS, in Table 3. Shapes stack is a dataset that would prefer a model with less application sparsity since all the objects are tightly bound to each other (objects are placed on top of each other), therefore all objects spend the majority of their time interacting with the objects directly above or below them. We attribute the higher performance of PNPS compared to RPIN to the higher contextual sparsity of PNPS. Each example in the shapes stack task consists of 3 objects. Even though the blocks are tightly bound to each other, each block is only affected by the objects it is in direct contact with. For example, the top-most object is only affected by the object directly below it. The contextual sparsity offered by PNPS is a strong inductive bias to model such sparse interactions while RPIN models all pairwise interactions between the objects. Figure 5 shows an intuitive illustration of the PNPS model for the shapes stack dataset. In the figure, Rule 2 actually refers to the Null Rule, while Rule 1 refers to all the other non-null rules. The bottom-most block picks the Null Rule most times, as the bottom-most block generally does not move.
Bouncing Balls. We consider a bouncingballs environment in which multiple balls move with billiard-ball dynamics. We validate our model on a colored version of this dataset. This is a next-step prediction task in which the model is tasked with predicting the final binary mask of each ball. We compare the following methods: (a) SCOFF (Goyal et al., 2020): factorization of knowledge in terms of slots (object properties) and schemata, the latter capturing object dynamics; (b) SCOFF++: we extend SCOFF by using the idea of iterative competition as proposed in slot attention (SA) (Locatello et al., 2020a); SCOFF + PNPS/SNPS: We replace pairwise slot-to-slot interaction in SCOFF++ with parallel or sequential rule application. For comparing different methods, we use the Adjusted Rand Index or ARI (Rand, 1971). To investigate how the factorization in the form of rules allows for extrapolating knowledge from fewer to more objects, we increase the number of objects from 4 during training to 6-8 during testing.
We present the results of our experiments in Table 4. Contrary to the shapes stack task, we
see that SNPS outperforms PNPS for the bouncing balls task. The balls are not tightly bound together into a single tower as in the shapes stack. Most of the time, a single ball follows its own dynamics, only occasionally interacting with another ball. Rules in NPS capture interaction dynamics between entities, hence they would only be required to change the state of an entity when it interacts with another entity. In the case of bouncing balls, this interaction takes place through a collision between multiple balls. Since for a single ball, such collisions are rare, SNPS, which has higher application sparsity (less probability of modifying the state of an entity), performs better as compared to PNPS (lower application sparsity). Also note that, SNPS has the ability to compose multiple rules together by virtue of having multiple rule application stages. A visualization of the rule and entity selections by the proposed algorithm can be found in Appendix Figure 9.
Given the analysis in this section, we can conclude that PNPS is expected to work better when interactions among entities are more frequent while SNPS is expected to work better when interactions are rare and most of the time, each entity follows its own dynamics. Note that, for both SNPS and PNPS, the rule application considers only 1 other entity as context. Therefore, both approaches have equal contextual sparsity while the baselines that we consider (SCOFF and RPIN) capture dense pairwise interactions. We discuss the benefits of contextual sparsity in more detail in the next section. More details regarding our setup for the above experiments can be found in Appendix.
4.3 Benefits of Sparse Interactions Offered by NPS
In NPS, one can view the computational graph as a dynamically constructed GNN resulting from applying dynamically selected rules, where the states of the slots are represented on the different nodes of the graph, and different rules dynamically instantiate an hyper-edge between a set of slots (the primary slot and the contextual slot). It is important to emphasize that the topology of the graph induced in NPS is dynamic and sparse (only a few nodes affected), while in most GNNs the topology is fixed and dense (all nodes affected). In this section, through a thorough set of experiments, we show that learning sparse and dynamic interactions using NPS indeed works better for the problems we consider than learning dense interactions using GNNs. We consider two types of tasks: (1) Learning Action Conditioned World Models (2) Physical Reasoning. We use SNPS for all these experiments since in the environments that we consider here, interactions among entities are rare.
Learning Action-Conditioned World Models. For learning action-conditioned world models, we follow the same experimental setup as Kipf et al. (2019). Therefore, all the tasks in this section are next-K step (K = {1, 5, 10}) prediction tasks, given the intermediate actions, and with the predictions being performed in the latent space. We use the Hits at Rank 1 (H@1) metrics described by Kipf et al. (2019) for evaluation. H@1 is 1 for a particular example if the predicted state representation is nearest to the encoded true observation and 0 otherwise. We report the average of this score over the test set (higher is better).
Physics Environment. The physics environment (Ke et al., 2021) simulates a simple physical world. It consists of blocks of unique but unknown weights. The dynamics for the interaction between blocks is that the movement of heavier blocks pushes lighter blocks on their path. This rule creates an acyclic causal graph between the blocks. For an accurate world model, the learner needs to infer the correct weights through demonstrations. Interactions in this environment are sparse and only involve two blocks at a time, therefore we expect NPS to outperform dense architectures like GNNs. This environment is demonstrated in Appendix Fig 11.
We follow the same setup as Kipf et al. (2019). We use their C-SWM model as baseline. For the proposed model, we only replace the GNN from C-SWM by NPS. GNNs generally share parameters across edges, but in NPS each rule has separate parameters. For a fair comparison to GNN, we use an NPS model with 1 rule. Note that this setting is still different from GNNs as in GNNs at each step every slot is updated by instantiating edges between all pairs of slots, while in NPS an edge is dynamically instantiated between a single pair of slots and only the state of the selected slot (i.e., primary slot) gets updated.
The results of our experiments are presented in Figure 6(a). We can see that NPS outperforms GNNs for all rollouts. Multi-step settings are more difficult to model as errors may get compounded over time steps. The sparsity of NPS (only a single slot affected per step) reduces compounding of errors and enhances symmetry-breaking in the assignment of transformations to rules, while in the
case of GNNs, since all entities are affected per step, there is a higher possibility of errors getting compounded. We can see that even with a single rule, we significantly outperform GNNs thus proving the effectiveness of dynamically instantiating edges between entities.
Atari Games. We also test the proposed model in the more complicated setting of Atari. Atari games also have sparse interactions between entities. For instance, in Pong, any interaction involves only 2 entities: (1) paddle and ball or (2) ball and the wall. Therefore, we expect sparse interactions captured by NPS to outperform GNNs here as well.
We follow the same setup as for the physics environment described in the previous section. We present the results for the Atari experiments in Figure 6(b), showing the average H@1 score across 5 games: Pong, Space Invaders, Freeway, Breakout, and QBert. As expected, we can see that the proposed model achieves a higher score than the GNN-based C-SWM. The results for the Atari experiments reinforce the claim that NPS is especially good at learning sparse interactions.
Learning Rules for Physical Reasoning. To show the effectiveness of the proposed approach for physical reasoning tasks, we evaluate NPS on another dataset: Sprites-MOT, introduced by He et al. (2018). The dataset contains a set of moving objects of various shapes and aims to test whether a model can handle occlusions correctly. Each frame has consistent bounding boxes which may cause the objects to appear or disappear from the scene. A model which performs well should be able to track the motion of all objects irrespective of whether they are occluded or not. We follow the same setup as Weis et al. (2020). We use the OP3 model (Veerapaneni et al., 2019) as our baseline. To test the proposed model, we replace the GNN-based transition model in OP3 with the proposed NPS.
We use the same evaluation protocol as followed by Weis et al. (2020) which is based on the MOT (Multi-object tracking) challenge (Milan et al., 2016). The results on the MOTA and MOTP metrics for this task are presented in Table 5. The results on the other metrics are presented in appendix Table 10. We ask the reader to refer to appendix F.1 for more details about these metrics. We can see that for almost all metrics, NPS outperforms the OP3 baseline. Although this dataset does not contain physical interactions between the objects, sparse rule application should still be useful in dealing with occlusions. At any time step, only a single object is affected by occlusions i.e., it may get
occluded due to another object or due to a prespecified bounding box, while the other objects follow their default dynamics. Therefore, a rule should be applied to only the object (or entity) affected (i.e., not visible) due to occlusion and may take into account any other object or entity that is responsible for the occlusion.
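For reference, the two headline metrics reported in Table 5 follow the standard MOT-challenge definitions (up to the implementation details of the protocol in Weis et al. (2020)): $\mathrm{MOTA} = 1 - \frac{\sum_t (\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t)}{\sum_t \mathrm{GT}_t}$, which jointly penalizes missed objects, false positives, and identity switches relative to the number of ground-truth objects, and $\mathrm{MOTP} = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t}$, the average alignment (distance or overlap) between the $c_t$ matched prediction/ground-truth pairs at each time step; whether higher or lower MOTP is better depends on whether a distance or an overlap measure is used.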
5 Discussion and Conclusion
For AI agents such as robots trying to make sense of their environment, the only observables are low-level variables like pixels in images. To generalize well, an agent must induce high-level entities as well as discover and disentangle the rules that govern how these entities actually interact with each other. Here we have focused on perceptual inference problems and proposed NPS, a neural instantiation of production systems by introducing an important inductive bias in the architecture following the proposals of Bengio (2017); Goyal & Bengio (2020); Ke et al. (2021).
Limitations & Looking Forward. Our experiments highlight the advantages brought by the factorization of knowledge into a small set of entities and sparse sequentially applied rules. Immediate future work would investigate how to take advantage of these inductive biases for more complex physical environments (Ahmed et al., 2020) and novel planning methods, which might be more sample efficient than standard ones (Schrittwieser et al., 2020).
We also find that Sequential and Parallel NPS have different properties suited towards different domains. Future work should explore how to effectively combine these two approaches. We discuss this in more detail in Appendix section E.3.
6 Acknowledgements
The authors would like to thank Matthew Botvinick for useful discussions. The authors would also like to thank Alex Lamb, Stefan Bauer, Nicolas Chapados, Danilo Rezende and Kelsey Allen for brainstorming sessions. We are also thankful to Dianbo Liu, Damjan Kalajdzievski and Osama Ahmed for proofreading. We would like to thank Samsung Electronics Co. Ltd. and CIFAR for funding this research. We would also like to thank Google for providing Google cloud credits used in this work. | 1. How does the Neural Production Systems (NPS) model scale when dealing with a large number of entities and rules?
2. Can the model handle complex visual environments with many interacting entities?
3. How does NPS compare to other neural network models such as Graph Neural Networks (GNNs) in terms of performance and efficiency?
4. Are there any limitations or challenges in training the model, especially when dealing with large datasets?
5. Can the learned rules be applied to real-world scenarios, and how would they generalize to unseen situations? | Summary Of The Paper
Review | Summary Of The Paper
The paper presents Neural Production Systems (NPS), a neural model to build entity-centric representations and model the latent rules that determine their interactions in visual environments, inspired by production systems.
Review
Strengths:
I found this paper very interesting. Although inspired by an old concept (i.e., production systems), its deep learning adaptation is far from trivial.
The reported experiments are convincing. In particular, the authors show that NPS can learn arithmetic rules, map rules to MNIST transformation, work in richer visual settings and perform future-state predictions (such as predicting the number of future steps in action-conditioned world models) better than GNNs.
Rules are differentiable, and could be learned in the context of a broader framework.
Questions:
How does the model scale with respect to the number of entities/rules considered?
Presentation comments:
Figure 1 is not colour-blind friendly.
It might be useful to report results for individual games in Table 5, with the number of entities/rules considered. |
NIPS | Title
Neural Production Systems
Abstract
Visual environments are structured, consisting of distinct objects or entities. These entities have properties—visible or latent—that determine the manner in which they interact with one another. To partition images into entities, deep-learning researchers have proposed structural inductive biases such as slot-based architectures. To model interactions among entities, equivariant graph neural nets (GNNs) are used, but these are not particularly well suited to the task for two reasons. First, GNNs do not predispose interactions to be sparse, as relationships among independent entities are likely to be. Second, GNNs do not factorize knowledge about interactions in an entity-conditional manner. As an alternative, we take inspiration from cognitive science and resurrect a classic approach, production systems, which consist of a set of rule templates that are applied by binding placeholder variables in the rules to specific entities. Rules are scored on their match to entities, and the best fitting rules are applied to update entity properties. In a series of experiments, we demonstrate that this architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information. This disentangling of knowledge achieves robust future-state prediction in rich visual environments, outperforming state-of-the-art methods using GNNs, and allows for the extrapolation from simple (few object) environments to more complex environments.
1 Introduction
Despite never having taken a physics course, every child beyond a young age appreciates that pushing a plate off the dining table will cause the plate to break. The laws of physics accurately characterize the dynamics of our natural world, and although explicit knowledge of these laws is not necessary to reason, we can reason explicitly about objects interacting through these laws. Humans can verbalize knowledge in propositional expressions such as “If a plate drops from table height, it will break,” and “If a video-game opponent approaches from behind and they are carrying a weapon, they are likely to attack you.” Expressing propositional knowledge is not a strength of current deep learning methods for several reasons. First, propositions are discrete and independent from one another. Second, propositions must be quantified in the manner of first-order logic; for example, the video-game proposition applies to any X for which X is an opponent and has a weapon. Incorporating the ability to express and reason about propositions should improve generalization in deep learning methods because this knowledge is modular— propositions can be formulated independently of each other— and can therefore be acquired incrementally. Propositions can also be composed with each other and applied consistently to all entities that match, yielding a powerful form of systematic generalization.
The classical AI literature from the 1980s can offer deep learning researchers a valuable perspective. In this era, reasoning, planning, and prediction were handled by architectures that performed propositional inference on symbolic knowledge representations. A simple example of such an architecture is
* Equal Contribution, ** Equal Advising 1 Mila, University of Montreal, 2 Google Deepmind, 3 Waverly, 4 Google Research, Brain Team. Corresponding authors: [email protected], [email protected]
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
the production system (Laird et al., 1986; Anderson, 1987), which expresses knowledge as condition-action rules. The rules operate on a working memory, a construct inspired by cognitive science: rule conditions are matched to entities in working memory, and such a match can trigger computational actions that update working memory or external actions that operate on the outside world.
Production systems were typically used to model high-level cognition, e.g., mathematical problem solving or procedure following; perception was not the focus of these models. It was assumed that the results of perception were placed into working memory in a symbolic form that could be operated on with the rules. In this article, we revisit production systems but from a deep learning perspective which naturally integrates perceptual processing and subsequent inference for visual reasoning problems. We describe an end-to-end deep learning model that constructs object-centric representations of entities in videos, and then operates on these entities with differentiable—and thus learnable—production rules. The essence of these rules, carried over from traditional symbolic systems, is that they operate on variables that are bound, or linked, to the entities in the world. In the deep learning implementation, each production rule is represented by a distinct MLP with query-key attention mechanisms to specify the rule-entity binding and to determine when the rule should be triggered for a given entity. We are not the first to propose a neural instantiation of a production system architecture. Touretzky & Hinton (1988) gave a proof of principle that neural net hardware could be hardwired to implement a production system for symbolic reasoning; our work fundamentally differs from theirs in that (1) we focus on perceptual inference problems and (2) we use the architecture as an inductive bias for learning.
1.1 Variables and entities
What makes a rule general-purpose is that it incorporates placeholder variables that can be bound to arbitrary values or—the term we prefer in this article—entities. This notion of binding is familiar in functional programming languages, where these variables are called arguments. Analogously, the use of variables in the production rules we describe enable a model to reason about any set of entities that satisfy the selection criteria of the rule.
Consider a simple function in C like int add(int a, int b). This function binds its two integer operands to variables a and b. The function does not apply if the operands are, say, character strings. The use of variables enables a programmer to reuse the same function to add any two integer values.
In order for rules to operate on entities, these entities must be represented explicitly. That is, the visual world needs to be parsed in a task-relevant manner, e.g., distinguishing the sprites in a video game or the vehicles and pedestrians approaching an autonomous vehicle. Only in the past few years have deep learning vision researchers developed methods for object-centric representation (Le Roux et al., 2011; Eslami et al., 2016; Greff et al., 2016; Raposo et al., 2017; Van Steenkiste et al., 2018; Kosiorek et al., 2018; Engelcke et al., 2019; Burgess et al., 2019; Greff et al., 2019; Locatello et al., 2020a; Ahmed et al., 2020; Goyal et al., 2019; Zablotskaia et al., 2020; Rahaman et al., 2020; Du et al., 2020; Ding et al., 2020; Goyal et al., 2020; Ke et al., 2021). These methods differ in details but share the notion of a fixed number of slots (see Figure 1 for example), also known as object files, each encapsulating information about a single object. Importantly, the slots are interchangeable, meaning that it doesn’t matter if a scene with an apple and an orange encodes the apple in slot 1 and orange in slot 2 or vice-versa.
A model of visual reasoning must not only be able to represent entities but must also express knowledge about entity dynamics and interactions. To ensure systematic predictions, a model must be capable of applying knowledge to an entity regardless of the slot it is in and must be capable of applying the same knowledge to multiple instances of an entity. Several distinct approaches exist in the literature. The predominant approach uses graph neural networks to model slot-to-slot interactions (Scarselli et al., 2008; Bronstein et al., 2017; Watters et al., 2017; Van Steenkiste et al., 2018; Kipf et al., 2018; Battaglia et al., 2018; Tacchetti et al., 2018). To ensure systematicity, the GNN must share parameters among the edges. In a recent article, Goyal et al. (2020) developed a more general framework, SCOFF, in which parameters are shared but slots can dynamically select which parameters to use in a state-dependent manner. Each set of parameters is referred to as a schema, and slots use a query-key attention mechanism to select which schema to apply at each time step. Multiple slots can select the same schema. In both GNNs and SCOFF, modeling dynamics involves each slot interacting with each other slot. In the work we describe in this article, we replace the direct slot-to-slot interactions with rules, which mediate sparse interactions among slots (see arrows in Figure 1).
Thus our main contribution is that we introduce NPS, which offers a way to model dynamic and sparse interactions among the variables in a graph and also allows dynamic sharing of multiple sets of parameters among these interactions. Most architectures used for modelling interactions in the current literature use statically instantiated graphs, which model all possible interactions for a given variable at each step, i.e., dense interactions. Such dense architectures also share a single set of parameters across all interactions, which may be quite restrictive in terms of representational capacity. A visual comparison between these two kinds of architectures is shown in Figure 1. Through our experiments we show the advantage of modeling interactions in the proposed manner using NPS in visually rich physical environments. We also show that our method results in an intuitive factorization of rules and entities.
2 Production System
Formally, our notion of a production system consists of a set of entities and a set of rules, along with a mechanism for selecting rules to apply on subsets of the entities. Implicit in a rule is a specification of the properties of relevant entities, e.g., a rule might apply to one type of sprite in a video game but not another. The control flow of a production system dynamically selects rules as well as bindings between rules and entities, allowing different rules to be chosen and different entities to be manipulated at each point in time.
The neural production system we describe shares essential properties with traditional production systems, particularly with regard to the compositionality and generality of the knowledge they embody. Lovett & Anderson (2005) describe four desirable properties commonly attributed to symbolic systems that apply to our work as well.
Production rules are modular. Each production rule represents a unit of knowledge and is atomic, such that any production rule can be intervened on (added, modified, or deleted) independently of the other production rules in the system.
Production rules are abstract. Production rules allow for generalization because their conditions may be represented as high-level abstract knowledge that matches a wide range of patterns. These conditions specify the attributes of relationship(s) between entities without specifying the entities themselves. The ability to represent abstract knowledge allows for the transfer of learning across different environments as long as they fit within the conditions of the given production rule.
Production rules are sparse. In order that production rules have broad applicability, they involve only a subset of entities. This assumption imposes a strong prior that dependencies among entities are sparse. In the context of visual reasoning, we conjecture that this prior is superior to what has often been assumed in the past, particularly the assumption of independence among entities common in the disentanglement literature (Higgins et al., 2016; Chen et al., 2018).
Production rules represent causal knowledge and are thus asymmetric. Each rule can be decomposed into a {condition, action} pair, where the action reflects a state change that is a causal consequence of the conditions being met.
These four properties are sufficient conditions for knowledge to be expressed in production rule form. These properties specify how knowledge is represented, but not what knowledge is represented. The
latter is inferred by learning mechanisms under the inductive bias provided by the form of production rules.
3 Neural Production System: Slots and Sparse Rules
The Neural Production System (NPS), illustrated in Figure 2, provides an architectural backbone that supports the detection and inference of entity (object) representations in an input sequence, and the underlying rules which govern the interactions between these entities in time and space. The elements of the input sequence indexed by time step t, $\{x_1, \dots, x_t, \dots, x_T\}$, for instance the frames in a video, are processed by a neural encoder (Burgess et al., 2019; Greff et al., 2019; Goyal et al., 2019, 2020) applied to each $x_t$, to obtain a set of M entity representations $\{V^t_1, \dots, V^t_M\}$, one for each of the M slots. These representations describe an entity and are updated based on both the previous state, $V^{t-1}$, and the current input, $x_t$.
NPS consists of N separately encoded rules, $\{R_1, R_2, \dots, R_N\}$. Each rule consists of two components, $R_i = (\vec{R}_i, \mathrm{MLP}_i)$, where $\vec{R}_i$ is a learned rule embedding vector, which can be thought of as a template defining the condition for when a rule applies; and $\mathrm{MLP}_i$, which determines the action taken by a rule. Both $\vec{R}_i$ and the parameters of $\mathrm{MLP}_i$ are learned along with the other parameters of the model using back-propagation on an objective optimized end-to-end.
In the general form of the model, each slot selects a rule that will be applied to it to change its state. This can potentially be performed several times, with possibly different rules applied at each step. Rule selection is done using an attention mechanism described in detail below. Each rule specifies conditions and actions on a pair of slots. Therefore, while modifying the state of a slot using a rule, it can take the state of another slot into account. The slot which is being modified is called the primary slot and the other is called the contextual slot. The contextual slot is also selected using an attention mechanism described in detail below.
3.1 Computational Steps in NPS
In this section, we give a detailed description of the rule selection and application procedure for the slots. First, we formalize the definitions of a few terms that we will use to explain our method. We use the term primary slot to refer to the slot $V_p$ whose state gets modified by a rule $R_r$. We use the term contextual slot to refer to the slot $V_c$ that the rule $R_r$ takes into account while modifying the state of the primary slot $V_p$.
Notation. We consider a set of N rules $\{R_1, R_2, \dots, R_N\}$ and a set of T input frames $\{x_1, x_2, \dots, x_T\}$. Each frame $x_t$ is encoded into a set of M slots $\{V^t_1, V^t_2, \dots, V^t_M\}$. In the following discussion, we omit the index over t for simplicity.
Step 1 is external to NPS and involves parsing an input image, $x_t$, into slot-based entities conditioned on the previous state of the slot-based entities. Any of the methods proposed in the literature to obtain a slot-wise representation of entities can be used (Burgess et al., 2019; Greff et al., 2019; Goyal et al., 2019, 2020). The next three steps constitute the rule selection and application procedure.
Step 2. For each primary slot $V_p$, we attend to a rule $R_r$ to be applied. Here, the queries come from the primary slot: $q_p = V_p W^q$, and the keys come from the rules: $k_i = \vec{R}_i W^k \;\forall i \in \{1, \dots, N\}$. The rule is selected using a straight-through Gumbel softmax (Jang et al., 2016) to achieve a learnable hard decision: $r = \operatorname{argmax}_i(q_p k_i + \gamma)$, where $\gamma \sim \mathrm{Gumbel}(0, 1)$. This competition is a noisy version of rule matching and prioritization in traditional production systems.
Step 3. For a given primary slot $V_p$ and selected rule $R_r$, a contextual slot $V_c$ is selected using another attention mechanism. In this case the query comes from the primary slot: $q_p = V_p W^q$, and the keys come from all the slots: $k_j = V_j W^q \;\forall j \in \{1, \dots, M\}$. The selection takes place using a straight-through Gumbel softmax similar to step 2: $c = \operatorname{argmax}_j(q_p k_j + \gamma)$, where $\gamma \sim \mathrm{Gumbel}(0, 1)$. Note that each rule application is sparse since it takes into account only 1 contextual slot for modifying
a primary slot, while other methods like GNNs take into account all slots for modifying a primary slot.
Step 4. Rule application: the selected rule $R_r$ is applied to the primary slot $V_p$ based on the rule and the current contents of the primary and contextual slots. The rule-specific $\mathrm{MLP}_r$ takes as input the concatenated representation of the states of the primary and contextual slots, $V_p$ and $V_c$, and produces an output, which is then used to change the state of the primary slot $V_p$ by residual addition.
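A minimal self-contained sketch of Steps 2–4 for a single primary slot is given below; the dimensions, the use of PyTorch's built-in straight-through Gumbel softmax, and the exact form of the residual update are our reading of the text rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NPSStep(nn.Module):
    """One rule-selection / rule-application step (Steps 2-4) for a given primary slot."""
    def __init__(self, d_slot, d_rule, d_key, n_rules, d_hidden=64):
        super().__init__()
        self.rule_embs = nn.Parameter(torch.randn(n_rules, d_rule))   # learned rule vectors
        self.Wq = nn.Linear(d_slot, d_key, bias=False)                 # slot -> query
        self.Wk = nn.Linear(d_rule, d_key, bias=False)                 # rule -> key
        self.rule_mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * d_slot, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_slot))
            for _ in range(n_rules))

    def forward(self, slots, p):
        """slots: (M, d_slot) tensor of slot states; p: index of the primary slot."""
        q_p = self.Wq(slots[p])                                        # query from the primary slot
        # Step 2: select a rule with a straight-through Gumbel-softmax (hard one-hot sample).
        rule_logits = self.Wk(self.rule_embs) @ q_p                    # (n_rules,)
        r = int(F.gumbel_softmax(rule_logits, tau=1.0, hard=True).argmax())
        # Step 3: select a contextual slot with the same mechanism.
        ctx_logits = self.Wq(slots) @ q_p                              # (M,)
        c = int(F.gumbel_softmax(ctx_logits, tau=1.0, hard=True).argmax())
        # Step 4: apply the selected rule's MLP and update the primary slot residually.
        # (Indexing with argmax is for brevity; a fully differentiable version would instead
        #  weight the candidate outputs by the straight-through one-hot vectors.)
        update = self.rule_mlps[r](torch.cat([slots[p], slots[c]]))
        new_slots = slots.clone()
        new_slots[p] = new_slots[p] + update
        return new_slots
```

In the parallel regime described in the next section, each slot would run such a step with one extra null rule whose selection leaves that slot's state unchanged.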
3.2 Rule Application: Sequential vs Parallel Rule Application
In the previous section, we have described how each rule application only considers another contextual slot for the given primary slot i.e., contextual sparsity. We can also consider application sparsity, wherein we use the rules to update the states of only a subset of the slots. In this scenario, only the selected slots would be primary slots. This setting will be helpful when there is an entity in an environment that is stationary, or it is following its own default dynamics unaffected by other entities. Therefore, it does not need to consider other entities to update its state. We explore two scenarios for enabling application sparsity.
Parallel Rule Application. Each of the M slots selects a rule to potentially change its state. To enable sparse changes, we provide an extra Null Rule in addition to the available N rules. If a slot picks the null rule in step 2 of the above procedure, we do not update its state.
Sequential Rule Application. In this setting, only one slot gets updated in each rule application step. Therefore, only one slot is selected as the primary slot. This can be facilitated by modifying step 2 above to select one {primary slot, rule} pair among the $NM$ possible {slot, rule} pairs. The queries come from each slot: $q_j = V_j W^q \;\forall j \in \{1, \dots, M\}$, and the keys come from the rules: $k_i = \vec{R}_i W^k \;\forall i \in \{1, \dots, N\}$. The straight-through Gumbel softmax selects one (primary slot, rule) pair: $p, r = \operatorname{argmax}_{j,i}(q_j k_i + \gamma)$, where $\gamma \sim \mathrm{Gumbel}(0, 1)$. In the sequential regime, we allow the rule application procedure (steps 2, 3, and 4 above) to be performed multiple times iteratively in K rule application stages for each time-step t.
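As an illustration, a sketch of the joint (primary slot, rule) selection over the N·M pairs, reusing the modules from the previous sketch; as before, the straight-through details and dimensions are assumptions rather than the released implementation.

```python
import torch.nn.functional as F

def select_slot_and_rule(slots, rule_embs, Wq, Wk, tau=1.0):
    """Sequential regime: jointly score all (slot, rule) pairs and pick one pair.
    slots: (M, d_slot); rule_embs: (N, d_rule); Wq, Wk: the linear maps from the sketch above."""
    q = Wq(slots)                                   # (M, d_key) one query per slot
    k = Wk(rule_embs)                               # (N, d_key) one key per rule
    scores = q @ k.T                                # (M, N) compatibility of each pair
    onehot = F.gumbel_softmax(scores.flatten(), tau=tau, hard=True)
    idx = int(onehot.argmax())
    p, r = divmod(idx, rule_embs.shape[0])          # primary-slot index, rule index
    return p, r
```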
A pictorial demonstration of both rule application regimes can be found in Figure 3. We provide detailed algorithms for the sequential and parallel regimes in Appendix.
4 Experiments
We demonstrate the effectiveness of NPS on multiple tasks and compare to a comprehensive set of baselines. To show that NPS can learn intuitive rules from the data generating distribution, we design a couple of simple toy experiments with well-defined discrete operations. Results show that NPS can accurately recover each operation defined by the data and learn to represent each operation using a separate rule. We then move to a much more complicated and visually rich setting with abstract physical rules and show that factorization of knowledge into rules as offered by NPS does scale up to such settings. We study and compare the parallel and sequential rule application procedures and try to understand the settings which favour each. We then
evaluate the benefits of reusable, dynamic and sparse interactions as offered by NPS in a wide variety of physical environments by comparing it against various baselines. We conduct ablation studies to assess the contribution of different components of NPS. Here we briefly outline the tasks considered and direct the reader to the Appendix for full details on each task and details on hyperparameter settings.
Discussion of baselines. NPS is an interaction network, therefore we use other widely used interaction networks such as multihead attention and graph neural networks (Goyal et al. (2019), Goyal et al. (2020), Veerapaneni et al. (2019), Kipf et al. (2019)) for comparison. Goyal et al. (2019) and Goyal
et al. (2020) use an attention-based interaction network to capture interactions between the slots, while Veerapaneni et al. (2019) and Kipf et al. (2019) use a GNN-based interaction network. We also consider the recently introduced convolutional interaction network (CIN) (Qi et al., 2021), which captures dense pairwise interactions like a GNN but uses a convolutional network instead of MLPs to better utilize spatial information. The proposed method, similar to other interaction networks, is agnostic to the encoder backbone used to encode the input image into slots; therefore, we compare NPS to other interaction networks across a wide variety of encoder backbones.
4.1 Learning intuitive rules with NPS: Toy Simulations
We designed a couple of simple tasks with well-defined discrete rules to show that NPS can learn intuitive and interpretable rules. We also show the efficiency and effectiveness of the selection procedure (step 2 and step 3 in section 3.1) by comparing against a baseline with many more parameters. Both tasks require modifying only one of the available entities; therefore, the choice of sequential or parallel rule application makes no difference here, since parallel rule application in which all but one slot selects the null rule is similar to sequential rule application with one rule application step. To simplify the presentation, we describe the setup for both tasks using the sequential rule application procedure.
MNIST Transformation. We test whether NPS can learn simple rules for performing transformations on MNIST digits. We generate data with four transformations: {Translate Up, Translate Down, Rotate Right, Rotate Left}. We feed the input image (X) and the transformation (o) to be performed as a one-hot vector to the model. The detailed setup is described in Appendix. For this task, we evaluate whether NPS can learn to use a unique rule for each transformation.
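A sketch of how such training triples could be generated is shown below; the shift magnitude, rotation angle, and use of scipy are illustrative assumptions, since the paper's exact generation parameters are given in its Appendix.

```python
import numpy as np
from scipy.ndimage import rotate, shift

OPS = ["translate_up", "translate_down", "rotate_right", "rotate_left"]

def make_example(img, op_id, pixels=4, degrees=30):
    """img: (28, 28) MNIST digit in [0, 1]. Returns (input image, one-hot op, target image)."""
    if OPS[op_id] == "translate_up":
        target = shift(img, (-pixels, 0), order=0)
    elif OPS[op_id] == "translate_down":
        target = shift(img, (pixels, 0), order=0)
    elif OPS[op_id] == "rotate_right":
        target = rotate(img, -degrees, reshape=False, order=1)
    else:  # rotate_left
        target = rotate(img, degrees, reshape=False, order=1)
    return img, np.eye(len(OPS))[op_id], target
```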
We use 4 rules corresponding to the 4 transformations with the hope that the correct transformations are recovered. Indeed, we observe that NPS successfully learns to represent each transformation using a separate rule as shown in Table 1. Our model achieves an MSE of 0.02. A visualization of the outputs from our model and further details can be found in Appendix C.
Coordinate Arithmetic Task. The model is tasked with performing arithmetic operations on 2D coordinates. Given (X0, Y0) and (X1, Y1), we can apply the following operations: {X Addition: (Xr, Yr) = (X0 + X1, Y0), X Subtraction: (Xr, Yr) = (X0 − X1, Y0), Y Addition: (Xr, Yr) = (X0, Y0 + Y1), Y Subtraction: (Xr, Yr) = (X0, Y0 − Y1)}, where (Xr, Yr) is the resultant coordinate.
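A data generator consistent with the data-collection procedure described below could look as follows; the coordinate range and the uniform choice of operation, primary coordinate, and contextual coordinate are assumptions.

```python
import numpy as np

def sample_example(rng, n_coords=2):
    """Returns input coordinates X, target coordinates Y, and the hidden (op, primary, context)."""
    X = rng.uniform(0.0, 1.0, size=(n_coords, 2))
    p, c = rng.choice(n_coords, size=2, replace=False)   # primary and contextual coordinates
    op = int(rng.integers(4))                             # one of the four hidden operations
    Y = X.copy()
    if op == 0:                                           # X addition
        Y[p, 0] = X[p, 0] + X[c, 0]
    elif op == 1:                                         # X subtraction
        Y[p, 0] = X[p, 0] - X[c, 0]
    elif op == 2:                                         # Y addition
        Y[p, 1] = X[p, 1] + X[c, 1]
    else:                                                 # Y subtraction
        Y[p, 1] = X[p, 1] - X[c, 1]
    return X, Y, (op, p, c)

X, Y, latent = sample_example(np.random.default_rng(0))  # one training example
```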
In this task, the model is given 2 input coordinates X = [(xi, yi), (xj, yj)] and the expected output coordinates Y = [(x̂i, ŷi), (x̂j, ŷj)]. The model is supposed to infer the correct rule to produce the correct output coordinates. During data collection, the true output is obtained by performing a random transformation on a randomly selected coordinate in X (the primary coordinate), taking another randomly selected coordinate from X (the contextual coordinate) into account. The detailed setup is described in Appendix D. We use an NPS model with 4 rules for this task. We use the selection procedure in step 2 and step 3 of algorithm 1 to select the primary coordinate, contextual coordinate, and the rule. For the baseline we replace the selection procedure in NPS (i.e. step 2 and step 3 in
algorithm 1) with a routing MLP similar to Fedus et al. (2021).
This routing MLP has 3 heads (one each for selecting the primary coordinate, contextual coordinate, and the rule). The baseline has 4 times more parameters than NPS. The final output is produced by
the rule MLP which does not have access to the true output, hence the model cannot simply copy the true output to produce the actual output. Unlike the MNIST transformation task, we do not provide the operation to be performed as a one-hot vector input to the model, therefore it needs to infer the available operations from the data demonstrations.
We show the segregation of rules for NPS and the baseline in Figure 4. We can see that NPS learns to use a unique rule for each operation while the baseline struggles to disentangle the underlying operations properly. NPS also outperforms the baseline in terms of MSE achieving an MSE of 0.01±0.001 while the baseline achieves an MSE of 0.04±0.008. To further confirm that NPS learns all the available operations correctly from raw data demonstrations, we use an NPS model with 5 rules. We expect that in this case NPS should utilize only 4 rules since the data describes only 4 unique operations and indeed we observe that NPS ends up mostly utilizing 4 of the available 5 rules as shown in Table 2.
4.2 Parallel vs Sequential Rule Application
We compare the parallel and sequential rule application procedures, to understand the settings that favour one or the other, over two tasks: (1) Bouncing Balls, (2) Shapes Stack. We use the term PNPS to refer to parallel rule application and SNPS to refer to sequential rule application.
Shapes Stack. We use the shapes stack dataset introduced by Groth et al. (2018). This dataset consists of objects stacked on top of each other as shown in Figure 5. These objects fall under the influence of gravity. For our experiments, we follow the same setup as Qi et al. (2021). In this task, given the first frame, the model is tasked with predicting the object bounding boxes for the next t timesteps. The first frame is encoded using a convolutional network followed by RoIPooling (Girshick (2015)) to extract object-centric visual features. The object-centric features are then passed to the dynamics model to predict object bounding boxes of the next t steps. Qi et al. (2021) propose a Region Proposal Interaction Network (RPIN) to solve this task. The dynamics model in RPIN consists of an Interaction Network proposed in Battaglia et al. (2016).
To better utilize spatial information, Qi et al. (2021) propose an extension of the interaction operators in the interaction net to operate on 3D tensors. This is achieved by replacing the MLP operations in the original interaction networks with convolutions. They call this new network the Convolutional Interaction Network (CIN). For the proposed model, we replace this CIN in RPIN by NPS. To ensure a fair comparison to CIN, we use CNNs to represent rules in NPS instead of MLPs. CIN captures all pairwise interactions between objects using a convolutional network. In NPS, we capture sparse interactions (contextual sparsity) as compared to the dense pairwise interactions captured by CIN. Also, in NPS we update only a subset of slots per step instead of all slots (application sparsity).
We consider two evaluation settings. (1) Test setting: the number of rollout timesteps is the same as that seen during training (i.e. t = 15); (2) Transfer setting: the number of rollout timesteps is higher than that seen during training (i.e. t = 30).
We present our results on the shapes stack dataset in Table 3. We can see that both PNPS and SNPS outperform the baseline RPIN in the transfer setting, while only PNPS outperforms the baseline in the test setting and SNPS fails to do so. We can see that PNPS outperforms SNPS. We attribute this to the reduced application sparsity with PNPS, i.e., it is more likely that the state of a slot gets updated in PNPS as compared to SNPS. For instance, consider an NPS model with N uniformly chosen rules and M slots. The probability that the state of a slot gets updated in PNPS is $P_{\mathrm{PNPS}} = (N - 1)/N$ (since 1 rule is the null rule), while the same probability for SNPS is $P_{\mathrm{SNPS}} = 1/M$ (since only 1 slot gets updated per rule application step).
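As a concrete illustration of these two quantities under the uniform-selection assumption: with M = 3 slots and N = 3 rules of which one is the null rule, a given slot is updated with probability $P_{\mathrm{PNPS}} = 2/3 \approx 0.67$ per step under parallel application, but only $P_{\mathrm{SNPS}} = 1/3 \approx 0.33$ under sequential application, so parallel application roughly doubles how often any individual slot changes.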
For this task, we run both PNPS and SNPS for N = {1, 2, 4, 6} rules and M = 3. For any given N, we observe that $P_{\mathrm{PNPS}} > P_{\mathrm{SNPS}}$. Even when we have multiple rule application steps in SNPS, it might end up selecting the same slot to be updated in more than one of these steps. We report the best performance obtained for PNPS and SNPS across all N, which is N = {2 + 1 Null Rule} for PNPS and N = 4 for SNPS, in Table 3. Shapes stack is a dataset that would prefer a model with less application sparsity since all the objects are tightly bound to each other (objects are placed on top of each other); therefore, all objects spend the majority of their time interacting with the objects directly above or below them. We attribute the higher performance of PNPS compared to RPIN to the higher contextual sparsity of PNPS. Each example in the shapes stack task consists of 3 objects. Even though the blocks are tightly bound to each other, each block is only affected by the objects it is in direct contact with. For example, the top-most object is only affected by the object directly below it. The contextual sparsity offered by PNPS is a strong inductive bias to model such sparse interactions while RPIN models all pairwise interactions between the objects. Figure 5 shows an intuitive illustration of the PNPS model for the shapes stack dataset. In the figure, Rule 2 actually refers to the Null Rule, while Rule 1 refers to all the other non-null rules. The bottom-most block picks the Null Rule most times, as the bottom-most block generally does not move.
Bouncing Balls. We consider a bouncing-balls environment in which multiple balls move with billiard-ball dynamics. We validate our model on a colored version of this dataset. This is a next-step prediction task in which the model is tasked with predicting the final binary mask of each ball. We compare the following methods: (a) SCOFF (Goyal et al., 2020): factorization of knowledge in terms of slots (object properties) and schemata, the latter capturing object dynamics; (b) SCOFF++: we extend SCOFF by using the idea of iterative competition as proposed in slot attention (SA) (Locatello et al., 2020a); (c) SCOFF + PNPS/SNPS: we replace the pairwise slot-to-slot interaction in SCOFF++ with parallel or sequential rule application. For comparing different methods, we use the Adjusted Rand Index or ARI (Rand, 1971). To investigate how the factorization in the form of rules allows for extrapolating knowledge from fewer to more objects, we increase the number of objects from 4 during training to 6-8 during testing.
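A sketch of how ARI could be computed from predicted and ground-truth ball masks using scikit-learn is given below; treating each pixel's ball assignment as a cluster label and excluding background pixels are our assumptions about the protocol.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def ari_score(pred_masks, true_masks):
    """pred_masks, true_masks: (K, H, W) per-ball binary masks for one frame.
    Each foreground pixel is labelled with the index of the mask that claims it,
    and ARI compares the two resulting pixel clusterings."""
    pred_labels = pred_masks.argmax(axis=0).ravel()
    true_labels = true_masks.argmax(axis=0).ravel()
    fg = true_masks.sum(axis=0).ravel() > 0          # evaluate on foreground pixels only
    return adjusted_rand_score(true_labels[fg], pred_labels[fg])
```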
We present the results of our experiments in Table 4. Contrary to the shapes stack task, we
see that SNPS outperforms PNPS for the bouncing balls task. The balls are not tightly bound together into a single tower as in the shapes stack. Most of the time, a single ball follows its own dynamics, only occasionally interacting with another ball. Rules in NPS capture interaction dynamics between entities, hence they would only be required to change the state of an entity when it interacts with another entity. In the case of bouncing balls, this interaction takes place through a collision between multiple balls. Since for a single ball, such collisions are rare, SNPS, which has higher application sparsity (less probability of modifying the state of an entity), performs better as compared to PNPS (lower application sparsity). Also note that, SNPS has the ability to compose multiple rules together by virtue of having multiple rule application stages. A visualization of the rule and entity selections by the proposed algorithm can be found in Appendix Figure 9.
Given the analysis in this section, we can conclude that PNPS is expected to work better when interactions among entities are more frequent while SNPS is expected to work better when interactions are rare and most of the time, each entity follows its own dynamics. Note that, for both SNPS and PNPS, the rule application considers only 1 other entity as context. Therefore, both approaches have equal contextual sparsity while the baselines that we consider (SCOFF and RPIN) capture dense pairwise interactions. We discuss the benefits of contextual sparsity in more detail in the next section. More details regarding our setup for the above experiments can be found in Appendix.
4.3 Benefits of Sparse Interactions Offered by NPS
In NPS, one can view the computational graph as a dynamically constructed GNN resulting from applying dynamically selected rules, where the states of the slots are represented on the different nodes of the graph, and different rules dynamically instantiate a hyper-edge between a set of slots (the primary slot and the contextual slot). It is important to emphasize that the topology of the graph induced in NPS is dynamic and sparse (only a few nodes are affected), while in most GNNs the topology is fixed and dense (all nodes are affected). In this section, through a thorough set of experiments, we show that learning sparse and dynamic interactions using NPS indeed works better for the problems we consider than learning dense interactions using GNNs. We consider two types of tasks: (1) Learning Action-Conditioned World Models and (2) Physical Reasoning. We use SNPS for all these experiments since, in the environments that we consider here, interactions among entities are rare.
Learning Action-Conditioned World Models. For learning action-conditioned world models, we follow the same experimental setup as Kipf et al. (2019). Therefore, all the tasks in this section are next-K step (K = {1, 5, 10}) prediction tasks, given the intermediate actions, and with the predictions being performed in the latent space. We use the Hits at Rank 1 (H@1) metric described by Kipf et al. (2019) for evaluation. H@1 is 1 for a particular example if the predicted state representation is nearest to the encoded true observation and 0 otherwise. We report the average of this score over the test set (higher is better).
Physics Environment. The physics environment (Ke et al., 2021) simulates a simple physical world. It consists of blocks of unique but unknown weights. The dynamics for the interaction between blocks is that the movement of heavier blocks pushes lighter blocks on their path. This rule creates an acyclic causal graph between the blocks. For an accurate world model, the learner needs to infer the correct weights through demonstrations. Interactions in this environment are sparse and only involve two blocks at a time, therefore we expect NPS to outperform dense architectures like GNNs. This environment is demonstrated in Appendix Fig 11.
We follow the same setup as Kipf et al. (2019). We use their C-SWM model as baseline. For the proposed model, we only replace the GNN from C-SWM by NPS. GNNs generally share parameters across edges, but in NPS each rule has separate parameters. For a fair comparison to GNN, we use an NPS model with 1 rule. Note that this setting is still different from GNNs as in GNNs at each step every slot is updated by instantiating edges between all pairs of slots, while in NPS an edge is dynamically instantiated between a single pair of slots and only the state of the selected slot (i.e., primary slot) gets updated.
The results of our experiments are presented in Figure 6(a). We can see that NPS outperforms GNNs for all rollouts. Multi-step settings are more difficult to model as errors may get compounded over time steps. The sparsity of NPS (only a single slot affected per step) reduces compounding of errors and enhances symmetry-breaking in the assignment of transformations to rules, while in the
case of GNNs, since all entities are affected per step, there is a higher possibility of errors getting compounded. We can see that even with a single rule, we significantly outperform GNNs thus proving the effectiveness of dynamically instantiating edges between entities.
Atari Games. We also test the proposed model in the more complicated setting of Atari. Atari games also have sparse interactions between entities. For instance, in Pong, any interaction involves only 2 entities: (1) paddle and ball or (2) ball and the wall. Therefore, we expect sparse interactions captured by NPS to outperform GNNs here as well.
We follow the same setup as for the physics environment described in the previous section. We present the results for the Atari experiments in Figure 6(b), showing the average H@1 score across 5 games: Pong, Space Invaders, Freeway, Breakout, and QBert. As expected, we can see that the proposed model achieves a higher score than the GNN-based C-SWM. The results for the Atari experiments reinforce the claim that NPS is especially good at learning sparse interactions.
Learning Rules for Physical Reasoning. To show the effectiveness of the proposed approach for physical reasoning tasks, we evaluate NPS on another dataset: Sprites-MOT, introduced by He et al. (2018). The dataset contains a set of moving objects of various shapes and aims to test whether a model can handle occlusions correctly. Each frame has consistent bounding boxes which may cause the objects to appear or disappear from the scene. A model which performs well should be able to track the motion of all objects irrespective of whether they are occluded or not. We follow the same setup as Weis et al. (2020). We use the OP3 model (Veerapaneni et al., 2019) as our baseline. To test the proposed model, we replace the GNN-based transition model in OP3 with the proposed NPS.
We use the same evaluation protocol as followed by Weis et al. (2020) which is based on the MOT (Multi-object tracking) challenge (Milan et al., 2016). The results on the MOTA and MOTP metrics for this task are presented in Table 5. The results on the other metrics are presented in appendix Table 10. We ask the reader to refer to appendix F.1 for more details about these metrics. We can see that for almost all metrics, NPS outperforms the OP3 baseline. Although this dataset does not contain physical interactions between the objects, sparse rule application should still be useful in dealing with occlusions. At any time step, only a single object is affected by occlusions i.e., it may get
occluded due to another object or due to a prespecified bounding box, while the other objects follow their default dynamics. Therefore, a rule should be applied to only the object (or entity) affected (i.e., not visible) due to occlusion and may take into account any other object or entity that is responsible for the occlusion.
5 Discussion and Conclusion
For AI agents such as robots trying to make sense of their environment, the only observables are low-level variables like pixels in images. To generalize well, an agent must induce high-level entities as well as discover and disentangle the rules that govern how these entities actually interact with each other. Here we have focused on perceptual inference problems and proposed NPS, a neural instantiation of production systems by introducing an important inductive bias in the architecture following the proposals of Bengio (2017); Goyal & Bengio (2020); Ke et al. (2021).
Limitations & Looking Forward. Our experiments highlight the advantages brought by the factorization of knowledge into a small set of entities and sparse sequentially applied rules. Immediate future work would investigate how to take advantage of these inductive biases for more complex physical environments (Ahmed et al., 2020) and novel planning methods, which might be more sample efficient than standard ones (Schrittwieser et al., 2020).
We also find that Sequential and Parallel NPS have different properties suited towards different domains. Future work should explore how to effectively combine these two approaches. We discuss this in more detail in Appendix section E.3.
6 Acknowledgements
The authors would like to thank Matthew Botvinick for useful discussions. The authors would also like to thank Alex Lamb, Stefan Bauer, Nicolas Chapados, Danilo Rezende and Kelsey Allen for brainstorming sessions. We are also thankful to Dianbo Liu, Damjan Kalajdzievski and Osama Ahmed for proofreading. We would like to thank Samsung Electronics Co. Ltd. and CIFAR for funding this research. We would also like to thank Google for providing Google cloud credits used in this work. | 1. What are the strengths and weaknesses of the proposed method in combining deep learning and production systems?
2. How effective is the method in achieving sparse interactions, and how does it handle task-dependent decisions?
3. What are the limitations of the current formulation regarding its generality and expressive power compared to traditional production systems?
4. Are there any inconsistencies or unclear notations in the description of the algorithms?
5. How do the results demonstrate marginal improvements over baseline methods, and what are the considerations for selecting baselines?
6. How has the updated algorithm improved the performance on various tasks, and what are the implications for its applicability in different domains? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a novel method that combines some of the strengths of deep learning with production systems, a classic approach in symbolic AI and cognitive science, and evaluates the method on physical reasoning tasks.
Review
The idea is very exciting and I think it has a lot of promise. Furthermore, the algorithms proposed in the paper creatively combine a range of deep learning techniques (slot-based object-centric representations, key/value attention, gumbel softmax) in an attempt to accomplish this unification. Unfortunately, I think in its current state this work suffers from two limitations:
The results:
The current set of experiments seem to demonstrate only marginal improvements over the baselines in some cases. In many instances there is no specification of how many random seeds are being averaged over, or the number of seeds is very small, and it is not clear whether there is even a statistically significant advantage over the baseline methods.
Furthermore there is typically only one baseline evaluated for each experiment, and it is not clearly specified why this baseline was chosen or whether it represents the current state-of-the-art for the particular metric being evaluated. This is not the only important consideration, but it is certainly important information to include.
The algorithm(s): There are significant questions about the generality of the proposed algorithm, at least in its current formulation.
First, the algorithm is applied somewhat inconsistently to the different experiments, in a way that isn’t entirely transparent. For instance, in the MNIST transformation task, it sounds like the task instruction (specifying the transformation to be performed) is used to query the rules, which, according to their formulation, should make the task instruction itself the ‘primary slot’, meaning that it should be updated by the selected rule. But presumably it is actually the image embedding that is treated as the primary slot, so as to apply the selected transformation. This experiment is only intended as a preliminary test of the model, but I think it actually reveals a way in which important aspects of the implementation have to be manually tailored to a particular task, because the rule selection and application needs to be handled in an entirely different way if rules are selected on the basis of an instruction vs. on the basis of the objects themselves. Another example comes from the coordinate arithmetic task, in which the coordinates need to be assigned to slots in a particular way in order for the model to successfully use the proposed rule selection method. Specifically, the input and output coordinates for each dimension need to be concatenated into a single slot in order to allow the rule to be inferred from pairwise comparisons between each slot and each rule embedding. If, instead, the input and output coordinates were each represented as separate slots (which seems like a natural way to apply the algorithm as currently specified) there would be no way to identify the rule governing the transformation from input to output.
Additionally, there are two versions of the proposed method, sequential vs. parallel NPS, and which version works better is somewhat task-dependent. The authors offer reasonable conjectures as to why a particular method works better on a particular task, but this task-dependency does somewhat undermine their claim that sparse interactions are a generally useful inductive bias. The parallel version, which is somewhat less sparse than the sequential version, works better on tasks involving multiple closely interacting objects, whereas the more sparse sequential version works better on tasks involving more spatially distributed objects that mostly don’t interact with each other. But presumably the task of real-world physical reasoning will involve a mixture of these scenarios, and there is no proposal for how this decision should be made other than by the intuitive judgment of the modeler.
Finally, the current formulation seems to be overly tailored to physical reasoning tasks, and lacking the more general expressive power of the original production systems. For instance, the current formulation only accommodates binary interactions, and there is no way to achieve any kind of hierarchy (i.e. in which the output of one rule serves as the input to another). Many of the design decisions regarding how rules are selected and applied seem as though they have been made specifically with physical reasoning in mind, and it’s not clear how they will be modified to handle other sorts of tasks to which production systems have traditionally been applied, especially tasks involving more abstract reasoning.
In summary, I think the current approach might be more conservatively described as a method for efficiently modeling sparse interactions for the purposes of physical reasoning, rather than as a ‘neural production system’ which seems to imply much more general functionality than it currently has. Nevertheless, I think that it might still merit acceptance if the issues regarding the results and baselines are resolved.
Minor points:
Some of the notation is inconsistent or unclear. For instance, in the description of step 2 (line 149), the key is obtained using a variable denoted as $R_i$; is this the same as the learned rule embedding vector (described in line 124) denoted as $\vec{R}_i$? Is $k_i$ simply equal to $\vec{R}_i$ as indicated in algorithms 1 and 2, or is it multiplied by the weights $W^k$ as indicated on line 149? In Algorithm 2, are the weights $W^k$ really used to obtain $q_p$, or are these supposed to be $W^q$ or $\tilde{W}^q$? Also in Algorithm 2, is $\hat{W}^k$ an additional set of weights not specified in the input description, or is it supposed to be $W^k$?
In the Appendix it says that Table 3 in the paper is wrong and that Table 9 (in the Appendix) reflects the correct results. Hopefully Table 9 will go into the main body of the paper, including the results for both the test and transfer regimes?
It would be helpful to include pointers to the specific relevant sections in the Appendix, and to try and arrange the sections of the Appendix in the same order as the corresponding sections in the main body of the paper.
Update after discussion period
The authors have made improvements to the algorithm, clarified the criteria for selecting baselines and how these relate to the current state-of-the-art, and performed more extensive experiments which demonstrate that the improved algorithm clearly outperforms other approaches on a range of tasks. Notably, these involved experiments on a Raven's Progressive Matrices benchmark, showing some improvements in out-of-distribution generalization, suggesting that NPS may useful in more symbolic domains in addition to the physical reasoning tasks that are the focus of most of the paper. I believe the paper merits acceptance and am updating my score to a 7. |
NIPS | Title
Neural Production Systems
Abstract
Visual environments are structured, consisting of distinct objects or entities. These entities have properties—visible or latent—that determine the manner in which they interact with one another. To partition images into entities, deep-learning researchers have proposed structural inductive biases such as slot-based architectures. To model interactions among entities, equivariant graph neural nets (GNNs) are used, but these are not particularly well suited to the task for two reasons. First, GNNs do not predispose interactions to be sparse, as relationships among independent entities are likely to be. Second, GNNs do not factorize knowledge about interactions in an entity-conditional manner. As an alternative, we take inspiration from cognitive science and resurrect a classic approach, production systems, which consist of a set of rule templates that are applied by binding placeholder variables in the rules to specific entities. Rules are scored on their match to entities, and the best fitting rules are applied to update entity properties. In a series of experiments, we demonstrate that this architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information. This disentangling of knowledge achieves robust future-state prediction in rich visual environments, outperforming state-of-the-art methods using GNNs, and allows for the extrapolation from simple (few object) environments to more complex environments.
1 Introduction
Despite never having taken a physics course, every child beyond a young age appreciates that pushing a plate off the dining table will cause the plate to break. The laws of physics accurately characterize the dynamics of our natural world, and although explicit knowledge of these laws is not necessary to reason, we can reason explicitly about objects interacting through these laws. Humans can verbalize knowledge in propositional expressions such as “If a plate drops from table height, it will break,” and “If a video-game opponent approaches from behind and they are carrying a weapon, they are likely to attack you.” Expressing propositional knowledge is not a strength of current deep learning methods for several reasons. First, propositions are discrete and independent from one another. Second, propositions must be quantified in the manner of first-order logic; for example, the video-game proposition applies to any X for which X is an opponent and has a weapon. Incorporating the ability to express and reason about propositions should improve generalization in deep learning methods because this knowledge is modular— propositions can be formulated independently of each other— and can therefore be acquired incrementally. Propositions can also be composed with each other and applied consistently to all entities that match, yielding a powerful form of systematic generalization.
The classical AI literature from the 1980s can offer deep learning researchers a valuable perspective. In this era, reasoning, planning, and prediction were handled by architectures that performed propositional inference on symbolic knowledge representations. A simple example of such an architecture is
* Equal Contribution, ** Equal Advising 1 Mila, University of Montreal, 2 Google Deepmind, 3 Waverly, 4 Google Research, Brain Team. Corresponding authors: [email protected], [email protected]
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
the production system (Laird et al., 1986; Anderson, 1987), which expresses knowledge by condition-action rules. The rules operate on a working memory, a construct inspired by cognitive science: rule conditions are matched to entities in working memory, and such a match can trigger computational actions that update working memory or external actions that operate on the outside world.
Production systems were typically used to model high-level cognition, e.g., mathematical problem solving or procedure following; perception was not the focus of these models. It was assumed that the results of perception were placed into working memory in a symbolic form that could be operated on with the rules. In this article, we revisit production systems but from a deep learning perspective which naturally integrates perceptual processing and subsequent inference for visual reasoning problems. We describe an end-to-end deep learning model that constructs object-centric representations of entities in videos, and then operates on these entities with differentiable—and thus learnable—production rules. The essence of these rules, carried over from traditional symbolic systems, is that they operate on variables that are bound, or linked, to the entities in the world. In the deep learning implementation, each production rule is represented by a distinct MLP with query-key attention mechanisms to specify the rule-entity binding and to determine when the rule should be triggered for a given entity. We are not the first to propose a neural instantiation of a production system architecture. Touretzky & Hinton (1988) gave a proof of principle that neural net hardware could be hardwired to implement a production system for symbolic reasoning; our work fundamentally differs from theirs in that (1) we focus on perceptual inference problems and (2) we use the architecture as an inductive bias for learning.
1.1 Variables and entities
What makes a rule general-purpose is that it incorporates placeholder variables that can be bound to arbitrary values or—the term we prefer in this article—entities. This notion of binding is familiar in functional programming languages, where these variables are called arguments. Analogously, the use of variables in the production rules we describe enables a model to reason about any set of entities that satisfy the selection criteria of the rule.
Consider a simple function in C like int add(int a, int b). This function binds its two integer operands to variables a and b. The function does not apply if the operands are, say, character strings. The use of variables enables a programmer to reuse the same function to add any two integer values.
In order for rules to operate on entities, these entities must be represented explicitly. That is, the visual world needs to be parsed in a task-relevant manner, e.g., distinguishing the sprites in a video game or the vehicles and pedestrians approaching an autonomous vehicle. Only in the past few years have deep learning vision researchers developed methods for object-centric representation (Le Roux et al., 2011; Eslami et al., 2016; Greff et al., 2016; Raposo et al., 2017; Van Steenkiste et al., 2018; Kosiorek et al., 2018; Engelcke et al., 2019; Burgess et al., 2019; Greff et al., 2019; Locatello et al., 2020a; Ahmed et al., 2020; Goyal et al., 2019; Zablotskaia et al., 2020; Rahaman et al., 2020; Du et al., 2020; Ding et al., 2020; Goyal et al., 2020; Ke et al., 2021). These methods differ in details but share the notion of a fixed number of slots (see Figure 1 for example), also known as object files, each encapsulating information about a single object. Importantly, the slots are interchangeable, meaning that it doesn’t matter if a scene with an apple and an orange encodes the apple in slot 1 and orange in slot 2 or vice-versa.
A model of visual reasoning must not only be able to represent entities but must also express knowledge about entity dynamics and interactions. To ensure systematic predictions, a model must be capable of applying knowledge to an entity regardless of the slot it is in and must be capable of applying the same knowledge to multiple instances of an entity. Several distinct approaches exist in the literature. The predominant approach uses graph neural networks to model slot-to-slot interactions (Scarselli et al., 2008; Bronstein et al., 2017; Watters et al., 2017; Van Steenkiste et al., 2018; Kipf et al., 2018; Battaglia et al., 2018; Tacchetti et al., 2018). To ensure systematicity, the GNN must share parameters among the edges. In a recent article, Goyal et al. (2020) developed a more general framework in which parameters are shared but slots can dynamically select which parameters to use in a state-dependent manner. Each set of parameters is referred to as a schema, and slots use a query-key attention mechanism to select which schema to apply at each time step. Multiple slots can select the same schema. In both GNNs and SCOFF, modeling dynamics involves each slot interacting with each other slot. In the work we describe in this article, we replace the direct slot-to-slot interactions with rules, which mediate sparse interactions among slots (See arrows in Figure 1).
Thus our main contribution is that we introduce NPS, which offers a way to model dynamic and sparse interactions among the variables in a graph and also allows dynamic sharing of multiple sets of parameters among these interactions. Most architectures used for modelling interactions in the current literature use a statically instantiated graph which models all possible interactions for a given variable at each step, i.e., dense interactions. Also, such dense architectures share a single set of parameters across all interactions, which may be quite restrictive in terms of representational capacity. A visual comparison between these two kinds of architectures is shown in Figure 1. Through our experiments we show the advantage of modeling interactions in the proposed manner using NPS in visually rich physical environments. We also show that our method results in an intuitive factorization of rules and entities.
2 Production System
Formally, our notion of a production system consists of a set of entities and a set of rules, along with a mechanism for selecting rules to apply on subsets of the entities. Implicit in a rule is a specification of the properties of relevant entities, e.g., a rule might apply to one type of sprite in a video game but not another. The control flow of a production system dynamically selects rules as well as bindings between rules and entities, allowing different rules to be chosen and different entities to be manipulated at each point in time.
The neural production system we describe shares essential properties with traditional production systems, particularly with regard to the compositionality and generality of the knowledge they embody. Lovett & Anderson (2005) describe four desirable properties commonly attributed to symbolic systems that apply to our work as well.
Production rules are modular. Each production rule represents a unit of knowledge and is atomic, such that any production rule can be intervened on (added, modified, or deleted) independently of other production rules in the system.
Production rules are abstract. Production rules allow for generalization because their conditions may be represented as high-level abstract knowledge that matches a wide range of patterns. These conditions specify the attributes of relationship(s) between entities without specifying the entities themselves. The ability to represent abstract knowledge allows for the transfer of learning across different environments as long as they fit within the conditions of the given production rule.
Production rules are sparse. In order that production rules have broad applicability, they involve only a subset of entities. This assumption imposes a strong prior that dependencies among entities are sparse. In the context of visual reasoning, we conjecture that this prior is superior to what has often been assumed in the past, particularly in the disentanglement literature—independence among entities (Higgins et al., 2016; Chen et al., 2018).
Production rules represent causal knowledge and are thus asymmetric. Each rule can be decomposed into a {condition, action} pair, where the action reflects a state change that is a causal consequence of the conditions being met.
These four properties are sufficient conditions for knowledge to be expressed in production rule form. These properties specify how knowledge is represented, but not what knowledge is represented. The
latter is inferred by learning mechanisms under the inductive bias provided by the form of production rules.
3 Neural Production System: Slots and Sparse Rules
The Neural Production System (NPS), illustrated in Figure 2, provides an architectural backbone that supports the detection and inference of entity (object) representations in an input sequence, and the underlying rules which govern the interactions between these entities in time and space. The input sequence indexed by time step t, {x1, . . . , xt, . . . , xT }, for instance the frames in a video, is processed by a neural encoder (Burgess et al., 2019; Greff et al., 2019; Goyal et al., 2019, 2020) applied to each xt, to obtain a set of M entity representations {V t1 , . . . , V tM}, one for each of the M slots. These representations describe an entity and are updated based on both the previous state, V t−1, and the current input, xt.
NPS consists of N separately encoded rules, {R1,R2, ..,RN}. Each rule consists of two components, Ri = ( ~Ri,MLPi), where ~Ri is a learned rule embedding vector, which can be thought of as a template defining the condition for when a rule applies; and MLPi, which determines the action taken by a rule. Both ~Ri and the parameters of MLPi are learned along with the other parameters of the model using back-propagation on an objective optimized end-to-end.
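To make this parameterization concrete, the following is a minimal PyTorch-style sketch of a rule bank; it is our own illustration with hypothetical names (Rules, rule_embeddings, rule_mlps), not the paper's released code, and it assumes each rule MLP consumes the concatenated primary and contextual slot states.

```python
import torch
import torch.nn as nn

class Rules(nn.Module):
    """A bank of N rules: rule i owns a learned embedding R_i and an MLP_i."""

    def __init__(self, num_rules, embed_dim, slot_dim, hidden_dim=64):
        super().__init__()
        # Learned rule embeddings, one row per rule (the "condition" templates).
        self.rule_embeddings = nn.Parameter(torch.randn(num_rules, embed_dim))
        # One MLP per rule (the "action"); it maps the concatenated primary and
        # contextual slot states to a residual update for the primary slot.
        self.rule_mlps = nn.ModuleList([
            nn.Sequential(
                nn.Linear(2 * slot_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, slot_dim),
            )
            for _ in range(num_rules)
        ])
```

Both the embeddings and the MLP weights are ordinary parameters, so they are trained end-to-end with the rest of the model, as described above.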
In the general form of the model, each slot selects a rule that will be applied to it to change its state. This can potentially be performed several times, with possibly different rules applied at each step. Rule selection is done using an attention mechanism described in detail below. Each rule specifies conditions and actions on a pair of slots. Therefore, while modifying the state of a slot using a rule, it can take the state of another slot into account. The slot which is being modified is called the primary slot and other is called the contextual slot. The contextual slot is also selected using an attention mechanism described in detail below.
3.1 Computational Steps in NPS
In this section, we give a detailed description of the rule selection and application procedure for the slots. First, we will formalize the definitions of a few terms that we will use to explain our method. We use the term primary slot to refer to slot Vp whose state gets modified by a rule Rr. We use the term contextual slot to refer to the slot Vc that the rule Rr takes into account while modifying the state of the primary slot Vp.
Notation. We consider a set of N rules {R1,R2, . . . ,RN} and a set of T input frames {x1,x2, . . . ,xT }. Each frame xt is encoded into a set of M slots {V t1 ,V t2 , . . . ,V tM}. In the following discussion, we omit the index over t for simplicity.
Step 1. is external to NPS and involves parsing an input image, xt, into slot-based entities conditioned on the previous state of the slot-based entities. Any of the methods proposed in the literature to obtain a slot-wise representation of entities can be used (Burgess et al., 2019; Greff et al., 2019; Goyal et al., 2019, 2020). The next three steps constitute the rule selection and application procedure.
Step 2. For each primary slot Vp, we attend to a rule Rr to be applied. Here, the queries come from the primary slot: qp = VpW q, and the keys come from the rules: ki = ~RiW k ∀i ∈ {1, . . . ,N}. The rule is selected using a straight-through Gumbel softmax (Jang et al., 2016) to achieve a learnable hard decision: r = argmaxi(qpki + γ), where γ ∼ Gumbel(0, 1). This competition is a noisy version of rule matching and prioritization in traditional production systems.
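A hedged sketch of this selection step is given below; torch.nn.functional.gumbel_softmax with hard=True is one way to realize the straight-through hard choice, and W_q, W_k are assumed to be learned linear projections (our own names).

```python
import torch
import torch.nn.functional as F

def select_rule(primary_slots, rule_embeddings, W_q, W_k, tau=1.0):
    """Step 2: each primary slot picks one rule via a straight-through Gumbel softmax.

    primary_slots:   (M, slot_dim) slot states acting as queries
    rule_embeddings: (N, embed_dim) learned rule templates acting as keys
    """
    q = W_q(primary_slots)            # (M, d) queries q_p = V_p W^q
    k = W_k(rule_embeddings)          # (N, d) keys    k_i = R_i W^k
    scores = q @ k.t()                # (M, N) match scores q_p k_i
    # hard=True returns a one-hot choice in the forward pass while the backward
    # pass uses the soft Gumbel-softmax gradient (straight-through estimator).
    return F.gumbel_softmax(scores, tau=tau, hard=True)   # (M, N) one-hot rows
```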
Step 3. For a given primary slot Vp and selected rule Rr, a contextual slot Vc is selected using another attention mechanism. In this case the query comes from the primary slot: qp = VpW q, and the keys from all the slots: kj = VjW q ∀j ∈ {1, . . . ,M}. The selection takes place using a straight-through Gumbel softmax similar to step 2: c = argmaxj(qpkj + γ), where γ ∼ Gumbel(0, 1). Note that each rule application is sparse since it takes into account only 1 contextual slot for modifying
a primary slot, while other methods like GNNs take into account all slots for modifying a primary slot.
Step 4. Rule Application: the selected rule Rr is applied to the primary slot Vp based on the rule and the current contents of the primary and contextual slots. The rule-specific MLPr, takes as input the concatenated representation of the state of the primary and contextual slots, Vp and Vc, and produces an output, which is then used to change the state of the primary slot Vp by residual addition.
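Steps 3 and 4 can be sketched in the same style; the helper below is hypothetical, reuses the Rules bank and projections introduced above, and handles a single primary slot for clarity.

```python
import torch
import torch.nn.functional as F

def apply_rule(slots, p, rule_one_hot, rules, Wc_q, Wc_k, tau=1.0):
    """Steps 3-4: pick a contextual slot for primary slot p, then apply the rule.

    slots:        (M, slot_dim) current slot states V_1 ... V_M
    p:            index of the primary slot
    rule_one_hot: (N,) one-hot rule choice from step 2
    rules:        a Rules bank holding one MLP per rule
    """
    q = Wc_q(slots[p])                                             # query from V_p
    k = Wc_k(slots)                                                # keys from all slots
    ctx_choice = F.gumbel_softmax(q @ k.t(), tau=tau, hard=True)   # pick V_c
    contextual = ctx_choice @ slots                                # (slot_dim,)

    r = int(rule_one_hot.argmax())                                 # selected rule index
    update = rules.rule_mlps[r](torch.cat([slots[p], contextual]))
    new_slots = slots.clone()
    new_slots[p] = slots[p] + update                               # residual update of V_p
    return new_slots
```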
3.2 Rule Application: Sequential vs Parallel Rule Application
In the previous section, we have described how each rule application considers only one other contextual slot for the given primary slot, i.e., contextual sparsity. We can also consider application sparsity, wherein we use the rules to update the states of only a subset of the slots. In this scenario, only the selected slots would be primary slots. This setting will be helpful when there is an entity in an environment that is stationary, or it is following its own default dynamics unaffected by other entities. Therefore, it does not need to consider other entities to update its state. We explore two scenarios for enabling application sparsity.
Parallel Rule Application. Each of the M slots selects a rule to potentially change its state. To enable sparse changes, we provide an extra Null Rule in addition to the available N rules. If a slot picks the null rule in step 2 of the above procedure, we do not update its state.
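A compact sketch of the parallel regime is shown below; it is hypothetical code in the spirit of the steps above, where row 0 of null_and_rule_embeds is a learned embedding reserved for the null rule and rule_mlps holds the N non-null rule MLPs.

```python
import torch
import torch.nn.functional as F

def parallel_step(slots, null_and_rule_embeds, rule_mlps, W_q, W_k, Wc_q, Wc_k, tau=1.0):
    """Parallel regime: every slot picks a rule; picking the null rule (index 0)
    leaves that slot unchanged, which yields application sparsity."""
    choice = F.gumbel_softmax(W_q(slots) @ W_k(null_and_rule_embeds).t(),
                              tau=tau, hard=True)            # (M, N + 1)
    new_slots = slots.clone()
    for p in range(slots.shape[0]):
        r = int(choice[p].argmax())
        if r == 0:                                           # null rule: skip update
            continue
        ctx = F.gumbel_softmax(Wc_q(slots[p]) @ Wc_k(slots).t(),
                               tau=tau, hard=True) @ slots   # contextual slot V_c
        new_slots[p] = slots[p] + rule_mlps[r - 1](torch.cat([slots[p], ctx]))
    return new_slots
```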
Sequential Rule Application. In this setting, only one slot gets updated in each rule application step. Therefore, only one slot is selected as the primary slot. This can be facilitated by modifying step 2 above to select one {primary
slot, rule} pair among NM {rule, slot} pairs. The queries come from each slot: qj = VjW q ∀j ∈ {1, . . . ,M}, the keys come from the rules: ki = RiW k ∀i ∈ {1, . . . ,N}. The straight-through Gumbel softmax selects one (primary slot, rule) pair: p, r = argmaxi,j(qpki + γ), where γ ∼ Gumbel(0, 1). In the sequential regime, we allow the rule application procedure (step 2, 3, 4 above) to be performed multiple times iteratively in K rule application stages for each time-step t.
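In the sequential regime, the joint choice over all N·M (slot, rule) pairs can be sketched as follows (again a hypothetical helper using the notation above):

```python
import torch
import torch.nn.functional as F

def select_slot_and_rule(slots, rule_embeddings, W_q, W_k, tau=1.0):
    """Sequential regime: pick one (primary slot, rule) pair among M*N candidates."""
    q = W_q(slots)                         # (M, d) one query per slot
    k = W_k(rule_embeddings)               # (N, d) one key per rule
    scores = q @ k.t()                     # (M, N) pairwise match scores
    flat_choice = F.gumbel_softmax(scores.reshape(-1), tau=tau, hard=True)
    idx = int(flat_choice.argmax())
    p, r = divmod(idx, scores.shape[1])    # primary slot index, rule index
    return p, r
```

Repeating this selection (and the subsequent rule application) for K stages yields the iterative sequential procedure described above.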
A pictorial demonstration of both rule application regimes can be found in Figure 3. We provide detailed algorithms for the sequential and parallel regimes in Appendix.
4 Experiments
We demonstrate the effectiveness of NPS on multiple tasks and compare to a comprehensive set of baselines. To show that NPS can learn intuitive rules from the data generating distribution, we design a couple of simple toy experiments with well-defined discrete operations. Results show that NPS can accurately recover each operation defined by the data and learn to represent each operation using a separate rule. We then move to a much more complicated and visually rich setting with abstract physical rules and show that factorization of knowledge into rules as offered by NPS does scale up to such settings. We study and compare the parallel and sequential rule application procedures and try to understand the settings which favour each. We then
evaluate the benefits of reusable, dynamic and sparse interactions as offered by NPS in a wide variety of physical environments by comparing it against various baselines. We conduct ablation studies to assess the contribution of different components of NPS. Here we briefly outline the tasks considered and direct the reader to the Appendix for full details on each task and details on hyperparameter settings.
Discussion of baselines. NPS is an interaction network, therefore we use other widely used interaction networks such as multihead attention and graph neural networks (Goyal et al. (2019), Goyal et al. (2020), Veerapaneni et al. (2019), Kipf et al. (2019)) for comparison. Goyal et al. (2019) and Goyal
et al. (2020) use an attention based interaction network to capture interactions between the slots, while Veerapaneni et al. (2019) and Kipf et al. (2019) use a GNN based interaction network. We also consider the recently introduced convolutional interaction network (CIN) (Qi et al., 2021) which captures dense pairwise interactions like GNN but uses a convolutional network instead of MLPs to better utilize spatial information. The proposed method, similar to other interaction networks, is agnostic to the encoder backbone used to encode the input image into slots, therefore we compare NPS to other interaction networks across a wide-variety of encoder backbones.
4.1 Learning intuitive rules with NPS: Toy Simulations
We designed a couple of simple tasks with well-defined discrete rules to show that NPS can learn intuitive and interpretable rules. We also show the efficiency and effectiveness of the selection procedure (step 2 and step 3 in section 3.1) by comparing against a baseline with many more parameters. Both tasks require a single modification of only one of the available entities, therefore the use of sequential or parallel rule application would not make a difference here since parallel rule application in which all-but-one slots select the null rule is similar to sequential rule application with 1 rule application step. To simplify the presentation, we describe the setup for both tasks using the sequential rule application procedure.
MNIST Transformation. We test whether NPS can learn simple rules for performing transformations on MNIST digits. We generate data with four transformations: {Translate Up, Translate Down, Rotate Right, Rotate Left}. We feed the input image (X) and the transformation (o) to be performed as a one-hot vector to the model. The detailed setup is described in Appendix. For this task, we evaluate whether NPS can learn to use a unique rule for each transformation.
We use 4 rules corresponding to the 4 transformations with the hope that the correct transformations are recovered. Indeed, we observe that NPS successfully learns to represent each transformation using a separate rule as shown in Table 1. Our model achieves an MSE of 0.02. A visualization of the outputs from our model and further details can be found in Appendix C.
Coordinate Arithmetic Task. The model is tasked with performing arithmetic operations on 2D coordinates. Given (X0, Y0) and (X1, Y1), we can apply the following operations: {X Addition: (Xr, Yr) = (X0 + X1, Y0), X Subtraction: (Xr, Yr) = (X0 − X1, Y0), Y Addition: (Xr, Yr) = (X0, Y0 + Y1), Y Subtraction: (Xr, Yr) = (X0, Y0 − Y1)}, where (Xr, Yr) is the resultant coordinate.
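For concreteness, the four ground-truth operations and the data-generating process can be sketched in a few lines of Python; this is our own illustrative sketch of the setup described here and in Appendix D, not code from the paper.

```python
import random

# Ground-truth operations on a primary coordinate (x0, y0) and a contextual (x1, y1).
OPS = {
    "x_add": lambda x0, y0, x1, y1: (x0 + x1, y0),
    "x_sub": lambda x0, y0, x1, y1: (x0 - x1, y0),
    "y_add": lambda x0, y0, x1, y1: (x0, y0 + y1),
    "y_sub": lambda x0, y0, x1, y1: (x0, y0 - y1),
}

def sample_example():
    """Draw two random coordinates and apply a random operation to one of them."""
    coords = [(random.random(), random.random()) for _ in range(2)]
    primary, context = random.sample(range(2), 2)
    op = random.choice(list(OPS))
    target = list(coords)
    target[primary] = OPS[op](*coords[primary], *coords[context])
    # Only (coords, target) are shown to the model; the operation label stays latent.
    return coords, target
```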
In this task, the model is given 2 input coordinates X = [(xi, yi), (xj , yj)] and the expected output coordinates Y = [(x̂i, ŷi), (x̂j , ŷj)]. The model is supposed to infer the correct rule to produce the correct output coordinates. During data collection, the true output is obtained by performing a random transformation on a randomly selected coordinate in X (primary coordinate), taking another randomly selected coordinate from X (contextual coordinate) into account. The detailed setup is described in Appendix D. We use an NPS model with 4 rules for this task. We use the selection procedure in step 2 and step 3 of algorithm 1 to select the primary coordinate, contextual coordinate, and the rule. For the baseline we replace the selection procedure in NPS (i.e. step 2 and step 3 in
algorithm 1) with a routing MLP similar to Fedus et al. (2021).
This routing MLP has 3 heads (one each for selecting the primary coordinate, contextual coordinate, and the rule). The baseline has 4 times more parameters than NPS. The final output is produced by
the rule MLP which does not have access to the true output, hence the model cannot simply copy the true output to produce the actual output. Unlike the MNIST transformation task, we do not provide the operation to be performed as a one-hot vector input to the model, therefore it needs to infer the available operations from the data demonstrations.
We show the segregation of rules for NPS and the baseline in Figure 4. We can see that NPS learns to use a unique rule for each operation while the baseline struggles to disentangle the underlying operations properly. NPS also outperforms the baseline in terms of MSE achieving an MSE of 0.01±0.001 while the baseline achieves an MSE of 0.04±0.008. To further confirm that NPS learns all the available operations correctly from raw data demonstrations, we use an NPS model with 5 rules. We expect that in this case NPS should utilize only 4 rules since the data describes only 4 unique operations and indeed we observe that NPS ends up mostly utilizing 4 of the available 5 rules as shown in Table 2.
4.2 Parallel vs Sequential Rule Application
We compare the parallel and sequential rule application procedures, to understand the settings that favour one or the other, over two tasks: (1) Bouncing Balls, (2) Shapes Stack. We use the term PNPS to refer to parallel rule application and SNPS to refer to sequential rule application.
Shapes Stack. We use the shapes stack dataset introduced by Groth et al. (2018). This dataset consists of objects stacked on top of each other as shown in Figure 5. These objects fall under the influence of gravity. For our experiments, we follow the same setup as Qi et al. (2021). In this task, given the first frame, the model is tasked with predicting the object bounding boxes for the next t timesteps. The first frame is encoded using a convolutional network followed by RoIPooling (Girshick, 2015) to extract object-centric visual features. The object-centric features are then passed to the dynamics model to predict object bounding boxes of the next t steps. Qi et al. (2021) propose a Region Proposal Interaction Network (RPIN) to solve this task. The dynamics model in RPIN consists of an Interaction Network proposed in Battaglia et al. (2016).
To better utilize spatial information, Qi et al. (2021) propose an extension of the interaction operators in the interaction network to operate on 3D tensors. This is achieved by replacing the MLP operations in the original interaction networks with convolutions. They call this new network Convolutional Interaction Network (CIN). For the proposed model, we replace this CIN in RPIN by NPS. To ensure a fair comparison to CIN, we use CNNs to represent rules in NPS instead of MLPs. CIN captures all pairwise interactions between objects using a convolutional network. In NPS, we capture sparse interactions (contextual sparsity) as compared to the dense pairwise interactions captured by CIN. Also, in NPS we update only a subset of slots per step instead of all slots (application sparsity).
We consider two evaluation settings. (1) Test setting: The number of rollout timesteps is the same as that seen during training (i.e. t = 15); (2) Transfer setting: The number of rollout timesteps is higher than that seen during training (i.e. t = 30).
We present our results on the shapes stack dataset in Table 3. We can see that both PNPS and SNPS outperform the baseline RPIN in the transfer setting, while only PNPS outperforms the baseline in the test setting and SNPS fails to do so. We can see that PNPS outperforms SNPS. We attribute this to the reduced application sparsity with PNPS, i.e., it is more likely that the state of a slot gets updated in PNPS as compared to SNPS. For instance, consider an NPS model with N uniformly chosen rules and M slots. The probability that the state of a slot gets updated in PNPS is PPNPS = (N − 1)/N (since 1 rule is the null rule), while the same probability for SNPS is PSNPS = 1/M (since only 1 slot gets updated per rule application step).
For this task, we run both PNPS and SNPS for N = {1, 2, 4, 6} rules and M = 3. For any given N , we observe that PPNPS > PSNPS . Even when we have multiple rule application steps in SNPS, it might end up selecting the same slot to be updated in more than one of these steps. We report the best performance obtained for PNPS and SNPS across all N , which is N = {2 + 1 Null Rule} for PNPS and N = 4 for SNPS, in Table 3. Shapes stack is a dataset that would prefer a model with less application sparsity since all the objects are tightly bound to each other (objects are placed on top of each other), therefore all objects spend the majority of their time interacting with the objects directly above or below them. We attribute the higher performance of PNPS compared to RPIN to the higher contextual sparsity of PNPS. Each example in the shapes stack task consists of 3 objects. Even though the blocks are tightly bound to each other, each block is only affected by the objects it is in direct contact with. For example, the top-most object is only affected by the object directly below it. The contextual sparsity offered by PNPS is a strong inductive bias to model such sparse interactions while RPIN models all pairwise interactions between the objects. Figure 5 shows an intuitive illustration of the PNPS model for the shapes stack dataset. In the figure, Rule 2 actually refers to the Null Rule, while Rule 1 refers to all the other non-null rules. The bottom-most block picks the Null Rule most times, as the bottom-most block generally does not move.
Bouncing Balls. We consider a bouncing-balls environment in which multiple balls move with billiard-ball dynamics. We validate our model on a colored version of this dataset. This is a next-step prediction task in which the model is tasked with predicting the final binary mask of each ball. We compare the following methods: (a) SCOFF (Goyal et al., 2020): factorization of knowledge in terms of slots (object properties) and schemata, the latter capturing object dynamics; (b) SCOFF++: we extend SCOFF by using the idea of iterative competition as proposed in slot attention (SA) (Locatello et al., 2020a); (c) SCOFF + PNPS/SNPS: we replace pairwise slot-to-slot interaction in SCOFF++ with parallel or sequential rule application. For comparing different methods, we use the Adjusted Rand Index or ARI (Rand, 1971). To investigate how the factorization in the form of rules allows for extrapolating knowledge from fewer to more objects, we increase the number of objects from 4 during training to 6-8 during testing.
We present the results of our experiments in Table 4. Contrary to the shapes stack task, we
see that SNPS outperforms PNPS for the bouncing balls task. The balls are not tightly bound together into a single tower as in the shapes stack. Most of the time, a single ball follows its own dynamics, only occasionally interacting with another ball. Rules in NPS capture interaction dynamics between entities, hence they would only be required to change the state of an entity when it interacts with another entity. In the case of bouncing balls, this interaction takes place through a collision between multiple balls. Since for a single ball, such collisions are rare, SNPS, which has higher application sparsity (less probability of modifying the state of an entity), performs better as compared to PNPS (lower application sparsity). Also note that, SNPS has the ability to compose multiple rules together by virtue of having multiple rule application stages. A visualization of the rule and entity selections by the proposed algorithm can be found in Appendix Figure 9.
Given the analysis in this section, we can conclude that PNPS is expected to work better when interactions among entities are more frequent while SNPS is expected to work better when interactions are rare and most of the time, each entity follows its own dynamics. Note that, for both SNPS and PNPS, the rule application considers only 1 other entity as context. Therefore, both approaches have equal contextual sparsity while the baselines that we consider (SCOFF and RPIN) capture dense pairwise interactions. We discuss the benefits of contextual sparsity in more detail in the next section. More details regarding our setup for the above experiments can be found in Appendix.
4.3 Benefits of Sparse Interactions Offered by NPS
In NPS, one can view the computational graph as a dynamically constructed GNN resulting from applying dynamically selected rules, where the states of the slots are represented on the different nodes of the graph, and different rules dynamically instantiate a hyper-edge between a set of slots (the primary slot and the contextual slot). It is important to emphasize that the topology of the graph induced in NPS is dynamic and sparse (only a few nodes affected), while in most GNNs the topology is fixed and dense (all nodes affected). In this section, through a thorough set of experiments, we show that learning sparse and dynamic interactions using NPS indeed works better for the problems we consider than learning dense interactions using GNNs. We consider two types of tasks: (1) Learning Action-Conditioned World Models and (2) Physical Reasoning. We use SNPS for all these experiments since in the environments that we consider here, interactions among entities are rare.
Learning Action-Conditioned World Models. For learning action-conditioned world models, we follow the same experimental setup as Kipf et al. (2019). Therefore, all the tasks in this section are next-K step (K = {1, 5, 10}) prediction tasks, given the intermediate actions, and with the predictions being performed in the latent space. We use the Hits at Rank 1 (H@1) metrics described by Kipf et al. (2019) for evaluation. H@1 is 1 for a particular example if the predicted state representation is nearest to the encoded true observation and 0 otherwise. We report the average of this score over the test set (higher is better).
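As a rough illustration, the H@1 score for a batch of predictions can be computed as below; pred and encoded_obs are assumed to be latent-state matrices of shape (batch, dim), and the sketch follows our reading of the metric rather than the exact evaluation code of Kipf et al. (2019).

```python
import torch

def hits_at_1(pred, encoded_obs):
    """H@1: fraction of examples whose predicted latent state is nearest to its
    own encoded true observation among all encoded observations in the batch."""
    dists = torch.cdist(pred, encoded_obs)        # (batch, batch) pairwise distances
    nearest = dists.argmin(dim=1)                 # index of the nearest encoded obs
    targets = torch.arange(pred.shape[0])
    return (nearest == targets).float().mean().item()
```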
Physics Environment. The physics environment (Ke et al., 2021) simulates a simple physical world. It consists of blocks of unique but unknown weights. The dynamics for the interaction between blocks is that the movement of heavier blocks pushes lighter blocks on their path. This rule creates an acyclic causal graph between the blocks. For an accurate world model, the learner needs to infer the correct weights through demonstrations. Interactions in this environment are sparse and only involve two blocks at a time, therefore we expect NPS to outperform dense architectures like GNNs. This environment is demonstrated in Appendix Fig 11.
We follow the same setup as Kipf et al. (2019). We use their C-SWM model as baseline. For the proposed model, we only replace the GNN from C-SWM by NPS. GNNs generally share parameters across edges, but in NPS each rule has separate parameters. For a fair comparison to GNN, we use an NPS model with 1 rule. Note that this setting is still different from GNNs as in GNNs at each step every slot is updated by instantiating edges between all pairs of slots, while in NPS an edge is dynamically instantiated between a single pair of slots and only the state of the selected slot (i.e., primary slot) gets updated.
The results of our experiments are presented in Figure 6(a). We can see that NPS outperforms GNNs for all rollouts. Multi-step settings are more difficult to model as errors may get compounded over time steps. The sparsity of NPS (only a single slot affected per step) reduces compounding of errors and enhances symmetry-breaking in the assignment of transformations to rules, while in the
case of GNNs, since all entities are affected per step, there is a higher possibility of errors getting compounded. We can see that even with a single rule, we significantly outperform GNNs thus proving the effectiveness of dynamically instantiating edges between entities.
Atari Games. We also test the proposed model in the more complicated setting of Atari. Atari games also have sparse interactions between entities. For instance, in Pong, any interaction involves only 2 entities: (1) paddle and ball or (2) ball and the wall. Therefore, we expect sparse interactions captured by NPS to outperform GNNs here as well.
We follow the same setup as for the physics environment described in the previous section. We present the results for the Atari experiments in Figure 6(b), showing the average H@1 score across 5 games: Pong, Space Invaders, Freeway, Breakout, and QBert. As expected, we can see that the proposed model achieves a higher score than the GNN-based C-SWM. The results for the Atari experiments reinforce the claim that NPS is especially good at learning sparse interactions.
Learning Rules for Physical Reasoning. To show the effectiveness of the proposed approach for physical reasoning tasks, we evaluate NPS on another dataset: Sprites-MOT, introduced by He et al. (2018). The dataset contains a set of moving objects of various shapes. This dataset aims to test whether a model can handle occlusions correctly. Each frame has consistent bounding boxes which may cause the objects to appear or disappear from the scene. A model which performs well should be able to track the motion of all objects irrespective of whether they are occluded or not. We follow the same setup as Weis et al. (2020). We use the OP3 model (Veerapaneni et al., 2019) as our baseline. To test the proposed model, we replace the GNN-based transition model in OP3 with the proposed NPS.
We use the same evaluation protocol as followed by Weis et al. (2020) which is based on the MOT (Multi-object tracking) challenge (Milan et al., 2016). The results on the MOTA and MOTP metrics for this task are presented in Table 5. The results on the other metrics are presented in appendix Table 10. We ask the reader to refer to appendix F.1 for more details about these metrics. We can see that for almost all metrics, NPS outperforms the OP3 baseline. Although this dataset does not contain physical interactions between the objects, sparse rule application should still be useful in dealing with occlusions. At any time step, only a single object is affected by occlusions i.e., it may get
occluded due to another object or due to a prespecified bounding box, while the other objects follow their default dynamics. Therefore, a rule should be applied to only the object (or entity) affected (i.e., not visible) due to occlusion and may take into account any other object or entity that is responsible for the occlusion.
5 Discussion and Conclusion
For AI agents such as robots trying to make sense of their environment, the only observables are low-level variables like pixels in images. To generalize well, an agent must induce high-level entities as well as discover and disentangle the rules that govern how these entities actually interact with each other. Here we have focused on perceptual inference problems and proposed NPS, a neural instantiation of production systems by introducing an important inductive bias in the architecture following the proposals of Bengio (2017); Goyal & Bengio (2020); Ke et al. (2021).
Limitations & Looking Forward. Our experiments highlight the advantages brought by the factorization of knowledge into a small set of entities and sparse sequentially applied rules. Immediate future work would investigate how to take advantage of these inductive biases for more complex physical environments (Ahmed et al., 2020) and novel planning methods, which might be more sample efficient than standard ones (Schrittwieser et al., 2020).
We also find that Sequential and Parallel NPS have different properties suited towards different domains. Future work should explore how to effectively combine these two approaches. We discuss this in more detail in Appendix section E.3.
6 Acknowledgements
The authors would like to thank Matthew Botvinick for useful discussions. The authors would also like to thank Alex Lamb, Stefan Bauer, Nicolas Chapados, Danilo Rezende and Kelsey Allen for brainstorming sessions. We are also thankful to Dianbo Liu, Damjan Kalajdzievski and Osama Ahmed for proofreading. We would like to thank Samsung Electronics Co. Ltd. and CIFAR for funding this research. We would also like to thank Google for providing Google cloud credits used in this work. | 1. What is the focus and contribution of the paper on Neural Production Systems (NPS)?
2. What are the strengths and weaknesses of the proposed approach compared to prior works, specifically SCOFF?
3. Do you have any concerns regarding the novelty and significance of the paper's content?
4. How does the reviewer assess the clarity and quality of the paper's writing and experimental results?
5. Are there any suggestions for improving the paper's content or comparisons with other works? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes Neural Production Systems (NPS), a (neural) rule-based learning system that operates on explicit entities. The main idea is to extend traditional production systems to be end-to-end learnable with neural networks. Unlike previous NN systems that model entity-wise interactions, such as GNNs, NPS allows sparse interactions between entities, which is favorable in certain scenarios.
Review
Originality & Significance
The idea of extending the traditional rule-based system to neural net rule-based system is intriguing as it can combine the benefit of excellent perception of NN and generalization of rule-based systems. Also, rules in a given environment can be naturally discovered through the learning process. This paper shows promising results on how rules can be learned. However, I have a concern that this paper seems like a marginal improvement over the previously proposed SCOFF (Goyal et al. 2020) model which I further discuss below.
Quality
The SCOFF model also has a soft-attention-based selection process for entities and hard selection with Gumbel-softmax for schemata. Although the intuition and exact formulation are different, the rule application process in NPS can be modelled using schema + soft competition among entities in SCOFF. In that case, the main benefit of NPS comes from having sparse interactions between entities, because these can be more efficient and easier to learn, which could be valuable if there is a significant improvement in performance. Therefore, the experiments section should demonstrate that better, in my opinion.
The experiments could include more analysis of how NPS can learn certain rules that SCOFF cannot. It would be great to see whether the improvements in NPS enable the perfect separation of rules in Sec 4.1, and whether SCOFF cannot do it perfectly.
The shapes stack and Sec4.3 could also compare with SCOFF as a stronger baseline.
The internal behavior of NPS is analyzed for simple tasks such as MNIST transformation, but systems like NPS can really shine when applied to complex environments with many entities such as Bouncing Balls. The results would be strengthened if there were a detailed analysis or visualization of how entities are selected and how rules are applied to the selected entities.
Line 402 and Figure 5 are overlapping.
Clarity
The paper is easy to read and well-written. |
NIPS | Title
Doubly Robust Counterfactual Classification
Abstract
We study counterfactual classification as a new tool for decision-making under hypothetical (contrary to fact) scenarios. We propose a doubly-robust nonparametric estimator for a general counterfactual classifier, where we can incorporate flexible constraints by casting the classification problem as a nonlinear mathematical program involving counterfactuals. We go on to analyze the rates of convergence of the estimator and provide a closed-form expression for its asymptotic distribution. Our analysis shows that the proposed estimator is robust against nuisance model misspecification, and can attain fast √ n rates with tractable inference even when using nonparametric machine learning approaches. We study the empirical performance of our methods by simulation and apply them for recidivism risk prediction.
1 Introduction
Counterfactual or potential outcomes are often used to describe how an individual would respond to a specific treatment or event, irrespective of whether the event actually takes place. Counterfactual outcomes are commonly used for causal inference, where we are interested in measuring the effect of a treatment on an outcome variable [15, 16, 45].
Recently, counterfactual outcomes have also proved useful for predicting outcomes under hypothetical interventions. This is commonly referred to as counterfactual prediction. Counterfactual prediction can be particularly useful to inform decision-making in clinical practice. For example, in order for physicians to make effective treatment decisions, they often need to predict risk scores assuming no treatment is given; if a patient’s risk is relatively low, then she or he may not need treatment. However, when a treatment is initiated after baseline, simply operationalizing the hypothetical treatment as another baseline predictor will rarely give the correct (counterfactual) risk estimates because of confounding [58]. Counterfactual prediction can be also helpful when we want our prediction model developed in one setting to yield predictions successfully transportable to other settings with different treatment patterns. Suppose that we develop our risk prediction model in a setting where most patients have access to an effective (post-baseline) treatment. However, if we deploy our factual prediction model in a new setting in which few individuals have access to the treatment, our model is likely to fail in the sense that it may not be able to accurately identify high-risk individuals. Counterfactual prediction may allow us to achieve more robust model performance compared to factual prediction, even when model deployment influences behaviors that affect risk. [see, e.g., 10, 27, 54, for more examples].
However, the problem of counterfactual prediction brings challenges that do not arise in typical prediction problems because the data needed to build the predictive models are inherently not fully
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
observable. Surprisingly, while the development of modern prediction modeling has greatly enriched the counterfactual-outcome-based causal inference particularly via semi-parametric methods [20, 23], the use of causal inference to improve prediction modeling has received less attention [see, e.g., 10, 46, for a discussion on the subject].
In this work, we study counterfactual classification, a special case of counterfactual prediction where the outcome is discrete. Our approach allows investigators to flexibly incorporate various constraints into the models, not only to enhance their predictive performance but also to accommodate a wide range of practical constraints relevant to their classification tasks. Counterfactual classification poses both theoretical and practical challenges, as a result of the fact that in our setting, even without any constraints, the estimand is not expressible as a closed form functional unlike typical causal inference problems. We tackle this problem by framing counterfactual classification as nonlinear stochastic programming with counterfactual components.
1.1 Related Work
Our work lies at the intersection of causal inference and stochastic optimization.
Counterfactual prediction is closely related to estimation of the conditional average treatment effect (CATE) in causal inference, which plays a crucial role in precision medicine and individualized policy. Let Y^a denote the counterfactual outcome that would have been observed under treatment or intervention A = a, A ∈ {0, 1}. The CATE for subjects with covariate X = x is defined as τ(x) = E[Y^1 − Y^0 | X = x]. There exists a vast literature on estimating CATE. This includes some important early works assuming that τ(x) follows some known parametric form [e.g., 44, 52, 55]. But more recently, there has been an effort to leverage flexible nonparametric machine learning methods [e.g., 1, 3, 22, 25, 29, 31, 39, 57]. A desirable property commonly held in the above CATE estimation methods is that the function τ(x) may be more structured and simple than its component main effect function E[Y^a | X = x]. In counterfactual prediction, however, we are fundamentally interested in predicting Y^a conditional on X = x under a “single” hypothetical intervention A = a, as opposed to the contrast of the conditional mean outcomes under two (or more) interventions as in CATE. Counterfactual prediction is often useful to support decision-making on its own. There are settings where estimating the contrast effect or relative risk is less relevant than understanding what may happen if a subject was given a certain intervention. As mentioned previously, this is particularly the case in clinical research when predicting risk in relation to treatment started after baseline [10, 27, 46, 54]. Moreover, in the context of multi-valued treatments, it can be more useful to estimate each individual conditional mean potential outcome separately than to estimate all the possible combinations of relative effects.
With no constraints, under appropriate identification assumptions (e.g., (C1)-(C3) in Section 2), counterfactual prediction is equivalent to estimating a standard regression function E[Y | X,A = a] so in principle one could use any regression estimator. This direct modeling or plug-in approach has been used for counterfactual prediction in randomized controlled trials [e.g., 26, 38] or as a component of CATE estimation methods [e.g., 3, 29]. An issue arises when we are estimating a projection of this function onto a finite-dimensional model, or where we instead want to estimate E[Y a | V ] = E{E[Y | X,A = a] | V } for some smaller subset V ⊂ X (e.g., under runtime confounding [9]), which typically renders the plug-in approach suboptimal. Moreover, the resulting estimator fails to have double robustness, a highly desirable property which provides an additional layer of robustness against model misspecification [4].
On the other hand, we often want to incorporate various constraints into our predictive models. Such constraints are often used for flexible penalization [18] or supplying prior information [13] to enhance model performance and interpretability. They can also be used to mitigate algorithmic biases [6, 14]. Further, depending on the scientific question, practitioners occasionally have some constraints which they wish to place on their prediction tasks, such as targeting specific sub-populations, restricting sign or magnitude on certain regression coefficients to be consistent with common sense, or accounting for the compositional nature of the data [7, 19, 28]. In the plug-in approach, however, it is not clear how to incorporate the given constraints into the modeling process.
In our approach, we directly formulate and solve an optimization problem that minimizes counterfactual classification risk, where we can flexibly incorporate various forms of constraints. Optimization problems involving counterfactuals or counterfactual optimization have not been extensively studied,
with few exceptions [e.g., 24, 30, 33, 34]. Our results are closest to [33] and [24], which study counterfactual optimization in a class of quadratic and nonlinear programming problems, respectively, yet this approach i) is not applicable to classification where the risk is defined with respect to the cross-entropy, and ii) considers only linear constraints.
As in [24], we tackle the problem of counterfactual classification from the perspective of stochastic programming. The two most common approaches in stochastic programming are stochastic approximation (SA) and sample average approximation (SAA) [e.g., 36, 50]. However, since i) we cannot compute sample moments or stochastic subgradients that involve unobserved counterfactuals, and ii) the SA and SAA approaches cannot harness efficient estimators for counterfactual components, e.g., doubly-robust or semiparametric estimators with cross-fitting [8, 37], more general approaches beyond the standard SA and SAA settings should be considered [e.g., 47–49] at the expense of stronger assumptions on the behavior of the optimal solution and its estimator.
1.2 Contribution
We study counterfactual classification as a new decision-making tool under hypothetical (contrary to fact) scenarios. Based on semiparametric theory for causal inference, we propose a doubly-robust, nonparametric estimator that can incorporate flexible constraints into the modeling process. Then we go on to analyze rates of convergence and provide a closed-form expression for the asymptotic distribution of our estimator. Our analysis shows that the proposed estimator can attain fast √ n rates even when its nuisance components are estimated using nonparametric machine learning tools at slower rates. We study the finite-sample performance of our estimator via simulation and provide a case based on real data. Importantly, our algorithm and analysis are applicable to other problems in which the estimand is given by the solutions to a general nonlinear optimization problem whose objective function involves counterfactuals, where closed-form solutions are not available.
2 Problem and Setup
Suppose that we have access to an i.i.d. sample (Z1, ..., Zn) of n tuples Z = (Y,A,X) ∼ P for some distribution P, binary outcome Y ∈ {0, 1}, covariates X ∈ X ⊂ Rdx , and binary intervention A ∈ A = {0, 1}. For simplicity, we assume A and Y are binary, but in principle they can be multi-valued. We consider a general setting where only a subset of covariates V ⊆ X can be used for predicting the counterfactual outcome Y a. This allows for runtime confounding, where factors used by decision-makers are recorded in the training data but are not available for prediction (see [9] and references therein). We are concerned with the following constrained optimization problem
minimize_{β∈B}   L(Y^a, σ(β, b(V))) := −E{Y^a log σ(β, b(V)) + (1 − Y^a) log(1 − σ(β, b(V)))}
subject to   β ∈ S := {β | gj(β) ≤ 0, j ∈ J}                                            (P)
for some compact subset B ⊂ R^k, known C²-functions gj : B → R, σ : B × R^{k′} → (0, 1), and the index set J = {1, . . . , m} for the inequality constraints. Here, σ is the score function and b(V) = [b1(V), . . . , bk′(V)]⊤ represents a set of basis functions for V (e.g., truncated power series, kernel or spline basis functions, etc.). Note that we do not need to have k = k′; for example, depending on the modeling techniques, it is possible to have a much larger number of model parameters than the number of basis functions, i.e., k > k′. L(Y^a, σ(β, b(V))) is our classification risk based on the cross-entropy. S consists of deterministic inequality constraints1 and can be used to pursue a variety of practical purposes described in Section 1. Let β∗ denote an optimal solution of (P); β∗ collects the optimal model parameters (coefficients) that minimize the counterfactual classification risk under the given constraints.
Classification risk and score function. Our classification risk L(Y a, σ(β, b(V ))) is defined by the expected cross entropy loss between Y a and σ(β, b(V )). In order to estimate β∗, we first need to estimate this classification risk. Since it involves counterfactuals, the classification risk cannot be identified from observed data unless certain assumptions hold, which will be discussed shortly. The form of the score function σ(β, b(V )) depends on the specific classification technique we are using. Our default choice for σ is the sigmoid function with k = k′, which makes the classification
1 An equality constraint can always be expressed by a pair of inequality constraints.
risk strictly convex with respect to β. It should be noted, however, that more complex and flexible classification techniques (e.g., neural networks) can also be used without affecting the subsequent results, as long as they satisfy the required regularity assumptions discussed later in Section 4. Importantly, our approach is nonparametric; β∗ is the parameter of the best linear classifier with the sigmoid score in the expanded feature space spanned by b(V ), but we never assume an exact ‘log-linear’ relationship between Y a and b(V ) as in ordinary logistic regression models.
Identification. To estimate the counterfactual quantity L(Y a, σ(β, b(V ))) from the observed sample (Z1, ..., Zn), it must be expressed in terms of the observational data distribution P. This can be accomplished via the following standard causal assumptions [e.g., 17, Chapter 12]:
• (C1) Consistency: Y = Y a if A = a
• (C2) No unmeasured confounding: A ⊥⊥ Y a | X • (C3) Positivity: P(A = a|X) > ε a.s. for some ε > 0
(C1) - (C3) will be assumed throughout this paper. Under these assumptions, our classification risk is identified as
L(β) = −E {E [Y | X,A = a] log σ(β, b(V )) + (1− E [Y | X,A = a]) log(1− σ(β, b(V )))} , (1)
where we let L(β) ≡ L(Y a, σ(β, b(V ))). Since we use the sigmoid function with an equal number of model parameters as basis functions, for clarity, hereafter we write σ(β⊤b(V )) = σ(β, b(V )). It is worth noting that even though we develop the estimator under the above set of causal assumptions, one may extend our methods to other identification strategies and settings (e.g., those of instrumental variables and mediation), since our approach is based on the analysis of a stochastic programming problem with generic estimated objective functions (see Appendix B).
Notation. Here we specify the basic notation used throughout the paper. For a real-valued vector v, let ∥v∥2 denote its Euclidean or L2-norm. Let Pn denote the empirical measure over (Z1, . . . , Zn). Given a sample operator h (e.g., an estimated function), let P denote the conditional expectation over a new independent observation Z, as in P(h) = P{h(Z)} = ∫ h(z)dP(z). Use ∥h∥2,P to denote the L2(P) norm of h, defined by ∥h∥2,P = [P(h²)]^{1/2} = [∫ h(z)² dP(z)]^{1/2}. Finally, let s∗(P) denote the set of optimal solutions of an optimization program P, i.e., β∗ ∈ s∗(P), and define dist(x, S) = inf{∥x − y∥2 : y ∈ S} to denote the distance from a point x to a set S.
3 Estimation Algorithm
Since (P) is not directly solvable, we need to find an approximating program of the “true" program (P). To this end, we shall first discuss the problem of obtaining estimates for the identified classification risk (1). To simplify notation, we first introduce the following nuisance functions
πa(X) = P[A = a | X], µa(X) = E[Y | X,A = a],
and let π̂a and µ̂a be their corresponding estimators. πa and µa are referred to as the propensity score and outcome regression function, respectively.
A natural estimator for (1) is given by L̂(β) = −Pn { µ̂a(X) log σ(β ⊤b(V )) + (1− µ̂a(X)) log(1− σ(β⊤b(V ))) } , (2)
where we simply plug in the regression estimates µ̂a into the empirical average of (1). Here, we construct a more efficient estimator based on the semiparametric approach in causal inference [21, 23]. Let
φa(Z; η) = {1(A = a)/πa(X)}{Y − µA(X)} + µa(X),
denote the uncentered efficient influence function for the parameter E{E[Y | X, A = a]}, where the nuisance functions are defined by η = {πa(X), µa(X)}. Then it can be deduced that for an arbitrary fixed real-valued function h : X → R, the uncentered efficient influence function for the parameter ψa := E{E[Y | X, A = a]h(X)} is given by φa(Z; η)h(X) (Lemma A.1 in the appendix).
Algorithm 1: Doubly robust estimator for counterfactual classification
1: input: b(·), K
2: Draw (B1, . . . , Bn) with Bi ∈ {1, . . . , K}
3: for b = 1, . . . , K do
4:   Let D0 = {Zi : Bi ≠ b} and D1 = {Zi : Bi = b}
5:   Obtain η̂−b by constructing π̂a, µ̂a on D0
6:   M1,b(β) ← empirical average of φa(Z; η̂−b) log σ(β⊤b(V)) over D1
7:   M0,b(β) ← empirical average of (1 − φa(Z; η̂−b)) log(1 − σ(β⊤b(V))) over D1
8: L̂(β) ← ∑_{b=1}^{K} {(1/n) ∑_{i=1}^{n} 1(Bi = b)} (M1,b(β) + M0,b(β))
9: solve (P̂) with L̂(β)
Now we provide an influence-function-based semiparametric estimator for ψa. Following [8, 22, 43, 59], we propose to use sample splitting to allow for arbitrarily complex nuisance estimators η̂. Specifically, we split the data into K disjoint groups, each of size approximately n/K, by drawing variables (B1, . . . , Bn) independent of the data, with Bi = b indicating that subject i was split into group b ∈ {1, . . . , K}. Then the semiparametric estimator for ψa based on the efficient influence function and sample splitting is given by
ψ̂a = (1/K) ∑_{b=1}^{K} Pbn {φa(Z; η̂−b) h(X)} ≡ Pn {φa(Z; η̂−BK) h(X)},  (3)
where we let Pbn denote empirical averages over the set of units {i : Bi = b} in the group b and let η̂−b denote the nuisance estimator constructed only using those units {i : Bi ̸= b}. Under weak regularity conditions, this semiparametric estimator attains the efficiency bound with the double robustness property, and allows us to employ nonparametric machine learning methods while achieving the√ n-rate of convergence and valid inference under weak conditions (see Lemma A.1 in the appendix for the formal statement). If one is willing to rely on appropriate empirical process conditions (e.g., Donsker-type or low entropy conditions [53]), then η can be estimated on the same sample without sample splitting. However, this would limit the flexibility of the nuisance estimators.
The classification risk L(β) is a sum of two functionals, each of which is in the form of ψa. Thus, for each β, we propose to estimate the classification risk using (3) as follows:
L̂(β) = −Pn { φa(Z; η̂−BK ) log σ(β ⊤b(V )) + (1− φa(Z; η̂−BK )) log(1− σ(β⊤b(V ))) } . (4)
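For illustration, a minimal Python sketch of the cross-fitted pseudo-outcomes φa(Z; η̂−BK) and the resulting risk estimate (4) is given below. This is only a sketch, not the R implementation used in our experiments; the random-forest nuisance estimators, the K = 2 folds, and the propensity clipping (a crude way to respect (A1)) are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def crossfit_pseudo_outcomes(X, A, Y, a=1, n_splits=2, seed=0):
    """Cross-fitted phi_a(Z; eta_hat_{-b}) for every unit, as in Algorithm 1 (steps 2-7)."""
    phi = np.zeros(len(Y), dtype=float)
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in folds.split(X):
        # Nuisance estimators are fit on the complementary fold D0 only.
        ps = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X[train_idx], A[train_idx])
        arm_idx = train_idx[A[train_idx] == a]
        reg = RandomForestRegressor(n_estimators=200, random_state=seed).fit(X[arm_idx], Y[arm_idx])
        pi_hat = np.clip(ps.predict_proba(X[test_idx])[:, list(ps.classes_).index(a)], 0.01, 0.99)
        mu_hat = reg.predict(X[test_idx])
        phi[test_idx] = (A[test_idx] == a) / pi_hat * (Y[test_idx] - mu_hat) + mu_hat
    return phi

def dr_risk(beta, bV, phi):
    """Doubly robust estimate of the counterfactual cross-entropy L_hat(beta), cf. (4)."""
    s = np.clip(1.0 / (1.0 + np.exp(-bV @ beta)), 1e-8, 1 - 1e-8)  # sigmoid score
    return -np.mean(phi * np.log(s) + (1.0 - phi) * np.log(1.0 - s))
```

The sample splitting in the first helper is what allows flexible nuisance learners to be plugged in without invalidating the √n analysis of Section 4.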
Now that we have proposed the efficient method to estimate the counterfactual component L(β), in what follows we provide an approximating program for (P) which we aim to actually solve by substituting L̂(β) for L(β)
minimize β∈B L̂(β) subject to β ∈ S. (P̂)
Let β̂ ∈ s∗(P̂). Then β̂ is our estimator for β∗. We summarize our algorithm detailing how to compute the estimator β̂ in Algorithm 1.
(P̂) is a smooth nonlinear optimization problem whose objective function depends on data. Unfortunately, unlike (P), (P̂) is not guaranteed to be convex in finite samples even if S is convex. Non-convex problems are usually more difficult than convex ones due to high variance and slow computing time. Nonetheless, substantial progress has been made recently [5, 42], and a number of efficient global optimization algorithms are available in open-source libraries (e.g., NLOPT). Also in order for more flexible implementation, one may adapt neural networks for our approach without the need for specifying σ and b; we discuss this in more detail in Section 6 as a promising future direction.
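As a rough stand-in for the StoGo/BOBYQA pipeline used later in Section 5, a multistart local solver over the box constraints |βj| ≤ 1 can be sketched as follows (reusing dr_risk from the sketch above; the number of restarts is arbitrary, and a genuine global solver may be preferable for strongly non-convex instances).

```python
import numpy as np
from scipy.optimize import minimize

def fit_counterfactual_classifier(bV, phi, n_starts=20, seed=0):
    """Multistart L-BFGS-B for (P_hat) under box constraints; a simple stand-in for a global solver."""
    rng = np.random.default_rng(seed)
    k = bV.shape[1]
    best = None
    for _ in range(n_starts):
        beta0 = rng.uniform(-1.0, 1.0, size=k)
        res = minimize(dr_risk, beta0, args=(bV, phi), method="L-BFGS-B",
                       bounds=[(-1.0, 1.0)] * k)
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun
```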
4 Asymptotic Analysis
This section is devoted to analyzing the rates of convergence and asymptotic distribution for the estimated optimal solution β̂. Unlike stochastic optimization, analysis of the statistical properties of optimal solutions to a general counterfactual optimization problem appears much more sparse. In what was perhaps the first study of the problem, [24] analyzed asymptotic behavior of optimal solutions for a particular class of nonlinear counterfactual optimization problems that can be cast into a parametric program with finite-dimensional stochastic parameters. However, the true program (P) does not belong to the class to which their analysis is applicable. Here, we derive the asymptotic properties of β̂ by considering similar assumptions as in [24].
We first introduce the following assumptions for our counterfactual component estimator L̂.
(A1) P(π̂a ∈ [ϵ, 1 − ϵ]) = 1 for some ϵ > 0
(A2) ∥µ̂a − µa∥2,P = oP(1) or ∥π̂a − πa∥2,P = oP(1)
(A3) ∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P = oP(n^{−1/2})
Assumptions (A1) - (A3) are commonly used in semiparametric estimation in the causal inference literature [20]. Next, for a feasible point β̄ ∈ S we define the active index set. Definition 4.1 (Active set). For β̄ ∈ S, we define the active index set J0 by
J0(β̄) = {1 ≤ j ≤ m | gj(β̄) = 0}.
Then we introduce the following technical condition on gj .
(B1) For each β∗ ∈ s∗(P),
d⊤∇²βgj(β∗)d ≥ 0 ∀ d ∈ {d | ∇βgj(β∗)⊤d = 0, j ∈ J0(β∗)}.
Assumption (B1) holds, for example, if each gj is locally convex around β∗. In what follows, based on the result of [47], we characterize the rates of convergence for β̂ in terms of the nuisance estimation error under relatively weak conditions. Theorem 4.1 (Rate of Convergence). Assume that (A1), (A2), and (B1) hold. Then
dist(β̂, s∗(P)) = OP(∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P + n^{−1/2}).
Hence, if we further assume the nonparametric condition (A3), we obtain dist(β̂, s∗(P)) = OP(n^{−1/2}).
Theorem 4.1 indicates that double robustness is possible for our estimator, and thereby √n rates are attainable even when each of the nuisance regression functions is estimated flexibly at much slower rates (e.g., n^{−1/4} rates for each), with a wide variety of modern nonparametric tools. Since L is continuously differentiable with bounded derivative, the consistency of the optimal value naturally follows by the result of Theorem 4.1 and the continuous mapping theorem. More specifically, in the following corollary, we show that the same rates are attained for the optimal value under identical conditions. Corollary 4.1 (Rate of Convergence for Optimal Value). Suppose (A1), (A2), (A3), (B1) hold and let v∗ and v̂ be the optimal values corresponding to β∗ ∈ s∗(P) and β̂, respectively. Then we have |v̂ − v∗| = OP(∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P + n^{−1/2}).
In order to conduct statistical inference, it is also desirable to characterize the asymptotic distribution of β̂. This requires stronger assumptions and a more specialized analysis [47]. Asymptotic properties of optimal solutions in stochastic programming are typically studied based on the generalization of the delta method for directionally differentiable mappings [e.g., 48–50]. Asymptotic normality is of particular interest since without asymptotic normality, consistency of the bootstrap is no longer guaranteed for the solution estimators [12].
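When the asymptotic-normality conditions of Theorem 4.2 below hold, one simple though computationally heavy route to inference is the nonparametric bootstrap over the entire pipeline. The sketch below reuses the helpers from Section 3; make_basis is a placeholder for the user's basis expansion b(·), and the number of bootstrap draws is arbitrary.

```python
import numpy as np

def bootstrap_solutions(X, A, Y, make_basis, n_boot=200, seed=0):
    """Percentile bootstrap draws of beta_hat; each resample refits nuisances and re-solves (P_hat)."""
    rng = np.random.default_rng(seed)
    n = len(Y)
    draws = []
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        phi_b = crossfit_pseudo_outcomes(X[idx], A[idx], Y[idx], seed=b)
        beta_b, _ = fit_counterfactual_classifier(make_basis(X[idx]), phi_b, seed=b)
        draws.append(beta_b)
    draws = np.asarray(draws)
    return np.quantile(draws, [0.025, 0.975], axis=0)  # 95% percentile interval per coefficient
```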
We start with additional definitions of some popular regularity conditions with respect to (P).
Definition 4.2 (LICQ). Linear independence constraint qualification (LICQ) is satisfied at β̄ ∈ S if the vectors∇βgj(β̄), j ∈ J0(β̄) are linearly independent. Definition 4.3 (SC). Let L(β, γ) be the Lagrangian. Strict Complementarity (SC) is satisfied at β̄ ∈ S if, with multipliers γ̄j ≥ 0, j ∈ J0(β̄), the Karush-Kuhn-Tucker (KKT) condition
∇βL(β̄, γ̄) := ∇βL(β̄) + ∑_{j∈J0(β̄)} γ̄j ∇βgj(β̄) = 0,
is satisfied such that γ̄j > 0,∀j ∈ J0(β̄).
LICQ is arguably one of the most widely-used constraint qualifications that admit the first-order necessary conditions. SC means that if the j-th inequality constraint is active, then the corresponding dual variable is strictly positive, so exactly one of them is zero for each 1 ≤ j ≤ m. SC is widely used in the optimization literature, particularly in the context of parametric optimization [e.g., 50, 51]. We further require uniqueness of the optimal solution in (P).
(B2) Program (P) has a unique optimal solution β∗ (i.e., s∗(P) ≡ {β∗} is singleton).
Note that under (B2) if LICQ holds at β∗, then the corresponding multipliers are determined uniquely [56]. In the next theorem, we provide a closed-form expression for the asymptotic distribution of β̂. Theorem 4.2 (Asymptotic Distribution). Assume that (A1) - (A3), (B1), and (B2) hold, and that LICQ and SC hold at β∗ with the corresponding multipliers γ∗. Then
n^{1/2}(β̂ − β∗) = [ ∇²βL(β∗, γ∗)   B ; B⊤   0 ]^{−1} [ 1   0 ]⊤ Υ + oP(1)
for some k × |J0(β∗)| matrix B and random variable Υ such that
Υ d−→ N (0, var (φa(Z; η)h1(V, β∗) + {1− φa(Z; η)}h0(V, β∗))) ,
where
B = [ ∇βgj(β∗)⊤, j ∈ J0(β∗) ] ,
h1(V, β) = [1 / log σ(β⊤b(V))] · b(V) σ(β⊤b(V)){1 − σ(β⊤b(V))},
h0(V, β) = −[1 / log(1 − σ(β⊤b(V)))] · b(V) σ(β⊤b(V)){1 − σ(β⊤b(V))}.
The above theorem gives explicit conditions under which β̂ is √ n-consistent and asymptotically
normal. We harness the classical results of [48] that use an expansion of β̂ in terms of an auxiliary parametric program. To show asymptotic normality of β̂, linearity of the directional derivative of optimal solutions in the parametric program is required. We have accomplished this based on an appropriate form of the implicit function theorem [11]. This is in contrast to [33] that relied on the structure of the smooth, closed-form solution estimator that enables direct use of the delta method. Lastly, our results in this section can be extended to a more general constrained nonlinear optimization problem where the objective function involves counterfactuals (see Lemmas B.1, B.2 in the appendix).
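In practice, the active set and LICQ at the estimated solution can be checked numerically. The helper below is only illustrative; g_funcs and grad_funcs are placeholders for user-supplied constraint functions gj and their gradients.

```python
import numpy as np

def check_active_set(beta_hat, g_funcs, grad_funcs, tol=1e-6):
    """Numerically active constraints J0(beta_hat) and a LICQ check (Definition 4.2)."""
    J0 = [j for j, g in enumerate(g_funcs) if abs(g(beta_hat)) < tol]
    if not J0:
        return J0, True
    G = np.array([grad_funcs[j](beta_hat) for j in J0])
    return J0, np.linalg.matrix_rank(G) == len(J0)
```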
5 Simulation and Case Study
5.1 Simulation
We explore the finite sample properties of our estimators in the simulated dataset where we aim to empirically demonstrate the double-robustness property described in Section 3. Our data generation process is as follows:
V ≡ X = (X1, ..., X6) ∼ N(0, I), πa(X) = expit(−X1 + 0.5X2 − 0.25X3 − 0.1X4 + 0.05X5 + 0.05X6),
Y = A1 {X1 + 2X2 − 2X3 −X4 +X5 + ε > 0}+ (1−A)1 {X1 + 2X2 − 2X3 −X4 +X6 + ε < 0} , ε ∼ N(0, 1).
Our classification target is Y^1. For b(X), we use X, X², and their pairwise products. We assume that we have box constraints for our solution: |β∗j| ≤ 1, j = 1, ..., k. Since there exist no other natural baselines, we compare our methods to the plug-in method where we use (2) for our approximating program (P̂). For nuisance estimation we use the cross-validation-based Super Learner ensemble via the SUPERLEARNER R package to combine generalized additive models, multivariate adaptive regression splines, and random forests. We use sample splitting as described in Algorithm 1 with K = 2 splits. We further consider two versions of each of our estimators, based on the correct and distorted X, where the distorted values are only used to estimate the outcome regression µa. The distortion is caused by the transformation X ↦ (X1X3X6, X2², X4/(1 + exp(X5)), exp(X5/2)).
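The design above can be reproduced in a few lines of Python. The sketch assumes that the stated propensity is P(A = 1 | X) and that A is drawn as a Bernoulli variable with that probability (the natural reading of the design); it also returns the counterfactual label Y^1, which is used only for evaluation.

```python
import numpy as np

def simulate(n, seed=0):
    """One draw from the Section 5.1 design; Y1 is the counterfactual label (evaluation only)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, 6))
    logit = -X[:, 0] + 0.5 * X[:, 1] - 0.25 * X[:, 2] - 0.1 * X[:, 3] + 0.05 * X[:, 4] + 0.05 * X[:, 5]
    pi1 = 1.0 / (1.0 + np.exp(-logit))            # expit; assumed to be P(A = 1 | X)
    A = rng.binomial(1, pi1)
    eps = rng.standard_normal(n)
    base = X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] - X[:, 3]
    Y1 = (base + X[:, 4] + eps > 0).astype(int)   # potential outcome under A = 1
    Y0 = (base + X[:, 5] + eps < 0).astype(int)   # potential outcome under A = 0
    Y = A * Y1 + (1 - A) * Y0                     # observed outcome (consistency)
    return X, A, Y, Y1
```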
To solve P̂, we first use the StoGo algorithm [40] via the NLOPTR R package as it has shown the best performance in terms of accuracy in the survey study of [35]. After running the StoGo, we then use the global optimum as a starting point for the BOBYQA local optimization algorithm [41] to further polish the optimum to a greater accuracy. We use sample sizes n = 1k, 2.5k, 5k, 7.5k, 10k and repeat the simulation 100 times for each n. Then we compute the average of |v∗− v̂| and ∥β∗− β̂∥2. Using the estimated counterfactual predictor, we also compute the classification error on an independent sample with the equal sample size. Standard error bars are presented around each point. The results with the correct and distorted X are presented in Figures 1 and 2, respectively.
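For completeness, the counterfactual classification error reported in the figures can be computed on an independent draw as sketched below; β̂ and the basis-expanded test matrix are outputs of the earlier sketches, and the 0.5 threshold is an assumption made here.

```python
import numpy as np

def counterfactual_error(beta_hat, bV_test, Y1_test):
    """Misclassification rate of the fitted classifier against the counterfactual labels Y^1."""
    scores = 1.0 / (1.0 + np.exp(-bV_test @ beta_hat))
    return np.mean((scores > 0.5).astype(int) != Y1_test)
```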
With the correct X, it appears that the proposed estimator performs as well as or slightly better than the plug-in method. However, in Figure 2, when µ̂a is constructed based on the distorted X, the proposed estimator gives substantially smaller errors in general and improves more rapidly with n. This is indicative of the fact that the proposed estimator has the doubly-robust, second-order multiplicative bias, thus supporting our theoretical results in Section 4.
5.2 Case Study: COMPAS Dataset
Next we apply our method for recidivism risk prediction using the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) dataset 2. This dataset was originally designed to assess the COMPAS recidivism risk scores, and has been utilized for studying machine bias in the context of algorithmic fairness [2]. More recently, the dataset has been reanalyzed in the framework of counterfactual outcomes [32–34]. Here, we focus purely on predictive purpose. We let A represent pretrial release, with A = 0 if defendants are released and A = 1 if they are incarcerated, following methodology suggested by [34].3 We aim to classify the binary counterfactual outcome Y 0 that indicates whether a defendant is rearrested within two years, should the defendant be released pretrial. We use the dataset for two-year recidivism records with five covariates: age, sex, number of prior arrests, charge degree, and race. We consider three racial groups: Black, White, and Hispanic. We split the data (n = 5787) randomly into two groups: a training set with 3000 observations and a test set with the rest. Other model settings remain the same as our simulation in the previous subsection, including the box constraints.
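A rough preprocessing sketch consistent with the description above is given below. The file and column names follow the ProPublica release and should be treated as assumptions to be checked against the local copy of the data; here the target arm is a = 0 (pretrial release), so the classifier is trained for Y^0.

```python
import pandas as pd

# Column names are assumed from the ProPublica "compas-scores-two-years" file; adjust as needed.
df = pd.read_csv("compas-scores-two-years.csv", parse_dates=["c_jail_in", "c_jail_out"])
df = df[df["race"].isin(["African-American", "Caucasian", "Hispanic"])].dropna(
    subset=["c_jail_in", "c_jail_out", "two_year_recid"])
jail_days = (df["c_jail_out"] - df["c_jail_in"]).dt.days
df["A"] = (jail_days > 3).astype(int)        # 0 = left jail within three days (released), 1 = incarcerated
df["Y"] = df["two_year_recid"].astype(int)   # rearrested within two years
covariates = ["age", "sex", "priors_count", "c_charge_degree", "race"]
train = df.sample(n=3000, random_state=0)
test = df.drop(train.index)
```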
Figure 3 and Table 1 show that the proposed doubly-robust method achieves moderately higher ROC AUC and classification accuracy than both the plug-in and the raw COMPAS risk scores. This comparative advantage is likely to increase in settings where we expect the identification and regularity assumptions to be more likely to hold, for example, where we can have access to more covariates or more information about the treatment mechanism.
6 Discussion
In this paper we studied the problem of counterfactual classification under arbitrary smooth constraints, and proposed a doubly-robust estimator which leverages nonparametric machine learning methods. Our theoretical framework is not limited to counterfactual classification and can be applied to other settings where the estimand is the optimal solution of a general smooth nonlinear programming problem with a counterfactual objective function; thus, we complement the results of [24, 33], each of which considered a particular class of smooth nonlinear programming.
2 https://github.com/propublica/compas-analysis
3 The dataset itself does not include information on whether defendants were released pretrial, but it includes dates in and out of jail. So we set the treatment A to 0 if defendants left jail within three days of being arrested, and 1 otherwise, as Florida state law generally requires individuals to be brought before a judge for a bail hearing within 2 days of arrest [34, Section 6.2].
We emphasize that one may use our proposed approach for other common problems in causal inference, e.g., estimation of the contrast effects or optimal treatment regimes, even under runtime confounding and/or other practical constraints. We may accomplish this by simply estimating each component E[Y a | X] via solving (P) for different values of a, and then taking the conditional mean contrast of interest. We can also readily adapt our procedure (P) for such standard estimands, for example by replacing Y a with the desired contrast or utility formula, in which the influence function will be very similar to those already presented in our manuscript. In ongoing work, we develop extensions for estimating the CATE and optimal treatment regimes under fairness constraints.
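For instance, once (P̂) has been solved separately for each arm, a plug-in contrast of the two counterfactual scores gives a CATE-type summary. The sketch below is purely illustrative; beta_hat_a1, beta_hat_a0, and bV_new denote hypothetical arm-specific solutions and a basis-expanded design matrix for new units.

```python
import numpy as np

def contrast_scores(beta_hat_a1, beta_hat_a0, bV_new):
    """Plug-in contrast of arm-specific counterfactual scores (a CATE-type summary)."""
    p1 = 1.0 / (1.0 + np.exp(-(bV_new @ beta_hat_a1)))  # estimate of P(Y^1 = 1 | V)
    p0 = 1.0 / (1.0 + np.exp(-(bV_new @ beta_hat_a0)))  # estimate of P(Y^0 = 1 | V)
    return p1 - p0
```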
Although not explored in this work, our estimation procedure could be improved by applying more sophisticated and flexible modeling techniques for solving (P). One promising approach is to build a neural network that minimizes the loss (4) with the nuisance estimates {φa(Zi; η̂−BK )}i constructed on the separate independent sample; in this case, β is the weights of the network where k ≫ k′. Importantly, in the neural network approach we do not need to specify and construct the score and basis functions; the ideal form of those unknown functions are learned through backpropagation. Hence, we can avoid explicitly formulating and solving a complex non-convex optimization problem. Further, one may employ a rich source of deep-learning tools. In future work, we plan to pursue this extension and apply our methods to a large-scale real-world dataset.
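A minimal sketch of this neural-network extension, with the pseudo-outcomes φ constructed on a separate sample as in the cross-fitting sketch of Section 3, is given below; the PyTorch architecture and optimization settings are arbitrary, and the explicit constraint set S is dropped in this sketch.

```python
import torch
import torch.nn as nn

def fit_nn_counterfactual(V, phi, hidden=64, epochs=200, lr=1e-3):
    """Minimize the pseudo-outcome cross-entropy (4) with a small neural-network score."""
    V_t = torch.tensor(V, dtype=torch.float32)
    phi_t = torch.tensor(phi, dtype=torch.float32)
    net = nn.Sequential(nn.Linear(V_t.shape[1], hidden), nn.ReLU(), nn.Linear(hidden, 1))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        s = torch.sigmoid(net(V_t)).squeeze(-1).clamp(1e-6, 1 - 1e-6)
        loss = -(phi_t * torch.log(s) + (1 - phi_t) * torch.log(1 - s)).mean()
        loss.backward()
        opt.step()
    return net
```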
We conclude with other potential limitations of our methods, and ways in which our work could be generalized. First, we considered the fixed feasible set that consists of only deterministic constraints. However, sometimes it may be useful to consider the general case where gj’s need to be estimated as well. This can be particularly helpful when incorporating general fairness constraints [14, 33, 34]. Dealing with the varying feasible set with general nonlinear constraints is a complicated task and requires even stronger assumptions [48]. As future work, we plan to generalize our framework to the case of a varying feasible set. Next, although we showed that the counterfactual objective function is estimated efficiently via L̂, it is unclear whether the solution estimator β̂ is efficient too, due to the inherent complexity of the optimal solution mapping in the presence of constraints. We conjecture that one may show that the semiparametric efficiency bound can also be attained for β̂ possibly under slightly stronger regularity assumptions, but we leave this for future work. | 1. What is the focus and contribution of the paper regarding counterfactual logistic regression?
2. What are the strengths of the proposed method, particularly in terms of its theoretical guarantees and empirical evaluation?
3. What are the weaknesses of the paper, especially regarding its limitations in practical applications?
4. Do you have any concerns or questions about the presentation or the algorithm provided in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes a doubly robust estimator for counterfactual logistic regression with flexible constraints. The authors provide a novel algorithm along with theoretical guarantees and empirical evaluation.
Strengths And Weaknesses
Strengths:
addresses a common problem in causal inference (robust counterfactual or effect estimation with binary outcome)
provides theoretical guarantees for both estimation and inference
the presentation is clear and cohesive
the empirical study is compelling
Weaknesses:
The title is somewhat misleading as it only tackles a specific type of parametric classification (logistic regression). Does not easily extend to more complex classification methods
I wonder how practical the main algorithm is. While I appreciate that the constraints can be very general, I wager that most applications will require an L1/L2 constraint (penalty) on β. One advantage of the naive/natural estimator in (2) is that it is equivalent to a weighted logistic regression (because μ̂a(x) ∈ [0, 1]), so it can be used with any black-box logistic regression implementation (e.g. in Python). This is not true for the proposed estimator since some of the weights will be negative. Is there any simplification of the algorithm in the case of L1/L2 constraints?
Questions
See section above.
Limitations
The authors have adequately addressed any potential negative societal impact of their work.
NIPS | Title
Doubly Robust Counterfactual Classification
Abstract
We study counterfactual classification as a new tool for decision-making under hypothetical (contrary to fact) scenarios. We propose a doubly-robust nonparametric estimator for a general counterfactual classifier, where we can incorporate flexible constraints by casting the classification problem as a nonlinear mathematical program involving counterfactuals. We go on to analyze the rates of convergence of the estimator and provide a closed-form expression for its asymptotic distribution. Our analysis shows that the proposed estimator is robust against nuisance model misspecification, and can attain fast √ n rates with tractable inference even when using nonparametric machine learning approaches. We study the empirical performance of our methods by simulation and apply them for recidivism risk prediction.
N/A
√ n rates with tractable inference even
when using nonparametric machine learning approaches. We study the empirical performance of our methods by simulation and apply them for recidivism risk prediction.
1 Introduction
Counterfactual or potential outcomes are often used to describe how an individual would respond to a specific treatment or event, irrespective of whether the event actually takes place. Counterfactual outcomes are commonly used for causal inference, where we are interested in measuring the effect of a treatment on an outcome variable [15, 16, 45].
Recently, counterfactual outcomes have also proved useful for predicting outcomes under hypothetical interventions. This is commonly referred to as counterfactual prediction. Counterfactual prediction can be particularly useful to inform decision-making in clinical practice. For example, in order for physicians to make effective treatment decisions, they often need to predict risk scores assuming no treatment is given; if a patient’s risk is relatively low, then she or he may not need treatment. However, when a treatment is initiated after baseline, simply operationalizing the hypothetical treatment as another baseline predictor will rarely give the correct (counterfactual) risk estimates because of confounding [58]. Counterfactual prediction can be also helpful when we want our prediction model developed in one setting to yield predictions successfully transportable to other settings with different treatment patterns. Suppose that we develop our risk prediction model in a setting where most patients have access to an effective (post-baseline) treatment. However, if we deploy our factual prediction model in a new setting in which few individuals have access to the treatment, our model is likely to fail in the sense that it may not be able to accurately identify high-risk individuals. Counterfactual prediction may allow us to achieve more robust model performance compared to factual prediction, even when model deployment influences behaviors that affect risk. [see, e.g., 10, 27, 54, for more examples].
However, the problem of counterfactual prediction brings challenges that do not arise in typical prediction problems because the data needed to build the predictive models are inherently not fully
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
observable. Surprisingly, while the development of modern prediction modeling has greatly enriched the counterfactual-outcome-based causal inference particularly via semi-parametric methods [20, 23], the use of causal inference to improve prediction modeling has received less attention [see, e.g., 10, 46, for a discussion on the subject].
In this work, we study counterfactual classification, a special case of counterfactual prediction where the outcome is discrete. Our approach allows investigators to flexibly incorporate various constraints into the models, not only to enhance their predictive performance but also to accommodate a wide range of practical constraints relevant to their classification tasks. Counterfactual classification poses both theoretical and practical challenges, as a result of the fact that in our setting, even without any constraints, the estimand is not expressible as a closed form functional unlike typical causal inference problems. We tackle this problem by framing counterfactual classification as nonlinear stochastic programming with counterfactual components.
1.1 Related Work
Our work lies at the intersection of causal inference and stochastic optimization.
Counterfactual prediction is closely related to estimation of the conditional average treatment effect (CATE) in causal inference, which plays a crucial role in precision medicine and individualized policy. Let Y a denote the counterfactual outcome that would have been observed under treatment or intervention A = a, A ∈ {0, 1}. The CATE for subjects with covariate X = x is defined as τ(x) = E[Y 1− Y 0 | X = x]. There exists a vast literature on estimating CATE. These include some important early works assuming that τ(x) follows some known parametric form [e.g., 44, 52, 55]. But more recently, there has been an effort to leverage flexible nonparametric machine learning methods [e.g., 1, 3, 22, 25, 29, 31, 39, 57]. A desirable property commonly held in the above CATE estimation methods is that the function τ(x) may be more structured and simple than its component main effect function E[Y a | X = x]. In counterfactual prediction, however, we are fundamentally interested in predicting Y a conditional on X = x under a “single" hypothetical intervention A = a, as opposed to the contrast of the conditional mean outcomes under two (or more) interventions as in CATE. Counterfactual prediction is often useful to support decision-making on its own. There are settings where estimating the contrast effect or relative risk is less relevant than understanding what may happen if a subject was given a certain intervention. As mentioned previously, this is particularly the case in clinical research when predicting risk in relation to treatment started after baseline [10, 27, 46, 54]. Moreover, in the context of multi-valued treatments, it can be more useful to estimate each individual conditional mean potential outcome separately than to estimate all the possible combinations of relative effects.
With no constraints, under appropriate identification assumptions (e.g., (C1)-(C3) in Section 2), counterfactual prediction is equivalent to estimating a standard regression function E[Y | X,A = a] so in principle one could use any regression estimator. This direct modeling or plug-in approach has been used for counterfactual prediction in randomized controlled trials [e.g., 26, 38] or as a component of CATE estimation methods [e.g., 3, 29]. An issue arises when we are estimating a projection of this function onto a finite-dimensional model, or where we instead want to estimate E[Y a | V ] = E{E[Y | X,A = a] | V } for some smaller subset V ⊂ X (e.g., under runtime confounding [9]), which typically renders the plug-in approach suboptimal. Moreover, the resulting estimator fails to have double robustness, a highly desirable property which provides an additional layer of robustness against model misspecification [4].
On the other hand, we often want to incorporate various constraints into our predictive models. Such constraints are often used for flexible penalization [18] or supplying prior information [13] to enhance model performance and interpretability. They can also be used to mitigate algorithmic biases [6, 14]. Further, depending on the scientific question, practitioners occasionally have some constraints which they wish to place on their prediction tasks, such as targeting specific sub-populations, restricting sign or magnitude on certain regression coefficients to be consistent with common sense, or accounting for the compositional nature of the data [7, 19, 28]. In the plug-in approach, however, it is not clear how to incorporate the given constraints into the modeling process.
In our approach, we directly formulate and solve an optimization problem that minimizes counterfactual classification risk, where we can flexibly incorporate various forms of constraints. Optimization problems involving counterfactuals or counterfactual optimization have not been extensively studied,
with few exceptions [e.g., 24, 30, 33, 34]. Our results are closest to [33] and [24], which study counterfactual optimization in a class of quadratic and nonlinear programming problems, respectively, yet this approach i) is not applicable to classification where the risk is defined with respect to the cross-entropy, and ii) considers only linear constraints.
As in [24], we tackle the problem of counterfactual classification from the perspective of stochastic programming. The two most common approaches in stochastic programming are stochastic approximation (SA) and sample average approximation (SAA) [e.g., 36, 50]. However, since i) we cannot compute sample moments or stochastic subgradients that involve unobserved counterfactuals, and ii) the SA and SAA approaches cannot harness efficient estimators for counterfactual components, e.g., doubly-robust or semiparametric estimators with cross-fitting [8, 37], more general approaches beyond the standard SA and SAA settings should be considered [e.g., 47–49] at the expense of stronger assumptions on the behavior of the optimal solution and its estimator.
1.2 Contribution
We study counterfactual classification as a new decision-making tool under hypothetical (contrary to fact) scenarios. Based on semiparametric theory for causal inference, we propose a doubly-robust, nonparametric estimator that can incorporate flexible constraints into the modeling process. Then we go on to analyze rates of convergence and provide a closed-form expression for the asymptotic distribution of our estimator. Our analysis shows that the proposed estimator can attain fast √ n rates even when its nuisance components are estimated using nonparametric machine learning tools at slower rates. We study the finite-sample performance of our estimator via simulation and provide a case based on real data. Importantly, our algorithm and analysis are applicable to other problems in which the estimand is given by the solutions to a general nonlinear optimization problem whose objective function involves counterfactuals, where closed-form solutions are not available.
2 Problem and Setup
Suppose that we have access to an i.i.d. sample (Z1, ..., Zn) of n tuples Z = (Y,A,X) ∼ P for some distribution P, binary outcome Y ∈ {0, 1}, covariates X ∈ X ⊂ Rdx , and binary intervention A ∈ A = {0, 1}. For simplicity, we assume A and Y are binary, but in principle they can be multi-valued. We consider a general setting where only a subset of covariates V ⊆ X can be used for predicting the counterfactual outcome Y a. This allows for runtime confounding, where factors used by decision-makers are recorded in the training data but are not available for prediction (see [9] and references therein). We are concerned with the following constrained optimization problem
minimize_{β∈B}  L(Y^a, σ(β, b(V))) := −E{Y^a log σ(β, b(V)) + (1 − Y^a) log(1 − σ(β, b(V)))}
subject to  β ∈ S := {β | gj(β) ≤ 0, j ∈ J}    (P)
for some compact subset B ⊂ R^k, known C²-functions gj : B → R, σ : B × R^{k′} → (0, 1), and the index set J = {1, ..., m} for the inequality constraints. Here, σ is the score function and b(V) = [b1(V), ..., bk′(V)]⊤ represents a set of basis functions for V (e.g., truncated power series, kernel or spline basis functions, etc.). Note that we do not need to have k = k′; for example, depending on the modeling techniques, it is possible to have a much larger number of model parameters than the number of basis functions, i.e., k > k′. L(Y^a, σ(β, b(V))) is our classification risk based on the cross-entropy. S consists of deterministic inequality constraints1 and can be used to pursue a variety of practical purposes described in Section 1. Let β∗ denote an optimal solution of (P); β∗ gives the optimal model parameters (coefficients) that minimize the counterfactual classification risk under the given constraints.
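As a concrete illustration of this notation (and of the basis choice used later in Section 5.1), the expansion b(V) and the sigmoid score can be written as follows; the inclusion of an intercept column is an assumption made only for this sketch.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

def basis(V):
    """Example b(V): intercept, V, squares, and pairwise products (the choice used in Section 5.1)."""
    return PolynomialFeatures(degree=2, include_bias=True).fit_transform(V)

def sigmoid_score(beta, bV):
    """Default score sigma(beta^T b(V)) with k = k'."""
    return 1.0 / (1.0 + np.exp(-bV @ beta))
```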
Classification risk and score function. Our classification risk L(Y a, σ(β, b(V ))) is defined by the expected cross entropy loss between Y a and σ(β, b(V )). In order to estimate β∗, we first need to estimate this classification risk. Since it involves counterfactuals, the classification risk cannot be identified from observed data unless certain assumptions hold, which will be discussed shortly. The form of the score function σ(β, b(V )) depends on the specific classification technique we are using. Our default choice for σ is the sigmoid function with k = k′, which makes the classification
1 An equality constraint can always be expressed by a pair of inequality constraints.
risk strictly convex with respect to β. It should be noted, however, that more complex and flexible classification techniques (e.g., neural networks) can also be used without affecting the subsequent results, as long as they satisfy the required regularity assumptions discussed later in Section 4. Importantly, our approach is nonparametric; β∗ is the parameter of the best linear classifier with the sigmoid score in the expanded feature space spanned by b(V ), but we never assume an exact ‘log-linear’ relationship between Y a and b(V ) as in ordinary logistic regression models.
Identification. To estimate the counterfactual quantity L(Y a, σ(β, b(V ))) from the observed sample (Z1, ..., Zn), it must be expressed in terms of the observational data distribution P. This can be accomplished via the following standard causal assumptions [e.g., 17, Chapter 12]:
• (C1) Consistency: Y = Y a if A = a
• (C2) No unmeasured confounding: A ⊥⊥ Y a | X
• (C3) Positivity: P(A = a | X) > ε a.s. for some ε > 0
(C1) - (C3) will be assumed throughout this paper. Under these assumptions, our classification risk is identified as
L(β) = −E {E [Y | X,A = a] log σ(β, b(V )) + (1− E [Y | X,A = a]) log(1− σ(β, b(V )))} , (1)
where we let L(β) ≡ L(Y a, σ(β, b(V ))). Since we use the sigmoid function with an equal number of model parameters as basis functions, for clarity, hereafter we write σ(β⊤b(V )) = σ(β, b(V )). It is worth noting that even though we develop the estimator under the above set of causal assumptions, one may extend our methods to other identification strategies and settings (e.g., those of instrumental variables and mediation), since our approach is based on the analysis of a stochastic programming problem with generic estimated objective functions (see Appendix B).
Notation. Here we specify the basic notation used throughout the paper. For a real-valued vector v, let ∥v∥2 denote its Euclidean or L2-norm. Let Pn denote the empirical measure over (Z1, ..., Zn). Given a sample operator h (e.g., an estimated function), let P denote the conditional expectation over a new independent observation Z, as in P(h) = P{h(Z)} = ∫ h(z)dP(z). Use ∥h∥2,P to
denote the L2(P) norm of h, defined by ∥h∥2,P = {P(h²)}^{1/2} = {∫ h(z)² dP(z)}^{1/2}. Finally, let s∗(P) denote the set of optimal solutions of an optimization program P, i.e., β∗ ∈ s∗(P), and define dist(x, S) = inf{∥x − y∥2 : y ∈ S} to denote the distance from a point x to a set S.
3 Estimation Algorithm
Since (P) is not directly solvable, we need to find an approximating program of the “true" program (P). To this end, we shall first discuss the problem of obtaining estimates for the identified classification risk (1). To simplify notation, we first introduce the following nuisance functions
πa(X) = P[A = a | X], µa(X) = E[Y | X,A = a],
and let π̂a and µ̂a be their corresponding estimators. πa and µa are referred to as the propensity score and outcome regression function, respectively.
A natural estimator for (1) is given by L̂(β) = −Pn { µ̂a(X) log σ(β ⊤b(V )) + (1− µ̂a(X)) log(1− σ(β⊤b(V ))) } , (2)
where we simply plug in the regression estimates µ̂a into the empirical average of (1). Here, we construct a more efficient estimator based on the semiparametric approach in causal inference [21, 23]. Let
φa(Z; η) = [1(A = a)/πa(X)] {Y − µA(X)} + µa(X),
denote the uncentered efficient influence function for the parameter E {E[Y | X,A = a]}, where nuisance functions are defined by η = {πa(X), µa(X)}. Then it can be deduced that for an arbitrary
Algorithm 1: Doubly robust estimator for counterfactual classification
1. input: b(·), K
2. Draw (B1, ..., Bn) with Bi ∈ {1, ..., K}
3. for b = 1, ..., K do
4.   Let D0 = {Zi : Bi ≠ b} and D1 = {Zi : Bi = b}
5.   Obtain η̂−b by constructing π̂a, µ̂a on D0
6.   M1,b(β) ← empirical average of φa(Z; η̂−b) log σ(β⊤b(V)) over D1
7.   M0,b(β) ← empirical average of (1 − φa(Z; η̂−b)) log(1 − σ(β⊤b(V))) over D1
8. L̂(β) ← −∑_{b=1}^{K} {(1/n) ∑_{i=1}^{n} 1(Bi = b)} (M1,b(β) + M0,b(β))
9. solve (P̂) with L̂(β)
fixed real-valued function h : X → R, the uncentered efficient influence function for the parameter ψa := E {E[Y | X,A = a]h(X)} is given by φa(Z; η)h(X) (Lemma A.1 in the appendix). Now we provide an influence-function-based semiparametric estimator for ψa. Following [8, 22, 43, 59], we propose to use sample splitting to allow for arbitrarily complex nuisance estimators η̂. Specifically, we split the data into K disjoint groups, each with size of n/K approximately, by drawing variables (B1, ..., Bn) independent of the data, with Bi = b indicating that subject i was split into group b ∈ {1, ...,K}. Then the semiparametric estimator for ψa based on the efficient influence function and sample splitting is given by
ψ̂a = (1/K) ∑_{b=1}^{K} Pbn {φa(Z; η̂−b) h(X)} ≡ Pn {φa(Z; η̂−BK) h(X)},  (3)
where we let Pbn denote empirical averages over the set of units {i : Bi = b} in the group b and let η̂−b denote the nuisance estimator constructed only using those units {i : Bi ̸= b}. Under weak regularity conditions, this semiparametric estimator attains the efficiency bound with the double robustness property, and allows us to employ nonparametric machine learning methods while achieving the√ n-rate of convergence and valid inference under weak conditions (see Lemma A.1 in the appendix for the formal statement). If one is willing to rely on appropriate empirical process conditions (e.g., Donsker-type or low entropy conditions [53]), then η can be estimated on the same sample without sample splitting. However, this would limit the flexibility of the nuisance estimators.
The classification risk L(β) is a sum of two functionals, each of which is in the form of ψa. Thus, for each β, we propose to estimate the classification risk using (3) as follows:
L̂(β) = −Pn { φa(Z; η̂−BK ) log σ(β ⊤b(V )) + (1− φa(Z; η̂−BK )) log(1− σ(β⊤b(V ))) } . (4)
Now that we have proposed the efficient method to estimate the counterfactual component L(β), in what follows we provide an approximating program for (P) which we aim to actually solve by substituting L̂(β) for L(β)
minimize β∈B L̂(β) subject to β ∈ S. (P̂)
Let β̂ ∈ s∗(P̂). Then β̂ is our estimator for β∗. We summarize our algorithm detailing how to compute the estimator β̂ in Algorithm 1.
(P̂) is a smooth nonlinear optimization problem whose objective function depends on data. Unfortunately, unlike (P), (P̂) is not guaranteed to be convex in finite samples even if S is convex. Non-convex problems are usually more difficult than convex ones due to high variance and slow computing time. Nonetheless, substantial progress has been made recently [5, 42], and a number of efficient global optimization algorithms are available in open-source libraries (e.g., NLOPT). Also in order for more flexible implementation, one may adapt neural networks for our approach without the need for specifying σ and b; we discuss this in more detail in Section 6 as a promising future direction.
4 Asymptotic Analysis
This section is devoted to analyzing the rates of convergence and asymptotic distribution for the estimated optimal solution β̂. Unlike stochastic optimization, analysis of the statistical properties of optimal solutions to a general counterfactual optimization problem appears much more sparse. In what was perhaps the first study of the problem, [24] analyzed asymptotic behavior of optimal solutions for a particular class of nonlinear counterfactual optimization problems that can be cast into a parametric program with finite-dimensional stochastic parameters. However, the true program (P) does not belong to the class to which their analysis is applicable. Here, we derive the asymptotic properties of β̂ by considering similar assumptions as in [24].
We first introduce the following assumptions for our counterfactual component estimator L̂.
(A1) P(π̂a ∈ [ϵ, 1 − ϵ]) = 1 for some ϵ > 0
(A2) ∥µ̂a − µa∥2,P = oP(1) or ∥π̂a − πa∥2,P = oP(1)
(A3) ∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P = oP(n^{−1/2})
Assumptions (A1) - (A3) are commonly used in semiparametric estimation in the causal inference literature [20]. Next, for a feasible point β̄ ∈ S we define the active index set. Definition 4.1 (Active set). For β̄ ∈ S, we define the active index set J0 by
J0(β̄) = {1 ≤ j ≤ m | gj(β̄) = 0}.
Then we introduce the following technical condition on gj .
(B1) For each β∗ ∈ s∗(P),
d⊤∇²βgj(β∗)d ≥ 0 ∀ d ∈ {d | ∇βgj(β∗)⊤d = 0, j ∈ J0(β∗)}.
Assumption (B1) holds, for example, if each gj is locally convex around β∗. In what follows, based on the result of [47], we characterize the rates of convergence for β̂ in terms of the nuisance estimation error under relatively weak conditions. Theorem 4.1 (Rate of Convergence). Assume that (A1), (A2), and (B1) hold. Then
dist(β̂, s∗(P)) = OP(∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P + n^{−1/2}).
Hence, if we further assume the nonparametric condition (A3), we obtain dist(β̂, s∗(P)) = OP(n^{−1/2}).
Theorem 4.1 indicates that double robustness is possible for our estimator, and thereby √n rates are attainable even when each of the nuisance regression functions is estimated flexibly at much slower rates (e.g., n^{−1/4} rates for each), with a wide variety of modern nonparametric tools. Since L is continuously differentiable with bounded derivative, the consistency of the optimal value naturally follows by the result of Theorem 4.1 and the continuous mapping theorem. More specifically, in the following corollary, we show that the same rates are attained for the optimal value under identical conditions. Corollary 4.1 (Rate of Convergence for Optimal Value). Suppose (A1), (A2), (A3), (B1) hold and let v∗ and v̂ be the optimal values corresponding to β∗ ∈ s∗(P) and β̂, respectively. Then we have |v̂ − v∗| = OP(∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P + n^{−1/2}).
In order to conduct statistical inference, it is also desirable to characterize the asymptotic distribution of β̂. This requires stronger assumptions and a more specialized analysis [47]. Asymptotic properties of optimal solutions in stochastic programming are typically studied based on the generalization of the delta method for directionally differentiable mappings [e.g., 48–50]. Asymptotic normality is of particular interest since without asymptotic normality, consistency of the bootstrap is no longer guaranteed for the solution estimators [12].
We start with additional definitions of some popular regularity conditions with respect to (P).
Definition 4.2 (LICQ). Linear independence constraint qualification (LICQ) is satisfied at β̄ ∈ S if the vectors∇βgj(β̄), j ∈ J0(β̄) are linearly independent. Definition 4.3 (SC). Let L(β, γ) be the Lagrangian. Strict Complementarity (SC) is satisfied at β̄ ∈ S if, with multipliers γ̄j ≥ 0, j ∈ J0(β̄), the Karush-Kuhn-Tucker (KKT) condition
∇βL(β̄, γ̄) := ∇βL(β̄) + ∑_{j∈J0(β̄)} γ̄j ∇βgj(β̄) = 0,
is satisfied such that γ̄j > 0,∀j ∈ J0(β̄).
LICQ is arguably one of the most widely-used constraint qualifications that admit the first-order necessary conditions. SC means that if the j-th inequality constraint is active, then the corresponding dual variable is strictly positive, so exactly one of them is zero for each 1 ≤ j ≤ m. SC is widely used in the optimization literature, particularly in the context of parametric optimization [e.g., 50, 51]. We further require uniqueness of the optimal solution in (P).
(B2) Program (P) has a unique optimal solution β∗ (i.e., s∗(P) ≡ {β∗} is singleton).
Note that under (B2) if LICQ holds at β∗, then the corresponding multipliers are determined uniquely [56]. In the next theorem, we provide a closed-form expression for the asymptotic distribution of β̂. Theorem 4.2 (Asymptotic Distribution). Assume that (A1) - (A3), (B1), and (B2) hold, and that LICQ and SC hold at β∗ with the corresponding multipliers γ∗. Then
n^{1/2}(β̂ − β∗) = [ ∇²βL(β∗, γ∗)   B ; B⊤   0 ]^{−1} [ 1   0 ]⊤ Υ + oP(1)
for some k × |J0(β∗)| matrix B and random variable Υ such that
Υ d−→ N (0, var (φa(Z; η)h1(V, β∗) + {1− φa(Z; η)}h0(V, β∗))) ,
where
B = [ ∇βgj(β∗)⊤, j ∈ J0(β∗) ] ,
h1(V, β) = [1 / log σ(β⊤b(V))] · b(V) σ(β⊤b(V)){1 − σ(β⊤b(V))},
h0(V, β) = −[1 / log(1 − σ(β⊤b(V)))] · b(V) σ(β⊤b(V)){1 − σ(β⊤b(V))}.
The above theorem gives explicit conditions under which β̂ is √ n-consistent and asymptotically
normal. We harness the classical results of [48] that use an expansion of β̂ in terms of an auxiliary parametric program. To show asymptotic normality of β̂, linearity of the directional derivative of optimal solutions in the parametric program is required. We have accomplished this based on an appropriate form of the implicit function theorem [11]. This is in contrast to [33] that relied on the structure of the smooth, closed-form solution estimator that enables direct use of the delta method. Lastly, our results in this section can be extended to a more general constrained nonlinear optimization problem where the objective function involves counterfactuals (see Lemmas B.1, B.2 in the appendix).
5 Simulation and Case Study
5.1 Simulation
We explore the finite sample properties of our estimators in the simulated dataset where we aim to empirically demonstrate the double-robustness property described in Section 3. Our data generation process is as follows:
V ≡ X = (X1, ..., X6) ∼ N(0, I), πa(X) = expit(−X1 + 0.5X2 − 0.25X3 − 0.1X4 + 0.05X5 + 0.05X6),
Y = A1 {X1 + 2X2 − 2X3 −X4 +X5 + ε > 0}+ (1−A)1 {X1 + 2X2 − 2X3 −X4 +X6 + ε < 0} , ε ∼ N(0, 1).
Our classification target is Y^1. For b(X), we use X, X², and their pairwise products. We assume that we have box constraints for our solution: |β∗j| ≤ 1, j = 1, ..., k. Since there exist no other natural baselines, we compare our methods to the plug-in method where we use (2) for our approximating program (P̂). For nuisance estimation we use the cross-validation-based Super Learner ensemble via the SUPERLEARNER R package to combine generalized additive models, multivariate adaptive regression splines, and random forests. We use sample splitting as described in Algorithm 1 with K = 2 splits. We further consider two versions of each of our estimators, based on the correct and distorted X, where the distorted values are only used to estimate the outcome regression µa. The distortion is caused by the transformation X ↦ (X1X3X6, X2², X4/(1 + exp(X5)), exp(X5/2)).
To solve P̂, we first use the StoGo algorithm [40] via the NLOPTR R package as it has shown the best performance in terms of accuracy in the survey study of [35]. After running the StoGo, we then use the global optimum as a starting point for the BOBYQA local optimization algorithm [41] to further polish the optimum to a greater accuracy. We use sample sizes n = 1k, 2.5k, 5k, 7.5k, 10k and repeat the simulation 100 times for each n. Then we compute the average of |v∗− v̂| and ∥β∗− β̂∥2. Using the estimated counterfactual predictor, we also compute the classification error on an independent sample with the equal sample size. Standard error bars are presented around each point. The results with the correct and distorted X are presented in Figures 1 and 2, respectively.
With the correct X, it appears that the proposed estimator performs as well as or slightly better than the plug-in method. However, in Figure 2, when µ̂a is constructed based on the distorted X, the proposed estimator gives substantially smaller errors in general and improves more rapidly with n. This is indicative of the fact that the proposed estimator has the doubly-robust, second-order multiplicative bias, thus supporting our theoretical results in Section 4.
5.2 Case Study: COMPAS Dataset
Next we apply our method for recidivism risk prediction using the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) dataset 2. This dataset was originally designed to assess the COMPAS recidivism risk scores, and has been utilized for studying machine bias in the context of algorithmic fairness [2]. More recently, the dataset has been reanalyzed in the framework of counterfactual outcomes [32–34]. Here, we focus purely on predictive purpose. We let A represent pretrial release, with A = 0 if defendants are released and A = 1 if they are incarcerated, following methodology suggested by [34].3 We aim to classify the binary counterfactual outcome Y 0 that indicates whether a defendant is rearrested within two years, should the defendant be released pretrial. We use the dataset for two-year recidivism records with five covariates: age, sex, number of prior arrests, charge degree, and race. We consider three racial groups: Black, White, and Hispanic. We split the data (n = 5787) randomly into two groups: a training set with 3000 observations and a test set with the rest. Other model settings remain the same as our simulation in the previous subsection, including the box constraints.
Figure 3 and Table 1 show that the proposed doubly-robust method achieves moderately higher ROC AUC and classification accuracy than both the plug-in and the raw COMPAS risk scores. This comparative advantage is likely to increase in settings where we expect the identification and regularity assumptions to be more likely to hold, for example, where we can have access to more covariates or more information about the treatment mechanism.
6 Discussion
In this paper we studied the problem of counterfactual classification under arbitrary smooth constraints, and proposed a doubly-robust estimator which leverages nonparametric machine learning methods. Our theoretical framework is not limited to counterfactual classification and can be applied to other settings where the estimand is the optimal solution of a general smooth nonlinear programming problem with a counterfactual objective function; thus, we complement the results of [24, 33], each of which considered a particular class of smooth nonlinear programming.
2 https://github.com/propublica/compas-analysis
3 The dataset itself does not include information on whether defendants were released pretrial, but it includes dates in and out of jail. So we set the treatment A to 0 if defendants left jail within three days of being arrested, and 1 otherwise, as Florida state law generally requires individuals to be brought before a judge for a bail hearing within 2 days of arrest [34, Section 6.2].
We emphasize that one may use our proposed approach for other common problems in causal inference, e.g., estimation of the contrast effects or optimal treatment regimes, even under runtime confounding and/or other practical constraints. We may accomplish this by simply estimating each component E[Y a | X] via solving (P) for different values of a, and then taking the conditional mean contrast of interest. We can also readily adapt our procedure (P) for such standard estimands, for example by replacing Y a with the desired contrast or utility formula, in which the influence function will be very similar to those already presented in our manuscript. In ongoing work, we develop extensions for estimating the CATE and optimal treatment regimes under fairness constraints.
Although not explored in this work, our estimation procedure could be improved by applying more sophisticated and flexible modeling techniques for solving (P). One promising approach is to build a neural network that minimizes the loss (4) with the nuisance estimates {φa(Zi; η̂−BK )}i constructed on the separate independent sample; in this case, β is the weights of the network where k ≫ k′. Importantly, in the neural network approach we do not need to specify and construct the score and basis functions; the ideal form of those unknown functions are learned through backpropagation. Hence, we can avoid explicitly formulating and solving a complex non-convex optimization problem. Further, one may employ a rich source of deep-learning tools. In future work, we plan to pursue this extension and apply our methods to a large-scale real-world dataset.
We conclude with other potential limitations of our methods, and ways in which our work could be generalized. First, we considered the fixed feasible set that consists of only deterministic constraints. However, sometimes it may be useful to consider the general case where gj’s need to be estimated as well. This can be particularly helpful when incorporating general fairness constraints [14, 33, 34]. Dealing with the varying feasible set with general nonlinear constraints is a complicated task and requires even stronger assumptions [48]. As future work, we plan to generalize our framework to the case of a varying feasible set. Next, although we showed that the counterfactual objective function is estimated efficiently via L̂, it is unclear whether the solution estimator β̂ is efficient too, due to the inherent complexity of the optimal solution mapping in the presence of constraints. We conjecture that one may show that the semiparametric efficiency bound can also be attained for β̂ possibly under slightly stronger regularity assumptions, but we leave this for future work. | 1. What is the main contribution of the paper regarding counterfactual classification?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its theoretical properties and practical implementation?
3. How does the paper motivate the importance of solving the objective (P), and how does it compare to alternative proposals?
4. What are some minor comments and suggestions for improving the paper's clarity and readability?
5. Are there any concerns or questions regarding the paper's treatment of runtime confounding and the use of basis functions? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes to perform counterfactual classification (inferring categorical/binary potential outcomes) by minimizing a cross-entropy objective with constraints. The objective is not directly computable (because of unobserved counterfactual outcomes), so the authors propose to use uncentered efficient influence functions (based on model propensity and model expected outcome functions) to compute the objective. The authors then show double robustness properties of their estimate, i.e. they show that the distance between their solution β̂ and the true solution set s∗(P) is upper bounded by the "estimation quality" ||π̂a − πa|| and ||μ̂a − μa|| of the propensity & outcome models, and the achieved objective value v̂ is similarly close to the optimal v∗. Under additional assumptions (LICQ, SC, B2), they show that β̂ is √n-consistent and asymptotically normal. They illustrate the stated theoretical properties with a toy example, and they also evaluate the classification accuracy of their method on the COMPAS recidivism prediction task.
Strengths And Weaknesses
I will first list the weaknesses of the paper, then its strengths, then give minor comments.
Weaknesses:
I think my biggest concern about the paper is that it is difficult to understand why we should care about solving problem (P) (Section 2). Typically, for categorical outcomes Y, people assess quantities like Relative Risk (RR) or (Conditional) Average Treatment Effects (CATE/ATE). They then can make claims about how good their estimates are relative to those "ground-truth" quantities (CATE/RR), and can analyze e.g., consistency & convergence to CATE/RR. But here it's unclear why β∗ is an important quantity, and why we should care about consistency & rates of convergence of an estimate β̂ to the "ground-truth" quantity β∗.
Overall, I think the introduction needs some work. Currently, it is too short and does not motivate the proposed solution well. This is related to my first point – ideally, the intro should make it crystal clear why the reader/the causal inference community should care about solving the objective (P). Some concrete comments:
-- L35 “Our approach allows investigators to flexibly incorporate various constraints into the models [...] to accommodate a wide range of practical constraints relevant to their classification tasks.” – would be good to give examples here. Same comment for L54 “possibly with various constraints on our model.” Eventually, you do give some examples of constraints (L66/67) but these should be moved earlier in the text.
-- L37 “This poses both theoretical…” – what is “this” here? Ambiguous.
-- In general, the intro does not help me/the reader understand why your solution is appropriate, and what is wrong with alternative proposals.
--It wouldn’t hurt to further expand on the differences between your work and [31], i.e. expanding on the points in L77-79, and clearly delineating the differences.
Strengths:
Now that I’ve said the unpleasant stuff, I’d like to talk about the strengths of the paper. Suppose that the reader is convinced that problem (P) is worth solving, and that
β
∗
is indeed a quantity of interest.
Sections 2,3,4 are very well written, and it is mostly easy to follow the authors’ reasoning as they guide the reader to their solution.
The theoretical results stated in section 4 are nice, and the authors use the toy experiment (Sec 5.1) effectively to showcase their theoretical results.
Minor/Misc.:
Please make the y-axis the same in Figs 1 and 2 (i.e., 1 scale per row) so we can see the effect of X distortion.
L158: “Here, we propose a more efficient estimator”. Respectfully, you did not propose this. Rather, you can say something like “we use…” and cite a reference for uncentered influence functions. This is a minor point, just a matter of language.
Any practical guidance for readers on how to pick the basis functions b(·)? How does the choice of basis function affect the solution to the problem?
In problem (P), I would just write σ(β⊤b(V)) from the beginning, not σ(β, b(V)); this just makes it more difficult to understand (since you end up just using σ(β⊤b(V)) anyway).
Questions
Why do you make a point about runtime confounding if you are setting V==X in the toy experiment? Since you are making a point about how your method can deal with runtime confounding, I would suggest you either (a) showcase this or (b) remove the points about runtime confounding.
On L110, σ : B × X → (0, 1) – should it be X here? Since we are using the features b(V) as input?
How do you justify the distorted X? Can you provide intuition/an explanation for why the plug-in method "fails" here? Would be good to put this in the text too.
Limitations
Yes, I think the authors adequately address limitations of their work & concretely outline next steps. |
NIPS | Title
Doubly Robust Counterfactual Classification
Abstract
We study counterfactual classification as a new tool for decision-making under hypothetical (contrary to fact) scenarios. We propose a doubly-robust nonparametric estimator for a general counterfactual classifier, where we can incorporate flexible constraints by casting the classification problem as a nonlinear mathematical program involving counterfactuals. We go on to analyze the rates of convergence of the estimator and provide a closed-form expression for its asymptotic distribution. Our analysis shows that the proposed estimator is robust against nuisance model misspecification, and can attain fast √ n rates with tractable inference even when using nonparametric machine learning approaches. We study the empirical performance of our methods by simulation and apply them for recidivism risk prediction.
1 Introduction
Counterfactual or potential outcomes are often used to describe how an individual would respond to a specific treatment or event, irrespective of whether the event actually takes place. Counterfactual outcomes are commonly used for causal inference, where we are interested in measuring the effect of a treatment on an outcome variable [15, 16, 45].
Recently, counterfactual outcomes have also proved useful for predicting outcomes under hypothetical interventions. This is commonly referred to as counterfactual prediction. Counterfactual prediction can be particularly useful to inform decision-making in clinical practice. For example, in order for physicians to make effective treatment decisions, they often need to predict risk scores assuming no treatment is given; if a patient’s risk is relatively low, then she or he may not need treatment. However, when a treatment is initiated after baseline, simply operationalizing the hypothetical treatment as another baseline predictor will rarely give the correct (counterfactual) risk estimates because of confounding [58]. Counterfactual prediction can be also helpful when we want our prediction model developed in one setting to yield predictions successfully transportable to other settings with different treatment patterns. Suppose that we develop our risk prediction model in a setting where most patients have access to an effective (post-baseline) treatment. However, if we deploy our factual prediction model in a new setting in which few individuals have access to the treatment, our model is likely to fail in the sense that it may not be able to accurately identify high-risk individuals. Counterfactual prediction may allow us to achieve more robust model performance compared to factual prediction, even when model deployment influences behaviors that affect risk. [see, e.g., 10, 27, 54, for more examples].
However, the problem of counterfactual prediction brings challenges that do not arise in typical prediction problems because the data needed to build the predictive models are inherently not fully
observable. Surprisingly, while the development of modern prediction modeling has greatly enriched the counterfactual-outcome-based causal inference particularly via semi-parametric methods [20, 23], the use of causal inference to improve prediction modeling has received less attention [see, e.g., 10, 46, for a discussion on the subject].
In this work, we study counterfactual classification, a special case of counterfactual prediction where the outcome is discrete. Our approach allows investigators to flexibly incorporate various constraints into the models, not only to enhance their predictive performance but also to accommodate a wide range of practical constraints relevant to their classification tasks. Counterfactual classification poses both theoretical and practical challenges, as a result of the fact that in our setting, even without any constraints, the estimand is not expressible as a closed form functional unlike typical causal inference problems. We tackle this problem by framing counterfactual classification as nonlinear stochastic programming with counterfactual components.
1.1 Related Work
Our work lies at the intersection of causal inference and stochastic optimization.
Counterfactual prediction is closely related to estimation of the conditional average treatment effect (CATE) in causal inference, which plays a crucial role in precision medicine and individualized policy. Let Y a denote the counterfactual outcome that would have been observed under treatment or intervention A = a, A ∈ {0, 1}. The CATE for subjects with covariate X = x is defined as τ(x) = E[Y 1− Y 0 | X = x]. There exists a vast literature on estimating CATE. These include some important early works assuming that τ(x) follows some known parametric form [e.g., 44, 52, 55]. But more recently, there has been an effort to leverage flexible nonparametric machine learning methods [e.g., 1, 3, 22, 25, 29, 31, 39, 57]. A desirable property commonly held in the above CATE estimation methods is that the function τ(x) may be more structured and simple than its component main effect function E[Y a | X = x]. In counterfactual prediction, however, we are fundamentally interested in predicting Y a conditional on X = x under a “single" hypothetical intervention A = a, as opposed to the contrast of the conditional mean outcomes under two (or more) interventions as in CATE. Counterfactual prediction is often useful to support decision-making on its own. There are settings where estimating the contrast effect or relative risk is less relevant than understanding what may happen if a subject was given a certain intervention. As mentioned previously, this is particularly the case in clinical research when predicting risk in relation to treatment started after baseline [10, 27, 46, 54]. Moreover, in the context of multi-valued treatments, it can be more useful to estimate each individual conditional mean potential outcome separately than to estimate all the possible combinations of relative effects.
With no constraints, under appropriate identification assumptions (e.g., (C1)-(C3) in Section 2), counterfactual prediction is equivalent to estimating a standard regression function E[Y | X,A = a] so in principle one could use any regression estimator. This direct modeling or plug-in approach has been used for counterfactual prediction in randomized controlled trials [e.g., 26, 38] or as a component of CATE estimation methods [e.g., 3, 29]. An issue arises when we are estimating a projection of this function onto a finite-dimensional model, or where we instead want to estimate E[Y a | V ] = E{E[Y | X,A = a] | V } for some smaller subset V ⊂ X (e.g., under runtime confounding [9]), which typically renders the plug-in approach suboptimal. Moreover, the resulting estimator fails to have double robustness, a highly desirable property which provides an additional layer of robustness against model misspecification [4].
On the other hand, we often want to incorporate various constraints into our predictive models. Such constraints are often used for flexible penalization [18] or supplying prior information [13] to enhance model performance and interpretability. They can also be used to mitigate algorithmic biases [6, 14]. Further, depending on the scientific question, practitioners occasionally have some constraints which they wish to place on their prediction tasks, such as targeting specific sub-populations, restricting sign or magnitude on certain regression coefficients to be consistent with common sense, or accounting for the compositional nature of the data [7, 19, 28]. In the plug-in approach, however, it is not clear how to incorporate the given constraints into the modeling process.
In our approach, we directly formulate and solve an optimization problem that minimizes counterfactual classification risk, where we can flexibly incorporate various forms of constraints. Optimization problems involving counterfactuals or counterfactual optimization have not been extensively studied,
with few exceptions [e.g., 24, 30, 33, 34]. Our results are closest to [33] and [24], which study counterfactual optimization in a class of quadratic and nonlinear programming problems, respectively, yet this approach i) is not applicable to classification where the risk is defined with respect to the cross-entropy, and ii) considers only linear constraints.
As in [24], we tackle the problem of counterfactual classification from the perspective of stochastic programming. The two most common approaches in stochastic programming are stochastic approximation (SA) and sample average approximation (SAA) [e.g., 36, 50]. However, since i) we cannot compute sample moments or stochastic subgradients that involve unobserved counterfactuals, and ii) the SA and SAA approaches cannot harness efficient estimators for counterfactual components, e.g., doubly-robust or semiparametric estimators with cross-fitting [8, 37], more general approaches beyond the standard SA and SAA settings should be considered [e.g., 47–49] at the expense of stronger assumptions on the behavior of the optimal solution and its estimator.
1.2 Contribution
We study counterfactual classification as a new decision-making tool under hypothetical (contrary to fact) scenarios. Based on semiparametric theory for causal inference, we propose a doubly-robust, nonparametric estimator that can incorporate flexible constraints into the modeling process. Then we go on to analyze rates of convergence and provide a closed-form expression for the asymptotic distribution of our estimator. Our analysis shows that the proposed estimator can attain fast √ n rates even when its nuisance components are estimated using nonparametric machine learning tools at slower rates. We study the finite-sample performance of our estimator via simulation and provide a case based on real data. Importantly, our algorithm and analysis are applicable to other problems in which the estimand is given by the solutions to a general nonlinear optimization problem whose objective function involves counterfactuals, where closed-form solutions are not available.
2 Problem and Setup
Suppose that we have access to an i.i.d. sample (Z1, ..., Zn) of n tuples Z = (Y,A,X) ∼ P for some distribution P, binary outcome Y ∈ {0, 1}, covariates X ∈ X ⊂ Rdx , and binary intervention A ∈ A = {0, 1}. For simplicity, we assume A and Y are binary, but in principle they can be multi-valued. We consider a general setting where only a subset of covariates V ⊆ X can be used for predicting the counterfactual outcome Y a. This allows for runtime confounding, where factors used by decision-makers are recorded in the training data but are not available for prediction (see [9] and references therein). We are concerned with the following constrained optimization problem
minimize_{β ∈ B}   L(Y a, σ(β, b(V))) := −E{Y a log σ(β, b(V)) + (1 − Y a) log(1 − σ(β, b(V)))}
subject to   β ∈ S := {β | gj(β) ≤ 0, j ∈ J}     (P)

for some compact subset B ⊂ R^k, known C²-functions gj : B → R, σ : B × R^{k′} → (0, 1), and the index set J = {1, ..., m} for the inequality constraints. Here, σ is the score function and b(V) = [b1(V), ..., bk′(V)]⊤ represents a set of basis functions for V (e.g., truncated power series, kernel or spline basis functions, etc.). Note that we do not need to have k = k′; for example, depending on the modeling techniques, it is possible to have a much larger number of model parameters than the number of basis functions, i.e., k > k′. L(Y a, σ(β, b(V))) is our classification risk based on the cross-entropy. S consists of deterministic inequality constraints¹ and can be used to pursue a variety of practical purposes described in Section 1. Let β∗ denote an optimal solution in (P). β∗ gives our optimal model parameters (coefficients) that minimize the counterfactual classification risk under the given constraints.
Classification risk and score function. Our classification risk L(Y a, σ(β, b(V ))) is defined by the expected cross entropy loss between Y a and σ(β, b(V )). In order to estimate β∗, we first need to estimate this classification risk. Since it involves counterfactuals, the classification risk cannot be identified from observed data unless certain assumptions hold, which will be discussed shortly. The form of the score function σ(β, b(V )) depends on the specific classification technique we are using. Our default choice for σ is the sigmoid function with k = k′, which makes the classification
¹ Equality constraints can always be expressed by a pair of inequality constraints.
risk strictly convex with respect to β. It should be noted, however, that more complex and flexible classification techniques (e.g., neural networks) can also be used without affecting the subsequent results, as long as they satisfy the required regularity assumptions discussed later in Section 4. Importantly, our approach is nonparametric; β∗ is the parameter of the best linear classifier with the sigmoid score in the expanded feature space spanned by b(V ), but we never assume an exact ‘log-linear’ relationship between Y a and b(V ) as in ordinary logistic regression models.
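For concreteness, here is a minimal sketch of these ingredients (not part of the paper; the helper names and the particular polynomial basis are illustrative choices): a simple b(V), the sigmoid score σ(β⊤b(V)), and the oracle cross-entropy risk that (P) would minimize if Y a were observed.

```python
import numpy as np

def basis(V):
    """An illustrative choice of b(V): intercept, linear, and squared terms.
    Any spline, kernel, or polynomial expansion could be used instead."""
    return np.column_stack([np.ones(len(V)), V, V ** 2])

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def oracle_risk(beta, V, Y_a):
    """Cross-entropy L(Y^a, sigma(beta^T b(V))); computable only if Y^a were observed."""
    p = np.clip(sigmoid(basis(V) @ beta), 1e-8, 1 - 1e-8)
    return -np.mean(Y_a * np.log(p) + (1 - Y_a) * np.log(1 - p))
```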
Identification. To estimate the counterfactual quantity L(Y a, σ(β, b(V ))) from the observed sample (Z1, ..., Zn), it must be expressed in terms of the observational data distribution P. This can be accomplished via the following standard causal assumptions [e.g., 17, Chapter 12]:
• (C1) Consistency: Y = Y a if A = a
• (C2) No unmeasured confounding: A ⊥⊥ Y a | X
• (C3) Positivity: P(A = a | X) > ε a.s. for some ε > 0
(C1) - (C3) will be assumed throughout this paper. Under these assumptions, our classification risk is identified as
L(β) = −E {E [Y | X,A = a] log σ(β, b(V )) + (1− E [Y | X,A = a]) log(1− σ(β, b(V )))} , (1)
where we let L(β) ≡ L(Y a, σ(β, b(V ))). Since we use the sigmoid function with an equal number of model parameters as basis functions, for clarity, hereafter we write σ(β⊤b(V )) = σ(β, b(V )). It is worth noting that even though we develop the estimator under the above set of causal assumptions, one may extend our methods to other identification strategies and settings (e.g., those of instrumental variables and mediation), since our approach is based on the analysis of a stochastic programming problem with generic estimated objective functions (see Appendix B).
Notation. Here we specify the basic notation used throughout the paper. For a real-valued vector v, let ∥v∥2 denote its Euclidean or L2-norm. Let Pn denote the empirical measure over (Z1, ..., Zn). Given a sample operator h (e.g., an estimated function), let P denote the conditional expectation over a new independent observation Z, as in P(h) = P{h(Z)} = ∫ h(z)dP(z). Use ∥h∥2,P to
denote the L2(P) norm of h, defined by ∥h∥2,P = [P(h²)]^{1/2} = [∫ h(z)² dP(z)]^{1/2}. Finally, let s∗(P) denote the set of optimal solutions of an optimization program P, i.e., β∗ ∈ s∗(P), and define dist(x, S) = inf{∥x − y∥2 : y ∈ S} to denote the distance from a point x to a set S.
3 Estimation Algorithm
Since (P) is not directly solvable, we need to find an approximating program of the “true" program (P). To this end, we shall first discuss the problem of obtaining estimates for the identified classification risk (1). To simplify notation, we first introduce the following nuisance functions
πa(X) = P[A = a | X], µa(X) = E[Y | X,A = a],
and let π̂a and µ̂a be their corresponding estimators. πa and µa are referred to as the propensity score and outcome regression function, respectively.
A natural estimator for (1) is given by L̂(β) = −Pn { µ̂a(X) log σ(β ⊤b(V )) + (1− µ̂a(X)) log(1− σ(β⊤b(V ))) } , (2)
where we simply plug in the regression estimates µ̂a into the empirical average of (1). Here, we construct a more efficient estimator based on the semiparametric approach in causal inference [21, 23]. Let
φa(Z; η) = {1(A = a)/πa(X)}{Y − µA(X)} + µa(X),
denote the uncentered efficient influence function for the parameter E {E[Y | X,A = a]}, where nuisance functions are defined by η = {πa(X), µa(X)}. Then it can be deduced that for an arbitrary
Algorithm 1: Doubly robust estimator for counterfactual classification
1: input: b(·), K
2: Draw (B1, ..., Bn) with Bi ∈ {1, ..., K}
3: for b = 1, ..., K do
4:    Let D0 = {Zi : Bi ≠ b} and D1 = {Zi : Bi = b}
5:    Obtain η̂−b by constructing π̂a, µ̂a on D0
6:    M1,b(β) ← empirical average of φa(Z; η̂−b) log σ(β⊤b(V)) over D1
7:    M0,b(β) ← empirical average of (1 − φa(Z; η̂−b)) log(1 − σ(β⊤b(V))) over D1
8: L̂(β) ← Σ_{b=1}^{K} {(1/n) Σ_{i=1}^{n} 1(Bi = b)} (M1,b(β) + M0,b(β))
9: solve (P̂) with L̂(β)
fixed real-valued function h : X → R, the uncentered efficient influence function for the parameter ψa := E {E[Y | X,A = a]h(X)} is given by φa(Z; η)h(X) (Lemma A.1 in the appendix). Now we provide an influence-function-based semiparametric estimator for ψa. Following [8, 22, 43, 59], we propose to use sample splitting to allow for arbitrarily complex nuisance estimators η̂. Specifically, we split the data into K disjoint groups, each with size of n/K approximately, by drawing variables (B1, ..., Bn) independent of the data, with Bi = b indicating that subject i was split into group b ∈ {1, ...,K}. Then the semiparametric estimator for ψa based on the efficient influence function and sample splitting is given by
ψ̂a = (1/K) Σ_{b=1}^{K} P^b_n {φa(Z; η̂−b) h(X)} ≡ Pn{φa(Z; η̂−BK) h(X)},   (3)
where we let Pbn denote empirical averages over the set of units {i : Bi = b} in the group b and let η̂−b denote the nuisance estimator constructed only using those units {i : Bi ̸= b}. Under weak regularity conditions, this semiparametric estimator attains the efficiency bound with the double robustness property, and allows us to employ nonparametric machine learning methods while achieving the√ n-rate of convergence and valid inference under weak conditions (see Lemma A.1 in the appendix for the formal statement). If one is willing to rely on appropriate empirical process conditions (e.g., Donsker-type or low entropy conditions [53]), then η can be estimated on the same sample without sample splitting. However, this would limit the flexibility of the nuisance estimators.
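As a rough illustration of this cross-fitting step, the sketch below computes the pseudo-outcomes φa(Z; η̂−b). The scikit-learn learners shown are stand-ins for whatever nuisance estimators one prefers (the paper's experiments use a Super Learner ensemble), and the clipping thresholds are an implementation choice, not from the paper.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingRegressor

def crossfit_pseudo_outcomes(X, A, Y, a=1, K=2, seed=0):
    """phi_a(Z; eta_hat_{-b}) = 1(A=a)/pi_hat_a(X) * (Y - mu_hat_a(X)) + mu_hat_a(X),
    computed with K-fold cross-fitting; nuisance learners are simple stand-ins."""
    phi = np.zeros(len(Y))
    for train, test in KFold(n_splits=K, shuffle=True, random_state=seed).split(X):
        # propensity model pi_a(X) = P(A = a | X), fit on the held-out folds
        ps = LogisticRegression(max_iter=1000).fit(X[train], A[train])
        pi_a = np.clip(ps.predict_proba(X[test])[:, 1] if a == 1
                       else ps.predict_proba(X[test])[:, 0], 1e-3, 1 - 1e-3)
        # outcome model mu_a(X) = E[Y | X, A = a], fit on training units with A = a
        tr_a = train[A[train] == a]
        mu = GradientBoostingRegressor().fit(X[tr_a], Y[tr_a])
        mu_a = np.clip(mu.predict(X[test]), 0.0, 1.0)
        phi[test] = (A[test] == a) / pi_a * (Y[test] - mu_a) + mu_a
    return phi
```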
The classification risk L(β) is a sum of two functionals, each of which is in the form of ψa. Thus, for each β, we propose to estimate the classification risk using (3) as follows
L̂(β) = −Pn { φa(Z; η̂−BK ) log σ(β ⊤b(V )) + (1− φa(Z; η̂−BK )) log(1− σ(β⊤b(V ))) } . (4)
Now that we have proposed the efficient method to estimate the counterfactual component L(β), in what follows we provide an approximating program for (P) which we aim to actually solve by substituting L̂(β) for L(β)
minimize β∈B L̂(β) subject to β ∈ S. (P̂)
Let β̂ ∈ s∗(P̂). Then β̂ is our estimator for β∗. We summarize our algorithm detailing how to compute the estimator β̂ in Algorithm 1.
(P̂) is a smooth nonlinear optimization problem whose objective function depends on data. Unfortunately, unlike (P), (P̂) is not guaranteed to be convex in finite samples even if S is convex. Non-convex problems are usually more difficult than convex ones due to high variance and slow computing time. Nonetheless, substantial progress has been made recently [5, 42], and a number of efficient global optimization algorithms are available in open-source libraries (e.g., NLOPT). Also in order for more flexible implementation, one may adapt neural networks for our approach without the need for specifying σ and b; we discuss this in more detail in Section 6 as a promising future direction.
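Continuing the sketch above, the estimated program (P̂) plugs the cross-fitted pseudo-outcomes into the cross-entropy and minimizes over β subject to the constraints. The paper solves (P̂) with StoGo followed by BOBYQA via nloptr in R; the SciPy-based multi-start box-constrained minimization below is only a simpler stand-in, written for the box constraints |βj| ≤ 1 used in Section 5.

```python
import numpy as np
from scipy.optimize import minimize

def dr_risk(beta, bV, phi):
    """Doubly robust objective L_hat(beta) from (4); phi are the cross-fitted pseudo-outcomes."""
    p = np.clip(1.0 / (1.0 + np.exp(-bV @ beta)), 1e-8, 1 - 1e-8)
    return -np.mean(phi * np.log(p) + (1 - phi) * np.log(1 - p))

def solve_P_hat(bV, phi, lower=-1.0, upper=1.0, n_starts=20, seed=0):
    """Minimize L_hat over box constraints with random multi-start L-BFGS-B,
    a rough substitute for the StoGo + BOBYQA pipeline used in the paper."""
    rng = np.random.default_rng(seed)
    k = bV.shape[1]
    bounds = [(lower, upper)] * k
    best = None
    for _ in range(n_starts):
        beta0 = rng.uniform(lower, upper, size=k)
        res = minimize(dr_risk, beta0, args=(bV, phi), method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun
```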
4 Asymptotic Analysis
This section is devoted to analyzing the rates of convergence and asymptotic distribution for the estimated optimal solution β̂. Unlike stochastic optimization, analysis of the statistical properties of optimal solutions to a general counterfactual optimization problem appears much more sparse. In what was perhaps the first study of the problem, [24] analyzed asymptotic behavior of optimal solutions for a particular class of nonlinear counterfactual optimization problems that can be cast into a parametric program with finite-dimensional stochastic parameters. However, the true program (P) does not belong to the class to which their analysis is applicable. Here, we derive the asymptotic properties of β̂ by considering similar assumptions as in [24].
We first introduce the following assumptions for our counterfactual component estimator L̂.
(A1) P(π̂a ∈ [ϵ, 1 − ϵ]) = 1 for some ϵ > 0
(A2) ∥µ̂a − µa∥2,P = oP(1) or ∥π̂a − πa∥2,P = oP(1)
(A3) ∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P = oP(n^{−1/2})
Assumptions (A1) - (A3) are commonly used in semiparametric estimation in the causal inference literature [20]. Next, for a feasible point β̄ ∈ S we define the active index set. Definition 4.1 (Active set). For β̄ ∈ S, we define the active index set J0 by
J0(β̄) = {1 ≤ j ≤ m | gj(β̄) = 0}.
Then we introduce the following technical condition on gj .
(B1) For each β∗ ∈ s∗(P),
d⊤∇2βgj(β∗)d ≥ 0 ∀d ∈ {d | ∇βgj(β∗) = 0, j ∈ J0(β̄)}.
Assumption (B1) holds, for example, if each gj is locally convex around β∗. In what follows, based on the result of [47], we characterize the rates of convergence for β̂ in terms of the nuisance estimation error under relatively weak conditions.
Theorem 4.1 (Rate of Convergence). Assume that (A1), (A2), and (B1) hold. Then
dist(β̂, s∗(P)) = OP(∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P + n^{−1/2}).
Hence, if we further assume the nonparametric condition (A3), we obtain dist(β̂, s∗(P)) = OP(n^{−1/2}).
Theorem 4.1 indicates that double robustness is possible for our estimator, and thereby √n rates are attainable even when each of the nuisance regression functions is estimated flexibly at much slower rates (e.g., n^{−1/4} rates for each), with a wide variety of modern nonparametric tools. Since L is continuously differentiable with bounded derivative, the consistency of the optimal value naturally follows by the result of Theorem 4.1 and the continuous mapping theorem. More specifically, in the following corollary, we show that the same rates are attained for the optimal value under identical conditions.
Corollary 4.1 (Rate of Convergence for Optimal Value). Suppose (A1), (A2), (A3), (B1) hold and let v∗ and v̂ be the optimal values corresponding to β∗ ∈ s∗(P) and β̂, respectively. Then we have |v̂ − v∗| = OP(∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P + n^{−1/2}).
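As a worked illustration of these rates (an added example, not a statement beyond Theorem 4.1): if each nuisance estimator converges at order n^{−1/4}, the product bias term is already of parametric order,

```latex
\|\hat{\pi}_a-\pi_a\|_{2,\mathbb{P}}\;\|\hat{\mu}_a-\mu_a\|_{2,\mathbb{P}}
  = O_{\mathbb{P}}\!\left(n^{-1/4}\right)\cdot O_{\mathbb{P}}\!\left(n^{-1/4}\right)
  = O_{\mathbb{P}}\!\left(n^{-1/2}\right),
\quad\text{so}\quad
\operatorname{dist}\!\left(\hat{\beta},\,s^{*}(\mathrm{P})\right)
  = O_{\mathbb{P}}\!\left(n^{-1/2}\right),
```

and the stricter condition (A3) holds whenever the nuisance rates are slightly faster than n^{−1/4}, for example when one of the two models converges faster.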
In order to conduct statistical inference, it is also desirable to characterize the asymptotic distribution of β̂. This requires stronger assumptions and a more specialized analysis [47]. Asymptotic properties of optimal solutions in stochastic programming are typically studied based on the generalization of the delta method for directionally differentiable mappings [e.g., 48–50]. Asymptotic normality is of particular interest since without asymptotic normality, consistency of the bootstrap is no longer guaranteed for the solution estimators [12].
We start with additional definitions of some popular regularity conditions with respect to (P).
Definition 4.2 (LICQ). Linear independence constraint qualification (LICQ) is satisfied at β̄ ∈ S if the vectors ∇βgj(β̄), j ∈ J0(β̄) are linearly independent.
Definition 4.3 (SC). Let L(β, γ) be the Lagrangian. Strict Complementarity (SC) is satisfied at β̄ ∈ S if, with multipliers γ̄j ≥ 0, j ∈ J0(β̄), the Karush-Kuhn-Tucker (KKT) condition
∇βL(β̄, γ̄) := ∇βL(β̄) + Σ_{j ∈ J0(β̄)} γ̄j ∇βgj(β̄) = 0,
is satisfied such that γ̄j > 0, ∀j ∈ J0(β̄).
LICQ is arguably one of the most widely-used constraint qualifications that admit the first-order necessary conditions. SC means that if the j-th inequality constraint is active, then the corresponding dual variable is strictly positive, so exactly one of them is zero for each 1 ≤ j ≤ m. SC is widely used in the optimization literature, particularly in the context of parametric optimization [e.g., 50, 51]. We further require uniqueness of the optimal solution in (P).
(B2) Program (P) has a unique optimal solution β∗ (i.e., s∗(P) ≡ {β∗} is singleton).
Note that under (B2) if LICQ holds at β∗, then the corresponding multipliers are determined uniquely [56]. In the next theorem, we provide a closed-form expression for the asymptotic distribution of β̂. Theorem 4.2 (Asymptotic Distribution). Assume that (A1) - (A3), (B1), and (B2) hold, and that LICQ and SC hold at β∗ with the corresponding multipliers γ∗. Then
n^{1/2}(β̂ − β∗) = [ ∇²βL(β∗, γ∗), B ; B⊤, 0 ]^{−1} [ 1 ; 0 ]⊤ Υ + oP(1)
for some k × |J0(β∗)| matrix B and random variable Υ such that
Υ →ᵈ N(0, var(φa(Z; η) h1(V, β∗) + {1 − φa(Z; η)} h0(V, β∗))),
where
B = [ ∇βgj(β∗)⊤, j ∈ J0(β∗) ] ,
h1(V, β) = {1 / log σ(β⊤b(V))} b(V) σ(β⊤b(V)){1 − σ(β⊤b(V))},
h0(V, β) = −{1 / log(1 − σ(β⊤b(V)))} b(V) σ(β⊤b(V)){1 − σ(β⊤b(V))}.
The above theorem gives explicit conditions under which β̂ is √n-consistent and asymptotically
normal. We harness the classical results of [48] that use an expansion of β̂ in terms of an auxiliary parametric program. To show asymptotic normality of β̂, linearity of the directional derivative of optimal solutions in the parametric program is required. We have accomplished this based on an appropriate form of the implicit function theorem [11]. This is in contrast to [33] that relied on the structure of the smooth, closed-form solution estimator that enables direct use of the delta method. Lastly, our results in this section can be extended to a more general constrained nonlinear optimization problem where the objective function involves counterfactuals (see Lemmas B.1, B.2 in the appendix).
5 Simulation and Case Study
5.1 Simulation
We explore the finite sample properties of our estimators in the simulated dataset where we aim to empirically demonstrate the double-robustness property described in Section 3. Our data generation process is as follows:
V ≡ X = (X1, ..., X6) ∼ N(0, I), πa(X) = expit(−X1 + 0.5X2 − 0.25X3 − 0.1X4 + 0.05X5 + 0.05X6),
Y = A1 {X1 + 2X2 − 2X3 −X4 +X5 + ε > 0}+ (1−A)1 {X1 + 2X2 − 2X3 −X4 +X6 + ε < 0} , ε ∼ N(0, 1).
Our classification target is Y 1. For b(X), we use X, X², and their pairwise products. We assume that we have box constraints for our solution: |β∗j| ≤ 1, j = 1, ..., k. Since there exist no other natural baselines, we compare our methods to the plug-in method where we use (2) for our approximating program P̂. For nuisance estimation we use the cross-validation-based Super Learner ensemble via the SUPERLEARNER R package to combine generalized additive models, multivariate adaptive regression splines, and random forests. We use sample splitting as described in Algorithm 1 with K = 2 splits. We further consider two versions of each of our estimators, based on the correct and distorted X, where the distorted values are only used to estimate the outcome regression µa. The distortion is caused by a transformation X ↦ (X1X3X6, X2², X4/(1 + exp(X5)), exp(X5/2)).
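The data-generating process above can be reproduced in a few lines. The sketch below assumes A is drawn as Bernoulli with the stated expit giving P(A = 1 | X), which the text leaves implicit, and includes the distortion map applied only to the outcome-regression inputs.

```python
import numpy as np

def simulate(n, seed=0):
    """Toy DGP of Section 5.1; V == X and Y^1 is the classification target."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 6))
    X1, X2, X3, X4, X5, X6 = X.T
    # propensity, assumed here to be P(A = 1 | X)
    pi = 1.0 / (1.0 + np.exp(X1 - 0.5 * X2 + 0.25 * X3 + 0.1 * X4 - 0.05 * X5 - 0.05 * X6))
    A = rng.binomial(1, pi)
    eps = rng.normal(size=n)
    lin = X1 + 2 * X2 - 2 * X3 - X4
    Y1 = (lin + X5 + eps > 0).astype(int)   # potential outcome under A = 1
    Y0 = (lin + X6 + eps < 0).astype(int)   # potential outcome under A = 0
    Y = A * Y1 + (1 - A) * Y0               # observed outcome, by consistency
    return X, A, Y, Y1

def distort(X):
    """Distortion used only when fitting the outcome model mu_a (Figure 2)."""
    X1, X2, X3, X4, X5, X6 = X.T
    return np.column_stack([X1 * X3 * X6, X2 ** 2, X4 / (1 + np.exp(X5)), np.exp(X5 / 2)])
```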
To solve P̂, we first use the StoGo algorithm [40] via the NLOPTR R package as it has shown the best performance in terms of accuracy in the survey study of [35]. After running the StoGo, we then use the global optimum as a starting point for the BOBYQA local optimization algorithm [41] to further polish the optimum to a greater accuracy. We use sample sizes n = 1k, 2.5k, 5k, 7.5k, 10k and repeat the simulation 100 times for each n. Then we compute the average of |v∗− v̂| and ∥β∗− β̂∥2. Using the estimated counterfactual predictor, we also compute the classification error on an independent sample with the equal sample size. Standard error bars are presented around each point. The results with the correct and distorted X are presented in Figures 1 and 2, respectively.
With the correct X , it appears that the proposed estimator performs as well or slightly better than the plug-in methods. However, in Figure 2 when µ̂a is constructed based on the distorted X , the proposed estimator gives substantially smaller errors in general and improves better with n. This is indicative of the fact that the proposed estimator has the doubly-robust, second-order multiplicative bias, thus supporting our theoretical results in Section 4.
5.2 Case Study: COMPAS Dataset
Next we apply our method for recidivism risk prediction using the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) dataset 2. This dataset was originally designed to assess the COMPAS recidivism risk scores, and has been utilized for studying machine bias in the context of algorithmic fairness [2]. More recently, the dataset has been reanalyzed in the framework of counterfactual outcomes [32–34]. Here, we focus purely on predictive purpose. We let A represent pretrial release, with A = 0 if defendants are released and A = 1 if they are incarcerated, following methodology suggested by [34].3 We aim to classify the binary counterfactual outcome Y 0 that indicates whether a defendant is rearrested within two years, should the defendant be released pretrial. We use the dataset for two-year recidivism records with five covariates: age, sex, number of prior arrests, charge degree, and race. We consider three racial groups: Black, White, and Hispanic. We split the data (n = 5787) randomly into two groups: a training set with 3000 observations and a test set with the rest. Other model settings remain the same as our simulation in the previous subsection, including the box constraints.
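A hedged sketch of the preprocessing implied by footnote 3: the treatment indicator is built from the jail-entry and jail-exit dates, and the five covariates plus the two-year recidivism label are retained. The column names follow the ProPublica release and should be verified against the actual file; the race labels there ("African-American", "Caucasian", "Hispanic") correspond to the Black, White, and Hispanic groups in the text.

```python
import pandas as pd

def load_compas(path="compas-scores-two-years.csv"):
    """Assumed ProPublica column names; check them against the downloaded file."""
    df = pd.read_csv(path, parse_dates=["c_jail_in", "c_jail_out"])
    df = df[df["race"].isin(["African-American", "Caucasian", "Hispanic"])].dropna(
        subset=["c_jail_in", "c_jail_out"]
    )
    days_in_jail = (df["c_jail_out"] - df["c_jail_in"]).dt.days
    # A = 0 if released within three days of arrest, A = 1 otherwise (footnote 3)
    df["A"] = (days_in_jail > 3).astype(int)
    covariates = ["age", "sex", "priors_count", "c_charge_degree", "race"]
    X = pd.get_dummies(df[covariates], drop_first=True)
    Y = df["two_year_recid"].astype(int)
    return X, df["A"], Y
```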
Figure 3 and Table 1 show that the proposed doubly-robust method achieves moderately higher ROC AUC and classification accuracy than both the plug-in and the raw COMPAS risk scores. This comparative advantage is likely to increase in settings where we expect the identification and regularity assumptions to be more likely to hold, for example, where we can have access to more covariates or more information about the treatment mechanism.
6 Discussion
In this paper we studied the problem of counterfactual classification under arbitrary smooth constraints, and proposed a doubly-robust estimator which leverages nonparametric machine learning methods. Our theoretical framework is not limited to counterfactual classification and can be applied to other settings where the estimand is the optimal solution of a general smooth nonlinear programming problem with a counterfactual objective function; thus, we complement the results of [24, 33], each of which considered a particular class of smooth nonlinear programming.
² https://github.com/propublica/compas-analysis
³ The dataset itself does not include information whether defendants were released pretrial, but it includes dates in and out of jail. So we set the treatment A to 0 if defendants left jail within three days of being arrested, and 1 otherwise, as Florida state law generally requires individuals to be brought before a judge for a bail hearing within 2 days of arrest [34, Section 6.2].
We emphasize that one may use our proposed approach for other common problems in causal inference, e.g., estimation of the contrast effects or optimal treatment regimes, even under runtime confounding and/or other practical constraints. We may accomplish this by simply estimating each component E[Y a | X] via solving (P) for different values of a, and then taking the conditional mean contrast of interest. We can also readily adapt our procedure (P) for such standard estimands, for example by replacing Y a with the desired contrast or utility formula, in which the influence function will be very similar to those already presented in our manuscript. In ongoing work, we develop extensions for estimating the CATE and optimal treatment regimes under fairness constraints.
Although not explored in this work, our estimation procedure could be improved by applying more sophisticated and flexible modeling techniques for solving (P). One promising approach is to build a neural network that minimizes the loss (4) with the nuisance estimates {φa(Zi; η̂−BK )}i constructed on the separate independent sample; in this case, β is the weights of the network where k ≫ k′. Importantly, in the neural network approach we do not need to specify and construct the score and basis functions; the ideal form of those unknown functions are learned through backpropagation. Hence, we can avoid explicitly formulating and solving a complex non-convex optimization problem. Further, one may employ a rich source of deep-learning tools. In future work, we plan to pursue this extension and apply our methods to a large-scale real-world dataset.
We conclude with other potential limitations of our methods, and ways in which our work could be generalized. First, we considered the fixed feasible set that consists of only deterministic constraints. However, sometimes it may be useful to consider the general case where gj’s need to be estimated as well. This can be particularly helpful when incorporating general fairness constraints [14, 33, 34]. Dealing with the varying feasible set with general nonlinear constraints is a complicated task and requires even stronger assumptions [48]. As future work, we plan to generalize our framework to the case of a varying feasible set. Next, although we showed that the counterfactual objective function is estimated efficiently via L̂, it is unclear whether the solution estimator β̂ is efficient too, due to the inherent complexity of the optimal solution mapping in the presence of constraints. We conjecture that one may show that the semiparametric efficiency bound can also be attained for β̂ possibly under slightly stronger regularity assumptions, but we leave this for future work. | 1. What is the focus and contribution of the paper regarding approximating programs for counterfactual classification?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its originality, quality, clarity, and significance?
3. Do you have any concerns or questions regarding the paper's content, such as the terminology used or the assumptions made?
4. How does the reviewer assess the quality of the proposed method, particularly in comparison to simpler methods?
5. What are the limitations of the paper, and how might they be addressed in future research? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes an approximating program to the original so-called counterfactual classification program that can incorporate flexible constraints. The approximating program utilizes doubly robust estimators so that the optimal solution is √n-consistent as long as nuisance functions satisfy the stated assumptions.
Strengths And Weaknesses
Originality: 4. The whole strategy is similar to DR-learner, and the work is close to "Fair double ensemble learning for observable and counterfactual outcomes." But involving a flexible constraint is interesting, and the asymptotic property analysis for beta is solid.
Quality: 3. I think the experiments are not so rich. There are also some typos or unclear statements.
Clarity: 4. The logistics and mathematical proofs are clear. But it would be great if the authors can state the connection between the so-called counterfactual classification and general causal inference problems.
Significance: 3. I think this method does not bring significant improvement compared with simpler methods.
Questions
If I am right, I think your "counterfactuals" include both observed and unobserved outcomes. More precisely, I think it should be called "potential outcomes" in your setting. It is inaccurate if regard counterfactuals as potential outcomes.
The key is to estimate Y^a more accurately. Why can't we just use mu_a(X) or varphi_a(Z; eta) to estimate Y^a directly, rather than solving for beta from (P)?
Assumptions A2 and A3 seem to limit the convergence rate for nuisance parameters. Your claim that "beta can attain sqrt{n} consistent even nuisance parameters have slower rates" seems not consistent with the arguments A2&A3. As I expect, nuisance parameters should converge at a rate of o(n^{-1/4}) if beta is sqrt{n} consistent.
The quality of beta depends on i) the estimation of nuisance parameters pi_a and mu_a, ii) the plug-in or doubly robust learner. You only consider a case of ii), but I am wondering what the results would be if pi_a is not correctly specified. In this case, will the proposed method be better than the simple plug-in?
Limitations
In Section 6, the authors well discuss the limitation and potential extensions. |
NIPS | Title
Doubly Robust Counterfactual Classification
Abstract
We study counterfactual classification as a new tool for decision-making under hypothetical (contrary to fact) scenarios. We propose a doubly-robust nonparametric estimator for a general counterfactual classifier, where we can incorporate flexible constraints by casting the classification problem as a nonlinear mathematical program involving counterfactuals. We go on to analyze the rates of convergence of the estimator and provide a closed-form expression for its asymptotic distribution. Our analysis shows that the proposed estimator is robust against nuisance model misspecification, and can attain fast √ n rates with tractable inference even when using nonparametric machine learning approaches. We study the empirical performance of our methods by simulation and apply them for recidivism risk prediction.
1 Introduction
Counterfactual or potential outcomes are often used to describe how an individual would respond to a specific treatment or event, irrespective of whether the event actually takes place. Counterfactual outcomes are commonly used for causal inference, where we are interested in measuring the effect of a treatment on an outcome variable [15, 16, 45].
Recently, counterfactual outcomes have also proved useful for predicting outcomes under hypothetical interventions. This is commonly referred to as counterfactual prediction. Counterfactual prediction can be particularly useful to inform decision-making in clinical practice. For example, in order for physicians to make effective treatment decisions, they often need to predict risk scores assuming no treatment is given; if a patient’s risk is relatively low, then she or he may not need treatment. However, when a treatment is initiated after baseline, simply operationalizing the hypothetical treatment as another baseline predictor will rarely give the correct (counterfactual) risk estimates because of confounding [58]. Counterfactual prediction can be also helpful when we want our prediction model developed in one setting to yield predictions successfully transportable to other settings with different treatment patterns. Suppose that we develop our risk prediction model in a setting where most patients have access to an effective (post-baseline) treatment. However, if we deploy our factual prediction model in a new setting in which few individuals have access to the treatment, our model is likely to fail in the sense that it may not be able to accurately identify high-risk individuals. Counterfactual prediction may allow us to achieve more robust model performance compared to factual prediction, even when model deployment influences behaviors that affect risk. [see, e.g., 10, 27, 54, for more examples].
However, the problem of counterfactual prediction brings challenges that do not arise in typical prediction problems because the data needed to build the predictive models are inherently not fully
observable. Surprisingly, while the development of modern prediction modeling has greatly enriched the counterfactual-outcome-based causal inference particularly via semi-parametric methods [20, 23], the use of causal inference to improve prediction modeling has received less attention [see, e.g., 10, 46, for a discussion on the subject].
In this work, we study counterfactual classification, a special case of counterfactual prediction where the outcome is discrete. Our approach allows investigators to flexibly incorporate various constraints into the models, not only to enhance their predictive performance but also to accommodate a wide range of practical constraints relevant to their classification tasks. Counterfactual classification poses both theoretical and practical challenges, as a result of the fact that in our setting, even without any constraints, the estimand is not expressible as a closed form functional unlike typical causal inference problems. We tackle this problem by framing counterfactual classification as nonlinear stochastic programming with counterfactual components.
1.1 Related Work
Our work lies at the intersection of causal inference and stochastic optimization.
Counterfactual prediction is closely related to estimation of the conditional average treatment effect (CATE) in causal inference, which plays a crucial role in precision medicine and individualized policy. Let Y a denote the counterfactual outcome that would have been observed under treatment or intervention A = a, A ∈ {0, 1}. The CATE for subjects with covariate X = x is defined as τ(x) = E[Y 1− Y 0 | X = x]. There exists a vast literature on estimating CATE. These include some important early works assuming that τ(x) follows some known parametric form [e.g., 44, 52, 55]. But more recently, there has been an effort to leverage flexible nonparametric machine learning methods [e.g., 1, 3, 22, 25, 29, 31, 39, 57]. A desirable property commonly held in the above CATE estimation methods is that the function τ(x) may be more structured and simple than its component main effect function E[Y a | X = x]. In counterfactual prediction, however, we are fundamentally interested in predicting Y a conditional on X = x under a “single" hypothetical intervention A = a, as opposed to the contrast of the conditional mean outcomes under two (or more) interventions as in CATE. Counterfactual prediction is often useful to support decision-making on its own. There are settings where estimating the contrast effect or relative risk is less relevant than understanding what may happen if a subject was given a certain intervention. As mentioned previously, this is particularly the case in clinical research when predicting risk in relation to treatment started after baseline [10, 27, 46, 54]. Moreover, in the context of multi-valued treatments, it can be more useful to estimate each individual conditional mean potential outcome separately than to estimate all the possible combinations of relative effects.
With no constraints, under appropriate identification assumptions (e.g., (C1)-(C3) in Section 2), counterfactual prediction is equivalent to estimating a standard regression function E[Y | X,A = a] so in principle one could use any regression estimator. This direct modeling or plug-in approach has been used for counterfactual prediction in randomized controlled trials [e.g., 26, 38] or as a component of CATE estimation methods [e.g., 3, 29]. An issue arises when we are estimating a projection of this function onto a finite-dimensional model, or where we instead want to estimate E[Y a | V ] = E{E[Y | X,A = a] | V } for some smaller subset V ⊂ X (e.g., under runtime confounding [9]), which typically renders the plug-in approach suboptimal. Moreover, the resulting estimator fails to have double robustness, a highly desirable property which provides an additional layer of robustness against model misspecification [4].
On the other hand, we often want to incorporate various constraints into our predictive models. Such constraints are often used for flexible penalization [18] or supplying prior information [13] to enhance model performance and interpretability. They can also be used to mitigate algorithmic biases [6, 14]. Further, depending on the scientific question, practitioners occasionally have some constraints which they wish to place on their prediction tasks, such as targeting specific sub-populations, restricting sign or magnitude on certain regression coefficients to be consistent with common sense, or accounting for the compositional nature of the data [7, 19, 28]. In the plug-in approach, however, it is not clear how to incorporate the given constraints into the modeling process.
In our approach, we directly formulate and solve an optimization problem that minimizes counterfactual classification risk, where we can flexibly incorporate various forms of constraints. Optimization problems involving counterfactuals or counterfactual optimization have not been extensively studied,
with few exceptions [e.g., 24, 30, 33, 34]. Our results are closest to [33] and [24], which study counterfactual optimization in a class of quadratic and nonlinear programming problems, respectively, yet this approach i) is not applicable to classification where the risk is defined with respect to the cross-entropy, and ii) considers only linear constraints.
As in [24], we tackle the problem of counterfactual classification from the perspective of stochastic programming. The two most common approaches in stochastic programming are stochastic approximation (SA) and sample average approximation (SAA) [e.g., 36, 50]. However, since i) we cannot compute sample moments or stochastic subgradients that involve unobserved counterfactuals, and ii) the SA and SAA approaches cannot harness efficient estimators for counterfactual components, e.g., doubly-robust or semiparametric estimators with cross-fitting [8, 37], more general approaches beyond the standard SA and SAA settings should be considered [e.g., 47–49] at the expense of stronger assumptions on the behavior of the optimal solution and its estimator.
1.2 Contribution
We study counterfactual classification as a new decision-making tool under hypothetical (contrary to fact) scenarios. Based on semiparametric theory for causal inference, we propose a doubly-robust, nonparametric estimator that can incorporate flexible constraints into the modeling process. Then we go on to analyze rates of convergence and provide a closed-form expression for the asymptotic distribution of our estimator. Our analysis shows that the proposed estimator can attain fast √ n rates even when its nuisance components are estimated using nonparametric machine learning tools at slower rates. We study the finite-sample performance of our estimator via simulation and provide a case based on real data. Importantly, our algorithm and analysis are applicable to other problems in which the estimand is given by the solutions to a general nonlinear optimization problem whose objective function involves counterfactuals, where closed-form solutions are not available.
2 Problem and Setup
Suppose that we have access to an i.i.d. sample (Z1, ..., Zn) of n tuples Z = (Y,A,X) ∼ P for some distribution P, binary outcome Y ∈ {0, 1}, covariates X ∈ X ⊂ Rdx , and binary intervention A ∈ A = {0, 1}. For simplicity, we assume A and Y are binary, but in principle they can be multi-valued. We consider a general setting where only a subset of covariates V ⊆ X can be used for predicting the counterfactual outcome Y a. This allows for runtime confounding, where factors used by decision-makers are recorded in the training data but are not available for prediction (see [9] and references therein). We are concerned with the following constrained optimization problem
minimize_{β ∈ B}   L(Y a, σ(β, b(V))) := −E{Y a log σ(β, b(V)) + (1 − Y a) log(1 − σ(β, b(V)))}
subject to   β ∈ S := {β | gj(β) ≤ 0, j ∈ J}     (P)

for some compact subset B ⊂ R^k, known C²-functions gj : B → R, σ : B × R^{k′} → (0, 1), and the index set J = {1, ..., m} for the inequality constraints. Here, σ is the score function and b(V) = [b1(V), ..., bk′(V)]⊤ represents a set of basis functions for V (e.g., truncated power series, kernel or spline basis functions, etc.). Note that we do not need to have k = k′; for example, depending on the modeling techniques, it is possible to have a much larger number of model parameters than the number of basis functions, i.e., k > k′. L(Y a, σ(β, b(V))) is our classification risk based on the cross-entropy. S consists of deterministic inequality constraints¹ and can be used to pursue a variety of practical purposes described in Section 1. Let β∗ denote an optimal solution in (P). β∗ gives our optimal model parameters (coefficients) that minimize the counterfactual classification risk under the given constraints.
Classification risk and score function. Our classification risk L(Y a, σ(β, b(V ))) is defined by the expected cross entropy loss between Y a and σ(β, b(V )). In order to estimate β∗, we first need to estimate this classification risk. Since it involves counterfactuals, the classification risk cannot be identified from observed data unless certain assumptions hold, which will be discussed shortly. The form of the score function σ(β, b(V )) depends on the specific classification technique we are using. Our default choice for σ is the sigmoid function with k = k′, which makes the classification
¹ Equality constraints can always be expressed by a pair of inequality constraints.
risk strictly convex with respect to β. It should be noted, however, that more complex and flexible classification techniques (e.g., neural networks) can also be used without affecting the subsequent results, as long as they satisfy the required regularity assumptions discussed later in Section 4. Importantly, our approach is nonparametric; β∗ is the parameter of the best linear classifier with the sigmoid score in the expanded feature space spanned by b(V ), but we never assume an exact ‘log-linear’ relationship between Y a and b(V ) as in ordinary logistic regression models.
Identification. To estimate the counterfactual quantity L(Y a, σ(β, b(V ))) from the observed sample (Z1, ..., Zn), it must be expressed in terms of the observational data distribution P. This can be accomplished via the following standard causal assumptions [e.g., 17, Chapter 12]:
• (C1) Consistency: Y = Y a if A = a
• (C2) No unmeasured confounding: A ⊥⊥ Y a | X
• (C3) Positivity: P(A = a | X) > ε a.s. for some ε > 0
(C1) - (C3) will be assumed throughout this paper. Under these assumptions, our classification risk is identified as
L(β) = −E {E [Y | X,A = a] log σ(β, b(V )) + (1− E [Y | X,A = a]) log(1− σ(β, b(V )))} , (1)
where we let L(β) ≡ L(Y a, σ(β, b(V ))). Since we use the sigmoid function with an equal number of model parameters as basis functions, for clarity, hereafter we write σ(β⊤b(V )) = σ(β, b(V )). It is worth noting that even though we develop the estimator under the above set of causal assumptions, one may extend our methods to other identification strategies and settings (e.g., those of instrumental variables and mediation), since our approach is based on the analysis of a stochastic programming problem with generic estimated objective functions (see Appendix B).
Notation. Here we specify the basic notation used throughout the paper. For a real-valued vector v, let ∥v∥2 denote its Euclidean or L2-norm. Let Pn denote the empirical measure over (Z1, ..., Zn). Given a sample operator h (e.g., an estimated function), let P denote the conditional expectation over a new independent observation Z, as in P(h) = P{h(Z)} = ∫ h(z)dP(z). Use ∥h∥2,P to
denote the L2(P) norm of h, defined by ∥h∥2,P = [P(h²)]^{1/2} = [∫ h(z)² dP(z)]^{1/2}. Finally, let s∗(P) denote the set of optimal solutions of an optimization program P, i.e., β∗ ∈ s∗(P), and define dist(x, S) = inf{∥x − y∥2 : y ∈ S} to denote the distance from a point x to a set S.
3 Estimation Algorithm
Since (P) is not directly solvable, we need to find an approximating program of the “true" program (P). To this end, we shall first discuss the problem of obtaining estimates for the identified classification risk (1). To simplify notation, we first introduce the following nuisance functions
πa(X) = P[A = a | X], µa(X) = E[Y | X,A = a],
and let π̂a and µ̂a be their corresponding estimators. πa and µa are referred to as the propensity score and outcome regression function, respectively.
A natural estimator for (1) is given by L̂(β) = −Pn { µ̂a(X) log σ(β ⊤b(V )) + (1− µ̂a(X)) log(1− σ(β⊤b(V ))) } , (2)
where we simply plug in the regression estimates µ̂a into the empirical average of (1). Here, we construct a more efficient estimator based on the semiparametric approach in causal inference [21, 23]. Let
φa(Z; η) = {1(A = a)/πa(X)}{Y − µA(X)} + µa(X),
denote the uncentered efficient influence function for the parameter E {E[Y | X,A = a]}, where nuisance functions are defined by η = {πa(X), µa(X)}. Then it can be deduced that for an arbitrary
Algorithm 1: Doubly robust estimator for counterfactual classification
1: input: b(·), K
2: Draw (B1, ..., Bn) with Bi ∈ {1, ..., K}
3: for b = 1, ..., K do
4:    Let D0 = {Zi : Bi ≠ b} and D1 = {Zi : Bi = b}
5:    Obtain η̂−b by constructing π̂a, µ̂a on D0
6:    M1,b(β) ← empirical average of φa(Z; η̂−b) log σ(β⊤b(V)) over D1
7:    M0,b(β) ← empirical average of (1 − φa(Z; η̂−b)) log(1 − σ(β⊤b(V))) over D1
8: L̂(β) ← Σ_{b=1}^{K} {(1/n) Σ_{i=1}^{n} 1(Bi = b)} (M1,b(β) + M0,b(β))
9: solve (P̂) with L̂(β)
fixed real-valued function h : X → R, the uncentered efficient influence function for the parameter ψa := E {E[Y | X,A = a]h(X)} is given by φa(Z; η)h(X) (Lemma A.1 in the appendix). Now we provide an influence-function-based semiparametric estimator for ψa. Following [8, 22, 43, 59], we propose to use sample splitting to allow for arbitrarily complex nuisance estimators η̂. Specifically, we split the data into K disjoint groups, each with size of n/K approximately, by drawing variables (B1, ..., Bn) independent of the data, with Bi = b indicating that subject i was split into group b ∈ {1, ...,K}. Then the semiparametric estimator for ψa based on the efficient influence function and sample splitting is given by
ψ̂a = (1/K) Σ_{b=1}^{K} P^b_n {φa(Z; η̂−b) h(X)} ≡ Pn{φa(Z; η̂−BK) h(X)},   (3)
where we let Pbn denote empirical averages over the set of units {i : Bi = b} in the group b and let η̂−b denote the nuisance estimator constructed only using those units {i : Bi ̸= b}. Under weak regularity conditions, this semiparametric estimator attains the efficiency bound with the double robustness property, and allows us to employ nonparametric machine learning methods while achieving the√ n-rate of convergence and valid inference under weak conditions (see Lemma A.1 in the appendix for the formal statement). If one is willing to rely on appropriate empirical process conditions (e.g., Donsker-type or low entropy conditions [53]), then η can be estimated on the same sample without sample splitting. However, this would limit the flexibility of the nuisance estimators.
The classification risk L(β) is a sum of two functionals, each of which is in the form of ψa. Thus, for each β, we propose to estimate the classification risk using (3) as follows
L̂(β) = −Pn { φa(Z; η̂−BK ) log σ(β ⊤b(V )) + (1− φa(Z; η̂−BK )) log(1− σ(β⊤b(V ))) } . (4)
Now that we have proposed the efficient method to estimate the counterfactual component L(β), in what follows we provide the approximating program that we actually aim to solve, obtained from (P) by substituting L̂(β) for L(β):
minimize_{β∈B} L̂(β)   subject to   β ∈ S.   (P̂)
Let β̂ ∈ s∗(P̂). Then β̂ is our estimator for β∗. We summarize our algorithm detailing how to compute the estimator β̂ in Algorithm 1.
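For illustration, a minimal Python sketch of Algorithm 1 follows; the box-constrained L-BFGS-B call is only a convenient stand-in for the global/local solvers we actually use in Section 5, and Bmat, phi denote the stacked basis evaluations b(Vi) and the cross-fitted pseudo-outcomes from the earlier sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, log_expit   # log_expit(t) = log sigma(t)

def dr_loss(beta, Bmat, phi):
    """Doubly robust classification risk L_hat(beta) of Eq. (4); Bmat[i] = b(V_i)."""
    t = Bmat @ beta
    return -np.mean(phi * log_expit(t) + (1.0 - phi) * log_expit(-t))

def dr_grad(beta, Bmat, phi):
    # gradient of Eq. (4): the usual logistic-regression form with phi playing the role of the label
    return -Bmat.T @ (phi - expit(Bmat @ beta)) / len(phi)

def solve_P_hat(Bmat, phi, box=1.0):
    """Approximating program (P_hat) under box constraints |beta_j| <= box."""
    k = Bmat.shape[1]
    res = minimize(dr_loss, x0=np.zeros(k), jac=dr_grad, args=(Bmat, phi),
                   method="L-BFGS-B", bounds=[(-box, box)] * k)
    return res.x    # beta_hat
```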
(P̂) is a smooth nonlinear optimization problem whose objective function depends on data. Unfortunately, unlike (P), (P̂) is not guaranteed to be convex in finite samples even if S is convex. Non-convex problems are usually more difficult than convex ones due to high variance and slow computing time. Nonetheless, substantial progress has been made recently [5, 42], and a number of efficient global optimization algorithms are available in open-source libraries (e.g., NLOPT). Also, for a more flexible implementation, one may adapt neural networks to our approach without the need for specifying σ and b; we discuss this in more detail in Section 6 as a promising future direction.
4 Asymptotic Analysis
This section is devoted to analyzing the rates of convergence and asymptotic distribution for the estimated optimal solution β̂. Unlike in stochastic optimization, analysis of the statistical properties of optimal solutions to general counterfactual optimization problems is much sparser. In what was perhaps the first study of the problem, [24] analyzed the asymptotic behavior of optimal solutions for a particular class of nonlinear counterfactual optimization problems that can be cast into a parametric program with finite-dimensional stochastic parameters. However, the true program (P) does not belong to the class to which their analysis is applicable. Here, we derive the asymptotic properties of β̂ under assumptions similar to those in [24].
We first introduce the following assumptions for our counterfactual component estimator L̂.
(A1) P(π̂a ∈ [ϵ, 1 − ϵ]) = 1 for some ϵ > 0
(A2) ∥µ̂a − µa∥2,P = oP(1) or ∥π̂a − πa∥2,P = oP(1)
(A3) ∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P = oP(n^{-1/2})
Assumptions (A1) - (A3) are commonly used in semiparametric estimation in the causal inference literature [20]. Next, for a feasible point β̄ ∈ S we define the active index set. Definition 4.1 (Active set). For β̄ ∈ S, we define the active index set J0 by
J0(β̄) = {1 ≤ j ≤ m | gj(β̄) = 0}.
Then we introduce the following technical condition on gj .
(B1) For each β∗ ∈ s∗(P),
d⊤∇2βgj(β∗)d ≥ 0 for all d ∈ {d | ∇βgj(β∗)⊤d = 0, j ∈ J0(β∗)}.
Assumption (B1) holds, for example, if each gj is locally convex around β∗. In what follows, based on the result of [47], we characterize the rates of convergence for β̂ in terms of the nuisance estimation error under relatively weak conditions.
Theorem 4.1 (Rate of Convergence). Assume that (A1), (A2), and (B1) hold. Then
dist(β̂, s∗(P)) = OP(∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P + n^{-1/2}).
Hence, if we further assume the nonparametric condition (A3), we obtain dist(β̂, s∗(P)) = OP(n^{-1/2}).
Theorem 4.1 indicates that double robustness is possible for our estimator, and thereby √n rates are attainable even when each of the nuisance regression functions is estimated flexibly at much slower rates (e.g., n^{-1/4} rates for each), with a wide variety of modern nonparametric tools. Since L is continuously differentiable with bounded derivative, the consistency of the optimal value naturally follows by the result of Theorem 4.1 and the continuous mapping theorem. More specifically, in the following corollary, we show that the same rates are attained for the optimal value under identical conditions. Corollary 4.1 (Rate of Convergence for Optimal Value). Suppose (A1), (A2), (A3), (B1) hold and let v∗ and v̂ be the optimal values corresponding to β∗ ∈ s∗(P) and β̂, respectively. Then we have |v̂ − v∗| = OP(∥π̂a − πa∥2,P ∥µ̂a − µa∥2,P + n^{-1/2}).
In order to conduct statistical inference, it is also desirable to characterize the asymptotic distribution of β̂. This requires stronger assumptions and a more specialized analysis [47]. Asymptotic properties of optimal solutions in stochastic programming are typically studied based on the generalization of the delta method for directionally differentiable mappings [e.g., 48–50]. Asymptotic normality is of particular interest since without asymptotic normality, consistency of the bootstrap is no longer guaranteed for the solution estimators [12].
We start with additional definitions of some popular regularity conditions with respect to (P).
Definition 4.2 (LICQ). Linear independence constraint qualification (LICQ) is satisfied at β̄ ∈ S if the vectors ∇βgj(β̄), j ∈ J0(β̄), are linearly independent.
Definition 4.3 (SC). Let L(β, γ) be the Lagrangian. Strict Complementarity (SC) is satisfied at β̄ ∈ S if, with multipliers γ̄j ≥ 0, j ∈ J0(β̄), the Karush-Kuhn-Tucker (KKT) condition
∇βL(β̄, γ̄) := ∇βL(β̄) + Σ_{j∈J0(β̄)} γ̄j ∇βgj(β̄) = 0
is satisfied such that γ̄j > 0 for all j ∈ J0(β̄).
LICQ is arguably one of the most widely-used constraint qualifications that admit the first-order necessary conditions. SC means that if the j-th inequality constraint is active, then the corresponding dual variable is strictly positive, so that for each 1 ≤ j ≤ m exactly one of gj(β̄) and γ̄j is zero. SC is widely used in the optimization literature, particularly in the context of parametric optimization [e.g., 50, 51]. We further require uniqueness of the optimal solution in (P).
(B2) Program (P) has a unique optimal solution β∗ (i.e., s∗(P) ≡ {β∗} is singleton).
Note that under (B2) if LICQ holds at β∗, then the corresponding multipliers are determined uniquely [56]. In the next theorem, we provide a closed-form expression for the asymptotic distribution of β̂. Theorem 4.2 (Asymptotic Distribution). Assume that (A1) - (A3), (B1), and (B2) hold, and that LICQ and SC hold at β∗ with the corresponding multipliers γ∗. Then
√n (β̂ − β∗) = [ ∇2βL(β∗, γ∗)  B ; B⊤  0 ]^{-1} [ 1  0 ]⊤ Υ + oP(1)
for some k × |J0(β∗)| matrix B and random variable Υ such that
Υ →d N(0, var(φa(Z; η) h1(V, β∗) + {1 − φa(Z; η)} h0(V, β∗))),
where
B = [∇βgj(β∗)⊤, j ∈ J0(β∗)],
h1(V, β) = [1 / log σ(β⊤b(V))] b(V) σ(β⊤b(V)){1 − σ(β⊤b(V))},
h0(V, β) = −[1 / log(1 − σ(β⊤b(V)))] b(V) σ(β⊤b(V)){1 − σ(β⊤b(V))}.
The above theorem gives explicit conditions under which β̂ is √n-consistent and asymptotically
normal. We harness the classical results of [48] that use an expansion of β̂ in terms of an auxiliary parametric program. To show asymptotic normality of β̂, linearity of the directional derivative of optimal solutions in the parametric program is required. We have accomplished this based on an appropriate form of the implicit function theorem [11]. This is in contrast to [33] that relied on the structure of the smooth, closed-form solution estimator that enables direct use of the delta method. Lastly, our results in this section can be extended to a more general constrained nonlinear optimization problem where the objective function involves counterfactuals (see Lemmas B.1, B.2 in the appendix).
5 Simulation and Case Study
5.1 Simulation
We explore the finite-sample properties of our estimators on a simulated dataset, aiming to empirically demonstrate the double-robustness property described in Section 3. Our data generation process is as follows:
V ≡ X = (X1, ..., X6) ∼ N(0, I), πa(X) = expit(−X1 + 0.5X2 − 0.25X3 − 0.1X4 + 0.05X5 + 0.05X6),
Y = A · 1{X1 + 2X2 − 2X3 − X4 + X5 + ε > 0} + (1 − A) · 1{X1 + 2X2 − 2X3 − X4 + X6 + ε < 0},  ε ∼ N(0, 1).
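For illustration, this data-generating process can be simulated as follows; we additionally assume A ∼ Bernoulli(πa(X)), which the display leaves implicit.

```python
import numpy as np
from scipy.special import expit

def simulate(n, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, 6))                       # V = X = (X1,...,X6) ~ N(0, I)
    pi = expit(-X[:, 0] + 0.5*X[:, 1] - 0.25*X[:, 2] - 0.1*X[:, 3] + 0.05*X[:, 4] + 0.05*X[:, 5])
    A = rng.binomial(1, pi)                               # treatment drawn from the propensity score
    eps = rng.standard_normal(n)
    base = X[:, 0] + 2*X[:, 1] - 2*X[:, 2] - X[:, 3]
    Y = A * (base + X[:, 4] + eps > 0) + (1 - A) * (base + X[:, 5] + eps < 0)
    return X, A, Y.astype(int)
```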
Our classification target is Y^1. For b(X), we use X, X^2, and their pairwise products. We assume that we have box constraints for our solution: |β∗j| ≤ 1, j = 1, ..., k. Since there exist no other natural baselines, we compare our methods to the plug-in method where we use (2) for our approximating program (P̂). For nuisance estimation we use the cross-validation-based Super Learner ensemble via the SUPERLEARNER R package to combine generalized additive models, multivariate adaptive regression splines, and random forests. We use sample splitting as described in Algorithm 1 with K = 2 splits. We further consider two versions of each of our estimators, based on the correct and distorted X, where the distorted values are only used to estimate the outcome regression µa. The distortion is caused by the transformation X ↦ (X1X3X6, X2^2, X4/(1 + exp(X5)), exp(X5/2)).
To solve (P̂), we first use the StoGo algorithm [40] via the NLOPTR R package, as it has shown the best performance in terms of accuracy in the survey study of [35]. After running StoGo, we then use the global optimum as a starting point for the BOBYQA local optimization algorithm [41] to further polish the optimum to a greater accuracy. We use sample sizes n = 1k, 2.5k, 5k, 7.5k, 10k and repeat the simulation 100 times for each n. Then we compute the average of |v∗ − v̂| and ∥β∗ − β̂∥2. Using the estimated counterfactual predictor, we also compute the classification error on an independent sample of equal size. Standard error bars are presented around each point. The results with the correct and distorted X are presented in Figures 1 and 2, respectively.
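For illustration, the two-stage solve can be mimicked as below; SciPy's differential evolution and L-BFGS-B are used here only as stand-ins for the StoGo and BOBYQA routines called through NLOPTR.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def two_stage_solve(loss, k, box=1.0, seed=0):
    """Global search over the box |beta_j| <= box, then a local polish from the best point."""
    bounds = [(-box, box)] * k
    rough = differential_evolution(loss, bounds, seed=seed, maxiter=200, polish=False)
    polished = minimize(loss, rough.x, method="L-BFGS-B", bounds=bounds)
    return polished.x

# e.g. beta_hat = two_stage_solve(lambda b: dr_loss(b, Bmat, phi), k=Bmat.shape[1])
```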
With the correct X, it appears that the proposed estimator performs as well as or slightly better than the plug-in methods. However, in Figure 2, when µ̂a is constructed based on the distorted X, the proposed estimator gives substantially smaller errors in general and improves faster with n. This is indicative of the fact that the proposed estimator has the doubly-robust, second-order multiplicative bias, thus supporting our theoretical results in Section 4.
5.2 Case Study: COMPAS Dataset
Next we apply our method to recidivism risk prediction using the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) dataset². This dataset was originally designed to assess the COMPAS recidivism risk scores, and has been utilized for studying machine bias in the context of algorithmic fairness [2]. More recently, the dataset has been reanalyzed in the framework of counterfactual outcomes [32–34]. Here, we focus purely on the predictive purpose. We let A represent pretrial release, with A = 0 if defendants are released and A = 1 if they are incarcerated, following the methodology suggested by [34].³ We aim to classify the binary counterfactual outcome Y^0 that indicates whether a defendant is rearrested within two years, should the defendant be released pretrial. We use the dataset for two-year recidivism records with five covariates: age, sex, number of prior arrests, charge degree, and race. We consider three racial groups: Black, White, and Hispanic. We split the data (n = 5787) randomly into two groups: a training set with 3000 observations and a test set with the rest. Other model settings remain the same as in our simulation in the previous subsection, including the box constraints.
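For illustration, the preprocessing just described corresponds roughly to the following sketch; the column names assume ProPublica's compas-scores-two-years.csv and should be adapted if a different export of the data is used.

```python
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")
df = df[df["race"].isin(["African-American", "Caucasian", "Hispanic"])].copy()

# treatment: 0 = released (left jail within three days of arrest), 1 = incarcerated
jail_days = (pd.to_datetime(df["c_jail_out"]) - pd.to_datetime(df["c_jail_in"])).dt.days
df["A"] = (jail_days > 3).astype(int)
df["Y"] = df["two_year_recid"]                    # rearrest within two years; the target is Y^0

covariates = df[["age", "sex", "priors_count", "c_charge_degree", "race"]]
X = pd.get_dummies(covariates, drop_first=True)   # one-hot encode the categorical covariates
```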
Figure 3 and Table 1 show that the proposed doubly-robust method achieves moderately higher ROC AUC and classification accuracy than both the plug-in and the raw COMPAS risk scores. This comparative advantage is likely to increase in settings where we expect the identification and regularity assumptions to be more likely to hold, for example, where we can have access to more covariates or more information about the treatment mechanism.
6 Discussion
In this paper we studied the problem of counterfactual classification under arbitrary smooth constraints, and proposed a doubly-robust estimator which leverages nonparametric machine learning methods. Our theoretical framework is not limited to counterfactual classification and can be applied to other settings where the estimand is the optimal solution of a general smooth nonlinear programming problem with a counterfactual objective function; thus, we complement the results of [24, 33], each of which considered a particular class of smooth nonlinear programming.
²https://github.com/propublica/compas-analysis
³The dataset itself does not include information on whether defendants were released pretrial, but it includes dates in and out of jail. So we set the treatment A to 0 if defendants left jail within three days of being arrested, and to 1 otherwise, as Florida state law generally requires individuals to be brought before a judge for a bail hearing within 2 days of arrest [34, Section 6.2].
We emphasize that one may use our proposed approach for other common problems in causal inference, e.g., estimation of the contrast effects or optimal treatment regimes, even under runtime confounding and/or other practical constraints. We may accomplish this by simply estimating each component E[Y a | X] via solving (P) for different values of a, and then taking the conditional mean contrast of interest. We can also readily adapt our procedure (P) for such standard estimands, for example by replacing Y a with the desired contrast or utility formula, in which the influence function will be very similar to those already presented in our manuscript. In ongoing work, we develop extensions for estimating the CATE and optimal treatment regimes under fairness constraints.
Although not explored in this work, our estimation procedure could be improved by applying more sophisticated and flexible modeling techniques for solving (P). One promising approach is to build a neural network that minimizes the loss (4) with the nuisance estimates {φa(Zi; η̂−BK )}i constructed on a separate independent sample; in this case, β corresponds to the weights of the network, where k ≫ k′. Importantly, in the neural network approach we do not need to specify and construct the score and basis functions; the ideal form of those unknown functions is learned through backpropagation. Hence, we can avoid explicitly formulating and solving a complex non-convex optimization problem. Further, one may employ a rich source of deep-learning tools. In future work, we plan to pursue this extension and apply our methods to a large-scale real-world dataset.
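A minimal sketch of this extension is given below; the architecture, sizes, and training loop are illustrative assumptions, with the cross-fitted pseudo-outcomes used as soft labels in the risk (4).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CounterfactualNet(nn.Module):
    """Learns the score and basis functions implicitly; the output is the logit of the predictor."""
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1))

    def forward(self, v):
        return self.net(v).squeeze(-1)

def fit(model, V, phi, epochs=200, lr=1e-3):
    V = torch.as_tensor(V, dtype=torch.float32)
    phi = torch.as_tensor(phi, dtype=torch.float32)    # pseudo-outcomes built on an independent sample
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(V)
        # Eq. (4) with phi in place of mu_hat; written via logsigmoid since phi may lie outside [0, 1]
        loss = -(phi * F.logsigmoid(logits) + (1 - phi) * F.logsigmoid(-logits)).mean()
        loss.backward()
        opt.step()
    return model
```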
We conclude with other potential limitations of our methods, and ways in which our work could be generalized. First, we considered the fixed feasible set that consists of only deterministic constraints. However, sometimes it may be useful to consider the general case where gj’s need to be estimated as well. This can be particularly helpful when incorporating general fairness constraints [14, 33, 34]. Dealing with the varying feasible set with general nonlinear constraints is a complicated task and requires even stronger assumptions [48]. As future work, we plan to generalize our framework to the case of a varying feasible set. Next, although we showed that the counterfactual objective function is estimated efficiently via L̂, it is unclear whether the solution estimator β̂ is efficient too, due to the inherent complexity of the optimal solution mapping in the presence of constraints. We conjecture that one may show that the semiparametric efficiency bound can also be attained for β̂ possibly under slightly stronger regularity assumptions, but we leave this for future work. | 1. What is the focus of the paper regarding counterfactual classification problems?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its mathematical presentation and limitations?
3. Do you have any questions regarding the proof of Lemma A.1 and its relation to the centered influence function?
4. Why did the authors restrict their attention to the simple case of σ(β^Tb(V)) and limit the parameter space/basis functions to finite dimensions?
5. Are there any concerns or potential negative societal impacts associated with the work? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes to solve counterfactual classification problems by solving a doubly robust estimator of the cross-entropy loss with smooth nonlinear programming, incorporating constraints. Theoretical analysis is provided in the asymptotic limit, including the rate of convergence of the solution to the set of optimal solutions, and asymptotic normality of the solution under the assumption that there is a unique solution.
Strengths And Weaknesses
Strengths
This is a well-written paper with sound and clear presentation of the mathematics. It makes clear the problem it is trying to solve, as well as the method it deploys to solve it.
The cross entropy loss function is widely-used, and the way the authors extended it to the counterfactual setting, subject to constraints, seems natural and intuitive to me. The scope of contribution might be a bit limited, as it feels like a rather simple and straightforward marriage of a very well-known and widely-used problem and technique, but I still think the theoretical analysis and the algorithms are valuable to the community.
I do not work directly in the field of counterfactual prediction, where the accurate prediction of each counterfactual case is important rather than a comparison between counterfactuals (e.g. CATE), but the authors convince me that it is an important problem. Given this, and given how commonplace and intuitively important classification is, I find it surprising that no attempt has been made so far to tackle this problem.
Weaknesses
I do not like the fact that the proof of Lemma A.1 was omitted. I had a look through [21], to see if things really carried over seamlessly, but at least to me, things weren't so obvious. In particular, I would like to see worked out how a factor of h(X) makes no other difference than simply multiplying the influence function by the same factor, and why the authors choose to work with the uncentred influence function rather than the centred one. Perhaps it's obvious to people working in the field, but I have not seen such formulations in other papers.
Questions
lines 110-112: sigma is a function from \mathcal{B}\times\mathcal{X}, so b(V) should be in \mathcal{X}, but on line 112, b(V) seems to be in R^k? I see that later, authors restrict attention to the simple case of \sigma(\beta^Tb(V)), which means that \beta and b(V) indeed has to be in the same space. But before this restriction takes place, wouldn't it be better to retain generality and let b(V) take place in \mathcal{X}? Also, could the authors comment on why they restrict to finite (k-dimensional) parameter space / basis functions? This seems to rule out many useful cases; for example, the authors list the use of kernels as an example, but useful kernels such as the Gaussian kernel seem to be ruled out by this finite-dimensional restriction.
Limitations
Limitations are discussed at the end of the paper, and in my opinion, sufficiently. The work is mostly of theoretical nature, and I do not deem it necessary to discuss potential negative societal impact of the work. |
NIPS | Title
NS3: Neuro-symbolic Semantic Code Search
Abstract
Semantic code search is the task of retrieving a code snippet given a textual description of its functionality. Recent work has been focused on using similarity metrics between neural embeddings of text and code. However, current language models are known to struggle with longer, compositional text, and multi-step reasoning. To overcome this limitation, we propose supplementing the query sentence with a layout of its semantic structure. The semantic layout is used to break down the final reasoning decision into a series of lower-level decisions. We use a Neural Module Network architecture to implement this idea. We compare our model NS3 (Neuro-Symbolic Semantic Search) to a number of baselines, including state-of-the-art semantic code retrieval methods, and evaluate on two datasets, CodeSearchNet and Code Search and Question Answering (CoSQA). We demonstrate that our approach results in more precise code retrieval, and we study the effectiveness of our modular design when handling compositional queries¹.
1 Introduction
The increasing scale of software repositories makes retrieving relevant code snippets more challenging. Traditionally, source code retrieval has been limited to keyword [33, 30] or regex [7] search. Both rely on the user providing the exact keywords appearing in or around the sought code. However, neural models enabled new approaches for retrieving code from a textual description of its functionality, a task known as semantic code search (SCS). A model like Transformer [36] can map a database of code snippets and natural language queries to a shared high-dimensional space. Relevant code snippets are then retrieved by searching over this embedding space using a predefined similarity metric, or a learned distance function [26, 13, 12]. Some of the recent works capitalize on the rich structure of the code, and employ graph neural networks for the task [17, 28].
Despite impressive results on SCS, current neural approaches are far from satisfactory in dealing with a wide range of natural-language queries, especially on ones with compositional language structure. Encoding text into a dense vector for retrieval purposes can be problematic because we risk losing faithfulness of the representation, and missing important details of the query. Not only does this a) affect the performance, but it can b) drastically reduce a model’s value for the users, because compositional queries such as “Check that directory does not exist before creating it” require performing multi-step reasoning on code.
*Currently at Google Research. †Equal supervision. ¹Code and data are available at https://github.com/ShushanArakelyan/modular_code_search
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
We suggest overcoming these challenges by introducing a modular workflow based on the semantic structure of the query. Our approach is based on the intuition of how an engineer would approach a SCS task. For example, in performing search for code that navigates folders in Python they would first only pay attention to code that has cues about operating with paths, directories or folders. Afterwards, they would seek indications of iterating through some of the found objects or other entities in the code related to them. In other words, they would perform multiple steps of different nature - i.e. finding indications of specific types of data entities, or specific operations. Figure 1 illustrates which parts of the code would be
important to indicate that they have found the desired code snippet at each step. We attempt to imitate this process in this work. To formalize the decomposition of the query into such steps, we take inspiration from the idea that code is comprised of data, or entities, and transformations, or actions, over data. Thus, a SCS query is also likely to describe the code in terms of data entities and actions.
We break down the task of matching the query into smaller tasks of matching individual data entities and actions. In particular, we aim to identify parts of the code that indicate the presence of the corresponding data or action. We tackle each part with a distinct type of network – a neural module. Using the semantic parse of the query, we construct the layout of how modules’ outputs should be linked according to the relationships between data entities and actions, where each data entity represents a noun, or a noun phrase, and each action represents a verb, or a verbal phrase. Correspondingly, this layout specifies how the modules should be combined into a single neural module network (NMN) [4]. Evaluating the NMN on the candidate code approximates detecting the corresponding entities and actions in the code by testing whether the neural network can deduce one missing entity from the code and the rest of the query.
This approach has the following advantages. First, the semantic parse captures the compositionality of a query. Second, it mitigates the challenges of faithful encoding of text by focusing only on a small portion of the query at a time. Finally, applying the neural modules in succession can potentially mimic the staged reasoning necessary for SCS.
We evaluate our proposed NS3 model on two SCS datasets - CodeSearchNet (CSN) [24] and CoSQA/WebQueryTest [23]. Additionally, we experiment with limited CSN training set sizes of 10K and 5K examples. We find that NS3 provides large improvements upon baselines in all cases. Our experiments demonstrate that the resulting model is more sensitive to small, but semantically significant changes in the query, and is more likely to correctly recognize that a modified query no longer matches its code pair.
Our main contributions are: (i) We propose looking at SCS as a compositional task that requires multi-step reasoning. (ii) We present an implementation of the aforementioned paradigm based on
NMNs. (iii) We demonstrate that our proposed model provides a large improvement on a number of well-established baseline models. (iv) We perform additional studies to evaluate the capacity of our model to handle compositional queries.
2 Background
2.1 Semantic Code Search
Semantic code search (SCS) is the process of retrieving a relevant code snippet based on a textual description of its functionality, also referred to as a query. Let C be a database of code snippets ci. For each ci ∈ C, there is a textual description of its functionality qi. In the example in Figure 2, the query qi is “Load all tables from dataset”. Let r be an indicator function such that r(qi, cj) = 1 if i = j; and 0 otherwise. Given some query q, the goal of SCS is to find c∗ such that r(q, c∗) = 1. We assume that for each q there is exactly one such c∗.² Here we look to construct a model which takes as input a pair of query and a candidate code snippet: (qi, cj) and assigns the pair a probability r̂ij for being a correct match. Following the common practice in information retrieval, we evaluate the performance of the model based on how high the correct answer c∗ is ranked among a number of incorrect, or distractor, instances {c}. This set of distractor instances can be the entire codebase C, or a subset of the codebase obtained through heuristic filtering, or another ranking method.
2.2 Neural Models for Semantic Code Search
Past works handling programs and code have focused on enriching their models by incorporating more semantic and syntactic information from code [1, 10, 34, 47]. Some prior works have cast SCS as a sequence classification task, where the code is represented as a textual sequence, the input pair (qi, cj) is concatenated with a special separator symbol into a single sequence, and the output is the score r̂ij : r̂ij = f(qi, cj). The function f performing the classification can be any sequence classification model, e.g. BERT [11].
Alternatively, one can define separate networks for independently representing the query (f), the code (g) and measuring the similarity between them: r̂ij = sim(f(qi), g(cj)). This allows one to design the code encoding network g with additional program-specific information, such as abstract syntax trees [3, 44] or control flow graphs [15, 45]. Separating the two modalities of natural language and code also allows further enrichment of the code representation by adding contrastive learning objectives [25, 6]. In these approaches, the original code snippet c is automatically modified with semantic-preserving transformations, such as variable renaming, to introduce versions c′ of the code snippet with the exact same functionality. The code encoder g is then trained with an appropriate contrastive loss, such as Noise Contrastive Estimation (NCE) [19], or InfoNCE [35].
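For illustration, a generic bi-encoder InfoNCE objective with in-batch negatives (not tied to any specific encoder from the works above) can be written as:

```python
import torch
import torch.nn.functional as F

def info_nce(query_emb, code_emb, temperature=0.07):
    """query_emb, code_emb: (batch, dim); row i of each is a matched (q_i, c_i) pair."""
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(code_emb, dim=-1)
    logits = q @ c.t() / temperature                   # similarity of every query to every code snippet
    labels = torch.arange(q.size(0), device=q.device)  # the matching snippet sits on the diagonal
    return F.cross_entropy(logits, labels)
```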
Limitations However, there is also merit in reviewing how we represent and use the textual query to help guide the SCS process. Firstly, existing work derives a single embedding for the entire query. This means that specific details or nested subqueries of the query may be omitted or not represented faithfully - getting lost in the embedding. Secondly, prior approaches make the decision after a single pass over the code snippet. This ignores cases where reasoning about a query requires multiple steps and thus - multiple look-ups over the code, as is for example in cases with nested subqueries. Our proposed approach - NS3 - attempts to address these issues by breaking down the query into smaller phrases based on its semantic parse and locating each of them in the code snippet. This should allow us to match compositional and longer queries to code more precisely.
3 Neural Modular Code Search
We propose to supplement the query with a loose structure resembling its semantic parse, as illustrated in Figure 2. We follow the parse structure to break down the query into smaller, semantically coherent parts, so that each corresponds to an individual execution step. The steps are taken in succession by a neural module network composed from a layout that is determined from the semantic parse of the
2This is not the case in CoSQA dataset. For the sake of consistency, we perform the evaluation repeatedly, leaving only one correct code snippet among the candidates at a time, while removing the others.
query (Sec. 3.1). The neural module network is composed by stacking “modules”, or jointly trained networks, of distinct types, each carrying out a different functionality.
Method Overview In this work, we define two types of neural modules - entity discovery module (denoted by E; Sec. 3.2) and action module (denoted by A; Sec 3.3). The entity discovery module estimates semantic relatedness of each code token c^j_i in the code snippet c^j = [c^j_1, . . . , c^j_N] to an entity mentioned in the query – e.g. “all tables” or “dataset” as in Figure 2. The action module estimates the likelihood of each code token to be related to an (unseen) entity affected by the action in the query e.g. “dataset” and “load from” correspondingly, conditioned on the rest of the input (seen), e.g. “all tables”. The similarity of the predictions of the entity discovery and action modules measures how well the code matches that part of the query. The modules are nested - the action modules are taking as input part of the output of another module - and the order of nesting is decided by the semantic parse layout. In the rest of the paper we refer to the inputs of a module as its arguments.
Every input instance fed to the model is a 3-tuple (qi, sqi, cj) consisting of a natural language query qi, the query’s semantic parse sqi, and a candidate code (sequence) cj. The goal is producing a binary label r̂ij = 1 if the code is a match for the query, and 0 otherwise. The layout of the neural module network, denoted by L(sqi), is created from the semantic structure of the query sqi. During inference, given (qi, sqi, cj) as input, the model instantiates a network based on the layout, passes qi, cj and sqi as inputs, and obtains the model prediction r̂ij. This pipeline is illustrated in Figure 2, and details about creating the layout of the neural module network are presented in Section 3.1.
During training, we first perform noisy supervision pretraining for both modules. Next, we perform end-to-end training, where in addition to the query, its parse, and a code snippet, the model is also provided a gold output label r(qi, cj) = 1 if the code is a match for the query, and r(qi, cj) = 0 otherwise. These labels provide signal for joint fine-tuning of both modules (Section 3.5).
3.1 Module Network Layout
Here we present our definition of the structural representation sqi for a query qi, and introduce how this structural representation is used for dynamically constructing the neural module network, i.e. building its layout L(sqi).
Query Parsing To infer the representation sqi , we pair the query (e.g., “Load all tables from dataset”, as in Figure 2), with a simple semantic parse that looks similar to: DO WHAT [ (to/from/in/...) WHAT, WHEN, WHERE, HOW, etc]. Following this semantic parse, we break down the query into shorter semantic phrases using the roles of different parts of speech. Nouns and noun phrases correspond to data entities in code, and verbs describe actions or transformations performed on the data entities. Thus, data and transformations are separated and handled by separate neural modules – an entity discovery module E and an action module A. We use a Combinatory Categorial Grammar-based (CCG) semantic parser [43, 5] to infer the semantic parse sqi for the natural language query qi. Parsing is described in further detail in Section 4.1 and Appendix A.2.
Specifying Network Layout In the layout L(sqi), every noun phrase (e.g., “dataset” in Figure 2) will be passed through the entity discovery module E. Module E then produces a probability score ek for every token c^j_k in the code snippet c^j to indicate its semantic relatedness to the noun phrase: E(“dataset”, c^j) = [e1, e2, . . . , eN]. Each verb in sqi (e.g., “load” in Figure 2) will be passed through an action module: A(“load”, pi, c^j) = [a1, a2, . . . , aN]. Here, pi is the span of arguments to the verb (action) in query qi, consisting of children of the verb in the parse sqi (e.g. subject and object arguments to the predicate “load”); a1, . . . , aN are estimates of the token scores e1, . . . , eN for an entity from pi. The top-level of the semantic parse is always an action module. Figure 2 also illustrates the preposition FROM used with “dataset”, the handling of which is described in Section 3.3.
3.2 Entity Discovery Module
The entity discovery module receives a string that references a data entity. Its goal is to identify tokens in the code that have high relevance to that string. The architecture of the module is shown in Figure 3. Given an entity string, “dataset” in the example, and a sequence of code tokens [c^j_1, . . . , c^j_N], the entity module first obtains contextual code token representations using a RoBERTa model that is initialized from the CodeBERT-base checkpoint. The resulting embedding is passed through a two-layer MLP to obtain a score for every individual code token c^j_k: 0 ≤ ek ≤ 1. Thus, the total output of the module is a vector of scores: [e1, e2, . . . , eN]. To prime the entity discovery module for measuring relevancy between code tokens and input, we fine-tune it with noisy supervision, as detailed below.
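A rough PyTorch sketch of the module follows; how the entity string is injected alongside the code and the handling of subword tokenization are simplifying assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EntityDiscovery(nn.Module):
    def __init__(self, name="microsoft/codebert-base", hidden=768):
        super().__init__()
        self.tok = AutoTokenizer.from_pretrained(name)
        self.encoder = AutoModel.from_pretrained(name)        # RoBERTa initialized from CodeBERT
        self.scorer = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                    nn.Dropout(0.1), nn.Linear(hidden, 1))

    def forward(self, entity_str, code_str):
        enc = self.tok(entity_str, code_str, return_tensors="pt", truncation=True)
        hidden = self.encoder(**enc).last_hidden_state        # (1, seq_len, 768) contextual embeddings
        return torch.sigmoid(self.scorer(hidden)).squeeze(-1) # per-token scores e_k in [0, 1]
```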
Noisy Supervision We create noisy supervision for the entity discovery module by using keyword matching and a Python static code analyzer. For the keyword matching, if a code token is an exact match for one or more tokens in the input string, its supervision label is set to 1, otherwise it is 0. Same is true if the code token is a substring or a superstring of one or more input string tokens. For some common nouns we include their synonyms (e.g. “map” for
“dict”); the full list of those and further details are presented in Appendix B.
We used the static code analyzer to extract information about statically known data types. We cross-matched this information with the query to discover whether the query references any datatypes found in the code snippet. If that is the case, the corresponding code tokens are assigned supervision label 1, and all the other tokens are assigned to 0. In the pretraining we learned on equal numbers of (query, code) pairs from the dataset, as well as randomly mismatched pairs of queries and code snippets to avoid creating bias in the entity discovery module.
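The keyword-matching part of this supervision can be summarized as in the sketch below; the synonym table shown is a tiny illustrative subset, and the static-analysis labels are omitted.

```python
SYNONYMS = {"dict": {"map", "mapping"}, "list": {"array"}}   # assumed examples only

def noisy_entity_labels(entity_tokens, code_tokens):
    """1 if a code token exactly/sub-/super-string matches an entity token or synonym, else 0."""
    ents = set()
    for t in (t.lower() for t in entity_tokens):
        ents.add(t)
        ents.update(SYNONYMS.get(t, ()))
    return [int(any(c and (c == e or c in e or e in c) for e in ents))
            for c in (t.lower() for t in code_tokens)]
```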
3.3 Action Module
First, we discuss the case where the action module has only entity module inputs. Figure 4 provides a high-level illustration of the action module. In the example, for the query “Load all tables from dataset”, the action module receives only part of the full query – “Load all tables from ???”. Action module then outputs token scores for the masked argument – “dataset”. If the code snippet corresponds to the query, then the action module should be able to deduce this missing part from the code and the rest of the query. For consistency, we always mask the last data entity argument. We pre-train the action module using the output scores of the entity discovery module as supervision.
Each data entity argument can be associated with 0 or 1 prepositions, but each action may have multiple entities with prepositions. For that reason, for each data entity argument we create one joint embedding of the action verb and the preposition. Joint embeddings are obtained with a 2-layer MLP model, as illustrated in the left-most part of Figure 4.
If a data entity does not have a preposition associated with it, the vector corresponding to the preposition is filled with zeros. The joint verb-preposition embedding is stacked with the code token embedding c^j_k and the entity discovery module output for that token; this is referenced in the middle part of Figure 4. This vector is passed through a transformer encoder model, followed by a 2-layer MLP and a sigmoid layer to output the token score ak, as illustrated in the right-most part of Figure 4. Thus, the dimensionality of the input depends on the number of entities. We use a distinct copy of the module with the corresponding dimensionality for different numbers of inputs, from 1 to 3.
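For concreteness, a rough single-argument version of the module is sketched below; all dimensions and the exact stacking of the inputs are assumptions based on Figure 4 rather than the released implementation.

```python
import torch
import torch.nn as nn

class ActionModule(nn.Module):
    def __init__(self, d_tok=768, d_model=768, n_layers=2):
        super().__init__()
        self.verb_prep = nn.Sequential(nn.Linear(2 * d_tok, d_model), nn.ReLU(),
                                       nn.Linear(d_model, d_model))   # joint verb+preposition embedding
        self.proj = nn.Linear(d_model + d_tok + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.scorer = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, verb_emb, prep_emb, code_tok_emb, entity_scores):
        # verb_emb, prep_emb: (d_tok,); code_tok_emb: (N, d_tok); entity_scores: (N,) from module E
        vp = self.verb_prep(torch.cat([verb_emb, prep_emb]))
        x = torch.cat([vp.expand(code_tok_emb.size(0), -1),        # broadcast verb embedding to every token
                       code_tok_emb, entity_scores.unsqueeze(-1)], dim=-1)
        h = self.encoder(self.proj(x).unsqueeze(0)).squeeze(0)
        return torch.sigmoid(self.scorer(h)).squeeze(-1)           # estimates a_k for the masked argument
```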
3.4 Model Prediction
The final score r̂ij = f(qi, cj) is computed based on the similarity of action and entity discovery module output scores. Formally, for an action module with verb x and parameters p^x = [p^x_1, . . . , p^x_k], the final model prediction is the dot product of the respective outputs of the action and entity discovery modules: r̂ij = A(x, p^x_1, . . . , p^x_{k−1}) · E(p^x_k). Since the action module estimates token scores for the entity affected by the verb, if its prediction is far from the truth, then either the action is not found in the code, or it is not fully corresponding to the query; for example, in the code snippet tables are loaded from the web, instead of a dataset. We normalize this score to make it a probability. If this is the only action in the query, this probability score will be the output of the entire model for the (qi, cj) pair: r̂ij; otherwise, r̂ij will be the product of the probability scores of all nested actions in the layout.
Compositional query with nested actions Consider a compositional query “Load all tables from dataset using Lib library”. Here the action with verb “Load from” has an additional argument “using” – also an action – with an entity argument “Lib library”. In case of nested actions, we flatten the layout by taking the conjunction of individual action similarity scores. Formally, for two verbs x and y and their corresponding arguments p^x = [p^x_1, . . . , p^x_k] and p^y = [p^y_1, . . . , p^y_l] in a layout that looks like A(x, p^x, A(y, p^y)), the output of the model is the conjunction of similarity scores computed for the individual action modules: sim(A(x, p^x_1, . . . , p^x_{k−1}), E(p^x_k)) · sim(A(y, p^y_1, . . . , p^y_{l−1}), E(p^y_l)). This process is repeated until all remaining p^x and p^y are data entities. This design ensures that a code snippet is ranked highly if both actions are ranked highly; we leave explorations of alternative handling approaches for nested actions to future work.
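Putting the pieces together, the matching score can be sketched as follows; since the exact normalization is not spelled out above, the sketch L1-normalizes both score vectors before the dot product, in line with the ablation in Section 4.3.

```python
import torch

def action_score(action_out, entity_out, eps=1e-8):
    """Similarity between the action module's estimate and the entity module's scores."""
    a = action_out / (action_out.sum() + eps)     # L1-normalize the non-negative token scores
    e = entity_out / (entity_out.sum() + eps)
    return torch.dot(a, e)

def layout_score(per_action_pairs):
    """per_action_pairs: one (action_out, entity_out) pair for every action module in the layout."""
    score = torch.tensor(1.0)
    for a_out, e_out in per_action_pairs:
        score = score * action_score(a_out, e_out)  # conjunction over nested actions
    return score                                    # r_hat_ij for the (query, code) pair
```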
3.5 Module Pretraining and Joint Fine-tuning
We train our model through supervised pre-training, as discussed in Sections 3.2 and 3.3, followed by end-to-end training. The end-to-end training objective is binary classification: given a pair of query qi and code cj, the model predicts the probability r̂ij that they are related. In the end-to-end training, we use positive examples taken directly from the dataset - (qi, ci), as well as negative examples composed through the combination of randomly mismatched queries and code snippets. The goal of end-to-end training is fine-tuning the parameters of the entity discovery and action modules, including the weights of the RoBERTa models used for code token representation.
Batching is hard to achieve for our model, so in the interest of time efficiency we do not perform inference on all distractor code snippets in the code dataset. Instead, for a given query we re-rank the top-K highest-ranked code snippets as outputted by some baseline model; in our evaluations we used CodeBERT. Essentially, we use our model in a re-ranking setup, which is common in information retrieval and is known as L2 ranking. We interpret the probabilities outputted by the model as ranking scores. More details about this procedure are provided in Section 4.1.
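Operationally, the re-ranking step looks roughly like the sketch below, where first_stage_scores and ns3_score are placeholders for the CodeBERT ranker and the full modular model.

```python
def rerank(query, layout, candidates, first_stage_scores, ns3_score, K=10):
    """Re-score only the top-K candidates of the first-stage ranker, as in L2 ranking."""
    top = sorted(range(len(candidates)), key=lambda i: first_stage_scores[i], reverse=True)[:K]
    rescored = {i: ns3_score(query, layout, candidates[i]) for i in top}
    return sorted(rescored, key=rescored.get, reverse=True)   # candidate indices, best first
```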
4 Experiments
4.1 Experiment Setting
Dataset We conduct experiments on two datasets: the Python portion of CodeSearchNet (CSN) [24], and CoSQA [23]. We parse all queries with the CCG parser, as discussed later in this section, excluding unparsable examples from further experiments. This leaves us with approximately 40% of the CSN dataset and 70% of the CoSQA dataset; the exact data statistics are available in Appendix A in Table 3. We believe that the difference in the success rate of the parser between the two datasets can be attributed to the fact that the CSN dataset, unlike CoSQA, does not contain real code search queries, but rather consists of docstrings, which are used as approximate queries. More details and examples can be found in Appendix A.3. For our baselines, we use the parsed portion of the dataset for fine-tuning to make the comparison fair. In addition, we also experiment with fine-tuning all models on an even smaller subset of the CodeSearchNet dataset, using only 5K and 10K examples for fine-tuning. The goal is to test whether the modular design makes NS3 more sample-efficient.
All experiment and ablation results discussed in Sections 4.2,4.3 and 4.4 are obtained on the test set of CSN for models trained on CSN training data, or WebQueryTest [31] – a small natural language web query dataset of document-code pairs – for models trained on CoSQA dataset.
Evaluation and Metrics We follow CodeSearchNet’s original approach for evaluation: for a test instance (q, c), we compare the output against outputs over a fixed set of 999 distractor code snippets. We use two evaluation metrics: Mean Reciprocal Rank (MRR) and Precision@K (P@K) for K = 1, 3, and 5; see Appendix A.1 for definitions and further details.
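Both metrics reduce to simple rank statistics over the 999 distractors; a sketch (tie-breaking simplified):

```python
import numpy as np

def mrr_and_precision(correct_score, distractor_scores, ks=(1, 3, 5)):
    """Rank the true snippet against its distractors and report 1/rank and top-K hits."""
    rank = 1 + int(np.sum(np.asarray(distractor_scores) > correct_score))
    return 1.0 / rank, {k: int(rank <= k) for k in ks}
# dataset-level MRR and P@K are the means of these per-query values
```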
Following a common approach in information retrieval, we perform two-step evaluation. In the first step, we obtain CodeBERT’s output against 999 distractors. In the second step, we use NS3 to re-rank the top 10 predictions of CodeBERT. This way the evaluation is much faster, since unlike our
modular approach, CodeBERT can be fed examples in batches. And as we will see from the results, we see improvement in final performance in all scenarios.
Compared Methods We compare NS3 with various state-of-the-art methods, including some traditional approaches for document retrieval and pretrained large NLP language models. (1) BM25 is a ranking method to estimate the relevance of documents to a given query. (2) RoBERTa (code) is a variant of RoBERTa [29] pretrained on the CodeSearchNet corpus. (3) CuBERT [26] is a BERT Large model pretrained on 7.4M Python files from GitHub. (4) CodeBERT [13] is an encoder-only Transformer model trained on unlabeled source code via masked language modeling (MLM) and replaced token detection objectives. (5) GraphCodeBERT [17] is a pretrained Transformer model using MLM, data flow edge prediction, and variable alignment between code and the data flow. (6) GraphCodeBERT* is a re-ranking baseline. We used the same setup as for NS3, but used GraphCodeBERT to re-rank the top-10 predictions of the CodeBERT model.
Query Parser We started by building a vocabulary of predicates for common action verbs and entity nouns, such as “convert”, “find”, “dict”, “map”, etc. For those we constructed the lexicon (rules) of the parser. We have also included “catch-all” rules for parsing sentences with less-common words. To increase the ratio of the parsed data, we preprocessed the queries by removing preceding question words, punctuation marks, etc. The full implementation of our parser, including the entire lexicon and vocabulary, can be found at https://anonymous.4open.science/r/ccg_parser-4BC6. More details are available in Appendix A.2.
Pretrained Models Action and entity discovery modules each embed code tokens with a RoBERTa model that has been initialized from a checkpoint of the pretrained CodeBERT model.³ We fine-tune these models during the pretraining phases, as well as during the final end-to-end training phase.
Hyperparameters The MLPs in entity discovery and action modules have 2 layers with input dimension of 768. We use dropout in these networks with rate 0.1. The learning rate for pretraining and end-to-end training phases was chosen from the range of 1e-6 to 6e-5. We use early stopping with evaluation on unseen validation set for model selection during action module pretraining and endto-end training. For entity discovery model selection we performed manual inspection of produced scores on unseen examples. For fine-tuning the CuBERT, CodeBERT and GraphCodeBERT baselines we use the hyperparameters reported in their original papers. For RoBERTa (code), we perform the search for learning rate during fine-tuning stage in the same interval as for our model. For model selection on baselines we also use early stopping.
3https://huggingface.co/microsoft/codebert-base
4.2 Results
Performance Comparison Tables 1 and 2 present the performance evaluated on the testing portion of the CodeSearchNet dataset and on the WebQueryTest dataset, respectively. As can be seen, our proposed model outperforms the baselines.
Our evaluation strategy improves performance only if the correct code snippet was ranked among the top-10 results returned by the CodeBERT model, so rows labelled “Upper-bound” report best possible performance with this evaluation strategy.
Query Complexity vs. Performance Here we present the breakdown of the performance for our method vs. baselines, using two proxies for the complexity and compositionality of the query. The first one is the maximum depth of the query. We define the maximum depth as the maximum number of nested action modules in the query. The results for this experiment are presented in Figure 5a. As we can see, NS3 improves over the baseline in all scenarios. It is interesting to note that while CodeBERT achieves the best performance on queries with depth 3+, our model’s performance peaks at depth = 1. We hypothesize that this can be related to the automated parsing procedure, as parsing errors are more likely to be propagated in deeper queries. Further studies with carefully curated manual parses are necessary to better understand this phenomenon.
Another proxy for the query complexity we consider, is the number of data arguments to a single action module. While the previous scenario is breaking down the performance by the depth of the query, here we consider its “width”. We measure the average number of entity arguments per action module in the query. In the parsed portion of our dataset we have queries that range from 1 to 3 textual arguments per action verb. The results for this evaluation are presented in Figure 5. As it can be seen, there is no significant difference in performances between the two groups of queries in either CodeBERT or our proposed method - NS3.
4.3 Ablation Studies
Effect of Pretraining In an attempt to better understand the individual effect of the two modules as well as the roles of their pretraining and training procedures, we performed two additional ablation studies. In the first one, we compare the final performance of the original model with two versions where we skipped part of the pretraining. The model denoted NS3 −AP was trained with a pretrained entity discovery module, but no pretraining was done for the action module; instead, we proceeded to the end-to-end training directly. For the model called NS3 −(AP&EP), we skipped both pretrainings of the entity and action modules, and just performed end-to-end training. Figure 6a demonstrates that combined pretraining is important for the final performance. Additionally, we wanted to measure how effective the setup was without end-to-end training. The results are reported in Figure 6a under the name NS3 −E2E. There is a huge performance dip in this scenario, and while the performance is better than random, it is obvious that end-to-end training is crucial for NS3.
Score Normalization We wanted to determine the importance of normalizing the modules’ output scores to a proper probability distribution. In Figure 6b we demonstrate the performance achieved using no normalization at all, normalizing either the action or the entity discovery module, or normalizing both. In all cases we used L1 normalization, since our output scores are non-negative. The version that is not normalized at all performs the worst on both datasets. The performances of the other three versions are close on both datasets.
Similarity Metric Additionally, we experimented with replacing the dot product similarity with a different similarity metric. In particular, in Figure 6c we compare the performance achieved using dot product similarity, L2 distance, and weighted cosine similarity. The difference in performance among different versions is marginal.
4.4 Analysis and Case Study
Appendix C contains additional studies on model generalization, such as handling completely unseen actions and entities, as well as the impact that the frequency of observing an action or entity during training has on model performance.
Case Study Finally, we demonstrate some examples of the scores produced by our modules at different stages of training. Figure 8 shows module score outputs for two different queries with their corresponding code snippets. The first column shows the output of the entity discovery module after pretraining, while the second and third columns demonstrate the outputs of the entity discovery and action modules after the end-to-end training. We can see that in the first column the model identifies syntactic matches, such as “folder” and a list comprehension, to which “elements” could be related. After fine-tuning we can see there is a wider range of both syntactic and some semantic matches present, e.g. “dirlist” and “filelist” are correctly identified as related to “folders”.
Perturbed Query Evaluation In this section we study how sensitive the models are to small changes in the query qi, such that it no longer correctly describes its corresponding code snippet ci. Our expectation is that a sensitive model evaluated on ci will rate the original query higher than the perturbed one, whereas a model that tends to over-generalize and ignore details of the query will likely rate the perturbed query similar to the original. We start from 100 different pairs (qi, ci) that both our model and CodeBERT predict correctly.
We limited our study to queries with a single verb and a single data entity argument to that verb. For each pair we generated perturbations of two kinds, with 20 perturbed versions for every query. For the first type of perturbation, we replaced the query’s data argument with a data argument sampled randomly from another query. For the second type, we replaced the verb argument with another randomly sampled verb. To account for calibration of the models, we measure the change in performance through the ratio of the perturbed query score to the original query score (lower is better). The results are shown in Figure 7, labelled “V(arg1) → V(arg2)” and “V1(arg) → V2(arg)”.
Discussion One of the main requirements for the application of our proposed method is being able to construct a semantic parse of the retrieval query. In general, it is reasonable to expect the users of the SCS to be able to come up with a formal representation of the query, e.g. by representing it in a form similar to SQL or CodeQL. However, due to the lack of such data for training and testing purposes, we implemented our own parser, which understandably does not have perfect performance since we are dealing with open-ended sentences.
5 Related work
Different deep learning models have proved quite efficient when applied to programming languages and code. Prior works have studied and reviewed the uses of deep learning for code analysis in general and code search in particular [39, 31].
A number of approaches to deep code search are based on creating a relevance-predicting model between text and code. [16] propose using RNNs for embedding both code and text into the same latent space. On the other hand, [27] capitalizes on the inherent graph-like structure of programs to formulate code search as graph matching. A few works propose enriching the models handling code embedding by adding additional code analysis information, such as semantic and dependency parses [12, 2], variable renaming and statement permutation [14], as well as structures such as the abstract syntax tree of the program [20, 37]. A few other approaches have dual formulations of code retrieval and code summarization [9, 40, 41, 6]. In a different line of work, Heyman & Cutsem [21] propose considering the code search scenario where short annotative descriptions of code snippets are provided. Appendix E discusses more related work.
6 Conclusion
We presented NS3, a neuro-symbolic method for semantic code search based on neural module networks. Our method represents the query and code in terms of actions and data entities, and uses the semantic structure of the query to construct a neural module network. In contrast to existing code search methods, NS3 more precisely captures the nature of queries. In an extensive evaluation, we show that this method works better than strong but unstructured baselines. We further study the model’s generalization capacities, robustness, and sensibility of outputs in a series of additional experiments.
Acknowledgments and Disclosure of Funding
This research is supported in part by the DARPA ReMath program under Contract No. HR00112190020, the DARPA MCS program under Contract No. N660011924033, Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, the Defense Advanced Research Projects Agency with award W911NF-19-20271, NSF IIS 2048211, and gift awards from Google, Amazon, JP Morgan and Sony. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. We thank all the collaborators in USC INK research lab for their constructive feedback on the work. | 1. What is the focus and contribution of the paper on Neuro-Symbolic Semantic Code Search?
2. What are the strengths of the proposed approach, particularly in terms of its evaluation and comparison with other works?
3. What are the weaknesses of the paper regarding its reliance on rule-based parsing?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes NS3, Neuro-Symbolic Semantic Code Search. NS3 supplements the query sentence with a layout of its semantic structure, which is then used to break down the final reasoning decision into a series of lower-level decisions. NS3 outperforms baselines on CodeSearchNet and CoSQA.
Strengths And Weaknesses
Strengths:
NS3 is evaluated in two well-established benchmarks, CodeSearchNet and CoSQA, and compared against strong baselines.
The empirical result is very strong. The proposed method, NS3, outperforms state-of-the-art by a large margin.
The paper includes detailed experiments on reduced dataset settings and ablation studies.
Weaknesses:
NS3 requires rule-based parsing of natural languages, while other baselines, such as CodeBERT, only involve natural language models. Rule-based parsing of natural languages is complicated, language-dependent, and less scalable than language models. In parser_dict.py of the released parser, there are hundreds of task-specific parsing/synonym rules for natural languages. These rules are not transferrable to different natural languages (e.g. English to Chinese), and they even cannot generalize to different programming languages (e.g. Python to C++).
Questions
What are the process of engineering parsing rules and NL-action mappings? Are they engineered according to the performance on the dev set or the test set?
Limitations
N/A |
NIPS | Title
NS3: Neuro-symbolic Semantic Code Search
Abstract
Semantic code search is the task of retrieving a code snippet given a textual description of its functionality. Recent work has been focused on using similarity metrics between neural embeddings of text and code. However, current language models are known to struggle with longer, compositional text, and multi-step reasoning. To overcome this limitation, we propose supplementing the query sentence with a layout of its semantic structure. The semantic layout is used to break down the final reasoning decision into a series of lower-level decisions. We use a Neural Module Network architecture to implement this idea. We compare our model NS3 (Neuro-Symbolic Semantic Search) to a number of baselines, including state-of-the-art semantic code retrieval methods, and evaluate on two datasets, CodeSearchNet and Code Search and Question Answering (CoSQA). We demonstrate that our approach results in more precise code retrieval, and we study the effectiveness of our modular design when handling compositional queries¹.
1 Introduction
The increasing scale of software repositories makes retrieving relevant code snippets more challenging. Traditionally, source code retrieval has been limited to keyword [33, 30] or regex [7] search. Both rely on the user providing the exact keywords appearing in or around the sought code. However, neural models enabled new approaches for retrieving code from a textual description of its functionality, a task known as semantic code search (SCS). A model like Transformer [36] can map a database of code snippets and natural language queries to a shared high-dimensional space. Relevant code snippets are then retrieved by searching over this embedding space using a predefined similarity metric, or a learned distance function [26, 13, 12]. Some of the recent works capitalize on the rich structure of the code, and employ graph neural networks for the task [17, 28].
Despite impressive results on SCS, current neural approaches are far from satisfactory in dealing with a wide range of natural-language queries, especially on ones with compositional language structure. Encoding text into a dense vector for retrieval purposes can be problematic because we risk losing faithfulness of the representation, and missing important details of the query. Not only does this a) affect the performance, but it can b) drastically reduce a model’s value for the users, because compositional queries such as “Check that directory does not exist before creating it” require performing multi-step reasoning on code.
* Currently at Google Research. † Equal supervision. 1 Code and data are available at https://github.com/ShushanArakelyan/modular_code_search
We suggest overcoming these challenges by introducing a modular workflow based on the semantic structure of the query. Our approach is based on the intuition of how an engineer would approach a SCS task. For example, in performing search for code that navigates folders in Python they would first only pay attention to code that has cues about operating with paths, directories or folders. Afterwards, they would seek indications of iterating through some of the found objects or other entities in the code related to them. In other words, they would perform multiple steps of different nature - i.e. finding indications of specific types of data entities, or specific operations. Figure 1 illustrates which parts of the code would be
important to indicate that they have found the desired code snippet at each step. We attempt to imitate this process in this work. To formalize the decomposition of the query into such steps, we take inspiration from the idea that code is comprised of data, or entities, and transformations, or actions, over data. Thus, a SCS query is also likely to describe the code in terms of data entities and actions.
We break down the task of matching the query into smaller tasks of matching individual data entities and actions. In particular, we aim to identify parts of the code that indicate the presence of the corresponding data or action. We tackle each part with a distinct type of network – a neural module. Using the semantic parse of the query, we construct the layout of how modules’ outputs should be linked according to the relationships between data entities and actions, where each data entity represents a noun, or a noun phrase, and each action represents a verb, or a verbal phrase. Correspondingly, this layout specifies how the modules should be combined into a single neural module network (NMN) [4]. Evaluating the NMN on the candidate code approximates detecting the corresponding entities and actions in the code by testing whether the neural network can deduce one missing entity from the code and the rest of the query.
This approach has the following advantages. First, the semantic parse captures the compositionality of a query. Second, it mitigates the challenges of faithfully encoding text by focusing only on a small portion of the query at a time. Finally, applying the neural modules in succession can potentially mimic the staged reasoning necessary for SCS.
We evaluate our proposed NS3 model on two SCS datasets - CodeSearchNet (CSN) [24] and CoSQA/WebQueryTest [23]. Additionally, we experiment with a limited training set size of CSN of 10K and 5K examples. We find that NS3 provides large improvements upon baselines in all cases. Our experiments demonstrate that the resulting model is more sensitive to small, but semantically significant changes in the query, and is more likely to correctly recognize that a modified query no longer matches its code pair.
Our main contributions are: (i) We propose looking at SCS as a compositional task that requires multi-step reasoning. (ii) We present an implementation of the aforementioned paradigm based on
NMNs. (iii) We demonstrate that our proposed model provides a large improvement on a number of well-established baseline models. (iv) We perform additional studies to evaluate the capacity of our model to handle compositional queries.
2 Background
2.1 Semantic Code Search
Semantic code search (SCS) is the process of retrieving a relevant code snippet based on a textual description of its functionality, also referred to as the query. Let C be a database of code snippets ci. For each ci ∈ C, there is a textual description of its functionality qi. In the example in Figure 2, the query qi is “Load all tables from dataset”. Let r be an indicator function such that r(qi, cj) = 1 if i = j; and 0 otherwise. Given some query q the goal of SCS is to find c* such that r(q, c*) = 1. We assume that for each q* there is exactly one such c*.2 Here we look to construct a model which takes as input a pair of a query and a candidate code snippet, (qi, cj), and assigns the pair a probability r̂ij of being a correct match. Following the common practice in information retrieval, we evaluate the performance of the model based on how high the correct answer c* is ranked among a number of incorrect, or distractor, instances {c}. This set of distractor instances can be the entire codebase C, or a subset of the codebase obtained through heuristic filtering, or another ranking method.
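To make the retrieval protocol above concrete, the following is a minimal sketch (not from the paper) of ranking the correct snippet among distractors under an arbitrary scoring function; the `toy_score` keyword-overlap scorer and the example strings are illustrative placeholders, not any of the models discussed here.

```python
from typing import Callable, List

def rank_of_correct(query: str, correct: str, distractors: List[str],
                    score: Callable[[str, str], float]) -> int:
    """Return the 1-based rank of `correct` among all candidates under `score`."""
    candidates = [correct] + distractors
    scores = [score(query, c) for c in candidates]
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    return order.index(0) + 1  # position of the correct snippet (index 0)

# Toy usage with a keyword-overlap scorer (purely illustrative).
toy_score = lambda q, c: len(set(q.lower().split()) & set(c.lower().split()))
print(rank_of_correct("load all tables from dataset",
                      "load tables from dataset",
                      ["save file to disk", "parse json string"],
                      toy_score))  # -> 1
```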
2.2 Neural Models for Semantic Code Search
Past works handling programs and code have focused on enriching their models with incorporating more semantic and syntactic information from code [1, 10, 34, 47]. Some prior works have cast the SCS as a sequence classification task, where the code is represented as a textual sequence and input pair (qi, cj) is concatenated with a special separator symbol into a single sequence, and the output is the score r̂ij : r̂ij = f(qi, cj). The function f performing the classification can be any sequence classification model, e.g. BERT [11].
Alternatively, one can define separate networks for independently representing the query (f), the code (g) and measuring the similarity between them: r̂ij = sim(f(qi), g(cj)). This allows one to design the code encoding network g with additional program-specific information, such as abstract syntax trees [3, 44] or control flow graphs [15, 45]. Separating the two modalities of natural language and code also allows further enrichment of the code representation by adding contrastive learning objectives [25, 6]. In these approaches, the original code snippet c is automatically modified with semantic-preserving transformations, such as variable renaming, to introduce versions c′ of the code snippet with the exact same functionality. The code encoder g is then trained with an appropriate contrastive loss, such as Noise Contrastive Estimation (NCE) [19], or InfoNCE [35].
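As a rough illustration of the bi-encoder formulation r̂ij = sim(f(qi), g(cj)) (a hedged sketch, not the exact setup of any of the baselines), one can embed query and code independently with a pretrained encoder such as the public microsoft/codebert-base checkpoint and compare the embeddings with cosine similarity; pooling the first-position vector is an assumption here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
enc = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(text: str) -> torch.Tensor:
    # Encode a query or a code snippet and pool the first-position hidden state.
    batch = tok(text, return_tensors="pt", truncation=True)
    return enc(**batch).last_hidden_state[:, 0]  # (1, 768)

def biencoder_score(query: str, code: str) -> float:
    # Bi-encoder: query and code are embedded independently, then compared.
    return torch.cosine_similarity(embed(query), embed(code)).item()
```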
Limitations However, there is also merit in reviewing how we represent and use the textual query to help guide the SCS process. Firstly, existing work derives a single embedding for the entire query. This means that specific details or nested subqueries of the query may be omitted or not represented faithfully - getting lost in the embedding. Secondly, prior approaches make the decision after a single pass over the code snippet. This ignores cases where reasoning about a query requires multiple steps and thus - multiple look-ups over the code, as is for example in cases with nested subqueries. Our proposed approach - NS3 - attempts to address these issues by breaking down the query into smaller phrases based on its semantic parse and locating each of them in the code snippet. This should allow us to match compositional and longer queries to code more precisely.
3 Neural Modular Code Search
We propose to supplement the query with a loose structure resembling its semantic parse, as illustrated in Figure 2. We follow the parse structure to break down the query into smaller, semantically coherent parts, so that each corresponds to an individual execution step. The steps are taken in succession by a neural module network composed from a layout that is determined from the semantic parse of the query (Sec. 3.1). The neural module network is composed by stacking “modules”, or jointly trained networks, of distinct types, each carrying out a different functionality.
2 This is not the case in the CoSQA dataset. For the sake of consistency, we perform the evaluation repeatedly, leaving only one correct code snippet among the candidates at a time, while removing the others.
Method Overview In this work, we define two types of neural modules - the entity discovery module (denoted by E; Sec. 3.2) and the action module (denoted by A; Sec 3.3). The entity discovery module estimates the semantic relatedness of each code token c^j_i in the code snippet c^j = [c^j_1, . . . , c^j_N] to an entity mentioned in the query – e.g. “all tables” or “dataset” as in Figure 2. The action module estimates the likelihood of each code token to be related to an (unseen) entity affected by the action in the query, e.g. “dataset” and “load from” correspondingly, conditioned on the rest of the input (seen), e.g. “all tables”. The similarity of the predictions of the entity discovery and action modules measures how well the code matches that part of the query. The modules are nested - the action modules take as input part of the output of another module - and the order of nesting is decided by the semantic parse layout. In the rest of the paper we refer to the inputs of a module as its arguments.
Every input instance fed to the model is a 3-tuple (qi, sqi , cj) consisting of a natural language query qi, the query’s semantic parse sqi , a candidate code (sequence) cj . The goal is producing a binary label r̂ij = 1 if the code is a match for the query, and 0 otherwise. The layout of the neural module network, denoted by L(sqi), is created from the semantic structure of the query sqi . During inference, given (qi, sqi , cj) as input the model instantiates a network based on the layout, passes qi, cj and sqi as inputs, and obtains the model prediction r̂ij . This pipeline is illustrated in Figure 2, and details about creating the layout of the neural module network are presented in Section 3.1.
During training, we first perform noisy supervision pretraining for both modules. Next, we perform end-to-end training, where in addition to the query, its parse, and a code snippet, the model is also provided a gold output label r(qi, cj) = 1 if the code is a match for the query, and r(qi, cj) = 0 otherwise. These labels provide signal for joint fine-tuning of both modules (Section 3.5).
3.1 Module Network Layout
Here we present our definition of the structural representation sqi for a query qi, and introduce how this structural representation is used for dynamically constructing the neural module network, i.e. building its layout L(sqi).
Query Parsing To infer the representation sqi , we pair the query (e.g., “Load all tables from dataset”, as in Figure 2), with a simple semantic parse that looks similar to: DO WHAT [ (to/from/in/...) WHAT, WHEN, WHERE, HOW, etc]. Following this semantic parse, we break down the query into shorter semantic phrases using the roles of different parts of speech. Nouns and noun phrases correspond to data entities in code, and verbs describe actions or transformations performed on the data entities. Thus, data and transformations are separated and handled by separate neural modules – an entity discovery module E and an action module A. We use a Combinatory Categorial Grammar-based (CCG) semantic parser [43, 5] to infer the semantic parse sqi for the natural language query qi. Parsing is described in further detail in Section 4.1 and Appendix A.2.
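The exact output format of the parser is not shown here, so the following is a hedged sketch of one way the resulting layout could be represented in code; the class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class Entity:
    text: str                    # noun phrase, e.g. "dataset"
    prep: Optional[str] = None   # optional preposition, e.g. "from"

@dataclass
class Action:
    verb: str                                        # e.g. "load"
    args: List[Union["Action", "Entity"]] = field(default_factory=list)

# Assumed layout for "Load all tables from dataset":
layout = Action("load", [Entity("all tables"), Entity("dataset", prep="from")])
```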
Specifying Network Layout In the layout L(sqi), every noun phrase (e.g., “dataset” in Figure 2) will be passed through the entity discovery module E. Module E then produces a probability score ek for every token c^j_k in the code snippet c^j to indicate its semantic relatedness to the noun phrase: E(“dataset”, c^j) = [e1, e2, . . . , eN]. Each verb in sqi (e.g., “load” in Figure 2) will be passed through an action module: A(“load”, pi, c^j) = [a1, a2, . . . , aN]. Here, pi is the span of arguments to the verb (action) in query qi, consisting of children of the verb in the parse sqi (e.g. subject and object arguments to the predicate “load”); a1, . . . , aN are estimates of the token scores e1, . . . , eN for an entity from pi. The top level of the semantic parse is always an action module. Figure 2 also illustrates the preposition FROM used with “dataset”; its handling is described in Section 3.3.
3.2 Entity Discovery Module
The entity discovery module receives a string that references a data entity. Its goal is to identify tokens in the code that have high relevance to that string. The architecture of the module is shown in Figure 3. Given an entity string, “dataset” in the example, and a sequence of code tokens [c^j_1, . . . , c^j_N], the entity module first obtains contextual code token representations using a RoBERTa model that is initialized from the CodeBERT-base checkpoint. The resulting embedding is passed through a two-layer MLP to obtain a score for every individual code token c^j_k: 0 ≤ ek ≤ 1. Thus, the total output of the module is a vector of scores: [e1, e2, . . . , eN]. To prime the entity discovery module for measuring relevancy between code tokens and input, we fine-tune it with noisy supervision, as detailed below.
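A minimal PyTorch sketch of this architecture is given below; it encodes the entity string jointly with the code and scores every position, which is an assumption, and the mapping from sequence positions back to code tokens only is omitted here for brevity.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EntityDiscoveryModule(nn.Module):
    """Sketch of the entity discovery module E; input packing and sizes are assumptions."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
        self.encoder = AutoModel.from_pretrained("microsoft/codebert-base")
        self.scorer = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def forward(self, entity: str, code: str) -> torch.Tensor:
        # Contextual token states for the (entity, code) pair, then a 2-layer MLP + sigmoid
        # produces one relatedness score per position.
        batch = self.tokenizer(entity, code, return_tensors="pt", truncation=True)
        states = self.encoder(**batch).last_hidden_state        # (1, L, hidden)
        return torch.sigmoid(self.scorer(states)).squeeze(-1)   # (1, L), each score in [0, 1]
```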
Noisy Supervision We create noisy supervision for the entity discovery module by using keyword matching and a Python static code analyzer. For the keyword matching, if a code token is an exact match for one or more tokens in the input string, its supervision label is set to 1, otherwise it is 0. Same is true if the code token is a substring or a superstring of one or more input string tokens. For some common nouns we include their synonyms (e.g. “map” for
“dict”), the full list of those and further details are presented in Appendix B.
We used the static code analyzer to extract information about statically known data types. We cross-matched this information with the query to discover whether the query references any datatypes found in the code snippet. If that is the case, the corresponding code tokens are assigned supervision label 1, and all the other tokens are assigned to 0. In the pretraining we learned on equal numbers of (query, code) pairs from the dataset, as well as randomly mismatched pairs of queries and code snippets to avoid creating bias in the entity discovery module.
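A simplified sketch of the keyword-matching part of this labeling scheme follows (the static-analyzer labels and the full synonym list are omitted; the synonym dictionary below is a tiny assumed subset):

```python
SYNONYMS = {"dict": {"map", "mapping"}, "folder": {"dir", "directory"}}  # assumed subset

def noisy_entity_labels(entity: str, code_tokens: list) -> list:
    """Weak labels: 1 if a code token matches (exactly, as sub/superstring, or via synonym)
    any token of the entity string, else 0."""
    entity_tokens = set(entity.lower().split())
    expanded = set(entity_tokens)
    for t in entity_tokens:
        expanded |= SYNONYMS.get(t, set())
    labels = []
    for tok in code_tokens:
        t = tok.lower()
        labels.append(1 if any(t == e or t in e or e in t for e in expanded) else 0)
    return labels

print(noisy_entity_labels("dataset", ["def", "load", "(", "dataset", ")", ":"]))  # [0, 0, 0, 1, 0, 0]
```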
3.3 Action Module
First, we discuss the case where the action module has only entity module inputs. Figure 4 provides a high-level illustration of the action module. In the example, for the query “Load all tables from dataset”, the action module receives only part of the full query – “Load all tables from ???”. Action module then outputs token scores for the masked argument – “dataset”. If the code snippet corresponds to the query, then the action module should be able to deduce this missing part from the code and the rest of the query. For consistency, we always mask the last data entity argument. We pre-train the action module using the output scores of the entity discovery module as supervision.
Each data entity argument can be associated with 0 or 1 prepositions, but each action may have multiple entities with prepositions. For that reason, for each data entity argument we create one joint embedding of the action verb and the preposition. Joint embeddings are obtained with a 2-layer MLP model, as illustrated in the left-most part of Figure 4.
If a data entity does not have a preposition associated with it, the vector corresponding to the preposition is filled with zeros. The joint verb-preposition embedding is stacked with the code token embedding cjk and entity discovery module output for that token, this is referenced in the middle part of Figure 4. This vector is passed through a transformer encoder model, followed by a 2-layer MLP and a
sigmoid layer to output token score ak, illustrated in the right-most part of the Figure 4. Thus, the dimensionality of the input depends on the number of entities. We use a distinct copy of the module with the corresponding dimensionality for different numbers of inputs, from 1 to 3.
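The following PyTorch sketch mirrors this description for the single-argument case; hidden sizes, the number of attention heads and layers, and how the verb and preposition embeddings are produced are all assumptions, since Figure 4 is not reproduced here.

```python
import torch
import torch.nn as nn

class ActionModule(nn.Module):
    """Sketch of the action module A for one seen (entity, preposition) argument."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        # Joint verb+preposition embedding (a zero vector stands in for a missing preposition).
        self.verb_prep = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden))
        d_model = 2 * hidden + 1  # joint embedding + code token embedding + entity score
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=1, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.scorer = nn.Sequential(nn.Linear(d_model, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def forward(self, verb_emb, prep_emb, code_emb, entity_scores):
        # verb_emb, prep_emb: (1, hidden); code_emb: (1, N, hidden); entity_scores: (1, N)
        vp = self.verb_prep(torch.cat([verb_emb, prep_emb], dim=-1))        # (1, hidden)
        vp = vp.unsqueeze(1).expand(-1, code_emb.size(1), -1)               # (1, N, hidden)
        x = torch.cat([vp, code_emb, entity_scores.unsqueeze(-1)], dim=-1)  # (1, N, d_model)
        return torch.sigmoid(self.scorer(self.encoder(x))).squeeze(-1)      # (1, N) token scores
```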
3.4 Model Prediction
The final score r̂ij = f(qi, cj) is computed based on the similarity of the action and entity discovery module output scores. Formally, for an action module with verb x and parameters p^x = [p^x_1, . . . , p^x_k], the final model prediction is the dot product of the respective outputs of the action and entity discovery modules: r̂ij = A(x, p^x_1, . . . , p^x_{k−1}) · E(p^x_k). Since the action module estimates token scores for the entity affected by the verb, if its prediction is far from the truth then either the action is not found in the code, or it does not fully correspond to the query, for example, if in the code snippet tables are loaded from the web instead of a dataset. We normalize this score to make it a probability. If this is the only action in the query, this probability score will be the output of the entire model for the (qi, cj) pair: r̂ij; otherwise r̂ij will be the product of the probability scores of all nested actions in the layout.
Compositional query with nested actions Consider a compositional query “Load all tables from dataset using Lib library”. Here the action with verb “Load from” has an additional argument “using” – also an action – with an entity argument “Lib library”. In the case of nested actions, we flatten the layout by taking the conjunction of the individual action similarity scores. Formally, for two verbs x and y and their corresponding arguments p^x = [p^x_1, . . . , p^x_k] and p^y = [p^y_1, . . . , p^y_l] in a layout that looks like A(x, p^x, A(y, p^y)), the output of the model is the conjunction of the similarity scores computed for the individual action modules: sim(A(x, p^x_1, . . . , p^x_{k−1}), E(p^x_k)) · sim(A(y, p^y_1, . . . , p^y_{l−1}), E(p^y_l)). This process is repeated until all remaining p^x and p^y are data entities. This design ensures that a code snippet is ranked highly only if both actions are ranked highly; we leave exploration of alternative approaches for handling nested actions to future work.
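Put together, the final score can be computed as in the sketch below (assuming both module outputs are L1-normalized; whether one or both sides are normalized is one of the ablations in Section 4.3):

```python
import torch

def l1_normalize(scores: torch.Tensor) -> torch.Tensor:
    # Non-negative token scores -> probability-like distribution over tokens.
    return scores / scores.sum(dim=-1, keepdim=True).clamp_min(1e-8)

def action_match(action_out: torch.Tensor, entity_out: torch.Tensor) -> torch.Tensor:
    # Dot product between the action module's estimate for the masked argument
    # and the entity discovery module's scores for that argument.
    return (l1_normalize(action_out) * l1_normalize(entity_out)).sum(dim=-1)

def model_score(per_action_outputs) -> torch.Tensor:
    """Conjunction over the flattened nested actions: product of per-action match scores."""
    score = torch.tensor(1.0)
    for action_out, entity_out in per_action_outputs:
        score = score * action_match(action_out, entity_out)
    return score
```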
3.5 Module Pretraining and Joint Fine-tuning
We train our model through supervised pre-training, as discussed in Sections 3.2 and 3.3, followed by end-to-end training. The end-to-end training objective is binary classification: given a pair of query qi and code cj, the model predicts the probability r̂ij that they are related. In the end-to-end training, we use positive examples taken directly from the dataset - (qi, ci), as well as negative examples composed through the combination of randomly mismatched queries and code snippets. The goal of end-to-end training is fine-tuning the parameters of the entity discovery and action modules, including the weights of the RoBERTa models used for code token representation.
Batching is hard to achieve for our model, so in the interest of time efficiency we do not perform inference on all distractor code snippets in the code dataset. Instead, for a given query we re-rank the top-K highest ranked code snippets as output by a baseline model; in our evaluations we used CodeBERT. Essentially, we use our model in a re-ranking setup; this is common in information retrieval and is known as L2 ranking. We interpret the probabilities output by the model as ranking scores. More details about this procedure are provided in Section 4.1.
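A schematic version of this two-stage (L2 ranking) setup is sketched below; the scorer names are placeholders for a fast first-stage model such as CodeBERT and for the slower modular model.

```python
def rerank_top_k(query, parse, candidates, first_stage_score, modular_score, k: int = 10):
    """Rank all candidates with a cheap first-stage scorer, then re-order only the top k
    with the modular model; the remaining candidates keep their first-stage order."""
    ranked = sorted(candidates, key=lambda c: first_stage_score(query, c), reverse=True)
    top_k, rest = ranked[:k], ranked[k:]
    reranked = sorted(top_k, key=lambda c: modular_score(query, parse, c), reverse=True)
    return reranked + rest
```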
4 Experiments
4.1 Experiment Setting
Dataset We conduct experiments on two datasets: the Python portion of CodeSearchNet (CSN) [24], and CoSQA [23]. We parse all queries with the CCG parser, as discussed later in this section, excluding unparsable examples from further experiments. This leaves us with approximately 40% of the CSN dataset and 70% of the CoSQA dataset; the exact data statistics are available in Appendix A in Table 3. We believe that the difference in the success rate of the parser between the two datasets can be attributed to the fact that the CSN dataset, unlike CoSQA, does not contain real code search queries, but rather consists of docstrings, which are used as approximate queries. More details and examples can be found in Appendix A.3. For our baselines, we use the parsed portion of the dataset for fine-tuning to make the comparison fair. In addition, we also experiment with fine-tuning all models on an even smaller subset of the CodeSearchNet dataset, using only 5K and 10K examples for fine-tuning. The goal is to test whether the modular design makes NS3 more sample-efficient.
All experiment and ablation results discussed in Sections 4.2,4.3 and 4.4 are obtained on the test set of CSN for models trained on CSN training data, or WebQueryTest [31] – a small natural language web query dataset of document-code pairs – for models trained on CoSQA dataset.
Evaluation and Metrics We follow CodeSearchNet’s original approach for evaluation for a test instance (q, c), comparing the output against outputs over a fixed set of 999 distractor code snippets. We use two evaluation metrics: Mean Reciprocal Rank (MRR) and Precision@K (P@K) for K=1, 3, and 5, see Appendix A.1 for definitions and further details.
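For reference, with a single correct snippet per query these metrics reduce to the following simple computations over the 1-based ranks of the correct snippets (a sketch; the ranks shown are made up):

```python
def mrr(ranks: list) -> float:
    """Mean reciprocal rank over the 1-based rank of the correct snippet for each query."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def precision_at_k(ranks: list, k: int) -> float:
    """Fraction of queries whose correct snippet is ranked within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 12, 2]  # illustrative ranks among 1 correct snippet + 999 distractors
print(round(mrr(ranks), 3), precision_at_k(ranks, 5))  # 0.479 0.75
```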
Following a common approach in information retrieval, we perform two-step evaluation. In the first step, we obtain CodeBERT’s output against 999 distractors. In the second step, we use NS3 to re-rank the top 10 predictions of CodeBERT. This way the evaluation is much faster, since unlike our
modular approach, CodeBERT can be fed examples in batches. As the results show, this yields an improvement in final performance in all scenarios.
Compared Methods We compare NS3 with various state-of-the-art methods, including some traditional approaches for document retrieval and pretrained large NLP language models. (1) BM25 is a ranking method to estimate the relevance of documents to a given query. (2) RoBERTa (code) is a variant of RoBERTa [29] pretrained on the CodeSearchNet corpus. (3) CuBERT [26] is a BERT Large model pretrained on 7.4M Python files from GitHub. (4) CodeBERT [13] is an encoder-only Transformer model trained on unlabeled source code via masked language modeling (MLM) and replaced token detection objectives. (5) GraphCodeBERT [17] is a pretrained Transformer model using MLM, data flow edge prediction, and variable alignment between code and the data flow. (6) GraphCodeBERT* is a re-ranking baseline. We used the same setup as for NS3, but used GraphCodeBERT to re-rank the top-10 predictions of the CodeBERT model.
Query Parser We started by building a vocabulary of predicates for common action verbs and entity nouns, such as “convert”, “find”, “dict”, “map”, etc. For those we constructed the lexicon (rules) of the parser. We have also included “catch-all” rules for parsing sentences with less-common words. To increase the ratio of the parsed data, we preprocessed the queries by removing preceding question words, punctuation marks, etc. The full implementation of our parser, including the entire lexicon and vocabulary, can be found at https://anonymous.4open.science/r/ccg_parser-4BC6. More details are available in Appendix A.2.
Pretrained Models Action and entity discovery modules each embed code tokens with a RoBERTa model, that has been initialized from a checkpoint of pretrained CodeBERT model 3. We fine-tune these models during the pretraining phases, as well as during final end-to-end training phase.
Hyperparameters The MLPs in entity discovery and action modules have 2 layers with input dimension of 768. We use dropout in these networks with rate 0.1. The learning rate for pretraining and end-to-end training phases was chosen from the range of 1e-6 to 6e-5. We use early stopping with evaluation on unseen validation set for model selection during action module pretraining and endto-end training. For entity discovery model selection we performed manual inspection of produced scores on unseen examples. For fine-tuning the CuBERT, CodeBERT and GraphCodeBERT baselines we use the hyperparameters reported in their original papers. For RoBERTa (code), we perform the search for learning rate during fine-tuning stage in the same interval as for our model. For model selection on baselines we also use early stopping.
3https://huggingface.co/microsoft/codebert-base
4.2 Results
Performance Comparison Tables 1 and 2 present the performance evaluated on the testing portion of the CodeSearchNet dataset and on the WebQueryTest dataset, respectively. As can be seen, our proposed model outperforms the baselines.
Our evaluation strategy improves performance only if the correct code snippet was ranked among the top-10 results returned by the CodeBERT model, so rows labelled “Upper-bound” report best possible performance with this evaluation strategy.
Query Complexity vs. Performance Here we present the breakdown of the performance for our method vs baselines, using two proxies for the complexity and compositionality of the query. The first one is the maximum depth of the query. We define the maximum depth as the maximum number of nested action modules in the query. The results for this experiment are presented in Figure 5a. As we can see, NS3 improves over the baseline in all scenarios. It is interesting to note, that while CodeBERT achieves the best performance on queries with depth 3+, our model’s performance peaks at depth = 1. We hypothesize that this can be related to the automated parsing procedure, as parsing errors are more likely to be propagated in deeper queries. Further studies with carefully curated manual parses are necessary to better understand this phenomenon.
Another proxy for the query complexity we consider, is the number of data arguments to a single action module. While the previous scenario is breaking down the performance by the depth of the query, here we consider its “width”. We measure the average number of entity arguments per action module in the query. In the parsed portion of our dataset we have queries that range from 1 to 3 textual arguments per action verb. The results for this evaluation are presented in Figure 5. As it can be seen, there is no significant difference in performances between the two groups of queries in either CodeBERT or our proposed method - NS3.
4.3 Ablation Studies
Effect of Pretraining In an attempt to better understand the individual effect of the two modules as well as the roles of their pretraining and training procedures, we performed two additional ablation studies. In the first one, we compare the final performance of the original model with two versions where we skipped part of the pretraining. The model denoted NS3 −AP was trained with a pretrained entity discovery module, but no pretraining was done for the action module; instead we proceeded to end-to-end training directly. For the model called NS3 −(AP&EP), we skipped both pretrainings of the entity and action modules, and just performed end-to-end training. Figure 6a demonstrates that combined pretraining is important for the final performance. Additionally, we wanted to measure how effective the setup was without end-to-end training. The results are reported in Figure 6a under the name NS3 −E2E. There is a huge performance dip in this scenario, and while the performance is better than random, it is obvious that end-to-end training is crucial for NS3.
Score Normalization We wanted to determine the importance of output normalization for the modules to a proper probability distribution. In Figure 6b we demonstrate the performance achieved using no normalization at all, normalizing either action or entity discovery module, or normalizing both. In all cases we used L1 normalization, since our output scores are non-negative. The version that is not normalized at all performs the worst on both datasets. The performances of the other three versions are close on both datasets.
Similarity Metric Additionally, we experimented with replacing the dot product similarity with a different similarity metric. In particular, in Figure 6c we compare the performance achieved using dot product similarity, L2 distance, and weighted cosine similarity. The difference in performance among different versions is marginal.
4.4 Analysis and Case Study
Appendix C contains additional studies on model generalization, such as handling completely unseen actions and entities, as well as the impact of the frequency of observing an action or entity during training has on model performance.
Case Study Finally, we demonstrate some examples of the scores produced by our modules at different stages of training. Figure 8 shows module score outputs for two different queries with their corresponding code snippets. The first column shows the output of the entity discovery module after pretraining, while the second and third columns demonstrate the outputs of the entity discovery and action modules after the end-to-end training. We can see that in the first column the model identifies syntactic matches, such as “folder” and a list comprehension to which “elements” could be related. After fine-tuning we can see there is a wider range of both syntactic and some semantic matches present, e.g. “dirlist” and “filelist” are correctly identified as related to “folders”.
Perturbed Query Evaluation In this section we study how sensitive the models are to small changes in the query qi, so that it no longer correctly describes its corresponding code snippet ci. Our expectation is that evaluating a sensitive model on ci will rate the original query higher than the perturbed one. Whereas a model that tends to over-generalize and ignore details of the query will likely rate the perturbed query similar to the original. We start from 100 different pairs (qi, ci), that both our model and CodeBERT predict correctly.
We limited our study to queries with a single verb and a single data entity argument to that verb. For each pair we generated perturbations of two kinds, with 20 perturbed versions for every query. For the first type of perturbation, we replaced the query’s data argument with a data argument sampled randomly from another query. For the second type, we replaced the verb argument with another randomly sampled verb. To account for calibration of the models, we measure the change in performance through the ratio of the perturbed query score over the original query score (lower is better). The results are shown in Figure 7, labelled “V(arg1) → V(arg2)” and “V1(arg) → V2(arg)”.
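A sketch of how such perturbations can be generated, assuming each single-verb query is represented as a (verb, argument) pair; the representation and function name are assumptions for illustration:

```python
import random

def perturb(queries, idx: int, n: int = 20, seed: int = 0):
    """For queries[idx] = (verb, arg), produce n argument-swapped and n verb-swapped
    variants using verbs/arguments sampled from the other queries."""
    rng = random.Random(seed)
    verb, arg = queries[idx]
    others = [q for j, q in enumerate(queries) if j != idx]
    swapped_args = [(verb, rng.choice(others)[1]) for _ in range(n)]   # V(arg1) -> V(arg2)
    swapped_verbs = [(rng.choice(others)[0], arg) for _ in range(n)]   # V1(arg) -> V2(arg)
    return swapped_args, swapped_verbs
```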
Discussion One of the main requirements for the application of our proposed method is being able to construct a semantic parse of the retrieval query. In general, it is reasonable to expect the users of the SCS to be able to come up with a formal representation of the query, e.g. by representing it in a form similar to SQL or CodeQL. However, due to the lack of such data for training and testing purposes, we implemented our own parser, which understandably does not have perfect performance since we are dealing with open-ended sentences.
5 Related work
Different deep learning models have proved quite efficient when applying to programming languages and code. Prior works have studied and reviewed the uses of deep learning for code analysis in general and code search in particular [39, 31].
A number of approaches to deep code search are based on creating a relevance-predicting model between text and code. [16] propose using RNNs for embedding both code and text into the same latent space. On the other hand, [27] capitalizes on the inherent graph-like structure of programs to formulate code search as graph matching. A few works propose enriching the models handling code embedding by adding additional code analysis information, such as semantic and dependency parses [12, 2], variable renaming and statement permutation [14], as well as structures such as the abstract syntax tree of the program [20, 37]. A few other approaches have dual formulations of code retrieval and code summarization [9, 40, 41, 6]. In a different line of work, Heyman & Cutsem [21] propose considering the code search scenario where short annotative descriptions of code snippets are provided. Appendix E discusses more related work.
6 Conclusion
We presented NS3, a neuro-symbolic method for semantic code search based on neural module networks. Our method represents the query and code in terms of actions and data entities, and uses the semantic structure of the query to construct a neural module network. In contrast to existing code search methods, NS3 more precisely captures the nature of queries. In an extensive evaluation, we show that this method works better than strong but unstructured baselines. We further study the model’s generalization capacities, robustness, and sensibility of outputs in a series of additional experiments.
Acknowledgments and Disclosure of Funding
This research is supported in part by the DARPA ReMath program under Contract No. HR00112190020, the DARPA MCS program under Contract No. N660011924033, Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, the Defense Advanced Research Projects Agency with award W911NF-19-20271, NSF IIS 2048211, and gift awards from Google, Amazon, JP Morgan and Sony. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. We thank all the collaborators in USC INK research lab for their constructive feedback on the work. | 1. What is the main contribution of the paper regarding semantic code search?
2. What are the strengths and weaknesses of the proposed neural module network (NS3) and modular workflow?
3. Do you have any concerns or questions regarding the motivation behind the work, particularly the claim about language models' limitations?
4. Are there any issues with the evaluation data statistics, such as excluding unparsable examples for evaluation?
5. How does the reviewer assess the effectiveness of the proposed method compared to other strong baselines, especially in different settings?
6. Can you provide more information or explanations regarding the query parser implementation and its reliance on human work?
7. How does the reviewer view the efficiency aspect of the proposed method, particularly in real-world retrieval settings?
8. Are there any questions or concerns regarding the focus on the two-step evaluation setting and the comparison with CodeBERT and GraphCodeBERT?
9. Do you have any insights into the performance difference between CodeBERT and GraphCodeBERT in certain cases? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper studies the problem of retrieving code snippets given textual queries (called semantic code search). The work is motivated by language models’ limitations on encoding longer and compositional text (which I question a bit about, see my comments below). The authors propose a neural module network (called NS3) and introduce a modular workflow according to the semantic structure of the query. More specifically, NS3 contains two types of neural modules, entity discovery module, and action module, to estimate the semantic relatedness of code tokens and entity mentions and actions in a query separately. It decomposes the task into multiple steps of matching each data entity and action in the query. The authors demonstrate the effectiveness of the proposed method on several code search benchmarks and show that their method outperforms some strong baselines in some settings (on which I’m a bit confused).
Strengths And Weaknesses
Strengths:
The problem of semantic code search is very interesting. It is easy to follow the writing of the paper. The authors compare the proposed methods with other works and show strong performance on multiple code search benchmarks.
Weaknesses:
The authors mention that one of the main motivations of the work is because language models struggle with encoding long and compositional text. I’m a bit suspicious. Text queries as examples shown in the paper (but not just these examples, generally speaking) are not very long and complicated. Some of them are even too simple and ignore some details, which leads to mismatching between the query and code (which could be the real challenge). Language models are able to encode much more complex and longer texts than these examples in the paper….
According to what the authors say in lines 265-266, CodeBERT and many other simpler approaches can be fed examples in batches, which make them much faster in retrieval settings. In real-world retrieval settings, efficiency is also a very important consideration. In this case, the proposed method NS3 is less attractive (also depending on other methods such as CodeBERT). This makes NS3 look more like a re-ranking model instead of a fully actionable code retrieval model.
The query parser implementation seems to be a lot of human work (e.g., building a vocab of action and entity words).
Questions
Line 246-248: I’m confused about the evaluation data statistics. Is it a common practice to exclude unparsable examples for evaluation? It looks like that the authors did that only because of the limitations of their method (only taking parsable queries). It would be helpful to provide data statistics both before (original) and after parsing in Appendix A.2 (Table 3).
Line 254: Did you randomly select 5k and 10k examples? If they are randomly chosen did you report results on multiple randomly selected examples?
Line 264-266: Why did you focus on the two-step evaluation? Again, do CodeBERT/GraphCodeBERT themselves (not by yourselves) also report results in this setting? It seems that they can be applied in a single-step setting. Why not evaluate your method in that setting too without depending on CodeBERT’s first step predictions (then only applying your method to rerank the top 10 CodeBERT predictions)? As you mentioned in Line 303-305, the highest possible results you could get with this evaluation strategy is kind of low (74% on CoSQA…)...
Line 307- Fig. 5: Do you have any explanations about why GraphCodeBERT performs much worse than CodeBERT in many cases? Is it the strongest baseline you compare with?
Limitations
N/A |
NIPS | Title
NS3: Neuro-symbolic Semantic Code Search
Abstract
Semantic code search is the task of retrieving a code snippet given a textual description of its functionality. Recent work has been focused on using similarity metrics between neural embeddings of text and code. However, current language models are known to struggle with longer, compositional text, and multi-step reasoning. To overcome this limitation, we propose supplementing the query sentence with a layout of its semantic structure. The semantic layout is used to break down the final reasoning decision into a series of lower-level decisions. We use a Neural Module Network architecture to implement this idea. We compare our model NS3 (Neuro-Symbolic Semantic Search) to a number of baselines, including state-of-the-art semantic code retrieval methods, and evaluate on two datasets, CodeSearchNet and CoSQA (Code Search and Question Answering). We demonstrate that our approach results in more precise code retrieval, and we study the effectiveness of our modular design when handling compositional queries.1
1 Introduction
The increasing scale of software repositories makes retrieving relevant code snippets more challenging. Traditionally, source code retrieval has been limited to keyword [33, 30] or regex [7] search. Both rely on the user providing the exact keywords appearing in or around the sought code. However, neural models enabled new approaches for retrieving code from a textual description of its functionality, a task known as semantic code search (SCS). A model like Transformer [36] can map a database of code snippets and natural language queries to a shared high-dimensional space. Relevant code snippets are then retrieved by searching over this embedding space using a predefined similarity metric, or a learned distance function [26, 13, 12]. Some of the recent works capitalize on the rich structure of the code, and employ graph neural networks for the task [17, 28].
Despite impressive results on SCS, current neural approaches are far from satisfactory in dealing with a wide range of natural-language queries, especially on ones with compositional language structure. Encoding text into a dense vector for retrieval purposes can be problematic because we risk losing faithfulness of the representation, and missing important details of the query. Not only does this a) affect the performance, but it can b) drastically reduce a model’s value for the users, because compositional queries such as “Check that directory does not exist before creating it” require performing multi-step reasoning on code.
* Currently at Google Research. † Equal supervision. 1 Code and data are available at https://github.com/ShushanArakelyan/modular_code_search
We suggest overcoming these challenges by introducing a modular workflow based on the semantic structure of the query. Our approach is based on the intuition of how an engineer would approach a SCS task. For example, in performing search for code that navigates folders in Python they would first only pay attention to code that has cues about operating with paths, directories or folders. Afterwards, they would seek indications of iterating through some of the found objects or other entities in the code related to them. In other words, they would perform multiple steps of different nature - i.e. finding indications of specific types of data entities, or specific operations. Figure 1 illustrates which parts of the code would be
important to indicate that they have found the desired code snippet at each step. We attempt to imitate this process in this work. To formalize the decomposition of the query into such steps, we take inspiration from the idea that code is comprised of data, or entities, and transformations, or actions, over data. Thus, a SCS query is also likely to describe the code in terms of data entities and actions.
We break down the task of matching the query into smaller tasks of matching individual data entities and actions. In particular, we aim to identify parts of the code that indicate the presence of the corresponding data or action. We tackle each part with a distinct type of network – a neural module. Using the semantic parse of the query, we construct the layout of how modules’ outputs should be linked according to the relationships between data entities and actions, where each data entity represents a noun, or a noun phrase, and each action represents a verb, or a verbal phrase. Correspondingly, this layout specifies how the modules should be combined into a single neural module network (NMN) [4]. Evaluating the NMN on the candidate code approximates detecting the corresponding entities and actions in the code by testing whether the neural network can deduce one missing entity from the code and the rest of the query.
This approach has the following advantages. First, the semantic parse captures the compositionality of a query. Second, it mitigates the challenges of faithfully encoding text by focusing only on a small portion of the query at a time. Finally, applying the neural modules in succession can potentially mimic the staged reasoning necessary for SCS.
We evaluate our proposed NS3 model on two SCS datasets - CodeSearchNet (CSN) [24] and CoSQA/WebQueryTest [23]. Additionally, we experiment with a limited training set size of CSN of 10K and 5K examples. We find that NS3 provides large improvements upon baselines in all cases. Our experiments demonstrate that the resulting model is more sensitive to small, but semantically significant changes in the query, and is more likely to correctly recognize that a modified query no longer matches its code pair.
Our main contributions are: (i) We propose looking at SCS as a compositional task that requires multi-step reasoning. (ii) We present an implementation of the aforementioned paradigm based on
NMNs. (iii) We demonstrate that our proposed model provides a large improvement on a number of well-established baseline models. (iv) We perform additional studies to evaluate the capacity of our model to handle compositional queries.
2 Background
2.1 Semantic Code Search
Semantic code search (SCS) is the process of retrieving a relevant code snippet based on a textual description of its functionality, also referred to as the query. Let C be a database of code snippets ci. For each ci ∈ C, there is a textual description of its functionality qi. In the example in Figure 2, the query qi is “Load all tables from dataset”. Let r be an indicator function such that r(qi, cj) = 1 if i = j; and 0 otherwise. Given some query q the goal of SCS is to find c* such that r(q, c*) = 1. We assume that for each q* there is exactly one such c*.2 Here we look to construct a model which takes as input a pair of a query and a candidate code snippet, (qi, cj), and assigns the pair a probability r̂ij of being a correct match. Following the common practice in information retrieval, we evaluate the performance of the model based on how high the correct answer c* is ranked among a number of incorrect, or distractor, instances {c}. This set of distractor instances can be the entire codebase C, or a subset of the codebase obtained through heuristic filtering, or another ranking method.
2.2 Neural Models for Semantic Code Search
Past works handling programs and code have focused on enriching their models with incorporating more semantic and syntactic information from code [1, 10, 34, 47]. Some prior works have cast the SCS as a sequence classification task, where the code is represented as a textual sequence and input pair (qi, cj) is concatenated with a special separator symbol into a single sequence, and the output is the score r̂ij : r̂ij = f(qi, cj). The function f performing the classification can be any sequence classification model, e.g. BERT [11].
Alternatively, one can define separate networks for independently representing the query (f), the code (g) and measuring the similarity between them: r̂ij = sim(f(qi), g(cj)). This allows one to design the code encoding network g with additional program-specific information, such as abstract syntax trees [3, 44] or control flow graphs [15, 45]. Separating the two modalities of natural language and code also allows further enrichment of the code representation by adding contrastive learning objectives [25, 6]. In these approaches, the original code snippet c is automatically modified with semantic-preserving transformations, such as variable renaming, to introduce versions c′ of the code snippet with the exact same functionality. The code encoder g is then trained with an appropriate contrastive loss, such as Noise Contrastive Estimation (NCE) [19], or InfoNCE [35].
Limitations However, there is also merit in reviewing how we represent and use the textual query to help guide the SCS process. Firstly, existing work derives a single embedding for the entire query. This means that specific details or nested subqueries of the query may be omitted or not represented faithfully - getting lost in the embedding. Secondly, prior approaches make the decision after a single pass over the code snippet. This ignores cases where reasoning about a query requires multiple steps and thus - multiple look-ups over the code, as is for example in cases with nested subqueries. Our proposed approach - NS3 - attempts to address these issues by breaking down the query into smaller phrases based on its semantic parse and locating each of them in the code snippet. This should allow us to match compositional and longer queries to code more precisely.
3 Neural Modular Code Search
We propose to supplement the query with a loose structure resembling its semantic parse, as illustrated in Figure 2. We follow the parse structure to break down the query into smaller, semantically coherent parts, so that each corresponds to an individual execution step. The steps are taken in succession by a neural module network composed from a layout that is determined from the semantic parse of the query (Sec. 3.1). The neural module network is composed by stacking “modules”, or jointly trained networks, of distinct types, each carrying out a different functionality.
2 This is not the case in the CoSQA dataset. For the sake of consistency, we perform the evaluation repeatedly, leaving only one correct code snippet among the candidates at a time, while removing the others.
Method Overview In this work, we define two types of neural modules - the entity discovery module (denoted by E; Sec. 3.2) and the action module (denoted by A; Sec 3.3). The entity discovery module estimates the semantic relatedness of each code token c^j_i in the code snippet c^j = [c^j_1, . . . , c^j_N] to an entity mentioned in the query – e.g. “all tables” or “dataset” as in Figure 2. The action module estimates the likelihood of each code token to be related to an (unseen) entity affected by the action in the query, e.g. “dataset” and “load from” correspondingly, conditioned on the rest of the input (seen), e.g. “all tables”. The similarity of the predictions of the entity discovery and action modules measures how well the code matches that part of the query. The modules are nested - the action modules take as input part of the output of another module - and the order of nesting is decided by the semantic parse layout. In the rest of the paper we refer to the inputs of a module as its arguments.
Every input instance fed to the model is a 3-tuple (qi, sqi , cj) consisting of a natural language query qi, the query’s semantic parse sqi , a candidate code (sequence) cj . The goal is producing a binary label r̂ij = 1 if the code is a match for the query, and 0 otherwise. The layout of the neural module network, denoted by L(sqi), is created from the semantic structure of the query sqi . During inference, given (qi, sqi , cj) as input the model instantiates a network based on the layout, passes qi, cj and sqi as inputs, and obtains the model prediction r̂ij . This pipeline is illustrated in Figure 2, and details about creating the layout of the neural module network are presented in Section 3.1.
During training, we first perform noisy supervision pretraining for both modules. Next, we perform end-to-end training, where in addition to the query, its parse, and a code snippet, the model is also provided a gold output label r(qi, cj) = 1 if the code is a match for the query, and r(qi, cj) = 0 otherwise. These labels provide signal for joint fine-tuning of both modules (Section 3.5).
3.1 Module Network Layout
Here we present our definition of the structural representation sqi for a query qi, and introduce how this structural representation is used for dynamically constructing the neural module network, i.e. building its layout L(sqi).
Query Parsing To infer the representation sqi , we pair the query (e.g., “Load all tables from dataset”, as in Figure 2), with a simple semantic parse that looks similar to: DO WHAT [ (to/from/in/...) WHAT, WHEN, WHERE, HOW, etc]. Following this semantic parse, we break down the query into shorter semantic phrases using the roles of different parts of speech. Nouns and noun phrases correspond to data entities in code, and verbs describe actions or transformations performed on the data entities. Thus, data and transformations are separated and handled by separate neural modules – an entity discovery module E and an action module A. We use a Combinatory Categorial Grammar-based (CCG) semantic parser [43, 5] to infer the semantic parse sqi for the natural language query qi. Parsing is described in further detail in Section 4.1 and Appendix A.2.
Specifying Network Layout In the layout L(sqi), every noun phrase (e.g., “dataset” in Figure 2) will be passed through the entity discovery module E. Module E then produces a probability score ek for every token c^j_k in the code snippet c^j to indicate its semantic relatedness to the noun phrase: E(“dataset”, c^j) = [e1, e2, . . . , eN]. Each verb in sqi (e.g., “load” in Figure 2) will be passed through an action module: A(“load”, pi, c^j) = [a1, a2, . . . , aN]. Here, pi is the span of arguments to the verb (action) in query qi, consisting of children of the verb in the parse sqi (e.g. subject and object arguments to the predicate “load”); a1, . . . , aN are estimates of the token scores e1, . . . , eN for an entity from pi. The top level of the semantic parse is always an action module. Figure 2 also illustrates the preposition FROM used with “dataset”; its handling is described in Section 3.3.
3.2 Entity Discovery Module
The entity discovery module receives a string that references a data entity. Its goal is to identify tokens in the code that have high relevance to that string. The architecture of the module is shown in Figure 3. Given an entity string, “dataset” in the example, and a sequence of code tokens [c^j_1, . . . , c^j_N], the entity module first obtains contextual code token representations using a RoBERTa model that is initialized from the CodeBERT-base checkpoint. The resulting embedding is passed through a two-layer MLP to obtain a score for every individual code token c^j_k: 0 ≤ ek ≤ 1. Thus, the total output of the module is a vector of scores: [e1, e2, . . . , eN]. To prime the entity discovery module for measuring relevancy between code tokens and input, we fine-tune it with noisy supervision, as detailed below.
Noisy Supervision We create noisy supervision for the entity discovery module by using keyword matching and a Python static code analyzer. For the keyword matching, if a code token is an exact match for one or more tokens in the input string, its supervision label is set to 1, otherwise it is 0. Same is true if the code token is a substring or a superstring of one or more input string tokens. For some common nouns we include their synonyms (e.g. “map” for
“dict”), the full list of those and further details are presented in Appendix B.
We used the static code analyzer to extract information about statically known data types. We cross-matched this information with the query to discover whether the query references any datatypes found in the code snippet. If that is the case, the corresponding code tokens are assigned supervision label 1, and all the other tokens are assigned to 0. In the pretraining we learned on equal numbers of (query, code) pairs from the dataset, as well as randomly mismatched pairs of queries and code snippets to avoid creating bias in the entity discovery module.
3.3 Action Module
First, we discuss the case where the action module has only entity module inputs. Figure 4 provides a high-level illustration of the action module. In the example, for the query “Load all tables from dataset”, the action module receives only part of the full query – “Load all tables from ???”. Action module then outputs token scores for the masked argument – “dataset”. If the code snippet corresponds to the query, then the action module should be able to deduce this missing part from the code and the rest of the query. For consistency, we always mask the last data entity argument. We pre-train the action module using the output scores of the entity discovery module as supervision.
Each data entity argument can be associated with zero or one preposition, but each action may have multiple entities with prepositions. For that reason, for each data entity argument we create one joint embedding of the action verb and the preposition. Joint embeddings are obtained with a 2-layer MLP, as illustrated in the left-most part of Figure 4.
If a data entity does not have an associated preposition, the vector corresponding to the preposition is filled with zeros. The joint verb-preposition embedding is stacked with the code token embedding c^j_k and the entity discovery module's output for that token, as shown in the middle part of Figure 4. This vector is passed through a transformer encoder, followed by a 2-layer MLP and a sigmoid layer that outputs the token score a_k, as illustrated in the right-most part of Figure 4. Thus, the dimensionality of the input depends on the number of entities. We use a distinct copy of the module with the corresponding dimensionality for different numbers of inputs, from 1 to 3.
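A hedged sketch of one such copy of the action module (here for a single entity argument) is given below; the number of transformer layers, the attention-head count, and the exact tensor packing are assumptions, since only the overall architecture is specified above.

```python
import torch
import torch.nn as nn

class ActionModule(nn.Module):
    def __init__(self, hidden=768, n_entities=1, n_layers=2):
        super().__init__()
        # Joint verb-preposition embedding (zeros stand in for a missing preposition).
        self.verb_prep_mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Per-token input: joint embedding + code token embedding + one score per entity.
        in_dim = 2 * hidden + n_entities
        layer = nn.TransformerEncoderLayer(d_model=in_dim, nhead=1, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.scorer = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, verb_emb, prep_emb, code_token_embs, entity_scores):
        # verb_emb, prep_emb: [hidden]; code_token_embs: [N, hidden]; entity_scores: [N, n_entities]
        joint = self.verb_prep_mlp(torch.cat([verb_emb, prep_emb], dim=-1))
        joint = joint.unsqueeze(0).expand(code_token_embs.size(0), -1)
        x = torch.cat([joint, code_token_embs, entity_scores], dim=-1)
        x = self.encoder(x.unsqueeze(0)).squeeze(0)
        # Token scores [a_1, ..., a_N] for the masked entity argument.
        return torch.sigmoid(self.scorer(x)).squeeze(-1)
```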
3.4 Model Prediction
The final score r̂_ij = f(q_i, c_j) is computed from the similarity of the action and entity discovery modules' output scores. Formally, for an action module with verb x and arguments p^x = [p^x_1, …, p^x_k], the final model prediction is the dot product of the respective outputs of the action and entity discovery modules: r̂_ij = A(x, p^x_1, …, p^x_{k−1}) · E(p^x_k). Since the action module estimates token scores for the entity affected by the verb, a prediction far from the entity discovery module's scores means that either the action is not found in the code, or it does not fully correspond to the query; for example, in the code snippet the tables may be loaded from the web instead of a dataset. We normalize this score to make it a probability. If this is the only action in the query, this probability is the output of the entire model for the (q_i, c_j) pair, r̂_ij; otherwise, r̂_ij is the product of the probability scores of all nested actions in the layout.
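For a single-action query, the score computation reduces to a normalized dot product of the two modules' token-score vectors. The sketch below assumes L1 normalization (the variant used in the ablation of Section 4.3); the exact normalization is otherwise a design choice.

```python
import torch

def single_action_score(action_scores, entity_scores, eps=1e-8):
    # Normalize each non-negative score vector so the dot product behaves like a probability.
    a = action_scores / (action_scores.sum() + eps)
    e = entity_scores / (entity_scores.sum() + eps)
    return torch.dot(a, e)  # r_hat_ij for a query with a single action
```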
Compositional query with nested actions Consider a compositional query “Load all tables from dataset using Lib library”. Here the action with verb “Load from” has an additional argument “using” – itself an action – with the entity argument “Lib library”. In the case of nested actions, we flatten the layout by taking the conjunction of the individual action similarity scores. Formally, for two verbs x and y with corresponding arguments p^x = [p^x_1, …, p^x_k] and p^y = [p^y_1, …, p^y_l] in a layout of the form A(x, p^x, A(y, p^y)), the output of the model is the conjunction of the similarity scores computed for the individual action modules: sim(A(x, p^x_1, …, p^x_{k−1}), E(p^x_k)) · sim(A(y, p^y_1, …, p^y_{l−1}), E(p^y_l)). This process is repeated until all remaining p^x and p^y are data entities. This design ensures that a code snippet is ranked highly only if both actions score highly; we leave exploration of alternative ways to handle nested actions to future work.
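The flattening of nested actions can be read as a simple recursion over the layout: each action node contributes one similarity term, and the terms are multiplied. The node format and the `score_fn` signature below are illustrative assumptions.

```python
def layout_score(action_node, score_fn):
    """action_node is (verb, args), where each arg is either an entity string or
    another (verb, args) node; score_fn returns one action's similarity score."""
    verb, args = action_node
    entity_args = [a for a in args if isinstance(a, str)]
    nested = [a for a in args if not isinstance(a, str)]
    score = score_fn(verb, entity_args)
    for child in nested:
        score = score * layout_score(child, score_fn)  # conjunction of action scores
    return score

# layout_score(("load from", ["all tables", "dataset", ("using", ["Lib library"])]), score_fn)
```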
3.5 Module Pretraining and Joint Fine-tuning
We train our model through supervised pre-training, as discussed in Sections 3.2 and 3.3, followed by end-to-end training. The end-to-end training objective is binary classification: given a pair of query q_i and code c_j, the model predicts the probability r̂_ij that they are related. In end-to-end training, we use positive examples taken directly from the dataset, (q_i, c_i), as well as negative examples composed of randomly mismatched queries and code snippets. The goal of end-to-end training is to fine-tune the parameters of the entity discovery and action modules, including the weights of the RoBERTa models used for code token representation.
Batching is hard to achieve for our model, so in the interest of time efficiency we do not perform inference on all distractor code snippets in the code dataset. Instead, for a given query we re-rank the top-K highest-ranked code snippets output by a baseline model; in our evaluations we use CodeBERT. Essentially, we use our model in a re-ranking setup, which is common in information retrieval and is known as L2 ranking. We interpret the probabilities output by the model as ranking scores. More details about this procedure are provided in Section 4.1.
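The re-ranking setup can be summarized as the following two-stage procedure; `baseline_score_fn` (CodeBERT in our evaluations) and `ns3_score_fn` are placeholders for the respective scoring functions.

```python
def rerank(query, candidates, baseline_score_fn, ns3_score_fn, k=10):
    # Stage 1: rank all candidates with the fast baseline.
    ranked = sorted(candidates, key=lambda c: baseline_score_fn(query, c), reverse=True)
    top_k, rest = ranked[:k], ranked[k:]
    # Stage 2: re-rank only the top-K candidates with NS3.
    reranked = sorted(top_k, key=lambda c: ns3_score_fn(query, c), reverse=True)
    return reranked + rest  # candidates below top-K keep their baseline order
```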
4 Experiments
4.1 Experiment Setting
Dataset We conduct experiments on two datasets: the Python portion of CodeSearchNet (CSN) [24] and CoSQA [23]. We parse all queries with the CCG parser, as discussed later in this section, excluding unparsable examples from further experiments. This leaves us with approximately 40% of the CSN dataset and 70% of the CoSQA dataset; the exact data statistics are available in Table 3 of Appendix A. We believe that the difference in the parser's success rate between the two datasets can be attributed to the fact that the CSN dataset, unlike CoSQA, does not contain real code search queries but rather consists of docstrings, which are used as approximate queries. More details and examples can be found in Appendix A.3. For our baselines, we use the parsed portion of the dataset for fine-tuning to make the comparison fair. In addition, we also experiment with fine-tuning all models on even smaller subsets of the CodeSearchNet dataset, using only 5K and 10K examples for fine-tuning. The goal is to test whether the modular design makes NS3 more sample-efficient.
All experiment and ablation results discussed in Sections 4.2, 4.3, and 4.4 are obtained on the test set of CSN for models trained on CSN training data, or on WebQueryTest [31] – a small dataset of natural-language web queries paired with code – for models trained on the CoSQA dataset.
Evaluation and Metrics We follow CodeSearchNet's original evaluation protocol: for a test instance (q, c), the model's output is compared against its outputs on a fixed set of 999 distractor code snippets. We use two evaluation metrics, Mean Reciprocal Rank (MRR) and Precision@K (P@K) for K = 1, 3, and 5; see Appendix A.1 for definitions and further details.
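For reference, both metrics can be computed from the 1-based rank of the correct snippet among the 1,000 candidates (the correct snippet plus 999 distractors) for each test query; this sketch assumes exactly one relevant snippet per query, as in our setting.

```python
def mean_reciprocal_rank(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

def precision_at_k(ranks, k):
    # With a single relevant snippet per query, P@K is the fraction of queries
    # whose correct snippet is ranked within the top K.
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Example: mean_reciprocal_rank([1, 2, 10]) == (1 + 0.5 + 0.1) / 3
```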
Following a common approach in information retrieval, we perform a two-step evaluation. In the first step, we obtain CodeBERT's output against the 999 distractors. In the second step, we use NS3 to re-rank CodeBERT's top 10 predictions. This makes the evaluation much faster, since unlike our modular approach, CodeBERT can be fed examples in batches. As the results show, this yields an improvement in final performance in all scenarios.
Compared Methods We compare NS3 with various state-of-the-art methods, including traditional approaches for document retrieval and large pretrained language models. (1) BM25 is a ranking method that estimates the relevance of documents to a given query. (2) RoBERTa (code) is a variant of RoBERTa [29] pretrained on the CodeSearchNet corpus. (3) CuBERT [26] is a BERT-Large model pretrained on 7.4M Python files from GitHub. (4) CodeBERT [13] is an encoder-only Transformer model trained on unlabeled source code via masked language modeling (MLM) and replaced token detection objectives. (5) GraphCodeBERT [17] is a pretrained Transformer model using MLM, data flow edge prediction, and variable alignment between code and the data flow. (6) GraphCodeBERT* is a re-ranking baseline: we use the same setup as for NS3, but with GraphCodeBERT re-ranking the top-10 predictions of the CodeBERT model.
Query Parser We started by building a vocabulary of predicates for common action verbs and entity nouns, such as “convert”, “find”, “dict”, “map”, etc. For those we constructed the lexicon (rules) of the parser. We also included “catch-all” rules for parsing sentences with less common words. To increase the proportion of parsable data, we preprocessed the queries by removing preceding question words, punctuation marks, etc. The full implementation of our parser, including the entire lexicon and vocabulary, can be found at https://anonymous.4open.science/r/ccg_parser-4BC6. More details are available in Appendix A.2.
Pretrained Models The action and entity discovery modules each embed code tokens with a RoBERTa model that is initialized from a pretrained CodeBERT checkpoint3. We fine-tune these models during the pretraining phases as well as during the final end-to-end training phase.
Hyperparameters The MLPs in the entity discovery and action modules have 2 layers with an input dimension of 768. We use dropout in these networks with rate 0.1. The learning rate for the pretraining and end-to-end training phases was chosen from the range 1e-6 to 6e-5. We use early stopping, evaluated on an unseen validation set, for model selection during action module pretraining and end-to-end training. For entity discovery model selection we performed manual inspection of the produced scores on unseen examples. For fine-tuning the CuBERT, CodeBERT, and GraphCodeBERT baselines we use the hyperparameters reported in their original papers. For RoBERTa (code), we search for the fine-tuning learning rate in the same interval as for our model. For model selection on the baselines we also use early stopping.
3https://huggingface.co/microsoft/codebert-base
4.2 Results
Performance Comparison Tables 1 and 2 present the performance on the test portion of the CodeSearchNet dataset and on the WebQueryTest dataset, respectively. As can be seen, our proposed model outperforms the baselines.
Our evaluation strategy improves performance only if the correct code snippet was ranked among the top-10 results returned by the CodeBERT model, so the rows labelled “Upper-bound” report the best possible performance achievable with this evaluation strategy.
Query Complexity vs. Performance Here we present the breakdown of the performance of our method vs. the baselines, using two proxies for the complexity and compositionality of the query. The first is the maximum depth of the query, which we define as the maximum number of nested action modules in the query. The results for this experiment are presented in Figure 5a. As we can see, NS3 improves over the baseline in all scenarios. It is interesting to note that while CodeBERT achieves its best performance on queries with depth 3+, our model's performance peaks at depth = 1. We hypothesize that this can be related to the automated parsing procedure, as parsing errors are more likely to be propagated in deeper queries. Further studies with carefully curated manual parses are necessary to better understand this phenomenon.
The second proxy for query complexity we consider is the number of data arguments to a single action module. While the previous scenario breaks down performance by the depth of the query, here we consider its “width”. We measure the average number of entity arguments per action module in the query. In the parsed portion of our dataset, queries range from 1 to 3 textual arguments per action verb. The results for this evaluation are presented in Figure 5. As can be seen, there is no significant difference in performance between the two groups of queries for either CodeBERT or our proposed method, NS3.
4.3 Ablation Studies
Effect of Pretraining In an attempt to better understand the individual effect of the two modules, as well as the roles of their pretraining and training procedures, we performed two additional ablation studies. In the first one, we compare the final performance of the original model with two versions where we skipped part of the pretraining. The model denoted NS3 AP was trained with a pretrained entity discovery module, but no pretraining was done for the action module; instead we proceeded directly to end-to-end training. For the model denoted NS3 AP&EP, we skipped the pretraining of both the entity and action modules and performed only end-to-end training. Figure 6a demonstrates that combined pretraining is important for the final performance. Additionally, we wanted to measure how effective the setup is without end-to-end training. The results are reported in Figure 6a under the name NS3 E2E. There is a huge performance dip in this scenario; while the performance is better than random, it is clear that end-to-end training is crucial for NS3.
Score Normalization We wanted to determine the importance of normalizing the modules' outputs into a proper probability distribution. In Figure 6b we demonstrate the performance achieved using no normalization at all, normalizing either the action or the entity discovery module, or normalizing both. In all cases we used L1 normalization, since our output scores are non-negative. The version that is not normalized at all performs the worst on both datasets. The performances of the other three versions are close on both datasets.
Similarity Metric Additionally, we experimented with replacing the dot product similarity with a different similarity metric. In particular, in Figure 6c we compare the performance achieved using dot product similarity, L2 distance, and weighted cosine similarity. The difference in performance among different versions is marginal.
4.4 Analysis and Case Study
Appendix C contains additional studies on model generalization, such as handling completely unseen actions and entities, as well as the impact that the frequency of observing an action or entity during training has on model performance.
Case Study Finally, we demonstrate some examples of the scores produced by our modules at different stages of training. Figure 8 shows module score outputs for two different queries with their corresponding code snippets. The first column shows the output of the entity discovery module after pretraining, while the second and third columns show the outputs of the entity discovery and action modules after end-to-end training. We can see that in the first column the model identifies syntactic matches, such as “folder” and a list comprehension to which “elements” could be related. After fine-tuning, a wider range of both syntactic and semantic matches is present; e.g., “dirlist” and “filelist” are correctly identified as related to “folders”.
Perturbed Query Evaluation In this section we study how sensitive the models are to small changes in the query q_i that make it no longer correctly describe its corresponding code snippet c_i. Our expectation is that a sensitive model evaluated on c_i will rate the original query higher than the perturbed one, whereas a model that tends to over-generalize and ignore details of the query will likely rate the perturbed query similarly to the original. We start from 100 different pairs (q_i, c_i) that both our model and CodeBERT predict correctly.
We limited our study to queries with a single verb and a single data entity argument to that verb. For each pair we generated perturbations of two kinds, with 20 perturbed versions for every query. For the first type of perturbation, we replaced the query's data argument with a data argument sampled randomly from another query. For the second type, we replaced the verb with another randomly sampled verb. To account for the calibration of the models, we measure the change in performance through the ratio of the perturbed query score to the original query score (lower is better). The results are shown in Figure 7, labelled “V(arg1) → V(arg2)” and “V1(arg) → V2(arg)”.
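The sensitivity measure itself is a simple score ratio; the sketch below assumes `score_fn` returns the model's probability for a (query, code) pair.

```python
def perturbation_ratios(score_fn, code, original_query, perturbed_queries):
    base = score_fn(original_query, code)
    # Ratio of perturbed-query score to original-query score; lower is better.
    return [score_fn(q, code) / base for q in perturbed_queries]
```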
Discussion One of the main requirements for the application of our proposed method is being able to construct a semantic parse of the retrieval query. In general, it is reasonable to expect the users of the SCS to be able to come up with a formal representation of the query, e.g. by representing it in a form similar to SQL or CodeQL. However, due to the lack of such data for training and testing purposes, we implemented our own parser, which understandably does not have perfect performance since we are dealing with open-ended sentences.
5 Related work
Different deep learning models have proved quite effective when applied to programming languages and code. Prior works have studied and reviewed the uses of deep learning for code analysis in general and code search in particular [39, 31].
A number of approaches to deep code search are based on creating a relevance-predicting model between text and code. [16] propose using RNNs for embedding both code and text into the same latent space. On the other hand, [27] capitalizes on the inherent graph-like structure of programs to formulate code search as graph matching. A few works propose enriching the models handling code embeddings by adding additional code analysis information, such as semantic and dependency parses [12, 2], variable renaming and statement permutation [14], as well as structures such as the abstract syntax tree of the program [20, 37]. A few other approaches use dual formulations of code retrieval and code summarization [9, 40, 41, 6]. In a different line of work, Heyman & Cutsem [21] propose considering the code search scenario where short annotative descriptions of code snippets are provided. Appendix E discusses more related work.
6 Conclusion
We presented NS3, a neuro-symbolic method for semantic code search based on neural module networks. Our method represents the query and code in terms of actions and data entities, and uses the semantic structure of the query to construct a neural module network. In contrast to existing code search methods, NS3 more precisely captures the compositional nature of queries. In an extensive evaluation, we show that this method works better than strong but unstructured baselines. We further study the model's generalization capacity, robustness, and the sensitivity of its outputs in a series of additional experiments.
Acknowledgments and Disclosure of Funding
This research is supported in part by the DARPA ReMath program under Contract No. HR00112190020, the DARPA MCS program under Contract No. N660011924033, Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, the Defense Advanced Research Projects Agency with award W911NF-19-20271, NSF IIS 2048211, and gift awards from Google, Amazon, JP Morgan and Sony. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. We thank all the collaborators in USC INK research lab for their constructive feedback on the work. | 1. What is the focus and contribution of the paper regarding semantic search?
2. What are the strengths of the proposed approach, particularly in terms of utilizing Categorial Grammar-based semantic parser and Transformer-based neural model?
3. What are the weaknesses of the paper, especially regarding its limitations in handling arbitrary natural language scenarios and inconsistency in mitigating challenges of encoding long texts?
4. How does the reviewer assess the effectiveness of the proposed modules and pretraining strategy?
5. What are the suggestions provided by the reviewer to improve the experimental data and validation of the NS3 model? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper proposes the NS3 (Neuro-Symbolic Semantic Search) model, which breaks down the query into small phrases using a Categorial Grammar-based semantic parsing module, allowing it to better understand compositional and longer queries.
To identify the similarity of each query and code snippet, the NS3 model uses two types of neural models: the entity discovery module and the action module. The entity discovery module uses a transformer encoder model and a two-layer MLP to identify the entities and their relevance. RoBERTa model initialization and noisy-supervision training are applied in its self-supervised pretraining phase. The action module's architecture is similar to the entity discovery module's. It estimates the action similarity through the prediction of the masked entity. The module can be pre-trained with a mask-and-predict process for the masked entity. After pretraining, the model performs end-to-end fine-tuning of the two modules.
The experiments on the CSN and CoSQA datasets show the superiority of the proposed model over the baseline methods on the parsable samples. Furthermore, the ablation study validates the effectiveness of the pretraining and investigates different score normalization methods and similarity metrics.
Strengths And Weaknesses
Strengths:
The proposed NS3 model utilizes a Categorial Grammar-based (CCG) semantic parser to better comprehend the semantic structure of the query and combines it with a Transformer-based neural model to capture the semantic information of the query text, which is an interesting idea.
The experiment validates the effectiveness of the proposed modules and the pretraining strategy. Furthermore, the proposed model achieves noticeable improvements over the baseline models in the parsable samples.
Weakness:
The proposed model cannot operate properly in arbitrary natural language scenarios: 60% of the CSN dataset and 30% of the CoSQA dataset records are not parsable.
According to Line 58, the authors claim the model mitigates the challenges of encoding long texts and mimics staged reasoning for SCS. However, according to Figure 5(a), the NS3 model does not improve significantly in the deeper-query setting (D=3+). The model performs better on queries with a simple semantic structure (D=1), which is inconsistent with the initial claim.
The unparsable-query issue restricts the quantity and quality of the experimental data. According to Table 1 and [1], the MRR of the GraphCodeBERT model is higher on the parsable dataset (0.812 vs. 0.692). The parsable data may be easier for the NS3 model to comprehend, making the experimental comparison unfair. Reference:
[1] Guo, D., Ren, S., Lu, S., Feng, Z., Tang, D., Liu, S., ... & Zhou, M. (2020). Graphcodebert: Pre-training code representations with data flow. arXiv preprint arXiv:2009.08366.
Questions
Could you elaborate on the influence of the parsable-sample selection on the dataset? For example, are long sentences and hard-to-understand samples retained?
Is there any sample that can validate that the NS3 model mimics the staged reasoning for SCS?
Limitations
In the experiments, according to my understanding, only parsable data are used in the evaluation. I would suggest using all the data in the experiments to verify whether the proposed method still helps in improving SCS performance. In other words, it is not fair to compare with other baselines and models using only a dataset biased towards your method. |
NIPS | Title
NS3: Neuro-symbolic Semantic Code Search
Abstract
Semantic code search is the task of retrieving a code snippet given a textual description of its functionality. Recent work has focused on using similarity metrics between neural embeddings of text and code. However, current language models are known to struggle with longer, compositional text and multi-step reasoning. To overcome this limitation, we propose supplementing the query sentence with a layout of its semantic structure. The semantic layout is used to break down the final reasoning decision into a series of lower-level decisions. We use a Neural Module Network architecture to implement this idea. We compare our model NS3 (Neuro-Symbolic Semantic Search) to a number of baselines, including state-of-the-art semantic code retrieval methods, and evaluate on two datasets, CodeSearchNet and Code Search and Question Answering (CoSQA). We demonstrate that our approach results in more precise code retrieval, and we study the effectiveness of our modular design when handling compositional queries1.
1 Introduction
The increasing scale of software repositories makes retrieving relevant code snippets more challenging. Traditionally, source code retrieval has been limited to keyword [33, 30] or regex [7] search. Both rely on the user providing the exact keywords appearing in or around the sought code. However, neural models enabled new approaches for retrieving code from a textual description of its functionality, a task known as semantic code search (SCS). A model like Transformer [36] can map a database of code snippets and natural language queries to a shared high-dimensional space. Relevant code snippets are then retrieved by searching over this embedding space using a predefined similarity metric, or a learned distance function [26, 13, 12]. Some of the recent works capitalize on the rich structure of the code, and employ graph neural networks for the task [17, 28].
Despite impressive results on SCS, current neural approaches are far from satisfactory in dealing with a wide range of natural-language queries, especially ones with a compositional language structure. Encoding text into a dense vector for retrieval purposes can be problematic because we risk losing faithfulness of the representation and missing important details of the query. Not only does this (a) affect the performance, but it can (b) drastically reduce a model's value for users, because compositional queries such as “Check that directory does not exist before creating it” require performing multi-step reasoning over code.
*Currently at Google Research. †Equal supervision. 1Code and data are available at https://github.com/ShushanArakelyan/modular_code_search
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
We suggest overcoming these challenges by introducing a modular workflow based on the semantic structure of the query. Our approach is based on the intuition of how an engineer would approach a SCS task. For example, in performing search for code that navigates folders in Python they would first only pay attention to code that has cues about operating with paths, directories or folders. Afterwards, they would seek indications of iterating through some of the found objects or other entities in the code related to them. In other words, they would perform multiple steps of different nature - i.e. finding indications of specific types of data entities, or specific operations. Figure 1 illustrates which parts of the code would be
important to indicate that they have found the desired code snippet at each step. We attempt to imitate this process in this work. To formalize the decomposition of the query into such steps, we take inspiration from the idea that code is comprised of data, or entities, and transformations, or actions, over data. Thus, an SCS query is also likely to describe the code in terms of data entities and actions.
We break down the task of matching the query into smaller tasks of matching individual data entities and actions. In particular, we aim to identify parts of the code that indicate the presence of the corresponding data or action. We tackle each part with a distinct type of network – a neural module. Using the semantic parse of the query, we construct the layout of how modules’ outputs should be linked according to the relationships between data entities and actions, where each data entity represents a noun, or a noun phrase, and each action represents a verb, or a verbal phrase. Correspondingly, this layout specifies how the modules should be combined into a single neural module network (NMN) [4]. Evaluating the NMN on the candidate code approximates detecting the corresponding entities and actions in the code by testing whether the neural network can deduce one missing entity from the code and the rest of the query.
This approach has the following advantages. First, the semantic parse captures the compositionality of a query. Second, it mitigates the challenges of faithfully encoding text by focusing only on a small portion of the query at a time. Finally, applying the neural modules in succession can potentially mimic the staged reasoning necessary for SCS.
We evaluate our proposed NS3 model on two SCS datasets - CodeSearchNet (CSN) [24] and CoSQA/WebQueryTest [23]. Additionally, we experiment with a limited training set size of CSN of 10K and 5K examples. We find that NS3 provides large improvements upon baselines in all cases. Our experiments demonstrate that the resulting model is more sensitive to small, but semantically significant changes in the query, and is more likely to correctly recognize that a modified query no longer matches its code pair.
Our main contributions are: (i) We propose looking at SCS as a compositional task that requires multi-step reasoning. (ii) We present an implementation of the aforementioned paradigm based on
NMNs. (iii) We demonstrate that our proposed model provides a large improvement on a number of well-established baseline models. (iv) We perform additional studies to evaluate the capacity of our model to handle compositional queries.
2 Background
2.1 Semantic Code Search
Semantic code search (SCS) is the process of retrieving a relevant code snippet based on a textual description of its functionality, also referred to as the query. Let C be a database of code snippets c_i. For each c_i ∈ C, there is a textual description of its functionality q_i. In the example in Figure 2, the query q_i is “Load all tables from dataset”. Let r be an indicator function such that r(q_i, c_j) = 1 if i = j, and 0 otherwise. Given some query q, the goal of SCS is to find c* such that r(q, c*) = 1. We assume that for each q* there is exactly one such c*.2 Here we look to construct a model which takes as input a pair of a query and a candidate code snippet, (q_i, c_j), and assigns the pair a probability r̂_ij of being a correct match. Following common practice in information retrieval, we evaluate the performance of the model based on how highly the correct answer c* is ranked among a number of incorrect, or distractor, instances {c}. This set of distractor instances can be the entire codebase C, or a subset of the codebase obtained through heuristic filtering or another ranking method.
2.2 Neural Models for Semantic Code Search
Past works handling programs and code have focused on enriching their models by incorporating more semantic and syntactic information from code [1, 10, 34, 47]. Some prior works have cast SCS as a sequence classification task, where the code is represented as a textual sequence, the input pair (q_i, c_j) is concatenated with a special separator symbol into a single sequence, and the output is the score r̂_ij: r̂_ij = f(q_i, c_j). The function f performing the classification can be any sequence classification model, e.g., BERT [11].
Alternatively, one can define separate networks for independently representing the query (f) and the code (g), and for measuring the similarity between them: r̂_ij = sim(f(q_i), g(c_j)). This allows one to design the code-encoding network g with additional program-specific information, such as abstract syntax trees [3, 44] or control flow graphs [15, 45]. Separating the two modalities of natural language and code also allows further enrichment of the code representation by adding contrastive learning objectives [25, 6]. In these approaches, the original code snippet c is automatically modified with semantics-preserving transformations, such as variable renaming, to introduce versions of the code snippet, c′, with the exact same functionality. The code encoder g is then trained with an appropriate contrastive loss, such as Noise Contrastive Estimation (NCE) [19] or InfoNCE [35].
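A minimal sketch of this dual-encoder scoring is shown below; the cosine similarity stands in for whatever fixed or learned similarity function sim is chosen, and the encoders are assumed to produce fixed-size embeddings.

```python
import torch.nn.functional as F

def dual_encoder_score(query_emb, code_emb):
    # query_emb = f(q_i), code_emb = g(c_j): 1-D embeddings from separate encoders.
    return F.cosine_similarity(query_emb.unsqueeze(0), code_emb.unsqueeze(0)).squeeze(0)
```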
Limitations However, there is also merit in reviewing how we represent and use the textual query to help guide the SCS process. Firstly, existing work derives a single embedding for the entire query. This means that specific details or nested subqueries may be omitted or not represented faithfully, getting lost in the embedding. Secondly, prior approaches make the decision after a single pass over the code snippet. This ignores cases where reasoning about a query requires multiple steps, and thus multiple look-ups over the code, as is the case, for example, with nested subqueries. Our proposed approach, NS3, attempts to address these issues by breaking down the query into smaller phrases based on its semantic parse and locating each of them in the code snippet. This should allow us to match compositional and longer queries to code more precisely.
3 Neural Modular Code Search
We propose to supplement the query with a loose structure resembling its semantic parse, as illustrated in Figure 2. We follow the parse structure to break down the query into smaller, semantically coherent parts, so that each corresponds to an individual execution step. The steps are taken in succession by a neural module network composed from a layout that is determined from the semantic parse of the
2This is not the case in CoSQA dataset. For the sake of consistency, we perform the evaluation repeatedly, leaving only one correct code snippet among the candidates at a time, while removing the others.
query (Sec. 3.1). The neural module network is composed by stacking “modules”, or jointly trained networks, of distinct types, each carrying out a different functionality.
Method Overview In this work, we define two types of neural modules: the entity discovery module (denoted by E; Sec. 3.2) and the action module (denoted by A; Sec. 3.3). The entity discovery module estimates the semantic relatedness of each code token c^j_i in the code snippet c^j = [c^j_1, …, c^j_N] to an entity mentioned in the query, e.g., “all tables” or “dataset” in Figure 2. The action module estimates the likelihood of each code token being related to an (unseen) entity affected by the action in the query, e.g., “dataset” and “load from” respectively, conditioned on the rest of the input (seen), e.g., “all tables”. The similarity of the predictions of the entity discovery and action modules measures how well the code matches that part of the query. The modules are nested – the action modules take as input part of the output of another module – and the order of nesting is decided by the semantic parse layout. In the rest of the paper we refer to the inputs of a module as its arguments.
Every input instance fed to the model is a 3-tuple (q_i, s_qi, c_j) consisting of a natural language query q_i, the query's semantic parse s_qi, and a candidate code sequence c_j. The goal is to produce a binary label: r̂_ij = 1 if the code is a match for the query, and 0 otherwise. The layout of the neural module network, denoted by L(s_qi), is created from the semantic structure of the query s_qi. During inference, given (q_i, s_qi, c_j) as input, the model instantiates a network based on the layout, passes q_i, c_j, and s_qi as inputs, and obtains the model prediction r̂_ij. This pipeline is illustrated in Figure 2, and details about creating the layout of the neural module network are presented in Section 3.1.
During training, we first perform noisy supervision pretraining for both modules. Next, we perform end-to-end training, where in addition to the query, its parse, and a code snippet, the model is also provided a gold output label r(qi, cj) = 1 if the code is a match for the query, and r(qi, cj) = 0 otherwise. These labels provide signal for joint fine-tuning of both modules (Section 3.5).
3.1 Module Network Layout
Here we present our definition of the structural representation sqi for a query qi, and introduce how this structural representation is used for dynamically constructing the neural module network, i.e. building its layout L(sqi).
Query Parsing To infer the representation s_qi, we pair the query (e.g., “Load all tables from dataset”, as in Figure 2) with a simple semantic parse that looks similar to: DO WHAT [ (to/from/in/...) WHAT, WHEN, WHERE, HOW, etc.]. Following this semantic parse, we break down the query into shorter semantic phrases using the roles of different parts of speech. Nouns and noun phrases correspond to data entities in code, and verbs describe actions or transformations performed on the data entities. Thus, data and transformations are separated and handled by separate neural modules – an entity discovery module E and an action module A. We use a Combinatory Categorial Grammar-based (CCG) semantic parser [43, 5] to infer the semantic parse s_qi for the natural language query q_i. Parsing is described in further detail in Section 4.1 and Appendix A.2.
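For concreteness, one possible nested representation of the parse for the running example is shown below; the exact node format is an assumption for illustration and is not the output format of our CCG parser.

```python
parse = {
    "action": "load",
    "arguments": [
        {"entity": "all tables", "preposition": None},
        {"entity": "dataset", "preposition": "from"},
    ],
}
# A nested action such as "... using Lib library" would appear as another
# {"action": ..., "arguments": [...]} element inside "arguments".
```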
Specifying Network Layout In the layout L(s_qi), every noun phrase (e.g., “dataset” in Figure 2) is passed through the entity discovery module E. Module E then produces a probability score e_k for every token c^j_k in the code snippet c^j, indicating its semantic relatedness to the noun phrase: E(“dataset”, c^j) = [e_1, e_2, …, e_N]. Each verb in s_qi (e.g., “load” in Figure 2) is passed through an action module: A(“load”, p_i, c^j) = [a_1, a_2, …, a_N]. Here, p_i is the span of arguments to the verb (action) in query q_i, consisting of the children of the verb in the parse s_qi (e.g., the subject and object arguments of the predicate “load”); a_1, …, a_N are estimates of the token scores e_1, …, e_N for an entity from p_i. The top level of the semantic parse is always an action module. Figure 2 also illustrates the preposition FROM used with “dataset”; its handling is described in Section 3.3.
| 1. What is the novel approach introduced by the paper in semantic code search using neural module networks?
2. What are the strengths of the proposed method, particularly in its ability to improve state-of-the-art results?
3. What are the weaknesses of the paper regarding understandability and clarity in certain sections and parts?
4. How does the reviewer suggest improving the understanding of the intuition behind action and entity discovery modules and their role in computing the relatedness score?
5. What are the reviewer's concerns regarding the dual role of prepositions and their embedding with verbs, and how can this be addressed?
6. What are the suggestions for improving the clarity of certain lines and sections in the paper, such as Line 208, Line 246, and Lines 268-277?
7. Does the reviewer have any questions about the batching process and its potential impact on the model's performance?
8. Is there a possible explanation for why the model's performance peaks at depth=1, given how compositional queries are handled?
9. What are the reviewer's thoughts on Figure 4, and how could it be improved or made less confusing?
10. Are there any limitations that the reviewer believes should be addressed but haven't been mentioned in the paper? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper introduces a novel method for semantic code search using neural module networks. The layout of the network is produced from a semantic parse of the query. There are two types of modules: entity discovery modules and action modules. Entity discovery modules correspond to the nouns in the semantic parse and each such module tries to discover the given entity (noun) of the query in the code (they assign a relevance score to each code token).
Action modules correspond to the verbs in the query. Each action module receives only part of the full query with the last entity argument masked, and tries to discover the masked entity (estimate its relevance scores). The intuition is that if the code snippet indeed corresponds to the query, then the action module should be able to estimate the relevance scores of the masked entity based on the code and the rest of the query. Nested action modules are flattened and their scores are multiplied together.
The final score, which measures the relatedness of the code snippet to the query, is computed by taking the normalized dot product of the relevance scores of an entity computed by its entity discovery module and by the action module in which it was masked. If there are multiple such scores (because there are multiple action modules), they are multiplied together.
Strengths And Weaknesses
Strengths
The paper is an interesting and novel application of neural module networks to semantic code search and improves the state-of-the-art results.
Weaknesses
I think the main problem with the paper is that some parts are hard to understand in their current form. The introduction could be more concrete with information incorporated from the caption of Figure 2 and Section 3. For example, I think that the introduction should mention the intuition behind action and entity discovery modules, and how that is used to compute the relatedness score.
I found the second paragraph of Section 3.3 especially hard to understand, I think it should be elaborated. One reason for my confusion was the dual role of prepositions: they belong to the entities, but we embed them with the verbs. I still don't understand what happens when we have multiple entities as the input dimension of the transformer would change depending on the number of entities. Also, the dimensions, and what is concatenated to what and in which direction is not clear.
Line 208: the code token embedding should be c^k_j and not t^k.
Line 246: ", Section 4.1," is somewhat confusing, "later in this Section" would be better
Lines 268-277: some of the methods are not cited.
I find it odd that there is a "Background" and a "Related work" section, as the Background already discusses the limitations of related work.
Questions
Why is batching hard for the model? (it was mentioned in line 237)
It was mentioned that the performance of the model peaks at depth=1. Could that be because of how compositional queries are handled (as described in lines 219-228)?
I found Figure 4 confusing. Why are both "Load 0" and "Load from" present in the figure? On Figure 2 we just have "Load from" for the same action module and query. Also, there is just one unmasked entity and it's without a preposition, so maybe "Load 0" would be appropriate?
Limitations
I cannot think of any limitations which are not addressed. |
NIPS | Title
Scalable Global Optimization via Local Bayesian Optimization
Abstract
Bayesian optimization has recently emerged as a popular method for the sample-efficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition. This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems. We propose the TuRBO algorithm that fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. A comprehensive evaluation demonstrates that TuRBO outperforms state-of-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences.
1 Introduction
The global optimization of high-dimensional black-box functions—where closed form expressions and derivatives are unavailable—is a ubiquitous task arising in hyperparameter tuning [36]; in reinforcement learning, when searching for an optimal parametrized policy [7]; in simulation, when calibrating a simulator to real world data; and in chemical engineering and materials discovery, when selecting candidates for high-throughput screening [18]. While Bayesian optimization (BO) has emerged as a highly competitive tool for problems with a small number of tunable parameters (e.g., see [13, 35]), it often scales poorly to high dimensions and large sample budgets. Several methods have been proposed for high-dimensional problems with small budgets of a few hundred samples (see the literature review below). However, these methods make strong assumptions about the objective function such as low-dimensional subspace structure. The recent algorithms of Wang et al. [45] and Hernández-Lobato et al. [18] are explicitly designed for a large sample budget and do not make these assumptions. However, they do not compare favorably with state-of-the-art methods from stochastic optimization like CMA-ES [17] in practice.
The optimization of high-dimensional problems is hard for several reasons. First, the search space grows exponentially with the dimension, and while local optima may become more plentiful, global optima become more difficult to find. Second, the function is often heterogeneous, making the task of fitting a global surrogate model challenging. For example, in reinforcement learning problems with sparse rewards, we expect the objective function to be nearly constant in large parts of the search space. For the latter, note that the commonly used global Gaussian process (GP) models [13, 46]
implicitly suppose that characteristic lengthscales and signal variances of the function are constant in the search space. Previous work on non-stationary kernels does not make this assumption, but these approaches are too computationally expensive to be applicable in our large-scale setting [37, 40, 3]. Finally, the fact that search spaces grow considerably faster than sampling budgets due to the curse of dimensionality implies the inherent presence of regions with large posterior uncertainty. For common myopic acquisition functions, this results in an overemphasized exploration and a failure to exploit promising areas.
To overcome these challenges, we adopt a local strategy for BO. We introduce trust region BO (TuRBO), a technique for global optimization, that uses a collection of simultaneous local optimization runs using independent probabilistic models. Each local surrogate model enjoys the typical benefits of Bayesian modeling —robustness to noisy observations and rigorous uncertainty estimates— however, these local surrogates allow for heterogeneous modeling of the objective function and do not suffer from over-exploration. To optimize globally, we leverage an implicit multi-armed bandit strategy at each iteration to allocate samples between these local areas and thus decide which local optimization runs to continue.
We provide a comprehensive experimental evaluation demonstrating that TuRBO outperforms the state-of-the-art from BO, evolutionary methods, simulation optimization, and stochastic optimization on a variety of benchmarks that span from reinforcement learning to robotics and natural sciences. An implementation of TuRBO is available at https://github.com/uber-research/TuRBO.
1.1 Related work
BO has recently become the premier technique for global optimization of expensive functions, with applications in hyperparameter tuning, aerospace design, chemical engineering, and materials discovery; see [13, 35] for an overview. However, most of BO’s successes have been on low-dimensional problems and small sample budgets. This is not for a lack of trying; there have been many attempts to scale BO to more dimensions and observations. A common approach is to replace the GP model: Hutter et al. [19] uses random forests, whereas Snoek et al. [38] applies Bayesian linear regression on features from neural networks. This neural network approach was refined by Springenberg et al. [39] whose BOHAMIANN algorithm uses a modified Hamiltonian Monte Carlo method, which is more robust and scalable than standard Bayesian neural networks. Hernández-Lobato et al. [18] combines Bayesian neural networks with Thompson sampling (TS), which easily scales to large batch sizes. We will return to this acquisition function later.
There is a considerable body of work in high-dimensional BO [8, 21, 5, 44, 14, 45, 32, 26, 27, 6]. Many methods exist that exploit potential additive structure in the objective function [21, 14, 45]. These methods typically rely on training a large number of GPs (corresponding to different additive structures) and therefore do not scale to large evaluation budgets. Other methods exist that rely on a mapping between the high-dimensional space and an unknown low-dimensional subspace to scale to large numbers of observations [44, 27, 15]. The BOCK algorithm of Oh et al. [29] uses a cylindrical transformation of the search space to achieve scalability to high dimensions. Ensemble Bayesian optimization (EBO) [45] uses an ensemble of additive GPs together with a batch acquisition function to scale BO to tens of thousands of observations and high-dimensional spaces. Recently, Nayebi et al. [27] have proposed the general HeSBO framework that extends GP-based BO algorithms to high-dimensional problems using a novel subspace embedding that overcomes the limitations of the Gaussian projections used in [44, 5, 6]. From this area of research, we compare to BOCK, BOHAMIANN, EBO, and HeSBO.
To acquire large numbers of observations, large-scale BO usually selects points in batches to be evaluated in parallel. While several batch acquisition functions have recently been proposed [9, 34, 43, 47, 48, 24, 16], these approaches do not scale to large batch sizes in practice. TS [41] is particularly lightweight and easy to implement as a batch acquisition function as the computational cost scales linearly with the batch size. Although originally developed for bandit problems [33], it has recently shown its value in BO [18, 4, 22]. In practice, TS is usually implemented by drawing a realization of the unknown objective function from the surrogate model’s posterior on a discretized search space. Then, TS finds the optimum of the realization and evaluates the objective function at that location. This technique is easily extended to batches by drawing multiple realizations (see the supplementary material for details).
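As a rough illustration of this batch acquisition, the sketch below (using a scikit-learn GP as a stand-in surrogate, not the Bayesian neural networks or exact implementations discussed above) draws joint posterior realizations on a random discretization and keeps one minimizer per realization.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def thompson_batch(X_obs, y_obs, batch_size, n_cand=2000, seed=0):
    """Batch Thompson sampling: one posterior realization per batch slot,
    each minimized over a random discretization of the unit hypercube."""
    rng = np.random.default_rng(seed)
    dim = X_obs.shape[1]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    X_cand = rng.random((n_cand, dim))                                      # discretized search space
    samples = gp.sample_y(X_cand, n_samples=batch_size, random_state=seed)  # shape (n_cand, batch_size)
    return X_cand[np.argmin(samples, axis=0)]                               # one argmin per realization

# Toy usage on a 2D quadratic objective.
rng = np.random.default_rng(1)
X = rng.random((20, 2))
y = ((X - 0.3) ** 2).sum(axis=1)
print(thompson_batch(X, y, batch_size=5).shape)  # (5, 2)
```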
Evolutionary algorithms are a popular approach for optimizing black-box functions when thousands of evaluations are available; see Jin et al. [20] for an overview in stochastic settings. We compare to the successful covariance matrix adaptation evolution strategy (CMA-ES) of Hansen [17]. CMA-ES performs a stochastic search and maintains a multivariate normal sampling distribution over the search space. The evolutionary techniques of recombination and mutation correspond to adaptations of the mean and covariance matrix of that distribution.
High-dimensional problems with large sample budgets have also been studied extensively in operations research and simulation optimization, see [11] for a survey. Here the successful trust region (TR) methods are based on a local surrogate model in a region (often a sphere) around the best solution. The trust region is expanded or shrunk depending on the improvement in obtained solutions; see Yuan [49] for an overview. We compare to BOBYQA [31], a state-of-the-art TR method that uses a quadratic approximation of the objective function. We also include the Nelder-Mead (NM) algorithm [28]. For a d-dimensional space, NM creates a (d+ 1)-dimensional simplex that adaptively moves along the surface by projecting the vertex of the worst function value through the center of the simplex spanned by the remaining vertices. Finally, we also consider the popular quasi-Newton method BFGS [50], where gradients are obtained using finite differences. For other work that uses local surrogate models, see e.g., [23, 42, 1, 2, 25].
2 The trust region Bayesian optimization algorithm
In this section, we propose an algorithm for optimizing high-dimensional black-box functions. In particular, suppose that we wish to solve:
Find x∗ ∈ Ω such that f(x∗) ≤ f(x), ∀x ∈ Ω,
where f : Ω → R and Ω = [0, 1]^d. We observe potentially noisy values y(x) = f(x) + ε, where ε ∼ N(0, σ^2). BO relies on the ability to construct a global model that is eventually accurate enough to uncover a global optimizer. As discussed previously, this is challenging due to the curse of dimensionality and the heterogeneity of the function. To address these challenges, we propose to abandon global surrogate modeling, and achieve global optimization by maintaining several independent local models, each involved in a separate local optimization run. To achieve global optimization in this framework, we maintain multiple local models simultaneously and allocate samples via an implicit multi-armed bandit approach. This yields an efficient acquisition strategy that directs samples towards promising local optimization runs. We begin by detailing a single local optimization run, and then discuss how multiple runs are managed.
Local modeling. To achieve principled local optimization in the gradient-free setting, we draw inspiration from a class of TR methods from stochastic optimization [49]. These methods make suggestions using a (simple) surrogate model inside a TR. The region is often a sphere or a polytope centered at the best solution, within which the surrogate model is believed to accurately model the function. For example, the popular COBYLA [30] method approximates the objective function using a local linear model. Intuitively, while linear and quadratic surrogates are likely to be inadequate models globally, they can be accurate in a sufficiently small TR. However, there are two challenges with traditional TR methods. First, deterministic examples such as COBYLA are notorious for handling noisy observations poorly. Second, simple surrogate models might require overly small trust regions to provide accurate modeling behavior. Therefore, we will use GP surrogate models within a TR. This allows us to inherit the robustness to noise and rigorous reasoning about uncertainty that global BO enjoys.
Trust regions. We choose our TR to be a hyperrectangle centered at the best solution found so far, denoted by x*. In the noise-free case, we set x* to the location of the best observation so far. In the presence of noise, we use the observation with the smallest posterior mean under the surrogate model. At the beginning of a given local optimization run, we initialize the base side length of the TR to L ← L_init. The actual side length for each dimension is obtained from this base side length by rescaling according to its lengthscale λ_i in the GP model while maintaining a total volume of L^d. That is, L_i = λ_i L / (∏_{j=1}^d λ_j)^{1/d}. To perform a single local optimization run, we utilize an acquisition function at each iteration t to select a batch of q candidates {x_1^(t), . . . , x_q^(t)}, restricted to be within the TR. If L was large enough for the TR to contain the whole space, this would be
equivalent to running standard global BO. Therefore, the evolution of L is critical. On the one hand, a TR should be sufficiently large to contain good solutions. On the other hand, it should be small enough to ensure that the local model is accurate within the TR. The typical behavior is to expand a TR when the optimizer “makes progress”, i.e., it finds better solutions in that region, and shrink it when the optimizer appears stuck. Therefore, following, e.g., Nelder and Mead [28], we will shrink a TR after too many consecutive “failures”, and expand it after many consecutive “successes”. We define a “success” as a candidate that improves upon x*, and a “failure” as a candidate that does not. After τ_succ consecutive successes, we double the size of the TR, i.e., L ← min{L_max, 2L}. After τ_fail consecutive failures, we halve the size of the TR: L ← L/2. We reset the success and failure counters to zero after we change the size of the TR. Whenever L falls below a given minimum threshold L_min, we discard the respective TR and initialize a new one with side length L_init. Additionally, we do not let the side length expand to be larger than a maximum threshold L_max. Note that τ_succ, τ_fail, L_min, L_max, and L_init are hyperparameters of TuRBO; see the supplementary material for the values used in the experimental evaluation.
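A compact sketch of this bookkeeping is given below. The numeric defaults are illustrative placeholders (the actual values of τ_succ, τ_fail, L_min, L_max, and L_init are in the supplementary material), and the restart handling simply resets the base side length.

```python
import numpy as np

class TrustRegion:
    """Minimal success/failure bookkeeping for a single trust region."""

    def __init__(self, dim, L_init=0.8, L_min=2 ** -7, L_max=1.6, tau_succ=3, tau_fail=5):
        # Illustrative defaults; see the supplementary material for the values used in the paper.
        self.dim, self.L = dim, L_init
        self.L_init, self.L_min, self.L_max = L_init, L_min, L_max
        self.tau_succ, self.tau_fail = tau_succ, tau_fail
        self.n_succ = self.n_fail = 0

    def side_lengths(self, lengthscales):
        """Per-dimension side lengths L_i = lambda_i * L / (prod_j lambda_j)^(1/d),
        which keeps the total volume equal to L**dim."""
        lam = np.asarray(lengthscales, dtype=float)
        return lam * self.L / np.prod(lam) ** (1.0 / self.dim)

    def update(self, improved):
        """Expand after tau_succ consecutive successes, shrink after tau_fail consecutive
        failures, and restart once the base side length falls below L_min."""
        if improved:
            self.n_succ, self.n_fail = self.n_succ + 1, 0
        else:
            self.n_succ, self.n_fail = 0, self.n_fail + 1
        if self.n_succ >= self.tau_succ:
            self.L, self.n_succ = min(self.L_max, 2 * self.L), 0
        elif self.n_fail >= self.tau_fail:
            self.L, self.n_fail = self.L / 2, 0
        if self.L < self.L_min:
            self.L, self.n_succ, self.n_fail = self.L_init, 0, 0

# Toy usage.
tr = TrustRegion(dim=3)
print(tr.side_lengths([0.5, 1.0, 2.0]))
for improved in [True, True, True, False]:
    tr.update(improved)
print(tr.L)
```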
Trust region Bayesian optimization. So far, we have detailed a single local BO strategy using a TR method. Intuitively, we could make this algorithm (more) global by random restarts. However, from a probabilistic perspective, this is likely to utilize our evaluation budget inefficiently. Just as we reason about which candidates are most promising within a local optimization run, we can reason about which local optimization run is “most promising.”
Therefore, TuRBO maintains m trust regions simultaneously. Each trust region TR_ℓ with ℓ ∈ {1, . . . , m} is a hyperrectangle of base side length L_ℓ ≤ L_max, and utilizes an independent local GP model. This gives rise to a classical exploitation-exploration trade-off that we model by a multi-armed bandit that treats each TR as a lever. Note that this provides an advantage over traditional TR algorithms in that TuRBO puts a stronger emphasis on promising regions.
In each iteration, we need to select a batch of q candidates drawn from the union of all trust regions, and update all local optimization problems for which candidates were drawn. To solve this problem, we find that TS provides a principled solution to both the problem of selecting candidates within a single TR, and selecting candidates across the set of trust regions simultaneously. To select the i-th candidate from across the trust regions, we draw a realization of the posterior function from the local GP within each TR: f^(i)_ℓ ∼ GP^(t)_ℓ(μ_ℓ(x), k_ℓ(x, x′)), where GP^(t)_ℓ is the GP posterior for TR_ℓ at iteration t. We then select the i-th candidate such that it minimizes the function value across all m samples and all trust regions:

x^(t)_i ∈ argmin_ℓ argmin_{x ∈ TR_ℓ} f^(i)_ℓ,  where  f^(i)_ℓ ∼ GP^(t)_ℓ(μ_ℓ(x), k_ℓ(x, x′)).
That is, we select the point with the smallest function value after concatenating a Thompson sample from each TR, for i = 1, . . . , q. We refer to the supplementary material for additional details.
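A sketch of this cross-region selection, again with scikit-learn GPs as illustrative stand-ins for the local surrogates, is shown below; for each of the q slots it draws one realization per region and keeps the overall minimizer.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def select_across_regions(region_data, region_cands, q, seed=0):
    """For each of q batch slots, draw one Thompson sample per trust region
    (on that region's candidate set) and keep the global minimizer."""
    gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
           for X, y in region_data]
    chosen = []
    for i in range(q):
        best_val, best_x = np.inf, None
        for gp, X_cand in zip(gps, region_cands):
            f = gp.sample_y(X_cand, n_samples=1, random_state=seed + i).ravel()
            j = int(np.argmin(f))
            if f[j] < best_val:   # the smallest sampled value across all regions wins
                best_val, best_x = f[j], X_cand[j]
        chosen.append(best_x)
    return np.stack(chosen)

# Toy usage: two trust regions on a 2D quadratic objective.
rng = np.random.default_rng(0)
objective = lambda X: ((X - 0.3) ** 2).sum(axis=1)
train = [rng.random((15, 2)), rng.random((15, 2)) * 0.5 + 0.5]
data = [(X, objective(X)) for X in train]
cands = [rng.random((500, 2)) * 0.4, rng.random((500, 2)) * 0.4 + 0.5]
print(select_across_regions(data, cands, q=3).shape)  # (3, 2)
```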
3 Numerical experiments
In this section, we evaluate TuRBO on a wide range of problems: a 14D robot pushing problem, a 60D rover trajectory planning problem, a 12D cosmological constant estimation problem, a 12D lunar landing reinforcement learning problem, and a 200D synthetic problem. All problems are multimodal and challenging for many global optimization algorithms. We consider a variety of batch sizes and evaluation budgets to fully examine the performance and robustness of TuRBO. The values of τ_succ, τ_fail, L_min, L_max, and L_init are given in the supplementary material.
We compare TuRBO to a comprehensive selection of state-of-the-art baselines: BFGS, BOCK, BOHAMIANN, CMA-ES, BOBYQA, EBO, GP-TS, HeSBO-TS, Nelder-Mead (NM), and random search (RS). Here, GP-TS refers to TS with a global GP model using the Matérn-5/2 kernel. HeSBO-TS combines GP-TS with a subspace embedding and thus effectively optimizes in a low-dimensional space; this target dimension is set by the user. Therefore, a small sample budget may suffice, which allows running p invocations in parallel, following [44]. This may improve the performance, since each embedding may "fail" with some probability [27], i.e., it does not contain the active subspace even if it exists. Note that HeSBO-TS-p recommends a point of optimal posterior mean among the p GP-models; we use that point for the evaluation. The standard acquisition criterion EI used in BOCK and BOHAMIANN is replaced by (batch) TS, i.e., all methods use the same criterion which allows for a
direct comparison. Methods that attempt to learn an additive decomposition lack scalability and are thus omitted. BFGS approximates the gradient via finite differences and thus requires d+1 evaluations for each step. Furthermore, NM, BFGS, and BOBYQA are inherently sequential and therefore have an edge by leveraging all gathered observations. However, they are considerably more time consuming on a per-wall-time evaluation basis since we are working with large batches.
We supplement the optimization test problems with three additional experiments: i) one that shows that TuRBO achieves a linear speed-up from large batch sizes, ii) a comparison of local GPs and global GPs on a control problem, and iii) an analytical experiment demonstrating the locality of TuRBO. Performance plots show the mean performances with one standard error. Overall, we observe that TuRBO consistently finds excellent solutions, outperforming the other methods on most problems. Experimental results for a small budget experiment on four synthetic functions are shown in the supplement, where we also provide details on the experimental setup and runtimes for all algorithms.
3.1 Robot pushing
The robot pushing problem is a noisy 14D control problem considered in Wang et al. [45]. We run each method for a total of 10K evaluations and batch size of q = 50. TuRBO-1 and all other methods are initialized with 100 points except for TuRBO-20 where we use 50 initial points for each trust region. This is to avoid having TuRBO-20 consume its full evaluation budget on the initial points. We use HeSBO-TS-5 with target dimension 8. TuRBO-m denotes the variant of TuRBO that maintains m local models in parallel. Fig. 2 shows the results: TuRBO-1 and TuRBO-20 outperform the alternatives. TuRBO-20 starts slower since it is initialized with 1K points, but eventually outperforms TuRBO-1. CMA-ES and BOBYQA outperform the other BO methods. Note that Wang et al. [45] reported a median value of 8.3 for EBO after 30K evaluations, while TuRBO-1 achieves a mean and median reward of around 9.4 after only 2K samples.
3.2 Rover trajectory planning
Here the goal is to optimize the locations of 30 points in the 2D-plane that determine the trajectory of a rover [45]. Every algorithm is run for 200 steps with a batch size of q = 100, thus collecting a total of 20K evaluations. We use 200 initial points for all methods except for TuRBO-20, where we use 100 initial points for each region. Fig. 2 summarizes the performance. We observe that TuRBO-1 and TuRBO-20 outperform all other algorithms after a few thousand evaluations. TuRBO-20 once again starts slowly because of the initial 2K random evaluations. Wang et al. [45] reported a mean value of 1.5 for EBO after 35K evaluations, while TuRBO-1 achieves a mean and median reward of about 2 after only 1K evaluations. We use a target dimension of 10 for HeSBO-TS-15 in this experiment.
3.3 Cosmological constant learning
In the “cosmological constants” problem, the task is to calibrate a physics simulator1 to observed data. The tunable parameters include various physical constants like the density of certain types of matter and Hubble’s constant. In this paper, we use a more challenging version of the problem in [21] by tuning 12 parameters rather than 9, and by using substantially larger parameter bounds. We used 2K evaluations, a batch size of q = 50, and 50 initial points. TuRBO-5 uses 20 initial points for each local model and HeSBO-TS-4 uses a target dimension of 8. Fig. 3 (left) shows the results, with TuRBO-5 performing the best, followed by BOBYQA and TuRBO-1. TuRBO-1 sometimes converges to a bad local optimum, which deteriorates the mean performance and demonstrates the importance of allocating samples across multiple trust regions.
3.4 Lunar landing reinforcement learning
Here the goal is to learn a controller for a lunar lander implemented in the OpenAI gym2. The state space for the lunar lander is the position, angle, time derivatives, and whether or not either leg is in contact with the ground. There are four possible actions for each frame, each corresponding to firing a booster engine left, right, up, or doing nothing. The objective is to maximize the average final reward over a fixed constant set of 50 randomly generated terrains, initial positions, and velocities. We observed that the simulation can be sensitive to even tiny perturbations. Fig. 3 shows the results for a total of 1500 function evaluations, batch size q = 50, and 50 initial points for all algorithms except for TuRBO-5 which uses 20 initial points for each local region. For this problem, we use HeSBO-TS-3 in an 8-dimensional subspace. TuRBO-5 and TuRBO-1 learn the best controllers and, in particular, achieve better rewards than the handcrafted controller provided by OpenAI, whose performance is depicted by the blue horizontal line.
3.5 The 200-dimensional Ackley function
We examine performances on the 200-dimensional Ackley function in the domain [−5, 10]^200. We only consider TuRBO-1 because of the large number of dimensions where there may not be a benefit from using multiple TRs. EBO is excluded from the plot since its computation time exceeded 30 days per replication. HeSBO-TS-5 uses a target dimension of 20. Fig. 4 shows the results for a total of 10K function evaluations, batch size q = 100, and 200 initial points for all algorithms.
1https://lambda.gsfc.nasa.gov/toolbox/lrgdr/ 2https://gym.openai.com/envs/LunarLander-v2
HeSBO-TS-5, with a target dimension of 20, and BOBYQA perform well initially, but are eventually outperformed by TuRBO-1 that achieves the best solutions. The good performance of HeSBO-TS is particularly interesting, since this benchmark has no redundant dimensions and thus should be challenging for that embedding-based approach. This confirms similar findings in [27]. BO methods that use a global GP model over-emphasize exploration and make little progress.
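For reference, a sketch of the benchmark objective is given below; the constants a = 20, b = 0.2, and c = 2π are the standard Ackley parameters, which are assumed here since they are not restated in the text.

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Ackley function; the global minimum is 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return (-a * np.exp(-b * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(c * x))) + a + np.e)

# Evaluate at a random point in the 200-dimensional domain [-5, 10]^200.
rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 10.0, size=200)
print(ackley(x), ackley(np.zeros(200)))  # the second value is ~0
```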
3.6 The advantage of local models over global models
We investigate the performance of local and global GP models on the 14D robot pushing problem from Sect. 3.1. We replicate the conditions from the optimization experiments as closely as possible for a regression experiment, including for example parameter bounds. We choose 20 uniformly distributed hypercubes of (base) side length 0.4, each containing 200 uniformly distributed training points. We train a global GP on all 4000 samples, as well as a separate local GP for each hypercube. For the sake of illustration, we used an isotropic kernel for these experiments. The local GPs have the advantage of being able to learn different hyperparameters in each region while the global GP has the advantage of having access to all of the data. Fig. 5 shows the predictive performance (in log loss) on held-out data. We also show the distribution of fitted hyperparameters for both the local and global GPs. We see that the hyperparameters (especially the signal variance) vary substantially across regions. Furthermore, the local GPs perform better than the global GP in every repeated trial. The global model has an average log loss of 1.284 while the local model has an average log loss of 1.174
across 50 trials; the improvement is significant under a t-test at p < 10^-4. This experiment confirms that we improve the predictive power of the models and also reduce the computational overhead of the GP by using the local approach. The learned local noise variance in Fig. 5 is bimodal, confirming the heteroscedasticity in the objective across regions. The global GP is required to learn the high noise value to avoid a penalty for outliers.
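A heavily down-scaled sketch of this comparison (a 2D toy function, fewer regions and points, and isotropic Matérn GPs via scikit-learn, none of which match the exact experimental setup) is included below to show how the held-out log loss of local and global models can be contrasted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def gaussian_log_loss(y, mu, std):
    """Negative log predictive density under the GP's Gaussian posterior."""
    std = np.maximum(std, 1e-6)
    return np.mean(0.5 * np.log(2 * np.pi * std ** 2) + (y - mu) ** 2 / (2 * std ** 2))

def local_vs_global(f, dim=2, n_regions=5, n_train=50, n_test=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.random((n_regions, dim)) * 0.6 + 0.2   # keep side-0.4 cubes inside [0, 1]^d
    make_gp = lambda: GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    local_losses, train_sets, test_sets = [], [], []
    for c in centers:
        X = c + (rng.random((n_train, dim)) - 0.5) * 0.4
        Xte = c + (rng.random((n_test, dim)) - 0.5) * 0.4
        y, yte = f(X), f(Xte)
        mu, std = make_gp().fit(X, y).predict(Xte, return_std=True)
        local_losses.append(gaussian_log_loss(yte, mu, std))
        train_sets.append((X, y)); test_sets.append((Xte, yte))
    gp_global = make_gp().fit(np.vstack([X for X, _ in train_sets]),
                              np.concatenate([y for _, y in train_sets]))
    global_losses = [gaussian_log_loss(yte, *gp_global.predict(Xte, return_std=True))
                     for Xte, yte in test_sets]
    return np.mean(local_losses), np.mean(global_losses)

# Toy heterogeneous objective with mild observation noise.
rng = np.random.default_rng(1)
f = lambda X: np.sin(5.0 * X).sum(axis=1) + 0.05 * rng.standard_normal(len(X))
print(local_vs_global(f))  # (local mean log loss, global mean log loss)
```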
3.7 Why high-dimensional spaces are challenging
In this section, we illustrate why the restarting and banditing strategy of TuRBO is so effective. Each TR restart finds distant solutions of varying quality, which highlights the multimodal nature of the problem. This gives TuRBO-m a distinct advantage.
We ran TuRBO-1 (with a single trust region) for 50 restarts on the 60D rover trajectory planning problem from Sect. 3.2 and logged the volume of the TR and its center after each iteration. Fig. 6 shows the volume of the TR, the arclength of the TR center’s trajectory, the final objective value, and the distance each final solution has to its nearest neighbor. The left two plots confirm that, within a trust region, the optimization is indeed highly local. The volume of any given trust region decreases rapidly and is only a small fraction of the total search space. From the two plots on the right, we see that the solutions found by TuRBO are far apart with varying quality, demonstrating the value of performing multiple local search runs in parallel.
3.8 The efficiency of large batches
Recall that combining multiple samples into single batches provides substantial speed-ups in terms of wall-clock time but poses the risk of inefficiencies since sequential sampling has the advantage of leveraging more information. In this section, we investigate whether large batches are efficient for TuRBO. Note that Hernández-Lobato et al. [18] and Kandasamy et al. [22] have shown that the TS acquisition function is efficient for batch acquisition with a single global surrogate model. We study TuRBO-1 on the robot pushing problem from Sect. 3.1 with batch sizes q ∈ {1, 2, 4, . . . , 64}. The algorithm takes max{200q, 6400} samples for each batch size and we average the results over 30 replications. Fig. 7 (left) shows the reward for each batch size with respect to the number of batches: we see that larger batch sizes obtain better results for the same number of iterations. Fig. 7 (right) shows the performance as a function of evaluations. We see that the speed-up is essentially linear.
4 Conclusions
The global optimization of computationally expensive black-box functions in high-dimensional spaces is an important and timely topic [13, 27]. We proposed the TuRBO algorithm which takes a novel local approach to global optimization. Instead of fitting a global surrogate model and trading off exploration and exploitation on the whole search space, TuRBO maintains a collection of local probabilistic models. These models provide local search trajectories that are able to quickly discover excellent objective values. This local approach is complemented with a global bandit strategy that allocates samples across these trust regions, implicitly trading off exploration and exploitation. A comprehensive experimental evaluation demonstrates that TuRBO outperforms the state-of-the-art Bayesian optimization and operations research methods on a variety of real-world complex tasks.
In the future, we plan on extending TuRBO to learn local low-dimensional structure to improve the accuracy of the local Gaussian process model. This extension is particularly interesting in highdimensional optimization when derivative information is available [10, 12, 48]. This situation often arises in engineering, where objectives are often modeled by PDEs solved by adjoint methods, and in machine learning where gradients are available via automated differentiation. Ultimately, it is our hope that this work spurs interest in the merits of Bayesian local optimization, particularly in the high-dimensional setting. | 1. How does the proposed methodology address the challenges of Bayesian Optimization, particularly for large datasets?
2. What are the strengths and weaknesses of the proposed approach compared to existing methods such as BOCK and BOHAMIANN?
3. Can the authors provide further explanations or references regarding the choice of hypercubes and the use of TS as an infill criterion?
4. How does the method handle the exploration-exploitation trade-off, and how does it differ from other approaches in this regard?
5. Are there any specific assumptions or limitations regarding the applicability of the proposed method? | Review | Review
Major * I found this paper to be very exciting, presenting a promising methodology addressing some of the most critical bottlenecks of Bayesian Optimization, with a focus on large data sets (being therefore relevant for high-dimensional BO as well, where sample sizes typically need to be substantially increased with the dimension). * With respect to the dimension indeed, several questions arose with respect to the considered hypercubes: * Page 4: Each trust region TR_ell with ell in {1,...,m} is a hypercube of side length L_ell \leq 1, and utilizes an independent local GP model. So, one is far from filling the space, right? * Page 4, about Lmin=(1/2)^6: could some more explanations be provided on the underlying rationale? * Page 4, equation just before the start of Section 3: why randomizing when the whole distribution is known and tractable? * Page 5, about "We replaced the standard acquisition criterion EI used in BOCK and BOHAMIANN by TS to achieve the required parallelism": but there exist several ways of making EI batch-sequential (see for instance a few papers dealing with this: Marmin et al. (2015): https://link.springer.com/chapter/10.1007%2F978-3-319-27926-8_4 González et al, (2016) http://proceedings.mlr.press/v51/gonzalez16a.pdf Wang et al. (2019): https://arxiv.org/pdf/1602.05149.pdf Not using these for some good reason is one thing, but putting it the way it is put here sounds like it is not possible to go batch-sequential with EI... * In the main contributions presented throughout Section 3, two main ideas are confounded here: splitting the data so as to obtain local models AND using TS as infill criterion. Which is (most) responsible for improved performances over the state of the art? Minor (selected points) * Page 1: What does "outputscales" mean? * Page 2, about "For commonly used myopic acquisition functions, this results in an overemphasized exploration and a failure to exploit promising areas.": better explaining why and/or referring to other works where this is analyzed in more detail would be nice. * Page 5, syntax issue in "Note that BFGS requires gradient approximations via finite differences, which is a fair comparison when the number of function evaluations is counted accordingly. * Throughout Section 3: is the (log) loss introduced? ******** Update afer rebuttal ********** I am happy with the way the authors addressed reviewer comments in their rebuttal, and while several points raised by the reviewing team give food for thoughts towards follow-up contributions, I feel that this paper deserves to be published in NeurIPS 2019. I do not increase my score as it is already high. |
NIPS | Title
Scalable Global Optimization via Local Bayesian Optimization
Abstract
Bayesian optimization has recently emerged as a popular method for the sampleefficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition. This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems. We propose the TuRBO algorithm that fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. A comprehensive evaluation demonstrates that TuRBO outperforms stateof-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences.
1 Introduction
The global optimization of high-dimensional black-box functions—where closed form expressions and derivatives are unavailable—is a ubiquitous task arising in hyperparameter tuning [36]; in reinforcement learning, when searching for an optimal parametrized policy [7]; in simulation, when calibrating a simulator to real world data; and in chemical engineering and materials discovery, when selecting candidates for high-throughput screening [18]. While Bayesian optimization (BO) has emerged as a highly competitive tool for problems with a small number of tunable parameters (e.g., see [13, 35]), it often scales poorly to high dimensions and large sample budgets. Several methods have been proposed for high-dimensional problems with small budgets of a few hundred samples (see the literature review below). However, these methods make strong assumptions about the objective function such as low-dimensional subspace structure. The recent algorithms of Wang et al. [45] and Hernández-Lobato et al. [18] are explicitly designed for a large sample budget and do not make these assumptions. However, they do not compare favorably with state-of-the-art methods from stochastic optimization like CMA-ES [17] in practice.
The optimization of high-dimensional problems is hard for several reasons. First, the search space grows exponentially with the dimension, and while local optima may become more plentiful, global optima become more difficult to find. Second, the function is often heterogeneous, making the task of fitting a global surrogate model challenging. For example, in reinforcement learning problems with sparse rewards, we expect the objective function to be nearly constant in large parts of the search space. For the latter, note that the commonly used global Gaussian process (GP) models [13, 46]
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
implicitly suppose that characteristic lengthscales and signal variances of the function are constant in the search space. Previous work on non-stationary kernels does not make this assumption, but these approaches are too computationally expensive to be applicable in our large-scale setting [37, 40, 3]. Finally, the fact that search spaces grow considerably faster than sampling budgets due to the curse of dimensionality implies the inherent presence of regions with large posterior uncertainty. For common myopic acquisition functions, this results in an overemphasized exploration and a failure to exploit promising areas.
To overcome these challenges, we adopt a local strategy for BO. We introduce trust region BO (TuRBO), a technique for global optimization, that uses a collection of simultaneous local optimization runs using independent probabilistic models. Each local surrogate model enjoys the typical benefits of Bayesian modeling —robustness to noisy observations and rigorous uncertainty estimates— however, these local surrogates allow for heterogeneous modeling of the objective function and do not suffer from over-exploration. To optimize globally, we leverage an implicit multi-armed bandit strategy at each iteration to allocate samples between these local areas and thus decide which local optimization runs to continue.
We provide a comprehensive experimental evaluation demonstrating that TuRBO outperforms the state-of-the-art from BO, evolutionary methods, simulation optimization, and stochastic optimization on a variety of benchmarks that span from reinforcement learning to robotics and natural sciences. An implementation of TuRBO is available at https://github.com/uber-research/TuRBO.
1.1 Related work
BO has recently become the premier technique for global optimization of expensive functions, with applications in hyperparameter tuning, aerospace design, chemical engineering, and materials discovery; see [13, 35] for an overview. However, most of BO’s successes have been on lowdimensional problems and small sample budgets. This is not for a lack of trying; there have been many attempts to scale BO to more dimensions and observations. A common approach is to replace the GP model: Hutter et al. [19] uses random forests, whereas Snoek et al. [38] applies Bayesian linear regression on features from neural networks. This neural network approach was refined by Springenberg et al. [39] whose BOHAMIANN algorithm uses a modified Hamiltonian Monte Carlo method, which is more robust and scalable than standard Bayesian neural networks. HernándezLobato et al. [18] combines Bayesian neural networks with Thompson sampling (TS), which easily scales to large batch sizes. We will return to this acquisition function later.
There is a considerable body of work in high-dimensional BO [8, 21, 5, 44, 14, 45, 32, 26, 27, 6]. Many methods exist that exploit potential additive structure in the objective function [21, 14, 45]. These methods typically rely on training a large number of GPs (corresponding to different additive structures) and therefore do not scale to large evaluation budgets. Other methods exist that rely on a mapping between the high-dimensional space and an unknown low-dimensional subspace to scale to large numbers of observations [44, 27, 15]. The BOCK algorithm of Oh et al. [29] uses a cylindrical transformation of the search space to achieve scalability to high dimensions. Ensemble Bayesian optimization (EBO) [45] uses an ensemble of additive GPs together with a batch acquisition function to scale BO to tens of thousands of observations and high-dimensional spaces. Recently, Nayebi et al. [27] have proposed the general HeSBO framework that extends GP-based BO algorithms to high-dimensional problems using a novel subspace embedding that overcomes the limitations of the Gaussian projections used in [44, 5, 6]. From this area of research, we compare to BOCK, BOHAMIANN, EBO, and HeSBO.
To acquire large numbers of observations, large-scale BO usually selects points in batches to be evaluated in parallel. While several batch acquisition functions have recently been proposed [9, 34, 43, 47, 48, 24, 16], these approaches do not scale to large batch sizes in practice. TS [41] is particularly lightweight and easy to implement as a batch acquisition function as the computational cost scales linearly with the batch size. Although originally developed for bandit problems [33], it has recently shown its value in BO [18, 4, 22]. In practice, TS is usually implemented by drawing a realization of the unknown objective function from the surrogate model’s posterior on a discretized search space. Then, TS finds the optimum of the realization and evaluates the objective function at that location. This technique is easily extended to batches by drawing multiple realizations as (see the supplementary material for details).
Evolutionary algorithms are a popular approach for optimizing black-box functions when thousands of evaluations are available, see Jin et al. [20] for an overview in stochastic settings. We compare to the successful covariance matrix adaptation evolution strategy (CMA-ES) of Hansen [17]. CMA-ES performs a stochastic search and maintains a multivariate normal sampling distribution over the search space. The evolutionary techniques of recombination and mutation correspond to adaptions of the mean and covariance matrix of that distribution.
High-dimensional problems with large sample budgets have also been studied extensively in operations research and simulation optimization, see [11] for a survey. Here the successful trust region (TR) methods are based on a local surrogate model in a region (often a sphere) around the best solution. The trust region is expanded or shrunk depending on the improvement in obtained solutions; see Yuan [49] for an overview. We compare to BOBYQA [31], a state-of-the-art TR method that uses a quadratic approximation of the objective function. We also include the Nelder-Mead (NM) algorithm [28]. For a d-dimensional space, NM creates a (d+ 1)-dimensional simplex that adaptively moves along the surface by projecting the vertex of the worst function value through the center of the simplex spanned by the remaining vertices. Finally, we also consider the popular quasi-Newton method BFGS [50], where gradients are obtained using finite differences. For other work that uses local surrogate models, see e.g., [23, 42, 1, 2, 25].
2 The trust region Bayesian optimization algorithm
In this section, we propose an algorithm for optimizing high-dimensional black-box functions. In particular, suppose that we wish to solve:
Find x∗ ∈ Ω such that f(x∗) ≤ f(x), ∀x ∈ Ω,
where f : Ω→ R and Ω = [0, 1]d. We observe potentially noisy values y(x) = f(x) + ε, where ε ∼ N (0, σ2). BO relies on the ability to construct a global model that is eventually accurate enough to uncover a global optimizer. As discussed previously, this is challenging due to the curse of dimensionality and the heterogeneity of the function. To address these challenges, we propose to abandon global surrogate modeling, and achieve global optimization by maintaining several independent local models, each involved in a separate local optimization run. To achieve global optimization in this framework, we maintain multiple local models simultaneously and allocate samples via an implicit multi-armed bandit approach. This yields an efficient acquisition strategy that directs samples towards promising local optimization runs. We begin by detailing a single local optimization run, and then discuss how multiple runs are managed.
Local modeling. To achieve principled local optimization in the gradient-free setting, we draw inspiration from a class of TR methods from stochastic optimization [49]. These methods make suggestions using a (simple) surrogate model inside a TR. The region is often a sphere or a polytope centered at the best solution, within which the surrogate model is believed to accurately model the function. For example, the popular COBYLA [30] method approximates the objective function using a local linear model. Intuitively, while linear and quadratic surrogates are likely to be inadequate models globally, they can be accurate in a sufficiently small TR. However, there are two challenges with traditional TR methods. First, deterministic examples such as COBYLA are notorious for handling noisy observations poorly. Second, simple surrogate models might require overly small trust regions to provide accurate modeling behavior. Therefore, we will use GP surrogate models within a TR. This allows us to inherit the robustness to noise and rigorous reasoning about uncertainty that global BO enjoys.
Trust regions. We choose our TR to be a hyperrectangle centered at the best solution found so far, denoted by x?. In the noise-free case, we set x? to the location of the best observation so far. In the presence of noise, we use the observation with the smallest posterior mean under the surrogate model. At the beginning of a given local optimization run, we initialize the base side length of the TR to L ← Linit. The actual side length for each dimension is obtained from this base side length by rescaling according to its lengthscale λi in the GP model while maintaining a total volume of Ld. That is, Li = λiL/( ∏d j=1 λj)
1/d. To perform a single local optimization run, we utilize an acquisition function at each iteration t to select a batch of q candidates {x(t)1 , . . . ,x (t) q }, restricted to be within the TR. If L was large enough for the TR to contain the whole space, this would be
equivalent to running standard global BO. Therefore, the evolution of L is critical. On the one hand, a TR should be sufficiently large to contain good solutions. On the other hand, it should be small enough to ensure that the local model is accurate within the TR. The typical behavior is to expand a TR when the optimizer “makes progress”, i.e., it finds better solutions in that region, and shrink it when the optimizer appears stuck. Therefore, following, e.g., Nelder and Mead [28], we will shrink a TR after too many consecutive “failures”, and expand it after many consecutive “successes”. We define a “success” as a candidate that improves upon x?, and a “failure” as a candidate that does not. After τsucc consecutive successes, we double the size of the TR, i.e., L← min{Lmax, 2L}. After τfail consecutive failures, we halve the size of the TR: L← L/2. We reset the success and failure counters to zero after we change the size of the TR. Whenever L falls below a given minimum threshold Lmin, we discard the respective TR and initialize a new one with side length Linit. Additionally, we do not let the side length expand to be larger than a maximum threshold Lmax. Note that τsucc, τfail, Lmin, Lmax, and Linit are hyperparameters of TuRBO; see the supplementary material for the values used in the experimental evaluation.
Trust region Bayesian optimization. So far, we have detailed a single local BO strategy using a TR method. Intuitively, we could make this algorithm (more) global by random restarts. However, from a probabilistic perspective, this is likely to utilize our evaluation budget inefficiently. Just as we reason about which candidates are most promising within a local optimization run, we can reason about which local optimization run is “most promising.”
Therefore, TuRBO maintains m trust regions simultaneously. Each trust region TR` with ` ∈ {1, . . . ,m} is a hyperrectangle of base side length L` ≤ Lmax, and utilizes an independent local GP model. This gives rise to a classical exploitation-exploration trade-off that we model by a multi-armed bandit that treats each TR as a lever. Note that this provides an advantage over traditional TR algorithms in that TuRBO puts a stronger emphasis on promising regions.
In each iteration, we need to select a batch of q candidates drawn from the union of all trust regions, and update all local optimization problems for which candidates were drawn. To solve this problem, we find that TS provides a principled solution to both the problem of selecting candidates within a single TR, and selecting candidates across the set of trust regions simultaneously. To select the i-th candidate from across the trust regions, we draw a realization of the posterior function from the local GP within each TR: f (i)` ∼ GP (t) ` (µ`(x), k`(x,x
′)), where GP(t)` is the GP posterior for TR` at iteration t. We then select the i-th candidate such that it minimizes the function value across all m samples and all trust regions:
x (t) i ∈ argmin
` argmin x∈TR`
f (i) ` where f (i) ` ∼ GP (t) ` (µ`(x), k`(x,x ′)).
That is, we select as point with the smallest function value after concatenating a Thompson sample from each TR for i = 1, . . . , q. We refer to the supplementary material for additional details.
3 Numerical experiments
In this section, we evaluate TuRBO on a wide range of problems: a 14D robot pushing problem, a 60D rover trajectory planning problem, a 12D cosmological constant estimation problem, a 12D lunar landing reinforcement learning problem, and a 200D synthetic problem. All problems are multimodal and challenging for many global optimization algorithms. We consider a variety of batch sizes and evaluation budgets to fully examine the performance and robustness of TuRBO. The values of τsucc, τfail, Lmin, Lmax, and Linit are given in the supplementary material.
We compare TuRBO to a comprehensive selection of state-of-the-art baselines: BFGS, BOCK, BOHAMIANN, CMA-ES, BOBYQA, EBO, GP-TS, HeSBO-TS, Nelder-Mead (NM), and random search (RS). Here, GP-TS refers to TS with a global GP model using the Matérn-5/2 kernel. HeSBO-TS combines GP-TS with a subspace embedding and thus effectively optimizes in a low-dimensional space; this target dimension is set by the user. Therefore, a small sample budget may suffice, which allows to run p invocations in parallel, following [44]. This may improve the performance, since each embedding may "fail" with some probability [27], i.e., it does not contain the active subspace even if it exists. Note that HeSBO-TS-p recommends a point of optimal posterior mean among the p GP-models; we use that point for the evaluation. The standard acquisition criterion EI used in BOCK and BOHAMIANN is replaced by (batch) TS, i.e., all methods use the same criterion which allows for a
direct comparison. Methods that attempt to learn an additive decomposition lack scalability and are thus omitted. BFGS approximates the gradient via finite differences and thus requires d+1 evaluations for each step. Furthermore, NM, BFGS, and BOBYQA are inherently sequential and therefore have an edge by leveraging all gathered observations. However, they are considerably more time consuming on a per-wall-time evaluation basis since we are working with large batches.
We supplement the optimization test problems with three additional experiments: i) one that shows that TuRBO achieves a linear speed-up from large batch sizes, ii) a comparison of local GPs and global GPs on a control problem, and iii) an analytical experiment demonstrating the locality of TuRBO. Performance plots show the mean performances with one standard error. Overall, we observe that TuRBO consistently finds excellent solutions, outperforming the other methods on most problems. Experimental results for a small budget experiment on four synthetic functions are shown in the supplement, where we also provide details on the experimental setup and runtimes for all algorithms.
3.1 Robot pushing
The robot pushing problem is a noisy 14D control problem considered in Wang et al. [45]. We run each method for a total of 10K evaluations and batch size of q = 50. TuRBO-1 and all other methods are initialized with 100 points except for TuRBO-20 where we use 50 initial points for each trust region. This is to avoid having TuRBO-20 consume its full evaluation budget on the initial points. We use HeSBO-TS-5 with target dimension 8. TuRBO-m denotes the variant of TuRBO that maintains m local models in parallel. Fig. 2 shows the results: TuRBO-1 and TuRBO-20 outperform the alternatives. TuRBO-20 starts slower since it is initialized with 1K points, but eventually outperforms TuRBO-1. CMA-ES and BOBYQA outperform the other BO methods. Note that Wang et al. [45] reported a median value of 8.3 for EBO after 30K evaluations, while TuRBO-1 achieves a mean and median reward of around 9.4 after only 2K samples.
3.2 Rover trajectory planning
Here the goal is to optimize the locations of 30 points in the 2D-plane that determine the trajectory of a rover [45]. Every algorithm is run for 200 steps with a batch size of q = 100, thus collecting a total of 20K evaluations. We use 200 initial points for all methods except for TuRBO-20, where we use 100 initial points for each region. Fig. 2 summarizes the performance. We observe that TuRBO-1 and TuRBO-20 outperform all other algorithms after a few thousand evaluations. TuRBO-20 once again starts slowly because of the initial 2K random evaluations. Wang et al. [45] reported a mean value of 1.5 for EBO after 35K evaluations, while TuRBO-1 achieves a mean and median reward of about 2 after only 1K evaluations. We use a target dimension of 10 for HeSBO-TS-15 in this experiment.
3.3 Cosmological constant learning
In the “cosmological constants” problem, the task is to calibrate a physics simulator1 to observed data. The tunable parameters include various physical constants like the density of certain types of matter and Hubble’s constant. In this paper, we use a more challenging version of the problem in [21] by tuning 12 parameters rather than 9, and by using substantially larger parameter bounds. We used 2K evaluations, a batch size of q = 50, and 50 initial points. TuRBO-5 uses 20 initial points for each local model and HeSBO-TS-4 uses a target dimension of 8. Fig. 3 (left) shows the results, with TuRBO-5 performing the best, followed by BOBYQA and TuRBO-1. TuRBO-1 sometimes converges to a bad local optimum, which deteriorates the mean performance and demonstrates the importance of allocating samples across multiple trust regions.
3.4 Lunar landing reinforcement learning
Here the goal is to learn a controller for a lunar lander implemented in the OpenAI gym2. The state space for the lunar lander is the position, angle, time derivatives, and whether or not either leg is in contact with the ground. There are four possible actions for each frame, each corresponding to firing a booster engine left, right, up, or doing nothing. The objective is to maximize the average final reward over a fixed set of 50 randomly generated terrains, initial positions, and velocities. We observed that the simulation can be sensitive to even tiny perturbations. Fig. 3 shows the results for a total of 1500 function evaluations, batch size q = 50, and 50 initial points for all algorithms except for TuRBO-5, which uses 20 initial points for each local region. For this problem, we use HeSBO-TS-3 in an 8-dimensional subspace. TuRBO-5 and TuRBO-1 learn the best controllers and, in particular, achieve better rewards than the handcrafted controller provided by OpenAI, whose performance is depicted by the blue horizontal line.
3.5 The 200-dimensional Ackley function
We examine performance on the 200-dimensional Ackley function in the domain [−5, 10]^200. We only consider TuRBO-1 because, at this large number of dimensions, there may be little benefit from using multiple TRs. EBO is excluded from the plot since its computation time exceeded 30 days per replication. HeSBO-TS-5 uses a target dimension of 20. Fig. 4 shows the results for a total of 10K function evaluations, batch size q = 100, and 200 initial points for all algorithms.
1https://lambda.gsfc.nasa.gov/toolbox/lrgdr/ 2https://gym.openai.com/envs/LunarLander-v2
HeSBO-TS-5, with a target dimension of 20, and BOBYQA perform well initially, but are eventually outperformed by TuRBO-1 that achieves the best solutions. The good performance of HeSBO-TS is particularly interesting, since this benchmark has no redundant dimensions and thus should be challenging for that embedding-based approach. This confirms similar findings in [27]. BO methods that use a global GP model over-emphasize exploration and make little progress.
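For reference, the sketch below gives a common form of the Ackley function; the constants a = 20, b = 0.2, and c = 2π are the usual defaults and are an assumption here, since the exact parametrization is not restated in this section.

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """d-dimensional Ackley function; its global minimum is 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    term1 = -a * np.exp(-b * np.sqrt(np.mean(x ** 2)))
    term2 = -np.exp(np.mean(np.cos(c * x)))
    return term1 + term2 + a + np.e

# Evaluate one random point from the 200D domain used in this experiment.
x = np.random.uniform(-5.0, 10.0, size=200)
print(ackley(x))
```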
3.6 The advantage of local models over global models
We investigate the performance of local and global GP models on the 14D robot pushing problem from Sect. 3.1. We replicate the conditions from the optimization experiments as closely as possible for a regression experiment, including for example parameter bounds. We choose 20 uniformly distributed hypercubes of (base) side length 0.4, each containing 200 uniformly distributed training points. We train a global GP on all 4000 samples, as well as a separate local GP for each hypercube. For the sake of illustration, we used an isotropic kernel for these experiments. The local GPs have the advantage of being able to learn different hyperparameters in each region while the global GP has the advantage of having access to all of the data. Fig. 5 shows the predictive performance (in log loss) on held-out data. We also show the distribution of fitted hyperparameters for both the local and global GPs. We see that the hyperparameters (especially the signal variance) vary substantially across regions. Furthermore, the local GPs perform better than the global GP in every repeated trial. The global model has an average log loss of 1.284 while the local model has an average log loss of 1.174
across 50 trials; the improvement is significant under a t-test at p < 10^-4. This experiment confirms that the local approach improves the predictive power of the models and also reduces the computational overhead of the GP. The learned local noise variance in Fig. 5 is bimodal, confirming the heteroscedasticity in the objective across regions. The global GP is forced to learn a high noise value to avoid being penalized for outliers.
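The following sketch mirrors the spirit of this regression comparison with off-the-shelf scikit-learn GPs on a synthetic stand-in objective; the robot-pushing data and the exact kernel choices are not reproduced here, and the sample counts are scaled down so the snippet runs quickly.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def avg_log_loss(y, mu, std):
    # Average negative log predictive density under the Gaussian posterior.
    var = std ** 2 + 1e-12
    return np.mean(0.5 * np.log(2 * np.pi * var) + (y - mu) ** 2 / (2 * var))

rng = np.random.default_rng(0)
d, n_regions, n_train, n_test = 14, 20, 50, 20       # scaled down from 200 points per region
objective = lambda X: np.sin(3 * X).sum(axis=1)       # stand-in for the real control objective

centers = rng.uniform(0.2, 0.8, size=(n_regions, d))
def sample_in_region(c, n):
    return c + rng.uniform(-0.2, 0.2, size=(n, d))    # hypercube of side length 0.4

train = [sample_in_region(c, n_train) for c in centers]
test = [sample_in_region(c, n_test) for c in centers]
X_all = np.concatenate(train)
y_all = objective(X_all) + 0.1 * rng.standard_normal(len(X_all))

kernel = Matern(nu=2.5) + WhiteKernel()
global_gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_all, y_all)

global_losses, local_losses = [], []
for i, c in enumerate(centers):
    Xi, Xt = train[i], test[i]
    yi = y_all[i * n_train:(i + 1) * n_train]
    yt = objective(Xt) + 0.1 * rng.standard_normal(len(Xt))
    local_gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(Xi, yi)
    mu_g, sd_g = global_gp.predict(Xt, return_std=True)
    mu_l, sd_l = local_gp.predict(Xt, return_std=True)
    global_losses.append(avg_log_loss(yt, mu_g, sd_g))
    local_losses.append(avg_log_loss(yt, mu_l, sd_l))

print("global GP log loss:", np.mean(global_losses))
print("local GPs log loss:", np.mean(local_losses))
```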
3.7 Why high-dimensional spaces are challenging
In this section, we illustrate why the restarting and banditing strategy of TuRBO is so effective. Each TR restart finds distant solutions of varying quality, which highlights the multimodal nature of the problem. This gives TuRBO-m a distinct advantage.
We ran TuRBO-1 (with a single trust region) for 50 restarts on the 60D rover trajectory planning problem from Sect. 3.2 and logged the volume of the TR and its center after each iteration. Fig. 6 shows the volume of the TR, the arclength of the TR center’s trajectory, the final objective value, and the distance each final solution has to its nearest neighbor. The left two plots confirm that, within a trust region, the optimization is indeed highly local. The volume of any given trust region decreases rapidly and is only a small fraction of the total search space. From the two plots on the right, we see that the solutions found by TuRBO are far apart with varying quality, demonstrating the value of performing multiple local search runs in parallel.
3.8 The efficiency of large batches
Recall that combining multiple samples into single batches provides substantial speed-ups in terms of wall-clock time but poses the risk of inefficiencies since sequential sampling has the advantage of leveraging more information. In this section, we investigate whether large batches are efficient for TuRBO. Note that Hernández-Lobato et al. [18] and Kandasamy et al. [22] have shown that the TS acquisition function is efficient for batch acquisition with a single global surrogate model. We study TuRBO-1 on the robot pushing problem from Sect. 3.1 with batch sizes q ∈ {1, 2, 4, . . . , 64}. The algorithm takes max{200q, 6400} samples for each batch size and we average the results over 30 replications. Fig. 7 (left) shows the reward for each batch size with respect to the number of batches: we see that larger batch sizes obtain better results for the same number of iterations. Fig. 7 (right) shows the performance as a function of evaluations. We see that the speed-up is essentially linear.
4 Conclusions
The global optimization of computationally expensive black-box functions in high-dimensional spaces is an important and timely topic [13, 27]. We proposed the TuRBO algorithm which takes a novel local approach to global optimization. Instead of fitting a global surrogate model and trading off exploration and exploitation on the whole search space, TuRBO maintains a collection of local probabilistic models. These models provide local search trajectories that are able to quickly discover excellent objective values. This local approach is complemented with a global bandit strategy that allocates samples across these trust regions, implicitly trading off exploration and exploitation. A comprehensive experimental evaluation demonstrates that TuRBO outperforms the state-of-the-art Bayesian optimization and operations research methods on a variety of real-world complex tasks.
In the future, we plan on extending TuRBO to learn local low-dimensional structure to improve the accuracy of the local Gaussian process model. This extension is particularly interesting in high-dimensional optimization when derivative information is available [10, 12, 48]. This situation often arises in engineering, where objectives are modeled by PDEs and gradients are obtained via adjoint methods, and in machine learning, where gradients are available via automatic differentiation. Ultimately, it is our hope that this work spurs interest in the merits of Bayesian local optimization, particularly in the high-dimensional setting.
2. How does the reviewer assess the novelty and originality of the paper's contribution?
3. Are there any questions or concerns regarding the experimental design and comparisons with other methods?
4. How does the reviewer evaluate the significance and impact of the paper on the field of Bayesian optimization? | Review | Review
I found this paper quite interesting and I think the contribution is quite original and appealing to the community. The paper is nicely written, easy to follow, and it is evaluated in a fair number of challenging scenarios and against multiple methods. My main criticism is the lack of comparison with previous "local" Bayesian optimization methods. Bayesian optimization with a dual (local and global) strategy, or with a locally-biased strategy, has been explored in the past by several authors. Just to give some examples:
- K. P. Wabersich and M. Toussaint: Advancing Bayesian Optimization: The Mixed-Global-Local (MGL) Kernel and Length-Scale Cool Down. NIPS Workshop on Bayesian Optimization, preprint at arxiv.org/abs/1612.03117, 2016.
- Martinez-Cantin R. Funneled Bayesian optimization for design, tuning and control of autonomous systems. IEEE Transactions on Cybernetics. 2018 Feb 27(99):1-2.
- Acerbi, L. and Ma, W. J. (2017). Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search. Proc. Advances in Neural Information Processing Systems 30 (NeurIPS '17), Long Beach, USA.
- Akrour R, Sorokin D, Peters J, Neumann G. Local Bayesian optimization of motor skills. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, 2017 Aug 6 (pp. 41-50). JMLR.org.
- McLeod, M., Roberts, S. and Osborne, M.A. (2018). Optimization, fast and slow: optimally switching between local and Bayesian optimization. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:3443-3452.
The most related to this paper are the works of Wabersich and Toussaint, and of Martinez-Cantin, which also split the model and the resources into a local and a global region. Wabersich even computes the local region in terms of the accuracy of the quadratic approximation, similar to a trust region for quadratic algorithms such as BOBYQA. Also, in the introduction, it is mentioned that "... note that the commonly used global Gaussian process models implicitly suppose that characteristic lengthscales and outputscales are the function of constants in the search space...". That is also not the case in some previous works on nonstationarity in Bayesian optimization. For example, there are nonstationary kernels like the previous work from Martinez-Cantin; warped spaces like the work of Snoek et al.; or treed GPs like the works of Taddy et al. and Assael et al.
- Snoek J, Swersky K, Zemel R, Adams R. Input warping for Bayesian optimization of non-stationary functions. In International Conference on Machine Learning, 2014 Jan 27 (pp. 1674-1682).
- Taddy MA, Lee HK, Gray GA, Griffin JD. Bayesian guided pattern search for robust local optimization. Technometrics. 2009 Nov 1;51(4):389-401.
- Assael JA, Wang Z, Shahriari B, de Freitas N. Heteroscedastic treed Bayesian optimisation. arXiv preprint arXiv:1410.7172. 2014 Oct 27.
In fact, the whole discussion about RL having sparse rewards and therefore requiring a nonstationary process is also the motivation of Martinez-Cantin's paper. Instead, the authors focus on the parallelization of the algorithm, which seems secondary, distracts from the main point of the paper, and leads to some questionable decisions in the experimental process. For example, while I praise the choice of "non-standard" algorithms for comparison, they replace the EI acquisition in BOHAMIANN and BOCK with TS, which is known to be less effective in the sequential case.
Furthermore, most of the experiments presented are actually sequential, such as the robot pushing (one would probably have a single robot) or the rover planning (those plans are computed on limited onboard computers). Regarding the experimental section, there are a couple of minor comments:
- The results of EBO for the robot experiments are quite different from the results in the original paper, especially the variance of the rover planning.
- Given that the objective of BO is sample efficiency, it would be interesting to see the results of a standard GP+EI, maybe limiting the results to a few hundred evaluations. In theory, the 60D problem should be intractable, but the 12-14D problems could be solved with a standard GP.
- Why use COBYLA instead of BOBYQA from the same author? BOBYQA should be faster as it uses quadratic functions. It assumes that the function is twice differentiable, but so does the Matérn 5/2 kernel.
- In all the problems, TuRBO gets a different number of initial samples. For example, in the rover case, all methods get 200 while TuRBO-30 gets 3000, which is one order of magnitude more. This seems unfair.
Regarding the method, the only comment I have is with respect to the bandit equation, which is purely based on the function sample and not the information/uncertainty in that region. That might result in a lack of exploration and poor global convergence, especially because the sample is based only on the local GP. Wouldn't it be better to express the bandit equation as an exploration/exploitation-dependent function such as a global acquisition function? Maybe that could explain why TuRBO requires such a large initial set. Despite all my comments, the results are quite impressive.
---- Update: Most of my concerns have been properly addressed by the authors.
NIPS | Title
Scalable Global Optimization via Local Bayesian Optimization
Abstract
Bayesian optimization has recently emerged as a popular method for the sample-efficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition. This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems. We propose the TuRBO algorithm that fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. A comprehensive evaluation demonstrates that TuRBO outperforms state-of-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences.
1 Introduction
The global optimization of high-dimensional black-box functions—where closed form expressions and derivatives are unavailable—is a ubiquitous task arising in hyperparameter tuning [36]; in reinforcement learning, when searching for an optimal parametrized policy [7]; in simulation, when calibrating a simulator to real world data; and in chemical engineering and materials discovery, when selecting candidates for high-throughput screening [18]. While Bayesian optimization (BO) has emerged as a highly competitive tool for problems with a small number of tunable parameters (e.g., see [13, 35]), it often scales poorly to high dimensions and large sample budgets. Several methods have been proposed for high-dimensional problems with small budgets of a few hundred samples (see the literature review below). However, these methods make strong assumptions about the objective function such as low-dimensional subspace structure. The recent algorithms of Wang et al. [45] and Hernández-Lobato et al. [18] are explicitly designed for a large sample budget and do not make these assumptions. However, they do not compare favorably with state-of-the-art methods from stochastic optimization like CMA-ES [17] in practice.
The optimization of high-dimensional problems is hard for several reasons. First, the search space grows exponentially with the dimension, and while local optima may become more plentiful, global optima become more difficult to find. Second, the function is often heterogeneous, making the task of fitting a global surrogate model challenging. For example, in reinforcement learning problems with sparse rewards, we expect the objective function to be nearly constant in large parts of the search space. For the latter, note that the commonly used global Gaussian process (GP) models [13, 46]
implicitly suppose that characteristic lengthscales and signal variances of the function are constant in the search space. Previous work on non-stationary kernels does not make this assumption, but these approaches are too computationally expensive to be applicable in our large-scale setting [37, 40, 3]. Finally, the fact that search spaces grow considerably faster than sampling budgets due to the curse of dimensionality implies the inherent presence of regions with large posterior uncertainty. For common myopic acquisition functions, this results in an overemphasized exploration and a failure to exploit promising areas.
To overcome these challenges, we adopt a local strategy for BO. We introduce trust region BO (TuRBO), a technique for global optimization that maintains a collection of simultaneous local optimization runs based on independent probabilistic models. Each local surrogate model enjoys the typical benefits of Bayesian modeling, namely robustness to noisy observations and rigorous uncertainty estimates; at the same time, these local surrogates allow for heterogeneous modeling of the objective function and do not suffer from over-exploration. To optimize globally, we leverage an implicit multi-armed bandit strategy at each iteration to allocate samples between these local areas and thus decide which local optimization runs to continue.
We provide a comprehensive experimental evaluation demonstrating that TuRBO outperforms the state-of-the-art from BO, evolutionary methods, simulation optimization, and stochastic optimization on a variety of benchmarks that span from reinforcement learning to robotics and natural sciences. An implementation of TuRBO is available at https://github.com/uber-research/TuRBO.
1.1 Related work
BO has recently become the premier technique for global optimization of expensive functions, with applications in hyperparameter tuning, aerospace design, chemical engineering, and materials discovery; see [13, 35] for an overview. However, most of BO’s successes have been on low-dimensional problems and small sample budgets. This is not for a lack of trying; there have been many attempts to scale BO to more dimensions and observations. A common approach is to replace the GP model: Hutter et al. [19] uses random forests, whereas Snoek et al. [38] applies Bayesian linear regression on features from neural networks. This neural network approach was refined by Springenberg et al. [39] whose BOHAMIANN algorithm uses a modified Hamiltonian Monte Carlo method, which is more robust and scalable than standard Bayesian neural networks. Hernández-Lobato et al. [18] combines Bayesian neural networks with Thompson sampling (TS), which easily scales to large batch sizes. We will return to this acquisition function later.
There is a considerable body of work in high-dimensional BO [8, 21, 5, 44, 14, 45, 32, 26, 27, 6]. Many methods exist that exploit potential additive structure in the objective function [21, 14, 45]. These methods typically rely on training a large number of GPs (corresponding to different additive structures) and therefore do not scale to large evaluation budgets. Other methods exist that rely on a mapping between the high-dimensional space and an unknown low-dimensional subspace to scale to large numbers of observations [44, 27, 15]. The BOCK algorithm of Oh et al. [29] uses a cylindrical transformation of the search space to achieve scalability to high dimensions. Ensemble Bayesian optimization (EBO) [45] uses an ensemble of additive GPs together with a batch acquisition function to scale BO to tens of thousands of observations and high-dimensional spaces. Recently, Nayebi et al. [27] have proposed the general HeSBO framework that extends GP-based BO algorithms to high-dimensional problems using a novel subspace embedding that overcomes the limitations of the Gaussian projections used in [44, 5, 6]. From this area of research, we compare to BOCK, BOHAMIANN, EBO, and HeSBO.
To acquire large numbers of observations, large-scale BO usually selects points in batches to be evaluated in parallel. While several batch acquisition functions have recently been proposed [9, 34, 43, 47, 48, 24, 16], these approaches do not scale to large batch sizes in practice. TS [41] is particularly lightweight and easy to implement as a batch acquisition function as the computational cost scales linearly with the batch size. Although originally developed for bandit problems [33], it has recently shown its value in BO [18, 4, 22]. In practice, TS is usually implemented by drawing a realization of the unknown objective function from the surrogate model’s posterior on a discretized search space. Then, TS finds the optimum of the realization and evaluates the objective function at that location. This technique is easily extended to batches by drawing multiple realizations (see the supplementary material for details).
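A minimal sketch of this discretized batch Thompson sampling step is given below; it assumes a scikit-learn GP surrogate and a user-supplied candidate set, which are illustrative choices rather than the setup used by any specific method above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def batch_thompson_sampling(X_obs, y_obs, candidates, q, seed=0):
    """Pick a batch of q points by Thompson sampling on a discretized set.

    Draws q posterior realizations over the candidate set and takes the
    minimizer of each realization, as described in the text.
    """
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    samples = gp.sample_y(candidates, n_samples=q, random_state=seed)  # shape (n_cand, q)
    batch_idx = samples.argmin(axis=0)
    return candidates[batch_idx]

# Toy usage on a 1D function (for illustration only).
rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, size=(10, 1))
y_obs = np.sin(6 * X_obs[:, 0]) + 0.1 * rng.standard_normal(10)
cand = np.linspace(0, 1, 500).reshape(-1, 1)
print(batch_thompson_sampling(X_obs, y_obs, cand, q=5))
```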
Evolutionary algorithms are a popular approach for optimizing black-box functions when thousands of evaluations are available, see Jin et al. [20] for an overview in stochastic settings. We compare to the successful covariance matrix adaptation evolution strategy (CMA-ES) of Hansen [17]. CMA-ES performs a stochastic search and maintains a multivariate normal sampling distribution over the search space. The evolutionary techniques of recombination and mutation correspond to adaptions of the mean and covariance matrix of that distribution.
High-dimensional problems with large sample budgets have also been studied extensively in operations research and simulation optimization, see [11] for a survey. Here the successful trust region (TR) methods are based on a local surrogate model in a region (often a sphere) around the best solution. The trust region is expanded or shrunk depending on the improvement in obtained solutions; see Yuan [49] for an overview. We compare to BOBYQA [31], a state-of-the-art TR method that uses a quadratic approximation of the objective function. We also include the Nelder-Mead (NM) algorithm [28]. For a d-dimensional space, NM creates a simplex with d+1 vertices that adaptively moves along the surface by projecting the vertex with the worst function value through the center of the simplex spanned by the remaining vertices. Finally, we also consider the popular quasi-Newton method BFGS [50], where gradients are obtained using finite differences. For other work that uses local surrogate models, see e.g., [23, 42, 1, 2, 25].
2 The trust region Bayesian optimization algorithm
In this section, we propose an algorithm for optimizing high-dimensional black-box functions. In particular, suppose that we wish to solve:
Find x∗ ∈ Ω such that f(x∗) ≤ f(x), ∀x ∈ Ω,
where f : Ω → R and Ω = [0, 1]^d. We observe potentially noisy values y(x) = f(x) + ε, where ε ∼ N(0, σ²). BO relies on the ability to construct a global model that is eventually accurate enough to uncover a global optimizer. As discussed previously, this is challenging due to the curse of dimensionality and the heterogeneity of the function. To address these challenges, we propose to abandon global surrogate modeling, and achieve global optimization by maintaining several independent local models, each involved in a separate local optimization run. To achieve global optimization in this framework, we maintain multiple local models simultaneously and allocate samples via an implicit multi-armed bandit approach. This yields an efficient acquisition strategy that directs samples towards promising local optimization runs. We begin by detailing a single local optimization run, and then discuss how multiple runs are managed.
Local modeling. To achieve principled local optimization in the gradient-free setting, we draw inspiration from a class of TR methods from stochastic optimization [49]. These methods make suggestions using a (simple) surrogate model inside a TR. The region is often a sphere or a polytope centered at the best solution, within which the surrogate model is believed to accurately model the function. For example, the popular COBYLA [30] method approximates the objective function using a local linear model. Intuitively, while linear and quadratic surrogates are likely to be inadequate models globally, they can be accurate in a sufficiently small TR. However, there are two challenges with traditional TR methods. First, deterministic examples such as COBYLA are notorious for handling noisy observations poorly. Second, simple surrogate models might require overly small trust regions to provide accurate modeling behavior. Therefore, we will use GP surrogate models within a TR. This allows us to inherit the robustness to noise and rigorous reasoning about uncertainty that global BO enjoys.
Trust regions. We choose our TR to be a hyperrectangle centered at the best solution found so far, denoted by x⋆. In the noise-free case, we set x⋆ to the location of the best observation so far. In the presence of noise, we use the observation with the smallest posterior mean under the surrogate model. At the beginning of a given local optimization run, we initialize the base side length of the TR to L ← Linit. The actual side length for each dimension is obtained from this base side length by rescaling according to its lengthscale λ_i in the GP model while maintaining a total volume of L^d. That is, L_i = λ_i L / (∏_{j=1}^d λ_j)^{1/d}. To perform a single local optimization run, we utilize an acquisition function at each iteration t to select a batch of q candidates {x_1^(t), . . . , x_q^(t)}, restricted to be within the TR. If L was large enough for the TR to contain the whole space, this would be
equivalent to running standard global BO. Therefore, the evolution of L is critical. On the one hand, a TR should be sufficiently large to contain good solutions. On the other hand, it should be small enough to ensure that the local model is accurate within the TR. The typical behavior is to expand a TR when the optimizer “makes progress”, i.e., it finds better solutions in that region, and shrink it when the optimizer appears stuck. Therefore, following, e.g., Nelder and Mead [28], we will shrink a TR after too many consecutive “failures”, and expand it after many consecutive “successes”. We define a “success” as a candidate that improves upon x⋆, and a “failure” as a candidate that does not. After τsucc consecutive successes, we double the size of the TR, i.e., L ← min{Lmax, 2L}. After τfail consecutive failures, we halve the size of the TR: L ← L/2. We reset the success and failure counters to zero after we change the size of the TR. Whenever L falls below a given minimum threshold Lmin, we discard the respective TR and initialize a new one with side length Linit. Additionally, we do not let the side length expand to be larger than a maximum threshold Lmax. Note that τsucc, τfail, Lmin, Lmax, and Linit are hyperparameters of TuRBO; see the supplementary material for the values used in the experimental evaluation.
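The following sketch illustrates this side-length bookkeeping for a single trust region; the numeric values of Linit, Lmin, Lmax, τsucc, and τfail below are placeholders, since the paper reports its actual choices in the supplementary material.

```python
import numpy as np

class TrustRegionState:
    """Bookkeeping for one trust region (illustrative sketch, not the authors' code)."""

    def __init__(self, dim, L_init=0.8, L_min=0.5 ** 7, L_max=1.6,
                 tau_succ=3, tau_fail=None):
        self.dim = dim
        self.L, self.L_init, self.L_min, self.L_max = L_init, L_init, L_min, L_max
        self.tau_succ = tau_succ
        self.tau_fail = tau_fail if tau_fail is not None else dim  # placeholder default
        self.n_succ = self.n_fail = 0

    def side_lengths(self, lengthscales):
        # Rescale the base side length per dimension by the GP lengthscales
        # while keeping the total volume equal to L^d.
        ls = np.asarray(lengthscales, dtype=float)
        return ls * self.L / np.prod(ls) ** (1.0 / self.dim)

    def update(self, improved):
        """Update counters after one batch; returns True if the TR should restart."""
        if improved:
            self.n_succ += 1
            self.n_fail = 0
        else:
            self.n_fail += 1
            self.n_succ = 0
        if self.n_succ >= self.tau_succ:        # expand after enough successes
            self.L = min(self.L_max, 2.0 * self.L)
            self.n_succ = 0
        elif self.n_fail >= self.tau_fail:      # shrink after enough failures
            self.L /= 2.0
            self.n_fail = 0
        if self.L < self.L_min:                 # discard and restart this TR
            self.L = self.L_init
            return True
        return False
```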
Trust region Bayesian optimization. So far, we have detailed a single local BO strategy using a TR method. Intuitively, we could make this algorithm (more) global by random restarts. However, from a probabilistic perspective, this is likely to utilize our evaluation budget inefficiently. Just as we reason about which candidates are most promising within a local optimization run, we can reason about which local optimization run is “most promising.”
Therefore, TuRBO maintains m trust regions simultaneously. Each trust region TR_ℓ with ℓ ∈ {1, . . . , m} is a hyperrectangle of base side length L_ℓ ≤ Lmax, and utilizes an independent local GP model. This gives rise to a classical exploitation-exploration trade-off that we model by a multi-armed bandit that treats each TR as a lever. Note that this provides an advantage over traditional TR algorithms in that TuRBO puts a stronger emphasis on promising regions.
In each iteration, we need to select a batch of q candidates drawn from the union of all trust regions, and update all local optimization problems for which candidates were drawn. To solve this problem, we find that TS provides a principled solution to both the problem of selecting candidates within a single TR, and selecting candidates across the set of trust regions simultaneously. To select the i-th candidate from across the trust regions, we draw a realization of the posterior function from the local GP within each TR: f_ℓ^(i) ∼ GP_ℓ^(t)(µ_ℓ(x), k_ℓ(x, x′)), where GP_ℓ^(t) is the GP posterior for TR_ℓ at iteration t. We then select the i-th candidate such that it minimizes the function value across all m samples and all trust regions:

x_i^(t) ∈ argmin_ℓ argmin_{x ∈ TR_ℓ} f_ℓ^(i)(x), where f_ℓ^(i) ∼ GP_ℓ^(t)(µ_ℓ(x), k_ℓ(x, x′)).
That is, we select the point with the smallest function value after concatenating a Thompson sample from each TR, for i = 1, . . . , q. We refer to the supplementary material for additional details.
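A sketch of this cross-region selection rule is shown below; it assumes one fitted GP and one discretized candidate set per trust region and is an illustration of the argmin-over-regions rule rather than the authors' implementation.

```python
import numpy as np

def select_candidate_across_regions(local_gps, tr_candidate_sets, rng):
    """Select one candidate by Thompson sampling across all trust regions.

    local_gps[l] is a fitted scikit-learn GP for trust region l and
    tr_candidate_sets[l] is a discretized set of points inside that region.
    One posterior realization is drawn per region and the overall minimizer
    is returned, mirroring the argmin-over-regions rule above.
    """
    best_value, best_point = np.inf, None
    for gp, cands in zip(local_gps, tr_candidate_sets):
        sample = gp.sample_y(cands, n_samples=1,
                             random_state=int(rng.integers(1 << 31))).ravel()
        idx = sample.argmin()
        if sample[idx] < best_value:
            best_value, best_point = sample[idx], cands[idx]
    return best_point

# A batch of q points is obtained by calling this function q times with fresh draws.
```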
3 Numerical experiments
In this section, we evaluate TuRBO on a wide range of problems: a 14D robot pushing problem, a 60D rover trajectory planning problem, a 12D cosmological constant estimation problem, a 12D lunar landing reinforcement learning problem, and a 200D synthetic problem. All problems are multimodal and challenging for many global optimization algorithms. We consider a variety of batch sizes and evaluation budgets to fully examine the performance and robustness of TuRBO. The values of τsucc, τfail, Lmin, Lmax, and Linit are given in the supplementary material.
We compare TuRBO to a comprehensive selection of state-of-the-art baselines: BFGS, BOCK, BOHAMIANN, CMA-ES, BOBYQA, EBO, GP-TS, HeSBO-TS, Nelder-Mead (NM), and random search (RS). Here, GP-TS refers to TS with a global GP model using the Matérn-5/2 kernel. HeSBO-TS combines GP-TS with a subspace embedding and thus effectively optimizes in a low-dimensional space; this target dimension is set by the user. Therefore, a small sample budget may suffice, which allows running p invocations in parallel, following [44]. This may improve the performance, since each embedding may "fail" with some probability [27], i.e., it does not contain the active subspace even if it exists. Note that HeSBO-TS-p recommends the point with the best posterior mean among the p GP models; we use that point for the evaluation. The standard acquisition criterion EI used in BOCK and BOHAMIANN is replaced by (batch) TS, i.e., all methods use the same criterion, which allows for a
direct comparison. Methods that attempt to learn an additive decomposition lack scalability and are thus omitted. BFGS approximates the gradient via finite differences and thus requires d+1 evaluations for each step. Furthermore, NM, BFGS, and BOBYQA are inherently sequential and therefore have an edge by leveraging all gathered observations. However, they are considerably more time consuming in terms of wall-clock time per evaluation since we are working with large batches.
We supplement the optimization test problems with three additional experiments: i) one that shows that TuRBO achieves a linear speed-up from large batch sizes, ii) a comparison of local GPs and global GPs on a control problem, and iii) an analytical experiment demonstrating the locality of TuRBO. Performance plots show the mean performances with one standard error. Overall, we observe that TuRBO consistently finds excellent solutions, outperforming the other methods on most problems. Experimental results for a small budget experiment on four synthetic functions are shown in the supplement, where we also provide details on the experimental setup and runtimes for all algorithms.
3.1 Robot pushing
The robot pushing problem is a noisy 14D control problem considered in Wang et al. [45]. We run each method for a total of 10K evaluations and a batch size of q = 50. TuRBO-m denotes the variant of TuRBO that maintains m local models in parallel. TuRBO-1 and all other methods are initialized with 100 points, except for TuRBO-20 where we use 50 initial points for each trust region. This is to avoid having TuRBO-20 consume its full evaluation budget on the initial points. We use HeSBO-TS-5 with target dimension 8. Fig. 2 shows the results: TuRBO-1 and TuRBO-20 outperform the alternatives. TuRBO-20 starts slower since it is initialized with 1K points, but eventually outperforms TuRBO-1. CMA-ES and BOBYQA outperform the other BO methods. Note that Wang et al. [45] reported a median value of 8.3 for EBO after 30K evaluations, while TuRBO-1 achieves a mean and median reward of around 9.4 after only 2K samples.
3.2 Rover trajectory planning
Here the goal is to optimize the locations of 30 points in the 2D-plane that determine the trajectory of a rover [45]. Every algorithm is run for 200 steps with a batch size of q = 100, thus collecting a total of 20K evaluations. We use 200 initial points for all methods except for TuRBO-20, where we use 100 initial points for each region. Fig. 2 summarizes the performance. We observe that TuRBO-1 and TuRBO-20 outperform all other algorithms after a few thousand evaluations. TuRBO-20 once again starts slowly because of the initial 2K random evaluations. Wang et al. [45] reported a mean value of 1.5 for EBO after 35K evaluations, while TuRBO-1 achieves a mean and median reward of about 2 after only 1K evaluations. We use a target dimension of 10 for HeSBO-TS-15 in this experiment.
3.3 Cosmological constant learning
In the “cosmological constants” problem, the task is to calibrate a physics simulator1 to observed data. The tunable parameters include various physical constants like the density of certain types of matter and Hubble’s constant. In this paper, we use a more challenging version of the problem in [21] by tuning 12 parameters rather than 9, and by using substantially larger parameter bounds. We used 2K evaluations, a batch size of q = 50, and 50 initial points. TuRBO-5 uses 20 initial points for each local model and HeSBO-TS-4 uses a target dimension of 8. Fig. 3 (left) shows the results, with TuRBO-5 performing the best, followed by BOBYQA and TuRBO-1. TuRBO-1 sometimes converges to a bad local optimum, which deteriorates the mean performance and demonstrates the importance of allocating samples across multiple trust regions.
3.4 Lunar landing reinforcement learning
Here the goal is to learn a controller for a lunar lander implemented in the OpenAI gym2. The state space for the lunar lander is the position, angle, time derivatives, and whether or not either leg is in contact with the ground. There are four possible actions for each frame, each corresponding to firing a booster engine left, right, up, or doing nothing. The objective is to maximize the average final reward over a fixed set of 50 randomly generated terrains, initial positions, and velocities. We observed that the simulation can be sensitive to even tiny perturbations. Fig. 3 shows the results for a total of 1500 function evaluations, batch size q = 50, and 50 initial points for all algorithms except for TuRBO-5, which uses 20 initial points for each local region. For this problem, we use HeSBO-TS-3 in an 8-dimensional subspace. TuRBO-5 and TuRBO-1 learn the best controllers and, in particular, achieve better rewards than the handcrafted controller provided by OpenAI, whose performance is depicted by the blue horizontal line.
3.5 The 200-dimensional Ackley function
We examine performance on the 200-dimensional Ackley function in the domain [−5, 10]^200. We only consider TuRBO-1 because, at this large number of dimensions, there may be little benefit from using multiple TRs. EBO is excluded from the plot since its computation time exceeded 30 days per replication. HeSBO-TS-5 uses a target dimension of 20. Fig. 4 shows the results for a total of 10K function evaluations, batch size q = 100, and 200 initial points for all algorithms.
1https://lambda.gsfc.nasa.gov/toolbox/lrgdr/ 2https://gym.openai.com/envs/LunarLander-v2
HeSBO-TS-5, with a target dimension of 20, and BOBYQA perform well initially, but are eventually outperformed by TuRBO-1 that achieves the best solutions. The good performance of HeSBO-TS is particularly interesting, since this benchmark has no redundant dimensions and thus should be challenging for that embedding-based approach. This confirms similar findings in [27]. BO methods that use a global GP model over-emphasize exploration and make little progress.
3.6 The advantage of local models over global models
We investigate the performance of local and global GP models on the 14D robot pushing problem from Sect. 3.1. We replicate the conditions from the optimization experiments as closely as possible for a regression experiment, including for example parameter bounds. We choose 20 uniformly distributed hypercubes of (base) side length 0.4, each containing 200 uniformly distributed training points. We train a global GP on all 4000 samples, as well as a separate local GP for each hypercube. For the sake of illustration, we used an isotropic kernel for these experiments. The local GPs have the advantage of being able to learn different hyperparameters in each region while the global GP has the advantage of having access to all of the data. Fig. 5 shows the predictive performance (in log loss) on held-out data. We also show the distribution of fitted hyperparameters for both the local and global GPs. We see that the hyperparameters (especially the signal variance) vary substantially across regions. Furthermore, the local GPs perform better than the global GP in every repeated trial. The global model has an average log loss of 1.284 while the local model has an average log loss of 1.174
across 50 trials; the improvement is significant under a t-test at p < 10^-4. This experiment confirms that the local approach improves the predictive power of the models and also reduces the computational overhead of the GP. The learned local noise variance in Fig. 5 is bimodal, confirming the heteroscedasticity in the objective across regions. The global GP is forced to learn a high noise value to avoid being penalized for outliers.
3.7 Why high-dimensional spaces are challenging
In this section, we illustrate why the restarting and banditing strategy of TuRBO is so effective. Each TR restart finds distant solutions of varying quality, which highlights the multimodal nature of the problem. This gives TuRBO-m a distinct advantage.
We ran TuRBO-1 (with a single trust region) for 50 restarts on the 60D rover trajectory planning problem from Sect. 3.2 and logged the volume of the TR and its center after each iteration. Fig. 6 shows the volume of the TR, the arclength of the TR center’s trajectory, the final objective value, and the distance each final solution has to its nearest neighbor. The left two plots confirm that, within a trust region, the optimization is indeed highly local. The volume of any given trust region decreases rapidly and is only a small fraction of the total search space. From the two plots on the right, we see that the solutions found by TuRBO are far apart with varying quality, demonstrating the value of performing multiple local search runs in parallel.
3.8 The efficiency of large batches
Recall that combining multiple samples into single batches provides substantial speed-ups in terms of wall-clock time but poses the risk of inefficiencies since sequential sampling has the advantage of leveraging more information. In this section, we investigate whether large batches are efficient for TuRBO. Note that Hernández-Lobato et al. [18] and Kandasamy et al. [22] have shown that the TS acquisition function is efficient for batch acquisition with a single global surrogate model. We study TuRBO-1 on the robot pushing problem from Sect. 3.1 with batch sizes q ∈ {1, 2, 4, . . . , 64}. The algorithm takes max{200q, 6400} samples for each batch size and we average the results over 30 replications. Fig. 7 (left) shows the reward for each batch size with respect to the number of batches: we see that larger batch sizes obtain better results for the same number of iterations. Fig. 7 (right) shows the performance as a function of evaluations. We see that the speed-up is essentially linear.
4 Conclusions
The global optimization of computationally expensive black-box functions in high-dimensional spaces is an important and timely topic [13, 27]. We proposed the TuRBO algorithm which takes a novel local approach to global optimization. Instead of fitting a global surrogate model and trading off exploration and exploitation on the whole search space, TuRBO maintains a collection of local probabilistic models. These models provide local search trajectories that are able to quickly discover excellent objective values. This local approach is complemented with a global bandit strategy that allocates samples across these trust regions, implicitly trading off exploration and exploitation. A comprehensive experimental evaluation demonstrates that TuRBO outperforms the state-of-the-art Bayesian optimization and operations research methods on a variety of real-world complex tasks.
In the future, we plan on extending TuRBO to learn local low-dimensional structure to improve the accuracy of the local Gaussian process model. This extension is particularly interesting in high-dimensional optimization when derivative information is available [10, 12, 48]. This situation often arises in engineering, where objectives are modeled by PDEs and gradients are obtained via adjoint methods, and in machine learning, where gradients are available via automatic differentiation. Ultimately, it is our hope that this work spurs interest in the merits of Bayesian local optimization, particularly in the high-dimensional setting.
2. What are the strengths and weaknesses of the proposed TuRBO strategy, particularly regarding its ability to handle high dimensions and large numbers of queries?
3. How does the reviewer assess the clarity and definition of the term "heterogeneous function"?
4. What are the minor suggestions for improving the abstract and the description of the overall acquisition strategy?
5. Has the author considered whether their procedure can be viewed as "regular BO", and what are the implications of this? | Review | Review
Summary: This paper proposes a new Bayesian optimization strategy called TuRBO, which aims to perform global optimization via a set of local Bayesian optimization routines. The goal of TuRBO is to show good performance both in high dimensions and with large numbers of queries/observations. This strategy uses trust region methods to adaptively constrain the domain, while using a multi-armed bandit strategy to choose between different local optimizers. A Thompson sampling approach is used to select a subsequent point given the multiple trust regions.
Comments:
> My main criticism is that the empirical results don't seem to go up to very high dimensions. In the empirical results, three of the four tasks are from 10-20 dimensions, while one task is in 60 dimensions. In some of the high-dimensional BO papers listed in the related work, tasks are shown from 50-120 dimensions.
> It would be great to explicitly define or clarify what is meant by a "heterogeneous function". There is a brief description involving reinforcement learning problems (in Section 1). However, I feel that this does not provide a clear description or definition of what the authors mean.
> Minor: the abstract has the line "the application to high-dimensional problems with several thousand observations remains challenging". At first pass, the phrasing here makes it seem like you are defining "high-dimensional problems" as those with "several thousand observations", while I think you actually mean the setting with both a high-dimensional design space and several thousand observations. Re-phrasing this could improve the clarity.
> The overall acquisition strategy (described at the end of Section 2) is to concatenate the Thompson samples from all of the local models and choose the minimum. It therefore seems like this algorithm might be described as doing ("regular") Thompson sampling over some type of approximate Bayesian model (e.g. some sort of piecewise model defined on independent and adaptively growing trust regions). Have the authors considered whether their procedure can be viewed as "regular BO", i.e. standard Thompson sampling in a sophisticated model?
---------- Update after author response ----------
Thank you for including a high-dimensional (200D) result in the author response. This seems to show good performance of TuRBO in the large-iteration regime. I have therefore bumped my score up to a 7.
NIPS | Title
Structural Causal Bandits: Where to Intervene?
Abstract
We study the problem of identifying the best action in a sequential decision-making setting when the reward distributions of the arms exhibit a non-trivial dependence structure, which is governed by the underlying causal model of the domain where the agent is deployed. In this setting, playing an arm corresponds to intervening on a set of variables and setting them to specific values. In this paper, we show that whenever the underlying causal model is not taken into account during the decision-making process, the standard strategies of simultaneously intervening on all variables or on all the subsets of the variables may, in general, lead to suboptimal policies, regardless of the number of interventions performed by the agent in the environment. We formally acknowledge this phenomenon and investigate structural properties implied by the underlying causal model, which lead to a complete characterization of the relationships between the arms’ distributions. We leverage this characterization to build a new algorithm that takes as input a causal structure and finds a minimal, sound, and complete set of qualified arms that an agent should play to maximize its expected reward. We empirically demonstrate that the new strategy learns an optimal policy and leads to orders of magnitude faster convergence rates when compared with its causal-insensitive counterparts.
1 Introduction
The multi-armed bandit (MAB) problem is one of the prototypical settings studied in the sequential decision-making literature [Lai and Robbins, 1985, Even-Dar et al., 2006, Bubeck and Cesa-Bianchi, 2012]. An agent needs to decide which arm to pull and receives a corresponding reward at each time step while keeping the goal of maximizing its cumulative reward in the long run. The challenge is the inherent trade-off between exploiting known arms versus exploring new reward opportunities [Sutton and Barto, 1998, Szepesvári, 2010]. There is a wide range of assumptions underlying MABs, but in most of the traditional settings, the arms’ rewards are assumed to be independent, which means that knowing the reward distribution of one arm has no implication for the rewards of the other arms. Many strategies were developed to solve this problem, including classic algorithms such as ε-greedy, variants of UCB (Auer et al., 2002, Cappé et al., 2013), and Thompson sampling [Thompson, 1933].
Recently, the existence of some non-trivial dependencies among arms has been acknowledged in the literature and studied under the rubric of structured bandits, which include settings such as linear [Dani et al., 2008], combinatorial [Cesa-Bianchi and Lugosi, 2012], unimodal [Combes and Proutiere, 2014], and Lipschitz [Magureanu et al., 2014], just to name a few. For example, a linear (or combinatorial) bandit imposes that an action x_t ∈ R^d (or {0, 1}^d) at a time step t incurs a cost ℓ_t^⊤ x_t, where ℓ_t is a loss vector chosen by, e.g., an adversary. In this case, an index-based MAB algorithm, oblivious to the structural properties, can be suboptimal.
In another line of investigation, rich environments with complex dependency structures are modeled explicitly through the use of causal graphs, where nodes represent decisions and outcome variables, and direct edges represent direct influence of one variable on another [Pearl, 2000]. Despite the
apparent connection between MABs and causality, only recently has the use of causal reasoning been incorporated into the design of MAB algorithms. For instance, Bareinboim et al. [2015] first explored the connection between causal models with unobserved confounders (UCs) and reinforcement learning, where latent factors affect both the reward distribution and the player’s intuition. The key observation used in the paper is that while standard MAB algorithms optimize based on the do-distribution (formally written as E[Y | do(X)] or E[Y_x]), the simplest type of counterfactuals, this approach is dominated by another strategy using a more detailed counterfactual as the basis of the optimization process (i.e., E[Y_x | X = x′]); this general strategy was called regret decision criterion (RDC). This strategy was later extended to handle counterfactual distributions of higher dimensionality by Forney et al. [2017]. Further, Lattimore et al. [2016] and Sen et al. [2017] studied the problem of best arm identification through importance weighting, where information on how playing arms influences the direct causes (parents, in causal terminology) of a reward variable is available. Zhang and Bareinboim [2017] leveraged causal graphs to solve the problem of off-policy evaluation in the presence of UCs. They noted that whenever UCs are present, traditional off-policy methods can be arbitrarily biased, leading to linear regret. They then showed how to solve the off-policy evaluation problem by incorporating the causal bounds into the decision-making procedure.1 Overall, these works showed different aspects of the same phenomenon — whenever UCs are present in the real world, the expected guarantees provided by standard methods are no longer valid, which translates to an inability to converge to any reasonable policy. They then showed that convergence can be restored once the causal structure is acknowledged and used during the decision-making process.
In this paper, we focus on the challenge of identifying the best action in MABs where the arms correspond to interventions on an arbitrary causal graph, including when latent variables confound the observed relations (i.e., semi-Markovian causal models). To understand this challenge, we first note that a standard MAB can be seen as the simple causal model shown in Fig. 1a, where X represents an arm (with K different values), Y the reward variable, and U the unobserved variable that generates the randomness of Y.2 After a sufficiently large number of pulls of X (chosen by the specific algorithm), Y’s average reward can be determined with high confidence.
Whenever a set of UCs affects more than one observed variable, however, novel, non-trivial challenges arise. To witness, consider the more involved MAB structure shown in Fig. 1b, where an unobserved confounder U affects both the action variable X1 and the reward Y. A naive approach for an algorithm to play such a bandit would be to pull arms in a combinatorial manner, i.e., combining both variables (X1 × X2) so that the arms are D(X1) × D(X2), where D(X) is the domain of X. One may surmise that this is a valid strategy, albeit not the most efficient one. Somewhat unexpectedly, however, Fig. 1c shows that this is not the case — the optimal action comes from pulling X2 and ignoring X1, while pulling {X1, X2} together would lead to subpar cumulative rewards (regardless of the number of iterations) since it simply cannot pull the optimal arm (Fig. 1d).
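The following sketch reproduces this phenomenon in a toy SCM; the specific parametrization is invented for illustration and is not the one behind Fig. 1c, but it exhibits the same effect: the best arm over D(X1) × D(X2) is strictly worse than do(X2 = 1) alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_reward(x2, x1=None, n=200_000):
    """Monte Carlo estimate of E[Y | do(.)] in a toy SCM with U -> X1 and U -> Y.

    Illustrative parametrization only: U ~ Bernoulli(1/2); naturally X1 <- U;
    Y ~ Bernoulli(0.9) if X1 == U and X2 == 1, else Bernoulli(0.2).
    Intervening on X1 cuts the X1 <- U edge, so X1 then matches U only half the time.
    """
    u = rng.integers(0, 2, size=n)
    x1_vals = u if x1 is None else np.full(n, x1)   # do(X1 = x1) ignores U
    p = np.where((x1_vals == u) & (x2 == 1), 0.9, 0.2)
    return (rng.random(n) < p).mean()

print("do(X2=1)        :", sample_reward(x2=1))     # about 0.9, the optimal arm
for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"do(X1={x1}, X2={x2}):", sample_reward(x2=x2, x1=x1))  # at most about 0.55
```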
1On another line of investigation, Ortega and Braun [2014] introduced a generalized version of Thompson sampling applied to the problem of adaptive control.
2In causal notation, Y ← f_Y(U, X), which means that Y’s value is determined by X and the realization of the latent variable U. If f_Y is linear, we would have a (stochastic) linear bandit. Our results do not constrain the types of structural functions, as is usual in nonparametric causal inference [Pearl, 2000, Ch. 7].
In this paper, we investigate this phenomenon, and more broadly, causal MABs with non-trivial dependency structure between the arms. More specifically, our contributions are as follows: (1) We formulate a SCM-MAB problem, which is a structured multi-armed bandit instance within the causal framework. We then derive the structural properties of a SCM-MAB, which are computable from any causal model, including arms’ equivalence based on do-calculus [Pearl, 1995], and a partial order among the sets of variables associated with arms with respect to the maximum rewards achievable. (2) We characterize a special set of variables called POMIS (possibly-optimal minimal intervention set), which are worth intervening on, based on the aforementioned partial orders. We then introduce an algorithm that identifies a complete set of POMISs so that only the subset of arms associated with them can be explored in a MAB algorithm. Simulations corroborate our findings.
Big picture The multi-armed bandit is a rich setting in which a huge number of variants has been studied in the literature. Different aspects of the decision-making process have been analyzed and well-understood in the last decades, which include different functional forms (e.g., linear, Lipschitz, Gaussian process), types of feedback experienced by the agent (bandit, semi-bandit, full), the adversarial or i.i.d. nature of the interactions, just to cite some of the most popular ones. Our study of SCM-MABs puts the causal dimension front and center in the map. In particular, we fully acknowledge the existence of a causal structure among the underlying variables (whenever not known a priori, see Footnote 3), and leverage the qualitative relations among them. This is in clear contrast with the prevailing practice that is more quantitative and, almost invariably, is oblivious to the underlying causal structure (as shown in Fig. 1a). We outline in
Fig. 2 an initial map that shows the relationship between these dimensions; our goal here is not to be exhaustive, nor prescriptive, but to help to give some perspective. In this paper, we study bandits with no constraints over the underlying functional form (nonparametric, in causality language), i.i.d. stochastic rewards, and with an explicit causal structure acknowledged by the agent.
Preliminaries: notations and structural causal models
We follow the notation used in the causal inference literature. A capital letter is used for a variable or a mathematical object. The domain of X is denoted by $D(X)$. A bold capital letter is for a set of variables, e.g., $\mathbf{X} = \{X_i\}_{i=1}^{n}$, while a lowercase letter $x \in D(X)$ is a value assigned to X, and $\mathbf{x} \in D(\mathbf{X}) = \times_{X \in \mathbf{X}} D(X)$. We denote by $\mathbf{x}[\mathbf{W}]$ the values of $\mathbf{x}$ corresponding to $\mathbf{W} \cap \mathbf{X}$. A graph $G = \langle \mathbf{V}, \mathbf{E} \rangle$ is a pair of vertices $\mathbf{V}$ and edges $\mathbf{E}$. We adopt family relationships — pa, ch, an, and de to denote parents, children, ancestors, and descendants of a given variable; Pa, Ch, An, and De extend pa, ch, an, and de by including the argument in the result, e.g., $Pa(X)_G = pa(X)_G \cup \{X\}$. With a set of variables as argument, $pa(\mathbf{X})_G = \bigcup_{X \in \mathbf{X}} pa(X)_G$, and the other relations are defined similarly. We denote by $V(G)$ the set of variables in G. $G[\mathbf{V}']$ for $\mathbf{V}' \subseteq V(G)$ is a vertex-induced subgraph where all edges among $\mathbf{V}'$ are preserved. We define $G \setminus \mathbf{X}$ as $G[V(G) \setminus \mathbf{X}]$ for $\mathbf{X} \subseteq V(G)$.
We adopt the language of Structural Causal Models (SCM) [Pearl, 2000, Ch. 7]. An SCM M is a tuple $\langle \mathbf{U}, \mathbf{V}, \mathbf{F}, P(\mathbf{U}) \rangle$, where $\mathbf{U}$ is a set of exogenous (unobserved or latent) variables and $\mathbf{V}$ is a set of endogenous (observed) variables. $\mathbf{F}$ is a set of deterministic functions $\mathbf{F} = \{f_i\}$, where $f_i$ determines the value of $V_i \in \mathbf{V}$ based on endogenous variables $\mathbf{PA}_i \subseteq \mathbf{V} \setminus \{V_i\}$ and exogenous variables $\mathbf{U}_i \subseteq \mathbf{U}$, that is, $v_i \leftarrow f_i(\mathbf{pa}_i, \mathbf{u}_i)$. $P(\mathbf{U})$ is a joint distribution over the exogenous variables. A causal diagram $G = \langle \mathbf{V}, \mathbf{E} \rangle$, associated with M, is a tuple of vertices $\mathbf{V}$ (the endogenous variables) and edges $\mathbf{E}$, where a directed edge $V_i \to V_j \in \mathbf{E}$ exists if $V_i \in \mathbf{PA}_j$, and a bidirected edge between $V_i$ and $V_j$ exists if they share an unobserved confounder, i.e., $\mathbf{U}_i \cap \mathbf{U}_j \neq \emptyset$. Note that $pa(V_i)_G$ corresponds to $\mathbf{PA}_i$. The probability of $Y = y$ when $\mathbf{X}$ is held fixed at $\mathbf{x}$ (i.e., intervened on) is denoted by $P(y \mid do(\mathbf{x}))$, where the intervention on $\mathbf{X}$ is graphically represented by $G_{\overline{\mathbf{X}}}$, the graph G with incoming edges onto $\mathbf{X}$ removed. We denote by $CC(X)_G$ the c-component of G that contains X, where a c-component is a maximal set of vertices connected by bidirected edges [Tian and Pearl, 2002]. We define $CC(\mathbf{X})_G = \bigcup_{X \in \mathbf{X}} CC(X)_G$. For a more detailed discussion on the properties of SCMs, we refer readers to [Pearl, 2000, Bareinboim and Pearl, 2016]. For all the proofs and appendices, please refer to the full technical report [Lee and Bareinboim, 2018].
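Since the rest of the paper manipulates these graphical notions repeatedly, the following is a minimal, self-contained sketch of a causal diagram with the family relations, c-components, vertex-induced subgraphs, and the edge-cutting operation $G_{\overline{\mathbf{X}}}$. It is an illustrative data structure, not the implementation in the authors' repository.

```python
class CausalDiagram:
    """Minimal causal diagram: directed edges (parent, child) plus
    bidirected edges {A, B} standing for unobserved confounders."""

    def __init__(self, vertices, directed, bidirected=()):
        self.V = set(vertices)
        self.directed = set(directed)
        self.bidirected = {frozenset(e) for e in bidirected}

    def pa(self, xs):   # parents of a set of vertices
        return {p for (p, c) in self.directed if c in xs}

    def ch(self, xs):   # children of a set of vertices
        return {c for (p, c) in self.directed if p in xs}

    def _closure(self, xs, step):
        result, frontier = set(xs), set(xs)
        while frontier:
            frontier = step(frontier) - result
            result |= frontier
        return result

    def An(self, xs):   # ancestors, including xs
        return self._closure(xs, self.pa)

    def De(self, xs):   # descendants, including xs
        return self._closure(xs, self.ch)

    def cc(self, x):    # c-component containing x (maximal bidirected-connected set)
        return self._closure({x}, lambda f: {v for e in self.bidirected if e & f for v in e})

    def do(self, xs):   # graph with all edges into xs removed, i.e. the intervened graph
        xs = set(xs)
        return CausalDiagram(self.V,
                             {(p, c) for (p, c) in self.directed if c not in xs},
                             {e for e in self.bidirected if not (e & xs)})

    def induced(self, vs):   # vertex-induced subgraph G[vs]
        vs = set(vs)
        return CausalDiagram(vs,
                             {(p, c) for (p, c) in self.directed if p in vs and c in vs},
                             {e for e in self.bidirected if e <= vs})
```

Later sketches in this section build on this class.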
2 Multi-armed bandits with structural causal models
We recall that MABs consider a sequential decision-making setting where pulling one of the K available arms at each round gives the player a stochastic reward from an unknown distribution associated with the corresponding arm. The goal is to minimize (maximize) the cumulative regret (reward) after T rounds. The mean reward of an arm a is denoted by $\mu_a$ and the maximal reward is $\mu^* = \max_{1 \le a \le K} \mu_a$. We focus on the cumulative regret $\mathrm{Reg}_T = T\mu^* - \sum_{t=1}^{T} E[Y_{A_t}] = \sum_{a=1}^{K} \Delta_a E[T_a(T)]$, where $A_t$ is the arm played at time t, $T_a(t)$ is the number of times arm a has been played after t rounds, and $\Delta_a = \mu^* - \mu_a$.
We can now explicitly connect a MAB instance to its SCM counterpart. Let M be a SCM $\langle U, V, F, P(U) \rangle$ and $Y \in V$ be a reward variable, where $D(Y) \subseteq \mathbb{R}$. The bandit contains arms $\{x \in D(X) \mid X \subseteq V \setminus \{Y\}\}$, the set of all possible interventions on endogenous variables other than the reward variable. Each arm $A_x$ (or simply x) is associated with a reward distribution $P(Y \mid do(x))$ whose mean reward $\mu_x$ is $E[Y \mid do(x)]$. We call this setting a SCM-MAB, which is fully represented by the pair $\langle M, Y \rangle$. Throughout this paper, we assume that the causal graph G of M is fully accessible to the agent,3 although its parametrization is unknown: that is, an agent facing a SCM-MAB $\langle M, Y \rangle$ plays arms with knowledge of G and Y, but not of F and P(U). For simplicity, we denote the information provided to an agent playing a SCM-MAB by ⟦G, Y⟧. We now investigate some key structural properties that follow from the causal structure G of the SCM-MAB.
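As an illustration of how large this action space is, the sketch below (a hypothetical helper, not from the paper) enumerates every arm, i.e., every value assignment to every subset of V \ {Y}:

```python
from itertools import combinations, product

def all_arms(domains, reward_var="Y"):
    """Enumerate the SCM-MAB action space {x in D(X) : X subset of V \\ {Y}}:
    every value assignment to every subset of the non-reward variables.
    The empty assignment {} is the null (purely observational) intervention."""
    variables = [v for v in domains if v != reward_var]
    arms = []
    for r in range(len(variables) + 1):
        for subset in combinations(variables, r):
            for values in product(*(domains[v] for v in subset)):
                arms.append(dict(zip(subset, values)))
    return arms

# e.g., three binary variables besides Y give 1 + 3*2 + 3*4 + 8 = 27 arms
print(len(all_arms({"X": [0, 1], "Z": [0, 1], "W": [0, 1], "Y": [0, 1]})))
```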
Property 1. Equivalence among arms
We start by noting that do-calculus [Pearl, 1995] provides rules to evaluate invariances in the interventional space. In particular, we focus here on Rule 3, which ascertains the condition under which a set of interventions does not have an effect on the outcome variable, i.e., $P(y \mid do(x, z), w) = P(y \mid do(x), w)$. Since arms correspond to interventions (including the null intervention) and there is no contextual information, we consider examining $P(y \mid do(x, z)) = P(y \mid do(x))$ through $Y \perp Z \mid X$ in $G_{\overline{X \cup Z}}$, which implies $\mu_{x,z} = \mu_x$. If valid, this condition implies that it is sufficient to play only one arm among the arms in the equivalence class.
Definition 1 (Minimal Intervention Set (MIS)). A set of variables $X \subseteq V \setminus \{Y\}$ is said to be a minimal intervention set relative to ⟦G, Y⟧ if there is no $X' \subset X$ such that $\mu_{x[X']} = \mu_{x}$ for every SCM conforming to G.
For instance, the MISs corresponding to the causal graphs in Fig. 3 are $\{\emptyset, \{X\}, \{Z\}\}$, which do not include $\{X, Z\}$ since $\mu_x = \mu_{x,z}$. The MISs are determined without considering the UCs in a causal graph. The empty set and all singletons in $an(Y)_G$ are MISs for G with respect to Y. The task of finding the best arm among all possible arms can be reduced to a search within the MISs.
Proposition 1 (Minimality). A set of variables $X \subseteq V \setminus \{Y\}$ is a minimal intervention set for G with respect to Y if and only if $X \subseteq an(Y)_{G_{\overline{X}}}$.
All the MISs given ⟦G, Y⟧ can be determined without explicitly enumerating $2^{V \setminus \{Y\}}$ while checking the condition in Prop. 1. We provide an efficient recursive algorithm enumerating the complete set of MISs given G and Y (Appendix A), which runs in $O(mn^2)$ where m is the number of MISs.
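Using the CausalDiagram sketch above, Prop. 1 also yields a direct membership test and a naive enumeration of MISs (exponential-time, unlike the Appendix A algorithm). The IV-style example graph below is an assumed parametrization-free instance in the spirit of Fig. 3c.

```python
from itertools import chain, combinations

def is_mis(G, X, Y):
    """Prop. 1: X is a MIS for (G, Y) iff X is contained in the ancestors
    of Y once all edges into X are removed."""
    X = set(X)
    return X <= (G.do(X).An({Y}) - {Y})

def all_mis(G, Y):
    V = sorted(G.V - {Y})
    subsets = chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))
    return [set(s) for s in subsets if is_mis(G, s, Y)]

# Example: Z -> X -> Y with an unobserved confounder between X and Y.
G = CausalDiagram({"X", "Y", "Z"}, {("Z", "X"), ("X", "Y")}, [{"X", "Y"}])
print(all_mis(G, "Y"))   # [set(), {'X'}, {'Z'}] -- {X, Z} is excluded, as in the text
```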
3 In settings where this is not the case, one can spend the first interactions with the environment to learn the causal graph G from observational [Spirtes et al., 2001] or experimental data [Kocaoglu et al., 2017].
Property 2. Partial-orders among arms
We now explore the partial orders among subsets of $V \setminus \{Y\}$ within the MISs. Given the causal diagram G, it is possible that intervening on some variables is always as good as intervening on another set of variables (regardless of the parametrization of the underlying model). Formally, there can be two different sets of variables $W, Z \subseteq V \setminus \{Y\}$ such that
$\max_{w \in D(W)} \mu_{w} \le \max_{z \in D(Z)} \mu_{z}$
in every possible SCM conforming to G. If that is the case, it would be unnecessary (and possibly harmful in terms of sample efficiency) to play the arms $D(W)$. We next define the Possibly-Optimal MIS, which incorporates this partial orderedness among subsets of $V \setminus \{Y\}$ into the MIS, denoting by $x^*$ the value attaining the optimum for $X \subseteq V \setminus \{Y\}$ in a given SCM.
Definition 2 (Possibly-Optimal Minimal Intervention Set (POMIS)). Given information ⟦G, Y⟧, let X be a MIS. If there exists a SCM conforming to G such that $\mu_{x^*} > \mu_{z^*}$ for all $Z \in \mathcal{Z} \setminus \{X\}$, where $\mathcal{Z}$ is the set of MISs with respect to G and Y, then X is a possibly-optimal minimal intervention set with respect to the information ⟦G, Y⟧.
Intuitively, one may believe that the best action will be to intervene on the direct causes (parents) of the reward variable Y, since this would entail a higher degree of "controllability" of Y within the system. This, in fact, holds true if Y is not confounded with any of its ancestors, which includes the case where no unobserved confounders are present in the system (i.e., Markovian models).
Proposition 2. Given information ⟦G, Y⟧, if Y is not confounded with $an(Y)_G$ via unobserved confounders, then $pa(Y)_G$ is the only POMIS.
Corollary 3 (Markovian POMIS). Given ⟦G, Y⟧, if G is Markovian, then $pa(Y)_G$ is the only POMIS.
For instance, in Fig. 3a, $\{\{X\}\}$ is the set of POMISs. Whenever unobserved confounders (UCs) are present,4 on the other hand, the analysis becomes more involved. To witness, let us analyze the maximum achievable rewards of the MISs in the other causal diagrams in Fig. 3. We start with Fig. 3b and note that $\mu_{z^*} \le \mu_{x^*}$ since $\mu_{z^*} = \sum_x \mu_x P(x \mid do(z^*)) \le \sum_x \mu_{x^*} P(x \mid do(z^*)) = \mu_{x^*}$. On the other hand, $\mu_{\emptyset}$ is not comparable to $\mu_{x^*}$. For a concrete example, consider a SCM where the domains of the variables are $\{0, 1\}$. Let U be the UC between Y and Z with $P(U = 1) = 0.5$. Let $f_Z(u) = 1 - u$, $f_X(z) = z$, and $f_Y(x, u) = x \oplus u$, where $\oplus$ is the exclusive-or function. If X is not intervened on, x will be $1 - u$, yielding $y = 1$ in both cases $u = 0$ and $u = 1$, so that $\mu_{\emptyset} = 1$. However, if X is intervened on and set to either 0 or 1, y will be 1 only half the time since $P(U = 1) = 0.5$, which results in $\mu_{x^*} = 0.5$. We also provide in Appendix A a SCM such that $\mu_{\emptyset} < \mu_{x^*}$ holds true. This model ($\mu_{\emptyset} > \mu_{x^*}$) illustrates an interesting phenomenon — allowing a UC to affect Y freely may lead to a higher reward, which may be broken upon interventions. We now consider the different confounding structure shown in Fig. 3c (similar to Fig. 1b), where the variable Z lies outside of the influence of the UC associated with Y. In this case, intervening on Z leads to a higher reward, $\mu_{z^*} \ge \mu_{\emptyset}$. To witness, note that $\mu_{\emptyset} = \sum_z E[Y \mid z] P(z) = \sum_z \mu_z P(z) \le \sum_z \mu_{z^*} P(z) = \mu_{z^*}$. However, $\mu_{z^*}$ and $\mu_{x^*}$ are incomparable, which is shown through two models provided in Appendix A. Finally, we can add the confounders of the two previous models, which is shown in Fig. 3d. In this case, all three of $\mu_{x^*}$, $\mu_{z^*}$, and $\mu_{\emptyset}$ are incomparable. One can imagine scenarios where the influence of the UCs is weak enough so that the corresponding models produce results similar to Figs. 3a to 3c.
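The $\mu_{\emptyset} = 1$ versus $\mu_{x^*} = 0.5$ computation above can be checked mechanically. The snippet below is a sketch that simply replays the stated SCM (same $f_Z$, $f_X$, $f_Y$ and P(U=1)=0.5) and evaluates the expected rewards exactly.

```python
P_U = {0: 0.5, 1: 0.5}

def expected_reward(do_x=None):
    """Exact E[Y | do(X = do_x)] (or E[Y] when do_x is None) for the SCM:
    Z = 1 - U, X = Z, Y = X xor U."""
    mean = 0.0
    for u, p in P_U.items():
        z = 1 - u
        x = do_x if do_x is not None else z
        mean += p * (x ^ u)
    return mean

print(expected_reward())     # mu_empty  = 1.0
print(expected_reward(0))    # mu_{X=0}  = 0.5
print(expected_reward(1))    # mu_{X=1}  = 0.5
```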
It is clear that the interplay between the location of the intervened variable, the outcome variable, and the UCs entails non-trivial interactions and consequences in terms of the reward. The table in Fig. 3e highlights the arms that are contenders to generate the highest rewards in each model (i.e., each arm intervenes on a POMIS, setting it to specific values), while intervening on a non-POMIS represents a waste of resources. Interestingly, the only parent of Y, i.e., X, is not dominated by any other arm in any of the scenarios discussed. In words, this suggests that the intuition of controlling variables closer to Y is not entirely lost even when UCs are present; {X} is not always the only POMIS, but it is certainly one of them. Given that more complex mechanisms cannot, in general, be ruled out, performing experiments would be required to identify the best arm. Still, the results of the table guarantee that the search can be refined so that MAB solvers can discard arms that cannot lead to profitable outcomes and converge faster to playing the optimal arm.
4 Recall that unobserved confounders are represented in the graph as bidirected dashed edges.
3 Graphical characterization of POMIS
Our goal in this section is to graphically characterize POMISs. We will leverage the discussion in the previous section and note that UCs connected to a reward variable affect the reward distributions in a way that intervening on a variable outside the coverage of such UCs (including no UC) can be optimal — e.g., $\{X\}$ for Fig. 3a, $\emptyset$ for Figs. 3b and 3d, and $\{Z\}$ for Fig. 3c. We introduce two graphical concepts to help characterize this property.
Definition 3 (Unobserved-Confounders' Territory). Given information ⟦G, Y⟧, let H be $G[An(Y)_G]$. A set of variables $T \subseteq V(H)$ containing Y is called a UC-territory on G with respect to Y if $De(T)_H = T$ and $CC(T)_H = T$.
A UC-territory T is said to be minimal if no $T' \subset T$ is a UC-territory. A minimal UC-Territory (MUCT) for G and Y can be constructed by extending a set of variables, starting from $\{Y\}$, alternately updating the set with the c-component and the descendants of the set.
Definition 4 (Interventional Border). Let T be a minimal UC-territory on G with respect to Y. Then, $X = pa(T)_G \setminus T$ is called an interventional border for G with respect to Y.
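The fixed-point construction just described translates directly into code. The sketch below builds on the CausalDiagram class above (again illustrative rather than the authors' implementation), and the example graph assumes the usual instrumental-variable structure of Fig. 3c: Z → X → Y with an unobserved confounder between X and Y.

```python
def muct_ib(G, Y):
    """Defs. 3-4: grow T from {Y} inside H = G[An(Y)], alternately closing it
    under c-components and descendants; the border is then pa(T) minus T."""
    H = G.induced(G.An({Y}))
    T = {Y}
    while True:
        closed = set()
        for v in T:
            closed |= H.cc(v)          # close under c-components
        closed = H.De(closed)          # close under descendants (within H)
        if closed == T:
            break
        T = closed
    return T, G.pa(T) - T

# Instrumental-variable graph (Fig. 3c-style): MUCT is {X, Y}, IB is {Z}.
G = CausalDiagram({"X", "Y", "Z"}, {("Z", "X"), ("X", "Y")}, [{"X", "Y"}])
print(muct_ib(G, "Y"))   # ({'X', 'Y'}, {'Z'})
```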
The interventional border (IB) essentially encompasses the parents of the MUCT. For concreteness, consider Fig. 4a, and note that $\{W, X, Y, Z\}$ is the MUCT for the causal graph with respect to Y, and the IB is $\{S, T\}$ (marked in pink and blue in the graph, respectively). As its name suggests, the MUCT is a set of endogenous variables governed by a set of UCs where at least one UC is adjacent to the reward variable. Specifically, the reward is determined by the values of: (1) the UCs governing the MUCT; (2) a set of unobserved variables (other than the UCs), each of which affects an endogenous variable in the MUCT; and (3) the IB. In other words, there is no UC interplaying across the MUCT and its outside, so that $\mu_{x} = E[Y \mid x]$ where x is a value assigned to the IB X. We now connect MUCT and IB with POMIS. Let MUCT(G, Y) and IB(G, Y) be, respectively, the MUCT and IB given ⟦G, Y⟧.
Proposition 4. IB(G, Y) is a POMIS given ⟦G, Y⟧.
The main strategy of the proof is to construct a SCM M where intervening on any variable in MUCT(G, Y) causes a significant loss of reward. It might seem that MUCT and IB can only identify a single POMIS given ⟦G, Y⟧. However, they in fact serve as basic units to identify all POMISs.
Proposition 5. Given ⟦G, Y⟧, IB($G_{\overline{W}}$, Y) is a POMIS, for any $W \subseteq V \setminus \{Y\}$.
Prop. 5 generalizes Prop. 4 to the case $W \neq \emptyset$ while taking care of UCs across MUCT($G_{\overline{W}}$, Y) and its outside in the original causal graph G. See Fig. 4d for an instance, where IB($G_{\overline{W}}$, Y) = $\{W, T\}$. Intervening on W cuts the influence of S and the UC between W and X, while still allowing the UC to affect X.5 Similarly, one can see in Fig. 4b that IB($G_{\overline{X}}$, Y) = $\{T, W, X\}$, where intervening on X lets Y be the only element of the MUCT, making its parents an interventional border, hence a POMIS. Note that $pa(Y)_G$ is always a POMIS since MUCT($G_{\overline{pa(Y)_G}}$, Y) = $\{Y\}$ and IB($G_{\overline{pa(Y)_G}}$, Y) = $pa(Y)_G$. With Prop. 5, one can enumerate the POMISs given ⟦G, Y⟧ by considering all subsets of $V \setminus \{Y\}$. We show in the sequel that this strategy encompasses all the POMISs.
Theorem 6. Given ⟦G, Y⟧, $X \subseteq V \setminus \{Y\}$ is a POMIS if and only if IB($G_{\overline{X}}$, Y) = X.
5 Note that exogenous variables that do not affect more than one endogenous variable (i.e., non-UCs) are not explicitly represented in the graph.
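Before the efficient recursion of Alg. 1 below, Theorem 6 already gives a brute-force enumeration. The sketch below reuses muct_ib and CausalDiagram from above and simply tests IB($G_{\overline{X}}$, Y) = X over all subsets; it is exponential in |V| and meant only to build intuition.

```python
from itertools import chain, combinations

def pomiss_bruteforce(G, Y):
    """Theorem 6 checked directly: X is a POMIS iff the interventional border
    of the graph with edges into X removed equals X. Exponential in |V|,
    unlike the recursive Alg. 1."""
    V = sorted(G.V - {Y})
    subsets = chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))
    result = []
    for X in map(set, subsets):
        _, ib = muct_ib(G.do(X), Y)
        if ib == X:
            result.append(X)
    return result

# On the IV graph of Fig. 3c this returns [{'X'}, {'Z'}] (4 arms in total),
# consistent with the arm counts reported for Task 2 in Section 5.
```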
Algorithm 1 Algorithm enumerating all POMISs with ⟦G, Y⟧
1: function POMISs(G, Y)
2:    T, X = MUCT(G, Y), IB(G, Y);  H = $G_{\overline{X}}[T \cup X]$
3:    return {X} ∪ subPOMISs(H, Y, reversed(topological-sort(H)) ∩ (T \ {Y}), ∅)
4: function subPOMISs(G, Y, π, O)
5:    P = ∅
6:    for $\pi_i$ ∈ π do
7:        T, X, π', O' = MUCT($G_{\overline{\pi_i}}$, Y), IB($G_{\overline{\pi_i}}$, Y), $\pi_{i+1:|\pi|}$ ∩ T, O ∪ $\pi_{1:i-1}$
8:        if X ∩ O' = ∅ then
9:            P = P ∪ {X} ∪ (subPOMISs($G_{\overline{X}}[T \cup X]$, Y, π', O') if π' ≠ ∅ else ∅)
10:   return P

Algorithm 2 POMIS-based kl-UCB
1: function POMIS-kl-UCB(B, G, Y, f, T)
2:    Input: B, a SCM-MAB; G, a causal diagram; Y, a reward variable
3:    A = $\bigcup_{X \in \mathrm{POMISs}(G, Y)} D(X)$
4:    kl-UCB(B, A, f, T)
Thm. 6 provides a graphical necessary and sufficient condition for a set of variables being a POMIS given ⟦G, Y⟧. This characterization allows one to determine all the arms in a SCM-MAB that are worth intervening on, thereby freeing the agent from pulling the other, unnecessary arms.
4 Algorithmic characterization of POMIS
Although the graphical characterization provides a means to enumerate the complete set of POMISs given ⟦G, Y⟧, a naively implemented algorithm requires time exponential in |V|. We construct an efficient algorithm (Alg. 1) that enumerates all the POMISs based on Props. 7 and 8 below and the graphical characterization introduced in the previous section (Thm. 6).
Proposition 7. Let T and X be MUCT($G_{\overline{W}}$, Y) and IB($G_{\overline{W}}$, Y), respectively, relative to G and Y. Then, for any $Z \subseteq V \setminus T$, MUCT($G_{\overline{X \cup Z}}$, Y) = T and IB($G_{\overline{X \cup Z}}$, Y) = X.
Proposition 8. Let $H = G_{\overline{X}}[T \cup X]$, where T and X are the MUCT and IB given ⟦$G_{\overline{W}}$, Y⟧, respectively. Then, for any $W' \subseteq T \setminus \{Y\}$, $H_{\overline{W'}}$ and $G_{\overline{W \cup W'}}$ yield the same MUCT and IB with respect to Y.
Prop. 7 allows one to avoid having to examine $G_{\overline{W}}$ for every $W \subseteq V \setminus \{Y\}$. Prop. 8 characterizes the recursive nature of MUCT and IB, whereby the identification of POMISs can be evaluated on subgraphs. Based on these results, we design a recursive algorithm (Alg. 1) that explores subsets of $V \setminus \{Y\}$ in a certain order. See Fig. 4e for an example where subsets of $\{X, Z, W\}$ are connected based on the set-inclusion relationship and an order of variables, e.g., (X, Z, W). That is, there exists a directed edge between two sets if (i) one set is larger than the other by one variable and (ii) that variable's index (in the order) is larger than the index of any variable in the smaller set. The diagram traces how the algorithm will explore the subsets following the edges, while effectively skipping nodes.
Given G and Y, POMISs (Alg. 1) first computes a POMIS, i.e., IB(G, Y). Then, a recursive procedure subPOMISs is called with an order of variables (Line 3). subPOMISs examines POMISs by intervening on a single variable of the given graph (Lines 6–9). If the IB (X in Line 7) of such an intervened graph intersects with O' (a set of variables that should be considered in another branch), then no subsequent call is made (Line 8). Otherwise, a subsequent subPOMISs call takes as arguments an MUCT-IB induced subgraph (Prop. 8), a refined order, and a set of variables not to be intervened on in the given branch. For clarity, we provide a detailed working example in Appendix C with Fig. 4a, where the algorithm explores only four intervened graphs (G, $G_{\overline{\{X\}}}$, $G_{\overline{\{Z\}}}$, $G_{\overline{\{W\}}}$) and generates the complete set of POMISs $\{\{S, T\}, \{T, W\}, \{T, W, X\}\}$.
Theorem 9 (Soundness and Completeness). Given information ⟦G, Y⟧, the algorithm POMISs (Alg. 1) returns all, and only, POMISs.
The POMISs algorithm can be combined with a MAB algorithm, such as kl-UCB, creating a simple yet effective SCM-MAB solver (see Alg. 2). kl-UCB satisfies
$\limsup_{n \to \infty} E[\mathrm{Reg}_n] / \log(n) \le \sum_{x:\, \mu_x < \mu^*} \frac{\mu^* - \mu_x}{\mathrm{KL}(\mu_x, \mu^*)}$,
where KL is the Kullback–Leibler divergence between two Bernoulli distributions [Garivier and Cappé, 2011]. It is clear that the reduction in the size of the arm set will lower the upper bounds on the corresponding cumulative regrets.
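For completeness, here is a sketch of the Bernoulli kl-UCB index that Alg. 2 would maximize over the POMIS arm set at every round. The exploration budget log t used below is one common practical choice, not necessarily the one used in the paper's experiments.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t):
    """Largest q in [mean, 1] with pulls * KL(mean, q) <= log t, found by bisection."""
    if pulls == 0:
        return 1.0        # force each arm to be tried at least once
    budget = math.log(max(t, 2)) / pulls
    lo, hi = mean, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

def choose_arm(means, pulls, t):
    """Pick the arm (e.g., an intervention from the POMIS arm set) with the largest index."""
    return max(range(len(means)), key=lambda a: kl_ucb_index(means[a], pulls[a], t))
```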
5 Experiments
In this section, we present empirical results demonstrating that the selection of arms based on POMISs makes standard MAB solvers converge faster to an optimal arm. We employ two popular MAB solvers: kl-UCB, which enjoys cumulative regret growing logarithmically with the number of rounds [Cappé et al., 2013], and Thompson sampling (TS, Thompson [1933]), which has strong empirical performance [Kaufmann et al., 2012]. We considered four strategies for selecting arms, namely POMISs, MISs, Brute-force, and All-at-once, where Brute-force evaluates all combinations of arms $\bigcup_{X \subseteq V \setminus \{Y\}} D(X)$, and All-at-once considers intervening on all variables simultaneously, $D(V \setminus \{Y\})$, oblivious to the causal structure and any knowledge about the action space. The performance of the eight (4 × 2) algorithms is evaluated relative to three different SCM-MAB instances (the detailed parametrizations are provided in Appendix D). We set the horizon large enough so as to observe near convergence, and repeat each simulation 300 times. We plot (i) the average cumulative regrets (CR) along with their respective standard deviations and (ii) the probability of an optimal arm being selected averaged over the repeated tests (OAP).6,7
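The two reported quantities can be computed from the simulation logs as in the sketch below; the array shapes and variable names are assumptions for illustration, not the authors' evaluation code.

```python
import numpy as np

def summarize(choices, mu, optimal_arms):
    """choices: (R, T) integer array of the arms pulled in R repeated runs of length T.
    mu: true mean reward of every arm (available here because the SCM is simulated).
    Returns the mean and std of the cumulative-regret curve, and the OAP curve."""
    mu = np.asarray(mu, dtype=float)
    per_round = mu.max() - mu[choices]                         # pseudo-regret of each pull, shape (R, T)
    cr = per_round.cumsum(axis=1)                              # cumulative regret per run
    oap = np.isin(choices, list(optimal_arms)).mean(axis=0)    # fraction of runs pulling an optimal arm
    return cr.mean(axis=0), cr.std(axis=0), oap
```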
Task 1: We start by analyzing a Markovian model. We note that, by Cor. 3, searching for the arms within the parent set is sufficient in this case. The numbers of arms for POMISs, MISs, Brute-force, and All-at-once are 4, 49, 81, and 16, respectively. Note that there are 4 optimal arms among the All-at-once arms — for instance, if the optimal parent configuration is X1 = x1, X2 = x2, this strategy also includes the combinations Z1 = z1, Z2 = z2 for all z1, z2. The simulated results are shown in Fig. 5a. The CRs at round 1000 with kl-UCB are 3.0, 48.0, 72, and 12 (in that order), and all strategies were able to find the optimal arms by this time. POMIS and All-at-once first reached 95% OAP at rounds 20 and 66, respectively. There are two interesting observations at this point. First, at an early stage, the OAP for MISs is smaller than for Brute-force since MIS has only 1 optimal arm among 49 arms, while Brute-force has 9 among 81. The advantage of employing MIS over Brute-force is only observed after a sufficiently large number of plays. More interestingly, POMIS and All-at-once have the same ratio of optimal to non-optimal arms (1:3 versus 4:12); however, POMIS dominates All-at-once since the agent can learn more about the mean reward of the optimal arm while playing non-optimal arms less often. Naturally, this translates into less variability and additional certainty about the optimal arm even in Markovian settings.
6 All the code is available at https://github.com/sanghack81/SCMMAB-NIPS2018
7 One may surmise that combinatorial bandit (CB) algorithms can be used to solve SCM-MAB instances by noting that an intervention can be encoded as a binary vector, where each dimension in the vector corresponds to intervening on a single variable with a specific value. However, the two settings invoke very different sets of assumptions, which makes their solvers difficult to compare in a reasonably fair way. For instance, the current generation of CB algorithms is oblivious to the underlying causal structure, which makes them resemble very closely the Brute-force strategy, the worst possible method for SCM-MABs. Further, the assumption of linearity is arguably one of the most popular ones considered by CB solvers. The corresponding algorithms, however, will be unable to learn the arms' rewards properly since a SCM-MAB is nonparametric, making no assumption about the underlying structural mechanisms. These are just a few immediate examples of the mismatches between the current generations of algorithms for causal and combinatorial bandits.
Task 2: We consider the setting known as instrumental variable (IV), which was shown in Fig. 3c. The optimal arm in this simulation is setting Z = 0. The numbers of arms for the four strategies are 4, 5, 9, and 4, respectively. The results are shown in Fig. 5b. Since the All-at-once strategy only considers non-optimal arms (i.e., pulling Z and X together), it incurs linear regret without ever selecting an optimal arm (0%). The CRs (and OAPs) at round 1000 with TS are POMIS 16.1 (98.67%), MIS 21.4 (99.00%), Brute-force 42.9 (93.33%), and All-at-once 272.1 (0%). At round 5000, where Brute-force has nearly converged, the ratio of the CRs for POMIS and Brute-force is 54.2/18.1 = 2.99 ≈ 2.67 = (9 − 1)/(4 − 1). POMIS, MIS, and Brute-force first hit 95% OAP at rounds 172, 214, and 435, respectively.
Task 3: Finally, we study the more involved scenario shown in Fig. 4a. In this case, the optimal arm is intervening on {S, T}, which means that the system should follow its natural flow of UCs, which All-at-once is unable to "pull." There are 16, 75, 243, and 32 arms for the strategies (in that order). The results are shown in Fig. 5c. The CRs (and OAPs) at round 10000 with TS are POMIS 91.4 (99.0%), MIS 472.4 (97.0%), Brute-force 1469.0 (85.0%), and All-at-once 2784.8 (0%). Similarly, the ratio (at round 10000) is 1469.0/91.4 = 16.07 ≈ 16.13 = (243 − 1)/(16 − 1), which is expected to increase since Brute-force has not yet converged at this point. Only POMIS and MIS reached 95% OAP, first doing so at rounds 684 and 3544, respectively.
We start by noticing that the reduction in the CRs is approximately proportional to the reduction in the number of non-optimal arms pulled by (PO)MIS under the corresponding algorithm, which makes the POMIS-based solver the clear winner throughout the simulations. It is still conceivable that the number of arms examined by All-at-once is smaller than that of POMIS in a specific SCM-MAB instance, which would entail a lower CR for the former. However, such a lower CR in some instances does not constitute any sort of assurance, since arms excluded from All-at-once, but included in POMIS, can be optimal in some SCM-MAB instance conforming to ⟦G, Y⟧. Furthermore, a POMIS-based strategy always dominates the corresponding MIS and Brute-force ones. These observations together suggest that, in practice, a POMIS-based strategy should be preferred given that it will always converge and will usually be faster than its counterparts. Remarkably, there is an interesting trade-off between having knowledge of the causal structure versus not knowing the corresponding dependency structure among arms and potentially incurring linear regret (All-at-once) or an exponential slowdown (Brute-force). In practice, for the cases in which the causal structure is unknown, the pulls of the arms themselves can be used as experiments and can be coupled with efficient strategies to simultaneously learn the causal structure [Kocaoglu et al., 2017].
6 Conclusions
We studied the problem of deciding whether an agent should perform a causal intervention and, if so, which variables it should intervene upon. The problem was grounded in the logic of structural causal models (SCMs) and formalized through a new type of multi-armed bandit called SCM-MABs. We started by noting that whenever the agent cannot measure all the variables in the environment (i.e., unobserved confounders exist), standard MAB algorithms that are oblivious to the underlying causal structure may not converge, regardless of the number of interventions performed in the environment. (We note that the causal structure can easily be learned in a typical MAB setting since the agent always has interventional capabilities.) We introduced a novel decision-making strategy based on properties following from the do-calculus, which allowed the removal of redundant arms, and on the partial orders among the sets of variables in the underlying causal system, which led to an understanding of the maximum achievable reward of each interventional set. Leveraging this new strategy based on the possibly-optimal minimal intervention sets (POMISs), we developed an algorithm that decides whether (and, if so, where) interventions should be performed in the underlying system. Finally, we showed by simulations that this causally sensible strategy performs more efficiently and more robustly than its non-causal counterparts. We hope that the formal machinery and the algorithms developed here can help decision-makers make more principled and efficient decisions.
Acknowledgments
This research is supported in part by grants from IBM Research, Adobe Research, NSF IIS-1704352, and IIS-1750807 (CAREER). | 1. What is the focus of the paper in terms of the multi-armed bandit problem?
2. What are the strengths of the proposed approach, particularly regarding the efficiency of enumerating possible intervention sets?
3. What are the limitations of the paper regarding the assumption of known causal structures?
4. How does the reviewer assess the clarity and validity of the paper's content?
5. Are there any concerns or suggestions for future research related to this work? | Review | Review
The authors formulated a multi-armed bandit problem over a given causal structure. The arms of the bandit are interventions that set observed variables (excl. reward) to certain specific values. The causal structure is assumed to be known, but its parameters are not. The authors show that only arms corresponding to possibly-optimal minimal intervention sets are worth considering. Such sets can be efficiently enumerated by their algorithm. When previous algorithms use these sets, they make optimal decisions earlier in different MAB scenarios. The paper is written very clearly; I did not check the details of the proofs, but the theory seems valid. The presented problem seems a sensible one, although I am not sure how realistic knowing the causal structure in this scenario is. It certainly builds on previous (causal) MABs. The restriction to POMISs seems a valid theoretical contribution. On the whole, the paper presents an interesting and worthwhile contribution; further research along these lines is to be expected. The important example on l. 46 seems to require a specific parametrization: please add a reference to it in the main text.
NIPS | Title
Structural Causal Bandits: Where to Intervene?
Abstract
We study the problem of identifying the best action in a sequential decisionmaking setting when the reward distributions of the arms exhibit a non-trivial dependence structure, which is governed by the underlying causal model of the domain where the agent is deployed. In this setting, playing an arm corresponds to intervening on a set of variables and setting them to specific values. In this paper, we show that whenever the underlying causal model is not taken into account during the decision-making process, the standard strategies of simultaneously intervening on all variables or on all the subsets of the variables may, in general, lead to suboptimal policies, regardless of the number of interventions performed by the agent in the environment. We formally acknowledge this phenomenon and investigate structural properties implied by the underlying causal model, which lead to a complete characterization of the relationships between the arms’ distributions. We leverage this characterization to build a new algorithm that takes as input a causal structure and finds a minimal, sound, and complete set of qualified arms that an agent should play to maximize its expected reward. We empirically demonstrate that the new strategy learns an optimal policy and leads to orders of magnitude faster convergence rates when compared with its causal-insensitive counterparts.
1 Introduction
The multi-armed bandit (MAB) problem is one of the prototypical settings studied in the sequential decision-making literature [Lai and Robbins, 1985, Even-Dar et al., 2006, Bubeck and Cesa-Bianchi, 2012]. An agent needs to decide which arm to pull and receives a corresponding reward at each time step, with the goal of maximizing its cumulative reward in the long run. The challenge is the inherent trade-off between exploiting known arms versus exploring new reward opportunities [Sutton and Barto, 1998, Szepesvári, 2010]. There is a wide range of assumptions underlying MABs, but in most of the traditional settings, the arms' rewards are assumed to be independent, which means that knowing the reward distribution of one arm has no implication for the rewards of the other arms. Many strategies were developed to solve this problem, including classic algorithms such as ε-greedy, variants of UCB (Auer et al., 2002, Cappé et al., 2013), and Thompson sampling [Thompson, 1933].
Recently, the existence of some non-trivial dependencies among arms has been acknowledged in the literature and studied under the rubric of structured bandits, which include settings such as linear [Dani et al., 2008], combinatorial [Cesa-Bianchi and Lugosi, 2012], unimodal [Combes and Proutiere, 2014], and Lipschitz [Magureanu et al., 2014], just to name a few. For example, a linear (or combinatorial) bandit imposes that an action $x_t \in \mathbb{R}^d$ (or $\{0, 1\}^d$) at a time step t incurs a cost $\ell_t^{\top} x_t$, where $\ell_t$ is a loss vector chosen by, e.g., an adversary. In this case, an index-based MAB algorithm, oblivious to the structural properties, can be suboptimal.
In another line of investigation, rich environments with complex dependency structures are modeled explicitly through the use of causal graphs, where nodes represent decisions and outcome variables, and direct edges represent direct influence of one variable on another [Pearl, 2000]. Despite the
apparent connection between MABs and causality, only recently has the use of causal reasoning been incorporated into the design of MAB algorithms. For instance, Bareinboim et al. [2015] first explored the connection between causal models with unobserved confounders (UCs) and reinforcement learning, where latent factors affect both the reward distribution and the player's intuition. The key observation used in the paper is that while standard MAB algorithms optimize based on the do-distribution (formally written as $E[Y \mid do(X)]$ or $E[Y_x]$), the simplest type of counterfactuals, this approach is dominated by another strategy using a more detailed counterfactual as the basis of the optimization process (i.e., $E[Y_x \mid X = x']$); this general strategy was called the regret decision criterion (RDC). This strategy was later extended to handle counterfactual distributions of higher dimensionality by Forney et al. [2017]. Further, Lattimore et al. [2016] and Sen et al. [2017] studied the problem of best arm identification through importance weighting, where information on how playing arms influences the direct causes (parents, in causal terminology) of a reward variable is available. Zhang and Bareinboim [2017] leveraged causal graphs to solve the problem of off-policy evaluation in the presence of UCs. They noted that whenever UCs are present, traditional off-policy methods can be arbitrarily biased, leading to linear regret. They then showed how to solve the off-policy evaluation problem by incorporating the causal bounds into the decision-making procedure.1
In this paper, we focus on the challenge of identifying the best action in MABs where the arms correspond to interventions on an arbitrary causal graph, including when latent variables confound the observed relations (i.e., semi-Markovian causal models). To understand this challenge, we first note that a standard MAB can be seen as the simple causal model shown in Fig. 1a, where X represents an arm (with K different values), Y the reward variable, and U the unobserved variable that generates the randomness of Y.2 After a sufficiently large number of pulls of X (chosen by the specific algorithm), Y's average reward can be determined with high confidence.
Whenever a set of UCs affect more than one observed variable, however, novel, non-trivial challenges arise. To witness, consider the more involved MAB structure shown in Fig. 1b, where an unobserved confounder U affects both the action variable X1 and the reward Y . A naive approach for an algorithm to play such a bandit would be to pull arms in a combinatorial manner, i.e., combining both variables (X1⇥X2) so that arms are D(X1)⇥D(X2), where D(X) is the domain of X . One may surmise that this is a valid strategy, albeit not the most efficient one. Somewhat unexpectedly, however, Fig. 1c shows that this is not the case — the optimal action comes from pulling X2 and ignoring X1, while pulling {X1,X2} together would lead to subpar cumulative rewards (regardless of the number of iterations) since it simply cannot pull the optimal arm (Fig. 1d). After all, if one is oblivious to the causal structure and decides to take all intervenable variables as one (in this case, X1⇥X2), indiscriminately, one may be doomed to learn a suboptimal policy.
1On another line of investigation, Ortega and Braun [2014] introduced a generalized version of Thompson sampling applied to the problem of adaptive control.
2In causal notation, Y fY (U ,X), which means that Y ’s value is determined by X and the realization of the latent variable U . If fY is linear, we would have a (stochastic) linear bandit. Our results do not constrain the types of structural functions, which is usually within nonparametric causal inference [Pearl, 2000, Ch. 7].
In this paper, we investigate this phenomenon, and more broadly, causal MABs with non-trivial dependency structure between the arms. More specifically, our contributions are as follows: (1) We formulate a SCM-MAB problem, which is a structured multi-armed bandit instance within the causal framework. We then derive the structural properties of a SCM-MAB, which are computable from any causal model, including arms’ equivalence based on do-calculus [Pearl, 1995], and partial orderedness among sets of variables associated with arms in regards to the maximum rewards achievable. (2) We characterize a special set of variables called POMIS (possibly-optimal minimal intervention set), which is worth intervening based on the aforementioned partial orders. We then introduce an algorithm that identifies a complete set of POMISs so that only the subset of arms associated with them can be explored in a MAB algorithm. Simulations corroborate our findings.
Big picture The multi-armed bandit is a rich setting in which a huge number of variants has been studied in the literature. Different aspects of the decision-making process have been analyzed and well-understood in the last decades, which include different functional forms (e.g., linear, Lipschitz, Gaussian process), types of feedback experienced by the agent (bandit, semi-bandit, full), the adversarial or i.i.d. nature of the interactions, just to cite some of the most popular ones. Our study of SCM-MABs puts the causal dimension front and center in the map. In particular, we fully acknowledge the existence of a causal structure among the underlying variables (whenever not known a priori, see Footnote 3), and leverage the qualitative relations among them. This is in clear contrast with the prevailing practice that is more quantitative and, almost invariably, is oblivious to the underlying causal structure (as shown in Fig. 1a). We outline in
Fig. 2 an initial map that shows the relationship between these dimensions; our goal here is not to be exhaustive, nor prescriptive, but to help to give some perspective. In this paper, we study bandits with no constraints over the underlying functional form (nonparametric, in causality language), i.i.d. stochastic rewards, and with an explicit causal structure acknowledged by the agent.
Preliminaries: notations and structural causal models
We follow the notation used in the causal inference literature. A capital letter is used for a variable or a mathematical object. The domain of X is denoted by D (X). A bold capital letter is for a set of variables, e.g., X = {Xi}ni=1, while a lowercase letter x 2 D (X) is a value assigned to X , and x 2 D (X) = ⇥X2X (D (X)). We denote by x [W], values of x corresponding to W \X. A graph G = hV,Ei is a pair of vertices V and edges E. We adopt family relationships — pa, ch, an, and de to denote parents, children, ancestors, and descendants of a given variable; Pa, Ch, An, and De extends pa, ch, an, and de by including the argument as the result, e.g., Pa (X)
G = pa (X) G [{X}.
With a set of variables as argument, pa (X) G
= S
X2X pa (X)G and similarly defined for other relations. We denote by V (G) the set of variables in G. G [V0] for V0 ✓ V (G) is a vertex-induced subgraph where all edges among V0 are preserved. We define G\X as G [V (G) \X] for X ✓ V (G). We adopt the language of Structural Causal Models (SCM) [Pearl, 2000, Ch. 7]. An SCM M is a tuple hU,V,F,P (U)i, where U is a set of exogenous (unobserved or latent) variables and V is a set of endogenous (observed) variables. F is a set of deterministic functions F = {fi}, where fi determines the value of Vi 2 V based on endogenous variables PAi ✓ V\ {Vi} and exogenous variables Ui ✓ U, that is, e.g., vi fi(pai,ui). P (U) is a joint distribution over the exogenous variables. A causal diagram G = hV,Ei, associated with M , is a tuple of vertices V (the endogenous variables) and edges E, where a directed edge Vi ! Vj 2 E if Vi 2 PAj , and a bidirected edge between Vi and Vj if they share an unobserved confounder, i.e., Ui \Uj 6= ;. Note that pa(Vi)G corresponds to PAi. Probability of Y = y when X is held fixed at x (i.e., intervened) is denoted by P (y|do(x)), where intervention on X is graphically represented by GX, the graph G with incoming edges onto X removed. We denote by CC (X)
G the c-component of G that contains X where a
c-component is a maximal set of vertices connected with bidirected edges [Tian and Pearl, 2002]. We define CC (X)
G = S X2X CC (X)G. For a more detailed discussion on the properties of SCMs, we
refer readers to [Pearl, 2000, Bareinboim and Pearl, 2016]. For all the proofs and appendices, please refer to the full technical report [Lee and Bareinboim, 2018].
2 Multi-armed bandits with structural causal models
We recall that MABs consider a sequential decision-making setting where pulling one of the K available arms at each round gives the player a stochastic reward from an unknown distribution associated with the corresponding arm. The goal is to minimize (maximize) the cumulative regret (reward) after T rounds. The mean reward of an arm a is denoted by µa and the maximal reward is µ⇤ = max1aK µa. We focus on the cumulative regret, RegT = Tµ⇤ P T
t=1 E [YAt ] =P K
a=1 aE [Ta (T )], where At is the arm played at time t, Ta (t) is the number of arm a has been played after t rounds, and a = µ⇤ µa. We now can explicitly connect a MAB instance to its SCM counterpart. Let M be a SCM hU,V,F,P (U)i and Y 2 V be a reward variable, where D (Y ) ✓ R. The bandit contains arms {x 2 D (X) | X ✓ V\{Y }}, a set of all possible interventions on endogenous variables except the reward variable. Each arm Ax (or simply x) associates with a reward distribution P (Y |do(x)) where its mean reward µx is E [Y |do(x)]. We call this setting a SCM-MAB, which is fully represented by the pair hM ,Y i. Throughout this paper, we assume that the causal graph G of M is fully accessible to the agent,3 although its parametrization is unknown: that is, an agent facing a SCM-MAB hM ,Y i plays arms with knowledge of G and Y , but not of F and P (U). For simplicity, we denote information provided to an agent playing a SCM-MAB by JG,Y K. We now investigate some key structural properties that follow from the causal structure G of the SCM-MAB.
Property 1. Equivalence among arms
We start by noting that do-calculus [Pearl, 1995] provides rules to evaluate invariances in the interventional space. In particular, we focus here on the Rule 3, which ascertains the condition such that a set of interventions does not have an effect on the outcome variable, i.e., P (y|do(x, z),w) = P (y|do(x),w). Since arms correspond to interventions (including the null intervention) and there is no contextual information, we consider examining P (y|do(x, z)) = P (y|do(x)) through Y ? Z | X in GX[Z, which implies µx,z = µx. If valid, this condition implies that it is sufficient to play only one arm among arms in the equivalence class. Definition 1 (Minimal Intervention Set (MIS)). A set of variables X ✓ V\{Y } is said to be a minimal intervention set relative to JG,Y K if there is no X0 ⇢ X such that µx[X0] = µx for every SCM conforming to the G.
For instance, the MISs corresponding to the causal graphs in Fig. 3 are {;, {X}, {Z}}, which do not include {X,Z} since µx = µx,z . The MISs are determined without considering the UCs in a causal graph. The empty set and all singletons in an (Y )
G are MISs for G with respect to Y . The task of
finding the best arm among all possible arms can be reduced to a search within the MISs. Proposition 1 (Minimality). A set of variables X ✓ V\{Y } is a minimal intervention set for G with respect to Y if and only if X ✓ an (Y )
GX .
All the MISs given JG,Y K can be determined without explicitly enumerating 2V\{Y } while checking the condition in Prop. 1. We provide an efficient recursive algorithm enumerating the complete set of MISs given G and Y (Appendix A), which runs in O(mn2) where m is the number of MISs.
3In settings where this is not the case, one can spend the first interactions with the environment to learn the causal graph G from observational [Spirtes et al., 2001] or experimental data [Kocaoglu et al., 2017].
Property 2. Partial-orders among arms
We now explore the partial-orders among subsets of V\{Y } within the MISs. Given the causal diagram G, it is possible that intervening on some variables is always as good as intervening on another set of variables (regardless of the parametrization of the underlying model). Formally, there can be two different sets of variables W,Z ✓ V\{Y } such that
max w2D(W) µw max z2D(Z) µz
in every possible SCM conforming to G. If that is the case, it would be unnecessary (and possibly harmful in terms of sample efficiency) to play arms D (W). We next define Possibly-Optimal MIS, which incorporates the partial-orderedness among subsets of V\{Y } into MIS denoting the optimal value for a X ✓ V\{Y } given a SCM by x⇤. Definition 2 (Possibly-Optimal Minimal Intervention Set (POMIS)). Given information JG,Y K, let X be a MIS. If there exists a SCM conforming to G such that µx⇤ > 8Z2Z\{X}µz⇤ , where Z is the set of MISs with respect to G and Y , then X is a possibly-optimal minimal intervention set with respect to the information JG,Y K.
Intuitively, one may believe that the best action will be to intervene on the direct causes (parents) of the reward variable Y , since this would entail a higher degree of “controllability” of Y within the system. This, in fact, holds true if Y is not confounded with any of its ancestors, which includes the case where no unobserved confounders are present in the system (i.e., Markovian models). Proposition 2. Given information JG,Y K, if Y is not confounded with an(Y )G via unobserved confounders, then pa(Y )G is the only POMIS.
Corollary 3 (Markovian POMIS). Given JG,Y K, if G is Markovian, then pa(Y )G is the only POMIS.
For instance, in Fig. 3a, {{X}} is the set of POMISs. Whenever unobserved confounders (UCs) are present,4 on the other hand, the analysis becomes more involved. To witness, let us analyze the maximum achievable rewards of the MISs in the other causal diagrams in Fig. 3. We start with Fig. 3b and note that µz⇤ µx⇤ since µz⇤ = P x µxP (x|do(z⇤)) P x µ⇤ x P (x|do(z⇤)) = µx⇤ . On the other hand, µ; is not comparable to µx⇤ . For a concrete example, consider a SCM where the domains of variables are {0, 1}. Let U be the UC between Y and Z where P (U = 1) = 0.5. Let fZ(u) = 1 u, fX(z) = z, and fY (x,u) = x u, where is the exclusive-or function. If X is not intervened on, x will be 1 u yielding y = 1 for both cases u = 0 or u = 1 so that µ; = 1. However, if X is intervened to either 0 or 1, y will be 1 only half the time since P (U = 1) = 0.5, which results in µx⇤ = 0.5. We also provide in Appendix A a SCM such that µ; < µx⇤ holds true. This model (µ; > µx⇤ ) illustrates an interesting phenomenon — allowing an UC to affect Y freely may lead to a higher reward, which may be broken upon interventions. We now consider the different confounding structure shown in Fig. 3c (similar to Fig. 1b), where the variable Z lies outside of the influence of the UC associated with Y . In this case, intervening on Z leads to a higher reward, µz⇤ µ;. To witness, note that µ; = P z E [Y |z]P (z) = P z µzP (z) P z µz⇤P (z) = µz⇤ . However, µz⇤ and µx⇤ are incomparable, which is shown through two models provided in Appendix A. Finally, we can add the confounders of the two previous models, which is shown in Fig. 3d. In this case, all three µx⇤ , µz⇤ , and µ; are incomparable. One can imagine scenarios where the influence of the UCs are weak enough so that corresponding models produce results similar to Figs. 3a to 3c.
It’s clear that the interplay between the location of the intervened variable, the outcome variable, and the UCs entails non-trivial interactions and consequences in terms of the reward. The table in Fig. 3e highlights the arms that are contenders to generate the highest rewards in each model (i.e., each arm intervenes a POMIS to specific values), while intervening on a non-POMIS represents a waste of resources. Interestingly, the only parent of Y , i.e., X , is not dominated by any other arms in any of the scenarios discussed. In words, this suggests that the intuition that controlling variables closer to Y is not entirely lost even when UCs are present; they are not the only POMIS, but certainly one of them. Given that more complex mechanisms cannot be, in general, ruled out, performing experiments would be required to identify the best arm. Still, the results of the table guarantee that the search can be refined so that MAB solvers can discard arms that cannot lead to profitable outcomes, and converge faster to playing the optimal arm.
4Recall that unobserved confounders are represented in the graph as bidirected dashed edges.
3 Graphical characterization of POMIS
Our goal in this section is to graphically characterize POMISs. We will leverage the discussion in the previous section and note that UCs connected to a reward variable affect the reward distributions in a way that intervening on a variable outside the coverage of such UCs (including no UC) can be optimal — e.g., {X} for Fig. 3a, ; for Figs. 3b and 3d, and {Z} for Fig. 3c. We introduce two graphical concepts to help characterizing this property. Definition 3 (Unobserved-Confounders’ Territory). Given information JG,Y K, let H be G [An (Y )
G ]. A set of variables T ✓ V (H) containing Y is called an UC-territory on G with
respect to Y if De (T) H = T and CC (T) H = T.
An UC-territory T is said to be minimal if no T0 ⇢ T is an UC-territory. A minimal UC-Territory (MUCT) for G and Y can be constructed by extending a set of variables, starting from {Y }, alternatively updating the set with the c-component and descendants of the set. Definition 4 (Interventional Border). Let T be a minimal UC-territory on G with respect to Y . Then, X = pa (T)
G \T is called an interventional border for G with respect to Y .
The interventional border (IB) encompasses essentially the parents of the MUCT. For concreteness, consider Fig. 4a, and note that {W ,X,Y ,Z} is the MUCT for the causal graph with respect to Y , and the IB is {S,T} (marked in pink and blue in the graph, respectively). As its name suggests, MUCT is a set of endogenous variables governed by a set of UCs where at least one UC is adjacent to a reward variable. Specifically, the reward is determined by values of: (1) the UCs governing the MUCT; (2) a set of unobserved variables (other than the UCs) where each affects an endogenous variable in the MUCT; and (3) the IB. In other words, there is no UC interplaying across MUCT and its outside so that µx = E[Y |x] where x is a value assigned to the IB X. We now connect MUCT and IB with POMIS. Let MUCT(G,Y ) and IB(G,Y ) be, respectively, the MUCT and IB given JG,Y K. Proposition 4. IB(G,Y ) is a POMIS given JG,Y K.
The main strategy of the proof is to construct a SCM M where intervening on any variable in MUCT(G,Y ) causes significant loss of reward. It seems that MUCT and IB can only identify a single POMIS given JG,Y K. However, they, in fact, serve as basic units to identify all POMISs. Proposition 5. Given JG,Y K, IB(GW,Y ) is a POMIS, for any W ✓ V\ {Y }.
Prop. 5 generalizes Prop. 4 for when W 6= ; while taking care of UCs across MUCT(GW,Y ), and its outside in the original causal graph G. See Fig. 4d, for an instance, where IB(G
W ,Y ) = {W ,T}.
Intervening on W cuts the influence of S and the UC between W and X , while still allowing the UC to affect X .5 Similarly, one can see in Fig. 4b that IB(G
X ,Y ) = {T ,W ,X} where
intervening on X lets Y be the only element of MUCT making its parents an interventional border, hence, a POMIS. Note that pa(Y )G is always a POMIS since MUCT(Gpa(Y )G ,Y ) = {Y } and IB(G
pa(Y )G ,Y ) = pa(Y )G. With Prop. 5, one can enumerate the POMISs given JG,Y K considering
all subsets of V\ {Y }. We show in the sequel that this strategy encompasses all the POMISs. Theorem 6. Given JG,Y K, X ✓ V\{Y } is a POMIS if and only if IB(GX,Y ) = X.
5Note that exogenous variables that do not affect more than one endogenous variable (i.e., non-UCs) are not explicitly represented in the graph.
Algorithm 1 Algorithm enumerating all POMISs with JG,Y K 1: function POMISS(G, Y ) 2: T,X = MUCT (G,Y ) , IB (G,Y ); H = GX [T [X] 3: return {X} [ subPOMISs (H, Y , reversed (topological-sort (H)) \ (T \ {Y }) , ;) 4: function SUBPOMISS(G, Y , ⇡, O) 5: P = ; 6: for ⇡i 2 ⇡ do 7: T, X, ⇡0, O0 = MUCT(G⇡i ,Y ), IB(G⇡i ,Y ), ⇡
i+1:|⇡| \T, O [ ⇡1:i 1 8: if X \O0 = ; then 9: P = P [ {X} [ (subPOMISs (GX [T [X] , Y , ⇡
0, O0) if ⇡0 6= ; else ;) 10: return P
Algorithm 2 POMIS-based kl-UCB 1: function POMIS-KL-UCB(B,G,Y , f ,T ) 2: Input: B, a SCM-MAB, G, a causal diagram; Y , a reward variable 3: A = S X2POMISs(G, Y ) D(X)
4: kl-UCB(B, A, f , T )
Thm. 6 provides a graphical necessary and sufficient condition for a set of variables being a POMIS given JG,Y K. This characterization allows one to determine all possible arms in a SCM-MAB that are worth intervening on, and, therefore, being free from pulling the other unnecessary arms.
4 Algorithmic characterization of POMIS
Although the graphical characterization provides a means to enumerate the complete set of POMISs given JG,Y K, a naively implemented algorithm requires time exponential in |V|. We construct an efficient algorithm (Alg. 1) that enumerates all the POMISs based on Props. 7 and 8 below and the graphical characterization introduced in the previous section (Thm. 6). Proposition 7. Let T and X be the MUCT(GW,Y ) and IB(GW,Y ), respectively, relative to G and Y . Then, for any Z ✓ V\T, MUCT(GX[Z,Y ) = T and IB(GX[Z,Y ) = X.
Proposition 8. Let H=GX [T [X] where T and X are MUCT and IB given JGW,Y K, respectively. Then, for any W
0 ✓ T\ {Y }, HW0 and GW[W0 yield the same MUCT and IB with respect to Y .
Prop. 7 allows one to avoid having to examine GW for every W ✓ V\{Y }. Prop. 8 characterizes the recursive nature of MUCT and IB, where identification of POMISs can be evaluated by subgraphs. Based on these results, we design a recursive algorithm (Alg. 1) to explore subsets of V\{Y } with a certain order. See Fig. 4e for an example where subsets of {X,Z,W} are connected based on set inclusion relationship and an order of variables, e.g., (X,Z,W ). That is, there exists a directed edge between two sets if (i) one set is larger than the other by a variable and (ii) the variable’s index (as in the order) is larger than other variable’s index in the smaller set. The diagram traces how the algorithm will explore the subsets following the edges, while effectively skipping nodes.
Given G and Y , POMISs (Alg. 1) computes a POMIS, i.e., IB(G,Y ). Then, a recursive procedure subPOMISs is called with an order of variables (Line 3). Then subPOMISs examines POMISs by intervening on a single variable against the given graph (Line 6–9). If the IB (X in Line 7) of such an intervened graph intersects with O0 (a set of variables that should be considered in other branch), then no subsequent call is made (Line 8). Otherwise, a subsequent subPOMISs call will take as arguments an MUCT-IB induced subgraph (Prop. 8), a refined order, and a set of variables not to be intervened in the given branch. For clarity, we provide a detailed working example in Appendix C with Fig. 4a where the algorithm explores only four intervened graphs (G, G{X}, G{Z}, G{W}) and generates the complete set of POMISs {{S,T}, {T ,W}, {T ,W ,X}}. Theorem 9 (Soundness and Completeness). Given information JG,Y K, the algorithm POMISs (Alg. 1) returns all, and only POMISs.
The POMISs algorithm can be combined with a MAB algorithm, such as the kl-UCB, creating a simple yet effective SCM-MAB solver (see Alg. 2). kl-UCB satisfies lim sup
n!1 E[Regn] log(n)
P x:µx<µ⇤ µ ⇤ µx KL(µx,µ⇤) where KL is Kullback-Leibler divergence between two Bernoulli distributions [Garivier and Cappé, 2011]. It is clear that the reduction in the size of arms will lower the upper bounds of the corresponding cumulative regrets.
5 Experiments
In this section, we present empirical results demonstrating that the selection of arms based on POMISs makes standard MAB solvers converge faster to an optimal arm. We employ two popular MAB solvers, kl-UCB, which enjoys cumulative regret growing logarithmically with the number of rounds [Cappé et al., 2013], and Thompson sampling (TS, Thompson [1933]), which has strong empirical performance [Kaufmann et al., 2012]. We considered four strategies for selecting arms, including POMISs, MISs, Brute-force, and All-at-once, where Brute-force evaluates all combinations of arms S X✓V\{Y } D (X), and All-at-once considers intervening in all variables simultaneously, D (V\{Y }), oblivious to the causal structure and any knowledge about the action space. The performance of the eight (4 ⇥ 2) algorithms are evaluated relative to three different SCM-MAB instances (the detailed parametrizations are provided in Appendix D). We set the horizon large enough so as to observe near convergence, and repeat each simulation 300 times. We plot (i) the average cumulative regrets (CR) along with their respective standard deviations and (ii) the probability of an optimal arm being selected averaged over the repeated tests (OAP).6,7
Task 1: We start by analyzing a Markovian model. We note that by Cor. 3, searching for the arms within the parent set is sufficient in this case. The number of arms for POMISs, MISs, Brute-force, and All-at-once are 4, 49, 81, and 16, respectively. Note that there are 4 optimal arms within All-at-once arms — for instance, if the parent configuration is X1 = x1,X2 = x2, this strategy will also include combinations of Z1 = z1,Z2 = z2, 8z1, z2. The simulated results are shown in Fig. 5a. CR at round 1000 with kl-UCB are 3.0, 48.0, 72, and 12 (in the order), and all strategies were able to find the optimal arms at this time. POMIS and All-at-once first reached 95% OAP at round 20 and 66, respectively. There are two interesting observations at this point. First, at an
6All the code is available at https://github.com/sanghack81/SCMMAB-NIPS2018 7One may surmise that combinatorial bandit (CB) algorithms can be used to solve SCM-MAB instances by noting that an intervention can be encoded as a binary vector, where each dimension in the vector corresponds to intervening on a single variable with a specific value. However, the two settings invoke a very different set of assumptions, which makes their solvers somewhat difficult to compare in some reasonably fair way. For instance, the current generation of CB algorithms is oblivious to the underlying causal structure, which makes them resemble very closely the Brute-force strategy, the worst possible method for SCM-MABs. Further, the assumption of linearity is arguably one of the most popular considered by CB solvers. The corresponding algorithms, however, will be unable to learn the arms’ rewards properly since a SCM-MAB is nonparametric, making no assumption about the underlying structural mechanisms. These are just a few immediate examples of the mismatches between the current generation of algorithms for both causal and combinatorial bandits.
early stage, the OAP for MISs is smaller than that for Brute-force since MIS has only 1 optimal arm among 49 arms, while Brute-force has 9 among 81. The advantage of employing MIS over Brute-force is only observed after a sufficiently large number of plays. More interestingly, POMIS and All-at-once have the same ratio of optimal to non-optimal arms (1:3 versus 4:12); however, POMIS dominates All-at-once since the agent can learn more about the mean reward of the optimal arm while playing non-optimal arms less. Naturally, this translates into less variability and additional certainty about the optimal arm, even in Markovian settings.
Task 2: We consider the setting known as instrumental variable (IV), which was shown in Fig. 3c. The optimal arm in this simulation is setting Z = 0. The numbers of arms for the four strategies are 4, 5, 9, and 4, respectively. The results are shown in Fig. 5b. Since the All-at-once strategy only considers non-optimal arms (i.e., pulling Z, X together), it incurs linear regret without ever selecting an optimal arm (0%). The CRs (and OAPs) at round 1000 with TS are POMIS 16.1 (98.67%), MIS 21.4 (99.00%), Brute-force 42.9 (93.33%), and All-at-once 272.1 (0%). At round 5000, where Brute-force has nearly converged, the ratio of the CRs of Brute-force and POMIS is 54.2/18.1 = 2.99 ≈ 2.67 = (9−1)/(4−1). POMIS, MIS, and Brute-force first hit 95% OAP at rounds 172, 214, and 435, respectively.
Task 3: Finally, we study the more involved scenario shown in Fig. 4a. In this case, the optimal arm is intervening on {S, T}, which means that the system should follow its natural flow of UCs, which All-at-once is unable to "pull." There are 16, 75, 243, and 32 arms for the strategies (in that order). The results are shown in Fig. 5c. The CRs (and OAPs) at round 10000 with TS are POMIS 91.4 (99.0%), MIS 472.4 (97.0%), Brute-force 1469.0 (85.0%), and All-at-once 2784.8 (0%). Similarly, the ratio of the CRs of Brute-force and POMIS (at round 10000) is 1469.0/91.4 = 16.07 ≈ 16.13 = (243−1)/(16−1), which is expected to increase since Brute-force has not yet converged at that point. Only POMIS and MIS reached 95% OAP, first doing so at rounds 684 and 3544, respectively.
We start by noticing that the reduction in the CRs is approximately proportional to the reduction in the number of non-optimal arms pulled under (PO)MIS by the corresponding algorithm, which makes the POMIS-based solver the clear winner throughout the simulations. It is conceivable that the number of arms examined by All-at-once is smaller than that for POMIS in a specific SCM-MAB instance, which would entail a lower CR for the former. However, such a lower CR in some instances does not constitute any sort of assurance, since arms excluded from All-at-once, but included in POMIS, can be optimal in some SCM-MAB instance conforming to ⟦G, Y⟧. Furthermore, a POMIS-based strategy always dominates the corresponding MIS and Brute-force ones. These observations together suggest that, in practice, a POMIS-based strategy should be preferred given that it will always converge and will usually be faster than its counterparts. Remarkably, there is an interesting trade-off between having knowledge of the causal structure versus not knowing the corresponding dependency structure among arms, and potentially incurring linear regret (All-at-once) or exponential slowdown (Brute-force). In practice, in cases where the causal structure is unknown, the pulls of the arms themselves can be used as experiments and could be coupled with efficient strategies to simultaneously learn the causal structure [Kocaoglu et al., 2017].
6 Conclusions
We studied the problem of deciding whether an agent should perform a causal intervention and, if so, which variables it should intervene upon. The problem was formalized using the language of structural causal models (SCMs) and cast as a new type of multi-armed bandit called the SCM-MAB. We started by noting that whenever the agent cannot measure all the variables in the environment (i.e., unobserved confounders exist), standard MAB algorithms that are oblivious to the underlying causal structure may not converge, regardless of the number of interventions performed in the environment. (We note that the causal structure can easily be learned in a typical MAB setting since the agent always has interventional capabilities.) We introduced a novel decision-making strategy based on properties following from the do-calculus, which allowed the removal of redundant arms, and on the partial orders among the sets of variables existing in the underlying causal system, which led to an understanding of the maximum achievable reward of each interventional set. Leveraging this new strategy based on the possibly-optimal minimal intervention sets (POMISs), we developed an algorithm that decides whether (and, if so, where) interventions should be performed in the underlying system. Finally, we showed by simulations that this causally-sensible strategy performs more efficiently and more robustly than its non-causal counterparts. We hope that the formal machinery and the algorithms developed here can help decision-makers make more principled and efficient decisions.
Acknowledgments
This research is supported in part by grants from IBM Research, Adobe Research, NSF IIS-1704352, and IIS-1750807 (CAREER). | 1. What is the main contribution of the paper in the context of multi-arm bandit problems?
2. How does the proposed algorithm differ from other approaches in terms of exploiting graphical criteria?
3. Can you provide more information about the experimental results, such as the comparison with other methods and the specific scenarios where the proposed method excelled?
4. Are there any limitations or potential drawbacks to the proposed approach, particularly in certain scenarios or cases?
5. How does the paper's focus on causal graphs and do-calculus impact the solution to the multi-arm bandit problem? | Review | Review
The paper discusses the multi-armed bandit problem of identifying the best action in sequential decision making. It focuses on models in which there are non-trivial dependencies between the reward distributions of the arms. The approach formulates the problem with causal graphs and do-calculus. The key contribution is the proposal of an algorithm to find a set of variables called a POMIS (possibly-optimal minimal intervention set). A POMIS consists of a subset of all variables which can be intervened on to obtain the optimal strategy, and the algorithm exploits graphical criteria to find it. The paper provides useful results, as the strategy of removing redundant arms is shown in the experiments to improve the cumulative regret relative to approaches that ignore the structure of relationships. It achieves this by essentially reducing the space to search for optimal actions and avoiding pulls of unnecessary arms. As described in Section 5, there are potential cases where alternative approaches would assess a smaller number of arms. |
NIPS | Title
Structural Causal Bandits: Where to Intervene?
Abstract
We study the problem of identifying the best action in a sequential decisionmaking setting when the reward distributions of the arms exhibit a non-trivial dependence structure, which is governed by the underlying causal model of the domain where the agent is deployed. In this setting, playing an arm corresponds to intervening on a set of variables and setting them to specific values. In this paper, we show that whenever the underlying causal model is not taken into account during the decision-making process, the standard strategies of simultaneously intervening on all variables or on all the subsets of the variables may, in general, lead to suboptimal policies, regardless of the number of interventions performed by the agent in the environment. We formally acknowledge this phenomenon and investigate structural properties implied by the underlying causal model, which lead to a complete characterization of the relationships between the arms’ distributions. We leverage this characterization to build a new algorithm that takes as input a causal structure and finds a minimal, sound, and complete set of qualified arms that an agent should play to maximize its expected reward. We empirically demonstrate that the new strategy learns an optimal policy and leads to orders of magnitude faster convergence rates when compared with its causal-insensitive counterparts.
1 Introduction
The multi-armed bandit (MAB) problem is one of the prototypical settings studied in the sequential decision-making literature [Lai and Robbins, 1985, Even-Dar et al., 2006, Bubeck and Cesa-Bianchi, 2012]. An agent needs to decide which arm to pull and receives a corresponding reward at each time step while keeping the goal of maximizing its cumulative reward in the long run. The challenge is the inherent trade-off between exploiting known arms versus exploring new reward opportunities [Sutton and Barto, 1998, Szepesvári, 2010]. There is a wide range of assumptions underlying MABs, but in most of the traditional settings, the arms' rewards are assumed to be independent, which means that knowing the reward distribution of one arm has no implication for the rewards of the other arms. Many strategies were developed to solve this problem, including classic algorithms such as ε-greedy, variants of UCB (Auer et al., 2002, Cappé et al., 2013), and Thompson sampling [Thompson, 1933].
Recently, the existence of some non-trivial dependencies among arms has been acknowledged in the literature and studied under the rubric of structured bandits, which include settings such as linear [Dani et al., 2008], combinatorial [Cesa-Bianchi and Lugosi, 2012], unimodal [Combes and Proutiere, 2014], and Lipschitz [Magureanu et al., 2014], just to name a few. For example, a linear (or combinatorial) bandit imposes that an action x_t ∈ ℝ^d (or {0,1}^d) at time step t incurs a cost ℓ_t^⊤ x_t, where ℓ_t is a loss vector chosen by, e.g., an adversary. In this case, an index-based MAB algorithm, oblivious to the structural properties, can be suboptimal.
In another line of investigation, rich environments with complex dependency structures are modeled explicitly through the use of causal graphs, where nodes represent decisions and outcome variables, and direct edges represent direct influence of one variable on another [Pearl, 2000]. Despite the
apparent connection between MABs and causality, only recently has the use of causal reasoning been incorporated into the design of MAB algorithms. For instance, Bareinboim et al. [2015] first explored the connection between causal models with unobserved confounders (UCs) and reinforcement learning, where latent factors affect both the reward distribution and the player's intuition. The key observation used in the paper is that while standard MAB algorithms optimize based on the do-distribution (formally written as E[Y | do(X)] or E[Y_x]), the simplest type of counterfactual, this approach is dominated by another strategy using a more detailed counterfactual as the basis of the optimization process (i.e., E[Y_x | X = x′]); this general strategy was called the regret decision criterion (RDC). This strategy was later extended to handle counterfactual distributions of higher dimensionality by Forney et al. [2017]. Further, Lattimore et al. [2016] and Sen et al. [2017] studied the problem of best arm identification through importance weighting, where information on how playing arms influences the direct causes (parents, in causal terminology) of a reward variable is available. Zhang and Bareinboim [2017] leveraged causal graphs to solve the problem of off-policy evaluation in the presence of UCs. They noted that whenever UCs are present, traditional off-policy methods can be arbitrarily biased, leading to linear regret. They then showed how to solve the off-policy evaluation problem by incorporating the causal bounds into the decision-making procedure.1 Overall, these works showed different aspects of the same phenomenon: whenever UCs are present in the real world, the expected guarantees provided by standard methods are no longer valid, which translates to an inability to converge to any reasonable policy. They then showed that convergence can be restored once the causal structure is acknowledged and used during the decision-making process.
In this paper, we focus on the challenge of identifying the best action in MABs where the arms correspond to interventions on an arbitrary causal graph, including when latent variables confound the observed relations (i.e., semi-Markovian causal models). To understand this challenge, we first note that a standard MAB can be seen as the simple causal model shown in Fig. 1a, where X represents an arm (with K different values), Y the reward variable, and U the unobserved variable that generates the randomness of Y.2 After a sufficiently large number of pulls of X (chosen by the specific algorithm), Y's average reward can be determined with high confidence.
Whenever a set of UCs affect more than one observed variable, however, novel, non-trivial challenges arise. To witness, consider the more involved MAB structure shown in Fig. 1b, where an unobserved confounder U affects both the action variable X1 and the reward Y. A naive approach for an algorithm to play such a bandit would be to pull arms in a combinatorial manner, i.e., combining both variables (X1 × X2) so that the arms are D(X1) × D(X2), where D(X) is the domain of X. One may surmise that this is a valid strategy, albeit not the most efficient one. Somewhat unexpectedly, however, Fig. 1c shows that this is not the case: the optimal action comes from pulling X2 and ignoring X1, while pulling {X1, X2} together would lead to subpar cumulative rewards (regardless of the number of iterations) since it simply cannot pull the optimal arm (Fig. 1d). After all, if one is oblivious to the causal structure and decides to take all intervenable variables as one (in this case, X1 × X2), indiscriminately, one may be doomed to learn a suboptimal policy.
1On another line of investigation, Ortega and Braun [2014] introduced a generalized version of Thompson sampling applied to the problem of adaptive control.
2In causal notation, Y ← f_Y(U, X), which means that Y's value is determined by X and the realization of the latent variable U. If f_Y is linear, we would have a (stochastic) linear bandit. Our results do not constrain the types of structural functions, as is usual within nonparametric causal inference [Pearl, 2000, Ch. 7].
In this paper, we investigate this phenomenon, and more broadly, causal MABs with a non-trivial dependency structure between the arms. More specifically, our contributions are as follows: (1) We formulate the SCM-MAB problem, which is a structured multi-armed bandit instance within the causal framework. We then derive the structural properties of a SCM-MAB, which are computable from any causal model, including arms' equivalence based on do-calculus [Pearl, 1995], and partial-orderedness among sets of variables associated with arms with regard to the maximum rewards achievable. (2) We characterize a special set of variables called a POMIS (possibly-optimal minimal intervention set), which is worth intervening on based on the aforementioned partial orders. We then introduce an algorithm that identifies the complete set of POMISs so that only the subset of arms associated with them needs to be explored by a MAB algorithm. Simulations corroborate our findings.
Big picture The multi-armed bandit is a rich setting in which a huge number of variants has been studied in the literature. Different aspects of the decision-making process have been analyzed and well-understood in the last decades, which include different functional forms (e.g., linear, Lipschitz, Gaussian process), types of feedback experienced by the agent (bandit, semi-bandit, full), the adversarial or i.i.d. nature of the interactions, just to cite some of the most popular ones. Our study of SCM-MABs puts the causal dimension front and center in the map. In particular, we fully acknowledge the existence of a causal structure among the underlying variables (whenever not known a priori, see Footnote 3), and leverage the qualitative relations among them. This is in clear contrast with the prevailing practice that is more quantitative and, almost invariably, is oblivious to the underlying causal structure (as shown in Fig. 1a). We outline in
Fig. 2 an initial map that shows the relationship between these dimensions; our goal here is not to be exhaustive, nor prescriptive, but to help give some perspective. In this paper, we study bandits with no constraints on the underlying functional form (nonparametric, in the language of causality), i.i.d. stochastic rewards, and an explicit causal structure acknowledged by the agent.
Preliminaries: notations and structural causal models
We follow the notation used in the causal inference literature. A capital letter is used for a variable or a mathematical object. The domain of X is denoted by D(X). A bold capital letter is for a set of variables, e.g., X = {X_i}_{i=1}^n, while a lowercase letter x ∈ D(X) is a value assigned to X, and x ∈ D(X) = ×_{X∈X} D(X). We denote by x[W] the values of x corresponding to W ∩ X. A graph G = ⟨V, E⟩ is a pair of vertices V and edges E. We adopt the family relations pa, ch, an, and de to denote parents, children, ancestors, and descendants of a given variable; Pa, Ch, An, and De extend pa, ch, an, and de by including the argument in the result, e.g., Pa(X)_G = pa(X)_G ∪ {X}. With a set of variables as argument, pa(X)_G = ∪_{X∈X} pa(X)_G, and similarly for the other relations. We denote by V(G) the set of variables in G. G[V'] for V' ⊆ V(G) is a vertex-induced subgraph where all edges among V' are preserved. We define G\X as G[V(G)\X] for X ⊆ V(G). We adopt the language of Structural Causal Models (SCM) [Pearl, 2000, Ch. 7]. An SCM M is a tuple ⟨U, V, F, P(U)⟩, where U is a set of exogenous (unobserved or latent) variables and V is a set of endogenous (observed) variables. F is a set of deterministic functions F = {f_i}, where f_i determines the value of V_i ∈ V based on endogenous variables PA_i ⊆ V\{V_i} and exogenous variables U_i ⊆ U, that is, v_i ← f_i(pa_i, u_i). P(U) is a joint distribution over the exogenous variables. A causal diagram G = ⟨V, E⟩, associated with M, is a tuple of vertices V (the endogenous variables) and edges E, where there is a directed edge V_i → V_j ∈ E if V_i ∈ PA_j, and a bidirected edge between V_i and V_j if they share an unobserved confounder, i.e., U_i ∩ U_j ≠ ∅. Note that pa(V_i)_G corresponds to PA_i. The probability of Y = y when X is held fixed at x (i.e., intervened on) is denoted by P(y | do(x)), where intervention on X is graphically represented by G_X, the graph G with incoming edges onto X removed. We denote by CC(X)_G the c-component of G that contains X, where a c-component is a maximal set of vertices connected with bidirected edges [Tian and Pearl, 2002]. We define CC(X)_G = ∪_{X∈X} CC(X)_G. For a more detailed discussion on the properties of SCMs, we refer readers to [Pearl, 2000, Bareinboim and Pearl, 2016]. For all the proofs and appendices, please refer to the full technical report [Lee and Bareinboim, 2018].
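The graphical notions above (pa, an, de, and c-components) are simple to compute. The sketch below is our own illustration rather than code from the paper: it represents a causal diagram by its directed-parent sets and its bidirected edges, and implements the family relations and c-components used throughout; all class, function, and variable names are our own choices.

```python
from itertools import chain

class CausalDiagram:
    """A causal diagram: `parents` maps each vertex to its observed parents,
    and `bidirected` is a set of frozensets {A, B} for confounded pairs."""
    def __init__(self, parents, bidirected=()):
        self.V = set(parents)
        self.parents = {v: set(ps) for v, ps in parents.items()}
        self.bidirected = {frozenset(e) for e in bidirected}

    def pa(self, X):  # parents of a set of vertices
        return set(chain.from_iterable(self.parents[v] for v in X))

    def ch(self, X):  # children of a set of vertices
        return {v for v in self.V if self.parents[v] & set(X)}

    def an(self, X):  # strict ancestors, computed as a fixed point over pa
        anc, frontier = set(), self.pa(X)
        while frontier - anc:
            anc |= frontier
            frontier = self.pa(frontier)
        return anc

    def An(self, X):  # ancestors including X itself
        return self.an(X) | set(X)

    def de(self, X):  # strict descendants, computed as a fixed point over ch
        des, frontier = set(), self.ch(X)
        while frontier - des:
            des |= frontier
            frontier = self.ch(frontier)
        return des

    def De(self, X):
        return self.de(X) | set(X)

    def CC(self, X):  # union of the c-components containing the vertices in X
        comp, frontier = set(X), set(X)
        while frontier:
            v = frontier.pop()
            for e in self.bidirected:
                if v in e:
                    new = set(e) - comp
                    comp |= new
                    frontier |= new
        return comp

# The instrumental-variable graph of Fig. 3c: Z -> X -> Y with X and Y confounded.
G_iv = CausalDiagram({'Z': [], 'X': ['Z'], 'Y': ['X']}, [('X', 'Y')])
print(G_iv.An({'Y'}))    # {'X', 'Y', 'Z'}
print(G_iv.CC({'Y'}))    # {'X', 'Y'}
```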
2 Multi-armed bandits with structural causal models
We recall that MABs consider a sequential decision-making setting where pulling one of the K available arms at each round gives the player a stochastic reward from an unknown distribution associated with the corresponding arm. The goal is to minimize (maximize) the cumulative regret (reward) after T rounds. The mean reward of an arm a is denoted by μ_a and the maximal reward is μ* = max_{1≤a≤K} μ_a. We focus on the cumulative regret,
$$\mathrm{Reg}_T = T\mu^* - \sum_{t=1}^{T} \mathbb{E}[Y_{A_t}] = \sum_{a=1}^{K} \Delta_a\, \mathbb{E}[T_a(T)],$$
where A_t is the arm played at time t, T_a(t) is the number of times arm a has been played after t rounds, and Δ_a = μ* − μ_a. We can now explicitly connect a MAB instance to its SCM counterpart. Let M be an SCM ⟨U, V, F, P(U)⟩ and Y ∈ V be a reward variable, where D(Y) ⊆ ℝ. The bandit contains arms {x ∈ D(X) | X ⊆ V\{Y}}, the set of all possible interventions on endogenous variables except the reward variable. Each arm A_x (or simply x) is associated with a reward distribution P(Y | do(x)) whose mean reward μ_x is E[Y | do(x)]. We call this setting a SCM-MAB, which is fully represented by the pair ⟨M, Y⟩. Throughout this paper, we assume that the causal graph G of M is fully accessible to the agent,3 although its parametrization is unknown: that is, an agent facing a SCM-MAB ⟨M, Y⟩ plays arms with knowledge of G and Y, but not of F and P(U). For simplicity, we denote the information provided to an agent playing a SCM-MAB by ⟦G, Y⟧. We now investigate some key structural properties that follow from the causal structure G of the SCM-MAB.
Property 1. Equivalence among arms
We start by noting that do-calculus [Pearl, 1995] provides rules to evaluate invariances in the interventional space. In particular, we focus here on Rule 3, which ascertains the condition under which a set of interventions does not have an effect on the outcome variable, i.e., P(y | do(x, z), w) = P(y | do(x), w). Since arms correspond to interventions (including the null intervention) and there is no contextual information, we consider examining P(y | do(x, z)) = P(y | do(x)) through (Y ⊥ Z | X) in G_{X∪Z}, which implies μ_{x,z} = μ_x. If valid, this condition implies that it is sufficient to play only one arm among the arms in the equivalence class. Definition 1 (Minimal Intervention Set (MIS)). A set of variables X ⊆ V\{Y} is said to be a minimal intervention set relative to ⟦G, Y⟧ if there is no X' ⊂ X such that μ_{x[X']} = μ_x for every SCM conforming to G.
For instance, the MISs corresponding to the causal graphs in Fig. 3 are {∅, {X}, {Z}}, which do not include {X, Z} since μ_x = μ_{x,z}. The MISs are determined without considering the UCs in a causal graph. The empty set and all singletons in an(Y)_G are MISs for G with respect to Y. The task of finding the best arm among all possible arms can be reduced to a search within the MISs. Proposition 1 (Minimality). A set of variables X ⊆ V\{Y} is a minimal intervention set for G with respect to Y if and only if X ⊆ an(Y)_{G_X}. All the MISs given ⟦G, Y⟧ can be determined without explicitly enumerating 2^{V\{Y}} while checking the condition in Prop. 1. We provide an efficient recursive algorithm enumerating the complete set of MISs given G and Y (Appendix A), which runs in O(mn²), where m is the number of MISs.
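As a concrete illustration of Prop. 1, the sketch below (our own, reusing the CausalDiagram helper sketched earlier; names are hypothetical) checks whether a candidate set X is a MIS by removing the incoming edges onto X and testing X ⊆ an(Y) in the mutilated graph. The brute-force enumeration over subsets is included only for illustration and is exponential, unlike the O(mn²) algorithm referenced in Appendix A.

```python
from itertools import combinations

def mutilate(G, X):
    """Return G_X: all edges pointing into X (directed parents and bidirected edges at X) removed."""
    X = set(X)
    parents = {v: (set() if v in X else set(G.parents[v])) for v in G.V}
    bi = {e for e in G.bidirected if not (e & X)}
    return CausalDiagram(parents, bi)

def is_mis(G, Y, X):
    """Prop. 1: X is a MIS for (G, Y) iff X is a subset of an(Y) in G_X."""
    return set(X) <= mutilate(G, X).an({Y})

def all_mis_bruteforce(G, Y):
    """Illustrative only: enumerate every subset of V \\ {Y} and keep the MISs."""
    others = sorted(G.V - {Y})
    return [set(S) for r in range(len(others) + 1)
            for S in combinations(others, r) if is_mis(G, Y, S)]

# On the IV graph (Fig. 3c) this returns [set(), {'X'}, {'Z'}], matching the text.
print(all_mis_bruteforce(G_iv, 'Y'))
```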
3In settings where this is not the case, one can spend the first interactions with the environment to learn the causal graph G from observational [Spirtes et al., 2001] or experimental data [Kocaoglu et al., 2017].
Property 2. Partial-orders among arms
We now explore the partial orders among subsets of V\{Y} within the MISs. Given the causal diagram G, it is possible that intervening on some set of variables is always at least as good as intervening on another set of variables (regardless of the parametrization of the underlying model). Formally, there can be two different sets of variables W, Z ⊆ V\{Y} such that
$$\max_{\mathbf{w}\in D(\mathbf{W})} \mu_{\mathbf{w}} \;\le\; \max_{\mathbf{z}\in D(\mathbf{Z})} \mu_{\mathbf{z}}$$
in every possible SCM conforming to G. If that is the case, it would be unnecessary (and possibly harmful in terms of sample efficiency) to play the arms D(W). We next define the Possibly-Optimal MIS, which incorporates this partial-orderedness among subsets of V\{Y} into the MIS; we denote the optimal value for X ⊆ V\{Y} in a given SCM by x*. Definition 2 (Possibly-Optimal Minimal Intervention Set (POMIS)). Given information ⟦G, Y⟧, let X be a MIS. If there exists an SCM conforming to G such that μ_{x*} > μ_{z*} for all Z ∈ 𝒵\{X}, where 𝒵 is the set of MISs with respect to G and Y, then X is a possibly-optimal minimal intervention set with respect to the information ⟦G, Y⟧.
Intuitively, one may believe that the best action will be to intervene on the direct causes (parents) of the reward variable Y, since this would entail a higher degree of "controllability" of Y within the system. This, in fact, holds true if Y is not confounded with any of its ancestors, which includes the case where no unobserved confounders are present in the system (i.e., Markovian models). Proposition 2. Given information ⟦G, Y⟧, if Y is not confounded with an(Y)_G via unobserved confounders, then pa(Y)_G is the only POMIS.
Corollary 3 (Markovian POMIS). Given ⟦G, Y⟧, if G is Markovian, then pa(Y)_G is the only POMIS.
For instance, in Fig. 3a, {{X}} is the set of POMISs. Whenever unobserved confounders (UCs) are present,4 on the other hand, the analysis becomes more involved. To witness, let us analyze the maximum achievable rewards of the MISs in the other causal diagrams in Fig. 3. We start with Fig. 3b and note that μ_{z*} ≤ μ_{x*} since μ_{z*} = Σ_x μ_x P(x | do(z*)) ≤ Σ_x μ_{x*} P(x | do(z*)) = μ_{x*}. On the other hand, μ_∅ is not comparable to μ_{x*}. For a concrete example, consider an SCM where the domains of the variables are {0, 1}. Let U be the UC between Y and Z with P(U = 1) = 0.5. Let f_Z(u) = 1 − u, f_X(z) = z, and f_Y(x, u) = x ⊕ u, where ⊕ is the exclusive-or function. If X is not intervened on, x will be 1 − u, yielding y = 1 for both cases u = 0 and u = 1, so that μ_∅ = 1. However, if X is intervened on and set to either 0 or 1, y will be 1 only half the time since P(U = 1) = 0.5, which results in μ_{x*} = 0.5. We also provide in Appendix A an SCM such that μ_∅ < μ_{x*} holds true. This model (μ_∅ > μ_{x*}) illustrates an interesting phenomenon: allowing a UC to affect Y freely may lead to a higher reward, which may be broken upon intervention. We now consider the different confounding structure shown in Fig. 3c (similar to Fig. 1b), where the variable Z lies outside of the influence of the UC associated with Y. In this case, intervening on Z leads to a higher reward, μ_{z*} ≥ μ_∅. To witness, note that μ_∅ = Σ_z E[Y | z] P(z) = Σ_z μ_z P(z) ≤ Σ_z μ_{z*} P(z) = μ_{z*}. However, μ_{z*} and μ_{x*} are incomparable, which is shown through two models provided in Appendix A. Finally, we can add the confounders of the two previous models, which is shown in Fig. 3d. In this case, all three of μ_{x*}, μ_{z*}, and μ_∅ are incomparable. One can imagine scenarios where the influence of the UCs is weak enough that the corresponding models produce results similar to Figs. 3a to 3c.
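The XOR example above can be verified by direct enumeration; the short sketch below (our own check, not from the paper) computes μ_∅ and μ_{do(X=x)} exactly for the stated SCM.

```python
def mu_fig3b(do_x=None):
    """Exact mean reward for the SCM of Fig. 3b: f_Z(u)=1-u, f_X(z)=z, f_Y(x,u)=x XOR u."""
    total = 0.0
    for u, p_u in [(0, 0.5), (1, 0.5)]:
        z = 1 - u
        x = z if do_x is None else do_x   # natural mechanism unless X is intervened on
        y = x ^ u
        total += p_u * y
    return total

print(mu_fig3b())        # mu_empty    = 1.0
print(mu_fig3b(do_x=0))  # mu_do(X=0)  = 0.5
print(mu_fig3b(do_x=1))  # mu_do(X=1)  = 0.5
```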
It is clear that the interplay between the location of the intervened variable, the outcome variable, and the UCs entails non-trivial interactions and consequences in terms of the reward. The table in Fig. 3e highlights the arms that are contenders to generate the highest rewards in each model (i.e., each such arm sets a POMIS to specific values), while intervening on a non-POMIS represents a waste of resources. Interestingly, the only parent of Y, i.e., X, is not dominated by any other arm in any of the scenarios discussed. In words, this suggests that the intuition of controlling variables closer to Y is not entirely lost even when UCs are present; {X} is not the only POMIS, but it is certainly one of them. Given that more complex mechanisms cannot, in general, be ruled out, performing experiments would be required to identify the best arm. Still, the results of the table guarantee that the search can be refined so that MAB solvers can discard arms that cannot lead to profitable outcomes, and converge faster to playing the optimal arm.
4Recall that unobserved confounders are represented in the graph as bidirected dashed edges.
3 Graphical characterization of POMIS
Our goal in this section is to graphically characterize POMISs. We will leverage the discussion in the previous section and note that UCs connected to a reward variable affect the reward distributions in such a way that intervening on a variable outside the coverage of such UCs (including no UC at all) can be optimal, e.g., {X} for Fig. 3a, ∅ for Figs. 3b and 3d, and {Z} for Fig. 3c. We introduce two graphical concepts to help characterize this property. Definition 3 (Unobserved-Confounders' Territory). Given information ⟦G, Y⟧, let H be G[An(Y)_G]. A set of variables T ⊆ V(H) containing Y is called a UC-territory on G with respect to Y if De(T)_H = T and CC(T)_H = T.
A UC-territory T is said to be minimal if no T' ⊂ T is a UC-territory. A minimal UC-territory (MUCT) for G and Y can be constructed by extending a set of variables, starting from {Y}, alternately updating the set with the c-component and the descendants of the set. Definition 4 (Interventional Border). Let T be a minimal UC-territory on G with respect to Y. Then, X = pa(T)_G \ T is called an interventional border for G with respect to Y.
The interventional border (IB) encompasses essentially the parents of the MUCT. For concreteness, consider Fig. 4a, and note that {W, X, Y, Z} is the MUCT for the causal graph with respect to Y, and the IB is {S, T} (marked in pink and blue in the graph, respectively). As its name suggests, the MUCT is a set of endogenous variables governed by a set of UCs where at least one UC is adjacent to a reward variable. Specifically, the reward is determined by the values of: (1) the UCs governing the MUCT; (2) a set of unobserved variables (other than the UCs) where each affects an endogenous variable in the MUCT; and (3) the IB. In other words, there is no UC interplaying across the MUCT and its outside, so that μ_x = E[Y | x], where x is a value assigned to the IB X. We now connect MUCT and IB with POMIS. Let MUCT(G, Y) and IB(G, Y) be, respectively, the MUCT and IB given ⟦G, Y⟧. Proposition 4. IB(G, Y) is a POMIS given ⟦G, Y⟧.
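The fixed-point construction of the MUCT described above is straightforward to implement. The sketch below is our own illustration (reusing the CausalDiagram helper from the notation section; names are ours): it computes the MUCT by alternately closing the set under c-components and descendants inside H = G[An(Y)_G], and then reads off the IB as pa(T)_G \ T.

```python
def induced(G, Vs):
    """Vertex-induced subgraph G[Vs], keeping directed and bidirected edges inside Vs."""
    parents = {v: G.parents[v] & Vs for v in Vs}
    bi = {e for e in G.bidirected if e <= Vs}
    return CausalDiagram(parents, bi)

def muct_ib(G, Y):
    """Minimal UC-territory and interventional border of G with respect to Y."""
    H = induced(G, G.An({Y}))
    T = {Y}
    while True:
        T_new = H.De(H.CC(T))      # close under c-component, then descendants, within H
        if T_new == T:
            break
        T = T_new
    X = G.pa(T) - T                # interventional border: parents of T outside T
    return T, X

# On the IV graph of Fig. 3c, the MUCT is {X, Y} and the IB is {Z},
# matching the fact that {Z} is a POMIS there (Prop. 4).
print(muct_ib(G_iv, 'Y'))
```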
The main strategy of the proof is to construct an SCM M where intervening on any variable in MUCT(G, Y) causes a significant loss of reward. It may seem that MUCT and IB can only identify a single POMIS given ⟦G, Y⟧. However, they in fact serve as basic units to identify all POMISs. Proposition 5. Given ⟦G, Y⟧, IB(G_W, Y) is a POMIS for any W ⊆ V\{Y}.
Prop. 5 generalizes Prop. 4 to the case W ≠ ∅, while taking care of UCs across MUCT(G_W, Y) and its outside in the original causal graph G. See Fig. 4d for an instance, where IB(G_W, Y) = {W, T}. Intervening on W cuts the influence of S and the UC between W and X, while still allowing the UC to affect X.5 Similarly, one can see in Fig. 4b that IB(G_X, Y) = {T, W, X}, where intervening on X lets Y be the only element of the MUCT, making its parents an interventional border and, hence, a POMIS. Note that pa(Y)_G is always a POMIS since MUCT(G_{pa(Y)_G}, Y) = {Y} and IB(G_{pa(Y)_G}, Y) = pa(Y)_G. With Prop. 5, one can enumerate the POMISs given ⟦G, Y⟧ by considering all subsets of V\{Y}. We show in the sequel that this strategy encompasses all the POMISs. Theorem 6. Given ⟦G, Y⟧, X ⊆ V\{Y} is a POMIS if and only if IB(G_X, Y) = X.
5Note that exogenous variables that do not affect more than one endogenous variable (i.e., non-UCs) are not explicitly represented in the graph.
Algorithm 1 Algorithm enumerating all POMISs with ⟦G, Y⟧
1: function POMISs(G, Y)
2:   T, X = MUCT(G, Y), IB(G, Y);  H = G_X[T ∪ X]
3:   return {X} ∪ subPOMISs(H, Y, reversed(topological-sort(H)) ∩ (T \ {Y}), ∅)
4: function subPOMISs(G, Y, π, O)
5:   P = ∅
6:   for π_i ∈ π do
7:     T, X, π′, O′ = MUCT(G_{π_i}, Y), IB(G_{π_i}, Y), π_{i+1:|π|} ∩ T, O ∪ π_{1:i−1}
8:     if X ∩ O′ = ∅ then
9:       P = P ∪ {X} ∪ (subPOMISs(G_X[T ∪ X], Y, π′, O′) if π′ ≠ ∅ else ∅)
10:  return P
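Below is our own Python transcription of Alg. 1, reusing the CausalDiagram, mutilate, induced, and muct_ib helpers sketched earlier. It is meant as an illustration of the recursion; the helper names and graph encoding are our assumptions rather than the authors' code (the authors' reference implementation is available at the repository cited in footnote 6).

```python
def topological_order(G):
    """Topological order of G's vertices (parents before children)."""
    order, seen = [], set()
    def visit(v):
        if v not in seen:
            seen.add(v)
            for p in G.parents[v]:
                visit(p)
            order.append(v)
    for v in sorted(G.V):
        visit(v)
    return order

def pomiss(G, Y):
    """Alg. 1: enumerate all POMISs of (G, Y) as a set of frozensets."""
    T, X = muct_ib(G, Y)
    H = induced(mutilate(G, X), T | X)
    pi = [v for v in reversed(topological_order(H)) if v in T - {Y}]
    return {frozenset(X)} | sub_pomiss(H, Y, pi, set())

def sub_pomiss(G, Y, pi, O):
    P = set()
    for i, v in enumerate(pi):
        T, X = muct_ib(mutilate(G, {v}), Y)
        pi_next = [w for w in pi[i + 1:] if w in T]
        O_next = O | set(pi[:i])
        if not (X & O_next):                      # skip branches handled elsewhere
            P |= {frozenset(X)}
            if pi_next:
                P |= sub_pomiss(induced(mutilate(G, X), T | X), Y, pi_next, O_next)
    return P

# On the IV graph of Fig. 3c this yields {frozenset({'Z'}), frozenset({'X'})},
# the two POMISs discussed in the text.
print(pomiss(G_iv, 'Y'))
```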
Algorithm 2 POMIS-based kl-UCB
1: function POMIS-kl-UCB(B, G, Y, f, T)
2:   Input: B, a SCM-MAB; G, a causal diagram; Y, a reward variable
3:   A = ∪_{X ∈ POMISs(G, Y)} D(X)
4:   kl-UCB(B, A, f, T)
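The essence of Alg. 2 is simply to restrict a standard bandit solver to the arm set A = ∪_{X∈POMISs(G,Y)} D(X). The sketch below is our own illustration of that reduction; for brevity it pairs the POMIS arm set with Beta-Bernoulli Thompson sampling (also used in the paper's experiments) instead of kl-UCB, and `pull`, the domain map, and the SCM-MAB interface are hypothetical names.

```python
import random
from itertools import product

def pomis_arms(G, Y, domains):
    """A = union over POMISs X of D(X); each arm is a tuple of (variable, value) pairs."""
    arms = set()
    for X in pomiss(G, Y):
        Xs = sorted(X)
        for values in product(*(domains[v] for v in Xs)):
            arms.add(tuple(zip(Xs, values)))
    return sorted(arms)

def thompson_sampling(pull, arms, horizon, seed=0):
    """Beta-Bernoulli Thompson sampling over the given (POMIS-restricted) arm set."""
    rng = random.Random(seed)
    alpha = {a: 1 for a in arms}   # successes + 1
    beta = {a: 1 for a in arms}    # failures + 1
    total = 0
    for _ in range(horizon):
        arm = max(arms, key=lambda a: rng.betavariate(alpha[a], beta[a]))
        reward = pull(dict(arm))   # pull(intervention) -> Bernoulli reward from the SCM-MAB
        total += reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
    return total

# Hypothetical usage on the IV graph with binary variables:
# arms = pomis_arms(G_iv, 'Y', {'X': [0, 1], 'Z': [0, 1]})  # 4 arms, matching Task 2
# thompson_sampling(scm_mab.pull, arms, horizon=1000)
```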
Thm. 6 provides a necessary and sufficient graphical condition for a set of variables to be a POMIS given ⟦G, Y⟧. This characterization allows one to determine all the arms in a SCM-MAB that are worth intervening on and, therefore, to avoid pulling the other, unnecessary arms.
4 Algorithmic characterization of POMIS
Although the graphical characterization provides a means to enumerate the complete set of POMISs given ⟦G, Y⟧, a naively implemented algorithm requires time exponential in |V|. We construct an efficient algorithm (Alg. 1) that enumerates all the POMISs based on Props. 7 and 8 below and the graphical characterization introduced in the previous section (Thm. 6). Proposition 7. Let T and X be MUCT(G_W, Y) and IB(G_W, Y), respectively, relative to G and Y. Then, for any Z ⊆ V\T, MUCT(G_{X∪Z}, Y) = T and IB(G_{X∪Z}, Y) = X. Proposition 8. Let H = G_X[T ∪ X], where T and X are the MUCT and IB given ⟦G_W, Y⟧, respectively. Then, for any W′ ⊆ T\{Y}, H_{W′} and G_{W∪W′} yield the same MUCT and IB with respect to Y.
Prop. 7 allows one to avoid having to examine G_W for every W ⊆ V\{Y}. Prop. 8 characterizes the recursive nature of MUCT and IB, so that identification of POMISs can be carried out on subgraphs. Based on these results, we design a recursive algorithm (Alg. 1) that explores subsets of V\{Y} in a certain order. See Fig. 4e for an example where subsets of {X, Z, W} are connected based on the set-inclusion relationship and an order of variables, e.g., (X, Z, W). That is, there exists a directed edge between two sets if (i) one set is larger than the other by exactly one variable and (ii) that variable's index (as in the order) is larger than the indices of the variables in the smaller set. The diagram traces how the algorithm explores the subsets following the edges, while effectively skipping nodes.
Given G and Y, POMISs (Alg. 1) first computes a POMIS, i.e., IB(G, Y). It then calls a recursive procedure subPOMISs with an order of variables (Line 3). subPOMISs examines candidate POMISs by intervening on a single variable of the given graph at a time (Lines 6–9). If the IB (X in Line 7) of such an intervened graph intersects with O' (a set of variables that should be considered in another branch), then no subsequent call is made (Line 8). Otherwise, a subsequent subPOMISs call takes as arguments an MUCT-IB induced subgraph (Prop. 8), a refined order, and a set of variables not to be intervened on in the given branch. For clarity, we provide a detailed working example in Appendix C with Fig. 4a, where the algorithm explores only four intervened graphs (G, G_{X}, G_{Z}, G_{W}) and generates the complete set of POMISs {{S, T}, {T, W}, {T, W, X}}. Theorem 9 (Soundness and Completeness). Given information ⟦G, Y⟧, the algorithm POMISs (Alg. 1) returns all, and only, POMISs.
The POMISs algorithm can be combined with a MAB algorithm, such as kl-UCB, creating a simple yet effective SCM-MAB solver (see Alg. 2). kl-UCB satisfies
$$\limsup_{n\to\infty} \frac{\mathbb{E}[\mathrm{Reg}_n]}{\log(n)} \le \sum_{x:\,\mu_x<\mu^*} \frac{\mu^* - \mu_x}{\mathrm{KL}(\mu_x,\mu^*)},$$
where KL is the Kullback–Leibler divergence between two Bernoulli distributions [Garivier and Cappé, 2011]. It is clear that the reduction in the number of arms will lower the upper bounds of the corresponding cumulative regrets.
5 Experiments
In this section, we present empirical results demonstrating that the selection of arms based on POMISs makes standard MAB solvers converge faster to an optimal arm. We employ two popular MAB solvers: kl-UCB, which enjoys cumulative regret growing logarithmically with the number of rounds [Cappé et al., 2013], and Thompson sampling (TS, Thompson [1933]), which has strong empirical performance [Kaufmann et al., 2012]. We considered four strategies for selecting arms, including POMISs, MISs, Brute-force, and All-at-once, where Brute-force evaluates all combinations of arms ∪_{X⊆V\{Y}} D(X), and All-at-once considers intervening on all variables simultaneously, D(V\{Y}), oblivious to the causal structure and any knowledge about the action space. The performance of the eight (4 × 2) algorithms is evaluated on three different SCM-MAB instances (the detailed parametrizations are provided in Appendix D). We set the horizon large enough so as to observe near convergence, and repeat each simulation 300 times. We plot (i) the average cumulative regrets (CR) along with their respective standard deviations and (ii) the probability of an optimal arm being selected, averaged over the repeated tests (OAP).6,7
Task 1: We start by analyzing a Markovian model. We note that, by Cor. 3, searching for the arms within the parent set is sufficient in this case. The numbers of arms for POMISs, MISs, Brute-force, and All-at-once are 4, 49, 81, and 16, respectively. Note that there are 4 optimal arms within the All-at-once arms: for instance, if the parent configuration is X1 = x1, X2 = x2, this strategy will also include the combinations with Z1 = z1, Z2 = z2, for all z1, z2. The simulated results are shown in Fig. 5a. The CRs at round 1000 with kl-UCB are 3.0, 48.0, 72, and 12 (in that order), and all strategies were able to find the optimal arms by this time. POMIS and All-at-once first reached 95% OAP at rounds 20 and 66, respectively. There are two interesting observations at this point. First, at an
6All the code is available at https://github.com/sanghack81/SCMMAB-NIPS2018 7One may surmise that combinatorial bandit (CB) algorithms can be used to solve SCM-MAB instances by noting that an intervention can be encoded as a binary vector, where each dimension in the vector corresponds to intervening on a single variable with a specific value. However, the two settings invoke a very different set of assumptions, which makes their solvers somewhat difficult to compare in some reasonably fair way. For instance, the current generation of CB algorithms is oblivious to the underlying causal structure, which makes them resemble very closely the Brute-force strategy, the worst possible method for SCM-MABs. Further, the assumption of linearity is arguably one of the most popular considered by CB solvers. The corresponding algorithms, however, will be unable to learn the arms’ rewards properly since a SCM-MAB is nonparametric, making no assumption about the underlying structural mechanisms. These are just a few immediate examples of the mismatches between the current generation of algorithms for both causal and combinatorial bandits.
early stage, the OAP for MISs is smaller than that for Brute-force since MIS has only 1 optimal arm among 49 arms, while Brute-force has 9 among 81. The advantage of employing MIS over Brute-force is only observed after a sufficiently large number of plays. More interestingly, POMIS and All-at-once have the same ratio of optimal to non-optimal arms (1:3 versus 4:12); however, POMIS dominates All-at-once since the agent can learn more about the mean reward of the optimal arm while playing non-optimal arms less. Naturally, this translates into less variability and additional certainty about the optimal arm, even in Markovian settings.
Task 2: We consider the setting known as instrumental variable (IV), which was shown in Fig. 3c. The optimal arm in this simulation is setting Z = 0. The numbers of arms for the four strategies are 4, 5, 9, and 4, respectively. The results are shown in Fig. 5b. Since the All-at-once strategy only considers non-optimal arms (i.e., pulling Z, X together), it incurs linear regret without ever selecting an optimal arm (0%). The CRs (and OAPs) at round 1000 with TS are POMIS 16.1 (98.67%), MIS 21.4 (99.00%), Brute-force 42.9 (93.33%), and All-at-once 272.1 (0%). At round 5000, where Brute-force has nearly converged, the ratio of the CRs of Brute-force and POMIS is 54.2/18.1 = 2.99 ≈ 2.67 = (9−1)/(4−1). POMIS, MIS, and Brute-force first hit 95% OAP at rounds 172, 214, and 435, respectively.
Task 3: Finally, we study the more involved scenario shown in Fig. 4a. In this case, the optimal arm is intervening on {S, T}, which means that the system should follow its natural flow of UCs, which All-at-once is unable to "pull." There are 16, 75, 243, and 32 arms for the strategies (in that order). The results are shown in Fig. 5c. The CRs (and OAPs) at round 10000 with TS are POMIS 91.4 (99.0%), MIS 472.4 (97.0%), Brute-force 1469.0 (85.0%), and All-at-once 2784.8 (0%). Similarly, the ratio of the CRs of Brute-force and POMIS (at round 10000) is 1469.0/91.4 = 16.07 ≈ 16.13 = (243−1)/(16−1), which is expected to increase since Brute-force has not yet converged at that point. Only POMIS and MIS reached 95% OAP, first doing so at rounds 684 and 3544, respectively.
We start by noticing that the reduction in the CRs is approximately proportional to the reduction in the number of non-optimal arms pulled under (PO)MIS by the corresponding algorithm, which makes the POMIS-based solver the clear winner throughout the simulations. It is conceivable that the number of arms examined by All-at-once is smaller than that for POMIS in a specific SCM-MAB instance, which would entail a lower CR for the former. However, such a lower CR in some instances does not constitute any sort of assurance, since arms excluded from All-at-once, but included in POMIS, can be optimal in some SCM-MAB instance conforming to ⟦G, Y⟧. Furthermore, a POMIS-based strategy always dominates the corresponding MIS and Brute-force ones. These observations together suggest that, in practice, a POMIS-based strategy should be preferred given that it will always converge and will usually be faster than its counterparts. Remarkably, there is an interesting trade-off between having knowledge of the causal structure versus not knowing the corresponding dependency structure among arms, and potentially incurring linear regret (All-at-once) or exponential slowdown (Brute-force). In practice, in cases where the causal structure is unknown, the pulls of the arms themselves can be used as experiments and could be coupled with efficient strategies to simultaneously learn the causal structure [Kocaoglu et al., 2017].
6 Conclusions
We studied the problem of deciding whether an agent should perform a causal intervention and, if so, which variables it should intervene upon. The problem was formalized using the language of structural causal models (SCMs) and cast as a new type of multi-armed bandit called the SCM-MAB. We started by noting that whenever the agent cannot measure all the variables in the environment (i.e., unobserved confounders exist), standard MAB algorithms that are oblivious to the underlying causal structure may not converge, regardless of the number of interventions performed in the environment. (We note that the causal structure can easily be learned in a typical MAB setting since the agent always has interventional capabilities.) We introduced a novel decision-making strategy based on properties following from the do-calculus, which allowed the removal of redundant arms, and on the partial orders among the sets of variables existing in the underlying causal system, which led to an understanding of the maximum achievable reward of each interventional set. Leveraging this new strategy based on the possibly-optimal minimal intervention sets (POMISs), we developed an algorithm that decides whether (and, if so, where) interventions should be performed in the underlying system. Finally, we showed by simulations that this causally-sensible strategy performs more efficiently and more robustly than its non-causal counterparts. We hope that the formal machinery and the algorithms developed here can help decision-makers make more principled and efficient decisions.
Acknowledgments
This research is supported in part by grants from IBM Research, Adobe Research, NSF IIS-1704352, and IIS-1750807 (CAREER). | 1. What is the main contribution of the paper regarding multi-bandit algorithms and causal graphs?
2. What are the strengths and weaknesses of the proposed algorithm in terms of its efficiency and ability to handle latent confounders?
3. How does the reviewer assess the clarity and consistency of the notation used throughout the paper?
4. Are there any minor details or typos that could be improved in the paper?
5. Is the theory presented in the paper interesting and novel, despite the limitations of the experiments? | Review | Review
The authors propose an algorithm that could improve the efficiency of multi-armed bandit algorithms by selecting a minimal, sound and complete set of possibly optimal arms. In their framework, arms correspond to interventions on a causal graph, which could possibly have latent confounders. If the structure of the causal graph is known (but not the parameters), the proposed algorithms can exploit this structure to decide which interventions are provably nonoptimal and filter them out. I think the paper presents some interesting and novel ideas. The clarity could be improved, especially for a reader who is not familiar with the do-calculus literature (e.g. section 2). The experiments are limited, but in my opinion the theory makes the paper interesting in itself. Minor details: Abstract: I'm a bit confused about how one could empirically demonstrate that an algorithm leads to optimal (...) convergence rates. Some inconsistency in the notation, e.g.: line 88: PA_i is never defined, although it is clearly pa(X_i). Lines 117-123: maybe it's pedantic, but shouldn't there be some universal quantification on x, y, z, w? For example P(y|do(x), do(z), w) = P(y|do(x), w) for all x \in D(X), z \in D(Z), etc.? Line 129: isn't X = X intersected with an(Y)_{G after do(X)} just X \subset an(Y)_{G after do(X)}? Line 198: pa(T)_G\T ... doesn't pa(.) already exclude the argument itself (in this case T)? Typos: footnote 4: "that are not latent confounders are not explicitly represented", line 252 "four strategies", lines 214, 216 "intervening ON". Appendix B: Instead of Algorithm 4, wouldn't it be enough to have a much simpler "while (T != oldT) { oldT=T; T=desc(cc(T)); }"?
NIPS | Title
Adding One Neuron Can Eliminate All Bad Local Minima
Abstract
One of the main difficulties in analyzing neural networks is the non-convexity of the loss function which may have many bad local minima. In this paper, we study the landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.
1 Introduction
Deep neural networks have recently achieved huge success in various machine learning tasks (see, Krizhevsky et al. 2012; Goodfellow et al. 2013; Wan et al. 2013, for example). However, a theoretical understanding of neural networks is largely lacking. One of the difficulties in analyzing neural networks is the non-convexity of the loss function which allows the existence of many local minima with large losses. This was long considered a bottleneck of neural networks, and one of the reasons why convex formulations such as support vector machine (Cortes & Vapnik, 1995) were preferred previously. Given the recent empirical success of the deep neural networks, an interesting question is whether the non-convexity of the neural network is really an issue. It has been widely conjectured that all local minima of the empirical loss lead to similar training performance (LeCun et al., 2015; Choromanska et al., 2015). For example, prior works empirically showed that neural networks with identical architectures but different initialization points can converge to local minima with similar classification performance (Krizhevsky et al., 2012; He et al., 2016; Huang & Liu, 2017). On the theoretical side, there have been many recent attempts to analyze the landscape of the neural network loss functions. A few works have studied deep networks, but they either require linear activation functions (Baldi & Hornik, 1989; Kawaguchi, 2016; Freeman & Bruna, 2016; Hardt & Ma, 2017; Yun et al., 2017), or require assumptions such as independence of ReLU activations (Choromanska et al., 2015) and significant overparametrization (Nguyen & Hein, 2017a,b; Livni et al., 2014). There is a large body of works that study single-hidden-layer neural networks and provide various conditions under which a local search algorithm can find a global minimum (Du & Lee, 2018; Ge et al., 2018; Andoni et al., 2014; Sedghi & Anandkumar, 2014; Janzamin et al., 2015; Haeffele & Vidal, 2015; Gautier et al., 2016; Brutzkus & Globerson,
∗Correspondence to R. Srikant, [email protected], and Ruoyu Sun, [email protected]
2017; Soltanolkotabi, 2017; Soudry & Hoffer, 2017; Goel & Klivans, 2017; Du et al., 2017; Zhong et al., 2017; Li & Yuan, 2017; Liang et al., 2018; Mei et al., 2018). It can be roughly divided into two categories: non-global landscape analysis and global landscape analysis. For the first category, the result do not apply to all local minima. One typical conclusion is about the local geometry, i.e., in a small neighborhood of the global minima no bad local minima exist (Zhong et al., 2017; Du et al., 2017; Li & Yuan, 2017). Another typical conclusion is that a subset of local minima are global minima (Haeffele et al., 2014; Haeffele & Vidal, 2015; Soudry & Carmon, 2016; Nguyen & Hein, 2017a,b). Shamir (2018) has shown that a subset of second-order local minima can perform nearly as well as linear predictors. The presence of various conclusions reflects the difficulty of the problem: while analyzing the global landscape seems hard, we may step back and analyze the local landscape or a “majority” of the landscape. For the second category of global landscape analysis, the typical result is that every local minimum is a global minimum. However, even for single-layer networks, strong assumptions such as over-parameterization, very special neuron activation functions, fixed second layer parameters and/or Gaussian data distribution are often needed in the existing works. The presence of various strong assumptions also reflects the difficulty of the problem: even for the single-hidden-layer nonlinear neural network, it seems hard to analyze the landscape, so it is reasonable to make various assumptions. One exception is the recent work Liang et al. (2018) which adopts a different path: instead of simply making several assumptions to obtain positive results, it carefully studies the effect of various conditions on the landscape of neural networks for binary classification. It gives both positive and negative results on the existence of bad local minimum under different conditions. In particular, it studies many common types of neuron activation functions and shows that for a class of neurons there is no bad local minimum, and for other neurons there is. This clearly shows that the choice of neurons can affect the landscape. Then a natural question is: while Liang et al. (2018) considers some special types of data and a broad class of neurons, can we obtain results for more general data when limiting to a smaller class of neurons?
1.1 Our Contributions
Given this context, our main result is quite surprising: for a neural network with a special type of neurons, every local minimum is a global minimum of the loss function. Our result requires no assumption on the network size, the specific type of the original neural network, etc., yet our result applies to every local minimum. Besides the requirement on the neuron activation type, the major trick is an associated regularizer. Our major results and their implications are as follows:
• We focus on the binary classification problem with a smooth hinge loss function. We prove the following result: for any neural network, by adding a special neuron (e.g., exponential neuron) to the network and adding a quadratic regularizer of this neuron, the new loss function has no bad local minimum. In addition, every local minimum achieves the minimum misclassification error.
• In the main result, the augmented neuron can be viewed as a skip connection from the input to the output layer. However, this skip connection is not critical, as the same result also holds if we add one special neuron to each layer of a fully-connected feedforward neural network.
• To our knowledge, this is the first result that no spurious local minimum exists for a wide class of deep nonlinear networks. Our result indicates that the class of “good neural networks” (neural networks such that there is an associated loss function with no spurious local minima) contains any network with one special neuron, thus this class is rather “dense” in the class of all neural networks: the distance between any neural network and a good neural network is just a neuron away.
The outline of the paper is as follows. In Section 2, we present several notations. In Section 3, we present the main result and several extensions on the main results are presented in Section 4. We present the proof idea of the main result in Section 5 and conclude this paper in Section 6. All proofs are presented in Appendix.
2 Preliminaries
Feed-forward networks. Given an input vector of dimension d, we consider a neural network with L layers of neurons for binary classification. We denote by M_l the number of neurons in the l-th layer (note that M_0 = d). We denote the neural activation function by σ. Let W_l ∈ ℝ^{M_{l−1}×M_l} denote the weight matrix connecting the (l−1)-th and l-th layers and b_l denote the bias vector for neurons in the l-th layer. Let W_{L+1} ∈ ℝ^{M_L} and b_{L+1} ∈ ℝ denote the weight vector and bias scalar in the output layer, respectively. Therefore, the output of the network f : ℝ^d → ℝ can be expressed by
$$f(x;\theta) = W_{L+1}^{\top}\,\sigma\big(W_L^{\top}\,\sigma\big(\cdots \sigma\big(W_1^{\top} x + b_1\big)\cdots + b_{L-1}\big) + b_L\big) + b_{L+1}. \qquad (1)$$
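For concreteness, a minimal NumPy sketch of the forward pass in Eq. (1) is given below; the shapes follow the convention W_l ∈ ℝ^{M_{l−1}×M_l} above, the ReLU activation is chosen only as an example, and all names are our own.

```python
import numpy as np

def forward(x, Ws, bs, w_out, b_out, sigma=lambda z: np.maximum(z, 0.0)):
    """Eq. (1): h_l = sigma(W_l^T h_{l-1} + b_l), output = w_out^T h_L + b_out."""
    h = x
    for W, b in zip(Ws, bs):
        h = sigma(W.T @ h + b)
    return float(w_out @ h + b_out)

# Tiny example: d = 3 inputs, one hidden layer with 4 neurons (L = 1).
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(3, 4))]          # W_1 in R^{M_0 x M_1}
bs = [np.zeros(4)]                      # b_1
w_out, b_out = rng.normal(size=4), 0.0  # W_{L+1}, b_{L+1}
print(forward(rng.normal(size=3), Ws, bs, w_out, b_out))
```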
Loss and error. We useD = {(xi, yi)}ni=1 to denote a dataset containing n samples, where xi ∈ Rd and yi ∈ {−1, 1} denote the feature vector and the label of the i-th sample, respectively. Given a neural network f(x;θ) parameterized by θ and a loss function ` : R→ R, in binary classification tasks, we define the empirical loss Ln(θ) as the average loss of the network f on a sample in the dataset and define the training error (also called the misclassification error) Rn(θ; f) as the misclassification rate of the network f on the dataset D, i.e.,
$$L_n(\theta) = \sum_{i=1}^{n} \ell\big(-y_i f(x_i;\theta)\big) \quad \text{and} \quad R_n(\theta; f) = \frac{1}{n}\sum_{i=1}^{n} \mathbb{I}\{y_i \neq \mathrm{sgn}(f(x_i;\theta))\}. \qquad (2)$$
where 𝕀 is the indicator function. Tensor products. We use a ⊗ b to denote the tensor product of vectors a and b and use a^{⊗k} to denote the tensor product a ⊗ ... ⊗ a, where a appears k times. For an N-th order tensor T ∈ ℝ^{d_1×d_2×...×d_N} and N vectors u_1 ∈ ℝ^{d_1}, u_2 ∈ ℝ^{d_2}, ..., u_N ∈ ℝ^{d_N}, we define
$$T \otimes u_1 \otimes \cdots \otimes u_N = \sum_{i_1\in[d_1],\ldots,i_N\in[d_N]} T(i_1,\ldots,i_N)\, u_1(i_1)\cdots u_N(i_N),$$
where we use T(i_1, ..., i_N) to denote the (i_1, ..., i_N)-th component of the tensor T, u_k(i_k) to denote the i_k-th component of the vector u_k, k = 1, ..., N, and [d_k] to denote the set {1, ..., d_k}.
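The contraction above is a full inner product of the tensor with the vectors along every mode; a small NumPy check (our own, with arbitrary sizes) is shown below, comparing an einsum evaluation against the explicit sum in the definition.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
d = (2, 3, 4)                               # a 3rd-order tensor, N = 3
T = rng.normal(size=d)
u = [rng.normal(size=k) for k in d]

# Contraction along every mode, as in the displayed definition.
via_einsum = np.einsum('ijk,i,j,k->', T, *u)
via_sum = sum(T[i, j, k] * u[0][i] * u[1][j] * u[2][k]
              for i, j, k in product(*map(range, d)))
print(np.isclose(via_einsum, via_sum))      # True
```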
3 Main Result
In this section, we first present several important conditions on the loss function and the dataset in order to derive the main results. After that, we will present the main results.
3.1 Assumptions
In this subsection, we introduce two assumptions on the loss function and the dataset.
Assumption 1 (Loss function) Assume that the loss function ℓ : ℝ → ℝ is monotonically nondecreasing and twice differentiable, i.e., ℓ ∈ C². Assume that every critical point of the loss function ℓ(z) is also a global minimum and every global minimum z satisfies z < 0.
A simple example of the loss function satisfying Assumption 1 is the polynomial hinge loss, i.e., ℓ(z) = [max{z+1, 0}]^p, p ≥ 3. It is always zero for z ≤ −1 and behaves like a polynomial function in the region z > −1. Note that the condition that every global minimum of the loss function ℓ(z) is negative is not needed to prove the result that every local minimum of the empirical loss is globally minimal, but is necessary to prove that the global minimizer of the empirical loss is also the minimizer of the misclassification rate.
Assumption 2 (Realizability) Assume that there exists a set of parameters θ such that the neural network f(·;θ) is able to correctly classify all samples in the dataset D.
By Assumption 2, we assume that the dataset is realizable by the neural architecture f . We note that this assumption is consistent with previous empirical observations (Zhang et al., 2016; Krizhevsky et al., 2012; He et al., 2016) showing that at the end of the training process, neural networks usually achieve zero misclassification rates on the training sets. However, as we will show later, if the loss function ` is convex, then we can prove the main result even without Assumption 2.
3.2 Main Result
In this subsection, we first introduce several notations and next present the main result of the paper. Given a neural architecture f(·;θ) defined on a d-dimensional Euclidean space and parameterized by a set of parameters θ, we define a new architecture f̃ by adding the output of an exponential neuron to the output of the network f , i.e.,
f̃(x; θ̃) = f(x; θ) + a exp( w^⊤ x + b ),   (3)
where the vector θ̃ = (θ, a,w, b) denote the parametrization of the network f̃ . For this designed model, we define the empirical loss function as follows,
L̃_n(θ̃) = Σ_{i=1}^n ℓ( −y_i f̃(x_i; θ̃) ) + (λ/2) a²,   (4)
where the scalar λ is a positive real number, i.e., λ > 0. Different from the empirical loss function Ln, the loss L̃n has an additional regularizer on the parameter a, since we aim to eliminate the impact of the exponential neuron on the output of the network f̃ at every local minimum of L̃n. As we will show later, the exponential neuron is inactive at every local minimum of the empirical loss L̃n. Now we present the following theorem to show that every local minimum of the loss function L̃n is also a global minimum. Remark: Instead of viewing the exponential term in Equation (3) as a neuron, one can also equivalently think of modifying the loss function to be
L̃_n(θ̃) = Σ_{i=1}^n ℓ( −y_i ( f(x_i; θ) + a exp(w^⊤ x_i + b) ) ) + (λ/2) a².
Then, one can interpret Equation (3) and (4) as maintaining the original neural architecture and slightly modifying the loss function.
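To make Equations (3) and (4) concrete, the following sketch wraps an arbitrary base model with the exponential skip neuron and evaluates the regularized loss. The tiny linear base model, the polynomial hinge loss, and λ = 1 are illustrative assumptions only; any architecture f(·; θ) could be substituted.

```python
import numpy as np

def loss(z, p=3):                        # polynomial hinge loss from Assumption 1
    return np.maximum(z + 1.0, 0.0) ** p

def f_tilde(x, f, a, w, b):
    """Augmented model of Eq. (3): f~(x) = f(x) + a * exp(w^T x + b)."""
    return f(x) + a * np.exp(x @ w + b)

def L_tilde(X, y, f, a, w, b, lam=1.0):
    """Regularized empirical loss of Eq. (4)."""
    margins = -y * np.array([f_tilde(x, f, a, w, b) for x in X])
    return loss(margins).sum() + 0.5 * lam * a ** 2

# Illustrative usage: a tiny linear "base network" on 2-d inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))
y = np.sign(X[:, 0])                     # labels in {-1, +1}
base = lambda x: 0.5 * x[0] - 0.1 * x[1]
print(L_tilde(X, y, base, a=0.3, w=np.array([0.2, -0.4]), b=0.1))
```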
Theorem 1 Suppose that Assumption 1 and 2 hold. Then both of the following statements are true:
(i) The empirical loss function L̃n(θ̃) has at least one local minimum.
(ii) Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f).
Remarks: (i) Theorem 1 shows that every local minimum θ̃∗ of the empirical loss L̃n is also a global minimum and shows that θ∗ achieves the minimum training error and the minimum loss value on the original loss function Ln at the same time. (ii) Since we do not require the explicit form of the neural architecture f , Theorem 1 applies to the neural architectures widely used in practice such as convolutional neural network (Krizhevsky et al., 2012), deep residual networks (He et al., 2016), etc. This further indicates that the result holds for any real neural activation functions such as rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), etc. (iii) As we will show in the following corollary, at every local minimum θ̃∗, the exponential neuron is inactive. Therefore, at every local minimum θ̃∗ = (θ∗, a∗,w∗, b∗), the neural network f̃ with an augmented exponential neuron is equivalent to the original neural network f .
Corollary 1 Under the conditions of Theorem 1, if θ̃∗ = (θ∗, a∗, w∗, b∗) is a local minimum of the empirical loss function L̃_n(θ̃), then the two neural networks f(·; θ∗) and f̃(·; θ̃∗) are equivalent, i.e., f(x; θ∗) = f̃(x; θ̃∗), ∀x ∈ R^d. Corollary 1 shows that at every local minimum, the exponential neuron does not contribute to the output of the neural network f̃. However, this does not imply that the exponential neuron is unnecessary, since several previous results (Safran & Shamir, 2018; Liang et al., 2018) have already shown that the loss surfaces of pure ReLU neural networks are guaranteed to have bad local minima. Furthermore, to prove the main result under any dataset, the regularizer is also necessary, since Liang et al. (2018) has already shown that even with an augmented exponential neuron, the empirical loss without the regularizer still has bad local minima under some datasets.
4 Extensions
4.1 Eliminating the Skip Connection
As noted in the previous section, the exponential term in Equation (3) can be viewed as a skip connection or a modification to the loss function. Our analysis also works under other architectures as well. When the exponential term is viewed as a skip connection, the network architecture is as shown in Fig. 1(a). This architecture is different from the canonical feedforward neural architectures
as there is a direct path from the input layer to the output layer. In this subsection, we will show that the main result still holds if the model f̃ is defined as a feedforward neural network shown in Fig. 1(b), where each layer of the network f is augmented by an additional exponential neuron. This is a standard fully connected neural network except for one special neuron at each layer.
Notations. Given a fully-connected feedforward neural network f(·; θ) defined by Equation (1), we define a new fully connected feedforward neural network f̃ by adding an additional exponential neuron to each layer of the network f. We use the vector θ̃ = (θ, θ_exp) to denote the parameterization of the network f̃, where θ_exp denotes the vector consisting of all augmented weights and biases. Let W̃_l ∈ R^{(M_{l−1}+1)×(M_l+1)} and b̃_l ∈ R^{M_l+1} denote the weight matrix and the bias vector in the l-th layer of the network f̃, respectively. Let W̃_{L+1} ∈ R^{M_L+1} and b̃_{L+1} ∈ R denote the weight vector and the bias scalar in the output layer of the network f̃, respectively. Without loss of generality, we assume that the (M_l + 1)-th neuron in the l-th layer is the augmented exponential neuron. Thus, the output of the network f̃ is expressed by
f̃(x; θ̃) = W̃_{L+1}^⊤ σ̃_L( W̃_L^⊤ σ̃_{L−1}( ... σ̃_1( W̃_1^⊤ x + b̃_1 ) ... + b̃_{L−1} ) + b̃_L ) + b̃_{L+1},   (5)
where σ̃_l : R^{M_l+1} → R^{M_l+1} is a vector-valued activation function whose first M_l components apply the activation function σ of the network f and whose last component applies the exponential function, i.e., σ̃_l(z) = (σ(z_1), ..., σ(z_{M_l}), exp(z_{M_l+1})). Furthermore, we use w̃_l to denote the vector in the (M_{l−1} + 1)-th row of the matrix W̃_l. In other words, the components of the vector w̃_l are the weights on the edges connecting the exponential neuron in the (l − 1)-th layer and the neurons in the l-th layer. For this feedforward network, we define an empirical loss function as
L̃_n(θ̃) = Σ_{i=1}^n ℓ(−y_i f̃(x_i; θ̃)) + (λ/2) Σ_{l=2}^{L+1} ‖w̃_l‖_{2L}^{2L},   (6)
where ‖a‖p denotes the p-norm of a vector a and λ is a positive real number, i.e., λ > 0. Similar to the empirical loss discussed in the previous section, we add a regularizer to eliminate the impacts of all exponential neurons on the output of the network. Similarly, we can prove that at every local minimum of L̃n, all exponential neurons are inactive. Now we present the following theorem to show that if the set of parameters θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum and θ∗ is a global minimum of both minimization problems minθ Ln(θ) and minθ Rn(θ; f). This means that the neural network f(·;θ∗) simultaneously achieves the globally minimal loss value and misclassification rate on the dataset D. Theorem 2 Suppose that Assumption 1 and 2 hold. Suppose that the activation function σ is differentiable. Assume that θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f). Remarks: (i) This theorem is not a direct corollary of the result in the previous section, but the proof ideas are similar. (ii) Due to the assumption on the differentiability of the activation function σ, Theorem 2 does not apply to the neural networks consisting of non-smooth neurons such as ReLUs, Leaky ReLUs, etc. (iii) Similar to Corollary 1, we will present the following corollary to show that at every local minimum θ̃∗ = (θ∗,θ∗exp), the neural network f̃ with augmented exponential neurons is equivalent to the original neural network f .
Corollary 2 Under the conditions in Theorem 2, if θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then two neural networks f(·;θ∗) and f̃(·; θ̃∗) are equivalent, i.e., f(x;θ∗) = f̃(x; θ̃∗),∀x ∈ Rd.
Corollary 2 further shows that even if we add an exponential neuron to each layer of the original network f , at every local minimum of the empirical loss, all exponential neurons are inactive.
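A minimal sketch of the feedforward variant in Equation (5), with one exponential unit appended to each hidden layer, is given below. It is purely illustrative: the widths, the tanh choice for σ, the initialization scale, and the treatment of the input layer (which carries no exponential unit here) are assumptions of the example, not specifications from the paper.

```python
import numpy as np

def sigma_tilde(z, sigma=np.tanh):
    """Apply sigma to all but the last coordinate and exp to the last one (the exponential neuron)."""
    return np.concatenate([sigma(z[:-1]), np.exp(z[-1:])])

def f_tilde(x, Ws, bs):
    """Forward pass of Eq. (5): a fully connected net with one exponential neuron per hidden layer.

    Ws[l] has shape (in_dim, M_l + 1) for hidden layers and shape (M_L + 1,) for the output layer.
    """
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = sigma_tilde(W.T @ h + b)
    return float(Ws[-1] @ h + bs[-1])

# Illustrative usage: d = 3 inputs, two hidden layers with M_1 = 4 and M_2 = 2 ordinary neurons.
rng = np.random.default_rng(0)
in_dims, widths = [3, 4 + 1], [4 + 1, 2 + 1]          # "+ 1" accounts for the exponential neuron
Ws = [0.1 * rng.normal(size=(din, dout)) for din, dout in zip(in_dims, widths)]
Ws.append(0.1 * rng.normal(size=(widths[-1],)))       # output weight vector
bs = [np.zeros(dout) for dout in widths] + [0.0]
print(f_tilde(rng.normal(size=3), Ws, bs))
```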
4.2 Neurons
In this subsection, we will show that even if the exponential neuron is replaced by a monomial neuron, the main result still holds under additional assumptions. Similar to the case where exponential neurons are used, given a neural network f(x;θ), we define a new neural network f̃ by adding the output of a monomial neuron of degree p to the output of the original model f , i.e.,
f̃(x; θ̃) = f(x; θ) + a ( w^⊤ x + b )^p.   (7)
In addition, the empirical loss function L̃n is exactly the same as the loss function defined by Equation (4). Next, we will present the following theorem to show that if all samples in the dataset D can be correctly classified by a polynomial of degree t and the degree of the augmented monomial is not smaller than t (i.e., p ≥ t), then every local minimum of the empirical loss function L̃n(θ̃) is also a global minimum. We note that the degree of a monomial is the sum of powers of all variables in this monomial and the degree of a polynomial is the maximum degree of its monomial.
Proposition 1 Suppose that Assumptions 1 and 2 hold. Assume that all samples in the dataset D can be correctly classified by a polynomial of degree t and p ≥ t. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ is a global minimizer of both problems minθ Ln(θ) and minRn(θ; f).
Remarks: (i) We note that, similar to Theorem 1, Proposition 1 applies to all neural architectures and all neural activation functions defined on R, as we do not require the explicit form of the neural network f. (ii) It follows from the Lagrange interpolating polynomial and Assumption 2 that for a dataset consisting of n different samples, there always exists a polynomial P of degree smaller than n such that the polynomial P can correctly classify all points in the dataset. This indicates that Proposition 1 always holds if p ≥ n. (iii) Similar to Corollaries 1 and 2, we can show that at every local minimum θ̃∗ = (θ∗, a∗, w∗, b∗), the neural network f̃ with an augmented monomial neuron is equivalent to the original neural network f.
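The monomial-neuron augmentation of Equation (7) differs from the exponential case only in the skip term; the short sketch below, with an assumed degree p = 2 and a toy base model, illustrates it.

```python
import numpy as np

def f_tilde_monomial(x, f, a, w, b, p):
    """Augmented model of Eq. (7): f~(x) = f(x) + a * (w^T x + b)^p."""
    return f(x) + a * (x @ w + b) ** p

# Illustrative usage: degree p = 2, i.e., enough whenever the data are separable by a quadratic.
base = lambda x: x[0] - x[1]
x = np.array([0.5, -1.0])
print(f_tilde_monomial(x, base, a=0.1, w=np.array([1.0, 2.0]), b=0.3, p=2))
```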
4.3 Allowing Random Labels
In previous subsections, we assume the realizability of the dataset by the neural network, which implies that the label of a given feature vector is unique. It does not cover the case where the dataset contains two samples with the same feature vector but with different labels (for example, the same image can be labeled differently by two different people). Clearly, in this case, no model can correctly classify all samples in this dataset. Another simple example of this case is the mixture of two Gaussians, where the data samples are drawn from each of the two Gaussian distributions with certain probability. In this subsection, we will show that under this broader setting, in which one feature vector may correspond to two different labels, with a slightly stronger assumption on the convexity of the loss ℓ, the same result still holds. The formal statement is presented in the following proposition.
Proposition 2 Suppose that Assumption 1 holds and the loss function ` is convex. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f). Remark: The differences of Proposition 2 and Theorem 1 can be understood in the following ways. First, as stated previously, Proposition 2 allows a feature vector to have two different labels, but Theorem 1 does not. Second, the minimum misclassification rate under the conditions in Theorem 1 must be zero, while in Proposition 2, the minimum misclassification rate can be nonzero.
4.4 High-order Stationary Points
In this subsection, we characterize the high-order stationary points of the empirical loss L̃n shown in Section 3.2. We first introduce the definition of the high-order stationary point and next show that every stationary point of the loss L̃n with a sufficiently high order is also a global minimum.
Definition 1 (k-th order stationary point) A critical point θ_0 of a function L(θ) is a k-th order stationary point if there exist positive constants C, ε > 0 such that for every θ with ‖θ − θ_0‖_2 ≤ ε, L(θ) ≥ L(θ_0) − C‖θ − θ_0‖_2^{k+1}. Next, we will show that if a polynomial of degree p can correctly classify all points in the dataset, then every stationary point of order at least 2p is a global minimum and the set of parameters corresponding to this stationary point achieves the minimum training error.
Proposition 3 Suppose that Assumptions 1 and 2 hold. Assume that all samples in the dataset can be correctly classified by a polynomial of degree p. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a k-th order stationary point of the empirical loss function L̃n(θ̃) and k ≥ 2p, then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, the neural network f(·;θ∗) achieves the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Rn(θ; f). One implication of Proposition 3 is that if a dataset is linearly separable, then every second order stationary point of the empirical loss function is a global minimum and, at this stationary point, the neural network achieves zero training error. When the dataset is not linearly separable, our result only covers fourth or higher order stationary point of the empirical loss.
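Definition 1 can be probed numerically, though only heuristically: sample perturbations at shrinking radii and check whether the worst observed decrease of the loss stays bounded by a constant multiple of the radius to the power k + 1. The sketch below is such a sampling-based sanity check under assumed inputs (a loss L, a candidate point, and an order k); it is not a verification procedure from the paper.

```python
import numpy as np

def stationarity_probe(L, theta0, k, radii=(1e-1, 1e-2, 1e-3), n_dirs=200, seed=0):
    """Estimate sup over sampled perturbations of (L(theta0) - L(theta)) / ||theta - theta0||^(k+1).

    If theta0 is a k-th order stationary point, these ratios should remain bounded as the radius
    shrinks. This is only a sampling-based sanity check, not a proof.
    """
    rng = np.random.default_rng(seed)
    L0 = L(theta0)
    ratios = []
    for r in radii:
        dirs = rng.normal(size=(n_dirs, theta0.size))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        decreases = [max(0.0, L0 - L(theta0 + r * d)) for d in dirs]
        ratios.append(max(decreases) / r ** (k + 1))
    return ratios

# Illustrative usage: theta = 0 is a 3rd order stationary point (in fact a minimum) of L(t) = sum(t^4).
print(stationarity_probe(lambda t: float(np.sum(t ** 4)), np.zeros(2), k=3))
```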
5 Proof Idea
In this section, we provide overviews of the proof of Theorem 1.
5.1 Important Lemmas
In this subsection, we present two important lemmas where the proof of Theorem 1 is based.
Lemma 1 Under Assumption 1 and λ > 0, if θ̃∗ = (θ∗, a∗, w∗, b∗) is a local minimum of L̃_n, then (i) a∗ = 0, and (ii) for any integer p ≥ 0, the following equation holds for every unit vector u with ‖u‖_2 = 1:

Σ_{i=1}^n ℓ′(−y_i f(x_i; θ∗)) y_i e^{w∗^⊤ x_i + b∗} (u^⊤ x_i)^p = 0.   (8)
Lemma 2 For any integer k ≥ 0 and any sequence {c_i}_{i=1}^n, if Σ_{i=1}^n c_i (u^⊤ x_i)^k = 0 holds for all unit vectors u with ‖u‖_2 = 1, then the k-th order tensor T_k = Σ_{i=1}^n c_i x_i^{⊗k} is a k-th order zero tensor.
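Lemma 2 can be sanity-checked numerically in the contrapositive direction: if T_k = Σ_i c_i x_i^{⊗k} is nonzero, then some unit vector u gives Σ_i c_i (u^⊤ x_i)^k ≠ 0. The snippet below (k = 3 and random data are assumptions of the example) also confirms that the scalar Σ_i c_i (u^⊤ x_i)^k equals the tensor T_k contracted with (u, ..., u).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5, 4, 3
X = rng.normal(size=(n, d))
c = rng.normal(size=n)

# Build T_k = sum_i c_i x_i^{(tensor k)} explicitly for k = 3.
T = sum(ci * np.einsum('i,j,k->ijk', xi, xi, xi) for ci, xi in zip(c, X))

# Evaluate sum_i c_i (u^T x_i)^k for a random unit vector u; it matches T contracted with (u, u, u).
u = rng.normal(size=d); u /= np.linalg.norm(u)
lhs = float(c @ (X @ u) ** k)
rhs = float(np.einsum('ijk,i,j,k->', T, u, u, u))
print(np.isclose(lhs, rhs), abs(lhs) > 1e-8)   # generically both True: T is nonzero, so some u detects it
```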
5.2 Proof Sketch of Lemma 1
Proof sketch of Lemma 1(i): To prove a∗ = 0, we only need to check the first order conditions of local minima. By assumption that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of L̃n, then the derivative of L̃n with respect to a and b at the point θ̃∗ are all zeros, i.e.,
∇_a L̃_n(θ̃) |_{θ̃=θ̃∗} = − Σ_{i=1}^n ℓ′( −y_i f(x_i; θ∗) − y_i a∗ e^{w∗^⊤ x_i + b∗} ) y_i exp(w∗^⊤ x_i + b∗) + λ a∗ = 0,

∇_b L̃_n(θ̃) |_{θ̃=θ̃∗} = − a∗ Σ_{i=1}^n ℓ′( −y_i f(x_i; θ∗) − y_i a∗ e^{w∗^⊤ x_i + b∗} ) y_i exp(w∗^⊤ x_i + b∗) = 0.
From the above equations, it is not difficult to see that a∗ satisfies λa∗2 = 0 or, equivalently, a∗ = 0. We note that the main observation we are using here is that the derivative of the exponential neuron is itself. Therefore, it is not difficult to see that the same proof holds for all neuron activation function σ satisfying σ′(z) = cσ(z),∀z ∈ R for some constant c. In fact, with a small modification of the proof, we can show that the same proof works for all neuron activation functions satisfying σ(z) = (c1z + c0)σ
′(z),∀z ∈ R for some constants c0 and c1. This further indicates that the same proof holds for the monomial neurons and thus the proof of Proposition 1 follows directly from the proof of Theorem 1. Proof sketch of Lemma 1(ii): The main idea of the proof is to use the high order information of the local minimum to derive Equation (8). Due to the assumption that θ̃ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n, there exists a bounded local region such
that the parameters θ̃∗ achieve the minimum loss value in this region, i.e., ∃δ ∈ (0, 1) such that L̃_n(θ̃∗ + ∆) ≥ L̃_n(θ̃∗) for all ∆ with ‖∆‖_2 ≤ δ. Now, we use δ_a, δ_w to denote the perturbations on the parameters a and w, respectively. Next, we consider the loss value at the point θ̃∗ + ∆ = (θ∗, a∗ + δ_a, w∗ + δ_w, b∗), where we set |δ_a| = e^{−1/ε} and δ_w = εu for an arbitrary unit vector u with ‖u‖_2 = 1. Therefore, as ε goes to zero, the perturbation magnitude ‖∆‖_2 also goes to zero, and this indicates that there exists an ε_0 ∈ (0, 1) such that L̃_n(θ̃∗ + ∆) ≥ L̃_n(θ̃∗) for all ε ∈ [0, ε_0). By the result a∗ = 0 shown in Lemma 1(i), the output of the model f̃ under parameters θ̃∗ + ∆ can be expressed by

f̃(x; θ̃∗ + ∆) = f(x; θ∗) + δ_a exp(δ_w^⊤ x) exp(w∗^⊤ x + b∗).

For simplicity of notation, let g(x; θ̃∗, δ_w) = exp(δ_w^⊤ x) exp(w∗^⊤ x + b∗). From the second order Taylor expansion with Lagrange remainder and the assumption that ℓ is twice differentiable, it follows that there exists a constant C(θ̃∗, D) depending only on the local minimizer θ̃∗ and the dataset D such that the following inequality holds for every sample in the dataset and every ε ∈ [0, ε_0),

ℓ(−y_i f̃(x_i; θ̃∗ + ∆)) ≤ ℓ(−y_i f(x_i; θ∗)) + ℓ′(−y_i f(x_i; θ∗)) (−y_i) δ_a g(x_i; θ̃∗, δ_w) + C(θ̃∗, D) δ_a².

Summing the above inequality over all samples in the dataset and recalling that L̃_n(θ̃∗ + ∆) ≥ L̃_n(θ̃∗) holds for all ε ∈ [0, ε_0), we obtain

−sgn(δ_a) Σ_{i=1}^n ℓ′(−y_i f(x_i; θ∗)) y_i exp(εu^⊤ x_i) exp(w∗^⊤ x_i + b∗) + [nC(θ̃∗, D) + λ/2] exp(−1/ε) ≥ 0.
Finally, we complete the proof by induction. Specifically, for the base hypothesis where p = 0, we can take the limit on the both sides of the above inequality as ε→ 0, using the property that δa can be either positive or negative and thus establish the base case where p = 0. For the higher order case, we can first assume that Equation (8) holds for p = 0, ..., k and then subtract these equations from the above inequality. After taking the limit on the both sides of the inequality as ε→ 0, we can prove that Equation (8) holds for p = k + 1. Therefore, by induction, we can prove that Equation (8) holds for any non-negative integer p.
5.3 Proof Sketch of Lemma 2
The proof of Lemma 2 follows directly from the results in reference (Zhang et al., 2012). It is easy to check that, for every sequence {c_i}_{i=1}^n and every non-negative integer k ≥ 0, the k-th order tensor T_k = Σ_{i=1}^n c_i x_i^{⊗k} is a symmetric tensor. From Theorem 1 in (Zhang et al., 2012), it directly follows that

max_{u_1,...,u_k : ‖u_1‖_2=...=‖u_k‖_2=1} |T_k(u_1, ..., u_k)| = max_{u : ‖u‖_2=1} |T_k(u, ..., u)|.

Furthermore, by the assumption that T_k(u, ..., u) = Σ_{i=1}^n c_i (u^⊤ x_i)^k = 0 holds for all ‖u‖_2 = 1, we obtain

max_{u_1,...,u_k : ‖u_1‖_2=...=‖u_k‖_2=1} |T_k(u_1, ..., u_k)| = 0,

and this is equivalent to T_k = 0_d^{⊗k}, where 0_d is the zero vector in the d-dimensional space.
5.4 Proof Sketch of Theorem 1
For every dataset D satisfying Assumption 2, by the Lagrange interpolating polynomial, there always exists a polynomial P(x) = Σ_j c_j π_j(x) defined on R^d such that it can correctly classify all samples in the dataset with margin at least one, i.e., y_i P(x_i) ≥ 1, ∀i ∈ [n], where π_j denotes the j-th monomial in the polynomial P(x). Therefore, from Lemmas 1 and 2, it follows that

Σ_{i=1}^n ℓ′(−y_i f(x_i; θ∗)) e^{w∗^⊤ x_i + b∗} y_i P(x_i) = Σ_j c_j Σ_{i=1}^n ℓ′(−y_i f(x_i; θ∗)) y_i e^{w∗^⊤ x_i + b∗} π_j(x_i) = 0.

Since y_i P(x_i) ≥ 1 and e^{w∗^⊤ x_i + b∗} > 0 hold for all i ∈ [n] and the loss function ℓ is a non-decreasing function, i.e., ℓ′(z) ≥ 0, ∀z ∈ R, it follows that ℓ′(−y_i f(x_i; θ∗)) = 0 holds for all i ∈ [n]. In addition, from the assumption that every critical point of the loss function ℓ is a global minimum, it follows that z_i = −y_i f(x_i; θ∗) achieves the global minimum of the loss function ℓ, and this further indicates that θ∗ is a global minimum of the empirical loss L_n(θ). Furthermore, since at every local minimum the exponential neuron is inactive, a∗ = 0, the set of parameters θ̃∗ is a global minimum of the loss function L̃_n(θ̃). Finally, since every critical point of the loss function ℓ(z) satisfies z < 0, for every sample ℓ′(−y_i f(x_i; θ∗)) = 0 indicates that y_i f(x_i; θ∗) > 0, or, equivalently, y_i = sgn(f(x_i; θ∗)). Therefore, the set of parameters θ∗ also minimizes the training error. In summary, the set of parameters θ̃∗ = (θ∗, a∗, w∗, b∗) minimizes the loss function L̃_n(θ̃), and the set of parameters θ∗ simultaneously minimizes the empirical loss function L_n(θ) and the training error R_n(θ; f).
6 Conclusions and Discussions
One of the difficulties in analyzing neural networks is the non-convexity of the loss functions which allows the existence of many spurious minima with large loss values. In this paper, we prove that for any neural network, by adding a special neuron and an associated regularizer, the new loss function has no spurious local minimum. In addition, we prove that, at every local minimum of this new loss function, the exponential neuron is inactive and this means that the augmented neuron and regularizer improve the landscape of the loss surface without affecting the representing power of the original neural network. We also extend the main result in a few ways. First, while adding a special neuron makes the network different from a classical neural network architecture, the same result also holds for a standard fully connected network with one special neuron added to each layer. Second, the same result holds if we change the exponential neuron to a polynomial neuron with a degree dependent on the data. Third, the same result holds even if one feature vector corresponds to both labels. This paper is an effort in designing neural networks that are “good”. Here “good” can mean various things such as nice landscape, stronger representation power or better generalization, and in this paper we focus on the landscape –in particular, the very specific property “every local minimum is a global minimum”. While our results enhance the understanding of the landscape, the practical implications are not straightforward to see since we did not consider other aspects such as algorithms and generalization. It is an interesting direction to improve the landscape results by considering other aspects, such as studying when a specific algorithm will converge to local minima and thus global minima.
7 Acknowledgment
Research is supported by the following grants: USDA/NSF CPS grant AG 2018-67007-2837, NSF NeTS 1718203, NSF CPS ECCS 1739189, DTRA grant HDTRA1-15-1-0003, NSF CCF 1755847, and a start-up grant from Dept. of ISE, University of Illinois Urbana-Champaign. | 1. What is the main contribution of the paper regarding neural networks?
2. What are the strengths of the paper, particularly in its theoretical analysis and explanations?
3. What are the weaknesses of the paper, especially regarding its experiments and practical applications?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
STRENGTHS: Overall I found this to be a very interesting and well-written paper. The main contribution is to show that for relatively general network architectures, if either an additional augmenting skip neuron is added directly from the input to the output (or one augmenting neuron is added per layer) all local minima will be globally optimal. In addition, the authors also show that the output of the augmenting neuron(s) will be 0 at all local minima, which implies that one can add an augmenting neuron to a relatively arbitrary network and still be guaranteed that a recovered local minima will also be a global minimum of the original network. The authors provide a comprehensive review of existing results and clearly place their contribution within that context. The proofs are well explained and relatively easy to follow. WEAKNESSES: I found this to be a high quality paper and do not have too many strong criticisms of this paper, but a few points that could improve the paper follow below. 1) There is no experimental testing using the proposed augmenting neurons. A clear prediction of the theory is that the optimization landscape becomes nicer with the addition of the augmenting neuron(s). While understandably these results only apply to local minima and not necessarily arbitrary stationary points (modulo the results of Proposition 3), a fairly simple experiment would be to just add the augmenting neuron and test if the recovered values of the original loss function are smaller when the augmenting neuron(s) are present. 2) Similar to the above comment, a small discussion (perhaps following Proposition 3) regarding the limitations of the results in practical training could be beneficial to some readers (for example, pointing out that most commonly used algorithms are only guaranteed to converge to first order stationary points and not local minima in the general case). |
NIPS | Title
Adding One Neuron Can Eliminate All Bad Local Minima
Abstract
One of the main difficulties in analyzing neural networks is the non-convexity of the loss function which may have many bad local minima. In this paper, we study the landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.
1 Introduction
Deep neural networks have recently achieved huge success in various machine learning tasks (see, Krizhevsky et al. 2012; Goodfellow et al. 2013; Wan et al. 2013, for example). However, a theoretical understanding of neural networks is largely lacking. One of the difficulties in analyzing neural networks is the non-convexity of the loss function which allows the existence of many local minima with large losses. This was long considered a bottleneck of neural networks, and one of the reasons why convex formulations such as support vector machine (Cortes & Vapnik, 1995) were preferred previously. Given the recent empirical success of the deep neural networks, an interesting question is whether the non-convexity of the neural network is really an issue. It has been widely conjectured that all local minima of the empirical loss lead to similar training performance (LeCun et al., 2015; Choromanska et al., 2015). For example, prior works empirically showed that neural networks with identical architectures but different initialization points can converge to local minima with similar classification performance (Krizhevsky et al., 2012; He et al., 2016; Huang & Liu, 2017). On the theoretical side, there have been many recent attempts to analyze the landscape of the neural network loss functions. A few works have studied deep networks, but they either require linear activation functions (Baldi & Hornik, 1989; Kawaguchi, 2016; Freeman & Bruna, 2016; Hardt & Ma, 2017; Yun et al., 2017), or require assumptions such as independence of ReLU activations (Choromanska et al., 2015) and significant overparametrization (Nguyen & Hein, 2017a,b; Livni et al., 2014). There is a large body of works that study single-hidden-layer neural networks and provide various conditions under which a local search algorithm can find a global minimum (Du & Lee, 2018; Ge et al., 2018; Andoni et al., 2014; Sedghi & Anandkumar, 2014; Janzamin et al., 2015; Haeffele & Vidal, 2015; Gautier et al., 2016; Brutzkus & Globerson,
∗Correspondence to R. Srikant, [email protected] and Ruoyu Sun, [email protected]
2017; Soltanolkotabi, 2017; Soudry & Hoffer, 2017; Goel & Klivans, 2017; Du et al., 2017; Zhong et al., 2017; Li & Yuan, 2017; Liang et al., 2018; Mei et al., 2018). It can be roughly divided into two categories: non-global landscape analysis and global landscape analysis. For the first category, the result do not apply to all local minima. One typical conclusion is about the local geometry, i.e., in a small neighborhood of the global minima no bad local minima exist (Zhong et al., 2017; Du et al., 2017; Li & Yuan, 2017). Another typical conclusion is that a subset of local minima are global minima (Haeffele et al., 2014; Haeffele & Vidal, 2015; Soudry & Carmon, 2016; Nguyen & Hein, 2017a,b). Shamir (2018) has shown that a subset of second-order local minima can perform nearly as well as linear predictors. The presence of various conclusions reflects the difficulty of the problem: while analyzing the global landscape seems hard, we may step back and analyze the local landscape or a “majority” of the landscape. For the second category of global landscape analysis, the typical result is that every local minimum is a global minimum. However, even for single-layer networks, strong assumptions such as over-parameterization, very special neuron activation functions, fixed second layer parameters and/or Gaussian data distribution are often needed in the existing works. The presence of various strong assumptions also reflects the difficulty of the problem: even for the single-hidden-layer nonlinear neural network, it seems hard to analyze the landscape, so it is reasonable to make various assumptions. One exception is the recent work Liang et al. (2018) which adopts a different path: instead of simply making several assumptions to obtain positive results, it carefully studies the effect of various conditions on the landscape of neural networks for binary classification. It gives both positive and negative results on the existence of bad local minimum under different conditions. In particular, it studies many common types of neuron activation functions and shows that for a class of neurons there is no bad local minimum, and for other neurons there is. This clearly shows that the choice of neurons can affect the landscape. Then a natural question is: while Liang et al. (2018) considers some special types of data and a broad class of neurons, can we obtain results for more general data when limiting to a smaller class of neurons?
1.1 Our Contributions
Given this context, our main result is quite surprising: for a neural network with a special type of neurons, every local minimum is a global minimum of the loss function. Our result requires no assumption on the network size, the specific type of the original neural network, etc., yet our result applies to every local minimum. Besides the requirement on the neuron activation type, the major trick is an associated regularizer. Our major results and their implications are as follows:
• We focus on the binary classification problem with a smooth hinge loss function. We prove the following result: for any neural network, by adding a special neuron (e.g., exponential neuron) to the network and adding a quadratic regularizer of this neuron, the new loss function has no bad local minimum. In addition, every local minimum achieves the minimum misclassification error.
• In the main result, the augmented neuron can be viewed as a skip connection from the input to the output layer. However, this skip connection is not critical, as the same result also holds if we add one special neuron to each layer of a fully-connected feedforward neural network.
• To our knowledge, this is the first result that no spurious local minimum exists for a wide class of deep nonlinear networks. Our result indicates that the class of “good neural networks” (neural networks such that there is an associated loss function with no spurious local minima) contains any network with one special neuron, thus this class is rather “dense” in the class of all neural networks: the distance between any neural network and a good neural network is just a neuron away.
The outline of the paper is as follows. In Section 2, we present several notations. In Section 3, we present the main result and several extensions on the main results are presented in Section 4. We present the proof idea of the main result in Section 5 and conclude this paper in Section 6. All proofs are presented in Appendix.
2 Preliminaries
Feed-forward networks. Given an input vector of dimension d, we consider a neural network with L layers of neurons for binary classification. We denote by Ml the number of neurons in the l-th layer (note that M0 = d). We denote the neural activation function by σ. LetWl ∈ RMl−1×Ml denote the weight matrix connecting the (l − 1)-th and l-th layer and bl denote the bias vector for neurons in
the l-th layer. LetWL+1 ∈ RML and bL ∈ R denote the weight vector and bias scalar in the output layer, respectively. Therefore, the output of the network f : Rd → R can be expressed by
f(x; θ) = W_{L+1}^⊤ σ( W_L^⊤ σ( ... σ( W_1^⊤ x + b_1 ) ... + b_{L−1} ) + b_L ) + b_{L+1}.   (1)
Loss and error. We useD = {(xi, yi)}ni=1 to denote a dataset containing n samples, where xi ∈ Rd and yi ∈ {−1, 1} denote the feature vector and the label of the i-th sample, respectively. Given a neural network f(x;θ) parameterized by θ and a loss function ` : R→ R, in binary classification tasks, we define the empirical loss Ln(θ) as the average loss of the network f on a sample in the dataset and define the training error (also called the misclassification error) Rn(θ; f) as the misclassification rate of the network f on the dataset D, i.e.,
L_n(θ) = Σ_{i=1}^n ℓ(−y_i f(x_i; θ))   and   R_n(θ; f) = (1/n) Σ_{i=1}^n I{y_i ≠ sgn(f(x_i; θ))},   (2)
where I is the indicator function.
Tensor products. We use a ⊗ b to denote the tensor product of vectors a and b and use a^{⊗k} to denote the tensor product a ⊗ ... ⊗ a where a appears k times. For an N-th order tensor T ∈ R^{d_1×d_2×...×d_N} and N vectors u_1 ∈ R^{d_1}, u_2 ∈ R^{d_2}, ..., u_N ∈ R^{d_N}, we define

T ⊗ u_1 ⊗ ... ⊗ u_N = Σ_{i_1∈[d_1],...,i_N∈[d_N]} T(i_1, ..., i_N) u_1(i_1) ... u_N(i_N),

where we use T(i_1, ..., i_N) to denote the (i_1, ..., i_N)-th component of the tensor T, u_k(i_k) to denote the i_k-th component of the vector u_k, k = 1, ..., N, and [d_k] to denote the set {1, ..., d_k}.
3 Main Result
In this section, we first present several important conditions on the loss function and the dataset in order to derive the main results. After that, we will present the main results.
3.1 Assumptions
In this subsection, we introduce two assumptions on the loss function and the dataset.
Assumption 1 (Loss function) Assume that the loss function ` : R → R is monotonically nondecreasing and twice differentiable, i.e., ` ∈ C2. Assume that every critical point of the loss function `(z) is also a global minimum and every global minimum z satisfies z < 0.
A simple example of the loss function satisfying Assumption 1 is the polynomial hinge loss, i.e., `(z) = [max{z+1, 0}]p, p ≥ 3. It is always zero for z ≤ −1 and behaves like a polynomial function in the region z > −1. Note that the condition that every global minimum of the loss function `(z) is negative is not needed to prove the result that every local minimum of the empirical loss is globally minimal, but is necessary to prove that the global minimizer of the empirical loss is also the minimizer of the misclassification rate.
Assumption 2 (Realizability) Assume that there exists a set of parameters θ such that the neural network f(·;θ) is able to correctly classify all samples in the dataset D.
By Assumption 2, we assume that the dataset is realizable by the neural architecture f . We note that this assumption is consistent with previous empirical observations (Zhang et al., 2016; Krizhevsky et al., 2012; He et al., 2016) showing that at the end of the training process, neural networks usually achieve zero misclassification rates on the training sets. However, as we will show later, if the loss function ` is convex, then we can prove the main result even without Assumption 2.
3.2 Main Result
In this subsection, we first introduce several notations and next present the main result of the paper. Given a neural architecture f(·;θ) defined on a d-dimensional Euclidean space and parameterized by a set of parameters θ, we define a new architecture f̃ by adding the output of an exponential neuron to the output of the network f , i.e.,
f̃(x; θ̃) = f(x; θ) + a exp( w^⊤ x + b ),   (3)
where the vector θ̃ = (θ, a,w, b) denote the parametrization of the network f̃ . For this designed model, we define the empirical loss function as follows,
L̃_n(θ̃) = Σ_{i=1}^n ℓ( −y_i f̃(x_i; θ̃) ) + (λ/2) a²,   (4)
where the scalar λ is a positive real number, i.e., λ > 0. Different from the empirical loss function Ln, the loss L̃n has an additional regularizer on the parameter a, since we aim to eliminate the impact of the exponential neuron on the output of the network f̃ at every local minimum of L̃n. As we will show later, the exponential neuron is inactive at every local minimum of the empirical loss L̃n. Now we present the following theorem to show that every local minimum of the loss function L̃n is also a global minimum. Remark: Instead of viewing the exponential term in Equation (3) as a neuron, one can also equivalently think of modifying the loss function to be
L̃_n(θ̃) = Σ_{i=1}^n ℓ( −y_i ( f(x_i; θ) + a exp(w^⊤ x_i + b) ) ) + (λ/2) a².
Then, one can interpret Equation (3) and (4) as maintaining the original neural architecture and slightly modifying the loss function.
Theorem 1 Suppose that Assumption 1 and 2 hold. Then both of the following statements are true:
(i) The empirical loss function L̃n(θ̃) has at least one local minimum.
(ii) Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f).
Remarks: (i) Theorem 1 shows that every local minimum θ̃∗ of the empirical loss L̃n is also a global minimum and shows that θ∗ achieves the minimum training error and the minimum loss value on the original loss function Ln at the same time. (ii) Since we do not require the explicit form of the neural architecture f , Theorem 1 applies to the neural architectures widely used in practice such as convolutional neural network (Krizhevsky et al., 2012), deep residual networks (He et al., 2016), etc. This further indicates that the result holds for any real neural activation functions such as rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), etc. (iii) As we will show in the following corollary, at every local minimum θ̃∗, the exponential neuron is inactive. Therefore, at every local minimum θ̃∗ = (θ∗, a∗,w∗, b∗), the neural network f̃ with an augmented exponential neuron is equivalent to the original neural network f .
Corollary 1 Under the conditions of Theorem 1, if θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then two neural networks f(·;θ∗) and f̃(·; θ̃∗) are equivalent, i.e., f(x;θ∗) = f̃(x; θ̃∗), ∀x ∈ Rd. Corollary 1 shows that at every local minimum, the exponential neuron does not contribute to the output of the neural network f̃ . However, this does not imply that the exponential neuron is unnecessary, since several previous results (Safran & Shamir, 2018; Liang et al., 2018) have already shown that the loss surface of pure ReLU neural networks are guaranteed to have bad local minima. Furthermore, to prove the main result under any dataset, the regularizer is also necessary, since Liang et al. (2018) has already shown that even with an augmented exponential neuron, the empirical loss without the regularizer still have bad local minima under some datasets.
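To see Theorem 1 and Corollary 1 at work on a toy problem, one can run plain gradient descent on the regularized loss L̃_n for a small realizable dataset and observe that the coefficient a of the exponential neuron is driven toward zero while the training error reaches zero. The sketch below is an illustration only: the linear base model, the polynomial hinge loss with p = 3, λ = 1, the step size, and the iteration budget are all assumptions, and nothing in the paper guarantees that gradient descent itself converges to a local minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X @ np.array([1.0, -2.0]) + 0.1)          # linearly separable labels in {-1, +1}

def loss(z, p=3): return np.maximum(z + 1.0, 0.0) ** p
def dloss(z, p=3): return p * np.maximum(z + 1.0, 0.0) ** (p - 1)

# Parameters: base linear model (v, c) plus the augmented exponential neuron (a, w, b).
v, c, a, w, b, lam, lr = np.zeros(2), 0.0, 0.5, np.zeros(2), 0.0, 1.0, 1e-3
for _ in range(20000):
    expo = np.exp(X @ w + b)                          # exponential neuron output per sample
    f = X @ v + c + a * expo                          # f~(x_i; theta~)
    g = dloss(-y * f) * (-y)                          # dL/df for each sample
    v -= lr * (X.T @ g); c -= lr * g.sum()
    a -= lr * (g @ expo + lam * a)                    # includes the gradient of (lam/2) a^2
    w -= lr * (X.T @ (g * a * expo)); b -= lr * (g * a * expo).sum()

print("a ->", a)                                      # should be close to 0, as Corollary 1 predicts
print("training error:", np.mean(y != np.sign(X @ v + c + a * np.exp(X @ w + b))))
```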
4 Extensions
4.1 Eliminating the Skip Connection
As noted in the previous section, the exponential term in Equation (3) can be viewed as a skip connection or a modification to the loss function. Our analysis also works under other architectures as well. When the exponential term is viewed as a skip connection, the network architecture is as shown in Fig. 1(a). This architecture is different from the canonical feedforward neural architectures
as there is a direct path from the input layer to the output layer. In this subsection, we will show that the main result still holds if the model f̃ is defined as a feedforward neural network shown in Fig. 1(b), where each layer of the network f is augmented by an additional exponential neuron. This is a standard fully connected neural network except for one special neuron at each layer.
Notations. Given a fully-connected feedforward neural network f(·;θ) defined by Equation (1), we define a new fully connected feedforward neural network f̃ by adding an additional exponential neuron to each layer of the network f . We use the vector θ̃ = (θ,θexp) to denote the parameterization of the network f̃ , where θexp denotes the vector consisting of all augmented weights and biases. Let W̃l ∈ R(Ml−1+1)×(Ml+1) and b̃l ∈ RMl+1 denote the weight matrix and the bias vector in the l-th layer of the network f̃ , respectively. Let W̃L+1 ∈ R(ML+1) and b̃L+1 ∈ R denote the weight vector and the bias scalar in the output layer of the network f̃ , respectively. Without the loss of generality, we assume that the (Ml +1)-th neuron in the l-th layer is the augmented exponential neuron. Thus, the output of the network f̃ is expressed by
f̃(x; θ̃) = W̃_{L+1}^⊤ σ̃_L( W̃_L^⊤ σ̃_{L−1}( ... σ̃_1( W̃_1^⊤ x + b̃_1 ) ... + b̃_{L−1} ) + b̃_L ) + b̃_{L+1},   (5)
where σ̃l : RMl−1+1 → RMl+1 is a vector-valued activation function with the first Ml components being the activation functions σ in the network f and with the last component being the exponential function, i.e., σ̃l(z) = (σ(z), ..., σ(z), exp(z)). Furthermore, we use the w̃l to denote the vector in the (Ml−1 + 1)-th row of the matrix W̃l. In other words, the components of the vector w̃l are the weights on the edges connecting the exponential neuron in the (l − 1)-th layer and the neurons in the l-th layer. For this feedforward network, we define an empirical loss function as
L̃_n(θ̃) = Σ_{i=1}^n ℓ(−y_i f̃(x_i; θ̃)) + (λ/2) Σ_{l=2}^{L+1} ‖w̃_l‖_{2L}^{2L},   (6)
where ‖a‖p denotes the p-norm of a vector a and λ is a positive real number, i.e., λ > 0. Similar to the empirical loss discussed in the previous section, we add a regularizer to eliminate the impacts of all exponential neurons on the output of the network. Similarly, we can prove that at every local minimum of L̃n, all exponential neurons are inactive. Now we present the following theorem to show that if the set of parameters θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum and θ∗ is a global minimum of both minimization problems minθ Ln(θ) and minθ Rn(θ; f). This means that the neural network f(·;θ∗) simultaneously achieves the globally minimal loss value and misclassification rate on the dataset D. Theorem 2 Suppose that Assumption 1 and 2 hold. Suppose that the activation function σ is differentiable. Assume that θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f). Remarks: (i) This theorem is not a direct corollary of the result in the previous section, but the proof ideas are similar. (ii) Due to the assumption on the differentiability of the activation function σ, Theorem 2 does not apply to the neural networks consisting of non-smooth neurons such as ReLUs, Leaky ReLUs, etc. (iii) Similar to Corollary 1, we will present the following corollary to show that at every local minimum θ̃∗ = (θ∗,θ∗exp), the neural network f̃ with augmented exponential neurons is equivalent to the original neural network f .
Corollary 2 Under the conditions in Theorem 2, if θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then two neural networks f(·;θ∗) and f̃(·; θ̃∗) are equivalent, i.e., f(x;θ∗) = f̃(x; θ̃∗),∀x ∈ Rd.
Corollary 2 further shows that even if we add an exponential neuron to each layer of the original network f , at every local minimum of the empirical loss, all exponential neurons are inactive.
4.2 Neurons
In this subsection, we will show that even if the exponential neuron is replaced by a monomial neuron, the main result still holds under additional assumptions. Similar to the case where exponential neurons are used, given a neural network f(x;θ), we define a new neural network f̃ by adding the output of a monomial neuron of degree p to the output of the original model f , i.e.,
f̃(x; θ̃) = f(x; θ) + a ( w^⊤ x + b )^p.   (7)
In addition, the empirical loss function L̃n is exactly the same as the loss function defined by Equation (4). Next, we will present the following theorem to show that if all samples in the dataset D can be correctly classified by a polynomial of degree t and the degree of the augmented monomial is not smaller than t (i.e., p ≥ t), then every local minimum of the empirical loss function L̃n(θ̃) is also a global minimum. We note that the degree of a monomial is the sum of powers of all variables in this monomial and the degree of a polynomial is the maximum degree of its monomial.
Proposition 1 Suppose that Assumptions 1 and 2 hold. Assume that all samples in the dataset D can be correctly classified by a polynomial of degree t and p ≥ t. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ is a global minimizer of both problems minθ Ln(θ) and minRn(θ; f).
Remarks: (i) We note that, similar to Theorem 1, Proposition 1 applies to all neural architectures and all neural activation functions defined on R, as we do not require the explicit form of the neural network f . (ii) It follows from the Lagrangian interpolating polynomial and Assumption 2 that for a dataset consisted of n different samples, there always exists a polynomial P of degree smaller n such that the polynomial P can correctly classify all points in the dataset. This indicates that Proposition 1 always holds if p ≥ n. (iii) Similar to Corollary 1 and 2, we can show that at every local minimum θ̃∗ = (θ∗, a∗,w∗, b∗), the neural network f̃ with an augmented monomial neuron is equivalent to the original neural network f .
4.3 Allowing Random Labels
In previous subsections, we assume the realizability of the dataset by the neural network which implies that the label of a given feature vector is unique. It does not cover the case where the dataset contains two samples with the same feature vector but with different labels (for example, the same image can be labeled differently by two different people). Clearly, in this case, no model can correctly classify all samples in this dataset. Another simple example of this case is the mixture of two Gaussians where the data samples are drawn from each of the two Gaussian distributions with certain probability. In this subsection, we will show that under this broader setting that one feature vector may correspond to two different labels, with a slightly stronger assumption on the convexity of the loss `, the same result still holds. The formal statement is present by the following proposition.
Proposition 2 Suppose that Assumption 1 holds and the loss function ` is convex. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f). Remark: The differences of Proposition 2 and Theorem 1 can be understood in the following ways. First, as stated previously, Proposition 2 allows a feature vector to have two different labels, but Theorem 1 does not. Second, the minimum misclassification rate under the conditions in Theorem 1 must be zero, while in Proposition 2, the minimum misclassification rate can be nonzero.
4.4 High-order Stationary Points
In this subsection, we characterize the high-order stationary points of the empirical loss L̃n shown in Section 3.2. We first introduce the definition of the high-order stationary point and next show that every stationary point of the loss L̃n with a sufficiently high order is also a global minimum.
Definition 1 (k-th order stationary point) A critical point θ0 of a function L(θ) is a k-th order stationary point, if there exists positive constant C, ε > 0 such that for every θ with ‖θ − θ0‖2 ≤ ε, L(θ) ≥ L(θ0)− C‖θ − θ0‖k+12 . Next, we will show that if a polynomial of degree p can correctly classify all points in the dataset, then every stationary point of the order at least 2p is a global minimum and the set of parameters corresponding to this stationary point achieves the minimum training error.
Proposition 3 Suppose that Assumptions 1 and 2 hold. Assume that all samples in the dataset can be correctly classified by a polynomial of degree p. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a k-th order stationary point of the empirical loss function L̃n(θ̃) and k ≥ 2p, then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, the neural network f(·;θ∗) achieves the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Rn(θ; f). One implication of Proposition 3 is that if a dataset is linearly separable, then every second order stationary point of the empirical loss function is a global minimum and, at this stationary point, the neural network achieves zero training error. When the dataset is not linearly separable, our result only covers fourth or higher order stationary point of the empirical loss.
5 Proof Idea
In this section, we provide overviews of the proof of Theorem 1.
5.1 Important Lemmas
In this subsection, we present two important lemmas where the proof of Theorem 1 is based.
Lemma 1 Under Assumption 1 and λ > 0, if θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of L̃n, then (i) a∗ = 0, (ii) for any integer p ≥ 0, the following equation holds for all unit vector u : ‖u‖2 = 1,
Σ_{i=1}^n ℓ′(−y_i f(x_i; θ∗)) y_i e^{w∗^⊤ x_i + b∗} (u^⊤ x_i)^p = 0.   (8)
Lemma 2 For any integer k ≥ 0 and any sequence {c_i}_{i=1}^n, if Σ_{i=1}^n c_i (u^⊤ x_i)^k = 0 holds for all unit vectors u with ‖u‖_2 = 1, then the k-th order tensor T_k = Σ_{i=1}^n c_i x_i^{⊗k} is a k-th order zero tensor.
5.2 Proof Sketch of Lemma 1
Proof sketch of Lemma 1(i): To prove a∗ = 0, we only need to check the first order conditions of local minima. By assumption that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of L̃n, then the derivative of L̃n with respect to a and b at the point θ̃∗ are all zeros, i.e.,
∇_a L̃_n(θ̃) |_{θ̃=θ̃∗} = − Σ_{i=1}^n ℓ′( −y_i f(x_i; θ∗) − y_i a∗ e^{w∗^⊤ x_i + b∗} ) y_i exp(w∗^⊤ x_i + b∗) + λ a∗ = 0,

∇_b L̃_n(θ̃) |_{θ̃=θ̃∗} = − a∗ Σ_{i=1}^n ℓ′( −y_i f(x_i; θ∗) − y_i a∗ e^{w∗^⊤ x_i + b∗} ) y_i exp(w∗^⊤ x_i + b∗) = 0.
From the above equations, it is not difficult to see that a∗ satisfies λa∗2 = 0 or, equivalently, a∗ = 0. We note that the main observation we are using here is that the derivative of the exponential neuron is itself. Therefore, it is not difficult to see that the same proof holds for all neuron activation function σ satisfying σ′(z) = cσ(z),∀z ∈ R for some constant c. In fact, with a small modification of the proof, we can show that the same proof works for all neuron activation functions satisfying σ(z) = (c1z + c0)σ
′(z),∀z ∈ R for some constants c0 and c1. This further indicates that the same proof holds for the monomial neurons and thus the proof of Proposition 1 follows directly from the proof of Theorem 1. Proof sketch of Lemma 1(ii): The main idea of the proof is to use the high order information of the local minimum to derive Equation (8). Due to the assumption that θ̃ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n, there exists a bounded local region such
that the parameters θ̃∗ achieve the minimum loss value in this region, i.e., ∃δ ∈ (0, 1) such that L̃_n(θ̃∗ + ∆) ≥ L̃_n(θ̃∗) for all ∆ with ‖∆‖_2 ≤ δ. Now, we use δ_a, δ_w to denote the perturbations on the parameters a and w, respectively. Next, we consider the loss value at the point θ̃∗ + ∆ = (θ∗, a∗ + δ_a, w∗ + δ_w, b∗), where we set |δ_a| = e^{−1/ε} and δ_w = εu for an arbitrary unit vector u with ‖u‖_2 = 1. Therefore, as ε goes to zero, the perturbation magnitude ‖∆‖_2 also goes to zero, and this indicates that there exists an ε_0 ∈ (0, 1) such that L̃_n(θ̃∗ + ∆) ≥ L̃_n(θ̃∗) for all ε ∈ [0, ε_0). By the result a∗ = 0 shown in Lemma 1(i), the output of the model f̃ under parameters θ̃∗ + ∆ can be expressed by

f̃(x; θ̃∗ + ∆) = f(x; θ∗) + δ_a exp(δ_w^⊤ x) exp(w∗^⊤ x + b∗).

For simplicity of notation, let g(x; θ̃∗, δ_w) = exp(δ_w^⊤ x) exp(w∗^⊤ x + b∗). From the second order Taylor expansion with Lagrange remainder and the assumption that ℓ is twice differentiable, it follows that there exists a constant C(θ̃∗, D) depending only on the local minimizer θ̃∗ and the dataset D such that the following inequality holds for every sample in the dataset and every ε ∈ [0, ε_0),

ℓ(−y_i f̃(x_i; θ̃∗ + ∆)) ≤ ℓ(−y_i f(x_i; θ∗)) + ℓ′(−y_i f(x_i; θ∗)) (−y_i) δ_a g(x_i; θ̃∗, δ_w) + C(θ̃∗, D) δ_a².

Summing the above inequality over all samples in the dataset and recalling that L̃_n(θ̃∗ + ∆) ≥ L̃_n(θ̃∗) holds for all ε ∈ [0, ε_0), we obtain

−sgn(δ_a) Σ_{i=1}^n ℓ′(−y_i f(x_i; θ∗)) y_i exp(εu^⊤ x_i) exp(w∗^⊤ x_i + b∗) + [nC(θ̃∗, D) + λ/2] exp(−1/ε) ≥ 0.
Finally, we complete the proof by induction. Specifically, for the base hypothesis where p = 0, we can take the limit on the both sides of the above inequality as ε→ 0, using the property that δa can be either positive or negative and thus establish the base case where p = 0. For the higher order case, we can first assume that Equation (8) holds for p = 0, ..., k and then subtract these equations from the above inequality. After taking the limit on the both sides of the inequality as ε→ 0, we can prove that Equation (8) holds for p = k + 1. Therefore, by induction, we can prove that Equation (8) holds for any non-negative integer p.
5.3 Proof Sketch of Lemma 2
The proof of Lemma 2 follows directly from the results in reference (Zhang et al., 2012). It is easy to check that, for every sequence {c_i}_{i=1}^n and every non-negative integer k ≥ 0, the k-th order tensor T_k = Σ_{i=1}^n c_i x_i^{⊗k} is a symmetric tensor. From Theorem 1 in (Zhang et al., 2012), it directly follows that

max_{u_1,...,u_k : ‖u_1‖_2=...=‖u_k‖_2=1} |T_k(u_1, ..., u_k)| = max_{u : ‖u‖_2=1} |T_k(u, ..., u)|.

Furthermore, by the assumption that T_k(u, ..., u) = Σ_{i=1}^n c_i (u^⊤ x_i)^k = 0 holds for all ‖u‖_2 = 1, we obtain

max_{u_1,...,u_k : ‖u_1‖_2=...=‖u_k‖_2=1} |T_k(u_1, ..., u_k)| = 0,

and this is equivalent to T_k = 0_d^{⊗k}, where 0_d is the zero vector in the d-dimensional space.
5.4 Proof Sketch of Theorem 1
For every dataset D satisfying Assumption 2, by the Lagrangian interpolating polynomial, there always exists a polynomial P (x) = ∑ j cjπj(x) defined on Rd such that it can correctly classify all samples in the dataset with margin at least one, i.e., yiP (xi) ≥ 1,∀i ∈ [n], where πj denotes the j-th monomial in the polynomial P (x). Therefore, from Lemma 1 and 2, it follows that
Σ_{i=1}^n ℓ′(−y_i f(x_i; θ∗)) e^{w∗^⊤ x_i + b∗} y_i P(x_i) = Σ_j c_j Σ_{i=1}^n ℓ′(−y_i f(x_i; θ∗)) y_i e^{w∗^⊤ x_i + b∗} π_j(x_i) = 0.
Since yiP (xi) ≥ 1 and ew ∗>xi+b ∗ > 0 hold for ∀i ∈ [n] and the loss function ` is a non-decreasing function, i.e., `′(z) ≥ 0,∀z ∈ R, then `′(−yif(xi;θ∗)) = 0 holds for all i ∈ [n]. In addition, from the assumption that every critical point of the loss function ` is a global minimum, it follows that zi = −yif(xi;θ∗) achieves the global minimum of the loss function ` and this further indicates that
θ∗ is a global minimum of the empirical loss Ln(θ). Furthermore, since at every local minimum, the exponential neuron is inactive, a∗ = 0, then the set of parameters θ̃∗ is a global minimum of the loss function L̃n(θ̃). Finally, since every critical point of the loss function `(z) satisfies z < 0, then for every sample, `′(−yif(xi;θ∗)) = 0 indicates that yif(xi;θ∗) > 0, or, equivalently, yi = sgn(f(xi;θ∗)). Therefore, the set of parameters θ∗ also minimizes the training error. In summary, the set of parameters θ̃∗ = (θ∗, a∗,w∗, b∗) minimizes the loss function L̃n(θ̃) and the set of parameters θ∗ simultaneously minimizes the empirical loss function Ln(θ) and the training error Rn(θ; f).
6 Conclusions and Discussions
One of the difficulties in analyzing neural networks is the non-convexity of the loss functions which allows the existence of many spurious minima with large loss values. In this paper, we prove that for any neural network, by adding a special neuron and an associated regularizer, the new loss function has no spurious local minimum. In addition, we prove that, at every local minimum of this new loss function, the exponential neuron is inactive and this means that the augmented neuron and regularizer improve the landscape of the loss surface without affecting the representing power of the original neural network. We also extend the main result in a few ways. First, while adding a special neuron makes the network different from a classical neural network architecture, the same result also holds for a standard fully connected network with one special neuron added to each layer. Second, the same result holds if we change the exponential neuron to a polynomial neuron with a degree dependent on the data. Third, the same result holds even if one feature vector corresponds to both labels. This paper is an effort in designing neural networks that are “good”. Here “good” can mean various things such as nice landscape, stronger representation power or better generalization, and in this paper we focus on the landscape –in particular, the very specific property “every local minimum is a global minimum”. While our results enhance the understanding of the landscape, the practical implications are not straightforward to see since we did not consider other aspects such as algorithms and generalization. It is an interesting direction to improve the landscape results by considering other aspects, such as studying when a specific algorithm will converge to local minima and thus global minima.
7 Acknowledgment
Research is supported by the following grants: USDA/NSF CPS grant AG 2018-67007-2837, NSF NeTS 1718203, NSF CPS ECCS 1739189, DTRA grant HDTRA1-15-1-0003, NSF CCF 1755847 and a start-up grant from Dept. of ISE, University of Illinois Urbana-Champaign. | 1. What is the main contribution of the paper regarding deep neural networks and poor local minima?
2. What are the strengths and weaknesses of the proposed approach in proving the absence of non-global local minima?
3. How does the reviewer assess the significance and novelty of the result compared to prior works?
4. Are there any concerns regarding the practicality and computational complexity of finding local minima for the modified loss function?
5. Do you have any suggestions for improving the paper, such as including discussions on realizability assumptions and generalization abilities? | Review | Review
=== added after author response === I have two more comments: (a) the scaling issue mentioned by R5 actually leads to an immediate trivial proof for Lemma 1 (i) and invites the following question: the exponential neuron, from an optimization perspective, is entirely "redundant" as it must vanish at any local minimizer but yet it changes the potential set of local minima, by putting more stringent conditions on the local minima. This phenomenon is a bit curious and perhaps deserves more elaboration. (b) I want to emphasize again "eliminating local minima" by itself is no big deal, because you can get a reformulation that eliminates all local-but-not-global minima and yet is NP-hard to solve (e.g., finding a local minimum). This, I am afraid, is likely what is going on here (if you drop the separable assumption). Prove me wrong. === end === Deep neural nets are known to be empirically "immune" to poor local minima, and a lot of recent efforts have been spent on understanding why. The main contribution of this work is to prove that by adding a single exponential function (directly) from input to output and adding a mild l_2 regularizer, the slightly modified, highly nonconvex loss function does not have any non-global local minima. Moreover, all of these local minima actually correspond to the global minima of the original, unmodified nonconvex loss. This surprising result, to the best of my knowledge, is new and of genuine interest. The paper is also very well-written and I enjoyed most in reading this paper out of my 6 assignments. As usual, there is perhaps still some room to improve here. While the main result does appear to be quite intricating at first sight: any local minima is global? and they correspond to the global minima of the original network? Wow! But if we think a bit harder, this result is perhaps not too surprising after all: simply take the Fenchel bi-conjugate of the original loss, then immediately we can conclude any local minima of the biconjugate is global, and under mild assumptions we can also show these local minima correspond to the global minima of the original loss. So, this conclusion itself is not surprising. The nice part of the authors' construction lies in that the modified function is explicitly available and resembles the original neural network so one can actually optimize it for real, while the biconjugate is more of a conceptual tool that is hardly implementable. Nevertheless, I wish the authors had included this comment. It would be a good idea to point out that the so-claimed local minima of the modified loss does exist, for one need only take a global minimizer of the original loss (whose existence we are willing to assume) and augment with 0 to get a local minima of the modified loss. But most importantly, the authors dodged an utterly important question: how difficult it is to find such local minima (of the modified loss)? This question must be answered if we want to actually exploit the nice results that the authors have obtained. My worry is that we probably cannot find good algorithms converging in reasonable amount of time to any of those local minima: simply take a neural network that we know is NP-hard to train, then it follows the modified loss is also NP-hard to train (without the realizability assumption of course). If this question is not answered, I am afraid the author's nice construction would not be too much different from the conceptual biconjugate... 
Another concern is the authors only focused on the training error, and did not investigate the generalization of the modified network at all.. If we are willing to assume the training data is separable (linear or polynomial), then achieving a zero training error in polytime is really not a big deal (there are plenty of ways). Proposition 2 alleviates some of this concern, but I would suggest the authors add more discussion on the realizability assumption, especially from a non neural network perspective. Some minor comments: Line 77: b_L should be b_{L+1}. Eq (4): the regularization constant lambda can be any positive number? This to me is another alarm: the modified loss likely to be very ill-behaved... |
NIPS | Title
Adding One Neuron Can Eliminate All Bad Local Minima
Abstract
One of the main difficulties in analyzing neural networks is the non-convexity of the loss function which may have many bad local minima. In this paper, we study the landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.
1 Introduction
Deep neural networks have recently achieved huge success in various machine learning tasks (see, Krizhevsky et al. 2012; Goodfellow et al. 2013; Wan et al. 2013, for example). However, a theoretical understanding of neural networks is largely lacking. One of the difficulties in analyzing neural networks is the non-convexity of the loss function which allows the existence of many local minima with large losses. This was long considered a bottleneck of neural networks, and one of the reasons why convex formulations such as support vector machine (Cortes & Vapnik, 1995) were preferred previously. Given the recent empirical success of the deep neural networks, an interesting question is whether the non-convexity of the neural network is really an issue. It has been widely conjectured that all local minima of the empirical loss lead to similar training performance (LeCun et al., 2015; Choromanska et al., 2015). For example, prior works empirically showed that neural networks with identical architectures but different initialization points can converge to local minima with similar classification performance (Krizhevsky et al., 2012; He et al., 2016; Huang & Liu, 2017). On the theoretical side, there have been many recent attempts to analyze the landscape of the neural network loss functions. A few works have studied deep networks, but they either require linear activation functions (Baldi & Hornik, 1989; Kawaguchi, 2016; Freeman & Bruna, 2016; Hardt & Ma, 2017; Yun et al., 2017), or require assumptions such as independence of ReLU activations (Choromanska et al., 2015) and significant overparametrization (Nguyen & Hein, 2017a,b; Livni et al., 2014). There is a large body of works that study single-hidden-layer neural networks and provide various conditions under which a local search algorithm can find a global minimum (Du & Lee, 2018; Ge et al., 2018; Andoni et al., 2014; Sedghi & Anandkumar, 2014; Janzamin et al., 2015; Haeffele & Vidal, 2015; Gautier et al., 2016; Brutzkus & Globerson,
∗Correspondence to R. Srikant, [email protected] and Ruoyu Sun, [email protected]
2017; Soltanolkotabi, 2017; Soudry & Hoffer, 2017; Goel & Klivans, 2017; Du et al., 2017; Zhong et al., 2017; Li & Yuan, 2017; Liang et al., 2018; Mei et al., 2018). It can be roughly divided into two categories: non-global landscape analysis and global landscape analysis. For the first category, the result do not apply to all local minima. One typical conclusion is about the local geometry, i.e., in a small neighborhood of the global minima no bad local minima exist (Zhong et al., 2017; Du et al., 2017; Li & Yuan, 2017). Another typical conclusion is that a subset of local minima are global minima (Haeffele et al., 2014; Haeffele & Vidal, 2015; Soudry & Carmon, 2016; Nguyen & Hein, 2017a,b). Shamir (2018) has shown that a subset of second-order local minima can perform nearly as well as linear predictors. The presence of various conclusions reflects the difficulty of the problem: while analyzing the global landscape seems hard, we may step back and analyze the local landscape or a “majority” of the landscape. For the second category of global landscape analysis, the typical result is that every local minimum is a global minimum. However, even for single-layer networks, strong assumptions such as over-parameterization, very special neuron activation functions, fixed second layer parameters and/or Gaussian data distribution are often needed in the existing works. The presence of various strong assumptions also reflects the difficulty of the problem: even for the single-hidden-layer nonlinear neural network, it seems hard to analyze the landscape, so it is reasonable to make various assumptions. One exception is the recent work Liang et al. (2018) which adopts a different path: instead of simply making several assumptions to obtain positive results, it carefully studies the effect of various conditions on the landscape of neural networks for binary classification. It gives both positive and negative results on the existence of bad local minimum under different conditions. In particular, it studies many common types of neuron activation functions and shows that for a class of neurons there is no bad local minimum, and for other neurons there is. This clearly shows that the choice of neurons can affect the landscape. Then a natural question is: while Liang et al. (2018) considers some special types of data and a broad class of neurons, can we obtain results for more general data when limiting to a smaller class of neurons?
1.1 Our Contributions
Given this context, our main result is quite surprising: for a neural network with a special type of neurons, every local minimum is a global minimum of the loss function. Our result requires no assumption on the network size, the specific type of the original neural network, etc., yet our result applies to every local minimum. Besides the requirement on the neuron activation type, the major trick is an associated regularizer. Our major results and their implications are as follows:
• We focus on the binary classification problem with a smooth hinge loss function. We prove the following result: for any neural network, by adding a special neuron (e.g., exponential neuron) to the network and adding a quadratic regularizer of this neuron, the new loss function has no bad local minimum. In addition, every local minimum achieves the minimum misclassification error.
• In the main result, the augmented neuron can be viewed as a skip connection from the input to the output layer. However, this skip connection is not critical, as the same result also holds if we add one special neuron to each layer of a fully-connected feedforward neural network.
• To our knowledge, this is the first result that no spurious local minimum exists for a wide class of deep nonlinear networks. Our result indicates that the class of “good neural networks” (neural networks such that there is an associated loss function with no spurious local minima) contains any network with one special neuron, thus this class is rather “dense” in the class of all neural networks: the distance between any neural network and a good neural network is just a neuron away.
The outline of the paper is as follows. In Section 2, we present several notations. In Section 3, we present the main result and several extensions on the main results are presented in Section 4. We present the proof idea of the main result in Section 5 and conclude this paper in Section 6. All proofs are presented in Appendix.
2 Preliminaries
Feed-forward networks. Given an input vector of dimension d, we consider a neural network with L layers of neurons for binary classification. We denote by Ml the number of neurons in the l-th layer (note that M0 = d). We denote the neural activation function by σ. LetWl ∈ RMl−1×Ml denote the weight matrix connecting the (l − 1)-th and l-th layer and bl denote the bias vector for neurons in
the l-th layer. Let W_{L+1} ∈ R^{M_L} and b_{L+1} ∈ R denote the weight vector and the bias scalar in the output layer, respectively. Therefore, the output of the network f : R^d → R can be expressed by
f(x; θ) = W_{L+1}^⊤ σ(W_L σ(··· σ(W_1^⊤ x + b_1) ··· + b_{L−1}) + b_L) + b_{L+1}. (1)
Loss and error. We use D = {(x_i, y_i)}_{i=1}^n to denote a dataset containing n samples, where x_i ∈ R^d and y_i ∈ {−1, 1} denote the feature vector and the label of the i-th sample, respectively. Given a neural network f(x; θ) parameterized by θ and a loss function ℓ : R → R, in binary classification tasks, we define the empirical loss L_n(θ) as the average loss of the network f on a sample in the dataset and define the training error (also called the misclassification error) R_n(θ; f) as the misclassification rate of the network f on the dataset D, i.e.,
L_n(θ) = Σ_{i=1}^n ℓ(−y_i f(x_i; θ))  and  R_n(θ; f) = (1/n) Σ_{i=1}^n I{y_i ≠ sgn(f(x_i; θ))}. (2)
where I is the indicator function.
Tensor products. We use a ⊗ b to denote the tensor product of vectors a and b and use a^{⊗k} to denote the tensor product a ⊗ ... ⊗ a where a appears k times. For an N-th order tensor T ∈ R^{d_1×d_2×...×d_N} and N vectors u_1 ∈ R^{d_1}, u_2 ∈ R^{d_2}, ..., u_N ∈ R^{d_N}, we define
T ⊗ u_1 ⊗ ... ⊗ u_N = Σ_{i_1∈[d_1],...,i_N∈[d_N]} T(i_1, ..., i_N) u_1(i_1) ... u_N(i_N),
where we use T(i_1, ..., i_N) to denote the (i_1, ..., i_N)-th component of the tensor T, u_k(i_k) to denote the i_k-th component of the vector u_k, k = 1, ..., N, and [d_k] to denote the set {1, ..., d_k}.
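For concreteness, the following minimal numpy sketch (our illustration; the one-hidden-layer ReLU network, the logistic surrogate ℓ(z) = log(1 + e^z), and the random toy data are all assumptions, since Equation (2) leaves f and ℓ abstract) evaluates the empirical loss L_n and the misclassification rate R_n.

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20, 3, 16                        # samples, input dimension, hidden width
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)

# A small one-hidden-layer ReLU network standing in for f(x; theta) of Equation (1).
W1, b1 = rng.normal(size=(d, m)), rng.normal(size=m)
W2, b2 = rng.normal(size=m), rng.normal()

def f(X):
    return np.maximum(X @ W1 + b1, 0.0) @ W2 + b2

def ell(z):
    return np.log1p(np.exp(z))             # a smooth, non-decreasing surrogate loss

L_n = np.sum(ell(-y * f(X)))               # empirical loss of Equation (2)
R_n = np.mean(y != np.sign(f(X)))          # misclassification rate of Equation (2)
print(L_n, R_n)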
3 Main Result
In this section, we first present several important conditions on the loss function and the dataset in order to derive the main results. After that, we will present the main results.
3.1 Assumptions
In this subsection, we introduce two assumptions on the loss function and the dataset.
Assumption 1 (Loss function) Assume that the loss function ` : R → R is monotonically nondecreasing and twice differentiable, i.e., ` ∈ C2. Assume that every critical point of the loss function `(z) is also a global minimum and every global minimum z satisfies z < 0.
A simple example of the loss function satisfying Assumption 1 is the polynomial hinge loss, i.e., `(z) = [max{z+1, 0}]p, p ≥ 3. It is always zero for z ≤ −1 and behaves like a polynomial function in the region z > −1. Note that the condition that every global minimum of the loss function `(z) is negative is not needed to prove the result that every local minimum of the empirical loss is globally minimal, but is necessary to prove that the global minimizer of the empirical loss is also the minimizer of the misclassification rate.
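As a quick sanity check of these conditions, here is a small numpy sketch (added for illustration; the grid of evaluation points is an assumption) of the polynomial hinge loss ℓ(z) = [max(z + 1, 0)]^p and its derivative, showing that ℓ is non-decreasing and that every critical point lies in the region z ≤ −1 < 0 where ℓ attains its global minimum value 0.

import numpy as np

def poly_hinge(z, p=3):
    # ell(z) = max(z + 1, 0)^p; for p >= 3 it is twice continuously differentiable.
    return np.maximum(z + 1.0, 0.0) ** p

def poly_hinge_grad(z, p=3):
    return p * np.maximum(z + 1.0, 0.0) ** (p - 1)

z = np.linspace(-3.0, 2.0, 11)
print(poly_hinge(z))       # identically 0 for z <= -1, strictly increasing afterwards
print(poly_hinge_grad(z))  # ell'(z) = 0 exactly on z <= -1, i.e., only at global minima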
Assumption 2 (Realizability) Assume that there exists a set of parameters θ such that the neural network f(·;θ) is able to correctly classify all samples in the dataset D.
By Assumption 2, we assume that the dataset is realizable by the neural architecture f . We note that this assumption is consistent with previous empirical observations (Zhang et al., 2016; Krizhevsky et al., 2012; He et al., 2016) showing that at the end of the training process, neural networks usually achieve zero misclassification rates on the training sets. However, as we will show later, if the loss function ` is convex, then we can prove the main result even without Assumption 2.
3.2 Main Result
In this subsection, we first introduce several notations and next present the main result of the paper. Given a neural architecture f(·;θ) defined on a d-dimensional Euclidean space and parameterized by a set of parameters θ, we define a new architecture f̃ by adding the output of an exponential neuron to the output of the network f , i.e.,
f̃(x; θ̃) = f(x; θ) + a exp(w^⊤ x + b), (3)
where the vector θ̃ = (θ, a,w, b) denote the parametrization of the network f̃ . For this designed model, we define the empirical loss function as follows,
L̃_n(θ̃) = Σ_{i=1}^n ℓ(−y_i f̃(x_i; θ̃)) + (λ/2) a², (4)
where the scalar λ is a positive real number, i.e., λ > 0. Different from the empirical loss function Ln, the loss L̃n has an additional regularizer on the parameter a, since we aim to eliminate the impact of the exponential neuron on the output of the network f̃ at every local minimum of L̃n. As we will show later, the exponential neuron is inactive at every local minimum of the empirical loss L̃n. Now we present the following theorem to show that every local minimum of the loss function L̃n is also a global minimum. Remark: Instead of viewing the exponential term in Equation (3) as a neuron, one can also equivalently think of modifying the loss function to be
L̃_n(θ̃) = Σ_{i=1}^n ℓ(−y_i (f(x_i; θ) + a exp(w^⊤ x_i + b))) + (λ/2) a².
Then, one can interpret Equation (3) and (4) as maintaining the original neural architecture and slightly modifying the loss function.
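To fix ideas, the following minimal numpy sketch (our illustration; the linear stand-in for f, the polynomial hinge loss, and the value of λ are assumptions) implements the augmented predictor of Equation (3) and the regularized loss L̃_n of Equation (4).

import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 20, 3, 0.1
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)

def ell(z, p=3):
    return np.maximum(z + 1.0, 0.0) ** p    # a loss satisfying Assumption 1

def f(X, theta):
    # Placeholder for the original network f(x; theta); a linear model is used here
    # purely for illustration -- Equations (3) and (4) do not depend on its form.
    v, c = theta
    return X @ v + c

def augmented_loss(theta, a, w, b):
    # L-tilde_n of Equation (4): the loss of f plus the exponential neuron
    # a * exp(w^T x + b), with the regularizer (lam / 2) * a^2.
    f_tilde = f(X, theta) + a * np.exp(X @ w + b)
    return np.sum(ell(-y * f_tilde)) + 0.5 * lam * a ** 2

theta0 = (rng.normal(size=d), 0.0)
print(augmented_loss(theta0, a=0.5, w=rng.normal(size=d), b=0.0))

Minimizing this function over (θ, a, w, b), for instance by gradient descent, is the optimization problem that Theorem 1 below is about: any local minimum it finds has a = 0, and the corresponding θ attains the global minimum of the original loss.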
Theorem 1 Suppose that Assumption 1 and 2 hold. Then both of the following statements are true:
(i) The empirical loss function L̃n(θ̃) has at least one local minimum.
(ii) Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f).
Remarks: (i) Theorem 1 shows that every local minimum θ̃∗ of the empirical loss L̃n is also a global minimum and shows that θ∗ achieves the minimum training error and the minimum loss value on the original loss function Ln at the same time. (ii) Since we do not require the explicit form of the neural architecture f , Theorem 1 applies to the neural architectures widely used in practice such as convolutional neural network (Krizhevsky et al., 2012), deep residual networks (He et al., 2016), etc. This further indicates that the result holds for any real neural activation functions such as rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), etc. (iii) As we will show in the following corollary, at every local minimum θ̃∗, the exponential neuron is inactive. Therefore, at every local minimum θ̃∗ = (θ∗, a∗,w∗, b∗), the neural network f̃ with an augmented exponential neuron is equivalent to the original neural network f .
Corollary 1 Under the conditions of Theorem 1, if θ̃* = (θ*, a*, w*, b*) is a local minimum of the empirical loss function L̃_n(θ̃), then the two neural networks f(·; θ*) and f̃(·; θ̃*) are equivalent, i.e., f(x; θ*) = f̃(x; θ̃*), ∀x ∈ R^d. Corollary 1 shows that at every local minimum, the exponential neuron does not contribute to the output of the neural network f̃. However, this does not imply that the exponential neuron is unnecessary, since several previous results (Safran & Shamir, 2018; Liang et al., 2018) have already shown that the loss surface of pure ReLU neural networks is guaranteed to have bad local minima. Furthermore, to prove the main result under any dataset, the regularizer is also necessary, since Liang et al. (2018) has already shown that even with an augmented exponential neuron, the empirical loss without the regularizer can still have bad local minima under some datasets.
4 Extensions
4.1 Eliminating the Skip Connection
As noted in the previous section, the exponential term in Equation (3) can be viewed as a skip connection or a modification to the loss function. Our analysis also works under other architectures as well. When the exponential term is viewed as a skip connection, the network architecture is as shown in Fig. 1(a). This architecture is different from the canonical feedforward neural architectures
as there is a direct path from the input layer to the output layer. In this subsection, we will show that the main result still holds if the model f̃ is defined as a feedforward neural network shown in Fig. 1(b), where each layer of the network f is augmented by an additional exponential neuron. This is a standard fully connected neural network except for one special neuron at each layer.
Notations. Given a fully-connected feedforward neural network f(·;θ) defined by Equation (1), we define a new fully connected feedforward neural network f̃ by adding an additional exponential neuron to each layer of the network f . We use the vector θ̃ = (θ,θexp) to denote the parameterization of the network f̃ , where θexp denotes the vector consisting of all augmented weights and biases. Let W̃l ∈ R(Ml−1+1)×(Ml+1) and b̃l ∈ RMl+1 denote the weight matrix and the bias vector in the l-th layer of the network f̃ , respectively. Let W̃L+1 ∈ R(ML+1) and b̃L+1 ∈ R denote the weight vector and the bias scalar in the output layer of the network f̃ , respectively. Without the loss of generality, we assume that the (Ml +1)-th neuron in the l-th layer is the augmented exponential neuron. Thus, the output of the network f̃ is expressed by
f̃(x; θ̃) = W̃_{L+1}^⊤ σ̃_{L+1}(W̃_L σ̃_L(··· σ̃_1(W̃_1^⊤ x + b̃_1) ··· + b̃_{L−1}) + b̃_L) + b̃_{L+1}, (5)
where σ̃l : RMl−1+1 → RMl+1 is a vector-valued activation function with the first Ml components being the activation functions σ in the network f and with the last component being the exponential function, i.e., σ̃l(z) = (σ(z), ..., σ(z), exp(z)). Furthermore, we use the w̃l to denote the vector in the (Ml−1 + 1)-th row of the matrix W̃l. In other words, the components of the vector w̃l are the weights on the edges connecting the exponential neuron in the (l − 1)-th layer and the neurons in the l-th layer. For this feedforward network, we define an empirical loss function as
L̃_n(θ̃) = Σ_{i=1}^n ℓ(−y_i f̃(x_i; θ̃)) + (λ/2) Σ_{l=2}^{L+1} ‖w̃_l‖_{2L}^{2L}, (6)
where ‖a‖p denotes the p-norm of a vector a and λ is a positive real number, i.e., λ > 0. Similar to the empirical loss discussed in the previous section, we add a regularizer to eliminate the impacts of all exponential neurons on the output of the network. Similarly, we can prove that at every local minimum of L̃n, all exponential neurons are inactive. Now we present the following theorem to show that if the set of parameters θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum and θ∗ is a global minimum of both minimization problems minθ Ln(θ) and minθ Rn(θ; f). This means that the neural network f(·;θ∗) simultaneously achieves the globally minimal loss value and misclassification rate on the dataset D. Theorem 2 Suppose that Assumption 1 and 2 hold. Suppose that the activation function σ is differentiable. Assume that θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f). Remarks: (i) This theorem is not a direct corollary of the result in the previous section, but the proof ideas are similar. (ii) Due to the assumption on the differentiability of the activation function σ, Theorem 2 does not apply to the neural networks consisting of non-smooth neurons such as ReLUs, Leaky ReLUs, etc. (iii) Similar to Corollary 1, we will present the following corollary to show that at every local minimum θ̃∗ = (θ∗,θ∗exp), the neural network f̃ with augmented exponential neurons is equivalent to the original neural network f .
Corollary 2 Under the conditions in Theorem 2, if θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then two neural networks f(·;θ∗) and f̃(·; θ̃∗) are equivalent, i.e., f(x;θ∗) = f̃(x; θ̃∗),∀x ∈ Rd.
Corollary 2 further shows that even if we add an exponential neuron to each layer of the original network f , at every local minimum of the empirical loss, all exponential neurons are inactive.
4.2 Neurons
In this subsection, we will show that even if the exponential neuron is replaced by a monomial neuron, the main result still holds under additional assumptions. Similar to the case where exponential neurons are used, given a neural network f(x;θ), we define a new neural network f̃ by adding the output of a monomial neuron of degree p to the output of the original model f , i.e.,
f̃(x; θ̃) = f(x; θ) + a (w^⊤ x + b)^p. (7)
In addition, the empirical loss function L̃n is exactly the same as the loss function defined by Equation (4). Next, we will present the following theorem to show that if all samples in the dataset D can be correctly classified by a polynomial of degree t and the degree of the augmented monomial is not smaller than t (i.e., p ≥ t), then every local minimum of the empirical loss function L̃n(θ̃) is also a global minimum. We note that the degree of a monomial is the sum of powers of all variables in this monomial and the degree of a polynomial is the maximum degree of its monomial.
Proposition 1 Suppose that Assumptions 1 and 2 hold. Assume that all samples in the dataset D can be correctly classified by a polynomial of degree t and p ≥ t. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ is a global minimizer of both problems minθ Ln(θ) and minRn(θ; f).
Remarks: (i) We note that, similar to Theorem 1, Proposition 1 applies to all neural architectures and all neural activation functions defined on R, as we do not require the explicit form of the neural network f. (ii) It follows from the Lagrangian interpolating polynomial and Assumption 2 that for a dataset consisting of n different samples, there always exists a polynomial P of degree smaller than n such that P correctly classifies all points in the dataset. This indicates that Proposition 1 always holds if p ≥ n. (iii) Similar to Corollaries 1 and 2, we can show that at every local minimum θ̃* = (θ*, a*, w*, b*), the neural network f̃ with an augmented monomial neuron is equivalent to the original neural network f.
4.3 Allowing Random Labels
In previous subsections, we assumed the realizability of the dataset by the neural network, which implies that the label of a given feature vector is unique. This does not cover the case where the dataset contains two samples with the same feature vector but different labels (for example, the same image can be labeled differently by two different people). Clearly, in this case, no model can correctly classify all samples in the dataset. Another simple example of this case is a mixture of two Gaussians, where the data samples are drawn from each of the two Gaussian distributions with certain probability. In this subsection, we will show that, under this broader setting where one feature vector may correspond to two different labels, the same result still holds with a slightly stronger assumption on the convexity of the loss ℓ. The formal statement is presented in the following proposition.
Proposition 2 Suppose that Assumption 1 holds and the loss function ` is convex. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f). Remark: The differences of Proposition 2 and Theorem 1 can be understood in the following ways. First, as stated previously, Proposition 2 allows a feature vector to have two different labels, but Theorem 1 does not. Second, the minimum misclassification rate under the conditions in Theorem 1 must be zero, while in Proposition 2, the minimum misclassification rate can be nonzero.
4.4 High-order Stationary Points
In this subsection, we characterize the high-order stationary points of the empirical loss L̃n shown in Section 3.2. We first introduce the definition of the high-order stationary point and next show that every stationary point of the loss L̃n with a sufficiently high order is also a global minimum.
Definition 1 (k-th order stationary point) A critical point θ_0 of a function L(θ) is a k-th order stationary point if there exist positive constants C, ε > 0 such that for every θ with ‖θ − θ_0‖_2 ≤ ε, L(θ) ≥ L(θ_0) − C‖θ − θ_0‖_2^{k+1}. Next, we will show that if a polynomial of degree p can correctly classify all points in the dataset, then every stationary point of order at least 2p is a global minimum and the set of parameters corresponding to this stationary point achieves the minimum training error.
Proposition 3 Suppose that Assumptions 1 and 2 hold. Assume that all samples in the dataset can be correctly classified by a polynomial of degree p. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a k-th order stationary point of the empirical loss function L̃n(θ̃) and k ≥ 2p, then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, the neural network f(·;θ∗) achieves the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Rn(θ; f). One implication of Proposition 3 is that if a dataset is linearly separable, then every second order stationary point of the empirical loss function is a global minimum and, at this stationary point, the neural network achieves zero training error. When the dataset is not linearly separable, our result only covers fourth or higher order stationary point of the empirical loss.
5 Proof Idea
In this section, we provide overviews of the proof of Theorem 1.
5.1 Important Lemmas
In this subsection, we present two important lemmas where the proof of Theorem 1 is based.
Lemma 1 Under Assumption 1 and λ > 0, if θ̃* = (θ*, a*, w*, b*) is a local minimum of L̃_n, then (i) a* = 0, and (ii) for any integer p ≥ 0, the following equation holds for every unit vector u with ‖u‖_2 = 1:
Σ_{i=1}^n ℓ′(−y_i f(x_i; θ*)) y_i e^{w*^⊤ x_i + b*} (u^⊤ x_i)^p = 0. (8)
Lemma 2 For any integer k ≥ 0 and any sequence {c_i}_{i=1}^n, if Σ_{i=1}^n c_i (u^⊤ x_i)^k = 0 holds for every unit vector u with ‖u‖_2 = 1, then the k-th order tensor T_k = Σ_{i=1}^n c_i x_i^{⊗k} is a k-th order zero tensor.
5.2 Proof Sketch of Lemma 1
Proof sketch of Lemma 1(i): To prove a* = 0, we only need to check the first-order conditions of local minima. Since θ̃* = (θ*, a*, w*, b*) is a local minimum of L̃_n, the derivatives of L̃_n with respect to a and b vanish at θ̃*, i.e.,
∇_a L̃_n(θ̃)|_{θ̃ = θ̃*} = −Σ_{i=1}^n ℓ′(−y_i f(x_i; θ*) − y_i a* e^{w*^⊤ x_i + b*}) y_i exp(w*^⊤ x_i + b*) + λa* = 0,
∇_b L̃_n(θ̃)|_{θ̃ = θ̃*} = −a* Σ_{i=1}^n ℓ′(−y_i f(x_i; θ*) − y_i a* e^{w*^⊤ x_i + b*}) y_i exp(w*^⊤ x_i + b*) = 0.
Multiplying the first equation by a* and subtracting the second, it is not difficult to see that a* satisfies λ(a*)² = 0 or, equivalently, a* = 0. We note that the main observation used here is that the derivative of the exponential neuron is the neuron itself. Therefore, the same proof holds for every activation function σ satisfying σ′(z) = cσ(z), ∀z ∈ R, for some constant c. In fact, with a small modification of the proof, the same argument works for all activation functions satisfying σ(z) = (c_1 z + c_0)σ′(z), ∀z ∈ R, for some constants c_0 and c_1. This further indicates that the same proof holds for monomial neurons, and thus the proof of Proposition 1 follows directly from the proof of Theorem 1.
Proof sketch of Lemma 1(ii): The main idea of the proof is to use higher-order information at the local minimum to derive Equation (8). Since θ̃* = (θ*, a*, w*, b*) is a local minimum of the empirical loss function L̃_n, there exists a bounded local region in which θ̃* achieves the minimum loss value, i.e., there exists δ ∈ (0, 1) such that L̃_n(θ̃* + ∆) ≥ L̃_n(θ̃*) for all ∆ with ‖∆‖_2 ≤ δ. Now, we use δ_a and δ_w to denote the perturbations of the parameters a and w, respectively, and consider the loss value at the point θ̃* + ∆ = (θ*, a* + δ_a, w* + δ_w, b*), where we set |δ_a| = e^{−1/ε} and δ_w = εu for an arbitrary unit vector u with ‖u‖_2 = 1. As ε goes to zero, the perturbation magnitude ‖∆‖_2 also goes to zero, so there exists ε_0 ∈ (0, 1) such that L̃_n(θ̃* + ∆) ≥ L̃_n(θ̃*) for all ε ∈ [0, ε_0). By the result a* = 0 shown in Lemma 1(i), the output of the model f̃ under parameters θ̃* + ∆ can be expressed as
f̃(x; θ̃* + ∆) = f(x; θ*) + δ_a exp(δ_w^⊤ x) exp(w*^⊤ x + b*).
For simplicity of notation, let g(x; θ̃*, δ_w) = exp(δ_w^⊤ x) exp(w*^⊤ x + b*). From the second-order Taylor expansion with Lagrangian remainder and the assumption that ℓ is twice differentiable, it follows that there exists a constant C(θ̃*, D), depending only on the local minimizer θ̃* and the dataset D, such that the following inequality holds for every sample in the dataset and every ε ∈ [0, ε_0):
ℓ(−y_i f̃(x_i; θ̃* + ∆)) ≤ ℓ(−y_i f(x_i; θ*)) + ℓ′(−y_i f(x_i; θ*))(−y_i) δ_a g(x_i; θ̃*, δ_w) + C(θ̃*, D) δ_a².
Summing this inequality over all samples in the dataset and recalling that L̃_n(θ̃* + ∆) ≥ L̃_n(θ̃*) holds for all ε ∈ [0, ε_0), we obtain
−sgn(δ_a) Σ_{i=1}^n ℓ′(−y_i f(x_i; θ*)) y_i exp(εu^⊤ x_i) exp(w*^⊤ x_i + b*) + [nC(θ̃*, D) + λ/2] exp(−1/ε) ≥ 0.
Finally, we complete the proof by induction. For the base case p = 0, we take the limit of both sides of the above inequality as ε → 0 and use the fact that δ_a can be either positive or negative. For the higher-order case, we first assume that Equation (8) holds for p = 0, ..., k, subtract these equations from the above inequality, and then take the limit of both sides as ε → 0 to show that Equation (8) holds for p = k + 1. Therefore, by induction, Equation (8) holds for every non-negative integer p.
5.3 Proof Sketch of Lemma 2
The proof of Lemma 2 follows directly from the results in reference (Zhang et al., 2012). It is easy to check that, for every sequence {c_i}_{i=1}^n and every non-negative integer k ≥ 0, the k-th order tensor T_k = Σ_{i=1}^n c_i x_i^{⊗k} is a symmetric tensor. From Theorem 1 in (Zhang et al., 2012), it directly follows that
max_{u_1,...,u_k : ‖u_1‖_2 = ... = ‖u_k‖_2 = 1} |T_k(u_1, ..., u_k)| = max_{u : ‖u‖_2 = 1} |T_k(u, ..., u)|.
Furthermore, by the assumption that T_k(u, ..., u) = Σ_{i=1}^n c_i (u^⊤ x_i)^k = 0 holds for all ‖u‖_2 = 1, we have
max_{u_1,...,u_k : ‖u_1‖_2 = ... = ‖u_k‖_2 = 1} |T_k(u_1, ..., u_k)| = 0,
and this is equivalent to T_k = 0_d^{⊗k}, where 0_d is the zero vector in the d-dimensional space.
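As a small numerical illustration of this equivalence (our addition; the dimensions, the order k = 2, and the null-space construction of the coefficients are assumptions), the sketch below picks coefficients c with Σ_i c_i x_i x_i^⊤ = 0 and checks that the tensor T_2 = Σ_i c_i x_i^{⊗2} is (numerically) zero while Σ_i c_i (u^⊤ x_i)^2 vanishes for every sampled unit vector u, exactly the two statements that Lemma 2 ties together.

import numpy as np

rng = np.random.default_rng(3)
d, k, n = 2, 2, 4
X = rng.normal(size=(n, d))

# The four matrices x_i x_i^T live in the 3-dimensional space of symmetric 2x2 matrices,
# so a nontrivial coefficient vector c with sum_i c_i x_i x_i^T = 0 always exists.
M = np.stack([np.outer(x, x).ravel() for x in X], axis=1)    # 4 x 4 matrix of rank <= 3
c = np.linalg.svd(M)[2][-1]                                   # a null-space direction

T2 = sum(ci * np.outer(xi, xi) for ci, xi in zip(c, X))       # T_2 = sum_i c_i x_i^{(x)2}
U = rng.normal(size=(1000, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)                 # random unit vectors
vals = (U @ X.T) ** k @ c                                     # sum_i c_i (u^T x_i)^k per u
print(np.abs(T2).max(), np.abs(vals).max())                   # both ~ 0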
5.4 Proof Sketch of Theorem 1
For every dataset D satisfying Assumption 2, by the Lagrangian interpolating polynomial, there always exists a polynomial P(x) = Σ_j c_j π_j(x) defined on R^d that correctly classifies all samples in the dataset with margin at least one, i.e., y_i P(x_i) ≥ 1, ∀i ∈ [n], where π_j denotes the j-th monomial of the polynomial P(x). Therefore, from Lemmas 1 and 2, it follows that
Σ_{i=1}^n ℓ′(−y_i f(x_i; θ*)) e^{w*^⊤ x_i + b*} y_i P(x_i) = Σ_j c_j Σ_{i=1}^n ℓ′(−y_i f(x_i; θ*)) y_i e^{w*^⊤ x_i + b*} π_j(x_i) = 0.
Since y_i P(x_i) ≥ 1 and e^{w*^⊤ x_i + b*} > 0 hold for all i ∈ [n] and the loss function ℓ is non-decreasing, i.e., ℓ′(z) ≥ 0 for all z ∈ R, it follows that ℓ′(−y_i f(x_i; θ*)) = 0 for all i ∈ [n]. In addition, since every critical point of the loss function ℓ is a global minimum, z_i = −y_i f(x_i; θ*) achieves the global minimum of ℓ, which further indicates that θ* is a global minimum of the empirical loss L_n(θ). Furthermore, since the exponential neuron is inactive at every local minimum, a* = 0, the set of parameters θ̃* is a global minimum of the loss function L̃_n(θ̃). Finally, since every critical point of the loss function ℓ(z) satisfies z < 0, ℓ′(−y_i f(x_i; θ*)) = 0 implies −y_i f(x_i; θ*) < 0 for every sample, i.e., y_i f(x_i; θ*) > 0 or, equivalently, y_i = sgn(f(x_i; θ*)). Therefore, the set of parameters θ* also minimizes the training error. In summary, θ̃* = (θ*, a*, w*, b*) minimizes the loss function L̃_n(θ̃), and θ* simultaneously minimizes the empirical loss function L_n(θ) and the training error R_n(θ; f).
6 Conclusions and Discussions
One of the difficulties in analyzing neural networks is the non-convexity of the loss functions which allows the existence of many spurious minima with large loss values. In this paper, we prove that for any neural network, by adding a special neuron and an associated regularizer, the new loss function has no spurious local minimum. In addition, we prove that, at every local minimum of this new loss function, the exponential neuron is inactive and this means that the augmented neuron and regularizer improve the landscape of the loss surface without affecting the representing power of the original neural network. We also extend the main result in a few ways. First, while adding a special neuron makes the network different from a classical neural network architecture, the same result also holds for a standard fully connected network with one special neuron added to each layer. Second, the same result holds if we change the exponential neuron to a polynomial neuron with a degree dependent on the data. Third, the same result holds even if one feature vector corresponds to both labels. This paper is an effort in designing neural networks that are “good”. Here “good” can mean various things such as nice landscape, stronger representation power or better generalization, and in this paper we focus on the landscape –in particular, the very specific property “every local minimum is a global minimum”. While our results enhance the understanding of the landscape, the practical implications are not straightforward to see since we did not consider other aspects such as algorithms and generalization. It is an interesting direction to improve the landscape results by considering other aspects, such as studying when a specific algorithm will converge to local minima and thus global minima.
7 Acknowledgment
Research is supported by the following grants: USDA/NSF CPS grant AG 2018-67007-2837, NSF NeTS 1718203, NSF CPS ECCS 1739189, DTRA grant HDTRA1-15-1-0003, NSF CCF 1755847 and a start-up grant from Dept. of ISE, University of Illinois Urbana-Champaign. | 1. What is the main contribution of the paper in the context of neural networks and local minima?
2. What are the strengths and weaknesses of the paper's theoretical analysis, particularly regarding Assumptions 1 and 2?
3. Why do the authors add an extra neuron in their approach, and what is its significance?
4. How does the reviewer interpret the results of Theorem 1 and Corollary 1 in relation to the added neuron?
5. Are there any concerns or limitations regarding the applicability of the paper's findings to real-world scenarios, considering non-linear activation functions like RELU? | Review | Review
This paper considers neural networks and claim that adding one neuron results in making all local minima global for binary classification problem. I might be misinterpreting the theoretical statements of the paper, but I don't quite get why adding the neuron is useful. Assumption 1 readily provides a huge information on the loss function (e.g., every critical point is a global minima) and Assumption 2 implies the the neural net can solve the problem to zero error, both of which (in my opinion) are really strong assumptions. Furthermore, Theorem 1 claims that \tilde{\theta} minimizes \tilde{L} and \delta minimizes L, so I don't quite get why do authors add the extra neuron. They discuss this issue after Corollary 1, but it is not satisfactory. If there exists a local minima that is not global (which is the case in RELU nets as the authors state), then the statement of Theorem 1 doesn't hold, which suggests Assumption 1 is not valid for those scenarios. |
NIPS | Title
Adding One Neuron Can Eliminate All Bad Local Minima
Abstract
One of the main difficulties in analyzing neural networks is the non-convexity of the loss function which may have many bad local minima. In this paper, we study the landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.
1 Introduction
Deep neural networks have recently achieved huge success in various machine learning tasks (see, Krizhevsky et al. 2012; Goodfellow et al. 2013; Wan et al. 2013, for example). However, a theoretical understanding of neural networks is largely lacking. One of the difficulties in analyzing neural networks is the non-convexity of the loss function which allows the existence of many local minima with large losses. This was long considered a bottleneck of neural networks, and one of the reasons why convex formulations such as support vector machine (Cortes & Vapnik, 1995) were preferred previously. Given the recent empirical success of the deep neural networks, an interesting question is whether the non-convexity of the neural network is really an issue. It has been widely conjectured that all local minima of the empirical loss lead to similar training performance (LeCun et al., 2015; Choromanska et al., 2015). For example, prior works empirically showed that neural networks with identical architectures but different initialization points can converge to local minima with similar classification performance (Krizhevsky et al., 2012; He et al., 2016; Huang & Liu, 2017). On the theoretical side, there have been many recent attempts to analyze the landscape of the neural network loss functions. A few works have studied deep networks, but they either require linear activation functions (Baldi & Hornik, 1989; Kawaguchi, 2016; Freeman & Bruna, 2016; Hardt & Ma, 2017; Yun et al., 2017), or require assumptions such as independence of ReLU activations (Choromanska et al., 2015) and significant overparametrization (Nguyen & Hein, 2017a,b; Livni et al., 2014). There is a large body of works that study single-hidden-layer neural networks and provide various conditions under which a local search algorithm can find a global minimum (Du & Lee, 2018; Ge et al., 2018; Andoni et al., 2014; Sedghi & Anandkumar, 2014; Janzamin et al., 2015; Haeffele & Vidal, 2015; Gautier et al., 2016; Brutzkus & Globerson,
∗Correspondence to R. Srikant, [email protected] and Ruoyu Sun, [email protected]
2017; Soltanolkotabi, 2017; Soudry & Hoffer, 2017; Goel & Klivans, 2017; Du et al., 2017; Zhong et al., 2017; Li & Yuan, 2017; Liang et al., 2018; Mei et al., 2018). It can be roughly divided into two categories: non-global landscape analysis and global landscape analysis. For the first category, the result do not apply to all local minima. One typical conclusion is about the local geometry, i.e., in a small neighborhood of the global minima no bad local minima exist (Zhong et al., 2017; Du et al., 2017; Li & Yuan, 2017). Another typical conclusion is that a subset of local minima are global minima (Haeffele et al., 2014; Haeffele & Vidal, 2015; Soudry & Carmon, 2016; Nguyen & Hein, 2017a,b). Shamir (2018) has shown that a subset of second-order local minima can perform nearly as well as linear predictors. The presence of various conclusions reflects the difficulty of the problem: while analyzing the global landscape seems hard, we may step back and analyze the local landscape or a “majority” of the landscape. For the second category of global landscape analysis, the typical result is that every local minimum is a global minimum. However, even for single-layer networks, strong assumptions such as over-parameterization, very special neuron activation functions, fixed second layer parameters and/or Gaussian data distribution are often needed in the existing works. The presence of various strong assumptions also reflects the difficulty of the problem: even for the single-hidden-layer nonlinear neural network, it seems hard to analyze the landscape, so it is reasonable to make various assumptions. One exception is the recent work Liang et al. (2018) which adopts a different path: instead of simply making several assumptions to obtain positive results, it carefully studies the effect of various conditions on the landscape of neural networks for binary classification. It gives both positive and negative results on the existence of bad local minimum under different conditions. In particular, it studies many common types of neuron activation functions and shows that for a class of neurons there is no bad local minimum, and for other neurons there is. This clearly shows that the choice of neurons can affect the landscape. Then a natural question is: while Liang et al. (2018) considers some special types of data and a broad class of neurons, can we obtain results for more general data when limiting to a smaller class of neurons?
1.1 Our Contributions
Given this context, our main result is quite surprising: for a neural network with a special type of neurons, every local minimum is a global minimum of the loss function. Our result requires no assumption on the network size, the specific type of the original neural network, etc., yet our result applies to every local minimum. Besides the requirement on the neuron activation type, the major trick is an associated regularizer. Our major results and their implications are as follows:
• We focus on the binary classification problem with a smooth hinge loss function. We prove the following result: for any neural network, by adding a special neuron (e.g., exponential neuron) to the network and adding a quadratic regularizer of this neuron, the new loss function has no bad local minimum. In addition, every local minimum achieves the minimum misclassification error.
• In the main result, the augmented neuron can be viewed as a skip connection from the input to the output layer. However, this skip connection is not critical, as the same result also holds if we add one special neuron to each layer of a fully-connected feedforward neural network.
• To our knowledge, this is the first result that no spurious local minimum exists for a wide class of deep nonlinear networks. Our result indicates that the class of “good neural networks” (neural networks such that there is an associated loss function with no spurious local minima) contains any network with one special neuron, thus this class is rather “dense” in the class of all neural networks: the distance between any neural network and a good neural network is just a neuron away.
The outline of the paper is as follows. In Section 2, we present several notations. In Section 3, we present the main result and several extensions on the main results are presented in Section 4. We present the proof idea of the main result in Section 5 and conclude this paper in Section 6. All proofs are presented in Appendix.
2 Preliminaries
Feed-forward networks. Given an input vector of dimension d, we consider a neural network with L layers of neurons for binary classification. We denote by Ml the number of neurons in the l-th layer (note that M0 = d). We denote the neural activation function by σ. LetWl ∈ RMl−1×Ml denote the weight matrix connecting the (l − 1)-th and l-th layer and bl denote the bias vector for neurons in
the l-th layer. Let W_{L+1} ∈ R^{M_L} and b_{L+1} ∈ R denote the weight vector and the bias scalar in the output layer, respectively. Therefore, the output of the network f : R^d → R can be expressed by
f(x; θ) = W_{L+1}^⊤ σ(W_L σ(··· σ(W_1^⊤ x + b_1) ··· + b_{L−1}) + b_L) + b_{L+1}. (1)
Loss and error. We use D = {(x_i, y_i)}_{i=1}^n to denote a dataset containing n samples, where x_i ∈ R^d and y_i ∈ {−1, 1} denote the feature vector and the label of the i-th sample, respectively. Given a neural network f(x; θ) parameterized by θ and a loss function ℓ : R → R, in binary classification tasks, we define the empirical loss L_n(θ) as the average loss of the network f on a sample in the dataset and define the training error (also called the misclassification error) R_n(θ; f) as the misclassification rate of the network f on the dataset D, i.e.,
L_n(θ) = Σ_{i=1}^n ℓ(−y_i f(x_i; θ))  and  R_n(θ; f) = (1/n) Σ_{i=1}^n I{y_i ≠ sgn(f(x_i; θ))}. (2)
where I is the indicator function.
Tensor products. We use a ⊗ b to denote the tensor product of vectors a and b and use a^{⊗k} to denote the tensor product a ⊗ ... ⊗ a where a appears k times. For an N-th order tensor T ∈ R^{d_1×d_2×...×d_N} and N vectors u_1 ∈ R^{d_1}, u_2 ∈ R^{d_2}, ..., u_N ∈ R^{d_N}, we define
T ⊗ u_1 ⊗ ... ⊗ u_N = Σ_{i_1∈[d_1],...,i_N∈[d_N]} T(i_1, ..., i_N) u_1(i_1) ... u_N(i_N),
where we use T(i_1, ..., i_N) to denote the (i_1, ..., i_N)-th component of the tensor T, u_k(i_k) to denote the i_k-th component of the vector u_k, k = 1, ..., N, and [d_k] to denote the set {1, ..., d_k}.
3 Main Result
In this section, we first present several important conditions on the loss function and the dataset in order to derive the main results. After that, we will present the main results.
3.1 Assumptions
In this subsection, we introduce two assumptions on the loss function and the dataset.
Assumption 1 (Loss function) Assume that the loss function ` : R → R is monotonically nondecreasing and twice differentiable, i.e., ` ∈ C2. Assume that every critical point of the loss function `(z) is also a global minimum and every global minimum z satisfies z < 0.
A simple example of the loss function satisfying Assumption 1 is the polynomial hinge loss, i.e., `(z) = [max{z+1, 0}]p, p ≥ 3. It is always zero for z ≤ −1 and behaves like a polynomial function in the region z > −1. Note that the condition that every global minimum of the loss function `(z) is negative is not needed to prove the result that every local minimum of the empirical loss is globally minimal, but is necessary to prove that the global minimizer of the empirical loss is also the minimizer of the misclassification rate.
Assumption 2 (Realizability) Assume that there exists a set of parameters θ such that the neural network f(·;θ) is able to correctly classify all samples in the dataset D.
By Assumption 2, we assume that the dataset is realizable by the neural architecture f . We note that this assumption is consistent with previous empirical observations (Zhang et al., 2016; Krizhevsky et al., 2012; He et al., 2016) showing that at the end of the training process, neural networks usually achieve zero misclassification rates on the training sets. However, as we will show later, if the loss function ` is convex, then we can prove the main result even without Assumption 2.
3.2 Main Result
In this subsection, we first introduce several notations and next present the main result of the paper. Given a neural architecture f(·;θ) defined on a d-dimensional Euclidean space and parameterized by a set of parameters θ, we define a new architecture f̃ by adding the output of an exponential neuron to the output of the network f , i.e.,
f̃(x; θ̃) = f(x; θ) + a exp(w^⊤ x + b), (3)
where the vector θ̃ = (θ, a,w, b) denote the parametrization of the network f̃ . For this designed model, we define the empirical loss function as follows,
L̃_n(θ̃) = Σ_{i=1}^n ℓ(−y_i f̃(x_i; θ̃)) + (λ/2) a², (4)
where the scalar λ is a positive real number, i.e., λ > 0. Different from the empirical loss function Ln, the loss L̃n has an additional regularizer on the parameter a, since we aim to eliminate the impact of the exponential neuron on the output of the network f̃ at every local minimum of L̃n. As we will show later, the exponential neuron is inactive at every local minimum of the empirical loss L̃n. Now we present the following theorem to show that every local minimum of the loss function L̃n is also a global minimum. Remark: Instead of viewing the exponential term in Equation (3) as a neuron, one can also equivalently think of modifying the loss function to be
L̃_n(θ̃) = Σ_{i=1}^n ℓ(−y_i (f(x_i; θ) + a exp(w^⊤ x_i + b))) + (λ/2) a².
Then, one can interpret Equation (3) and (4) as maintaining the original neural architecture and slightly modifying the loss function.
Theorem 1 Suppose that Assumption 1 and 2 hold. Then both of the following statements are true:
(i) The empirical loss function L̃n(θ̃) has at least one local minimum.
(ii) Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f).
Remarks: (i) Theorem 1 shows that every local minimum θ̃∗ of the empirical loss L̃n is also a global minimum and shows that θ∗ achieves the minimum training error and the minimum loss value on the original loss function Ln at the same time. (ii) Since we do not require the explicit form of the neural architecture f , Theorem 1 applies to the neural architectures widely used in practice such as convolutional neural network (Krizhevsky et al., 2012), deep residual networks (He et al., 2016), etc. This further indicates that the result holds for any real neural activation functions such as rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), etc. (iii) As we will show in the following corollary, at every local minimum θ̃∗, the exponential neuron is inactive. Therefore, at every local minimum θ̃∗ = (θ∗, a∗,w∗, b∗), the neural network f̃ with an augmented exponential neuron is equivalent to the original neural network f .
Corollary 1 Under the conditions of Theorem 1, if θ̃* = (θ*, a*, w*, b*) is a local minimum of the empirical loss function L̃_n(θ̃), then the two neural networks f(·; θ*) and f̃(·; θ̃*) are equivalent, i.e., f(x; θ*) = f̃(x; θ̃*), ∀x ∈ R^d. Corollary 1 shows that at every local minimum, the exponential neuron does not contribute to the output of the neural network f̃. However, this does not imply that the exponential neuron is unnecessary, since several previous results (Safran & Shamir, 2018; Liang et al., 2018) have already shown that the loss surface of pure ReLU neural networks is guaranteed to have bad local minima. Furthermore, to prove the main result under any dataset, the regularizer is also necessary, since Liang et al. (2018) has already shown that even with an augmented exponential neuron, the empirical loss without the regularizer can still have bad local minima under some datasets.
4 Extensions
4.1 Eliminating the Skip Connection
As noted in the previous section, the exponential term in Equation (3) can be viewed as a skip connection or a modification to the loss function. Our analysis also works under other architectures as well. When the exponential term is viewed as a skip connection, the network architecture is as shown in Fig. 1(a). This architecture is different from the canonical feedforward neural architectures
as there is a direct path from the input layer to the output layer. In this subsection, we will show that the main result still holds if the model f̃ is defined as a feedforward neural network shown in Fig. 1(b), where each layer of the network f is augmented by an additional exponential neuron. This is a standard fully connected neural network except for one special neuron at each layer.
Notations. Given a fully-connected feedforward neural network f(·;θ) defined by Equation (1), we define a new fully connected feedforward neural network f̃ by adding an additional exponential neuron to each layer of the network f . We use the vector θ̃ = (θ,θexp) to denote the parameterization of the network f̃ , where θexp denotes the vector consisting of all augmented weights and biases. Let W̃l ∈ R(Ml−1+1)×(Ml+1) and b̃l ∈ RMl+1 denote the weight matrix and the bias vector in the l-th layer of the network f̃ , respectively. Let W̃L+1 ∈ R(ML+1) and b̃L+1 ∈ R denote the weight vector and the bias scalar in the output layer of the network f̃ , respectively. Without the loss of generality, we assume that the (Ml +1)-th neuron in the l-th layer is the augmented exponential neuron. Thus, the output of the network f̃ is expressed by
f̃(x; θ̃) = W̃_{L+1}^⊤ σ̃_{L+1}(W̃_L σ̃_L(··· σ̃_1(W̃_1^⊤ x + b̃_1) ··· + b̃_{L−1}) + b̃_L) + b̃_{L+1}, (5)
where σ̃l : RMl−1+1 → RMl+1 is a vector-valued activation function with the first Ml components being the activation functions σ in the network f and with the last component being the exponential function, i.e., σ̃l(z) = (σ(z), ..., σ(z), exp(z)). Furthermore, we use the w̃l to denote the vector in the (Ml−1 + 1)-th row of the matrix W̃l. In other words, the components of the vector w̃l are the weights on the edges connecting the exponential neuron in the (l − 1)-th layer and the neurons in the l-th layer. For this feedforward network, we define an empirical loss function as
L̃_n(θ̃) = Σ_{i=1}^n ℓ(−y_i f̃(x_i; θ̃)) + (λ/2) Σ_{l=2}^{L+1} ‖w̃_l‖_{2L}^{2L}, (6)
where ‖a‖p denotes the p-norm of a vector a and λ is a positive real number, i.e., λ > 0. Similar to the empirical loss discussed in the previous section, we add a regularizer to eliminate the impacts of all exponential neurons on the output of the network. Similarly, we can prove that at every local minimum of L̃n, all exponential neurons are inactive. Now we present the following theorem to show that if the set of parameters θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum and θ∗ is a global minimum of both minimization problems minθ Ln(θ) and minθ Rn(θ; f). This means that the neural network f(·;θ∗) simultaneously achieves the globally minimal loss value and misclassification rate on the dataset D. Theorem 2 Suppose that Assumption 1 and 2 hold. Suppose that the activation function σ is differentiable. Assume that θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, θ∗ achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Ln(θ) and θ∗ ∈ argminθ Rn(θ; f). Remarks: (i) This theorem is not a direct corollary of the result in the previous section, but the proof ideas are similar. (ii) Due to the assumption on the differentiability of the activation function σ, Theorem 2 does not apply to the neural networks consisting of non-smooth neurons such as ReLUs, Leaky ReLUs, etc. (iii) Similar to Corollary 1, we will present the following corollary to show that at every local minimum θ̃∗ = (θ∗,θ∗exp), the neural network f̃ with augmented exponential neurons is equivalent to the original neural network f .
Corollary 2 Under the conditions in Theorem 2, if θ̃∗ = (θ∗,θ∗exp) is a local minimum of the empirical loss function L̃n(θ̃), then two neural networks f(·;θ∗) and f̃(·; θ̃∗) are equivalent, i.e., f(x;θ∗) = f̃(x; θ̃∗),∀x ∈ Rd.
Corollary 2 further shows that even if we add an exponential neuron to each layer of the original network f , at every local minimum of the empirical loss, all exponential neurons are inactive.
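To make the construction concrete, the following is a minimal NumPy sketch of the augmented network of Equation (5) and the regularised loss of Equation (6). The tanh activation and the cubic hinge-style surrogate loss are illustrative assumptions rather than the paper's exact choices; only the placement of one exponential neuron per layer and the regulariser on the weights w̃_l leaving those neurons follow the text.

```python
# Minimal sketch (illustrative architecture and loss, not the paper's exact setup).
import numpy as np

def augmented_forward(x, Ws, bs):
    """Forward pass of f_tilde: every hidden layer carries one extra exponential neuron."""
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        z = W.T @ h + b
        h = np.concatenate([np.tanh(z[:-1]), np.exp(z[-1:])])  # last unit is exponential
    return float(Ws[-1] @ h + bs[-1])                          # scalar output

def ell(z):                                                    # non-decreasing surrogate loss
    return np.maximum(0.0, 2.0 + z) ** 3

def L_tilde(X, y, Ws, bs, lam=1e-2):
    """Eq. (6): data term plus regulariser on the rows w_tilde_l that carry the
    exponential neurons' outputs into the next layer (the last row of each matrix)."""
    L = len(Ws) - 1                                            # number of hidden layers
    data = sum(ell(-yi * augmented_forward(xi, Ws, bs)) for xi, yi in zip(X, y))
    reg = 0.5 * lam * sum(np.sum(np.abs(W[-1]) ** (2 * L)) for W in Ws[1:])
    return data + reg
```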
4.2 Monomial Neurons
In this subsection, we will show that even if the exponential neuron is replaced by a monomial neuron, the main result still holds under additional assumptions. Similar to the case where exponential neurons are used, given a neural network f(x;θ), we define a new neural network f̃ by adding the output of a monomial neuron of degree p to the output of the original model f , i.e.,
f̃(x; θ̃) = f(x; θ) + a (w^⊤ x + b)^p.        (7)
In addition, the empirical loss function L̃_n is exactly the same as the loss function defined by Equation (4). Next, we present the following proposition, which shows that if all samples in the dataset D can be correctly classified by a polynomial of degree t and the degree of the augmented monomial is not smaller than t (i.e., p ≥ t), then every local minimum of the empirical loss function L̃_n(θ̃) is also a global minimum. We note that the degree of a monomial is the sum of the powers of all variables in this monomial, and the degree of a polynomial is the maximum degree of its monomials.
Proposition 1 Suppose that Assumptions 1 and 2 hold. Assume that all samples in the dataset D can be correctly classified by a polynomial of degree t and that p ≥ t. If θ̃* = (θ*, a*, w*, b*) is a local minimum of the empirical loss function L̃_n(θ̃), then θ̃* is a global minimum of L̃_n(θ̃). Furthermore, θ* is a global minimizer of both problems min_θ L_n(θ) and min_θ R_n(θ; f).
Remarks: (i) We note that, similar to Theorem 1, Proposition 1 applies to all neural architectures and all neural activation functions defined on R, as we do not require the explicit form of the neural network f. (ii) It follows from the Lagrange interpolating polynomial and Assumption 2 that for a dataset consisting of n distinct samples, there always exists a polynomial P of degree smaller than n such that P correctly classifies all points in the dataset. This indicates that Proposition 1 always holds if p ≥ n. (iii) Similar to Corollaries 1 and 2, we can show that at every local minimum θ̃* = (θ*, a*, w*, b*), the neural network f̃ with an augmented monomial neuron is equivalent to the original neural network f.
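As a quick illustration of Equation (7), the sketch below swaps the exponential neuron for a monomial of degree p. The linear base model f_base and the surrogate loss are placeholders, and the regulariser λ/2·a² mirrors the exponential-neuron case; nothing here is specific to the paper's architectures.

```python
# Sketch of the monomial augmentation of Eq. (7): f_tilde(x) = f(x) + a*(w^T x + b)^p.
import numpy as np

def f_base(x, theta):                     # placeholder base classifier (any model works)
    return float(theta @ x)

def f_tilde(x, theta, a, w, b, p):
    return f_base(x, theta) + a * (w @ x + b) ** p

def L_tilde(X, y, theta, a, w, b, p, lam=1e-2,
            loss=lambda z: np.maximum(0.0, 2.0 + z) ** 3):
    data = sum(loss(-yi * f_tilde(xi, theta, a, w, b, p)) for xi, yi in zip(X, y))
    return data + 0.5 * lam * a ** 2      # same quadratic regulariser on a as before
```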
4.3 Allowing Random Labels
In the previous subsections, we assumed realizability of the dataset by the neural network, which implies that the label of a given feature vector is unique. This does not cover the case where the dataset contains two samples with the same feature vector but different labels (for example, the same image can be labeled differently by two different people). Clearly, in this case, no model can correctly classify all samples in the dataset. Another simple example of this case is a mixture of two Gaussians, where each data sample is drawn from one of the two Gaussian distributions with a certain probability. In this subsection, we show that in this broader setting, in which one feature vector may correspond to two different labels, the same result still holds under a slightly stronger assumption on the convexity of the loss ℓ. The formal statement is given in the following proposition.
Proposition 2 Suppose that Assumption 1 holds and the loss function ℓ is convex. If θ̃* = (θ*, a*, w*, b*) is a local minimum of the empirical loss function L̃_n(θ̃), then θ̃* is a global minimum of L̃_n(θ̃). Furthermore, θ* achieves the minimum loss value and the minimum misclassification rate on the dataset D, i.e., θ* ∈ argmin_θ L_n(θ) and θ* ∈ argmin_θ R_n(θ; f). Remark: The differences between Proposition 2 and Theorem 1 can be understood as follows. First, as stated previously, Proposition 2 allows a feature vector to have two different labels, but Theorem 1 does not. Second, the minimum misclassification rate under the conditions of Theorem 1 must be zero, while in Proposition 2 the minimum misclassification rate can be nonzero.
4.4 High-order Stationary Points
In this subsection, we characterize the high-order stationary points of the empirical loss L̃n shown in Section 3.2. We first introduce the definition of the high-order stationary point and next show that every stationary point of the loss L̃n with a sufficiently high order is also a global minimum.
Definition 1 (k-th order stationary point) A critical point θ_0 of a function L(θ) is a k-th order stationary point if there exist positive constants C, ε > 0 such that for every θ with ‖θ − θ_0‖_2 ≤ ε, L(θ) ≥ L(θ_0) − C‖θ − θ_0‖_2^{k+1}. Next, we show that if a polynomial of degree p can correctly classify all points in the dataset, then every stationary point of order at least 2p is a global minimum, and the set of parameters corresponding to this stationary point achieves the minimum training error.
Proposition 3 Suppose that Assumptions 1 and 2 hold. Assume that all samples in the dataset can be correctly classified by a polynomial of degree p. Assume that θ̃∗ = (θ∗, a∗,w∗, b∗) is a k-th order stationary point of the empirical loss function L̃n(θ̃) and k ≥ 2p, then θ̃∗ is a global minimum of L̃n(θ̃). Furthermore, the neural network f(·;θ∗) achieves the minimum misclassification rate on the dataset D, i.e., θ∗ ∈ argminθ Rn(θ; f). One implication of Proposition 3 is that if a dataset is linearly separable, then every second order stationary point of the empirical loss function is a global minimum and, at this stationary point, the neural network achieves zero training error. When the dataset is not linearly separable, our result only covers fourth or higher order stationary point of the empirical loss.
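The definition above can be probed numerically. The sketch below is a randomised sanity check of the k-th order stationarity inequality; it can only refute the property on the sampled perturbations, not certify it, and the constants C, ε and the trial budget are assumptions of the check rather than quantities from the paper.

```python
# Randomised check of Definition 1: L(theta) >= L(theta0) - C*||theta - theta0||^(k+1)
# for all theta in a small ball around theta0.
import numpy as np

def looks_kth_order_stationary(L, theta0, k, C=1.0, eps=1e-2, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    L0 = L(theta0)
    for _ in range(trials):
        d = rng.normal(size=theta0.shape)
        d *= rng.uniform(0.0, eps) / np.linalg.norm(d)          # random point in the ball
        if L(theta0 + d) < L0 - C * np.linalg.norm(d) ** (k + 1):
            return False                                        # violating perturbation found
    return True
```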
5 Proof Idea
In this section, we provide overviews of the proof of Theorem 1.
5.1 Important Lemmas
In this subsection, we present two important lemmas on which the proof of Theorem 1 is based.
Lemma 1 Under Assumption 1 and λ > 0, if θ̃* = (θ*, a*, w*, b*) is a local minimum of L̃_n, then (i) a* = 0, and (ii) for any integer p ≥ 0, the following equation holds for every unit vector u : ‖u‖_2 = 1:
∑_{i=1}^{n} ℓ′(−y_i f(x_i; θ*)) y_i e^{w^{*⊤} x_i + b*} (u^⊤ x_i)^p = 0.        (8)
Lemma 2 For any integer k ≥ 0 and any sequence {c_i}_{i=1}^n, if ∑_{i=1}^{n} c_i (u^⊤ x_i)^k = 0 holds for every unit vector u : ‖u‖_2 = 1, then the k-th order tensor T_k = ∑_{i=1}^{n} c_i x_i^{⊗k} is a k-th order zero tensor.
5.2 Proof Sketch of Lemma 1
Proof sketch of Lemma 1(i): To prove a∗ = 0, we only need to check the first order conditions of local minima. By assumption that θ̃∗ = (θ∗, a∗,w∗, b∗) is a local minimum of L̃n, then the derivative of L̃n with respect to a and b at the point θ̃∗ are all zeros, i.e.,
∇_a L̃_n(θ̃)|_{θ̃=θ̃*} = −∑_{i=1}^{n} ℓ′(−y_i f(x_i; θ*) − y_i a* e^{w^{*⊤} x_i + b*}) y_i e^{w^{*⊤} x_i + b*} + λa* = 0,
∇_b L̃_n(θ̃)|_{θ̃=θ̃*} = −a* ∑_{i=1}^{n} ℓ′(−y_i f(x_i; θ*) − y_i a* e^{w^{*⊤} x_i + b*}) y_i e^{w^{*⊤} x_i + b*} = 0.
From the above equations it follows that a* satisfies λa*² = 0 or, equivalently, a* = 0: writing S = ∑_{i=1}^{n} ℓ′(−y_i f(x_i; θ*) − y_i a* e^{w^{*⊤} x_i + b*}) y_i e^{w^{*⊤} x_i + b*}, the second equation gives a* S = 0 while the first gives S = λa*, so λa*² = 0. We note that the main observation used here is that the derivative of the exponential neuron is the neuron itself. Therefore, the same proof holds for every activation function σ satisfying σ′(z) = cσ(z), ∀z ∈ R, for some constant c. In fact, with a small modification, the same proof works for all activation functions satisfying σ(z) = (c_1 z + c_0)σ′(z), ∀z ∈ R, for some constants c_0 and c_1. This further indicates that the same proof holds for the monomial neurons, and thus the proof of Proposition 1 follows directly from the proof of Theorem 1.
Proof sketch of Lemma 1(ii): The main idea of the proof is to use the higher-order information of the local minimum to derive Equation (8). Since θ̃* = (θ*, a*, w*, b*) is a local minimum of the empirical loss function L̃_n, there exists a bounded local region such
that the parameters θ̃* achieve the minimum loss value in this region, i.e., ∃δ ∈ (0, 1) such that L̃_n(θ̃* + ∆) ≥ L̃_n(θ̃*) for all ∆ : ‖∆‖_2 ≤ δ. Now, we use δ_a, δ_w to denote the perturbations on the parameters a and w, respectively. Next, we consider the loss value at the point θ̃* + ∆ = (θ*, a* + δ_a, w* + δ_w, b*), where we set |δ_a| = e^{−1/ε} and δ_w = εu for an arbitrary unit vector u : ‖u‖_2 = 1. Therefore, as ε goes to zero, the perturbation magnitude ‖∆‖_2 also goes to zero, and this indicates that there exists an ε_0 ∈ (0, 1) such that L̃_n(θ̃* + ∆) ≥ L̃_n(θ̃*) for all ε ∈ [0, ε_0). By the result a* = 0 shown in Lemma 1(i), the output of the model f̃ under parameters θ̃* + ∆ can be expressed by
f̃(x; θ̃* + ∆) = f(x; θ*) + δ_a exp(δ_w^⊤ x) exp(w^{*⊤} x + b*).
For simplicity of notation, let g(x; θ̃*, δ_w) = exp(δ_w^⊤ x) exp(w^{*⊤} x + b*). From the second-order Taylor expansion with Lagrange remainder and the assumption that ℓ is twice differentiable, it follows that there exists a constant C(θ̃*, D), depending only on the local minimizer θ̃* and the dataset D, such that the following inequality holds for every sample in the dataset and every ε ∈ [0, ε_0):
ℓ(−y_i f̃(x_i; θ̃* + ∆)) ≤ ℓ(−y_i f(x_i; θ*)) + ℓ′(−y_i f(x_i; θ*))(−y_i) δ_a g(x_i; θ̃*, δ_w) + C(θ̃*, D) δ_a².
Summing the above inequality over all samples in the dataset and recalling that L̃_n(θ̃* + ∆) ≥ L̃_n(θ̃*) holds for all ε ∈ [0, ε_0), we obtain
−sgn(δ_a) ∑_{i=1}^{n} ℓ′(−y_i f(x_i; θ*)) y_i exp(εu^⊤ x_i) exp(w^{*⊤} x_i + b*) + [nC(θ̃*, D) + λ/2] exp(−1/ε) ≥ 0.
Finally, we complete the proof by induction. Specifically, for the base hypothesis where p = 0, we can take the limit on the both sides of the above inequality as ε→ 0, using the property that δa can be either positive or negative and thus establish the base case where p = 0. For the higher order case, we can first assume that Equation (8) holds for p = 0, ..., k and then subtract these equations from the above inequality. After taking the limit on the both sides of the inequality as ε→ 0, we can prove that Equation (8) holds for p = k + 1. Therefore, by induction, we can prove that Equation (8) holds for any non-negative integer p.
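The first-order conditions used in the proof of Lemma 1(i) are easy to validate numerically. The sketch below compares the analytic ∂L̃_n/∂a and ∂L̃_n/∂b from the proof against central finite differences on a toy dataset with the base network's outputs frozen; the particular loss, data, and parameter values are illustrative assumptions.

```python
# Finite-difference check of the two first-order conditions from Lemma 1(i) for the
# skip-connection model f_tilde(x) = f(x) + a*exp(w^T x + b) with regulariser lam/2*a^2.
import numpy as np

def ell(z):   return np.maximum(0.0, 2.0 + z) ** 3
def dell(z):  return 3.0 * np.maximum(0.0, 2.0 + z) ** 2

def grads_ab(X, y, fvals, a, w, b, lam):
    """Analytic dL/da and dL/db, exactly as written in the proof sketch."""
    e = np.exp(X @ w + b)
    S = np.sum(dell(-y * fvals - y * a * e) * y * e)
    return -S + lam * a, -a * S

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3)); y = rng.choice([-1.0, 1.0], size=8)
fvals = rng.normal(size=8)                      # frozen outputs of the base network f
a, b, lam = 0.3, 0.1, 2.0
w = 0.3 * rng.normal(size=3)

def Lt(a_, b_):
    e = np.exp(X @ w + b_)
    return np.sum(ell(-y * fvals - y * a_ * e)) + 0.5 * lam * a_ ** 2

ga, gb = grads_ab(X, y, fvals, a, w, b, lam)
h = 1e-6
print(np.isclose(ga, (Lt(a + h, b) - Lt(a - h, b)) / (2 * h), rtol=1e-4))
print(np.isclose(gb, (Lt(a, b + h) - Lt(a, b - h)) / (2 * h), rtol=1e-4))
# Setting both gradients to zero gives S = lam*a and a*S = 0, hence lam*a^2 = 0 and a = 0.
```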
5.3 Proof Sketch of Lemma 2
The proof of Lemma 2 follows directly from the results in (Zhang et al., 2012). It is easy to check that, for every sequence {c_i}_{i=1}^n and every non-negative integer k ≥ 0, the k-th order tensor T_k = ∑_{i=1}^{n} c_i x_i^{⊗k} is a symmetric tensor. From Theorem 1 in (Zhang et al., 2012), it directly follows that
max_{u_1,...,u_k : ‖u_1‖_2 = ... = ‖u_k‖_2 = 1} |T_k(u_1, ..., u_k)| = max_{u : ‖u‖_2 = 1} |T_k(u, ..., u)|.
Furthermore, by the assumption that T_k(u, ..., u) = ∑_{i=1}^{n} c_i (u^⊤ x_i)^k = 0 holds for all ‖u‖_2 = 1, we have
max_{u_1,...,u_k : ‖u_1‖_2 = ... = ‖u_k‖_2 = 1} |T_k(u_1, ..., u_k)| = 0,
which is equivalent to T_k = 0_d^{⊗k}, where 0_d is the zero vector in the d-dimensional space.
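The identity from (Zhang et al., 2012) used above can be spot-checked by random search over unit vectors: the sketch below evaluates T_k = ∑_i c_i x_i^{⊗k} implicitly and compares the diagonal maximisation max_u |T_k(u, ..., u)| with the general one over distinct unit vectors. The dimensions, sample counts, and random-search budget are arbitrary choices for illustration.

```python
import numpy as np
rng = np.random.default_rng(1)

d, n, k = 3, 5, 3
X = rng.normal(size=(n, d)); c = rng.normal(size=n)

def T_diag(u):                       # T_k(u, ..., u) = sum_i c_i (u^T x_i)^k
    return np.sum(c * (X @ u) ** k)

def T_multi(us):                     # T_k(u_1, ..., u_k) = sum_i c_i prod_j (u_j^T x_i)
    return np.sum(c * np.prod(X @ np.stack(us, axis=1), axis=1))

unit = lambda v: v / np.linalg.norm(v)
best_diag  = max(abs(T_diag(unit(rng.normal(size=d)))) for _ in range(20000))
best_multi = max(abs(T_multi([unit(rng.normal(size=d)) for _ in range(k)]))
                 for _ in range(20000))
print(best_diag, best_multi)         # the two maxima agree up to random-search error
```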
5.4 Proof Sketch of Theorem 1
For every dataset D satisfying Assumption 2, by the Lagrange interpolating polynomial there always exists a polynomial P(x) = ∑_j c_j π_j(x) defined on R^d that correctly classifies all samples in the dataset with margin at least one, i.e., y_i P(x_i) ≥ 1, ∀i ∈ [n], where π_j denotes the j-th monomial in the polynomial P(x). Therefore, from Lemmas 1 and 2, it follows that
∑_{i=1}^{n} ℓ′(−y_i f(x_i; θ*)) e^{w^{*⊤} x_i + b*} y_i P(x_i) = ∑_j c_j ∑_{i=1}^{n} ℓ′(−y_i f(x_i; θ*)) y_i e^{w^{*⊤} x_i + b*} π_j(x_i) = 0.
Since y_i P(x_i) ≥ 1 and e^{w^{*⊤} x_i + b*} > 0 hold for all i ∈ [n] and the loss function ℓ is non-decreasing, i.e., ℓ′(z) ≥ 0, ∀z ∈ R, it follows that ℓ′(−y_i f(x_i; θ*)) = 0 for all i ∈ [n]. In addition, from the assumption that every critical point of the loss function ℓ is a global minimum, it follows that z_i = −y_i f(x_i; θ*) achieves the global minimum of the loss function ℓ, and this further indicates that θ* is a global minimum of the empirical loss L_n(θ). Furthermore, since at every local minimum the exponential neuron is inactive, a* = 0, the set of parameters θ̃* is a global minimum of the loss function L̃_n(θ̃). Finally, since every critical point of the loss function ℓ(z) satisfies z < 0, for every sample ℓ′(−y_i f(x_i; θ*)) = 0 indicates that y_i f(x_i; θ*) > 0, or, equivalently, y_i = sgn(f(x_i; θ*)). Therefore, the set of parameters θ* also minimizes the training error. In summary, the set of parameters θ̃* = (θ*, a*, w*, b*) minimizes the loss function L̃_n(θ̃), and the set of parameters θ* simultaneously minimizes the empirical loss function L_n(θ) and the training error R_n(θ; f).
6 Conclusions and Discussions
One of the difficulties in analyzing neural networks is the non-convexity of the loss functions which allows the existence of many spurious minima with large loss values. In this paper, we prove that for any neural network, by adding a special neuron and an associated regularizer, the new loss function has no spurious local minimum. In addition, we prove that, at every local minimum of this new loss function, the exponential neuron is inactive and this means that the augmented neuron and regularizer improve the landscape of the loss surface without affecting the representing power of the original neural network. We also extend the main result in a few ways. First, while adding a special neuron makes the network different from a classical neural network architecture, the same result also holds for a standard fully connected network with one special neuron added to each layer. Second, the same result holds if we change the exponential neuron to a polynomial neuron with a degree dependent on the data. Third, the same result holds even if one feature vector corresponds to both labels. This paper is an effort in designing neural networks that are “good”. Here “good” can mean various things such as nice landscape, stronger representation power or better generalization, and in this paper we focus on the landscape –in particular, the very specific property “every local minimum is a global minimum”. While our results enhance the understanding of the landscape, the practical implications are not straightforward to see since we did not consider other aspects such as algorithms and generalization. It is an interesting direction to improve the landscape results by considering other aspects, such as studying when a specific algorithm will converge to local minima and thus global minima.
7 Acknowledgment
Research is supported by the following grants: USDA/NSF CPS grant AG 2018-67007-2837, NSF NeTS 1718203, NSF CPS ECCS 1739189, DTRA grant HDTRA1-15-1-0003, NSF CCF 1755847 and a start-up grant from Dept. of ISE, University of Illinois Urbana-Champaign. | 1. What is the focus of the review regarding the paper's contribution?
2. Are there any concerns or questions regarding the main result of the paper?
3. How does the reviewer assess the quality and clarity of the paper's content?
4. What are the strengths and weaknesses of the proposed approach compared to prior works?
5. Do you have any suggestions for improving the paper's significance regarding its application to neural networks? | Review | Review
[Updates] After reading the author's response, I think my concerns about the existence of minima have been partly, but not completely, addressed. In the beginning of the response a proof for the existence of a global minimum is provided. However, an important distinction has been ignored. The author wrote that "By assumption 2, there exists θ* such that f(·; θ*) achieves zero training error". However, the existence of a parameter for which all the data can be correctly classified (Assumption 2) is not the same as having a parameter for which the loss function is zero. That is precisely how the counterexample I provided works. Of course, the author could avoid this problem by modifying Assumption 2 to "there exists a parameter for which the loss function is zero", or by adding one assumption stating that $f$ can be rescaled as they did in the response, which I believe they should. Another thing I'm very interested in is how difficult it is to find a local minimum of the modified network. If I understand correctly, after adding the neuron, each stationary point in the previous network becomes a corresponding saddle point (if one just keeps the added neuron inactive) in the modified network (except for the global minima). How does such a loss landscape affect the optimization process? Is it computationally efficient to actually find the minima? How well do the minima generalize? It would be more convincing if the authors can provide some numerical experiments. Overall I believe this is a very good paper, and should be accepted. I've changed my overall score to 7.
[Summary of the paper] This paper presents a rather surprising theoretical result for the loss surface of binary classification models. It is shown that under mild assumptions, if one adds a specific type of neuron with a skip connection to a binary classification model, as well as a quadratic regularization term to the loss function, then every local minimum on the loss surface is also a global minimum. The result is surprising because virtually no assumptions have been made about the classification model itself, other than that the dataset is realizable by the model (namely, there exists a parameter under which the model can classify all samples in the dataset correctly), hence the result is applicable to many models. The paper also provides some extensions to their main result.
[Quality] I have concerns about the main result of the paper:
-- In section 3.2, the authors add the output of an exponential neuron to the output of the network, namely, the new architecture $\tilde{f}$ is defined by $\tilde{f}(x, \theta) = f(x, \theta) + a \exp (w^T x + b)$. Note that the added term has an invariance in its parameters, namely, if one performs the following transformation: $a^\prime = a / C, b^\prime = b + \log C$, then the model will stay exactly the same, i.e., $a^\prime \exp (w^T x + b^\prime) = a \exp (w^T x + b)$ holds for any input $x$. Now, consider the loss function $\tilde{L}_n(\theta, a/C, w, b + \log C)$. Because there is also a regularization term for $a$, by the invariance argument above we can see that $\tilde{L}_n(\theta, a/C, w, b + \log C)$ decreases monotonically as $C$ increases (assuming $a \neq 0$). But as $C$ increases, $b$ is pushed infinitely far away. This argument naturally leads to the following concern: Theorem 1 is stated as "if there is a local minima for $\tilde{L}_n$, then it is a global minima", but how does one ensure that $\tilde{L}_n$ actually has a local minima at all?
To further illustrate my point, consider the following very simple example I constructed. Suppose $f(x, \theta) = 1$, i.e., the model always outputs 1. Suppose we only have one sample in the dataset, $(x, y) = (0, 1)$. Note that the realizability assumption (Assumption 2) is satisfied. Let the loss function be $l(z) = \max(0, 2 + z)^3$, so that Assumption 1 is satisfied. Finally let $\lambda = 2$. Now, we have $\tilde{L}_n = \max(0, 1 - a \exp(b))^3 + a^2$. One can immediately see that this function has no local minima. To see this, note that when $a = 0$, we have $\tilde{L}_n = 1$; on the other hand, let $a = t$ for some $t > 0$, and $b = - \log t$, and we have $\tilde{L}_n \to 0$ as $t \to 0$, but this would also make $b \to +\infty$. Hence the function has no global minimum, and by Theorem 1 it cannot have any local minima. While this observation does not mean Theorem 1 is wrong (because Theorem 1 assumes the existence of a local minimum), it does limit the scope of Theorem 1 in the case where local minima do not exist.
[Clarity] The paper is well written and well organized. The proof sketch is also easy to follow.
[Originality] To the best of my knowledge, the results presented in the paper are original.
[Significance] I really like the results presented in this paper. It is quite surprising that by making very simple modifications to the model, one can eliminate bad local minima, especially given the fact that few assumptions on the model itself are needed. Despite this, I feel that the significance of the results might be slightly less than it appears:
-- As mentioned in the [Quality] part, there are cases where the loss function of the modified model has no local minima at all. In such cases, the theorems in the paper do not apply. It is not clear to me what conditions are needed to guarantee the existence of local minima. It would be nice if the authors can address this issue.
-- The theorems in the paper do not actually make any assumptions on the model $f$ except that there exist parameters with which $f$ can correctly classify all samples in the dataset. While this makes the results very general, this unfortunately also implies that the paper is not really about the loss surface of neural networks, but rather a general way to modify the loss surface that can be applied to any model so long as the realizability assumption is satisfied. The results seem to have nothing to do with neural networks, and hence it does not really add anything to our understanding of the loss surface of neural networks.
The assumptions made in the paper seem reasonable enough to be satisfied in realistic settings. It would be nice if the authors can present some numerical experiments.
NIPS | Title
Multi-objective Bayesian optimisation with preferences over objectives
Abstract
We present a multi-objective Bayesian optimisation algorithm that allows the user to express preference-order constraints on the objectives of the type “objective A is more important than objective B”. These preferences are defined based on the stability of the obtained solutions with respect to preferred objective functions. Rather than attempting to find a representative subset of the complete Pareto front, our algorithm selects those Pareto-optimal points that satisfy these constraints. We formulate a new acquisition function based on expected improvement in dominated hypervolume (EHI) to ensure that the subset of Pareto front satisfying the constraints is thoroughly explored. The hypervolume calculation is weighted by the probability of a point satisfying the constraints from a gradient Gaussian Process model. We demonstrate our algorithm on both synthetic and real-world problems.
1 Introduction
In many real world problems, practitioners are required to sequentially evaluate a noisy black-box and expensive to evaluate function f with the goal of finding its optimum in some domain X. Bayesian optimisation is a well-known algorithm for such problems. There are a variety of studies such as hyperparameter tuning [27, 13, 12], expensive multi-objective optimisation for Robotics [2, 1], and experimentation optimisation in product design such as short polymer fiber materials [16].
Multi-objective Bayesian optimisation involves at least two conflicting, black-box, and expensive to evaluate objectives to be optimised simultaneously. Multi-objective optimisation usually assumes that all objectives are equally important, and solutions are found by seeking the Pareto front in the objective space [4, 5, 3]. However, in most cases, users can stipulate preferences over objectives. This information will impart on the relative importance on sections of the Pareto front. Thus using this information to preferentially sample the Pareto front will boost the efficiency of the optimiser, which is particularly advantageous when the objective functions are expensive.
In this study, preferences over objectives are stipulated based on the stability of the solutions with respect to a set of objective functions. As an example, there are scenarios when investment strategists are looking for Pareto optimal investment strategies that prefer stable solutions for return (objective 1) but more diverse solutions with respect to risk (objective 2) as they can later decide their appetite for risk. As can be inferred, the stability in one objective produces more diverse solutions for the other objectives. We believe in many real-world problems our proposed method can be useful in order to reduce the cost, and improve the safety of experimental design.
Whilst multi-objective Bayesian optimisation for sample efficient discovery of Pareto front is an established research track [9, 18, 8, 15], limited work has examined the incorporation of preferences. Recently, there has been a study [18] wherein given a user specified preferred region in objective space,
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
Figure 1: In (a), the gradients ∂f_0(x)/∂x and ∂f_1(x)/∂x have opposite signs, since the weighted sum of the gradients of the objectives with respect to x must be zero: s^⊤ ∂f(x)/∂x = 0. In (b) we additionally require that ‖∂f_1(x)/∂x‖ > ‖∂f_0(x)/∂x‖, so perturbation of x will cause relatively more change in f_1 than f_0 - i.e. such solutions are (relatively) stable in objective f_0. (c) shows the converse, namely ‖∂f_0(x)/∂x‖ > ‖∂f_1(x)/∂x‖, favoring solutions that are (relatively) stable in objective f_1 and diverse in f_0.
the optimiser focuses its sampling to derive the Pareto front efficiently. However, such preferences are based on the assumption of having an accurate prior knowledge about objective space and the preferred region (generally a hyperbox) for Pareto front solutions. The main contribution of this study is formulating the concept of preference-order constraints and incorporating that into a multi-objective Bayesian optimisation framework to address the unavailability of prior knowledge and boosting the performance of optimisation in such scenarios.
We are formulating the preference-order constraints through ordering of derivatives and incorporating that into multi-objective optimisation using the geometry of the constraints space whilst needing no prior information about the functions. Formally, we find a representative set of Pareto-optimal solutions to the following multi-objective optimisation problem:
D? ⊂ X? = argmax x∈X f (x) (1)
subject to preference-order constraints - that is, assuming f = [f0, f1, . . . , fm], f0 is more important (in terms of stability) than f1 and so on. Our algorithm aims to maximise the dominated hypervolume of the solution in a way that the solutions that meet the constraints are given more weights.
To formalise the concept of preference-order constraints, we first note that a point is locally Pareto optimal if any sufficiently small perturbation of a single design parameter of that point does not simultaneously increase (or decrease) all objectives. Thus, equivalently, a point is locally Pareto optimal if we can define a set of weight vectors such that, for each design parameter, the weighted sum of gradients of the objectives with respect to that design parameter is zero (see Figure 1a). Therefore, the weight vectors define the relative importance of each objective at that point. Figure 1b illustrates this concept where the blue box defines the region of stability for the function f0. Since in this section the magnitude of partial derivative for f0 is smaller compared to that of f1, the weights required to satisfy Pareto optimality would need higher weight corresponding to the gradient of f0 compared to that of f1 (see Figure 1b). Conversely, in Figure 1c, the red box highlights the section of the Pareto front where solutions have high stability in f1. To obtain samples from this section of the Pareto front, we need to make the weights corresponding to the gradient of f0 to be smaller to that of the f1.
Our solution is based on understanding the geometry of the constraints in the weight space. We show that preference order constraints gives rise to a polyhedral proper cone in this space. We show that for the pareto-optimality condition, it necessitates the gradients of the objectives at pareto-optimal points to lie in a perpendicular cone to that polyhedral. We then quantify the posterior probability that any point satisfies the preference-order constraints given a set of observations. We show how these posterior probabilities may be incorporated into the EHI acquisition function [11] to steer the Bayesian optimiser toward Pareto optimal points that satisfy the preference-order constraint and away from those that do not.
2 Notation
Sets are written A, B, C, ..., where R_+ is the positive reals, R̄_+ = R_+ ∪ {0}, Z_+ = {1, 2, ...}, and Z_n = {0, 1, ..., n−1}. |A| is the cardinality of the set A. Tuples (ordered sets) are denoted A, B, C, .... Distributions are denoted A, B, C, .... Column vectors are bold lower case a, b, c, .... Matrices are bold upper case A, B, C, .... Element i of vector a is a_i, and element i, j of matrix A is A_{i,j} (all indexed i, j = 0, 1, ...). The transpose is denoted a^⊤, A^⊤. I is the identity matrix, 1 is a vector of 1s, 0 is a vector of 0s, and e_i is the vector with components (e_i)_j = δ_{ij}, where δ_{ij} is the Kronecker delta. ∇_x = [∂/∂x_0, ∂/∂x_1, ..., ∂/∂x_{n−1}]^⊤, sgn(x) is the sign of x (where sgn(0) = 0), and the indicator function is denoted 1(A).
3 Background
3.1 Gaussian Processes
Let X ⊂ R^n be compact. A Gaussian process [23] GP(µ, K) is a distribution on the function space f : X → R defined by a mean µ : X → R (assumed zero without loss of generality) and a kernel (covariance) K : X × X → R. If f(x) ∼ GP(0, K(x, x′)) then the posterior of f given D = {(x^{(j)}, y^{(j)}) ∈ R^n × R | y^{(j)} = f(x^{(j)}) + ε, ε ∼ N(0, σ²), j ∈ Z_N} is f(x)|D ∼ N(µ_D(x), σ_D(x, x′)), where:
µ_D(x) = k^⊤(x) (K + σ²I)^{−1} y
σ_D(x, x′) = K(x, x′) − k^⊤(x) (K + σ²I)^{−1} k(x′)        (2)
and y, k(x) ∈ R^{|D|}, K ∈ R^{|D|×|D|}, k(x)_j = K(x, x^{(j)}), K_{jk} = K(x^{(j)}, x^{(k)}). Since differentiation is a linear operation, the derivative of a Gaussian process is also a Gaussian process [17, 22]. The posterior of ∇_x f given D is ∇_x f(x)|D ∼ N(µ′_D(x), σ′_D(x, x′)), where:
µ′_D(x) = (∇_x k^⊤(x)) (K + σ²I)^{−1} y
σ′_D(x, x′) = ∇_x ∇_{x′}^⊤ K(x, x′) − (∇_x k^⊤(x)) (K + σ²I)^{−1} (∇_{x′} k^⊤(x′))^⊤        (3)
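The following is a small sketch of the posterior formulas (2)-(3), assuming an RBF kernel so that the kernel gradient has a closed form; the kernel choice, length-scale, and noise level are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(Xtr, ytr, xstar, noise=1e-2, ls=0.5):
    K = rbf(Xtr, Xtr, ls) + noise * np.eye(len(Xtr))
    k = rbf(Xtr, xstar[None, :], ls)[:, 0]                    # k(x*)
    alpha = np.linalg.solve(K, ytr)
    mu = k @ alpha                                            # Eq. (2), posterior mean
    var = rbf(xstar[None, :], xstar[None, :], ls)[0, 0] - k @ np.linalg.solve(K, k)
    # gradient of k_j(x*) w.r.t. x*: k_j * (x_j - x*) / ls^2   (RBF-specific)
    dk = (Xtr - xstar) / ls**2 * k[:, None]                   # shape (N, n)
    grad_mu = dk.T @ alpha                                    # Eq. (3), mean of the gradient GP
    return mu, var, grad_mu

Xtr = np.random.default_rng(0).normal(size=(10, 2))
print(gp_posterior(Xtr, np.sin(Xtr[:, 0]), np.zeros(2))[0])
```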
3.2 Multi-Objective Optimisation
A multi-objective optimisation problem has the form:
argmax x∈X f (x) (4)
where the components of f : X ⊂ R^n → Y ⊂ R^m represent the m distinct objectives f_i : X → R. X and Y are called the design space and objective space, respectively. A Pareto-optimal solution is a point x* ∈ X for which it is not possible to find another solution x ∈ X such that f_i(x) > f_i(x*) for all objectives f_0, f_1, ..., f_{m−1}. The set of all Pareto-optimal solutions is the Pareto set X* = {x* ∈ X | ∄x ∈ X : f(x) ≻ f(x*)}, where y ≻ y′ (y dominates y′) means y ≠ y′ and y_i ≥ y′_i ∀i, and y ⪰ y′ means y ≻ y′ or y = y′. Given observations D = {(x^{(j)}, y^{(j)}) ∈ R^n × R^m | y^{(j)} = f(x^{(j)}) + ε, ε_i ∼ N(0, σ_i²)} of f, the dominant set D* = {(x*, y*) ∈ D | ∄(x, y) ∈ D : y ≻ y*} is the most optimal subset of D (in the Pareto sense). The "goodness" of D is often measured by the dominated hypervolume (S-metric, [31, 10]) with respect to some reference point z ∈ R^m: S(D) = S(D*) = ∫_{y ≥ z} 1(∃y^{(i)} ∈ D | y^{(i)} ⪰ y) dy. Thus our aim is to find the set D that maximises the hypervolume. Optimised algorithms exist for calculating the hypervolume S(D) [29, 25], which is typically calculated by sorting the dominant observations along each axis in objective space to form a grid. The dominated hypervolume (with respect to z) is then the sum of the hypervolumes of the dominated cells c_k, i.e. S(D) = ∑_k vol(c_k).
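For two objectives the dominated hypervolume has a simple sweep implementation over the grid cells described above; the sketch below assumes maximisation and a reference point z below all observations.

```python
import numpy as np

def pareto_front(Y):
    keep = [not any(np.all(q >= p) and np.any(q > p) for q in Y) for p in Y]
    return Y[np.array(keep)]

def hypervolume_2d(Y, z):
    P = pareto_front(np.asarray(Y, float))
    P = P[np.argsort(-P[:, 0])]                  # descending in objective 0
    hv, prev_y1 = 0.0, z[1]
    for y0, y1 in P:
        hv += (y0 - z[0]) * (y1 - prev_y1)       # new slab above the previous level
        prev_y1 = y1
    return hv

print(hypervolume_2d([[1, 3], [2, 2], [3, 1], [0.5, 0.5]], z=[0, 0]))  # 6.0
```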
3.3 Bayesian Multi-Objective Optimisation
In the multi-objective case one typically assumes that the components of f are draws from independent Gaussian processes, i.e. fi(x) ∼ GP(0,K(i)(x,x′)), and fi and fi′ are independent ∀i 6= i′. A
popular acquisition function for multi-objective Bayesian optimisation is expected hypervolume improvement (EHI). The EHI acquisition function is defined by:
at (x|D) = Ef(x)|D [S (D ∪ {(x, f (x))})− S (D)] (5)
[26, 30] and represents the expected change in the dominated hypervolume by the set of observations based on the posterior Gaussian process.
4 Problem Formulation
Let f : X ⊂ Rn → Y ⊂ Rm be a vector of m independent draws fi ∼ GP(0,K(i)(x,x)) from zeromean Gaussian processes. Assume that f is expensive to evaluate. Our aim is to find a representative set of Pareto-optimal solutions to the following multi-objective optimisation problem:
D? ⊂ X? = argmax x∈XI⊂X f (x) (6)
subject to preference-order constraints. Specifically, we want to explore only that subset of solutions XI ⊂ X that place more importance on one objective fi0 than objective fi1 , and so on, as specified by the (ordered) preference tuple I = (i0, i1, . . . iQ|{i0, i1, . . .} ⊂ Zm, ik 6= ik′∀k 6= k′), where Q ∈ Zm is the number of defined preferences over objectives.
4.1 Preference-Order Constraints
Let x* ∈ int(X) ∩ X* be a Pareto-optimal point in the interior of X. Necessary (but not sufficient, local) Pareto-optimality conditions require that, for all sufficiently small δx ∈ R^n, f(x* + δx) ⊁ f(x*), or, equivalently, (δx^⊤ ∇_x) f(x*) ∉ R_+^m. A necessary (again not sufficient) equivalent condition is that, for each axis j ∈ Z_n in design space, sufficiently small changes in x_j do not cause all objectives to simultaneously increase (and/or remain unchanged) or decrease (and/or remain unchanged). Failure of this condition would indicate that simply changing design parameter x_j could improve all objectives, and hence that x* was not in fact Pareto optimal. In summary, local Pareto optimality requires that ∀j ∈ Z_n there exists s_{(j)} ∈ R̄_+^m \ {0} such that:
s_{(j)}^⊤ ∂f(x)/∂x_j = 0        (7)
It is important to note that this is not the same as the optimality conditions that may be derived from linear scalarisation, as the optimality conditions that arise from linear scalarisation additionally require that s_{(0)} = s_{(1)} = ... = s_{(n−1)}. Moreover, (7) applies to all Pareto-optimal points, whereas the linear-scalarisation optimality conditions fail for Pareto points on non-convex regions [28].
Definition 1 (Preference-Order Constraints) Let I = (i0, i1, . . . iQ|{i0, i1, . . .} ⊂ Zm, ik 6= ik′∀k 6= k′) be an (ordered) preference tuple. A vector x ∈ X satisfies the associated preference-order constraint if ∃s(0), s(1), . . . , s(n−1) ∈ SI such that:
s_{(j)}^⊤ ∂f(x)/∂x_j = 0  ∀j ∈ Z_n,
where S_I ≜ {s ∈ R̄_+^m \ {0} | s_{i_0} ≥ s_{i_1} ≥ s_{i_2} ≥ ...}. Further, we define X_I to be the set of all x ∈ X satisfying the preference-order constraint. Equivalently:
X_I = {x ∈ X | ∂f(x)/∂x_j ∈ S_I^⊥ ∀j ∈ Z_n}, where S_I^⊥ ≜ {v ∈ R^m | ∃s ∈ S_I : s^⊤ v = 0}.
It is noteworthy that (7) and Definition 1 are key to calculating the compliance of a recommended solution with the preference-order constraints. Having defined preference-order constraints, we then calculate the posterior probability that x ∈ X_I, and show how these posterior probabilities may be incorporated into the EHI acquisition function to steer the Bayesian optimiser toward Pareto-optimal points that satisfy the preference-order constraint. Before proceeding, however, it is necessary to briefly consider the geometry of S_I and S_I^⊥.
4.2 The geometry of SI and S⊥I
In the following we assume, w.l.o.g., that the preference-order constraints follow the order of the indices of the objective functions (reorder otherwise), and that there is at least one constraint.
We now define the preference-order constraints by the assumption I = (0, 1, ..., Q | Q ∈ Z_m\{0}), where Q > 0. This defines the sets S_I and S_I^⊥, which in turn define the constraints that must be met by the gradients of f(x): either ∃s_{(0)}, s_{(1)}, ..., s_{(n−1)} ∈ S_I such that s_{(j)}^⊤ ∂f(x)/∂x_j = 0 ∀j ∈ Z_n or, equivalently, ∂f(x)/∂x_j ∈ S_I^⊥ ∀j ∈ Z_n. Next, Theorem 1 gives the representation of S_I.
Theorem 1 Let I = (0, 1, . . . , Q|Q ∈ Zm\{0}) be an (ordered) preference tuple. Define SI as per definition 1. Then SI is a polyhedral (finitely-generated) proper cone (excluding the origin) that may be represented using either a polyhedral representation:
S_I = {s ∈ R^m | a_{(i)}^⊤ s ≥ 0 ∀i ∈ Z_m} \ {0}        (8)
or a generative representation:
S_I = {∑_{i∈Z_m} c_i ã_{(i)} | c ∈ R̄_+^m} \ {0}        (9)
where, ∀i ∈ Z_m,
a_{(i)} = (1/√2)(e_i − e_{i+1}) if i ∈ Z_Q,  and  a_{(i)} = e_i otherwise;
ã_{(i)} = (1/√(i+1)) ∑_{l∈Z_{i+1}} e_l if i ∈ Z_{Q+1},  and  ã_{(i)} = e_i otherwise,
and e0, e1, . . . , em−1 are the Euclidean basis of Rm.
Proof of Theorem 1 is available in the supplementary material. To test if a point satisfies this requirement we need to understand the geometry of the set SI. The Theorem 1 shows that SI∪{0} is a polyhedral (finitely generated) proper cone, represented either in terms of half-space constraints (polyhedral form) or as a positive span of extreme directions (generative representation). The geometrical intuition for this is given in Figure 2 for a simple, 2-objective case with a single preference order constraint.
Algorithm 1 Test if v ∈ S_I^⊥.
Input: preference tuple I; test vector v ∈ R^m.
Output: 1(v ∈ S_I^⊥).
// Calculate 1(v ∈ S_I^⊥).
Let b_j = ã_{(j)}^⊤ v  ∀j ∈ Z_m.
if ∃i ≠ k ∈ Z_m : sgn(b_i) ≠ sgn(b_k) return TRUE
elseif b = 0 return TRUE
else return FALSE.
Algorithm 2 Preference-Order Constrained Bayesian Optimisation (MOBO-PC).
Input: preference-order tuple I; observations D = {(x^{(i)}, y^{(i)}) ∈ X × Y}.
for t = 0, 1, ..., T − 1 do
  Select the test point: x = argmax_{x∈X} a_t^{PEHI}(x | D_t)  (a_t^{PEHI} is evaluated using Algorithm 4).
  Perform the experiment y = f(x) + ε.
  Update D_{t+1} := D_t ∪ {(x, y)}.
end for
Algorithm 3 Calculate Pr(x ∈ X_I | D).
Input: observations D = {(x^{(i)}, y^{(i)}) ∈ X × Y}; number of Monte Carlo samples R; test vector x ∈ X.
Output: Pr(x ∈ X_I | D).
Let q = 0.
for k = 0, 1, ..., R − 1 do
  // Construct samples v_{(0)}, v_{(1)}, ..., v_{(n−1)} ∈ R^m.
  Let v_{(j)} = 0 ∀j ∈ Z_n.
  for i = 0, 1, ..., m − 1 do
    Sample u ∼ N(µ′_{Di}(x), σ′_{Di}(x, x)) (see (3)).
    Let [v_{(0)i}, v_{(1)i}, ..., v_{(n−1)i}] := u^⊤.
  end for
  // Test if v_{(j)} ∈ S_I^⊥ ∀j ∈ Z_n.
  Let q := q + ∏_{j∈Z_n} 1(v_{(j)} ∈ S_I^⊥) (see Algorithm 1).
end for
Return q/R.
Algorithm 4 Calculate a_t^{PEHI}(x | D).
Input: observations D = {(x^{(i)}, y^{(i)}) ∈ X × Y}; number of Monte Carlo samples R̃; test vector x ∈ X.
Output: a_t^{PEHI}(x | D).
Using Algorithm 3, calculate:
  s_x = Pr(x ∈ X_I | D)
  s_{(j)} = Pr(x^{(j)} ∈ X_I | D)  ∀(x^{(j)}, y^{(j)}) ∈ D
Let q = 0.
for k = 0, 1, ..., R̃ − 1 do
  Sample y_i ∼ N(µ_{Di}(x), σ_{Di}(x)) ∀i ∈ Z_m (see (2)).
  Construct cells c_0, c_1, ... from D ∪ {(x, y)} by sorting along each axis in objective space to form a grid.
  Calculate: q = q + s_x ∑_{k: y ⪰ ỹ_{c_k}} vol(c_k) ∏_{j∈Z_N: y^{(j)} ⪰ ỹ_{c_k}} (1 − s_{(j)})
end for
Return q/R̃.
The subsequent corollary allows us to construct a simple algorithm (Algorithm 1) to test if a vector v lies in the set S_I^⊥. We will use this algorithm to test if ∂f(x)/∂x_j ∈ S_I^⊥ ∀j ∈ Z_n, that is, if x satisfies the preference-order constraints. The proof of Corollary 1 is available in the supplementary material.
Corollary 1 Let I = (0, 1, ..., Q | Q ∈ Z_m\{0}) be an (ordered) preference tuple. Define S_I^⊥ as per Definition 1. Using the notation of Theorem 1, v ∈ S_I^⊥ if and only if v = 0 or ∃i ≠ k ∈ Z_m such that sgn(ã_{(i)}^⊤ v) ≠ sgn(ã_{(k)}^⊤ v), where sgn(0) = 0.
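Corollary 1 reduces the membership test to checking the signs of the projections of v onto the generators ã_{(i)} of Theorem 1. The NumPy sketch below assumes the ordered preference tuple I = (0, 1, ..., Q); mixed or zero projections certify the existence of an s ∈ S_I with s^⊤ v = 0.

```python
import numpy as np

def generators(m, Q):
    """The generators a_tilde_(i) of Theorem 1 for I = (0, 1, ..., Q)."""
    A = np.eye(m)
    for i in range(Q + 1):
        A[i, :i + 1] = 1.0 / np.sqrt(i + 1)
        A[i, i + 1:] = 0.0
    return A

def in_S_perp(v, m, Q, tol=1e-12):
    """v lies in S_I^perp iff its projections b_i = a_tilde_(i)^T v are not all of one strict sign."""
    b = generators(m, Q) @ v
    if np.all(b > tol) or np.all(b < -tol):
        return False
    return True          # v = 0, some projection is zero, or the signs are mixed

print(in_S_perp(np.array([1.0, -2.0]), m=2, Q=1))   # mixed signs  -> True
print(in_S_perp(np.array([1.0,  2.0]), m=2, Q=1))   # all positive -> False
```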
5 Preference Constrained Bayesian Optimisation
In this section we do two things. First, we show how the Gaussian process models of the objectives fi (and their derivatives) may be used to calculate the posterior probability that x ∈ XI defined by I = (0, 1, . . . , Q|Q ∈ Zm\{0}). Second, we show how the EHI acquisition function may be modified and calculated to incorporate these probabilities and hence only reward points that satisfy the preference-order conditions. Finally, we give our algorithm using this acquisition function.
5.1 Calculating Posterior Probabilities
Given that f_i ∼ GP(0, K^{(i)}(x, x)) are draws from independent Gaussian processes, and given observations D, we wish to calculate the posterior probability that x ∈ X_I, i.e.:
Pr(x ∈ X_I | D) = Pr(∂f(x)/∂x_j ∈ S_I^⊥ ∀j ∈ Z_n).
As f_i ∼ GP(0, K^{(i)}(x, x)), it follows that ∇_x f_i(x) | D ∼ N_i ≜ N(µ′_{Di}(x), σ′_{Di}(x, x′)), as defined by (3). Hence:
Pr(x ∈ X_I | D) = Pr(v_{(j)} ∈ S_I^⊥ ∀j ∈ Z_n | [v_{(0)i}, v_{(1)i}, ..., v_{(n−1)i}]^⊤ ∼ N_i ∀i ∈ Z_m),
where v ∼ P(∇_x f | D). We estimate this probability using Monte Carlo [6] sampling as per Algorithm 3.
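A self-contained Monte Carlo sketch of this estimate (Algorithm 3) follows; the derivative-GP posteriors enter as per-objective means and covariances from Equation (3), the sign test is the one from Corollary 1, and the sample budget R and tolerance are assumptions.

```python
import numpy as np

def _gen(m, Q):
    A = np.eye(m)
    for i in range(Q + 1):
        A[i, :i + 1], A[i, i + 1:] = 1.0 / np.sqrt(i + 1), 0.0
    return A

def prob_satisfies_constraints(mu_grads, cov_grads, Q, R=200, seed=0):
    """mu_grads[i] (n,) and cov_grads[i] (n,n): posterior of grad f_i(x) from Eq. (3)."""
    rng = np.random.default_rng(seed)
    m, n = len(mu_grads), len(mu_grads[0])
    A, hits = _gen(m, Q), 0
    for _ in range(R):
        # one joint sample of all objective gradients at x; row i is a sample of grad f_i(x)
        G = np.stack([rng.multivariate_normal(mu_grads[i], cov_grads[i]) for i in range(m)])
        B = A @ G                                   # projections of every v_(j) = G[:, j]
        ok = all(not (np.all(B[:, j] > 1e-12) or np.all(B[:, j] < -1e-12)) for j in range(n))
        hits += ok
    return hits / R
```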
5.2 Preference-Order Constrained Bayesian Optimisation Algorithm (MOBO-PC)
Our complete Bayesian optimisation algorithm with Preference-order constraints is given in algorithm 2. The acquisition function introduced in this algorithm gives higher importance to points satisfying the preference-order constraints. Unlike standard EHI, we take expectation over both the expected experimental outcomes fi(x) ∼ N (µDi(x), σDi(x,x)), ∀i ∈ Zm, and the probability that points x(i) ∈ XI and x ∈ XI satisfy the preference-order constraints. We define our preference-based EHI acquisition function as:
a_t^{PEHI}(x | D) = E[S_I(D ∪ {(x, f(x))}) − S_I(D) | D],        (10)
where S_I(D) is the hypervolume dominated by the observations (x, y) ∈ D satisfying the preference-order constraints. The calculation of S_I(D) is illustrated in the supplementary material. The expectation of S_I(D) given D is:
E[S_I(D) | D] = ∑_k vol(c_k) Pr(∃(x, y) ∈ D : y ⪰ ỹ_{c_k} ∧ x ∈ X_I)
             = ∑_k vol(c_k) (1 − ∏_{(x,y)∈D : y ⪰ ỹ_{c_k}} (1 − Pr(x ∈ X_I | D))),
where ỹck is the dominant corner of cell ck, vol(ck) is the hypervolume of cell ck, and the cells ck are constructed by sorting D along each axis in objective space. The posterior probabilities Pr(x ∈ XI|D) are calculated using algorithm 3. It follows that:
a_t^{PEHI}(x | D) = Pr(x ∈ X_I | D) E[ ∑_{k: y ⪰ ỹ_{c_k}} vol(c_k) ∏_{j∈Z_N: y^{(j)} ⪰ ỹ_{c_k}} (1 − Pr(x^{(j)} ∈ X_I | D)) | y_i ∼ N(µ_{Di}(x), σ_{Di}(x)) ∀i ∈ Z_m ],
where the cells c_k are constructed using the set D ∪ {(x, y)} by sorting along each axis in objective space. We estimate this acquisition function using Monte Carlo simulation as shown in Algorithm 4.
6 Experiments
We conduct a series of experiments to test the empirical performance of our proposed method MOBO-PC and compare it with other strategies. These experiments include synthetic functions as well as optimising the hyperparameters of a feed-forward neural network. For the Gaussian processes, we use maximum likelihood estimation to set the hyperparameters [21].
6.1 Baselines
To the best of our knowledge there are no studies aiming to solve our proposed problem, however we are using PESMO, SMSego, SUR, ParEGO and EHI [9, 20, 19, 14, 7] to confirm the validity of the obtained Pareto front solutions. The obtained Pareto front must be in the ground-truth whilst also satisfying the preference-order constraints. We compare our results with MOBO-RS [18] by suitably specifying bounding boxes in the objective space that can replicate a preference-order constraint.
6.2 Synthetic Functions
We begin with a comparison on minimising the synthetic Schaffer function N. 1, which has two conflicting objectives f_0, f_1 and a 1-dimensional input (see [24]). Figure 3a shows the ground-truth Pareto front
for this function. To illustrate the behavior of our method, we impose distinct preferences. Three test cases are designed to illustrate the effects of imposing preference-order constraints on the objective functions for stability: Case (1): s_0 ≈ s_1, Case (2): s_0 < s_1, and Case (3): s_0 > s_1. For our method it is only required to define the preference-order constraints, whereas for MOBO-RS additional information in the form of a bounding box is obligatory. Figure 3b (case 1) shows the results for the preference-order constraint S_I ≜ {s ∈ R̄_+^m \ {0} | s_0 ≈ s_1} for our proposed method, where s_0 represents the importance of stability in minimising f_0 and s_1 is the importance of stability in minimising f_1. Because both objectives are equally important, a balanced optimisation is expected. Higher weights are obtained for the Pareto front points in the middle region, which have the highest stability for both objectives. Figure 3c (case 2) is based on the preference order s_0 < s_1, which implies that the importance of stability in f_1 is greater than in f_0. The results show more stable Pareto points for f_1 than for f_0. Figure 3d (case 3) shows the results of the s_0 > s_1 preference-order constraint. As expected, we see a larger number of stable Pareto points for the important objective (i.e. f_0 in this case). We defined two bounding boxes for the MOBO-RS approach which can represent the preference-order constraints in our approach (Figures 3e and 3f). There are infinitely many possible bounding boxes that can serve as constraints on objectives in such problems; consequently, instability of the results is expected across the various definitions of bounding boxes. We believe our method can obtain more stable Pareto front solutions, especially when prior information is sparse. Also, having extra information in the form of the weight (importance) of the Pareto front points is another advantage.
Figure 4 illustrates a special test case in which s_0 > s_1 and s_2 > s_1, yet no preference is specified between f_2 and f_0, while minimising the Viennet function. This compound preference-order constraint does not form a proper cone as elaborated in Theorem 1; however, s_0 > s_1 independently constructs a proper cone, and likewise for s_2 > s_1. Figure 4a shows the results of processing these two independent constraints separately, merging their results, and finding the Pareto front. Figure 4b shows more stable solutions for f_0 compared to f_1. Figure 4c shows that the Pareto front points comply with s_2 > s_1.
6.3 Finding a Fast and Accurate Neural Network
Next, we train a neural network with the two objectives of minimising both prediction error and prediction time, as per [9]. These are conflicting objectives because reducing the prediction error generally involves larger networks and consequently longer testing time. We use the MNIST dataset, and the tuning parameters include the number of hidden layers (x_1 ∈ [1, 3]), the number of hidden units per layer (x_2 ∈ [50, 300]), the learning rate (x_3 ∈ (0, 0.2]), the amount of dropout (x_4 ∈ [0.4, 0.8]), and the level of l_1 (x_5 ∈ (0, 0.1]) and l_2 (x_6 ∈ (0, 0.1]) regularization. For this problem we assume that stability of f_1 (time) in the minimisation procedure is more important than that of f_0 (error). For the MOBO-RS method, we selected the bounding box [[0.02, 0], [0.03, 2]] to represent accurate prior knowledge (see Figure 5). The results were averaged over 5 independent runs. Figure 5 illustrates that one can simply ask for more stable solutions with respect to the test time of a neural network (without any prior knowledge) while optimising the hyperparameters, as all the solutions found with MOBO-PC lie in the (0, 5) test-time range. In addition, the proposed method appears to find a larger number of Pareto front solutions in comparison with MOBO-RS.
7 Conclusion
In this paper we proposed a novel multi-objective Bayesian optimisation algorithm with preferences over objectives. We define objective preferences in terms of stability and formulate a common framework to focus on the sections of the Pareto front where preferred objectives are more stable, as is required. We evaluate our method on both synthetic and real-world problems and show that the obtained Pareto fronts comply with the preference-order constraints.
Acknowledgments
This research was partially funded by Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006). | 1. How does the proposed method extend hypervolume-based acquisition functions for Bayesian optimization?
2. Can you explain the significance of using algorithm (1) in checking if v \in \matbb{S}_{\mathcal{J}}?
3. How does the proposed method compare to a trivial approach where the preference order sorting is carried out as a post-processing step for filtering a Pareto front obtained without considering any preference order? | Review | Review
Originality: The work is original in its setup of preference order in multi-objective Bayesian optimisation. It extends the hypervolume-based acquisition function for BO with an algorithm that tests for satisfiability of the preference order in a sample. Quality and clarity: The work done is complete in its motivation, formulation, approach and experimentation. It is clearly presented. Significance: To my understanding, the use of algorithm (1) in checking if v \in \mathbb{S}_{\mathcal{J}} is the only step where preference order plays a role. However, I would be interested to know how this compares to a trivial approach where the preference-order sorting is carried out as a post-processing step for filtering a Pareto front obtained without consideration of any preference order.
NIPS | Title
Multi-objective Bayesian optimisation with preferences over objectives
Abstract
We present a multi-objective Bayesian optimisation algorithm that allows the user to express preference-order constraints on the objectives of the type “objective A is more important than objective B”. These preferences are defined based on the stability of the obtained solutions with respect to preferred objective functions. Rather than attempting to find a representative subset of the complete Pareto front, our algorithm selects those Pareto-optimal points that satisfy these constraints. We formulate a new acquisition function based on expected improvement in dominated hypervolume (EHI) to ensure that the subset of Pareto front satisfying the constraints is thoroughly explored. The hypervolume calculation is weighted by the probability of a point satisfying the constraints from a gradient Gaussian Process model. We demonstrate our algorithm on both synthetic and real-world problems.
1 Introduction
In many real world problems, practitioners are required to sequentially evaluate a noisy black-box and expensive to evaluate function f with the goal of finding its optimum in some domain X. Bayesian optimisation is a well-known algorithm for such problems. There are a variety of studies such as hyperparameter tuning [27, 13, 12], expensive multi-objective optimisation for Robotics [2, 1], and experimentation optimisation in product design such as short polymer fiber materials [16].
Multi-objective Bayesian optimisation involves at least two conflicting, black-box, and expensive to evaluate objectives to be optimised simultaneously. Multi-objective optimisation usually assumes that all objectives are equally important, and solutions are found by seeking the Pareto front in the objective space [4, 5, 3]. However, in most cases, users can stipulate preferences over objectives. This information will impart on the relative importance on sections of the Pareto front. Thus using this information to preferentially sample the Pareto front will boost the efficiency of the optimiser, which is particularly advantageous when the objective functions are expensive.
In this study, preferences over objectives are stipulated based on the stability of the solutions with respect to a set of objective functions. As an example, there are scenarios when investment strategists are looking for Pareto optimal investment strategies that prefer stable solutions for return (objective 1) but more diverse solutions with respect to risk (objective 2) as they can later decide their appetite for risk. As can be inferred, the stability in one objective produces more diverse solutions for the other objectives. We believe in many real-world problems our proposed method can be useful in order to reduce the cost, and improve the safety of experimental design.
Whilst multi-objective Bayesian optimisation for sample efficient discovery of Pareto front is an established research track [9, 18, 8, 15], limited work has examined the incorporation of preferences. Recently, there has been a study [18] wherein given a user specified preferred region in objective space,
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
Figure 1: In (a), the gradients ∂f_0(x)/∂x and ∂f_1(x)/∂x have opposite signs, since the weighted sum of the gradients of the objectives with respect to x must be zero: s^⊤ ∂f(x)/∂x = 0. In (b) we additionally require that ‖∂f_1(x)/∂x‖ > ‖∂f_0(x)/∂x‖, so perturbation of x will cause relatively more change in f_1 than f_0 - i.e. such solutions are (relatively) stable in objective f_0. (c) shows the converse, namely ‖∂f_0(x)/∂x‖ > ‖∂f_1(x)/∂x‖, favoring solutions that are (relatively) stable in objective f_1 and diverse in f_0.
the optimiser focuses its sampling to derive the Pareto front efficiently. However, such preferences are based on the assumption of having an accurate prior knowledge about objective space and the preferred region (generally a hyperbox) for Pareto front solutions. The main contribution of this study is formulating the concept of preference-order constraints and incorporating that into a multi-objective Bayesian optimisation framework to address the unavailability of prior knowledge and boosting the performance of optimisation in such scenarios.
We are formulating the preference-order constraints through ordering of derivatives and incorporating that into multi-objective optimisation using the geometry of the constraints space whilst needing no prior information about the functions. Formally, we find a representative set of Pareto-optimal solutions to the following multi-objective optimisation problem:
D? ⊂ X? = argmax x∈X f (x) (1)
subject to preference-order constraints - that is, assuming f = [f0, f1, . . . , fm], f0 is more important (in terms of stability) than f1 and so on. Our algorithm aims to maximise the dominated hypervolume of the solution in a way that the solutions that meet the constraints are given more weights.
To formalise the concept of preference-order constraints, we first note that a point is locally Pareto optimal if any sufficiently small perturbation of a single design parameter of that point does not simultaneously increase (or decrease) all objectives. Thus, equivalently, a point is locally Pareto optimal if we can define a set of weight vectors such that, for each design parameter, the weighted sum of gradients of the objectives with respect to that design parameter is zero (see Figure 1a). Therefore, the weight vectors define the relative importance of each objective at that point. Figure 1b illustrates this concept where the blue box defines the region of stability for the function f0. Since in this section the magnitude of partial derivative for f0 is smaller compared to that of f1, the weights required to satisfy Pareto optimality would need higher weight corresponding to the gradient of f0 compared to that of f1 (see Figure 1b). Conversely, in Figure 1c, the red box highlights the section of the Pareto front where solutions have high stability in f1. To obtain samples from this section of the Pareto front, we need to make the weights corresponding to the gradient of f0 to be smaller to that of the f1.
Our solution is based on understanding the geometry of the constraints in the weight space. We show that preference order constraints gives rise to a polyhedral proper cone in this space. We show that for the pareto-optimality condition, it necessitates the gradients of the objectives at pareto-optimal points to lie in a perpendicular cone to that polyhedral. We then quantify the posterior probability that any point satisfies the preference-order constraints given a set of observations. We show how these posterior probabilities may be incorporated into the EHI acquisition function [11] to steer the Bayesian optimiser toward Pareto optimal points that satisfy the preference-order constraint and away from those that do not.
2 Notation
Sets are written A, B, C, ..., where R_+ is the positive reals, R̄_+ = R_+ ∪ {0}, Z_+ = {1, 2, ...}, and Z_n = {0, 1, ..., n−1}. |A| is the cardinality of the set A. Tuples (ordered sets) are denoted A, B, C, .... Distributions are denoted A, B, C, .... Column vectors are bold lower case a, b, c, .... Matrices are bold upper case A, B, C, .... Element i of vector a is a_i, and element i, j of matrix A is A_{i,j} (all indexed i, j = 0, 1, ...). The transpose is denoted a^⊤, A^⊤. I is the identity matrix, 1 is a vector of 1s, 0 is a vector of 0s, and e_i is the vector with components (e_i)_j = δ_{ij}, where δ_{ij} is the Kronecker delta. ∇_x = [∂/∂x_0, ∂/∂x_1, ..., ∂/∂x_{n−1}]^⊤, sgn(x) is the sign of x (where sgn(0) = 0), and the indicator function is denoted 1(A).
3 Background
3.1 Gaussian Processes
Let X ⊂ R^n be compact. A Gaussian process [23] GP(µ, K) is a distribution on the function space f : X → R defined by a mean µ : X → R (assumed zero without loss of generality) and a kernel (covariance) K : X × X → R. If f(x) ∼ GP(0, K(x, x′)) then the posterior of f given D = {(x^{(j)}, y^{(j)}) ∈ R^n × R | y^{(j)} = f(x^{(j)}) + ε, ε ∼ N(0, σ²), j ∈ Z_N} is f(x)|D ∼ N(µ_D(x), σ_D(x, x′)), where:
µ_D(x) = k^⊤(x) (K + σ²I)^{−1} y
σ_D(x, x′) = K(x, x′) − k^⊤(x) (K + σ²I)^{−1} k(x′)        (2)
and y, k(x) ∈ R^{|D|}, K ∈ R^{|D|×|D|}, k(x)_j = K(x, x^{(j)}), K_{jk} = K(x^{(j)}, x^{(k)}). Since differentiation is a linear operation, the derivative of a Gaussian process is also a Gaussian process [17, 22]. The posterior of ∇_x f given D is ∇_x f(x)|D ∼ N(µ′_D(x), σ′_D(x, x′)), where:
µ′_D(x) = (∇_x k^⊤(x)) (K + σ²I)^{−1} y
σ′_D(x, x′) = ∇_x ∇_{x′}^⊤ K(x, x′) − (∇_x k^⊤(x)) (K + σ²I)^{−1} (∇_{x′} k^⊤(x′))^⊤        (3)
3.2 Multi-Objective Optimisation
A multi-objective optimisation problem has the form:
argmax x∈X f (x) (4)
where the components of f : X ⊂ Rn → Y ⊂ Rm represent the m distinct objectives fi : X → R. X and Y are called the design space and objective space, respectively. A Pareto-optimal solution is a point x⋆ ∈ X for which it is not possible to find another solution x ∈ X such that fi(x) > fi(x⋆) for all objectives f0, f1, . . . , fm−1. The set of all Pareto-optimal solutions is the Pareto set X⋆ = {x⋆ ∈ X | ∄x ∈ X : f(x) ≻ f(x⋆)}, where y ≻ y′ (y dominates y′) means y ≠ y′ and yi ≥ y′i ∀i, and y ⪰ y′ means y ≻ y′ or y = y′. Given observations D = {(x(j), y(j)) ∈ Rn × Rm | y(j) = f(x(j)) + ε, εi ∼ N(0, σ²i)} of f, the dominant set D∗ = {(x∗, y∗) ∈ D | ∄(x, y) ∈ D : y ≻ y∗} is the most optimal subset of D (in the Pareto sense). The “goodness” of D is often measured by the dominated hypervolume (S-metric, [31, 10]) with respect to some reference point z ∈ Rm:
$$ S(D) = S(D^*) = \int_{\mathbf y \ge \mathbf z} \mathbb 1\!\left( \exists \mathbf y^{(i)} \in D : \mathbf y^{(i)} \succeq \mathbf y \right) d\mathbf y. $$
Thus our aim is to find the set D that maximises the hypervolume. Optimised algorithms exist for calculating the hypervolume S(D) [29, 25]; it is typically computed by sorting the dominant observations along each axis in objective space to form a grid. The dominated hypervolume (with respect to z) is then the sum of the hypervolumes of the dominated cells ck, i.e. S(D) = Σk vol(ck).
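As a small illustration of the sort-into-cells computation of S(D), the sketch below handles the two-objective maximisation case; the test points, reference point z and the simple O(N²) dominance filter are illustrative.

```python
import numpy as np

def pareto_front(Y):
    # Keep the points not dominated by any other point (maximisation).
    keep = []
    for y in Y:
        dominated = any(np.all(yp >= y) and np.any(yp > y) for yp in Y)
        if not dominated:
            keep.append(y)
    return np.array(keep)

def hypervolume_2d(Y, z):
    """Dominated hypervolume of the maximisation front Y w.r.t. reference point z."""
    front = pareto_front(Y)
    front = front[np.argsort(-front[:, 0])]           # sort by f0, descending
    hv, prev_f1 = 0.0, z[1]
    for f0, f1 in front:                              # sum the hypervolumes of the grid cells
        hv += (f0 - z[0]) * (f1 - prev_f1)
        prev_f1 = f1
    return hv

Y = np.array([[3.0, 2.0], [1.0, 4.0], [0.5, 0.5]])    # third point is dominated
print(hypervolume_2d(Y, z=np.array([0.0, 0.0])))      # prints 8.0
```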
3.3 Bayesian Multi-Objective Optimisation
In the multi-objective case one typically assumes that the components of f are draws from independent Gaussian processes, i.e. fi(x) ∼ GP(0, K(i)(x, x′)), and fi and fi′ are independent ∀i ≠ i′. A popular acquisition function for multi-objective Bayesian optimisation is expected hypervolume improvement (EHI). The EHI acquisition function is defined by:
$$ a_t(\mathbf x \mid D) = \mathbb E_{\mathbf f(\mathbf x) \mid D}\left[ S\left( D \cup \{(\mathbf x, \mathbf f(\mathbf x))\} \right) - S(D) \right] \qquad (5) $$
[26, 30], and represents the expected change in the hypervolume dominated by the set of observations under the posterior Gaussian process.
4 Problem Formulation
Let f : X ⊂ Rn → Y ⊂ Rm be a vector of m independent draws fi ∼ GP(0, K(i)(x, x)) from zero-mean Gaussian processes. Assume that f is expensive to evaluate. Our aim is to find a representative set of Pareto-optimal solutions to the following multi-objective optimisation problem:
$$ D^{\star} \subset \mathbb X^{\star} = \operatorname*{argmax}_{\mathbf x \in \mathbb X_{\mathcal I} \subset \mathbb X}\; \mathbf f(\mathbf x) \qquad (6) $$
subject to preference-order constraints. Specifically, we want to explore only the subset of solutions XI ⊂ X that places more importance on one objective fi0 than on objective fi1, and so on, as specified by the (ordered) preference tuple I = (i0, i1, . . . , iQ | {i0, i1, . . .} ⊂ Zm, ik ≠ ik′ ∀k ≠ k′), where Q ∈ Zm is the number of defined preferences over objectives.
4.1 Preference-Order Constraints
Let x⋆ ∈ int(X) ∩ X⋆ be a Pareto-optimal point in the interior of X. Necessary (but not sufficient) local Pareto-optimality conditions require that, for all sufficiently small δx ∈ Rn, f(x⋆ + δx) ⊁ f(x⋆) or, equivalently, (δxT∇x)f(x⋆) ∉ Rm+. A necessary (again not sufficient) equivalent condition is that, for each axis j ∈ Zn in design space, sufficiently small changes in xj do not cause all objectives to simultaneously increase (and/or remain unchanged) or decrease (and/or remain unchanged). Failure of this condition would indicate that simply changing design parameter xj could improve all objectives, and hence that x⋆ was not in fact Pareto optimal. In summary, local Pareto optimality requires that ∀j ∈ Zn there exists s(j) ∈ R̄m+ \ {0} such that:
$$ \mathbf s_{(j)}^{\mathrm T}\,\frac{\partial}{\partial x_j}\mathbf f(\mathbf x) = 0 \qquad (7) $$
It is important to note that this is not the same as the optimality condition derived from linear scalarisation: the optimality conditions that arise from linear scalarisation additionally require that s(0) = s(1) = . . . = s(n−1). Moreover, (7) applies to all Pareto-optimal points, whereas the linear-scalarisation optimality conditions fail for Pareto points on non-convex regions of the front [28].
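As a small numerical illustration of (7) (a hypothetical example, not taken from the paper): suppose m = 2 and, at some x, the partial derivatives along axis j are ∂f/∂xj = (2, −3)T. Then s(j) = (3, 2)T ∈ R̄2+ \ {0} gives
$$ \mathbf s_{(j)}^{\mathrm T}\,\frac{\partial}{\partial x_j}\mathbf f(\mathbf x) = 3\cdot 2 + 2\cdot(-3) = 0, $$
so the condition can be satisfied along that axis. If instead ∂f/∂xj = (2, 3)T, every s ∈ R̄2+ \ {0} yields a strictly positive inner product, so increasing xj improves both objectives simultaneously and x cannot be locally Pareto optimal.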
Definition 1 (Preference-Order Constraints) Let I = (i0, i1, . . . , iQ | {i0, i1, . . .} ⊂ Zm, ik ≠ ik′ ∀k ≠ k′) be an (ordered) preference tuple. A vector x ∈ X satisfies the associated preference-order constraint if ∃s(0), s(1), . . . , s(n−1) ∈ SI such that:
$$ \mathbf s_{(j)}^{\mathrm T}\,\frac{\partial}{\partial x_j}\mathbf f(\mathbf x) = 0 \quad \forall j \in \mathbb Z_n $$
where SI ≜ {s ∈ R̄m+ \ {0} | si0 ≥ si1 ≥ si2 ≥ . . .}. Further, we define XI to be the set of all x ∈ X satisfying the preference-order constraint. Equivalently:
$$ \mathbb X_{\mathcal I} = \left\{ \mathbf x \in \mathbb X \;\middle|\; \tfrac{\partial}{\partial x_j}\mathbf f(\mathbf x) \in S_{\mathcal I}^{\perp}\ \forall j \in \mathbb Z_n \right\} \quad \text{where} \quad S_{\mathcal I}^{\perp} \triangleq \left\{ \mathbf v \in \mathbb R^m \;\middle|\; \exists \mathbf s \in S_{\mathcal I},\ \mathbf s^{\mathrm T}\mathbf v = 0 \right\}. $$
It is noteworthy that (7) and Definition 1 are the key to calculating the compliance of a recommended solution with the preference-order constraints. Having defined preference-order constraints, we next calculate the posterior probability that x ∈ XI and show how these posterior probabilities may be incorporated into the EHI acquisition function to steer the Bayesian optimiser toward Pareto-optimal points that satisfy the preference-order constraints. Before proceeding, however, it is necessary to briefly consider the geometry of SI and S⊥I.
4.2 The geometry of SI and S⊥I
In the following we assume, without loss of generality, that the preference-order constraints follow the order of the indices of the objective functions (reordering otherwise), and that there is at least one constraint.
Under this assumption, the preference-order constraints are defined by I = (0, 1, . . . , Q | Q ∈ Zm \ {0}), where Q > 0. This defines the sets SI and S⊥I, which in turn define the constraints that must be met by the gradients of f(x): either ∃s(0), s(1), . . . , s(n−1) ∈ SI such that s(j)T ∂/∂xj f(x) = 0 ∀j ∈ Zn or, equivalently, ∂/∂xj f(x) ∈ S⊥I ∀j ∈ Zn. Next, Theorem 1 gives a representation of SI.
Theorem 1 Let I = (0, 1, . . . , Q | Q ∈ Zm \ {0}) be an (ordered) preference tuple and define SI as per Definition 1. Then SI is a polyhedral (finitely generated) proper cone (excluding the origin) that may be represented using either a polyhedral representation:
$$ S_{\mathcal I} = \left\{ \mathbf s \in \mathbb R^m \;\middle|\; \mathbf a_{(i)}^{\mathrm T}\mathbf s \ge 0\ \forall i \in \mathbb Z_m \right\} \setminus \{\mathbf 0\} \qquad (8) $$
or a generative representation:
$$ S_{\mathcal I} = \left\{ \sum_{i \in \mathbb Z_m} c_i\,\tilde{\mathbf a}_{(i)} \;\middle|\; \mathbf c \in \bar{\mathbb R}_+^m \right\} \setminus \{\mathbf 0\} \qquad (9) $$
where, for all i ∈ Zm,
$$ \mathbf a_{(i)} = \begin{cases} \tfrac{1}{\sqrt 2}\,(\mathbf e_i - \mathbf e_{i+1}) & \text{if } i \in \mathbb Z_Q \\ \mathbf e_i & \text{otherwise} \end{cases} \qquad \tilde{\mathbf a}_{(i)} = \begin{cases} \tfrac{1}{\sqrt{i+1}} \sum_{l \in \mathbb Z_{i+1}} \mathbf e_l & \text{if } i \in \mathbb Z_{Q+1} \\ \mathbf e_i & \text{otherwise} \end{cases} $$
and e0, e1, . . . , em−1 is the Euclidean (standard) basis of Rm.
The proof of Theorem 1 is available in the supplementary material. To test whether a point satisfies this requirement we need to understand the geometry of the set SI. Theorem 1 shows that SI ∪ {0} is a polyhedral (finitely generated) proper cone, represented either in terms of half-space constraints (polyhedral form) or as a positive span of extreme directions (generative representation). The geometrical intuition is given in Figure 2 for a simple two-objective case with a single preference-order constraint.
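As an illustration of Theorem 1, the sketch below constructs the half-space normals a(i) and extreme directions ã(i) for given m and Q; the values m = 3 and Q = 1 (prefer f0 over f1, no constraint involving f2) are illustrative.

```python
import numpy as np

def cone_generators(m, Q):
    """Half-space normals a_(i) (eq. 8) and extreme directions a~_(i) (eq. 9) of S_I."""
    e = np.eye(m)
    a = np.zeros((m, m))
    a_tilde = np.zeros((m, m))
    for i in range(m):
        a[i] = (e[i] - e[i + 1]) / np.sqrt(2) if i < Q else e[i]
        a_tilde[i] = e[: i + 1].sum(axis=0) / np.sqrt(i + 1) if i < Q + 1 else e[i]
    return a, a_tilde

a, a_tilde = cone_generators(m=3, Q=1)
print(a)        # rows: half-space normals of S_I
print(a_tilde)  # rows: extreme directions positively spanning S_I
```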
Algorithm 1 Test if v ∈ S⊥I.
Input: preference tuple I; test vector v ∈ Rm.
Output: 1(v ∈ S⊥I).
  // Calculate 1(v ∈ S⊥I).
  Let bj = ã(j)T v ∀j ∈ Zm.
  if ∃i ≠ k ∈ Zm : sgn(bi) ≠ sgn(bk) then return TRUE
  else if b = 0 then return TRUE
  else return FALSE.
Algorithm 2 Preference-Order Constrained Bayesian Optimisation (MOBO-PC).
Input: preference-order tuple I; observations D = {(x(i), y(i)) ∈ X × Y}.
for t = 0, 1, . . . , T − 1 do
  Select the test point: x = argmax_{x∈X} aPEHI_t(x | Dt) (aPEHI_t is evaluated using Algorithm 4).
  Perform experiment y = f(x) + ε.
  Update Dt+1 := Dt ∪ {(x, y)}.
end for
Algorithm 3 Calculate Pr(x ∈ XI | D).
Input: observations D = {(x(i), y(i)) ∈ X × Y}; number of Monte Carlo samples R; test vector x ∈ X.
Output: Pr(x ∈ XI | D).
Let q = 0.
for k = 0, 1, . . . , R − 1 do
  // Construct samples v(0), v(1), . . . , v(n−1) ∈ Rm.
  Let v(j) = 0 ∀j ∈ Zn.
  for i = 0, 1, . . . , m − 1 do
    Sample u ∼ N(µ′Di(x), σ′Di(x, x)) (see (3)).
    Let [v(0)i, v(1)i, . . . , v(n−1)i] := uT.
  end for
  // Test if v(j) ∈ S⊥I ∀j ∈ Zn.
  Let q := q + Π_{j∈Zn} 1(v(j) ∈ S⊥I) (see Algorithm 1).
end for
Return q/R.
Algorithm 4 Calculate aPEHI_t(x | D).
Input: observations D = {(x(i), y(i)) ∈ X × Y}; number of Monte Carlo samples R̃; test vector x ∈ X.
Output: aPEHI_t(x | D).
Using Algorithm 3, calculate sx = Pr(x ∈ XI | D) and s(j) = Pr(x(j) ∈ XI | D) ∀(x(j), y(j)) ∈ D.
Let q = 0.
for k = 0, 1, . . . , R̃ − 1 do
  Sample yi ∼ N(µDi(x), σDi(x)) ∀i ∈ Zm (see (2)).
  Construct cells c0, c1, . . . from D ∪ {(x, y)} by sorting along each axis in objective space to form a grid.
  Calculate: q := q + sx Σ_{k : y ⪰ ỹck} vol(ck) Π_{j∈ZN : y(j) ⪰ ỹck} (1 − s(j)).
end for
Return q/R̃.
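Putting the boxes together, a high-level Python skeleton of the MOBO-PC loop in Algorithm 2 might look as follows. Here acquisition_pehi and f_true are hypothetical stand-ins for the acquisition of Section 5.2 and the expensive black-box objectives, and random search over candidates is a simple stand-in for a proper inner optimiser of the acquisition.

```python
import numpy as np

def mobo_pc_loop(f_true, X0, Y0, bounds, T, acquisition_pehi, seed=0):
    """Sketch of Algorithm 2: repeatedly maximise the acquisition, evaluate, update D."""
    rng = np.random.default_rng(seed)
    X, Y = list(X0), list(Y0)
    for t in range(T):
        # Maximise a_t^PEHI over a batch of random candidates (stand-in inner optimiser).
        cands = rng.uniform(bounds[0], bounds[1], size=(256, len(bounds[0])))
        scores = [acquisition_pehi(x, np.array(X), np.array(Y)) for x in cands]
        x_next = cands[int(np.argmax(scores))]
        y_next = f_true(x_next)                     # perform the (expensive) experiment
        X.append(x_next)                            # D_{t+1} := D_t ∪ {(x, y)}
        Y.append(y_next)
    return np.array(X), np.array(Y)
```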
The subsequent corollary allows us to construct a simple algorithm (Algorithm 1) to test whether a vector v lies in the set S⊥I. We will use this algorithm to test whether ∂/∂xj f(x) ∈ S⊥I ∀j ∈ Zn, that is, whether x satisfies the preference-order constraints. The proof of Corollary 1 is available in the supplementary material.
Corollary 1 Let I = (0, 1, . . . , Q | Q ∈ Zm \ {0}) be an (ordered) preference tuple and define S⊥I as per Definition 1. Using the notation of Theorem 1, v ∈ S⊥I if and only if v = 0 or ∃i ≠ k ∈ Zm such that sgn(ã(i)T v) ≠ sgn(ã(k)T v), where sgn(0) = 0.
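The sign test of Corollary 1 translates directly into code. Below is a minimal sketch for m = 2 objectives with the single preference s0 ≥ s1; the hard-coded ã(i) match Theorem 1 and the test vectors are illustrative.

```python
import numpy as np

def in_S_perp(v, a_tilde, tol=1e-12):
    """Corollary 1: v lies in S_I-perp iff v = 0 or two projections b_j = a~_(j)^T v differ in sign."""
    b = a_tilde @ v
    s = np.sign(np.where(np.abs(b) < tol, 0.0, b))
    return len(np.unique(s)) > 1 or bool(np.all(s == 0))

# Extreme directions for m = 2 with preference s0 >= s1: a~_(0) = e0, a~_(1) = (e0 + e1)/sqrt(2).
a_tilde = np.array([[1.0, 0.0],
                    [1.0 / np.sqrt(2), 1.0 / np.sqrt(2)]])

print(in_S_perp(np.array([1.0, -3.0]), a_tilde))  # True: balanced by s = (3, 1), consistent with s0 >= s1
print(in_S_perp(np.array([2.0, 1.0]), a_tilde))   # False: both objectives improve together
print(in_S_perp(np.array([-3.0, 1.0]), a_tilde))  # False: would need s1 = 3*s0, violating the preference
```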
5 Preference Constrained Bayesian Optimisation
In this section we do the following. First, we show how the Gaussian process models of the objectives fi (and their derivatives) may be used to calculate the posterior probability that x ∈ XI, with XI defined by I = (0, 1, . . . , Q | Q ∈ Zm \ {0}). Second, we show how the EHI acquisition function may be modified and calculated to incorporate these probabilities and hence only reward points that satisfy the preference-order conditions. Finally, we give our algorithm built on this acquisition function.
5.1 Calculating Posterior Probabilities
Given that fi ∼ GP(0, K(i)(x, x)) are draws from independent Gaussian processes, and given observations D, we wish to calculate the posterior probability that x ∈ XI, i.e.:
$$ \Pr(\mathbf x \in \mathbb X_{\mathcal I} \mid D) = \Pr\!\left( \tfrac{\partial}{\partial x_j}\mathbf f(\mathbf x) \in S_{\mathcal I}^{\perp}\ \forall j \in \mathbb Z_n \right). $$
As fi ∼ GP(0, K(i)(x, x)), it follows that ∇xfi(x) | D ∼ Ni ≜ N(µ′Di(x), σ′Di(x, x′)), as defined by (3). Hence:
$$ \Pr(\mathbf x \in \mathbb X_{\mathcal I} \mid D) = \Pr\!\left( \mathbf v_{(j)} \in S_{\mathcal I}^{\perp}\ \forall j \in \mathbb Z_n \;\middle|\; \begin{bmatrix} v_{(0)i} \\ v_{(1)i} \\ \vdots \\ v_{(n-1)i} \end{bmatrix} \sim \mathcal N_i\ \forall i \in \mathbb Z_m \right) $$
where v ∼ P(∇xf | D). We estimate this probability using Monte Carlo sampling [6] as per Algorithm 3.
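A minimal Monte-Carlo sketch of Algorithm 3 follows. Here grad_posteriors is a hypothetical callable returning, for each objective i, the posterior mean and covariance of ∇xfi(x) | D from (3), and the sign test mirrors the Algorithm 1 sketch of Section 4.2.

```python
import numpy as np

def in_S_perp(v, a_tilde, tol=1e-12):
    # Sign test of Corollary 1 (as in the Algorithm 1 sketch).
    b = a_tilde @ v
    s = np.sign(np.where(np.abs(b) < tol, 0.0, b))
    return len(np.unique(s)) > 1 or bool(np.all(s == 0))

def prob_in_XI(x, grad_posteriors, a_tilde, n, m, R=200, seed=0):
    """Monte-Carlo estimate of Pr(x in X_I | D) as in Algorithm 3."""
    rng = np.random.default_rng(seed)
    q = 0
    for _ in range(R):
        V = np.zeros((n, m))                       # row j collects v_(j) in R^m
        for i in range(m):
            mu_i, cov_i = grad_posteriors(i, x)    # posterior of grad f_i(x) | D, eq. (3)
            V[:, i] = rng.multivariate_normal(mu_i, cov_i)
        if all(in_S_perp(V[j], a_tilde) for j in range(n)):
            q += 1
    return q / R
```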
5.2 Preference-Order Constrained Bayesian Optimisation Algorithm (MOBO-PC)
Our complete Bayesian optimisation algorithm with preference-order constraints is given in Algorithm 2. The acquisition function introduced in this algorithm gives higher importance to points satisfying the preference-order constraints. Unlike standard EHI, we take the expectation over both the experimental outcomes fi(x) ∼ N(µDi(x), σDi(x, x)) ∀i ∈ Zm and the probabilities that the points x(i) ∈ XI and x ∈ XI satisfy the preference-order constraints. We define our preference-based EHI acquisition function as:
$$ a_t^{\mathrm{PEHI}}(\mathbf x \mid D) = \mathbb E\left[ S_{\mathcal I}\!\left( D \cup \{(\mathbf x, \mathbf f(\mathbf x))\} \right) - S_{\mathcal I}(D) \;\middle|\; D \right] \qquad (10) $$
where SI(D) is the hypervolume dominated by the observations (x,y) ∈ D satisfying the preference-order constraints. The calculation of SI(D) is illustrated in the supplementary material. The expectation of SI(D) given D is:
$$ \mathbb E\left[ S_{\mathcal I}(D) \mid D \right] = \sum_k \mathrm{vol}(c_k)\, \Pr\!\left( \exists (\mathbf x, \mathbf y) \in D : \mathbf y \succeq \tilde{\mathbf y}_{c_k} \wedge \mathbf x \in \mathbb X_{\mathcal I} \right) = \sum_k \mathrm{vol}(c_k) \left( 1 - \prod_{(\mathbf x, \mathbf y) \in D :\, \mathbf y \succeq \tilde{\mathbf y}_{c_k}} \left( 1 - \Pr(\mathbf x \in \mathbb X_{\mathcal I} \mid D) \right) \right) $$
where ỹck is the dominant corner of cell ck, vol(ck) is the hypervolume of cell ck, and the cells ck are constructed by sorting D along each axis in objective space. The posterior probabilities Pr(x ∈ XI | D) are calculated using Algorithm 3. It follows that:
$$ a_t^{\mathrm{PEHI}}(\mathbf x \mid D) = \Pr(\mathbf x \in \mathbb X_{\mathcal I} \mid D)\; \mathbb E\!\left[ \sum_{k :\, \mathbf y \succeq \tilde{\mathbf y}_{c_k}} \mathrm{vol}(c_k) \prod_{j \in \mathbb Z_N :\, \mathbf y^{(j)} \succeq \tilde{\mathbf y}_{c_k}} \left( 1 - \Pr(\mathbf x^{(j)} \in \mathbb X_{\mathcal I} \mid D) \right) \;\middle|\; y_i \sim \mathcal N(\mu_{D_i}(\mathbf x), \sigma_{D_i}(\mathbf x))\ \forall i \in \mathbb Z_m \right] $$
where the cells ck are constructed from the set D ∪ {(x, y)} by sorting along each axis in objective space. We estimate this acquisition function using the Monte Carlo simulation shown in Algorithm 4.
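A Monte-Carlo sketch of Algorithm 4 for the two-objective maximisation case is given below. Here post_mean_std is a hypothetical callable returning (µDi(x), σDi(x)) from (2), probs holds the pre-computed Pr(x(j) ∈ XI | D) from Algorithm 3, p_x is Pr(x ∈ XI | D), and the simple cell construction assumes every observation dominates the reference point z.

```python
import numpy as np

def pehi_2d(x, Y, probs, p_x, post_mean_std, z, R=100, seed=0):
    """Monte-Carlo estimate of a_t^PEHI(x | D) for m = 2 objectives (maximisation)."""
    rng = np.random.default_rng(seed)
    q = 0.0
    for _ in range(R):
        y = np.array([rng.normal(*post_mean_std(i, x)) for i in range(2)])
        P = np.vstack([Y, y])                               # objective values of D plus the new sample
        g0 = np.unique(np.concatenate(([z[0]], P[:, 0])))   # grid lines along each objective axis
        g1 = np.unique(np.concatenate(([z[1]], P[:, 1])))
        for a in range(len(g0) - 1):
            for b in range(len(g1) - 1):
                corner = np.array([g0[a + 1], g1[b + 1]])   # dominant (upper) corner of the cell
                if not np.all(y >= corner):                 # cell must be dominated by the new sample
                    continue
                vol = (g0[a + 1] - g0[a]) * (g1[b + 1] - g1[b])
                w = 1.0
                for j in range(len(Y)):                     # discount cells already covered by existing points
                    if np.all(Y[j] >= corner):
                        w *= 1.0 - probs[j]
                q += vol * w
    return p_x * q / R
```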
6 Experiments
We conduct a series of experiments to test the empirical performance of our proposed method MOBO-PC and compare it with other strategies. These experiments include synthetic benchmarks as well as optimising the hyperparameters of a feed-forward neural network. For the Gaussian processes, we set the hyperparameters by maximum likelihood estimation [21].
6.1 Baselines
To the best of our knowledge there are no existing studies aiming to solve our proposed problem; however, we use PESMO, SMSego, SUR, ParEGO and EHI [9, 20, 19, 14, 7] to confirm the validity of the obtained Pareto-front solutions. The obtained Pareto front must lie on the ground truth whilst also satisfying the preference-order constraints. We also compare our results with MOBO-RS [18] by suitably specifying bounding boxes in the objective space that replicate a preference-order constraint.
6.2 Synthetic Functions
We begin with a comparison on minimising the Schaffer function N. 1, which has two conflicting objectives f0, f1 and a one-dimensional input (see [24]). Figure 3a shows the ground-truth Pareto front for this function. To illustrate the behaviour of our method, we impose distinct preferences. Three test cases are designed to illustrate the effects of imposing preference-order constraints on the stability of the objective functions: Case (1) s0 ≈ s1, Case (2) s0 < s1, and Case (3) s0 > s1. Our method only requires the preference-order constraints to be defined, whereas MOBO-RS additionally requires a bounding box. Figure 3b (case 1) shows the results of the preference-order constraint SI ≜ {s ∈ R̄m+ \ {0} | s0 ≈ s1} for our proposed method, where s0 represents the importance of stability in minimising f0 and s1 the importance of stability in minimising f1. Since both objectives are equally important, a balanced optimisation is expected; the highest weights are obtained for Pareto-front points in the middle region, which are the most stable for both objectives. Figure 3c (case 2) is based on the preference order s0 < s1, which implies that stability in f1 is more important than in f0; the results show more stable Pareto points for f1 than for f0. Figure 3d (case 3) shows the results for the preference order s0 > s1. As expected, we see a larger number of stable Pareto points for the important objective (i.e. f0 in this case). We defined two bounding boxes for the MOBO-RS approach that represent the same preference-order constraints as in our approach (Figures 3e and 3f). There are infinitely many bounding boxes that could serve as constraints on the objectives in such problems; consequently, instability of the results across different bounding-box definitions is to be expected. We believe our method obtains more stable Pareto-front solutions, especially when prior information is sparse. Having the weights (importance) of the Pareto-front points as additional information is another advantage.
Figure 4 illustrates a special test case in which s0 > s1 and s2 > s1, with no preference specified between f0 and f2, while minimising the Viennet function. This composite preference-order constraint does not form a proper cone as characterised in Theorem 1; however, s0 > s1 on its own constructs a proper cone, and likewise s2 > s1. Figure 4a shows the result of processing these two constraints separately, merging their results, and computing the Pareto front. Figure 4b shows more stable solutions for f0 compared to f1, and Figure 4c shows that the Pareto-front points comply with s2 > s1.
6.3 Finding a Fast and Accurate Neural Network
Next, we train a neural network with the two objectives of minimising both prediction error and prediction time, as per [9]. These are conflicting objectives because reducing the prediction error generally requires larger networks and consequently longer test time. We use the MNIST dataset, and the tuning parameters are the number of hidden layers (x1 ∈ [1, 3]), the number of hidden units per layer (x2 ∈ [50, 300]), the learning rate (x3 ∈ (0, 0.2]), the amount of dropout (x4 ∈ [0.4, 0.8]), and the levels of l1 (x5 ∈ (0, 0.1]) and l2 (x6 ∈ (0, 0.1]) regularisation. For this problem we assume that stability of f1 (time) during minimisation is more important than that of f0 (error). For the MOBO-RS method, we selected the bounding box [[0.02, 0], [0.03, 2]] to represent accurate prior knowledge (see Figure 5). The results are averaged over 5 independent runs. Figure 5 illustrates that one can simply ask for solutions that are more stable with respect to test time (without any prior knowledge) while optimising the hyperparameters of a neural network: all solutions found by MOBO-PC have test time in the range (0, 5). In addition, the proposed method finds a larger number of Pareto-front solutions than MOBO-RS.
7 Conclusion
In this paper we proposed a novel multi-objective Bayesian optimisation algorithm with preferences over objectives. We define objective preferences in terms of stability and formulate a common framework that focuses on the sections of the Pareto front where the preferred objectives are more stable, as required. We evaluate our method on both synthetic and real-world problems and show that the obtained Pareto fronts comply with the preference-order constraints.
Acknowledgments
This research was partially funded by Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006). | 1. What is the focus of the paper regarding multiobjective optimization?
2. What are the related works in the field, and how does the paper differentiate itself from them?
3. What are the concerns regarding the presentation of the main algorithm?
4. How significant is the issue of measurement in preference-based approaches, and why was it not addressed in the paper?
5. Can the reviewer provide examples or explanations to help understand the subjectivity of measuring the quality of found solutions? | Review | Review
There are some related works on preference-based or interactive multi-objective optimization, so the novelty of the work is not very high. From my point of view, the main algorithm in Section 4 is not clearly presented. To assess preference-based approaches, the choice of measurements is quite important; however, this issue is very subjective, and the paper does not discuss it. Therefore, it is hard to judge whether the found solutions are good or not.